Sunday, January 24, 2010

3D and the Open Web

This post is a call for feedback about using Syntensity's engine (the Intensity Engine) on the web. The basic idea is that we - people on the web - should avoid a repeat of what happened with Flash: a single vendor controlling a closed technology, which became the de facto standard for video and interactive content for many years. Instead, we need an open source solution, not controlled by any one corporation. I think the Intensity Engine is close to being suitable for that, and I am looking for feedback on this idea, and help in achieving it.

A big part of the success of the web is its openness: we can run websites on any of several web servers, some of them open source (Apache, lighttpd, nginx, etc.), and visit those websites using likewise open source web browsers (Firefox, Chromium, etc.). That's an amazing achievement. And let's not forget that just a few years ago, Internet Explorer was dangerously close to a chokehold on the browser side.

Video is often in the tech news these days, specifically the effort to play video on the web with open, standardized technologies like HTML5. There are some problems along the way, but overall we might be close to getting past video on the web being entirely reliant on Flash - a closed-source product controlled by a single company, and one that doesn't necessarily perform as well, or the same, on all platforms.

Here I'd like to talk about the "3D Web". The term was overhyped in the past, but all I mean here is 3D content, mainly games and virtual worlds, that is accessible on the web. This is currently a smaller area than the web in general, or even video on the web, but it is growing in importance. My concern is that the open web should avoid the 'Flashification' of 3D content, where a single closed-source product becomes the de facto standard in the area, as Flash did (and mostly still does) in video, 2D gaming and interactive content. If we want to avoid that, the time is now.

There are some open technologies that show promise, mainly WebGL and Google's O3D. These may well end up succeeding. However, neither is a complete game engine, like, for example, the Unity 3D web plugin. A complete engine needs much more than WebGL and O3D provide: physics, content creation tools, a proper API and useful libraries, network protocols (for multiplayer), and so on. Some of that might be added on top of WebGL and O3D using JavaScript. However, many games are too computationally intensive for that, even with the best JavaScript engines out there.

Perhaps at some point Google Native Client (NaCl) will allow running game engines on the web. But instead of relying entirely on that, I think the open web needs an open source 3D game engine. The time to build it is now, before something non-open comes to dominate the field. I'd like to suggest the Intensity Engine for that purpose: it is a complete, stable, cross-platform game engine. It works right now (outside of browsers) and has been in production for several months, successfully, on syntensity.com. It is 100% open source, and the current license, the AGPL, can be changed immediately to something else, like the BSD license, if that makes sense for this purpose. Also, the Intensity Engine was built with something like the web in mind - for example, we use JavaScript to create games (Google V8 right now, and we also did some tests with SpiderMonkey). In our minds, the ability to download and run games was always a parallel to how web browsers download and run web pages.

One concrete idea among others is to port the Intensity Engine's rendering system to O3D, and build a browser plugin of the result. The benefit is that O3D is already set up as a browser plugin, while the Intensity Engine provides all the other game engine machinery. Alternatively, we could port the Intensity Engine as-is to be a web browser plugin, assuming that would work with SDL (if not, we would need to replace it).

So, I'm looking for feedback about this topic, and for ideas and help on how to move it forward. I really feel it isn't just us over here (in 3D gaming) who care about this stuff - lots of people want the web to remain open, and that should include 3D content and games.

Thanks for your responses!

Edit: I posted about this on relevant mailing lists.

16 comments:

  1. Intensity in the web browser would be impressive. WebGL and O3D both appear to be JavaScript-based, which clearly at least one of you devs knows. WebGL is (obviously) very close to OpenGL, and apparently very similar to OpenGL ES. Currently no web browser supports it, only the betas of several (Chromium, Firefox, Safari). However, WebGL can be used on any system that supports OpenGL or OpenGL ES. It would be impressive to see Intensity games running off the web on handheld platforms like the PSP and the iPhone.

    O3D has the advantage of being a downloaded plugin, but requires porting to new platforms.

    I want to know how QuakeLive works, really. More importantly, I heard somewhere it's based on id Tech 3, which means the plugin SHOULD be open source, but I see no mention of that.

  2. Wow, that's great! First a nice engine with multiple online worlds, and now this.
    You guys have a nice vision for where the future of gaming is heading.

    Mm.. but I have one question: if you rewrite the rendering engine to use O3D, would that also mean the collaborative editing capabilities and other features of the Cube 2 engine could be lost?

    On the other hand, wouldn't it be a lot of work to rewrite the rendering engine completely? Maybe porting the Cube 2 engine to NaCl would be easier, more maintainable, and would keep its current features.
    Perhaps it's not so long until Google finally releases NaCl, considering they are about to start shipping Google Chrome OS devices soon.

    But of course this is just me pondering; I don't think I'm knowledgeable enough to point you in the right direction.

  3. @Spummy: Quake Live is based on id Tech 3, I believe, but it's their code, so they don't need to make it open source. It is in fact closed source, unless I am mistaken.

    @Ferk: The port would be done in a way that keeps the in-game editing and other nice Cube 2 features. Whatever Cube 2 renders for editing cube geometry, we would map to calls in O3D. That's the idea, at least.

    Porting to NaCl would definitely be less work. However, I am not aware of a timeline for when it will be ready for general use, and I wouldn't want to bet on when that will happen (but maybe somebody can clarify that for us?). Yet even if NaCl were ready tomorrow, I am not sure it would be a true solution: the connection between NaCl code and WebGL (presumably how we would do rendering) might be too slow - Cube does a *LOT* of OpenGL calls each frame. But this too is something that perhaps somebody can clarify for us.

    - kripken

  4. I'm trying to remember: as owners of the source code, is one allowed to make non-open-source branches after already GPLing it? I am certain they are allowed to re-license, but are they allowed to maintain multiple differently licensed branches?

  5. @Spummy:

    Yeah, if you own some code, you can release it under multiple licenses. What you GPLed will always stay GPLed, but you can also release the same source under other licenses.

    - kripken

  6. I think the web needs a compact, statically typed, fast-compiling scripting language like Go (golang.org), not a CPU-dependent environment or a non-standard 3D API.

  7. I read through your post kripken, and I really like what you say near the end: "One concrete idea among others is to port the Intensity Engine's rendering system to O3D, and build a browser plugin of the result."

    I think that's an excellent plan, but also one that would require a lot of new code, since the whole engine would have to reside entirely in JavaScript.

    I think such a plan would be remiss not to include a switch to render to WebGL as well as O3D (much like, for instance, the Ogre engine renders to both OpenGL and DirectX). We have no idea which technique will win in the long run, but the ability to choose should come naturally with good software engineering.

    Where I disagree, however, is in the suggestion that a standalone plugin will become the next 3D-on-the-web standard. If anyone wins that game it will be Adobe with their Flash plugin - that's the only plugin with an install base you can count on. And your whole article is about how we should avoid Flash on the web.

    So I think let's go with plan (a): the Intensity Engine, or something similar that we all build together on top of O3D and WebGL. I'm coming at this from the WebGL side, but again, the decision is only super important if the software is poorly architected and does not expose a scene graph to the rest of the app upon which both O3D and WebGL backends may be constructed.

    My gut feeling here is that we should use event passing to get between the network and physics web worker threads and the main thread (the graphics thread, i.e. the one with access to the DOM). This means we need to define an event model to update the scene graph - and suddenly whether we choose O3D or WebGL to reflect that stream of scene graph deltas seems to matter a lot less, as long as we're resigned to JavaScript land.
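    As a minimal sketch of what that event model might look like (all the names and message shapes here are made up, just to illustrate the idea):

```js
// --- physics-worker.js: a physics web worker posts scene graph deltas ---
var pos = [0, 0, 0];
setInterval(function () {
  pos[0] += 0.1; // pretend the physics step moved the object
  postMessage({
    type: 'transform',   // what kind of scene graph change this is
    node: 'player1',     // id of the scene graph node to update
    position: pos,
    timestamp: Date.now()
  });
}, 16); // roughly once per frame

// --- main (graphics) thread: apply deltas to whichever backend is active ---
var worker = new Worker('physics-worker.js');
worker.onmessage = function (event) {
  var delta = event.data;
  if (delta.type === 'transform') {
    // sceneGraph is the shared abstraction; underneath, setTransform
    // would turn into O3D scene graph updates or WebGL draw state.
    sceneGraph.setTransform(delta.node, delta.position);
  }
};
```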


    Thoughts?
    -Daniel

  8. Daniel, I think you touch upon some very important issues. Here are my thoughts:

    I am doubtful about being rendering-system agnostic. OpenGL and DirectX are a clear parallel of each other - their calls are so similar, you can even translate them 1-to-1 (as Wine, in fact, does). But WebGL and O3D are not like that. For one thing, O3D stores the state of all the rendered objects and issues the draw calls itself, while in WebGL you need to issue them yourself. In other words, WebGL and O3D are like OpenGL and Ogre, not OpenGL and DirectX. And I am unaware of engines that are rendering-system agnostic across things like OpenGL and Ogre.
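    To illustrate the two styles (the O3D-side names below are hypothetical stand-ins, not its real API - this is just a sketch of retained mode versus immediate mode):

```js
// Retained mode (O3D-style): describe the scene once; the plugin keeps
// the state and issues the draw calls itself every frame.
// 'scene' and its methods are invented for illustration.
var cube = scene.createShape('cube');
cube.setTransform([0, 0, -5]);
// ...nothing more to do per frame; the scene graph renders itself.

// Immediate mode (WebGL-style): we must issue the calls ourselves,
// every single frame, for every object.
function drawFrame(gl, program, cubeBuffer, numVertices) {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.useProgram(program);
  gl.bindBuffer(gl.ARRAY_BUFFER, cubeBuffer);
  gl.drawArrays(gl.TRIANGLES, 0, numVertices);
  requestAnimationFrame(function () {
    drawFrame(gl, program, cubeBuffer, numVertices);
  });
}
```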

    Second, Flash does not worry me. Flash's 3D capabilities are limited, and even if Adobe adds more, Flash is far from a full game engine (and building one in ActionScript won't work either). What does worry me is Unity. We need something open source to counter that. And, to have decent multiplayer capabilities and performance, I don't see how we can avoid creating a new browser plugin - since we need UDP, fast physics in native code, etc. The performance just isn't there for a pure WebGL+JavaScript solution, our tests show. But I think we *can* succeed with a browser plugin, if done right.

    - kripken

  9. I think you're going to want a scene graph abstraction on top of WebGL - that might look a lot more like Ogre than raw WebGL, and hence it would be like making Ogre2 and having Ogre and OpenGL underneath it - definitely possible to do (though redundant in this example, since Ogre itself has an OpenGL backend). O3D isn't exactly the scene graph you want anyway - so build an abstraction on top of O3D, then make a WebGL implementation of that abstraction so it can run anywhere, at lower perf.

    I think Flash's 3D abilities may increase a lot - and sure, it's ActionScript, but they have the scene graph model down pat, and it's arguably going to get as good as O3D once Adobe's engineers release something (hopefully not for a while, since we want to stay free).

    If you need UDP then I agree you need a plugin. But I think you'll find that TCP is sufficient for a large majority of applications, and only very high-perf games need more. I'm certain, for instance, that even space and flight simulators do just fine on TCP due to dead reckoning, etc. (in my dorm they did a lot better with TCP than UDP in my repeated tests). Ditching 90% of your users for slightly lower ping times on lossy connections seems like throwing the baby out with the bathwater.
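    (To be concrete about dead reckoning: the client extrapolates an entity's position from its last known position and velocity, so a late update doesn't visibly stall motion. A minimal sketch, with made-up field names:)

```js
// Dead reckoning: estimate where an entity is *now* from the last
// state update we received over the (possibly delayed) connection.
function deadReckon(update, nowMs) {
  var dt = (nowMs - update.timestampMs) / 1000; // seconds since the update
  return [
    update.position[0] + update.velocity[0] * dt,
    update.position[1] + update.velocity[1] * dt,
    update.position[2] + update.velocity[2] * dt
  ];
}

// Example: an update that arrived 150ms ago; motion stays smooth.
var estimated = deadReckon(
  { position: [10, 0, 5], velocity: [2, 0, 0], timestampMs: Date.now() - 150 },
  Date.now()
); // roughly [10.3, 0, 5]
```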

    I'm still convinced that a low-perf, pluginless client gets us a foot in the door - then when Native Client starts becoming popular we port the bottlenecks there, and maybe even ask for UDP hooks. I still think a few TCP connections may be sufficient to avoid the head-of-line blocking you seem so concerned about. I talked to Cory from Second Life, and he said that going with a custom UDP protocol was the worst decision going forward, because it ended up being the source of so many bugs while giving only very small perf boosts.

  10. @Daniel:

    I agree it would be possible to wrap around WebGL and O3D - much as you say. But it would be a major effort (unlike wrapping DirectX and OpenGL).

    Flight simulators might work on TCP. But the most *popular* game genres - sports games, platformers, FPSes, etc. - will not. Those games need to show objects that change their position quickly and *unpredictably*, unlike Second Life or flight simulators.

    As I said elsewhere, the 3D games market is bigger - by far - than the virtual worlds market. Just look at how many people enjoy the Wii and other consoles, games like Counter-Strike, etc., and compare that to the number of people on Second Life. So I think that by allowing fast-paced games, we are reaching *more* people, not fewer. Perhaps fewer out of the existing virtual worlds market, but far more overall.

    Second Life should have used TCP, I fully agree. They don't care about fast-paced games, so UDP was needless complexity. But for my approach in this project, (1) UDP *is* necessary, and (2) it is already implemented and stable, so there is no downside to continuing to use it (at least assuming we can continue to use the same codebase).

    - kripken

  11. I agree UDP support has NO downsides.
    I just hope your code is modular enough that adding TCP support, which is obviously easier than UDP since it's lossless, is trivial. You could support both and let the users decide which to use. (It would have allowed me to play your game in the dorms, where my UDP packets have 6 seconds of lag leaving campus versus 6 ms for TCP.)

    This isn't a zero-sum game. Just because you want to support fast-paced action shouldn't require you to bar slower-paced games - and those could be supported directly from the web client we build, if you're interested in pluginless web support.

  12. @Daniel:

    You're absolutely right, I agree.

    Currently we support only UDP, through ENet. But it wouldn't be too hard to replace the small number of ENet calls with calls to a wrapper layer, and write a TCP backend for that. However, this is low on my priority list, because I don't think any of our current games would work at all on TCP (except perhaps the lobby world).
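    Roughly, the wrapper-layer idea looks like this (sketched in Node.js-style JavaScript for brevity - the engine's actual code is C++ around ENet, and all the names here are made up):

```js
// The game code talks only to a transport interface; each backend
// implements the same send() method. Hypothetical sketch, not engine code.
var dgram = require('dgram');
var net = require('net');

function UdpTransport(host, port) {
  this.socket = dgram.createSocket('udp4');
  this.host = host;
  this.port = port;
}
UdpTransport.prototype.send = function (buffer) {
  this.socket.send(buffer, 0, buffer.length, this.port, this.host);
};

function TcpTransport(host, port) {
  this.socket = net.connect(port, host);
}
TcpTransport.prototype.send = function (buffer) {
  this.socket.write(buffer);
};

// The engine picks a backend once; the rest of the code never knows which.
var useUdp = true; // e.g., a server or client config option
var transport = useUdp ? new UdpTransport('example.com', 28785)
                       : new TcpTransport('example.com', 28785);
```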

    - kripken

  13. Do you have hard numbers to back this up? Surely they will work just fine on a high-end university Internet2 connection. I suspect more scenarios than you imagine will work just dandy.

    I can recommend the tcpsst TCP wrapper, because it's WebSocket-compatible (it can receive incoming connections from WebSockets and communicate with them) and also lets you connect with more than one TCP connection, to avoid the head-of-line blocking issues you're so concerned about.
    Basically, if you really think packets get more than 10 out of order: connect with 10 TCP sockets... and voilà: out-of-order delivery on TCP. The library makes it super simple to specify ordered or unordered.
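    (This isn't tcpsst's actual API - just the striping idea in JavaScript: spread unordered messages round-robin over N sockets, so one stalled connection only delays the messages that happen to be on it.)

```js
// Striping unordered messages across N TCP connections (Node.js sketch).
// A lost packet stalls only the socket it was on; the other N-1 sockets
// keep delivering. All names here are invented for illustration.
var net = require('net');

function StripedConnection(host, port, numSockets) {
  this.sockets = [];
  for (var i = 0; i < numSockets; i++) {
    this.sockets.push(net.connect(port, host));
  }
  this.next = 0;
}

// Unordered messages rotate round-robin over all the sockets.
StripedConnection.prototype.sendUnordered = function (buffer) {
  this.sockets[this.next].write(buffer);
  this.next = (this.next + 1) % this.sockets.length;
};

// Ordered messages stick to one socket, so TCP preserves their order.
StripedConnection.prototype.sendOrdered = function (buffer) {
  this.sockets[0].write(buffer);
};

var conn = new StripedConnection('example.com', 8080, 10);
```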

  14. @Daniel:

    There are some hard numbers here:

    http://simula.no/research/nd/publications/Simula.ND.35/simula_pdf_file

    Note also that the issue is not just out-of-order delivery, but packet loss and TCP's exponential backoff. In other words, basically the same reasons VoIP uses UDP and not TCP - and VoIP has an easier problem, since it can buffer for a short while, unlike action games.

    I imagine that on a very high-end, ~0% packet loss connection, TCP might work well. But that would be a very rare thing.

    Still, it would be worth adding, even for those few use cases. And using more sockets, as you suggest, would help. If we do get around to adding TCP as an option, then tcpsst certainly looks interesting - thanks for the tip.

  15. Hey, thanks for the paper - this will come in really handy as a reference.

    Remember, with N TCP connections you need to make sure the probability of N losses in the time it takes to recover from 1 loss is acceptably small, so that your newer unordered packets make it through.

    Unfortunately, since loss rates don't follow a Gaussian and are much more fractal in nature, you can't just increase N to a large constant and claim 2^-N (the probability of independent events) is small :-/
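    (Spelling out that claim: if each of the N connections independently stalls with probability p during the recovery window, then

```latex
P(\text{all } N \text{ connections stalled}) = p^{N}
\qquad \text{e.g. } p = \tfrac{1}{2} \;\Rightarrow\; 2^{-N},
```

    but losses on sockets sharing the same congested path are correlated, so the true probability can be far larger than p^N.)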

    Networks are going to have periods of time where your loss rate exceeds any acceptable number - but then your UDP algorithms may fail too.

    I don't think TCP's exponential backoff is the issue here, because we're sending small chunks of data - and on standard connections, long-lived TCP connections do fairly well transmission-rate-wise (otherwise people would switch ISPs). It's really the head-of-line blocking (I think - correct me if I'm wrong).

    I agree it would be great to get those use cases supported in your engine. If you need any help integrating with tcpsst in the Sirikata source tree (it's a plugin), let me know. Its dependencies should only be std::tr1 and boost::asio, but you can also write a compliant version yourself. Ours has been benchmarked as performing at extremely high speeds on 100Mbit LAN, so it should be sufficient for any WAN work you folks have.

  16. @Daniel:

    I think exponential backoff is still an issue.

    For example, if the network is cut off for a short while, exponential backoff will greatly worsen the situation. But this is the lesser problem.

    A more serious issue is that if a reliable message is dropped a few times, everything else that is ordered with it will wait longer due to exponential backoff. In fact, the proper approach would be to *increase* the frequency of sending that particular message, not decrease it as TCP does. (That assumes you know the connection as a whole is healthy, of course; otherwise, it adds insult to injury.)
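    To put numbers on it (assuming a typical 1-second initial retransmission timeout, doubling on each loss):

```js
// Exponential backoff: how long everything ordered behind one repeatedly
// lost segment ends up waiting, assuming a 1s initial RTO that doubles.
var rto = 1.0, waited = 0;
for (var attempt = 1; attempt <= 4; attempt++) {
  waited += rto;
  console.log('attempt ' + attempt + ': ' + waited + 's waited so far');
  rto *= 2; // back off: 1s, 2s, 4s, 8s, ...
}
// Drop one reliable message 4 times and everything ordered behind it
// has waited 15 seconds - an eternity in an action game.
```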

    For these reasons I am skeptical of running action games over TCP. But I would be very curious to see any demos of this that you guys get working.

    - kripken
