Monday, December 5, 2011

Emscripten stuff on other blog

As mentioned previously, I've moved to working fulltime on Emscripten now. So Emscripten-related blogposts will now be on my other blog. (You can also follow me on twitter.)

I'll use this blog for Syntensity-specific stuff, that is, about porting Sauerbraten (and/or Syntensity, which is based on it) to the web. I haven't had any progress to report on that recently, since I am completely blocked on OpenGL issues: I can compile the C++ to JS, but I can't convert the OpenGL code to WebGL. I've asked around for help but so far no luck; hopefully it will happen though.

Thursday, November 10, 2011

Emscripten Updates

Lots of stuff has happened with Emscripten; I haven't blogged because I've been too busy. Here are some updates:
  • Emscripten was used to compile the Android H264 codec to JavaScript, in a project called Broadway. (live demo)
  • I gave a talk about Emscripten at SPLASH 2011 in Portland. (slides)
  • I gave a talk about Emscripten at JSConf.EU in Berlin. (slides)
  • Performance of the generated code has been improving due to progress in the relevant projects (JS engines, Emscripten and Closure Compiler). Some numbers appear in the slides linked to above (the upper link is more recent).
  • Bundled headers. This makes it easier to use Emscripten on non-Linux platforms (Linux being the platform that most development is done on) and in a portable way that does not depend on your local system headers.
  • Some library bugfixes that resolved almost all the open issues on speak.js, the Emscripten port of eSpeak to JavaScript which lets you do text-to-speech on the web.
  • j2k.js, a port of OpenJPEG to JavaScript with a nice API, letting you decode JPEG2000 images on the web. This might be helpful with pdf.js.
  • Support for LLVM svn (soon to be 3.0). Note that revision 141881 is known to work; other revisions should work too, but are not guaranteed to.
  • Many other improvements and bugfixes to Emscripten. I should probably formally release a 2.0 version, but I can't seem to decide when.
  • Finally, I am now working fulltime on Emscripten and Emscripten-related things (at Mozilla, where I already worked but on other stuff before). So progress on Emscripten will be faster :)

Sunday, October 9, 2011

llvm-svn branch has been merged

The llvm-svn branch of Emscripten has been merged to master, in preparation for Emscripten 2.0, after all tests have been fixed and all speed regressions resolved. If you are currently using master, the consequences of that are:
  • You should use LLVM's svn (soon to be 3.0). LLVM 2.9 might still work, but it isn't guaranteed. Also, LLVM 3.0 is deprecating llvm-gcc, so Emscripten no longer uses that in its tests (as with 2.9, it might still work, but it might not). clang in 3.0 is much improved and is able to compile much more code than 2.9, so llvm-gcc is less necessary; there is also dragonegg which combines LLVM and GCC in a different manner, but its website says it is not mature yet.
  • Emscripten now uses its own header files, not your system headers. That means that Emscripten should now work on all platforms exactly the same. However, if you were using Emscripten to compile something that relied on your system headers, you might need to change how your project is built (that is, tell it to use those headers and not just Emscripten's bundled ones in system/include). Note that if you do not use the bundled headers, you will probably need to use the -H flag with emscripten.py, which tells it what headers to parse for constants (library.js needs to be aware of constants in your library headers, so that it is synchronized with them).
Please report bugs if you find them. If there are no show-stoppers, Emscripten 2.0 will be released soon.

Saturday, September 24, 2011

Road to Emscripten 2.0

Lots and lots of work has been taking place on emscripten (the LLVM-to-JS compiler). I haven't been breaking things out into smaller releases, instead, there will be a 2.0 release in the near future (last release was 1.5). The remaining issues before that are:
  • Fix any remaining regressions in the llvm-svn branch compared to the master branch, and merge llvm-svn to master. The llvm-svn branch uses LLVM's svn, which will soon become 3.0. Some of the code changes in LLVM have hurt our generated code, but most of the issues are now fixed. The latest update is a fix for exception handling which has led to a 5% smaller ammo.js build (compared to 2.9), with no speed decrease :)
  • Bundle headers with emscripten. As more people have begun to use emscripten, we have been seeing more platform-specific problems, almost all due to different system headers (for example, issues #82 and #85 on github). Bundling working headers will fix that, similar to how SDKs and NDKs typically bundle a complete build environment. Currently the plan is to use the newlib headers for libc and start from there.
As mentioned in a previous blog post, emscripten 2.0 will require LLVM 3.0, and will no longer officially support the deprecated llvm-gcc compiler (it might still work though). This is a significant change which may affect projects using emscripten; please let me know if you are aware of any issues there.

Note also that I will be merging llvm-svn to master before LLVM 3.0 goes stable. That means that you will need to build LLVM from source at that time, since there won't be official LLVM 3.0 binaries yet. Of course, you will still be able to use a previous revision of emscripten, which works with LLVM 2.9, with no issues.

Friday, September 9, 2011

LLVM 3.0, llvm-svn Branch

LLVM 3.0 will probably be released in about a month. In preparation for that, I've gotten emscripten to work properly with LLVM svn in the llvm-svn branch.

As in the past, I intend to support only one version of LLVM at a time, since it takes too much effort to do more - our automatic tests already require several hours, and doubling that for another LLVM version is a huge burden.

With LLVM 3.0 it looks like llvm-gcc is pretty much obsolete. It isn't being developed much, and remains on gcc 4.2 (I am guessing due to Apple's aversion to the GPLv3?). There is Dragonegg, a plugin for recent GCC versions that uses LLVM as the backend; however, as of 2.9 Dragonegg is not considered mature. I am not sure if 3.0 will be sufficiently stable or not.

So, the question is what compilers to use with LLVM 3.0. Clang goes without saying. The good news is that Clang can finally build all of the source code in the emscripten automatic tests, which is very nice (although I did need to file a bug last week for libc++ - which is kind of ironic considering it's an LLVM project ;) but kudos to the LLVM people for the quick fix). As for other compilers, llvm-gcc seems of little importance if Clang can compile the same code, since llvm-gcc is deprecated. Dragonegg is interesting, but I am not sure it makes sense to use it before it is fully ready - we might end up wasting a lot of time on bugs.

So my current plan is to move to a single compiler, Clang, in the emscripten test suite. That is what is currently done in the llvm-svn branch. The main risk here is code that gcc can compile but Clang cannot. Is anyone aware of any significant cases of that? In particular I am curious about Python; I have not tried to compile it with Clang yet (the emscripten test suite has a prebuilt .ll file - we should fix that).

If no one raises any concerns about this plan, I'll merge the llvm-svn branch into master in the near future.

Thursday, September 8, 2011

The VTable Customization Hack

Recently my main focus in emscripten (the LLVM-to-JavaScript compiler) has been on the bindings generator: A tool to make it easy to use C++ code from within JavaScript. Why is this needed? Well, assume you have some C++ class,

class MyClass {
public:
  MyClass();
  virtual void doSomething();
};

The bindings generator will autogenerate bindings code so that you can do the following from JavaScript:

var inst = new MyClass;
inst.doSomething();

In other words, use that class from JavaScript almost as if it was a native JavaScript class.

Turns out that really doing this is not easy ;) One issue is callbacks from C++ into JavaScript: imagine that you compiled some C++ library into JavaScript, and at some point the C++ code expects to receive an object that has a virtual function, which it will call. This is a common design pattern that basically gives you a callback into your own code: typically you would create a new subclass, implement that virtual function, create an instance, and pass it to the library. The library will then call that function when needed.

Why is this difficult when mixing C++ and JavaScript? The main issue is that in C++ you would be creating those new classes and functions at compile time. But in JavaScript you are doing it at runtime. Creating a new class at runtime is not simple, but it was one option I considered. However compilation speed was too much of a concern. Instead, I went for a vtable customization approach.

The vtable of a class is a list of the addresses of its virtual functions. Virtual functions work as follows at runtime: the code goes to the vtable, looks up the proper index in it, loads the address stored there, and calls that function. So by replacing the vtable you can change what gets called. However, this still turned out to be fairly difficult. The reason is that the bindings code gets you into this situation:

// 1: Original C++ code
void MyClass::doSomething();

// 2: Autogenerated C++ bindings code
void emscripten_bind_MyClass_doSomething(MyClass *self)
{ self->doSomething(); }


// C++/JS barrier

// 3: Autogenerated JS bindings code
MyClass.prototype.doSomething = function() {
  _emscripten_bind_MyClass_doSomething(this.ptr);
};

// 4: Handwritten JS code
myClassInstance.doSomething();

The top layer is the original C++ code in the library you are compiling. Next is the generated C++ bindings code. This does almost nothing, except that it is declared extern "C" so that there is no C++ name mangling. Below that is the JS bindings code, which also seems fairly trivial here, but generally speaking it handles type conversions, object caching and a few other crucial things. Finally, at the bottom is the handwritten JS code you create yourself.

So, the idea of the vtable customization hack is to receive a concrete object, then copy and modify its vtable, replacing functions as desired. The replacements can be plain, normal JS functions, and presto: your C++ library is calling back into your handwritten JS code. However, how do you modify the vtable, exactly? When your handwritten code wants to modify it, what it specifies is code on the third level, something like this:

customizeVTable(myClassInstance, [{
  original: MyClass.prototype.doSomething,
  replacement: function() { print('hello world!') }
}]);

Here we want to replace doSomething with a custom JS function. But what appears in the vtable is not the third-layer function specified here. It isn't even the second-layer function! It's the first-layer one. How can you get there from here..?

A natural idea is to add something to the second layer,

// 2: Autogenerated C++ bindings code
void emscripten_bind_MyClass_doSomething(MyClass *self)
{ self->doSomething(); }
void *emscripten_bind_MyClass_doSomething_addr = &MyClass::doSomething;

- basically, have the address of the function in the bindings code. You can then read it at runtime and use that. But there are a few problems here. The first is that this code won't compile! The right-hand side is a pointer to a member function, a two-part value consisting of a class and a representation of the function within that class. You can't convert that to void* (well, GCC will let you, but it won't work). Even if you do get around the compilation issue, though, you will be left with that representation of the function. I had hoped it was a simple offset into the vtable - but it isn't, at least not in Clang. After some mucking around trying to figure out what in the world it was, I realized there was a better solution anyhow, because of the other reason that this approach is a bad idea: it forces you to add a lot of bindings code, a little for every single function. That's a lot of overhead, considering you will likely use that information for very few functions!

So instead, I arrived at the following hack:
  • Add a terminating 0 to all vtables at compile time. (This adds some overhead, but there is one vtable per class, and it's just one 32-bit value for each).
  • Copy the object's vtable.
  • Replace all the vtable elements with 'canary functions' that report back to you with their index in the vtable.
  • Call the function you want to replace, through the third-layer function you have available in JavaScript.
  • Since you replaced the entire vtable, you end up calling one of those. The canary function then reports back by setting a value. That value is the index of the function you want to replace in the vtable.
  • Copy the vtable again, this time the only modification is to replace the function at the index that you just found with the replacement function you want run instead.
  • (There are some additional complications, for example due to how emscripten handles C++ function pointers in JavaScript - pointers to functions are just integers, like all pointers, so there is a lookup table to map them to actual JS functions. Another issue is that the third-layer JS bindings code will try to convert types, and if you pass it the wrong things it will fail, so calling the canaries must be done very carefully. But the description above is the main idea.)
This ends up working properly. You can see the code in tools/bindings_generator.js (search for customizeVTable), and you can see it used in the latest version of ammo.js (the README there has been updated with documentation for it).
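For illustration, here is a minimal sketch of the canary step in JavaScript. The helper names here (getVTableSize, copyVTable, writeVTableSlot, restoreVTable) are assumptions made for this example, not the actual code in tools/bindings_generator.js, and FUNCTION_TABLE stands for Emscripten's integer-to-function lookup table; the real code also handles the type-conversion and function-pointer complications mentioned above.

// Sketch only - assumed helper names, not the real bindings generator code.
function findVTableIndex(instance, boundMethod) {
  var size = getVTableSize(instance);          // assumed: scans until the terminating 0
  var saved = copyVTable(instance, size);      // assumed: returns a copy of the slots
  var found = -1;
  for (var i = 0; i < size; i++) {
    // Each canary is registered as a "function pointer": an index into the
    // table that maps pointer integers to JS functions.
    var canaryPtr = FUNCTION_TABLE.length;
    FUNCTION_TABLE.push((function(index) {
      return function() { found = index; };    // the canary records its own slot
    })(i));
    writeVTableSlot(instance, i, canaryPtr);   // assumed helper
  }
  boundMethod.call(instance);                  // e.g. MyClass.prototype.doSomething
  restoreVTable(instance, saved);              // assumed helper
  return found;                                // index of the virtual function in the vtable
}

// customizeVTable can then copy the vtable once more and write the
// replacement's function pointer into the slot at that index.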


Thursday, August 11, 2011

Rewritten Physics Engine Demo

Initial testing of ammo.js (a port of Bullet Physics to JavaScript using Emscripten) found some issues, but they have been quickly resolved. ammo.js should be ready for use now.

Completing that allowed me to rewrite the original Emscripten Bullet demo using ammo.js. That is, the original demo code - creating the scene and so forth - was written in C++, and was compiled alongside Bullet into JavaScript for the original demo. What I did now was to write the scene generating code in JavaScript, where it uses Bullet through ammo.js's autogenerated bindings. Check out the demo here, and read the JavaScript embedded in the HTML file to see a complete example of using ammo.js.

I'm very happy with the result: The JavaScript code in the demo is very nice to work with now, and in addition it outperforms the original demo due to build system improvements that were completed since the original demo was finished.

Monday, August 8, 2011

ammo.js - Ready for Testing!

After months of work, ammo.js (the Bullet physics engine compiled to JavaScript using Emscripten) is now ready for testing. To do that, grab builds/ammo.js, and look at examples/hello_world.js for a code example.

examples/hello_world.js is almost a 1 to 1 manual translation of HelloWorld.cpp (which you can find in bullet/Demos/HelloWorld/HelloWorld.cpp). That is possible since the ammo.js bindings let you write very natural JavaScript, for example this C++ code
btTransform groundTransform;
groundTransform.setIdentity();
becomes this JavaScript code
var groundTransform = new btTransform();
groundTransform.setIdentity();
There are some limitations to the automatically generated bindings code (see the ammo.js README for more details), but overall it basically works.

This is the result of a lot of hard work by bretthart on CppHeaderParser and by me on the bindings generator in Emscripten that uses it. Turns out it's pretty hard to automatically generate bindings from C++ to JavaScript, who would have thought ;) In any case we appear to have things in good shape now. I will probably write a separate blogpost about the bindings methodology later on.

The speed of ammo.js should be fairly decent: most optimizations are applied to the underlying Bullet code, except for the LLVM ones and typed arrays. However, the bindings code itself is not optimized at all yet. Overall things should be fast enough for testing, with some additional speedups still possible later on.

Please test ammo.js and file issues on github :)

Sunday, July 31, 2011

Emscripten 1.5!

Version 1.5 of Emscripten, the LLVM to JavaScript compiler, is out. Lots of new stuff:
  • A Text-to-Speech demo using eSpeak. Not much had to be done to get this to work; a few library functions were missing, but that is pretty much it. I did need to bundle the getopt and strtok C sources in the project, though. Also, I had to use typed arrays type 2, since the eSpeak source code is not as platform-independent as we would like (so this ended up being a good test of typed arrays 2, actually). For more details, source code etc., see the demo page.
  • max99x has written a nice Filesystem API. See that link for documentation. It makes the emulated filesystem much more flexible and useful. The text-to-speech demo uses it, as do all the automatic tests. Aside from the API itself, this update comes with a ton of library additions for IO related things.
  • max99x also wrote parsing code to detect field names in LLVM metadata. This lets you use the original C/C++ field names in your JavaScript, so integrating compiled code and JavaScript becomes much easier. I am thinking about extending this for use in the bindings generator.
  • Speaking of the bindings generator, it has seen a lot of work and things are finally starting to run with Bullet, at least a 'hello world' of creating a btVector3. There is still some work ahead before it is finished, not sure how much.


Sunday, July 10, 2011

Emscripten 1.4!

Version 1.4 of Emscripten, the open source LLVM to JavaScript compiler, has been released.

Some significant improvements this time, including
  • Support for compiling and loading dynamic libraries, thanks to max99x for writing this very useful (and not easy to write!) feature. You can now compile a module as a shared library, and load it from your main compiled script just like you would load a normal shared library in native code, using dlopen() and so forth. This can potentially be very useful, both in not needing to rewrite code that is already split up into modules, and also in that it lets you load the main module quickly since other stuff is split out into other files, which can be loaded later on demand. I hope to see a demo of this up soon.

  • Automatic bindings generation. Until now, you could compile a C or C++ library and run it on the web, but using it from normal JavaScript was clunky. Thankfully bretthart pointed me to CppHeaderParser, a pure Python C++ header parser, which Emscripten can now use to generate bindings (for more details on the header parser, see here). The result is a set of JavaScript objects that wrap the compiled C++ code, so you can write quite natural JavaScript code to access them, for example, var inst = new CppClass() to create an instance, inst.doSomething() to call a function, etc. A lot of basic stuff already works (see part 2 of test_scriptaclass), I am currently investigating the use of this with Bullet in ammo.js, hopefully I will succeed there and have a more detailed blogpost afterwards.

  • Library stuff, lots of fixes and additions there, thanks to max99x and timdawborn.

Wednesday, June 22, 2011

Emscripten 1.3!

Version 1.3 of Emscripten, the open source LLVM to JavaScript compiler, has been released.

No new demo this time, sorry. However the Python demo has been updated to improve performance and enable raw_input to work (it prompts for input using window.prompt). Press 'execute' in the demo to see it work.

Main updates:
  • Support for a new usage of typed arrays, TA2. In TA2, a single shared buffer is used, and int8, int32 etc. are all accessed through views into that buffer. The main benefit here is memory usage - this mode takes much less memory than TA1 (the original typed array usage), and in most cases probably less than the non-typed array case.

    In theory, this can also be faster. However, that doesn't appear to be the case in my benchmarks, due to the need to constantly divide pointers by 2 or 4 (pointers are raw addresses, while indexes into typed arrays take into account the size of the element - int32vec[1] is at address 4!), and since JS engines still do not heavily optimize typed arrays. (A small sketch of the TA2 layout appears after this list.)

    One thing you can do, though, is use dangerous nonportable LLVM optimizations with TA2. TA2 lets you write an int and read the first character, and get the 'right' result. Of course 'right' will depend on the endianness, so this is very dangerous and not recommended. However you can compile two versions, one for each endianness. This can potentially be faster.

  • Some relooper optimizations were done, which gave us a nice speed improvement. I'll probably do a full blogpost on performance issues, but to briefly summarize, we seem to be getting close to the speed of handwritten JS code, which is to say, as fast as we can probably get. In absolute terms, compared to gcc -O3 (the fastest native code), we are around 5X slower (on the latest development versions of SpiderMonkey and V8). But there is a big spread: In raw numeric processing we are often just 2-3X slower, which is about the same as Scala, Haskell, and Mono, but certain other operations are costlier and in some benchmarks we are up to 10X slower.
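To make the TA2 idea concrete, here is a rough sketch (with assumed names - this is not Emscripten's actual generated code) of a single shared buffer with several views, and of the pointer division and endianness issues mentioned above:

var buffer = new ArrayBuffer(1024);      // one shared memory space
var HEAP8  = new Int8Array(buffer);      // byte view
var HEAP32 = new Int32Array(buffer);     // 32-bit view over the same bytes

var ptr = 8;                             // a "pointer" is a raw byte address
HEAP32[ptr >> 2] = 1234;                 // 32-bit access must divide the address by 4

// Writing an int and reading its first byte "works", but the result depends
// on endianness (1234 is 0x4D2, so a little-endian machine sees 0xD2 here,
// which is -46 as a signed byte) - hence dangerous and nonportable.
var firstByte = HEAP8[ptr];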

Other news:
  • Still hoping for people to help out with OpenGL/WebGL stuff. Please step up! I don't know what to do there myself.
  • Next main project for me personally will probably be better tools to integrate compiled code with normal JS code. One option is to use SWIG to generate bindings. We could then compile a C++ library and use it in a natural way on the web, which would be very cool. If you know SWIG, or don't but want to see this happen (like me ;) then please get in touch.
  • I had some discussions with people interested in compiling certain large projects to the web, for example Second Life and Mono. Both have significant technical difficulties (rendering and networking for Second Life, the non-existence of an interpreter and the limitations of mono-llvm in Mono), but if the people interested in each are serious enough to do the work to overcome the respective difficulties, I have promised to do the raw C++ to JS conversion for each of those projects. Hopefully cool things will happen here.

Tuesday, May 31, 2011

Followup to Doom on the Web

The Doom on the Web demo has been viewed over 35,000 times so far. Based on the responses, I'd like to clarify some things that I should have mentioned before (sorry for not doing so).

First off, I should have linked to the details page more clearly. It explains a bit about the demo, where it is currently known to work, etc.

Now, to the main issue: This demo is not a good benchmark of anything. The goal here was not to run a version of Doom with good performance, but to run DOOM itself, the original, with as few changes as possible, on the web. I made no effort to optimize the code, which was written and heavily optimized for a completely different architecture (it uses fixed-point arithmetic! :) I wasn't sure if it would be playable at all.

That is, the point of the demo is, "Doom can be run, with hardly any modifications, on the web." Showing that sounded like a cool thing to do, so I spent several evenings and a weekend or two on it.

As for frame rates, Doom caps them at 35 - you won't see it get any better than that, simply because of how the main loop works (and as mentioned before, I didn't try to improve it). It will also max out your CPU, even if it doesn't need to, for the same reasons. It might be possible to optimize this with some modifications to the Doom source code, but I didn't look into that. So, if you are seeing 35fps and 100% CPU, that doesn't mean your machine is actually working hard to generate it.

For example, I get close to 35fps on a slow 1.2GHz laptop. A modern machine would probably be able to get over 100fps with a proper main loop. And again, even this is not a fair measure of how fast Doom could be, if it were actually optimized for JavaScript. So please don't run this demo, be disappointed by the speed, and say "JavaScript is too slow, we need Flash/NaCl/Java/native apps" etc. The demo can't be used to conclude anything like that. Valid benchmarks (which this demo is not) show that JavaScript is quite fast, and getting faster more quickly than any other language - something that shouldn't be surprising, given that it probably has the most developer effort put into it, these days.

I hope the above explains what frame rates in the demo actually mean (that is, almost nothing). Now, aside from that, some people said the demo was very slow for them. I suspect that depends on the browser, as both Firefox and Safari play it very speedily even on older machines (as I mentioned earlier, my old laptop at 1.2GHz runs it well on FF7). On the other hand, Opera runs it slowly, while Chrome is unplayable (I reported the issue to them, and am doing my best to help figure it out). So, performance depends on the browser. That's disappointing, clearly, but that is another point of this demo - to push the limits, and hopefully to motivate JavaScript engine devs to fix whatever bugs are in the way, and be even faster and more awesome.

P.S. I apologize for the quality of sound in the demo. I never used any audio generating API before, and I still don't know what the numbers I am passing from Doom to the Audio Data API actually mean ;) I basically just hacked together something quickly, got it to the point it is in the demo, and stopped there. Someone that knows this stuff could probably make it sound right.

Monday, May 30, 2011

ammo.js: Bullet on the Web

We already had a demo of the Bullet physics engine in JavaScript a while ago. People asked for an easier way to use it, so I set up ammo.js, a separate project to port Bullet to the web.

The starting point is Bullet compiled to JavaScript using Emscripten, and the main challenge is to make a friendly API for JavaScript applications to use. See Issue #1 in that github repo, we are currently looking for ideas and help in doing this. Discussion also takes place on Emscripten's IRC channel (#emscripten on irc.mozilla.org).

Emscripten 1.2, Doom on the Web

Emscripten, the LLVM to JavaScript compiler, is now at version 1.2. The main updates in this release were to enable this demo of Doom on the Web - a playable version of the classic game Doom, compiled from C to JavaScript and rendering using Canvas.

The demo is known to work on Firefox and Safari. It works, but slowly, on Opera. I can't get it to run properly in Chrome due to a problem with V8. I have no idea if it runs on IE9, since I don't have a Windows machine, but since IE9 has a fast JS engine and supports canvas, it should (please let me know if you try it there). Edit: Here's a screencast of the demo running on Firefox Nightly if you can't run it yourself.

Highlights of Emscripten 1.2:
  • Many improvements to Emscripten's implementation of the SDL API in JavaScript, including support for color palettes (Doom uses a 256-color palette; a sketch of the idea appears after this list), input events (we translate normal web keyboard events into their SDL forms), and audio (for now, just using the Mozilla Audio Data API - it's the most straightforward API at this point. Patches are welcome for other ones).
  • Many improvements to the CHECK_* and CORRECT_* options, which are very important for generating optimized code using Emscripten. In particular, there is a new AUTO_OPTIMIZE option which will output a summary of which checks ran how many times, and how many of those checks failed, giving you a picture of which lines are important to optimize, and which can be.
  • Some additional experimental work is ongoing on supporting OpenGL via WebGL. I don't know either OpenGL or WebGL very well - I'm learning as I go - and I'm not sure how feasible this project is. If you can help here, please do!
  • Various bug fixes. Thanks to all the people that submitted bug reports. In addition compiling Doom uncovered a few small bugs, for example we were not doing bit shifts on 64-bit integers properly.
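As an illustration of the palette support, here is a minimal sketch (my own example with an assumed data layout, not the actual SDL library code in Emscripten) of blitting an 8-bit paletted framebuffer to a canvas:

// pixels: Uint8Array of palette indexes; palette: 256 [r, g, b] triples, flat
function blitPaletted(ctx, width, height, pixels, palette) {
  var image = ctx.createImageData(width, height);
  var data = image.data;
  for (var i = 0; i < width * height; i++) {
    var p = pixels[i] * 3;               // look up the RGB entry for this index
    data[i * 4]     = palette[p];        // R
    data[i * 4 + 1] = palette[p + 1];    // G
    data[i * 4 + 2] = palette[p + 2];    // B
    data[i * 4 + 3] = 255;               // A (fully opaque)
  }
  ctx.putImageData(image, 0, 0);
}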

Sunday, May 1, 2011

Emscripten 1.1!

Emscripten is an LLVM to JavaScript compiler, allowing you to run code written in C or C++ on the web. I released version 1.1 today, with the following updates:
  • A much improved Bullet demo - check it out! This version is much faster. The main differences are use of memory compression (see below), LLVM optimizations, and CubicVR.js for rendering.
  • QUANTUM_SIZE == 1, a.k.a memory compression. This is an advanced, and somewhat risky, optimization technique. I see speedups of around 25%, but take note, this must be used carefully. See the docs.
  • Dead function elimination tool: A Python script that scrubs an .ll file to remove unneeded functions. This is useful to reduce the size of the generated code and speed up compilation. Note though that if you want to compile a library, then this tool will remove functions that you probably want left in - it removes everything that cannot be reached by main(). The test runner now uses this by default.
  • Various performance improvements and bug fixes.

Sunday, April 10, 2011

Emscripten Moves to GitHub

After starting on Google Code, and later adding a git mirror, Emscripten has now moved entirely to GitHub. emscripten.org and so forth should now forward to there.

The main reason for the change is the inconvenience in maintaining two clones. hg-git helped greatly, but it remained a constant hassle. Meanwhile GitHub has been getting more popular and more useful. The last straw was GitHub's recent addition of a much nicer issue tracker. So I decided to make the move today, which coincides nicely with the release of Emscripten 1.0: A fresh start on the road ahead to 2.0.

Code will not be updated in the Google Code page anymore, and I pushed a final commit there to warn people they are running old code if they get there by mistake. I moved the important wiki pages to GitHub, which leaves only one thing left behind, the open issues. If you have an open issue on Google Code that you care about, please either open a new issue on GitHub for it, or tell me and I'll do it for you.

Saturday, April 9, 2011

Emscripten 1.0!

It's been almost a year since I started Emscripten (which, if you haven't heard of it, is a tool to compile LLVM to JavaScript), during which it took up much of my spare time. So I am very pleased to announce that today Emscripten has reached the 1.0 milestone. This release comes with a demo of rendering PDFs on the web (warning: that page downloads >12MB, since it includes Poppler and FreeType. It's like downloading an entire desktop app, almost).

Other highlights in this release:
  • Very significant optimization of memory use in the compiler. This was necessary for the PDF demo to build, since it is far larger than previous demos.
  • Full support for the recently released LLVM 2.9.
  • The Emscripten documentation paper is finished. It explains how Emscripten works, so you might be interested in it if you care what Emscripten does under the hood (but if you just want to use Emscripten you don't need to read it).
Overall Emscripten is now in very good shape. It can probably compile almost any C/C++ project out there, subject to some limitations (like JS not allowing C-style multithreading). At times some manual intervention is needed, like changing the project's settings so it doesn't generate inline assembly, and of course bugs probably still exist, but lately the code I have compiled has tended to just work (hence the rate of commits has greatly decreased recently).

The speed of the generated code can be quite good. By default Emscripten compiles with very conservative settings, so the code will be slow, but optimizing the code is not that hard to do. Optimized code tends to run around 10x slower than gcc -O3, which is obviously not great, but on the other hand fairly decent and more than good enough for many purposes. And of course, that ratio will improve along with advancements in JavaScript engines, LLVM, and the Closure Compiler.

So, Emscripten 1.0 is in my opinion pretty solid. There are no major outstanding bugs, and no major missing features. (But I do have plans for some major improvements, which are difficult, but should end up with code that runs at least twice as fast.) Now that Emscripten is at 1.0, I am hoping to see it used in more places. I'm starting to propose at Mozilla that we use it in various ways, and also I'd love to see things like GTK or Qt ported to the web - if anyone wants to collaborate on that, let me know.

Saturday, March 19, 2011

Emscripten moving to LLVM 2.9

LLVM 2.9 will be released very soon, and Emscripten has just been updated to support it.

Emscripten has a lot of automatic tests - they take over 2 hours to run on my laptop - so I won't be running tests for LLVM 2.8 anymore (that would double the time the tests take). Until LLVM 2.9 is formally released with binary builds, you can build LLVM from svn source (the instructions on the Emscripten wiki are useful), or use LLVM 2.8 with Emscripten 0.9 (the last release of Emscripten that supports 2.8).

If you do build LLVM 2.9 and put it in a different location than 2.8 was, don't forget to update your ~/.emscripten file so it uses the version you want. Also, if you update LLVM to 2.9 and want to use llvm-gcc, you need to update that to their current svn as well.

There were not a lot of changes for Emscripten to support 2.9, so it is possible 2.8 will still work. But as mentioned above, I am not testing it, so I can't say for sure.

Sunday, March 6, 2011

Puzzles on the Web

Check out this very cool port of Simon Tatham's Portable Puzzle Collection to the web, by Jacques Le Roux, using Emscripten.

Nice quote from there:
This was basically just an experiment to see how hard it would be to port C code to a web application running entirely on the client (turns out not that hard).
:)

Saturday, March 5, 2011

Emscripten 0.9!

The demo this time is OpenJPEG: JPEG 2000 decoding in JavaScript.

Aside from OpenJPEG, lots of stuff in this release, including
  • Line number debugging: Emscripten can optionally add the original source file and line to the generated JavaScript (if you compiled the source using '-g'). Useful for debugging when things go wrong, especially with the new autodebugger tool, which rewrites an LLVM bitcode file to add printouts of every store to memory. Figuring out why generated code doesn't work is then as simple as running that same code in lli (the LLVM interpreter) and in JavaScript, and diff'ing the output, then seeing which original source code line is responsible.

  • Line-specific CORRECT'ing: The main speed issue with Emscripten is that JavaScript and C have different semantics. For example, -5/2 in C is -2, while +5/2 is +2, whereas in JavaScript naive division gives floating-point numbers, and worse, there is no single operator that recreates C's behavior (Math.floor on -5/2 gives -3, and Math.ceil on +5/2 gives +3). So in this example (unless we have a trick we can use, like |0 if the value is 32-bit and signed), we must check the sign of the value and round accordingly - and that is slow. Similar things happen not just with rounding, but also with signedness and numerical overflows, and therefore Emscripten has the CORRECT_SIGNS, CORRECT_OVERFLOWS and CORRECT_ROUNDINGS options. (A small sketch of the rounding issue appears after this list.)

    With line-specific correcting in the 0.9 release, you can find out which lines actually run into such problems, and tell Emscripten to generate the 100% correct code only in them. Most of the time, the slow and correct code isn't needed, so this option is very useful. I will write a wiki page soon to give more examples of how to use it to optimize the generated code (meanwhile, check out the linespecific test).

  • 20% faster compilation, mainly from optimizing the analyzer pass.

  • Strict mode JavaScript. The compiler will now generate strict mode JavaScript, which is simpler, less bug-prone, and in the future will allow JS engines to run it more quickly.
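Here is a small sketch of the rounding issue described above - plain example code written for this post, not Emscripten's actual output - showing the fast trick and the generic correct path:

// In C, integer division truncates toward zero: -5/2 == -2, +5/2 == +2.
// Neither Math.floor nor Math.ceil alone matches that in JavaScript.

// Fast trick, valid when the value fits in signed 32 bits: |0 truncates toward zero.
function divFast(a, b) {
  return (a / b) | 0;                  // (-5 / 2) | 0 === -2
}

// The fully correct path must branch on the sign, which is slower - roughly
// the kind of check that CORRECT_ROUNDINGS enables, and that line-specific
// correcting lets you emit only where it is actually needed.
function divCorrect(a, b) {
  var q = a / b;
  return q >= 0 ? Math.floor(q) : Math.ceil(q);
}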

Saturday, February 12, 2011

Synchronizing a git mirror with hg-git

Posting this since it might save someone else the time it took me to figure it out.

There are several tutorials for creating a git repo from an hg one using hg-git. But none of them go much into how to keep the git repo up to date when the hg one changes (assuming work is done in the hg repo). And, just doing a push with hg-git to github fails... silently :( (But it succeeded the first time, to create the repo.) Even adding -v --debug in hopes of getting some useful information is no help.

The solution is to push to a local git repo first, then push from there to github. But even pushing to a local git repo won't just work - you will get
error: refusing to update checked out branch: refs/heads/master
error: By default, updating the current branch in a non-bare repository
error: is denied, because it will make the index and work tree inconsistent
error: with what you pushed,
To get around that, create another branch in the local git repo (git branch side), switch to it (git checkout side), then push to it using hg-git. Then switch back to master (git checkout master) and finally push that to github.

I think it might also work to make the local git repo into a bare repo, but the above worked for me so I stopped there.

Friday, February 11, 2011

Git Mirror for Emscripten

We now have a git mirror on GitHub! Getting the Emscripten code is now as easy as

git clone git://github.com/kripken/emscripten.git

Code will be mirrored there, so you can either use git with that GitHub repo, or hg with the Google Code repo, and the result will be the same. (However for project stuff - issues, wiki, etc. - we will continue to use the Google Code project page.)

Tuesday, February 8, 2011

FreeType Demo

A simple demo of FreeType in JavaScript can be seen here.

Sunday, February 6, 2011

Emscripten 0.8!

The main highlights of this release are:
  • Tests for FreeType and zlib, two important real-world codebases. Aside from all the fixes and improvements necessary to get them to work, the test infrastructure now runs the entire build procedure (using emmaken) for those two tests, giving even more complete test coverage.
  • File emulation. Just enough to let compiled C/C++ code think it is accessing a filesystem. For example, the FreeType test loads a TrueType font from a file (but really it's a virtual filesystem, set up in JavaScript).
  • Additional compilation options for overflows and signedness (CHECK_OVERFLOWS, CORRECT_OVERFLOWS, CHECK_SIGNS). These allow even more C/C++ code to be compiled and run properly, but are switchable, so code that doesn't need them can run fast.
I'll post a web demo soon.

Thursday, January 20, 2011

Emscripten Overview Writeup

If you're interested in how Emscripten works, then this writeup I am working on may interest you. It is currently the best explanation of the underlying techniques Emscripten uses to compile LLVM to JavaScript, including the memory model, the Relooper algorithm, etc.

It isn't meant to be a manual or a practical guide. For that, as always see the wiki.

Tuesday, January 18, 2011

LIL browser demo

I just saw this demo of LIL (a Little Interpreted Language) running in the browser, compiled from C to JavaScript using Emscripten. Cool stuff!

Saturday, January 15, 2011

A Completely Speculative History & Future of H.264 and WebM

  1. Several years ago, Google decides something needs to be done about web video, because (1) H.264 requires royalties, which means that parts of the web are proprietary (even if it is a standard), and Google believes an open web is in its best interest, and (2) for similar reasons, H.264 is incompatible with the W3C, Mozilla's Firefox, and Opera, so it will never become universal anyhow. Google is willing to go to great lengths to solve this issue, including large sums of money and developer time.
  2. Google approaches MPEG-LA (or major entities that are members), and quietly floats the idea of 'freeing' H.264, by way of a large one-time payment from Google, after which H.264 will be royalty-free, and can then be blessed by the W3C.
  3. Negotiations fail. Google offers large amounts of cash, but it isn't enough for MPEG-LA, which believes it is close to having a complete lock on the market, which it can leverage for even more cash later on.
  4. Google threatens to support a competing format with all its resources, thereby threatening the future profitability of H.264.
  5. MPEG-LA decides to call Google's bluff.
  6. Google makes good on its threat, buying On2 and freeing its VP8 video codec as part of WebM, a royalty-free format for web video. (Note: I'll use 'WebM' to refer to 'VP8', a lot of the time.)
  7. Mozilla, Opera, etc. naturally support this move, as it is good for the open web. Apple and Microsoft, whose motivations are otherwise, do not support this move - they are both already heavily invested in H.264, and for them life would be simplest if WebM never existed.
  8. As a reaction to WebM, MPEG-LA makes H.264's licensing less expensive, and for a longer period of time.
  9. Google makes good on another part of its threat to MPEG-LA, removing H.264 support from Chrome. MPEG-LA is surprised Google is willing to hobble its own browser in order to get a leg up in this fight.

    (This brings us to the present time.)


  10. Nothing much changes, at first. Most web video is seen through Flash anyhow. However, the block of WebM supporters, which is now Firefox, Chrome and Opera - whose share in the market is large, and growing - gets video providers on the web to pay close attention to WebM.
  11. Flash introduces WebM support. Most video encoded for desktop viewing can now be encoded in WebM, and viewable through Flash or an HTML5 video element in Firefox, Chrome and Opera. Even video shown with DRM can be encoded in WebM, but must be shown in Flash. On the other hand, in the mobile space, a complete stack of hardware&software support is still really just present for H.264, and Apple doesn't support anything else, so video encoded for mobile viewing is primarily done in H.264.
  12. WebM's video quality improves, in part benefiting from the fact that while open source and royalty-free, WebM is not a formal specification or standard, so rapid development and changes are possible. WebM becomes equivalent or superior to H.264.
  13. Google switches YouTube to primarily use WebM for encoding video meant for desktop use. There is hardly any impact on users, due to most video being shown in Flash anyhow (which now supports WebM). However, video for mobile viewing remains encoded in H.264.
  14. Hardware support for WebM begins to ship in a great deal of new mobile devices, and eventually in a majority of new mobile devices.
  15. As a reaction to WebM's rise, MPEG-LA once more lessens the royalties for H.264, in an attempt to make it more competitive.
  16. A new version of Google's Android ships, on a new flagship phone from Google, that has complete hardware and software support for WebM. The device primarily views YouTube video in WebM format.
  17. Google, stating WebM's superior quality, begins to 'favor' WebM over H.264 on YouTube, for mobile content. More specifically, while both WebM and H.264 are supported, WebM content is encoded at higher quality levels (this is accomplished not by decreasing H.264 quality, but by adding a higher level of quality exclusively for WebM). The result is that mobile devices viewing YouTube give a better user experience if the device does so using WebM.
  18. Apple makes the rational decision and supports WebM on new iOS devices - hardware support is already there, and Apple cannot compromise on user experience. Whatever monetary benefit Apple gains from MPEG-LA from H.264 is completely eclipsed by Apple's iOS business, so this is a no-brainer.
  19. With the majority of new mobile devices shipping with WebM support (Android and iOS), smaller players (Blackberry, WebOS, Windows Phone) are forced to support it as well.
  20. The online video market has its anti-DRM moment, just like online audio already had. Video is shown without DRM, which simplifies delivery and cuts costs, and piracy remains at the same levels as before (just as with audio).
  21. Once it is clear H.264 has lost in the mobile space, and that DRM is no longer needed, there is no reason for Microsoft and Apple not to support WebM in the HTML5 video element, on Windows and OS X respectively, in order to ensure their users the best experience.
  22. With DRM no longer an issue and widespread support for WebM in the HTML5 video element, WebM becomes the universal standard for video on the web, on both desktop and mobile. Content producers have little reason to even support a fallback to Flash - some do, but many do not, at little detriment to them or their users.
  23. Google wins the fight, and the open web greatly benefits.
Some things that can change this future history:
  • MPEG-LA deciding to make H.264 100% royalty-free. This will kill MPEG-LA's profits, but may still be worthwhile for MPEG-LA members, since if done properly - and promptly - it can ensure H.264 becomes the standard for web video. Whether there remains enough profit from H.264 (from hardware, services, etc.) for this move to make sense, is not clear. But if this does happen, WebM loses, but really Google wins, since it got what it set out to get.
  • A new video format can appear, or a newer version of an existing format, which requires new hardware support but has benefits to justify the switch. Given the battle between H.264 and WebM, I would expect the new format's backers to learn the lessons of the past and make it free on the web (or, if they are not willing to do that - then to not even bother to create a new format). If such a new format appears, and becomes the universal standard for web video, the result is that Google wins in this case as well.
  • The fight gets taken to the courts. I doubt a simple injunction will be granted to either side, as both are powerful, influential, and have many patents to back up their claims - so there is no quick victory. Instead there is a lengthy court battle. To justify the cost, there must be a significant chance of large future profits, and if H.264 is already in decline, that might not be the case. However, it might still make sense for MPEG-LA to take Google to court, just to get it to settle for some amount of money, in which case Google wins overall, but MPEG-LA gets a little more money than otherwise. However, if the goal isn't a settlement, but an actual attempt to kill WebM, then things can get interesting. I don't think anyone can say for sure how that fight would turn out - does WebM infringe on H.264 patents? Does H.264 infringe on WebM patents (VP8 patents, granted to On2, and now owned by Google, which would countersue)? Perhaps both? Such a 'fight to the death' in the courts seems unlikely, in part due to that unpredictability, so all we can say for sure in this case is that several law firms will greatly benefit.
DISCLAIMER: I have no inside knowledge about any of this.

Friday, January 14, 2011

Emscripten Usage Change

I refactored the Python scripts in Emscripten to make them more sane. There is one difference in how Emscripten is used: if ~/.emscripten does not exist, it will copy tests/settings.py to ~/.emscripten, at which point you would edit the paths etc. in ~/.emscripten - not in tests/settings.py, which is how things were before (and that was bad).

If you already have a file at ~/.emscripten, and you probably do if you already ran Emscripten in the past, then that file will not contain all the necessary information. The simplest thing is for you to copy your edited tests/settings.py into ~/.emscripten. Or, you can delete ~/.emscripten and run the tests (python tests/runner.py), which will copy tests/settings.py for you into ~/.emscripten (but remember to change the paths, if you need to).

Sorry for the inconvenience, but the previous setup was just a hack that had to be fixed.

Wiki pages on the project site have been updated.

Saturday, January 1, 2011

JavaScript, Native Client, and Emscripten

This is a response to this blog post, which is titled "Mozilla’s Rejection of NativeClient Hurts the Open Web", which appeared prominently on Hacker News.


I disagree with the thesis in that blog post. My reasons are entirely technical.


First off, NaCl is not portable yet. PNaCl is working towards that, but it will take time. Until it is portable, comparing it to JavaScript is comparing apples and oranges. They simply do very different things - one is fast, the other is portable.


Second, while PNaCl is being worked on, at the same time a lot of effort is being put into making JavaScript engines faster. Now, for purposes of comparison to PNaCl, we don't need all JavaScript code to run as fast as native code. For our purposes here, we can care only about 'implicitly statically typed' JavaScript code - the code that can, in theory, be compiled so it runs as fast as native code. Implicitly statically typed code is code that uses a single type for each variable, and even though it's written in a dynamic language, could correspond almost 1-to-1 to code in a fast statically typed language like C.


Such code can be created automatically from C or C++ using something like Emscripten. Or, you can write such code on purpose, for example, PyPy is written in RPython, which is basically implicitly statically typed Python. More generally, you might write the performance-sensitive parts of a JavaScript application in an implicitly statically typed manner, while the rest can have fun with dynamic typing and the benefits that gives.
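As a simple illustration (my own example, not taken from any particular project), here is what implicitly statically typed JavaScript might look like:

// Every variable keeps a single type for its entire lifetime: n, i and total
// are always (integer-valued) numbers, so an engine can in principle compile
// this much like the equivalent C loop.
function sumOfSquares(n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += i * i;
  }
  return total;
}

A fully dynamic version might reuse total for a string or an object partway through, and that kind of type change is exactly what prevents this class of optimization.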


The important issue is that JavaScript engines can optimize implicitly statically typed code very well, both theoretically and practically. Techniques like tracing, type analysis, etc. are already being used in JavaScript engines like SpiderMonkey and V8, and progress is very fast. Also, PyPy can give us an idea of the long-term potential in such an approach.


So, while NaCl is working towards portability, JavaScript is working on speed. To clarify, again, I'm not talking about running all JavaScript code at native speed. But there is a big, relevant subset which can be run very quickly. Once JavaScript engines achieve the goal of running that code at native speed, then the performance advantage of NaCl will have vanished. At that point both NaCl and JavaScript will be fast (and, if PNaCl is completed, they will both also be portable).


Once we get there, I believe JavaScript will be preferable for the simple fact that it has a natural fallback built in - implicitly statically typed JavaScript is perfectly valid JavaScript, so even if your JS engine doesn't achieve the full speed of native code, at least it can run it. Whereas NaCl will simply not run at all unless the NaCl plugin is installed (and that may never happen on iOS devices, and may never happen by default on any desktop browser but Chrome).


Note that one might devise a fallback for NaCl by writing an emulator, or even a compiler, for PNaCl in JavaScript - perhaps using Emscripten (which does exactly what is needed - convert LLVM into JavaScript). If the speed-intensive parts of the code in that approach are implicitly statically typed, then we have come full circle, and the two approaches of JavaScript+Emscripten or PNaCl+Emscripten essentially converge, with a minor disadvantage to NaCl for being more complex (a special NaCl compiler, and a special NaCl runtime).