Live with grace. Write superb software.

web 3.0

  • fast-live-reload Presentation

    When creating web applications, a lot of time is spent switching back and forth between the code and the application itself. What if there were an application that refreshed your browsers automatically?

  • Native vs Bluebird vs Core-Promise Benchmark

    I benchmarked the native promises implementation (available since node v0.12), the bluebird implementation, and core-promise, my own TypeScript implementation.

    The tests are here for the native, bluebird, and core-promise implementations. Note that all the implementations pass the full Promises/A+ test suite of 872 tests.

    I ran each test file 11 times, using:

    for f in `seq 0 10`; do time mocha test-core-promise.js > /dev/null 2>&1; done

    Obviously, I changed the file name between iterations.

    I subtracted 11.9 seconds from each run, since that time is spent in setTimeout calls inside the tests themselves, leaving only the actual promise execution time. These are the final results (in seconds):


    Native    Bluebird    Core-Promise
    1.616     1.760       1.564
    1.567     1.626       1.520
    1.568     1.669       1.548
    1.539     1.657       1.573
    1.574     1.595       1.557
    1.583     1.604       1.580
    1.543     1.582       1.540
    1.529     1.622       1.567
    1.600     1.605       1.567
    1.407     1.661       1.544
    1.467     1.640       1.587
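    If you want to double-check the percentages, the per-implementation averages can be recomputed from the table above with a few lines of JavaScript (the numbers are copied verbatim from the table):

    ```javascript
    // Run times in seconds, one array per implementation, taken from the table.
    var runs = {
        native:         [1.616, 1.567, 1.568, 1.539, 1.574, 1.583, 1.543, 1.529, 1.600, 1.407, 1.467],
        bluebird:       [1.760, 1.626, 1.669, 1.657, 1.595, 1.604, 1.582, 1.622, 1.605, 1.661, 1.640],
        "core-promise": [1.564, 1.520, 1.548, 1.573, 1.557, 1.580, 1.540, 1.567, 1.567, 1.544, 1.587]
    };

    // Arithmetic mean of an array of numbers.
    function mean(xs) {
        return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
    }

    Object.keys(runs).forEach(function (name) {
        console.log(name + ": " + mean(runs[name]).toFixed(3) + "s");
    });
    // prints:
    //   native: 1.545s
    //   bluebird: 1.638s
    //   core-promise: 1.559s
    ```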



    Unsurprisingly, the native implementation won performance-wise, but by less than 1% compared to core-promise, and by ~5% compared to bluebird. Also, core-promise defaults to the native promises implementation when one is available, if you use the exported Promise class (for these tests I accessed the CorePromise implementation on purpose):

    var Promise = require("core-promise").Promise;

    Not to mention code readability: core-promise's Promise vs bluebird's Promise.

    *Disclaimer* I am the creator of core-promise, and yes, I feel pretty good about getting better performance than the "unmatched performance" of bluebird. :)

  • TypeScript unit testing with Mocha

    I am writing some unit tests for my TypeScript collection classes (actually migrating some older tests), and I decided to go with Mocha.

    How easy is it? Well, as simple as 1, 2, 3... 4 :) :

    1. Install the typing files

    tsd install mocha
    tsd install node
    tsd install assert

    2. Write your test(s)

    For this example I'm presenting a super simple test, but obviously nothing stops you from adding new ones (check my github core-lang project for details):

    /// <reference path="../../../typings/mocha/mocha.d.ts"/>
    /// <reference path="../../../typings/node/node.d.ts"/>
    /// <reference path="../../../typings/assert/assert.d.ts"/>
    import { list, XIterable, XIterator } from "../../main/core/Iterable";
    import assert = require("assert");

    describe('ArrayList', function() {
        it('should allow filtering', function() {
            var items : XIterable<number> = list([1, 2, 3, 4]),
                filteredItems : XIterable<number>,
                iterator : XIterator<number>;

            filteredItems = items.filter(it => it % 2 == 0);
            iterator = filteredItems.iterator();

            assert.equal(iterator.next(), 2);
            assert.equal(iterator.next(), 4);
            assert.notEqual(iterator.hasNext(), true);
        });
    });

    3. Build your test(s)

    I use Grunt. I've deleted all the non-test stuff here, but it's easy to spot the target named "test" across all the plugins in the original full Grunt file:
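    As an illustration, a trimmed Gruntfile with a "test" target could look something like this. This is only a sketch: it assumes the grunt-ts and grunt-mocha-test plugins, and the paths are my own placeholders, not necessarily the original project's layout:

    ```javascript
    // Hypothetical trimmed Gruntfile.js: only the "test" targets are shown.
    module.exports = function (grunt) {
        grunt.initConfig({
            ts: {                      // grunt-ts: compile the TypeScript test sources
                test: {
                    src: ["src/test/**/*.ts"],
                    outDir: "target/test"
                }
            },
            mochaTest: {               // grunt-mocha-test: run the compiled tests
                test: {
                    src: ["target/test/**/*.js"]
                }
            }
        });

        grunt.loadNpmTasks("grunt-ts");
        grunt.loadNpmTasks("grunt-mocha-test");

        // "grunt test" compiles the tests, then runs them with Mocha.
        grunt.registerTask("test", ["ts:test", "mochaTest:test"]);
    };
    ```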

  • WebComponents

    I'm more and more interested in web components as a technology. Apparently there are two ways to do components out there:

    On one hand there are implementations like Bosonic, Polymer, or X-Tag, where you can define new components that extend the HTML vocabulary, and these projects seem to be searching for some common ground. The big thing that concerns me is shared behavior, since no one uses raw JS contexts anymore. Can you truly do that without reinventing the wheel, without a new framework? Is integration with existing frameworks easily done for non-trivial components? (e.g. a calendar component might need data validation that is offered by the framework, like YUI3 for example).

    Another approach is to create a widget system, like GWT composites or YUI widgets: widgets reuse other widgets, and you generally build really powerful abstractions on top of that, sticking for the most part to HTML (eventually with extensions - GWT works with XHTML and different namespaces). There are variations that use different templating engines (e.g. handlebars) or different UI class hierarchies, but the general idea is the same: you have widgets that are pure JS objects from the framework, tied to some visual representation, with composition part of the API either declaratively (by templates) or imperatively (e.g. a calendar widget manually creates the input widgets, spans and divs it needs for its visual representation, and keeps references to them).

    The difference I see is that extending the HTML vocabulary could, in theory, provide a unifying common platform, allowing me (allegedly) to reuse components across frameworks and also get far cleaner markup, since the markup would describe real components (instead of a bunch of divs bound together by some voodoo JS behind the curtains, pulling all the logic strings).

    But I have a feeling the real challenge is framework integration, and that's why Angular didn't jump on the Polymer bandwagon, even though both are made by people from the same company.

  • What Do We Learn from the Broken Firefox WebDriver Support

    Originally published at the project's site:

    With the new release of Firefox 47, WebDriver support was left in limbo. On one hand, the old WebDriver API was not accessible anymore; on the other hand, the new API (Marionette) explicitly didn't support it. I kid you not, they actually used the word explicitly when saying they don't support version 47, despite Mozilla's release notes telling you to use it.

    “Go use it, it’s broken” [probably someone at Mozilla]

    Of course, this led to the Mozilla team releasing a hotfix for Firefox, namely 47.0.1, which fixed, you guessed it, the old WebDriver API so it works again. The only small problem is that Mozilla knowingly introduced a bug that nuked all WebDriver Firefox tests. For 3 weeks. Between June 7 and June 28.



    Now that we know this might happen, how do we mitigate this?

    The answer is containers. Docker containers.

    Germanium, out of the box, comes in two flavours. On one hand, it's the library itself that we know and love, and that gets tested against a set of browsers.

    On the other hand, for Firefox and Chrome, Docker images are also built automatically, which guarantees that such changes in infrastructure don't destroy the stability of your tests. For Firefox, since version 47 no longer worked with WebDriver, the container uses Firefox 46 and the old API (the Marionette support is abysmal at this stage).

    Of course, this means we're not on the bleedingest of edges, browser-version-wise, but that is OK. In a Continuous Integration system, we want to be sure we don't get all our tests failing just because WebDriver has a bug right now, especially since the API itself is still coagulating, so this is bound to happen some more. The key is to lock the moving parts of the testing environment, and Germanium does that by default.

    These images are also used to test Germanium itself, so you know that all the API calls documented there run as expected.
