Sabbatical Over

Aww, my 8-week sabbatical is now over. I wish I had more time, but I feel I used it well, and there are certainly lots of Firefox bugs I want to work on too, so perhaps it’s about that time now (also, it’s not that long till Christmas anyway!).

So, what did I do on my sabbatical?

As I mentioned in the previous post, I took the time off primarily to work on a game, and that’s pretty much what I did. Except, I ended up working on two games. After realising the scope for our first game was much larger than we’d reckoned on, we decided to work on a smaller puzzle game too. I had a prototype working in a day, rewrote it the next day because DOM rendering is slow, then rewrote it again the day after because, as it turns out, canvas isn’t particularly fast either. After that, it’s been polish and refinement; it still isn’t done, but it’s fun to play and there’s promise. We’re not sure what the long-term plan is for this, but I’d like to package it with a runtime and distribute it on the major mobile app-stores (it runs in every modern browser, IE included).

The first project ended up being a first-person, rogue-like, dungeon crawler. None of those genres are known for being particularly brief or trivial games, so I’m not sure what we expected, but yes, it’s a lot of work. In this time, we’ve firmed up our idea of the game, designed some interaction, worked on various bits of art (texture-sets, rough monsters) and have an engine that lets you walk around an area, pick things up and features deferred, per-pixel lighting. It doesn’t run very well on your average phone at the moment, and it has layout bugs in WebKit/Blink based browsers. IE11’s WebGL also isn’t complete enough to render it as it is, though I expect I could get a basic version of it working there. I’ve put this on the back-burner slightly to focus on smaller projects that can be demoed and completed in a reasonable time-frame, but I hope to have the time to return to it intermittently and gradually bring it up to the point where it’s recognisable as a game.

You can read a short paragraph and see a screenshot of both of these games at our team website, or see a few more on our Twitter feed.

What did I learn on my sabbatical?

Well, despite what many people are pretty eager to say, the web really isn’t ready as a games platform. Or an app platform, in my humble opinion. You can get around the issues if you have a decent knowledge of how rendering engines are implemented and a reasonable grasp of debugging and profiling tools, but there are too many performance and layout bugs for it to be comfortable right now, considering the alternatives. While it isn’t ready, I can say that it’s going to be amazing when it is. You really can write an app that, with relatively little effort, will run everywhere. Between CSS media queries, viewport units and flexbox, you can finally, easily write a responsive layout that can be markedly different for desktop, tablet and phone, and CSS transitions and a little JavaScript give you great expressive power for UI animations. WebGL is good enough for writing most mobile games you see, if you can avoid jank caused by garbage collection and reflow. Technologies like CocoonJS make this really easy to deploy too.

Given how positive that all sounds, why isn’t it ready? These are the top bugs I encountered while working on some games (from a mobile specific viewpoint):

WebGL cannot be relied upon

WebGL has finally hit the release version of Chrome for Android, and has been enabled in Firefox and Opera for Android for ages now. The aforementioned CocoonJS even lets you use it on iOS. Availability isn’t the problem. The problem is that it frequently crashes the browser, or you frequently lose context, for no good reason. Changing the orientation of your phone, or resizing the browser on desktop, has often caused the browser to crash in my testing. I’ve had lost contexts when my app is the only page running, no DOM manipulation is happening, no textures are being created or destroyed and the phone isn’t visibly busy with anything else. You can handle it, but having to recreate everything when this happens is not a great user experience. This happens frequently enough to be noticeable, and annoying. This seems to vary a lot per phone, but is not something I’ve experienced with native development at this scale.
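Handling it looks something like this, as a minimal sketch; `initScene` is a hypothetical function that recreates all your GPU resources from scratch:

```javascript
// Sketch: surviving WebGL context loss. `initScene` (hypothetical) is
// assumed to rebuild every GPU resource: shaders, buffers, textures.
function attachContextHandlers(canvas, initScene) {
  canvas.addEventListener('webglcontextlost', function (event) {
    // Without preventDefault(), the context will never be restored.
    event.preventDefault();
  }, false);
  canvas.addEventListener('webglcontextrestored', function () {
    // All GPU-side state is gone at this point; recreate it from scratch.
    initScene();
  }, false);
}
```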

As an aside, Chrome also has an odd bug that throws a security exception if you load an image (from the same domain), render it scaled into a canvas, then try to upload that canvas as a WebGL texture. This, unfortunately, means we can’t use WebGL on Chrome in our puzzle game.

Canvas performance isn’t great

Canvas ought to be enough for simple 2D games, and there are certainly lots of compelling demos about, but I find it’s near impossible to get 60fps, full-screen, full-resolution performance out of even quite simple cases, across browsers. Chrome has great canvas acceleration and Firefox has an accelerated canvas too (possibly Aurora+ only at the moment), and it does work, but not well enough that you can rely on it. My puzzle game uses canvas as a fallback renderer on mobile, when WebGL isn’t an option, but it has markedly worse performance.

Porting to Chrome is a pain

A bit controversial, and perhaps a pot/kettle situation coming from a Firefox developer, but it seems that if Chrome isn’t your primary target, you’re going to have fun porting to it later. I don’t want to get into specifics, but I’ve found that Chrome often lays out differently (and incorrectly, according to specification) when compared to Firefox and IE10+, especially when flexbox becomes involved. Its transform implementation is quite buggy too, and often ignores the set perspective. There’s also the small annoyance that some features that are unprefixed in other browsers are still prefixed in Chrome (animations, 3d transforms). I actually found Chrome to be more of a pain than IE. In modern IE (10+), things tend to either work, or not work. I had fewer situations where something purported to work, but was buggy or incorrectly implemented.

Another aside: touch input in Chrome for Android has unacceptable latency, and there doesn’t seem to be any way of working around it. No such issue in Firefox.

Appcache is awful

Uh, seriously. Who thought it was a good idea for appcache to work entirely independently of the browser cache? Because it isn’t a good idea. It took me a while to figure out that I had to change my server settings so that the browser won’t cache images/documents independently of appcache, breaking appcache updates. I tend to think that the most obvious and useful way for something to work should be how it works by default, and that’s really not the case here.
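The sort of server change this needed, sketched as Apache directives (assuming mod_headers is enabled; the file extensions are illustrative, adjust to your setup):

```apache
# Stop the HTTP cache holding stale copies of resources that appcache
# manages itself, which would otherwise break appcache updates.
<FilesMatch "\.(html|js|css|png)$">
  Header set Cache-Control "no-cache"
</FilesMatch>

# The manifest itself must never be cached, or clients won't see new versions.
AddType text/cache-manifest .appcache
<FilesMatch "\.appcache$">
  Header set Cache-Control "no-store"
</FilesMatch>
```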

As an aside, Firefox has a bug that means any two pages sharing the same appcache manifest will crash the browser when the second page is accessed. This includes an installed version of an online page that uses the same manifest.

CSS transitions/animations leak implementation details

This is the most annoying one, and I’ll make sure to file bugs about this in Firefox at least. Because the setting of style properties gets coalesced, animations often don’t run. Removing display:none from an element and setting a style class to run a transition on it won’t work unless you force a reflow in between. Similarly, switching to one style class, then back again, won’t cause the animation on the first class to re-run. This is the case in at least Firefox and Chrome; I’ve not tested in IE. I can’t believe that this behaviour is explicitly specified, and it’s certainly extremely unintuitive. There are plenty of articles that talk about working around it; I’m kind of amazed that we haven’t fixed this yet. I’m equally concerned about the bad habits it encourages, too.
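The workaround, as a minimal sketch (the class name is illustrative; reading any layout property forces the pending reflow):

```javascript
// Sketch: force a reflow between un-hiding an element and starting its
// transition; otherwise the two style changes get coalesced and nothing runs.
function showWithTransition(el, className) {
  el.style.display = 'block'; // element was display: none
  // Reading a layout property flushes the pending style changes and forces
  // a synchronous reflow; 'void' just discards the value.
  void el.offsetWidth;
  el.classList.add(className); // the transition now actually runs
}
```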

DOM rendering is slow

One of the big strengths of HTML5 as an app platform is how expressive HTML/CSS are, and how easily you can create user interfaces with them, then visually tweak and debug them. You would naturally want to use this in any app or game you were developing primarily for the web. Except, at least for games, if you use the DOM for your UI, you are going to spend an awful lot of time profiling, tweaking and making seemingly irrelevant changes to your CSS to try and improve rendering speed. This is no good at all, in my opinion, as easy UI development is the big advantage the web has over native development. If you’re using WebGL only, you may as well just develop a native app and port it to wherever you want it, because using WebGL doesn’t make cross-device testing any easier and it certainly introduces a performance penalty. On the other hand, if you have a simple game, or a UI-heavy game, the web makes that much easier to work on. The one exception to this seems to be IE, which has absolutely stellar rendering performance. Well done IE.

This has been my experience with making web apps. Although those problems exist, when things come together, the result is quite beautiful. My puzzle game, though there are still browser-specific bugs to work around and performance issues to fix, works across phones of varying size and specification, in every major, modern browser. It even allows you to install it in Firefox as a dedicated app, or add it to your homescreen in iOS and Chrome beta. Being able to point someone to a URL to play a game, with no further requirement, and no limitation of distribution or questionable agreements to adhere to is a real game-changer. I love that the web fosters creativity and empowers the individual, despite the best efforts of various powers that be. We have work to do, but the future’s bright.


As of Friday night, I am now on a two month unpaid leave. There are a few reasons I want to do this. It’s getting towards the 3-year point at Mozilla, and that’s usually the sort of time I get itchy feet to try something new. I also think I may have been getting a bit close to burn-out, which is obviously no good. I love my job at Mozilla and I think they’ve spoiled me too much for me to easily work elsewhere even if that wasn’t the case, so that’s one reason to take an extended break.

I still think Mozilla is a great place to work, where there are always opportunities to learn, to expand your horizons and to meet new people. An unfortunate consequence of that, though, is that I think it’s also quite high-stress. Not the kind of obvious stress you get from tight deadlines and other external pressures, but a more subtle, internal stress that you get from constantly striving to keep up and be the best you can be. Mozilla’s big enough now that it’s not uncommon to see people leave, but it does seem that a disproportionate amount of them cite stress or needing time to deal with life issues as part of the reason for moving on. Maybe we need to get better at recognising that, or at encouraging people to take more personal time?

Another reason though, and the primary reason, is that I want to spend some serious time working on creating a game. Those who know me know that I’m quite an avid gamer, and I’ve always had an interest in games development (I even spoke about it at Guadec some years back). Pre-employment, a lot of my spare time was spent developing games. Mostly embarrassingly poor efforts when I view them now, but it’s something I used to be quite passionate about. At some point, I think I decided that I preferred app development to games development, and went down that route. Given that I haven’t really been doing app development since joining Mozilla, it feels like a good time to revisit games development. If you’re interested in hearing about that, you may want to follow this Twitter account. We’ve started already, and I like to think that what we have planned, though very highly influenced by existing games, provides some fun, original twists. Let’s see how this goes 🙂

Getting healthy

I’ve never really considered myself an unhealthy person. I exercise quite regularly and keep up with a reasonable number of active hobbies (climbing, squash, tennis). That’s not really lapsed much, except for the time the London Mozilla office wasn’t ready and I worked at home – I think I climbed less during that period. Apparently though, that isn’t enough… After EdgeConf, I noticed in the recording of the session I participated in that I was looking a bit more plump than the mental image I had of myself. I weighed myself, and came to the shocking realisation that I was almost 14 stone (89kg). This put me well into the ‘overweight’ category, and was at least a stone heavier than I thought I was.

I’d long been considering changing my diet. I found Paul Rouget’s post particularly inspiring, and discussing diet with colleagues at various work-weeks had put ideas in my head. You could say that I was somewhat of a diet sceptic; I’d always thought that exercise was the key to maintaining a particular weight, especially cardiovascular exercise, and that with an active lifestyle you could get away with eating what you like. I’ve discovered that, for the most part, this was just plain wrong.

Before I go into the details of what I’ve done over the past 5 months, let me present some data:

[Chart: daily weigh-ins, 10 February to 13 July 2013, falling steadily from ~88kg to ~69kg]

I started my new diet on February 10th, and as of today (July 13th), I’ve lost 3 stone/~19kg and am well in the ‘ideal’ weight range. Of course, BMI is a pretty rough measure of anything, but I feel better, I’m in much better shape and I find physical activity far more enjoyable than I used to. So how did I do it?

One of the things that really intrigued me about Paul’s diet was that he said that all he did to lose the weight was change what he ate. Nothing else. This was a pretty enticing idea. I never thought I’d be able to give up things like pasta and bread, but if it really meant your weight would just start decreasing with no further effort, it almost seems silly not to give it a try… So I gave it a try. I cut out the major sources of carbohydrate in my diet (potato, pasta, rice, bread, snacks) and indeed my weight, as you can see, immediately started dropping. If you have the weight to lose, the results are pretty dramatic, and much faster than you’d expect. At this point, I was doing no extra exercise, and although I was snacking much less, my portion sizes for meals were unchanged.

I found some nice alternatives for the things I missed. Pasta and rice are quite nicely replaced by steamed, crushed cauliflower. Steamed aubergine makes a nice filler too. For potato, sweet potato is pretty much just better as far as I’m concerned, and celeriac is also a nice alternative. I never really found an alternative to bread, so I still have breakfast with my parents on the weekends and eat my mum’s home-made wholemeal bread. In moderation, I’ve not found it to interrupt my weight-loss at all. I still have porridge for breakfast, and I’m not strict about keeping to any particular amount of carbs in a day. If I gain weight on a day, I just try to be a bit more careful the next day.

The first two stone just dropped off. I did no extra exercise, I didn’t count my calories. The only thing I did was avoid high-carbohydrate foods and weigh myself every day. There seem to be mixed opinions on weigh-in frequency, but being able to see the numbers go up and down was pretty significant motivation for me. Your mileage, as ever, may vary. After getting to about 12 stone, my fellow London Mozillian Jonathan Watt challenged me to beat him to 70kg. I’m very grateful for that, as at that time I was pretty happy with 12 stone (it’s in the ideal range, and the change felt and looked noticeable to me).

The next stone, though still reasonably easy, didn’t come without effort. However, this increased effort was enjoyable, and is now a part of my life (and I intend that to continue). It turns out that carrying 12kg less weight while climbing makes it much more enjoyable, so I was able to climb longer and more frequently. Similarly, I started running with another colleague, Ryan Watson. The weight continued to come off, if anything at an increased rate now, and I was reaching weights I hadn’t been at since my early teens.

The last few pounds have been difficult though. I wanted to hit 10 stone 12 to say that I lost 3 stone exactly (perhaps a slightly obsessive compulsion with whole numbers), but realistically, I think 70kg/~11 stone is the weight I’ll maintain. I’m now training for strength to change my body composition to something that will more easily allow me to maintain this weight.

A lot of people helped me to get this far. Ryan was especially encouraging and helped me train when I started to up the exercise. Without Jonathan’s competition, I may have settled for a weight that was still well above the weight I should be. And last but not least, I have to thank my wonderful partner Laura for accommodating my new diet and helping me find lots of tasty things to eat. Not to mention my wonderful friends, family and colleagues, all of whom have been terrifically encouraging and supportive. Thanks, everyone 🙂

Writing and deploying a small Firefox OS application

For the last week I’ve been using a Geeksphone Keon as my only phone. There’s been no cheating here, I don’t have a backup Android phone and I’ve not taken to carrying around a tablet everywhere I go (though its use has increased at home slightly…) On the whole, the experience has been positive. Considering how entrenched I was in Android applications and Google services, it’s been surprisingly easy to make the switch. I would recommend anyone getting the Geeksphones to build their own OS images though, the shipped images are pretty poor.

Among the many things I missed (Spotify is number 1 in that list btw), I could have done with a countdown timer. Contrary to what the interfaces of most Android timer apps would have you believe, it’s not rocket-science to write a usable timer, so I figured this would be a decent entry-point into writing mobile web applications. For the most part, this would just be your average web-page, but I did want it to feel ‘native’, so I started looking at the new building blocks site that documents the Firefox OS shared resources. I had elaborate plans for tabs and headers and such, but it turns out all I really needed was the button style. The site doesn’t make it hugely clear that you’ll actually need to check out the shared resources yourself, which can be found on GitHub.

Writing the app was easy, except perhaps for getting things to align vertically (for which I used the nested div/”display: table-cell; vertical-align: middle;” trick); it got a bit harder when I wanted to use some of the new APIs. In particular, I wanted the timer to continue to work when the app is closed, and I wanted it to alert you only when you aren’t looking at it. This required use of the Alarm API, the Notifications API and the Page Visibility API.
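That centring trick, spelled out (class names are my own):

```css
/* The nested-div vertical centring trick: the outer element behaves as a
   table, the inner as a cell, and table cells honour vertical-align. */
.outer {
  display: table;
  width: 100%;
  height: 100%;
}
.inner {
  display: table-cell;
  vertical-align: middle; /* note: the property is vertical-align */
}
```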

The page visibility API was pretty self-explanatory, and I had no issues using it. I use this to know when the app is put into the background (which, handily, always seems to happen before it closes). When the page gets hidden, I use the Alarm API to set an alarm for when the current timer is due to elapse, to wake up the application. I found this particularly hard to use as the documentation is very poor (though it turns out the code you need is quite short). Finally, I use the Notifications API to spawn a notification if the app isn’t visible when the timer elapses. Notifications were reasonably easy to use, but I’ve yet to figure out how to map clicking on a notification to raising my application – I don’t really know what I’m doing wrong here, any help is appreciated! Update: Thanks to Thanos Lefteris in the comments below, this now works – activating the notification will bring you back to the app.
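To sketch how the three APIs fit together: the function names are my own, and `doc`, `alarms` and `notify` are injected stand-ins for `document`, `navigator.mozAlarms` and the Notification constructor, so the logic can be illustrated off-device.

```javascript
// Sketch of the glue between Page Visibility, Alarm and Notifications.
// `deadline` (ms since epoch) is when the current timer is due to elapse.
function onVisibilityChange(doc, alarms, deadline) {
  if (doc.hidden && deadline > Date.now()) {
    // Backgrounded with a timer pending: ask the Alarm API to wake the app
    // at an absolute time ('ignoreTimezone') when the timer elapses.
    return alarms.add(new Date(deadline), 'ignoreTimezone', { reason: 'timer' });
  }
  return null;
}

function onTimerElapsed(doc, notify) {
  // Only spawn a notification when the user isn't already looking at the app.
  if (doc.hidden) notify('Time is up!');
}
```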

The last hurdle was deploying to an actual device, as opposed to the simulator. Apparently the simulator has a deploy-to-device feature, but this wasn’t appearing for me and it would mean having to fire up my Linux VM (I have my reasons) anyway, as there are currently no Windows drivers for the Geeksphone devices available. I obviously don’t want to submit this to the Firefox marketplace yet, as I’ve barely tested it. I have my own VPS, so ideally I could just upload the app to a directory, add a meta tag in the header and try it out on the device, but unfortunately it isn’t as easy as that.

Getting it to work well as a web-page is a good first step, and to do that you’ll want to add a meta viewport tag. Getting the app to install itself from that page was easy to do, but difficult to find out about. I think the process for this is harder than it needs to be and quite poorly documented, but basically, you want this in your app:
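A sketch of the kind of install hook that goes there, using the `navigator.mozApps` install flow; the function wrapper and manifest URL are illustrative, with `mozApps` injected so the logic can run off-device:

```javascript
// Sketch: trigger installation of a hosted app from its own page.
// `mozApps` is navigator.mozApps on a device; the manifest URL must be absolute.
function installApp(mozApps, manifestUrl) {
  var request = mozApps.install(manifestUrl);
  request.onerror = function () {
    // The user declined, or the manifest couldn't be fetched or validated.
  };
  return request;
}
```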

And you want all paths in your manifest and appcache manifest to be absolute (you can assume the host, but you can’t have paths relative to the directory the files are in). This last part makes deployment very awkward, assuming you don’t want to have all of your app assets in the root directory of your server and you don’t want to set up vhosts for every app. You also need to make sure your server has the webapp mimetype set up. Mozilla has a great online app validation tool that can help you debug problems in this process.
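For an Apache server, the mimetype part is a one-liner (assuming your manifest file ends in .webapp):

```apache
# Serve Open Web App manifests with the type Firefox expects.
AddType application/x-web-app-manifest+json .webapp
```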

Timer app screenshot
And we’re done! (Ctrl+Shift+M to toggle responsive design mode in Firefox)

Visiting the page will offer to install the app for you on a device that supports app installation (i.e. a Firefox OS device). Not bad for a night’s work! Feel free to laugh at my n00b source and tell me how terrible it is in the comments 🙂

Tips for smooth scrolling web pages (EdgeConf follow-up)

I’m starting to type this up as EdgeConf draws to a close. I spoke on the performance panel, with Shane O’Sullivan, Rowan Beentje and Pavel Feldman, moderated by Matt Delaney, and tried to bring a platform perspective to the affair. I found the panel very interesting, and it reminded me how little I know about the high-level side of web development. Similarly, I think it also highlighted how little consideration there usually is for the platform when developing for the web. On the whole, I think that’s a good thing (platform details shouldn’t be important, and they have a habit of changing), but a little platform knowledge can help in structuring things in a way that will be fast today, and as long as it isn’t too much of a departure from your design, it doesn’t hurt to think about it. At one point in the panel, I listed a few things that are particularly slow from a platform perspective today. While none of these are intractable problems, they may not be fixed in the near future, and feedback indicated that they aren’t all common knowledge. So what follows are a few things to avoid, and a few things to do, that will help make your pages scroll smoothly on both desktop and mobile. I’m going to try not to write lies, but I hope if I get anything slightly or totally wrong, that people will correct me in the comments and I can update the post accordingly 🙂

Avoid reflow

When I mentioned this at the conference, I prefaced it with a quick explanation of how rendering a web page works. It’s probably worth reiterating this. After network and such have happened and the DOM tree has been created, this tree gets translated into what we call the frame tree. This tree is similar to the DOM tree, but it’s structured in a way that better represents how the page will be drawn. This tree is then iterated over and the size and position of these frames are calculated. The act of calculating these positions and sizes is referred to as reflow. Once reflow is done, we translate the frame tree into a display list (other engines may skip this step, but it’s unimportant), then we draw the display list into layers. Where possible, we keep layers around and only redraw parts that have changed/newly become visible.

Reflow itself is actually quite fast, or at least it can be, but it often forces things to be redrawn (and drawing is often slow). Reflow happens when the size or position of things changes in such a way that dependent positions and sizes of elements need to be recalculated. Reflow usually isn’t something that will happen to the entire page at once, but incautious structuring of the page can result in this. There are quite a few things you can do to help avoid large reflows: set widths and heights to absolute values where possible, don’t reposition or resize things, and don’t unnecessarily change the style of things. Obviously these things can’t always be avoided, but it’s worth thinking about whether there are other ways to achieve the result you want that don’t force reflow. If the positions of things must change, consider using a CSS translate transform, for example – transforms don’t usually cause reflow.
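As a minimal sketch of that last suggestion (the element and coordinates here are hypothetical), moving something with a transform rather than top/left looks like this:

```javascript
// Move an element without triggering reflow by setting a CSS translate
// transform instead of the top/left style properties. Transforms are
// applied late in the pipeline, so changing them usually avoids reflow.
function moveWithTransform(el, x, y) {
  el.style.transform = 'translate(' + x + 'px, ' + y + 'px)';
}

// Works the same on a real element, e.g. document.getElementById('box');
// a bare object with a style property is used here so the sketch runs anywhere.
var box = { style: {} };
moveWithTransform(box, 100, 50);
console.log(box.style.transform); // translate(100px, 50px)
```

Older engines may want a vendor-prefixed property (e.g. -moz-transform), but the principle is the same.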

If you absolutely have to do something that will trigger reflow, it’s important to be careful how you access properties in JavaScript. Reflow will be delayed as long as possible, so that if multiple things happen in quick succession that would cause reflow, only a single reflow actually needs to happen. If you access a property that relies on the frame tree being up to date though, this will force reflow. Practically, it’s worth trying to batch DOM changes and style changes, and to make sure that any property reads happen outside of these blocks. Interleaving reads and writes can end up forcing multiple reflows per page-draw, and the cost of reflow can add up quickly.
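To sketch the batching advice (the element list and the width-doubling are hypothetical examples; offsetWidth stands in for any layout-dependent read):

```javascript
// Bad: after the first write, each subsequent offsetWidth read can
// force another reflow, potentially one per iteration.
function doubleWidthsInterleaved(items) {
  items.forEach(function (item) {
    item.style.width = (item.offsetWidth * 2) + 'px'; // read then write
  });
}

// Better: all reads first, then all writes. At most one reflow is
// forced, by the first read.
function doubleWidthsBatched(items) {
  var widths = items.map(function (item) { return item.offsetWidth; });
  items.forEach(function (item, i) {
    item.style.width = (widths[i] * 2) + 'px';
  });
}
```

Both produce the same styles; the difference is purely in how many times layout has to be flushed.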

Avoid drawing

This sounds silly, but you really should make the browser do as little drawing as possible. Most of the time, drawing will happen on reflow, when new content appears on the screen, and when style changes. Some practical advice: avoid making DOM changes near the root of the tree, avoid changing the size of things, and avoid changing text (text drawing is especially slow). While repositioning doesn’t always force redrawing, you can ensure it doesn’t by repositioning using CSS translate transforms instead of the top/left/bottom/right style properties. Especially avoid causing redraws while the user is scrolling. Browsers try their hardest to keep up the refresh rate while scrolling, but there are limits on memory bandwidth (especially on mobile), so every little helps.

Thinking of things that are slow to draw, radial gradients are very slow. This is really just a bug that we should fix, but if you must use CSS radial gradients, try not to change them, and avoid putting them in the background of elements that change frequently.

Avoid unnecessary layers

One of the reasons scrolling can be fast at all on mobile is that we reduce the page to a series of layers, and we keep redrawing on these layers down to a minimum. When we need to redraw the page, we just paste these layers that have already been drawn. While the GPU is pretty great at this, there are limits. Specifically, there is a limit to the amount of pixels that can be drawn on the screen in a certain time (fill-rate) – when you draw to the same pixel multiple times, this is called overdraw, and counts towards the fill-rate. Having lots of overlapping layers often causes lots of overdraw, and can cause a frame’s maximum fill-rate to be exceeded.

This is all well and good, but how does one avoid layers at a high level? It’s worth being vaguely aware of what causes stacking contexts to be created. While layers usually don’t exactly correspond to stacking contexts, trying to reduce stacking contexts will often end up reducing the number of resulting layers, and is a reasonable exercise. Even simpler, anything with position: fixed, background-attachment: fixed or any kind of CSS transform will likely end up with its own layer, and anything with its own layer will likely force a layer for anything below it and anything above it. So avoid those when they aren’t necessary.
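As a rough illustration of the properties to watch for, the check below flags styles that are likely to force a layer. This is engine-dependent and nowhere near exhaustive, so treat it as a sketch rather than a definitive test:

```javascript
// Heuristic: does this (computed) style likely force the element into
// its own layer? Only the cases named in the text above are checked.
function likelyForcesLayer(style) {
  return !!(style.position === 'fixed' ||
            style.backgroundAttachment === 'fixed' ||
            (style.transform && style.transform !== 'none'));
}

console.log(likelyForcesLayer({ position: 'fixed' }));             // true
console.log(likelyForcesLayer({ transform: 'translateX(10px)' })); // true
console.log(likelyForcesLayer({ position: 'static' }));            // false
```

In a page you would feed it getComputedStyle(element) while auditing which parts of your layout are layer-heavy.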

What can we do at the platform level to mitigate this? Firefox already culls areas of a layer that are made inaccessible by occluding layers (at least to some extent), but this won’t work if any of the layers end up with transforms, or aren’t opaque. We could be smarter with culling for opaque, transformed layers, and we could likely do a better job of determining when a layer is opaque. I’m pretty sure we could be smarter about the culling we already do too.

Avoid blending

Another thing that slows down drawing is blending. This is when the visual result of an operation relies on what’s already there. This requires the GPU (or CPU) to read what’s already there and perform a calculation on the result, which is of course slower than just writing directly to the buffer. Blending also doesn’t interact well with deferred rendering GPUs, which are popular on mobile.

This alone isn’t so bad, but combining it with text rendering is particularly costly. If you have text that isn’t on a static, opaque background, that text will be rendered twice (on desktop at least). First we render it on white, then on black, and we use those two buffers to maintain sub-pixel anti-aliasing as the background varies. This is much slower than normal, and also uses more memory. On mobile, we store opaque layers in 16-bit colour, but translucent layers are stored in 32-bit colour, doubling the memory requirement of a non-opaque layer.

We could be smarter about this. At the very least, we could use multi-texturing and store non-opaque layers as 16-bit colour + 8-bit alpha, cutting the memory requirement by a quarter and likely making them faster to draw. Even then, this will still be more expensive than just drawing an opaque layer, so when possible, make sure any text sits on top of a static, opaque background.
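The memory figures above are easy to sanity-check with some back-of-the-envelope arithmetic (the 480×800 screen size is just a hypothetical example):

```javascript
// Layer memory cost: 16-bit opaque = 2 bytes/pixel, 32-bit translucent
// = 4 bytes/pixel, and the proposed 16-bit colour + 8-bit alpha scheme
// = 3 bytes/pixel.
function layerBytes(width, height, bytesPerPixel) {
  return width * height * bytesPerPixel;
}

var w = 480, h = 800;
console.log(layerBytes(w, h, 2)); // opaque:       768000 bytes
console.log(layerBytes(w, h, 4)); // translucent: 1536000 bytes
console.log(layerBytes(w, h, 3)); // 16+8 scheme: 1152000 bytes
```

The 16+8 scheme saves 384000 bytes versus the 32-bit layer here, which is exactly the quarter mentioned above.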

Avoid overflow scrolling

The way we make scrolling fast on mobile, and I believe the way it’s fast in other browsers too, is that we render a much larger area than is visible on the screen and we do that asynchronously to the user scrolling. This works as the relationship between time and size of drawing is not linear (on the whole, the more you draw, the cheaper it is per pixel). We only do this for the content document, however (not strictly true, I think there are situations where whole-page scrollable elements that aren’t the body can take advantage of this, but it’s best not to rely on that). This means that any element that isn’t the body that is scrollable can’t take advantage of this, and will redraw synchronously with scrolling. For small, simple elements, this doesn’t tend to be a problem, but if your entire page is in an iframe that covers most or all of the viewport, scrolling performance will likely suffer.

On desktop, currently, drawing is synchronous and we don’t buffer area around the page like on mobile, so this advice doesn’t apply there. But on mobile, do your best to avoid using iframes or having elements that have overflow that aren’t the body. If you’re using overflow to achieve a two-panel layout, or something like this, consider using position:fixed and margins instead. If both panels must scroll, consider making the largest panel the body and using overflow scrolling in the smaller one.
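A sketch of that two-panel suggestion, expressed as style assignments (the element references and the 200px sidebar width are hypothetical):

```javascript
// Two-panel layout without a scrollable overflow container for the
// main content: fix the small panel, keep the large panel in the body
// (which gets asynchronous scrolling) and push it clear with a margin.
function applyTwoPanelLayout(sidebar, content) {
  // Small panel: fixed to the left edge. Give it overflow scrolling
  // only if it genuinely needs to scroll independently.
  sidebar.style.position = 'fixed';
  sidebar.style.top = '0';
  sidebar.style.bottom = '0';
  sidebar.style.left = '0';
  sidebar.style.width = '200px';

  // Large panel: normal body flow, offset so the sidebar doesn't cover it.
  content.style.marginLeft = '200px';
}
```

In a real page you would pass real elements; the same styles could of course live in a stylesheet instead.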

I hope we’ll do something clever to fix this sometime, it’s been at the back of my mind for quite a while, but I don’t think scrolling on sub-elements of the page can ever really be as good as the body without considerable memory cost.

Take advantage of the platform

This post sounds all doom and gloom, but I’m purposefully highlighting what we aren’t yet good at. There are a lot of things we are good at (or reasonable, at least), and having a fast page need not necessarily be viewed as lots of things to avoid, so much as lots of things to do.

Although computing power continues to increase, the trend now is to bolt on more cores and more hardware threads, while the speed increase of individual cores tends to be more modest. This affects how we improve performance at the application level. Performance increases, more often than not, come from being smarter about when we do work and from doing things concurrently, rather than just finding faster algorithms and micro-optimising.

This relates to the asynchronous scrolling mentioned above, where we do the same amount of work, but at a more opportune time, and in a way that better takes advantage of the resources available. There are other optimisations that are similar with regards to video decoding/drawing, CSS animations/transitions and WebGL buffer swapping. A frequently occurring question at EdgeConf was whether it would be sensible to add ‘hints’, or expose more internals to web developers so that they can instrument pages to provide the best performance. On the whole, hints are a bad idea, as they expose platform details that are liable to change or be obsoleted, but I think a lot of control is already given by current standards.

On a practical level, take advantage of CSS animations and transitions instead of doing JavaScript property animation, take advantage of requestAnimationFrame instead of setTimeout, and if you find you need even more control, why not drop down to raw GL via WebGL, or use Canvas?
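As a sketch of the requestAnimationFrame suggestion: the raf parameter below is injectable only so the example can run outside a browser; in a real page you would just use requestAnimationFrame directly.

```javascript
// Drive an animation from requestAnimationFrame rather than setTimeout,
// so each step lines up with the browser's refresh. 'step' receives a
// progress value t from 0 to 1 over 'duration' milliseconds.
function animate(step, duration, raf) {
  raf = raf || requestAnimationFrame;
  var start = null;
  raf(function tick(now) {
    if (start === null) start = now;
    var t = Math.min((now - start) / duration, 1);
    step(t);
    if (t < 1) raf(tick); // schedule the next frame
  });
}

// Demonstration with a fake raf that advances a clock 10ms per frame:
var seen = [];
var clock = 0;
animate(function (t) { seen.push(t); }, 30,
        function (cb) { clock += 10; cb(clock); });
console.log(seen.length, seen[0], seen[seen.length - 1]); // 4 0 1
```

In a page, animate(updateMyElement, 300) would step the animation in lockstep with painting, instead of at whatever cadence setTimeout happens to fire.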

I hope some of this is useful to someone. I’ll try to write similar posts if I find out more, or there are significant platform changes in the future. I deliberately haven’t mentioned profiling tools, as there are people far more qualified to write about them than I am. That said, there’s a wiki page about the built-in Firefox profiler, some nice documentation on Opera’s debugging tools and Chrome’s tools look really great too.

Firefox for Android in 2013

Lucas Rocha and I gave a talk at FOSDEM over the weekend on Firefox for Android. It went ok, I think we could have rehearsed it a bit better, but it was generally well-received and surprisingly well-attended! I’m sure Lucas will have the slides up soon too. If you were unfortunate enough not to have attended FOSDEM, and doubly unfortunate that you missed our talk (guffaw), we’ll be reiterating it with a bit more detail in the London Mozilla space on February 22nd. We’ll do our best to answer any questions you have about Firefox for Android, but also anything Mozilla-related. If you’re interested in FirefoxOS, there may be a couple of phones knocking about too. Do come along, we’re looking forward to seeing you 🙂

p.s. I’ll be talking on a performance panel at EdgeConf this Saturday. Though it’s fully booked, I think tickets occasionally become available again, so might be worth keeping an eye on. They’ll be much cleverer people than me knocking about, but I’ll be doing my best to answer your platform performance related questions.

Progressive Tile Rendering

So back from layout into graphics again! For the last few weeks, I’ve been working with Benoit Girard on getting progressive tile rendering finished and turned on by default in Firefox for Android. The results so far are very promising! First, a bit of background (feel free to skip to the end if you just want the results).

You may be aware that we use a multi-threaded application model for Firefox for Android. The UI runs in one thread and Gecko, which does the downloading and rendering of the page, runs in another. This is a bit of a simplification, but for all intents and purposes, that’s how it works. We do this so that we can maintain interactive performance – something of paramount importance with a touch-screen. We render a larger area than you see on the screen, so that when you scroll, we can respond immediately without having to wait for Gecko to render more. We try to tell Gecko to render the most relevant area next and we hope that it returns in time so that the appearance is seamless.

There are two problems with this as it stands, though. If the work takes too long, you’ll be staring at a blank area (well, this isn’t quite true either, we do a low-resolution render of the entire page and use that as a backing in this worst-case scenario – but that often doesn’t work quite right and is a performance issue in and of itself…) The second problem is that if a page is made up of many layers, or updates large parts of itself as you scroll, uploading that work to the graphics unit can take a significant amount of time. During this time, the page will appear to ‘hang’, as unfortunately, you can’t upload data to the GPU and continue to use it to draw things (this isn’t true in every single case, but again, for our purposes, it is).

Progressive rendering tries to spread this load by breaking that work up into several smaller tiles and processing them one by one, where appropriate. This helps us mitigate the pauses that can happen on particularly complex/animated pages. Alongside this work, we also added the ability for a render to be cancelled. This is good for the situation where a page takes so long to render that, by the time it’s finished, what it rendered is no longer useful. Currently, because a render is done all at once, if it takes too long, we can waste precious cycles on irrelevant data. As well as splitting up this work and allowing it to be cancelled, we also try to do it in the most intelligent order – render areas that the user can see that were previously blank first, and if that area intersects with more than one tile, make sure to do it in the order that best maintains visual coherence.

A cherry on the top (which is still very much work-in-progress, but I hope to complete it soon), is that splitting this work up into tiles makes it easy to apply nice transitions to make the pathological cases not look so bad. With that said, how’s about some video evidence? Here’s an almost-Nightly (an extra patch or two that haven’t quite hit central), with the screenshot layer disabled so you can see what can happen in a pathological case:

And here’s the same code, with progressive tile rendering turned on and a work-in-progress fading patch applied.

This page is a particularly slow page to render due to the large radial gradient in the background (another issue which will eventually be fixed), so it helps to highlight how this work can help. For a fast-to-render page that we have no problems with, this work doesn’t have such an obvious effect (though scrolling will still be smoother). I hope the results speak for themselves 🙂


Eurogamer Expo 2012

One of the perks of being a Virgin Media customer (beyond getting my name wrong and constant up-sell harassment) is that I got cheap, early-access Eurogamer Expo tickets! This was my first Eurogamer Expo, though I’m no stranger to ECTS or ATEI/EAG. The setup was quite good – perhaps a bit smaller than I expected, but nice to see a games show that’s actually aimed at gamers. I was always amused at the hoops you had to jump through to get tickets for ECTS and ATEI; more so when you actually visit the events and realise the majority of people there are gamers who have jumped through those same hoops. Good to see that the games industry, finally, after several years, got wise.

There was a fair amount on show. Lots of soon and not-so-soon to be released games, the WiiU, a surprising and pleasing amount of indie content and various bits and bobs. The WiiU was certainly the main attraction, but was managed terribly and was extremely disappointing. While most of the company reps were great and very helpful, a couple of Nintendo’s were oddly aggressive and patronising. I don’t think anyone at Eurogamer needs to be told how to play WiiU mini-games, or have buttons on their controllers pressed for them. The decision to dedicate three entire kiosks in the WiiU section to a video panorama viewer was baffling too. It’s almost as if no one at Nintendo has picked up a smart-phone in the last 5 years or so – this isn’t astounding stuff. Wonderful 101 seemed quite fun, but not as fun as I was expecting. The rest of the WiiU content was very disappointing. Pikmin 3 looking bland and boring was especially upsetting. It’s ironic that playing on the console has secured my decision not to buy it on release. I could easily write about how disappointing the WiiU was for a lot longer, but I just don’t care enough.

What was pleasantly surprising was how good Sony’s presence and content was. Reps were polite and helpful, not getting in the way where they weren’t needed and turning up when they were. Much like a good waiter. They had plenty of kiosks and space, and queues were minimal (not due to lack of interest, mind). Playstation All-stars Battle Royale, though clearly a Smash Bros. rip-off, is actually a very good one. We spent quite a while on it, and it was very enjoyable (possibly more so than Smash Bros. Brawl, but it doesn’t even approach the heights of Melee). The cross-play was especially impressive too, mirroring almost the exact same game frame-for-frame with only minor graphical omissions. Stand-out game of the show had to be When Vikings Attack, though. Incredibly simple concept, but perfect execution and impressive cross-play again. The only disappointment was that it doesn’t have a confirmed release date, but Clever Beans say it will be on PSN before the end of the year. This is definitely day-one purchase material.

Carmageddon definitely deserves a mention. It’s just as much fun as it was all those years ago, and the tablet/smartphone port has been handled perfectly. A shame that there was no demo or footage for the Carmageddon Reincarnation project, but hopefully it made a few more people aware. Also worth mentioning was God of War: Ascension, which although is more of the same, it’s a brilliant same that it’s more of. The multiplayer worked surprisingly well too, though a LAN setup is always going to be more fun than online. There were a few things that I’d have liked to have tried, but queues prevented me – nothing I would deem queue-worthy though. Hitman looked quite impressive, but the whole misogyny thing has put me off. Same goes for Tomb Raider. Dishonoured looked interesting, but not so interesting to queue for. Halo 4 looked like more of the same, though the considerable graphical upgrade certainly doesn’t hurt. Dead or Alive 5 was quite fun, and pleasing to see that they’ve returned to the mechanics of Dead or Alive 2 (clearly the series high). Disappointing amount of guys picking bikini-clad women to fight and leaving the camera aimed at crotch/chest areas; we evened the score a bit by playing as ridiculous-looking guys and aiming at the groin. Yes, I am 12. Disappointed to see that they’ve not included Zack’s weird sports-bra costume. The indie games arcade section is probably worth mentioning in that almost everything in it was terrible and just trading on a quirky look with zero gameplay to back it up. I conclude that there’s still plenty of room for ideas and innovation in the British indie games community.

All in all, a pretty fun event. Slightly disappointing that the industry still hasn’t moved on from the whole booth-babe thing, but it’s definitely far less prevalent than it used to be, so that leaves me with some hope. The graphical standard of console games is astounding, especially given there hasn’t been a hardware refresh in over 5 years. I’ll definitely be returning next year.


position:fixed in Firefox Mobile

It seems, somehow, for the last few months, I’ve been working on layout. I’m not quite sure how it happened, as anyone who’s spoken to me or follows me on Twitter would know that I have a very healthy fear of the Gecko layout code. I still have that fear, but I’d like to think now that it’s coupled with a tiny amount of understanding; understanding that has, dare I say it, even let me have fun while working on this code!

Those of you that have used browsers on mobile phones (all of you?), especially not the very latest phones, may be familiar with an annoying problem. That is, elements that have position:fixed in their style tend to jump around the place like they’ve had too much coffee. You commonly see this on sites that have a persistent bar at the top or bottom of the page, or floating confirmation notifications, things like this. Brad Frost wrote about this far more eloquently than I could here. This has always annoyed me, especially after learning more about how browsers work. Certainly in Gecko, we have all of the context we need for this not to happen. It also ended up that this problem had been worked on long before I even joined Mozilla last year, so that made it doubly surprising that we suffered from this problem in all releases of Firefox Mobile.

When I first came across this last year, I discovered that the support was already there in the old Firefox Mobile, but disabled by default due to it causing test failures. I was working on other things then, and wasn’t at all acquainted with layout code, so I let it be. Revisiting it for the new, native Firefox Mobile though, these test failures didn’t exist anymore. Enabling this basic support that would let position:fixed elements move and zoom with user input correctly was not too big a deal – just flip an environment variable and write a small amount of support code. This landed in Firefox 15 and is tracked in Bug 607417. Just this is enough for a lot of mobile sites to start using position:fixed (I’m looking at you, Twitter and Facebook!).

This wasn’t enough for me though. Around this time, Android 3.x (Honeycomb) tablets had been around for quite a while and the Galaxy Nexus with Android 4.0 (Ice-cream Sandwich) had just come out, both with even better support for position:fixed. Not to mention the iPhone, which has excellent support. A problem with our implementation in Firefox 15, is that anything fixed to the bottom or right of the screen, or anything that doesn’t anchor to the top-left in any way, may become inaccessible after zooming in. In recent versions of the Android stock browser, not only do these remain accessible, but they zoom very smoothly too. Not wanting to be one-upped by what could be considered our main competition, I started to work on more comprehensive position:fixed support. This work was tracked in Bug 758620.

When zooming in our browser, we don’t change how the page is laid out, but fixed position elements are still rendered relative to the viewport. What we want (at least, for now) is for fixed position elements to lay out with respect to this viewport so that they always remain visible, and to transition smoothly between these states. To achieve this, I changed layout so that fixed position elements are laid out to what we call the scroll-port. When we zoom in, we change the scroll-port (otherwise you wouldn’t be able to scroll to the bottom-right of the page), but this only changes scrolling behaviour and nothing else. This change also made it so that fixed-position children of the document would be re-laid out when the scroll-port changed. This fixed the inaccessibility problem, but left nasty-looking transitions when zooming in.

Fixing the transitions was quite a bit more involved, and led me down a long path of causing and fixing various layout bugs. When a page is rendered, the DOM tree is parsed into a frame tree, which better represents the layout of the page. This frame tree is then parsed into a display-list, which represents how to draw the page. This display-list is then optimised and parsed/drawn into a layer tree, which is the final representation before we composite it. There’s cleverness to make sure that layers aren’t recreated unnecessarily, but that’s another subject for another time. We wanted to be able to annotate the layer tree so that when compositing, we have enough information to determine how to place fixed-position layers when zooming. This involved creating a new display-list item with the information about how the element is fixed (to the top? left? right? bottom?), which would also guarantee that the items representing this element would end up on their own layer. Once this was done, code in the compositor was added to leverage this information and draw the layers in the right place.

This is an area that a lot of browsers have difficulty with, so it was a fun problem to work on. Try out a couple of my testcases if you’re interested, they expose how different browsers handle this situation, and in the case of a few of them, some bugs 🙂 We’re still not perfect, but we’re better than we were before – and to my feeling, we’re adequate now. This work landed in Firefox 16.

So is there work left to do? Well, unfortunately, yes. I’ve just finished off support for fixed backgrounds and backgrounds with multiple fixed/non-fixed layers, and this will arrive in Firefox 18. This is tracked in Bug 786502. I also think that the best behaviour would be for fixed position elements to lay out to whichever is largest of the CSS viewport or the scroll-port, and for scrolling to be within the CSS viewport, pushing the edges when you reach them. I’m told this is what happens in IE10 on Windows 8, and it’s similar to (but slightly better than) what gets done in Safari on iOS. I think it’s about time for a break from this particular feature for me, though.


How can Mozilla and Gnome work together?

I’ve been pretty lax on blogging lately, but here’s something that’s troubling me. I haven’t really done any work directly related to Gnome since I started working at Mozilla. Ends up writing browsers is pretty hard, and in any recreational programming time I get, I don’t particularly feel inclined to work on Gnome. I have, however, been attending Guadec this week. I haven’t missed one since 2006 and I don’t intend to. What’s troubling me is that although Mozilla were kind enough to sponsor my presence here (we’re hiring!), Gnome doesn’t seem to be hugely relevant to us anymore. I’d love to be corrected of course, but judging by the amount of effort we’re putting into the Gtk+3 port, theming and other Linux-related bugs, I’m pretty sure this is the case.

I have some ideas about this, but I’d like to be brief. For now. So, my simple question is: how can Mozilla and Gnome work better together?

[Edit]: Seems my blog’s commenting form is broken. Until it’s fixed, feel free to mail me your comments, I’d love to hear them! (address on the side of the page)

[Edit2]: Comments appear to be working again, but if they fail, do mail me!