Accelerated layer-rendering, and learning by (some) success

Perhaps the title of my last blog post seemed a little negative, so I wanted to write on this topic again, on some of the things I’ve learnt since then and some of the successes I’ve had too. Failure was probably too strong a word, but better to be too negative than too positive about these things, especially when surrounded by the amazing talent there is at Mozilla…

I finished off previously by saying there are other, easier problems to solve, and I think I’m making some decent progress in those areas. I described before how shadow layers work, and how the chrome process can use GL-accelerated layer compositing, but the content process is always restricted to basic (unaccelerated) layers. This introduces the bottleneck of getting the image data from system memory to video memory. I was probably over-zealous in my previous approach. While asynchronous updates would be great, we could try to minimise those updates first. This is almost certainly something we should be doing anyway.

One of the ways we do this is by something that (confusingly) gets called ‘rotation’ in the source code. I mentioned scrolling before. To reiterate, we render to a larger buffer than is visible on the screen and, when panning, we move that buffer and ask the content process to re-render the newly exposed area, then update again when that’s finished. Hopefully that happens quickly, but when it doesn’t, you may see some checker-boarding. When the content process re-renders, it theoretically only needs to re-render the newly exposed pixels, as it already has the rest of the page rendered. This could involve copying all the existing pixels upwards (assuming we’re scrolling downwards) and then rendering into the newly exposed area, but instead of doing that, we say ‘the origin of this buffer is now at these coordinates’ and treat the buffer as if it wrapped around (thus ‘rotation’).
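To make the idea concrete, here’s a minimal sketch of the wrap-around arithmetic. It’s purely illustrative (my own toy types, nothing to do with the actual layers code), but it shows why rotating the buffer is so much cheaper than copying it:

```cpp
// Toy sketch of the 'rotation' idea, not the Gecko implementation.
// The buffer keeps a rotation offset instead of physically moving pixels:
// logical coordinate (x, y) lives at ((x + rotX) % width, (y + rotY) % height).

#include <cstdint>
#include <vector>

struct RotatedBuffer {
  int width, height;
  int rotX = 0, rotY = 0;          // where the logical origin sits in storage
  std::vector<uint32_t> pixels;    // width * height pixels, row-major

  RotatedBuffer(int w, int h) : width(w), height(h), pixels(w * h) {}

  uint32_t& At(int x, int y) {
    // Wrap logical coordinates around the rotation origin.
    int bx = (x + rotX) % width;
    int by = (y + rotY) % height;
    return pixels[by * width + bx];
  }

  // Scroll down by dy pixels: keep everything, just move the origin.
  // Only the newly exposed band at the bottom needs repainting.
  void ScrollDown(int dy) {
    rotY = (rotY + dy) % height;
    // Caller repaints logical rows [height - dy, height) via At().
  }
};
```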

There are problems with this, however. For example, if you were to zoom into a rotated buffer whose rotation coordinates are visible on the screen, you may see a ‘seam’ at that position. Similarly, when re-using the existing pixels in the buffer, if the new scroll coordinates mean that the sample grid is no longer aligned with the previous sample grid, you may see odd artifacts on scaled images and on text that was cut off in a previous render. The following example demonstrates this:


The results of a misaligned sample grid

On the left is the original image (a checkerboard, purposefully chosen as it’s sort of a worst-case scenario), and on the right, the same image with a 1-pixel border added on the left and upper edges. They both have the same bilinear scale applied to them, and the border is then cropped from the right image. You can immediately see that the result is not the same image, and putting the two together draws extra attention to this. This is what happens when you try to combine the results of two sampling image operations that have misaligned sample grids.
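If you want to convince yourself of this, here’s a toy calculation of where a bilinear filter ends up sampling in each case. It assumes the common pixel-centre convention (destination pixel d samples source position (d + 0.5) / scale - 0.5) and says nothing about what cairo or the rest of the stack actually does:

```cpp
// Toy demonstration of the sample-grid mismatch described above.
// Assumes the common pixel-centre convention: destination pixel d samples
// source position (d + 0.5) / scale - 0.5.

#include <cmath>
#include <cstdio>

int main() {
  const double scale = 1.37;                 // arbitrary non-integer zoom
  const int crop = (int)std::lround(scale);  // border width after scaling,
                                             // rounded to whole dest pixels

  for (int d = 0; d < 4; ++d) {
    // Sample position when scaling the original image directly.
    double direct = (d + 0.5) / scale - 0.5;
    // Sample position when a 1px border is added, the image is scaled,
    // and 'crop' destination pixels are then cut off the left edge.
    double bordered = (d + crop + 0.5) / scale - 0.5 - 1.0;
    std::printf("dest %d: direct %.3f vs bordered+cropped %.3f (delta %.3f)\n",
                d, direct, bordered, bordered - direct);
  }
  // The non-zero delta means the bilinear filter mixes different source
  // pixels with different weights, so the two renders don't line up.
}
```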

The code makes some attempt at detecting situations where this will happen and marking them, so that in those situations rotation doesn’t occur and the entire buffer is re-rendered. I don’t know what assumptions you can make about cairo’s sampling, or indeed how we drive it to draw pages, but this code is certainly over-zealous in marking when resampling will occur. For example, we zoom pages to fit the width of the screen by default, and any zoom operation marks the surface to say it will be resampled. We also update the content process’s scroll coordinates every 20 pixels. So, for the overwhelmingly common case, we re-render the entire buffer every 20 pixels. On a dual-core (or more) machine, assuming your cores aren’t saturated, this shouldn’t matter so much without hardware acceleration, as the chrome process oughtn’t to be affected by what’s happening in the content process, and when the content process finishes, the chrome process just does a simple page-flip anyway. Unfortunately, this isn’t the case in practice, I guess due to the memory bandwidth required to re-render such a large surface, and perhaps due to non-ideal scheduling (remember, these are guesses; I’ve been terribly lazy when it comes to testing these theories).
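Roughly paraphrased, the decision ends up looking something like the sketch below. The names and structure here are illustrative, not the actual BasicLayers/ThebesLayerBuffer code:

```cpp
// Rough paraphrase of the behaviour described above -- illustrative names,
// not the real ThebesLayerBuffer logic.
enum PaintFlags { PAINT_NONE = 0, PAINT_WILL_RESAMPLE = 1 };

struct BufferDecision {
  bool canRotate;    // reuse existing pixels via rotation
  bool repaintAll;   // throw the buffer away and redraw everything
};

BufferDecision DecideRepaint(bool hasNonIntegerScale, bool mustRetainContent) {
  // Any zoom (e.g. the default fit-to-width zoom) marks the paint as a
  // resample, which currently rules out rotation...
  int flags = hasNonIntegerScale ? PAINT_WILL_RESAMPLE : PAINT_NONE;
  if ((flags & PAINT_WILL_RESAMPLE) && mustRetainContent) {
    // ...so every scroll-offset update repaints the entire buffer.
    return { /*canRotate=*/false, /*repaintAll=*/true };
  }
  return { /*canRotate=*/true, /*repaintAll=*/false };
}
```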

Even more unfortunately, this is a terrible hit for GL-accelerated layers, as we don’t do page-flipping, we do synchronous buffer uploads. Also, the default shadow layer size is 200%×300% of the visible area. So let’s say you have 1280×752 pixels visible (as is the case on a 1280×800 Honeycomb tablet): every 20 pixels you scroll, you’re doing a synchronous 9.4MB upload from system memory to ‘video memory’ (I put this in quotes, as I don’t want to go down the path of explaining shared-memory architecture and how it ends up working on Android; it would be long and I’d probably be wrong). Even worse, most Android devices have a maximum texture size of 2048×2048, so we have to tile these textures, which means splitting up these uploads, with texture binds in between, making it even slower.
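For the curious, this is roughly the shape of that tiled, synchronous upload, sketched against GLES2. It’s illustrative only (my own helper, not Mozilla’s actual texture-upload path), but it shows where the binds and blocking copies come from:

```cpp
// Minimal sketch of a synchronous, tiled texture upload (GLES2-ish).
#include <GLES2/gl2.h>
#include <algorithm>
#include <cstdint>
#include <vector>

const int kMaxTexSize = 2048;  // typical maximum texture dimension

// Upload a bufWidth x bufHeight RGBA buffer into a grid of tile textures.
// Each tile is its own texture, so every tile costs a bind plus a blocking
// glTexSubImage2D copy from system memory.
void UploadTiled(const uint32_t* pixels, int bufWidth, int bufHeight,
                 const std::vector<GLuint>& tileTextures) {
  std::vector<uint32_t> staging;
  size_t tileIndex = 0;
  for (int ty = 0; ty < bufHeight; ty += kMaxTexSize) {
    for (int tx = 0; tx < bufWidth; tx += kMaxTexSize) {
      int w = std::min(kMaxTexSize, bufWidth - tx);
      int h = std::min(kMaxTexSize, bufHeight - ty);

      // Pack the tile's rows contiguously (GLES2 has no UNPACK_ROW_LENGTH).
      staging.resize(size_t(w) * h);
      for (int row = 0; row < h; ++row) {
        const uint32_t* src = pixels + size_t(ty + row) * bufWidth + tx;
        std::copy(src, src + w, staging.data() + size_t(row) * w);
      }

      glBindTexture(GL_TEXTURE_2D, tileTextures[tileIndex++]);
      // Synchronous copy from system memory into this tile's texture.
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                      GL_RGBA, GL_UNSIGNED_BYTE, staging.data());
    }
  }
}
```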

You might then say, “well, at least in some cases you’ll still get the benefit of rotation, right?” Unfortunately, you’d be wrong: we disable buffer rotation entirely on shadow layers. So we have a number of problems here. I discovered this when I noticed how frequently we were doing whole-buffer updates, both with GL and with software rendering. My first thought was to just disable the marking of possibly-resampling surfaces (you can do this by either not setting PAINT_WILL_RESAMPLE in BasicLayers.cpp, or ignoring it in ThebesLayerBuffer.cpp – you’ll notice that it checks MustRetainContent, which returns TRUE for shadow layers). This ought to get you the benefit of rotation, at the expense of some visible artifacting. The bug for enabling buffer rotation is here. But then I ran into this bug, which I fixed. This gets buffer rotation used more frequently with software rendering, but with hardware acceleration things now appear very broken. Doubly so if you use tiles.

So next, I investigated why things were broken when using hardware acceleration. The first step was to alter the desktop build to use tiles. After doing this and picking a small tile size, I noticed that a lot of drawing was broken. This ended up being this bug, which I fixed. Now more of the screen is visible, but rotation is still broken. This turned out to be a two-fold problem: first, we don’t handle uploads to GL layers correctly when there’s rotation, and second, we don’t handle rendering of rotated GL layers when we have tiles. I fixed both of these in this bug.
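For reference, the reason rotation complicates things on the GL side is that a rotated buffer can no longer be drawn (or uploaded) as a single rectangle: it has to be split at the rotation origin into up to four pieces, and with tiles each of those pieces can then straddle several textures. Here’s a sketch of that splitting, using my own illustrative types:

```cpp
// Illustrative sketch: a buffer with rotation origin (rx, ry) must be drawn
// as up to four separate quads.  Not the actual layers code.
#include <utility>
#include <vector>

struct Rect { int x, y, width, height; };

// Compute the (source-in-buffer, destination-on-layer) rect pairs needed to
// draw a width x height buffer whose logical origin sits at (rx, ry).
std::vector<std::pair<Rect, Rect>>
QuadsForRotatedBuffer(int width, int height, int rx, int ry) {
  std::vector<std::pair<Rect, Rect>> quads;
  // Split both axes at the rotation origin: [rx, width) then [0, rx),
  // and likewise vertically.
  const int xSrc[2] = { rx, 0 }, xLen[2] = { width - rx, rx };
  const int ySrc[2] = { ry, 0 }, yLen[2] = { height - ry, ry };
  int dstY = 0;
  for (int j = 0; j < 2; ++j) {
    int dstX = 0;
    for (int i = 0; i < 2; ++i) {
      if (xLen[i] > 0 && yLen[j] > 0) {
        Rect src = { xSrc[i], ySrc[j], xLen[i], yLen[j] };
        Rect dst = { dstX, dstY, xLen[i], yLen[j] };
        quads.push_back({ src, dst });
      }
      dstX += xLen[i];
    }
    dstY += yLen[j];
  }
  return quads;  // 1, 2 or 4 quads, depending on the rotation
}
```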

So, after hacking away the resampling check and fixing the various rendering bugs that rotation then exposes, you can see the benefit it would bring. Unfortunately, there’s still a lot of work to be done, and even when this works perfectly, it isn’t going to benefit all situations (we could still do with a fast path for texture upload on Android, and asynchronous updates or page-flipping). But on some sites (my own, for example, and my favourite test site, engadget.com), the difference is pretty big. So, with four bugs fixed and a deeper knowledge of how layers are put together, I count this one as a success 🙂


Desktop Summit 2011 Thoughts

Another year, another great Desktop Summit. This year I went courtesy of Mozilla, and I’m very grateful they deemed it worthwhile. Having been, though, I think attending events like this is invaluable for open-source hackers. Not only for the chance to present your work and attend talks, not only for the numerous networking opportunities, but purely for the inspiration. Every time I attend Desktop Summit/Guadec/FOSDEM, I never fail to come away with new ideas and fresh inspiration to hopefully do more and better work in the future.

There were some great talks this year, though I won’t go into naming them, as the list would be too long and I’d likely leave some out. One thing that really left an impression, though, was something I think was perhaps missing slightly. On the way to the beach party, I met some Spanish KDE users who were also on their way (and props to the KDE community, by the way, you guys know how to party!). They said it was their first conference like this and they just came to see what it was like. They’d noticed that the summit was very developer-centric, though. This got me thinking: why is this?

Certainly, I wouldn’t argue for a complete change of focus; as a developer, as I mentioned, I find these things invaluable. On the other hand, perhaps we ought to do more to include our users? Guadec does stand for the Gnome *Users* and Developers European Conference, after all. I think we’ve done a lot more to be inclusive of the non-programming parts of Gnome development (UX/visual design, documentation, community management, distribution) over the years, but maybe we need to extend that effort and start targeting users who haven’t yet begun contributing.

With that in mind, I have a few ideas to help include users more in the future:

  • High-level feature talks – We could have talks that deal with new features of applications, the desktop, maybe even libraries, but at a high level. Less jargon, more screenshots, videos and demonstrations. It’s easy for a developer to see what the latest features of Gnome are, as they can just check it out, build it, fix the inevitable problems with that build and try it out. I think it might be interesting and fun to prepare talks that are purely high-level presentations and demonstrations. Off the back of that, you’d perhaps get more people interested in the project.
  • Beginners’ tutorials – We could run beginner classes on using, and perhaps developing, the Gnome desktop environment, aimed at people with little to no experience. This is pretty difficult, of course, but then I’ve never seen the Gnome community fail to rise to and conquer a difficult challenge. Maybe a beginner’s guide to writing a Gnome Shell extension in JavaScript, or to setting up a JavaScript development environment. Perhaps a beginner’s guide to establishing a useful work-flow in Gnome 3 for common tasks like document editing or web browsing. Even more useful, perhaps, a beginner’s guide to filing useful bugs?
  • Install-fests – I ran this idea by Emmanuele Bassi, and he brought up the very good point that it’s hard to find the resources to run things like this. I also get the feeling that until we have more people interested, more general members of the public and novice users, this may be quite poorly attended. Still an idea to think about though.
While these ideas may be of limited use and I might be completely wrong, I do think getting more involved with our users at events like this could benefit us. Taking the main theme of Dirk’s keynote this year, we should probably be making a greater effort to listen to our users.


Desktop Summit 2011

I’ll be in Berlin tomorrow for the Desktop Summit. I’ll be presenting a talk, Clutter Everywhere, with Damien Lespiau and Neil Roberts. It’s right after Emmanuele Bassi’s talk, Heart of Blingness: Clutter and GNOME. I highly recommend you attend both talks!

This will be my first Guadec/Desktop Summit as part of Mozilla, so if you have any questions about Firefox Mobile, I’ll do my best to answer them. I hope that I’ll see some of my new colleagues too – do come and say hello, you can’t miss me (I’m the one with the ridiculous hair)!
