More lessons to learn from games development

I’ve been increasingly thinking, as our applications become more and more animated and interactive, that we’re going about certain things in a fundamentally wrong way. Often, with the (unfortunately not quite public, but hopefully coming soon!) project I’m working on, I find that after I’ve crafted a nifty new animation or feature, it works great in a limited test case, but has some worst-case performance scenarios that render it pointless. Of course, you can always go back and optimise things, and it’s an oft-cited mistake to optimise early, or to micro-optimise, but surely there’s a way of going about things, or a way of thinking, that would limit these situations in the first place?

This post follows on somewhat from my section of our talk at last year’s Guadec.

As I highlighted back then, I think that there are very different types of developers, depending on what you’re making. Going by how often I hear people talk about ‘kernel developers’ as a breed of their own, I think I’m safe in saying this is a widely held belief. I identify as an application developer. I occasionally dabble in middleware/infrastructure, and I’d like to think that my background prior to working in open source was more graphics/games development, but it’s ‘app’ development where I think my heart lies. And there’s something wrong with how I (and perhaps others) go about this.

It usually starts with some kind of basic prototype, quickly hacked together (of course, everything starts with an idea, but I’m taking that as given). This prototype may evolve into a final product, or in some cases, it leads to rethinking and re-architecting things and starting again from the ground up on a new, clean code base. I’ve found that I, and others, usually break the app into objects and scenes, where an object is usually either an interactive widget, the interface to some data, or a container for either, and scenes usually represent the interface to a particular task in our application.

We tend to be quite good at abstracting data access so as not to expose the nasty internals of the various backing stores. Sometimes we go a step further, and we make some of these objects or scenes pluggable, and we separate them into interfaces and implementations, allowing them to be easily replaced when the code gets out of control, or becomes obsolete. We tend to be quite good at breaking high-level tasks into logical blocks that can be implemented in bite-size chunks, without things becoming overwhelming.
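To make that concrete, here’s a minimal sketch in plain C with GLib of the kind of interface/implementation split I mean. All of the names (DataStore, FileStore, file_store_new and so on) are made up for illustration; the point is simply that callers only ever see the function pointers, so the backing store can be swapped out without touching them.

```c
/* A minimal sketch of an interface/implementation split.
 * All names here are hypothetical, purely for illustration. */
#include <glib.h>

typedef struct _DataStore DataStore;

/* The 'interface': callers only ever see these function pointers. */
struct _DataStore {
  gchar * (*load_item) (DataStore *store, const gchar *id);
  void    (*free)      (DataStore *store);
};

/* One possible implementation, backed by files on disk. */
typedef struct {
  DataStore parent;
  gchar *base_path;
} FileStore;

static gchar *
file_store_load_item (DataStore *store, const gchar *id)
{
  FileStore *self = (FileStore *) store;
  gchar *path = g_build_filename (self->base_path, id, NULL);
  gchar *contents = NULL;

  g_file_get_contents (path, &contents, NULL, NULL);
  g_free (path);
  return contents;
}

static void
file_store_free (DataStore *store)
{
  FileStore *self = (FileStore *) store;
  g_free (self->base_path);
  g_free (self);
}

DataStore *
file_store_new (const gchar *base_path)
{
  FileStore *self = g_new0 (FileStore, 1);
  self->parent.load_item = file_store_load_item;
  self->parent.free = file_store_free;
  self->base_path = g_strdup (base_path);
  return (DataStore *) self;
}
```

A database-backed or in-memory store only has to fill in the same two function pointers, and the rest of the application never needs to know the difference.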

Unfortunately, we tend to stop at that level of abstraction, and now that computing is becoming more pervasive, good hardware is becoming a commodity and there are many more of us, that isn’t enough anymore. The things I’ve mentioned above can lead to very functional and logical applications, but they don’t guarantee performance. Not being able to guarantee performance means we can’t guarantee consistency or interactivity, which harms usability and the perception of beauty.

This is where games development comes in. Beyond the gameplay and fun elements, games are all about performance. If you don’t guarantee consistency of performance in a game, it becomes frustrating. No one wants to make a frustrating game. This focus on performance seems to be reflected in every aspect of games development, and I think now that applications are becoming more dynamic, it’s something we need to learn from.

Where app developers would break things into logical blocks from the point of view of components of a task, games developers tend to break things into blocks that are not interdependent. They also go much further in breaking things up than app developers tend to go. This makes things much easier to parallelise, an important feature now that even phones are becoming dual-core.
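As a rough illustration of why independent blocks matter, here’s a small GLib sketch that pushes them through a thread pool. The blocks themselves are placeholders; because no block depends on another, they can be handed out to however many cores are available without any locking.

```c
/* A minimal sketch: independent blocks of work pushed to a thread pool. */
#include <glib.h>

static void
process_block (gpointer data, gpointer user_data)
{
  /* Each block touches only its own data, so blocks can run in any
   * order, on any core, without locking. */
  gint index = GPOINTER_TO_INT (data);
  g_print ("processing block %d on thread %p\n", index, g_thread_self ());
}

int
main (void)
{
  /* One worker per core; the work itself here is just a placeholder. */
  GThreadPool *pool = g_thread_pool_new (process_block, NULL,
                                         g_get_num_processors (), FALSE,
                                         NULL);

  for (gint i = 0; i < 16; i++)
    g_thread_pool_push (pool, GINT_TO_POINTER (i + 1), NULL);

  /* Wait for all queued blocks before tearing the pool down. */
  g_thread_pool_free (pool, FALSE, TRUE);
  return 0;
}
```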

Games developers also go a step further with this separation, by splitting large tasks into their component parts. A common problem you see in app development is not spreading load enough. An app developer will think ‘task B requires data A to execute, so create data A’. A games developer may think ‘task B needs to execute in X time, so make sure data A is ready before then’. This sort of thinking much more commonly leads to breaking up tasks over time, and minimises blocking.
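Here’s a toy GLib sketch of that way of thinking, with hypothetical names (prepare_data_a, run_task_b): rather than building the data inside the task that needs it, the preparation is kicked off as soon as we know about the deadline, so the task itself never has to block.

```c
/* A minimal sketch of 'ensure data A is ready before task B runs',
 * rather than 'create data A when task B runs'. Names are hypothetical. */
#include <glib.h>

static gchar *data_a = NULL;   /* the data task B will need */

static gboolean
prepare_data_a (gpointer user_data)
{
  /* Runs as soon as the main loop is idle, long before task B fires. */
  data_a = g_strdup ("expensive-to-build data");
  return G_SOURCE_REMOVE;
}

static gboolean
run_task_b (gpointer user_data)
{
  /* By the time the deadline arrives, the data is already there and
   * task B doesn't stall building it. */
  g_print ("task B using: %s\n", data_a ? data_a : "(not ready!)");
  g_main_loop_quit (user_data);
  return G_SOURCE_REMOVE;
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  /* We know task B must run in ~500ms, so start preparing its data
   * now instead of inside task B itself. */
  g_idle_add (prepare_data_a, NULL);
  g_timeout_add (500, run_task_b, loop);

  g_main_loop_run (loop);
  g_main_loop_unref (loop);
  g_free (data_a);
  return 0;
}
```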

App developers like to do everything on the fly. JIT is the name of the game. We tend to only think about dependencies when we need them. Games developers aren’t afraid to have loading screens if it means they can guarantee the performance of what comes next. When they don’t want loading screens, they think ahead and set things up so that assets are streamed in the background and are ready before they’re needed. Being prepared like this minimises the time it takes for a task to respond and complete.
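A minimal sketch of that kind of background streaming, using GIO’s GTask; the asset file name and the idea of ‘the next scene’ are made up for illustration. The slow I/O happens off the main thread, and the result is sitting in memory by the time it’s actually wanted.

```c
/* A minimal sketch of streaming an asset in the background so it is
 * ready before the UI needs it. The asset path is hypothetical. */
#include <gio/gio.h>

static void
load_asset_thread (GTask *task, gpointer source, gpointer task_data,
                   GCancellable *cancellable)
{
  gchar *contents = NULL;
  gsize length = 0;
  GError *error = NULL;

  /* All of the slow I/O happens off the main thread. */
  if (g_file_get_contents ("level2-textures.bin", &contents, &length, &error))
    g_task_return_pointer (task, contents, g_free);
  else
    g_task_return_error (task, error);
}

static void
asset_ready (GObject *source, GAsyncResult *result, gpointer user_data)
{
  /* Back on the main thread: stash the data so the next scene can use
   * it immediately, without a loading pause. */
  gchar *contents = g_task_propagate_pointer (G_TASK (result), NULL);
  g_print ("asset preloaded: %s\n", contents ? "yes" : "no");
  g_free (contents);
}

static void
preload_next_scene (void)
{
  GTask *task = g_task_new (NULL, NULL, asset_ready, NULL);
  g_task_run_in_thread (task, load_asset_thread);
  g_object_unref (task);
}
```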

As well as guaranteeing performance, pre-loading also lets you guarantee memory consumption. Memory consumption is so often an afterthought for app developers, but it often becomes an issue in a desktop environment. By pre-loading (and pre-allocating), you can not only guarantee memory consumption, but also think about cache coherency and memory fragmentation, both of which can have a huge effect on fluidity (and therefore consistency).
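For example, a pre-allocated pool along these lines (the Particle struct and its fields are just for illustration) has a memory cost you know at start-up, lives in one contiguous, cache-friendly block, and never fragments the heap mid-animation:

```c
/* A minimal sketch of pre-allocating a fixed pool of objects up front,
 * rather than allocating them on demand during an animation. */
#include <glib.h>

#define MAX_PARTICLES 1024

typedef struct {
  gfloat x, y;
  gfloat velocity_x, velocity_y;
  gboolean in_use;
} Particle;

/* One contiguous block: the memory cost is known up front, and
 * iterating the array is cache-friendly. */
static Particle particle_pool[MAX_PARTICLES];

static Particle *
particle_acquire (void)
{
  for (guint i = 0; i < MAX_PARTICLES; i++)
    {
      if (!particle_pool[i].in_use)
        {
          particle_pool[i].in_use = TRUE;
          return &particle_pool[i];
        }
    }
  return NULL;  /* pool exhausted: a known, bounded failure mode */
}

static void
particle_release (Particle *p)
{
  p->in_use = FALSE;
}
```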

Another technique I seem to read about a lot in modern games development is the idea of a budget. Once you have broken down your game/application into blocks, you take the high-level blocks (for games, say, graphics, AI, physics, audio and input) and you allocate each of them a time budget. This comes down again to consistency of performance. If you’re aiming for 60fps, you have about 16.7ms to have your screen ready. Spend any longer and you either have to sacrifice a frame, or you have to allow visible artifacts (tearing) to appear on the screen (and then you have even less time to prepare the next frame). This is the major area where I think application development is lacking. I see very few applications that try to guarantee a particular refresh rate with techniques like this. In fairness, games developers usually have target hardware too, but it’s still a different way of thinking, and nothing stops an app developer from targeting their own machine.
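Here’s a rough sketch of what a time budget can look like in GLib terms. The 4ms figure and do_one_unit_of_work() are placeholders; the shape of it is just ‘pull small pieces of work off a queue until this frame’s slice is spent, then come back next frame’.

```c
/* A minimal sketch of a per-frame time budget, assuming work has
 * already been broken into small, bounded units queued up front. */
#include <glib.h>

#define FRAME_BUDGET_US 4000  /* e.g. 4ms of a ~16.7ms frame for this subsystem */

static GQueue pending_work = G_QUEUE_INIT;

static void
do_one_unit_of_work (gpointer item)
{
  /* Hypothetical placeholder for one small, bounded piece of work. */
}

/* Hook this up as an idle or per-frame callback; it returns TRUE
 * (keep running) until the queue drains. */
static gboolean
process_within_budget (gpointer user_data)
{
  gint64 start = g_get_monotonic_time ();

  while (!g_queue_is_empty (&pending_work))
    {
      do_one_unit_of_work (g_queue_pop_head (&pending_work));

      /* Stop as soon as this frame's slice is spent; whatever is left
       * waits for the next frame instead of causing a skipped frame. */
      if (g_get_monotonic_time () - start > FRAME_BUDGET_US)
        break;
    }

  return !g_queue_is_empty (&pending_work);
}
```

Spreading the leftover work across later frames is exactly the ‘spreading the load over time’ idea mentioned below, just enforced with a hard per-frame cap.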

I can’t say that I’ve followed too much of this advice myself, but I hope to in the future. Some of the recent performance improvements I’ve made in our app aren’t really optimisations, but just spreading the load over time. For those writing Clutter-based animations, I’ve also written a new component for Mx to help, MxActorManager. This allows batches of actor creation/addition/removal to be spread over a set time-slice and I’ve used it in our current project with a reasonable amount of success.

Now that Gnome 3.0 is out (congratulations btw, it’s awesome!), I can see that other applications may want to raise the visual bar a bit. I hope this post serves as a reminder that if you want a highly interactive and animated application, it may take more than just optimisation of our old applications and refinement of our old techniques to get there.
