February 05, 2006

Tim Sweeney gives a presentation at POPL 2006

Last month, Tim Sweeney of Epic Games (developers of the Unreal engine) gave a presentation at POPL 2006 titled "The Next Mainstream Programming Language: A Game Developer's Perspective". Although I did not attend the conference, I found the slides an interesting read. They can be opened with OpenOffice if you don't have Microsoft PowerPoint.

On the opening slide, he argues that the programming languages we use for game development today fail us in two ways.

"Where are today's languages failing?
- Concurrency
- Reliability"

Next-generation consoles such as the Xbox 360 and PlayStation 3 have multiple parallel processors, and the languages we typically use (mostly C++) offer almost no help in utilizing them beyond libraries of low-level thread synchronization primitives like semaphores and mutexes. As for reliability, we all know how easy it is in C++ to dereference a null or dangling pointer, deallocate a reachable object or index off the end of an array.
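To make "low-level thread synchronization primitives" concrete, here is a minimal sketch in Java (the language discussed later in this post) of the kind of manual locking discipline those primitives impose; the `SharedCounter` class is purely illustrative:

```java
import java.util.concurrent.locks.ReentrantLock;

// A shared counter guarded by an explicit mutex-style lock.
// This is the level of abstraction being criticized: the programmer
// must remember to lock and unlock around every access by hand.
public class SharedCounter {
    private final ReentrantLock lock = new ReentrantLock(); // low-level mutex
    private int count = 0;

    public void increment() {
        lock.lock();       // forget this and you get a data race
        try {
            count++;
        } finally {
            lock.unlock(); // forget this and you risk a deadlock
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) counter.increment(); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) counter.increment(); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter.get()); // prints 2000
    }
}
```

The compiler offers no help if a lock or unlock is omitted; the program simply misbehaves at runtime, which is exactly the gap Sweeney wants future languages to close.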

I fully agree that these two areas need to be addressed in future programming languages for game development. Further, I think there is something even more important: productivity.

Although better reliability and language support for concurrency can certainly improve productivity as a secondary effect, productivity should also be addressed directly. For example, a C++ program with hundreds of thousands of lines of code can take several minutes to build, even for relatively small changes (to a header file, for example). This can certainly be improved by careful management of physical dependencies between source files and by programming practices like the pimpl idiom.

But compare this to a similarly sized program written in Java, which can be rebuilt from scratch in seconds and where the build times for incremental changes are imperceptibly quick. The Java IDE that I have been using recently (Eclipse) builds every time I save a source file so the project is always up-to-date and ready to run immediately.

I was a little disappointed that he only briefly mentioned the subject of tools. By tools I mean, for example, the software that we use to make game content and the software that runs behind the scenes building all the game assets and gluing them together. This becomes more important every year as we squeeze more and more content into games.

Here as well, productivity is key. Not just the productivity of individual programmers but the productivity of the whole team. A new programming language can only be a small piece of that puzzle. But there are certain language features that can make a difference. Reflection is number one on my list. Reflection can be used to automate many software problems involving interoperation between tools and game code, such as automatic GUI generation, versioning, distributed builds, etc.
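As a minimal sketch of what I mean, here is how Java reflection lets a tool discover an object's properties at runtime; the `Light` class is hypothetical, but the same mechanism could drive an auto-generated property editor or a versioned serialization format:

```java
import java.lang.reflect.Field;

// Hypothetical game object whose editable properties a tool wants to discover.
class Light {
    public float radius = 10.0f;
    public boolean castsShadows = true;
}

public class PropertyDump {
    // Enumerate an object's public fields by reflection -- the same
    // mechanism a tool could use to build a GUI or a save format
    // without hand-written glue code per class.
    static void dumpProperties(Object obj) throws IllegalAccessException {
        for (Field field : obj.getClass().getFields()) {
            System.out.println(field.getName() + " = " + field.get(obj));
        }
    }

    public static void main(String[] args) throws Exception {
        dumpProperties(new Light());
    }
}
```

Adding a new property to `Light` requires no changes to the tool side at all; the editor and the serializer pick it up automatically.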

As with productivity, I would rate reflection (or some similar language feature) above language support for concurrency. At least for next generation consoles. Maybe not next-next-generation!

From his slides:

"Solved problems:
Random memory overwrites
Memory leaks

Solvable:
Accessing arrays out-of-bounds
Dereferencing null pointers
Integer overflow
Accessing uninitialized variables

50% of the bugs in Unreal can be traced to these problems!"

I'm not sure why "accessing uninitialized variables" is listed as only solvable. It is solved! With the exception of C++, no mainstream language I know of allows the programmer to access an uninitialized variable. That is entirely a C++ problem and can be avoided by using a language like Java or Lua for example.
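Java, for instance, solves it twice over: fields are always default-initialized, and reading an uninitialized local variable is a compile-time error. A small sketch:

```java
public class InitDemo {
    static int health; // fields are always default-initialized (0 for int)

    public static void main(String[] args) {
        System.out.println("health = " + health); // prints "health = 0"

        int ammo;
        // System.out.println(ammo); // rejected by javac:
        // "variable ammo might not have been initialized"
        ammo = 50;
        System.out.println("ammo = " + ammo);
    }
}
```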

In most languages, the other three "solvable" problems result in runtime exceptions. Even integer overflow can be checked at runtime by some languages such as C#. Sweeney argues that we would be better off if these problems were caught at compile time. In a sense I agree because when I introduce a bug into a program, I want to know as soon as possible. I would rather have the compiler tell me immediately (or in 10 minutes if I am using C++!) than get a report from QA several months later and spend hours tracking down the problem.
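To illustrate, all three of the remaining "solvable" problems surface as catchable exceptions in Java; for overflow I am using `Math.addExact` (added in a later Java version) as a stand-in for C#'s `checked` arithmetic:

```java
public class RuntimeChecks {
    public static void main(String[] args) {
        int[] scores = new int[3];
        try {
            int x = scores[3]; // out of bounds: index 3 of a length-3 array
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught out-of-bounds access");
        }

        String name = null;
        try {
            int len = name.length(); // null dereference
        } catch (NullPointerException e) {
            System.out.println("caught null dereference");
        }

        try {
            int big = Math.addExact(Integer.MAX_VALUE, 1); // checked overflow
        } catch (ArithmeticException e) {
            System.out.println("caught integer overflow");
        }
    }
}
```

In C++, each of these would be silent undefined behavior; here, every one is detected at the moment it happens.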

But I don't think compile time checks are the best solution. More than likely they will complicate the language. The examples of programs annotated with compile time checks from his slides certainly look more complicated than they would be without. Also, I am sure there will be cases where the compiler will not be smart enough and either miss a problem or be too conservative and raise an error when there is no problem.

It seems to me that unit testing is a better solution. If one uses TDD to ensure that every line of code is thoroughly covered by unit tests, then the introduction of such a bug will, in the majority of cases, result in a unit test failure. And if the unit tests are run after every compile, the offending bug will be identified immediately at build time, without any changes to the programming language.
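Here is a tiny hand-rolled test in that spirit, with no framework required; the `clamp` function and its hypothetical bugs are just for illustration:

```java
// A minimal unit test run after every build. Had clamp been written
// with its bounds swapped, or returning 'value' unconditionally, one
// of the checks below would fail immediately at build time.
public class ClampTest {
    static int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    static void check(boolean condition, String description) {
        if (!condition) throw new AssertionError("FAILED: " + description);
        System.out.println("ok: " + description);
    }

    public static void main(String[] args) {
        check(clamp(5, 0, 10) == 5, "in-range value is unchanged");
        check(clamp(-3, 0, 10) == 0, "below range clamps to lower bound");
        check(clamp(42, 0, 10) == 10, "above range clamps to upper bound");
        System.out.println("all tests passed");
    }
}
```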

Something I have been thinking about is how one might better design a language to support TDD. But that will have to be the subject of a future post.
