January 30, 2005

C++ data definition language

In a previous post, I said I would consider some alternative ways of taking advantage of the benefits of C# (primarily reflection) on game console platforms. Two approaches spring to mind.

The first approach is to use an existing interpreted language with reflection support, such as Python. An interpreted language is appealing because, if the interpreter, debugger and other tools can be easily used on a console platform, there is considerably less development effort and risk than a C# to C++ translator. Of course, for performance reasons, a considerable amount of code would still need to be written in a compiled language like C++.

The second approach is to develop a C++ reflection framework. Although this would still be a considerable effort, it is lower risk than a C# to C++ translator because it is still in the realm of library development rather than compiler development.

Actually, these are not necessarily alternative approaches. For many games, it would be appropriate to employ both, writing "higher-level" code in an interpreted language and "lower-level" code in C++. In fact, a C++ reflection framework would be useful for allowing more transparent interoperation between the two languages by automatically generating interpreted bindings for C++ classes where appropriate. This would allow the interpreted language to easily manipulate C++ objects.

Also, I believe reflection is as useful for this "lower-level" code as it is for code that can be appropriately implemented in an interpreted language.

I have developed a C++ reflection framework before, with a certain amount of success. The main problem I encountered was that the semantics of C++ are too "loose". A language that supports reflection really needs sufficiently "tight" semantics that a program can unambiguously understand itself by examining its class hierarchy at runtime. For example, if a C++ program examines one of its classes and sees that a function returns a pointer, does that mean the function returns a pointer to an object or a pointer to an array? If it is an array, how can it determine the size of the array? In general it can't. Take this example:

class Foo {
public:
    Foo *getParent();
    Bar *getBars();
};

A program examining this class through reflection cannot know whether getBars() returns a pointer to an object or a pointer to an array. It takes a human to realize that "Bars" is a plural noun and so the function probably returns a pointer to an array.

A solution to these two problems would be to avoid the direct use of pointers in reflected classes. This could be achieved by replacing pointers with more abstract types that can be unambiguously interpreted through reflection:

class Foo {
public:
    Reference<Foo> getParent();
    Array<Bar> getBars();
};

Now it is clear, even to a program using reflection, that getParent() returns a reference to an object and getBars() returns a reference to an array of objects. The Array type would provide a way of determining the size of the array.
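As a minimal sketch of what such wrapper types might look like (the Reference and Array names come from the example above; the implementations here are assumptions, not any real framework):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical reflection-friendly wrappers. Unlike a raw pointer, each
// type states unambiguously whether it refers to one object or an array,
// and Array carries its own size.
template <typename T>
class Array {
public:
    void add(const T& value) { items.push_back(value); }
    std::size_t size() const { return items.size(); }   // size is knowable
    T& operator[](std::size_t i) { return items[i]; }
    const T& operator[](std::size_t i) const { return items[i]; }
private:
    std::vector<T> items;
};

template <typename T>
class Reference {
public:
    Reference(T* p = 0) : ptr(p) {}
    T* get() const { return ptr; }
    bool isNull() const { return ptr == 0; }
private:
    T* ptr;   // always a single object, never an array
};
```

A reflection framework could then special-case these two templates and interpret any field or return type built from them without guesswork.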

This seems like a reasonable approach. Take C++ as a starting point and find the aspects of the language that don't work well with reflection. Then use the powerful abstraction and meta-programming facilities of C++ to build a C++ sub-language with the necessary "tight" semantics. The sub-language would have a lot in common with languages like Java and C# but it would still be "proper" C++.

A problem with this approach is that C++ metadata is only known to the compiler. Except for the very limited metadata available through RTTI, it is lost after compile time and there is no way for the program to access it once it is running. For this reason, many approaches to C++ reflection require the programmer to express the metadata again using some kind of data structure or meta-program that parallels the program's class hierarchy. See Reflection Support by Means of Template Metaprogramming.

I think this is kind of silly because a key advantage of reflection is to avoid exactly this kind of redundancy. Admittedly, if reflection is used extensively, the redundancy of the parallel data structure or meta-program may be less troublesome than the redundancy of the code that would be written if reflection were not supported. Also, this approach does not support automatic code generation. There is no way for a compile time tool to generate code based on the reflected metadata.

To avoid this redundancy, it is possible to use a C++ parser to examine the program before compilation and automatically generate the metadata code. This is the approach I used for my previous C++ reflection project. It is also the approach described in the paper Non-intrusive object introspection in C++. There are numerous suitable C++ parsers. The project described in that paper used a modified version of the G++ compiler; I used Visual C++ as the parser and extracted the metadata from the debug information in the generated PDB files. OpenC++ and AspectC++ are also possibilities.

This approach works quite well. An issue is finding a parser that can deal with all the different dialects of C++ that are used in a multi-platform game project. Another issue is that a standard C++ parser does not enforce the use of a C++ sub-language better suited to reflection. From my previous experiences with C++ reflection, I think this is the key to success.

Another approach, and the approach that I think I will investigate in more depth if time permits, is to use a Data Definition Language (DDL) to define classes. My instinct is to make the DDL as close to the language of C++ class definitions as possible. But I would modify and restrict it where necessary to ensure that it enforced the use of the reflected C++ sub-language. It would be rather like Managed C++ or C++/CLI. These are both C++ variants modified to support, among other things, reflection.

Then a C++ programmer should feel as at home working with the DDL for a class as they would working with a C++ class definition. A compile time tool would parse the DDL and automatically generate the corresponding C++ class definition, which can then be compiled by the C++ compiler. The code generator would also output the data structure that parallels the class hierarchy. This would be the underlying data structure that allows the program to examine itself through reflection. Furthermore, the compile time tool could use the information in the DDL to generate other kinds of code, such as serialization code.
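To make the idea concrete, here is a sketch of what the DDL compiler's output might look like for one hypothetical DDL class. All names here (SpawnPoint, FieldInfo, fields) are illustrative assumptions, not part of any real tool:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical DDL input:
//
//   class SpawnPoint {
//       float radius;
//       int   maxEntities;
//   };
//
// The DDL compiler might emit the C++ class below plus a parallel
// metadata table describing its fields for use by reflection code.

struct FieldInfo {
    const char* name;
    const char* type;
    unsigned    offset;   // byte offset of the field within the object
};

class SpawnPoint {
public:
    float radius;
    int   maxEntities;
    static const FieldInfo* fields(int* count);
};

const FieldInfo* SpawnPoint::fields(int* count) {
    static const FieldInfo table[] = {
        { "radius",      "float", (unsigned)offsetof(SpawnPoint, radius) },
        { "maxEntities", "int",   (unsigned)offsetof(SpawnPoint, maxEntities) },
    };
    *count = 2;
    return table;
}
```

A serialization or binding generator would walk the same table at compile time or runtime instead of re-parsing C++ headers.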

To be clear, I would not be developing a new programming language or compiler. It would be a simple DDL that would take the place of C++ header files for reflected classes.

This is not a new approach, even in game development. For example, this is exactly the approach employed by NetZ for describing objects that are synchronized in network games. It is also the approach of UnrealScript's "native classes". In fact, the combination of an interpreted language like Python with automatic binding to C++ code described through a DDL would be remarkably similar to UnrealScript + native classes.

Of the benefits of C# that I listed, which would and which would not apply to this approach? The biggest benefit - reflection - still holds. The benefits that are built on top of reflection, such as automatic code generation and automatic memory management also hold.

Tools are a problem. There will be no IDEs or automatic refactoring tools with knowledge of the DDL syntax, unless it is very close to C++. This is a good reason to keep the DDL syntax as close as possible to standard C++, possibly a subset of the C++ syntax. Then at least all the usual C++ IDEs and tools such as Visual Assist will work.

With regard to testing, reflection would permit the development of a unit testing framework like NUnit or JUnit. The DDL compiler could automatically generate mock objects to support a mocking framework similar to EasyMock.

There is no possibility of using the .NET class library. But there are C++ class libraries such as STL and boost. A design goal should be to ensure that it is possible to use types that are not described with DDL, such as std::string.

With regard to higher level language features such as array bounds checking, this can be made part of the semantics of the reflected C++ sub-language. For example, a template array class might perform bounds checking.
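As a sketch, a bounds-checked array in the reflected sub-language might look like this (CheckedArray is a hypothetical name; whether to assert or throw on a bad index would be a design decision):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>

// Minimal sketch of a bounds-checked fixed-size array. Part of the
// imagined reflected C++ sub-language, not a real library type.
template <typename T, std::size_t N>
class CheckedArray {
public:
    T& at(std::size_t i) {
        if (i >= N)
            throw std::out_of_range("CheckedArray index out of range");
        return data[i];
    }
    std::size_t size() const { return N; }
private:
    T data[N];
};
```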

The DDL compiler can take steps to ensure fast build times, such as minimizing physical dependencies between generated header files and automatically employing the pimpl idiom in the generated C++ code for appropriate classes.
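The pimpl idiom mentioned above can be sketched as follows. Player and its members are invented for illustration; in generated code the two halves would live in separate header and source files:

```cpp
#include <cassert>
#include <string>

// --- generated header: clients see only an opaque pointer, so the class
// layout can change without forcing clients to rebuild ---
class Player {
public:
    Player();
    ~Player();
    void setName(const std::string& name);
    std::string name() const;
private:
    struct Impl;   // defined only in the generated source file
    Impl* impl;
};

// --- generated source: the real data members are hidden here ---
struct Player::Impl {
    std::string name;
};

Player::Player() : impl(new Impl) {}
Player::~Player() { delete impl; }
void Player::setName(const std::string& n) { impl->name = n; }
std::string Player::name() const { return impl->name; }
```

Because the DDL compiler generates both halves, the usual pimpl boilerplate cost falls on the tool rather than the programmer.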

January 29, 2005

Contact details

In light of recent industry events, should anyone want to contact me directly over the next few weeks, I can be emailed at "apatrick at mail dot com".

January 22, 2005

First experience of test driven development

I have been meaning to try Test Driven Development (TDD) for some time now. Until now I had been putting it off, mostly because I thought it would take quite a long time to learn and because I thought it would be a long time before I really "got it" and started seeing any real benefits. Having just written my first program using TDD, I am pleased to report that my learning time was almost nil and I saw benefits within about 20 minutes of first applying it, even for the development of a very simple program.

The basic idea behind TDD is that you write a test before you write the tested code. You don't write all the tests for a program before you start. Rather you alternate between writing a test and then writing just enough code to make the test pass. Then to clean up, you refactor the tests and the code, testing again after each step. The technique is highly iterative. In most cases, each iteration takes no more than a couple of minutes and you modify only a few lines of code.

I don't want to go into much detail because I am still at the beginner stage and I am sure there are some things I have got wrong.

The way I went about it was to rewrite a program I wrote a few weeks ago, this time using TDD. I used Visual C# 2005 together with NUnit. I configured the project to automatically run NUnit after every build.

The tangible benefits I have seen so far are:

Even at this early stage, I would encourage any programmer to try it out. The benefits are apparent even writing a program that is only a few hundred lines or so.

January 16, 2005

Review of the benefits of C# for games programming

I have just about reached the limits of what I can hope to achieve with my C# to C++ translator project. There is so much more I would like to do but it is getting to the stage where I cannot make significant progress in weekend time alone. I'm also extremely busy at work and have little energy left. Maybe I'll pick up where I left off in a few months.

I did pretty well in the time available I think. I can translate simple C# programs to C++. The build process is well integrated into my IDE. I can do C# source level debugging using a standard C++ debugger. It took 6 days solid work to get this far.

This is a summary of what I have learned and what I think the benefits of C# would be in console game development. I'll probably round it off with a review of the disadvantages and the alternatives once my enthusiasm has had a chance to die down a little!


Faster build times

Reuse existing C# and C++ development tools

Testing and test driven development

Fast, safe and fully automatic memory management

Comprehensive class library

Compiled code should have similar performance to C++

Higher level language features

Automatic code generation framework

C++ / C# interoperation


Code coverage analysis

Code coverage analysis is a way of verifying that every code path in a program is exercised at least once by a collection of test cases. Consider this simplified example:

public class MathUtils
{
    public static int Sign(int x)
    {
        if (x < 0)
            return -1;
        else if (x > 0)
            return 1;
        return 0;
    }
}

[TestFixture]
class TestMathUtils
{
    [Test]
    public void TestNegative()
    {
        Assertion.AssertEquals(MathUtils.Sign(-5), -1);
    }

    [Test]
    public void TestPositive()
    {
        Assertion.AssertEquals(MathUtils.Sign(5), 1);
    }
}
There are three code paths through the function Sign(). Of those, two are exercised by unit tests and one is untested. Test Driven Development (TDD) demands that the programmer writes unit tests first and then only as much code as is necessary to make those tests pass. In fact it also demands that she first deliberately write code that makes the test fail! An introduction to TDD can be found in a series of articles by Robert C. Martin, starting here.

If the intent is that all code paths should be tested, for example when using TDD, it is useful to have an automated tool that can detect untested code paths. In a sense, this tests the tests and provides confidence that the TDD principles are being followed. It is even possible to integrate code coverage analysis into the normal build process and run it to ensure the tests are complete (or more accurately, not incomplete) prior to running the tests themselves.

I would like to build support for code coverage analysis into my C# to C++ translator. I will do it by generating additional C++ code that will track which blocks of code have been executed and which haven't. There is no need to track each individual C# statement or MSIL instruction. It is sufficient to track entry or exit from each basic block. A basic block is a sequence of instructions ending with a control flow transfer instruction and containing no control flow targets. For example, the body of an inner loop containing no other control flow constructs would be a basic block.

A simple approach would be to associate a bit-field with each C# method. Each basic block in the method's MSIL would be represented by a bit in the bit-field. The translator would prepend each basic block with a C++ statement to set the corresponding bit in the bit-field.

Before running the unit tests, the bit-fields would be cleared. Then afterwards, in order to verify that the unit tests fully exercised the tested code, it is a simple matter of checking that all the bits are set. If any are still clear, it should be possible to display an error message indicating the source file, method and line number that is untested.
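The scheme above can be sketched in C++, reusing the Sign example. The COVER macro, the global bit-field and the bit layout are assumptions about what the translator might emit, not its actual output:

```cpp
#include <cassert>

// One bit per basic block of the translated method. In real generated
// code there would be one bit-field per method, mapped back to source
// file and line number for error reporting.
static unsigned g_sign_coverage = 0;
#define COVER(bit) (g_sign_coverage |= (1u << (bit)))

int Sign(int x) {
    COVER(0);                          // entry block
    if (x < 0) { COVER(1); return -1; }
    if (x > 0) { COVER(2); return 1; }
    COVER(3);                          // fall-through block
    return 0;
}

// All four basic blocks executed?
bool allBlocksCovered() {
    return g_sign_coverage == 0xFu;    // bits 0-3 all set
}
```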

January 15, 2005

Mono class library licensing terms

I just checked out the licensing terms of the Mono .NET class library. This is Mono's implementation of classes like System.String, System.Collections.ArrayList, etc. It is just C# source code so, using my C# to C++ translator, I will be able to translate it to C++ just like any other C# program.

It is under the very unrestrictive MIT license. Specifically, it is allowable to use it in commercial projects, modify it and sell it without having to make the game's code publicly available. I will look into modifying the core classes to work with my reference counting memory manager. Then my C# to C++ translator will have a complete runtime library. Yay!

C# memory manager for video games console platform

In a previous post I described a fully automatic memory manager that was guaranteed not to leak memory. It was based on reference counting, weak references and compile time analysis. I plan to use this in the runtime for my C# to C++ translator. Now in this post I will look into a possible allocation strategy.

When designing a memory manager, there is an important trade-off to be made between dealing with fragmentation, allocation / deallocation performance and utilization of available memory.

On certain platforms and for certain applications, fragmentation is the downfall of many C++ memory managers. Consider a video game on a console platform. Fragmentation is a serious concern. Console games sometimes run for days on end, for example when abandoned by the player or when running in attract mode in a retail store. Fragmentation must be the number one cause of crashes in long running console games.

It is not a premature optimization to try and maximize memory manager performance. It is not uncommon to see memory management burning 10% or more CPU time on console platforms.

It is always possible to use more memory in a console video game. I have never worked on a project where we haven't used absolutely all the memory available on the platform. Efficient use of memory means more content and better quality content.

Unfortunately, this is a trade-off. I just can't have a memory manager that does not fragment memory, has fast allocation / deallocation and at the same time makes 100% use of all available memory.

For example, a memory manager based on free lists or pools will not suffer from unpredictable fragmentation and will be super fast. But the programmer will have to work hard to use every last bit of memory and carefully tune the sizes of the pools.

On the other hand, consider best-fit memory managers. These do a reasonable job of dealing with fragmentation and are excellent for making use of as much memory as possible. But, especially if they do their best to minimize fragmentation, they are slower than free list based implementations.

There are memory manager designs that are fast and make very efficient use of memory. But because the trade-off always balances, they fragment more quickly, potentially leading to a crash. An example of this kind would be a free list based memory manager that allocates new pools on demand.

I think I know a way to side-step the trade-off. The trade-off still holds but I can design the memory manager so that fragmentation is not such a concern. By releasing myself from the fragmentation side of the trade-off, I am better able to ensure that my memory manager is fast and uses memory efficiently.

A game that crashes after a day is no good. A game that drops 10 frames once every 24 hours during long runs does not have a problem. During these 10 (or so) frames the game would be compacting its heap, thereby eliminating all fragmentation, allowing it to continue for another 24 hours until the next time it needs to compact. I pulled the numbers 10 and 24 hours out of the air. The actual numbers do not matter so long as they do not have a negative impact on the quality of the game.

A game can also force compactions at times when the player will not notice them, such as between frontend and gameplay, between levels and at checkpoints. If a game has been running for a long time without controller input, this probably means the player has abandoned it. They will not notice a compaction when they next pick up the controller and start playing again.

This approach would not be feasible without reflection or some other means of utilizing the program's metadata. In order to compact the heap, the memory manager must relocate the objects into a contiguous block of memory. It must also "fix up" all the fields that reference these relocated objects. This is where reflection comes in. Reflection makes it easy to find the fields that need to be fixed up.

It is possible to get close to this in a language without reflection through the use of a handle table, which avoids the need to fix up fields. But this does not side-step the trade-off. By introducing double-indirections whenever a field is dereferenced, the memory manager has reduced performance.

For my C# runtime's memory manager, I plan to do something along these lines. The first thing I will try will be based on free-lists and pools. The heap will actually be a stack of pools, each pool being able to satisfy allocations of a fixed size. Whenever an object is allocated, it will be satisfied from the first pool with a free slot available. If no pools have a free slot, a new pool will be pushed onto the stack. If the stack is too full to allocate a new pool, the stack will be compacted. If that fails, then the problem is not fragmentation. The heap has truly run out of memory. When an object is automatically deallocated because its reference count has reached zero, its pool's slot will be made available for a future allocation of the same size.
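A greatly simplified sketch of one such fixed-size pool follows (the heap described above would hold a stack of these; no pool stack or compaction is shown, and the interface is an assumption):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One pool satisfying allocations of a single fixed size. Free slots are
// kept on a free list, so allocate and deallocate are O(1).
class Pool {
public:
    Pool(std::size_t slotSize, std::size_t slotCount)
        : storage(slotSize * slotCount) {
        for (std::size_t i = 0; i < slotCount; ++i)
            freeList.push_back(&storage[i * slotSize]);
    }
    void* allocate() {
        if (freeList.empty())
            return 0;                  // caller would push a new pool
        void* p = freeList.back();
        freeList.pop_back();
        return p;
    }
    void deallocate(void* p) { freeList.push_back(static_cast<char*>(p)); }
    std::size_t freeSlots() const { return freeList.size(); }
private:
    std::vector<char>  storage;
    std::vector<char*> freeList;
};
```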

I still need to think about variable sized arrays because these do not pool well. Perhaps I will allocate arrays using best-fit and compact the heap whenever this fails. Array allocations should be less common so the lower performance of best-fit seems appropriate because it will reduce the number of compactions that are necessary.

Now I'm starting to optimize prematurely! I just want to make sure that I can think of a workable solution before I commit too much to my C# to C++ translator. The real memory manager will be designed based on profiling actual allocation patterns.

January 08, 2005

Prototype pattern using reflection

The prototype pattern is one of the originals described by the "Gang of Four" in their book "Design Patterns". The idea is that a class contains a virtual method that is overridden in each derived class to create a clone. Given a prototype object, it is then possible to create as many clones as needed by invoking the clone method.

An application of this pattern in video games would be to implement spawning. The level data would contain a prototype object for each spawning entity. Whenever an instance of this kind of entity needed to be spawned, the prototype would be cloned and the clone positioned at the spawn point.

Although this is very powerful, one thing I don't like about the prototype pattern is it involves a lot of typing. I actually have to override the clone method every time I add a new entity class. As I will show, this code is almost completely redundant.

I have modified the prototype pattern to take advantage of the C# reflection API. Now there is no need to override a clone method. The prototypes are cloned simply by using reflection to examine the prototype at runtime, instantiate a new object of the same class and then copy the property values from the prototype into the clone. If a property is of a reference type, I also provide a way of specifying whether to do shallow or deep copy. Shallow copy is the default. Deep copy is requested by applying a DeepCopy attribute to the property.
public class DeepCopyAttribute : Attribute
{
}

public class PrototypeCloner
{
    public object Clone(object prototype)
    {
        object clone = CreateNewInstance(prototype.GetType());
        CopyPropertyValuesFromPrototype(clone, prototype);
        return clone;
    }

    private object CreateNewInstance(Type type)
    {
        ConstructorInfo defaultConstructor =
            type.GetConstructor(new Type[0]);
        return defaultConstructor.Invoke(new object[0]);
    }

    private bool IsPropertyDeepCopied(PropertyInfo property)
    {
        return property.GetCustomAttributes(
            typeof(DeepCopyAttribute), true).Length != 0;
    }

    private void CopyPropertyValuesFromPrototype(object clone, object prototype)
    {
        foreach (PropertyInfo property in prototype.GetType().GetProperties())
        {
            if (IsPropertyDeepCopied(property))
            {
                object prototypeProperty = property.GetValue(prototype, null);
                object cloneProperty = Clone(prototypeProperty);
                property.SetValue(clone, cloneProperty, null);
            }
            else
            {
                property.SetValue(clone, property.GetValue(prototype, null), null);
            }
        }
    }
}

public abstract class Weapon {}

public class LaserDisruptor : Weapon
{
    public float RateOfFire
    {
        get { return rateOfFire; }
        set { rateOfFire = value; }
    }
    private float rateOfFire = 1000;
}

public abstract class Entity {}

public class SpaceMonkey : Entity
{
    public float Speed
    {
        get { return speed; }
        set { speed = value; }
    }
    private float speed = 5;

    public Weapon Weapon
    {
        get { return weapon; }
        set { weapon = value; }
    }
    private Weapon weapon = new LaserDisruptor();
}

class Program
{
    static void Main(string[] args)
    {
        SpaceMonkey prototypeObject = new SpaceMonkey();

        // Create 10 clones of the prototype.
        PrototypeCloner cloner = new PrototypeCloner();
        for (int i = 0; i != 10; ++i)
        {
            Entity clonedObject = (Entity)cloner.Clone(prototypeObject);
        }
    }
}
Something I love about C# is that it's so easy to get things done. This program compiled correctly the first time and did exactly what I expected! Actually, the first part was easy: Visual C# 2005 parses your code as you type and reports many errors before you even build :)

Automatically detect all memory leaks at compile time?

Yesterday at work I spent a couple of hours tracking down a troublesome memory leak. Memory leaks are my third least favorite kind of bug (race conditions and dangling pointers are worse). They are also annoying because it is next to impossible to write unit tests that verify memory does not leak. If I refactor some code, how am I to know if I have introduced a memory leak? This leads me to my question...

Is it possible to automatically detect all possible memory leaks at compile time? In general, and certainly for C-style programs, I think the answer is no. But if a program is structured in a certain way, the answer is definitely yes. This post was originally titled, "Using reference counting safely", and that is mostly what it is about.

But first, I think it is important to point out that this technique is, in a sense, inferior to garbage collection. Garbage collection makes memory leaks impossible (by memory leak I mean unreachable allocation). If memory leaks are impossible then there is no need to attempt to detect them! Of course, there are lots of trade-offs here. For example, garbage collection might be infeasible for performance reasons.

Anyway, I am going to assume that you all understand reference counting, its Achilles' heel (cyclic data structures) and how weak references can solve the problem (by breaking cycles). In summary, reference counting can be used safely if the data structure is acyclic or if cycles are broken using weak references (of one form or another).
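For illustration, here is a parent/child cycle broken by a weak back-reference, written with std::shared_ptr and std::weak_ptr. These are C++11 types that post-date this post; the 2005-era equivalent would be boost::shared_ptr and boost::weak_ptr:

```cpp
#include <cassert>
#include <memory>

struct Parent;

struct Child {
    std::weak_ptr<Parent> parent;   // weak: back-pointer breaks the cycle
};

struct Parent {
    std::shared_ptr<Child> child;   // strong: parent owns child
};

// Returns true if both objects were destroyed when the last external
// reference went away, i.e. the cycle did not leak.
bool demoNoLeak() {
    std::weak_ptr<Parent> watch;
    {
        std::shared_ptr<Parent> p = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;       // would leak if this were a shared_ptr
        watch = p;
    }
    return watch.expired();
}
```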

My idea is to have the compiler or a compile-time tool analyze the program's data structures and determine whether there is any possibility of a reference counting memory manager leaking. It would do this by looking for any cycles that are not broken by weak references in the program's reference graph. If the language supports reflection, this is quite straightforward. It is simply a case of constructing a reference graph, where the nodes are classes and the arcs are potential references between objects of those classes.

The algorithm would be something like this:

for each class in the program
    add a node for that class

for each class A in the program
    add a directed arc from A to each of its base classes

    for each non-static, non-weak, uninherited reference field in A
        for each class B that inherits from the referenced class
                (including the referenced class itself)
            add a directed arc from A to B

I think that's right, but I have to admit I have had to repost three times with bug fixes! Then it is a matter of searching for cycles in the generated graph. If there are no cycles, then it is guaranteed that a reference counting memory manager will not leak.
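The cycle search itself is a standard depth-first traversal. A sketch over an integer-indexed class graph (the adjacency-list representation is an assumption; a real tool would map indices back to class and field names for error messages):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Nodes are classes; adj[a] lists the classes that a can strongly
// reference. A cycle means a reference counting manager could leak.
typedef std::vector<std::vector<int> > Graph;

// state: 0 = unvisited, 1 = on the current DFS path, 2 = fully explored.
static bool dfs(const Graph& g, int node, std::vector<int>& state) {
    state[node] = 1;
    for (std::size_t i = 0; i < g[node].size(); ++i) {
        int next = g[node][i];
        if (state[next] == 1) return true;   // back edge: cycle found
        if (state[next] == 0 && dfs(g, next, state)) return true;
    }
    state[node] = 2;
    return false;
}

bool hasCycle(const Graph& g) {
    std::vector<int> state(g.size(), 0);
    for (std::size_t i = 0; i < g.size(); ++i)
        if (state[i] == 0 && dfs(g, (int)i, state))
            return true;
    return false;
}
```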

I think this is an ideal replacement for garbage collection for my C# to C++ translator. Reference counting is more feasible on a console platform than garbage collection. It burns relatively little CPU time, accesses memory coherently and is naturally incremental.

Many existing C# programs will work unchanged, specifically those that do not use cyclic data structures. If a C# program does use cyclic data structures, I won't need to wait until it crashes to find it has a memory leak. The compiler will not even allow me to build the program and it will tell me which classes and fields are involved in the cycles I need to fix.

Conversely, the changes I need to make to C# programs will not prevent them from working correctly when run using a garbage collector, for example if they are also run on a .NET virtual machine.

Most importantly, I will have a 100% safe fully automatic memory manager, which will absolve me of the need to think about how to test for memory leaks and how to manually free memory.


I've been thinking about this some more. What the compile-time analysis does is ensure there is no way a program can represent a cyclic data structure (unless it is broken with weak references). Just because a program could represent such a structure does not mean it will actually create one. Unfortunately, because the halting problem is undecidable, it is not in general possible for a static analysis to determine what a program will actually do.

My concern is that the static analysis might be too constraining in some cases. My intuition says that in most cases it will be okay. What the constraints basically mean is a program's data must have a tree structure (actually a DAG). When it doesn't, use weak references.

But there might be a few cases where the static analysis is over-constraining, although I can't think of any examples. Just in case, I thought of an alternative way of making a reference counting manager safer, without any compile-time analysis.

One of my original problems was that I could not think of a way of testing whether my code had memory leaks using unit tests. I realize now that if I use a reference counting memory manager, it is easy to determine whether a memory leak has occurred. Just run a garbage collector! It would be possible to garbage collect after every unit test, but depending on the complexity and volume of unit tests, this might slow things down quite a lot. Maybe I could garbage collect once after running all the unit tests. If there is a memory leak then I can run them again, this time with a garbage collect after every test. Or maybe I can use some kind of binary search to narrow in on the unit test with the leak.

I'm on the fence as to which technique is better. The static analysis means I just don't need to think about memory leaks but I might be frustrated by the constraints. The unit tests combined with garbage collection approach means I don't have any additional constraints but I am able to write code that leaks memory. But at least I have an automated way of finding leaking code.

January 04, 2005

Improving code turnaround time

I have been using refactoring, as described in Martin Fowler's excellent book, for a couple of years now and have found that it has improved my productivity and the quality of my code considerably. One of the basic requirements of refactoring is that you must do it incrementally. That is, after each individual change, you must build and test your code, ideally using automated tests. I will call the time it takes to make a simple modification to the code, build it and test it the "turnaround time". Quick turnaround is also essential to another approach to programming called Test Driven Development or TDD.

There is a major problem with applying these techniques to games development. Build times for most games these days are anything but fast. A build time of 3 minutes would be quite fast, even for a code change that affects a small number of source files. See Noel Llopis's articles on C++ physical structure for some tips on improving the build times of C++ programs.

Unfortunately, even using all of the techniques he describes, I don't think I will be able to get build times as fast as I would like so I can write games the way I really want to. To program efficiently, it will be necessary to make more than one code change step per build / test iteration, as I have to do now on my current game project.

As I see it, the problem is quite fundamental to C++. From C it inherits the use of header files and a separate linking step in order to achieve source-file level modularity and incremental builds. This approach simply does not scale well to current games projects. As games platforms have increased in power, the games we write for them have increased in size. Although the power of the computers we use to build them has also increased, every year we see build times get longer and longer.

If you have read any of my previous posts, you will probably not be surprised that I introduce C# at this point. C#, unlike its ancestors C and C++, does not use header files or a separate link step. And this pays off: its build times are orders of magnitude shorter. As an experiment, I built a mid-sized C# program, probably about the same size as a typical game these days, 8MB of source code in total. A full rebuild took 38 seconds. A dependency check with no files modified took 3 seconds. These are the kinds of build times I need to apply refactoring effectively.

I am writing a C# to C++ translator. My idea is to translate C# into C++ and then use a C++ compiler and linker to generate native code for a games platform. Although I think I can generate the C++ code in such a way as to reduce build times, for example by minimizing header dependencies and using forward declarations wherever possible, it will still take significantly longer than C# compilation.
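To illustrate the forward-declaration idea, here is a sketch of the kind of header the translator might emit. The class names are my own invention, not the translator's actual output. Because Player refers to Enemy and World only through pointers and references, forward declarations are enough, and a change to Enemy's definition does not force files that include this header to recompile.

```cpp
class Enemy;   // forward declaration: no #include "Enemy.h" needed
class World;

class Player {
public:
    void attack(Enemy* target);   // pointer parameter: incomplete type is fine
    void spawn(World& world);     // reference parameter: incomplete type is fine
private:
    Enemy* m_target;              // pointer member: still only a declaration
};
```

Only source files that actually call into Enemy need its full definition, so the dependency stays out of the header entirely.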

But I've had another idea. Although I need to compile to native code in order to run on a games platform such as a console, I can run the C# program, prior to translation to C++, on any platform with a .NET virtual machine, e.g. a Windows PC. So my idea is to perform several programming iterations, building and testing only for a .NET target platform. Having performed several iterations, I will build for my actual games platforms and test on those, ideally using a build farm that will automatically build, deploy and test for me, while I get on with the next batch of iterations. I think this could work very well for the majority of code, which is not tied to features specific to a particular games platform.

Obviously, even if C# is used in a games project, a lot of code will still need to be written in a language like C++; I would estimate up to 50%, depending on the project and the platforms. This code will not run on a .NET virtual machine. Fortunately the .NET framework offers many ways to allow "managed" code to link with "unmanaged" code like C++. In Microsoft's implementation, this might be done using Managed C++. So the C++ game code might be compiled to a Windows DLL that would be loaded by the .NET virtual machine and linked with the C# code being tested.
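The unmanaged side of that boundary might look something like this. The function name and signature are my own illustration, not from any real project. Built into a DLL, a plain C entry point like this could be called from the C# side via P/Invoke ([DllImport]) or wrapped by a Managed C++ layer.

```cpp
#include <cstdio>

// Export the symbol with C linkage so the managed caller can find it by name.
#ifdef _WIN32
#define GAME_API extern "C" __declspec(dllexport)
#else
#define GAME_API extern "C"
#endif

// Hypothetical native entry point: advance the simulation by one timestep.
GAME_API int updateWorld(float dt)
{
    if (dt <= 0.0f)
        return -1;          // reject a bad timestep
    std::printf("simulating %f seconds\n", dt);
    return 0;               // 0 signals success to the managed caller
}
```

Keeping the boundary to simple C signatures like this avoids marshalling complications and works the same whether the managed side is a test harness on a PC or the final game code.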

January 01, 2005

Diagnostics traces

Here is a common and useful debugging technique:
void foo(int a, double b)
{
    printf("Enters %s/%d foo(%d, %f)\n", __FILE__, __LINE__, a, b);
    // do stuff
    printf("Exits %s/%d foo(%d, %f)\n", __FILE__, __LINE__, a, b);
}
Sometimes we need a picture of what the program is doing that cannot be obtained using debugging techniques like breakpoints or single-stepping. For example, we might need to see how a function is used over the course of several frames. How can we improve this tracing code? For starters, there is a considerable amount of redundancy. This would be better:
void foo(int a, double b)
{
    TRACE();    // a macro that prints the enter/exit traces using __FILE__ and __LINE__
    // do stuff
}
Unfortunately, in C++, although we can get hold of the source filename and line number and for some compilers also the function name, there is no way to get the parameter values. This would be even better:
void foo(int a, double b)
{
    // do stuff
}
That is, what if all functions automatically output a diagnostics trace without the programmer having to write any additional code? Obviously we would need to be able to turn it on and off for specific functions. Otherwise the game would slow to a crawl.

Supposing C++ was extended to support this, the debugger might have a property page with function names and checkboxes. The checkboxes would be used to enable or disable the diagnostics trace for each function. They would obviously default to off.

How would C++ need to be extended? The compiler could generate code equivalent to this:
void foo(int a, double b)
{
#ifdef DEBUG
    if (g_trace_foo) printf("Enters %s/%d foo(%d, %f)\n",
        __FILE__, __LINE__, a, b);
#endif // ifdef DEBUG
    // do stuff
#ifdef DEBUG
    if (g_trace_foo) printf("Exits %s/%d foo(%d, %f)\n",
        __FILE__, __LINE__, a, b);
#endif // ifdef DEBUG
}
Then all the debugger would need to do is change the value of the global variable g_trace_foo when the checkbox was clicked, and the trace output would change accordingly.

For some functions, for example those invoked from the game's innermost loops, the overhead of checking the value of g_trace_foo or equivalent would be too great, even in debug builds. So for these cases we would need a way of directing the compiler to statically disable tracing. Maybe something like this:

notrace void updateParticle()
{
    // ...
}
I don't expect this feature in C++ any time soon. However, I can easily add this feature to my C# to C++ translator. In fact I might just try it tomorrow... The joys of writing my own compiler :)
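Today's C++ can already get part of the way there without compiler support. Here is my own sketch using a scope-guard macro: the guard prints the enter and exit traces, and a plain bool stands in for the debugger-toggled g_trace_foo flag. As noted above, parameter values still cannot be captured automatically; the g_trace_lines counter is just there so the effect of the flag can be observed.

```cpp
#include <cstdio>

int g_trace_lines = 0;      // counts trace lines emitted (for observation only)
bool g_trace_foo = true;    // the debugger would flip this at runtime

// RAII guard: prints the "Enters" trace on construction and the
// "Exits" trace on destruction, i.e. when the scope is left.
struct ScopeTrace {
    const char* file; int line; const char* func; bool on;
    ScopeTrace(const char* fi, int li, const char* fn, bool enabled)
        : file(fi), line(li), func(fn), on(enabled)
    {
        if (on) { std::printf("Enters %s/%d %s\n", file, line, func); ++g_trace_lines; }
    }
    ~ScopeTrace()
    {
        if (on) { std::printf("Exits %s/%d %s\n", file, line, func); ++g_trace_lines; }
    }
};

#define TRACE(flag) ScopeTrace traceGuard(__FILE__, __LINE__, __func__, (flag))

void foo(int a, double b)
{
    TRACE(g_trace_foo);
    (void)a; (void)b;       // unused in this sketch
    // do stuff
}
```

The programmer still has to write the TRACE() line in every function, which is exactly the step a compiler-supported version would eliminate.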

Euclid's algorithm

Today I worked on making the code generated by my translator a little less verbose. My strategy was to try and reconstruct the original C# expressions by analyzing the stack operations carried out by the IL assembly. This turned out to be quite straightforward. The main complication was aliasing. Unlike Java byte-code, MSIL byte-code permits variables to be aliased. I worked around this by only attempting to transform stack operations back into expressions between places where aliasing might occur.
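The idea can be illustrated with a toy model (my own sketch, not the translator's code): simulate the evaluation stack symbolically, pushing source-level expression text instead of values, so that a run of stack operations folds back into a single expression.

```cpp
#include <string>
#include <vector>

// The Op encoding here is invented for illustration; real MSIL has far more
// opcodes and, as noted above, aliasing forces reconstruction to stop at
// certain boundaries.
struct Op { enum Kind { Load, Mod, Call } kind; std::string arg; };

std::string rebuild(const std::vector<Op>& ops)
{
    std::vector<std::string> stack;          // symbolic evaluation stack
    for (const Op& op : ops) {
        switch (op.kind) {
        case Op::Load:                       // push a variable or constant name
            stack.push_back(op.arg);
            break;
        case Op::Mod: {                      // fold two operands into "(a % b)"
            std::string rhs = stack.back(); stack.pop_back();
            std::string lhs = stack.back(); stack.pop_back();
            stack.push_back("(" + lhs + " % " + rhs + ")");
            break;
        }
        case Op::Call: {                     // fold two arguments into "f(x, y)"
            std::string a2 = stack.back(); stack.pop_back();
            std::string a1 = stack.back(); stack.pop_back();
            stack.push_back(op.arg + "(" + a1 + ", " + a2 + ")");
            break;
        }
        }
    }
    return stack.back();
}
```

For example, the op sequence load b, load a, load b, mod, call GCD folds back into GCD(b, (a % b)), recovering the shape of the original C# expression.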

For fun, I implemented Euclid's algorithm in C# and translated it to C++. Euclid's algorithm is one of the oldest formally described algorithms. It was originally described in Euclid's "Elements" around 300BC, before the invention of algebra. Does that mean computer science predates mathematics? It was described in terms of geometric constructions. 2300 years later it is still in widespread use.

Anyway, this was my implementation:
static uint GCD(uint a, uint b)
{
    if (b == 0)
        return a;
    return GCD(b, a % b);
}
And my C# to C++ translator's output:

UInt32 GCD(UInt32 a, UInt32 b)
{
    Int si0;
    UInt32 CS_36_1_36_0000;
    Int8 CS_36_4_36_0001;
    CS_36_4_36_0001 = Int(Int(Int(b) == Int(0)) == Int(0));
    if (Int(CS_36_4_36_0001)) goto L15;
    CS_36_1_36_0000 = Int(a);
    goto L27;
L15:
    si0 = ::Test1::Program::GCD((UInt32) Int(b),
        (UInt32) Int(UInt(a) % UInt(b)));
    CS_36_1_36_0000 = Int(si0);
    goto L27;
L27:
    return Int(CS_36_1_36_0000);
}
Much less verbose! And there are still some improvements I can make. Here is the x86 disassembly of the compiled translation:

PUBLIC	?GCD@Program@Test1@@YAIII@Z

; Function compile flags: /Ogty
; COMDAT ?GCD@Program@Test1@@YAIII@Z
_a$ = 8
_b$ = 12
?GCD@Program@Test1@@YAIII@Z PROC
; Line 48
	mov ecx, DWORD PTR _b$[esp-4]
	test ecx, ecx
; Line 52
	mov eax, DWORD PTR _a$[esp-4]
	je SHORT $L27$1362
	xor edx, edx
	div ecx
	mov DWORD PTR _b$[esp-4], edx
	mov DWORD PTR _a$[esp-4], ecx
	jmp ?GCD@Program@Test1@@YAIII@Z
$L27$1362:
; Line 57
	ret 0
?GCD@Program@Test1@@YAIII@Z ENDP
So it seems that, by using C++ as an intermediate language, C# programs can be compiled into quite reasonable native code. Notice that the C++ compiler was able to apply the tail recursion optimization.
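The jmp back to the top of GCD in place of a call is the tail recursion optimization at work: the compiler has effectively rewritten the recursion as a loop. Here is a rough C++ equivalent of what the optimized code does, my own paraphrase of the assembly rather than anything the compiler emitted:

```cpp
// Loop form equivalent to the tail-recursive GCD: each "recursive call"
// becomes an in-place update of the parameters followed by a jump back to
// the top, exactly as the mov/mov/jmp sequence in the listing does.
unsigned gcd_loop(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned r = a % b;   // the div instruction's remainder (edx)
        a = b;                // parameters overwritten in place
        b = r;
    }
    return a;                 // eax holds a when b reaches zero
}
```

So the stack never grows, no matter how many "recursive" steps the algorithm takes.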
