Speeding Up Builds

I seem to spend a disproportionate amount of time waiting for builds, especially when I tend to be updating only one source file at a time. In my case it's currently Carbide.c++ / S60 SDK builds, but what I am about to say is probably equally applicable to Java ME or other build environments.

I started wondering whether it's possible to speed up builds. Should I buy a new machine with a dual- (or even quad-) core processor? Should I get more memory (I only have 1GB)? Should I put the SDK or tools onto a RAM disk? Here's what I found.

I began by looking at hardware-based solutions such as the HyperDrive4 hard drive and the PCI-based GigaByte i-RAM. I came away with the conclusion that, while they are obviously faster than a normal hard drive, they are still limited to a great extent by the SATA disk interface.

Next, my thoughts moved on to a software RAM disk and I found SuperSpeed RAMDisk. I learnt that it's limited by 32-bit Windows' maximum addressable memory of about 3.5GB. Even so, that would have left me a reasonable 2GB for the SDK and Carbide and 1.5GB for running the OS and apps.

Then I remembered that compiling/linking (and also starting the S60 emulator) is often faster the second time around. At this point I looked at Windows file cache performance and tuning, and the possibility of using CacheSet to optimise the cache size.
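As an aside, the file-cache limits that CacheSet manipulates can also be read programmatically. Here is a minimal, read-only sketch using the Win32 GetSystemFileCacheSize call; that API only exists on newer versions of Windows (Server 2003 / Vista onwards), so treat it as illustrative rather than something my tests relied on:

#include <windows.h>
#include <cstdio>

int main()
{
    SIZE_T minSize = 0, maxSize = 0;
    DWORD flags = 0;

    // Query the current working-set limits of the system file cache.
    if (GetSystemFileCacheSize(&minSize, &maxSize, &flags))
    {
        printf("File cache working set: min %lu KB, max %lu KB, flags 0x%lx\n",
               (unsigned long)(minSize / 1024),
               (unsigned long)(maxSize / 1024),
               flags);
    }
    else
    {
        printf("GetSystemFileCacheSize failed: %lu\n", GetLastError());
    }
    return 0;
}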

I did some tests while monitoring the Windows performance counter "Cache: Copy Read Hits %". This is the percentage of file read requests that are satisfied from the cache and hence need no disk access. I compiled and linked a large project and afterwards started the emulator. Once I had done this once, no matter how many times I rebuilt or restarted the emulator, I always got a very reasonable 95 to 99% cache hit rate.
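If you would rather sample that counter from code than sit watching Performance Monitor, the PDH API can read "Copy Read Hits %" directly. A rough sketch, assuming the standard pdh.lib is linked (the one-second sleep is just to get the two samples a percentage counter needs):

#include <windows.h>
#include <pdh.h>
#include <cstdio>
#pragma comment(lib, "pdh.lib")

int main()
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    // The same counter watched in the tests above.
    if (PdhAddCounter(query, TEXT("\\Cache\\Copy Read Hits %"), 0, &counter) != ERROR_SUCCESS)
        return 1;

    // Percentage counters need two samples before they can be formatted.
    PdhCollectQueryData(query);
    Sleep(1000);
    PdhCollectQueryData(query);

    PDH_FMT_COUNTERVALUE value;
    if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value) == ERROR_SUCCESS)
        printf("Cache copy read hits: %.1f%%\n", value.doubleValue);

    PdhCloseQuery(query);
    return 0;
}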

Remember, I only have 1GB of memory. So I started a few heavy apps such as Adobe Photoshop and repeated the test. I still got a high level of caching. I even tried loading Eclipse with the Android SDK at the same time as Carbide, and I continued to get a high level of caching.

So what did I conclude? First of all, Windows is very good at caching files. For Carbide.c++ and the Symbian SDKs, 1GB is more than enough memory to obtain the benefits of Windows file caching, even with other applications open. The hardware- and software-based RAM disks would have helped my first-time access to files but probably wouldn't have helped much after that.

Next I looked at the CPU (perhaps I should have looked at it first!) and noted that, while building, it spends a considerable proportion of the time near 100% usage. While my machine isn't state of the art, the latest machines would provide only modest fractional increases in speed. So what about dual and quad core? These work by having the cores process different threads simultaneously. Unfortunately, Carbide and the Symbian ABLD build mechanism are sequential (i.e. one thing is compiled only after the previous one has completed), so I suspect there wouldn't be many gains there. A dual core might help keep the Carbide UI responsive (it runs in another thread) but that's not what I need.
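To make the idea concrete, here is a hypothetical sketch (nothing like the real ABLD or Carbide machinery) of what a multi-threaded build step would look like: independent source files are handed out to one worker thread per core, and only the link has to wait for all of them. The "compiler" command is a placeholder, not the actual SDK tool chain invocation:

#include <cstdlib>
#include <string>
#include <vector>
#include <thread>
#include <atomic>

int main()
{
    // Files that could, in principle, be compiled independently of each other.
    std::vector<std::string> sources = { "a.cpp", "b.cpp", "c.cpp", "d.cpp" };
    std::atomic<size_t> next(0);

    // Each worker grabs the next un-compiled file until none are left.
    auto worker = [&]() {
        for (;;) {
            size_t i = next.fetch_add(1);
            if (i >= sources.size()) break;
            // Placeholder compile step; a real build would invoke the SDK's
            // compiler with the project's include paths and defines.
            std::string cmd = "compiler -c " + sources[i];
            std::system(cmd.c_str());
        }
    };

    unsigned cores = std::thread::hardware_concurrency();
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < (cores ? cores : 2); ++t)
        pool.emplace_back(worker);
    for (std::thread &t : pool)
        t.join();

    // Only now, when every object file exists, can the link step run.
    return 0;
}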

So, Carbide team, how about adding support for multi-threaded builds? Also, I wonder if it would be possible to do just a link based on previously calculated dependencies? It's often the case that I am changing just one file and can quickly compile it manually (Ctrl+Alt+C). A build can take forever, but the actual link is relatively painless; it's the dependency checking that seems to kill the speed. I often know I can get away with just a new link without building anything else. I know I could link from the command line, but I can't be bothered with creating and, more importantly, maintaining the complex command line required. Surely the Carbide.c++ IDE could do this for me?