Posted by fabriziogiudici
on September 18, 2009 at 2:00 PM PDT
During my last years before getting my master's degree, I worked on a free flight simulator. It ran under DOS and was named FGFLY. It was written in C++, initially Borland C++ and later Watcom C++, in order to use a memory extender to bypass the infamous 640k limit. At the time I was just a student earning a little money with programming, and I couldn't afford to spend much on hardware, so my computer was never on the leading edge; I remember that compiling the whole project took more than one hour.
Today computers are faster and I thankfully own a very fast one; still, my favourite project, blueMarine, takes quite a long time to compile. Two years ago I ran some performance tests on my build environment, and on Mac OS X a build with Ant took 1'44" (Linux was definitely faster because of a faster file system). Today the project has grown and takes more than 7', on a faster machine. blueMarine has long since been split into subprojects, so the biggest component takes less than 3'30". For the record, these are the times needed to perform an ant clean nbms on a unibody MacBook Pro, 2.4GHz, running Leopard:
|| 3' 22"
|| 2' 10"
Compiling an application is clearly a disk-bound operation, so a possible way to make things faster is to use a faster disk. The fastest disk on earth is a RAM disk, but unfortunately it's also unreliable. Mac OS X and Linux are very stable, but occasionally a crash can make a RAM disk vanish, and I really can't tolerate the thought of throwing away a few hours of work.
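For reference, Mac OS X sizes a RAM disk in 512-byte sectors: the ram:// pseudo-URL passed to hdid takes a sector count, so a 768MB disk needs 768 × 2048 = 1572864 sectors. A minimal sketch of the arithmetic and the device creation (the guard and the volume name are just illustrative):

```shell
# ram:// takes a number of 512-byte sectors: compute it from megabytes.
SizeMB=768
NumSectors=$((SizeMB * 1024 * 1024 / 512))
echo "$NumSectors sectors"                         # 1572864 for 768MB

# Only attempt device creation where hdid/diskutil exist (i.e. on Mac OS X).
if command -v hdid >/dev/null 2>&1; then
    DeviceName=`hdid -nomount ram://$NumSectors`   # prints e.g. /dev/disk3
    diskutil eraseVolume HFS+ RamDisk $DeviceName  # format and mount it
fi
```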
With Mercurial, it seems I've found a good way to balance speed and reliability. While Subversion and CVS create service directories (.svn and CVS) in every folder of your project, thus mixing the local portion of the repository with your working area, Mercurial keeps the whole repository in a single directory (.hg). This means that with just a symbolic link the repository and the workspace can live on two separate volumes, and thus on two different filesystems. Furthermore, since a Mercurial repository is a complete, locally cloned repository, re-creating the working area from scratch takes only a few seconds and doesn't need a network connection. Last but not least, commits are local, so you can commit frequently and, in case of failure, you're likely to lose no more than a few minutes of work.
Bingo! This script does all the magic on Mac OS X:
fritz% cat makeramdisk
NumSectors=1572864          # 768MB of 512-byte sectors
VolumeName=blueMarineWork   # example values: pick your own volume name
HgRepo=$HOME/hg             # ... and the path where the Mercurial repositories live

# Create the RAM disk device and mount it as an HFS+ volume.
DeviceName=`hdid -nomount ram://$NumSectors`
diskutil eraseVolume HFS+ $VolumeName $DeviceName

# For each subproject: create an empty working directory, link in the
# repository (which safely stays on the ZFS volume) and let Mercurial
# re-create the working files.
for project in Metadata Semantic blueMarine-core blueMarine; do
    mkdir -p /Volumes/$VolumeName/blueMarine/$project
    ln -s $HgRepo/$project/.hg /Volumes/$VolumeName/blueMarine/$project
    (cd /Volumes/$VolumeName/blueMarine/$project && hg update -C default)
done
It first creates and mounts a RAM disk large enough (768MB) to contain all the blueMarine working areas, including compilation artifacts; then it creates empty directories for the working areas and the relevant links to the Mercurial repositories, which safely live on a ZFS filesystem; finally, it invokes Mercurial to re-create the working directories.
Running it takes about 35 seconds, so it's fine for setting things up before each working session.
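At the end of a session the reverse operation matters: since the .hg directories live on the ZFS volume, committing through the symlinks saves the history before the RAM disk is discarded. A possible teardown sketch (the volume name is an example, and the existence checks just make it a no-op when the volume isn't mounted):

```shell
# End-of-session teardown (sketch): commit everything, then eject the RAM disk.
VolumeName=blueMarineWork   # example name; must match the one used at setup
for project in Metadata Semantic blueMarine-core blueMarine; do
    dir=/Volumes/$VolumeName/blueMarine/$project
    if [ -d "$dir" ]; then
        # hg writes through the .hg symlink, so history lands on the ZFS volume.
        (cd "$dir" && hg commit -m "end of session")
    fi
done
# Everything left on the RAM disk is regenerable with "hg update".
if [ -d /Volumes/$VolumeName ]; then
    diskutil eject /Volumes/$VolumeName
fi
```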
Repeating the same compilation tests shows that about 50% of the time is saved:
|| 1' 25"
|| 1' 2"
I also expect better performance from NetBeans when it scans the source files.