Posted by johnsmart
on May 15, 2009 at 1:42 AM PDT
This case study is the fourth in an 8-part blog series about why so many developers adopt continuous integration, and was originally published on the Atlassian blogs.
Sophie is a technical project manager in a large insurance firm. She manages the development of a web-based calculator for car insurance premiums. The calculations are done dynamically, using a sophisticated AJAX-based web interface. Car insurance premiums use complex algorithms, and it is vital that the web calculator results match those coming from the mainframe back-end to the cent. As a result, the application has a very large battery of unit tests - some 6000 unit tests are run against the calculator alone. The application also has web-based functional tests, using Selenium, and load tests using JMeter. Each build of the application needs to be tested against four different browsers (IE6, IE7, Firefox, and Safari). The team has also written a comprehensive suite of integration tests which run against a production-scale Oracle test database and a CICS mainframe back-end.
Altogether, the full build can take up to an hour to run - far too long for an effective Continuous Integration setup.
There is also another problem: the Selenium test scripts that run smoke tests and functional tests against the deployed application need to be run in several different browsers, and on different operating systems. All of the targeted browsers can run on a Windows platform, but the department build server is set up on a Linux box, and the QA department wants the automated Safari tests to be run on an OS X machine.
The case for Continuous Integration
Distributed builds provided a good answer to both of Sophie's major problems. A distributed build architecture lets you run build jobs across several machines, providing potentially huge performance gains. It also enables you to run specific builds on dedicated machines, which makes it possible to run OS or environment-specific tests or build steps.
To resolve these problems, Sophie used a distributed Continuous Integration environment based on Bamboo. The CI setup consists of a main build plan, which compiles and runs the unit tests, along with build plans for functional tests running against different browsers (IE6, IE7, Firefox, and Safari), as well as other plans for the load and integration tests. Finally, there is a separate build plan for code coverage and code quality metrics. Seven build plans in all.
In Bamboo, it is a straightforward task to set up remote agents for your distributed builds. The main Bamboo server coordinates the builds, and runs build jobs either locally or on remote agents, depending on their availability. You can also use "capabilities" to help decide where a particular build job should be run. For example, Sophie has set up build agents on 5 machines: 2 recent and powerful Linux machines, 2 older Windows XP boxes, and a brand new iMac. The Firefox tests can be run anywhere, but the IE tests need to be run on the Windows machines, and the Safari tests are to be run on the Mac. She has added a custom capability called "operating.system" to tell Bamboo where to run each of the functional test build plans. Then, all she needs to do is add an extra Requirement to each build plan to indicate which operating system that particular plan needs.
Load tests are done by deploying the application to a dedicated test server and then running a JMeter script on the build agent. Load tests are particularly processor-intensive, and for best results need to be run on fast machines. Sophie wants load tests to run only on one of the fast Linux machines, or on the iMac. To do this, she has added another custom capability called "high.performance" to identify the machines with enough power to run the load tests correctly.
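As a rough sketch of how this can look on an agent, custom capabilities can be declared when the agent starts up (for instance via a bamboo-capabilities.properties file in the agent installation, or equivalently through the Bamboo administration screens). The values below are illustrative assumptions, not taken from Sophie's actual configuration:

```properties
# Illustrative bamboo-capabilities.properties for one of the
# fast Linux build agents. The capability keys match the ones
# described above; the values and the file-based approach shown
# here are assumptions for the sake of example.
operating.system=Linux
high.performance=true
```

Each build plan then simply declares a Requirement on the matching capability (for example, operating.system = Windows for the IE test plans, or high.performance = true for the load test plan), and Bamboo dispatches the job only to agents that satisfy it.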
Syncing the build artifacts
One other issue that Sophie had to resolve was distributing the build artifacts. The normal operating procedure in most CI setups is to check out the latest copy of the source code, compile and build the application, and then run unit tests, integration tests, and so forth. In Sophie's case, this would mean that the application would be compiled, unit tested, and bundled up into a WAR file seven times for each build!
Sophie's project uses Maven 2, so she was able to use Maven's support for snapshot releases and dependency management to optimize things in this area. The initial build plan compiles, runs unit tests, generates a WAR file, and deploys the WAR file to the Enterprise Maven snapshot repository, making it available for the other build jobs. This initial build is the only build to be triggered by changes in the source code repository - all of the other builds are dependent on this one. The integration, functional and performance tests are set up as separate Maven projects, thus avoiding the need to rebuild and rerun the unit tests each time. These build jobs automatically download the latest application snapshot and run the appropriate tests against this version. For the functional tests, Maven profiles are used to determine which browser to use in each case.
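To make this concrete, a functional test module's POM might declare a SNAPSHOT dependency on the calculator WAR and select the target browser through a profile. The group and artifact ids, property names, and profile ids below are illustrative assumptions, not details from Sophie's actual project:

```xml
<!-- Illustrative fragments from a functional test module's pom.xml.
     Artifact coordinates and the selenium.browser property are
     assumptions for the sake of example. -->
<dependencies>
  <!-- Pulls the latest snapshot WAR from the enterprise repository,
       so the tests never need to rebuild the application itself -->
  <dependency>
    <groupId>com.acme.insurance</groupId>
    <artifactId>premium-calculator</artifactId>
    <version>1.0-SNAPSHOT</version>
    <type>war</type>
  </dependency>
</dependencies>

<profiles>
  <!-- Each browser-specific build plan activates one profile,
       e.g. mvn verify -Pfirefox -->
  <profile>
    <id>firefox</id>
    <properties>
      <selenium.browser>*firefox</selenium.browser>
    </properties>
  </profile>
  <profile>
    <id>ie7</id>
    <properties>
      <selenium.browser>*iexplore</selenium.browser>
    </properties>
  </profile>
</profiles>
```

Because each browser-specific plan only activates a profile and downloads an existing snapshot, adding a new target browser is a matter of adding one more profile and one more build plan, rather than another full compile-and-package cycle.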