The Build Server: Your Project's Heart Monitor

Although I've been dismissive of build servers in the past, I've increasingly come to believe that the build server is critical: it's the heart monitor of your software project. It can tell you when your project is healthy, and it can give you advance warning when your project is about to flatline.

This is a companion discussion topic for the original blog entry at:

I can recommend FinalBuilder to automate your builds; it's much easier than hand-writing script files. Our build machine instant-messages and emails us progress on the build. We kick off a build via a small ASP.NET website, which collects info on what version to build and which build comments to burn into the version info, then creates a request text file; a Windows service spots the file and starts FinalBuilder running. It interfaces with TeamCoherence source control to get the latest source and label the build.

We are a small shop, though, and don't do CruiseControl or automated testing. The above has saved me many hours and makes it trivial to verify that the latest changes didn't break the build.

“dynamic languages (=ruby) there’s mostly no compiling”

For interpreted languages, substitute “build” with “parse”.

If the language runtime can run/parse your code, then it’s the same as building for all intents and purposes.

We use virtual machines for the build and test boxes, and run them on the tester's box overnight (that machine runs lots more virtual machines for GUI-based testing during the day).

I used to have a co-worker who would break my code all the time (too lazy to realize the setter is doing stuff; no problem, just make the variable public and set it directly). Damn, I wish I had known about tests and continuous integration back then!

The more you hate your coworker, the more you'll love a good build server.

I learned this lesson a long time ago, at a company where everyone was checking stuff into the source repository willy-nilly: breaking each other's stuff, keeping code checked out for weeks, not doing a get-latest to verify that they were building against the most current code, and so on.

To make matters worse, they had one developer who was attempting to make the software build, and he was laboring under the impression that it was his job to fix the code that someone else had checked in that had broken the build. It was a nightmare. He didn't understand their code, and often couldn't access their machines to check in the code they still had checked out, the code that made the build work on their machines.

They couldn’t get a weekly or monthly build out. You could forget a daily build.

I had to twist the arm of the project manager to get them to let me take that job over. The first thing we did was create a build server. It wasn't anywhere near as advanced as the stuff you're describing here, but it was leaps and bounds ahead of what we had. Next, we established a rule: you had to check in working code, and if you checked something in and broke the build, YOU had to fix it. And we built DAILY. And we always built on the build machine.

The build machine didn’t have any special junk on it.

Within a week we were getting daily builds out: labeled, pinned, and backed up. Within a month, the process was fully scripted, so that it was creating the installer, deploying to the application server, and emailing the release manager when things went wrong on the nightly build. Amazing.

These things are literally life savers on a project.

I’d love to convince my boss here that we need one. Problem is, I’m a one-man software development team. And hardware resources are scarce. I had to fight tooth-and-nail to get a test box. Digging up the money for a build machine is a whole new game.

Of course, that doesn’t mean I’m not going to fight for it. :)

I’m convinced! :) Does anybody have any tips or recommendations for what software (and hardware) you need for this (with .NET)?

Mike Hofer writes…

“Digging up the money for a build machine is a whole new game.”

The build machine can be old, slow, and obsolete, though; it only has to be fast enough to build overnight. Use the old 400 MHz box you retired last year!

tk writes…

“Does anybody have any tips or recommendations for what software (and hardware) you need…”

You can make a pretty good start with Windows scheduled tasks, BAT files, and a command line emailer (e.g. blat).
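That scheduled-task-plus-script-plus-mailer approach amounts to a small driver like the following sketch (in Python for brevity rather than a BAT file; the step commands, addresses, and SMTP host are all placeholders, and on Windows blat would stand in for the mailer):

```python
import smtplib
import subprocess
from email.message import EmailMessage

def run_step(name, cmd):
    """Run one build step; return (ok, combined output)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    output = f"== {name} ==\n{proc.stdout}{proc.stderr}"
    return proc.returncode == 0, output

def nightly_build(steps, notify):
    """Run steps in order; on the first failure, notify the team and stop."""
    log = []
    for name, cmd in steps:
        ok, out = run_step(name, cmd)
        log.append(out)
        if not ok:
            notify(f"Nightly build FAILED at step: {name}", "\n".join(log))
            return False
    return True

def send_mail(subject, body):
    # Stand-in for blat: any SMTP relay will do. Host and addresses
    # below are illustrative assumptions, not real infrastructure.
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "build@example.com"
    msg["To"] = "team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

# Example wiring (commands are project-specific placeholders):
# steps = [("get latest", ["svn", "update"]),
#          ("compile", ["msbuild", "MySolution.sln"])]
# nightly_build(steps, send_mail)
```

Point a nightly scheduled task at a script like this and you have the "heart monitor" with no server product at all.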

A build machine for .NET should be relatively simple.

Of course, it’s going to depend on what you’re using for your project. If you’re using source code control (and I hope you are), you’ll need to make sure that your build machine has the client on it, or a library that makes it easy to get to it.

You’ll need the compiler for .NET on it. That’s pretty easily done by just installing the .NET SDK on it. Make sure you get the compiler for your language of choice. The base SDK comes with compilers for C#, VB.NET, J#, and managed C++.

If you’re creating installers, make sure you can invoke it from the command line, and get it on the build machine.

If you’re running automated tests with NUnit, make sure that’s on your build machine as well. As part of your daily builds, check out the entire solution, including the tests. Label them in source code control when you make the build. If the build is successful and the tests pass, then you can pin it in the source code repository. Don’t pin it if it’s not a successful build. (This is all based on experience with VSS, of course; we work with the tools we’re given.)
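That check-out/build/test/label sequence is essentially a gate: a build gets labeled and pinned only when both the compile and the NUnit run succeed. A Python sketch of the gate, where the `msbuild`, `nunit-console`, and VSS `ss Label` commands are placeholders for whatever your shop actually uses:

```python
import subprocess

def daily_build(version, run=None):
    """Build, test, and label only a green build.

    `run` executes a command list and returns True on success; it defaults
    to launching real processes. Every command name here is a placeholder.
    """
    if run is None:
        run = lambda cmd: subprocess.run(cmd).returncode == 0
    if not run(["msbuild", "MySolution.sln"]):
        return "build broken"
    if not run(["nunit-console", "Tests.dll"]):
        return "tests failed"            # failed build: never label or pin it
    run(["ss", "Label", "$/MyProject", "-L" + version])  # pin the good build
    return "labeled " + version
```

The injectable `run` makes the gating logic itself testable without a compiler or source-control client on hand.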

I started with scripting the build process from a batch file. That’s enough to get you started. Eventually, I wrote a Visual Basic program to drive the whole thing. I managed to get it down to a single button-click that drove the whole process. That was about 8 years ago.

These days, however, there are plenty of off-the-shelf products that will do it for you that you won’t have to maintain yourself. They’ll integrate your source code control tasks, building the software, notifying you of failed builds, and starting the install builder for you. Fairly sophisticated stuff. Check it all out. Google’s a wonderful thing. :)

Key thing about a build environment: Do not put anything on it that isn’t absolutely necessary to building and testing your software. If your machine comes to you with extra junk, remove it.

Anything on a build machine that isn’t critical to the build introduces variables into the build process that can’t be duplicated everywhere else. You have to be able to build the software predictably. That’s the whole point of the build machine. A build environment should be a constant, not a variable.

John’s right about the machine itself, too. It doesn’t have to be fast, or even powerful. It just has to have enough computing power to be able to build your software overnight.

Heck, I just convinced my boss to recycle one of our old clunky laptops and repurpose it as my new build machine.

Imagine that. :)


You might want to check out Parabuild for a continuous integration server.

As for the hardware, here is an article on capacity planning for software build management servers:

Hope this helps.

Just a thought about the section on Scripted Tests. Is the build server really the best place to run the tests? I think a test scenario of delivering to another clean machine that has nothing to do with compilation or building is more like “the real world” and a better smoke test. Of course, maybe in your opinion this is getting too far outside of “building” and into “testing”, which is of course a huge topic all its own. But I have had good success (and early detection of failures!) with a quick-and-dirty “smoke test” on several projects.

Is there any reason to not use CVS? It’s worlds ahead in my limited experience (i.e., VSS, CVS).


There isn’t really any compelling reason not to use CVS. I use VSS because it’s what my company uses. Still. Despite the existence of far better tools.

Also, the advantage of executing the scripted tests (such as those done with NUnit) on the build machine is that you can execute them every time the build is executed. These tests are different tests from those executed by your QA folks, mind you. In my eyes, the NUnit tests are the smoke tests; they just verify that the build works, and whether or not I should go ahead and label it as a working build. Then I package it and deploy it to my build staging area as an installer.

Next it gets deployed to our test environment, where the QA folks hammer the heck out of it and determine whether or not it satisfies the requirements, and whether or not other unanticipated defects were introduced. It’s a totally different kind of testing. In an ideal scenario, that kind of testing is done by a fleet of QA testers who know how to break the software and look for edge cases that the software developers didn’t account for in the automated tests. And their test environment looks as close as possible to the production environment. It doesn’t look like the build environment.

Is there any reason to not use CVS?

Any modern source control system will do. I do not consider CVS a modern source control system. You can do worse (VSS), but there are definitely better choices.

If you’re using CVS and happy with it, I would strongly consider migrating to Subversion, aka CVS 2.0, if at all possible.

Good article. We use a combination of CruiseControl.Net, Subversion, and NAnt scripts for our deployment process and it works great.

CCNet handles monitoring our Subversion repository, when new code is checked in it triggers a NAnt script which does a fresh export onto the build machine, compiles, and then deploys to the staging web server.
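Under the hood, that CCNet trigger is just a polling loop: compare the repository's head revision against the last revision built, and fire the NAnt script when it changes. A minimal sketch, with placeholder command and file names:

```python
import subprocess

def poll_once(get_revision, last_built, trigger):
    """One cycle of a CCNet-style watcher: if the repository head has
    moved past the last revision we built, fire the build trigger."""
    rev = get_revision()
    if rev != last_built:
        trigger(rev)
    return rev

def build_and_deploy(rev):
    # Placeholder for the NAnt script CCNet kicks off: fresh export,
    # compile, deploy to the staging web server. The build file name
    # and property are assumptions for illustration.
    subprocess.run(["nant", "-buildfile:deploy.build", f"-D:revision={rev}"])
```

A real server loops `poll_once` on a timer; the injectable `get_revision` and `trigger` are just what make the cycle easy to reason about (and test) in isolation.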

Well, with dynamic languages (=ruby) there’s mostly no compiling, so the only way to have this integrity check are unit tests. Which could be pretty easily run on each commit (per commit hook), I guess.
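A per-commit hook along those lines is straightforward: with no compile step, the hook simply runs the unit-test suite and rejects the commit when it fails. A sketch (the default `rake test` command is an assumed Ruby convention; substitute your own test runner):

```python
import subprocess
import sys

def check_commit(test_cmd=("rake", "test")):
    """Run the suite; return its exit code (0 allows the commit).
    The default command is a Ruby-flavored placeholder."""
    result = subprocess.run(list(test_cmd))
    if result.returncode != 0:
        print("Commit rejected: unit tests failed.", file=sys.stderr)
    return result.returncode

# In an SCM pre-commit hook you would end with: sys.exit(check_commit())
```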

Any experiences with “heart monitors” in the dynamic-language world?

We actually have two streams running: one for development, and the other a ‘released’ build. The released build is the one that is already installed at various customer sites. Check-ins to this stream are highly controlled, which helps us send service packs to customers (they don’t have to install the whole build again and re-qualify it).

The dev stream is the regular build, to which we check in on a daily basis.

For both streams, whoever checks in code that either breaks a smoke test or has compile failures is supposed to fix it up ASAP. In case of failure, an auto-email is sent to everyone, naming the culprit. This, and the possibility of being held up on Friday evening, keeps everyone on their toes.

You don’t mention running automated functional tests once the build is complete. At the least you can check that it installs OK. Ideally you can then check some basic functionality too. Get your test team involved with this.

This can be scripted using VBScript etc., but is easier using a tool from one of the test-tool vendors.
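The "check that it installs OK" step can be a tiny script rather than a vendor tool. A sketch, assuming the installed product lands in a known directory and supports a `--version` flag (both assumptions; substitute whatever your installer and product actually provide):

```python
import os
import subprocess

def installed_ok(install_dir, exe_name, expected_version):
    """Minimal post-install smoke test: the product's executable exists
    and reports the version we just built. The --version flag is an
    assumption; substitute whatever your product supports."""
    exe = os.path.join(install_dir, exe_name)
    if not os.path.isfile(exe):
        return False
    out = subprocess.run([exe, "--version"], capture_output=True, text=True)
    return out.returncode == 0 and expected_version in out.stdout
```

Run it right after the silent install finishes; a failure here catches a broken installer hours before QA would.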

For Java we’re using CruiseControl, and it’s great!