Boyd's Law of Iteration

Scott Stanfield forwarded me a link to Roger Sessions' A Better Path to Enterprise Architecture yesterday. Even though it's got the snake-oil word "Enterprise" in the title, the article is surprisingly good.

This is a companion discussion topic for the original blog entry at:

What a perfectly apt analogy. Once again, you make reading about engineering as interesting as actually engineering.

Software testing is about failing early and often.

Very true.

I’ve always heard it as an OODA loop (D for Decide, as opposed to P for Plan).
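For reference, the loop’s four stages can be sketched in a few lines of Python — the stage functions below are placeholders I’ve invented, not anything from the article:

```python
def ooda_loop(observe, orient, decide, act, cycles=3):
    """Run the Observe-Orient-Decide-Act loop a fixed number of times.
    Boyd's point: whoever closes these cycles fastest gains the edge."""
    for _ in range(cycles):
        data = observe()
        model = orient(data)
        plan = decide(model)
        act(plan)

# Placeholder stages that just record completed cycles.
completed = []
ooda_loop(
    observe=lambda: "raw observations",
    orient=lambda d: f"model of {d}",
    decide=lambda m: f"action chosen from {m}",
    act=completed.append,
    cycles=3,
)
print(len(completed))  # -> 3
```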

PS - great post. Gave me pause as I think about what my team needs to do better (quicker iteration).

Nitpick: looking a little too high-tech there. F-86 cockpits were all analog dials.

However, great post. My experience has been that iteration speed often correlates with successful execution.

I experience the same when writing programs. A quick compile-link-execute cycle helps a lot. 1 second is perfect, 5 is O.K., 10 is annoying, 1 minute and I forget what I wanted to run.
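One way to keep yourself honest about cycle time is to measure it. A small sketch using the thresholds from the comment above — the command here is just a stand-in for a real compile-link-execute step:

```python
import subprocess
import sys
import time

def timed_cycle(cmd):
    """Run one build-and-run cycle and classify how it feels."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - start
    if elapsed <= 1:
        verdict = "perfect"
    elif elapsed <= 5:
        verdict = "OK"
    elif elapsed <= 10:
        verdict = "annoying"
    else:
        verdict = "I forget what I wanted to run"
    return elapsed, verdict

# Stand-in for a real build step: just run a trivial script.
elapsed, verdict = timed_cycle([sys.executable, "-c", "print('built and ran')"])
print(f"{elapsed:.2f}s -> {verdict}")
```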

That’s true. Once we had a problem with a project. We switched to making deliveries and meeting with the customer twice a week instead of once a week. It did help.


For a minute there, I thought you were going to tell me never to use a recursive algorithm, since an iterative version would be faster. :slight_smile:

One might question whether to write unit tests at all, so that iteration could be even faster. However, on second thought, unit tests might quickly justify themselves by preventing you from breaking things at every iteration.

Might a moderated approach be useful: only write unit tests for things that are likely to break, or as “sanity checks,” and leave the rest to chance, at least during the rapid iterative phases?
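That moderated approach might look something like this with Python’s unittest — a couple of cheap sanity checks on a piece that’s likely to break (the version parser below is hypothetical, a stand-in for whatever fragile code you have), with everything else left to chance:

```python
import unittest

# Hypothetical "risky" function: hand-rolled parsing is a classic place
# where new changes quietly break old behavior.
def parse_version(s):
    major, minor, patch = (int(part) for part in s.split("."))
    return (major, minor, patch)

class SanityChecks(unittest.TestCase):
    """Only the fragile bits get tests; trivial code is left untested
    during the rapid iterative phase."""

    def test_round_trip(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main(exit=False)
```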

John: I’m wondering the same thing. I’ve got no unit tests on my current large-ish project.

I’m definitely breaking previously-working code with new changes, but it’s usually been in an obvious, trivial-to-fix way. Right now, I’m thinking I’m spending a lot less effort on these quick fixes than I would have to on a full set of unit tests.

I’m considering writing a small number of unit tests for the most sensitive parts, but I really can’t justify the cost of writing a full-blown test suite right now.

Write them unit tests!!! And keep them up to date — you’ll thank yourself in the future, if for no other reason than that you’ll find your mistakes before the system testers do. Face saving, it’s not just for managers anymore :wink:

…another important point this post brings up is the importance of ease of use. If it’s hard to do (e.g., wrestling a manual flight stick), it detracts from the user experience (and in the jet-fighter example, might get you killed!). It’s easy to make things work, but oh so hard to make things easy to use.

Here though, the ease we’re primarily talking about is ease of the development process.

To bring back the flight metaphor: it should be easy to maneuver fast repeatedly, without getting tired out by the repetitions.

I dare suggest, again, that one way to enable this super-fast iteration could be to write fewer tests.

One question is, at what point does this come back to bite us? Some might answer, “right away,” but I wonder if this is always necessarily true.

Also, to what extent do tests help to go even faster? A guess is that some tests do help go faster, and some don’t. It’s certainly impossible to accurately predict risk of failure for each and every piece of the software, but I wonder if teams have experimented with only writing tests for the riskier items.

I’ve only worked at one organization (that wrote code for its own use) that embraced rapid iterative development. All the others failed.

They didn’t fail on policy, but when the time came to make the effort to support it. My one employer that succeeded did it on three pillars:

  • Robust, realistic test environments. I can’t move fast if I can’t test fast.

  • Automated code build and release framework, complete with regular, frequent production releases. If you’re moving fast, everyone must know all the rules. Humans can only get involved in negotiating the release process in severe circumstances. At one time, my employer re-installed their entire worldwide production codebase every WEEK.

  • Airtight version control and recordkeeping, and a standard, quick rollback process. If you can move forward quickly, you MUST be able to move backward quickly.

In a decent-sized organization, setting these facilities up takes resources and company commitment, and precious few have the vision to make that commitment. But the payoff for rapid iteration on a large scale is astounding to see.
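The third pillar — airtight recordkeeping with a standard, quick rollback — can be sketched abstractly. The `ReleaseLog` class below is purely illustrative (my invention, not how any real deploy system works): every release is recorded, and rolling back is a single well-known operation:

```python
class ReleaseLog:
    """Toy model of release recordkeeping with one-step rollback."""

    def __init__(self):
        self._history = []   # ordered list of (version, artifact)
        self.current = None  # artifact currently in production

    def release(self, version, artifact):
        self._history.append((version, artifact))
        self.current = artifact

    def rollback(self):
        """Drop the latest release and reinstate the previous one."""
        if len(self._history) < 2:
            raise RuntimeError("nothing to roll back to")
        self._history.pop()
        self.current = self._history[-1][1]
        return self.current

log = ReleaseLog()
log.release("rel-41", "app-41.tar.gz")
log.release("rel-42", "app-42.tar.gz")
log.rollback()        # rel-42 misbehaved in production
print(log.current)    # -> app-41.tar.gz
```

The point of the sketch: if moving forward is one recorded step, moving backward must be one recorded step too.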

Again, something I already do to an extent. I focus on each feature and run the code almost obsessively, tweaking minor code fixes again and again and again. I always thought I was being a little ridiculous… but it always seemed a waste of time to try to make massive updates that would require massive bug-hunts…

I think you could sum up 80% of what Microsoft is doing wrong (e.g., the Vista release cycle) with the same advice: “Iterate Faster.” How many versions of OS X has Apple released in the same time frame? Five?

Of course, one wonders whether the market is capable of absorbing a new version of Windows every year or year and a half.

There was one point not mentioned here that is really important:

The MiG-15 had a high T-tail, while the Sabre had a mid-rudder tail.

Why was this important you ask?

In a dive (approaching transonic flight), MiG pilots would completely lose lateral control, while the F-86s were fine.

Speed only helps if it can be applied properly and control can be maintained. For instance, my first car was a '78 Oldsmobile 98 Regency deluxe. It had a 350 V8 with a nice 4-barrel carburetor. It was very fast… as long as you were going straight…lol. At the first turn, that beauty would turn like the blimp that it was.

Nyquist says you need just over two samples per cycle to reconstruct a sine wave… hahahaha! In practical applications, my experience is that you probably need at least 10. That backs up this iterative theory.
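The rule of thumb checks out if you reconstruct by simple linear interpolation rather than the ideal sinc reconstruction the Nyquist theorem actually assumes — a quick sketch:

```python
import math

def reconstruction_error(samples_per_cycle, test_points=1000):
    """Sample one cycle of a sine wave, reconstruct it by linear
    interpolation, and return the worst-case absolute error."""
    n = samples_per_cycle
    xs = [i / n for i in range(n + 1)]            # sample positions
    ys = [math.sin(2 * math.pi * x) for x in xs]  # sampled values
    worst = 0.0
    for j in range(test_points):
        t = j / test_points
        k = min(int(t * n), n - 1)                # bracketing sample index
        frac = t * n - k
        approx = ys[k] + frac * (ys[k + 1] - ys[k])
        worst = max(worst, abs(math.sin(2 * math.pi * t) - approx))
    return worst

print(f"2 samples/cycle:  max error = {reconstruction_error(2):.3f}")
print(f"10 samples/cycle: max error = {reconstruction_error(10):.3f}")
```

At exactly 2 samples per cycle you can land on the zero crossings and lose the wave entirely (error near 1.0); at 10 per cycle, even crude linear interpolation gets you within about 5%.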


OOPA, loop-a, doop-it-y deet,
This comment’s pointless, feel free to delete.

This is fun to read, but more than a little goofy. The whole precept is this: 1950s-era dogfighting is directly analogous to 21st-century software development.

Um, no. Oh, and did I mention no? Pilots in dogfights are under extreme amounts of physical and mental stress. Sure, pilots are trained to handle this stress and they’re professionals performing a job, but under the worst stress I’ve ever experienced as a developer, I’ve never seriously felt that failing to code a function correctly would cost me my life.

I get the point the author is trying to make, and in certain situations in business I think it’s valid. But overall, it’s too simplistic, too pat, and too subject to meaningless parody: