Boyd's Law of Iteration

Just like anything else in agile…

TOO MUCH OVERTHINKING.

That’s right. I don’t oppose what iterations try to accomplish, but having seen the same process fail as often as it succeeds, the overhead and pressure this type of methodology brings are just too much.

Remember when ‘killer apps’ came out almost monthly? Yeah, me neither, cuz it was a long time ago. What did they do differently? They designed, coded, played, tested, relaxed, coded, played, tested, relaxed, designed some more… all they had to worry about was delivering something that worked in the end. They didn’t have some manager pushing for a deadline that didn’t mean anything except to a salesman trying to make a quota so an executive could meet his goals so his bonus would be larger.

Don’t put so much thought and anxiety into developing… JUnit tests are great for covering all the bases. Iterations just get in the way, because for EVERY creative process, time is always going to be the enemy. Go too fast and quality suffers, and THAT really does matter when it comes to software. And geez, don’t start flaming MS over their quality… I don’t see a single OS that is any better, and I’ve used them all. They’re not worse, but not better either.

Regarding the article, the one big hole I see is that TRAINING was never mentioned. You can have the best jets in the world, but if the pilots flying them have the worst training, they’ll lose a lot more than they win. Maybe TRAINING approaches should be as heavily scrutinized in IT as they are in the military… here endeth the rant.

Now, after two-plus years on multiple scrum teams and projects, I think the key of the original post:

observing, orienting, planning, and acting faster, bringing a higher percentage of successes

is the essence of agile. Sometimes we do two-week iterations and sometimes a month, but never the old year-long waterfall cycles. Task completion and feedback (builds and testing) come much more frequently, which helps lead to successful outcomes. All of this goodness can also become a burden on the developers and the project when supporting tools and processes end up being one of the main tasks for everyone.

Unit tests have been a major topic in my current scrum, with views across the spectrum. We have measured coding tasks taking 30% to 50% longer when full unit tests are included as opposed to minimal ones. We have taken the view that this tool should be used where it looks useful in an ongoing sense, without requiring 100% coverage. This has allowed us to deliver more features in the last 5 months, and it remains to be seen whether it bites us in the end (assuming the project is a success and has a long lifetime).
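
To make “minimal” concrete, here is roughly the level we aim for. This is only a sketch: the Invoice class and every name in it are invented for illustration, using plain JUnit 4.

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InvoiceTotalTest {

        // Tiny invented class under test, inlined so the sketch stands alone.
        static class Invoice {
            private final List<Double> items = new ArrayList<Double>();

            void addLineItem(double amount) {
                items.add(amount);
            }

            double totalWithTax(double taxRate) {
                double total = 0;
                for (double amount : items) {
                    total += amount;
                }
                return total * (1 + taxRate);
            }
        }

        // Minimal coverage: one test on the core money path...
        @Test
        public void totalsLineItemsWithTax() {
            Invoice invoice = new Invoice();
            invoice.addLineItem(100.00);
            invoice.addLineItem(50.00);
            assertEquals(165.00, invoice.totalWithTax(0.10), 0.001);
        }

        // ...and one on the obvious edge case, rather than chasing
        // 100% line coverage.
        @Test
        public void emptyInvoiceTotalsToZero() {
            assertEquals(0.00, new Invoice().totalWithTax(0.10), 0.001);
        }
    }

One test per core path plus the obvious edge case; anything beyond that waits until a bug proves it is worth covering.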

They should have asked some Russian pilots to fly that MiG. You know, people who actually know the airplane like their own ten fingers and have thousands of hours of flight experience in it. I bet the results would not be in the F-86’s favor.

I’m working in an agile process now. Iterations are 4-6 weeks. But we’re a multi-site team of around 60 developers and artists, many of them pretty senior, and we’re building a fairly massive client-server-SDK-content development environment. I work on the client app, and right now we’re doing a lot of financial management work (the kind that has to be right).

The company I work for has done agile, or something like it, for a long time now, and they’re pretty good at it. We get decent ratings and mostly operate successfully.

From the trenches, I can say agile is good in some ways: sloppiness gets called out sooner, there is more focus on frequent end-to-end testing (which is great in a client-server environment), and you’re always trying to take manageable bites. We couldn’t do a two-week iteration with a project of our size distributed over multiple sites. Sometimes even the four-week iterations, if we count the QC cycle at the end, are pretty darn short.

You succeed, as one sharp reader pointed out, if you can get everyone working in parallel, if the overhead created by agile’s frequent releases isn’t too onerous, and if your utilization goes up. This is really what happens. You probably do work more; at least some of us senior folks always seem to be on a critical path, pushing to get stuff tight and solid for an iterative release. But the end result is that we fairly frequently know where we’re at - risk is identified much more aggressively and mitigation actions are taken early on. That alone makes agile worth the price of admission.

And the customers gain confidence when you deliver regular releases. Our project schedule was fifteen to eighteen months; we do deliveries about every 5-6 weeks. The customer sees progress, doesn’t get antsy that things are off the rails, and gets a chance to give feedback regularly along the way. If you don’t think any of that has value, you haven’t worked on many customer-driven and customer-funded projects. Also, since we almost always have some sort of release ‘canned and ready’, sales demos become much easier. And our customer can show it off to their customers and integration partners. That’s a big plus for them too.

Sometimes you bite off more than you can chew for an iteration; some parts of a task are heavily serialized in nature or innately tricky. That’s when agile hurts a bit, because you are still pushing for your iteration endpoint, and that means a harder push.

Our rule, and it is part of what keeps the customers coming back, is ‘always hit your dates’ and ‘always deliver core promised content’. Sure, we sometimes slip on the edges, on the ‘good to have’ or ‘would have liked to’ aspects, but we always deliver core content on time. This is partly due to the dedicated team, but also to management being on top of things: agile keeps everyone more focused, problems get raised, and mitigations get decided on. You don’t have time to screw around. If you do, the iteration gets blown.

Agile works, but it does require a certain approach to project and risk management, as well as a focus on regular builds for testing. Automated testing and regular user-driven smoke testing identify problems early and lead to better release quality. You don’t try to get everything right, just the big things… and that helps a lot.

I’ve done the waterfall model and seen six-month or year-long projects go off the rails. There are fewer checkpoints if you wait nine months for the first production build, and if someone in the food chain is not being honest about where the development effort stands, you don’t find out anywhere near soon enough. Agile forces a certain integrity on the part of developers and middle management, a not inconsiderable management advantage.

Check your history… the Russians flew some sorties against the enemy: shot a few times, then bugged out. If they didn’t have an easy kill, they didn’t hang around. Chickens.

Great post. I emailed you about it. Robert Greene just posted a similar piece you might enjoy. http://www.powerseductionandwar.com/archives/ooda_and_you.phtml

It’s funny - before I saw this post, I had posted an entry at my blog on how Agile = OODA loop. I mailed the link off to Jeff, and he pointed me here.

Anyway - my post:
http://kg2v.blogspot.com/2008/04/agilexp-programming-and-ooda-loop.html

Interesting read. I find the analogy extremely fun to read but purely anecdotal. I think most would agree after reading it that iterating faster accomplishes more work over time; if it works for F-86s, why not software, right? I find it no different from saying that working faster (acting faster) accomplishes more in a shorter period of time. I’m sorry, but DUH!

The real question is: if a team does approximately the same amount of total work, how does the length of an iteration affect the ultimate timeline? While most agree that an iteration can be too long, could it also be argued that an iteration can be too short, thus negatively impacting the duration of a project?

It goes without saying that iterative development incurs overhead. The overhead consists of planning each time, delivery/deployment each time, etc. The more iterations you have, the more overhead is incurred. So why is iterative development so hailed?

Enter parallel processing. Given a fixed amount of work, the fastest way (from a time perspective) to complete that work is to perform it in parallel as much as is reasonable, meaning up to the point where the cost of overhead starts to outweigh the benefit of parallelism. Iterative development, like parallel processing, sacrifices efficiency for better utilization.

Ever worked on a jigsaw puzzle? Yes, those antiquated cardboard pieces that get put together to form a picture. If you have one person working on the puzzle, they will do X amount of work. If you have two people working on the same puzzle, they will do X+O(verhead) amount of work. Why? Simple: I’ve worked on a puzzle with someone else and had to hunt for a piece I had seen before. After a few moments of not finding it, I ask the other person, “Have you seen that piece? It was right there a minute ago.” They respond, “Yes, it’s over here; I’m trying to see if it fits in this area.” Though we are now doing more work, we also have twice as many people doing the work in parallel, which shrinks the duration. Now extrapolate this to three, four, or more people. Eventually, depending on the size of the puzzle, adding more people would actually increase the duration, as we’d be bumping into one another.

The main difference between a software project and a puzzle is that not everyone is performing the same function. There are architects, developers, testers, etc. No matter how hard you try, you can’t always work in parallel; some things are sequential/serial in nature. This is true even for the puzzle example, but to a lesser degree. Regardless, the name of the game is to increase the parallelism (minimize the serialization) as much as possible, even though you incur overhead in doing so. Given a fixed team size (processors) and a fixed amount of work to be completed, how can a team increase the parallelism of its work? Simple: iterations. Why have a tester sitting there doing nothing until all development is complete when you can chunk the work up so the tester can test whatever is ready? The same kind of example can be drawn up for business analysts, business stakeholders, customers, etc. The key is to add some overhead in the form of more frequent communications, hand-offs, deliverables, deployments, etc. (also known as iterative development) so that more people/processors have greater utilization.
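
To put rough numbers on that, here is a back-of-the-envelope sketch; every figure in it is invented purely for illustration:

    public class IterationMath {
        public static void main(String[] args) {
            // Invented example: 12 weeks of development work, 4 weeks of
            // testing work, one developer and one tester.

            // Waterfall: the tester idles until all development is done,
            // then tests everything at the end.
            double waterfall = 12 + 4; // 16 weeks end to end

            // Four iterations of 3 dev-weeks each, plus ~0.5 week of
            // planning/hand-off/deployment overhead per iteration. The
            // tester tests each chunk while the next is being built, so
            // only the final chunk's testing (1 week) extends the schedule.
            double iterative = 4 * (3 + 0.5) + 1; // 15 weeks end to end

            System.out.printf("waterfall: %.0f weeks, iterative: %.0f weeks%n",
                    waterfall, iterative);
            // Total effort went up (2 extra weeks of overhead) and efficiency
            // went down, but utilization went up and the duration went down.
        }
    }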

The end result is NOT

  1. Working faster
  2. Working less
  3. Working more efficiently

Rather, the end result is:

  1. Working more, with less inactive time
  2. Reduced duration (THE ULTIMATE GOAL)
  3. Working less efficiently, as you have introduced overhead

So rather than Boyd’s law of iteration (which is merely an anecdote), the real explanation for this cause and effect is Amdahl’s law of parallel processing: http://en.wikipedia.org/wiki/Amdahl's_law. It can explain both having too few processors (long iterations) and too many processors (extremely short iterations) for a given amount of work. No doubt Amdahl’s law and my pathetic puzzle analogy are more boring than the fighter-plane analogy. Personally, I’d rather associate with fighter aces, but that analogy unfortunately doesn’t explain what’s really going on. It’s very inspirational, however, and the masses prefer a good story. I tip my hat for the creative story. Well done!
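
For the curious, the law itself fits in a few lines; here is a toy sketch (the 20% serial fraction is invented, not measured from any project):

    public class Amdahl {
        // Amdahl's law: if a fraction s of the total work is inherently
        // serial, the best possible speedup from n parallel workers is
        // 1 / (s + (1 - s) / n).
        static double speedup(double s, int n) {
            return 1.0 / (s + (1.0 - s) / n);
        }

        public static void main(String[] args) {
            // If 20% of the work is serial (think per-iteration planning
            // and deployment overhead), four workers give ~2.5x, not 4x...
            System.out.println(speedup(0.2, 4));    // 2.5
            // ...and no number of workers can ever beat 1 / 0.2 = 5x.
            System.out.println(speedup(0.2, 1000)); // ~4.98
        }
    }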

“It goes without saying that iterative development incurs overhead. The overhead consists of planning each time, delivery/deployment each time, etc. The more iterations you have, the more overhead is incurred. So why is iterative development so hailed?”

I think part of it is that at least some of that “overhead” is really work that should be done anyway: proper documentation, automation, testing, etc. The fact that you have to go faster exacerbates the problem and forces you to fix the leaky plumbing and peeling paint.

Make those iterations shorter and faster!! Right with you on that one. I learned this from my own experience: speed of iteration is one significant contributor to winning the game of making killer products.

But wouldn’t you agree that it’s futile to expect a different result from doing the same things again and again? Isn’t “inspect and adapt” another significant contributor too? I know it’s implied, and good developers/professionals have this discipline in their blood. But maybe it’s worth spelling out.

Is it fair to say that it’s important to “shorten the feedback loop” and “do better the next time”?

From the linked article:

So Pete first observes, then orients, then plans, and then acts. Boyd called this sequence OOPA (observe, orient, plan, act).

(Actually, readers familiar with Boyd’s work may recognize that Boyd called his loop OODA, for observe, orient, decide, act. However, I have changed the decide to plan for two reasons. First, technology readers would be confused by the acronym for object-oriented design and analysis, also OODA. Second, as I have read Boyd’s works, I have concluded that “plan,” as used in the IT context, is closer to Boyd’s original meaning than is “decide.”)

It’s a valid point.