How Long Would It Take if Everything Went Wrong?

I'm currently reading Steve McConnell's new book, Software Estimation: Demystifying the Black Art. The section on individual expert judgment provided one simple reason why my estimates are often so horribly wrong:

This is a companion discussion topic for the original blog entry at:

As a program manager, my estimation method is to take the “best case scenario” and multiply it by three. That is the within-team estimate I set the team to work toward.

I will take the worst-case scenario and multiply it by eight. This is what I will tell management. Then, if it looks like the team will be done early, I look like a hero to management.

Johanna Rothman has a lot to say about project estimation. This podcast is a pretty good starting point:
as is her blog:

I am absolutely TERRIBLE at estimating the time it takes for project development. I was hoping experience would change that. If seasoned veterans still have issues (Jeff?), is there any hope for me?

“You’ll also notice that both the Best Case and Worst Case estimates are higher than the original single-point estimate.”

Huh? It seems to me that 10.5 is less than 11.25.

Ugh, I can’t believe how many commenters are repeating the “take the estimate and multiply it by ‘X’” approach. That’s exactly what the book says not to do.

Here’s a suggestion to use when estimating someone else’s time:

Estimate = (1 * Best case estimate + 2 * Realistic estimate + 4 * Worst case estimate) / 7

It weights the estimate toward the worst case, but still takes the best case and the realistic estimate into account.
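A minimal sketch of that weighting in Python (the function name and sample numbers are mine, not from the comment):

```python
def weighted_estimate(best, realistic, worst):
    """Weighted three-point estimate: 1x best, 2x realistic, 4x worst, divided by 7."""
    return (1 * best + 2 * realistic + 4 * worst) / 7

# Example: 5 days best case, 8 days realistic, 15 days worst case
print(weighted_estimate(5, 8, 15))  # (5 + 16 + 60) / 7 ≈ 11.57
```

Note how the worst case dominates: even a very optimistic best case only contributes one-seventh of the result.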

As for your own estimates, if you’ve been in the game long enough, you should know them (hmm… could those archived timesheets have some value after all?). Of course, this may mean you know that your own estimates suck (even if it’s through no fault of your own). In that case, start with your own formula and tweak it 'til it works.

Oh, and good luck :slight_smile:

PS, why is b-l-o-g-s-p-o-t objectionable content?!

Yeah, I like Steve McConnell, but the given data doesn’t really back up his sweeping statement there. Three items were estimated to take less time in the best case, one was estimated to take more time in the best case, and six were estimated the same. So I think it supports the statement that single-point estimates tend to be best-case estimates, but I don’t think it supports the statement that thinking about the worst case /always/ makes one gloomier about the best case.

How I estimate the time it will take for a project :

Estimate = [very first estimation] * 2 * 3

At my old consulting firm our project managers would double our estimates.

I try to give high-end estimates with a degree of certainty attached. “I’ll be able to complete this in 100 hrs with 95% certainty.”

I read the book too. I liked the diversity of approaches he covers and how he suggests using at least two methods. If they converge, you have more confidence in those numbers; if not, you should investigate why and address it.

My favorite quotes:

page 14: “Accuracy of +/- 5% won’t do you much good if the project’s underlying assumptions change by 100%.”

page 18: “Avoid using artificially narrow ranges. Be sure the ranges you use in your estimates don’t misrepresent your confidence in your estimates.”

page 36: “The only way to reduce the variability in the estimate is to reduce the variability in the project.”

page 132: “Do not address estimation uncertainty by biasing the estimate. Address uncertainty by expressing the estimate in uncertain terms.”

Yes, I took notes.

Huh? It seems to me that 10.5 is less than 11.25.

I think McConnell is referring to the fact that the best case got “bester” and the worst case got “worster”. That’s the goal–better estimates.

PS, why is b-l-o-g-s-p-o-t objectionable content?!

Because I get 3-4 new spam *.blogsp0t trackbacks EVERY SINGLE DAY. Cleaning it up is very tedious. I apologize to anyone who is inconvenienced, but until blogsp0t (a GOOGLE company) cleans up their spam act, I have no choice…

“Huh? It seems to me that 10.5 is less than 11.25.”

I was also puzzled by this, so I looked it up when I got home. The edition I have actually reads “If you examine the estimate for Feature 4, you’ll also notice that both the Best Case and the Worst Case estimates […]”

‘Feature 4’ corresponds to ‘Delta’ above, which clearly does increase…

If you start with a guess, factoring in another guess is not materially improving your situation.

Matching planned work with historical measurements would perhaps be a beginning strategy. Surely McConnell has something more to say on such real estimation.

Funny, as I’m reading this book as well. Estimation is not a developer’s strong suit, I’ve found (myself included).

We’re also going through Code Complete as a group at work which is interesting (this will be my second time reading it).


Actually, I think three-point estimation is even better than two-point estimation. We have a tutorial on this on our page (it’s quite a long read, as it opens with a little story and some explanation):

I always use:
(best case + 4 * best guess + worst case) / 6

This is surprisingly accurate, and I think it is a standard formula (part of PERT?)
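That is indeed the classic PERT expected value. PERT also defines a standard deviation of (worst − best) / 6, which turns the single number into a rough range. A small sketch (sample figures are mine):

```python
def pert(best, likely, worst):
    """Classic PERT three-point estimate: expected value and standard deviation."""
    expected = (best + 4 * likely + worst) / 6
    std_dev = (worst - best) / 6
    return expected, std_dev

# Example: 4 days best case, 6 days most likely, 14 days worst case
e, s = pert(4, 6, 14)
print(f"{e:.1f} +/- {s:.1f} days")  # 7.0 +/- 1.7 days
```

Reporting the estimate as a range ("7 days, give or take about 2") matches McConnell's advice about expressing uncertainty rather than hiding it.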

Late to the game here, but I’m surprised that estimation is still primarily based on guessing. How hard would it be to develop metrics based on historical data and use them along the lines of “The last time we had to create a feature/product/method/whatever like this, it took this long”? Assuming one could come up with an apples-to-apples comparison, of course.

I’m sure that most people would say “Well, that was last time, but THIS time we’re going to …” and fantasize about some super-improved process that’s going to make everything go way better this time around. Historical data suggests, however, that this won’t be the case …
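One hedged sketch of what that historical calibration might look like (the project data here is entirely made up): collect actual-to-estimated ratios from past work, then scale a new gut-feel estimate by the median historical overrun.

```python
import statistics

# Hypothetical history: (estimated hours, actual hours) from past projects
history = [(40, 55), (100, 160), (20, 22), (60, 90), (80, 120)]

# How badly did past estimates undershoot? Ratio > 1 means overrun.
ratios = sorted(actual / estimated for estimated, actual in history)
median_ratio = statistics.median(ratios)

def calibrated(raw_estimate):
    """Scale a gut-feel estimate by the median historical overrun."""
    return raw_estimate * median_ratio

print(round(calibrated(50)))  # with the sample data above: 50 * 1.5 = 75
```

This is exactly the "archived timesheets" idea from earlier in the thread: the guess is still a guess, but it gets corrected by how your guesses have actually turned out.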