How Long Would It Take if Everything Went Wrong?

I agree estimation is a tough problem, and I also like Steve McConnell’s work.

Obviously, if you (or your company) have previous experience with a similar project, that is the best guide. In any case, I’ve found the most important thing to do is to break down estimates as far as you can. Keep breaking them down until no task has an estimate greater than 3 days. I find this forces you to think about the project in detail and avoids big, vague estimates.
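A minimal sketch of that breakdown rule, assuming a simple dict of task estimates (the task names, numbers, and 3-day threshold here are made up for illustration):

```python
# Flag any task whose estimate exceeds the threshold, so it gets
# broken down further before the plan is trusted.
THRESHOLD_DAYS = 3

tasks = {
    "login form": 2,
    "reporting module": 8,  # too big -- still a vague estimate
    "db schema": 1,
}

too_vague = [name for name, days in tasks.items() if days > THRESHOLD_DAYS]
print(too_vague)  # tasks that still need splitting up
```

Anything that lands in `too_vague` is, by this rule, not yet an estimate but a guess, and needs another pass of decomposition.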

I also find it useful just to estimate raw development time (with some self-testing and/or unit testing, depending on what you do). Then add a risk percentage (ideally proportional to how risky the project is, and/or how much variance is in your raw estimates). Finally, add a further percentage for (integration/system) testing, documentation, optimization, sickness/holidays.
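In arithmetic terms, one reading of that scheme is to apply the two buffers multiplicatively on top of the raw figure. The function name and the example percentages below are hypothetical; tune them to the project:

```python
def padded_estimate(raw_days: float, risk_pct: float, overhead_pct: float) -> float:
    """Apply a risk buffer, then a testing/docs/holiday overhead buffer,
    to a raw development estimate. Percentages are fractions (0.25 = 25%)."""
    with_risk = raw_days * (1 + risk_pct)
    return with_risk * (1 + overhead_pct)

# e.g. 20 raw dev days, 25% risk buffer, 30% overhead buffer
total = padded_estimate(20, 0.25, 0.30)
print(round(total, 1))  # 32.5
```

Whether the percentages should compound like this or simply be added to the raw figure is a judgment call; compounding is the more conservative choice.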

So far, this has worked well for me, but it requires familiarity with the project domain and the programmers.

(And I like Steve’s advice of gradual refinement. In practice though, it’s a hard sell to management.)

Estimation is done because of uncertainty, and the only way to improve an estimate is to reduce that uncertainty:

- clarify requirements
- define scope
- keep projects small

I think the best one-word key ingredient to accurate estimation is experience. Sure a lot of the other factors affect how well you’re able to pinpoint an estimate, but having the ability to recognize the scenario you’re in and apply lessons learned to the entire issue is the best way to really get more accurate over time.

Good practice.

If you start with a guess, factoring in another guess is not materially improving your situation.

Matching planned work with historical measurements would perhaps be a beginning strategy. Surely McConnell has something more to say on such real estimation.

It is really helpful to be able to provide an estimate that is as truthful and realistic as possible, particularly for the developer(s) that will ultimately do the work. It has been my experience that when I’ve understood the problem domain and the requirements were not subject to considerable fluctuation, I could, given time to complete the estimation task, provide a reasonably accurate response to the level of effort and basic timeframe for development.

I have been in situations, however, where the requirements were an evolving beast and the problem domain sometimes was not completely known at estimation time. Despite this, management/marketing were insistent on having estimates, and that proved considerably more difficult. When the requirements are in considerable flux, the task of estimating software development degrades into an exercise of pulling numbers from one’s nether region. When management then hand-waves at the number of unknowns and takes those “estimates” as gospel truth, it puts the developer(s) behind the eight ball from square one and creates an environment where the software team and management are at odds with one another. Such an environment encourages short-cuts to meet unrealistic deadlines, often requires considerable overtime from the software team to meet milestones, breeds discontent in the ranks, and likely results in a product of questionable quality.

“Huh? It seems to me that 10.5 is less than 11.25.”

Delta rose from 0.5 days to a whopping 0.75 days… which I agree doesn’t sound all that impressive in the whole context.

One of the most important lessons my first programming mentor taught me is that estimation is harder than developing. He had me estimate a project, then worked with me to understand just how optimistic I’d been.

As with your previous post on maintenance work, I believe that junior developers should have to do estimates early in their programming careers. There are two possible feedback mechanisms: work with them to see where they’ve gone wrong, or secretly pad the estimate for them and then review it with them after the job’s completed.