How Good an Estimator Are You? Part II

Here are the answers to the quiz presented in How Good an Estimator Are You?

This is a companion discussion topic for the original blog entry at:

While narrow estimates are unlikely to be right, wide estimates are unlikely to be useful. For example, I have no real feel for how many books have been published in the US since 1776, so to get a 90% confidence interval, I used 200,000 - 2,000,000,000. McConnell would say that that’s a good range, since it actually includes the correct answer, but – geez – the upper bound is 10,000 times the lower bound.

Imagine a customer (or boss – whoever you work for) asking how long a change is going to take. “Well, between 3 hours and 3 years.” “WHAT!!!” “Well, I’m 90% sure it will fall in that range.”

In some cases, in order for an estimate to be accurate, it has to have such a massive margin of error that it’s no longer useful.

Estimation should be done by subject-matter experts (SMEs) and the actual people who will execute the task. Anything different from that just confirms you are a newbie. I can’t estimate tasks on a spaceship construction project… and that’s OK…

So this exercise in the book sounds more like “you are bad, so you didn’t waste your money on my book” crap.

While this is a very interesting quiz, and I agree that the results are telling, I do feel that the fact that no one has ever got ten right is in part due to the nature of the questions. Personally, I knew the surface temperature of the sun and the year of Alexander’s birth (to within ten years), so it was superfluous to give wide ranges for those. The latitude of Shanghai is likewise an answer that anyone with any knowledge of the scale of latitude can make a fairly safe stab at.

The impossible questions are the volume of the Great Lakes and the length of the coastline: the first because the units of measurement are befuddling (cubed measurements always are), the second because it is genuinely impossible. How did you measure the coastline? At what resolution? A straight line between every cove, or around every face of every grain of sand? If it were the latter, the true answer would be billions of miles.

I’m all for encouraging people to give wider ranges, but if I gave a range of 0 to one trillion for every question I would get all ten right. I doubt that approach would help me in any practical setting, however.
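The coastline point is the classic coastline paradox: the measured length keeps growing as the measuring stick shrinks. A minimal sketch of Richardson's empirical relation, using an illustrative constant M and the often-quoted fractal dimension D ≈ 1.25 for the west coast of Britain (these numbers are assumptions for illustration, not survey data):

```python
# Richardson's coastline relation: measured length grows as the ruler
# shrinks, L(s) = M * s**(1 - D), where D > 1 is the fractal dimension
# (roughly 1.25 for the west coast of Britain, per Mandelbrot).
# M is an illustrative constant, not a real survey figure.
def measured_length(ruler_km, M=1000.0, D=1.25):
    return M * ruler_km ** (1 - D)

for ruler in [100, 10, 1, 0.001]:  # down to a 1-metre ruler
    print(f"ruler {ruler:>8} km -> coastline ~ {measured_length(ruler):,.0f} km")
```

Because D > 1, the measured length has no limit: halve the ruler often enough and the "length" exceeds any bound, which is why the question has no single true answer.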

I think this exercise misses one of the greatest problems in software estimation: estimates don’t exist in a vacuum. How often have you tried to provide what you felt was a “safe” estimate, only to have your boss or your customer come back and tell you that your estimate was unreasonable? They immediately begin pressuring you to commit to a lower estimate, which they will subsequently use against you when the project starts slipping.

Government contractors are notorious for “low-balling” projects, knowing that the real costs will be much higher than the winning bid. But if you provide a realistic estimate, you will not win the contract. This is a mentality that permeates our industry.

Quite honestly, I am not sure you can give any reasonable estimate that would satisfy the people who pay the bills. There are so many variables in a given software project that, as the size of the project grows, the range of possible estimates grows with it. As Boofus said, the estimate will ultimately be so vague as to be useless.

Just looking at the number of over-budget projects that continue to occur tells me that you have a better chance of creating an accurate estimate using a dartboard than you do using any of the estimation techniques of the last 40+ years of software development.

Hmm. I’d be interested to see how well the test worked the other way around - i.e., give people a randomish range and then ask them how confident they are that the result is in that range.

Not really on-topic I suppose, but you might want to correct the volume of the Great Lakes: 23,000 km^3 = 2.3 x 10^13 m^3 = 2.3 x 10^16 litres. The quoted 6.8 x 10^20 m^3 is about 2/3 of the volume of Earth. (I didn’t check your imperial numbers.)
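For anyone who wants to double-check that correction, the conversions are simple powers of ten (the Earth-volume figure below is the commonly cited ~1.08 × 10^21 m³):

```python
# Sanity-check the unit conversions: 1 km^3 = (10^3 m)^3 = 10^9 m^3,
# and 1 m^3 = 1,000 litres.
km3 = 23_000                 # quoted volume of the Great Lakes
m3 = km3 * 10**9             # -> 2.3e13 m^3
litres = m3 * 1_000          # -> 2.3e16 litres

earth_m3 = 1.08e21           # approximate volume of the Earth in m^3
print(m3, litres, 6.8e20 / earth_m3)
```

The ratio in the last line comes out around 0.63, confirming that the quoted 6.8 × 10^20 m³ really would be about two-thirds of the planet.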

I refused to answer 8 of the 10 questions, because I had no way to make an educated guess at 90% confidence. I got 2/2, or 100%.

Not giving an estimate is much more useful than giving one too wide - at least then the question can be posed to someone who has a better chance of getting it right.

While I am a bit of a fan of McConnell’s books, I think I’ll skip this one. As a developer, I don’t really care to improve my ability to make “accurate” estimates, nor to widen my ranges of estimates so as to have the actual figures lie within my range 90% of the time.

In most real businesses, giving management estimate ranges multiple orders of magnitude wide is simply a fantasy.

If McConnell wants to help me with this problem, he’d teach me how to convince management that I shouldn’t waste my time making estimates that have no real hope of being accurate to any useful degree.

I’m about 90% confident such a book would be more useful.

I think it’s a valuable exercise, and I’m dismayed that at 4 out of 10 I’m doing fairly well, based on McConnell’s histogram.

Of course people don’t like uncertainty, but it’s good to be as realistic about it as possible. I’ve given people (managers, customers) estimates that reflected my uncertainty, and they were well served by that (they’d be even better served if I could do better than 4 out of 10 when I’m shooting for 9, of course). Tasks I’m confident of get a tight range, and tasks with more unknowns get a broader - sometimes much broader - range, which allows the customer to make better choices among them (including asking, “what can you spend some time on to reduce the uncertainty in this task?”).

Six of my ten estimates were ranges spanning a single order of magnitude; the rest were narrower. And only half were correct. But that is just not a technique I could use in estimating my coding projects - a response of “two to twenty weeks” would get me laughed at.

It is my experience that more accurate estimates require significant time to produce. Five minutes spent coming up with an estimate of two months is probably going to be well off. But a week or two spent on creating a two-month estimate is likely to be much more solid.

It would be nice if estimates were not deliverable until already into a project. :slight_smile: Updating estimates during a project has always been critical for managing expectations with me.

All this talk reminds me of something I read a few years ago about the most consistently reliable estimate being made by measuring how thick the requirements documentation is. (This was in the context of large projects.) Can’t remember the source.

This mostly demonstrates that most people can’t give blue-sky estimates with any accuracy. Which is tautological. Being surprised at these results is what surprises me. If you’d required estimates to be within an uncertainty range then you might get useful results.

I limited my estimates to two orders of magnitude (and got 5/8, excluding a couple of exact answers), because I glanced through and decided that that was more useful than the estimate that answered all of them (+/- infinity was my first answer, since you/Steve asked a stupid question).

“The agile/xp approach just doesn’t scale up for projects that size”

To be clear, agile/xp proponents state that it doesn’t scale up to projects that big.

“but do you think Microsoft is keeping track of the number of overtime hours their employees are working to get XP out the door? I would guess not.”

Umm, if they want to have a balance sheet that makes sense, you bet they’re keeping track. That doesn’t mean they use the data to help, say, Vista completion time :slight_smile:

I don’t remember where I read this, but I did somewhere:

An estimate is just that - an ESTIMATE. You don’t have to be 100% right; it’s just an estimate of what it will be. Look at the weather forecast - is it exact? Always? Ever? :slight_smile: An estimate is just a forecast, so there is (or should be) no shame in being wrong… I guess.

As people have pointed out, giving an estimate with a range of several orders of magnitude is generally a non-starter in most organizations.

But I think the point of the exercise is this: if you are asked to honestly, and to the best of your ability, produce an estimate with a 90% confidence range, and the result covers several orders of magnitude, then, in a well-functioning organization, that should be a crystal-clear signal that significantly more research needs to be done before the project is undertaken.

Of course, most of us work in dysfunctional organizations where inconvenient truths are not welcomed. And I would hope that other chapters of McConnell’s new book cover what to do when estimation reality collides with management fantasy. (Does it, Jeff?)

Unless you are an expert in all of the subject matters above, you couldn’t hope to give an accurate estimate for them. The point was to show whether, if you were forced to, you would account for the inherent uncertainty.

If I were asked to estimate how many lines of code a transaction system would have, without any more information, my estimate would have to be a broad range. The results of this experiment, however, tell me that most people’s wouldn’t be broad enough.

Bill nailed this one: if people give a fixed point or too narrow a band simply because that is the corporate culture, that culture is avoiding the problem of the underlying uncertainty.

If the developers are giving wide estimates, it means the problem is not well understood. If better estimates are wanted, more research will be required and a more thorough understanding of the problem needs to be developed.

The idea that most organizations don’t collect enough data to estimate correctly assumes that those organizations that do collect and use data are successful at estimation. While I don’t have the statistics, I do have empirical evidence. I can see the results from Microsoft, Oracle, Sun, Developer Express and many others, many of whom maintain statistics about code estimation and yet continuously miss estimates. Microsoft is the poster child for missed delivery dates, even when slashing features.

Do I think the task is impossible? No. But I think that many of the development methodologies make the cost of achieving good estimates prohibitively expensive. I much more prefer an agile/xp approach which is only concerned with much smaller estimates, and therefore can be much more accurate since there are fewer unknowns.

These kinds of analytical estimation problems are often referred to as Fermi questions (named after Enrico Fermi, who used to pose such questions to his students).
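Fermi's famous example was estimating the number of piano tuners in Chicago. A sketch of that back-of-the-envelope chain - every number here is a rough assumption chosen for illustration, not real data:

```python
# A classic Fermi question: how many piano tuners work in Chicago?
# Each input is an order-of-magnitude guess, which is the whole point.
population        = 3_000_000   # people in Chicago (rough)
people_per_house  = 4           # household size
piano_fraction    = 1 / 5       # fraction of households owning a piano
tunings_per_year  = 1           # tunings each piano needs annually
tunings_per_day   = 4           # one tuner's daily workload
work_days         = 250         # working days per year

pianos = population / people_per_house * piano_fraction
demand = pianos * tunings_per_year        # tunings needed per year
supply = tunings_per_day * work_days      # tunings one tuner can do
tuners = demand / supply
print(round(tuners))  # → 150
```

No single input is better than a guess, yet the multiplied-out answer lands within a factor of a few of reality - which is exactly the calibration skill the quiz is probing.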

I work for a defense contractor that typically works on projects of 1 million lines of code or more. The agile/xp approach just doesn’t scale up for projects that size, but you have to come up with an estimate to win the contract. In this case, being good at estimating is a competitive advantage. I have even heard that when the country of Italy puts out a work request, they take all of the companies’ estimates and average them; the company closest to the average gets the contract.

I don’t think becoming good at estimating is a huge cost, it is just hard to stay disciplined and actually collect the metrics needed for the next project while you are working on the current one. Everyone says that having previous project metrics is important for making accurate estimates, but do you think Microsoft is keeping track of the number of overtime hours their employees are working to get XP out the door? I would guess not.