One of the most challenging aspects of any software project is estimation: determining how long the work will take. It's so difficult that some call it a black art. That's why I highly recommend McConnell's book, Software Estimation: Demystifying the Black Art; it's the definitive work on the topic. Anyone running a software project should own a copy. If you think you don't need this book, take the estimation challenge: how good an estimator are you?
I bought McConnell's book because I am lousy at estimating. I also bought it because I enjoyed "Code Complete" and "Rapid Development".
So I took the estimation challenge thinking that I would ace it.
I scored 5! I couldn't believe it.
I think most developers are bad estimators - we need all the help we can get. I think it was Fred Brooks in The Mythical Man-Month who said that developers are optimists and assume that everything will go smoothly. It never does.
Our internal system already supports this "poker" approach: blind estimates by feature, with estimates averaged out (or occasionally thrown out). We actually take this one step further, though, and evaluate the relative importance of features in the same way.
So not only do we know what takes the longest, we also know the most important features. This means we can load-balance individual features and provide the client with working builds earlier. This, of course, surfaces both estimation errors and specification errors as early as possible. It's something FogBugz doesn't seem to do (we just played with the 6.0 release, and I could be wrong).
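The blind-estimate averaging described above can be sketched in a few lines. This is my own illustration, not the commenter's actual system; the function name and the "drop the single high and low" outlier rule are assumptions:

```python
def pooled_estimate(estimates, drop_outliers=True):
    """Average a list of blind estimates (e.g. in days), optionally
    throwing out the single highest and lowest values when more than
    two estimates remain."""
    values = sorted(estimates)
    if drop_outliers and len(values) > 2:
        values = values[1:-1]  # discard the low and high extremes
    return sum(values) / len(values)

# Three developers estimate the same feature independently (in days);
# the optimistic 3 and pessimistic 21 are discarded.
print(pooled_estimate([3, 5, 21]))  # -> 5.0
```

The same function works for the relative-importance votes mentioned above: the inputs are just scores instead of days.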
The first part reminds me of the "average of 3 independent estimates" technique used to estimate the range for artillery or mortar. Apparently this works well in practice.
Another good technique for estimation attempts to weight each task estimate by the amount of uncertainty of that task. So the time estimate would be weighted by 1 for a module that can be coded without thinking. If the task is using an unfamiliar technology, then the time estimate would be weighted much higher. I find the exact numbers are unimportant but the introduction of uncertainty definitely helps!
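To make the idea concrete, here is a minimal sketch of uncertainty weighting. The factor values and names are purely illustrative assumptions on my part, not from any published source; as the comment says, the exact numbers matter less than making uncertainty explicit:

```python
# Illustrative uncertainty multipliers per task category (assumed values).
UNCERTAINTY = {
    "routine": 1.0,      # can be coded without thinking
    "familiar": 1.5,     # similar work done before
    "unfamiliar": 3.0,   # new technology or domain
}

def weighted_estimate(tasks):
    """Sum base estimates (in days), each scaled by its uncertainty factor.
    `tasks` is a list of (days, category) pairs."""
    return sum(days * UNCERTAINTY[category] for days, category in tasks)

plan = [(2, "routine"), (4, "familiar"), (3, "unfamiliar")]
print(weighted_estimate(plan))  # 2*1.0 + 4*1.5 + 3*3.0 -> 17.0
```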
I also own McConnell's latest book, although I haven't finished it - my colleagues look at me funny when I pull it out. Estimation is the most annoying thing, mostly because it can't be done. For some reason, I always estimate optimistically, and that comes back to bite me in the butt when a manager doesn't understand why it isn't done.
In the very first PC book I got, there was a trick about buying HD space: when all is taken into consideration, double it and then round up to the nearest GB. I almost think it would serve me well with estimation too.
I always think of the story from ancient China. They wanted to make a statue of the Emperor, and all was arranged for the sculptor except for the length of the Emperor's nose. Being divine, he could not simply be approached and measured, so they decided to take a poll. They asked every member of the country how long they thought it was and took the average.
That had to be close - Right?
The moral is "a whole bunch of people guessing is still a guess".
It is interesting to me that this notion of using historical data to predict future estimates is finally making its way into the software world.
We have been using historical data in the oil and gas industry (any continuous-process industry, actually) to predict when equipment will break, how much it will cost, and what can be done to prevent it. There is actually a software niche centered around this. Reliability Centered Maintenance was started by the airline industry and is slowly making its way to different segments. It looks like the agile development folks are picking up on it as well.
Tubs: Try just multiplying by four, and save half your estimate-doubling time!
Jeff said: "Some developers are better at estimating than others; you can shift critical tasks to developers with a proven track record of meeting their estimates."
Ah, but might this be true because of the sorts of tasks the first set of developers are doing?
Unless there's a round-robin of tasks to developers, there's a risk of conflating cause and effect in estimate accuracy.
(Your shop's mileage and practises may, of course, vary.)
"a whole bunch of people guessing is still a guess"
True. Estimation is always going to involve a certain amount of guesswork, otherwise it wouldn't be an estimation.
Aside from the actual accuracy of the estimate, another advantage of the group approach is that everyone takes responsibility for the "guess".
If/when it all goes wrong, then the group can ask itself "why did WE get that estimate so wrong?", rather than the more confrontational question of "why did YOU get that wrong?".
I'm very sorry, I don't think I properly understood the task.
You could just estimate a range you're guaranteed to contain the answer within, but what's the point? For the first question we could say 100 as a minimum, knowing that the sun is hotter than boiling water, and 1,000,000,000 as a maximum just to be sure. We would then be within the range, but that's not an estimate, that's just a joke.
You can't plan anything based on that estimate, so it's pointless.
You want someone to say between 10,000 and 12,000 for instance because you can use this answer.
I realise I could be wrong. I think I am missing the point here.
Surely an estimate is a narrow range and a good estimate is a narrow range that is also pretty accurate. That comes with experience.
Enlighten me on the point I'm missing, if any :). Thanks
Whenever I hear an estimate, I always double it and add half. It's amazingly close to what the actual time is, especially when it comes to networking issues.
"2. Each estimator gets a deck of cards: 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100."
OK. In order to have 90% confidence that the result will be in range, each estimator plays both their 1 and 100 cards.
"5. If the estimates vary widely, the owners of the high and low estimates discuss the reasons why their estimates are so different. All estimators should participate in the discussion."
OK. "I played my 1 and 100 cards in order to have 90% confidence. Why did you play your 100 and 1 cards?"