Let's Play Planning Poker!

One of the most challenging aspects of any software project is estimation-- determining how long the work will take. It's so difficult, some call it a black art. That's why I highly recommend McConnell's book, Software Estimation: Demystifying the Black Art; it's the definitive work on the topic. Anyone running a software project should own a copy. If you think you don't need this book, take the estimation challenge: how good an estimator are you?


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2007/10/lets-play-planning-poker.html

What do you MEAN? 10 years is a perfectly valid amount of time to spend developing a Duke Nukem game.

I bought McConnell’s book because I am lousy at estimating. I also bought it because I enjoyed “Code Complete” and “Rapid Development”.
So I took the estimation challenge thinking that I would ace it.
I scored 5! I couldn’t believe it.
I think most developers are bad estimators - we need all the help we can get. I think it was Fred Brooks in The Mythical Man-Month who said that developers are optimists and assume that everything will go smoothly. It never does.

Our internal system already supports this “poker” approach: blind estimates by feature, with estimates averaged out (or occasionally thrown out). We actually take this one step further, though, and evaluate the relative importance of features in the same way.

So not only do we know what takes the longest, we also know the most important features. This means that we can load-balance individual features and provide the client with working builds earlier, which catches both estimation errors and specification errors as early as possible. This is something FogBugz doesn’t seem to do (we just played with the 6.0 release, and I could be wrong).
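
Roughly, the blind-averaging step works like this (a minimal sketch, not our actual code; the rule of dropping the single highest and lowest values is just one way to “throw out” an estimate):

```python
def blended_estimate(estimates, trim=True):
    """Average a set of blind estimates, optionally dropping the
    single highest and lowest values as outliers."""
    values = sorted(estimates)
    if trim and len(values) > 2:
        values = values[1:-1]  # occasionally throw out the extremes
    return sum(values) / len(values)

# Three developers blind-estimate the same feature (in ideal days)
print(blended_estimate([3, 5, 21]))  # -> 5.0 once 3 and 21 are dropped
```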

The first part reminds me of the ‘average of 3 independent estimates’ technique used to estimate the range for artillery or mortar. Apparently this works well in practice.

Another good technique for estimation attempts to weight each task estimate by the amount of uncertainty of that task. So the time estimate would be weighted by 1 for a module that can be coded without thinking. If the task is using an unfamiliar technology, then the time estimate would be weighted much higher. I find the exact numbers are unimportant but the introduction of uncertainty definitely helps!
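
For instance, a minimal sketch (the multiplier values are arbitrary; as I said, the exact numbers are unimportant):

```python
# Uncertainty multipliers -- illustrative values only
UNCERTAINTY = {
    "routine": 1.0,     # a module that can be coded without thinking
    "familiar": 1.5,    # similar to past work, some unknowns
    "unfamiliar": 3.0,  # unfamiliar technology or vague requirements
}

def weighted_estimate(base_days, category):
    """Inflate a raw estimate by how uncertain the task is."""
    return base_days * UNCERTAINTY[category]

tasks = [("login form", 2, "routine"), ("payment gateway", 5, "unfamiliar")]
total = sum(weighted_estimate(days, cat) for _, days, cat in tasks)
print(total)  # 2 * 1.0 + 5 * 3.0 = 17.0 days
```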

Hey Now Jeff,
You’re right on; it’s so tough to estimate the time a project will take. Very interesting post.
Coding Horror Fan,
Catto

Sorry for the service interruption-- I switched servers and changed DNS, and I forgot a few things along the way.

I also own McConnell’s latest book, although I haven’t finished it - my colleagues look at me funny when I pull it out. Estimation is the most annoying thing, mostly because it can’t be done. For some reason, I always estimate optimistically, and that comes back to bite me in the butt when a manager doesn’t understand why it isn’t done.

In the very first PC book I got, there was a trick for buying HD space: when all is taken into consideration, double it and then round up to the nearest GB. I almost think it would serve me well with estimation too.

I always think of the story from ancient China. They wanted to make a statue of the Emperor, and all was arranged for the sculptor except for the length of the Emperor’s nose. Being divine, he could not just be approached and measured, so they decided to take a poll. They asked every member of the country how long they thought it was and took the average.

That had to be close - Right?

The moral is “a whole bunch of people guessing is still a guess”.

It is interesting to me that this notion of using historical data to predict future estimates is finally making its way into the software world.

We have been using historical data in the oil and gas industry (any continuous process industry actually) to predict when equipment will break, how much it will cost, and what can be done to prevent it. There is actually a software niche centered around this. Reliability Centered Maintenance was started by the airline industry and is slowly making its way to different segments. Looks like the agile development folks are picking up on it as well :wink:
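
In software terms, the kernel of the idea is to compare past estimates against actuals and scale new estimates by the historical overrun (a toy sketch; the history data here is made up):

```python
# Toy sketch: correcting a new estimate with historical data.
history = [
    (2, 3),  # (estimated_days, actual_days) for past tasks
    (5, 8),
    (1, 1),
    (3, 6),
]

# Average overrun factor: how far off were past estimates?
overrun = sum(actual / est for est, actual in history) / len(history)

def corrected(new_estimate_days):
    """Scale a fresh estimate by the historical overrun factor."""
    return new_estimate_days * overrun

print(round(corrected(4), 1))  # a "4 day" task becomes 6.1 days
```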

Ignoring task dependencies is ridiculous. I mean, there is a reason they are called ‘dependencies’… doh!

Tubs: Try just multiplying by four, and save half your estimate-doubling time!

Jeff said: “Some developers are better at estimating than others; you can shift critical tasks to developers with a proven track record of meeting their estimates.”

Ah, but might this be true because of the sorts of tasks the first set of developers are doing?

Unless there’s a round-robin of tasks to developers, there’s a risk of conflating cause and effect in estimate accuracy.

(Your shop’s mileage and practises may, of course, vary.)

“a whole bunch of people guessing is still a guess”

True. Estimation is always going to involve a certain amount of guesswork, otherwise it wouldn’t be an estimate.

Aside from the actual accuracy of the estimate, another advantage of the group approach is that everyone takes responsibility for the “guess”.

If/when it all goes wrong then the group can ask itself “why did WE get that estimate so wrong?”, rather than the more confrontational question of “why did YOU get that wrong?”.

As a rule of thumb, I make an estimate. Double it. Then double it again.

Even something as simple as rebooting a server always takes longer than you think.

“Sorry for the service interruption-- I switched servers and changed DNS, and I forgot a few things along the way.”

How long did you estimate it would take? :wink:

I’m very sorry; I don’t think I properly understood the task.

You could just estimate a range you’re guaranteed to get the answer within, but what’s the point? For the first question we could just say 100 as a minimum, knowing that the sun is hotter than boiling water, and 1,000,000,000 as a maximum just to be sure we’re within the range. But that’s not an estimate, that’s a joke.

You can’t plan anything based on that estimate so it’s pointless.

You want someone to say between 10,000 and 12,000, for instance, because you can actually use that answer.

I realise it could be wrong. I think I am missing the point here.

Surely an estimate is a narrow range and a good estimate is a narrow range that is also pretty accurate. That comes with experience.

Enlighten me on the point I’m missing, if any :). Thanks

We’ve just started using Rally: http://www.rallydev.com/.

It’s phenomenal. Designed specifically for Agile planning and development, models stories, iterations, releases … the works.

We’ve hooked it up with Jira for our defect tracking.

Whenever I hear an estimate, I always double it, and add half. It’s amazingly close to what the actual time is, esp. when it comes to networking issues.

“2. Each estimator gets a deck of cards: 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100.”

OK. In order to have 90% confidence that the result will be in range, each estimator plays both their 1 and 100 cards.

“5. If the estimates vary widely, the owners of the high and low estimates discuss the reasons why their estimates are so different. All estimators should participate in the discussion.”

OK. ‘I played my 1 and 100 cards in order to have 90% confidence. Why did you play your 100 and 1 cards?’
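
Joking aside, the mechanics of a round are easy to sketch (the “wide variation” threshold and the final consensus rule below are my assumptions; the rules as quoted leave both to the team):

```python
# Toy simulation of one planning poker round with the deck quoted above.
DECK = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def play_round(votes):
    """Return the agreed estimate, or None if the spread is too wide."""
    low, high = min(votes), max(votes)
    # Assumption: more than two deck positions apart counts as "widely"
    if DECK.index(high) - DECK.index(low) > 2:
        return None  # step 5: high and low estimators explain, re-vote
    return high  # assumption: settle on the larger card once converged

votes = {"alice": 3, "bob": 5, "carol": 20}
print(play_round(list(votes.values())))  # None -> discuss and re-vote
```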

Duke Nukem Forever is titled as such, because it will take FOREVER to be released.