I had an interesting experience after presenting at the Project Management Institute Conference recently. During the question period at the end, a delegate told me flat out that I was wrong. He argued that the ability to estimate must be born into you and cannot be taught.
So what evidence is there that estimating can be taught and learned?
Let's start with a definition of an estimate.
I come across many 'estimates' which are just a single number, usually a dollar value or a date. The person estimating is staking their personal credibility on the out-turn being exactly that amount or finishing on that date. Almost every estimate turns out to be 'wrong' when stated like that.
If it turns out to be right- like many forecasts public companies make to shareholders- it's usually due to a mixture of over-cautious estimation and carefully massaging what's "in" and what's "out".
More useful to the person asking for an estimate is a range- 'it will return a benefit of $x to $y' or 'we will complete in between 2 and 3 months'.
A substantial fraction of estimates (and estimators) would still be proven wrong by that standard, doing little good to either estimator or client.
To be really useful, an estimate needs to include the estimator's certainty. This is precisely the territory of statistics. A truly useful estimate is actually a statistical distribution. Most people providing estimates will not think of it in that way, so there is a skill in eliciting a great estimate.
The quality of a forecast depends (in retrospect) on whether the actual out-turn value was within the range estimated, and the width of the range given. It's easy to be right if you give wide ranges, but not particularly useful.
Most useful, and my definition of an estimate, is a range qualified by a probability, like 'there is a 68% chance we will finish in 2 to 3 months'. This gives the recipient an indication how much confidence the estimator has in the range.
To know that teaching anything is effective you have to be able to measure the student's ability 'before' and 'after' tuition.
This has been a concern since weather forecasting began, and applies equally to human-made and computer model forecasts. The weather forecasting profession now uses the Brier score, which takes into account both the estimator's confidence and whether the result came within the estimated range. 0 is a perfect Brier score; scoring 1 means 100% confident and 100% wrong!
You can compute a Brier score for a single estimate, and you can get an overall Brier score across any set of estimates. So you can measure an estimator's success before training and repeat after training to identify the difference.
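The article doesn't spell out the arithmetic, but for an estimate of the kind defined above (a range qualified by a probability), the standard binary Brier score works out as the squared gap between the stated confidence and the outcome (1 if the out-turn fell inside the range, 0 if not). A minimal sketch, with illustrative function names:

```python
def brier(confidence: float, hit: bool) -> float:
    """Brier score for one estimate: (confidence - outcome)^2.

    confidence: stated probability that the out-turn falls in the range.
    hit: whether the out-turn actually landed inside the range.
    0 is perfect; 1 is '100% confident and 100% wrong'.
    """
    return (confidence - (1.0 if hit else 0.0)) ** 2


def brier_overall(estimates: list[tuple[float, bool]]) -> float:
    """Mean Brier score across a set of (confidence, hit) estimates."""
    return sum(brier(c, h) for c, h in estimates) / len(estimates)


# 'There is a 68% chance we will finish in 2 to 3 months',
# and the project did finish within that range:
print(brier(0.68, True))    # (0.68 - 1)^2 = 0.1024
# Same confidence, but the out-turn fell outside the range:
print(brier(0.68, False))   # 0.68^2 = 0.4624
```

Averaging these per-estimate scores over a before-training set and an after-training set is what makes the improvement measurable, as described above.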
In the simplest example, I ran a training course less than 2 hours long. Participants scored on average 0.48 (on a simulated problem) before training and 0.37 on a very similar problem after- a remarkably fast improvement.
Practice and feedback are needed to learn anything. The best estimators tend to be those who get plenty of both- the weather forecasters already mentioned, and some gamblers like poker and bridge players. So the basis of teaching estimation is to provide opportunities for practice and rapid feedback.
I teach a few simple methods to help people avoid some of the most common types of error, techniques to improve repeatability and to visualise probability. Just keeping these in mind has been shown to improve estimation measurably.
Another, distinct method I teach is crowd estimation. You can't teach a crowd to estimate, but you can teach estimators how to learn from a crowd.