When do you need a Crowd?
We are used to thinking of measurements as concrete things with no margin for error. That has never been true. As I show in Everything is a Bet, even your car's speedometer reading can be well off your actual speed and still be legally accepted.
If we don't have a machine to measure something, some of us go to the opposite extreme and assume it can't be measured. Or we express our feeling about the quantity on a simple scale like Red-Amber-Green or High-Medium-Low.
In his book The Failure of Risk Management, Doug Hubbard demonstrates that using subjective Likert scales (like High, Medium, Low) to represent uncertainty is worse than useless, because it gives an unjustified impression of objectivity. He argues that we must instead build statistical models with confidence intervals. But where do we get the information for those models?
Professor Philip Tetlock of the Wharton School of the University of Pennsylvania famously showed that we cannot rely on Expert Judgment: experts vastly overrate their own ability to predict events, individually performing about as well as, if not slightly worse than, the average daily reader of The New York Times.
So the traditional model, relying on the expertise of a responsible manager, just doesn't work.
Journalist James Surowiecki helped in 2004 by describing The Wisdom of Crowds, showing that by combining the estimates of many people we can get much better predictions than by taking the advice of a few experts. Surowiecki popularised the work of Scott E. Page, and a philosophical tradition before it. Others, such as Colson and Cooke (2018), have built on this with techniques that reduce false confidence without losing too much specificity.
But how do you get a crowd of people to give you estimates that can build into the statistical models we need for Risk Management?
You already have the Crowd: your colleagues.
To be most effective, you should embrace diversity, bringing together people from many areas and different levels of responsibility.
Project Managers improve estimation accuracy by making the object being estimated as simple and concrete as possible. A Work Breakdown Structure splits complex things into simpler components; we estimate those and aggregate back up.
Classic Project Management uses a simple 3-point estimate to express not just the most likely value but also the uncertainty and the skew in probable outcomes. The result is a probability distribution, as Hubbard demands. The usual PERT method over-simplifies, partly by assuming excellent estimators, but with care we can improve on it.
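To make that concrete, here is a minimal Python sketch of the classic PERT formulas, which turn a 3-point estimate into a mean and standard deviation. The work package and numbers are purely illustrative:

```python
# Classic PERT: turn a 3-point estimate (optimistic O, most likely M,
# pessimistic P) into an approximate mean and standard deviation.

def pert_mean(o, m, p):
    """PERT weighted mean: the most-likely value gets 4x weight."""
    return (o + 4 * m + p) / 6

def pert_sd(o, m, p):
    """PERT standard deviation: one sixth of the optimistic-to-pessimistic range."""
    return (p - o) / 6

# Illustrative 3-point estimate for one work package, in days.
o, m, p = 5, 8, 15
print(f"mean = {pert_mean(o, m, p):.1f} days")  # 8.7 days
print(f"sd   = {pert_sd(o, m, p):.1f} days")    # 1.7 days
```

Note how the skew in the estimate (the pessimistic tail is longer than the optimistic one) pulls the mean above the most likely value.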
In his 2009 book How to Measure Anything, Hubbard shows how to calibrate estimators, who on average are over-confident before training.
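Calibration can also be checked numerically: a well-calibrated estimator's 90% confidence intervals should contain the true value about 90% of the time. A small sketch, with made-up interval data for illustration:

```python
# Calibration check: what fraction of an estimator's stated 90%
# confidence intervals actually contained the value that occurred?

def hit_rate(intervals, actuals):
    """Fraction of (low, high) intervals containing the actual value."""
    hits = sum(low <= actual <= high
               for (low, high), actual in zip(intervals, actuals))
    return hits / len(actuals)

# Made-up data: stated 90% intervals vs. what actually happened.
intervals = [(2, 6), (10, 20), (1, 3), (4, 9), (30, 60)]
actuals   = [5, 25, 2, 8, 45]

print(f"hit rate = {hit_rate(intervals, actuals):.0%}")  # 80% here
```

A hit rate well below 90% on a reasonable sample is the signature of the over-confidence Hubbard describes, and calibration training aims to close that gap.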
US National Academy of Sciences member Amos Tversky and Nobel prize-winner Daniel Kahneman describe in Judgment under Uncertainty how human judgement is subject to unconscious cognitive biases, some self-serving but often quite innocent.
There are ways to control for cognitive biases, particularly through the way in which estimates are asked for: for example, by eliciting optimistic and pessimistic estimates before asking for a 'most likely' estimate. We can also improve estimates of confidence using techniques like the Equivalent Bet test, which relates them to everyday quantities.
So we know how to gather many component-level estimates from many calibrated people. Now we must aggregate them all into an overall picture. Monte Carlo simulation is ideal for that: it doesn't hide the uncertainty in the original estimates, but shows how uncertainty is reduced through aggregation.
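As a sketch of the idea, assuming each component has a 3-point estimate that we model as a triangular distribution (the components and figures here are invented for illustration), we can sample every component many times and sum the samples to see the aggregate distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo trials

# Invented 3-point estimates (optimistic, most likely, pessimistic)
# for three components of a risk or project, e.g. costs in $k.
components = [(5, 8, 15), (10, 12, 25), (2, 4, 7)]

# Sample each component as a triangular distribution, then sum per trial.
total = sum(rng.triangular(o, m, p, size=N) for o, m, p in components)

# The aggregate is a full distribution, not a single number.
p10, p50, p90 = np.percentile(total, [10, 50, 90])
print(f"P10 = {p10:.1f}  median = {p50:.1f}  P90 = {p90:.1f}")
```

Relative to the spread of the individual components, the aggregate is narrower in percentage terms: independent errors partly cancel, which is exactly the reduction in uncertainty through aggregation.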
Combining all these tools we can expand beyond the original idea of estimating a risk:
• We can transparently create estimates under uncertainty, and have a good understanding of how confident we should be in them.
• We can predict how the raw risk will change over time if we do nothing, estimate the costs of one or more risk treatments and their impacts on the residual risk, and thus prioritise risk mitigation options.
• We can aggregate estimates of risks together to achieve an overall picture and estimated distribution. For example, we can build a consolidated view of the whole organisation's risk profile.
• Finally, we can estimate the costs and benefits of business improvement projects with confidence, and transparently compare their return on investment against risk mitigation activities.
There is a printable PDF version of this item.
With enough preparation, you can do this yourself. AcuteIP.com offers a set of services to make it easy, from workshops and technical services to managing the whole exercise.
Contact Graham.Harris at AcuteIP.com to explore possibilities.
Is this post useful to you, or could it be useful to someone you know? Please do us both a favour: spread the word by sharing it through the colourful social media buttons (at left on PC or below on mobile).
© 2018 Graham.Harris at AcuteIP.com