'Entirely Predictable' vs 'Managing Uncertainty': How many rolls on the dice?

decision making risk Jul 22, 2018

A couple of social media posts about diving incidents and near misses triggered this blog, because the phrase ‘entirely predictable outcome’ was used to argue that someone shouldn’t have done what they did: it was supposedly obvious it would end in injury or death. The problem is that such statements, as applied to those particular situations, are false, not least because the commentators making them are judging with the benefit of hindsight.

To explore this, let's look at the dictionary definitions of ‘entirely predictable’. Entirely means ‘completely’ or ‘to the full extent’, and predictable means ‘always behaving or occurring in the way expected’. So ‘entirely predictable’ means that on 100% of occasions the outcome would be the one that was experienced. If that were true, people would not do things which ended up with them injured or dead (unless they truly had suicidal tendencies, and such people are few and far between).

This concept of predictability or certainty is at the core of risk management. However, the research shows that risk is not managed in a logical manner by the majority of people in the majority of situations, and there are a couple of reasons for this. Kahneman and Tversky built a whole body of research on the topic! For a risk to be managed in a logical manner, the outcomes need to be entirely predictable, and the following will show why this doesn’t apply to diving.

Rolling the dice

Predictability is the concept which casinos use to make money. For example, the likelihood of rolling a 6 on a six-sided dice can be quantified as 1 in 6, and the likelihood of drawing a specific card from a new deck can be calculated as 1 in 52. These situations are entirely predictable because only a fixed set of outcomes is possible: I cannot roll a 7 on a single six-sided dice, nor can I draw an ace of circles from a deck of cards. The rolls are also independent of each other: if I roll a 6 on the first roll, I have exactly the same chance of a 6 on the next roll, and even after eight sixes in a row, there is still a 1 in 6 likelihood that the next roll will be a six. To see an even distribution across the sides of the dice, we need to roll it many times (and even then the distribution will have bumps in it). This is why small numbers don’t work, even in predictable circumstances. Furthermore, predictability is why you can’t simply bet on a 7 in a two-dice game in a casino: given the possible combinations adding up to seven, it is the most prevalent total when rolling two dice together, and the house always wins.
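The arithmetic above is easy to check for yourself. This short Python sketch (the seed and roll counts are just illustrative) shows both points: small samples are lumpy, and 7 has more two-dice combinations than any other total.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so repeated runs are reproducible

# 1) Independence: the chance of a 6 is 1 in 6 on every roll,
#    regardless of what came before.
p_six = 1 / 6
print(f"P(6) on any single roll: {p_six:.3f}")

# 2) Small samples are lumpy: 60 rolls rarely give a flat
#    distribution of 10 per face.
small = Counter(random.randint(1, 6) for _ in range(60))
print("60 rolls:", dict(sorted(small.items())))

# 3) Two dice: 7 has the most combinations
#    (1+6, 2+5, 3+4, 4+3, 5+2, 6+1), i.e. 6 of the 36 outcomes.
totals = Counter(a + b for a in range(1, 7) for b in range(1, 7))
print("ways to make each total:", dict(sorted(totals.items())))
```

Running it a few times with different seeds makes the point about small numbers: each 60-roll sample has a different shape, while the 6-out-of-36 count for a total of 7 never changes.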

Uncertainty in Diving

Moving to diving, there are so many variables involved that predicting their likelihoods, as single entities or in combination, would be almost impossible. As such, when we go diving (or do anything where we can’t actually quantify outcomes through absolute measurement), we aren’t managing risk, we are managing uncertainty. For example: will my equipment fail on this dive? How effective will the decompression be? What is the current going to be like? What will the visibility be? Taking it further: I am using my rebreather outside its tested environment, so how reliable will it be? I fitted my oxygen cells 15 months ago; will they still work ok? These are all uncertainties which can’t be measured from where you are now.

Each of these uncertainties will have a benefit or loss associated with it, and the concepts of benefit vs loss still apply: we make the decision based on whether the reward is worth the potential loss, and only we know what the reward is worth to us or to the team (if we have talked about it). Weirdly, the trade-off between gain and loss does not end up as a straight 1:1 relationship. Kahneman (author of Thinking, Fast and Slow) and Tversky showed that if we are in a ‘status quo’ position and someone asks us to change to something else, we need to perceive 2-3 times the benefit before we will consider the change. Note that it is perceived benefit, and ‘benefit’ can mean different things to different people, e.g. money, kudos, prestige or just wanting something better.
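That 2-3x asymmetry can be put into numbers. In their 1992 follow-up work on prospect theory, Tversky and Kahneman estimated a value function in which losses are weighted roughly 2.25 times as heavily as equivalent gains; the sketch below uses their published parameters (λ ≈ 2.25, α ≈ 0.88) purely as an illustration of the shape of the effect.

```python
# Prospect-theory value function (Tversky & Kahneman, 1992):
# gains are valued as x**ALPHA, losses as -LAM * (-x)**ALPHA.
# LAM ~ 2.25 means a loss stings about twice as much as an
# equal-sized gain feels good.

ALPHA = 0.88  # diminishing sensitivity to size of outcome
LAM = 2.25    # loss-aversion coefficient

def value(x: float) -> float:
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAM * (-x) ** ALPHA

# A 100-unit gain vs a 100-unit loss: the loss looms larger.
gain, loss = value(100), value(-100)
print(gain, loss)
print(-loss / gain)  # ratio of loss to gain, approx 2.25
```

The exact parameter values matter less than the asymmetry itself: to walk away from the status quo, the perceived upside has to be a multiple of the perceived downside, not merely equal to it.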

Managing Uncertainty

So how do we manage uncertainty? When we have measurable uncertainty (risk), we use logical tools like failure mode and effects analysis (FMEA) to determine likelihoods, and then link these to an acceptable failure rate and associated loss; in aviation, for example, the catastrophic loss of an aircraft is considered acceptable at 1 loss in 10 million flying hours.
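To illustrate how FMEA-style scoring works: a common approach is to score each failure mode for severity, occurrence and detectability, then rank by the product, the Risk Priority Number (RPN). The failure modes and scores below are invented for the sake of the example, not real data.

```python
# A toy FMEA table: each failure mode is scored 1-10 on severity,
# occurrence and detectability, then ranked by Risk Priority Number
# (RPN = S * O * D). Modes and scores are purely illustrative.
failure_modes = [
    # (description,             severity, occurrence, detection)
    ("O2 cell reads low",       9,        4,          6),
    ("Mouthpiece o-ring leak",  6,        3,          3),
    ("Scrubber breakthrough",   10,       2,          7),
]

ranked = sorted(
    ((s * o * d, name) for name, s, o, d in failure_modes),
    reverse=True,  # highest RPN first = highest priority to mitigate
)
for rpn, name in ranked:
    print(f"RPN {rpn:3d}  {name}")
```

The point of the exercise is the ranking, not the absolute numbers: it forces the team to spend mitigation effort where likelihood, consequence and poor detectability combine, which is exactly the kind of computation we cannot do informally for unmeasurable uncertainty.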

For areas where we have unmeasurable uncertainty, many researchers have shown that we use mental shortcuts because we are not able to make those computations. These shortcuts are ‘rules of thumb’ or cognitive biases and they help us make our decisions.

Such biases include:

  • Hindsight bias - ‘I knew it was going to happen that way.’
  • Outcome bias - the more severe the outcome, the more harshly we judge the action.
  • Recency bias - something experienced more recently is acted upon first.
  • Availability bias - information we can recall easily is judged to be more prevalent, or more likely to happen, in the real world.
  • Selective attention bias - we only focus on what is perceived to be important right at that time, with the level of ‘importance’ determined by previous experiences.
  • Overconfidence - we believe we are capable of doing something better than we actually are. This is linked to the Dunning-Kruger effect.

Notwithstanding the negative aspects of biases, they are essential for humans to operate at the pace we do and to minimise the amount of mental energy which we consume. The brain is one of the major consumers of energy in the body, believed to consume approximately 20% of our energy. As such, anything we can do to be cognitively efficient is a good thing. However, such biases can lead to errors which can lead to injuries or deaths.

Feedback

Any effective decision-making process requires a feedback loop to determine whether the outcome was what was expected; if it was, the process is something we can repeat to ensure a successful (and potentially certain) outcome.

The problem with a thought process that focuses primarily (or only) on an outcome is two-fold. Firstly, the next time we encounter a very similar situation, we will likely assume it to be exactly the same, because we don’t have the time (or the mental capacity) to check all the variables, and therefore we will expect the same outcome; in effect, we have assumed all the variables to be the same. Secondly, when we see someone else undertaking an activity which we want to do, and we believe we can do it, we assume that we can see all the variables they are managing and that we will be able to do the same. Both of those assumptions are flawed. Combine this with the limited visibility we have of near misses (which reinforces availability bias) and the social/peer pressures present (which inflate the perceived benefits), and matters become even worse!

Feedback within a system shouldn’t just focus on the outcomes, but also look at the processes involved. These include experience, mindset, motivation, equipment preparation and many other factors which we can’t directly observe. This is why team debriefs are so important, especially as a way of learning to deal with uncertainty, because we can learn from others’ experiences and observations.

More Fallacies

There are another couple of issues we should be aware of when looking at decision-making in diving. As we lose more, we take greater risks to get back to the status quo. Research has shown that gamblers who are losing will place bets at longer and longer odds, believing they can make up for their losses earlier in the day; these are odds they wouldn’t have taken at the start of the day because they would have been considered too risky. Applying this to diving: if divers have lost dives to weather or equipment failure, they are likely to accept greater levels of uncertainty to achieve their goals, levels which, in the cold light of day, would not be considered acceptable.

The other thing to consider is the sunk-cost fallacy: the more time, effort and money we have invested in something, the less likely we are to give it up. This is why people won’t criticise the $10k rebreather they’ve just bought, even when they don’t like it and know there are problems with it.

The following example illustrates both of these. A diver has travelled a long way to dive a specific wreck, one they have failed to get onto three times in the last two years because of poor weather, and which is massively significant to a project they are shooting photos for. To dive this wreck, they have invested many hundreds of pounds/dollars in boat fees and breathing gas, along with time taken off work. Their modelling team have also invested time and money. The weather is due to hold today but break tomorrow, and they have no spare holiday left this year. They prepare their rebreather the night before and it works ok. On the morning of the dive, there is a slight leak on the positive/negative check, but they judge it acceptable and go with it. While kitting up, they notice that one of the bailout cylinder pressures is not what they were expecting: at some point the cylinder has leaked, maybe because the valve was not fully closed and the second stage had pressure on the purge valve. The total gas quantity is below what they had planned, but only by about 5%. All the time they are travelling out to the wreck site, the weather is perfect and it looks like it will be an awesome dive… How many more little ‘issues’ would be needed to trigger an abort before they even get in the water? Accidents don’t happen because of a major, glaring, ‘smack you in the face’ issue; they happen because the multiple uncertainties which we think we are managing well all combine, at just the wrong time, in a manner we did not expect.

Improving Certainties

I am a big proponent of team diving as a way of improving safety and performance, the reason being that multiple brains can (a) help solve problems more effectively, and (b) are less likely to be tainted by the same biases at the same time (although group-think can cloud this). However, teams only work effectively when they have a clearly articulated goal; use standardised (for their team) processes to reduce mental overheads; understand the limitations of the systems they use; trust each other to provide critical feedback rather than platitudes; and practise skills not only to prevent incidents from occurring, but also to ensure that when failures happen, they happen safely and preferably predictably.

Other ways in which uncertainty can be reduced are well-designed and well-implemented checklists, two-way and closed-loop communications, and briefings which cover contingencies.

Where possible try to apply the adage, "Learning from your mistakes is great. Learning from others’ mistakes is much better though!"

Summary

As a friend of mine said, no surprises, no accidents. Surprises come from uncertainty. A risk is a measurable uncertainty and we use specific, logical tools to manage it. This isn’t what happens in diving for a number of reasons, not least the lack of incident, near-miss and performance data to allow the 'measurement' to happen. What we do is manage uncertainty and we do this by using mental shortcuts, biases and ‘rules of thumb’. Much (most) of the time these work out ok, but sometimes they don’t. Unfortunately, that is when we see the results in the form of injuries and fatalities.

So when someone says ‘entirely predictable’ when referring to the outcome of a dive which didn’t end well, ask them if the outcome was 100% predictable given the variables the incident diver was managing at the time, with the motivation, experience and mindset they had, because I can pretty much guarantee that no observer will be tracking or using the same variables the incident diver was using to manage their uncertainty on that particular dive.

Finally, consider how much uncertainty you are willing to accept, and identify ways in which that uncertainty can be reduced.

Footnote:

The Human Diver provides training in improving decision-making by developing communications, teamwork, leadership and situational awareness. This training comes in the form of an online micro-class and face-to-face training programmes. If you'd like to know more, drop me a line.