Complacency: The Silent Killer... But it's not that Simple!

Gareth Lock | May 22, 2016

The recent case concerning Wes Skiles’ death on a rebreather dive in 2010 has brought home the stark reality that ‘complacency’ can kill. The word “complacency” and the phrase “complacency, the silent killer” have come up frequently on social media in the last few days as a reason for his death. However, as with many things in life, it is not that simple. This blog article complements the Human Factors Skills in Diving Newsletter #7.

So what is complacency? 

Dictionary definitions include “self-satisfaction, especially when accompanied by unawareness of actual dangers or deficiencies” and “a feeling of smug or uncritical satisfaction with oneself or one's achievements.” Taxonomies such as the Human Factors Analysis and Classification System (HFACS) list it under adverse mental states but don't define what it means. The Aviation Safety Reporting System (ASRS) taxonomy defines complacency as “self-satisfaction that may result in non-vigilance based on an unjustified assumption of satisfactory system state."  

Sidney Dekker, in "The Field Guide to Understanding Human Error", states: "Complacency is a huge term, often used to supposedly explain people's lack of attention to something, their gradual desensitisation to risk, their non-compliance, their lack of mistrust, their laziness, the 'fat-dumb-and-happiness', their lack of chronic unease."  Unfortunately, whilst these terms are commonly used, they are not very useful if we are to improve safety and performance (his point!).

Parasuraman and Manzey (2010) state that whilst there is no consensus definition of complacency, there are some key features: "The first is that human operator monitoring of an automated system is involved. The second is that the frequency of such monitoring is lower than some standard or optimal value (see also Moray & Inagaki, 2000). The third is that as a result of substandard monitoring, there is some directly observable effect on system performance. The performance consequence is usually that a system malfunction, anomalous condition, or outright failure is missed... Technically, the performance consequence could also involve not an omission error but an extremely delayed reaction. However, in many contexts in which there is strong time pressure to respond quickly... a delayed response would be equivalent to a miss."

Simply put, we create a model of the external world and how it operates. If we keep that model aligned with reality by monitoring it at an adequate frequency, and we adapt to any changes in a positive manner, we stay 'safe'.

Unfortunately, it isn't that simple. This model, and the rate at which we sample the world against it, are informed by previous experiences and current sensory perceptions, with the most recent and most emotionally charged having the greatest influence on how the model is defined and applied.
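To make the sampling idea a little more concrete, here is a rough sketch in Python. It is not taken from the research above: the check intervals and the two-minute 'still recoverable' window are purely illustrative assumptions. It simulates an anomaly appearing at a random point in a dive and shows how stretching the gap between checks of a 'system' increases both the delay before the anomaly is seen and the chance that it is only seen once it is too late to matter.

```python
import random

def detection_delay(sample_interval_s, anomaly_onset_s, critical_after_s):
    """Seconds between the anomaly appearing and the first check after it,
    or None if that check falls outside the 'still recoverable' window."""
    checks_before = anomaly_onset_s // sample_interval_s
    first_check_after = (checks_before + 1) * sample_interval_s
    delay = first_check_after - anomaly_onset_s
    return delay if delay <= critical_after_s else None

def simulate(sample_interval_s, runs=10_000, dive_length_s=3600, critical_after_s=120):
    """Estimate how often a randomly timed anomaly is caught in time."""
    caught = 0
    total_delay = 0.0
    for _ in range(runs):
        onset = random.uniform(0, dive_length_s)
        delay = detection_delay(sample_interval_s, onset, critical_after_s)
        if delay is not None:
            caught += 1
            total_delay += delay
    return caught / runs, (total_delay / caught if caught else float("nan"))

for interval in (30, 120, 300):   # check the 'system' every 30 s, 2 min, 5 min
    p, mean_delay = simulate(interval)
    print(f"check every {interval:>3}s: caught in time {p:6.1%}, mean delay {mean_delay:5.1f}s")
```

With 30-second or two-minute checks the hypothetical anomaly is always seen inside the window; at five-minute checks, the majority are only found after the window has closed. That is the 'monitoring frequency lower than some standard or optimal value' described above, stripped down to the bare arithmetic.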

As we go through life these models are constantly updated. The problem is that if no adverse events occur, the models are updated with 'no adverse events occurred when I did this' and that becomes the new baseline. If we don't have a strong internal reflection process, or external review from peers or supervisors, we can gradually drift from the established baseline. This is what is often described as the 'normalisation of deviance'. A previous article I wrote covers this in more detail.

If we look at Endsley's Situational Awareness model below, we can see the influence that experience (bottom) and system design and interface (top) have when it comes to decision-making. These experiences, combined with goals and expectations, determine the strategy for developing the different levels of Situational Awareness (L1, L2 and L3); Newsletter #5 covered SA in greater detail. If those previous experiences have not ended in adverse or 'scary' situations, then the perception of risk (expectations) goes down and, as a consequence, the monitoring and sampling strategy changes, potentially being reduced.

[Image: Endsley's Situational Awareness model]

The systems that need to be monitored are often considered to be technical systems with a level of automation or control, but in Open Circuit diving these 'systems' could be the SPG, the bottom timer or the dive computer calculating decompression, or even the activity and location of the buddy.  All of these 'systems' are dynamic and require monitoring to ensure a safe return to the surface.

If we now move to rebreathers, with their increased level of automation, system controls and 'high' level of reliability, we should reconsider the issues at hand. Whilst some would not describe rebreathers as highly reliable (mechanically, electronically or in software), in most cases they are reliable enough for the diver to believe that they are reliable and that the information they provide can be trusted.

With rebreathers, we have a greater opportunity for automation-related dependency. Humans are poor monitors: we have a relatively short attention span and we miss small changes. We require a certain level of mistrust in the system to improve our monitoring (the goals/expectations in the diagram above). To achieve this we need either to be exposed to real failures (which is not good for consumer relations) or to perceive a credible impact if signals are missed, e.g. on an mCCR.

Automation complacency is at its worst when the operator is undertaking multiple tasks and when manual tasks compete with the automated task for the operator’s attention. Irrespective of experience, automation complacency is present and cannot be overcome with simple practice.

Describing an experiment on automation complacency, Parasuraman and Manzey (2010, as above) write: "The authors found that in the single-task condition, most participants detected the automation failure, whether it occurred early or late. Under multi-task conditions, however, only about half the participants detected the automation failure, and an even smaller proportion detected the failure if it occurred late than if it occurred early."

Training can certainly improve matters, but the training employed during rebreather courses is designed to teach divers to action the emergency skills correctly; it does not help with anomaly detection, as no current system (as far as I know) has the ability to maintain life support while injecting failure modes into the display systems, and then to maintain control once the diver actions the failure. Having an instructor hold up a flash card with a failure condition or problem to solve may tax the brain and muscle memory, but it then becomes a manual task rather than a monitoring task. Asking students what their pO2 is, when the system is in automatic and no failures are present, will not achieve the goal of 'monitoring for failure'.
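To illustrate the sort of capability being described (which, as far as I know, no current training system provides), here is a purely hypothetical Python sketch of injecting a display-layer failure into a simulated loop: at some point the 'handset' freezes the displayed pO2 while the simulated loop carries on regardless. The class names, the frozen-display failure mode and the numbers are all illustrative assumptions, not a description of any real unit.

```python
import random
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrozenDisplayFault:
    """Hypothetical display-layer fault: after 'freeze_after_s' the handset
    keeps showing the last honest pO2 value while the loop continues."""
    freeze_after_s: float
    _frozen_value: Optional[float] = None

    def displayed(self, true_po2: float, elapsed_s: float) -> float:
        if elapsed_s < self.freeze_after_s:
            return true_po2                  # display still tracking reality
        if self._frozen_value is None:
            self._frozen_value = true_po2    # capture the last honest reading
        return self._frozen_value            # ...and keep showing it

def simulated_loop_po2(setpoint: float = 1.2) -> float:
    """Toy stand-in for the controlled loop: small noise around the setpoint."""
    return round(setpoint + random.uniform(-0.05, 0.05), 2)

# In a real training scenario the freeze would come minutes into the dive;
# a few seconds keeps this demonstration short.
fault = FrozenDisplayFault(freeze_after_s=random.uniform(2.0, 4.0))
start = time.monotonic()
for _ in range(6):
    elapsed = time.monotonic() - start
    true_po2 = simulated_loop_po2()
    print(f"t={elapsed:4.1f}s  handset pO2: {fault.displayed(true_po2, elapsed):.2f}")
    time.sleep(1)
```

The point is not the code but the task it creates: the diver has to notice that the displayed value has stopped behaving like a live signal, which is a monitoring problem rather than a drill-execution problem.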

"exposing operators to automation failures during training significantly decreased complacency and thus represents a suitable means to reduce this risk, even though it might not avoid it completely." - is from work by Bahner at al where they highlighted that randomly encountered automation failures during a training scenario improved the performance in subsequent assessments. Not that unsurprising you would think, but what was also apparent from this work was those that did not experience failures, even though they were told the failures could appear, suffered more from automation complacency.   

Some might argue that HUDs/NERDs/HUS improve the detection of changes. They will certainly make it easier to bring the initial warning into the diver's visual scan, but research has shown that "the automation complacency effect was not prevented by centrally locating the automated task."

Humans are poor at detecting things we are not expecting! Taken to the extreme, pilots in a simulator using a HUD for their primary flight reference missed an aircraft parked on the threshold of the runway, something that was less likely to happen when they used head-down displays.

So how do we improve things?


First off, we need to be more aware. Simple, I know!! We need to train divers, especially rebreather divers, to be more aware of system conditions and what the implications might be. This is a real challenge given the limited communications we have underwater. Whilst training them to execute the emergency actions is essential, they also need to be able to understand the weak signals of information coming through, so that they can move from L2 SA (Comprehension) to L3 SA (Projection). This also needs to be reinforced through post-course practice, where there is an increased likelihood of real failure (purely from a time-exposure perspective), and these dives are effective both from an 'in-water' skills perspective and from a systems-knowledge perspective. This requires effective debriefing or reflective skills to be present.

Secondly, we need to create an environment where people can discuss failures (system or personal) in an open manner. At an individual level, failures are quite rare, and therefore the contexts and failure modes are unlikely to be encountered frequently. Shared experiences, whilst not as strong as personal experiences and less likely to remain in long-term memory, all contribute to the box at the bottom of Endsley's model, which informs our SA development and our decision-making process.

Finally, we need to recognise that whilst the rebreather is trying to keep us alive, because that is what it was programmed to do, it was designed, built and assembled by humans and is therefore the product of human fallibility too. High Reliability Organisations (HROs) have a preoccupation with failure and look for the weak signals of failures which might happen. You can do the same: if it doesn't look right, don't assume it will sort itself out. Be proactive, understand what the system was doing, and then see what the implications might be. If that means going back to your instructor or mentor, do so. Use their experience to build your own.

Ironically, as automation increases in reliability, the training burden goes up, not down, as there is a need to understand the potential failures and the clues leading to them. This will be a major challenge for the diving industry given the conflict between the marketing teams and accountants, and those at the sharp end delivering the content.

A parting note from Chris Hadfield on being an astronaut:

“A lot of people talk about expecting the best but preparing for the worst, but I think that’s a seductively misleading concept. There’s never just one ‘worst.’ Almost always there’s a whole spectrum of bad possibilities. The only thing that would really qualify as *the* worst would be not having a plan for how to cope.”

This point of view is very applicable to all forms of technical and cave diving, and to instructing in all domains.

 



Gareth Lock is the owner of The Human Diver, a niche company focused on educating and developing divers, instructors and related teams to be high-performing. If you'd like to deepen your diving experience, consider taking the online introduction course, which will change your attitude towards diving because safety is your perception. Visit the website to find out more.