Scenario Capture or Fulfilment - It Always Ends in a Disaster...!!

human factors operations Dec 22, 2015

I was listening to DisasterPodCast Episode 40, Shootdown by Andrew Rae, which focussed on a number of aircraft shootdowns (TWA 800, KAL 007 and the USS Vincennes incident amongst others), their subsequent investigation and incident causality. What struck me was a comment made near the end (~29:00) which referred to 'Scenario Capture': because simulator training time is so precious, crews expect things to go wrong, typically as the multiple failures that build towards the key aim of the simulator sortie. As events unfold and they contribute to the 'expected' incident, they strengthen the confirmation bias that the simulator will end a certain way. This is hypothesised in the description of the USS Vincennes shootdown, where prior biases influenced decision making (the USS Stark had been attacked a year before) and were then reinforced by the belief that the ship was under attack, based on erroneous information being shouted across the Combat Information Centre (CIC); the information needed to correlate the threat was not checked against independent sources.

It is easy, with hundreds of man hours of investigation behind us, to say that the crew made a number of errors, but if we are to develop our own training scenarios to create "thinking operators", we need to be creative. Most events in the real world pass without incident ('nothing happens'), whereas crews or operators are already at a heightened state of awareness as they start a simulator serial. When we run the training scenarios, particularly ones the operators are expecting, they have already pre-loaded the cues for what to look for. If we as trainers provide those expected cues, we further reinforce those expectations. The real world, however, throws lots of curve balls, ones which we are not expecting, and as a consequence we don't necessarily recognise them because they don't match the pattern we are expecting. Thomas (2003) identified this difference between simulator and real-world performance using the Line Operations Safety Audit (LOSA):

...in both environments [Line Operations, LOFT simulators] 50% or less of errors were effectively managed by being trapped by the flight crew. However, it was again found that crews' performance in the simulator was considerably better than that during normal line operations. Overall, crews failed to respond to 63.2% of all errors committed during normal line operations. Further, if the error was committed by the Captain, this figure increased to 67.6%. When interpreted in relation to crews' performances on key non-technical skills it was found that poor communication between crew members and a reluctance for First Officers to be assertive and respond to errors committed by the Captain were probable contributing factors to poor error management during normal line operations. This indicates possible latent failures in the organisational system, and highlights an area in which specifically targeted training interventions are required.

So, if you are looking at developing training scenarios, try to make the factors leading to the fulfilment of the scenario different from the norm; there will be a much greater chance of creating "thinking operators" as a consequence.

Thomas, M.J.W. (2003) Improving organisational safety through the integrated evaluation of operational and training performance: an adaptation of the Line Operations Safety Audit (LOSA) methodology. Human Factors and Aerospace Safety, 3(1), 25–45.

Image: "USS Vincennes CG-49 at commissioning" by Camera Operator PH3 Kathy Keil, USN, ID: DN-SC-85-10271. Licensed under Public Domain via Wikimedia Commons.