When Do We Stop Asking “Why?”

February 05, 2026 · 11 min read

This is another blog in a series looking at how we view adverse events and what we can learn from them. Conscious that some readers want a 'quick and dirty' version, that is what the top one provides.

Two versions: a short one at the top, and a comprehensive version a little way down.

After a serious diving incident, one question appears almost automatically: why did this happen? Asking “why” feels responsible and necessary. But the more important question is often left unasked: when do we decide we have gone far enough, and who decides that?

In practice, investigations rarely stop because understanding has been exhausted. They stop because a boundary has been reached. Sometimes that boundary is technical: a piece of equipment failed, a procedure wasn’t followed, or a decision went wrong. These explanations feel clear and actionable, and they fit neatly with how divers are trained to think about safety. But technical failures do not exist in isolation. They are shaped by design choices, maintenance practices, training, workload, time pressure, and commercial realities. Stopping at the equipment level gives closure but often misses how the situation made sense to those involved at the time – not just the ‘sharp end’ but also further up the organisation or system.

Other times, the boundary is organisational. Investigation panels may conclude they have reached the limit of what they can change. Laws, standards, or inherited policies sit outside their control. At that point, investigations stop not because learning is complete, but because influence has run out. Power and the associated dynamics are not often considered in incident investigations.

Most investigations focus on fixing immediate problems and getting operations running again. This is understandable, especially in operational environments like diving where lost operations mean lost revenue. But this premature closure also narrows learning. Questions about supervision, production pressure, training culture, or commercial incentives are quietly sidelined; not because they are irrelevant, but because they are harder to own. In sports diving, air gaps are intentionally placed in organisational structures to limit liability and transfer risk.

There is no natural stopping point when asking “why” in complex systems. You can always ask one more question. The decision to stop is shaped by time, resources, professional norms, and organisational comfort, not by logic alone. We forget that investigation reports are, in effect, sense-making artefacts. Like Rorschach inkblots, different readers see different causes depending on their experience, identity, and assumptions about how diving “really” works.

This is where a systems lens such as LEODSI is valuable. Instead of asking only what failed, it asks what conditions shaped behaviour and how work was actually done. It helps reveal how normal adaptations—shortcuts, workarounds, trade-offs—usually keep dives safe, but under certain conditions can contribute to unwanted outcomes.

The real purpose of asking “why” is not to find a single root cause, but to reach a level of understanding that allows future performance to change. Stopping too early provides comfort and closure, but little resilience. Going just far enough, beyond equipment and individual actions, into organisational conditions, creates learning that actually reduces risk.

The true measure of an investigation is not whether it feels complete, but whether it leaves the system better prepared for the next dive, under the next set of pressures, with the next group of imperfect humans doing their best to make it work.

When Do We Stop Asking “Why?” — The Comprehensive Version

Accident investigation, learning, and the limits or 'stop rules' we quietly accept

After a serious diving incident, one question appears almost automatically: why did this happen? It shows up in debriefs, investigation reports, training updates, and online discussions. Asking “why” feels responsible, even necessary. Yet buried inside that question is a deeper and more uncomfortable one that diving organisations rarely pause to consider: when do we decide we have gone far enough?

In safety-critical domains, the danger is not that we fail to ask “why,” but that we stop too early. We reach a point that feels satisfactory, defensible, or actionable, and we quietly draw a line there. The investigation closes. Recommendations are issued. Everyone moves on. The problem is that this stopping point often reflects organisational comfort rather than genuine learning.

In diving, investigations commonly stop at the technical level. A regulator free-flowed. A CCR flooded. A line jammed and dragged the diver up. A drysuit leaked and the diver suffered from hypothermia. These explanations feel concrete and reassuring. They fit neatly with how divers are often trained to think about safety, and they lead to familiar solutions: retraining, revised procedures, equipment changes. But technical failures do not exist in isolation. They are shaped by design and purchase choices, maintenance practices, training pathways, task load and cognitive load, time pressures, and commercial constraints. Stopping at the equipment or proximal event gives us some form of closure, but it rarely tells us how the situation made sense to those involved at the time.

Sometimes the stopping point is organisational rather than technical. Investigation panels/bodies may conclude that they have reached the limit of what they can change. Laws, standards, or national regulations sit outside their authority. Policies are inherited rather than questioned. Culture outside the training system is not their responsibility. At that point, the investigation does not stop because understanding has been exhausted, but because influence has. Power and the associated dynamics are not often considered in incident investigations in diving. Clients and students have power. Instructors have power. Dive centres have power. Agencies have power.

This pattern is especially visible in organisations that are close to day-to-day operations, like dive centres or liveaboard operations. Their priority is often to correct immediate faults and restore capability. That focus is understandable where lost operations mean lost revenue. Dives still need to happen. Teams still need to function. But it also narrows the scope of learning. Broader questions about supervision, workload, production pressure, training culture, or commercial incentives are quietly sidelined, not because they are irrelevant, but because they are harder to own. In sports diving, air gaps are intentionally placed in organisational structures to limit liability and transfer risk.

Over time, this shapes what investigations see. If the dominant frame is individual performance, then individual fixes dominate. If the frame includes organisational decision-making and system design, then responsibility becomes distributed rather than personalised. That shift is where discomfort often begins. Ironically, discomfort and reflection are where learning starts.

This matters because accident analysis does not have a natural stopping point. In complex systems like diving operations, it is always possible to ask one more “why.” Why did the diver act that way? Why did that option seem reasonable? Why were those constraints present? Why had those trade-offs become normal? The decision to stop is not dictated by logic, but by time, resources, professional norms, and organisational appetite. The 'stop rules' employed are contextual.

Yet investigation reports are often treated as definitive accounts rather than what they are: sense-making artefacts. Like Rorschach inkblots, different readers see different causes depending on their experience, identity, and assumptions about how diving “really” works. An instructor may see skills and procedures. A manager may see supervision and compliance. A regulator may see standards. None of these views are wrong, but none are sufficient on their own.

Learning from investigations depends less on the method used and more on the environment into which the findings land. In defensive or blame-oriented cultures, even high-quality analysis produces shallow learning. Where psychological safety exists, meaningful insight can emerge even from imperfect investigations. The difference is not analytical sophistication, but whether organisations are willing to reflect on how work actually happens, rather than how decision-makers believe work happens.

Major industrial accidents illustrate this clearly. Technical issues are often known well in advance. What fails is attention. Management focus drifts toward efficiency, output, or cost control, while warning signs become normalised. The eventual accident is rarely the result of a single bad decision, but of years of small, reasonable trade-offs that only appear reckless in hindsight.

Divers will recognise this pattern immediately. Briefings shortened because “we’ve done this dive before.” Equipment workarounds accepted as normal, e.g., cells drifting, or hoses, drysuits, or valves leaking. Marginal conditions like strong currents or extremely limited visibility tolerated because schedules are tight. None of these choices feels unsafe in isolation. Yet investigations that stop at the final action miss the slow migration of practice that made that action seem sensible at the time. The social acceptance of that drift is what the normalisation of deviance is about, not the breaking of the rules per se.

This is where LEODSI provides a different way of thinking. Instead of asking only what failed, LEODSI asks what conditions shaped behaviour and how work was actually done. It shifts attention from isolated causes to interacting influences across people, equipment, organisation, demand, supply, and information. Crucially, it helps investigators see how adaptations that normally keep dives safe can, under different conditions, contribute to unwanted outcomes.

From a LEODSI perspective, the question is not “who made the mistake?” but “what made this outcome possible?” That reframing allows investigations to move beyond local fixes and into system learning, without defaulting to blame or oversimplification.

So, when should we stop asking “why”? In practice, investigations often stop when all known facts are collected, when remedial actions can be formulated, or when legal thresholds are met. More honestly, they often stop when they feel complete.

The risk is that these stopping points align neatly with existing power structures. It is easier to retrain divers than to question and change training models, given where power sits across the different parts of the industry. Easier to update checklists than to examine commercial pressures. Easier to focus on individual judgement than to ask how organisational culture shaped that judgement.

Often ‘culture’ is perceived to be a ‘cause’, but we miss a great deal when we substitute culture for power (Perrow). If we truly want to improve diving safety, organisations must examine how power dictates policy, rather than merely blaming the ‘culture’ of dive centres and training organisations.

Asking “why” is not about endlessly chasing root causes. It is about depth with purpose. Each additional layer of inquiry exposes assumptions about competence, compliance, and control. LEODSI complements this by linking those layers to system design, showing how learning opportunities are lost when complexity is collapsed too quickly, when interactions go unexamined, and when we ignore how time can shape and influence decisions and outcomes.

For diving organisations, the real question is not “What is the root cause?” but “What level of understanding is sufficient to change future performance?” That is a design decision, not just an analytical one. It forces organisations to confront what they are prepared to fix, not just what they are able to describe.

Stopping the “why” conversation too early provides comfort and closure, but little resilience. Continuing it just far enough, beyond equipment, beyond individual actions, and into the organisational conditions that shape everyday work, creates something far more valuable: the ability to learn in ways that genuinely reduce risk. Unfortunately, the safety science research shows that while we might be able to find the ‘problems’ that need to be addressed, the ability to make change is a different matter entirely.

And perhaps that is the real measure of an investigation. Not whether it feels complete or defensible, but whether it leaves the system better prepared for the next dive, under the next set of pressures, with the next group of imperfect humans doing their best to make it work.

Relevant recent blogs:

https://www.thehumandiver.com/post/what-story-gets-told-what-words-are-used

https://www.thehumandiver.com/post/when-the-story-hurts-too-much

https://www.thehumandiver.com/post/what-is-the-purpose-of-an-investigation


References:

Kletz, T. A. (2006). Accident investigation: Keep asking “why?”. Journal of Hazardous Materials, 130(1-2), 69-75.

Reason, J. (2016). Managing the risks of organizational accidents. Routledge.

Reason, J. (1991). Too little and too late: A commentary on accident and incident reporting systems. In Near miss reporting as a safety tool (pp. 9-26). Butterworth-Heinemann.

Rasmussen, J. (1990). Human error and the problem of causality in analysis of accidents. Philosophical Transactions of the Royal Society of London. B, Biological Sciences, 327(1241), 449-462.

Rasmussen, J. (1988). Coping safely with complex systems. In AAAS Annual Meeting 1988.

Cedergren, A., & Petersen, K. (2011). Prerequisites for learning from accident investigations – a cross-country comparison of national accident investigation boards. Safety Science, 49(8-9), 1238-1245.

Hopkins, A. (2000). Lessons from Longford: The Esso Gas Plant Explosion. Sydney: CCH Australia.

Lundberg, J., Rollenhagen, C., & Hollnagel, E. (2010). What you find is not always what you fix—How other aspects than causes of accidents decide recommendations for remedial actions. Accident Analysis & Prevention, 42(6), 2132-2139.

Manuele, F. A. (2016). Root-Causal Factors: Uncovering the Hows & Whys of Incidents. Professional Safety, 61(05), 48-55.


Gareth Lock

Gareth founded The Human Diver in January 2016 when he recognised that there was a gap in knowledge within the diving community when it came to human factors and non-technical skills. He decided to do something about it and has made waves ever since.
