
Beyond the Floor: Why Meeting the Standard Isn't the Same as Being Safe
In 2022, I gave a presentation at Diving Talks called 'Compliance provides an illusion for safety in diving', alongside an InDepth article of the same title. The basic premise I was challenging was that as long as the instructor (or centre, or agency) was compliant with a standard, the diver would be safe. This article builds on that presentation, following deeper research into the academic literature around standards: who writes them, and what research in other domains shows about the contribution of standards to safer operations.
When I speak with people both inside and outside the diving industry, I encounter a chain of assumptions that rests on the existence of standards bodies such as the RSTC, WRSTC, RTC, and ISO. The thought chain goes: standards exist and have independent assurance; those standards are then followed; and, as a consequence, the activity is safe enough. What the end user rarely considers is that if something goes wrong and the lawyers come knocking, the existence of the standard can be used as a defence. The instructor followed the agency standard. The agency standard meets RSTC/RTC/ISO requirements. The rebreather complies with EN 14143 and other technical standards. Everyone has done what they were supposed to do.
This is a structurally neat description. It is also, on closer inspection, a closed loop in which the industry assesses its own work against criteria the industry itself has written.
Who actually writes the standards?
Take a moment to look at the membership of any of these bodies. The Recreational Scuba Training Council is composed of the major training agencies. The World Recreational Scuba Training Council is composed of the regional councils, which are themselves composed of training agencies. The Rebreather Training Council brings together rebreather manufacturers and training agencies. The ISO working groups that produce the recreational diving service standards draw their participants from those same agencies and manufacturers, often through national mirror committees that the agencies themselves nominate to.
This is not a conspiracy, and it is not a put-down of the activities being undertaken. It is the reality of how voluntary consensus standards work in most industries. The people who know the most about a domain are usually the people who operate in it commercially. Inviting them to write the standards is the only practical way to produce documents with technical credibility.
The structural issue is what happens next. The same organisations that contribute to writing the standard then assess their own products and courses against it. There is no independent body verifying that the standard is fit for the purpose of protecting divers, because the body that wrote the standard is also the body whose work is measured against it.
Lawyers and regulators have a phrase for this: regulatory capture. The economist George Stigler set it out in 1971, and the more recent work by Daniel Carpenter, David Moss, and James Kwak has refined it into something more useful for this scenario — cultural capture. Importantly, the regulator and the regulated are not in a corrupt arrangement. They are simply the same people, holding the same beliefs about what good practice looks like, drawing on the same shared experience, and reaching the same conclusions about what the standard should require. Outsiders with different worldviews in relation to safety and system safety — human factors specialists, organisational safety researchers, error management academics — are not excluded by malice. They are excluded because they are not part of the conversation that the standards body has decided is the conversation worth having.
Standards as legal shield as well as safety mechanism
The second function of these standards is what Sidney Dekker has been writing about for the last decade and what Drew Rae and David Provan call safety clutter: the accumulation of activities that protect organisations from liability without necessarily protecting people from harm. A manufacturer who designs a checklist on their own carries the full weight of liability for that checklist's adequacy. A manufacturer who can point upward and say their checklist follows the consensus position of an industry body has shifted the question from "is this checklist adequate?" to "did this manufacturer follow the prevailing industry view?" Those are very different legal questions, and the second is much easier to answer in the manufacturer's favour.
This works because of a doctrine that lawyers call the customary practice defence. If you did what your industry generally does, courts will usually treat that as evidence of reasonable care. The standard becomes the floor, and as long as you are on or above the floor, the question of whether the floor is in the right place rarely gets asked. The pan-industry body that produces the standard is not, in practical terms, suable. It has no operational accountability for any specific incident. It has diffuse membership, no individual decision-maker with a duty of care to any specific diver, and a constitutional remit that protects it from being held to account for outcomes. Liability diffuses upward into the structure and becomes very difficult to locate.
Mark Bovens calls this the accountability gap. Dennis Thompson, writing about political institutions, called it the problem of many hands. Ulrich Beck, in his work on risk society, gave it a sharper label — organised irresponsibility, a phrase he used to describe institutional arrangements rather than the character of the people within them. None of these writers had diving in mind, but what they describe is the kind of structure diving has built for itself, alongside many other industries.
Why the gap doesn't close
Here is the part that makes the situation harder to fix than it first appears. The gap between formal compliance and operational safety would close if accidents reliably exposed it, but they do not. Diving fatality investigations, when they happen at all, tend to identify a proximate cause — failure to follow procedure, failure to monitor, failure to abort — and stop there. The question of whether the procedure was designed in a way humans could realistically execute under the cognitive and physiological load of a real dive is not usually asked. The question of whether the training prepared the diver (or instructor) to recognise the conditions under which the procedure would degrade is not usually asked either. This focus on proximal causes or compliance is itself a structural feature, not a failure of investigators: investigations are often linked with legal cases, law enforcement investigations look to determine whether foul play is present and move on if not, and Coroners' inquests are primarily focused on cause of death, not cause of cause of death. Inquests could provide more, but they are not resourced or funded to do such a deep review.
Lundberg, Rollenhagen, and Hollnagel published a paper in 2009 with a title that has since become shorthand for this entire problem: What-You-Look-For-Is-What-You-Find (WYLFIWYF). Investigation models determine what counts as a finding. If your model assumes accidents are caused by individuals failing to follow procedures, your investigation will find individuals who failed to follow procedures. The procedures themselves — the standards that authorise them, the bodies that wrote those standards, the assumptions about human performance — sit outside the frame of inquiry that the investigation model has defined. Each accident then becomes evidence that the system is sound and that one person, on one day, fell short of it, because the system itself was never within scope.
Andrew Hopkins documented exactly this pattern in BP Texas City and again in the Macondo blowout. Compliance with industry standards was not just present; it was extensively documented. Catastrophic systemic failure was incubating underneath. The standards described what good practice was supposed to look like. The operational reality was that the standards had become decoupled from the work — a phenomenon that institutional theorists have been writing about since John Meyer and Brian Rowan's 1977 paper on formal structure as myth and ceremony. Organisations adopt formal structures to signal legitimacy to outsiders. Whether those structures change what happens at the sharp end is a separate question entirely, and often the answer is no.
The professional autonomy challenge
There is one more layer to this, and it helps explain why the literature on human factors in diving has not yet landed despite being available for the better part of two decades. Industries that develop their own standards tend to be slow to import safety science from neighbouring fields. The reason is not stupidity or laziness; it is that the legitimacy of a standards body depends on it being seen as the authoritative voice on what good practice looks like in its domain. To accept that aviation human factors, or medical resilience engineering, or maritime systems thinking has something to teach diving is to accept that the standards body's existing position was incomplete. That is a difficult concession for any institution to make, and harder still for an institution whose members have built their professional standing on being the recognised experts.
Charles Bosk wrote about this in medicine in 1979, and more recently with Mary Dixon-Woods in 2011. René Amalberti has written about it in his 2013 work on why some industries become ultra-safe and others stall. The pattern is consistent across domains: knowledge transfer between high-risk industries happens slowly when the receiving industry has a strong professional identity built around being the expert in its own domain. Diving is one such industry. Instructors and agency staff are, by training and by experience, the people who know diving from the inside. Amalberti's observation is that, in industries with this profile, comparative critique from outside — the suggestion that risk management is decades behind aviation, healthcare, or offshore energy — tends to land as a challenge to professional standing rather than as an offer of help. It is a predictable pattern, not a failing of the individuals inside it, and recognising it as predictable is the first step to working with it rather than around it.
What the structure actually produces
Pull these threads together and a coherent picture emerges. The standards bodies are populated by the agencies and manufacturers whose products and courses are then assessed against the standards. Compliance with the standards functions, in legal practice, as a defence as well as a marker of safety. The pan-industry character of the standards body means that accountability for outcomes diffuses upward into a structure that cannot be sued. Accident investigations rarely interrogate the standards themselves, so the absence of accident-derived pressure to change the standards is not evidence that they are working. And the professional identity of the standards body's members makes the import of external safety science feel, predictably, like a challenge to legitimacy rather than an offer of help.
Each individual link in this chain is doing work that, in isolation, is defensible. The diver followed the procedure they were taught. The instructor taught the procedure the agency required. The agency required what RSTC consensus indicated. RSTC consensus reflected the agreed position of the agencies. The agency checklist met the RTC guidance. The RTC guidance was the consensus of the agencies. The problem only becomes visible at the system level, where the chain closes back on itself and there is no external reference point against which the procedure, the training, the standard, or the checklist can be tested for whether it actually does the job of keeping divers alive. The result is a system that is structurally well adapted to defending itself in court and structurally less well adapted to learning.
What would change look like?
I am not arguing that standards bodies should be abolished. They do important work, and the alternative — every agency and manufacturer writing entirely independent procedures — would almost certainly be worse. The argument is more specific than that: the current structure produces a particular kind of blindness, and the blindness is structural rather than the product of any individual's choices. Fixing it requires changes that are genuinely difficult to propose from within, because they would constrain the very discretion that allows the bodies to function. Three changes would matter.
The first is genuine independent representation on standards committees — human factors specialists, organisational safety researchers, and operational divers who do not work for the agencies whose products are being assessed. Not as observers, not as one voice among many, but with sufficient weight that a unanimous-from-the-agencies position cannot be passed without external scrutiny. Constructive dissent is the key to reducing groupthink, as shown in much of the red-teaming literature.
The second is investigation reform. Every diving fatality investigation should look further back up the system to understand how and why standards influenced the behaviours of those at the sharp end. Was the standard written in a way that prioritised commercial viability and liability protection, or operational safety, or some combination of the two? The case of Linnea Mills is an example of where that trade-off contributed to a fatal outcome.
The third is publication. The internal deliberations of standards bodies, the alternatives that were considered and rejected, the dissenting positions — these should be on the record. At present they are not, and that opacity is what allows the appearance of consensus to mask what is often a narrower agreement than it appears.
None of this is new or unique to diving. Aviation went through versions of these reforms in the 1970s and 1980s. Healthcare has been working on its own version for the last twenty years. Offshore energy was forced into it by Piper Alpha and again by Macondo. The body of evidence on what works is large, accessible, and not yet widely drawn on by the diving community that stands to benefit from it. That is the irony. The literature exists. The case studies exist. The methods are documented. The diving industry, which would have to do less original work than almost any of its peer industries to import the relevant knowledge, is the industry that has done the least to import it so far.
The reason, I think, comes back to where this article started. Reforming the standards architecture from within requires the people inside it to first accept that the existing standards are incomplete, and the structure itself makes that a hard thing to do. The professional identity of the members, the legal usefulness of the current arrangement, the absence of accident-derived pressure to change, and the closed-loop nature of the consensus process all push in the same direction. The knowledge that would help sits outside the conversation. Asking the bodies to reform themselves is asking something genuinely difficult, and it is not a criticism of the people inside them to say so. It is a description of the position they are in.
Be better than yesterday is not just a personal commitment. At a system level, it requires structures that are capable of being better than yesterday — capable, in other words, of recognising that yesterday's consensus was incomplete, and of being held to account when it was. The diving industry's standards architecture, as currently constituted, is not yet such a structure. Saying this out loud, and saying it as a structural observation rather than as an accusation, is the first step toward changing it.
Suggested further reading
Amalberti, R. (2001). The paradoxes of almost totally safe transportation systems. Safety Science, 37(2–3), 109–126. https://www.sciencedirect.com/science/article/pii/S092575350000045X
Carpenter, D. and Moss, D. (eds.) (2014). Preventing Regulatory Capture. Cambridge University Press. https://www.cambridge.org/core/books/preventing-regulatory-capture/
Dekker, S. (2018). The Safety Anarchist. Routledge; and (2022). Compliance Capitalism. Routledge. https://www.amazon.co.uk/Safety-Anarchist-innovation-bureaucracy-compliance-ebook/dp/B0FCCZCTG5
Dixon-Woods, M., Yeung, K., and Bosk, C. L. (2011). Why is UK medicine no longer a self-regulating profession? The role of scandals involving "bad apple" doctors. Social Science & Medicine, 73(10), 1452–1459. https://doi.org/10.1016/j.socscimed.2011.08.031
Hopkins, A. (2008). Failure to Learn: The BP Texas City Refinery Disaster. CCH Australia. https://www.amazon.co.uk/Failure-Learn-Texas-Refinery-Disaster/dp/1921322446
Lundberg, J., Rollenhagen, C., and Hollnagel, E. (2009). What-You-Look-For-Is-What-You-Find: The consequences of underlying accident models in eight accident investigation manuals. Safety Science, 47(10), 1297–1311. https://www.sciencedirect.com/science/article/pii/S0925753509000137
Meyer, J. and Rowan, B. (1977). Institutionalized Organizations: Formal Structure as Myth and Ceremony. American Journal of Sociology, 83(2), 340–363. https://www.jstor.org/stable/pdf/2778293.pdf
Rae, A., Provan, D., Weber, D., and Dekker, S. (2018). Safety Clutter: The Accumulation and Persistence of 'Safety' Work That Does Not Contribute to Operational Safety. Policy and Practice in Health and Safety, 16(2). https://doi.org/10.1080/14773996.2018.1491147
Thompson, D. (1980). Moral Responsibility of Public Officials: The Problem of Many Hands. American Political Science Review, 74(4), 905–916.

