The Japanese trifecta of tragedy has some people rethinking risk-assessment models and catastrophic risk in general. And maybe those of us in aviation should as well. After all, these models are only as good as the assumptions made about the likelihood of an event–or a series of events–occurring. In Japan, someone underestimated the potential threat of a 9.0 earthquake and of a tsunami higher than the seawalls around the nuclear power plants, and overestimated the plants’ ability to keep cooling in the face of those natural disasters. A lot of “someones,” including government regulators, presumably went along with risk assessments that minimized the potential strength of an earthquake, the height of an eventual tsunami and, of course, a lot of things related to protecting the nuclear rods from meltdown and the safety of the method used to store spent fuel.
It’s only now, in the weeks after the stunning loss of life and still-unfolding nuclear drama, that experts are saying that the scale of the natural disasters was in fact predictable and damage to the nuclear plants was not only foreseeable but preventable. Earthquakes greater than those the nuclear plants were built to withstand were within the potential range for the area and so on. Is this just a case of 20-20 hindsight? Or were there powerful factors–the pursuit of profits–at work, clouding people’s visions of risks and disastrous outcomes if all the remote, but catastrophic, risks lined up? How good is a risk assessment model if worst-case scenarios are discounted?
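That last question can be made concrete with a toy expected-loss calculation. All of the numbers below are hypothetical, invented purely for illustration; the point is structural: if a model simply drops scenarios below some probability cutoff as "too remote," it can discard exactly the scenarios that dominate the expected loss.

```python
# Toy illustration with invented numbers: expected loss when rare,
# catastrophic scenarios are kept vs. truncated out of the model.
scenarios = [
    # (annual probability, loss in $ millions) -- hypothetical values
    (0.10,          10),       # routine damage
    (0.01,         100),       # serious accident
    (0.0001, 1_000_000),       # remote, catastrophic (meltdown-scale)
]

def expected_loss(scenarios, min_probability=0.0):
    """Sum p * loss over scenarios, ignoring any scenario whose
    probability falls below min_probability (the 'discount')."""
    return sum(p * loss for p, loss in scenarios if p >= min_probability)

full      = expected_loss(scenarios)         # 1 + 1 + 100 = 102
truncated = expected_loss(scenarios, 0.001)  # 1 + 1       = 2
# Dropping the "too remote to happen" scenario hides ~98% of the risk.
```

With these made-up figures, the remote catastrophe contributes 100 of the 102 expected units of loss; a model that discounts it reports a comfortingly small number while the real exposure is unchanged.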
Aviation Risk Management
I assume you all know where this is leading. Risk-assessment models have become popular in aviation. They seem to lend a scientific aura to what is in effect a guessing game of what the future will bring. Maybe decision makers are more comfortable taking risks when there’s a patina of scientific certitude about cutting corners–I mean, not planning for risks that are too remote ever to happen. Until they do, that is. But what about areas of risk where no risk-assessment models are used at all? How do industry experts and the government regulators entrusted with our safety assess these risks and act to mitigate them?
Here are two of my favorite examples of the FAA playing Russian roulette with passengers’ lives. Yes, manufacturers and airlines bear primary responsibility for doing the right thing, but getting them to act contrary to their shareholders’ interests is pretty darn tough without the government providing the backbone. (If Japan’s experience is too remote, recall the BP debacle in the Gulf of Mexico not too long ago.)
Unvetted Maintenance Manuals. The FAA has finally owned up to the fact that maintenance manuals are frequently wrong. In a refreshing safety bulletin released several months ago by the FAASTeam, the FAA plainly acknowledged what those of us in maintenance have known since pretty much the dawn of aircraft manufacturing: maintenance manuals are often misleading, if not outright incorrect.
While I commended the FAA for acknowledging this problem, I was rather disheartened that it placed the burden of correcting those errors on the mechanics who find them.
Notwithstanding the FAA safety bulletin, aircraft manufacturers have remained strangely silent, as have air carriers and repair stations. It’s the dirty little secret that everyone knows but no one wants to mention. Have those poor FAASTeam inspectors who put out the safety bulletin been taken to the woodshed for speaking the truth? I sure hope not. But it wouldn’t surprise me at all. There must be some pretty upset industry lobbyists wondering what some creative lawyers will do with the FAA’s safety bulletin. Class action, anyone?
I haven’t seen the risk assessment model for failing to vet maintenance manuals before they are released. Nor have I seen one for failing to set up a process for correcting them, knowing that mechanics regularly find problems with them.
Is it so far-fetched to imagine a mechanic, working overtime at an understaffed facility under the usual time crunch to move work, improperly repairing an aircraft because of an incorrect manual, and the work then being released as airworthy? Is it that far a stretch to imagine that this could, under all the wrong circumstances, lead to an accident? It has happened before.
Unrestrained Lap Children. If you were going to risk anyone’s life on an aircraft, wouldn’t you want it to be someone capable of assuming that risk for himself or herself? When it comes to unrestrained lap children, it is precisely the youngest, most vulnerable passengers–those least able to assume risk for themselves–whom the FAA, with the complicity of the airlines, fails to protect.
When it comes to justifying its failure to mandate restraints for children under the age of two, the FAA uses the immediate past to justify its action (or, more correctly, inaction). According to the agency, no child under the age of two has been killed or injured because of a lack of safety restraints in the last X number of years. But in the more distant past, children have been killed or seriously injured because they were not properly restrained.
So the FAA’s perverse risk assessment, at least when it comes to lap children, apparently includes not just an analysis of the likelihood of an event occurring in the future but its actual occurrence in the last few years. Good thing the agency is not doing the risk analysis for the country’s nuclear plants or we’d really be in trouble!
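The statistical weakness of "nothing bad has happened recently" can be made concrete with the well-known "rule of three" from statistics: observing zero events in n independent trials only bounds the per-trial event probability at roughly 3/n with 95% confidence; it does not show the probability is zero, and it says nothing about the severity when the event finally occurs. The sketch below (my illustration, not any FAA methodology) compares the exact bound with that approximation.

```python
def exact_upper_bound(n: int, conf: float = 0.95) -> float:
    """Largest per-trial probability p still consistent, at the given
    confidence level, with seeing zero events in n trials.
    Solves 1 - (1 - p)**n = conf for p."""
    return 1.0 - (1.0 - conf) ** (1.0 / n)

def rule_of_three(n: int) -> float:
    """Classic approximation to the 95% bound above."""
    return 3.0 / n

# A short, event-free record tells you surprisingly little:
bound_10yr = exact_upper_bound(10)   # ~0.259: a 1-in-4 annual risk
                                     # is still consistent with ten
                                     # clean years at 95% confidence
approx_10yr = rule_of_three(10)      # 0.3
```

In other words, a handful of accident-free years is weak evidence that the underlying risk is small, which is exactly why backward-looking "it hasn't happened lately" reasoning makes for a perverse risk assessment.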
Unvetted maintenance manuals and unrestrained lap children are just two examples of the FAA’s failure to assess and acknowledge risk properly. There are others that I have written about and will continue to write about. They include bird strikes, unaudited outsourcing of maintenance, maintenance workers who aren’t sufficiently fluent in English to read and understand manuals and work cards…the list goes on. I’m always interested in hearing from readers about what you see as unacknowledged risks.