Diligent application of hard-earned experience has made safety a hallmark of modern aviation; let’s not lose our grip on the basics of sound technique.
Air accidents are caused by human error, technical fault or some combination of the two. For human error, consider the May 2018 Global Air Boeing 737 tragedy on take-off from Havana, where “miscalculation of aircraft weight and balance” left the twinjet uncontrollable once airborne. For technical fault, consider the April 2019 American Airlines 737 that diverted to a safe landing in Seattle after runway debris, struck during take-off from Vancouver, damaged its hydraulics. And May’s Aeroflot Superjet 100 hard landing and fire at Moscow, following a lightning strike, may prove to have had both technical and human origins.
As evidenced by long-running downward trends in incidents and fatalities, the safety system has been hugely effective. Certification, rigorous post-incident investigation and clear guidance – or orders – on remedial measures in engineering, maintenance, operations and training all work together to make flying ever safer.
But it may be time to consider a fourth category of root cause. Ongoing investigation should in due course explain the fatal loss-of-control crashes that grounded the 737 Max fleet. At this point it seems likely that faulty sensor readings will be shown to have kicked off the chains of events that killed nearly 350 people, though other proximate causes may be identified. Restoring Max aircraft to flight will at the very least involve software modifications and extra pilot training – and it is too early to say whether similar recommendations will be extended to Airbus types, which have not been immune to sensor-related trouble.
It may not, though, be so simple to move on from the Max tragedies. The two crashes have led to calls for a systematic review of certification, regulation and pilot training.
Lurking beneath is a deeper problem. As in many other aspects of modern life, increasingly sophisticated computerisation is delivering benefits, but often at a human cost of which we are barely aware. In aviation, automated systems have without question made for safer flying, with human pilots on hand to intervene when necessary. But as aircraft automation becomes increasingly sophisticated, can we be confident that pilots will know when – or how – to intervene?
That is, do pilots and even engineers still know where “computer tasks” should end and “human tasks” begin? If not, aviation will – in its pursuit of sophistication and economy – have undermined the hard-won confidence of passengers and crew.