It’s no secret that the push for autonomous aircraft, at least in the civilian world, is fundamentally driven by economics.
In small aircraft like utility planes and helicopters, human pilots are disproportionately costly: not only do they demand a paycheck, they occupy a seat that could otherwise be taken by a paying passenger or cargo. In the emerging urban air mobility sector, margins are expected to be so tight that many people don’t think the industry can ever profitably scale so long as human pilots remain in the equation.
Beyond the profit motive, however, proponents of autonomous aircraft routinely make a safety case for taking pilots out of the cockpit, pointing to the high proportion of accidents that result from pilot error. A typical example can be found in a white paper published last year by Wisk, the electric vertical take-off and landing (eVTOL) developer backed by Boeing.
Wisk cites a Boeing finding that around 80% of airplane accidents today are due to human error, in contrast to the earliest days of flight, when 80% of accidents were due to mechanical failures. “By developing autonomous systems,” the paper argues, “Wisk will help eliminate these errors and create an air taxi system that is fundamentally safe to transport people without an operator on board.”
A parallel conversation has been taking place in the world of self-driving cars, or autonomous vehicles (AVs) — and recent developments in that space hold lessons for aviation.
Like aircraft manufacturers who have been content to blame pilots for 80% or more of accidents in their products, AV developers have made much of an oversimplified claim by the National Highway Traffic Safety Administration that 94% of serious car crashes are due to driver error. NHTSA used the stat on its website to tout the potential of AVs “to remove human error from the crash equation,” saving lives and reducing injuries.
AV developers were quick to run with the claim. As National Transportation Safety Board Chair Jennifer Homendy described in a recent interview, “NHTSA put [the stat] on its automated vehicles page, and all of a sudden all of the AV people were using it as a way of saying, ‘well, see? It’s all human error! Our car will just fix it.’”
The tide may be turning, however. The Department of Transportation’s recently released National Roadway Safety Strategy adopts what it calls a Safe System Approach. According to the document, “this differs significantly from a conventional safety approach in that it acknowledges both human mistakes and human vulnerability, and designs a redundant system to protect everyone.” Essentially, it recognizes that there are ways to achieve safety objectives other than inventing superior robot drivers — like redesigning roadway environments to promote safer speeds and make space for bicyclists and pedestrians.
Despite its ingrained propensity for blaming pilots, Part 121 commercial aviation is a triumph of safe systems design. Over a period of decades, airline operations in the U.S. and many other countries have evolved to be incredibly safe, high-profile accidents like the 737 Max crashes notwithstanding.
Today, many elements come together to sustain the industry’s enviable safety record: from how aircraft are designed and certified, to airline standard operating procedures and maintenance practices, to standards for airport infrastructure and a strong system of air traffic control. All of the Safe System Approach principles in the National Roadway Safety Strategy have already been embraced by commercial aviation, namely, that death and serious injuries are unacceptable, humans make mistakes, humans are vulnerable, responsibility is shared, safety is proactive, and redundancy is critical.
This approach has been so successful that even proponents of autonomy don’t really talk about making airline operations safer. While single-pilot and, ultimately, fully autonomous operations are being explored by OEMs and airlines as a way to cut costs and circumvent a future pilot shortage, from a safety perspective, the goal is simply to maintain the level of safety that exists today.
The auto manufacturers have “got a different bar than we have, they’ve got to be better than 40,000 [annual road deaths], we have to be as good as zero,” said Boeing’s vice president of airplane development Mike Sinnett in 2017, discussing the company’s autonomy strategy.
Autonomy seems to promise greater safety benefits for small aircraft like eVTOLs because small aircraft crash a lot more often than jetliners do, and usually for reasons labeled “pilot error.” When you look closely at these crashes, however, it’s apparent that the pilots were operating in a system far less robust than the one created for commercial airlines.
Take the January 2020 helicopter crash that killed nine people including Kobe Bryant. The Part 135 on-demand flight across the Los Angeles metro area was a prototypical urban air mobility mission, in a market that is being targeted by multiple eVTOL air taxi developers.
The NTSB identified the probable cause of the crash as the pilot’s decision to continue visual flight into instrument meteorological conditions, resulting in his spatial disorientation and loss of control. But he was also operating in a system that lacked practical instrument flight rules infrastructure for helicopters, and under the kind of pressure to complete the mission from which airline pilots are well insulated.
Although an autonomous aircraft presumably would not have succumbed to spatial disorientation, it wouldn’t have been allowed to randomly poke into clouds, either. The redesign of the system that would be necessary for efficient low-level operations by autonomous aircraft would make it safer for piloted aircraft, too — undermining the ostensible safety rationale for getting rid of pilots.
While it’s true that human pilots make mistakes, lots of them, developers of autonomous aircraft may have a tougher road ahead of them than they’d like to admit. In a paper published last year, Jon Holbrook, a cognitive scientist at NASA Langley Research Center, pointed out that most of what we know about human performance in aviation comes from studying relatively rare errors and failures, overlooking the many ways in which humans contribute to safety.
As an example, he cited an analysis of line operational safety audit data indicating that airline pilots intervene to manage aircraft malfunctions on 20% of normal flights. Extrapolating from this data suggests that these pilots intervene to keep flights safe over 157,000 times for every time that pilot error contributes to an accident resulting in a hull loss or fatality.
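Those two figures imply a strikingly low rate of pilot-error accidents per flight. A minimal back-of-envelope check, using only the two numbers cited above (the 20% intervention rate and the 157,000-to-1 ratio; the variable names are illustrative):

```python
# Back-of-envelope check of the extrapolation cited from Holbrook's paper.
# Inputs are the two figures quoted in the text; everything else is derived.
intervention_rate = 0.20              # pilots intervene on 20% of normal flights
interventions_per_accident = 157_000  # safety interventions per pilot-error
                                      # hull-loss or fatal accident

# If 1 in 5 flights involves an intervention, and interventions outnumber
# pilot-error accidents 157,000 to 1, the implied accident rate per flight is:
accident_rate = intervention_rate / interventions_per_accident

# Equivalently, the number of flights per pilot-error accident:
flights_per_accident = 1 / accident_rate

print(f"Implied pilot-error accident rate: {accident_rate:.2e} per flight")
print(f"That is roughly one per {flights_per_accident:,.0f} flights")
```

In other words, the cited ratio corresponds to roughly one pilot-error hull loss or fatality per 785,000 flights — consistent with the article’s point that pilots prevent far more accidents than they cause.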
“An assertion … that human error contributes to accidents, therefore removing humans will reduce accidents, ignores that humans are also a significant source of successful system performance, and in fact contribute to safety far more than they reduce safety,” Holbrook wrote. Removing pilots from the cockpit entirely is likely to be significantly more challenging than previous leaps in automation that, for example, eliminated the need for flight engineers.
Autonomous systems will no doubt continue to improve. On Tuesday, Sikorsky announced that, in conjunction with DARPA, it had completed the first fully autonomous flight of a UH-60 Black Hawk helicopter with no safety pilot on board, using the Matrix autonomy technology it has also fielded on small commercial aircraft, including a heavily modified FedEx Express ATR 42 used for testing single-pilot operations.
Developers of these technologies are also taking a fundamentally different approach than the one that underlies many of the automated systems in today’s aircraft, which count on human pilots intervening swiftly and correctly in the event the system fails — as was the case with the maneuvering characteristics augmentation system in the 737 Max. More robust, reliable automation could eliminate the need for pilots to intervene so frequently to manage aircraft malfunctions in the first place.
Nevertheless, pitching autonomy as the solution to pilot error is a distraction from the requirement to, in the words of the DOT, create “a redundant system to protect everyone.” Even with fallible human pilots taken as a given, if UAM operators can translate the protections of the Part 121 world to Part 135 operations, they will succeed in achieving the level of safety they think is required for public acceptance.
As for whether they can make a profit, that’s a separate question. The true test of autonomous aircraft will not be in whether they can make a flawed system safe, but in whether they can make a safe system affordable.