Human Frailty vs Automation: Lessons from the 2014 Virgin Galactic Crash

A lack of automation permits the inevitable human errors to occur, while over-automation encourages overconfidence in a system's infallibility. So where do we draw the line between automation and non-automation? To explore this, we'll look at the circumstances in which the Virgin Galactic spaceplane crashed in 2014.

For some time now, Structures Centre has advocated against over-reliance on software packages, arguing that the simpler, faster and cruder methods of manual design, primarily hand calculations and sketches, remain essential and must not be relegated simply because automation has entered the design process.

To corroborate this, there are structural engineering failures in which over-reliance on automation, in the form of software, played the most critical role. The Hartford roof collapse is one classic example: over-reliance on a software package led an engineer to ignore every warning of a bad design, ultimately culminating in the collapse of a steel roof weighing over 1,200 tonnes. Still, one thing we can all agree on is that whilst there are dangers associated with the use of software and automation, it has made an immense contribution to engineering and simplified many of our most complex tasks. Jobs that would normally take weeks, possibly months, to accomplish can now be completed within minutes.

For the record, Structures Centre is not against the use of software packages, only the excessive use that invariably leads to over-reliance. Whilst many engineers subscribe to this view, there are quite a number who appear to be Luddites, condemning anything automated and insisting that everything be done manually. This writer has no quarrel with them personally, but sincerely does not think anybody should be subjected to the torture of carrying out a manual analysis when software would do the job within minutes; that is simply a display of Luddism. What is truly important is modelling correctly, interpreting software outputs correctly, and carrying out checks where necessary to spot errors. We cannot emphasize this enough.

Just as we have maintained that there are dangers associated with the use of software in engineering, it is equally important to state that there are at least as many risks, if not more, associated with failing to automate things that can be automated in an age of increasing complexity. So where do we draw the line between automation and non-automation? To explore this, we'll look at the circumstances in which the Virgin Galactic spaceplane crashed in 2014.

The Virgin Galactic Aircraft

As part of Virgin Galactic's quest for commercial spaceflight, two aircraft were developed: SpaceShipTwo and WhiteKnightTwo. WhiteKnightTwo is a catamaran-shaped carrier aircraft that climbs in a corkscrew to an altitude of about 50,000ft, with SpaceShipTwo hanging beneath it (Figure 1).

Figure 1: SpaceShipTwo and WhiteKnightTwo

At about 50,000ft, SpaceShipTwo detaches from WhiteKnightTwo, while the latter dips upwards due to the loss in weight, quickly getting out of the way. In a typical flight, SpaceShipTwo fires its rocket and accelerates, turning vertically upwards. It accelerates past the transonic range (0.9–1.1 Mach) and becomes supersonic, pressing the two pilots and passengers into their seats. Soon after, the rocket shuts off and SpaceShipTwo's momentum carries it upwards along an arc, crossing the peak, or apogee, before beginning its downward trajectory. As the vehicle traverses this arc, it pitches over so passengers can view the earth below, and they unbuckle for four minutes of weightlessness. Then, before gravity reasserts itself, they strap in, and the vehicle gains speed as it descends.

The most brilliant part is how SpaceShipTwo re-enters the earth's atmosphere. Unlike NASA's Space Shuttle or the Apollo Command Module, heat generation during re-entry is not a serious problem, because SpaceShipTwo doesn't actually go into orbit. It does, however, have to slow its descent (by generating drag), and it needs to remain facing the 'right way up' throughout. Generating drag, however, is a double-edged sword: while it's needed at re-entry, it must be minimized during the boost phase. SpaceShipTwo resolves this conflict by using a feather system – it changes its shape during different stages of the flight. During the boost stage the feather remains un-deployed and drag is minimized, but during re-entry the feather is deployed, increasing drag and keeping the vehicle facing downwards, thus simultaneously solving the orientation issue¹.

After re-entry comes the landing. As SpaceShipTwo is now unpowered (its rocket having been spent), it behaves like a glider, and the pilots have one chance to land it safely – there is no power to abort and come back around for another pass. Controlling the vehicle in this phase is incredibly difficult, which is why only the very best aviators are accepted, many of whom are ex-NASA or aviation test pilots.

Hiring the best people appears to have strongly influenced the vehicle's development, which was undertaken by Scaled Composites LLC – a subsidiary of Northrop Grumman, the same Grumman that built the iconic Apollo Lunar Module, the spidery vehicle that put Neil Armstrong and Buzz Aldrin on the moon. One of the philosophies adopted in SpaceShipTwo's design was that automation should be minimized, with control left to the pilots. (This is quite a departure from NASA's Shuttle programme, where re-entry was almost entirely computer-controlled – in fact, it is doubted that a human could fly it unaided.²) Scaled Composites took the view that minimizing automation also minimized the number of systems that could go wrong. Pilot intuition, reflexes and control would be the first and last line of defense.

The Crash

On the morning of 31 October 2014, WhiteKnightTwo took SpaceShipTwo up to 46,400ft (14,142m)¹. The test plan called for SpaceShipTwo to fire its rocket, then deploy its feather and glide back to the spaceport. It was the deployment of the feather that transformed the test into a tragedy.

Feather deployment has two stages: unlocking and deployment. The co-pilot manually ‘unlocks’ the mechanical lock that keeps the feather in place, then both the pilot and co-pilot ‘deploy’ the feather by pulling two levers that activate actuators that rotate the feather through 60°. However, there is only a narrow window when the co-pilot can unlock the feather, which is when the vehicle is travelling between 1.4 Mach and 1.8 Mach.

Once above 1.4 Mach, the aerodynamic forces acting on the feather prevent its deployment, and since the actuators used to achieve deployment are not designed to prevent deployment, the feather can be safely unlocked at this speed without fear of the feather deploying unintentionally. However, below speeds of 1.4 Mach, during the transonic range (0.9–1.1 Mach), the aerodynamic forces acting on the feather are such that they act not to prevent, but to cause, deployment. Therefore, unlocking below 1.4 Mach can result in accidental deployment.

The maximum of 1.8 Mach exists for safety reasons, providing a safe abort speed should the locking mechanism malfunction and the feather remain locked. If the feather is not unlocked by 1.8 Mach, then the pilots have to abort the mission. Aborting at this speed, by shutting down the rocket and minimizing the height or apogee the vehicle would attain, means they can mitigate the hazards of re-entering with an un-deployed feather. Thus, if the pilots attempt to unlock before 1.8 Mach, and a malfunction presents itself, there is time to abort the mission and re-enter safely.
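
To make these rules concrete, the sketch below shows, in Python, the kind of software interlock that could enforce the unlock window automatically. It is purely illustrative – the function names and structure are this writer's assumptions, not SpaceShipTwo's actual avionics, which (as we shall see) deliberately contained no such guard. Only the 1.4 and 1.8 Mach thresholds come from the NTSB account above.

```python
# Hypothetical illustration only: a minimal interlock enforcing the
# feather-unlock rules described above. SpaceShipTwo had no such guard.

UNLOCK_MIN_MACH = 1.4  # below this, aerodynamic forces act to deploy the feather
UNLOCK_MAX_MACH = 1.8  # past this, a still-locked feather means the mission must abort

def unlock_permitted(mach: float) -> bool:
    """Permit manual unlocking only inside the safe 1.4-1.8 Mach window."""
    return UNLOCK_MIN_MACH <= mach <= UNLOCK_MAX_MACH

def abort_required(mach: float, feather_unlocked: bool) -> bool:
    """Abort if the feather is still locked beyond the 1.8 Mach safety limit."""
    return mach > UNLOCK_MAX_MACH and not feather_unlocked

# An unlock command at 0.82 Mach, as happened on 31 October 2014,
# would simply have been rejected by such a guard:
assert not unlock_permitted(0.82)  # transonic: unlocking risks uncommanded deployment
assert unlock_permitted(1.5)       # inside the window: safe to unlock
assert abort_required(1.85, feather_unlocked=False)
```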

On the morning of 31 October 2014, SpaceShipTwo reached 0.8 Mach, and the forward-facing cockpit camera and flight data indicate that the co-pilot, Alsbury, called out the airspeed as "0.8 Mach"¹. He then moved the feather handle from the 'locked' to the 'unlocked' position. Thus, unlocking occurred not at 1.4 Mach but at about 0.82 Mach – in the transonic range, when the aerodynamic forces act to deploy the feather. These forces were sufficient to overcome the capacity of the deployment actuators, and the feather deployed shortly after unlocking³. The resulting increase in drag caused the vehicle to lose aerodynamic stability and break apart – it was essentially folded in half.

Human Frailty

The National Transportation Safety Board (NTSB) investigation found that the co-pilot had spent many hours in the simulator, where he had repeatedly unlocked the feather at the correct speed of 1.4 Mach. So why did he unlock it at 0.82 Mach during the test? While we are unlikely ever to know precisely why, the NTSB identified a number of issues that likely affected his performance on 31 October.

Firstly, from a physical perspective, flying SpaceShipTwo was quite different to being in the simulator. During the actual flight, the pilot and co-pilot were subjected to significant G-forces and vibrations that were absent in simulations.

Secondly, there was the workload, which was intense. Over a short period of time, the pilots were required to perform a significant number of tasks from memory, with the NTSB concluding that such a high-pressure environment was likely to produce human error – even if tasks had previously been performed successfully in a simulator.

Thirdly, the fear of having to abort the mission if the feather wasn't unlocked by 1.8 Mach might have put pressure on the co-pilot to unlock early. But despite this pressure, was the co-pilot not aware that there was a risk of catastrophic failure from early unlocking? It transpires that Scaled Composites was very aware of the catastrophic consequences of early deployment during the boost phase, but the NTSB found that "there was insufficient evidence to determine whether the pilots fully understood the potential consequences of unlocking the feather early"¹.

Which raises the most perplexing question of all: given the known catastrophic outcome, why did Scaled Composites not provide some form of automated system to prevent early unlocking? Disturbingly, the NTSB found that Scaled Composites did not include such a system because it simply never envisaged that such qualified pilots would make such a mistake.

Lessons

There are indeed lessons to be learnt from this failure. One of them reinforces the famous sayings "nobody is above mistakes" and "to err is human". But there is an observation by James Reason that summarizes the lesson from this failure more succinctly: "it is often the best people who make the worst mistakes"⁴.

The philosophy of minimal automation in the vehicle's design left a critical vulnerability: no capability to prevent or manage a human error. It was foremost a system failure – one that ignored human frailty, a constant threat regardless of the expertise and experience of the individuals involved. Ironically, the cause of this crash is diametrically opposed to that of the Hartford Civic Centre roof collapse previously discussed: there, it was over-reliance on automation (in the form of computer software), rather than a lack of it, that caused the failure.

Just as the Hartford Civic Centre failure is a lesson to engineers who keep blind faith in software packages, this crash is a lesson to the Luddite structural engineer who is so egotistic about his hand-calculation prowess that he maintains a strong contempt for anything automated. It is a lesson that human frailty is a constant threat.

We can never guarantee the reliability of human judgement: a lack of automation permits the inevitable human errors to occur, while over-automation encourages overconfidence in a system's infallibility and relegates human intuition to the sidelines. Between these two extremes we must always find a balance, and that is where we draw the line.

See: The Hartford Civic Centre Roof Collapse

References

  1. National Transportation Safety Board (2015) Aerospace Accident Report NTSB/AAR-15/02: In-Flight Breakup During Test Flight, Scaled Composites SpaceShipTwo, N339SS, Near Koehn Dry Lake, California, October 31, 2014 [Online] Available at: www.ntsb.gov/investigations/AccidentReports/Reports/AAR1502.pdf (Accessed: July 2021)
  2. Hall J. L. (2003) 'Columbia and Challenger: organizational failure at NASA', Space Policy, 19 (4), pp. 239–247.
  3. National Transportation Safety Board (2015) Video shown during NTSB Board Meeting on in-flight breakup of SpaceShipTwo near Mojave, CA [Online] Available at: www.youtube.com/watch?t=15&v=Qv8Y0aMNix8 (Accessed: July 2021)
  4. Reason J. T. (1990) Human Error, Cambridge, UK: Cambridge University Press.
  5. Brady S. (2015) 'Human fallibility and automation: lessons of the Virgin Galactic crash', The Structural Engineer, 93 (9), pp. 20–22.

Thank you for reading; let's have your thoughts in the comment box.
