Like them or loathe them, unmanned aerial vehicles are here to stay. But are they as challenging to test as their manned counterparts? Aerospace engineers Garnet Ridgway and Sophie Robinson investigate.
Garnet Ridgway: By far the most complex and interesting component of any aircraft is its pilot. Human pilots give an otherwise inert machine the ability to aviate, navigate and communicate, and have an unparalleled capability to make split-second decisions.
The fact remains, however, that flying is not a normal activity for the human body. The effects of altitude, temperature and g-forces constantly conspire to endanger pilots, and our attempts to alleviate them tend to be compensatory measures rather than comprehensive solutions. Despite wearing g-suits, the pilots of fast jets are still exposed to massive forces, and even the most powerful environmental control system can’t protect helicopter pilots from extremes of temperature. The result is that pilots of manned aircraft are personally invested in the safety of their aircraft through an often inextricable physical link. If this safety concern were negated by removing the human element, aircraft testing would be far less challenging.
From an engineering perspective, the human brain can be considered as a high-order dynamic system with relatively few inputs (visual, vestibular, aural, etc) and outputs (limb movements, speech, etc). However, these inputs and outputs are so strongly interlinked, or ‘coupled’, that the relationships between them are not always easily predictable. Humans can be proficient at performing two different tasks in isolation, yet attempting them simultaneously can cause a substantial drop in performance. For example, it has been found that intermittently switching to a near-field visual task severely degrades far-field visual acuity. In a search-and-rescue scenario this could literally be a matter of life and death. Designing a test program that encompasses such subtleties is a major challenge.
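To make the ‘dynamic system’ view concrete, here is a minimal sketch (not drawn from the article; the gain, reaction delay and rate-response vehicle are all illustrative assumptions) that treats the pilot as a simple gain-plus-delay element, in the spirit of McRuer’s crossover model of manual control. Lengthening the effective reaction delay, as might happen when attention is split between two tasks, measurably degrades closed-loop tracking:

```python
import numpy as np

def track(target, gain, delay_steps, dt=0.02):
    """Closed-loop tracking with a pure-gain, time-delayed 'pilot'
    controlling a rate-response vehicle (y_dot = u)."""
    n = len(target)
    y = np.zeros(n)      # vehicle output (e.g. pitch attitude)
    err = np.zeros(n)    # tracking error seen by the pilot
    for k in range(n - 1):
        err[k] = target[k] - y[k]
        # the pilot acts on the error perceived one reaction time ago
        e_delayed = err[k - delay_steps] if k >= delay_steps else 0.0
        y[k + 1] = y[k] + gain * e_delayed * dt
    return y

dt = 0.02
t = np.arange(0.0, 20.0, dt)
target = np.sin(0.4 * t)   # a slow sinusoidal tracking task

# A longer effective delay stands in for attention shared with a second task
for delay_s in (0.2, 0.35, 0.5):
    y = track(target, gain=2.0, delay_steps=round(delay_s / dt), dt=dt)
    rms = np.sqrt(np.mean((target - y) ** 2))
    print(f"reaction delay {delay_s:.2f} s -> RMS tracking error {rms:.3f}")
```

Even this toy model shows the closed-loop consequence of divided attention; a real pilot model, of course, has far more states and far richer coupling between them.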
Finally, the assessment of flight handling qualities and workload relies on an element of subjectivity and pilot opinion. Different pilots can react to the same test point in different ways, so defining and declaring compliance with pass criteria can be very difficult indeed. It could be argued that this is the true art of aircraft testing – quantifying the subtleties of the pilot-aircraft relationship. Although this still applies to a limited extent to remotely piloted vehicles, the future is undeniably autonomous. Two questions therefore arise: how are these skills to be retained, and do they really need to be?
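As one illustration of how that subjectivity is quantified in practice, handling qualities testers commonly use the Cooper-Harper scale, in which the pilot works through a decision tree to a rating between 1 (excellent) and 10 (uncontrollable). The sketch below simply encodes the conventional mapping from ratings to handling qualities levels; the function itself is illustrative rather than part of any particular test standard.

```python
def handling_qualities_level(cooper_harper_rating: int) -> str:
    """Map a Cooper-Harper rating (1 = excellent, 10 = uncontrollable)
    to the handling qualities level typically quoted in compliance
    statements, using the conventional 1-3 / 4-6 / 7-9 banding."""
    if not 1 <= cooper_harper_rating <= 10:
        raise ValueError("Cooper-Harper ratings run from 1 to 10")
    if cooper_harper_rating <= 3:
        return "Level 1: satisfactory without improvement"
    if cooper_harper_rating <= 6:
        return "Level 2: deficiencies warrant improvement"
    if cooper_harper_rating <= 9:
        return "Level 3: deficiencies require improvement"
    return "Uncontrollable: improvement mandatory"
```

Even with such a scale, two pilots can walk the decision tree to different ratings for the same test point – which is precisely the difficulty described above.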
Garnet Ridgway has a PhD from the University of Liverpool. He has designed cockpit instruments for Airbus and currently works for a leading UK-based aircraft test and evaluation organization.
Sophie Robinson: Unmanned aerial vehicles are swiftly becoming an important feature on both the military and civilian aviation landscapes. This rapid growth in use presents many testing challenges compared with piloted vehicles.
Testing of manned vehicles has around 100 years of thinking, standardization and process refinement behind it. UAVs have entered the marketplace in the past 15 years and the associated process and infrastructure have been racing to catch up.
Manned aircraft often have strongly defined roles – for example most, if not all, military rotorcraft can be placed into one (or more) of four roles: attack, scout, utility and cargo. The methods of testing aircraft in each of these roles have been honed and perfected, and, unsurprisingly, these roles and methods aren’t directly transferable to UAVs. Engineers have to be careful not to force a square peg into a round hole. The testing requirements for manned aircraft are also well known: performance requirements and safety standards are predefined and applicable across a wide range of aircraft. This isn’t necessarily the case for UAVs. While specifications do exist, they lack maturity and often can’t be applied across a variety of platforms. Differences in cost, size and capability make it impossible to standardize the testing methodologies used. The gulf in requirements between a hand-launched Raven UAV with a 4ft wingspan and a Global Hawk with a 131ft wingspan illustrates the broad spectrum of UAV operations – no single specification could assess both aircraft successfully.
Integration of UAVs into conventional airspace operations also proves problematic – often UAVs just aren’t designed to fit in. It can be difficult to obtain Certificates of Airworthiness and permits to fly, because until recently there were no airworthiness standards to comply with! Some UAVs also lack much of the standard equipment required to operate in normal airspace, such as transponders and traffic collision avoidance systems.
Despite all these challenges, UAVs represent a potentially revolutionary capability, with the ability to save money and lives, and both the military and civilian realms want to exploit this capability to its full potential as quickly as possible. This makes UAV testing one of the most exciting and fastest-moving fields in test and evaluation – and we engineers do love a challenge!
Sophie Robinson is currently finishing her PhD as part of the Flight Science and Technology research group within the Centre for Engineering Dynamics at the University of Liverpool. In the course of her research, Sophie regularly works with test pilots.