Slowly but surely, the dream of fully autonomous flight is getting closer. In the USA, the Federal Aviation Administration (FAA) is putting the finishing touches on its beyond-visual-line-of-sight regulations, with a draft of the Part 108 rule expected by the end of the year. At the same time, European Union member states are gradually granting permits that allow more drone developers to operate their aircraft out of sight of their pilots.
But for beyond-visual-line-of-sight (BVLOS) operations to really take off, robust and reliable detect-and-avoid software must be developed: an on-board system that can see and respond safely to its airborne environment in real time. Unmanned aircraft system (UAS) developers are taking different approaches to the technology underpinning these systems, but many are building systems that rely, in whole or in part, on camera-based computer vision.
Data clouds
As with similar software developed for self-driving cars, these camera-based systems depend on large volumes of data to train algorithms that can recognize obstacles in the environment. To this end, NASA researchers have recently developed a camera pod with the goal of creating a data cloud that UAS developers can use to train their detect-and-avoid systems.
The pod, called the Airborne Instrumentation for Real-world Video of Urban Environments (AIRVUE), was developed and built at NASA’s Armstrong Flight Research Center in Edwards, California. It was tested this summer on a piloted helicopter at NASA’s Kennedy Space Center in Florida.
According to Nelson Brown, lead NASA researcher on AIRVUE, the catalyst for the project came from researchers at NASA’s Ames Research Center, who were looking for real-world datasets for the development of advanced air mobility systems.
Brown and his team started providing the researchers with approach and landing data using their drone hardware, but soon they realized that they required “a custom pod that used bespoke hardware to really reach the level of quality of data that we needed to continue the research.”
The prototype pod they developed can be fitted to NASA’s own helicopters. But longer-term, says Brown, the aim is to create a sensor pod design that can be loaned to helicopter operators.
“After their helicopter lands, we will pull the data back and through a curation process, extract segments that are shared with NASA researchers and, more powerfully than that, we then host it online for the more general research community,” he adds.
Camera types and tags
While the data from the AIRVUE project will help speed the development of camera-based detect-and-avoid systems, the pod itself is “a passive data collection tool,” says Brown. As such, it has only limited computer vision capability on board, sufficient to detect problems in the data capture such as “a foggy lens or an intermittent data signal.”
All the visual data captured by the pod is tagged with a high-precision GPS/INS (inertial navigation system) trajectory, meaning it can be cross-referenced with geospatial data for a given area and drone developers can compare the accuracy of their navigation systems against real-world geography.
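To make those two ideas concrete, the sketch below shows how a captured frame might be flagged for obvious quality problems and tied to the nearest GPS/INS trajectory sample. It is only an illustration of the general approach: the field names, data structures and blur threshold are assumptions, not details of NASA’s AIRVUE software.

```python
# Illustrative sketch only: names, fields and thresholds are hypothetical,
# not taken from NASA's AIRVUE software.
from bisect import bisect_left
from dataclasses import dataclass

import numpy as np


@dataclass
class TrajectorySample:
    t: float      # time of capture, seconds
    lat: float    # degrees
    lon: float    # degrees
    alt_m: float  # altitude in metres


def nearest_sample(trajectory: list[TrajectorySample], t_frame: float) -> TrajectorySample:
    """Pick the GPS/INS sample closest in time to a video frame."""
    times = [s.t for s in trajectory]
    i = bisect_left(times, t_frame)
    candidates = trajectory[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s.t - t_frame))


def sharpness(gray: np.ndarray) -> float:
    """Variance of a discrete Laplacian: a common heuristic for a foggy or defocused lens."""
    g = np.asarray(gray, dtype=float)
    lap = (-4.0 * g
           + np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0)
           + np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1))
    return float(lap.var())


def tag_frame(gray: np.ndarray, t_frame: float,
              trajectory: list[TrajectorySample],
              blur_threshold: float = 50.0) -> dict:
    """Attach trajectory metadata and a basic quality flag to one frame."""
    s = nearest_sample(trajectory, t_frame)
    score = sharpness(gray)
    return {
        "t": t_frame,
        "lat": s.lat, "lon": s.lon, "alt_m": s.alt_m,
        "sharpness": score,
        "quality_ok": score > blur_threshold,
    }
```

A real curation pipeline would presumably also look for gaps in the frame timestamps to catch the intermittent data signals Brown mentions.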
For the cameras themselves, Brown and his team experimented with infrared cameras but found little demand for them among researchers. They have opted instead for standard RGB cameras, albeit with several requirements particular to the use case.
Brown says, “We need cameras that we can electronically control because we do not have a person there to adjust the focus. We need control software for the cameras. That drives us to cameras that are mainly designed for automotive, aerospace or other industrial uses.”
Another requirement for the cameras is the capability to insert a timestamp into the footage. According to Brown, many consumer cameras such as GoPros are ruled out because they cannot be integrated with time code standards like IRIG-G. So far, Brown and his team have tried out several different industrial cameras on the prototype pod.
They have found that most are sufficiently ruggedized to withstand the high-vibration environment of helicopter flight, although NASA designers have built a custom mount to protect the lens. The research team are also able to test the durability of the cameras through in-house vibration testing before installing them on the helicopter.
High resolution needs
While Brown and his team continue their search for the ideal off-the-shelf camera, Switzerland-based Daedalean have taken matters into their own hands. Frustrated by the lack of a suitable camera on the market, the company, which is developing autonomous piloting software systems for civil aircraft, decided to make its own.
According to Daedalean’s vice president of engineering Sylvain Alarie, the camera it has developed is similar in functionality to conventional cameras but with a much higher resolution.
Most aerospace cameras are designed for video, which means the image quality is not sufficient for computer vision systems, which need to find patterns in pixel-level details and color aberrations too small for the human eye to detect.
“Usually, off-the-shelf cameras provide good resolution and sharpness in the center of the image, but it’s important for our system to have good quality over the complete field of view,” says Alarie. “Our camera provides sharpness all over the image.”
While there are cameras made specifically for computer vision applications with enough resolution and contrast, they are usually unable to withstand the extreme temperatures and vibrations of flight. “We are working on the cameras to be certifiable so that they reach the environmental testing standards required for aviation,” says Alarie.
Daedalean’s camera was developed with camera-maker Kaya Instruments, lens-maker Thorlabs and optical solutions company Dontech, which helped create the camera’s glass cover. The integrated solution consists of the camera and a data processing unit for capturing and processing high-resolution images, and it features multiple elements to ensure data quality and security, including leak detection sensors and other components that safeguard its performance.
Daedalean’s computer vision system uses neural networks to process the camera footage in real time. A visual awareness suite based on it has been designed as a pilot aid able to detect non-cooperative traffic and ground obstacles such as masts and wires, or to provide positioning information in the absence of GPS. According to Alarie, possible use cases include search and rescue, emergency medical services, law enforcement and offshore cargo operations.
While the system performs tasks such as traffic and ground obstacle detection and navigation in real time based on what it sees, it also has an internal database storing what it has already seen: maps of the regions over which it has flown.
“The database is regularly updated during offline processing based on imagery recorded in-flight. That way – much like humans who remember areas they have seen previously – the system continuously extends and amends its knowledge of the environment,” says Philipp Krüsi, Daedalean’s director of engineering.
“You are updating your mental database with what you are seeing. That is the same thing as what we are doing.”
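Daedalean has not published the design of that database, but the offline update Krüsi describes can be pictured as a simple merge of newly recorded, georeferenced observations into a grid-keyed map store, as in the hypothetical sketch below. The tile scheme, field names and “most recent wins” rule are illustrative assumptions only.

```python
# Illustrative sketch only: Daedalean has not published its database design;
# the structure and merge rule below are assumptions.
from dataclasses import dataclass


@dataclass
class Observation:
    tile: tuple[int, int]  # grid cell index derived from latitude/longitude
    recorded_at: float     # timestamp of the source flight imagery
    features: bytes        # whatever the vision system extracts for that cell


def tile_for(lat: float, lon: float, cells_per_degree: int = 100) -> tuple[int, int]:
    """Map a position to a coarse grid cell (roughly 1 km at this resolution near the equator)."""
    return (int(lat * cells_per_degree), int(lon * cells_per_degree))


def merge_flight(database: dict[tuple[int, int], Observation],
                 new_observations: list[Observation]) -> None:
    """Offline step: extend the map with unseen cells and refresh cells seen again more recently."""
    for obs in new_observations:
        existing = database.get(obs.tile)
        if existing is None or obs.recorded_at > existing.recorded_at:
            database[obs.tile] = obs
```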
Alternative vision
Both NASA’s AIRVUE project and Daedalean’s computer vision system rely on the assumption that the future of detect-and-avoid will be camera-based. However, not all companies in the field agree with this assessment.
California-based Reliable Robotics, for example, is developing a detect-and-avoid navigation tool that relies on radar. The choice of a radar-based system was made after Reliable spent time studying the regulatory landscape around autonomous flight, says the company’s CEO, Robert Rose.
“If you want your technology or your system to be certified for civilian use, you need to follow standards that the FAA has accepted,” says Rose.
“To date, the FAA has only accepted one standard for sensors that can do detect-and-avoid, and that’s based on radar.”
This standard is the Radio Technical Commission for Aeronautics’ (RTCA) standard for technical and testing requirements, RTCA DO-366. In 2017, the FAA published a Technical Standard Order referencing the RTCA standard, which, among other requirements, specifies the volume of airspace the radar must cover, the distance at which objects must be detected, the required accuracy, and the number of moving objects that can be tracked simultaneously.
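The text of DO-366 is not reproduced here, but the shape of such a requirement set can be pictured as a simple conformance check over the categories listed above. Every number and field name in the sketch below is a placeholder for illustration, not a figure from the standard or from the FAA’s Technical Standard Order.

```python
# Illustrative only: the fields mirror the requirement categories named above;
# none of the values are those specified in RTCA DO-366 or the FAA TSO.
from dataclasses import dataclass


@dataclass
class RadarRequirements:
    azimuth_deg: float         # horizontal field of regard the radar must cover
    elevation_deg: float       # vertical field of regard
    detection_range_nm: float  # minimum distance at which an intruder must be detected
    max_position_error: float  # allowed measurement error
    min_tracks: int            # moving targets that must be tracked simultaneously


@dataclass
class RadarSpec:
    azimuth_deg: float
    elevation_deg: float
    detection_range_nm: float
    position_error: float
    max_tracks: int


def meets(spec: RadarSpec, req: RadarRequirements) -> bool:
    """Check a candidate sensor against each requirement category."""
    return (spec.azimuth_deg >= req.azimuth_deg
            and spec.elevation_deg >= req.elevation_deg
            and spec.detection_range_nm >= req.detection_range_nm
            and spec.position_error <= req.max_position_error
            and spec.max_tracks >= req.min_tracks)
```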
According to Rose, the absence of any other standard means that “if you want to fly an automated aircraft and have a detect-and-avoid system this decade, then it needs to be a radar.” As well as being “the only near-term viable option” for bringing detect-and-avoid to market, radar also has advantages over camera-based computer vision.
“For localization, we looked at camera systems in the early days of the company, but they have a number of drawbacks,” says Rose. “They don’t work in certain meteorological conditions, they don’t work when you fly through bugs – you get bug splatters in the lens. There are also issues with lighting, when flying directly into the sun.
“And so, we spent more time looking at other ways of localizing the aircraft position, and we figured out that aircraft already have many of the sensors that you would need to get a highly precise position estimate of the aircraft.
“We figured out that there are algorithms you can use that combine your GPS, that combine your VOR (very high frequency omni-directional range), your DME (distance measuring equipment), instrument landing systems, your AHRS (attitude and heading reference system) and your magnetometer. You can fuse all this data in software and come up with a very high precision estimate for where the aircraft is, and that’s good enough for landing.”
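Rose is describing classic multi-sensor fusion. A minimal way to picture it is sketched below: a VOR radial and a DME range from a station at a known location give one position estimate, a GPS receiver gives another, and the two are combined with inverse-variance weighting so the noisier source counts for less. The station position, noise figures and the simple weighted average are illustrative assumptions, not Reliable Robotics’ algorithm; a production system would more likely run a Kalman-style filter over all of the sensors Rose names.

```python
# Minimal sketch of the fusion idea described above, not Reliable Robotics' algorithm.
# Works in a local flat-earth frame (metres east/north of a reference point);
# all station coordinates and noise figures are assumptions.
import math

import numpy as np


def fix_from_vor_dme(station_east: float, station_north: float,
                     bearing_deg: float, range_m: float) -> np.ndarray:
    """A VOR radial plus a DME range from a known station yields a position estimate."""
    theta = math.radians(bearing_deg)  # bearing from the station, clockwise from north
    return np.array([station_east + range_m * math.sin(theta),
                     station_north + range_m * math.cos(theta)])


def fuse(estimates: list[np.ndarray], variances: list[float]) -> np.ndarray:
    """Inverse-variance weighting: noisier sources count for less in the combined position."""
    weights = np.array([1.0 / v for v in variances])
    stacked = np.vstack(estimates)
    return (weights[:, None] * stacked).sum(axis=0) / weights.sum()


# Example: a GPS fix and a VOR/DME fix that disagree by a few hundred metres.
gps_fix = np.array([1200.0, -300.0])  # east, north in metres
radio_fix = fix_from_vor_dme(0.0, 0.0, bearing_deg=104.0, range_m=1300.0)
fused = fuse([gps_fix, radio_fix], variances=[15.0**2, 120.0**2])
print(fused)  # lies close to the GPS fix, nudged slightly toward the radio fix
```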