Ken Gabriel is widely regarded as a principal founder of the micro-electromechanical systems (MEMS) industry, which produces the accelerometers now commonly used in test and measurement applications. Draper Labs, where Gabriel now works, is a non-profit R&D organization spun out of MIT, perhaps most famous for developing the guidance computer for the Apollo Moon missions.
Aerospace Testing International spoke with Gabriel at the recent Farnborough International Airshow about testing technology and the relationship between testing and innovation.
What does Draper Labs do?
We take technologies and integrate them into a new capability that is greater than the sum of its parts. We have about 1,800 employees – we're not a little research laboratory. Half of our business has come from the same customer for 63 years: the strategic guidance systems for ballistic missiles for the US Navy and the Royal Navy. The culture isn't that we just do sensor technology; we do the prototyping, the fielding and the support – just not in a product. That makes it much easier to partner with us, because we are not going to compete with you.
Tell us about your involvement with MEMS.
I got into MEMS at Bell Labs straight out of graduate school. We were looking at ways of extending human capabilities into new domains. I wanted to build something that was up to 100 times smaller than me and could manipulate things. Semiconductor technology could go down to small enough dimensions and provide signal processing and computing – so why not build mechanical parts out of silicon? That was the eureka moment. Then DARPA (the Defense Advanced Research Projects Agency) approached me to start a program, which enabled me to push the technology into applications. That was more technology management: we focused on what new capabilities we could get out of the technology.
Do you think engineers focus too often on technology and not applications?
That engineers should focus on capability is something I believe very strongly, and it's the ethic I bring to Draper. A technology is only useful within the context of a capability – either enabling one or making it possible at all. Once you know you are trying to build something, like a belt-buckle-sized inertial guidance system, it focuses your thinking, and that's what delivers advances. It's also rare that a new capability is dependent on just one technology. The technology has to be in context.
How will autonomous and guidance systems develop next?
The issues with current autonomous systems are not errors in guidance, navigation and control (GNC); it's that they are not sufficiently aware of the context around them. Autonomy and GNC overlap in that autonomy needs contextual awareness to know what to do with GNC.
The next step for autonomy and GNC is a combination of improving that contextual awareness – understanding the space around the vehicle better – and improving how systems handle GPS-denied environments.
There are advances to be made in inertial sensors that will make them function accurately for longer without a GPS signal. Military strategic guidance systems can go for 40 minutes and remain accurate, but they are expensive, heavy systems. The systems in an iPhone or a car can run for only a few minutes before the errors stack up. That needs to be 20 or 30 minutes, and the capability is there to be harvested.
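To see why those errors stack up, consider that a constant accelerometer bias, integrated twice into position, grows quadratically with time. The sketch below is our illustration, not Gabriel's; the bias figures are rough assumptions for consumer- versus navigation-grade sensors.

```python
G = 9.81  # gravity, m/s^2

def position_error(bias_mg: float, minutes: float) -> float:
    """Position error (m) from a constant accelerometer bias after `minutes`."""
    bias = bias_mg * 1e-3 * G      # milli-g -> m/s^2
    t = minutes * 60.0             # minutes -> seconds
    return 0.5 * bias * t ** 2     # constant bias integrated twice

for grade, bias_mg in [("consumer-grade (~1 mg)", 1.0),
                       ("navigation-grade (~0.01 mg)", 0.01)]:
    for minutes in (3, 30):
        err = position_error(bias_mg, minutes)
        print(f"{grade:28s} after {minutes:2d} min: ~{err:,.0f} m drift")
```

Under these assumed biases, a consumer-grade sensor drifts by kilometers within half an hour, while a navigation-grade unit stays within a few hundred meters – the gap Gabriel says is there to be harvested.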
What do you see as the big changes coming up for aerospace?
Automation will be one of the main drivers for aerospace in the future – improved automation for safety, and automation in drones. Automation in drones will fundamentally change what happens in aviation and its societal impact. A great example is drones flying blood samples to a lab for analysis and sending the results to a smartphone.
Do certification and testing hold back innovation?
Safety is so paramount that you have to have testing. The consequences of not testing are so severe. But there are ways of incorporating testing as a natural part of development, instead of just as a certification or qualification gate.
A big part of that is the increased capability of model-based systems engineering. This tool means you don't have to wait until a system is entirely finished to test part of it. The model-based simulation can integrate a component virtually, recreating the rest of the system as required, to see how it performs within the entire system.
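As a rough illustration of that idea – ours, not Draper's – the sketch below drops a component under test into a simulated stand-in for the rest of the system. All class and signal names are hypothetical.

```python
from abc import ABC, abstractmethod

class Altimeter(ABC):
    """Interface the eventual flight sensor must satisfy."""
    @abstractmethod
    def altitude_m(self) -> float: ...

class SimulatedAltimeter(Altimeter):
    """Virtual stand-in for the not-yet-built sensor: replays a modeled climb."""
    def __init__(self) -> None:
        self._t = 0
    def altitude_m(self) -> float:
        self._t += 1
        return 5.0 * self._t  # simple 5 m-per-step climb model

def altitude_hold_controller(target_m: float, sensor: Altimeter) -> float:
    """Component under test: a toy proportional controller."""
    error = target_m - sensor.altitude_m()
    return max(-1.0, min(1.0, 0.01 * error))  # clamped actuator command

# Exercise the real controller logic inside the simulated system.
sensor = SimulatedAltimeter()
for step in range(5):
    cmd = altitude_hold_controller(target_m=100.0, sensor=sensor)
    print(f"step {step}: actuator command = {cmd:+.2f}")
```

The point is the interface: once the rest of the system is modeled behind it, the real component can be integrated and evaluated long before the physical system exists.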
Are there any other technologies you think will affect testing?
Hardware-in-the-loop will become a much more powerful technique that's available to more people, because the tools are getting more affordable and more capable. It hasn't completely infiltrated the industry yet, but it will.
Not only does it let you test parts of systems before the end of a project, it also reduces the need for experts at the front end. You'll be able to make better systems, because you can explore options in more detail at the beginning of a project. You always have to verify things, but it means you don't have to wait two or three years to find out that something you did early on is wrong.
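A minimal sketch of the hardware-in-the-loop pattern Gabriel describes – again our illustration, with toy dynamics and hypothetical names: a fixed-rate loop exchanges signals each tick between a simulated plant and the device under test, which is stubbed here by a function.

```python
import time

DT = 0.02  # 50 Hz loop rate, a common HIL tick

def controller_hardware(pitch_deg: float) -> float:
    """Stub for the device under test: returns an elevator command.
    In a real rig, this call is replaced by I/O to the controller board."""
    return -0.8 * pitch_deg

def run_hil(seconds: float = 0.2) -> None:
    pitch, pitch_rate = 5.0, 0.0  # start from an initial disturbance
    for _ in range(int(seconds / DT)):
        cmd = controller_hardware(pitch)                    # "hardware" step
        pitch_rate += (2.0 * cmd - 0.5 * pitch_rate) * DT   # toy pitch dynamics
        pitch += pitch_rate * DT
        print(f"pitch = {pitch:+6.2f} deg   cmd = {cmd:+6.2f}")
        time.sleep(DT)  # keep the loop near real time

run_hil()
```

Swapping the stub for real I/O is what turns this from pure simulation into hardware-in-the-loop: the physical controller is exercised against a modeled vehicle before the rest of the aircraft exists.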