Web Exclusive Articles
Since NASA contracted Boeing to develop the CST-100 module,
James Johnson, lead test and evaluation manager of the program, has seen the team progress – but also face development challenges along the way
The development of NASA’s next generation of spacecraft is proving to be a combined effort, with Boeing involved in many of the agency’s projects. One such program is the CST-100 (Crew Space Transportation) capsule – a vehicle designed to carry up to seven passengers and compatible with a number of launch vehicles: initially the Atlas V, but also the Delta IV and Falcon 9. As part of NASA’s Commercial Crew Development program, Boeing, in collaboration with Bigelow Aerospace, has been tasked with bringing the CST-100 – the primary function of which is to carry crew to the International Space Station – to the next level of design maturity.
Overall responsibility for development of the CST-100 lies with James Johnson, test and evaluation manager at Boeing’s Commercial Crew division. Having undertaken two development phases since 2009, Johnson is currently looking to move the program into the Commercial Crew Transportation Capability phase, planned for December 2014.
NASA recently revealed details of wind tunnel tests using a model of the CST-100 spacecraft integrated with its Atlas V rocket, but Johnson says those evaluations only scratch the surface. “The main objective of the wind tunnel work is to address our aero capabilities during ascent, and to ensure we can align our forces and moments so that when we reach the speeds we need to travel at, we don’t experience too much acoustic influence. We also need to confirm that the rocket can handle the amount of buffeting experienced,” he explains, revealing that the list of tests completed outside the tunnel is a long one. “We have parachute drop tests, engine firing tests in the desert, in-vacuum tests and structural tests. We are also working on a thermal protection testing system, and we have built an avionics systems integration lab specifically for this program.”
Testing in the tunnel
For the aforementioned wind tunnel tests, Johnson and his team visited Arnold Engineering Development Center (AEDC) in Tennessee, Langley Research Center in Virginia, and NASA’s own Ames Research Center in California. “We have to build unique models and use our own sensors, but where we can, we are trying to leverage what technology is already out there,” explains Johnson.
The reason for going outside Boeing’s own development sites was primarily the spacecraft-specific testing capabilities those facilities offer. “AEDC and Ames both have specific capabilities, and the one model that we used for abort testing was tested at both sites,” says Johnson, adding that work at Langley was predominantly concerned with engine calibration. “We could probably do it at one site, but they both have different strengths when it comes to testing,” he says. “Ames allows us to make manual changes to the model very quickly, while AEDC offers a more automated approach, which helps with our abort wind tunnel tests [of the Atlas V-mounted CST-100]. Here, we have one model that moves forward while the other moves backward, and we keep flying them in and out, allowing the two pieces to come apart and together again. When we need to calibrate the model, Langley’s wind tunnel can measure the engines in the tunnel and the thrust on the nozzles – so all three sites offer something different.”
Boeing used Ames’s Mach 1.24 transonic tunnel, as well as the smaller supersonic tunnel, which can test up to Mach 2. “The first model we had was used for abort testing, and we used both tunnels,” recalls Johnson. “The data for that correlated really well with our CFD and modeling predictions, which means our models were in good shape. We used a 7% model, which had a 12-14in diameter, and the follow-on is the same size, but with an updated design, and has thousands of data points.”
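The kind of correlation check Johnson describes – confirming that tunnel measurements track the CFD predictions – can be illustrated with a short sketch. This is not Boeing’s actual process; the function, the metrics and all the coefficient values below are hypothetical, chosen only to show what “correlated really well” might mean numerically.

```python
# Illustrative sketch only: checking tunnel data against CFD predictions
# at matched test conditions. All values are hypothetical.
import numpy as np

def correlation_report(cfd_pred, tunnel_meas):
    """Compare predicted vs. measured force coefficients and report
    simple agreement metrics."""
    cfd = np.asarray(cfd_pred, dtype=float)
    meas = np.asarray(tunnel_meas, dtype=float)
    residual = meas - cfd
    rms = float(np.sqrt(np.mean(residual ** 2)))
    worst = float(np.max(np.abs(residual)))
    # Pearson correlation: 1.0 means the model tracks the data perfectly.
    r = float(np.corrcoef(cfd, meas)[0, 1])
    return {"rms_error": rms, "max_error": worst, "pearson_r": r}

# Hypothetical drag-coefficient points at five matched Mach/alpha conditions.
report = correlation_report(
    cfd_pred=[0.95, 1.02, 1.10, 1.18, 1.25],
    tunnel_meas=[0.96, 1.01, 1.12, 1.17, 1.27],
)
print(report)
```

A real campaign would run such comparisons per sensor and per flight condition, flagging any point where the residual exceeds the model’s accepted uncertainty.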
Another model – built to 2% scale (5in diameter) but measuring the full length of the rocket (about 8ft) – was also tested at Ames. “Here we investigated force and moment by specifying the angle and a Mach number, and we went super- and transonic to prove the performance.
“The ultimate goal of all our test work is to correlate and validate our models,” says Johnson. “Everyday vehicles you can fly in all situations, but with these vehicles we can’t hit or fly every data point. When you’ve shown that your wind tunnel tests overlay directly onto your model, you can verify that the design will fly what you are asking it to fly. Once you verify it, you go into test flights, and then take that data to help execute the mission.”
The size of the models was in part determined by the need to have, in some cases, up to 350 acoustic sensors (plus wires) installed within them. This presented a packaging challenge for Johnson and his team, but it was essential in order to get the data needed for the development. “For simple force and moment tests, you can make smaller models, because fewer sensors are used. We are able to size it by changing parameters of the pressure and air density in the tunnel, and then you scale that up to what the full-size vehicle would see.”
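The scale-up Johnson mentions – adjusting tunnel pressure and air density, then extrapolating to the full-size vehicle – rests on nondimensional force coefficients: a force measured on the model is divided by tunnel dynamic pressure and model reference area, and the resulting coefficient is re-dimensionalized at flight conditions. The sketch below shows the principle; the areas, densities and velocities are invented for illustration and are not CST-100 values.

```python
# Minimal sketch of model-to-flight force scaling via a shared,
# dimensionless coefficient C = F / (q * S). All numbers hypothetical.

def dynamic_pressure(density, velocity):
    """q = 1/2 * rho * V^2, in Pa for SI inputs."""
    return 0.5 * density * velocity ** 2

def scale_force_to_flight(model_force, q_tunnel, area_model,
                          q_flight, area_full):
    """Convert a model-scale force to the equivalent full-scale force."""
    coefficient = model_force / (q_tunnel * area_model)
    return coefficient * q_flight * area_full

# A 7%-scale model: reference area scales with the square of the length ratio.
scale = 0.07
area_full = 16.0                     # m^2, hypothetical reference area
area_model = area_full * scale ** 2  # 0.0784 m^2

q_tunnel = dynamic_pressure(density=2.4, velocity=250.0)  # pressurized tunnel
q_flight = dynamic_pressure(density=0.4, velocity=600.0)  # ascent condition

full_scale_force = scale_force_to_flight(
    model_force=900.0, q_tunnel=q_tunnel,
    area_model=area_model, q_flight=q_flight, area_full=area_full)
print(f"{full_scale_force:.0f} N")
```

Raising tunnel density and pressure is what lets a small model reach flight-representative conditions despite its size, which is why the team could trade model scale against tunnel settings.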
In addition, two other major wind tunnel tests were conducted by United Launch Alliance (ULA). “One was the integrated buffet test, which used our model with the first half of the Atlas rocket,” says Johnson. “It measured the forces and the buffets on the rocket, the crew and service modules, the dual-engine Centaur and the first section of the booster of the Atlas. The other test we did was primarily gathering acoustic and buffet information, to check for vibrating parts on the model.”
Evaluation of the vehicle’s emergency detection system is also being conducted at ULA, according to Johnson. This unit alerts the crew if there is an issue with the rocket in flight, such as a pressure problem or any other anomaly.
Away from the wind tunnel, engine test work has been completed in the Mojave Desert, says Johnson. “Polaris built a bespoke test stand to hold our launch-abort engine during firings. A lot of the facilities we use could quickly pull things together, but we were keen to use our existing data infrastructure facilities. For GPS testing, we used chambers that already existed,” says the Boeing man.
As with all test programs, not everything has run smoothly for Johnson and his team on CST-100. “We had a challenge with the parachute drop test because, as we were preparing to start testing, we were told we had a stability problem,” he recalls. “We looked at the mass properties of the vehicle and its aerodynamics, and decided that instead of doing a static drop from the helicopter, a drop while the helicopter was moving would increase the stability.
“It didn’t matter if the drop article was static or moving; for us to get the data required, we just needed it the right way up,” he continues. “When we came up with the right parameters to drop it, we did two separate drops. The first was just the main parachutes, and then we deployed the drogues together with the mains. Both times it worked perfectly.”
Success on that occasion, then. But another, more recent challenge, Johnson recalls, arose from the team developing the CST-100 at the same time as they were learning about its design characteristics. “We were told that more points were needed for the wind tunnel test matrix, so the thousands of runs we were doing at AEDC had to be fully prepared in advance. We couldn’t just turn up and run a test. For one run they are used to putting in 100 data points, but for this new information we had to give them 5,000 runs. So when we made the changes, we had to correlate them in our model to make sure they were right before sending AEDC the data we needed. It all took a lot of extra hours,” admits Johnson.
Exercises such as the one detailed above create a huge amount of data to be processed – for one buffet testing session, it totaled 10TB. “For data collection we rely on each particular location, using specific tools to mine the data as it is generated from the test team. They can then send it to our team in real time, so by the time we get to the next run we can see what we need to achieve and if we need to make any changes ahead of the test.
The vehicle’s data points are dependent on the phases of flight – a combination of pitch and yaw that puts given forces onto the vehicle. “After establishing the data points we need to figure out how many sensors there are on the vehicle collecting information at each point,” explains Johnson.
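A test matrix of the kind handed to AEDC is, at its core, a cross product of condition sweeps: every pitch/yaw combination at every Mach number of interest. The sketch below is a hypothetical illustration of how such a run list could be generated; the angle and Mach values are invented, not the program’s actual matrix.

```python
# Hypothetical sketch of a wind tunnel run matrix: each data point is a
# pitch/yaw combination at a given Mach number.
from itertools import product

pitch_deg = [-4, -2, 0, 2, 4]   # angle-of-attack sweep
yaw_deg = [-2, 0, 2]            # sideslip sweep
mach = [0.8, 0.95, 1.1, 1.24]   # trans/supersonic conditions

run_matrix = [
    {"run": i, "mach": m, "pitch": p, "yaw": y}
    for i, (m, p, y) in enumerate(product(mach, pitch_deg, yaw_deg), start=1)
]

print(len(run_matrix))   # 4 Mach x 5 pitch x 3 yaw = 60 runs
print(run_matrix[0])
```

The combinatorial growth here is why adding "more points" was so costly: each new sweep value multiplies, rather than adds to, the total run count that must be prepared and correlated in advance.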
Future program details
More wind tunnel work is expected, as well as further development of the overall system – beyond just the rocket and the spacecraft. “We need to work on the ground system, the mission control system and the launch control center – all need to be tested,” says Johnson. “We are conducting further hardware tests, such as airbag drop tests, to ensure that if we don’t land in the water correctly we can right the vehicle. We had a water recovery test where the crew climbed around on a model under water to make sure that they could get out onto the raft.” The team has also dropped the model from a big rig, to analyze how the airbags deflate.
There will then be the ‘runs for record’ stage, where the units will start their qualification and acceptance testing in accordance with the required standards: “The general directive standard is SMC-S-016 and we are testing to that, but the first person that tested to that standard said it was terrible because it doesn’t allow you to shape it to your event. As a result, we created our own quality standard that we design and test all our hardware to, and that is how the runs for record for the unit and the major assembly will be executed.” The CST-100 is set for a 2016 launch – with one manned and one unmanned test vehicle planned.
John Challen is a freelance aviation and automotive journalist