Astronauts on the International Space Station (ISS) follow procedures for everything. Those procedures, and the mechanism for viewing them, aren't great. As a graduate project working with the NASA Ames Research Center, we built a head-mounted procedure viewer prototype in order to address some of the issues with crew member efficiency.
Keep reading for more details on our process.
Crew members on the ISS spend most of their time carrying out scientific research. For each research task, they must follow a rigid, complex procedure. The procedures, and the way they are used, cause a considerable reduction in efficiency; procedures are static text documents and are only viewable on wall-mounted laptops throughout the station.
Every project needs a mission statement, right? Right.
We set out to explore operator workflow in isolated environments in order to optimize procedure execution.
A lot of the time, designers have some background knowledge of the domain they are designing for. That's not really the case when it comes to working on a space station. So to better inform our approach, we started with a literature review on topics like the psychology of spaceflight, the structure of the ISS, and the cognitive processing of tasks.
We paired the literature review with a competitive analysis of other procedure-like tools, such as medical procedure viewers, websites like Instructables.com, and the viewer currently in use on the ISS. Here, we focused on portability, rich media, customization, contextual awareness, and tool retrieval. Our analysis led us to a few recommendations:
Designing tools for astronauts comes with a unique constraint: you will never be able to do contextual research with your end users, because sending designers to space is prohibitively expensive. So, we had to make do by talking to anyone we could get our hands on.
We interviewed NASA experts within organizations like Operations Planning, Payload Science, and Wearable Computing to build up even more domain knowledge. We were also extremely fortunate to interview two astronauts with mission experience - one retired and one less than 6 months removed from commanding the ISS1.
If they looked at the procedure and saw that it was 20 pages long, literally, they'd be like, 'It's going to take me longer to read the procedure than to just do what I need to do.'
- Procedure writer
Since we couldn't research our end users in context, we went on a nationwide search to perform contextual research with individuals whose work resembled that of the ISS crew in some way. We met with commercial pilots, deep sea divers, lab technicians, paramedics, construction managers, and pit crews, chosen for the similarities their tasks shared with payload science and maintenance work.
Naturally, we synthesized our research through affinity diagrams (one domain-specific and one for analogous domains) and various other models (e.g. cultural, sequence, artifact). From this, we extracted eight key issues to address:
After several visioning sessions and activities, we narrowed down our vision to one utilizing a head-mounted display2. The HMD would allow for hands-free interaction through voice interaction and leverage augmented-reality (AR) to provide information in context within a crew's work environment.
Lo-fidelity testing, rapid iteration, readily available hardware - these are all imperative to prototyping. They are also all things that HMDs utterly fail at. Basically, if you want to test an application on an HMD, you have to build it. That is dumb. So, we had to find a way to iterate quickly and get feedback without building out functional software each time.
Gordon and I ended up building a prototyping tool that would let us use the actual HMD in tests but would display our wireframes instead of working software3. This allowed us to test six iterations within a few weeks.
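The core of that prototyping tool was a Wizard-of-Oz loop: a moderator listened for the tester's spoken command and served the matching static wireframe to the HMD (see footnote 3). A minimal sketch of that mapping, in Python; the command names and file paths here are illustrative, not taken from our actual tool:

```python
# Hypothetical command-to-wireframe mapping for a Wizard-of-Oz test session.
# The moderator types in whatever command the tester spoke, and the matching
# wireframe image is pushed to the HMD in place of working software.
WIREFRAMES = {
    "next step": "wireframes/step_02.png",
    "previous step": "wireframes/step_01.png",
    "show overview": "wireframes/overview.png",
}


def wireframe_for(command, wireframes=WIREFRAMES):
    """Map a spoken command (as typed by the moderator) to an image path.

    Returns None for unrecognized commands, so the moderator can ask the
    tester to repeat themselves instead of crashing the session.
    """
    return wireframes.get(command.strip().lower())
```

Because the "software" is just a lookup over static images, swapping in a new iteration of the design only requires exporting new wireframes, which is what let us get through six iterations in a few weeks.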
The final version was designed to have a pretty simple, linear structure (while allowing users to jump to anywhere if desired). It splits steps into individual screens and provides in-context information and AR overlays to address the verbosity of current procedures.
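That step-per-screen, linear-but-jumpable structure can be sketched as a simple navigation model. This is a hypothetical Python sketch of the idea; the class and field names are our own, not the prototype's actual code:

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    """One procedure step, shown on its own screen."""
    title: str
    body: str
    overlays: list = field(default_factory=list)  # in-context media / AR overlays


class ProcedureViewer:
    """Walks a procedure one step per screen, but allows jumping anywhere."""

    def __init__(self, steps):
        self.steps = steps
        self.index = 0  # 0-based index of the current step

    @property
    def current(self):
        return self.steps[self.index]

    def next(self):
        """Advance to the next step (no-op at the end of the procedure)."""
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current

    def previous(self):
        """Go back one step (no-op at the start)."""
        if self.index > 0:
            self.index -= 1
        return self.current

    def jump(self, step_number):
        """Jump directly to a 1-indexed step, e.g. via a voice command."""
        if not 1 <= step_number <= len(self.steps):
            raise ValueError(f"no step {step_number}")
        self.index = step_number - 1
        return self.current
```

Splitting steps into individual screens like this is what lets the viewer attach overlays and reference information to exactly the step they belong to, rather than burying them in a 20-page document.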
For hardware, we chose the Epson Moverio because it was one of the few available HMDs and affordable compared to the rest of the market. The downside is that the Moverio doesn't have any external sensors, so again we had to find a way to get what we needed. We ended up pairing the Moverio with a tablet to process all voice commands and handle the AR. The tablet then relayed what to display to the HMD. While not pretty — or ideal — it got the job done.
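The tablet-to-HMD relay amounts to serializing "what to display" into small messages. We haven't reproduced our actual wire format here; this is a hypothetical sketch of the idea using JSON, with field names of our own choosing:

```python
import json


def make_display_message(step_id, text, overlay=None):
    """Encode a display update on the tablet side as a compact JSON payload.

    The tablet does the heavy lifting (voice recognition, AR tracking) and
    sends the HMD only what it needs to render.
    """
    msg = {"type": "display", "step": step_id, "text": text}
    if overlay is not None:
        msg["overlay"] = overlay
    return json.dumps(msg)


def parse_display_message(raw):
    """Decode a payload on the HMD side, rejecting unexpected message types."""
    msg = json.loads(raw)
    if msg.get("type") != "display":
        raise ValueError("unexpected message type")
    return msg
```

Keeping the HMD a thin display client is what made the sensor-less Moverio workable: everything that needed compute or sensors lived on the tablet.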
The final prototype was well received, earning high marks in our final readiness test. I believe we also succeeded in exploring whether an HMD is worth pursuing in the future. I think that it is, but the hardware just isn't ready yet4.
If you want to know more about the project, you can find additional information at our project website. The code for the final prototype, as well as our prototyping tools, is available on GitHub.
1. Seriously, astronaut time is so valuable that being able to snag an hour was a huge win.↩
2. Beyond addressing many of the issues we found, this project provided an opportunity to explore the feasibility of HMDs in a 0G environment.↩
3. A moderator would sit at a computer during a test and serve static images to the HMD based on commands spoken by the tester.↩
4. Most are unavailable, prohibitively expensive, or not advanced enough to be useful.↩