Helios is a body-worn camera mobile application for a dedicated public safety device. This project was created for a Human-Centered Design (HCD) class in Fall 2019.
Role: UX Designer | Timeline: 8 weeks
The product is an integrated body-worn camera and remote speaker microphone. Body-worn cameras are typically worn by the target users: public safety professionals such as frontline police officers and first responders, highway patrol, and corrections officers. The device relieves the burden of extra equipment by replacing the radio speaker microphone (RSM) while adding a high-resolution camera to capture video, audio, and still images, media types that are increasingly used as evidence. It is a wearable worn in potentially mission-critical environments where the safety of users and the public may be at risk.
The device has a 3.2 in touchscreen display running the Android OS. A few of the user interface (UI) requirements included the following:
Disclaimer: The form factors for this body-worn camera were inspired by Motorola's si500 BWC. This project was not sponsored by Motorola.
According to IDEO’s Design Kit, secondary research is a method to gain greater understanding of “context, history, or data.” I conducted secondary research to gain greater insight into body-worn cameras and the ecosystem in which they live. Because public safety has considerable social implications, I also wanted to understand the broader societal impact of body-worn cameras.
I decided to tackle this phase by writing down a list of questions as a base to explore. Here are a few:
I created two concept maps after completing secondary research. The first concept map (below, left) helped me understand body-worn cameras and their larger ecosystem, including state laws and costs. The second (below, right) allowed me to explore the functionality of, and relationships between, the hardware and screen interactions.
Task flows were created to understand several aspects of the design and its interactions. Questions that came up included, but were not limited to:
How might context determine how and when tasks are completed?
Based on secondary research, concept mapping, and a few iterations of task flows, I wrote a list of design assumptions to frame my next steps and clarify potential interactions while still meeting the design requirements.
Several iterations of sitemaps were created to understand the information structure and the content types to be displayed on each screen. My first iterations included a greater level of detail (similar to user flows) to refine my thinking.
Sketching helped give form to interactions that had seemed incredibly abstract. Using a template close to actual size, I explored different ways content could be displayed and structured. My goal was to keep related content and tasks grouped together. Given the touchscreen specifications (3.2 in display, 360 x 640 px, 229 ppi), glanceability and large touch targets were key considerations. While sketching, more questions and concerns came to mind:
Wireframes offered a more concrete way of seeing relationships from the details to the bigger picture. Based on some of the questions and concerns I realized during sketching, as well as design assumptions, I identified the following goals:
Readability is critical, so I used Skala Preview to check and refine how type sizes, icon designs and weights, and other interactive elements displayed with each iteration.