Helios Public Safety Mobile Application

Helios is a body-worn camera mobile application for a dedicated public safety device. This project was created for a Human-Centered Design (HCD) class, Fall 2019.

Role: UX Designer   |  Timeline: 8 weeks


Product Description + Design Considerations 

The product is an integrated body-worn camera and remote speaker microphone. Body-worn cameras are typically worn by public safety professionals — frontline police officers and first responders, highway patrol, and corrections officers — who are the target users. The device relieves the burden of extra equipment by replacing the standalone radio speaker microphone (RSM) while adding a high-resolution camera to capture video, audio, and still images — media types increasingly used as evidence. It is a wearable device used in potentially mission-critical environments where the safety of users and the public may be at risk.

The device has a 3.2-inch touchscreen display and runs Android. A few of the user interface (UI) requirements included the following:

  • A recording indicator
  • Real-time transmission of footage
  • Simplifying device status information
  • Data integrity

Disclaimer: The form factors for this body-worn camera were inspired by Motorola's si500 BWC. This project was not sponsored by Motorola.

Secondary Research

According to IDEO’s Design Kit, secondary research is a method to gain greater understanding — “context, history, or data”. Because public safety has considerable social implications, I conducted secondary research to gain greater insight into body-worn cameras, their broader implications, and the ecosystem in which they live.

I decided to tackle this phase by writing down a list of questions as a base to explore. Here are a few:

  • What is a body worn camera?
  • Who uses body worn cameras?
  • What issues, if any, surround body worn cameras? Why?
  • Are there recommendations for BWCs? Technical specifications?
  • What are the challenges of BWCs? What are the pros and cons of BWCs?
  • What is the future of BWCs?
  • What design factors need to be considered with wearable technology?
  • What are the human factors to consider for body worn cameras?

View the full secondary research report here.

Concept Maps

I created two concept maps after completing secondary research. The first concept map (below, left) helped me to understand body worn cameras and the larger ecosystem, including state laws and costs. The second (below, right) allowed me to explore and understand the functionality and relationships of interactions between the hardware and screen operations.


Task Flows

Task flows were created in order to understand several aspects of the design and interactions. Questions that came up included but were not limited to:

  • What are the essential tasks of a public safety officer? In this case, a police officer?
  • How is the chain of evidence retained?
  • How much processing of information is required while on duty versus back at the station?
  • What information might be required in the field?
  • How would errors, indicators, and notifications work with the various button, LED, and lever states?

  • How might context determine how and when tasks are completed?


Design Assumptions

Based on secondary research, concept mapping, and a few iterations of task flows, I compiled a list of design assumptions to frame my next steps and clarify potential interactions while meeting the design requirements.

  • This device is a shared device.
  • Device ID is set by the agency, and user ID is linked to the device through a tracking/logging system before the device leaves the agency.
  • Users would dock the devices at the end of their shifts which would then complete the processing of media assets, secure the camera and maintain the chain of evidence.
  • The agency has programmed the two programmable buttons for audio (only) capture and still image capture.
  • The device adapts to lighting conditions.
  • Users will process media captures when time and safety permits.
  • Real-time broadcasting is controlled by the agency.
  • Media captures are uploaded automatically and periodically via Wi-Fi directly to the agency’s secure cloud servers. If an upload fails, the process will begin again, though not continuously. The user also has the option to push content to the cloud.
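The upload assumption above can be sketched in code. This is a hypothetical illustration only — the function name, retry count, and delay are my own assumptions, not part of the design brief — but it captures the intended behavior: each capture is retried a bounded number of times rather than continuously, and anything still pending waits for the next periodic cycle.

```python
import time

def upload_pending_media(pending, upload, max_retries=3, retry_delay_s=60):
    """Hypothetical sketch of the assumed upload behavior.

    Tries to push each capture to the agency's secure cloud server via the
    provided `upload` callable. Failed uploads are retried a bounded number
    of times (not continuously); captures that still fail are returned so
    the next periodic cycle — or a manual user push — can pick them up.
    """
    remaining = []
    for capture in pending:
        for _attempt in range(max_retries):
            if upload(capture):
                break  # uploaded successfully; move to the next capture
            time.sleep(retry_delay_s)  # back off before retrying
        else:
            remaining.append(capture)  # give up until the next cycle
    return remaining
```

The bounded retry loop matches the assumption that uploads restart "though not continuously," while the returned list preserves the chain of evidence by never dropping a capture.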


Sitemaps

Several iterations of sitemaps were created to understand the information structure and content types to be displayed on each screen. My first iterations included a greater level of detail (similar to user flows) in order to refine my thinking and processing.

BWC Sitemap v6


Sketches

Sketching helped to give form to interactions that seemed incredibly abstract. Using a template close to actual size, I explored different ways content could be displayed and structured. My goal was to keep related content and tasks as close together as possible. Given the touchscreen specifications — a 3.2-inch display, 360 x 640 px, 229 ppi — keeping glanceability and large touch targets in mind was key. While sketching, more questions and concerns came to mind:

  • How to communicate context to the user through screens within the device? What types of elements will be actionable versus purely informational?
  • What content types need to be persistent across screens and what should be specific to tasks or context?
  • How to keep interactions shallow but allow users to efficiently and effectively complete tasks?
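As a sanity check on the screen specifications above, a quick calculation confirms that a 360 x 640 px panel at 229 ppi works out to the stated 3.2-inch diagonal, and converts a commonly cited ~9 mm minimum touch-target guideline (my assumption, not part of the brief) into pixels:

```python
import math

# Touchscreen specs from the design brief
width_px, height_px, ppi = 360, 640, 229

# Physical dimensions in inches
width_in = width_px / ppi               # ~1.57 in
height_in = height_px / ppi             # ~2.79 in
diagonal_in = math.hypot(width_in, height_in)  # ~3.2 in, matching the spec

# A commonly cited minimum touch target is ~9 mm per side (assumed guideline)
MIN_TARGET_MM = 9
min_target_px = round(MIN_TARGET_MM / 25.4 * ppi)  # mm -> in -> px

print(f"screen: {width_in:.2f} x {height_in:.2f} in, diagonal {diagonal_in:.1f} in")
print(f"minimum touch target: ~{min_target_px} px per side")
```

At this density a 9 mm target needs roughly 81 px per side — nearly a quarter of the screen width — which is why large touch targets and shallow interactions dominated the sketching decisions.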




Wireframes

Wireframes offered a more concrete way of seeing relationships from the details to the bigger picture. Based on some of the questions and concerns I realized during sketching, as well as design assumptions, I identified the following goals:

  • Establish a clear hierarchy, structure and relationships of content
  • Clear delineation between informative elements and interactive elements
  • Maximize readability and touch target sizes 
  • Reduce information overload and stay focused on tasks per screen

Readability is critical, so with each iteration I used Skala Preview to check and refine how type sizes, icon design, weights, and other interactive elements were displayed.


A sample of wireframes (v2) based on feedback from peers. View the annotated wireframes here. (Note: The annotated versions are toward the bottom.)



Key Takeaways

  • Designing for small screens is a great challenge. One key challenge was determining how much information to present in a way that is helpful. The small screen size forced me to think about readability at a glance, especially when on the move. Based on size, how much information is actually needed at any given time, depending on task and context?
  • Learning about human factors and design for physical products is rewarding. It is a rich experience to design a solution that takes into account the physical form of a product and to consider how a person connects and relates to the product especially in potentially changing environmental conditions.   
  • Mapping is critical when designing physical devices with screen interactions. Because interactions occur with both the hardware and the screen, taking careful inventory of how levers, buttons, indicators, and notifications directly map to the screen is crucial to a positive user experience. Input through a slider needs to map to the screen experience: if recording starts by switching a slider up or down, the same interaction must be reflected on screen; otherwise, there is confusion.
  • Balance visual appeal with tasks and context. A bold, large and clear approach seemed an appropriate visual direction given the tasks, screen size, and diversity of use environments. This was counter to a natural inclination to be subtle.
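The mapping principle in the takeaways above can be illustrated with a small sketch. The state names and LED colors here are hypothetical examples, not from the brief; the point is that the screen label and LED indicator both derive from a single device state keyed by the physical slider position, so the hardware and screen can never disagree.

```python
# Hypothetical mapping: slider position -> (screen label, LED color).
# Deriving every indicator from one table keeps hardware and screen in sync.
DEVICE_STATES = {
    "up":   ("RECORDING", "red"),
    "down": ("STANDBY",   "green"),
}

def indicators(slider_position):
    """Return all indicator values for a given physical slider position."""
    screen, led = DEVICE_STATES[slider_position]
    return {"screen": screen, "led": led}
```

Because there is a single source of truth, flipping the physical slider and tapping an on-screen control can share the same state table, which is exactly the consistency the takeaway argues for.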
