This project was the semester-long deliverable for a course called Rapid Prototyping in Computer Systems during my time at Carnegie Mellon University. The class was divided into 12 specialized teams of students from electrical engineering, software development, and design backgrounds that worked cross-functionally over the course of five months in partnership with 99PLabs at Honda. I gained valuable experience collaborating with engineers, stakeholders, and other designers to deliver a holistic solution for our clients at Honda.
ROLE
LEAD UX RESEARCHER & DESIGNER
TIMELINE
JAN - MAY 2023
CONTEXT
This project was completed as part of a semester-long course at Carnegie Mellon University.
TOOLS
Figma, Miro, React JS
TEAM
2 Researchers
3 Software Developers
3 Designers
Testing and diagnosing issues in self-driving algorithms under real-world conditions is costly and difficult to maintain at scale.
An autonomous testbed with sensor-pairing capabilities that recreates realistic testing scenarios on a modular track and collects test data for analysis on a component-based diagnostic UI.
Real-time testing data gathered by track and vehicle sensors, easily accessible and displayed front and center.
Save previous test results, and retrieve test footage and data for diagnosing, troubleshooting, and comparative analysis.
Watch tests from a bird's-eye view: a visual monitoring system accompanies the testing data, great for reviewing previous tests without missing a single movement or turn.
Autonomous vehicles are an emerging area of technology that requires an incredible amount of foresight into how they are implemented.
Drivers make countless decisions while on the road. How can we ensure that algorithms are able to make safe decisions when presented with real world conditions, such as unexpected obstacles or terrain?
Additionally, testing self-driving algorithms under real-world conditions is costly for engineers and researchers. There must be a way to test these algorithms that is modular, robust, and scalable.
The F1TENTH is a miniature autonomous vehicle that is 1/10th the size of a real F1 race car. The F1TENTH is typically used for racing competitions and testing autonomous driving algorithms.
The F1TENTH utilizes a number of built-in sensors that collect data about its speed, position, acceleration, and more.
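As a rough illustration, a single telemetry sample from the vehicle could look something like the sketch below; the field names and units are hypothetical and not the F1TENTH's actual data schema.

```js
// Hypothetical telemetry sample (illustrative only): the real F1TENTH
// schema and units may differ depending on the sensors paired with it.
const sample = {
  timestamp: "2023-04-12T14:03:22.418Z",
  speed: 1.8,                     // m/s
  position: { x: 2.41, y: 0.87 }, // meters, relative to the track origin
  acceleration: 0.35,             // m/s²
  steeringAngle: -0.12            // radians
};
```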
Our clients at Honda requested that the testbed we produced be built to the scale of this miniature vehicle, as it would allow for accessible testing of self-driving algorithms.
We followed a double-diamond approach to completing this task. We further broke the double-diamond down into three principal phases of development: conceptualization, detailed design, and implementation.
As part of the Human-Computer Interaction team, most of my work revolved around planning and prototyping the design in preparation for handing off the design for development teams to implement in the final phase.
To begin understanding the requirements for testing self-driving vehicles, teams brainstormed a range of challenges and opportunities in this space.
Through organized brainstorming activities and hands-on prototyping workshops, engineering and HCI students worked together to capture use cases and strategize solutions for the final prototype. These cross-team workshopping sessions were crucial in bringing multiple perspectives into the design process.
These requirements were foundational in designing both the physical testing environment as well as the digital diagnostic UI.
We created storyboards of baseline and visionary scenarios that could arise as an outcome of the design. To the right are a couple of the storyboards that I drew up that show visionary scenarios of what the testing bed would be able to accomplish. Of these, we were able to incorporate two into the final prototype.
After consulting with experts at 99PLabs and collaborating with the vehicle hardware and platform teams, we determined the core testing data and functionality to display on the UI, grounded in critique of our baseline and visionary elements.
To better understand the needs and specifications for diagnosing and analyzing test data from the vehicle and its sensors, we conducted user interviews with three participants spanning data science, machine learning research, and software development.
We synthesized the findings from our primary research to identify key jobs to be done for our stakeholders.
From these, we identified three user needs that capture the tasks expected of autonomous vehicle testers.
Knowing these user needs, our team created preliminary sketches of the interface and presented them to participants during user research sessions.
The physical testing environment needs to simulate realistic driving scenarios in a way that testers can easily recreate across multiple test iterations and sessions.
Autonomous vehicle testers need to be able to assess points of failure of the vehicle when conducting tests so that they may make informed decisions about the vehicles’ design and improvements to the self-driving algorithm.
The testing environment needs to be responsive and dynamically display real-time data in order for test administrators to assess and troubleshoot as needed.
We tested our diagnostic prototype with 5 participants familiar with data analytics and the administration of autonomous vehicle testing.
We tested multiple variations of the diagnostic interface prototype to understand the changes needed to deliver on the needs we identified in research.
The prototype we presented focused on collecting and monitoring the F1TENTH's movements and sensor data in real time. From interviews with key stakeholders, we found that previous tests needed to be stored and resurfaced for comparative analysis.
Participants suggested that the diagnostic UI dashboard be built from modular components.
Based on our findings, we decided to include a function that allows users to compare test runs within the diagnostic interface.
I developed medium-fidelity prototype screens of the diagnostic UI based on findings from initial testing and validated this iteration with participants and client stakeholders.
This visual prototype was the blueprint for our development team to build the UI in React JS and implement data collection from sensors and video streaming.
Data from numerous sensors in the F1TENTH and the track itself is collected, displayed, and stored within the interface for both live and past tests.
The diagnostic interface is designed with an iterative testing process in mind, allowing for customization and comparative analysis across multiple trials.
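To give a sense of what a component-based diagnostic UI like this could look like in code, below is a minimal React sketch of a modular telemetry panel; the useTelemetry hook, the WebSocket endpoint, and the field names are illustrative assumptions rather than our actual implementation.

```jsx
import React, { useEffect, useState } from "react";

// Hypothetical hook: subscribes to a WebSocket stream of telemetry samples
// ({ timestamp, speed, position, acceleration }) for a given test run.
// The endpoint and message shape are assumptions for illustration.
function useTelemetry(runId) {
  const [samples, setSamples] = useState([]);

  useEffect(() => {
    const socket = new WebSocket(`ws://localhost:8080/runs/${runId}/telemetry`);
    socket.onmessage = (event) => {
      const sample = JSON.parse(event.data);
      setSamples((prev) => [...prev, sample]);
    };
    return () => socket.close(); // clean up when the panel unmounts
  }, [runId]);

  return samples;
}

// A modular panel: one instance per test run, so a live run and a saved
// run can be rendered side by side for comparative analysis.
export function TelemetryPanel({ runId, label }) {
  const samples = useTelemetry(runId);
  const latest = samples[samples.length - 1];

  return (
    <section className="telemetry-panel">
      <h3>{label}</h3>
      {latest ? (
        <ul>
          <li>Speed: {latest.speed} m/s</li>
          <li>Position: ({latest.position.x}, {latest.position.y})</li>
          <li>Acceleration: {latest.acceleration} m/s²</li>
        </ul>
      ) : (
        <p>Waiting for data…</p>
      )}
    </section>
  );
}
```

Composing two TelemetryPanel instances, one pointed at a live run and one at a stored run, is one way the compare-runs requirement could be met without special-casing the layout.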
Many features from the initial prototype, such as the data being displayed and navigation panels, were kept consistent.
The development team added a console log tab, which consolidates system messages to capture errors, warnings, and other software output from the vehicle.
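As a rough sketch of that idea, a console log tab in React could group and filter the vehicle's output by severity; the message shape and props below are assumptions for illustration, not the development team's actual code.

```jsx
import React, { useState } from "react";

const LEVELS = ["error", "warning", "info"];

// Illustrative console log tab: renders system messages from the vehicle
// and lets the tester filter them by severity.
export function ConsoleLogTab({ messages }) {
  const [level, setLevel] = useState("all");
  const visible =
    level === "all" ? messages : messages.filter((m) => m.level === level);

  return (
    <div className="console-log">
      <select value={level} onChange={(e) => setLevel(e.target.value)}>
        <option value="all">All</option>
        {LEVELS.map((l) => (
          <option key={l} value={l}>{l}</option>
        ))}
      </select>
      <ol>
        {visible.map((m, i) => (
          <li key={i} className={m.level}>
            [{m.timestamp}] {m.level.toUpperCase()}: {m.text}
          </li>
        ))}
      </ol>
    </div>
  );
}
```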
After weeks of preparation, the class presented our final prototype to our client partners at Honda's 99PLabs. We demoed the F1TENTH and the track, along with a walkthrough of the final diagnostic UI.