SensoryScape

Research Developer at University of Iowa Hospitals & Clinics. AI-driven assistive technology that helps vision-impaired patients navigate open, dynamic spaces using real-time guidance and alerts delivered through consumer devices.

Problem

Vision-impaired patients face significant challenges in open, dynamic environments: obstacles move, layouts change, and traditional aids do not provide real-time, context-aware feedback. The goal of this research is to deliver spatial awareness and navigation support through modalities that work in active, everyday settings—audio and haptics—using devices people already own or can wear.

System

I built a navigation system that combines LiDAR and computer vision to perceive the environment, then converts that spatial data into real-time audio and haptic feedback that users can act on while moving. The system integrates iPhone, Apple Watch, AirPods, Arduino, and Raspberry Pi into a single pipeline, which required solving latency, synchronization, and interoperability issues across consumer and embedded hardware. I use sensor fusion and real-time processing to keep obstacle detection reliable in dynamic indoor environments where people and objects move.
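To make the feedback mapping concrete, here is a minimal Python sketch of the core idea: take the nearest LiDAR return, attach a camera-based label when a detection falls near the same bearing, and translate distance and bearing into a haptic intensity and a stereo pan. The function names (nearest_obstacle, to_cue), thresholds, and the simple nearest-return fusion rule are illustrative assumptions, not the system's actual implementation.

```python
"""
Minimal sketch (illustrative only): fuse a LiDAR range scan with camera
detections, then map the nearest obstacle to an audio/haptic cue.
Names, thresholds, and the fusion rule are assumptions, not the deployed pipeline.
"""
from dataclasses import dataclass


@dataclass
class Detection:
    """Hypothetical computer-vision output: bearing (degrees) and label."""
    bearing_deg: float
    label: str


@dataclass
class Cue:
    """Hypothetical feedback cue handed to the phone/watch layer."""
    pan: float        # -1.0 (hard left) .. 1.0 (hard right) for spatial audio
    intensity: float  # 0.0 .. 1.0 haptic strength
    label: str


def nearest_obstacle(lidar_scan, detections, match_deg=10.0):
    """lidar_scan is a list of (bearing_deg, range_m) returns.

    Returns (bearing, range, label) for the closest return, labeled with a
    CV detection if one lies within match_deg of the same bearing."""
    bearing, rng = min(lidar_scan, key=lambda p: p[1])
    label = "obstacle"
    for det in detections:
        if abs(det.bearing_deg - bearing) <= match_deg:
            label = det.label
            break
    return bearing, rng, label


def to_cue(bearing_deg, range_m, label, fov_deg=70.0, max_range_m=4.0):
    """Map bearing to stereo pan and proximity to haptic intensity."""
    pan = max(-1.0, min(1.0, bearing_deg / (fov_deg / 2)))
    intensity = max(0.0, min(1.0, 1.0 - range_m / max_range_m))
    return Cue(pan=pan, intensity=intensity, label=label)


if __name__ == "__main__":
    scan = [(-20.0, 3.1), (5.0, 1.2), (25.0, 2.4)]       # (bearing_deg, meters)
    dets = [Detection(bearing_deg=4.0, label="person")]
    print(to_cue(*nearest_obstacle(scan, dets)))          # closest return wins
```

In the real pipeline these cues would be computed continuously and streamed to the phone and watch; the sketch only shows a single frame of that loop.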

Research role

As a Research Developer, I work directly with clinicians and researchers to turn patient safety, usability, and deployment constraints into concrete system requirements and engineering decisions. That includes choosing which sensors and devices to use, deciding how to prioritize and present information (e.g., spatial audio vs. haptics), and planning how to evaluate the system with users in realistic settings.
