Penn State researchers just unveiled something that sounds simple but represents a genuine shift: an app that lets you ask your phone to find your keys, your coffee mug, or your glasses—and actually finds them.
The app is called NaviSense. You speak a request into your phone, it scans your surroundings using AI vision models, identifies what you're looking for, and guides you to it with audio cues and haptic feedback (gentle vibrations). The team presented it at the ACM SIGACCESS ASSETS '25 conference in Denver, where it won the Best Audience Choice Poster Award.
What makes this different from earlier assistive tech is that NaviSense doesn't need a preloaded database of objects. Previous systems required someone to manually load models of specific items into the software—a bottleneck that made them inflexible and slow to adapt. "Previously, models of objects needed to be preloaded into the service's memory to be recognized. This is highly inefficient and gives users much less flexibility," explained Vijaykrishnan Narayanan, the Evan Pugh University Professor leading the work.
Instead, NaviSense connects to servers running large language models and vision-language models—the same AI architecture behind tools like ChatGPT and image recognition systems. This means it can understand almost any object you ask for, in real time, without needing advance setup.
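As a rough illustration of that client-server split, the sketch below sends a single camera frame plus a free-form text query to a vision-language model running on a remote server. The endpoint, field names, and response format are hypothetical placeholders; NaviSense's actual interface has not been published.

```python
# Minimal sketch of an open-vocabulary "find this object" request.
# The server URL and JSON schema below are assumptions for illustration,
# not NaviSense's real API.
import base64
import json
from urllib.request import Request, urlopen

VLM_ENDPOINT = "http://localhost:8000/locate"  # hypothetical server address


def locate_object(frame_jpeg: bytes, spoken_query: str) -> dict:
    """Send one camera frame plus the user's spoken request to the server.

    The server is assumed to answer with JSON such as
    {"found": true, "label": "coffee mug", "bbox": [x1, y1, x2, y2]}.
    """
    payload = json.dumps({
        "image_b64": base64.b64encode(frame_jpeg).decode("ascii"),
        "query": spoken_query,  # free-form text, e.g. "my coffee mug"
    }).encode("utf-8")
    req = Request(VLM_ENDPOINT, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)
```

Because the query is plain text interpreted by the model on the server, nothing about a specific object has to be installed on the phone ahead of time.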
How it actually works
You speak a request: "Find my phone." NaviSense listens, scans the room, filters out irrelevant objects, and if it needs clarification (say, you have two phones), it asks a follow-up question. One feature that emerged directly from user feedback is hand guidance—the app tracks your hand and directs you toward the object with precise audio cues. "There was really no off-the-shelf solution that actively guided users' hands to objects, but this feature was continually requested in our survey," said Ajay Narayanan Sridhar, the lead student investigator.
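To make the hand-guidance idea concrete, here is a minimal sketch of how the offset between a tracked hand and a detected object might be turned into short spoken cues. The coordinate convention, tolerance value, and cue wording are assumptions for illustration, not the team's actual logic.

```python
# Illustrative hand-to-object guidance loop, assuming the target object and
# the user's hand are both available as bounding-box centers in the camera
# frame. The cue mapping is a guess, not NaviSense's published method.
from dataclasses import dataclass


@dataclass
class Point:
    x: float  # normalized 0..1, left to right
    y: float  # normalized 0..1, top to bottom


def guidance_cue(hand: Point, target: Point, tolerance: float = 0.05) -> str:
    """Turn the hand-to-target offset into a short spoken direction."""
    dx, dy = target.x - hand.x, target.y - hand.y
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "You're on it"  # a haptic pulse could fire here as well
    horizontal = "right" if dx > 0 else "left"
    vertical = "down" if dy > 0 else "up"
    # Speak the larger correction first so cues stay short and unambiguous.
    return f"Move {horizontal}" if abs(dx) >= abs(dy) else f"Move {vertical}"


# Example: hand near the center, mug toward the upper right of the frame.
print(guidance_cue(Point(0.5, 0.5), Point(0.8, 0.3)))  # -> "Move right"
```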
The research team built NaviSense by talking extensively with visually impaired people about what they actually needed. This wasn't an afterthought—it shaped the entire design.
In testing, NaviSense cut search times compared with two commercial alternatives, and participants reported a better overall experience. Narayanan said the technology is "quite close to commercial release"; the team is now working to reduce power consumption and improve model efficiency, with the next phase focused on making the app even more accessible.
The work was supported by the U.S. National Science Foundation.