EmpoweredAR (2024)
AR Glasses, MIT Hackathon, Disability Design
Research
EmpoweredAR was inspired by the challenge of making augmented reality (AR) technology accessible beyond entertainment and education, specifically aimed at aiding visually impaired individuals. The project utilizes Xreal lenses to create a digital mesh of the environment, which, coupled with auditory signals, provides spatial awareness to users.
Software/Tools
Figma + Illustrator + Photoshop + Premiere + Unity + Xreal AR glasses + VS Code
Question
How can we leverage Xreal glasses to create an object-detection, sound-based system that supports people with visual impairments?
How we did it
EmpoweredAR leverages the ability of Xreal eyewear to generate meshes of the physical environment; combined with accurate, reliable distance tracking, this provides wearers with an auditory map. As the wearer approaches an object they might collide with, a tone plays, and the interval between tones shortens as the distance closes, giving explicit information about the surrounding space. Although we focused on collision avoidance, we quickly realized that the same signal also illuminates clear spaces; distinguishing the two helps users build a mental image of their environment and navigate around obstacles safely.
Developed with Unity for audio playback, TensorFlow and YOLOv3 for object detection, and Xreal glasses for depth data, the project faced challenges such as the absence of an RGB camera in the lenses, which complicated distinguishing one obstacle from another. Despite these challenges, the team successfully implemented dynamic object detection.
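The distance-to-tone mapping described above can be sketched in a few lines. This is a minimal illustration only, not the actual Unity implementation; the range cutoff, interval bounds, and linear interpolation are all assumptions chosen for clarity:

```python
def beep_interval(distance_m, max_range_m=3.0,
                  min_interval_s=0.1, max_interval_s=1.0):
    """Map distance to the nearest obstacle to a delay between beeps.

    Closer obstacles produce shorter gaps between tones, similar to a
    parking sensor. Beyond max_range_m the space is treated as clear
    and no tone is scheduled (returns None).
    """
    if distance_m >= max_range_m:
        return None  # clear space: stay silent
    # Linearly interpolate between the shortest and longest interval.
    fraction = max(distance_m, 0.0) / max_range_m
    return min_interval_s + fraction * (max_interval_s - min_interval_s)
```

For example, an obstacle right in front of the wearer (`beep_interval(0.0)`) yields the fastest beeping at 0.1 s gaps, while anything beyond three meters is silent, so the contrast itself marks clear space.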
Visual Pitch
Key learnings
Future plans involve integrating AI to enhance object identification and navigation in more complex and dynamic environments. This initiative represents a significant step in transforming AR into a tool that empowers visually impaired individuals to confidently explore their surroundings, showcasing the project’s blend of technical innovation and social impact. The code and further details are available in our repository: https://codeberg.org/reality-hack-2024/TABLE_72.
Visual Outcome
Process