ARIA is a conceptual assistive technology designed to help blind individuals perceive the world through spatial sound.
For this project, I produced a cinematic AI-generated brand film that explores how ARIA glasses could translate visual information into auditory perception — allowing blind users to navigate environments, recognize people, read signs, and interact with the world more independently.
Rather than building a technical explainer, the goal was to create an emotion-driven narrative that demonstrates how assistive technology ultimately serves human connection.
The Story
The film follows a blind father navigating a small American town using ARIA smart glasses.
As he moves through everyday spaces — sidewalks, cafés, markets, and shops — the system quietly interprets the environment around him:
identifying paths and obstacles
predicting moving objects
recognizing familiar faces
reading signs and information
assisting small everyday interactions
The narrative builds toward a final emotional reveal when the father perceives his daughter’s joyful smile — a moment that reflects the deeper purpose of assistive technology: restoring connection and independence.
My Role
AI Video Director & Producer
Responsibilities included:
Story interpretation and cinematic direction
AI shot generation and character consistency
Visual pacing and edit structure
AR perception overlay design
Motion graphics direction
Final compositing and delivery
Creative Challenge
Producing a narrative film entirely with AI meant overcoming several challenges:
Character consistency
Maintaining the same characters across multiple environments and camera angles.
Human interaction realism
Ensuring natural gestures, eye behavior, and social interactions in AI-generated footage.
Visual language design
Designing a perception system that felt assistive and trustworthy rather than futuristic or intrusive.
Narrative pacing
Balancing emotional storytelling with the need to explain a complex technology.
Visual Language
The ARIA interface was designed with a minimal and human-centered visual system.
Instead of overwhelming the viewer with constant data overlays, the film uses selective perception moments:
environmental wireframes for spatial awareness
trajectory prediction for moving objects
text recognition for signage
facial recognition for known individuals
subtle guidance paths for navigation
In the final scene, the interface shifts to a warm, minimal highlight, emphasizing the emotional moment rather than the technology itself.
Production Approach
The film was built through an AI-first production pipeline that combined:
AI image and video generation
controlled shot animation
motion graphics overlays
voiceover integration
editorial pacing
This hybrid workflow allowed the project to maintain cinematic continuity while visualizing a speculative technology concept.
Deliverables
90-second cinematic brand film
AI-generated live-action style footage
Custom AR perception overlays
Voiceover integration and final edit
Result
The final film demonstrates how AI-driven production can serve not only visual spectacle but emotionally grounded storytelling, communicating complex technology in an accessible and human way.
This project was one of the most creatively challenging AI video productions I’ve directed so far, combining narrative filmmaking with emerging AI workflows.
Cinematic AI brand film for ARIA smart glasses, showing how blind users could perceive the world through spatial sound and human-centered assistive tech.