MoodSee is a real-time emotional awareness assistant for Meta Quest 3 that uses the Camera Access API, AI emotion recognition, and contextual cues to help users, especially neurodivergent kids, better understand facial expressions and emotional transitions while maintaining eye contact during live conversations.
Role
XR Designer · UX/UI Designer · Concept Lead
Industry
Mixed Reality · AI · Healthcare & Education
Duration
24 hours — Hackathon Prototype
Concept & Research
MoodSee was born from two major breakthroughs:
Meta’s newly released Camera Access API for Quest 3, which enables facial data extraction
The rapid growth of AI emotion analysis models capable of interpreting expressions from video
The prototype explored whether real-time emotional cues could support autistic children in overcoming two of their biggest social challenges:
sustaining eye contact
recognizing and interpreting emotions during conversation
I defined the core concept: a gentle MR assistant that interprets real-time facial expressions and displays easy-to-understand emotional cues directly above the other person’s head — making communication clearer, safer, and less overwhelming.
Design Strategy
As the only designer on the team, I created the full UX and visual direction for MoodSee. This included:
Experience flow for live conversation
Minimalist emoji-based user interface with five core states: joy, sadness, anger, uncertainty, and boredom
Color-coded aura system tied to those states (a rough mapping is sketched at the end of this section)
Privacy protocol requiring verbal consent
Initial prototype in ShapesXR
I researched existing webcam-based emotional AI tools and adapted their concepts into MR. The design goal was clarity and focus — to support attention, not overwhelm it.
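To make that mapping concrete, here is a rough sketch of how the five core states could pair with an emoji glyph and an aura color. The glyphs and hex values are illustrative placeholders, not the prototype's actual design tokens.

```python
# Illustrative mapping of MoodSee's five core states to an emoji glyph and an
# aura color. The glyphs and hex values are placeholders, not the shipped
# design tokens from the prototype.
EMOTION_STYLES = {
    "joy":         {"emoji": "😊", "aura": "#FFD447"},  # warm yellow
    "sadness":     {"emoji": "😢", "aura": "#4A90D9"},  # soft blue
    "anger":       {"emoji": "😠", "aura": "#E45B5B"},  # muted red
    "uncertainty": {"emoji": "😕", "aura": "#B08FD8"},  # lavender
    "boredom":     {"emoji": "😐", "aura": "#9AA3AB"},  # neutral grey
}

def style_for(emotion: str) -> dict:
    """Return the emoji and aura color for a detected emotion, with a neutral fallback."""
    return EMOTION_STYLES.get(emotion, EMOTION_STYLES["uncertainty"])

if __name__ == "__main__":
    print(style_for("joy"))       # {'emoji': '😊', 'aura': '#FFD447'}
    print(style_for("surprise"))  # unknown label falls back to the 'uncertainty' style
```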
Prototype Development
We used:
Face Detection AI
Voice Emotion Detection
Body Tracking
Quest 3 Camera Access API
RoboFlow-trained emotion models
I tested multiple pre-trained models, selected the most stable one, and helped integrate it with our real-time pipeline (≈ 30 FPS).
The system cropped only the face from the camera feed to reduce noise and improve accuracy.
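As a rough illustration of that crop-then-classify step, the sketch below uses an OpenCV webcam loop with a placeholder classify_emotion function standing in for the RoboFlow-trained model; the real prototype consumed frames from the Quest 3 Camera Access API rather than a desktop webcam.

```python
# Minimal sketch of the crop-then-classify loop, assuming an OpenCV webcam feed.
# classify_emotion() is a hypothetical stand-in for the RoboFlow-trained model.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_img) -> str:
    """Placeholder for the emotion model: returns one of the five core states."""
    return "uncertainty"

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_crop = frame[y:y + h, x:x + w]     # send only the face, not the full frame,
        emotion = classify_emotion(face_crop)   # to reduce noise and improve accuracy
        cv2.putText(frame, emotion, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    cv2.imshow("MoodSee sketch", frame)
    if cv2.waitKey(33) & 0xFF == ord("q"):      # ~33 ms per frame, roughly 30 FPS
        break
cap.release()
cv2.destroyAllWindows()
```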
My responsibilities also included:
Creating the floating emoji indicators
Designing the dominant emotion tracker (one possible approach is sketched after this list)
Building UI captions
Recording and editing the presentation trailer
Validating camera calibration and body tracking behavior
The prototype displayed detected emotions with only ~0.3 ms of delay, a strong technical achievement for a 24-hour build.
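One plausible way to build the dominant emotion tracker mentioned above is a rolling majority vote over recent per-frame predictions, which keeps the HUD from flickering when the model output jitters. The sketch below shows that idea; the window size and logic are illustrative rather than the exact implementation from the build.

```python
# Dominant emotion as a rolling majority vote over the last N per-frame predictions.
# Illustrative only; window size and voting rule are assumptions, not the exact
# logic used in the 24-hour build.
from collections import Counter, deque

class DominantEmotionTracker:
    def __init__(self, window: int = 30):  # roughly one second at 30 FPS
        self.history = deque(maxlen=window)

    def update(self, emotion: str) -> str:
        """Record the latest per-frame prediction and return the dominant label."""
        self.history.append(emotion)
        return Counter(self.history).most_common(1)[0][0]

tracker = DominantEmotionTracker()
for label in ["joy", "joy", "uncertainty", "joy", "sadness", "joy"]:
    dominant = tracker.update(label)
print(dominant)  # 'joy', the most frequent label in the window
```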
Testing & Iteration
We faced multiple challenges, including:
Camera lag
Incorrect emotion detection
Unreliable face framing
Fluctuating accuracy under different lighting conditions (addressed in the sketch below)
Working together, we optimized the model inputs, cleaned the detection pipeline, and improved framing logic.
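One common mitigation for lighting-driven accuracy swings is to normalize the face crop before it reaches the emotion model. The sketch below uses adaptive histogram equalization (CLAHE) on the luminance channel as an illustrative example of that kind of input cleanup, not necessarily the exact preprocessing used in the prototype.

```python
# Illustrative input cleanup for changing lighting: equalize the luminance channel
# of the face crop so the emotion model sees more consistent inputs.
import cv2

def normalize_lighting(face_bgr):
    """Apply CLAHE to the L channel of the face crop and return a BGR image."""
    lab = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```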
I continuously tested the MR overlay to ensure the emoji remained anchored above the person’s head using Body Tracking. This helped maintain eye contact — a core goal for autistic users.
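Keeping the overlay stable usually comes down to offsetting the tracked head position upward and smoothing it over time. The sketch below shows that idea in plain Python; the offset and smoothing values are illustrative, not the tuned parameters from the prototype.

```python
# Keep the emoji anchored above a tracked head position: take the per-frame head
# position from body tracking, offset it upward, and exponentially smooth it so
# the overlay stays stable while the person moves. Values are illustrative.

HEAD_OFFSET_M = 0.35   # how far above the head the emoji floats, in meters
SMOOTHING = 0.2        # 0 = frozen, 1 = raw (jittery) tracking

def lerp(a, b, t):
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

class EmojiAnchor:
    def __init__(self):
        self.position = None

    def update(self, head_position):
        """head_position: (x, y, z) in meters from body tracking."""
        target = (head_position[0], head_position[1] + HEAD_OFFSET_M, head_position[2])
        self.position = target if self.position is None else lerp(self.position, target, SMOOTHING)
        return self.position

anchor = EmojiAnchor()
print(anchor.update((0.0, 1.60, 2.0)))  # first frame snaps to the target
print(anchor.update((0.0, 1.62, 2.0)))  # later frames move smoothly toward it
```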
Showcase & Results
During the final demo, judges wore Quest 3 and saw their own emotions appear above each other’s heads in real time. This created an immediate “wow effect” — transparent, accessible, and intuitive emotional feedback in MR.
I co-presented the live demo and created the video trailer used during the pitch.
The jury highlighted:
the social impact and clarity of the concept
the innovative use of the Camera Access API
the accuracy and speed of the emotion HUD
the strong potential for therapy, education, and conflict-resolution tools
The audience showed strong interest, raising questions about privacy, ethical use, and broader applications.
Outcomes
Fully working AI-based emotional assistant built in 24 hours
Real-time facial emotion tracking integrated with Quest 3
Minimalist MR HUD designed for neurodivergent users
Strong jury feedback and high audience engagement
Validated potential for XR-based emotional learning
Future Vision
MoodSee has the potential to grow into a fully-fledged communication assistant for:
autistic children
psychologists
educators
social workers
negotiators
conflict mediators
The project demonstrated that real-time emotional support in XR is not only possible but can also be accessible, intuitive, and deeply impactful for human communication.