MoodSee

MoodSee is a real-time emotional awareness assistant for Meta Quest 3 that uses the Camera Access API, AI emotion recognition, and contextual cues to help users — especially neurodivergent kids — read facial expressions, follow emotional transitions, and maintain eye contact during live conversations.

Role

XR Designer · UX/UI Designer · Concept Lead

Industry

Mixed Reality · AI · Healthcare & Education

Duration

24 hours — Hackathon Prototype

  1. Concept & Research

MoodSee was born from two major breakthroughs:

  • Meta’s newly released Camera Access API for Quest 3, which enables facial data extraction

  • The rapid growth of AI emotion analysis models capable of interpreting expressions from video

The prototype explored whether real-time emotional cues could support autistic children in overcoming two of their biggest social challenges:

  • sustaining eye contact

  • recognizing and interpreting emotions during conversation

I defined the core concept: a gentle MR assistant that interprets real-time facial expressions and displays easy-to-understand emotional cues directly above the other person’s head — making communication clearer, safer, and less overwhelming.

  2. Design Strategy

As the only designer on the team, I created the full UX and visual direction for MoodSee. This included:

  • Experience flow for live conversation

  • Minimalist emoji-based user interface with five core states: joy, sadness, anger, uncertainty, and boredom

  • Color-coded aura system

  • Privacy protocol requiring verbal consent

  • Initial prototype in ShapesXR

I researched existing webcam-based emotional AI tools and adapted their concepts into MR. The design goal was clarity and focus — to support attention, not overwhelm it.
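A minimal sketch of how the five core states could pair an emoji cue with an aura color. The emoji choices and hex values here are illustrative assumptions, not the shipped design tokens:

```python
# Illustrative mapping of MoodSee's five core states to an emoji cue
# and an aura color. Emoji and hex values are assumptions for this
# sketch, not the final palette.
EMOTION_STATES = {
    "joy":         {"emoji": "😊", "aura": "#FFD93B"},  # warm yellow
    "sadness":     {"emoji": "😢", "aura": "#4A90D9"},  # cool blue
    "anger":       {"emoji": "😠", "aura": "#E64A3C"},  # red
    "uncertainty": {"emoji": "😕", "aura": "#9B59B6"},  # purple
    "boredom":     {"emoji": "😐", "aura": "#9E9E9E"},  # neutral gray
}

def cue_for(emotion: str) -> dict:
    """Return the emoji/aura cue for a detected emotion, defaulting
    to 'uncertainty' when the label is unknown or low-confidence."""
    return EMOTION_STATES.get(emotion, EMOTION_STATES["uncertainty"])
```

Falling back to "uncertainty" rather than hiding the cue keeps the interface predictable, which matters for users who rely on it.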

  3. Prototype Development

We used:

  • Face Detection AI

  • Voice Emotion Detection

  • Body Tracking

  • Quest 3 Camera Access API

  • RoboFlow-trained emotion models

I tested multiple pre-trained models, selected the most stable one, and helped integrate it with our real-time pipeline (≈ 30 FPS).

The system cropped only the face from the camera feed to reduce noise and improve accuracy.
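A rough sketch of that face-cropping step, using an OpenCV Haar-cascade detector as a stand-in for whichever face detector the team actually ran:

```python
import cv2

# Haar cascade bundled with OpenCV; a stand-in for the hackathon's
# actual face detector, which isn't documented here.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crop_face(frame, margin=0.2):
    """Return the largest detected face region, expanded by `margin`,
    or None if no face is found. Feeding only this crop to the emotion
    model cuts background noise and improves accuracy."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    dx, dy = int(w * margin), int(h * margin)
    H, W = frame.shape[:2]
    return frame[max(0, y - dy):min(H, y + h + dy),
                 max(0, x - dx):min(W, x + w + dx)]
```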

My responsibilities also included:

  • Creating the floating emoji indicators

  • Designing the dominant emotion tracker

  • Building UI captions

  • Recording and editing the presentation trailer

  • Validating camera calibration and body tracking behavior

The prototype displayed detected emotions with only about a 0.3-second delay, a strong technical result for a 24-hour build.
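One way a delay figure like this can be measured: time a single capture-to-cue pass. `detect_emotion` is a placeholder for the Roboflow-trained model call, and `crop_face` is the sketch above:

```python
import time

def timed_inference(frame, detect_emotion):
    """Run one frame through the pipeline and report latency.
    detect_emotion is a placeholder for the emotion model call."""
    start = time.perf_counter()
    face = crop_face(frame)                      # sketch above
    label = detect_emotion(face) if face is not None else None
    latency_ms = (time.perf_counter() - start) * 1000
    return label, latency_ms
```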

  4. Testing & Iteration

We faced multiple challenges, including:

  • Camera lag

  • Incorrect emotion detection

  • Inconsistent face framing

  • Fluctuating accuracy in different lighting conditions

Working together, we optimized the model inputs, cleaned the detection pipeline, and improved framing logic.
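One common way to suppress flickering per-frame predictions — my reconstruction of the kind of cleanup involved, not the documented fix — is a sliding-window majority vote over recent labels:

```python
from collections import Counter, deque

class EmotionSmoother:
    """Majority vote over the last `window` per-frame predictions,
    so a single misclassified frame can't flip the displayed emoji."""

    def __init__(self, window: int = 15):  # ~0.5 s of history at 30 FPS
        self.history = deque(maxlen=window)

    def update(self, label: str) -> str:
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]
```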

I continuously tested the MR overlay to ensure the emoji remained anchored above the person’s head using Body Tracking.
This helped maintain eye contact — a core goal for autistic users.
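A sketch of the anchoring idea: keep the emoji at a fixed offset above the tracked head joint and low-pass filter the position so the cue doesn't jitter. The offset and smoothing constants are guesses, and the real build did this inside the MR engine rather than in Python:

```python
# Exponential smoothing of the head-anchored emoji position.
# HEAD_OFFSET_M and ALPHA are illustrative, not tuned values from
# the build.
HEAD_OFFSET_M = 0.25   # emoji floats 25 cm above the head joint
ALPHA = 0.3            # smoothing factor: higher = snappier but jitterier

def update_emoji_anchor(prev_pos, head_pos):
    """head_pos: (x, y, z) of the body-tracked head joint in meters.
    Returns the smoothed world position for the emoji overlay."""
    target = (head_pos[0], head_pos[1] + HEAD_OFFSET_M, head_pos[2])
    if prev_pos is None:
        return target
    return tuple(p + ALPHA * (t - p) for p, t in zip(prev_pos, target))
```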

  5. Showcase & Results

During the final demo, judges wore Quest 3 headsets and watched each other’s emotions appear above their heads in real time.
This created an immediate “wow effect” — transparent, accessible, and intuitive emotional feedback in MR.

I co-presented the live demo and created the video trailer used during the pitch.

The jury highlighted:
• the social impact and clarity of the concept
• the innovative use of Camera Access API
• the accuracy and speed of the emotion HUD
• the strong potential for therapy, education, and conflict-resolution tools

The audience showed strong interest, raising questions about privacy, ethical use, and broader applications.


Outcomes

  • Fully working AI-based emotional assistant built in 24 hours

  • Real-time facial emotion tracking integrated with Quest 3

  • Minimalist MR HUD designed for neurodivergent users

  • Strong jury feedback and high audience engagement

  • Validated potential for XR-based emotional learning

Future Vision

MoodSee has the potential to grow into a fully-fledged communication assistant for:
• autistic children
• psychologists
• educators
• social workers
• negotiators
• conflict mediators

Future features could include:
• detailed eye-tracking
• tone-of-voice analysis
• behavior recommendations
• conversation context awareness
• emotional trend tracking

The project proved that real-time emotional support in XR is not only possible, but it can also be accessible, intuitive, and deeply impactful for human communication.

Other projects


Massera - Guided Massage

Massera is a Mixed Reality (MR) app for Meta Quest 3 / 3S that teaches massage through guided touch and spatial mapping, inspired by professional techniques.


SolderSense

SolderSense is a Mixed Reality training experience for Meta Quest 3 that teaches safe, realistic soldering using the Logitech MX Ink Stylus as a fully simulated MR soldering iron. The prototype won 🥈 2nd Place — Meta XR Hackathon Berlin 2025, Education Track


Folio

FOLIO is a spatial reimagination of the App Library — an interactive 3D bookshelf where each application becomes a dynamic, animated book you can pick up, rotate, browse, and open naturally. Designed for the XR Design Competition 2024, FOLIO explores how spatial computing can finally move beyond flat 2D grids and create a more human, intuitive relationship between users and their applications.


Mix Drink Master

MixDrink Master is a Mixed Reality app for Meta Quest 3 / 3S that helps anyone craft professional cocktails on their first try. 🥇 1st Place — Meta XR Hackathon Cologne 2024, Hobby & Skill Building Track


WriteRight

WriteRight is a Mixed Reality learning experience that teaches handwriting, motor skills, and calligraphy through guided tracing and stylus-based practice on a real table surface. Built with the Logitech MX Ink Stylus, it helps children learn to write in a playful, stress-free way — and supports rehabilitation and fine-motor skill training for all ages. 🥇 1st Place — Experimental Education and 🥈 2nd Place — Logitech MX Ink Stylus Use Case at XR Hack Stockholm 2024

Interested in connecting?

Let’s talk projects, collaborations, or anything design!

Copyright 2025 by Artem Kolomatskyi
