
Smart glasses that make conversations more inclusive.
Captify, a startup building assistive wearables, launched its new product, Captify Pro: AI-powered glasses for the deaf and hard-of-hearing community. I led the MVP design of both the mobile app and the glasses interface, making the experience more intuitive, trustworthy, and practical for daily use.
Company
Captify
Platform
Smart Glasses · Mobile
Year
2025
Duration
6 months
Business Goal
Launch Captify Pro with a reliable, easy-to-use experience that drives daily adoption of AI-powered AR glasses for accessibility.
My Role
Founding product designer for Captify, leading 0→1 end-to-end design across the app and glasses UI, shaping product direction with the CEO and engineers, and grounding decisions in insights from 16 user interviews.

A lifestyle product for everyday communication
The redesign transformed Captify Pro from a technical prototype into a credible lifestyle product — one that blends accessibility, confidence, and ease of use. This work not only improved user trust and satisfaction but also gave Captify a stronger narrative for product launch and investor presentations.

Problem Statement
Solution I
Smart Mode
Auto-switches listening modes; no manual setup needed.
Solution II
Speaker Identification
Real-time voice labeling; see who's speaking instantly.
User Interview
What do users say about the current version of Captify?
We talked to deaf and hard-of-hearing users to find out why Captify wasn't clicking. The barriers went beyond clunky setup and technical glitches; there was a real gap in trust and comfort. Those insights drove every design choice we made.

Main Experience I
Smart Mode
Automatically switches listening modes to match the user's environment, so conversations start instantly with no manual setup.
Pain Point I
8 of 16
Users had trouble adjusting settings for different situations.

Main Experience II
Speaker Labeling & Customization
Customize speaker tags to navigate multi-person conversations.
Pain Point II
14 of 16
Users had trouble identifying speakers in group conversations.
Solution II
Speaker Identification
Real-time voice labeling; see who's speaking instantly.
Main Experience III
Stay Aware
Keeps users aware of important sounds around them with simple, real-time alerts—helping them feel safe and informed in any environment.
Pain Point III
10 of 16
Wanted more awareness of environmental sounds.
Solution III
Stay Aware
Gentle, context-based alerts; stay confident anywhere.
Current solution | Transcribe/Translate

User flow | Transcribe as default

1st Version | HISTORY AS HOMEPAGE
Transcript history wasn't being used
Translate and Transcribe tabs created unnecessary friction
Glasses connection status wasn't visible to users
2nd Version | SHOWING DEVICE STATUS
😍
What works well
Clear structure; obvious flow
Navigation tabs
What needs more work
Switching between the Transcribe and Translate tabs
3rd Version | ALL-IN-ONE HOMEPAGE
😍
What works well
Separate tab to view history
Transcribe as default, translate as an option
Glasses connection status displayed at the top of the homepage
Information hierarchy
Final Version | ALL-IN-ONE HOMEPAGE
Final Design
Designed for faster, more reliable everyday use
Because the design prioritizes real user needs, the experience now feels effortless and adaptive: users can manage hearing settings and transcription seamlessly, with minimal manual input.

AI Speaker Identification
Allow users to label frequent speakers so the system can recognize and tag voices automatically, improving context and accuracy over time.

Manage Notifications & Transcripts
Refine alerts and transcripts by importance, delete outdated content, and improve long-session performance for real-world use.



