At Union Square Ventures, I had the opportunity to share how my team at Hume AI approached the challenge of designing the interface for the launch of EVI, the first empathic voice AI.
EVI (Empathic Voice Interface) is designed not just to convert text to speech, but to understand and convey nuanced emotion in real time, adapting to conversational context and to users' needs. Our goal was a responsive, emotionally intelligent system that feels attuned to its audience rather than robotic or uncanny. Achieving this required deep interdisciplinary collaboration, from voice data annotation and emotion modeling to product design, interface prototyping, and UX research with real users.
In my talk, I walked through the design decisions and tradeoffs we faced: balancing transparency and control for users, ensuring the AI’s tone matched its intent, and building trust through feedback and iterative improvement. I also shared how we used custom visualizations and micro-interactions to make EVI’s empathy visible and intuitive, so that users could sense not just what the AI “knew,” but how it “felt.” The conversation at USV was both technical and deeply human, exploring how empathic AI can support better communication, richer experiences, and a future where voice technology genuinely understands us.
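To give a flavor of what "making empathy visible" can mean in practice, here is a minimal sketch, not Hume's actual implementation, of the kind of mapping such a visualization relies on: per-utterance emotion scores are reduced to a couple of visual parameters (a blended hue and a pulse amplitude) that a UI can animate. The `EmotionScore` shape, the hue table, and the `toVisualState` helper are all hypothetical names introduced for illustration.

```typescript
// Hypothetical per-utterance emotion score, normalized to 0..1.
interface EmotionScore {
  name: string;   // e.g. "joy", "calmness", "sadness"
  score: number;
}

// Illustrative mapping from emotion names to hues (degrees on a color wheel).
const HUES: Record<string, number> = {
  joy: 45,        // warm yellow
  calmness: 200,  // cool blue
  sadness: 230,   // deep blue
  anger: 0,       // red
};

// Reduce the top-scoring emotions to simple visual parameters:
// the hue blends toward the dominant emotions, and the pulse
// amplitude tracks how strongly the top emotion is expressed.
function toVisualState(scores: EmotionScore[]): { hue: number; pulse: number } {
  const top = [...scores].sort((a, b) => b.score - a.score).slice(0, 3);
  const total = top.reduce((sum, e) => sum + e.score, 0) || 1;

  // Weighted average of hues, so mixed feelings blend rather than flicker.
  const hue = top.reduce(
    (acc, e) => acc + (HUES[e.name] ?? 300) * (e.score / total),
    0
  );

  const pulse = Math.min(1, top[0]?.score ?? 0);
  return { hue, pulse };
}

// Example: a mostly joyful reading yields a warm, strongly pulsing state.
console.log(toVisualState([
  { name: "joy", score: 0.7 },
  { name: "calmness", score: 0.2 },
  { name: "sadness", score: 0.05 },
]));
```

The blending is the point of the sketch: averaging over the top few emotions keeps the display from flickering between discrete labels, which is part of what makes an empathic interface feel attuned rather than jittery.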