Schedule

Tentative schedule for the two-session workshop

Start Time Duration Event
14:15 15 min Welcome and Introductions
14:30 30 min Lightning Talks by Accepted Participants
15:00 45 min Keynote by Prof. Mayank Goel, followed by a 15-min Q&A
15:45 45 min Break
16:30 45 min Keynote by Dr. Akshay Paruchuri, followed by a 15-min Q&A
17:15 45 min Group Discussions with Demos/Posters

Keynotes

Prof. Mayank Goel

Keynote Details: To be announced

Speaker Bio: Mayank Goel is an Associate Professor in the Software and Societal Systems Department (S3D) and the Human-Computer Interaction Institute (HCII) in the School of Computer Science at Carnegie Mellon University, where he leads the Smart Sensing for Humans (SMASH) Lab. His research focuses on developing practical and deployable sensing and machine-learning systems for health sensing, technologies for the developing world, and novel user interactions that reduce barriers to technology use. His work draws on human–computer interaction, mobile computing, sensing, signal processing, and machine learning, and is inherently interdisciplinary, involving close collaborations with engineers, clinicians, community health workers, patients, and caregivers worldwide. Several of his inventions are deployed in clinics and hospitals, licensed to companies, and integrated into commercial products. He received his PhD in Computer Science and Engineering from the University of Washington, an MS in Computer Science from the Georgia Institute of Technology, and a BTech in Computer Science and Engineering from GGS Indraprastha University, India.

Dr. Akshay Paruchuri
From Sensing to Understanding: Building All-Day Wearable Systems for Personal Health Management

Next-generation wearables such as smart glasses are poised to become platforms for continuous, multimodal egocentric sensing, uniquely positioning them to transform personal health management. Realizing this vision requires solving two interconnected challenges: energy-efficient operation for all-day usage, and intelligent systems that transform raw sensor data into actionable health insights. Beginning with energy efficiency, smart glasses face a fundamental tension: cameras, on-device AI, and wireless transmission are power-hungry, threatening all-day usability. Smarter sensing approaches can help. For example, EgoTrigger, an audio-driven image capture approach, selectively activates cameras only when low-power audio cues indicate contextually relevant moments; it can significantly reduce computational requirements while maintaining performance on episodic memory tasks. With more efficient sensing in hand, the next challenge is generating meaningful insights. Agentic systems, such as the Personal Health Insights Agent (PHIA), leverage large language models with tools such as code generation and information retrieval to analyze wearable health data, achieving over 84% accuracy on health queries. I will further discuss recent multi-agent advances, including the Personal Health Agent (PHA) framework, and promising directions for incorporating egocentric visual information alongside personal context for richer health reasoning.

The convergence of these capabilities opens transformative possibilities. For general users, such glasses provide assistance that benefits from continuous egocentric context. For healthcare, passive longitudinal sensing enables previously impossible questions, such as: how did movement patterns change before and after a fall? Even if at-risk elderly populations never adopt smart glasses, longitudinal data from healthy wearers could advance our understanding of gait deterioration and early warning signs, yielding both practical systems for vulnerable populations and fundamental scientific insights into human behavior.

Speaker Bio: Akshay Paruchuri is a Postdoctoral Scholar in the Stanford Translational AI (STAI) Lab, working with Professor Ehsan Adeli on learning from egocentric, multimodal information (e.g., data from wearables and neuroimaging) to improve healthcare outcomes for general populations and for those affected by aging-related diseases. He received his PhD in Computer Science from the University of North Carolina at Chapel Hill, advised by Professor Henry Fuchs. His research spans computer vision, machine learning, and healthcare AI, with publications in Nature Communications, NeurIPS, ECCV, MICCAI, and IEEE TVCG. He has conducted research at Google AR, Google Consumer Health Research, and IDSIA USI-SUPSI, and previously developed consumer wearable devices at Nike.


Accepted Contributions

Accepted contributions will be listed here.