I'm a CS PhD student at UNC Chapel Hill advised by Professor Henry Fuchs. My research interests lie at the intersection of computer vision, computer graphics, and machine learning, typically with applications in healthcare and/or augmented reality. I'm working toward a future where wearable spatial computing devices, such as augmented reality eyeglasses capable of all-day use, are contextually aware and personalized to benefit users and their goals (e.g., enhancing human memory, becoming healthier).
Currently, I'm a visiting researcher at IDSIA USI-SUPSI working with Professor Piotr Didyk. I've previously done internships at Google AR/VR, Google Consumer Health Research, and Kitware. Prior to pursuing a PhD, I was an embedded systems engineer working on wearable devices at Nike. I enjoy reading, running, and hiking in my spare time.
Research
Please see my Google Scholar profile for a complete, more up-to-date list of my publications.
"What's Up, Doc?": Analyzing How Users Seek Health Information in Large-Scale Conversational AI Datasets
arXiv 2025
To better understand how people use LLMs when seeking health information, we created HealthChat-11K, a dataset of 11,000 real-world conversations. Annotation and analysis of user interactions across 21 health specialties reveal common interaction patterns and enable informative case studies on incomplete context, affective behaviors, and leading questions. Our findings highlight significant risks when seeking health information with LLMs and underscore the need to improve how conversational AI supports healthcare inquiries.
RADAR: Benchmarking Language Models on Imperfect Tabular Data
arXiv 2025
To address language models' poor handling of data artifacts, the RADAR benchmark was created to evaluate data awareness on tabular data. Using 2,980 table-query pairs grounded in real-world data spanning 9 domains and 5 data artifact types, RADAR finds that model performance drops significantly when artifacts like missing values are introduced. This reveals a critical gap in their ability to perform robust, real-world data analysis.
What Are the Odds? Language Models Are Capable of Probabilistic Reasoning
EMNLP 2024 (Main)
Language models were evaluated on probabilistic reasoning tasks such as estimating percentiles and calculating probabilities using idealized and real-world distributions. Techniques including within-distribution anchoring and simplifying assumptions significantly improved LLM performance by up to 70%.
Transforming Wearable Data into Health Insights using Large Language Model Agents
arXiv 2024
The Personal Health Insights Agent (PHIA) leverages large language models with code generation and information retrieval tools to address the ongoing challenge of deriving personalized insights from wearable health data. Evaluated on over 4,000 questions, PHIA accurately answers over 84% of factual numerical questions and 83% of open-ended health questions, paving the way for accessible, data-driven personalized wellness.
Structure-preserving Image Translation for Depth Estimation in Colonoscopy Video
MICCAI 2024 (Oral)
To address the domain gap in colonoscopy depth estimation, a structure-preserving synthetic-to-real image translation pipeline generates realistic synthetic images that retain depth geometry. This approach, aided by a new clinical dataset, improves supervised depth estimation and generalization to real-world clinical data.
Leveraging Near-Field Lighting for Monocular Depth Estimation from Endoscopy Videos
ECCV 2024
Near-field lighting in endoscopes is modeled as Per-Pixel Shading (PPS) to achieve state-of-the-art depth refinement on colonoscopy data. This is accomplished using PPS features with teacher-student transfer learning and PPS-informed self-supervision.
Motion Matters: Neural Motion Transfer for Better Camera Physiological Measurement
WACV 2024 (Oral)
Neural Motion Transfer is presented as an effective data augmentation technique for estimating PPG signals from facial videos. This approach improves inter-dataset testing results by up to 79% and outperforms existing state-of-the-art methods on the PURE dataset by 47%.
Reconstruction of Human Body Pose and Appearance Using Body-Worn IMUs and a Nearby Camera View for Collaborative Egocentric Telepresence
IEEE VR 2023 (ReDigiTS Workshop)
A collaborative 3D reconstruction method estimates a target person's body pose using worn IMUs and reconstructs their appearance via an external AR headset view from another nearby person. This approach aims to enable future anytime, anywhere telepresence through daily worn accessories.
Software
rPPG-Toolbox: Deep Remote PPG Toolbox
NeurIPS 2023 Datasets and Benchmarks Track
A comprehensive toolbox that contains unsupervised and supervised remote photoplethysmography (rPPG) models with support for public benchmark datasets, data augmentation, and systematic evaluation.