I'm a CS PhD student at UNC Chapel Hill, advised by Professor Henry Fuchs. My research sits at the intersection of computer vision, computer graphics, and machine learning, typically with applications in healthcare and augmented reality. I'm working toward a future where wearable spatial computing devices, such as augmented reality eyeglasses capable of all-day use, are contextually aware and personalized to the benefit of users and their goals (e.g., enhancing human memory, becoming healthier).

Currently, I'm a visiting researcher at IDSIA USI-SUPSI working with Professor Piotr Didyk. I've previously done internships at Google AR/VR, Google Consumer Health Research, and Kitware. Prior to pursuing a PhD, I was an embedded systems engineer working on wearable devices at Nike. I enjoy reading, running, and hiking in my spare time.


Research


Please see my Google Scholar profile for a complete and more up-to-date list of my publications.


"What's Up, Doc?": Analyzing How Users Seek Health Information in Large-Scale Conversational AI Datasets

Akshay Paruchuri, Maryam Aziz, Rohit Vartak, Ayman Ali, Best Uchehara, Xin Liu, Ishan Chatterjee, Monica Agrawal

arXiv 2025

To better understand how people use LLMs when seeking health information, we created HealthChat-11K, a dataset of 11,000 real-world conversations. Annotation and analysis of user interactions across 21 health specialties reveal common interaction patterns and enable informative case studies on incomplete context, affective behaviors, and leading questions. Our findings highlight significant risks when seeking health information with LLMs and underscore the need to improve how conversational AI supports healthcare inquiries.


RADAR: Benchmarking Language Models on Imperfect Tabular Data

Ken Gu, Zhihan Zhang, Kate Lin, Yuwei Zhang, Akshay Paruchuri, Hong Yu, Mehran Kazemi, Kumar Ayush, A. Ali Heydari, Maxwell A. Xu, Yun Liu, Ming-Zher Poh, Yuzhe Yang, Mark Malhotra, Shwetak Patel, Hamid Palangi, Xuhai Xu, Daniel McDuff, Tim Althoff, Xin Liu

arXiv 2025

To address language models' poor handling of data artifacts, the RADAR benchmark was created to evaluate data awareness on tabular data. Using 2,980 table-query pairs grounded in real-world data spanning 9 domains and 5 data artifact types, RADAR shows that model performance drops significantly when artifacts such as missing values are introduced, revealing a critical gap in language models' ability to perform robust, real-world data analysis.


What Are the Odds? Language Models Are Capable of Probabilistic Reasoning

Akshay Paruchuri, Jake Garrison, Shun Liao, John Hernandez, Jacob Sunshine, Tim Althoff, Xin Liu, Daniel McDuff

EMNLP 2024 (Main)

Language models were evaluated on probabilistic reasoning tasks such as estimating percentiles and calculating probabilities using idealized and real-world distributions. Techniques including within-distribution anchoring and simplifying assumptions significantly improved LLM performance by up to 70%.
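The kind of task evaluated can be sketched with Python's standard library; the distribution below (resting heart rate as a normal distribution with illustrative parameters) is my own example, not one of the paper's actual test distributions:

```python
from statistics import NormalDist

# An idealized distribution: adult resting heart rate, assumed ~ N(70, 10) bpm.
# (Illustrative parameters only; the paper's task distributions differ.)
resting_hr = NormalDist(mu=70, sigma=10)

# Percentile estimation: what fraction of the population falls below 85 bpm?
percentile = resting_hr.cdf(85) * 100  # Phi(1.5), about the 93rd percentile

# Probability calculation: P(60 <= HR <= 80), the mass within one sigma.
p_normal_range = resting_hr.cdf(80) - resting_hr.cdf(60)

print(f"85 bpm is roughly the {percentile:.0f}th percentile")
print(f"P(60-80 bpm) = {p_normal_range:.2f}")
```

Asking a language model these same questions, with or without anchoring examples drawn from the distribution, is the style of evaluation described above.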


Transforming Wearable Data into Health Insights using Large Language Model Agents

Mike A. Merrill, Akshay Paruchuri, Naghmeh Rezaei, Geza Kovacs, Javier Perez, Yun Liu, Erik Schenck, Nova Hammerquist, Jake Sunshine, Shyam Tailor, Kumar Ayush, Hao-Wei Su, Qian He, Cory Y. McLean, Mark Malhotra, Shwetak Patel, Jiening Zhan, Tim Althoff, Daniel McDuff, Xin Liu

arXiv 2024

The Personal Health Insights Agent (PHIA) combines large language models with code generation and information retrieval tools to deliver personalized insights from wearable health data, a long-standing challenge. Evaluated on over 4,000 questions, PHIA accurately answers over 84% of factual numerical questions and over 83% of open-ended health questions, paving the way for accessible, data-driven personalized wellness.


Structure-preserving Image Translation for Depth Estimation in Colonoscopy Video

Shuxian Wang, Akshay Paruchuri, Zhaoxi Zhang, Sarah McGill, Roni Sengupta

MICCAI 2024 (Oral)

To address the domain gap in colonoscopy depth estimation, a structure-preserving synthetic-to-real image translation pipeline generates realistic synthetic images that retain depth geometry. This approach, aided by a new clinical dataset, improves supervised depth estimation and generalization to real-world clinical data.


Leveraging Near-Field Lighting for Monocular Depth Estimation from Endoscopy Videos

Akshay Paruchuri, Samuel Ehrenstein, Shuxian Wang, Inbar Fried, Stephen M. Pizer, Marc Niethammer, Roni Sengupta

ECCV 2024

Near-field lighting in endoscopes is modeled as Per-Pixel Shading (PPS) to achieve state-of-the-art depth refinement on colonoscopy data. This is accomplished using PPS features with teacher-student transfer learning and PPS-informed self-supervision.
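The core geometric idea can be sketched in a few lines of NumPy. This is a minimal Lambertian point-light sketch of per-pixel shading, assuming inverse-square falloff and a light near the camera center; the function name is mine, and the actual method additionally uses calibrated light and camera models:

```python
import numpy as np

def per_pixel_shading(points, normals, light_pos, intensity=1.0):
    """Near-field shading for each back-projected surface point.

    points:    (H, W, 3) surface points in camera space
    normals:   (H, W, 3) unit surface normals
    light_pos: (3,) light position (endoscope light, near the camera center)

    Returns the (H, W) PPS field: inverse-square distance falloff times the
    cosine of the angle between the surface normal and the light direction.
    """
    to_light = light_pos - points                       # vector to the light
    dist = np.linalg.norm(to_light, axis=-1)            # per-pixel distance
    l_dir = to_light / dist[..., None]                  # unit light direction
    cos_theta = np.clip(np.sum(normals * l_dir, axis=-1), 0.0, None)
    return intensity * cos_theta / (dist ** 2)          # inverse-square law
```

Because the field depends directly on distance, it carries a strong depth cue: a camera-facing point at depth 2 receives one quarter the shading of the same point at depth 1, which is what makes PPS features useful for depth refinement.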


Motion Matters: Neural Motion Transfer for Better Camera Physiological Measurement

Akshay Paruchuri, Xin Liu, Yulu Pan, Shwetak Patel, Daniel McDuff, Roni Sengupta

WACV 2024 (Oral)

Neural motion transfer is presented as an effective data augmentation technique for estimating PPG signals from facial videos. This approach improves inter-dataset testing results by up to 79% and outperforms existing state-of-the-art methods on the PURE dataset by 47%.


Reconstruction of Human Body Pose and Appearance Using Body-Worn IMUs and a Nearby Camera View for Collaborative Egocentric Telepresence

Qian Zhang, Akshay Paruchuri, Young-Woon Cha, Jia-Bin Huang, Jade Kandel, Howard Jiang, Adrian Ilie, Andrei State, Danielle Szafir, Daniel Szafir, Henry Fuchs

IEEE VR 2023 (ReDigiTS Workshop)

A collaborative 3D reconstruction method estimates a target person's body pose from worn IMUs and reconstructs their appearance using the view from a nearby person's AR headset. This approach aims to enable future anytime, anywhere telepresence through daily-worn accessories.


Drone Brush: Mixed Reality Drone Path Planning

Angelos Angelopoulos, Austin Hale, Husam Shaik, Akshay Paruchuri, Ken Liu, Randal Tuggle, Daniel Szafir

HRI 2022

Drone Brush introduces a mixed reality interface using HoloLens 2 for intuitive 3D drone path planning with hand gestures, featuring collision checking via spatial maps and path simplification.



Software


rPPG-Toolbox: Deep Remote PPG Toolbox

NeurIPS 2023 Datasets and Benchmarks Track

A comprehensive toolbox that contains unsupervised and supervised remote photoplethysmography (rPPG) models with support for public benchmark datasets, data augmentation, and systematic evaluation.
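To illustrate what an unsupervised rPPG method does (not the toolbox's own API, which is config-driven), here is a toy sketch in the spirit of the classic GREEN method: spatially average the green channel of a face crop per frame, then take the dominant frequency within a plausible heart-rate band. The function name and synthetic video are my own:

```python
import numpy as np

def estimate_heart_rate(frames, fps, lo=0.7, hi=2.5):
    """Toy unsupervised rPPG: the blood volume pulse subtly modulates skin
    color, so the spatially averaged green channel carries a periodic signal.

    frames: (T, H, W, 3) RGB video of a face region
    fps:    sampling rate in frames per second
    Returns the estimated heart rate in beats per minute.
    """
    green = frames[..., 1].mean(axis=(1, 2))      # per-frame green mean
    green = green - green.mean()                  # remove the DC component
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    power = np.abs(np.fft.rfft(green)) ** 2
    band = (freqs >= lo) & (freqs <= hi)          # 42-150 bpm band
    return freqs[band][np.argmax(power[band])] * 60.0

# Synthetic check: a "face video" whose green channel pulses at 1.2 Hz (72 bpm).
fps, t = 30, np.arange(300) / 30
frames = np.full((300, 8, 8, 3), 120.0)
frames[..., 1] += 5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(estimate_heart_rate(frames, fps))           # close to 72 bpm
```

The toolbox's supervised models replace the fixed green-channel heuristic with learned spatiotemporal features, but the evaluation target, recovering the pulse frequency from video, is the same.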