research

Research interests and projects.

I study egocentric vision, visual representation learning, and perception in naturalistic environments. Using large-scale egocentric video, computer vision, and behavioral experiments, I aim to understand how infants' everyday visual experience shapes early concept development.


Characterizing infant visual experience

What do infants actually see, and how does it compare to the data that drives modern vision systems? I use naturalistic egocentric video (e.g., the BabyView dataset) to quantify objects, scenes, and activities in infants’ view and compare those statistics to standard vision datasets.
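As a minimal sketch of this kind of comparison (not the project's actual pipeline), suppose we have per-category object-detection counts from an egocentric corpus and from a reference vision dataset; the categories and counts below are hypothetical. One simple summary statistic is the Jensen-Shannon divergence between the two category distributions:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2, so bounded in [0, 1])
    between two discrete frequency distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))  # KL divergence in bits
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical per-category detection counts (e.g., hands, faces, toys, vehicles).
infant_counts = [900, 450, 300, 50]    # egocentric infant video
dataset_counts = [100, 200, 400, 800]  # standard vision dataset

print(round(js_divergence(infant_counts, dataset_counts), 3))
```

A divergence near 0 would indicate that infants' visual diet resembles the training distribution of standard datasets; values near 1 indicate very different category statistics.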


Visual–linguistic alignment

How well are the visual and linguistic streams aligned in naturalistic settings? I use multimodal models (e.g., CLIP) to measure that alignment in egocentric infant data.
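A toy sketch of the alignment measure, under the assumption that frames and co-occurring utterances have already been embedded by a CLIP-style image and text encoder (random vectors stand in for real embeddings here): score each frame-utterance pair by cosine similarity in the shared embedding space.

```python
import numpy as np

def cosine_alignment(image_emb, text_emb):
    """Cosine-similarity matrix between image and text embeddings,
    shape (n_images, n_texts)."""
    im = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    tx = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return im @ tx.T

rng = np.random.default_rng(0)
frames = rng.normal(size=(4, 512))      # stand-in for CLIP image embeddings
utterances = rng.normal(size=(3, 512))  # stand-in for CLIP text embeddings

sims = cosine_alignment(frames, utterances)
print(sims.shape)  # (4, 3)
```

In real data, higher similarity between a frame and its temporally co-occurring utterance (relative to shuffled pairings) would indicate tighter visual-linguistic alignment.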


Attention, action, and learning

How do attention and action structure learning? For example, how do manual actions create visual salience and support joint attention, and how does real-time attention relate to language learning?


3D vision and object experience (in development)

How do manipulation and dyadic interaction shape the 3D view statistics that support object learning? I use 3D object reconstruction and 6DoF pose tracking to characterize viewpoint distributions during active vs. passive viewing, compare infant and caregiver view experiences during joint play, and test whether view variability predicts recognition and generalization to novel views and exemplars.
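One simple way to summarize viewpoint distributions of the kind described above (a hedged sketch, not the project's method): treat each sampled camera-to-object viewing direction from 6DoF pose tracking as a unit vector, and use the spherical variance (one minus the length of the mean vector) as a dispersion index. The direction vectors below are synthetic stand-ins.

```python
import numpy as np

def spherical_variance(directions):
    """1 - |mean of unit viewing-direction vectors|.
    Near 0 when all views come from one direction; approaches 1
    when views are spread around the object."""
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return 1.0 - np.linalg.norm(d.mean(axis=0))

# Passive viewing: nearly identical viewing directions -> low variance.
passive = [[0, 0, 1], [0.05, 0, 1], [0, 0.05, 1]]
# Active manipulation: views from many sides -> high variance.
active = [[0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, -1]]

print(spherical_variance(passive) < spherical_variance(active))  # True
```

A dispersion index like this makes the active-vs-passive comparison concrete: the hypothesis is that higher view variability during manipulation predicts better recognition and generalization to novel views.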


Methods and open science

Methods: Computer vision (object detection, pose estimation, multimodal embeddings), multimodal data fusion (head-mounted eye trackers, cameras, microphones), and behavioral experiments (e.g., eye-tracking).
Open science: Committed to data-driven, ecologically valid developmental psychology and reproducible pipelines on large-scale naturalistic data.