Robotics Engineering Colloquium Series - Professor Ziyun (Claude) Wang
12:00 p.m. to 1:00 p.m.
Beyond Frames: Learning to Perceive With Event-Based Vision
The development of computer vision has largely focused on scaling data and computational power, often at the cost of energy efficiency. In contrast, biological intelligence achieves remarkable adaptability with minimal resources, inspiring a shift toward neuromorphic vision, an approach that mimics the structure and dynamics of biological neural systems.
In this talk, I will explore the promises and challenges of neuromorphic computer vision. I will begin with the sensing principles of event-based vision, in contrast to frame-based cameras. I will then highlight algorithmic advances in three fundamental computer vision tasks: reconstruction, segmentation, and tracking, with examples including continuous visual hull reconstruction, continuous-time human motion field estimation, and unsupervised independent motion segmentation. I will also present a real-world robotic application in which event-based sensors enable high-speed object catching, and conclude by outlining open problems and future directions for event-based sensing in robotics.
Bio:
Ziyun (Claude) Wang is an Assistant Professor of Electrical and Computer Engineering and a core faculty member of the Data Science and AI Institute (DSAI) and the Laboratory for Computational Sensing & Robotics (LCSR) at Johns Hopkins University. He received his Ph.D. from the GRASP Lab at the University of Pennsylvania.
Dr. Wang’s research lies at the intersection of computer vision, robotics, and neuromorphic computing, with a focus on developing bio-inspired, power-efficient embodied AI systems. A central theme of his work is leveraging emerging sensing technologies, particularly event-based vision sensors, as powerful tools for robot perception. By exploiting the high temporal resolution and sparse data characteristics of event-based sensors, Dr. Wang designs algorithms for fast and efficient scene understanding, enabling high-speed vision and low-latency motion estimation under difficult visual conditions. His broader goal is to build more agile, efficient, and robust robotic perception systems that integrate such sensing and learning principles while addressing their theoretical and practical challenges. He has co-authored publications in leading venues including CVPR, ICCV, ECCV, ICLR, AAAI, ICIP, ICCP, ISER, and RA-L.