Robotics Engineering Colloquium Series: Jeannette Bohg | Enabling Cross-Embodiment Learning

Wednesday, December 13, 2023
12:00 pm to 1:00 pm
Location: Unity Hall
Floor/Room #: 520

WIN ‘Bridging the Gaps’ Distinguished Lecturer



Robotics Engineering Colloquium Series

Professor Jeannette Bohg

Enabling Cross-Embodiment Learning

December 13th, 2023
12:00 PM - 1:00 PM
Location: Unity Hall 520

Abstract: In this talk, I will investigate the problem of learning manipulation skills across a diverse set of robotic embodiments. Conventionally, manipulation skills are learned separately for every task, environment, and robot. However, in domains like Computer Vision and Natural Language Processing, we have seen that one of the main contributing factors to generalisable models is large amounts of diverse data. If one robot could learn a new task even from data recorded with a different robot, then we could scale up the training data available to each robot embodiment to a much larger degree. In this talk, I will present a new, large-scale dataset that was put together across multiple industry and academic research labs to make it possible to explore cross-embodiment learning in the context of robotic manipulation, alongside experimental results that provide an example of effective cross-robot policies. Given this dataset, I will also present multiple alternative ways to learn cross-embodiment policies. These example approaches include (1) UniGrasp, a model that can synthesise grasps with new hands; (2) VICES, a systematic study of different action spaces for policy learning; and (3) XIRL, an approach to automatically discover and learn vision-based reward functions from cross-embodiment demonstration videos.

Bio: Jeannette Bohg is an Assistant Professor of Robotics at Stanford University, where she directs the Interactive Perception and Robot Learning Lab. In general, her research explores two questions: What are the underlying principles of robust sensorimotor coordination in humans, and how can we implement them on robots? Research on these questions necessarily lies at the intersection of Robotics, Machine Learning, and Computer Vision. Her lab is specifically interested in robotic grasping and manipulation.

Join us via Zoom

Department(s): Robotics Engineering