Archives

Design, Manufacturing, and Locomotion Studies of a 1.27 Gram Ambulatory Microrobot

Dr. Onur Ozcan
Postdoctoral Fellow 
Harvard University and Wyss Institute

Friday, December 13th, 2013

Abstract: Biological research over the past several decades has elucidated some of the mechanisms behind the highly mobile, efficient, and robust locomotion of insects such as the cockroach. Roboticists have used this information to create biologically inspired machines capable of running, jumping, and climbing robustly over a variety of terrains. To date, little work has been done to develop an at-scale insect-inspired robot capable of similar feats, due to limitations in fabrication, actuation, and electronics integration at small scales. This talk addresses these challenges, focusing on the mechanical design and fabrication of a 1.27 g walking robot, the Harvard Ambulatory MicroRobot (HAMR). The development of HAMR includes modeling and parameter selection for a two-degree-of-freedom leg powertrain that enables locomotion. In addition, a design inspired by pop-up books that enables fast and repeatable assembly of the miniature robot is presented. Finally, a method for driving HAMR at speeds of up to 15 body lengths per second is presented, along with simple control schemes. We believe HAMR has the potential for use in hazardous environments, as well as being an ideal tool for investigating locomotion at small scales.

Bio: Dr. Onur Ozcan creates bio-inspired miniature ambulatory robots through research at the interface of mechanical engineering and robotics. He received his B.S. (2007) in Mechatronics Engineering at Sabanci University in Istanbul, Turkey and his M.S. (2010) and Ph.D. (2012) in Mechanical Engineering at Carnegie Mellon University in Pittsburgh, Pennsylvania where he worked on control and automation of tip-directed nanoscale fabrication. He is currently a postdoctoral fellow working on fabrication and control of miniature crawling robots at Harvard University’s School of Engineering and Applied Sciences and the Wyss Institute for Biologically Inspired Engineering.

Developing Robot Capabilities for the Real World

Wes Huang
iRobot

Friday, November 8th, 2013

Abstract: In this talk, I will describe two research projects at iRobot, on door opening and socially-aware navigation, that are steps towards a long-term goal of extending independent living. For robots to do work in indoor environments, access to buildings and rooms is essential, and doors often stand in the way of that access. We are developing algorithms for opening doors with iRobot's 510 PackBot. Part of this work involves overcoming limitations of the PackBot, but we are also addressing problems that prior work in door opening has not. The second project is about socially-aware navigation. Navigating in shared spaces is a social interaction, but our robots are not yet conversant in the social aspect of navigation. I will describe an experiment we conducted to investigate how a robot can actively use nonverbal social cues in navigation. These projects can be viewed as steps towards developing robots that extend independent living, enabling seniors to live at home or to delay a transition to a higher level of care, a long-term goal that iRobot's CEO has articulated for the company.

Wes Huang is a Principal Robotics Engineer at iRobot.  He received his PhD in Robotics from Carnegie Mellon's Robotics Institute and has conducted research in robotic manipulation, mobile robotics, and mapping & localization. He joined iRobot's Research group in 2009. 

Youtube2Text: Natural Language Description Generation for Diverse Activities in Video

Professor Kate Saenko
Computer Science
University of Massachusetts Lowell

Friday, November 1st, 2013

Abstract: Many core tasks in artificial intelligence require joint modeling of images and natural language. The past few years have seen increasing recognition of the problem, with research on connecting words and names to pictures, describing static images in natural language, and visual grounding of natural-language instructions for robotics. I will discuss recent work that focuses on generating natural language descriptions of short but extremely diverse YouTube video clips, where limited prior work exists. Despite a recent push towards large-scale object recognition, activity recognition in video remains limited to narrow domains and small vocabularies of actions. In this work, we tackle the challenge of recognizing and describing activities "in the wild". We present a solution that takes a short video clip and outputs a brief sentence that sums up the main activity in the video, namely the actor, the action, and its object. Unlike previous work, our approach works on out-of-domain actions: if it cannot find an accurate prediction from a pre-trained model, it falls back to a less specific answer that is still plausible from a pragmatic standpoint. We use semantic hierarchies learned from the data to help choose an appropriate level of generalization, and priors learned from web-scale natural language corpora to penalize unlikely combinations of actors/actions/objects; we also use a web-scale language model to "fill in" novel verbs, i.e., when the verb does not appear in the training set. We evaluate our method on a large YouTube corpus and demonstrate that it generates short sentence descriptions of video clips better than baseline approaches.
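To make the combination scheme concrete, here is a toy sketch of the scoring idea: visual classifier confidences are fused with a language prior over subject-verb-object triples, and the highest-scoring triple is chosen. Every name and number in it (the `vision` confidences, the `lm_prior` table, the `ALPHA` weight) is an invented placeholder, not the authors' actual model.

```python
# Toy sketch: fuse (hypothetical) visual classifier confidences with a
# (hypothetical) language-model prior over subject-verb-object triples.
from itertools import product

vision = {  # per-word confidences from imagined visual classifiers
    "subject": {"person": 0.8, "dog": 0.1},
    "verb":    {"riding": 0.5, "moving": 0.6},  # "moving" = less specific verb
    "object":  {"motorbike": 0.7, "box": 0.2},
}

def lm_prior(s, v, o):
    """Stand-in for a web-scale language prior over SVO triples."""
    plausible = {("person", "riding", "motorbike"): 0.9,
                 ("person", "moving", "box"): 0.6}
    return plausible.get((s, v, o), 0.01)  # unlikely combinations are penalized

ALPHA = 0.7  # weight on visual evidence vs. linguistic plausibility
best = max(
    product(vision["subject"], vision["verb"], vision["object"]),
    key=lambda t: ALPHA * (vision["subject"][t[0]]
                           * vision["verb"][t[1]]
                           * vision["object"][t[2]])
                  + (1 - ALPHA) * lm_prior(*t),
)
print("A %s is %s a %s." % best)  # naive surface realization
```

In this toy example the language prior overrides the slightly stronger visual score for "moving" and the sketch outputs "A person is riding a motorbike."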

Professor Saenko has been an Assistant Professor in the Computer Science Department at UML since fall 2012. Before that, she was a Postdoctoral Researcher at the International Computer Science Institute, a Visiting Scholar at UC Berkeley EECS, and a Visiting Postdoctoral Fellow at the School of Engineering and Applied Sciences at Harvard University. Before that, she was a PhD student at MIT. Her research interests are in applications of machine learning to image and language understanding, multimodal perception for autonomous systems, and adaptive intelligent human-computer interfaces.

Mechanisms as Minds: Unconventional Methods of Control in Tensegrity Robots

John Rieffel
Assistant Professor of Computer Science
Union College

Friday, October 18th, 2013

Abstract: Traditional engineering approaches strive to avoid, or actively suppress, nonlinear dynamic coupling among components. Biological systems at every scale of life, by contrast, are often rife with these same dynamics. Could there be, in some cases, a benefit to high degrees of dynamical coupling? The emerging field of "morphological computation" explores the paradoxical notion that increasing a robot's complexity can sometimes simplify the task of control. In this talk I'll demonstrate morphological computation in physical tensegrity robots driven by simple oscillators. These results lend further credence to notions of embodied anatomical computation in biological systems, and are of particular interest for studying the biomechanics of completely soft animals such as caterpillars, and for the nascent field of soft robotics.
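As a flavor of how simple such a controller can be, here is a minimal sketch, assuming a tensegrity with a few motor-driven cables: the entire "brain" is a bank of phase-shifted sine oscillators, and any gait structure emerges from the body's coupled dynamics rather than from the code. All parameter values are illustrative only, not the speaker's setup.

```python
# Minimal sketch of an open-loop oscillator controller for a tensegrity's
# actuated cables. Morphological computation means the controller can stay
# this simple; the body's coupled dynamics shape the resulting motion.
import math

def oscillator_commands(t, n_actuators=3, freq_hz=2.0, amp=0.5, phase_step=2.09):
    """Return one actuation command per cable at time t (seconds)."""
    return [amp * math.sin(2 * math.pi * freq_hz * t + i * phase_step)
            for i in range(n_actuators)]

# Example: sample commands at 100 Hz for one second of open-loop drive.
for step in range(100):
    commands = oscillator_commands(step / 100.0)  # send to the actuators
```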

John Rieffel is an Assistant Professor of Computer Science at Union College. He received degrees in Computer Science (BA) and Engineering (BS) from Swarthmore College, and a Ph.D. in Computer Science from Brandeis University. He has held postdoctoral positions in Cornell's Mechanical Engineering Department and Tufts University's Biology Department. His research interests include tensegrities, soft and amorphous robotics, 3D printing, and the evolution of physical systems.

Economy of motion: three problems of efficient robotic locomotion and manipulation

Devin Balkcom
Associate Professor, Computer Science
Dartmouth College

Friday, October 11th, 2013 

Abstract: Computer scientists are often concerned about lower bounds: what is the least amount of time or memory required to solve a problem? This talk will explore a few fundamental problems in robotics from an equally minimalist perspective, motivated by problems in motion planning and manipulation of flexible materials including paper, cloth, and string. What is the fastest trajectory to move a vehicle with particular motion capabilities from one location and orientation to another, among infinitely many possible trajectories? How many fingers are needed to immobilize a bendable (but not stretchable) piece of cloth? How many degrees of freedom (motors) must a mechanism have to tie a knot in a piece of string? We will show analytical and geometric solutions to each of these problems, as well as a few machines built from the algorithms and principles discovered.

Devin Balkcom is an Associate Professor of Computer Science at Dartmouth College. Balkcom earned his Ph.D. from Carnegie Mellon University; as part of his thesis he built an origami-folding robot. Balkcom is a recipient of an NSF CAREER grant.

Context Aware Shared Autonomy for Robotic Manipulation Tasks

Dejan Pangercic
Bosch Research and Technology Center 

Thursday, September 19th, 2013

Abstract: In this talk I will briefly present on-going robotic projects at Bosch (autonomous lawnmower, robot for organic farming, hospital transport assistant system, activities from the PR2 Beta Program) and then describe a project in which a collaborative human-robot system provides context information to enable more effective robotic manipulation. We take advantage of the semantic knowledge of a human co-worker, who provides additional context information and interacts with the robot through a user interface. A Bayesian Network encodes the dependencies among the pieces of information provided by the user. The output of this model is a ranked list of the grasp poses best suited to a given task, which is then passed to the motion planner. Our system was implemented in ROS and tested on a PR2 robot. We compared the system to state-of-the-art implementations using quantitative (e.g. success rate, execution times) as well as qualitative (e.g. user convenience, cognitive load) metrics. We conducted a user study in which eight subjects were asked to perform a generic manipulation task, for instance pouring from a bottle or moving a cereal box, with a set of state-of-the-art shared autonomy interfaces. Our results indicate that an interface which is aware of the context provides benefits not offered by other state-of-the-art implementations.
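A rough sketch of the ranking step, with a simple conditional table standing in for the Bayesian Network described above; the tasks, approach directions, and probabilities are invented for illustration, not the authors' model.

```python
# Hedged sketch: score candidate grasp poses given user-provided task context,
# then hand the ranked list to the motion planner. All values are illustrative.
P_GRASP_GIVEN_TASK = {  # P(grasp approach | task), elicited or learned
    "pour": {"side": 0.7, "top": 0.2, "handle": 0.1},
    "move": {"side": 0.3, "top": 0.5, "handle": 0.2},
}

def rank_grasps(task, candidates):
    """Rank grasps by task-conditioned probability times planner feasibility."""
    scored = [(P_GRASP_GIVEN_TASK[task][g["approach"]] * g["feasibility"], g)
              for g in candidates]
    return [g for _, g in sorted(scored, key=lambda sg: sg[0], reverse=True)]

candidates = [{"approach": "top", "feasibility": 0.9},
              {"approach": "side", "feasibility": 0.8}]
ranked = rank_grasps("pour", candidates)  # best grasp first, then the planner
```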

Torque Controlled Humanoid Robots

Chris Atkeson
Robotics Institute
Carnegie Mellon University

Friday, September 6th, 2013

Abstract: This talk will describe how torque-controlled humanoids (Sarcos Primus System, Boston Dynamics Atlas) are different from heavily geared robots (Honda Asimo, HRP2, Willow Garage PR2). Our own work in this area will be reviewed.
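A minimal sketch of the joint-level difference, under simplifying assumptions: a heavily geared robot typically tracks positions with a stiff servo, while a torque-controlled robot accepts torque commands directly, for example gravity compensation plus a soft virtual spring, which is what permits compliant whole-body behavior. The gains below are arbitrary placeholders.

```python
# Illustrative joint-level control laws, not any particular robot's firmware.

def position_servo(q, q_ref, kp=2000.0):
    """Heavily geared joint: stiff PD-style servo around a reference angle."""
    return kp * (q_ref - q)  # large torque for small error: stiff, non-compliant

def torque_command(q, q_ref, gravity_torque, kp=20.0):
    """Torque-controlled joint: gravity compensation plus a soft spring."""
    return gravity_torque + kp * (q_ref - q)  # compliant around the reference
```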

Chris Atkeson is a professor in the Robotics Institute at Carnegie Mellon University. He has been working with torque controlled humanoids for the last 20 years. More information is at www.cs.cmu.edu/~cga 

Mutual Information-based Gradient-Ascent Control for Distributed Robotics

Brian J. Julian
Computer Science and Artificial Intelligence Laboratory
MIT

Tuesday, April 30th, 2013

Abstract: In this talk I discuss the relevance, generality, validity, and scalability of a novel class of decentralized, gradient-based controllers. More specifically, these controllers use the analytical gradient of mutual information to distributively control multiple robots as they infer the state of an environment. Given simple robot dynamics and few probabilistic assumptions, none of which include Gaussian noise, I prove that these controllers are convergent between sensor observations and, in their most general form, locally optimal. Additionally, by employing distributed approximation algorithms and non-parametric methods, I show that computational tractability is achieved even for large systems and environments. Throughout the talk I support this work with numerous simulations and hardware experiments concerning traditional robot applications such as mapping, exploration, and surveillance.
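The controller's core loop can be sketched as follows, with a smooth placeholder standing in for the mutual-information objective and finite differences standing in for the analytical gradient that the actual controllers use; everything here is illustrative.

```python
# Sketch of gradient-ascent control on an information objective. The real
# work uses the analytical MI gradient; this toy uses finite differences.

def mutual_information(positions):
    """Placeholder objective: peaks when robots spread over 0, 1, 2."""
    return -sum((p - i) ** 2 for i, p in enumerate(positions))

def gradient_step(positions, step=0.1, eps=1e-5):
    """One ascent step: estimate each partial derivative, then move uphill."""
    grads = []
    for i in range(len(positions)):
        bumped = list(positions)
        bumped[i] += eps  # finite-difference partial derivative
        grads.append((mutual_information(bumped)
                      - mutual_information(positions)) / eps)
    return [p + step * g for p, g in zip(positions, grads)]

positions = [0.5, 0.5, 0.5]
for _ in range(50):  # between sensor observations, ascend the objective
    positions = gradient_step(positions)
```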

Brian J. Julian received the B.S. degree in mechanical and aerospace engineering from Cornell University, and the S.M. degree in electrical engineering and computer science from the Massachusetts Institute of Technology. He is currently a Ph.D. candidate in the Computer Science and Artificial Intelligence Laboratory at MIT. Since 2005, he has been a staff member in the Engineering Division at MIT Lincoln Laboratory, where he is currently Associate Staff in the Rapid Prototyping Group and a Lincoln Doctoral Scholar. Brian's research in distributed multi-robot coordination applies tools from control, network, and optimization theory to develop scalable information-theoretic algorithms with provable performance characteristics.

Robotic Lunar Landing and Hopping

Bobby Cohanim
Draper Laboratory

Wednesday, April 24th, 2013

Abstract: Government and commercial efforts from around the world are racing to send exploration and scientific spacecraft back to the Moon. NASA is investigating the use of small missions to perform site surveys, conduct environmental science, emplace early infrastructure, and provide test beds for technology development projects. The Google Lunar X-Prize will be awarded to the first privately funded team to send a robot to the Moon, travel 500 meters over the lunar surface, and transmit video, images, and data back to the Earth. Space ventures are expanding from the public sector to the private sector. Companies around the world are raising private funds to fuel innovation in technology and establish space infrastructure. This talk describes the history, research, and development of lunar landing and hopping at Draper and MIT: from the Apollo program, through recent prototype work on hoppers, to NASA flights demonstrating autonomous GNC of terrestrial landers.

Bobby Cohanim is the Mission Design Group Leader and a Principal Member of the Technical Staff at Draper Laboratory. He joined Draper Laboratory in September 2006. He has worked extensively on NASA's ALHAT program, developing the next generation of landing sensors and GNC for planetary landing, and developing and demonstrating landing and hopping technologies for the next generation of planetary vehicles. Before coming to Draper, Bobby worked at JPL developing mission concepts for future planetary missions, and at MIT developing techniques for designing large arrays for science observations. He received his B.S. from Iowa State University in Aerospace Engineering (2002) and his S.M. in Aero/Astro from MIT (2004), and is finishing his Sc.D. in Aero/Astro at MIT (2009-2013).

Cheap and Easy: Sensitive and Robust Tactile Arrays from COTS Components

Leif Jentoft and Yaroslav Tenzer
Harvard School of Engineering and Applied Sciences 

Tuesday, April 16th, 2013

Abstract: Despite decades of research, tactile arrays remain an exotic and expensive technology. We report a new approach based on commercial-off-the-shelf (COTS) integrated circuits and standard printed circuit boards. New barometric pressure sensor chips include MEMS pressure sensors, instrumentation amplifiers, analog-to-digital converters, and standard bus interfaces, yet cost less than $2 each. A low-performance microcontroller can handle multiplexing and the bus interface, enabling simple serial communication over USB. Performance is excellent, with a minimum element spacing of 5 mm, excellent linearity, sensitivity better than 0.01 N, and readout rates of up to 250 Hz per element. We present a characterization of the sensors, as well as an open-source release of the circuit board plans and microcontroller code. Much as the Kinect revolutionized computer vision by leveraging consumer economies of scale, this is potentially a breakthrough technology that can enable widespread use of tactile sensing in diverse robotics and human-interface applications.
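For intuition, here is a hypothetical host-side polling loop for such an array over I2C, written in Python with the smbus2 package. The bus number, device addresses, and register offset are placeholders, not the published design; the open-source release contains the real circuit board plans and microcontroller code.

```python
# Illustrative sketch: poll an array of I2C barometric-pressure chips used as
# tactile elements. Addresses and the register map below are hypothetical.
from smbus2 import SMBus

TAXEL_ADDRESSES = [0x60, 0x61, 0x62, 0x63]  # one address per sensing element
PRESSURE_REG = 0x00                         # hypothetical data register

def read_taxels(bus):
    """Read a raw pressure word from each taxel on the bus."""
    readings = []
    for addr in TAXEL_ADDRESSES:
        msb, lsb = bus.read_i2c_block_data(addr, PRESSURE_REG, 2)
        readings.append((msb << 8) | lsb)  # combine bytes into a 16-bit sample
    return readings

with SMBus(1) as bus:
    print(read_taxels(bus))  # call in a loop for continuous streaming
```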

Leif Jentoft received the Bachelor of Science in Engineering from Franklin W. Olin College in 2009. While there, he studied under Dr. Gill Pratt, researching passive mechanical systems and bioinspired control. He is currently pursuing his PhD under Prof. Robert Howe in the Biorobotics Laboratory at Harvard University. He is a founder at TakkTile, a joint venture with Yaroslav Tenzer that seeks to bring sensitive, robust tactile sensing to those who need it in research, start-ups, and industry. His research interests include robotic grasping and manipulation, tactile sensing, passive mechanics, and technology that makes capable robots less expensive for real-world tasks.  

Yaroslav Tenzer received the Bachelor's degree in Mechanical Engineering and the Master's degree in Mechatronics from Ben-Gurion University, Israel, in 2000 and 2006, respectively. During his studies he worked at the Internal Combustion Laboratory, led by Prof. Eran Sher, where he participated in various experiments designed to evaluate fuel efficiency and its impact on engines. Yaroslav then joined the Department of Mechanical Engineering at Imperial College London, UK, where he received a PhD in Medical Robotics. He is now a post-doctoral researcher at the Biorobotics Laboratory at Harvard University, USA. His research interests include autonomous robots for service applications, wearable sensors, haptics, and surgical robotics.

Using Human-Robot Interactions to Study Human-Human Social Behavior

Brian Scassellati
Associate Professor of Computer Science
Yale University

Tuesday, April 2nd, 2013

Abstract: Robots offer a unique tool in the study of human social behavior because they offer a completely controllable, and infinitely repeatable, stimulus that responds precisely to only select social cues and is free of observer bias. In this talk, I'll present some results from human-robot interactions on topics that include cheating and compliance, social learning and instruction, and animacy and agency. Finally, I'll offer some preliminary data on how these robots can be used as therapeutic and diagnostic tools for social deficits such as autism spectrum disorder.

Brian Scassellati is an Associate Professor of Computer Science at Yale University. His research focuses on building embodied computational models of human social behavior, especially the developmental progression of early social skills. Using computational modeling and socially interactive robots, his research evaluates models of how infants acquire social skills and assists in the diagnosis and quantification of disorders of social development (such as autism). His other interests include humanoid robots, human-robot interaction, artificial intelligence, machine perception, and social learning. Dr. Scassellati received his Ph.D. in Computer Science from the Massachusetts Institute of Technology in 2001. Dr. Scassellati's research in social robotics and assistive robotics has been recognized within the robotics community, the cognitive science community, and the broader scientific community. He was named an Alfred P. Sloan Fellow in 2007 and received an NSF CAREER award in 2003. His work has been awarded five best-paper awards. He was the chairman of the IEEE Autonomous Mental Development Technical Committee from 2006 to 2007, the program chair of the IEEE International Conference on Development and Learning (ICDL) in both 2007 and 2008, and the program chair for the IEEE/ACM International Conference on Human-Robot Interaction (HRI) in 2009.

Interactive Robots and Systems

Professor François Michaud
Department of Electrical Engineering and Computer Engineering
Université de Sherbrooke

Wednesday, March 20th, 2013

Abstract: Mobile robotics is one of the best examples of systems engineering: it requires the integration of sensors, actuators, energy sources, embedded computing, and decision algorithms in a common structure working in the real world. Only technologies and methodologies that work within the constraints of such integration can be useful, and so integration directly influences the scientific considerations associated with the intelligent behavior of such systems. It is therefore important to address these challenges by developing innovative solutions and validating them in real-world field experiences. This presentation gives an overview of interactive robots and systems developed at IntRoLab, Université de Sherbrooke, ranging from compliant actuators, direct physical interfaces, artificial audition, augmented reality telepresence interfaces, and vision-based SLAM to natural human-robot interaction, telerehabilitation, and telehealth applications.

François Michaud is a Professor in the Department of Electrical Engineering and Computer Engineering of the Université de Sherbrooke. He is the Director of IntRoLab, a research laboratory on mobile robotics and intelligent systems working on mechatronics and developing AI methodologies for the design of intelligent autonomous systems that can assist humans in everyday uses. His research interests are architectural methodologies for intelligent decision-making, autonomous mobile robots, social robotics, robot learning, and intelligent systems. He held the Canada Research Chair in Autonomous Mobile Robots and Intelligent Systems from 2001 to 2011. He is also the Director of the Interdisciplinary Institute for Technological Innovation (3IT), an interdisciplinary research center working on important applications ranging from the design to the exploitation of information technology (integrating devices, telecommunications, and processing towards applications). In addition, he was involved in a major engineering educational reform based on problem-based and project-based learning, mainly by developing a mobile robotic platform for introducing EECE and design to freshman students, a robotic module, and senior design project activities. He received his bachelor's degree (1992), Master's degree (1993), and Ph.D. degree (1996) in Electrical Engineering from the Université de Sherbrooke. He then spent one year as a postdoctoral researcher at the Interaction Lab (Brandeis University, USA), before returning to Sherbrooke.

Coordinated Autonomy for Mobile Fulfillment: The Kiva Systems Solution

Andrew Tinka
Research Scientist
Kiva Systems

Tuesday, March 12th, 2013

Abstract: Kiva Systems develops complete mobile robotics systems for material handling automation in warehouses and fulfillment centers. A fleet of mobile drive units moves modular shelving pods around the warehouse floor, allowing human operators to stand in place while the inventory they need comes to them. The Kiva solution offers improved productivity, quality, robustness, and flexibility over incumbent technologies. Customers using Kiva Systems' technology include Staples, Walgreens, and The Gap; more recently, Kiva Systems was acquired by Amazon. Andrew Tinka is a member of Kiva's Research and Advanced Development group. His presentation will give an overview of the Kiva solution, particularly from the perspective of the research group. The core planning challenges will be described as a set of coupled resource allocation problems. The role of research in a growing, disruptive company will be discussed.  
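As a toy illustration of one such resource-allocation subproblem, consider assigning drive units to pods at minimum total travel cost, which is solvable with the Hungarian algorithm; the cost numbers are invented, and the production planners are of course far richer than this sketch.

```python
# Toy assignment: which drive unit should fetch which pod? Solved optimally
# with the Hungarian algorithm from SciPy. Costs are invented placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

# travel_cost[i][j]: cost for drive unit i to fetch pod j (e.g., path length)
travel_cost = np.array([[4.0, 1.0, 3.0],
                        [2.0, 0.5, 5.0],
                        [3.0, 2.0, 2.0]])

units, pods = linear_sum_assignment(travel_cost)  # optimal one-to-one matching
for u, p in zip(units, pods):
    print(f"drive unit {u} -> pod {p} (cost {travel_cost[u, p]})")
```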

Andrew Tinka received his Ph.D. in 2012 in Electrical Engineering and Computer Sciences at U.C. Berkeley. His doctoral research focused on the development of a fleet of mobile floating robots for environmental sensing applications. His previous engineering experience includes building UAV fleets at UC Berkeley and developing embedded control systems at Powis Parker, Inc. He is currently a Research Scientist at Kiva Systems. His research interests include multivehicle control, decentralized planning, and robotics for unstructured environments.  

Development of the MEMS Tactile Sensor for Haptic Interface in our Life

Haruo Noma
Senior Researcher
Advanced Telecommunications Research Institute, Kyoto, Japan

Tuesday, February 26th, 2013

Abstract: We have fine mechanical sensing units in our skin. We can sense the material properties of objects by touching them, and we can put a key into a lock without watching carefully. Without tactile sensation, it is very difficult to handle an object dexterously. Most robots developed until now do not have human-like tactile sensing on their hands or bodies. What is human-like tactile sensation? I think our tactile sensing consists of fine, micro-scale mechanical sensing units and neural information processing. In this presentation, I will introduce our MEMS macro tactile sensor and some applications using the sensor from our research and development. We aim to place the sensor in many locations in our daily life in the future.

Haruo Noma was a Senior Researcher at the Advanced Telecommunications Research Institute (ATR) in Kyoto, Japan for the past 19 years. He recently accepted a position as Professor of Computer Science at Ritsumeikan University in Shiga, Japan. His research focuses on virtual reality, especially haptic (touch) feedback, robotics, tactile sensing, wearable sensors, sensor networks, and applications using these technologies. He is a member of IEEE and a Senior Member of ACM.

Robots Getting Better, Every Day and In Every Way

Dr. Dan Grollman 
Vecna Technologies

Tuesday, February 12th, 2013

Abstract: Robots are slowly becoming a commodity item. No longer restricted to heavily engineered industrial settings, they are migrating into everyday locations and interacting with everyday people. But as the amount of prior training that can be expected of the end-user decreases, it becomes vital that all aspects of the robot be easier to operate and to adapt to new and unanticipated environments and uses. In this talk I will review some of my work in robot learning from demonstration, which aims at letting users implicitly program a robot to perform a desired task, even if they cannot explicitly describe the task, or even perform it perfectly themselves. Placed in a larger context, I will discuss ideas and needed research for incorporating these techniques into commercial robots, such that they can continually and autonomously improve themselves and their working relationships with humans during operation.

Dan Grollman is a Robot Doctor at Vecna Technologies. He completed his Ph.D. in Computer Science at Brown University in 2009, and did his postdoctoral work at the Ecole Polytechnique Federale de Lausanne until 2011. Recognized as a "Young Pioneer" in Human-Robot Interaction in 2007/2008, Dan's work in learning from failure won him and his advisor Aude Billard a Best Cognitive Robotics Paper award at ICRA 2011. At Vecna, Dan leads the Robotics Usability group, focused on improving the ease with which humans and robots can work together.

Perception R&D for Unmanned Systems at iRobot

Dr. Christopher Geyer
Senior Principal Research Scientist
iRobot Corporation

Tuesday, January 15th, 2013

Abstract: The ability to effectively perceive the world is key to virtually all robots. Perception enables autonomy, and can be used to make interaction between robots and people more natural. For some time, however, a lack of effective and embeddable perception algorithms has been an obstacle to autonomous, people-friendly robots. Recently, though, there has been a convergence of increased computational performance and more reliable algorithms that is enabling "smarter" robotic behaviors and more natural modes of interaction. In this presentation, I will talk about research at iRobot in computer vision and perception and its applications to problems in unmanned systems. I will discuss joint work with UC Berkeley and Brown University to develop a real-time object recognition capability, as well as work in activity recognition with Colorado State University.

Dr. Geyer is a Senior Principal Research Scientist at iRobot Corporation. He joined iRobot in 2008, coming from Carnegie Mellon University's Robotics Institute, where he participated in the DARPA Urban Challenge and conducted research in perception for unmanned systems. Dr. Geyer started his career in robotics in the GRASP Lab at the University of Pennsylvania, where he received his B.S.E. and Ph.D. in Computer Science in 1999 and 2002, respectively, and was a post-doc at U.C. Berkeley, where he led the development of an autonomous landing capability for an unmanned helicopter.

Why Is Dynamic Robotic Simulation So Bad, And When Will It Get Better?

Evan Drumwright
Assistant Professor, Computer Science Department
The George Washington University

Wednesday, December 5th, 2012

Abstract: Dynamic robotic simulation will eventually accelerate robotics research dramatically through effective validation, super-fast learning, and lowered barriers to research participation. However, effectively simulating robots performing tasks that entail physical interaction is currently as difficult as programming robots situated in real environments. Participants in DARPA's Robotics Challenge are in fact now experiencing significant simulation-related problems. I'll describe why the vision above has been so hard to achieve, and I'll show recent research from the GWU Positronics Lab toward meeting it.

Evan Drumwright is an Assistant Professor of Computer Science at George Washington University. He has been a visiting researcher at Honda Research Institute USA and visiting faculty at the University of Memphis. His research has been funded by the National Science Foundation and Willow Garage.

Robotics in Minimally Invasive Surgery: Planning, Training and Intervention

Dr. Andrew H. Gosline
Post Doctoral Research Fellow,
Pediatric Cardiac Bioengineering, Children's Hospital Boston
Harvard Medical School 

Tuesday, November 13th, 2012

Abstract:  Robotic technology in medical practice has experienced vast progress in recent years.  The move to minimally invasive procedures that offer faster recovery times, minimized infection risk, and smaller incisions means that clinicians interact with tissues and deliver therapies using long, slender tools that both reduce the available visual information and interfere with accurate tactile perception at the surgical site.  Robotic technology has helped clinicians overcome these limitations in training and intervention scenarios.   

In this talk he will provide examples of how straightforward technological advances can have a positive effect in medical practice.  For example, the use of eddy current brakes in conjunction with DC motors can provide a hybrid haptic interface that is ideal for surgical simulation and training of physicians.   
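One way to picture the hybrid idea, as a hedged sketch rather than Dr. Gosline's actual controller: forces that oppose the current motion are dissipative and can be rendered passively by the eddy current brake, while forces that aid motion must come from the DC motor. The splitting rule below is the simplest possible version, with invented semantics for the returned commands.

```python
# Illustrative force-splitting rule for a hybrid motor-plus-brake haptic
# interface. A brake can only oppose motion, so only dissipative forces are
# routed to it; the rest must be produced actively by the motor.

def hybrid_haptic_command(desired_force, velocity):
    if desired_force * velocity < 0:   # force opposes motion: dissipative
        brake = abs(desired_force)     # render passively with the brake
        motor = 0.0
    else:                              # force aids motion: must be active
        brake = 0.0
        motor = desired_force
    return motor, brake

# Example: a force opposing positive velocity is handled by the brake alone.
motor, brake = hybrid_haptic_command(desired_force=-2.0, velocity=0.1)
```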

Dr. Gosline received his B.Sc. from Queen's University in Mechanical Engineering, his M.A.Sc. from the University of British Columbia in Electrical Engineering, and his Ph.D. in Electrical Engineering from McGill University. Prior to joining Children's Hospital Boston, he was a member of Immersion Corporation's research team in Montreal, QC. His research interests include medical robotics, haptics, applied control, simulation, virtual reality, and mechatronics.

A New Class of Industrial Robot (Introduction to Baxter)

Rethink Robotics

Wednesday, November 7th, 2012

Abstract and Agenda:

  • Overview of Baxter's hardware, including sensors and Series Elastic Actuators.
  • Our use of the Robot Operating System (ROS) for the production robot and for development.
  • Perception, including vision services.
  • Overview of the UX/UI development process.
  • Business model and keeping Baxter's costs low.

Rethink Robotics was founded in 2008 by robotics pioneer Rodney Brooks. Rod was a co-founder of iRobot (Nasdaq: IRBT) and held positions there including CTO, Chairman, and board member from 1990 through 2011.

From 1984 through 2010, Rod was on the faculty of MIT as the Panasonic Professor of Robotics, and was the director of MIT CSAIL, the Computer Science and Artificial Intelligence Laboratory. While at MIT, Rod developed the behavior-based approach to robotics that underlies the robots of both iRobot and Rethink Robotics.  

Now, as Chairman and Chief Technology Officer of Rethink Robotics, Rod is devoted to his mission of creating smarter, more adaptable, low-cost robotic solutions that can help manufacturers to improve efficiency, increase productivity and reduce their need for offshoring.

 