RBE Master's Capstone Presentation Showcase

Tuesday, April 30, 2024
5:30 pm to 8:00 pm
Location: UH 500


Screen A

Presenter: Anuj Pai Raikar, Jesulona Akinyeel, and Thira Kismat Patel - Calli (5:30 - 5:55)

Project Title: Motion Planning for Dexterous In-Hand Manipulation

Abstract: In-hand manipulation may be defined as changing the grasp on an object without the need for re-grasping. While in-grasp object pose modification is a commonplace task for humans (e.g., moving a grasped pen into a writing position or repositioning a key within the hand before insertion), in-hand manipulation remains a significant challenge for robotic systems. Several approaches have been presented to address this problem, many of which rely on the redundancy of high-DOF robotic fingers and extensive object state information, often obtained by elaborate sensing systems. 
   Our approach to achieving robust in-hand manipulation relies on the ability to alter the surface friction of robot fingers. By changing the effective friction between the object and the finger, we were able to achieve controlled sliding and rotation of the object within the hand. So far, we have designed several controllers, path planners, and manipulation strategies. This work aims to further develop efficient motion planning algorithms for manipulating objects with the robotic gripper.


Presenter: Ankush Singh Bhardwaj - Calli (6:00 - 6:25)

Project Title: Dexterous Picking - Reconfiguring Underactuated Dexterous Gripper 

Abstract: When picking challenging objects, humans apply various dexterous manipulation skills to their advantage. For example, if an object is too thin, one can slide it to the edge of the table or flip it using two fingers. Similarly, if the object is close to a vertical surface, pushing the object towards that surface can make the picking operation easier. We are implementing such skills for robots.
   In this project, we aim to reconfigure the grasp pose of the three-finger (Model O) underactuated dexterous gripper to increase the robustness of an ongoing grasp. This is done by leveraging grasp primitives to obtain a desired pose of the gripper. The gripper is modeled as a parallel manipulator, and the resultant Jacobian developed from its kinematic model maps the error from the gripper's underactuated joints to the end effector, which is controlled by a serial manipulator.
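As a concrete illustration of the kind of Jacobian error mapping described above, the sketch below propagates a joint-space error estimate to a Cartesian end-effector error for a toy planar two-link finger. The geometry, link lengths, and error values are invented for illustration and are not the project's actual gripper model.

```python
import numpy as np

# Toy first-order mapping of joint-space error to end-effector error via a
# Jacobian, in the spirit of the parallel-manipulator model described above.
# The planar 2-link geometry and all numbers are illustrative only.

def jacobian_2link(q, l1=0.06, l2=0.04):
    """Jacobian of a planar 2-link finger (link lengths in meters)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

q = np.array([0.4, 0.7])             # current underactuated joint angles (rad)
dq_err = np.array([0.02, -0.01])     # estimated joint-space error (rad)
dx_err = jacobian_2link(q) @ dq_err  # resulting Cartesian error at the fingertip
print(dx_err)                        # error a serial arm could compensate for
```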
 

Presenter: Shane Toma - Calli (6:30 - 6:55)

Project Title: Robotic Torch Cutting Simulation for Metal Recycling 

Abstract: The use of oxy-fuel torch cutting for scrap metal recycling spans many industries and applications. Given the dangerous nature of this work, automation of the cutting process using a robotic system is highly desirable. This research presents a novel simulation for a robotic torch cutting system for metal recycling. Simulation is beneficial for this application due to the difficult nature of conducting physical experiments. This work allows for more rapid testing, data collection and controller design, which can facilitate the implementation of the physical system. The simulation consists of a heat transfer analysis through the cutting surface to produce a temperature gradient, which is used to control the velocity of the cutting torch along a predefined path. With known temperature values along the path as a function of the proximity of the torch and time exposed to the flame, the velocity controller can be tuned to produce an optimal cut. The results demonstrate the ability of the controller to produce successful cuts through metal plates of varying thickness, and the simulation provides key feedback on cutting statistics to promote further optimization. 
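To make the temperature-driven velocity control concrete, here is a minimal sketch of a proportional controller that slows the torch when the local cut temperature is below a cutting threshold and speeds it up otherwise. The lumped heating/cooling model, threshold, and gains are all illustrative assumptions, not values from the simulation described above.

```python
import numpy as np

# Minimal sketch of a temperature-feedback velocity controller along a cut
# path, assuming (as the abstract suggests) that path temperature is known
# as a function of torch proximity and dwell time. All constants are toys.

T_MELT = 1500.0              # target cutting temperature at the torch (C)
KP = 2e-4                    # gain mapping temperature error to speed (m/s per C)
V_MIN, V_MAX = 0.001, 0.02   # actuator speed limits (m/s)

def torch_speed(T_local):
    """Slow down when below cutting temperature, speed up when above it."""
    v = V_MIN + KP * (T_local - T_MELT)
    return float(np.clip(v, V_MIN, V_MAX))

# Toy first-order thermal response: dwelling longer heats the cut point more.
T, x, dt = 25.0, 0.0, 0.01
for _ in range(2000):
    v = torch_speed(T)
    T += dt * (4000.0 * (V_MAX - v) / V_MAX - 0.5 * (T - 25.0))  # heat minus loss
    x += v * dt
print(f"final temp {T:.0f} C, distance cut {x * 1000:.1f} mm")
```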
 

Presenter: Uthiralakshmi Sivaraman - Calli (7:00 - 7:25)

Project Title: Optimizing Active Vision-Based Policy for Robotic Grasping through 3D Point Cloud Feature Extraction: A Comparative Study

Abstract: The majority of robotic grasping algorithms rely on a single image of the object, which makes the algorithm's performance highly dependent on the camera's viewpoint and significantly limits its effectiveness. Active vision enables robots to intelligently adjust their sensors, facilitating quick and reliable interactions with unknown objects in uncertain conditions. Our approach focuses on identifying the most efficient subsequent viewpoint for data collection in vision-based grasping. To optimize a vision-based policy for robotic grasping, choosing the most effective 3D point cloud feature extraction technique is crucial. These techniques convert raw 3D spatial data into structured forms that learning algorithms can use to make well-informed decisions about object manipulation. Our research compares several global feature representations to find the most suitable candidate for developing an active vision policy using a DAgger-like imitation learning algorithm.
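The dataset-aggregation pattern behind DAgger-style imitation learning can be sketched in a few lines. In the toy below, the point-cloud feature extractor, the expert labeler, and the state sampling are all stand-ins (real DAgger visits states under the current policy's rollouts); only the aggregate-and-refit loop is the point.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Schematic DAgger-style loop for a next-best-view policy, assuming an
# expert that labels the best next viewpoint for a feature vector. The
# feature extractor, expert, and random state sampling are stand-ins.

rng = np.random.default_rng(0)
N_VIEWS = 8                            # discrete candidate viewpoints

def extract_features(state):           # stand-in for a global 3D descriptor
    return state

def expert_best_view(state):           # stand-in oracle labeling
    return int(np.argmax(state[:N_VIEWS]))

dataset_X, dataset_y = [], []
policy = KNeighborsClassifier(n_neighbors=1)

for it in range(5):                    # DAgger iterations
    for _ in range(50):                # visit states, query the expert on each
        state = rng.random(N_VIEWS)    # (real DAgger rolls out the policy here)
        dataset_X.append(extract_features(state))
        dataset_y.append(expert_best_view(state))   # aggregate expert labels
    policy.fit(np.array(dataset_X), np.array(dataset_y))  # refit on all data

test = rng.random(N_VIEWS)
print("policy picks view", policy.predict(test[None])[0],
      "| expert picks view", expert_best_view(test))
```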

 

Presenter: Soham Aserkar, Sri Lakshmi Hasitha Bachimanchi, and Dheeraj Bhogisetty - Nemitz (7:30 - 7:55)

Project Title: ChatterArm: LLM Augmented Vision-based Grasping

Abstract: With an aging global population, the need for scalable and effective elderly care solutions is becoming increasingly urgent. This project addresses the challenges of providing support to the elderly in everyday tasks such as fetching objects. The approach combines a multi-modal large language model (LLM) with a vision-based grasping technique and a robot manipulator to create an interactive robot. Our system allows for interaction through natural language text input, enabling the robot to recognize and manipulate objects with variations in shape and color. Results from simulation tests show that the manipulator can successfully execute tasks based on user commands, demonstrating its potential to operate effectively in real-world scenarios. The impact of this technology extends beyond individual assistance, with potential applications in inventory management, order fulfillment, and waste sorting.
 


Screen C

Presenter: Jinesh Rajasekhar and Loahit Krisnamurth - Zhang, Z. (5:30 - 5:55)

Project Title: Efficient Fusion methods for Depth Completion in Autonomous Driving

Abstract: Depth completion involves recovering a dense depth map from a sparse map and an RGB image. Recent approaches focus on utilizing color images as guidance to recover depth at invalid pixels. Fusion between sparse LiDAR depth and RGB images remains a challenging task due to the difference in modalities. In this work, we investigate efficient feature fusion strategies to combine sparse LiDAR features and image features, and we propose a Transformer-based feature fusion method. Our fusion method outperforms the PENet baseline model.
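One common transformer-based fusion pattern is cross-attention, where tokens of one modality query the other. The sketch below is a generic PyTorch example of that pattern, with invented shapes and dimensions; it is not the architecture proposed in this work.

```python
import torch
import torch.nn as nn

# Minimal sketch of transformer-based fusion of sparse-LiDAR and RGB
# feature tokens via cross-attention. Shapes and dimensions are
# illustrative; the project's actual architecture is not shown here.

class CrossModalFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, depth_tokens, rgb_tokens):
        # Depth tokens query the image tokens for complementary context.
        fused, _ = self.attn(depth_tokens, rgb_tokens, rgb_tokens)
        return self.norm(depth_tokens + fused)   # residual connection

depth = torch.randn(2, 128, 64)   # (batch, sparse depth tokens, dim)
rgb = torch.randn(2, 256, 64)     # (batch, image patch tokens, dim)
print(CrossModalFusion()(depth, rgb).shape)   # torch.Size([2, 128, 64])
```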

Presenter: Shounak Sheshadri Naik - Zhang, Z. (6:00 - 6:25)

Project Title: COLMAP + NeRF Transformer

Abstract: NeRFs try to learn an implicit representation of the scene in question. COLMAP, a structure-from-motion (SfM) tool, allows us to construct a 3D point cloud of the scene. Our hypothesis is that forwarding this geometric prior to the NeRF during training will help improve training time and reduce the number of input views required for a good rendering of the scene. Furthermore, we hypothesize that using transformers and the attention mechanism can help generalize NeRF to at least similar objects.
 

Presenter: Swapneel Wagholikar - Zhang, Z. (6:30 - 6:55)

Project Title: Enhancing PointAttN: Attention Network for 3D Point Cloud Completion

Abstract: Perception of the environment in autonomous systems can be performed using a variety of sensors, such as LiDAR, laser, or RGB-D scanners, which provide 3D point cloud data. The challenge of point cloud completion, involving the generation and estimation of missing points within partial point clouds, holds a significant position in the realm of 3D computer vision applications. This directed research focuses on the implementation of PointAttN, a transformer network for point cloud completion.
   In addition to implementing the baseline PointAttN architecture, I experimented with several modifications to the network: hyperspherical embeddings, skip connections, additional encoder layers, cross-layer information integration, and a specialized loss function.


Presenter: Girivaasan Chandrasekaran - Zhang, Z. (7:00 - 7:25)

Project Title: Handheld Assistive Manipulator for Ultrasound Guided Lumbar Puncture Procedure

Abstract: This project aims to design and develop a manipulator that can be attached to an ultrasound probe to assist surgeons in performing the lumbar puncture procedure. The traditional method of performing the lumbar puncture procedure remains ambiguous with respect to the process of needle insertion. The developed manipulator provides better positioning of the needle with ultrasound guidance. The position of the needle can be varied with linear actuator feedback and secured within an acceptable angular range that was analyzed from a phantom (Ultrasound Lumbar Puncture Simulator - GTSimulators, Davie, FL, USA). The angular range was used as a reference to avoid needle-tip collisions with the spinous process. Force sensing was integrated into the manipulator to ensure localization of the needle tip in the subarachnoid space.
 

Presenter: Zhuofan Lu, Jason Rockmael, Matthew Sweeney, and Dang Tran - Nemitz (7:30 - 7:55)


Screen D

Presenter: Lalith Athithya Navaneetha Krishnan - Zhang, Z. (5:30 - 5:55)

Project Title: Enhancing Occupancy-MAE for Large-scale LiDAR Point Clouds

Abstract: Unlike existing methods which rely on manual annotations, Voxel-MAE leverages a masked voxel autoencoder network to learn representations from point clouds without supervision. By converting point clouds into voxel representations and discerning whether each voxel contains relevant information, our approach enhances the network's understanding of object shapes, thereby improving performance in tasks like 3D object detection. Even with a high masking ratio, Voxel-MAE effectively captures the spatial redundancy inherent in large-scale point clouds. Furthermore, we demonstrate the versatility of Voxel-MAE through successful application in unsupervised domain adaptation tasks, showcasing its potential for generalization.
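The high-ratio masking step at the core of a masked voxel autoencoder is easy to illustrate: occupied voxels are hidden from the encoder, which must reconstruct occupancy from the visible remainder. The grid size, sparsity, and masking ratio below are arbitrary illustrations.

```python
import numpy as np

# Toy illustration of high-ratio voxel masking: occupied voxels are hidden
# from the encoder, which must reconstruct occupancy from what remains.

rng = np.random.default_rng(0)
grid = rng.random((32, 32, 8)) > 0.9             # sparse occupancy grid
occupied = np.argwhere(grid)                      # indices of occupied voxels

mask_ratio = 0.75                                 # high masking ratio
n_masked = int(len(occupied) * mask_ratio)
masked_idx = rng.choice(len(occupied), n_masked, replace=False)

visible = grid.copy()
visible[tuple(occupied[masked_idx].T)] = False    # hide the masked voxels
print(f"occupied: {len(occupied)}, visible to encoder: {visible.sum()}")
```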

Presenter: Hoang Nguyen and Maanav Iyengar - Li (6:00 - 6:25)

Project Title: Exploring Augmented Reality for Human-Robot Interaction in Nursing Assistance

Abstract: This Directed Research investigates the effectiveness of Augmented Reality (AR) in facilitating Human-Robot Interaction (HRI) during nursing tasks within the NurSim hospital simulation. The study evaluates the impact of AR on Goal-Directed Navigation, Shared Workspace Manipulation, and Mobile Manipulation for IV Pole Delivery scenarios and seeks to answer the following research questions:
   1) How can AR improve the expressiveness of robot autonomy, such that humans are well-informed about ongoing tasks? 
   2) How can AR effectively mitigate interference between humans and robots in shared workspaces? 
   3) What AR visual cues can effectively communicate between human and robot autonomy in mobile manipulation tasks?
      By showcasing NurSim's capabilities as an open-source training platform for humanoid robots and contributing to the advancement of AR in nursing assistance, the research aims to uncover insights into effective AR visual cues for improving task performance and collaboration between humans and robots at various levels of autonomy.


Presenter: Keshubh Sharma - Li (6:30 - 6:55)

Project Title: Optimal External Viewpoint Selection

Abstract: The project showcases a standardized approach to finding the optimal viewpoint in the workspace to improve teleoperation of a robot. It presents a methodology for sampling candidate viewpoints and a formula for selecting the best ones for a specific task category. It also includes a survey that collects empirical data to confirm that our approach yields the optimal viewpoint.
 

Presenter: Ntmitrii Gyrichidi - Li (7:00 - 7:25)

Project Title: Solving the stability problem for the IONA robot and developing the model of the chemical tank for the Tele-Robotic System for Powder Waste Disposal project

Abstract: The directed research consisted of two phases. In the first phase, the problem of ensuring tip-over stability of the nursing assistant robot IONA during motion was solved. The results of this phase were a tip-over stability calculation using the Zero Moment Point (ZMP) method, as well as a control system that ensures the robot's stability. In the second phase, a model of a chemical tank with a manipulator inside and a special end effector was developed and assembled for the project "Tele-Robotic System for Powder Waste Disposal". The result of this phase was the completed model, installed in the HRI lab, which enabled successful real-world experiments with the control system and teleoperation method.
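For reference, the planar Zero Moment Point criterion behind such a tip-over analysis can be sketched in a few lines: under flat ground, the ZMP of a lumped center of mass must stay inside the support polygon. The geometry and accelerations below are invented for illustration, not IONA's parameters.

```python
import numpy as np

# Minimal planar ZMP tip-over check for a lumped center of mass.
# Masses, heights, and the support extent are illustrative only.

G = 9.81

def zmp_x(com_x, com_z, com_xdd):
    """Planar ZMP on flat ground: x_zmp = x_com - z_com * x_ddot / g."""
    return com_x - com_z * com_xdd / G

support = (-0.25, 0.25)            # wheelbase extent along x (m)
for xdd in (0.5, 2.0, 6.0):        # candidate forward accelerations (m/s^2)
    z = zmp_x(com_x=0.0, com_z=0.9, com_xdd=xdd)
    stable = support[0] <= z <= support[1]
    print(f"accel {xdd:4.1f} m/s^2 -> ZMP {z:+.3f} m, stable: {stable}")
```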


Presenter: Omkar Bharambe and Dushyant Astik Patil - Nemitz (7:30 - 7:55)

Project Title: Grab-n-Go Bot: Human-Inspired Dynamic Manipulation

Abstract: Mobile manipulators play a crucial role in warehouse and industrial operations, transporting parts efficiently between machines and workstations. However, traditional stop-and-pick methods can limit their productivity. Picking up objects while the robot is in motion can significantly reduce task completion time and enhance operational efficiency. Our project, "Grab-n-Go Bot: Human-Inspired Dynamic Manipulation," addresses the inefficiency of traditional stop-and-pick methods in mobile manipulator robotics. Inspired by human-like object retrieval, we designed a robot capable of picking objects while in motion. Our approach focuses on manipulability-based path planning algorithms to optimize the object grasping area and minimize picking time. Our simulations show that our method outperforms static object picking, offering time savings and efficiency gains. This innovation has wide applications in warehouse management, crop harvesting, package delivery, and search and rescue robotics. Our project contributes a more efficient and dynamic object retrieval technique with broad industry implications.


Screen E

Presenter: Ankit Talele - Zhang, H. (5:30 - 5:55)

Project Title: Hand Gesture Classification based on Forearm Ultrasound Video Snippets Using 3D Convolutional Neural Networks

Abstract: Ultrasound-based hand movement estimation is a crucial area of research with applications in human-machine interaction. Forearm ultrasound offers detailed information about muscle morphology changes during hand movement, which can be used to estimate hand gestures. Previous work has focused on analyzing 2-dimensional (2D) ultrasound image frames using techniques such as convolutional neural networks (CNNs). However, such 2D techniques do not capture temporal features across chunks of ultrasound data corresponding to continuous hand movements. This study uses 3D CNN-based techniques to capture spatiotemporal patterns within ultrasound video segments for gesture recognition. By leveraging different types of convolutions that integrate both spatial and temporal information, our methodology aims to enhance the accuracy and robustness of gesture classification, and we compare their performance against our proposed method to assess its effectiveness.
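A minimal skeleton of a 3D CNN over video snippets, of the general kind the abstract describes, is shown below in PyTorch. The input layout is (batch, channels, frames, height, width); layer sizes, snippet length, and the number of gesture classes are placeholders rather than the study's architecture.

```python
import torch
import torch.nn as nn

# Skeleton of a small 3D CNN over ultrasound video snippets. All layer
# sizes and the class count are placeholders, not the study's model.

class Gesture3DCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),      # collapse spatiotemporal dims
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):
        return self.classifier(self.features(clip).flatten(1))

clip = torch.randn(4, 1, 16, 64, 64)   # 4 snippets of 16 frames each
print(Gesture3DCNN()(clip).shape)       # torch.Size([4, 10])
```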


Presenter: Shiva Surya Lolla - Zhang, H. (6:00 - 6:25)

Project Title: Enhancing Robotic Ultrasound Imaging with ElasticFusion SLAM and Pose Graph Optimization

Abstract: This project advances the localization and 3D reconstruction capabilities in robotic ultrasound (US) imaging by integrating the ElasticFusion SLAM framework with a robotic US system equipped with a stereo RGB-D camera setup. Tailored for RGB-D cameras, ElasticFusion excels in low-texture environments typical of human skin and phantom materials used in US imaging and effectively manages the complex, short loopy trajectories of the probe. To refine pose accuracy, Pose Graph Optimization was applied, integrating outputs from the dual-camera setup. The system's efficacy was validated using the EVO evaluation package against ground truth trajectories from a Franka Emika Panda robot, employing metrics such as Absolute Pose Error (APE) and Relative Pose Error (RPE). Results demonstrated a notable reduction in trajectory errors, with RPE improving from 6 mm to as low as 1 mm with Sim(3) alignment, and a 50% reduction in ground truth data needs. Additionally, a comparative analysis with ORB SLAM 3, a feature-based SLAM method, showed that ElasticFusion provided superior performance in this application context. However, trajectory accuracy experienced some deterioration when using Pose Graph Optimization. These findings underline the advantages of ElasticFusion in enhancing diagnostic imaging by significantly improving trajectory precision and reducing dependency on extensive ground truth data.


Presenter: Shrishailya Chavan - Zhang, H. (6:30 - 6:55)

Project Title: Deep learning-based tumor boundary detection using ultrasound images for photoacoustic prostate imaging

Abstract: This project introduces a novel approach in interventional medicine through the development of a deep learning-based system for tumor boundary detection using ultrasound imaging, specifically for photoacoustic prostate imaging. The primary objective is to harness the capabilities of ultrasound imaging coupled with advanced digital image processing to generate significant research outcomes. By applying deep learning techniques, the project endeavors to segment ultrasound images of tumor samples accurately. This segmentation process facilitates the reconstruction of a precise 3D model and mask of the tumor, derived from a series of segmented ultrasound images. The relevance of this innovation lies in its ability to reconstruct a 3D photoacoustic image of the tumor, which is typically challenged by significant background noise. By implementing a 3D mask derived from segmented ultrasound images, our system can effectively mitigate background noise, thereby enhancing the clarity and quantification of the 3D tumor image. This advancement promises substantial improvements in the precision and effectiveness of tumor imaging and could potentially transform diagnostic and treatment methodologies in prostate cancer care.

Presenter: Gowri Shankar Sai Manikandan - Chamzas (7:00 - 7:25)

Project Title: LLM-Driven Complex Task Planning in Robotic Manipulation

Abstract:  Amidst the rise of transformer technologies, leading corporations such as OpenAI and NVIDIA are making significant investments towards embedding these innovations in robotics, aiming for Artificial General Intelligence (AGI). This project showcases the integration of GPT-4 with a UR10 robotic arm and RH P12 RN gripper, coordinated through the vision capabilities of a Realsense D435 camera in a simulated environment. The system translates complex natural language instructions into precise pick-and-place actions, effectively mimicking a Language-Action model. Despite achieving notable success in task execution, the project identifies key challenges in spatial understanding and LLM response accuracy. The findings underscore the potential for substantial advancements in robotic intelligence, highlighting the necessity for further research into integrating Vision-Language-Action models to overcome current limitations and move closer to the realization of AGI in robotic systems.
 

Presenter: Evan Arenburg, Anshul Jindal, and Tanish Mishra - Nemitz (7:30 - 7:55)

Project Title: TidyUp: Simulating an Autonomous Mobile Robot for Home Organization

Abstract: This capstone project addresses the risk and inefficiency of scattered toys and untidiness in homes by simulating three key functionalities of an autonomous robot: object detection, robotic arm planning, and navigation. The object detection simulation identifies toys in the environment, while the robotic arm planning simulation demonstrates precise pick-and-place capabilities using a Robotic Arm. The navigation simulation exhibits a differential drive robot's ability to maneuver around a cluttered room. These independent simulations collectively showcase the robot's potential to enhance home safety and free up parental time, with broader applications in fields like assistive robotics and emergency response, highlighting the transformative impact of autonomous systems in daily tasks.


Screen G

Presenter: Hitanshu Shah - Barton (5:30 - 5:55)

Project Title: Autotuning Musical String Instrument

Abstract: This project focuses on evaluating various tuning mechanisms and validating the accuracy of an optical frequency detection method for a musical instrument tuning system. By integrating a PID control system with an innovative frequency-to-potentiometer mapping algorithm, it aims to automate the tuning process, enhancing precision and stability.
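As a toy illustration of the PID-driven tuning idea, the sketch below nudges string tension toward a target pitch. The linear tension-to-frequency "plant" stands in for the real string and the optical frequency detector, and the gains and actuator scaling are invented; the actual system's frequency-to-potentiometer mapping is not reproduced here.

```python
# Toy PID loop nudging string tension toward a target pitch. The fake
# linear plant below stands in for the string and the optical detector.

TARGET_HZ = 440.0                     # A4
KP, KI, KD = 0.8, 0.1, 0.05           # illustrative PID gains

def detect_frequency(tension):        # stand-in for optical detection
    return 200.0 + 25.0 * tension     # fake linear string response

tension, integral, prev_err = 5.0, 0.0, 0.0
for step in range(50):
    err = TARGET_HZ - detect_frequency(tension)
    integral += err
    derivative = err - prev_err
    tension += 0.01 * (KP * err + KI * integral + KD * derivative)
    prev_err = err

print(f"settled near {detect_frequency(tension):.1f} Hz")
```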

 
Presenter: Martin Bleakley - Zheng (6:00 - 6:25)

Project Title: Teleoperational Ultrasound

Abstract: The primary goal of this project is to develop a teleoperational ultrasound system, allowing healthcare professionals to control and conduct ultrasound examinations from remote locations. The system is a portable six degree-of-freedom manipulator which can be remote controlled to localize, translate, and rotate an ultrasound probe within its workspace. The project is a continuation of the work done by students from the previous year.


Presenter: Michael Beskid - Pinciroli (6:30 - 6:55)

Project Title: Analysis of Minimalistic Collective Perception Algorithm Performance with Inaccurate Estimation of Sensor Quality

Abstract: Collective perception is an essential yet challenging problem for robot swarms which is further compounded by sensing inaccuracies. A novel approach to solving this problem for minimalistic robots equipped with flawed sensors proposes a probabilistically optimal algorithm based on a decentralized Kalman filter. This approach was demonstrated to achieve reasonable performance with highly inaccurate sensors while assuming that the sensor accuracy is known perfectly. The goal of this research is to extend this work to consider the case where robots have inaccurate estimates of their own sensor accuracies. We collect and analyze data through dynamic simulations to characterize the performance of this algorithm across a wide range of parameters, with the addition of inaccurate assumed sensor quality estimates. We find that the algorithm is reasonably robust to small inaccuracies in sensor quality estimates and provide several recommendations to govern the application of this technique to practical settings.
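The effect under study, a Kalman-style estimator that is told an assumed sensor accuracy differing from the true one, can be illustrated with a scalar toy. Below, a single robot estimates an environmental fill ratio from noisy binary readings; the observation model and all numbers are illustrative stand-ins, not the paper's algorithm.

```python
import numpy as np

# Toy scalar Kalman-filter estimate of a fill ratio from noisy binary
# observations, showing the bias introduced when the *assumed* sensor
# accuracy differs from the true one. All values are illustrative.

rng = np.random.default_rng(1)
TRUE_FILL = 0.7      # true fraction of, say, black tiles
TRUE_ACC = 0.75      # sensor reports the correct color 75% of the time

def observe():
    truth = rng.random() < TRUE_FILL
    return truth if rng.random() < TRUE_ACC else not truth

def estimate(assumed_acc, n_obs=5000):
    x, P = 0.5, 1.0                    # initial estimate and variance
    b = 2 * assumed_acc - 1            # E[z] = b*x + (1 - assumed_acc)
    for _ in range(n_obs):
        z = float(observe())
        H, R = b, 0.25                 # model slope, Bernoulli variance bound
        y = z - (H * x + (1 - assumed_acc))   # innovation
        S = H * P * H + R
        K = P * H / S
        x += K * y
        P *= (1 - K * H)
    return x

for assumed in (0.65, 0.75, 0.85):
    print(f"assumed acc {assumed:.2f} -> fill estimate {estimate(assumed):.3f}")
```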
 

Presenter: Soumitra Pandit - Faal (7:00 - 7:25)

Project Title: Investigating optimal control methods for point-to-point traversal of a hanging n-link robot

Abstract: This directed research built and investigated dynamics and optimal control stacks for solving nonlinear constrained optimization problems using representative tasks.
 

Presenter: Emily Austin and Jeffrey Davis - Nemitz (7:30 - 7:55)

Project Title: 3D-Printed Amphibious Robot

Abstract: This project demonstrates the development and application of a new approach to search and rescue operations through the use of iterative, 3D-printed robot design. The main focus was to address the different challenges presented by diverse and complex environments inaccessible to human responders. Thus, the project explores a variety of 3D-printing techniques to fabricate a robot for specific environments. Through the use of 3D-printing, this robot can be heavily customized and quickly manufactured, allowing it to be optimized for various environmental conditions.
   Through this process we designed a robotic system able to traverse a combination of terrestrial and aquatic environments, and tested the system's performance on a variety of metrics including efficiency, durability, and ruggedness, as well as the time to manufacture. This research shows the efficacy of using 3D-printing techniques and materials to address a range of hazardous environments that a single standard system cannot address holistically.


Screen H

Presenter: Rutwik Sudhakar Kulkarni - Sanket (5:30 - 5:55)

Project Title: Vision Goggles: A Sim2Real2Sim Framework for Testing UAVs

Abstract: This project presents a novel system, comparable to equipping a UAV with virtual reality (VR) goggles, designed to enhance autonomous drone navigation in complex environments. The system simulates a UAV's vision within a motion capture setup, employing a pose-dependent RGBD camera rendering technique using 3D Gaussian Splatting. This framework accurately mimics a UAV's perspective in various rendered scenarios, thus providing a realistic and dynamic visual environment. The proposed architecture provides image updates at 70 Hz, making it suitable for high-speed operations. We further demonstrate the potential of our framework by deploying a depth-based obstacle avoidance algorithm on a custom-built UAV operating in the testbed described above. 


Presenter: Shaurya Parashar - Sanket (6:00 - 6:25)

Project Title: BatSense: Depth from Sound

Abstract: In the realm of aerial robotics, precise navigation is paramount. Existing sensors, such as cameras, lidars, and TOF sensors, face limitations in various environmental conditions. To address these challenges, we propose an innovative acoustic-based depth mapping method using deep learning and a phased microphone array. Our approach demonstrates promising results in simulation scenarios, offering potential for nocturnal and 360-degree depth perception. This method has implications for enhancing the navigational capabilities of aerial robots, bridging current gaps in perception technology.
 

Presenter: Aditya Kailash Nisal - Xiao (6:30 - 6:55)

Project Title: Semantic Image Retrieval

Abstract: The proliferation of digital imagery in various domains demands robust image retrieval systems capable of understanding and interpreting the content of images beyond conventional metadata tagging. This project introduces an advanced image retrieval system that leverages deep learning techniques for object-level feature extraction and graph-based similarity measures to enhance retrieval accuracy and relevance. Our approach utilizes segmentation models to isolate and identify distinct objects within images, enabling the extraction of rich, descriptive features using the DINO (DIstillation with NO labels) vision transformer model. Each image is represented as a graph where nodes correspond to objects and are annotated with feature vectors derived from the vision transformer. To assess similarity between images, we employ graph neural networks that compare node embeddings, enabling a nuanced understanding of image content through structural and feature-based comparisons. This method allows for precise retrieval by focusing on the relationships and attributes of individual objects within images, setting it apart from traditional methods that rely on global image features. Initial evaluations demonstrate that our system significantly outperforms baseline methods in terms of accuracy and relevance, particularly in complex image datasets with diverse content. Our research contributes to the field by introducing a graph-based approach that harnesses the power of local features and relational data for improved image retrieval, offering substantial implications for applications in digital libraries, online media databases, and beyond.
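To make the object-graph representation concrete, the toy below treats each image as a set of object feature vectors (graph nodes) and scores similarity by greedy best-match cosine similarity. This crude matcher merely stands in for the learned graph-neural-network comparison described above; vector sizes and object counts are arbitrary.

```python
import numpy as np

# Simplified stand-in for graph-based image comparison: each image is a
# set of object feature vectors (nodes), scored by greedy best-match
# cosine similarity. The project's learned GNN comparison is not shown.

rng = np.random.default_rng(0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def image_similarity(nodes_a, nodes_b):
    """Average best-match cosine similarity between object node sets."""
    return float(np.mean([max(cosine(u, v) for v in nodes_b) for u in nodes_a]))

img_a = [rng.random(32) for _ in range(4)]   # 4 detected objects
img_b = [rng.random(32) for _ in range(6)]   # 6 detected objects
print(f"similarity: {image_similarity(img_a, img_b):.3f}")
```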

 
Presenter: Upasana Mahanti - Agheli (7:00 - 7:25)

Project Title: Elevation Mapping for Bipedal Quadruped Robot

Abstract: This research is aimed at developing and implementing algorithms that enable a robot to detect and track individuals and objects, enhancing navigation through adaptive responses to environmental changes. The project integrates these computer vision capabilities into an existing robotic framework, striving for a bimodal system that combines stable locomotion with acute environmental awareness. The goal is to deliver a quadruped robot with improved navigational abilities across various settings, equipped with a sophisticated computer vision toolkit.
 


Screen I

Presenter: Sumeet Shanbhag - Farzan (5:30 - 5:55)

Project Title: Unified Safety-Critical Motion Planning for Connected Non-Holonomic Agents Using an Adaptive A* and Hybrid A* Integration

Abstract: This paper presents a unified approach to safety-critical, multi-agent motion planning for connected autonomous robotic systems, seamlessly integrating the kinematic, dynamic, and safety constraints of individual agents, while reducing computational expense to ensure real-time applicability. By integrating Voronoi Cells with an adaptive blend of A* and Hybrid A* algorithms, the proposed combinational planner ensures the generation of feasible and executable trajectories, guaranteeing efficient and collision-free navigation of multiple agents in dynamically complex environments. An additional deadlock avoidance strategy is proposed to further enhance the safety layer. We demonstrate the effectiveness and robustness of our approach in terms of efficiency, collision avoidance, and deadlock resolution through simulations in diverse, randomly generated environments. The results show that the proposed method outperforms existing methods in terms of dynamic considerations and obstacle avoidance, making it a practical real-time motion planning approach for connected non-holonomic agents in complex environments.


Presenter: Swati Skirke - Farzan (6:00 - 6:25)

Project Title: Safety-critical collaborative decision-making for an Autonomous Vehicle fleet using Multi-agent Reinforcement Learning

Abstract: In the near future, an Autonomous Vehicle (AV) fleet will take over urban traffic, with collaborative decision-making between AVs. However, this setting raises serious challenges, both in terms of traffic flow and individual mobility, as well as from the road safety point of view. In order to achieve flawless communication and decision-making between AVs, there is a need for a robust framework. Moreover, traditional control logic is not sufficient for motion control of an AV fleet. This problem can be solved with multi-agent Reinforcement Learning, where multiple RL agents can share their experience while exploring a common environment. Our approach focuses on building intelligent control policies for collaborative decision-making of an Autonomous Vehicle fleet in urban traffic scenarios. The objective is to make a collective decision about actions to be executed by AVs in a fleet while avoiding any possible collision. We maintain a single actor-critic network, which is trained using experiences collected by multiple agents, and we use the MAA2C algorithm to train it. We have found satisfactory results for collision avoidance for an AV fleet in a ramp-merging scenario.
 

Presenter: Varun Gampa - Farzan (6:30 - 6:55)

Project Title: Enhancing Real-time Optimal Control: Learning Augmented Model Predictive Controller for Safe Robotic Systems in Safety-Critical Scenarios

Abstract: This project introduces an enhanced off-policy Gaussian Predictive Control (GPC) framework designed to complement Model Predictive Control (MPC) algorithms. The framework utilizes two MPCs with different horizon lengths, feeding the difference in their control inputs to train the GPC. After training, the GPC refines control inputs from a shorter-horizon MPC, ensuring the system behaves similarly to an MPC with a longer horizon while minimizing computational overhead. Furthermore, because the GPC outputs only the difference between the shorter- and longer-horizon MPCs, it is inherently restricted from producing large control inputs, which also keeps the system safe. Experimental validation on a differential drive mobile robot in the physics-based simulator Isaac Sim showcases the framework's adaptability to diverse trajectories, highlighting its potential for real-time optimal control in safety-critical robotic systems.
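The training signal described above, learning the gap between a long-horizon and a short-horizon controller and adding it back online, can be sketched with a Gaussian process regressor. The 1-D "plant" and the two hand-written controllers below are toys standing in for the two MPCs; only the learn-the-difference pattern is the point.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Schematic of the training signal described above: a GP learns the gap
# between a long-horizon and a short-horizon controller, then corrects
# the cheap controller online. Both "MPCs" here are toy functions.

def u_short(x):                  # short-horizon controller (myopic)
    return -0.5 * x

def u_long(x):                   # long-horizon controller (better damped)
    return -0.5 * x - 0.3 * np.tanh(x)

X = np.linspace(-2, 2, 40).reshape(-1, 1)           # sampled states
y = np.array([u_long(x) - u_short(x) for x in X.ravel()])
gp = GaussianProcessRegressor().fit(X, y)            # learn the control gap

x = 1.5
correction, std = gp.predict(np.array([[x]]), return_std=True)
u = u_short(x) + correction[0]                       # corrected control input
print(f"u_short {u_short(x):+.3f}, corrected {u:+.3f}, "
      f"u_long {u_long(x):+.3f}, gp std {std[0]:.2e}")
```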
 

Presenter: Chaitanya Gaddipati - Farzan  (7:00 - 7:25)

Project Title: Model Predictive Control of Tissue Temperature in Robotic Laser Surgery for Tumor Ablation

Abstract: Laser technology plays a vital role in modern medical practices, particularly in laryngeal microsurgery for tumor excision. Robotic systems aim to enhance precision in these procedures by focusing on automating laser temperature control. However, variability in tissue properties complicates precise thermal regulation during laser-tissue interactions. This work addresses the challenge of automatically controlling laser power in surgery to regulate the temperature of a given soft tissue. Tissue temperature is influenced by how the tissue absorbs laser light, which depends on specific optical parameters that differ across tissues and within the same tissue. This variability necessitates the development of a robust controller that can adapt to these differences and thereby minimize damage to healthy tissue while effectively ablating tumors. This research proposes a novel model predictive controller that achieves the target temperature profile across a given tissue surface. Central to our approach is the incorporation of the system's partial differential equation (PDE) model as a constraint while defining the optimization problem for the controller. We anticipate that this research contributes to the future development of PDE-constrained optimal controllers in the context of robotic laser surgery for tumor ablation.
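For intuition, the PDE at the center of such a constraint is a heat equation with a laser source term. The sketch below takes explicit finite-difference steps of a 1-D version, dT/dt = alpha * d2T/dx2 + source; the diffusivity, grid, and source magnitude are illustrative numbers, not tissue-specific values.

```python
import numpy as np

# Minimal explicit finite-difference simulation of a 1-D heat equation
# with a laser source term, the kind of PDE model embedded as an MPC
# constraint in the abstract. All parameters are illustrative.

alpha, dx, dt = 1.4e-7, 1e-4, 0.01      # diffusivity (m^2/s), grid (m), step (s)
assert alpha * dt / dx**2 <= 0.5         # explicit-scheme stability condition

T = np.full(100, 37.0)                   # tissue at body temperature (C)
source = np.zeros(100)
source[50] = 400.0                       # laser heating at one grid point (C/s)

for _ in range(500):                     # simulate 5 s of heating
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (alpha * lap + source)

print(f"peak tissue temperature: {T.max():.1f} C")
```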
 

DEPARTMENT(S):

Robotics Engineering