
RBE Directed Research Presentation

Shounak Naik, Mandeep Singh and Sai Ramana Kiran

Wednesday, May 3rd, 2023

3:00 PM - 4:00 PM

Location: UH 150E

Life is Uncertain, Might as Well Use It

Abstract: Navigating a quadcopter through an unknown environment remains a challenging open research problem. Any computation related to navigation requires low latency as well as an accurate understanding of the scene. Current methods rely on odometry or on predictions from relatively large neural networks. We present a solution that can be deployed on physically constrained (tiny) quadcopters and also works in GPS-denied environments. We use a deep-learning-based approach that relies on the uncertainty of network predictions to understand and navigate through the scene. Leveraging uncertainties allows us to use a small model, as we do not need our predictions to be accurate up to scale. We use a simple monocular camera and a small neural network to obtain uncertainties in the Depth, Optical Flow, and Surface Normals predictions. We trained the network on a custom-made FlyingCubes dataset, which provides ground truth for Surface Normals, Depth, and Forward Optical Flow. We show that this framework works well for dodging obstacles and navigating through an unknown environment.
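The abstract does not specify how the network's uncertainties are obtained, so the following is only a hypothetical sketch of one common recipe: a small network with a heteroscedastic regression head that predicts a per-pixel mean and log-variance, trained with a Gaussian negative log-likelihood (shown here for depth; flow and surface normals would use analogous heads). The model name and layer sizes are illustrative, not the authors'.

import torch
import torch.nn as nn

class TinyDepthWithUncertainty(nn.Module):  # hypothetical stand-in model
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.mean_head = nn.Conv2d(16, 1, 1)    # per-pixel depth prediction
        self.logvar_head = nn.Conv2d(16, 1, 1)  # per-pixel log-variance

    def forward(self, x):
        feats = self.backbone(x)
        return self.mean_head(feats), self.logvar_head(feats)

def heteroscedastic_nll(mean, logvar, target):
    # Gaussian NLL: pixels the network is unsure about get a large
    # predicted variance, which down-weights their squared error.
    return (0.5 * torch.exp(-logvar) * (target - mean) ** 2
            + 0.5 * logvar).mean()

model = TinyDepthWithUncertainty()
rgb = torch.rand(1, 3, 64, 64)       # dummy RGB image
gt_depth = torch.rand(1, 1, 64, 64)  # dummy ground-truth depth
mean, logvar = model(rgb)
loss = heteroscedastic_nll(mean, logvar, gt_depth)
loss.backward()
print(loss.item())

At inference time, exp(logvar) gives a per-pixel confidence map that a planner can consult instead of relying on the raw predictions being metrically correct.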

Minimalist Navigation on Tiny Aerial Robots Using Optical Flow and Its Uncertainty

Abstract: Navigation using a monocular camera is pivotal for autonomous operation of tiny aerial robots because of the sensor suite's versatility, cost, and accuracy. Our work introduces MinNav, a navigation stack based on optical flow and its uncertainty for flying through scenes that contain static and dynamic obstacles. We demonstrate the practicality of the proposed approach by deploying MinNav on a tiny drone and analyzing its performance in static-obstacle, dynamic-obstacle, and flying-through-gap scenarios. Our approach performs on par with methods that use depth from stereo cameras or monocular depth-prediction models, which are about 10× slower.
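As a rough, hypothetical illustration of how flow magnitude and its uncertainty could be turned into a steering decision (this is not the actual MinNav stack; the heuristic and function names below are invented for exposition), confident large flow can be treated as obstacle evidence and the robot steered toward the image band with the least of it:

import numpy as np

def pick_heading(flow, flow_var, n_bands=5):
    """flow: (H, W, 2) dense optical flow; flow_var: (H, W) its variance."""
    mag = np.linalg.norm(flow, axis=-1)  # per-pixel flow magnitude
    confidence = 1.0 / (1.0 + flow_var)  # high variance -> low weight
    evidence = mag * confidence          # confident large flow ~ near obstacle
    bands = np.array_split(evidence, n_bands, axis=1)  # vertical image bands
    scores = [band.mean() for band in bands]
    best = int(np.argmin(scores))        # band with least obstacle evidence
    # Map the band index to a normalized steering command in [-1, 1].
    return 2.0 * best / (n_bands - 1) - 1.0

flow = np.random.randn(120, 160, 2).astype(np.float32)          # dummy flow
flow_var = np.abs(np.random.randn(120, 160)).astype(np.float32)
print(pick_heading(flow, flow_var))  # e.g. -0.5 means steer left of center

Down-weighting uncertain pixels is what keeps a single monocular flow estimate usable: unreliable regions simply stop contributing to the obstacle evidence.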

EdgeFlowNet: 100FPS@2W Optical Flow For Tiny Mobile Robots

Abstract: Optical flow estimation is a critical task in mobile robotics that enables safe and accurate navigation, obstacle avoidance, and other functionalities. However, performing this task on tiny autonomous mobile robots is challenging due to their limited onboard sensing and computation capabilities. This paper addresses the problem by introducing EdgeFlowNet, a high-speed, low-latency dense optical flow approach for tiny autonomous mobile robots. We demonstrate the practicality of the proposed approach by deploying EdgeFlowNet on a tiny drone and analyzing its performance in static-obstacle, dynamic-obstacle, and flying-through-gap scenarios. EdgeFlowNet achieves about 20× faster processing than previous state-of-the-art approaches while maintaining similar accuracy. Overall, this paper presents a practical solution for optical flow estimation on tiny mobile robots, demonstrating efficiency improvements over previous approaches.
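To make the throughput claim concrete, here is a minimal, hypothetical benchmarking sketch; the stand-in network below is not EdgeFlowNet, and real on-robot FPS depends on the accelerator, input resolution, and quantization used:

import time
import torch
import torch.nn as nn

flow_net = nn.Sequential(  # tiny stand-in for a dense optical flow model
    nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),  # 2-channel (u, v) flow output
)
flow_net.eval()

pair = torch.rand(1, 6, 96, 128)  # two RGB frames stacked along channels

with torch.no_grad():
    for _ in range(10):           # warm-up before timing
        flow_net(pair)
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        flow_net(pair)
    elapsed = time.perf_counter() - start

print(f"{n / elapsed:.1f} FPS at 128x96 input")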

Advisor: Professor Nitin Sanket, Worcester Polytechnic Institute (WPI)

Department(s): Robotics Engineering