BEGIN:VCALENDAR
CALSCALE:GREGORIAN
VERSION:2.0
METHOD:PUBLISH
PRODID:-//Drupal iCal API//EN
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:EDT
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:EST
TZOFFSETTO:-0500
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
SEQUENCE:1
X-APPLE-TRAVEL-ADVISORY-BEHAVIOR:AUTOMATIC
UID:235731
DTSTAMP:20260428T092652Z
DTSTART;TZID=America/New_York:20260501T173000
DTEND;TZID=America/New_York:20260501T190000
URL;TYPE=URI:https://www.wpi.edu/news/calendar/events/robotics-engineering-
 spring-2026-graduate-capstone
SUMMARY:Robotics Engineering Spring 2026 Graduate Capstone
DESCRIPTION:Robotics Engineering Spring 2026 Graduate Capstone Schedule\n
 \n5:30 PM - 6:00 PM\nSIMRAN CHAUHAN\nTitle: Re
 al-Time Direction-Aware Traversability Estimation with Physics Supervision
  for Off-Road Autonomous Vehicles\nAbstract: Off-road autonomous vehicles 
 require accurate, real-time assessment of terrain traversability that acc
 ounts for terrain topography as well as vehicle geometry and orientation.
  However
 , performing this assessment through conventional physics-based mobility a
 nalysis is computationally prohibitive for real-time use. This work presen
 ts a learning-based approach that incorporates a novel criterion for top
 ographical mobility evaluation, providing a fast alternative to computa
 tionally expensive physics-based mobility analysis. The proposed method is
  trained using supervision from a deterministic mobility engine and is des
 igned to identify safety-critical impassable regions under significant cla
 ss imbalance. By incorporating terrain-geometry information during learnin
 g, the approach captures richer mobility cues than binary passability labe
 ls alone. The framework is evaluated on synthetic off-road terrains of va
 rying difficulty and demonstrated on LiDAR-derived terrain maps from simu
 lation. Results show improved detection of impassable terrain and aggrega
 ted traversability scores that can be directly used for dow
 nstream path planning, supporting real-time mobility assessment for autono
 mous ground vehicles in unstructured environments.\nAdvisor: Lee Moradi\nP
 AVAN GANESH PABBINEEDI\nTitle: On the Advantages of Control-Space Kinodyna
 mic Planning and Control Over Geometric State-Space Methods for Ground Veh
 icle Navigation\nAbstract: Autonomous ground vehicles need motion planning
  and control methods that consider the vehicle's dynamic limitations. Trad
 itional approaches first plan a path in the geometric state space using on
 ly position and heading, and then use a controller to follow that path bas
 ed mainly on positional error. Because these methods ignore velocity and d
 ynamic feasibility during planning, the controller must make reactive corr
 ections to follow trajectories that may not be dynamically achievable, lea
 ding to larger tracking errors and less smooth motion. To address this, th
 is project implements a motion planning and control framework that operate
 s directly in the control space. It generates time-parameterized reference
 s that are dynamically feasible by construction. The controller consumes t
 his full kinodynamic reference, enabling predictive control that accounts 
 for vehicle kinodynamics, rather than tracking geometric waypoints alone
 . The framework is tested on two platforms with different dynamics: an A
 ckerm
 ann-steered vehicle in simulation and a differential-drive robot in a real
  indoor environment. The proposed approach is compared with a traditional 
 geometric state-space method using the same controller and optimizer. Resu
 lts show a significant reduction in cross-track error, with the kinodynami
 c control-space approach achieving lower tracking error while also produci
 ng smoother motion. These results demonstrate the benefits of integrating 
 planning and control directly in the control space for autonomous vehicle 
 navigation.\nAdvisor: Constantinos Chamzas\nMANIDEEP DUGGI\nTitle: A Prefe
 rence-Aware, Socially Aware Multi-Robot Assistive System\nAbstract: This p
 roject presents a socially and physically assistive multi-robot system tha
 t integrates a social robot, Pepper, with a mobile manipulation robot, Str
 etch, to support object-fetch and delivery tasks during daily activities. 
 In this system, Stretch provides physical assistance through autonomous na
 vigation, object retrieval, and delivery, while Pepper serves as the prima
 ry user-facing robot, providing spoken interaction, social engagement, tas
 k explanations, confirmation prompts, and progress updates. To make the in
 teraction more personalized and user-centered, the project investigates in
 dividual differences in preferences for robot communication and assistance
  style. Based on user responses to a structured preference questionnaire, 
 the system defines multiple interaction profiles that vary along dimension
 s such as verbal presence, update frequency, initiative, consent, social t
 one, explanation detail, support, and anthropomorphic style. These profile
 s are implemented as distinct robot characters that allow Pepper to tailor
  its behavior to different user preferences. Overall, the project demonstr
 ates how a coordinated multi-robot assistive system can combine physical t
 ask support with personalized social interaction to improve user comfort, 
 engagement, transparency, and acceptance during everyday assistive tasks, 
 with the potential to promote long-term user adoption of assistive robots.
 \nAdvisor: Fiona Yuan\nROBERT GUNDUZ\nTitle: Reinforcement Learning for Fr
 ontier Exploration\nAbstract: This project explores new reinforcement lea
 rning methodology for tasking a robot to map an unknown environment as qu
 ickly as possible. The implementation includes, but is not limited to, tu
 ning hyperparameters for an existing Advantage Actor Critic (A2C) model a
 nd investigating other approaches such as sequence-to-sequence (Seq2Seq).
 \nAdvisor: Kevin Lea
 hy\nDHRUV AGRAWAL\nTitle: EAGLE: Efficient Aerial Guidance for Interconnec
 ted Load Systems\nAbstract: This work presents a control framework for coo
 perative aerial payload manipulation using single and multiple quadrotors 
 transporting a cable-suspended load. The system considers both a quadrotor
 -centric formulation, where UAVs independently track trajectories while tr
 eating the payload as a disturbance, and a payload-centric formulation, wh
 ere the coupled dynamics of the quadrotor–payload system are explicitly mo
 deled for direct payload trajectory control. A geometric controller is emp
 loyed to ensure stable and precise flight, while a multi-quadrotor extensi
 on enables cooperative transport of a rigid body payload through taut cabl
 e connections. The proposed framework enables coordinated aerial manipulat
 ion by leveraging system-level dynamics to achieve stable and accurate pay
 load trajectory tracking under coupling constraints.\nAdvisor: Guanrui Li\
 nLUCAS BURSTEIN\nTitle: Robotically Controlled Magnetic Laser Aiming for S
 urgical Applications\nAbstract: This project presents a novel optical fibe
 r steering system for laser surgery based on magnetic actuation. Fiber ste
 ering is achieved by means of two permanent magnets: a small magnet mounte
 d at the distal end of the fiber and a larger, robotically manipulated mag
 net that generates controlled magnetic torque and force. In clinical use, 
 the larger magnet would remain outside the patient’s body, positioned clos
 e enough to produce the necessary magnetic fields. Open-loop laser aiming 
 of the proposed system was demonstrated.\nAdvisor: Loris Fichera\n6:00 PM 
 - 6:30 PM\nWARWICK BARKER\nTitle: Facilitating Industrial Changeover Throu
 gh A Sketch-Based Interface\nAbstract: Industrial changeover is a wasteful
  process that continues to hinder automation efforts in high-mix, low-vol
 ume (HMLV) manufacturing environments. Without a way to mitigate changeov
 er
  waste, it is difficult to economically justify new automation projects in
  HMLV industries. This project aims to reduce wasteful changeover time for
  industrial robot applications through the use of a sketch-based interface
  and robot-mounted camera. With little-to-no training, an operator can set
  up a new job using simple sketching tools to draw on an image of the new 
 part. This process further abstracts programming from changeover operation
 s to the point that an operator needs zero understanding of conventional t
 each pendant programming methods to successfully complete a changeover ope
 ration. By simplifying changeover and reducing waste, manufacturers can ju
 stify and implement increased automation in one of the most automation-ave
 rse manufacturing segments.\nAdvisor: Jane Li\nSEYEDALI GOLESTANEH\nTitle:
  AURA: Asymptotically Optimal Uncertainty-Robust Replanning Algorithm for 
 Kinodynamic Planning\nAbstract: Sampling-based motion planners offer a pra
 ctical and scalable approach to kinodynamic motion planning, notably for h
 igh-dimensional, underactuated, or non-holonomic systems, where analytic t
 rajectory generation and exhaustive search over the state space are infeas
 ible. However, their sequential computation leads to slow convergence towa
 rd optimal solutions. Moreover, the offline planning process cannot be co
 upled with execution steps to account for motion uncertainty. In this wo
 rk, both limitations are addressed within a unified framework, A
 URA, an asymptotically optimal meta-planner, robust to uncertainties that 
 arise during execution. Alongside the main execution thread, this framewor
 k consists of a replanning module, which continuously explores the state s
 pace and improves the plan during execution, and an optimization module, w
 hich refines control inputs to reduce tracking error. The proposed approac
 h is evaluated in both simulation and real-world environments across a var
 iety of systems, demonstrating consistent improvements in path efficiency 
 in comparison to other baselines.\nAdvisor: Constantinos Chamzas\nTANMAYI 
 INAPARTHY\nTitle: Beyond Physical Safety: Vision-Language Models for Emoti
 onal Safety Reasoning in Dementia Care\nAbstract: SafeScore is a framework
  that fine-tunes vision-language models to reason about emotional safety i
 n dementia care interactions. Rather than focusing solely on physical haza
 rds, SafeScore analyzes care videos to interpret the emotional states of p
 eople living with dementia (PLWDs) — processing facial expressions, body l
 anguage, gestures, and speech together — and generates structured safety s
 cores that quantify whether a caregiver's actions are helping or unintenti
 onally escalating distress. The system builds a multimodal training pipeli
 ne using dementia care videos, NVILA-8B-generated captions, Whisper transc
 ripts, and DeepSeek-R1-generated Question–Reasoning–Answer supervision to 
 fine-tune long-context VLMs for emotional reasoning. In addition to the l
 arge-model pipeline, we also aim to develop a lightweight version of Safe
 Score using Moondream2, a compact VLM, enabling emotional safety scoring
  to run eff
 iciently on small, resource-constrained devices — making the framework dep
 loyable in real-world assistive robots, smart-home systems, and telehealth
  platforms where computational resources may be limited.\nAdvisor: Fiona Y
 uan\nHARMEET DHILLON\nTitle: Real-Time Optical Communication Using Event
  Cameras\nAbstract: In multi-robot systems, traditional radio frequency
  (RF)
  communication struggles with contention and jamming. Optical communicatio
 n offers a strong alternative. However, conventional frame-based cameras s
 uffer from limited frame rates, motion blur, and reduced robustness under 
 high dynamic range lighting. Event cameras support microsecond temporal re
 solution and high dynamic range, making them extremely sensitive to scene 
 changes under fast relative motion with an optical transmitter. Leveraging
  these strengths, we develop a complete optical communication system capab
 le of tracking moving transmitters and decoding messages in real time. Our
  system achieves over 95% decoding accuracy for text transmission during m
 otion by implementing a Geometry Aware Unscented Kalman Filter (GA-UKF), a
 chieving 7× faster processing speed compared to the previous state-of-the
 -art method, while maintaining equivalent tracking accuracy at transmitti
 ng
  frequencies ≥ 1 kHz.\nAdvisor: Kevin Leahy\nYASAR IDIKUT\nTitle: Safety-C
 ritical Quadrotor Navigation Using Fourth-Order Control Barrier Functions 
 with Full Nonlinear Quadrotor Dynamics\nAbstract: Safety-critical quadroto
 r navigation in cluttered environments typically relies on hierarchical ar
 chitectures in which a high-level safety filter modifies position referenc
 es that are then tracked by a separate attitude controller. However, this
  separation can create a mismatch between translational safety objectives
  a
 nd attitude evolution. To address this gap, we propose a safety controller
  based on high order control barrier functions that explicitly incorporate
 s the full nonlinear quadrotor dynamics, including attitude dynamics and t
 orque-level actuation. Moreover, we propose two control designs that comb
 ine nonlinear model predictive control with the proposed high-order contr
 ol barrier function for quadrotor obstacle avoidance. The resul
 ting formulation enforces obstacle-avoidance constraints while optimizing 
 control torques, granting the safety layer direct authority over attitude 
 behavior and eliminating the hierarchical disconnect inherent to reference
 -tracking safety filters. We validate the approaches in both high-fidelity
  simulation and real-world flight experiments, demonstrating reliable cons
 traint satisfaction and successful obstacle avoidance. These results sugge
 st that torque-level high-order CBFs provide a practical pathway to provab
 ly safer quadrotor navigation under full-body dynamics.\nAdvisor: Guanrui 
 Li\nJESSICA HART\nTitle: SWAMP Docking Station Design\nAbstract: The Syste
 m for Watering and Autonomously Monitoring Plants (SWAMP) is a mobile robo
 t designed to care for plants with minimal user input. During its developm
 ent by last year's MQP team, the robot successfully demonstrated navigatio
 n, obstacle avoidance, and precision watering. However, its autonomy was l
 imited by the need for manual water refilling and battery monitoring. This
  capstone addresses those limitations through the design and development o
 f a docking station for SWAMP. The docking station will serve as a home ba
 se and autonomously refill the robot's onboard water tanks. The addition o
 f a docking station will reduce required user maintenance and enable SWAMP
  to care for plants over extended periods without human intervention.\nAdv
 isor: Greg Lewin\n6:30 PM - 7:00 PM\nEVAN CARMODY\nTitle: Gripper Design f
 or Soft Continuum Robots\nAbstract: This project designs and fabricates a
  lightweight, effective gripper for a cable-driven origami-module soft co
 ntinuum robot arm. The gripper will contain a camera for vision-based ser
 voing and must integrate well with the control and actuation pipeline. Te
 sting and integration with the entire system will follow to evalu
 ate designs for efficiency and compatibility.\nAdvisor: Berk Calli\nJOHN H
 ALL\nTitle: Structured Light for Endoscopic Depth Estimation\nAbstract: Cu
 rrent gastric endoscopy vision capabilities are limited by the use of mono
 cular cameras. Estimating the volume of the stomach is a crucial component
  for successful Endoscopic Sleeve Gastroplasty (ESG) operations and is cur
 rently done by surgeon intuition. Traditional vision-based depth estimati
 on methods such as stereo or time-of-flight struggle in the poorly lit an
 d reflect
 ive environment of the stomach. The solution proposed by this project is d
 epth estimation using structured light with a monocular camera and diffrac
 ted laser system. The system generates a sparse but robust point cloud whi
 ch can be used to estimate the volume of the stomach. This project outlin
 es a method to calibrate the camera/laser pair and estimate depth. The sy
 stem was tested on a variety of surface shapes and materials in addition
  to ex-vivo stomach tissue.\nAdvisor: Giovanni Pittiglio\nAMEYA PHADNIS\nT
 itle: An Improved Manipulation Framework for Object Grasping using Point C
 louds\nAbstract: This Capstone Project proposes a consolidated vision-guid
 ed grasping pipeline that leverages a novel octree-based vision algorithm 
 applied to point clouds that is designed to provide a compact yet geometri
 cally informative description of the scene. Building on this representatio
 n, a new algorithmic grasping method is developed that directly operates o
 n the octree structure, enabling efficient reasoning over object geometry 
 and spatial occupancy without reliance on learning-based models. The propo
 sed framework is tested in Gazebo on several objects from the YCB dataset,
  and the success rate and grasp pose accuracy for the objects are noted. I
 t is observed that, with the help of the vision output, a simple approach
  suffices to grasp objects that would otherwise require heavy computation
  or learning-based methods to achieve similar results.\nAd
 visor: Jing Xiao\nBRIJAN VAGHASIYA\nTitle: Navigation in Complete Darkness
  using Neuromorphic Cameras\nAbstract: This research presents a method for
  autonomous navigation in zero-light environments using a neuromorphic ca
 mera, structured lighting, and coded apertures. By projecting structured
  light
  patterns and capturing the reflected images through a coded aperture, the
  system enhances depth estimation in complete darkness. A deep learning fr
 amework processes these images to extract depth information, enabling navi
 gation in challenging, low-visibility environments.\nAdvisor: Nitin Sanket
 \n
END:VEVENT
END:VCALENDAR
