Projects and Jobs for Students

Ingenieurpraxis, Forschungspraxis und Masterarbeit

 

Student projects available (winter semester 2023-24)

Topic 1: Visuo-Tactile Robot Manipulation for Tight Clearance Assembly

Click to download description

In this work, your research will focus on 6D pose estimation algorithms and contact-rich manipulation. More specifically:

  • Evaluating the performance of some of the latest 6D pose estimation algorithms in various scenarios (see the sketch after this list);
  • Integrating visual perception into our original tactile insertion skill framework and software architecture;
  • Verifying your algorithm in real robot experiments;
  • Assisting with research activities, including experiments and publications;
  • Possibly extending the internship into a master's thesis, with the aim of publishing papers at top-tier robotics conferences.
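For orientation on the evaluation task above: a widely used accuracy measure for 6D pose estimates is the ADD metric (average distance of transformed model points). Below is a minimal Python sketch, assuming the object model is given as an (N, 3) point array; the project may well use other metrics as well:

    import numpy as np

    def add_metric(R_est, t_est, R_gt, t_gt, model_points):
        """ADD: mean distance between model points transformed by the
        estimated pose and by the ground-truth pose."""
        pts_est = model_points @ R_est.T + t_est
        pts_gt = model_points @ R_gt.T + t_gt
        return np.linalg.norm(pts_est - pts_gt, axis=1).mean()

    def is_pose_correct(add_value, model_points, fraction=0.1):
        """Common acceptance rule: ADD below 10% of the object diameter."""
        diameter = max(np.linalg.norm(model_points - p, axis=1).max()
                       for p in model_points)
        return add_value < fraction * diameter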

Contact: Yansong Wu (Yansong.wu@tum.de), Dr. Fan Wu (f.wu@tum.de)

Topic 2: Synthesize Behavior Trees from Human Demonstrations for Industrial Assembly Tasks

Click here to download description

Topic 3: Large Language Model based Task and Motion Planning

Click here to download description

In this work, your research will focus on:

  • Building a real-world setup with several manipulation tasks.
  • Developing a software interface to convert task plans generated by LLMs into executable motion plans or Behavior Trees based on our skill library (see the sketch after this list).
  • Experimentally validating the approach on a set of manipulation tasks.
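To illustrate the interface task, here is a hedged sketch of dispatching an LLM-generated plan (assumed to arrive as a JSON list of skill calls) onto a skill library. The skill names and the JSON schema are hypothetical placeholders, not our actual library:

    import json

    # Hypothetical skill library; the real skills and signatures will differ.
    SKILLS = {
        "pick":   lambda obj: print(f"executing pick({obj})"),
        "place":  lambda obj, pose: print(f"executing place({obj}, {pose})"),
        "insert": lambda peg, hole: print(f"executing insert({peg}, {hole})"),
    }

    def execute_llm_plan(plan_json: str):
        """Run an LLM-generated plan as a sequence of library skill calls."""
        for step in json.loads(plan_json):
            skill = SKILLS.get(step["skill"])
            if skill is None:
                raise ValueError(f"unknown skill: {step['skill']}")
            skill(**step["args"])

    execute_llm_plan(
        '[{"skill": "pick", "args": {"obj": "peg"}},'
        ' {"skill": "insert", "args": {"peg": "peg", "hole": "hole_A"}}]'
    )

In practice the same mapping can emit Behavior Tree nodes instead of direct calls, so that a failed skill triggers a recovery branch rather than aborting the whole sequence.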

Contact: Yansong Wu (Yansong.wu@tum.de), Dr. Fan Wu (f.wu@tum.de)

The development of tactile and dexterous end effectors for the surgical robotic system requires expertise in designing and integrating sensing interfaces in a highly compact space. It is also important to oversee the development of the overall robotic testbed, including the control and sensing architecture, so that the subsystems integrate seamlessly from the bottom up. The operator requires the crucial sense of touch, including low-magnitude normal and shear forces as well as the sensing of multiple contact points. Additionally, the end effectors need to be modular in order to handle different surgical tools. Finally, this sensory information must be transmitted to the operator side and integrated into the overall control and sensing architecture of the robotic system.

At our institute we have already developed a drive unit for the EndoWrist. It comprises four motors connected to pulleys which, through a cable mechanism, drive the joints of the EndoWrist. Two of these drive units have been attached to two robot arms positioned around a surgical bed, and an endoscope has been attached to a third arm. The robots are inserted through the trocar point into a training module and are registered with respect to each other and to the bed. On the surgeon side, two Lambda haptic consoles provide the interaction with the surgeon.

Two positions:

Position 1: The research primarily focuses on the kinematic modeling of the available EndoWrist in order to obtain the coupled transformation between the motor space and the joint space. After a successful implementation, you will progress to dynamic identification (with an emphasis on friction modeling in the EndoWrist) as well as external torque estimation from motor current measurements. Finally, the proposed model will be validated on a testbed.
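To make the coupled transformation concrete: cable-driven wrists are often modeled, to first order, by a constant coupling matrix between motor and joint angles, whose transpose-inverse then maps motor torques (computed from currents) to joint torques. A minimal sketch with illustrative matrix entries, not the EndoWrist's identified values:

    import numpy as np

    # Illustrative coupling matrix C, with q_joint = C @ q_motor
    # (the real entries must be identified experimentally).
    C = np.array([
        [1.0, 0.0, 0.0, 0.0],   # roll driven by motor 1 only
        [0.0, 1.0, 0.0, 0.0],   # pitch driven by motor 2 only
        [0.0, 0.0, 0.5, -0.5],  # yaw as a differential of motors 3 and 4
        [0.0, 0.0, 0.5, 0.5],   # grip as the sum of motors 3 and 4
    ])

    def motor_to_joint(q_motor):
        return C @ q_motor

    def joint_torque_from_currents(currents, k_t, tau_friction):
        """Motor torque is k_t * i minus a friction term; by the duality
        of q_joint = C q_motor, joint torque is inv(C).T @ tau_motor."""
        tau_motor = k_t * np.asarray(currents) - tau_friction
        return np.linalg.inv(C).T @ tau_motor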

Position 2: For the second position we are seeking a highly skilled researcher to improve the existing software architecture of the whole system, which mainly concerns the networking between the robots and the haptic consoles. Good programming skills in ROS and C++/Python are appealing for this position.

The following experience and background are appealing:

  • Good MATLAB skills (Position 1)
  • Good understanding of coordinate frames and 3D transformations (Position 1)
  • Good mathematical knowledge, specifically linear algebra (Position 1)
  • Experience with ROS, Python, and C++ (Position 2)
  • Experience with Maxon motor controllers (Positions 1 and 2)
  • Creative and independent thinking (Positions 1 and 2)

Related Literature

  • Lee, C., Park, Y. H., Yoon, C., Noh, S., Lee, C., Kim, Y., ... & Kim, S. (2015). A grip force model for the da Vinci end-effector to predict a compensation force. Medical & biological engineering & computing, 53, 253-261. 

  • Kim, S., & Lee, D. Y. (2015). Friction-model-based estimation of interaction force of a surgical robot. 2015 15th International Conference on Control, Automation and Systems (ICCAS), 1503-1507. doi: 10.1109/ICCAS.2015.7364591.

  • Longmore, S. K., Naik, G., & Gargiulo, G. D. (2020). Laparoscopic robotic surgery: Current perspective and future directions. Robotics, 9(2), 42. 

  • Guadagni, S., Di Franco, G., Gianardi, D., Palmeri, M., Ceccarelli, C., Bianchini, M., ... & Morelli, L. (2018). Control comparison of the new EndoWrist and traditional laparoscopic staplers for anterior rectal resection with the Da Vinci Xi: a case study. Journal of Laparoendoscopic & Advanced Surgical Techniques, 28(12), 1422-1427.  

  • Abeywardena, S., Yuan, Q., Tzemanaki, A., Psomopoulou, E., Droukas, L., Melhuish, C., & Dogramadzi, S. (2019). Estimation of tool-tissue forces in robot-assisted minimally invasive surgery using neural networks. Frontiers in Robotics and AI, 6, 56.
     

For more information, please contact: 

Dr Hamid Sadeghian (hamid.sadeghian@tum.de)
Mario Tröbinger (mario.troebinger@tum.de)

Location:

Forschungszentrum Geriatronik, MIRMI, TUM, Bahnhofstraße 37, 82467 Garmisch-Partenkirchen.
MIRMI, TUM, Georg-Brauchle-Ring 60-62, 80992 München.

Background

Accurate pose estimation of surgical tools is paramount in robotic surgery, helping to increase precision and ultimately improve patient outcomes. Relying solely on forward kinematics often falls short, as it cannot account for uncertainties such as the cable compliance of the EndoWrists, motor backlash, and environmental noise, among others.

In our lab, we have developed a cutting-edge surgical testbed with three Franka Emika robots, equipped with a professional endoscope and EndoWrists. This setup provides a unique platform to tackle real-world challenges in robotic surgery.

To enhance autonomy in robotic surgery, we are looking to develop an innovative pose estimation algorithm utilizing endoscopic images. This thesis opportunity aims to tackle this exciting challenge, potentially making a significant contribution to the future of robotic surgery.

Tasks

  • Perform camera calibration for the endoscopes (see the sketch after this list)
  • Develop a pose estimation algorithm for endowrists
  • Evaluate the methods on our robotic surgery setup
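As a hedged starting point for the calibration task above, the standard checkerboard procedure with OpenCV is sketched below; the pattern size and image paths are example values, and an endoscope may require a distortion model with more terms:

    import glob
    import cv2
    import numpy as np

    # Example board: 9x6 inner corners, 10 mm squares.
    pattern, square = (9, 6), 0.010
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_points, img_points, image_size = [], [], None
    for fname in glob.glob("calib_images/*.png"):  # placeholder path
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic matrix K and distortion coefficients from all detected boards.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("reprojection RMS error [px]:", rms)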

Prerequisites

  • Good Python & C++ programming skills / ROS 2
  • Good understanding of state estimation algorithms, e.g. Kalman filter/particle filter
  • Good knowledge of computer vision, e.g. camera calibration, feature extraction
  • Goal-oriented mentality and strong motivation for the topic

References

[1] Moccia, Rocco, et al. "Vision-based dynamic virtual fixtures for tools collision avoidance in robotic surgery." IEEE Robotics and Automation Letters 5.2 (2020): 1650-1655.

[2] Staub, Christoph. Micro Endoscope based Fine Manipulation in Robotic Surgery. Diss. Technische Universität München, 2013.

[3] Richter, Florian, et al. "Robotic tool tracking under partially visible kinematic chain: A unified approach." IEEE Transactions on Robotics 38.3 (2021): 1653-1670.

Contact

Zheng Shen (zheng.shen@tum.de)

Fabian Jakob (fabian.jakob@tum.de)

Chair of Robotics Science and Systems Intelligence, Munich Institute of Robotics and Machine Intelligence (MIRMI)

The goal of this thesis is to study how vision modulates the planning and execution of movements.

 

Healthy participants will wear an EEG cap in order to study what happens at the brain level during different grasping tasks (task execution in the light and in the dark, and task imagination).
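As a hedged illustration of the kind of signal processing involved, the sketch below band-pass filters one EEG channel and computes its power in the mu band (8-13 Hz), which is typically modulated during grasp execution and motor imagery; the sampling rate and data are placeholders:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def band_power(eeg, fs, band=(8.0, 13.0), order=4):
        """Mean power of an EEG channel within a frequency band."""
        b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)],
                      btype="band")
        return np.mean(filtfilt(b, a, eeg) ** 2)

    fs = 250.0                               # example sampling rate in Hz
    eeg_c3 = np.random.randn(int(10 * fs))   # placeholder for a real C3 channel
    print(band_power(eeg_c3, fs))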

Good to have:

  • Good programming and signal processing skills
     

Contacts:

Location(s): Faculty of Biology, LMU Biocenter, Großhaderner Str. 2, Munich, and

MIRMI (TUM), Garching bei München, Carl Zeiss Strasse

 

References:

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8008384

https://iopscience.iop.org/article/10.1088/1741-2552/aa8911/pdf

Europe is leading the market for torque-controlled robots. These robots can withstand physical interaction with the environment, including impacts, while providing accurate sensing and actuation capabilities. This opens the door to exploiting intentional dynamic contact transitions for manipulation. This exciting field of research, which we refer to as impact-aware robotics, requires the development of a new holistic framework comprising modeling, learning, planning, sensing, and control aspects, supported by collision-tolerant hardware.

This thesis will focus on developing an impact monitoring pipeline that incorporates pre-existing knowledge of impact and release motions to discriminate between expected and unexpected impacts.
The student will implement a classification pipeline that communicates with the AGX Dynamics physics engine, and validate the controller on a Franka Emika dual-arm setup.
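As a rough sketch of such a pipeline (the features and the choice of classifier are illustrative assumptions, not the project's prescribed design):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def impact_features(tau_ext_window):
        """Simple features from a window of external-torque estimates
        around a detected impact: peak, mean, and maximum rate of change."""
        w = np.asarray(tau_ext_window)
        return [w.max(), w.mean(), np.abs(np.diff(w)).max()]

    # X: feature rows gathered from simulated (AGX Dynamics) and real impacts;
    # y: 1 = expected (planned) impact, 0 = unexpected. Placeholder data here.
    X = np.random.randn(100, 3)
    y = np.random.randint(0, 2, 100)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    print("expected impact?", clf.predict([impact_features(np.random.randn(50))]))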

What you will gain

  • Hands-on experience with Franka Emika robot manipulators
  • Experience with machine learning classification for a robotic application
  • Experience with software integration of C++ and Python/MATLAB software packages
  • Access to the I.AM. consortium software and network (our academic partners are TU/e, EPFL, and CNRS; our industrial partners are Franka Emika, Smart Robotics, and Algoryx)


Prerequisites

  • Good C++ programming skills
  • Good MATLAB or Python programming skills
  • Good understanding of dynamical system modeling principles (e.g., the mass-spring-damper model)
  • Familiarity with classification principles (data preprocessing, classifier tuning, etc.)
  • Willingness to get familiar with new software frameworks and integrate them


Helpful but not required

  • Experience with Ubuntu
  • Experience with robot manipulators

Related Literature:

  • B. Proper, A. Kurdas, S. Abdolshah, S. Haddadin and A. Saccon, "Aim-Aware Collision Monitoring: Discriminating Between Expected and Unexpected Post-Impact Behaviors," in IEEE Robotics and Automation Letters, vol. 8, no. 8, pp. 4609-4616, Aug. 2023, doi: 10.1109/LRA.2023.3284371.
  • I. Aouaj, V. Padois and A. Saccon, "Predicting the Post-Impact Velocity of a Robotic Arm via Rigid Multibody Models: an Experimental Study," 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, 2021, pp. 2264-2271, doi: 10.1109/ICRA48506.2021.9561768.
  • J. J. van Steen, N. van de Wouw and A. Saccon, "Robot Control for Simultaneous Impact tasks via Quadratic Programming-based Reference Spreading," 2022 American Control Conference (ACC), Atlanta, GA, USA, 2022, pp. 3865-3872, doi: 10.23919/ACC53348.2022.9867812.

Websites

i-am-project.eu


www.algoryx.se/agx-dynamics/


jrl-umi3218.github.io/mc_rtc/

For more information, please contact:

Alessandro Melone at alessandro.melone@tum.de 

We are looking for a HiWi candidate to support the organization of the 2023 International Workshop on Human-Friendly Robotics (HFR), taking place in Munich on Sept. 20/21. The position is available immediately!

The main tasks of the HiWi will be:

  • Creating the website for the workshop on human-friendly robotics;
  • Maintaining and updating the website;
  • Potentially (if desired) communicating with world-renowned speakers;
  • Supporting media advertising;
  • Supporting the digital organization of folders and documents.

Prerequisites:
Experience with web design

Helpful but not required:
German language


Duration: 4 to 8 weeks
Hours per week / salary: to be defined depending on availability and experience

For more information, please contact:

Dr Luis Figueredo

Human-Robot Interaction

For the successful integration of robots into our society, task execution is an important skill with which we are trying to equip robots. Besides the task model for the trajectory of human-object interactions, which has been developed previously, grasping and manipulating objects is the aspect of our task model that we want to improve in this project. Some grasps may not lead to a successful execution and should not be considered during the robot's task execution. For example, when pouring the contents of one cup into another, grasping the cup from the top (by the cup's opening) will not allow the contents to be poured correctly. Also important is the ability to grasp a previously unknown object that is similar to previously seen objects or that has a similar shape to another. In addition, for the execution of the grasp, a metric indicating the quality of the grasp is a desirable feature.

The system will use a simulation (Vrep/CoppeliaSim) to show a human its exploration of grasps. Upon seeing the simulated exploration, the human provides the system with feedback on whether the shown simulation/alteration of the task still fits the implicit task features (e.g., the feedback could state whether the robot will be able to successfully complete a pouring task with the selected grasp).

Available modules:

  • Visualization of different robot gripper types in Vrep/CoppeliaSim
  • Control interface to Vrep/CoppeliaSim from C++ code
  • Rudimentary grasp and gripper (code-)models
  • Database (not dataset) for storing object information

Tasks may include a few of the following (to be discussed depending on your interest and background):

  • Explore available online datasets of object meshes and 3D object point clouds (e.g., the YCB dataset) -> select the most comprehensive one
  • Explore & implement available grasp planners (working with object meshes and point clouds) -> select the best-performing one on the selected dataset + check whether the planner outputs a grasp metric
  • Take into account different gripper types (e.g., antipodal grippers & human-hand grippers; see the sketch after this list)
  • Create a visualization of the generated grasp points / EE poses in Vrep
  • Explore mesh segmentation approaches in relation to the generated grasps, to condense & compress the number of grasps generated by the planner
  • Evaluate & implement methods for determining shape similarity and correspondence between a detected object shape and a list of mesh models from a database (look at CLIP, BERT, and similar resources)
  • Automate the correct positioning of visually detected objects in the simulation
  • Generate a visualization framework for the user feedback.
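As a hedged illustration of the antipodal-gripper item above, the sketch below tests whether two contact points with inward-facing surface normals form an antipodal grasp under a Coulomb friction model; the friction coefficient is an example value:

    import numpy as np

    def is_antipodal(p1, n1, p2, n2, mu=0.4):
        """Antipodal test: the line connecting the contacts must lie inside
        both friction cones (half-angle arctan(mu)) around the inward normals."""
        axis = (p2 - p1) / np.linalg.norm(p2 - p1)
        half_angle = np.arctan(mu)
        ok1 = np.arccos(np.clip(np.dot(n1, axis), -1.0, 1.0)) <= half_angle
        ok2 = np.arccos(np.clip(np.dot(n2, -axis), -1.0, 1.0)) <= half_angle
        return ok1 and ok2

    # Example: two opposing contacts on a 6 cm wide box.
    print(is_antipodal(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                       np.array([0.06, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])))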

Prerequisites

  • Good C++ programming skills
  • Good understanding of coordinate frames and 3d transformations
  • Willingness to get thoroughly familiar with the CoppeliaSim simulator (and other simulators) and its features (especially concerning user interaction)

Helpful but not required

  • Experience with ROS
  • Experience with robot manipulators
  • Python

Related Literature:

  • Grasp Taxonomy: Paper 1 (https://www.csc.kth.se/grasp/taxonomyGRASP.pdf), Paper 2 (https://ieeexplore.ieee.org/document/7243327)
  • Grasp Planning via Hand-Object Geometric Fitting (https://link.springer.com/article/10.1007/s00371-016-1333-x)
  • Contact Grasp-Net: Efficient Grasp Generation in Cluttered Spaces (https://arxiv.org/pdf/2103.14127.pdf)
  • Automatic Grasp Planning Using Shape Primitives (https://www.cs.columbia.edu/~allen/PAPERS/grasp.plan.ra03.pdf)
  • Generating Task-specific Robotic Grasps (https://arxiv.org/pdf/2203.10498v1.pdf)
  • Using Geometry to Detect Grasps in 3D Point Clouds (https://arxiv.org/pdf/1501.03100.pdf)
  • GraspIt! (https://graspit-simulator.github.io/build/html/grasp_planning.html)

For more information, please contact:

Dr Luis Figueredo

Andrei Costinescu

AI-Enabled Lab Automation

Other Categories

You can find more research opportunities at the Chairs of our Principal Investigators!