Projects and Jobs for Students

Ingenieurpraxis, Forschungspraxis, and Master's Thesis

 

Student projects available (winter semester 2023-24)

Topic 1: Visuo-Tactile Robot Manipulation for Tight Clearance Assembly.

Click to download description

In this work, your research topic will be in 6D pose estimation algorithms and contact-rich manipulation. More specifically:

  • Evaluating the performance of some of the latest 6D pose estimation algorithms in various scenarios;
  • Integrating visual perception into our original tactile insertion skill framework and software architecture;
  • Verifying your algorithm with real robot experiments;
  • Assisting research activities, including experiments and publications;
  • Possibility to extend the internship to a master's thesis, with the aim of publishing papers at a top-tier robotics conference.

Contact: Yansong Wu (Yansong.wu@tum.de), Dr. Fan Wu (f.wu@tum.de)

Topic 2: Synthesize Behavior Trees from Human Demonstrations for Industrial Assembly Tasks

Click here to download description

Topic 3: Large Language Model based Task and Motion Planning

Click here to download description

In this work, your research topic will be in:

  • Build a real-world setup with several manipulation tasks.
  • Develop a software interface to convert task plans generated by LLMs into executable motion plans or Behavior Trees based on our skill library.
  • Experimental validation on a set of manipulation tasks.
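As a hedged sketch of such an interface, the snippet below parses an LLM-generated task plan (plain text steps) into calls against a skill library. The skill names and the `skill(argument)` plan format are invented for illustration; they are not the institute's actual skill library or Behavior Tree format.

```python
# Minimal sketch: map an LLM-generated task plan (text lines) to executable
# skill calls from a skill library. Skill names and plan format are
# hypothetical illustrations.

SKILL_LIBRARY = {"pick", "place", "insert"}

def parse_plan(plan_text):
    """Turn lines like 'pick(cube)' into (skill, argument) tuples,
    rejecting steps that are not in the skill library."""
    steps = []
    for line in plan_text.strip().splitlines():
        name, _, rest = line.partition("(")
        name = name.strip()
        if name not in SKILL_LIBRARY:
            raise ValueError(f"unknown skill: {name}")
        arg = rest.rstrip(")").strip()
        steps.append((name, arg))
    return steps

plan = """pick(cube)
insert(cube)"""
print(parse_plan(plan))  # [('pick', 'cube'), ('insert', 'cube')]
```

A rejected step (an unknown skill) would be the natural point to re-prompt the LLM or ask a human for clarification.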

Contact: Yansong Wu (Yansong.wu@tum.de), Dr. Fan Wu (f.wu@tum.de)

The development of tactile and dexterous end effectors for the surgical robotic system requires expertise in designing and integrating sensing interfaces within a highly compact space. Moreover, it is important to oversee the general robotic testbed development, including the control and sensing architecture, for seamless bottom-up integration of subsystems. The operator requires the crucial sense of touch, including low-magnitude normal and shear forces as well as sensing of multiple contact points. Additionally, the end effectors need to be modular to handle different surgical tools. Furthermore, this sensory information needs to be translated to the operator side and integrated into the overall control and sensing architecture of the robotic system.

At our institute we have already developed a drive unit for the EndoWrist. It comprises four motors connected to pulleys, which in turn drive the joints of the EndoWrist through a cable mechanism. Two of these drive units have been attached to two robot arms around a surgical bed, and an endoscope is attached to a third arm. The robots are inserted through the trocar point into a training module and are registered with respect to each other and the bed. On the surgeon side, two Lambda haptic consoles provide interaction with the surgeon.

Two positions:

Position 1: The research primarily focuses on the kinematic modeling of the available EndoWrist in order to obtain the coupled transformation between motor space and joint space. After successful implementation, you will progress to dynamic identification (with emphasis on friction modeling in the EndoWrist) as well as external torque estimation from motor current measurements. Finally, the proposed model will be validated on a testbed.
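To first order, such a coupled motor-to-joint transformation for a cable-driven wrist can often be approximated as a linear map q = C·θ. The sketch below illustrates the idea with a made-up coupling matrix; the real EndoWrist coupling must be identified on the hardware.

```python
# Illustrative sketch of a coupled motor-to-joint mapping for a cable-driven
# wrist: joint angles are a linear combination of motor angles, q = C * theta.
# The coupling matrix below is invented for illustration only.

def motor_to_joint(C, theta):
    """Apply the coupling matrix C (list of rows) to motor angles theta."""
    return [sum(c * t for c, t in zip(row, theta)) for row in C]

# Hypothetical 3x4 coupling: roll driven by motor 0, pitch by motor 1, and
# the two jaw cables (motors 2 and 3) acting antagonistically on the grip.
C = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, -0.5],
]
print(motor_to_joint(C, [0.1, 0.2, 0.4, 0.2]))
```

Identifying the entries of C (and its deviations due to cable compliance and friction) is exactly what the modeling task above targets.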

Position 2: For the other position we are seeking a highly skilled researcher to improve the existing software architecture of the whole system. This mainly covers the networking of the robots and haptic consoles, as well as the development of a digital twin of the system in MuJoCo. Good programming skills in ROS and C++/Python, as well as experience with simulation environments such as MuJoCo, are desirable for this position.

The following experience and background are desirable:

  • Good Matlab skills (Position 1)

  • Good understanding of coordinate frames and 3D transformations (Position 1)

  • Good mathematical knowledge, specifically linear algebra (Position 1)

  • Experience with ROS, Python, C++ (Position 2)

  • Experience with Maxon motor controllers (Positions 1, 2)

  • Creative and independent thinker (Positions 1, 2)

Related Literature

  • Lee, C., Park, Y. H., Yoon, C., Noh, S., Lee, C., Kim, Y., ... & Kim, S. (2015). A grip force model for the da Vinci end-effector to predict a compensation force. Medical & biological engineering & computing, 53, 253-261. 

  • S. Kim and D. Y. Lee, "Friction-model-based estimation of interaction force of a surgical robot," 2015 15th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea (South), 2015, pp. 1503-1507, doi: 10.1109/ICCAS.2015.7364591. 

  • Longmore, S. K., Naik, G., & Gargiulo, G. D. (2020). Laparoscopic robotic surgery: Current perspective and future directions. Robotics, 9(2), 42. 

  • Guadagni, S., Di Franco, G., Gianardi, D., Palmeri, M., Ceccarelli, C., Bianchini, M., ... & Morelli, L. (2018). Control comparison of the new EndoWrist and traditional laparoscopic staplers for anterior rectal resection with the Da Vinci Xi: a case study. Journal of Laparoendoscopic & Advanced Surgical Techniques, 28(12), 1422-1427.  

  • Abeywardena, S., Yuan, Q., Tzemanaki, A., Psomopoulou, E., Droukas, L., Melhuish, C., & Dogramadzi, S. (2019). Estimation of tool-tissue forces in robot-assisted minimally invasive surgery using neural networks. Frontiers in Robotics and AI, 6, 56.
     

For more information, please contact: 

Dr. Hamid Sadeghian (hamid.sadeghian@tum.de)
Mario Tröbinger (mario.troebinger@tum.de)

Location:

Forschungszentrum Geriatronik, MIRMI, TUM, Bahnhofstraße 37, 82467 Garmisch-Partenkirchen.
MIRMI, TUM, Georg-Brauchle-Ring 60-62, 80992 München.

Background

Accurate pose estimation of surgical tools is paramount in robotic surgery, helping to increase precision and ultimately improve patient outcomes. Relying solely on forward kinematics often falls short, as it cannot account for uncertainties such as the cable compliance of the EndoWrists, motor backlash, and environmental noise.

In our lab, we have developed a cutting-edge surgical testbed with three Franka Emika robots, equipped with a professional endoscope and EndoWrists. This setup provides a unique platform to tackle real-world challenges in robotic surgery.

To enhance autonomy in robotic surgery, we are looking to develop an innovative pose estimation algorithm utilizing endoscopic images. This thesis opportunity aims to tackle this exciting challenge, potentially making a significant contribution to the future of robotic surgery.

Tasks

  • Perform camera calibration for the endoscopes
  • Develop a pose estimation algorithm for endowrists
  • Evaluate the methods on our robotic surgery setup

Prerequisites

  • Good Python & C++ programming skills / ROS 2
  • Good understanding of state estimation algorithms, e.g. Kalman filter/particle filter
  • Good knowledge of computer vision, e.g. camera calibration, feature extraction
  • Goal-oriented mentality and motivation about the topic
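To illustrate the state-estimation prerequisite, here is a minimal scalar Kalman filter with a random-walk motion model. In the actual thesis the state would be a 6D tool pose and the measurements image-based, but the predict/update structure is the same; all noise parameters below are invented.

```python
# Minimal 1D Kalman-filter sketch: fuse a noisy measurement stream into a
# smoother estimate. q is process noise, r is measurement noise (made up).

def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter with random-walk model x_k = x_{k-1} + noise."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q            # predict: uncertainty grows
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update with measurement residual
        p = (1.0 - k) * p    # updated uncertainty
        estimates.append(x)
    return estimates

est = kalman_1d([1.2, 0.8, 1.1, 0.9, 1.0])
print(est)
```

The estimates converge toward the underlying value (here near 1.0) while smoothing out the measurement noise, which is the behavior the pose estimator should exhibit around the kinematic prediction.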

References

[1] Moccia, Rocco, et al. "Vision-based dynamic virtual fixtures for tools collision avoidance in robotic surgery." IEEE Robotics and Automation Letters 5.2 (2020): 1650-1655.

[2] Staub, Christoph. Micro Endoscope based Fine Manipulation in Robotic Surgery. Diss. Technische Universität München, 2013.

[3] Richter F, Lu J, Orosco R K, et al. Robotic tool tracking under partially visible kinematic chain: A unified approach[J]. IEEE Transactions on Robotics, 2021, 38(3): 1653-1670.

Contact

Zheng Shen (zheng.shen@tum.de)

Dr. Hamid Sadeghian (hamid.sadeghian@tum.de)

Chair of Robotics Science and Systems Intelligence, Munich Institute of Robotics and Machine Intelligence (MIRMI)

The thesis goal is to study how vision modulates the planning and execution of movements.

 

Healthy participants will wear an EEG cap in order to study what happens at the brain level during different grasping tasks (task execution in the light and in darkness, and task imagination).

Good to have:

  • Good programming and signal processing skills
     

Contacts:

Locations: Faculty of Biology, LMU Biocenter, Großhaderner Str. 2, Munich, and

MIRMI (TUM), Garching bei München, Carl Zeiss Strasse

 

References:

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8008384

https://iopscience.iop.org/article/10.1088/1741-2552/aa8911/pdf

Robot Motion Planning

The goal of this 3-month internship/6-month thesis is to develop a safe and time-efficient motion planner. As robots and humans increasingly share the same workspace, the development of safe motion plans becomes paramount. For real-world applications, however, it is critical that safety solutions are achieved without compromising performance. The computation of safe, time-efficient trajectories usually requires rather complex, often decoupled planning and optimization methods, which degrade nominal performance. Prior work has already addressed the problem with a graph-search-based scheme that solves it efficiently. However, across different domains it is hard to design a single heuristic function that captures all complexities. The current goal is therefore to integrate multiple constraints as heuristics, following the multi-heuristic paradigm. The final goal is to scale up to a redundant robot system performing human-centred manipulation planning.

Related Literature: S*: On Safe and Time Efficient Robot Motion Planning, Multi-Heuristic A*
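As a toy illustration of combining heuristics: proper Multi-Heuristic A* maintains one priority queue per heuristic, but the simplified sketch below takes the maximum of several admissible heuristics inside a plain grid A*, which remains admissible and can tighten the search. The grid world is, of course, a stand-in for the configuration space of a manipulator.

```python
import heapq

# Grid A* where h(n) is the max over a list of admissible heuristics.
# This is a simplification of Multi-Heuristic A* for illustration only.

def astar(start, goal, blocked, heuristics, size=10):
    def h(n):
        return max(f(n, goal) for f in heuristics)
    open_set = [(h(start), 0, start)]
    g = {start: 0}
    while open_set:
        _, cost, node = heapq.heappop(open_set)
        if node == goal:
            return cost
        if cost > g.get(node, float("inf")):
            continue  # stale queue entry
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in blocked:
                ng = cost + 1
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = ng
                    heapq.heappush(open_set, (ng + h((nx, ny)), ng, (nx, ny)))
    return None  # no path

manhattan = lambda n, goal: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
print(astar((0, 0), (3, 3), {(1, 1)}, [manhattan]))  # 6
```

Adding further admissible heuristics (e.g. clearance- or time-based ones) only changes the `heuristics` list, which is the flexibility the multi-heuristic paradigm aims for.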

The main tasks would be (in close collaboration):

  • Understand the basics of Multi-Heuristic A*; 
  • Develop a simple prototype in Matlab/C++ with a 7-DoF robot; 
  • Integrate the planning with a simulator; 
  • Come up with scenarios to test the overall system;  
  • Support with existing implementations. 

Pre-requisites:

  • Experience with prototyping in Matlab and C++;
  • Basics of Manipulator Kinematics and Dynamics;
  • Basic working knowledge of Manipulator Control;
  • Willingness to really get familiar with the CoppeliaSim simulator (and other simulators) and its features;

Helpful but not required
Knowledge of search-based planning, running simple controllers on robots

For more information, please contact:

Riddhiman Laha (riddhiman.laha@tum.de)

The goal of this 3 month internship/6 month thesis will be to benchmark existing and new real-time robot planning algorithms. The platform to be used for testing, both in simulation and real-world experiments, is a humanoid robot with two 7 DoF arms. Prior work has already proposed solutions for reactive planning that scale up to bimanual manipulation planning. However, a thorough evaluation of existing and new algorithms is necessary to understand the advantages and limitations of each approach. To this end, realistic open-source motion planning datasets are to be utilized for generating static and dynamic scenes in order to test the different strategies. Lastly, a thorough analysis using existing and novel metrics is needed in order to rank the different planners.

Related Literature: MotionBenchMaker, Multi-agent Planner, and TrajOpt.  
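The evaluation loop described above can be sketched as a small harness that records success rate, planning time, and path cost per planner on a shared set of scenes, then ranks them. The stand-in "planners" and the ranking criterion below are illustrative assumptions, not MotionBenchMaker's actual API.

```python
import time

# Hedged sketch of a planner benchmark: run each planner on the same scenes,
# record success, planning time, and path cost, then rank the planners.

def benchmark(planners, scenes):
    results = {}
    for name, plan in planners.items():
        times, costs, successes = [], [], 0
        for scene in scenes:
            t0 = time.perf_counter()
            path = plan(scene)
            times.append(time.perf_counter() - t0)
            if path is not None:
                successes += 1
                costs.append(len(path))
        results[name] = {
            "success_rate": successes / len(scenes),
            "mean_time": sum(times) / len(times),
            "mean_cost": sum(costs) / len(costs) if costs else float("inf"),
        }
    # rank: higher success rate first, then lower mean path cost
    return sorted(results.items(),
                  key=lambda kv: (-kv[1]["success_rate"], kv[1]["mean_cost"]))

planners = {
    "greedy": lambda scene: list(range(len(scene) * 2)),   # longer paths
    "optimal": lambda scene: list(range(len(scene))),      # shorter paths
}
ranking = benchmark(planners, [[0, 1], [0, 1, 2]])
print([name for name, _ in ranking])  # ['optimal', 'greedy']
```

Real metrics (smoothness, clearance, success under dynamic obstacles) would slot into the same per-scene loop.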

The main tasks would be (in close collaboration):

  • Implement and configure the MotionBenchMaker
  • Port existing planning algorithms to the same environment; 
  • Set up comparison planners such as STOMP, CHOMP, and TrajOpt;
  • Come up with easy and hard scenarios to test the planners;  
  • Integrate tested code to the humanoid planning stack and perform experiments;

Pre-requisites:

  • Experience with prototyping in Matlab and C++;
  • Familiarity with ROS, MoveIt, OMPL;
  • Basics of Manipulator Kinematics and Dynamics;
  • Basic working knowledge of Manipulator Control;
  • Willingness to really get familiar with MuJoCo simulator and its features;

Helpful but not required
Knowledge of search-based, optimization-based, and reactive planning, running simple controllers on robots.

For more information, please contact:

Riddhiman Laha (riddhiman.laha@tum.de)

Human-Robot Interaction

For a successful integration of robots into our society, task execution is an important skill with which we are trying to equip robots. Besides the task model for the trajectory of human-object interactions, which has been developed previously, grasping and manipulating objects is the aspect of our task model that we want to improve in this project. Some grasps may not lead to a successful execution and should not be considered during the robot's task execution. For example, when pouring the contents of one cup into another, grasping the cup from the top (by the cup's opening) will not allow the contents to be poured correctly. Also important is the ability to grasp a previously unknown object that is similar to previously seen objects or has a similar shape to another. In addition, for the execution of the grasp, a metric indicating the quality of the grasp is a desirable feature to have.

The system will use a simulation (Vrep/CoppeliaSim) to show a human its exploration of grasps. Upon seeing the simulated exploration, the human provides the system with feedback on whether the seen simulation/alteration of the task still fits the implicit task features (e.g. the feedback could say whether the robot will be able to successfully complete a pouring task based on the selected grasp).
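The human-feedback loop above reduces to a minimal sketch: sample candidate grasps, show each in simulation, and keep only those the user confirms as task-compatible. The grasp labels and the feedback predicate are invented for illustration; a real system would use full end-effector poses and richer feedback.

```python
# Illustrative human-in-the-loop grasp filtering: candidate grasps shown in
# simulation are kept or discarded based on binary user feedback.

def filter_grasps(candidates, feedback):
    """Keep only grasps the human judged compatible with the task."""
    return [g for g in candidates if feedback(g)]

candidates = ["side", "bottom", "top"]
# e.g. for pouring, a top grasp blocks the cup opening and is rejected
pouring_ok = lambda grasp: grasp != "top"
print(filter_grasps(candidates, pouring_ok))  # ['side', 'bottom']
```

The surviving set defines the implicit, task-specific grasp constraints that the system then generalizes to similar objects.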

Available modules:

  • Visualization of different robot gripper types in Vrep/CoppeliaSim
  • Control interface to Vrep/CoppeliaSim from C++ code
  • Rudimentary grasp and gripper (code-)models
  • Database (not dataset) for storing object information

Tasks may include a few of the below (to be discussed depending on your interest and background):

  • Explore available, online datasets of object meshes and of 3d object point clouds (e.g. YCB dataset) -> select most comprehensive
  • Explore & implement available grasp planners (working with object meshes and point clouds) -> select best performing one on the selected dataset + check if the planner outputs a grasp metric
  • Take into account different gripper types (e.g. antipodal gripper & human hand gripper)
  • Create visualization of the generated grasp points / EE-poses in Vrep
  • Explore mesh segmentation approaches in relation to the generated grasps to condense & compress the amount of generated grasps from the planner
  • Evaluate & implement methods for determining shape similarity and correspondence between a detected object shape and a list of mesh models from a database (look at: CLIP, BERT and similar resources)
  • Automate correct positioning of the visually-detected objects in simulation
  • Generate a visualization framework for the user feedback.

Prerequisites

  • Good C++ programming skills
  • Good understanding of coordinate frames and 3d transformations
  • Willingness to really get familiar with the CoppeliaSim simulator (and other simulators) and its features (especially concerning user-interaction)

Helpful but not required
- Experience with ROS
- Experience with robot manipulators
- Python

Related Literature:

  • Grasp Taxonomy: Paper 1 (https://www.csc.kth.se/grasp/taxonomyGRASP.pdf), Paper 2 (https://ieeexplore.ieee.org/document/7243327)
  • Grasp Planning via Hand-Object Geometric Fitting (https://link.springer.com/article/10.1007/s00371-016-1333-x)
  • Contact Grasp-Net: Efficient Grasp Generation in Cluttered Spaces (https://arxiv.org/pdf/2103.14127.pdf)
  • Automatic Grasp Planning Using Shape Primitives (https://www.cs.columbia.edu/~allen/PAPERS/grasp.plan.ra03.pdf)
  • Generating Task-specific Robotic Grasps (https://arxiv.org/pdf/2203.10498v1.pdf)
  • Using Geometry to Detect Grasps in 3D Point Clouds (https://arxiv.org/pdf/1501.03100.pdf)
  • Grasp-It (https://graspit-simulator.github.io/build/html/grasp_planning.html)

For more information, please contact:

Dr. Luis Figueredo

Andrei Costinescu

Robot Control

Position type: Forschungspraxis/Internship, possible thesis extension. Application deadline: February 20, 2024

Students are expected to study and understand the physical and mathematical representations of the developed concepts; to apply and gain an understanding of multi-DoF manipulator systems, their control, and classifications; and to develop a codebase for their representation using state-of-the-art simulation frameworks. The work will be foundational for further research, so the student is expected to follow best coding practices and document their work.

The student is expected to work on the simulation of the BSA (bi-stiffness actuation) concept. For simplification, one of the modes of BSA can be modeled as a series elastic actuator (SEA). The first step would be to modify the rigid robot representation so that it includes elasticity in the joints (modeled as an SEA). Implementation details can be found in [2] and the project code in [3] (it is not necessary to use the same framework for simulation and dynamics). Further, the dynamics should be extended to handle the other modes of BSA as well as impulsive switches between them.

The result should be a usable codebase with a minimal reproducible example of a manipulator executing a throwing maneuver that exploits elastic elements and impulsive mode switches. The simulation will be verified against an already-developed Matlab implementation.
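A minimal sketch of the SEA modeling step, under assumed (not identified) parameters: motor and link inertias coupled by a torsional spring, integrated with semi-implicit Euler. The real BSA model adds mode switching on top of this two-mass structure.

```python
# Series elastic actuator (SEA) sketch: motor inertia B and link inertia M
# coupled by a torsional spring K, with viscous damping d on both sides.
# All parameter values are illustrative, not identified from hardware.

def simulate_sea(tau_m, steps=5000, dt=1e-3,
                 B=0.1, M=1.0, K=50.0, d=0.5):
    theta = dtheta = q = dq = 0.0                 # motor / link states
    for _ in range(steps):
        tau_s = K * (theta - q)                   # elastic (spring) torque
        ddtheta = (tau_m - tau_s - d * dtheta) / B  # motor-side dynamics
        ddq = (tau_s - d * dq) / M                  # link-side dynamics
        dtheta += dt * ddtheta                    # semi-implicit Euler:
        dq += dt * ddq                            # update velocities first,
        theta += dt * dtheta                      # then positions
        q += dt * dq
    return theta, q

theta, q = simulate_sea(tau_m=1.0)
print(theta - q)  # steady-state spring deflection
```

With a constant motor torque, the spring deflection settles near d·v/K (where v is the common steady-state velocity), which makes a quick sanity check for the integrator before extending it to the switched BSA modes.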

For a more detailed description, check the PDF attachment (to open it, click on the picture above).

Requirements:

  • Knowledge of Matlab, C++, Python

  • Working skills in Ubuntu operating system

  • Familiarity with ROS

  • Robotics (forward and inverse dynamics and kinematics)

  • Proficiency in English C1, reading academic papers 

  • Plus are:

    • Knowledge of working with Gazebo/MuJoCo

    • Familiarity with GIT

    • Design patterns for coding

    • Familiarity with Docker

    • Googletest (or other testing framework)

What you will gain:

  • Hands-on understanding of robot manipulators

  • Insights into new developments of elastic joints

  • Proficiency with ROS and MuJoCo

  • Being part of our researcher community

To apply, send your CV and a short motivation to the Supervisor (with the Senior Supervisor in cc).

Supervisor

M.Sc. Vasilije Rakcevic

vasilije.rakcevic@tum.de      

 

Senior Supervisor

Dr.-Ing. Abdalla Swikir

abdalla.swikir@tum.de 

Mechatronics

Position type: Forschungspraxis/Internship, possible thesis extension. Application deadline: February 20, 2024

Brushless motors are growing in popularity for robotics applications. In particular, due to their high power density, these motors can be used with smaller gear ratios while still meeting torque and speed requirements. For example, a key to the MIT mini cheetah's success was the adoption of BLDC motors within the Proprioceptive Actuator concept [1].

The task will be to understand the physical properties of BLDC actuators and to describe them mathematically. You will design test scenarios and program their control (including the absorber side, i.e., how the load will behave, alongside the motor being tested); collect, visualise, and analyse data from experiments (plotting power, efficiency, etc.); and draw conclusions about the relation between the physical properties of different actuators (high-level design choices such as inrunner vs. outrunner, number of poles, control algorithm, etc.) and the collected results. It is worth mentioning that you will not start from scratch: you will be supported by in-house developed solutions for BLDC control, an available testbed, etc.
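The data-analysis part can be illustrated with a small sketch: compute electrical input power, mechanical output power, and efficiency from a logged test-bench operating point. The sample values are invented for illustration.

```python
# Sketch of a per-sample efficiency computation for a motor test bench.

def efficiency(voltage, current, torque, omega):
    """Mechanical output power over electrical input power."""
    p_elec = voltage * current   # W
    p_mech = torque * omega      # W  (N*m * rad/s)
    return p_mech / p_elec if p_elec > 0 else 0.0

# one invented logged operating point: 24 V, 2 A, 0.12 N*m at 300 rad/s
eta = efficiency(24.0, 2.0, 0.12, 300.0)
print(round(eta, 3))  # 0.75
```

Sweeping such operating points over speed and load, then plotting the resulting efficiency map, is the core of the comparison between actuator designs described above.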

Please check the attached PDF for a more detailed description (to open it, click on the picture above).

What you will gain:

  • Hands-on experience and in-depth understanding of Brushless Motors, and their control
  • Visualising and analysing the data
  • Best practices for Embedded software development
  • Experience building, prototyping, 3d Printing
  • Working with datasheets and documentation of various devices
  • Hacking electronic signals (via oscilloscope, etc.)
  • Insights in our System Development and access to our community

Requirements from candidates:

  • Knowledge of C, Matlab
  • Working skills in Ubuntu operating system
  • Understanding how Motors work
  • Basics in Electronics and Mechanics
  • Proficiency in English C1, reading academic papers
  • Plus are: 
    • Familiarity with GIT
    • Embedded software development
    • Robotics

To apply, send your CV and a short motivation to the Supervisors (with the Senior Supervisor in cc).

Supervisors

M.Sc. Vasilije Rakcevic

vasilije.rakcevic@tum.de        

M.Sc. Edmundo Pozo Fortunić

edmundo.pozo@tum.de 

Senior Supervisor

Dr.-Ing. Abdalla Swikir

abdalla.swikir@tum.de 

 

[1] P. M. Wensing, A. Wang, S. Seok, D. Otten, J. Lang and S. Kim, "Proprioceptive Actuator Design in the MIT Cheetah: Impact Mitigation and High-Bandwidth Physical Interaction for Dynamic Legged Robots," in IEEE Transactions on Robotics, vol. 33, no. 3, pp. 509-522, June 2017, doi: 10.1109/TRO.2016.2640183.

Position type: Forschungspraxis/Internship, possible thesis extension. Application deadline: February 20, 2024

Brushless motors are growing in popularity for robotics applications. They are particularly interesting due to their power density and availability. A good example of their capabilities is the MIT mini cheetah's success with the Proprioceptive Actuator concept [1]. There, leveraging a low gear ratio, back-drivability, and high torque (power) density, the designers were able to develop an actuator powerful and stable enough even for acrobatic maneuvers.

We are working on our own solutions for BLDC actuation. For that purpose, we have developed controllers that are at the heart of all recent hardware developments [2]. We are looking to enhance them and integrate them better with other projects.

Please check the attached PDF for a more detailed description (to open it, click on the picture above).

What you will gain:

  • Hands-on experience and in-depth understanding of IMU
  • Understanding Motor Control and various aspects of DC motors
  • Best practices for Embedded software development
  • Working with datasheets and documentation of various devices
  • Hacking electronic signals (via oscilloscope, etc.)
  • Insights in our System Development and access to our community

Requirements from candidates:

  • Knowledge of C
  • Basics of Microcontroller programming
  • Basics in Electronics and Mechanics
  • Proficiency in English C1, reading academic papers
  • Plus are: 
    • Arduino programming
    • Familiarity with GIT

We welcome initiative and always aim to support new ideas. This internship is a great opportunity to get familiar with our work and to gain hands-on knowledge in embedded system development.

To apply, send your CV and a short motivation to the Supervisors (with the Senior Supervisor in cc).

Supervisor

M.Sc. Vasilije Rakcevic

vasilije.rakcevic@tum.de        

M.Sc. Edmundo Pozo Fortunić

edmundo.pozo@tum.de 

 

Senior Supervisor

Dr.-Ing. Abdalla Swikir

abdalla.swikir@tum.de  

[1] P. M. Wensing, A. Wang, S. Seok, D. Otten, J. Lang and S. Kim, "Proprioceptive Actuator Design in the MIT Cheetah: Impact Mitigation and High-Bandwidth Physical Interaction for Dynamic Legged Robots," in IEEE Transactions on Robotics, vol. 33, no. 3, pp. 509-522, June 2017, doi: 10.1109/TRO.2016.2640183.

[2] Fortunić, E. P., Yildirim, M. C., Ossadnik, D., Swikir, A., Abdolshah, S., & Haddadin, S. (2023). Optimally Controlling the Timing of Energy Transfer in Elastic Joints: Experimental Validation of the Bi-Stiffness Actuation Concept. arXiv [Eess.SY]. Retrieved from http://arxiv.org/abs/2309.07873

 

 

Other Categories

We are looking for a working student for a 16-20 hours/week position as soon as possible:

The Munich Institute of Robotics and Machine Intelligence (MIRMI) is transforming into an internationally leading science and technology center for machine intelligence, integrating robotics, artificial intelligence, and perception. More than 70 professors from various TUM Schools cooperate within the framework of MIRMI. With its interdisciplinary orientation at the interfaces of engineering, natural sciences, IT, social sciences, and humanities, MIRMI is a central component in developing embodied AI and robotics at the Technical University of Munich and Bavaria. The MIRMI communications team actively promotes research, teaching activities, and events of the MIRMI community to the broader national and international media, the TUM community, and the public. The team handles internal and external communications and publishes news articles, features, photographs, and videos through our channels (websites, newsletters, press releases, and social media) about diverse groups, from young researchers and entrepreneurs to international thought leaders from industry, politics, and academia.

We are looking for a new team member to support us with the following tasks:

  • updating information on crucial communication channels such as websites and newsletters (CMS handling: Typo3 and WordPress);
  • building up reports and supporting the analysis of media data;
  • supporting content management for social media;
  • supporting the organization of events and
  • administrative tasks related to communication and community management.

What we expect:

  • Experience/knowledge of handling Typo3 or WordPress is an advantage;
  • experience with photography and video production;
  • fluency in written and spoken English and good German;
  • strong communication and organizational skills, along with creativity, flexibility, strategic thinking, practical implementation skills, and initiative; and
  • interest in the application of artificial intelligence and robotics in educational technology settings.

What we offer:

  • Science communication – You have direct contact with state-of-the-art research and teaching on robotics and artificial intelligence at a top research institute in the world;
  • Start-up-like working mode – You work in a young, demanding, dynamic, and internationally influenced academic environment.
  • Unique network – You are integrated into a network of excellent research and start-up institutes: the Technical University of Munich (TUM) / MIRMI, UnternehmerTUM, and the TUM Venture Lab Robotics & AI.
  • International Community – You work directly with an international team with a hands-on mentality in a rapidly developing scientific field.

 

Please email your complete application documents (CV, academic/school certificates, job references, motivation letter) to communications@mirmi.tum.de quoting “Working Student for Communications” in the email subject line. The position will be filled as soon as possible, and only shortlisted candidates will be notified. Preference will be given to applications received before 26 February 2023.

TUM has been pursuing the strategic goal of substantially increasing the diversity of its staff. As an equal opportunity and affirmative action employer, TUM explicitly encourages nominations of and applications from women as well as from all others who would bring additional diversity dimensions to the university’s research and teaching strategies. Preference will be given to disabled candidates with equal qualifications. International candidates are highly encouraged to apply.

Technische Universität München
Munich Institute of Robotics and Machine Intelligence
Georg-Brauchle-Ring 60-62,
80992 München
communications@mirmi.tum.de

Note on data protection:
As part of your application for a position at the Technical University of Munich (TUM), you submit personal data. Please note our data protection information pursuant to Art. 13 of the General Data Protection Regulation (GDPR) on the collection and processing of personal data in the context of your application. By submitting your application, you confirm that you have taken note of TUM's data protection information.

Contact: communications@mirmi.tum.de

You can find further research opportunities at the chairs of our Principal Investigators!