Projects and Jobs for Students

The thesis goal is to study how vision modulates the planning and execution of movements.

 

Healthy participants will wear an EEG cap so that we can study what happens at the brain level during different grasping tasks: task execution in the light, task execution in darkness, and task imagination.
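As an illustration of the kind of analysis involved, here is a minimal Python sketch that compares mu-band (8-13 Hz) EEG power between two conditions, e.g. grasping in the light versus in the dark; the sampling rate, band edges, and epoch layout are assumptions made for the example, not specifications of the thesis.

```python
# Minimal sketch: mu-band power per epoch and channel, assuming the EEG
# has already been segmented into equal-length epochs. The sampling rate
# and band limits below are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 250           # sampling rate in Hz (assumption)
MU_BAND = (8, 13)  # mu rhythm, typically attenuated during (imagined) movement

def band_power(epochs: np.ndarray, fs: int, band: tuple) -> np.ndarray:
    """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)

# Hypothetical usage with two recorded conditions:
#   light = band_power(epochs_light, FS, MU_BAND)
#   dark  = band_power(epochs_dark, FS, MU_BAND)
#   print(light.mean(axis=0) - dark.mean(axis=0))   # per-channel contrast
```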

Good to have:

  • Good programming and signal processing skills
     

Contacts:

Location/s: Faculty of Biology, LMU Biocenter, Großhaderner Str. 2, Munich, and MIRMI (TUM), Carl Zeiss Strasse, Garching bei München

 

References:

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8008384

https://iopscience.iop.org/article/10.1088/1741-2552/aa8911/pdf

We are looking for a HiWi candidate to support code maintenance, testing, evaluation, and git organization for multiple robot manipulation and control algorithms. In this position, you will be immersed in the world of robotic manipulation. You will have access to a diverse set of state-of-the-art manipulation planning and control algorithms, and you will have the opportunity to see these algorithms working on real robots (multiple Franka Emika research robots).
The main requirement is to be curious and passionate about robotics research (and to be an excellent programmer).
You will have close guidance, yet you are expected to work hard.

Main tasks involve:

  • Learning how to execute some of the algorithms on the Franka Emika robot (with support);
  • Learning details about libfranka and franka_ros;
  • Running experiments and collecting data;
  • Cleaning some code and organizing git repositories for some of the available algorithms;
  • Testing, releasing, and deploying;
  • Implementing simple interfaces for using the algorithms;
  • Potential integration of algorithms;
  • Transferring code from MATLAB to C++;
  • Implementing interfaces between simulators and robots;
  • Dockerizing some established robot demos; 

Pre-requisites:

  • Excellent C++ skills;
  • Excellent git skills;

Desired:

  • Experience with robots;
  • Experience with ROS;
  • Matlab skills;
  • Python skills;

Helpful but not required:

  • Experience with Franka Emika robots;

 

Depending on your interests, the position has the potential to become a BSc or MSc thesis.


Position: Minimum 3 months (long-term desired)
Hours per week/Salary: to be defined depending on availability and experience

For more information, please contact:

Dr Luis Figueredo 

 

 

We are looking for a HiWi candidate to support the organization of the 2023 International Workshop on Human-Friendly Robotics (HFR), taking place in Munich on Sept. 20-21. The position is available immediately!

The main tasks of the HiWi are:

  • Creation of the website for the workshop on human-friendly robotics;
  • Maintenance and updates of the website; 
  • Potentially (if desired), communication with world-renowned speakers;
  • Support with media advertising;  
  • Support with folder/documents digital organization. 

Pre-requisites:
Experience with web design

Helpful but not required:
German language skills


Position: 4 to 8 weeks 
Hours per week/Salary: to be defined depending on availability and experience

For more information, please contact:

Dr Luis Figueredo

 

Human-Robot Interaction

At the Institute for Robotics and Systems Intelligence of TUM, a safety lab is being built whose main focus is physical human-robot interaction. Moreover, several algorithms are being developed to test and improve the accuracy and repeatability of robots of different types.

Problem formulation:
Driven by the demands of smart manufacturing and flexible automation, collaborative/tactile robots are increasingly applied in industry. However, these robots may suffer from inaccuracies in measurement and modeling. There are multiple ways to address these errors. In this thesis, we try to improve the robot model, in particular the DH (Denavit-Hartenberg) parameters, so that the robot's motion accuracy and repeatability can improve. This requires an external, high-precision device; the Laser Tracker X from Faro is therefore used as the reference.

The objective of this master's thesis is to investigate methods for calibrating the robot's DH parameters, using the laser tracker as the reference. Moreover, the best-performing calibration method is to be documented in detail, so that anyone can bring a robot into our testbed and calibrate it with minimum effort.

The general structure and main contents of this master's thesis are:

  1. Literature research on available methods for calibrating robot DH parameters (e.g., see references [1-3]).
  2. Testing and comparing different calibration methods (first on the Franka arm and later on other robots).
  3. Preparing a well-written manual for future DH parameter calibration of a given robot.
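At its core, such a calibration can be posed as a nonlinear least-squares problem: find the corrections to the nominal DH parameters that best explain end-effector positions measured by the external reference. The Python sketch below illustrates this formulation under simplifying assumptions (standard DH convention, position-only residuals, known base and tool frames); `dh_nominal`, `joint_data`, and `measured` are placeholder names.

```python
# Minimal sketch of DH calibration as nonlinear least squares. A real
# identification would also estimate the base and tool transforms and
# weight the residuals; everything here is deliberately simplified.
import numpy as np
from scipy.optimize import least_squares

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform of one link, standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def fk_position(dh, q):
    """End-effector position for joint vector q; dh is an (n, 4) array."""
    T = np.eye(4)
    for (a, alpha, d, theta0), qi in zip(dh, q):
        T = T @ dh_transform(a, alpha, d, theta0 + qi)
    return T[:3, 3]

def residuals(delta, dh_nominal, joint_data, measured):
    """Stacked position errors over all recorded configurations."""
    dh = dh_nominal + delta.reshape(dh_nominal.shape)
    pred = np.array([fk_position(dh, q) for q in joint_data])
    return (pred - measured).ravel()

# joint_data: (N, n_joints) recorded joint configurations
# measured:   (N, 3) tracker positions expressed in the robot base frame
#   sol = least_squares(residuals, np.zeros(dh_nominal.size),
#                       args=(dh_nominal, joint_data, measured))
#   dh_calibrated = dh_nominal + sol.x.reshape(dh_nominal.shape)
```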

What you gain:
RSI offers a multidisciplinary lab in Garching that is unique in several respects. By doing your thesis here,
1. You get to work in a friendly lab with people from different backgrounds, busy with a variety of robotics projects.
2. You have the opportunity to work with and learn about different robotic arms and their corresponding interfaces.
3. You will also learn to work with high-tech sensors and measurement devices.

Prerequisites:
  • Basic familiarity with Ubuntu and real-time kernels
  • Good C++ programming skills as well as CMake
  • Basic knowledge of robot kinematics

References:
1. Chen, T., Lin, J., Wu, D. and Wu, H., 2021. Research of calibration method for industrial robot based on error model of position. Applied Sciences, 11(3), p.1287.

2. Švaco, M., Šekoranja, B., Šuligoj, F. and Jerbić, B., 2014. Calibration of an industrial robot using a stereo vision system. Procedia Engineering, 69, pp.459-463.

3. Lee, J.W., Park, G.T., Shin, J.S. and Woo, J.W., 2017, October. Industrial robot calibration method using Denavit-Hartenberg parameters. In 2017 17th International Conference on Control, Automation and Systems (ICCAS) (pp. 1834-1837). IEEE.

More information:
Please send your CV and transcript of grades to Ali Baradaran (ali.baradaran at tum.de).

For a successful integration of robots into our society, task execution is an important skill with which we are trying to equip robots. Ideally, the robot would learn to perform a task by observing a single demonstration of a human interacting with an object while performing the task. There are, however, many degrees of freedom in the object trajectory alone that need to be explored to represent the task constraints and features comprehensively. Showing the human alterations of the observed object-interaction trajectory and asking for feedback on whether each alteration still fulfills the intrinsic constraints/features of the task becomes increasingly tedious, given the large amount of variation that needs to be explored.

This project aims to counter the above and speed up the task-modelling process using the information available from multiple demonstrations of the same task. All demonstrations respect the intrinsic features of the task, and by exhibiting variation (e.g. in object positioning, execution velocity, grasping alternatives, etc.), they allow the system to extract the common features of the task much faster.

In addition, multiple executions of the same task make it possible to explore a new task feature: the ordering (sequencing) of the task segments. A task is hardly ever a linear sequence of actions; there can be alternative ways to perform it, and there can be concurrent actions in the execution (keyword: human-robot collaboration), which suggests that a DAG (directed acyclic graph) is the right data structure to represent the sequence of actions within the task. From the multiple demonstrations, and with the DAG as the action-sequence representation, we will obtain a more general task model.
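To make the intended representation concrete, the Python sketch below stores task segments in a DAG of precedence constraints and extracts one valid linear execution order; the class design and the segment names in the example are purely illustrative.

```python
# Minimal sketch: a DAG over task segments. An edge means "must happen
# before"; segments with no path between them may be executed in either
# order or concurrently.
from collections import defaultdict, deque

class TaskDAG:
    def __init__(self):
        self.succ = defaultdict(set)   # segment -> segments it must precede
        self.indeg = defaultdict(int)  # segment -> number of predecessors

    def add_order(self, before: str, after: str) -> None:
        if after not in self.succ[before]:
            self.succ[before].add(after)
            self.indeg[after] += 1
            self.indeg.setdefault(before, 0)

    def one_execution_order(self) -> list:
        """One valid linearization of the task (Kahn's algorithm)."""
        indeg = dict(self.indeg)
        ready = deque(s for s, d in indeg.items() if d == 0)
        order = []
        while ready:
            seg = ready.popleft()
            order.append(seg)
            for nxt in self.succ[seg]:
                indeg[nxt] -= 1
                if indeg[nxt] == 0:
                    ready.append(nxt)
        return order

# Hypothetical pouring task with two independent branches:
#   dag = TaskDAG()
#   dag.add_order("grasp_cup", "move_to_glass")
#   dag.add_order("move_to_glass", "pour")
#   dag.add_order("grasp_glass", "pour")   # parallel to the cup branch
#   print(dag.one_execution_order())
```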

Available modules:

  • Extraction of visual perceptions from RGBD demonstrations
  • Processing of the visual perceptions (extracting object pose & time)
  • Visualization of trajectory alterations in Vrep/CoppeliaSim
  • Trajectory alterations to check for individual task features
  • Control interface to Vrep/CoppeliaSim from C++ code

Tasks may include a few of the below (to be discussed depending on your interest and background)

  • Normalization of the different demonstrations (time-wise for task segmentation; other normalization for acc. and vel. features)
  • Normalization: e.g. DTW (dynamic time warping)
  • Define a metric for trajectory matching: e.g. DFD (discrete Fréchet distance); minimal sketches of both metrics follow after this list
  • Explore automatic feature extraction from multiple demonstrations
  • Literature: Automatic extraction of constraints in manipulation tasks for autonomy and interaction
  • Develop & implement checks to verify which task features are active from the multiple demonstrations
  • Object (Via-Point) Matching
  • Check volume around objects to verify if the object is a via-point or not
  • Create DAG of task segments as representation
  • Automate correct positioning of the visually-detected objects in simulation
  • Create in the simulation the possibility to display variance rings (confidence intervals; with transparency) around the to-be-followed path
  • Create scripts in CoppeliaSim to allow modifying at certain points along the trajectory the size of the confidence intervals
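For the two metrics named above (DTW and DFD), minimal reference implementations are sketched below. Both are plain O(NM) dynamic programs over trajectories given as (N, d) NumPy arrays; real demonstration data would likely need resampling or windowing first.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between trajectories a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def discrete_frechet(a: np.ndarray, b: np.ndarray) -> float:
    """Discrete Fréchet ("dog-leash") distance between a and b."""
    n, m = len(a), len(b)
    F = np.empty((n, m))
    F[0, 0] = np.linalg.norm(a[0] - b[0])
    for i in range(1, n):
        F[i, 0] = max(F[i - 1, 0], np.linalg.norm(a[i] - b[0]))
    for j in range(1, m):
        F[0, j] = max(F[0, j - 1], np.linalg.norm(a[0] - b[j]))
    for i in range(1, n):
        for j in range(1, m):
            reach = min(F[i - 1, j], F[i - 1, j - 1], F[i, j - 1])
            F[i, j] = max(reach, np.linalg.norm(a[i] - b[j]))
    return F[n - 1, m - 1]
```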

Prerequisites

  • Good C++ programming skills (incl. data structures and algorithms; in particular graph theory)
  • Good understanding of coordinate frames and 3d transformations
  • Willingness to really get familiar with the CoppeliaSim simulator (and other simulators) and its features (especially concerning user-interaction)

Helpful but not required

  • Experience with ROS
  • Experience with robot manipulators
  • Python

Related Literature:

Automatic extraction of constraints in manipulation tasks for autonomy and interaction (https://infoscience.epfl.ch/record/255556/files/EPFL_TH8059.pdf)

 

For more information, please contact:

Dr Luis Figueredo

Andrei Costinescu

 

For a successful integration of robots into our society, task execution is an important skill with which we are trying to equip robots. Besides the task model for the trajectory of human-object interactions, which has been developed previously, grasping and manipulating objects is the aspect of our task model that we want to improve in this project. Some grasps may not lead to a successful execution and should not be considered during the robot's task execution. For example, when pouring the contents of one cup into another, grasping the cup from the top (by the cup's opening) will not allow the contents to be poured correctly. Also important is the ability to grasp a previously unknown object that is similar to previously seen objects or has a similar shape to another. In addition, for the execution of the grasp, a metric indicating the quality of the grasp is a desirable feature to have.

The system will use a simulation (Vrep/CoppeliaSim) to show a human its exploration of grasps. Upon seeing the simulated exploration, the human provides the system with feedback on whether the shown simulation/alteration of the task still fits the implicit task features (e.g. whether the robot will be able to successfully complete a pouring task with the selected grasp).
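To give a flavor of what a grasp-quality metric can look like, here is a minimal Python sketch of an antipodal force-closure check for a two-finger gripper; the contact points, inward surface normals, and friction coefficient are placeholder inputs, and the planners in the literature list below use considerably richer metrics.

```python
# Minimal sketch: score how antipodal a two-contact grasp is under a
# Coulomb friction cone. 1.0 = perfectly opposed contacts, 0.0 = the
# grasp cannot resist forces along the grasp axis.
import numpy as np

def antipodal_quality(p1, p2, n1, n2, friction_coeff=0.5):
    """p1, p2: contact points; n1, n2: inward surface normals."""
    p1, p2, n1, n2 = (np.asarray(v, dtype=float) for v in (p1, p2, n1, n2))
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)   # line between the contacts
    half_angle = np.arctan(friction_coeff)       # friction cone half-angle
    # Each inward normal must lie inside the friction cone around the axis.
    ang1 = np.arccos(np.clip(np.dot(n1 / np.linalg.norm(n1), axis), -1, 1))
    ang2 = np.arccos(np.clip(np.dot(n2 / np.linalg.norm(n2), -axis), -1, 1))
    return max(0.0, 1.0 - max(ang1, ang2) / half_angle)

# Contacts on opposite side walls of a cup score high; two contacts on
# the same face score near zero.
```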

Available modules:

  • Visualization of different robot gripper types in Vrep/CoppeliaSim
  • Control interface to Vrep/CoppeliaSim from C++ code
  • Rudimentary grasp and gripper (code-)models
  • Database (not dataset) for storing object information

Tasks may include a few of the below (to be discussed depending on your interest and background)

  • Explore available, online datasets of object meshes and of 3d object point clouds (e.g. YCB dataset) -> select most comprehensive
  • Explore & implement available grasp planners (working with object meshes and point clouds) -> select best performing one on the selected dataset + check if the planner outputs a grasp metric
  • Take into account different gripper types (e.g. antipodal gripper & human-hand gripper)
  • Create visualization of the generated grasp points / EE-poses in Vrep
  • Explore mesh segmentation approaches in relation to the generated grasps to condense & compress the amount of generated grasps from the planner
  • Evaluate & implement methods for determining shape similarity and correspondence between a detected object shape and a list of mesh models from a database (look at: CLIP, BERT and similar resources)
  • Automate correct positioning of the visually-detected objects in simulation
  • Generate a visualization framework for the user feedback.

Prerequisites

  • Good C++ programming skills
  • Good understanding of coordinate frames and 3d transformations
  • Willingness to really get familiar with the CoppeliaSim simulator (and other simulators) and its features (especially concerning user-interaction)

Helpful but not required
  • Experience with ROS
  • Experience with robot manipulators
  • Python

Related Literature:

  • Grasp Taxonomy: Paper 1 (https://www.csc.kth.se/grasp/taxonomyGRASP.pdf), Paper 2 (https://ieeexplore.ieee.org/document/7243327)
  • Grasp Planning via Hand-Object Geometric Fitting (https://link.springer.com/article/10.1007/s00371-016-1333-x)
  • Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes (https://arxiv.org/pdf/2103.14127.pdf)
  • Automatic Grasp Planning Using Shape Primitives (https://www.cs.columbia.edu/~allen/PAPERS/grasp.plan.ra03.pdf)
  • Generating Task-specific Robotic Grasps (https://arxiv.org/pdf/2203.10498v1.pdf)
  • Using Geometry to Detect Grasps in 3D Point Clouds (https://arxiv.org/pdf/1501.03100.pdf)
  • GraspIt! (https://graspit-simulator.github.io/build/html/grasp_planning.html)

For more information, please contact:

Dr Luis Figueredo

Andrei Costinescu

Humans have a fascinating built-in embodied intelligence that is indispensable for survival. A central ability is the protective reflex, such as the nociceptive withdrawal reflex (NWR), defined as the automatic retraction of an extremity from a noxious stimulus such as heat or pain. Such noxious stimuli are detected by a wide spectrum of receptors in the skin, which transmit this information to the spinal cord and brain in the form of electrical impulses (Fig. 1). Although some works have focused on the NWR elicited by electrocutaneous stimulation at the upper-limb level, to the best of our knowledge no study has systematically investigated this protective reflex when triggered by noxious mechanical stimuli. It is entirely unknown how a stimulus's physical characteristics, such as shape, speed, or temperature, modulate the kinematic and dynamic reflex responses of the neuromusculoskeletal system. Moreover, despite the already astonishing performance of biological systems, there may be limits and potential weaknesses which, if well understood, could be addressed in a robotic implementation. Indeed, studying how the reflex mechanism of the human finger works will help us implement human-like reflexes on a robotic finger in the future. And since robots have to interact with humans (as stated by Asimov's third law), it is good to give them abilities similar to those of humans.

 

In this work, you will have the great opportunity to build a setup for studying the reflex mechanism of the human finger (see Fig. 2). You will collect data from the muscles of the human arm using EMG electrodes and record finger-joint movements using a motion capture system. The stimuli applied to the human finger to elicit a reflex will be conical frustums (of different diameters and temperatures) attached to the end effector of a robotic arm (Fig. 2).
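On the analysis side, a minimal Python sketch of reflex-onset detection in a single EMG channel is shown below; the filter settings, the assumed sampling rate (well above 900 Hz), and the mean-plus-three-standard-deviations threshold are common choices for illustration, not a prescribed protocol.

```python
# Minimal sketch: band-pass and rectify the EMG, smooth it into an
# envelope, then mark the first post-stimulus sample where the envelope
# exceeds baseline mean + 3 SD.
import numpy as np
from scipy.signal import butter, filtfilt

def reflex_onset(emg, fs, stim_idx, baseline_s=0.2):
    """Sample index of reflex onset after the stimulus at stim_idx, or None."""
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, emg)
    lb, la = butter(4, 10, btype="lowpass", fs=fs)
    envelope = filtfilt(lb, la, np.abs(filtered))
    baseline = envelope[stim_idx - int(baseline_s * fs):stim_idx]
    threshold = baseline.mean() + 3 * baseline.std()
    above = np.nonzero(envelope[stim_idx:] > threshold)[0]
    return stim_idx + above[0] if above.size else None

# Reflex latency in milliseconds:
#   (reflex_onset(emg, fs, stim_idx) - stim_idx) / fs * 1000
```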

 

Requirements:

  • Interest in studying human behaviors
  • Interest in working with EMG
  • Basic programming knowledge
  • Good English skills

 

For more info please contact:

Gemma C. Bettelani: gemma.bettelani@tum.de

Christopher Herneth: christopher.herneth@tum.de

Please apply as soon as possible; the first suitable candidate will be selected.

 

Robot Control

Type: Research practice, Master Thesis, Master Internship

Surface electromyography (EMG) is a popular human-machine interface, especially among people with motor disabilities (e.g. amputees), with applications ranging from prosthesis control to controlling a VR/AR avatar. Due to the convoluted nature of EMG, the reliable estimation of multi-degree-of-freedom movement is an open research problem. Decoding movement from EMG signals is usually treated as a classification problem over hand gestures, which leads to non-continuous decoding that may not feel natural in some applications. In recent years there has been growing interest in advanced methods that decompose a multichannel surface EMG signal into motor-neuron discharges, i.e. the "neural" signal that excites the muscles, traveling via the motor neuron from the spinal cord to the muscle fibers.

This project focuses on the development of algorithms that decode multichannel EMG signals in real time, with applications in robot control or the control of a VR avatar. To this end, EMG decomposition algorithms and biomedical signal-processing algorithms will be explored. These algorithms transform the original signal into a space that better encodes the variable of interest (e.g. finger force or trajectory). The output of the algorithm will be used to control a VR avatar or a robotic arm/hand. Depending on the student's interest, the project can be split into tasks corresponding to a Forschungspraxis or a master's thesis; the exact application can also be co-defined with the student.

In this work, you will have the opportunity to work in the state-of-the-art lab facilities of MIRMI, equipped with a wide variety of data-acquisition systems (EEG, EMG, motion capture, etc.), robots, a mechanical/electrical workshop, and more. Being excited and passionate about the research is a fundamental prerequisite.

If you are curious about neural/biomedical engineering and want to work on technology that could help people with motor disabilities, feel free to contact me. A strong computer-science background and knowledge of signal processing are a strong plus for the tasks we are working on.
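To illustrate the kind of processing involved, here is a minimal Python sketch of a conventional multichannel-EMG front end. Note that this is ordinary envelope extraction, not the convolutive blind source separation of the references below, and the filter parameters are typical textbook values rather than project specifications.

```python
# Minimal sketch: per-channel band-pass, 50 Hz notch, then a rectified
# low-pass envelope usable as a continuous control signal.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def emg_envelopes(emg, fs):
    """emg: (n_channels, n_samples) raw EMG -> smoothed envelopes."""
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    x = filtfilt(b, a, emg, axis=-1)
    bn, an = iirnotch(50.0, 30.0, fs=fs)          # mains interference
    x = filtfilt(bn, an, x, axis=-1)
    be, ae = butter(2, 5, btype="lowpass", fs=fs)
    return filtfilt(be, ae, np.abs(x), axis=-1)

# The envelopes (or features derived from them) can then feed a regression
# model that maps muscle activity to, e.g., finger forces or joint angles.
```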

Type: Research practice, Master Thesis, Master Internship (the tasks can be split into different work packages)

Pre-requisites

  • Interest in biomedical/neural engineering
  • Good Python and MATLAB skills
  • Good software engineering skills
  • Knowledge of Digital Signal processing

Not mandatory skills, but beneficial depending on the project tasks

  • Experience with time series, ideally biomedical data (EMG, EEG, etc.)
  • Parallel Programming
  • Optimization
  • Machine Learning
  • Adaptive Filters (e.g. Wiener filters)
  • C++, Rust
  • Virtual Reality app development (Unity, Unreal Engine)
  • Willingness to participate/perform electrophysiological measurements
  • Basic knowledge of human anatomy and physiology (of upper limb)

For more information, please contact: Ioannis Xygonakis (ioannis.xygonakis@tum.de). Application deadline: 01.04.2023

Related literature

[1] Holobar, Ales, and Dario Farina. "Noninvasive neural interfacing with wearable muscle sensors: combining convolutive blind source separation methods and deep learning techniques for neural decoding." _IEEE signal processing magazine_ 38.4 (2021): 103-118.

[2] Negro, Francesco, et al. "Multi-channel intramuscular and surface EMG decomposition by convolutive blind source separation." _Journal of neural engineering_ 13.2 (2016): 026027.

[3] Ison, Mark, et al. "High-density electromyography and motor skill learning for robust long-term control of a 7-DoF robot arm." _IEEE Transactions on Neural Systems and Rehabilitation Engineering_ 24.4 (2015): 424-433.

Type: Ingenieurspraxis, Bachelorthesis, Forschungspraxis, Masterthesis

One core limitation in the development of dexterous artificial hands is the number of degrees of freedom that needs to be implemented. As the human hand has 27 degrees of freedom, an equal number of actuators would be necessary to fully mimic human behavior and grasp types. To overcome this issue, the variable-joint-stiffness concept changes the friction in finger joints that are actuated by only one motor; the differences in joint friction steer the joints toward the desired poses and enable a high variety of grasping synergies in underactuated hands.
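To make the idea concrete, the toy Python model below shows how a single tendon tension combined with per-joint, tunable friction (the magneto-rheological part) selects which joints move; all numbers and names are illustrative, not measurements from the actual hand.

```python
# Toy quasi-static model: one tendon drives several joints; a joint only
# deflects once the tendon torque exceeds its friction torque, so raising
# a joint's friction effectively locks it and reshapes the grasp synergy.
import numpy as np

def joint_deflections(tension, moment_arms, friction_torques, stiffness):
    """Joint deflections [rad] under a single tendon tension [N]."""
    tendon_torque = tension * np.asarray(moment_arms)
    net = np.maximum(tendon_torque - np.asarray(friction_torques), 0.0)
    return net / np.asarray(stiffness)

# Same tension, two friction patterns -> two different finger poses:
#   print(joint_deflections(10.0, [0.01]*3, [0.02, 0.08, 0.02], [0.5]*3))
#   print(joint_deflections(10.0, [0.01]*3, [0.08, 0.02, 0.02], [0.5]*3))
```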

At MIRMI, a dexterous robotic hand with 7 degrees of actuation is under development. To enable further dexterity, we aim to investigate the variable-joint-stiffness approach using miniature magneto-rheological joints. We currently offer various topics in the fields of mechanical engineering, electrical engineering, informatics, and comparable backgrounds. Please contact me for further information on current topics.

Current Research Objectives:

  • Feasibility study of magneto-rheological joints for the variable-joint-stiffness concept with one finger.

Tasks may include:

  • CAD: integration of the magneto-rheological joint into a robotic finger.
  • Microcontroller programming: control of the joint stiffness and encoder reading via the I2C communication interface.
  • Experimental characterization and evaluation of the joint stiffness and motion trajectories.

Type:

Ingenieurspraxis, Bachelorthesis, Forschungspraxis, Masterthesis
(The exact topic will be discussed with the student with respect to their interest)

What we expect from you:

  • Motivation to work in an outstanding team of researchers, contribute to our scientific work, and help build up our laboratory infrastructure
  • Autonomous working habits
  • Knowledge in C Programming and Control
  • Knowledge in CAD design

What we can offer:

  • Highly dynamic scientific environment
  • Working at the interface between human and technology
  • Investigation of real world problems
  • Interdisciplinary topics

Contact:

Please send me an email including:

  • CV
  • Overview of courses with grades (only students with a grade average of 2.5 or better)
  • Short statement of motivation and the topics you are interested in

sonja.gross@tum.de

Please apply before 03.03.2023.

AI-Enabled Lab Automation

Please find the announcement here.


Other Categories

You can find more research opportunities at the Chairs of our Principal Investigators!