Careers and Jobs
For Graduates and Professionals
Projects and Jobs for Students
Robot Motion Planning
The goal of this 3-month internship / 6-month thesis is to develop a safe and time-efficient motion planner. As robots and humans increasingly share the same workspace, the development of safe motion plans becomes paramount. For real-world applications, however, it is critical that safety is achieved without compromising performance. The computation of safe, time-efficient trajectories usually requires rather complex, often decoupled planning and optimization methods, which degrades nominal performance. Prior work has already addressed the problem with a graph-search-based scheme that solves it efficiently. However, it is hard to design a single heuristic function that captures the complexities of every domain. The current goal is to integrate multiple constraints as heuristics, following the multi-heuristic paradigm. The final goal is to scale up to a redundant robot system performing human-centred manipulation planning.
Related Literature: S*: On Safe and Time Efficient Robot Motion Planning, Multi-Heuristic A*
The main tasks would be (in close collaboration):
- Understand the basics of Multi-Heuristic A*;
- Develop a simple prototype in Matlab/C++ with a 7-DoF robot;
- Integrate the planning with a simulator;
- Come up with scenarios to test the overall system;
- Support with existing implementations.
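The Multi-Heuristic A* paradigm mentioned above maintains several priority queues, one per heuristic, so that informative but inadmissible heuristics can guide the search alongside an anchor heuristic. As a warm-up, the idea can be sketched on a 2-D grid; this is a simplified round-robin variant for illustration only, not the full MHA* algorithm from the literature and not the institute's implementation:

```python
import heapq

def mha_star(start, goal, neighbors, cost, heuristics, w=1.5):
    """Simplified Multi-Heuristic A* sketch: one open list per heuristic,
    expanded round-robin. heuristics[0] plays the role of the anchor."""
    g = {start: 0.0}
    parent = {start: None}
    opens = [[(w * h(start), start)] for h in heuristics]
    closed = set()
    while any(opens):
        for i in range(len(heuristics)):        # round-robin over the queues
            while opens[i] and opens[i][0][1] in closed:
                heapq.heappop(opens[i])         # drop already-expanded states
            if not opens[i]:
                continue
            _, s = heapq.heappop(opens[i])
            if s == goal:                       # reconstruct the path
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            closed.add(s)
            for n in neighbors(s):
                new_g = g[s] + cost(s, n)
                if new_g < g.get(n, float("inf")):
                    g[n] = new_g
                    parent[n] = s
                    for j, hj in enumerate(heuristics):
                        heapq.heappush(opens[j], (new_g + w * hj(n), n))
    return None

# 4-connected 10x10 grid with two heuristics (Manhattan and Chebyshev)
def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

goal = (7, 7)
h_manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
h_chebyshev = lambda p: max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))
path = mha_star((0, 0), goal, neighbors, lambda a, b: 1.0,
                [h_manhattan, h_chebyshev])
```

In the project, the heuristics would instead encode the different constraints (safety, time efficiency) to be combined.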
Pre-requisites:
- Experience with prototyping in Matlab and C++;
- Basics of Manipulator Kinematics and Dynamics;
- Basic working knowledge of Manipulator Control;
- Willingness to really get familiar with the CoppeliaSim simulator (and other simulators) and its features;
Helpful but not required:
Knowledge of search-based planning, running simple controllers on robots
For more information, please contact:
Riddhiman Laha (riddhiman.laha@tum.de)
The goal of this 3-month internship / 6-month thesis is to benchmark existing and new real-time robot planning algorithms. The platform to be used for testing, both in simulation and real-world experiments, is a humanoid robot with two 7-DoF arms. Prior work has already proposed solutions for reactive planning that scale up to bimanual manipulation planning. However, a thorough evaluation of existing and new algorithms is necessary to understand the advantages and limitations of each approach. To this end, realistic open-source motion planning datasets are to be utilized for generating static and dynamic scenes in order to test the different strategies. Lastly, a thorough analysis using existing and novel metrics is needed in order to rank the different planners.
Related Literature: MotionBenchMaker, Multi-agent Planner, and TrajOpt.
The main tasks would be (in close collaboration):
- Implement and configure the MotionBenchMaker;
- Port existing planning algorithms to the same environment;
- Set up comparison planners such as STOMP, CHOMP, and TrajOpt;
- Come up with easy and hard scenarios to test the planners;
- Integrate tested code to the humanoid planning stack and perform experiments;
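Ranking the planners requires concrete metrics. As a minimal sketch, success rate, planning time, and path length could be aggregated over benchmark runs as below; the helper names are hypothetical, and MotionBenchMaker itself provides far more elaborate tooling:

```python
import math

def path_length(path):
    """Euclidean length of a path given as a list of configurations (tuples)."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def summarize(runs):
    """Aggregate a list of benchmark runs, each a dict with keys
    'success' (bool), 'time' (seconds), and 'path' (list of configurations)."""
    ok = [r for r in runs if r["success"]]
    return {
        "success_rate": len(ok) / len(runs),
        "mean_time": sum(r["time"] for r in ok) / len(ok) if ok else float("nan"),
        "mean_length": sum(path_length(r["path"]) for r in ok) / len(ok) if ok else float("nan"),
    }

# toy example: one successful and one failed planning query
runs = [{"success": True, "time": 0.1, "path": [(0.0, 0.0), (3.0, 4.0)]},
        {"success": False, "time": 1.0, "path": []}]
stats = summarize(runs)
```

Novel metrics (e.g. clearance or smoothness) would slot into `summarize` the same way.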
Pre-requisites:
- Experience with prototyping in Matlab and C++;
- Familiarity with ROS, MoveIt, OMPL;
- Basics of Manipulator Kinematics and Dynamics;
- Basic working knowledge of Manipulator Control;
- Willingness to really get familiar with the MuJoCo simulator and its features;
Helpful but not required:
Knowledge of search-based, optimization-based, and reactive planning, running simple controllers on robots.
For more information, please contact:
Riddhiman Laha (riddhiman.laha@tum.de)
Human-Robot Interaction
For a successful integration of robots into our society, task execution is an important skill with which we are trying to equip robots. A task model for the trajectory of human-object interactions has been developed previously; in this project, we want to improve the grasping and object-manipulation aspect of that model. Some grasps cannot lead to a successful execution and should not be considered during the robot's task execution. For example, when pouring the contents of one cup into another, grasping the cup from the top (by its opening) will not allow the contents to be poured correctly. Also important is the ability to grasp a previously unknown object that is similar in shape to previously seen objects. In addition, for the execution of the grasp, a metric indicating the quality of the grasp is a desirable feature to have.
The system will use a simulation (Vrep/CoppeliaSim) to show a human its exploration of grasps. Upon seeing the simulated exploration, the human provides the system with feedback on whether the seen simulation/alteration of the task still fits the implicit task features (e.g. the feedback could say whether the robot will be able to successfully complete a pouring task based on the selected grasp).
Available modules:
- Visualization of different robot gripper types in Vrep/CoppeliaSim
- Control interface to Vrep/CoppeliaSim from C++ code
- Rudimentary grasp and gripper (code-)models
- Database (not dataset) for storing object information
Tasks may include a few of the below (to be discussed depending on your interest and background):
- Explore available online datasets of object meshes and of 3D object point clouds (e.g. the YCB dataset) -> select the most comprehensive one
- Explore & implement available grasp planners (working with object meshes and point clouds) -> select the best-performing one on the selected dataset + check if the planner outputs a grasp metric
- Take into account different gripper types (e.g. antipodal gripper & human hand gripper)
- Create visualization of the generated grasp points / EE-poses in Vrep
- Explore mesh segmentation approaches in relation to the generated grasps, to reduce the number of grasps generated by the planner
- Evaluate & implement methods for determining shape similarity and correspondence between a detected object shape and a list of mesh models from a database (look at: CLIP, BERT and similar resources)
- Automate correct positioning of the visually-detected objects in simulation
- Generate a visualization framework for the user feedback.
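For the shape-similarity task above, one simple geometric baseline (distinct from the CLIP/BERT embedding route mentioned in the list) is the Chamfer distance between sampled point clouds. A brute-force sketch, assuming clouds given as lists of (x, y, z) tuples:

```python
def chamfer_distance(cloud_a, cloud_b):
    """Symmetric Chamfer distance between two 3-D point clouds.
    Brute force O(n*m); a real system would use a k-d tree for the
    nearest-neighbor queries."""
    def sq(p, q):
        # squared Euclidean distance between two points
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq(p, q) for q in cloud_b) for p in cloud_a) / len(cloud_a)
    b_to_a = sum(min(sq(q, p) for p in cloud_a) for q in cloud_b) / len(cloud_b)
    return a_to_b + b_to_a
```

A small distance suggests a database mesh whose stored grasps could be transferred to the detected object.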
Prerequisites
- Good C++ programming skills
- Good understanding of coordinate frames and 3d transformations
- Willingness to really get familiar with the CoppeliaSim simulator (and other simulators) and its features (especially concerning user-interaction)
Helpful but not required
- Experience with ROS
- Experience with robot manipulators
- Python
Related Literature:
- Grasp Taxonomy: Paper 1 (https://www.csc.kth.se/grasp/taxonomyGRASP.pdf), Paper 2 (https://ieeexplore.ieee.org/document/7243327)
- Grasp Planning via Hand-Object Geometric Fitting (https://link.springer.com/article/10.1007/s00371-016-1333-x)
- Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes (https://arxiv.org/pdf/2103.14127.pdf)
- Automatic Grasp Planning Using Shape Primitives (https://www.cs.columbia.edu/~allen/PAPERS/grasp.plan.ra03.pdf)
- Generating Task-specific Robotic Grasps (https://arxiv.org/pdf/2203.10498v1.pdf)
- Using Geometry to Detect Grasps in 3D Point Clouds (https://arxiv.org/pdf/1501.03100.pdf)
- GraspIt! (https://graspit-simulator.github.io/build/html/grasp_planning.html)
For more information, please contact:
Robot Control
The objective of this 6-month master's thesis is to develop a novel framework for calibrating joint torque sensors of serial manipulators. The (re-)calibration of joint side torque sensors is crucial for safety, particularly as robots and humans increasingly operate in shared environments. Recalibrating an assembled robotic manipulator requires precise knowledge of its kinematic and dynamic model, a task already addressed in previous research regarding inertial parameter identification of serial manipulators. The final goal is to introduce an "off-calibration" score indicating the current calibration status of the arm and the need for recalibration. Additionally, if recalibration is necessary, the novel recalibration framework can be utilized to update sensor parameters.
The main tasks would be:
- Understanding the existing inertial identification framework;
- Literature research on calibration frameworks for robotic manipulators;
- Developing and prototyping the novel framework in simulation;
- Coming up with scenarios to test the overall system;
- Implementation on a real robotic system.
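At its core, recalibrating a joint torque sensor can be posed as fitting sensor parameters against reference torques. A minimal sketch with a linear gain/offset model and a hypothetical "off-calibration" score; the actual framework to be developed will be richer (nonlinear effects, temperature, cross-coupling):

```python
def fit_gain_offset(raw, reference):
    """Least-squares fit of reference = gain * raw + offset for a single
    joint torque sensor (a minimal linear model for illustration)."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, reference))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

def off_calibration_score(raw, reference, gain, offset):
    """RMS residual of the current calibration against reference torques;
    a large value suggests recalibration is needed (hypothetical score)."""
    res = [(gain * x + offset - y) ** 2 for x, y in zip(raw, reference)]
    return (sum(res) / len(res)) ** 0.5
```

The reference torques would come from the identified kinematic/dynamic model of the assembled manipulator mentioned above.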
Pre-requisites:
- Experience with prototyping in Matlab, Python and C++;
- Basics of Manipulator Kinematics and Dynamics;
- Basic working knowledge of Manipulator Control;
Helpful but not required:
- Knowledge of inertial parameter identification, sensor calibration and running simple controllers on robots
For more information, please contact:
Mario Tröbinger (mario.troebinger@tum.de)
Pos-type: Forschungspraxis/Internship, possible thesis extension. New application deadline: March 31, 2024
Students are expected to study and understand the physical and mathematical representations of the developed concepts, to apply and gain an understanding of multi-DoF manipulator systems, their control, and their classifications, and, using state-of-the-art simulation frameworks, to develop a codebase for their representation. The work will be foundational for further research, so the student is expected to follow best coding practices and document their work.
The student is expected to work on the simulation of the bi-stiffness actuation (BSA) concept. For simplification, one of the modes of the BSA can be modeled as a series elastic actuator (SEA). The first step would be to modify the rigid-robot representation so that it includes elasticity in the joints (modeled as SEAs). Implementation details can be found in [2] and the project code in [3] (it is not necessary to use the same framework for simulation and dynamics). Further, the dynamics should be extended to handle the other modes of the BSA as well as impulsive switches between them.
The result should be a usable codebase with a minimal reproducible example of a manipulator executing a throwing maneuver that exploits elastic elements and impulsive mode switches. The simulation will be verified against an already developed Matlab implementation.
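The SEA modeling step can be sketched for a single joint: a link inertia coupled to a motor inertia through a linear spring. A toy simulation with illustrative parameter values (not the framework from [2]/[3]):

```python
def simulate_sea(tau_m, dt=1e-4, steps=20000, M=1.0, B=0.5, K=100.0):
    """Minimal single-joint series-elastic-actuator simulation:
        M * ddq  = K * (theta - q)            (link side)
        B * ddth = tau_m - K * (theta - q)    (motor side)
    integrated with semi-implicit Euler. M, B, K are illustrative values
    (link inertia, motor inertia, spring stiffness)."""
    q = dq = th = dth = 0.0
    for _ in range(steps):
        spring = K * (th - q)          # spring torque on the link
        ddq = spring / M
        ddth = (tau_m - spring) / B
        dq += ddq * dt                 # update velocities first
        dth += ddth * dt
        q += dq * dt                   # then positions (semi-implicit)
        th += dth * dt
    return q, th

# constant motor torque of 1 Nm for 2 s: link and motor both accelerate,
# with a small oscillating spring deflection between them
q, th = simulate_sea(1.0)
```

Extending this toward BSA would add the other stiffness modes and the impulsive switching logic between them.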
For a more detailed description, check the PDF attachment (to open it, click on the picture above).
Requirements:
- Knowledge of Matlab, C++, Python
- Working skills in the Ubuntu operating system
- Familiarity with ROS
- Robotics (forward/inverse dynamics and kinematics)
- Proficiency in English (C1), reading academic papers
A plus:
- Knowledge of working with Gazebo/MuJoCo
- Familiarity with Git
- Design patterns for coding
- Familiarity with Docker
- Googletest (or another testing framework)
What you will gain:
- Hands-on understanding of robot manipulators
- Insights into new developments in elastic joints
- Proficiency with ROS and MuJoCo
- Being part of our researcher community
To apply, you can send your CV and a short motivation letter to the Supervisor (with the Senior Supervisor in cc).
Supervisor
M.Sc. Vasilije Rakcevic
Senior Supervisor
Dr.-Ing. Abdalla Swikir
Internship/Master thesis
The goal of this 3-month internship/6-month thesis is to develop a policy based on diffusion models for multifingered grasping. The platform to be used for testing, both in simulation and real-world experiments, is a 7 DoF arm equipped with a multifingered hand.
Background
The meteoric rise of diffusion models is one of the biggest developments in machine learning of the past several years. Diffusion models can be applied to various tasks in computer vision and natural language processing [1,2]. In recent years, there have been applications of diffusion models in robotics, since they can represent highly expressive multimodal distributions and exhibit well-behaved gradients over the entire space. Urain et al. [3] learn SE(3) diffusion models for 6-DoF grasping, giving rise to a novel framework for joint grasp and motion optimization without needing to decouple grasp selection from trajectory generation. In [4], the authors introduce a new form of robot visuomotor policy, called diffusion policy, that generates behavior via a conditional denoising diffusion process on the robot action space. Wu et al. [5] introduce a pipeline that leverages a mixture-of-experts strategy to learn diverse manipulation policies, followed by a diffusion policy to capture complex action distributions from these experts.
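The denoising diffusion framework of [1] rests on a closed-form forward (noising) process, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps with eps ~ N(0, 1); the learned policy is trained to reverse it. A scalar sketch of that forward step, treating a single action dimension:

```python
import math
import random

def ddpm_forward(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0) for a scalar value x0 using the standard
    DDPM closed form. alpha_bar_t is the cumulative product of (1 - beta)
    over the first t steps. Returns the noised sample and the noise used
    (the quantity a denoiser would be trained to predict)."""
    alpha_bar = 1.0
    for beta in betas[:t]:
        alpha_bar *= 1.0 - beta
    eps = random.gauss(0.0, 1.0)
    x_t = math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps
    return x_t, eps

# linear beta schedule from 1e-4 to 0.02 over 1000 steps, as in [1]
betas = [1e-4 + (0.02 - 1e-4) * i / 999 for i in range(1000)]
```

At t = 0 the sample is the clean action; at t = 1000 it is essentially pure Gaussian noise. A diffusion policy for grasping would apply this per action dimension, conditioned on observations.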
Your tasks:
- Train and test a diffusion policy
- Build a similar environment, including a 7-DoF arm and a multifingered hand
- Train a reinforcement-learning-based policy built on the diffusion policy to fulfill multifingered grasping tasks
- Implement the experiments on the real robots
Requirements:
- Experience or knowledge from related robotics courses
- Knowledge of deep learning
- Knowledge of PyTorch or a similar machine learning library
- Willingness to really get familiar with the MuJoCo simulator and its features
- Knowledge of and experience with Git and Linux is a plus
Helpful but not required:
Knowledge of MoveIt and ROS; experience with robot manipulators and robot hands.
Supervisor:
Prof. Sami Haddadin
Contacts:
Dr. Shuang Li (li.shuang@tum.de)
Dr. Abdeldjallil Naceri (djallil.naceri@tum.de)
Dr. Abdalla Swikir (abdalla.swikir@tum.de)
Location/s: Georg-Brauchle-Ring 60-62, 80992 München
References:
[1] Ho, J., Jain, A. and Abbeel, P., 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33, pp.6840-6851.
[2] Ramesh, A., Dhariwal, P., Nichol, A., Chu, C. and Chen, M., 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1(2), p.3.
[3] Urain, J., Funk, N., Peters, J. and Chalvatzaki, G., 2023. SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 5923-5930. IEEE.
[4] Chi, C., Feng, S., Du, Y., Xu, Z., Cousineau, E., Burchfiel, B. and Song, S., 2023. Diffusion policy: Visuomotor policy learning via action diffusion. Robotics: Science and Systems Conference (RSS). 2023.
[5] Wu, T., Gan, Y., Wu, M., Cheng, J., Yang, Y., Zhu, Y. and Dong, H., 2024. UniDexFPM: Universal Dexterous Functional Pre-grasp Manipulation Via Diffusion Policy. arXiv preprint arXiv:2403.12421.
Mechatronics
Pos-type: Forschungspraxis/Internship, possible thesis extension. New application deadline: March 31, 2024
Brushless motors are growing in popularity for robotics applications. In particular, due to their high power density, these motors can be used with smaller gear ratios to meet torque and speed requirements. For example, key to the MIT Mini Cheetah's success was the adoption of BLDC motors within the proprioceptive actuator concept [1].
The task will be to understand the physical properties of BLDC actuators and to describe them mathematically. You will design test scenarios and program the control for them (including the absorber side, i.e. what the loading will look like, alongside the motor being tested); collect, visualise, and analyse data from experiments (plotting power, efficiency, etc.); and draw conclusions about the relation between the physical properties of different actuators (high-level design choices such as inrunner vs. outrunner, number of poles, control algorithm, etc.) and the collected results. It is worth mentioning that you will not start from scratch: you will be supported by in-house developed solutions for BLDC control, an available testbed, etc.
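A central quantity in the data analysis described above is drive efficiency: mechanical output power over electrical input power at a steady-state test point. A trivial sketch with illustrative numbers:

```python
def motor_efficiency(voltage, current, torque, omega):
    """Efficiency at one steady-state operating point:
    mechanical power out (torque * angular velocity) divided by
    electrical power in (voltage * current). A real analysis would
    average over synchronized samples from the testbed."""
    p_elec = voltage * current   # W
    p_mech = torque * omega      # Nm * rad/s = W
    return p_mech / p_elec

# e.g. 24 V at 5 A in, 0.5 Nm at 150 rad/s out: 75 W out of 120 W in
eta = motor_efficiency(24.0, 5.0, 0.5, 150.0)
```

Plotting `eta` over the torque-speed plane for different actuators is one way to visualise the design-choice comparisons mentioned above.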
Please check the attached PDF for a more detailed description (to open it, click on the picture above).
What you will gain:
- Hands-on experience and in-depth understanding of brushless motors and their control
- Visualising and analysing data
- Best practices for embedded software development
- Experience with building, prototyping, and 3D printing
- Working with datasheets and documentation of various devices
- Hacking electronic signals (via oscilloscope, etc.)
- Insights into our system development and access to our community
Requirements from candidates:
- Knowledge of C, Matlab
- Working skills in the Ubuntu operating system
- Understanding of how motors work
- Basics in electronics and mechanics
- Proficiency in English (C1), reading academic papers
A plus:
- Familiarity with Git
- Embedded software development
- Robotics
To apply, you can send your CV and a short motivation letter to the Supervisors (with the Senior Supervisor in cc).
Supervisors
M.Sc. Vasilije Rakcevic | M.Sc. Edmundo Pozo Fortunić
Senior Supervisor
Dr.-Ing. Abdalla Swikir
[1] P. M. Wensing, A. Wang, S. Seok, D. Otten, J. Lang and S. Kim, "Proprioceptive Actuator Design in the MIT Cheetah: Impact Mitigation and High-Bandwidth Physical Interaction for Dynamic Legged Robots," in IEEE Transactions on Robotics, vol. 33, no. 3, pp. 509-522, June 2017, doi: 10.1109/TRO.2016.2640183.
Pos-type: Forschungspraxis/Internship, possible thesis extension. New application deadline: March 31, 2024
Brushless motors are growing in popularity for robotics applications. They are particularly interesting due to their power density and availability. A good example of their abilities is the MIT Mini Cheetah's success with the proprioceptive actuator concept [1]. There, leveraging a low gear ratio, back-drivability, and high torque (power) density, the developers were able to build an actuator powerful and stable enough even for acrobatic maneuvers.
We are working on our own solutions for BLDC actuation. For that purpose, we have developed controllers that are at the heart of all recent hardware developments [2]. We are looking to enhance them and better integrate them with other projects.
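A useful back-of-the-envelope quantity behind the proprioceptive actuator concept mentioned above is reflected inertia: the rotor inertia seen at the output scales with the square of the gear ratio, which is why low ratios preserve back-drivability and impact tolerance. A one-line sketch with illustrative numbers:

```python
def reflected_inertia(rotor_inertia, gear_ratio):
    """Rotor inertia seen at the actuator output: J_out = N^2 * J_rotor.
    Doubling the gear ratio N quadruples the reflected inertia, so
    low-ratio BLDC drives stay back-drivable. Values are illustrative."""
    return gear_ratio ** 2 * rotor_inertia

# e.g. a 1e-5 kg*m^2 rotor behind a 6:1 vs. a 12:1 reduction
j_low = reflected_inertia(1e-5, 6.0)
j_high = reflected_inertia(1e-5, 12.0)
```

This quadratic scaling is the quantitative reason the Mini Cheetah-style designs in [1] accept higher motor torque instead of higher gearing.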
Please check the attached PDF for a more detailed description (to open it, click on the picture above).
What you will gain:
- Hands-on experience and in-depth understanding of IMUs
- Understanding motor control and various aspects of DC motors
- Best practices for embedded software development
- Working with datasheets and documentation of various devices
- Hacking electronic signals (via oscilloscope, etc.)
- Insights into our system development and access to our community
Requirements from candidates:
- Knowledge of C
- Basics of Microcontroller programming
- Basics in Electronics and Mechanics
- Proficiency in English C1, reading academic papers
A plus:
- Arduino programming
- Familiarity with GIT
We welcome initiative and always aim to support new ideas. This internship is a great opportunity to get familiar with our work and to gain a lot of hands-on knowledge in embedded system development.
To apply, you can send your CV and a short motivation letter to the Supervisors (with the Senior Supervisor in cc).
Supervisors
M.Sc. Vasilije Rakcevic | M.Sc. Edmundo Pozo Fortunić
Senior Supervisor
Dr.-Ing. Abdalla Swikir
[1] P. M. Wensing, A. Wang, S. Seok, D. Otten, J. Lang and S. Kim, "Proprioceptive Actuator Design in the MIT Cheetah: Impact Mitigation and High-Bandwidth Physical Interaction for Dynamic Legged Robots," in IEEE Transactions on Robotics, vol. 33, no. 3, pp. 509-522, June 2017, doi: 10.1109/TRO.2016.2640183.
[2] Fortunić, E. P., Yildirim, M. C., Ossadnik, D., Swikir, A., Abdolshah, S., & Haddadin, S. (2023). Optimally Controlling the Timing of Energy Transfer in Elastic Joints: Experimental Validation of the Bi-Stiffness Actuation Concept. arXiv [Eess.SY]. Retrieved from http://arxiv.org/abs/2309.07873
Student Assistants (HiWi)
Other Categories
Ingenieurpraxis, Forschungspraxis, and Master's Thesis
HiWi
Technical Engineer for Parallel and Distributed Robot Learning System
Contact: Dr. Fan Wu (f.wu@tum.de)
Student projects available (winter semester 2023-24)
Topic 1: Visuo-Tactile Robot Manipulation for Tight Clearance Assembly.
In this work, your research topic will be in 6D pose estimation algorithms and contact-rich manipulation. More specifically:
- Evaluating the performance of some of the latest 6D pose estimation algorithms in various scenarios;
- Integrating visual perception into our original tactile insertion skill framework and software architecture;
- Verifying your algorithm with real robot experiments;
- Assisting with research activities, including experiments and publications;
- Possibility of extending the internship into a master's thesis, with the aim of publishing papers at top-tier robotics conferences.
Contact: Yansong Wu (Yansong.wu(at)tum.de), Dr. Fan Wu (f.wu(at)tum.de)
Topic 2: Synthesize Behavior Trees from Human Demonstrations for Industrial Assembly Tasks
Click here to download description
Topic 3: Large Language Model based Task and Motion Planning
Click here to download description
In this work, your research topic will be in:
- Building a real-world setup with several manipulation tasks;
- Developing a software interface to convert task plans generated by LLMs into executable motion plans or behavior trees based on our developed skill library;
- Experimental validation on a set of manipulation tasks.
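The LLM-to-skill-library interface can be sketched as a dispatcher that maps a parsed task plan onto callable skills. All names below are hypothetical placeholders, not the institute's actual skill library or behavior-tree format:

```python
def execute_plan(plan, skills):
    """Execute a task plan given as a list of (skill_name, params) pairs,
    e.g. parsed from an LLM response, against a dict of skill callables.
    Stops at the first unknown or failed skill and returns a log."""
    log = []
    for name, params in plan:
        if name not in skills:
            log.append((name, "unknown skill"))
            return False, log
        ok = skills[name](**params)          # each skill returns True/False
        log.append((name, "ok" if ok else "failed"))
        if not ok:
            return False, log
    return True, log

# toy skill library: grasping succeeds only for the known object
skills = {
    "move_to": lambda x, y: True,
    "grasp":   lambda obj: obj == "cup",
}
success, log = execute_plan([("move_to", {"x": 0.3, "y": 0.1}),
                             ("grasp", {"obj": "cup"})], skills)
```

A behavior-tree backend would replace the flat loop with sequence/fallback nodes, but the validation problem (unknown skills, malformed parameters) is the same.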
Contact: Yansong Wu (Yansong.wu(at)tum.de), Dr. Fan Wu (f.wu(at)tum.de)
The development of tactile and dexterous end effectors for the surgical robotic system requires expertise in designing and integrating sensing interfaces in a highly compact space. Moreover, it will be important to oversee the general robotic testbed development, including the control and sensing architecture, for seamless bottom-up integration of the systems. The operator requires the crucial sense of touch, with low-magnitude normal and shear forces as well as sensing of multiple contact points. Additionally, the end effectors need to be modular to handle different surgical tools. Furthermore, this sensory information needs to be translated to the operator side and integrated into the overall control and sensing architecture of the robotic system.
At our institute, we have already developed a drive unit for the EndoWrist. It comprises four motors connected to pulleys which, in turn, drive the joints of the EndoWrist through a cable mechanism. Two of these drive units have been attached to two robot arms around a surgical bed, and an endoscope has been attached to a third arm. The robots are inserted through the trocar point into a training module and are registered with respect to each other and the bed. On the surgeon side, we have two Lambda haptic consoles which interact with the surgeon.
Two positions:
Position 1: The research primarily focuses on the kinematic modeling of the available EndoWrist in order to obtain the coupled transformation between the motor space and the joint space. After successful implementation, you will progress to dynamic identification (emphasizing friction modeling in the EndoWrist) as well as external torque estimation through current measurement. Finally, the proposed model will be validated on a testbed.
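The motor-to-joint transformation sought in Position 1 is, to first order, a linear coupling q = C m between motor angles and joint angles. A sketch with a made-up coupling matrix; identifying the real matrix (and its friction and compliance effects) is precisely the project's task:

```python
def motors_to_joints(motor_angles, coupling):
    """Map motor-space angles m to joint-space angles q via q = C * m,
    implemented as a plain matrix-vector product over nested lists.
    The coupling matrix used below is purely illustrative."""
    return [sum(c * m for c, m in zip(row, motor_angles)) for row in coupling]

# hypothetical 4-motor EndoWrist-like coupling: joints 2 and 3 are each
# driven antagonistically by motors 2 and 3 through the cable mechanism
C = [[1.0, 0.0,  0.0, 0.0],
     [0.0, 0.5, -0.5, 0.0],
     [0.0, 0.5,  0.5, 0.0],
     [0.0, 0.0,  0.0, 1.0]]
q = motors_to_joints([0.1, 0.2, 0.4, 0.0], C)
```

External torque estimation would then map motor current through the transpose of this coupling (plus a friction model) to joint torques.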
Position 2: For the other position, we are seeking a highly skilled researcher to improve the existing software architecture of the whole system. This mainly concerns the networking of the robots and haptic consoles, as well as the development of a digital twin of the system in MuJoCo. Good programming skills in ROS and C++/Python, and experience with a simulation environment such as MuJoCo, are desirable for this position.
The following experience and background are desirable:
- Good Matlab skills (position 1)
- Good understanding of coordinate frames and 3D transformations (position 1)
- Good mathematical knowledge, specifically linear algebra (position 1)
- Experience with ROS, Python, C++ (position 2)
- Experience with Maxon motor controllers (positions 1 and 2)
- Creative and independent thinker (positions 1 and 2)
Related Literature:
- Lee, C., Park, Y. H., Yoon, C., Noh, S., Lee, C., Kim, Y., ... & Kim, S. (2015). A grip force model for the da Vinci end-effector to predict a compensation force. Medical & Biological Engineering & Computing, 53, 253-261.
- Kim, S. and Lee, D. Y. (2015). Friction-model-based estimation of interaction force of a surgical robot. 2015 15th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea (South), pp. 1503-1507, doi: 10.1109/ICCAS.2015.7364591.
- Longmore, S. K., Naik, G., & Gargiulo, G. D. (2020). Laparoscopic robotic surgery: Current perspective and future directions. Robotics, 9(2), 42.
- Guadagni, S., Di Franco, G., Gianardi, D., Palmeri, M., Ceccarelli, C., Bianchini, M., ... & Morelli, L. (2018). Control comparison of the new EndoWrist and traditional laparoscopic staplers for anterior rectal resection with the Da Vinci Xi: a case study. Journal of Laparoendoscopic & Advanced Surgical Techniques, 28(12), 1422-1427.
- Abeywardena, S., Yuan, Q., Tzemanaki, A., Psomopoulou, E., Droukas, L., Melhuish, C., & Dogramadzi, S. (2019). Estimation of tool-tissue forces in robot-assisted minimally invasive surgery using neural networks. Frontiers in Robotics and AI, 6, 56.
For more information, please contact:
Dr. Hamid Sadeghian (hamid.sadeghian(at)tum.de)
Mario Tröbinger (mario.troebinger(at)tum.de)
Location:
Forschungszentrum Geriatronik, MIRMI, TUM, Bahnhofstraße 37, 82467 Garmisch-Partenkirchen.
MIRMI, TUM, Georg-Brauchle-Ring 60-62, 80992 München.
Background
Accurate pose estimation of surgical tools is paramount in the field of robotic surgery, helping to increase precision and ultimately improve patient outcomes. Relying solely on forward kinematics often falls short, as it cannot account for various uncertainties such as the cable compliance of EndoWrists, motor backlash, and environmental noise.
In our lab, we have developed a cutting-edge surgical testbed with three Franka Emika robots, equipped with a professional endoscope and EndoWrists. This setup provides a unique platform for tackling real-world challenges in robotic surgery.
To enhance autonomy in robotic surgery, we are looking to develop an innovative pose estimation algorithm utilizing endoscopic images. This thesis opportunity aims to tackle this exciting challenge, potentially making a significant contribution to the future of robotic surgery.
Tasks
- Perform camera calibration for the endoscopes
- Develop a pose estimation algorithm for EndoWrists
- Evaluate the methods on our robotic surgery setup
Prerequisites
- Good Python & C++ programming skills / ROS 2
- Good understanding of state estimation algorithms, e.g. Kalman filter/particle filter
- Good knowledge of computer vision, e.g. camera calibration, feature extraction
- Goal-oriented mentality and motivation about the topic
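The state estimation prerequisite above can be illustrated with a scalar Kalman filter. In the intended pipeline the prediction would come from forward kinematics and the measurement from the endoscopic image, but a 1-D toy version already shows the predict/update structure (this is a sketch, not the project's estimator):

```python
def kalman_1d(z_meas, x0=0.0, p0=1.0, q=0.01, r=0.25):
    """Minimal scalar Kalman filter with a constant-state model.
    q is the process noise variance (model uncertainty, e.g. cable
    compliance), r the measurement noise variance (e.g. vision noise).
    Returns the filtered estimate after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in z_meas:
        # predict: state unchanged, uncertainty grows by the process noise
        p = p + q
        # update: blend prediction and measurement by the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# noisy measurements of a quantity whose true value is about 1.0
est = kalman_1d([1.2, 0.9, 1.1, 1.0, 1.05])
```

The real problem is the multivariate version on SE(3) poses, with the kinematic chain only partially visible, as in [3].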
References
[1] Moccia, Rocco, et al. "Vision-based dynamic virtual fixtures for tools collision avoidance in robotic surgery." IEEE Robotics and Automation Letters 5.2 (2020): 1650-1655.
[2] Staub, Christoph. Micro Endoscope based Fine Manipulation in Robotic Surgery. Diss. Technische Universität München, 2013.
[3] Richter, F., Lu, J., Orosco, R. K., et al., 2021. Robotic tool tracking under partially visible kinematic chain: A unified approach. IEEE Transactions on Robotics, 38(3), pp.1653-1670.
Contact
Zheng Shen (zheng.shen@tum.de)
Dr. Hamid Sadeghian (hamid.sadeghian@tum.de)
Chair of Robotics Science and Systems Intelligence, Munich Institute of Robotics and Machine Intelligence (MIRMI)
The goal of this thesis is to study how vision modulates the planning and execution of movements.
Healthy participants will wear an EEG cap in order to study what happens at the brain level during different grasping tasks (task execution in the light and in the dark, and task imagination).
Good to have:
- Good programming and signal processing skills
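As a small signal-processing warm-up relevant to EEG analysis, the power at a single frequency (e.g. 10 Hz alpha activity) can be estimated with the Goertzel algorithm, without computing a full FFT. An illustrative sketch on a synthetic signal, not the study's actual analysis pipeline:

```python
import math

def goertzel_power(samples, fs, f):
    """Power of the DFT bin nearest frequency f (Hz) via the Goertzel
    algorithm, for a signal sampled at fs (Hz). Handy for checking a
    single band component of one EEG channel."""
    n = len(samples)
    k = round(n * f / fs)                 # nearest DFT bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:                     # second-order recursive filter
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # squared magnitude of the selected bin
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

fs = 250.0                                # a typical EEG sampling rate
t = [i / fs for i in range(250)]          # 1 second of data
sig = [math.sin(2 * math.pi * 10.0 * ti) for ti in t]   # pure 10 Hz tone
p10 = goertzel_power(sig, fs, 10.0)       # strong alpha-band power
p30 = goertzel_power(sig, fs, 30.0)       # essentially zero off-band
```

Comparing such band powers between light, dark, and imagination conditions is one basic way differences in grasp-related brain activity could be quantified.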
Contacts:
- Prof. Laura Busse: busse(at)biologie.uni-muenchen.de
- Dr. Melissa Zavaglia: melissa.zavaglia(at)tum.de
- Dr. Gemma C. Bettelani: gemma.bettelani(at)tum.de
Location/s: Faculty of Biology, LMU Biocenter and Großhaderner Str. 2 Munich, and
MIRMI (TUM), Garching bei München, Carl Zeiss Strasse
References:
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8008384
https://iopscience.iop.org/article/10.1088/1741-2552/aa8911/pdf
You can find further research opportunities at the chairs of our Principal Investigators!
Note on data protection:
As part of your application for a position at the Technical University of Munich (TUM), you submit personal data. Please note our data protection information pursuant to Art. 13 of the General Data Protection Regulation (GDPR) concerning the collection and processing of personal data in the context of your application. By submitting your application, you confirm that you have taken note of TUM's data protection information.