Research Fund

Support for basic research in machine learning, computer vision, and robotics

Today’s autonomous systems still lag behind the versatile capabilities of humans in scene perception and object manipulation tasks. This project addresses the research question of how robots can learn to perceive and manipulate objects from visual and tactile feedback in a self-supervised way. Jörg Stückler and his team will develop methods that will allow robots to learn models of their interaction with objects from camera images and tactile measurements. The scientists will investigate the use of learned models for perception and control in several robotic object manipulation tasks.

If the mobile service robots of the future, among them delivery robots and self-driving cars, were able to learn independently from their environments, they would also be capable of adapting to changes in their surroundings and thus of moving about more efficiently. In turn, this would eliminate the need for engineers to manually tune robots to their environments.

This research project addresses the question of how mobile robots can learn their driving capabilities in their environment (i.e. mobility affordances) in a self-supervised way. Jörg Stückler and his team will develop methods for learning motion models that will allow mobile robots to predict the effects of their actions. The scientists will develop a vision-based navigation approach that uses learned models for motion planning. They will then evaluate this approach for the autonomous navigation of a mobile robot.

In various fields of research, such as the social sciences, biology, and computer science, network models are often applied to help describe complex systems with many individual elements interacting. In recent years, these models have often been used to draw new conclusions from observed data. The availability of large amounts of data has promoted this development.

Generative models are a popular class of network models. Here, latent variables are introduced that integrate the scientific findings of the field (the "domain knowledge") and capture complex interactions. However, interactions among individuals are usually so complex that they are often approximated as independent: conditioned on these latent variables, the network edges are assumed to be independent, which greatly simplifies the probability distribution over the network. The disadvantage of these models is that in some real-world scenarios they do not capture the interactions within the network well, meaning that the model's mathematical description does not correspond to what is observed in real data. The main problem here is that the couplings between variables are too limited. In comparison, network ensemble models do not use such latent variables, but rather network-specific quantities (e.g. the degree distribution or the clustering coefficient). However, these models also suffer from various problems that limit their practical application.
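As a concrete illustration (ours, not the project's model), consider the stochastic block model, one of the simplest latent-variable generative network models: given each node's latent community label, every edge is an independent Bernoulli draw, which is exactly the conditional-independence simplification described above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Each node i carries a latent community label z[i]; conditioned on z,
# every edge A_ij is an independent Bernoulli draw.
n_nodes, n_communities = 100, 3
z = rng.integers(n_communities, size=n_nodes)      # latent variables

affinity = np.full((n_communities, n_communities), 0.02)
np.fill_diagonal(affinity, 0.30)                   # denser within groups

p_edge = affinity[z[:, None], z[None, :]]          # P(A_ij = 1 | z_i, z_j)
adjacency = rng.random((n_nodes, n_nodes)) < p_edge
np.fill_diagonal(adjacency, False)                 # no self-loops
```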

This project will combine certain features of generative models and network ensemble models with methods from statistical physics. The aim is to develop better principle-based models. In addition, the project aims to ensure that these models can be efficiently applied to concrete problems (e.g. reciprocity, or the simultaneous occurrence of different forms of relationships between two nodes).

As technological development accelerates, millions of low-skilled workers are destined to lose their jobs to automation. To mitigate the resulting societal problems, this project aims to develop a scientific and technological foundation for rapidly and inexpensively teaching people the skills they will need to stay or become employable in the workplace of the future, which will be increasingly cognitively demanding.

Building on computational models of human learning and decision-making, Falk Lieder’s group proposes a general and scalable approach that leverages machine learning and artificial intelligence to teach workers the strategies they will need to meet the self-management challenges of the knowledge economy.

The researchers will test this approach by developing a series of intelligent tutors that develop and teach optimal decision strategies for increasingly realistic scenarios. They will illustrate the potential of this approach by developing and evaluating a simulation-based intelligent tutor that teaches high-level employees, freelancers, entrepreneurs, and academics far-sighted strategies for planning their projects, prioritizing their tasks, and managing themselves more effectively.

Many students struggle to stay focused long enough to learn effectively, and social media has exacerbated the problem. Constant distractions at work have led to losses in productivity that cost the economy billions. These serious issues have negative implications not only for the lives of individual people, but also for society as a whole.

In this project, Dr. Falk Lieder and Jun.-Prof. Dr. Maria Wirzberger will address the challenge of staying focused in the face of distractions by developing a brain training app called ACTrain together with their project team. ACTrain will be a personal assistant with a name and a customized appearance that will train people to stay focused on a task and effectively resume it after getting distracted. Unlike conventional brain training apps, ACTrain will allow people to train while they are working or studying, thereby turning their daily lives into a gym for the mind. ACTrain can thus be used in many different contexts, including education and the workplace.

The heart of ACTrain is an intelligent feedback mechanism based on computational models of how attention control skills are learned. Based on these models, the application gives people feedback when they get distracted. The feedback communicates the benefits of regaining focus for their productivity and success. In both online courses and the workplace, this software could improve the lives of millions of students and working professionals.

This project focuses on investigating new ways of transferring characteristics of the human visual system to artificial neural networks, with the aim of making them more robust against changes in image features. These can include, for example, changes in image style that do not alter the image's content. At present, no learning algorithm can robustly generalize what it has learned to other, untrained image features. Artificial neural networks quickly make mistakes when an image changes even slightly, for instance when noise is added or the style is altered. Humans have no problem recognizing the content of an image in such instances. Even though most of us grow up under the influence of a particular environment with specific visual characteristics (such as the Black Forest), our visual system easily generalizes to completely different environments (such as a desert or a painting).

Previous work has shown that deep artificial neural networks use very different image features for decision making than our visual system. For example, while we usually categorize objects by their shape, these networks rely mainly on local patterns in the images. It is still very difficult to incorporate the image features humans use for perception into artificial systems, as we simply know too little about the exact properties of biological systems.

This is why we want to develop mechanisms that can transfer robust features directly from measurements of brain activity to artificial systems. Under controlled conditions, we will first investigate the mechanisms with which these features can be transferred between networks. In the final phase of the project, we will use publicly available measurements of neural activity from the visual system to test which of the neural properties can be transferred to artificial networks using the methods we have developed.

The Cyber Valley “Locomotion in Biorobotic and Somatic Systems” research group investigates the biomechanics of locomotion and the underlying morphological adaptations evolved by nature. The researchers then apply their biological findings to develop life-like robots and functional materials similar to those found in nature. Their research sits at the interface of engineering and biology, a relatively new and promising field.

Dr. Ardian Jusufi and Hritwick Banerjee envision developing a flexible, stretchable, and biocompatible external sensor made from multi-functional smart materials that could one day be applied in healthcare, both for humans and in non-invasive veterinary care. The sheet-like sensor would adhere to the human or animal exterior as smoothly as a second layer of skin and stay in place no matter how the person or animal moves. The sensor could then monitor a person’s health, sensing blood pressure and other biometric values, or detect an irregular heartbeat that could indicate adverse health events such as heart attacks. In addition to a broad range of biomedical applications, the soft and flexible sensor could also be built into smart clothes, wearable electronics, or soft robotics, to name just a few examples. Such sensors could also improve human-machine interaction. For instance, self-driving cars could be equipped with them: if a person touched the sensor while sitting in the vehicle, it could detect an imminent medical emergency and send a signal to the autopilot, which would immediately drive the car to the nearest hospital.

There are substantial technological challenges on the path to developing soft interfaces of this kind, which would have to gather a broad range of healthcare information while being wrapped around an arm or leg like silk. The fundamental features of such a sensor must be significantly improved to enable this level of performance. This is why fundamental research is required to explore, for instance, flexibility, sensitivity, repeatability, linearity, durability, and stimuli-responsive materials.

To sum up, the key aims of the scientists’ research project are as follows:

  • manufacturing strain-invariant, pressure-sensitive tactile sensors, and improving the interface between highly stretchable and biocompatible conducting materials that provide excellent adhesion
  • developing a sensing sleeve with a multi-stimuli response embedded into a single hybrid platform that could actively conform to the device or body without compromising efficacy, and
  • exploring innovative applications for cutting-edge soft sensors in the automobile and entertainment industries, including integration with mobile soft robots, rehabilitative systems, and possibly collision-aware surgical robotics

Airline pilots train many hundreds of hours in flight simulators before they take to the skies. In contrast, surgeons have very limited access to simulators, and those that are available do not offer sufficiently realistic conditions. Medical instruments used for robotic and minimally invasive surgery are often tested on grapes or cans of meat and thus do not accurately reflect reality.

Dr. Tian Qiu, leader of the Cyber Valley “Biomedical Microsystems” research group, has set out to improve the situation. His research focuses on developing very realistic organ phantoms that optimize surgical training procedures and make them quantitatively measurable. Not only are these phantoms authentic physical replicas, they also have a cyber component. In other words, Qiu’s research program proposes to develop a “cyber-physical twin” of human organs.

Each 3D-printed organ twin is made of soft materials very similar to real organs in terms of anatomy and tissue properties. The cyber aspect is that the model can sense what it experiences and that this data is collected. Such data would be impossible to record if, for instance, a medical procedure were practiced on a real human organ. With the data generated by a cyber-physical organ twin, the outcome of a surgery training session can be clearly visualized, which is not possible even in a real surgical situation. The performance of a medical student training to become a surgeon could thus be evaluated automatically, and feedback can be provided immediately after the training session to improve the training experience.

Such smart cyber-physical organ twins will one day transform surgical training. Tian Qiu and his team believe that they could gradually replace medical training on human bodies and reduce animal experiments. The organ replicas offer the opportunity not only to develop and test new medical instruments, but also to develop better safety products such as helmets and airbags, for example when body part replicas are used in crash tests. Vital data on how the body parts are affected in an accident can then be collected and analyzed.

Learning to play a musical instrument is a long and difficult endeavor. Not everyone can afford the help of a professional teacher, and even with this help, feedback is limited in terms of latency and expressiveness. To tackle these problems, we will design a collection of new data-driven techniques and tools. The main idea is to systematically record musical practice data of students and feed it back through smart, visual interfaces. With a Visual Analytics web-tool, we will allow students, teachers, and professional musicians to detect errors and improve their style in a completely innovative way. By additionally recording motion data, we will also be able to convey fingering instructions or correct poses through augmented reality displays that visualize information directly attached to a physical musical instrument.

As musical data can be complex, and notes or audio signals recorded from instruments are usually noisy, AI is a useful, if not necessary, vehicle for data processing and analysis. We will follow a human-centered design process that involves musicians and music teachers of different backgrounds and skill levels in the data acquisition, development, and evaluation of our techniques and tools. Our goal is to provide ready-to-use music education tools, re-usable data processing techniques, and datasets comprising notes, audio, motion capture, and other features that we record from instruments and players.

The next big step for researchers in machine learning and artificial intelligence is to enhance a computer’s ability to reason. While deep networks can extract very complicated patterns from data, there is a certain sense of dissatisfaction when it comes to their performance on tasks with combinatorial or algorithmic complexity. This sentiment was expressed, for example, by Battaglia et al. [2], who advocate that “combinatorial generalization must be a top priority for AI”.

On the other hand, there are decades’ worth of research contributions in graph algorithms and discrete optimization. We have optimal runtimes for sorting algorithms and clever techniques for various algorithmic problems over graphs and networks, such as shortest-path, cut, and matching problems. In other words, when faced with combinatorial or algorithmic problems in isolation and with a clean specification, we already have very strong methods for solving them. This should not be ignored. While there has been some success in designing deep learning architectures with “algorithmic behavior”, the classical methods are still miles ahead when it comes to performance in purely combinatorial setups. We believe the right approach is to build bridges between the two disciplines so that progress can flow freely from one to the other. In that spirit, we would rephrase the earlier sentiment as “merging techniques from combinatorial optimization and deep learning must be a top priority for AI”.

We plan to demonstrate this merging by designing neuro-algorithmic architectures that show rapid learning capabilities when solving difficult decision-making problems from raw inputs.
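The project description leaves the architecture open. As a rough illustration of the bridging idea, the sketch below (our toy example, not the group's method) embeds an exact shortest-path solver in a differentiable pipeline using a straight-through gradient, i.e., the crudest known bridge between the two worlds: the forward pass calls the exact combinatorial solver, while the backward pass treats it as the identity map.

```python
import numpy as np
import torch


def grid_shortest_path(costs: np.ndarray) -> np.ndarray:
    """Exact combinatorial solver: cheapest monotone (down/right) path
    through a cost grid, found by dynamic programming. Returns a 0/1
    indicator of the cells on the optimal path."""
    h, w = costs.shape
    dist = np.full((h, w), np.inf)
    dist[0, 0] = costs[0, 0]
    for i in range(h):
        for j in range(w):
            if i > 0:
                dist[i, j] = min(dist[i, j], dist[i - 1, j] + costs[i, j])
            if j > 0:
                dist[i, j] = min(dist[i, j], dist[i, j - 1] + costs[i, j])
    path = np.zeros_like(costs)
    i, j = h - 1, w - 1
    while True:  # backtrack from the goal to the start
        path[i, j] = 1.0
        if i == 0 and j == 0:
            break
        if i > 0 and (j == 0 or dist[i - 1, j] <= dist[i, j - 1]):
            i -= 1
        else:
            j -= 1
    return path


class SolverLayer(torch.autograd.Function):
    # Forward: run the exact solver on costs predicted by a network.
    # Backward: pass the incoming gradient straight through, as if the
    # solver were the identity (a straight-through estimator).
    @staticmethod
    def forward(ctx, costs):
        path = grid_shortest_path(costs.detach().cpu().numpy())
        return torch.as_tensor(path, dtype=costs.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output


# Usage: the costs would come from a neural network; the loss compares
# the chosen path to supervision, and gradients flow back into the net.
costs = torch.rand(8, 8, requires_grad=True)
path = SolverLayer.apply(costs)
loss = path.sum()   # stand-in for a real supervision signal
loss.backward()     # costs.grad is populated via straight-through
```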

While most successful applications of machine learning to date have been in the realm of supervised learning, unsupervised learning is often seen as a more challenging, and possibly more important, problem. Turing Award winner Yann LeCun, one of the so-called “Godfathers of AI”, famously compared supervised learning to the thin “icing on the cake” of unsupervised learning. An approach called contrastive learning has recently emerged as a powerful method for unsupervised learning of image data, making it possible, for example, to separate photos of cats from photos of dogs without using any labeled data for training. The key idea is that a neural network is trained to keep each image as close as possible to its slightly distorted copy and as far as possible from all other images. The balance between attractive and repulsive forces brings similar images together. In this project, these ideas will be applied to single-cell transcriptomics, a very active field of biology where one experiment can measure the activity of thousands of genes in millions of individual cells. The group will use contrastive learning to find structure in such datasets and to visualize them in two dimensions. They will then go back to the image data and use two-dimensional embeddings as a tool to gain intuition about how different modeling and optimization choices affect the final representation.
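A minimal sketch of this attraction-and-repulsion idea, in the form of a simplified InfoNCE-style loss (the group's actual implementation may differ):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                     temperature: float = 0.5) -> torch.Tensor:
    """z1 and z2 hold embeddings of two slightly distorted views of the
    same batch of images (one row per image). Each image is pulled
    toward its own second view (the attractive force) and pushed away
    from every other image in the batch (the repulsive force)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# e.g., with a hypothetical encoder and augmentation pipeline:
# loss = contrastive_loss(encoder(augment(images)), encoder(augment(images)))
```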

Animals generate movement in a fascinatingly efficient, dynamic, and precise way. They achieve this through a well-tuned dynamic interplay between the nervous system and the muscles, exploiting the visco-elastic properties of the muscles to reduce the neuronal load. This apparent computation performed by the body is termed morphological computation. Building on this idea, novel robotic systems, such as muscle-driven robots, soft robots, and soft wearable assistive devices, are being developed. However, controlling non-linear and elastic robotic systems is challenging.

In this project, we will employ machine learning approaches to learn a well-tuned dynamic interplay between controller and muscle(-like) actuator. The goal is to explicitly exploit the muscle properties and thereby rely on morphological computation. We will develop this approach with computer simulations of human arm movements that consider muscles and low-level neuronal control (such as reflexes). We will further add a model of a technical assistive device and learn a controller that helps to maximize the morphological computation in the human neuro-muscular arm model.

With our collaboration partners Syn Schmitt (Uni Stuttgart) and Dieter Büchler (MPIIS), we will also apply this approach to muscle-driven robotic systems. This will allow us to learn a controller that also exploits morphological computation in such systems.

Learning to exploit morphological computation will provide a novel approach to controlling robotic systems with elastic actuators and soft structures with potential applications especially in human-robot interaction or assistance.

In recent decades, meteorologists have consistently improved weather forecasting systems, which have thus become increasingly complex. Sophisticated systems, such as the Consortium for Small Scale Modeling (COSMO) model, incorporate influences such as local topographical, soil, and vegetation properties. Despite these advances, the data remains approximate because of spatio-temporal differences as well as interactions and influences that either cannot be observed or have not been taken into account. With their Distributed, Spatio-Temporal Graph Artificial Neural Network Architecture (DISTANA), Professor Martin Butz and his Neuro-Cognitive Modeling Group at the University of Tübingen’s Department of Computer Science have now developed a new approach, which may either enhance or serve as an alternative to traditional forecasting systems.

DISTANA applies inductive learning biases that implement universal principles of weather dynamics: first, that system dynamics can be influenced by only partially observable, or even fully unknown, but universally applicable local factors; and second, that weather dynamics propagate through space only within local neighborhoods when the temporal intervals are sufficiently small.
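The following toy sketch (a rough analogy only; DISTANA itself is a recurrent graph network, not a plain convolution) shows the simplest way these two biases can be encoded: one small kernel shared by all grid cells captures universally applicable local dynamics, and its 3x3 receptive field restricts propagation to the immediate neighborhood within each time step.

```python
import torch
import torch.nn as nn

class LocalPropagationStep(nn.Module):
    def __init__(self, state_dim: int = 8):
        super().__init__()
        # the same weights are applied at every location (weight sharing)
        self.kernel = nn.Conv2d(state_dim, state_dim,
                                kernel_size=3, padding=1)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # one time step: each cell's next state depends only on itself
        # and its eight neighbors
        return torch.tanh(self.kernel(state))

# Rolling the step forward in time lets local effects spread gradually,
# mirroring how weather dynamics propagate across neighboring regions.
step = LocalPropagationStep()
state = torch.zeros(1, 8, 32, 32)   # batch, features, lat, lon
for _ in range(10):
    state = step(state)
```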

Over the course of this project, the researchers will be both developing combined weather prediction datasets as benchmarks and enhancing DISTANA. Ultimately, they expect DISTANA to outperform state-of-the-art weather forecasting systems, as it will be able to take unknown factors into account. Once successfully trained, DISTANA may be useful for weather forecasting on various spatio-temporal scales, potentially enabling better predictions of extreme weather events. This would in turn make it possible to take preventive measures accordingly. Additionally, the principle behind DISTANA may be applied in other areas, for instance water flow prediction, erosion modeling, or output prediction for wind park turbines. In the long term, this research could contribute to informing actions that aim to alleviate the negative impact of climate change.

Understanding how animals behave, in particular how they move and interact with each other and their environment, is fundamental to addressing the most important ecological problems of our time. State-of-the-art methods for tracking animal movement, which in turn allows us to understand animal behavior, often disrupt the animals' daily lives and are not scalable to vast environments. For this reason, Aamir Ahmad and his group will develop WildCap: a team of aerial robots that captures the movements of animals in a non-invasive way. Unlike existing methods, this novel approach allows the aerial robots to choose flight paths that provide optimal viewpoints for best estimating the movement of the animals and, in the future, their behavior.

Detecting anomalies is an important problem in networked systems, where many individual entities are connected with each other. Anomalous behaviors arise when patterns of interaction deviate from what is considered regular activity. Examples can be seen in online social networks, where fake profiles mimic real ones to engage people with malicious goals.

The challenge is to properly describe what a legitimate pattern of interaction is, so that deviations from it can be measured. Probabilistic generative models are a powerful approach here, as they allow researchers to incorporate domain knowledge that can guide an expressive description of a system and thus help predict anomalies. For instance, they have been successfully deployed to infer communities: groups of people that interact in similar ways. This behavior has been observed in many realistic scenarios, particularly in the social sciences. Hence, in these situations, the problem of detecting anomalies is closely connected with that of detecting communities.

In this project, Caterina De Bacco’s research group at the Max Planck Institute for Intelligent Systems builds on these insights to develop models that jointly detect communities and anomalies. The idea is that tackling these two problems together will boost the predictive power in detecting anomalous behaviors. In particular, the group will exploit recent work to model realistic scenarios where reciprocity effects play a significant role and where networks, and thus anomalies, evolve in time.

Robots in the construction industry are often larger than the building artefact itself and are limited to working with predictable and standardized materials, despite those materials' generally low level of sustainability. Advances in research are currently validating a shift towards distributed and mobile material-robot construction systems, in which distributed robotic systems are co-designed directly with the material systems they employ, leading to zero-waste construction. The potential of these systems lies in their adaptability and robustness, allowing them to operate in dynamic environments and to collaborate in large teams.

This project by Nicolas Kubail Kalousdian and Achim Menges (University of Stuttgart) intends to extend this line of investigation by applying combined research on Architecture, Engineering & Construction (AEC) and AI to the problem of building with natural deformable materials, specifically bamboo. In particular, we will investigate how to combine the adaptiveness of reinforcement learning (RL) with the long-horizon capabilities of Logic-Geometric Programming (LGP) to form a task and motion planning strategy that can solve such collaborative and nonlinear construction problems. We believe that combining the strengths of distributed material-robot systems with those of AI methods such as LGP and RL is essential to unlocking natural deformable materials for use in more sustainable robotic construction.

Deep Learning is often perceived as a cutting-edge technology driving innovation by powering virtual assistants like Siri or creating photo-realistic images of fictional people using GANs. However, a look under the hood reveals that the algorithms performing the actual training of those deep neural networks are rather archaic. Crucially, these training algorithms still rely on extensive human intervention to produce well-performing models. They thus necessitate either vast and very costly hyperparameter searches or experts who carefully and artfully set the hyperparameters. Currently, vast amounts of human labor and compute resources are wasted re-running the same training process over and over again in search of appropriate hyperparameters.

Instead, Frank Schneider’s group (University of Tübingen) argues for a new type of training algorithm, one that works “out of the box”. In order to achieve this level of automation, we aim to replace the static update rule of current methods with dynamic, agent-like behavior that is able to automatically adapt to the training process. Training algorithms that require less manual tuning, or even none at all, can be expected to reduce energy and compute consumption by at least one order of magnitude.
 

Cooperation enables intelligent systems to divide the costs, share the risks, and distribute the utility. As such, it unlocks the true potential of cognitive networks in terms of the efficiency of resource expenditure (self-optimization), stable distributed control (self-organization), and sustainability (self-diagnosing and self-healing). Cooperative cost-, risk-, and resource-sharing mechanisms have broad applicability for managing and optimizing technological networks such as fog computing, UAV swarms, or open random-access communication networks. Nevertheless, inducing cooperation among cognitive entities is challenging due to several issues, such as lack of information, conflicting interests, and excessive computational complexity.

This project by Setareh Maghsudi (University of Tübingen) aims at developing rigorous decision-making mechanisms for cooperation. Such mechanisms enable cognitive entities to strategically collaborate in complicated scenarios, e.g., under uncertainty or inside dynamic environments. The focal point of such strategies is repeated interaction, or the use of trajectories (histories) of such interactions. More precisely, the building blocks of such mechanisms are reinforcement learning and inverse reinforcement learning. We leverage stability concepts and convergence notions from graph-constrained cooperative game theory to define convergence and stability properties in uncertain scenarios and investigate their achievability. In addition, we analyze the strategies with regard to utility performance and learning efficiency.

At the moment, there are large investments and developments in the field of individual mobility, such as urban air mobility vehicles and unmanned aircraft applications, e.g. for medical transportation or rescue purposes. The majority of these aircraft are based on electric propulsion, and thus their range and endurance are quite limited. In the case of fixed-wing aircraft, these parameters can be significantly improved by exploiting (harvesting) energy from the atmospheric environment. An extreme example is conventional soaring, i.e. glider flight, where a pilot combines experience, skills, knowledge, and perception in a decision-making process in such a way that updrafts are detected and exploited to maximize flight range while keeping situational awareness at all times. These tasks are very complex and can only be accomplished by highly trained pilots.

The objective of this work by Aamir Ahmad (University of Stuttgart) is to find systematic approaches for autonomously maximizing the exploitation of environmental energy in flights with small fixed-wing aircraft (unmanned or manned), while minimizing the flight duration for a required distance. The underlying problem is the trade-off between short-term rewarding actions (covering some distance) and actions that are expected to pay off in the long term (mapping and exploiting atmospheric updrafts), all while navigating in a complex, particularly hard-to-model environment.

What we can learn from a single data set in experiments and observational studies is always limited, and we are inevitably left with some remaining uncertainty. It is of utmost importance to take this uncertainty into account when drawing conclusions if we want to make real scientific progress. Formalizing and quantifying uncertainty is thus at the heart of statistical methods aiming to obtain insights from data.

To compare scientific theories, scientists translate them into statistical models and then investigate how well the models' predictions match the gathered real-world data. One widely applied approach to comparing statistical models is Bayesian model comparison (BMC). Relying on BMC, researchers obtain the probability that each of the competing models is true (or is closest to the truth) given the data. These probabilities are measures of uncertainty and yet are also uncertain themselves. This is what we call meta-uncertainty (uncertainty over uncertainties). Meta-uncertainty affects the conclusions we can draw from model comparisons and, consequently, the conclusions we can draw about the underlying scientific theories.
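Concretely, given observed data y and candidate models M_1, ..., M_K, BMC computes each model's posterior probability via Bayes' rule (the standard formulation, not specific to this project):

```latex
p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, p(M_j)}
```

Here p(y | M_k) is the marginal likelihood of the data under model M_k and p(M_k) is its prior probability. Meta-uncertainty concerns how much these posterior probabilities can themselves be trusted.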

This project by Paul-Christian Bürkner (University of Stuttgart, SimTech Cluster of Excellence) contributes to this endeavor by developing and evaluating methods for quantifying meta-uncertainty in BMC. Building upon the mathematical theory of meta-uncertainty, we will utilize extensive model simulations as an additional source of information, which will enable us to quantify important assumptions of BMC that have so far remained implicit. What is more, we will be able to differentiate between a closed world, where the true model is assumed to be within the set of considered models, and an open world, where the true model may not be within that set – a critical distinction in the context of model comparison procedures.
