
Ongoing Projects

INOPRO: Intelligent Orthotics and Prosthetics
Contact: Tamim Asfour
Funding: BMBF
Start: 2016
End: 2021

Description

INOPRO aims to develop new, intelligent orthoses and prostheses, together with the necessary human-machine interfaces, to improve the users' quality of life.

Scientifically, the project focuses on five aspects:

  1. Autonomous adaption to the user and the environment
  2. Advanced sensory feedback to enhance safety and trust
  3. Compensation of the loss of ability through active support
  4. Easy and intuitive control
  5. Modularity

IMAGINE: Robots Understanding Their Actions by Imagining Their Effects
Contact: Tamim Asfour
Funding: EU H2020 project
Start: 2017
End: 2020

Description

The scientific objective of IMAGINE is to enable robots to understand how their environment behaves and how to interact with it, resulting in substantial advances in the autonomy, flexibility and robustness of interaction in the presence of unforeseen changes in the environment. Specifically, in the context of IMAGINE, understanding means the ability of the robot

  • to infer which actions possibly apply in a given state of the environment, and how to parametrize the actions to achieve a desired effect;
  • to discern to what extent its actions succeed, and if the action effect differs from the expected outcome, to infer possible reasons and recover from failures.

The core functional element is a generative model based on an association engine and a physics simulator. Understanding is given by the robot’s ability to predict and simulate the effects of its actions, before and during their execution. This allows the robot to choose actions and their parameters based on their simulated performance, and to monitor their progress during execution by matching observed to simulated behavior.
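
To make this concrete, here is a minimal Python sketch of the predict-act-monitor pattern described above: the robot simulates candidate actions, picks the one whose predicted effect best matches the desired one, and flags a failure when observation and prediction diverge. All names (simulate_effect, execute, observe_effect, distance) are hypothetical placeholders, not IMAGINE components.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    params: dict

def choose_action(state, candidates, simulate_effect, desired_effect, distance):
    # Pick the action whose simulated effect best matches the desired effect.
    return min(candidates,
               key=lambda a: distance(simulate_effect(state, a), desired_effect))

def execute_and_monitor(state, action, simulate_effect, execute, observe_effect,
                        distance, tolerance=0.1):
    # Predict first, then execute, then compare observed with simulated behavior.
    predicted = simulate_effect(state, action)
    execute(action)
    observed = observe_effect()
    if distance(observed, predicted) > tolerance:
        # The effect differs from the expected outcome: trigger failure
        # reasoning and recovery instead of blindly continuing.
        return False, observed
    return True, observed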

The industrial objective of IMAGINE is to drastically augment the level of structural understanding of devices and functional understanding of actions available to robots, enabling them to infer how to disassemble entire categories of electromechanical devices and appliances, to monitor the success of their own disassembly actions, and to synthesize effective recovery strategies for problems they encounter during disassembly.

The societal/environmental objective of IMAGINE is to increase the economic efficiency of recycling devices containing hazardous components, to increase the rate of effective recycling, and to reduce illegal and/or hazardous dumping or shipping to jurisdictions with lax regulation.

 

SECONDHANDS: A robot assistant for industrial maintenance
Contact: Tamim Asfour
Funding: EU H2020 project
Start: 2015
End: 2020

Description


The goal of SecondHands is to design a robot that can offer help to a maintenance technician in a proactive manner. The robot will be a second pair of hands that can assist the technician when he/she is in need of help. Thus, the robot should recognize human activity, anticipate human needs and proactively offer assistance when appropriate, in real time, and in a dynamically changing real world. The robot assistant will increase the efficiency and productivity of maintenance technicians in order to ensure the smooth running of production machinery, thereby maximizing output and return on investment.

KIT leads the tasks on the design of the new robot, on grasping and mobile manipulation, and on natural language and multimodal interfaces. Based on the ARMAR humanoid robot technologies, a new humanoid robot for maintenance tasks will be developed and validated in several maintenance tasks in a warehouse environment. A special focus is the development of methods for task-specific grasping of familiar objects, object handover, vision- and haptics-based reactive grasping strategies, and mobile manipulation tasks that take into account uncertainties in perception and action. To facilitate natural human-robot interaction, KIT will implement a speech recognition and dialogue management system to enable maintenance technicians to conduct spoken dialogue with the robot.


TIMESTORM: Mind and Time - Investigation of the temporal attributes of human-machine synergetic interaction
Contact: Tamim Asfour
Funding: EU FET-ProActive project
Start: 2015
End: 2018

Description


TimeStorm aims at equipping artificial systems with human-like cognitive skills that benefit from the flow of time, by shifting the focus of human-machine confluence to the temporal, short- and long-term aspects of symbiotic interaction. The integrative pursuit of research and technological developments in time perception will contribute significantly to ongoing efforts in deciphering the relevant brain circuitry and will also give rise to innovative implementations of artifacts with profoundly enhanced cognitive capacities. TimeStorm promotes time perception as a fundamental capacity of autonomous living biological and computational systems that plays a key role in the development of intelligence. In particular, time is important for encoding, revisiting and exploiting experiences (knowing), for making plans to accomplish timely goals at certain moments (doing), and for maintaining the identity of self over time despite changing contexts (being).

The main role of KIT in TimeStorm is to investigate the temporal information in the perception and execution of manipulation actions and to integrate time processing mechanisms into humanoid robots. In particular, we investigate how semantic representation (top-down) and hierarchical segmentation (bottom-up) of human demonstrations, based on spatio-temporal object interactions, can be combined to facilitate the generalization of action durations. This would allow a robot to scale the perceived and learnt temporal information of an action in order to perform the same and other actions with different durations.
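
As an illustration of such temporal scaling, the following Python sketch time-warps a demonstrated trajectory to a new duration by resampling it over a normalized phase. It assumes, for illustration only, that a primitive is stored as sampled timestamps and positions; this is not the project's actual representation.

import numpy as np

def scale_duration(timestamps, positions, new_duration, n_samples=100):
    # timestamps: (N,) array, monotonically increasing, starting at 0
    # positions:  (N, D) array of joint or Cartesian positions
    phase = timestamps / timestamps[-1]          # normalized phase in [0, 1]
    new_t = np.linspace(0.0, new_duration, n_samples)
    new_phase = new_t / new_duration
    # Resample each dimension over the duration-invariant phase variable.
    new_pos = np.vstack([np.interp(new_phase, phase, positions[:, d])
                         for d in range(positions.shape[1])]).T
    return new_t, new_pos

The same demonstration can thus be replayed faster or slower while preserving its spatial shape, which is the kind of duration generalization described above.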


WALK-MAN: Whole-body Adaptive Locomotion and Manipulation
Contact: Tamim Asfour
Funding: EU-FP7
Start: 2013
End: 2017

Description

WALK-MAN aims to enhance the capabilities of existing humanoid robots, permitting them to assist or replace humans in emergency situations, including rescue operations in damaged or dangerous sites such as destroyed buildings or power plants. The WALK-MAN robot will demonstrate human-level capabilities in locomotion, balance and manipulation. The scenario challenges the robot in several ways: walking on unstructured terrain, in cluttered environments and among a crowd of people, as well as crawling over a debris pile. The project's results will be evaluated in realistic scenarios, also in consultation with civil defence bodies.

KIT leads the tasks concerning multimodal perception for loco-manipulation and the representation of whole-body affordances. The partly unknown environments in which the robot has to operate motivate an exploration-based approach to perception. This approach will integrate whole-body actions and multiple perceptual modalities such as visual, haptic, inertial and proprioceptive sensory information. For the representation of whole-body affordances, i.e. joint perception-action representations of whole-body actions associated with objects and/or environmental elements, we will rely on our previous work on Object-Action Complexes (OACs), a grounded representation of sensorimotor experience which binds objects, actions and attributes in a causal model and links sensorimotor information to symbolic information. We will investigate the transferability of grasping OACs to balancing OACs, inspired by the analogy between a stable whole-body configuration and a stable grasp of an object.
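
As a data-structure sketch, an Object-Action Complex can be thought of as binding an applicability condition, a sensorimotor program and a prediction of its effects. The following Python fragment is one illustration of this reading, not the formal OAC definition; all field names are ours.

from dataclasses import dataclass
from typing import Callable

State = dict  # sensorimotor state, e.g. object poses and contact information

@dataclass
class OAC:
    name: str
    precondition: Callable[[State], bool]  # does this OAC apply in this state?
    action: Callable[[State], None]        # sensorimotor program to execute
    predict: Callable[[State], State]      # expected successor state

A grasping OAC and a balancing OAC share this structure, pairing an action with a prediction of its outcome, which is what makes the transfer investigated here conceivable.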


Robots Exploring Tools as Extensions to their Bodies Autonomously (REBA+)
Contact: Tamim Asfour
Funding: DFG Priority Programme 1527
Start: 2015
End: 2018

Description
While neuroscientific research is unravelling a remarkable complexity and flexibility in the body representations of biological agents during actions and the use of tools (Cardinali et al., 2012; Umilta et al., 2008; Maravita and Iriki, 2004), available approaches for representing body schemas for robots (Hoffmann et al., 2010a) still largely lack equally sophisticated, adaptive and dynamically extensible representations. A largely open challenge is to find rich representations that marry body morphology, control, and the exploitation of redundant degrees of freedom, and that offer strong priors for rapid learning, which in turn support flexible adaptation and extension of these representations to realize capabilities such as tool use or graceful degradation in case of malfunction of parts of the body tree. This motivates the present project: to develop, implement and evaluate for a robot rich extensions of its body schema, along with learning algorithms that use these representations as strong priors, in order to enable rapid and autonomous usage of tools and flexible coping with novel mechanical linkages between the body, the grasped tool and target objects. As a major scientific contribution to Autonomous Learning, we aim to address these key aspects of interaction learning:
  • enhancing the scope from the body morphology to a representation of body-tool-environment linkages
  • enhancing the scope from a representation of morphology to a representation that includes control
  • enhancing the scope from minimal-DOF systems to systems that offer and exploit redundant degrees of freedom
We will develop these representations in the context of a systematically chosen "matrix" of real-world interaction situations, arranged to pose learning challenges in increasing order of complexity along the above three dimensions. Thereby, we will build on the previous project, in which we developed a basic framework for adaptive body schemas emphasising the kinematics level. The project will thus directly contribute to enhancing the autonomy of robots in adjusting their physical interaction with the world to the variety of situations that is characteristic of many natural environments. It will also advance the state of the art in representations that can support such capabilities, including representations that can autonomously extend themselves as a result of autonomous exploration.
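
As a toy illustration of such a body schema extension, the following Python sketch appends one rigid transform for a grasped tool to a planar two-link forward-kinematics chain, so that the tool tip becomes the new end effector. The geometry and numbers are purely illustrative.

import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def trans(x, y):
    return np.array([[1.0, 0.0, x], [0.0, 1.0, y], [0.0, 0.0, 1.0]])

def forward_kinematics(joint_angles, link_lengths, tool_offset=None):
    # Homogeneous transform from the base to the end effector (or tool tip).
    T = np.eye(3)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ rot(theta) @ trans(length, 0.0)
    if tool_offset is not None:
        # Body schema extension: the grasped tool adds one rigid segment.
        T = T @ trans(*tool_offset)
    return T

# Same joint configuration, two different "end effectors": hand vs. tool tip.
q = [0.3, 0.5]
hand = forward_kinematics(q, [0.3, 0.25])
tool_tip = forward_kinematics(q, [0.3, 0.25], tool_offset=(0.15, 0.0))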

InvasIC
Contact: Tamim Asfour
Funding: DFG
Start: 2014
End: 2018

Invasive Computing

Under the term invasive computing, a completely new paradigm for the design and programming of future parallel computing systems is being explored.

Subproject D1 of Transregio 89 investigates the potentials and limits of invasive architectures in demanding humanoid robotics scenarios. In particular, the implementation of a cognitive robot architecture, with its different processing hierarchies, on invasive tightly coupled processor arrays and heterogeneous MPSoCs is investigated and evaluated. We expect that strongly and dynamically fluctuating computational loads can be mapped onto limited resources more efficiently by invasive mechanisms than would be possible with conventional resource allocation at compile time.
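
Invasive computing is commonly described by three primitives: invade (claim resources), infect (load them with computation) and retreat (release them). The following Python sketch merely emulates this pattern with a thread pool; the actual TRR 89 work targets invasive MPSoC hardware and its own runtime system, so this is an illustration of the idea only.

from concurrent.futures import ThreadPoolExecutor

def invade(n_requested, n_available):
    # Claim compute resources; the runtime may grant fewer than requested,
    # depending on the current demands of other applications.
    return min(n_requested, n_available)

def infect(n_granted, work_items, worker):
    # Load the claimed resources with the actual computation.
    with ThreadPoolExecutor(max_workers=n_granted) as pool:
        results = list(pool.map(worker, work_items))
    # "retreat": shutting the executor down releases the claimed resources.
    return results

# A task with a sudden load spike asks for 8 workers and gets what is free.
granted = invade(8, n_available=4)
print(infect(granted, range(10), worker=lambda x: x * x))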

i-Support
Contact: Tamim Asfour
Funding: EU Horizon 2020
Start: 2015
End: 2018

Description


The I-SUPPORT project envisions the development and integration of an innovative, modular, ICT-supported service robotics system that supports and enhances frail older adults' motion and force abilities and assists them in successfully, safely and independently completing the entire sequence of bathing tasks, such as properly washing their back, upper body, lower limbs, buttocks and groin, and effectively using the towel for drying. Advanced modules for cognition, sensing, context awareness and actuation will be developed and seamlessly integrated into the robotic bathing system.

KIT leads the tasks concerning the learning of motion primitives from human observation and kinesthetic teaching for a soft robot arm, which is to provide help in washing and drying tasks. The learned motion primitives should be represented in a way that allows adaptation to different contexts (soaping, washing, drying), body parts (back, upper and lower limbs, neck) and users. To achieve this, adaptive representations of the learned motion primitives will be developed which account for different requirements, such as encoding different motion styles (circular and linear) and adapting to the varying softness of different body parts. To enable the robot arm to handle different washing and drying tools of varying softness, we will investigate how motion primitives can be enriched with models which encode correlations between object properties and action parameters. Furthermore, KIT is addressing the personalization and adaptation of the robotic bathing system to the user by taking into account the user's preferences and previous sensorimotor experience. Based on a reference model of the human body, the Master Motor Map, which defines the kinematics and dynamics of the human body with regard to global body parameters such as height and weight, we will derive individual models of the different users. These models will be used to generate initial washing and/or drying behavior, which will be refined based on sensorimotor experience obtained from the robot arm.
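
To illustrate how an individual model can be derived from global body parameters, the following Python sketch scales segment lengths from total body height. The ratios are generic anthropometric values inserted for illustration; they are not Master Motor Map data.

SEGMENT_HEIGHT_RATIOS = {
    "upper_arm": 0.186,  # fraction of total body height (illustrative values)
    "forearm":   0.146,
    "thigh":     0.245,
    "shank":     0.246,
}

def individual_model(height_m, mass_kg):
    # Scale the reference kinematics to one user; segment masses would be
    # derived analogously as fractions of the total body mass.
    lengths = {seg: r * height_m for seg, r in SEGMENT_HEIGHT_RATIOS.items()}
    return {"segment_lengths_m": lengths, "total_mass_kg": mass_kg}

print(individual_model(1.75, 70.0)["segment_lengths_m"]["thigh"])  # ~0.43 m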


Completed Projects

KoroiBot: Improving humanoid walking capabilities by human-inspired mathematical models, optimization and learning
Contact: Tamim Asfour
Funding: EU-FP7
Start: 2013
End: 2016

Description

KoroiBot is a three-year project funded by the European Commission under FP7-ICT-2013-10. The goal of the project is to investigate the way humans walk, e.g. on stairs and slopes, on soft and slippery ground, and over beams and seesaws, and to create mathematical models and learning methods for humanoid walking. The project will study human walking, develop techniques for increased humanoid walking performance, and evaluate them on existing state-of-the-art humanoid robots.

KIT leads the tasks concerning human walking experiments, the establishment of a large-scale human walking database, and the development of human and humanoid models as the basis for deriving general rules for transferring motions and control laws between different embodiments and for generating different walking types. Based on the developed models and transfer rules, we will study how to implement balancing and push-recovery strategies to deal with different types of perturbation in free and constrained situations. Furthermore, we will investigate the role of prediction in walking as well as the role of different sensory feedback, such as vision, vestibular signals and foot haptics, in balancing. To this end, we will implement a control and action selection scheme emphasizing predictive control mechanisms, which rely on the estimation of expected perturbations based on multimodal sensory feedback and past sensorimotor experience. The control scheme will be validated in the context of the prediction and selection of push recovery actions.
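
The selection step can be sketched in Python as follows: estimate the expected perturbation from multimodal feedback, predict the outcome of each candidate recovery strategy, and choose the strategy with the best predicted balance margin. All functions here are hypothetical placeholders, not KoroiBot code.

def select_recovery_action(state, candidates, estimate_perturbation,
                           predict_outcome, balance_margin):
    # candidates: e.g. ankle, hip and stepping strategies.
    perturbation = estimate_perturbation(state)  # from multimodal feedback
    return max(candidates,
               key=lambda a: balance_margin(predict_outcome(state, a, perturbation)))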


XPERIENCE: Robots Bootstrapped through Learning from Experience
Contact: Tamim Asfour
Funding: EU-FP7
Start: 2011
End: 2015

Description


Current research in enactive, embodied cognition is built on two central ideas: 1) physical interaction with and exploration of the world allows an agent to acquire and extend intrinsically grounded, cognitive representations, and 2) representations built from such interactions are much better adapted to guiding behaviour than human-crafted rules or control logic. Exploration and discriminative learning, however, are relatively slow processes. Humans, on the other hand, are able to rapidly create new concepts and react to unanticipated situations using their experience. “Imagining” and “internal simulation”, i.e. generative mechanisms which rely on prior knowledge, are employed to predict the immediate future and are key to increasing the bandwidth and speed of cognitive development. Current artificial cognitive systems are limited in this respect, as they do not yet make efficient use of such generative mechanisms for the extension of their cognitive properties.

 

Solution

The Xperience project will address this problem by structural bootstrapping, an idea taken from child language acquisition research. Structural bootstrapping is a method of building generative models, leveraging existing experience to predict unexplored action effects and to focus the hypothesis space for learning novel concepts. This developmental approach enables rapid generalization and acquisition of new knowledge and skills from little additional training data. Moreover, thanks to shared concepts, structural bootstrapping enables enactive agents to communicate effectively with each other and with humans. Structural bootstrapping can be employed at all levels of cognitive development (e.g. sensorimotor, planning, communication).

 

Project Goals

 

  1. Xperience will demonstrate that state-of-the-art enactive systems can be significantly extended by using structural bootstrapping to generate new knowledge. This process is founded on explorative knowledge acquisition, and subsequently validated through experience-based generalization.
  2. Xperience will implement, adapt, and extend a complete robot system for automating introspective, predictive, and interactive understanding of actions and dynamic situations. Xperience will evaluate and benchmark this approach on existing state-of-the-art humanoid robots, integrating the different components into a complete system that can interact with humans.

 

Expected Impact

By equipping embodied artificial agents with the means to exploit prior experience via generative inner models, the methods to be developed here are expected to impact a wide range of autonomous robotics applications that benefit from efficient learning through exploration, predictive reasoning and external guidance.


HEiKA-EXO: Optimization-based development and control of an exoskeleton for medical applications
Contact: Tamim Asfour
Funding: HEiKA
Start: January 2013
End: December 2013

Description

HEiKA-EXO is a project financed by HEiKA, the joint research initiative of the University of Heidelberg and the Karlsruhe Institute of Technology. The project is a prototype study for the design and control of a lower-leg exoskeleton for medical applications, based on the expertise available at KIT in humanoid robotics and at the University of Heidelberg in mathematical modeling. The first exoskeleton prototype features two active degrees of freedom in the knee and ankle joints.

GRASP
Contact: Tamim Asfour
Funding: EU-FP7
Start: 2008
End: 2012

About GRASP


GRASP is an Integrated Project funded by the European Commission through its Cognition Unit under the Information Society Technologies theme of the Seventh Framework Programme (FP7). The project was launched on 1 March 2008 and will run for a total of 48 months.

The aim of GRASP is the design of a cognitive system capable of performing grasping and manipulation tasks in open-ended environments, dealing with novelty, uncertainty and unforeseen situations. To meet this aim, the study of object manipulation and grasping will provide a theoretical and measurable basis for system design that is valid for both human and artificial systems. This is of utmost importance for the design of artificial cognitive systems that are to be deployed in real environments and interact with humans and other agents. Such systems need the ability to exploit innate knowledge and self-understanding to gradually develop cognitive capabilities. To demonstrate the feasibility of our approach, we will instantiate, implement and evaluate our theories and hypotheses on robot systems with different embodiments and complexity.

GRASP goes beyond the classical perceive-act or act-perceive approach and implements a predict-act-perceive paradigm that originates from findings of human brain research and results of mental training in humans, where self-knowledge is retrieved through different emulation principles. The knowledge of grasping in humans can be used to provide the initial model of the grasping process, which then has to be grounded through introspection to the specific embodiment. To achieve open-ended cognitive behaviour, we use surprise to steer the generation of grasping knowledge and modelling.
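
One simple way to operationalize surprise is as the distance between predicted and observed grasp outcomes, refining the model only when that distance is large. The following Python sketch illustrates this idea; it is not the GRASP system, and the model interface is hypothetical.

import numpy as np

def surprise(predicted_outcome, observed_outcome):
    # A scalar surprise signal: distance between prediction and observation.
    return float(np.linalg.norm(np.asarray(predicted_outcome)
                                - np.asarray(observed_outcome)))

def maybe_update_model(model, grasp, predicted, observed, threshold=0.2):
    # Only surprising outcomes trigger model refinement or new exploration.
    if surprise(predicted, observed) > threshold:
        model.update(grasp, observed)  # hypothetical model interface
        return True
    return False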


PACO-PLUS
Contact: Tamim Asfour
Funding: EU-FP6
Start: 2006
End: 2010

About PACO-PLUS


PACO-PLUS is an Integrated Project funded by the European Commission through its Cognition Unit under the Information Society Technologies theme of the Sixth Framework Programme (FP6). The project was launched on 1 February 2006 and will run for a total of 48 months.

PACO-PLUS brings together an interdisciplinary research team to design and build cognitive robots capable of developing perceptual, behavioural and cognitive categories that can be used, communicated and shared with other humans and artificial agents. To demonstrate our approach we are building robot systems that will display increasingly advanced cognitive capabilities over the course of the programme. They will learn to operate in the real world and to interact and communicate with humans. To do this they must model and reflectively reason about their perceptions and actions in order to learn, act and react appropriately. We hypothesize that such understanding can only be attained by embodied agents and requires the simultaneous consideration of perception and action.

Our approach rests on three foundational assumptions:

  • Objects and Actions are inseparably intertwined in cognitive processing; that is “Object-Action Complexes” (OACs) are the building blocks of cognition.
  • Cognition is based on reflective learning, contextualizing and then reinterpreting OACs to learn more abstract OACs, through a grounded sensing and action cycle.
  • The core measure of effectiveness for all learned cognitive structures is: Do they increase situation reproducibility and/or reduce situational uncertainty in ways that allow the agent to achieve its goals?
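
The third assumption can be given an information-theoretic reading: a learned structure is effective if it reduces the entropy of the agent's belief about action outcomes. The following Python sketch illustrates this reading; it is our interpretation for illustration, not a metric defined by the project.

import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def uncertainty_reduction(belief_before, belief_after):
    # Positive values mean outcomes became more predictable after learning.
    return entropy(belief_before) - entropy(belief_after)

# An OAC that sharpens the outcome distribution is judged effective:
print(uncertainty_reduction([0.5, 0.5], [0.9, 0.1]))  # ~0.53 bits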

Collaborative Research Center 588: Learning and Cooperating Multimodal Humanoid Robots (SFB 588)
Contact: Tamim Asfour
Funding: Deutsche Forschungsgemeinschaft (DFG)
Start: 01.07.2001
End: 30.06.2012

Description

The Collaborative Research Center 588 "Humanoid Robots - Learning and Cooperating Multimodal Robots" was established on July 1, 2001 by the Deutsche Forschungsgemeinschaft (DFG) and will run until June 30, 2012.

The goal of this project is to generate concepts, methods and concrete mechatronic components for a humanoid robot that will be able to share its activity space with a human partner. With the aid of this partially anthropomorphic robot system, it will be possible to step out of the "robot cage" and establish direct contact with humans.

The term multimodality covers the communication modalities that are intuitive for the user, such as speech, gesture and haptics (physical contact between the human and the robot), which will be used to command or instruct the robot system directly.

Concerning the cooperation between the user and the robot - for example, in the joint manipulation of objects - it is important for the robot to recognise the human's intention, to remember the actions that have already been carried out together, and to apply this knowledge correctly in the individual case. Great effort is spent on safety, as this is a very important aspect of human-machine cooperation.

An outstanding property of the system is its ability to learn: the system can be confronted with new, formerly unknown problems, for example new terms and new objects. Even new motions can be learned with the aid of the human and corrected interactively by the user.

The Collaborative Research Center 588 is assigned to the Department of Informatics. More than 40 scientists and 13 institutes are involved in this project. They belong to the Department of Informatics, the Faculty of Electrical and Information Engineering, the Faculty of Mechanical Engineering and the Faculty of Humanities and Social Sciences, as well as the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB) and the Forschungszentrum Informatik Karlsruhe (FZI).

EU FET Flagship Initiative "Robot Companions for Citizens"
Contact: Tamim Asfour
Funding: EU-FET
Start: 2011
End: 2012

Description

The coordination action CA-RoboCom will design and describe the FET Flagship initiative "Robot Companions for Citizens" (RCC), including its scientific and technological framework, governance, financial and legal structure, funding scheme, competitiveness strategy and risk analysis.

The FET Flagship initiative RCC will realize a unique and unprecedented multidisciplinary science and engineering program supporting a radically new approach towards machines and how we deploy them in our society.

Robot Companions for Citizens is an ecology of sentient machines that will help and assist humans in the broadest possible sense to support and sustain our welfare. RCC will have soft bodies based on the novel integration of solid articulated structures with flexible properties and display soft behavior based on new levels of perceptual, cognitive and emotive capabilities. RCC will be cognizant and aware of their physical and social world and respond accordingly. RCC will attain these properties because of their grounding in the most advanced sentient machines we know: animals.

Robot Companions for Citizens will validate our understanding of the general design principles underlying biological bodies and brains, establishing a positive feedback between science and engineering.

Driven by the vision and ambition of RCC, CA-RoboCom will, by means of an appropriate outreach strategy, involve all pertinent stakeholders: science and technology, society, finance, politics and industry. Beyond the commitment of its Consortium, CA-RoboCom will involve a wide range of external experts in its working groups, its Advisory Board, and its European and International Cooperation Board. The CA-RoboCom consortium believes that, given the potentially transformative and disruptive effects of RCC on our society, their development and deployment has to be based on the broadest possible support platform.