Reproduction and learning of movement representations

Learning Movement Representations

Imitation learning provides an intuitive way of enabling a robot to generate its motion autonomously in order to accomplish actions in human-centered environments. To understand observed actions and movements, knowledge must be extracted from human demonstrations and represented in a generic form that robots can access for later reproduction. Motivated by neuroscientific studies, we explore the strategy of representing motion in the form of basic motor primitives, which are learned from markerless and marker-based human motion capture data.

Dynamic Movement Primitives (DMPs) offer a suitable representation based on non-linear differential equations. They are learned and formulated in such a way that a generalized reproduction of the demonstration is obtained, which can easily be adapted to new situations by setting the start and goal parameters of the equations to the desired position values.
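To make this concrete, the following is a minimal one-dimensional sketch of the standard discrete DMP formulation (after Ijspeert et al.): a critically damped spring-damper system driven by a learned forcing term, paced by a canonical phase variable x that decays from 1 to 0. The gains, basis-function placement, and Euler integration are illustrative assumptions, not our implementation.

```python
import numpy as np

class DMP1D:
    """Minimal 1-D discrete Dynamic Movement Primitive (a sketch after
    Ijspeert et al.; parameters are illustrative, not the group's code)."""

    def __init__(self, n_basis=20, alpha_z=25.0, alpha_x=4.0):
        self.n_basis = n_basis
        self.alpha_z = alpha_z            # spring gain
        self.beta_z = alpha_z / 4.0       # damper gain (critical damping)
        self.alpha_x = alpha_x            # canonical-system decay rate
        # Gaussian basis functions spaced along the phase variable x.
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = np.ones(n_basis)
        self.h[:-1] = 1.0 / np.diff(self.c) ** 2
        self.h[-1] = self.h[-2]

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from a single demonstration."""
        self.tau = (len(y_demo) - 1) * dt
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(len(y_demo)) * dt / self.tau)
        # Invert the transformation system to get the target forcing term.
        f_target = (self.tau ** 2 * ydd
                    - self.alpha_z * (self.beta_z * (self.g - y_demo)
                                      - self.tau * yd))
        s = x * (self.g - self.y0)        # forcing-term scaling
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)
        self.w = (psi.T @ (s * f_target)) / (psi.T @ (s ** 2) + 1e-10)
        return self

    def rollout(self, y0=None, g=None, dt=0.01):
        """Reproduce the movement, optionally with a new start and goal."""
        y0 = self.y0 if y0 is None else y0
        g = self.g if g is None else g
        y, z, x = float(y0), 0.0, 1.0
        traj = [y]
        for _ in range(int(round(self.tau / dt))):
            psi = self._psi(x)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)
            z += dt * (self.alpha_z * (self.beta_z * (g - y) - z) + f) / self.tau
            y += dt * z / self.tau
            x += dt * (-self.alpha_x * x) / self.tau
            traj.append(y)
        return np.array(traj)
```

Fitted to one demonstration, the primitive can then be replayed towards a different goal while preserving the shape of the movement:

```python
t = np.linspace(0.0, 1.0, 200)
demo = 3 * t ** 2 - 2 * t ** 3        # smooth synthetic demonstration, 0 -> 1
dmp = DMP1D().fit(demo, dt=t[1] - t[0])
repro = dmp.rollout(y0=0.0, g=2.0)    # same movement shape, rescaled to new goal
```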

Furthermore, our studies explore how Hidden Markov Models (HMMs) can be used to represent generalized movements. Based on multiple demonstrations of the same movement, characteristic features (key points) of each demonstration form the training data for the HMM. Using the trained HMM, key points common to all demonstrations are identified and used to generate a reproduction of the movement.
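As a rough illustration of this pipeline, the sketch below uses the hmmlearn library (an assumed toolkit; the source does not name one) and noisy synthetic sequences standing in for key points extracted from motion capture. Hidden states visited in every demonstration's Viterbi path play the role of key points common to all demonstrations.

```python
import numpy as np
from hmmlearn import hmm

# Synthetic stand-ins for key-point sequences from three demonstrations
# of the same 2-D movement (in practice these come from motion capture).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
reference = np.column_stack([np.sin(np.pi * t), t])
demos = [reference + rng.normal(scale=0.02, size=reference.shape)
         for _ in range(3)]

# One HMM is trained on all demonstrations; each hidden state should
# come to represent one characteristic key point of the movement.
X = np.concatenate(demos)
lengths = [len(d) for d in demos]
model = hmm.GaussianHMM(n_components=6, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X, lengths)

# Key points common to all demonstrations: hidden states that appear
# in every demonstration's Viterbi state path.
paths = [model.predict(d) for d in demos]
common = set(paths[0]).intersection(*paths[1:])

# Reproduce the movement through the means of the common states, in
# the order they occur in the first demonstration.
order = [s for s in dict.fromkeys(paths[0]) if s in common]
reproduction = model.means_[order]
```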

For adaptation to novel situations, we are building a motion library in which each recorded movement is labeled according to task and context (e.g., grasping, placing, and releasing). To reproduce a complex task, its trajectory is split into basic movement segments, which are matched against the motor primitives contained in the library; sequencing these primitives yields a generalization of the underlying task.
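A hypothetical sketch of such a library, reusing the DMP1D class from above: each segment of a demonstrated task is matched to the stored primitive that reproduces it best after start and goal adaptation. The class name and the matching criterion (mean squared reproduction error) are illustrative assumptions, not our method.

```python
import numpy as np

class MotionLibrary:
    """Hypothetical motion library mapping task/context labels to
    learned motor primitives (DMP1D instances from the sketch above)."""

    def __init__(self):
        self._primitives = {}             # label -> fitted DMP1D

    def add(self, label, dmp):
        self._primitives[label] = dmp

    def match(self, segment, dt=0.01):
        """Return the label of the primitive that best reproduces the
        segment after adapting start and goal (illustrative criterion)."""
        def error(item):
            _, dmp = item
            repro = dmp.rollout(y0=segment[0], g=segment[-1], dt=dt)
            n = min(len(repro), len(segment))
            return np.mean((repro[:n] - segment[:n]) ** 2)
        return min(self._primitives.items(), key=error)[0]

def sequence_task(library, segments):
    """Label each basic movement segment of a complex task, yielding a
    primitive sequence such as ['grasping', 'placing', 'releasing']."""
    return [library.match(seg) for seg in segments]
```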
