- Version
- Downloads 10
- File Size 185 KB
- File Count 1
- Create Date October 5, 2021
- Last Updated October 5, 2021
D2.4 Report and software for the integrated probabilistic segmentation of long demonstrations, building of motor primitive libraries and learning of symbolic higher level policies - ABSTRACT
This document reports the advancements made in the context of WP2, describing the work conducted and the achievements accomplished towards the completion of task T2.4 “Development of Primitives Learning algorithm”. We also report preliminary results of task T2.5 “Formalization of human gestures in an interaction representation”.
The aim of WP2 is to deliver a knowledge base and to develop robotic cognition. In particular, this knowledge base should contain accessible information regarding human and robot tasks, together with tools for analysing the human operator’s workspace to detect task-specific
objects, monitor human activity and predict the evolution of human tasks, while a primitive-learning algorithm will provide the robot with the ability to learn from demonstrations.
During task T2.4, we have addressed the problems associated with developing a robust method for learning motion primitives. We have focused on different ways of representing human-demonstrated data, at different levels of
complexity. Specifically, we have studied and developed: (1) contextual linear dynamical systems, for modelling simple human movements that depend on the environmental context, i.e. the detected objects in the operator’s workspace; (2) probabilistic movement primitives, for
modelling periodic time-series data; and (3) ImitationFlows, a novel approach that guarantees the stability of the learned motion dynamics. ImitationFlows can learn higher-order nonlinear dynamics with stability guarantees, outperforming existing state-of-the-art methods.
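To illustrate the second representation, the sketch below fits a simple one-dimensional probabilistic movement primitive: each demonstration is projected onto radial basis functions over a normalized phase, and a Gaussian over the basis weights captures the variability across demonstrations. The number of basis functions, the basis width, and the synthetic sine-shaped demonstrations are illustrative assumptions, not the deliverable’s actual configuration, which operates on multi-dimensional, context-conditioned motion data.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    # Normalized radial basis functions over phase t in [0, 1]
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_basis=10):
    # Fit basis weights per demonstration, then a Gaussian over the weights
    weights = []
    for y in demos:
        t = np.linspace(0, 1, len(y))
        phi = rbf_features(t, n_basis)
        w, *_ = np.linalg.lstsq(phi, y, rcond=None)
        weights.append(w)
    W = np.stack(weights)
    return W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(n_basis)

# Synthetic demonstrations of a simple reaching-like motion
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
demos = [np.sin(np.pi * t) + 0.05 * rng.standard_normal(100) for _ in range(20)]
mu_w, Sigma_w = fit_promp(demos)

# Mean trajectory and pointwise variance predicted by the primitive
phi = rbf_features(t)
mean_traj = phi @ mu_w
var_traj = np.einsum('tb,bc,tc->t', phi, Sigma_w, phi)
```

The weight distribution, rather than a single averaged trajectory, is what makes the primitive probabilistic: sampling weights from it generates new, plausible trajectory variants.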
The learned primitives are the core of a contextual probabilistic clustering method that we have developed in the context of T2.4 and T2.5. This segmentation algorithm employs the well-known autoregressive hidden Markov model in an augmented framework that accounts for the context of the environment, i.e. the detected objects towards which the human operator acts. The primitives model the latent dynamics of the human motion, while the probabilistic machinery of hidden Markov models lets us both predict the evolution of the human motion and infer the class of motion segments (trajectories) w.r.t. the context variables. In this direction, we developed a powerful similarity metric that compares task-specific gestures in order to assign classes to the human trajectories. For the real-time execution of our algorithms, we have integrated them into a dedicated ROS package with the respective nodes, which can easily be integrated into the core system of SHAREWORK.
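A minimal sketch of the autoregressive idea behind such a segmentation: each hidden state k carries simple linear dynamics x_t = A_k x_{t-1} + b_k + noise, and the most likely state sequence is recovered with the Viterbi algorithm. The two-regime parameters and the synthetic trajectory are illustrative assumptions; the deliverable’s model additionally conditions the dynamics and transitions on context variables and learns the parameters from demonstrations.

```python
import numpy as np

def ar_loglik(x, A, b, sigma):
    # Log-likelihood of each step x[t-1] -> x[t] under x_t = A x_{t-1} + b + N(0, sigma^2)
    pred = A * x[:-1] + b
    return -0.5 * ((x[1:] - pred) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def viterbi(logB, logPi, logT):
    # Most likely hidden-state sequence given per-step emission log-likelihoods logB (T x K)
    T, K = logB.shape
    delta = logPi + logB[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logT          # scores[i, j]: best path ending in i, moving to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[t]
    states = np.zeros(T, dtype=int)
    states[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):              # backtrack the optimal sequence
        states[t] = back[t + 1, states[t + 1]]
    return states

# Synthetic 1-D motion with two regimes: a slow drift, then a fast approach
rng = np.random.default_rng(1)
slow = np.cumsum(0.01 + 0.005 * rng.standard_normal(60))
fast = slow[-1] + np.cumsum(0.1 + 0.005 * rng.standard_normal(60))
x = np.concatenate([slow, fast])

params = [(1.0, 0.01, 0.02), (1.0, 0.1, 0.02)]  # (A_k, b_k, sigma_k) per hidden state
logB = np.stack([ar_loglik(x, *p) for p in params], axis=1)
logPi = np.log(np.array([0.5, 0.5]))
logT = np.log(np.array([[0.99, 0.01], [0.01, 0.99]]))  # sticky transitions favour long segments
segments = viterbi(logB, logPi, logT)
```

The sticky transition matrix encodes the prior that human motion stays within one primitive for extended stretches, so the decoded state sequence yields contiguous trajectory segments rather than per-sample label noise.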
All of the above contributions are detailed in dedicated sections of this document. Further technical information is available in the relevant disseminated project results, attached as papers in the Appendix.