A fundamental capability for any artificial-intelligence system that interacts with humans is awareness of the people around it: their presence, location and body posture. The technological developments framed under the concept of human tracking aim at developing and perfecting these abilities.
These skills open up a world of possibilities for a robotic system, which can then plan its actions so as not to put the person at risk, whether from crossing the robot's trajectory (i.e. collision avoidance) or from the side effects of the robot's actions (e.g. sparks from a welding process). The robot can also reason about where it should hand over a tool or how to help the worker improve their ergonomics in the tasks they share. Furthermore, these skills are the basis for higher-level perception and knowledge, such as knowing not only the position of the human but also the task they are performing. In this sense, within the Sharework project at Eurecat Technology Center we have developed a module that identifies tasks through Artificial Intelligence (AI).
However, although in everyday life we know, naturally and effortlessly, where the people around us are and in what posture, endowing an artificial system with these abilities is quite a challenge. In fact, human tracking is a research topic to which the scientific community has devoted extensive effort and which has been addressed with a wide variety of approaches, precisely because of the abundance of applications that benefit from such technology.
A STEP FORWARD IN HUMAN TRACKING WITH SHAREWORK
In the Sharework modular solution, human tracking is solved in real time without reflective visual markers or any other device worn on the body. Instead, the Eurecat Robotics and Automation Technology Unit has opted for AI and multiple 3D cameras distributed around the environment, sparing tracked workers the inconvenience of wearing markers, usually attached to a tight black suit, which is incompatible with many industrial processes. Using several viewpoints minimizes occlusions and maximizes accuracy, and the developed solution is hardware-agnostic.
The Sharework human-tracking solution uses AI and 3D cameras to track the humans in the scene without the need for markers or any other wearable hardware
The human-tracking problem is addressed as multiple single-view tracking problems followed by a fusion procedure. Each single-view problem is in turn tackled in two stages: first estimating the 2D pose in the RGB image, and then reconstructing the 3D pose using the camera point cloud, thus breaking the problem into two more easily addressable sub-problems.
The cameras capture colour and depth information of the scene, producing a coloured cloud of 3D points.
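To make the two-stage idea concrete, the sketch below shows how a single camera could be processed. It is a minimal illustration under stated assumptions, not the actual Sharework implementation: it assumes a generic 2D pose estimator passed in as a function (e.g. an OpenPose-style network wrapped in a hypothetical `pose_estimator` callable), an aligned depth image in metres, and known pinhole intrinsics.

```python
import numpy as np

def lift_pose_to_3d(keypoints_2d, depth_image, fx, fy, cx, cy):
    """Lift 2D joint detections to 3D camera coordinates using the
    aligned depth image and the pinhole camera model.

    keypoints_2d: (num_joints, 2) array of pixel coordinates (u, v).
    depth_image:  depth in metres, aligned with the RGB image.
    fx, fy, cx, cy: camera intrinsics.
    """
    joints_3d = []
    for u, v in keypoints_2d.astype(int):
        z = depth_image[v, u]              # depth at the joint pixel
        if z <= 0:                         # missing depth -> mark joint invalid
            joints_3d.append([np.nan] * 3)
            continue
        x = (u - cx) * z / fx              # back-project with the pinhole model
        y = (v - cy) * z / fy
        joints_3d.append([x, y, z])
    return np.asarray(joints_3d)

def track_single_view(rgb_image, depth_image, intrinsics, pose_estimator):
    """Two-stage single-view pipeline: 2D pose estimation, then 3D lifting."""
    skeletons_3d = []
    for keypoints_2d in pose_estimator(rgb_image):   # one entry per detected person
        skeletons_3d.append(lift_pose_to_3d(keypoints_2d, depth_image, *intrinsics))
    return skeletons_3d
```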
Each camera stream is processed individually with state-of-the-art deep-learning techniques to estimate the 3D poses of all the workers in the environment simultaneously. The estimates from the different cameras are then fused and filtered to produce a robust 3D reconstruction of the position and posture of each worker. Note that the fusion algorithm correctly associates the human bodies detected by the different cameras.
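The multi-view fusion step can be illustrated with a simplified sketch; this is only an assumption-laden example, not the project's algorithm. It assumes that the per-camera skeletons have already been transformed into a common world frame via extrinsic calibration, greedily groups detections whose mean joint distance falls below a threshold, and averages each group into a single skeleton (a real system would typically add temporal filtering, e.g. a Kalman filter per joint).

```python
import numpy as np

def fuse_multiview_skeletons(skeletons_world, match_threshold=0.3):
    """Greedily associate and average 3D skeletons seen from several cameras.

    skeletons_world: list of (num_joints, 3) arrays in a common world frame.
    match_threshold: maximum mean joint distance (m) to consider two
                     detections the same person.
    """
    groups = []                                  # each group: detections of one person
    for skel in skeletons_world:
        for group in groups:
            ref = np.nanmean(group, axis=0)      # current estimate of this person
            dist = np.nanmean(np.linalg.norm(skel - ref, axis=1))
            if dist < match_threshold:           # same person seen from another camera
                group.append(skel)
                break
        else:
            groups.append([skel])                # new person
    # Average the views of each person, ignoring joints with missing depth (NaN).
    return [np.nanmean(np.stack(group), axis=0) for group in groups]
```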
The developed software module keeps the tracking information of several operators correctly updated even when they cross paths in front of the cameras.
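Keeping identities consistent over time is essentially a frame-to-frame data-association problem. The sketch below is purely illustrative of that idea: it assumes one fused skeleton per person per frame and matches new detections to existing tracks with the Hungarian algorithm (SciPy's `linear_sum_assignment`) on the mean joint distance.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def update_tracks(tracks, detections, max_distance=0.5):
    """Assign the fused skeletons of the current frame to existing tracks.

    tracks:     dict {track_id: (num_joints, 3) array} from the previous frame.
    detections: list of (num_joints, 3) arrays for the current frame.
    Returns the updated dict; unmatched detections open new tracks.
    """
    if not tracks:
        return {i: det for i, det in enumerate(detections)}

    track_ids = list(tracks)
    # Cost matrix: mean joint distance between every track and every detection.
    cost = np.array([[np.nanmean(np.linalg.norm(tracks[tid] - det, axis=1))
                      for det in detections] for tid in track_ids])
    cost = np.nan_to_num(cost, nan=1e6)          # invalid skeletons never match
    rows, cols = linear_sum_assignment(cost)

    updated, used = {}, set()
    for r, c in zip(rows, cols):
        if cost[r, c] < max_distance:            # accept only plausible matches
            updated[track_ids[r]] = detections[c]
            used.add(c)
    next_id = max(track_ids) + 1
    for c, det in enumerate(detections):
        if c not in used:                        # unmatched detection -> new track
            updated[next_id] = det
            next_id += 1
    return updated
```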
We have wrapped these components in a dedicated ROS package with the corresponding nodes, which can be easily integrated into the core system of Sharework (a minimal publishing sketch is shown after the list below). The resulting human-tracking data is then processed by other modules to infer higher-level knowledge and reason accordingly. The tracking information is used:
- as an input to help recognize the task being performed by the worker, so that the robot knows how to help and how best to adapt to the worker's actions;
- to define a forbidden space for the robot during motion planning, in order to avoid collisions with the worker;
- as input data for analysing human ergonomics and giving health recommendations to the worker.
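As an illustration of the ROS integration, the sketch below shows how tracked joint positions could be published for downstream modules. It is only an assumption about the interface, not the actual Sharework message definitions: it publishes each tracked person's joints as a `geometry_msgs/PoseArray` on a hypothetical `/human_tracking/skeletons` topic.

```python
import rospy
from geometry_msgs.msg import Pose, PoseArray

def publish_skeletons(publisher, tracks, frame_id="world"):
    """Publish a PoseArray containing every tracked joint position."""
    msg = PoseArray()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = frame_id
    for joints in tracks.values():               # one (num_joints, 3) array per person
        for x, y, z in joints:
            pose = Pose()
            pose.position.x, pose.position.y, pose.position.z = float(x), float(y), float(z)
            msg.poses.append(pose)
    publisher.publish(msg)

if __name__ == "__main__":
    rospy.init_node("human_tracking_publisher")
    pub = rospy.Publisher("/human_tracking/skeletons", PoseArray, queue_size=1)
    rate = rospy.Rate(30)                        # publish at the camera frame rate
    while not rospy.is_shutdown():
        tracks = {}                              # here: fused tracks from the pipeline above
        publish_skeletons(pub, tracks)
        rate.sleep()
```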
Over the coming months, the developed human-tracking module will be integrated and validated in the four industrial use-case demonstrators of the Sharework project. For each of them, the location, type and number of cameras will be adapted to the scenario, demonstrating the versatility of the module.
About the authors
Néstor Garcia
He received the B.S. degree (with honors) in Industrial Engineering and the Ph.D. degree (also with honors) in Automatic Control, Robotics and Computer Vision, both from the Polytechnic University of Catalonia (UPC), Barcelona, Spain, in 2015 and 2019 respectively. He worked as a Research Assistant at the Institute of Industrial and Control Engineering (IOC) of the UPC, where he was involved in different R&D projects. He joined Eurecat in 2018, where he is the Principal Investigator of the Collaborative Manipulation research group. His current research interests focus on task and motion planning, human-robot interaction, multi-robot cooperation and learning in robotic systems.
Eurecat is the leading Technology Centre of Catalonia. It provides the industrial and business sector with differential technology and advanced expertise, offers solutions to their innovation needs and boosts their competitiveness in a fast-paced environment. With vast expertise in industrial robotics, Eurecat is the coordinator of the Sharework project and is responsible for the development of the human-tracking modules and the preliminary system integration.