Automation with robots has long been dreamed of, discussed, and deployed across many sectors and applications: vacuum cleaners and other home appliances, logistics systems, welding operations in manufacturing, and more. Along the way, new opportunities have emerged, making work and life more efficient and comfortable.
We are at a time when robots are about to enter our homes, just as computers did a few decades ago. However, robots must first overcome a major barrier: safety for their users. Their general-purpose functionality also exposes them to higher levels of dynamics on home and industrial floors.
Human safety around machines is traditionally ensured by preventing hazardous contact: the machine must be at complete rest before the human can touch it. The presence or position of the human is estimated using sensors, and safety standards provide the means to determine the position of such a sensor with respect to the machine, as illustrated in the diagram below [ISO 13855:2010].
The system parameters that determine this protective separation are the intrusion distance and the reaction time of the controller. The intrusion distance is the linear distance traversed by a human moving towards the machine before their presence is detected. The detection signal is sent to the controller to stop the machine; the delay from the state change of this digital signal until the machine begins to stop is the reaction time of the controller.
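The relationship between these parameters can be sketched as a minimal calculation in the spirit of ISO 13855, which combines an assumed human approach speed with the total response time and the intrusion distance. The function name and the numeric values below are illustrative assumptions, not parameters taken from the standard's tables:

```python
def protective_separation_mm(reaction_time_s: float,
                             stopping_time_s: float,
                             intrusion_dist_mm: float,
                             approach_speed_mm_s: float = 1600.0) -> float:
    """Minimum protective separation distance, S = K * T + C.

    K: assumed human approach (walking) speed in mm/s (1600 mm/s is a
       commonly used illustrative value).
    T: total response time = controller reaction time + machine stopping time.
    C: intrusion distance, i.e. how far a person can advance toward the
       hazard before the sensor registers them.
    """
    total_time_s = reaction_time_s + stopping_time_s
    return approach_speed_mm_s * total_time_s + intrusion_dist_mm

# Example: 100 ms controller reaction, 400 ms machine stopping time,
# 850 mm intrusion allowance
s = protective_separation_mm(0.1, 0.4, 850.0)  # → 1650.0 mm
```

Note how the separation grows with both the reaction time and the stopping time: a slow-stopping machine needs its sensor placed farther away.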
Safety with robots has evolved using similar standards, even though robots were meant to work intuitively beside humans. The safety operation termed speed and separation monitoring determines the size of the protective separation distance around the robot; the robot is slowed down or stopped completely as a human comes within this reach. To make things worse, robots have multiple, moving danger points. The motion of the robot body can be non-linear in nature and depends on various factors such as payload, pose, and the velocity of individual joints, as illustrated in the figure below.
These dynamics become increasingly challenging for large heavy-duty robots, which could otherwise complement human workers particularly well. They can also produce false detections for an operator who is outside the robot's reach. This can be seen in the gif below, which shows a robot moving about a single base joint axis, with the protective separation distance computed only at its hand, shown by the purple sphere. The sphere expands to compensate for the increasing stopping distance as the robot accelerates. This results in a false detection even for a static pillar inside the robot cell that lies outside the robot's actual reach.
Thus, current safety systems cannot be used in close-proximity, interactive environments, as they result in unnecessary stops.
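The expanding-sphere behaviour can be sketched with a simple kinematic model: during the reaction time both the robot and the human keep moving, and afterwards the robot brakes at a constant deceleration. All parameter values below are illustrative assumptions, not measured robot data:

```python
def dynamic_sphere_radius_m(robot_speed_m_s: float,
                            human_speed_m_s: float = 1.6,
                            reaction_time_s: float = 0.1,
                            max_decel_m_s2: float = 2.0,
                            clearance_m: float = 0.2) -> float:
    """Radius of the protective sphere around a monitored robot point.

    Assumed model: the robot travels at its current speed during the
    reaction time, then brakes at max_decel; the human approaches at
    human_speed for the whole stopping duration. A fixed clearance is
    added on top.
    """
    robot_reaction_travel = robot_speed_m_s * reaction_time_s
    robot_braking_travel = robot_speed_m_s ** 2 / (2.0 * max_decel_m_s2)
    stopping_time = reaction_time_s + robot_speed_m_s / max_decel_m_s2
    human_travel = human_speed_m_s * stopping_time
    return (robot_reaction_travel + robot_braking_travel
            + human_travel + clearance_m)
```

Because the braking term grows quadratically with speed, the sphere inflates sharply while the robot accelerates, which is exactly when it can engulf static obstacles such as the pillar and trigger a false stop.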
At Fraunhofer IWU, we developed a new method of collision sensing. It builds on the basic observation that not all voxels in the robot cell carry a uniform collision risk.
We therefore divide collision sensing into two parts. In the first, a small amount of 3D depth data around the robot body is evaluated for separation estimation. Being smaller in size, it requires processing about 51% fewer data points, making the system more reactive. Nevertheless, large robots cannot stop quickly and require some sensing outside these small local areas. This is performed by wide field-of-view, angle-based sparse 3D LiDARs, which have ranges of more than 10 meters at a precision of 2 cm. This global sensing system is integrated with the local one, providing 61% additional range compared to state-of-the-art sensors.
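A minimal sketch of how such a two-tier system might hand control between the local and global sensing, assuming (hypothetically, for illustration) that the local system only reports a distance when the human is inside its near-robot field of view and then takes precedence because it is denser and more reactive:

```python
from enum import Enum
from typing import Optional, Tuple

class SensingTier(Enum):
    LOCAL = "local 3D depth around the robot body"
    GLOBAL = "sparse wide-FoV 3D LiDAR"

def active_tier_and_distance(local_dist_m: Optional[float],
                             global_dist_m: float) -> Tuple[SensingTier, float]:
    """Pick the sensing tier in charge of speed control.

    Hypothetical interface: local_dist_m is None while the human is
    outside the local sensing area; otherwise the local measurement
    overrides the coarser global LiDAR distance.
    """
    if local_dist_m is not None:
        return SensingTier.LOCAL, local_dist_m
    return SensingTier.GLOBAL, global_dist_m
```

This mirrors the division of labour above: the long-range LiDAR covers the cell so that a large robot can start braking early, while the dense local data refines the separation estimate close to the robot body.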
The Sharework collision sensing approach results in a reactive system that adapts intelligently to the presence of a human, accelerating back up to a safe robot velocity depending on which sensing system is in charge of the control.
The black lines in the image represent the depth points from the LiDAR, while the dense point cloud at the robot is estimated by a 3D camera. The robot speed is regulated using the shortest distance between the human and the robot, rather than the mere presence of a human. The operation of the two integrated sensing systems can be seen in the videos below, where the human operator enters the corresponding sensing areas. The sphere represents the shortest distance to the human in global sensing, and the green pixels represent the points used for speed regulation in local sensing.
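Regulating speed by the shortest human-robot distance, rather than by binary presence, can be sketched as a simple linear ramp. The thresholds and maximum speed below are illustrative assumptions, not the actual Sharework parameters:

```python
def regulated_speed(shortest_dist_m: float,
                    stop_dist_m: float = 0.5,
                    full_speed_dist_m: float = 2.5,
                    max_speed_m_s: float = 1.5) -> float:
    """Scale robot speed linearly with the shortest human-robot distance.

    Below stop_dist_m the robot halts; beyond full_speed_dist_m it runs
    at full speed; in between, speed ramps linearly between the two.
    """
    if shortest_dist_m <= stop_dist_m:
        return 0.0
    if shortest_dist_m >= full_speed_dist_m:
        return max_speed_m_s
    frac = (shortest_dist_m - stop_dist_m) / (full_speed_dist_m - stop_dist_m)
    return max_speed_m_s * frac
```

The key design choice is the continuous ramp: instead of a hard stop whenever a human is detected anywhere in the cell, the robot only slows in proportion to how close the person actually is, avoiding the unnecessary stops described earlier.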
The proposed sensing methodology is modular, scalable, and cost-effective for businesses from small workshops to large industries.
About the author
M.Sc. Aquib Rashid
He holds a master's degree focused on hardware-software co-design for safety-critical systems. He is a PhD student in a joint collaboration between Fraunhofer and TU Chemnitz under Prof. Matthias Putz and Prof. Wolfram Hardt. His research appetite was kindled at Bosch, where he analysed and developed high-speed computer vision algorithms on FPGAs. He brought these experiences to industrial and real-world applications at Fraunhofer, where he has been working since 2015. He has led multiple international projects and published several research articles. His research interests involve developing robotic vision methods that enable robots to be used efficiently, enabling agile production and recycling processes with human-robot collaboration.
Fraunhofer IWU Team
Aquib Rashid, Ibrahim Al Naser, Paul Eichler, Sophie Bauer, Jayanto Halim and Mohamad Bdiwi
Fraunhofer IWU www.iwu.fraunhofer.de
The Fraunhofer Institute for Machine Tools and Forming Technology IWU is a leading institute within the Fraunhofer applied research organisation, working on the development of efficient value-chain processes in the machine tool, vehicle, and component production sectors. The institute is in charge of developing tools for real-time human detection and of ensuring system flexibility through human safety and reliable, secure computing architectures.