Ms. Zainab Saleem

PhD Student / PEM Student Researcher

Nanotechnology and Bio-engineering research group

 

Research Topic: Obstacle Detection and Collision Avoidance for Autonomous Cobot Operation in Dynamic Environments


Ms. Saleem completed her Bachelor's and Master's degrees in Computer Engineering at prestigious institutes in Pakistan. During her graduate studies, she worked as a Lab Engineer at the same institute and was an active member of the Advanced Image Processing Research Lab, one of the only labs of its kind in the country.

Her research thesis was titled "Prediction of Microbial Spoilage and Shelf Life of Bakery Items using Hyperspectral Imaging". In this work, microbial spoilage in bakery items was detected, using hyperspectral imaging and computer vision algorithms, almost 24 hours before it became visible to the naked eye. Ms. Saleem has also worked on collaborative projects analysing the quality of meat and detecting adulterants in red chilli using a hyperspectral imaging system.

After completing her master's, she began her PhD at IT Sligo in March 2021 under the supervision of Dr. Marion McAfee, Dr. Saif Huq (London Metropolitan University, UK), Dr. Eoghan Furey (Letterkenny Institute of Technology, Ireland) and Dr. Fredrik Gustafsson (Linköping University, Sweden).

Research Project: 

 

Obstacle Detection and Collision Avoidance for Autonomous Cobot Operation in Dynamic Environments

 

Robots have been used in manufacturing for decades to speed up production and enhance accuracy. Industrial robots traditionally operate inside cages, isolated from humans. The ability to have robots share the workspace and work side-by-side with human co-workers is a key factor in realising the Industry 4.0 concept and is at the core of the smart, flexible factory.

 

The paradigm for robot usage has shifted in recent years, from robots working with complete autonomy to robots cognitively collaborating with human beings. This brings together the best of each partner, robot and human, by combining the coordination, dexterity and cognitive capabilities of humans with the robots' accuracy, agility and ability to perform repetitive work. Some robots substitute for human workers; others, collaborative robots or "cobots", work alongside workers and complement them. To work safely alongside humans, cobots need to be provided with biological-like reflexes, allowing them to circumvent obstacles and avoid collisions. This is essential to give cobots greater autonomy and to minimise the need for human intervention, especially when cobots operate in a dynamic environment and interact or collaborate with human co-workers.

The primary objective of the research is to implement fully autonomous obstacle detection and collision avoidance on a standard robot manipulator. This will effectively transform the manipulator into a fully autonomous cobot that can adaptively plan its own path and so continue its task despite obstacles intruding into its pre-planned trajectory in a dynamic environment. However, given the breadth of the overall research area, the primary focus of the project is to develop an enhanced obstacle detection framework that combines data from multiple sensors through sensor fusion and machine learning techniques. This could pave the way towards fully autonomous cobots for effective human-robot interaction (HRI).
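
As a minimal illustration of the kind of multi-sensor fusion this involves (the sensors, noise figures and function below are hypothetical, not taken from the project), independent range readings of the same obstacle can be combined by inverse-variance weighting, so that the more reliable sensor dominates the fused estimate:

    import numpy as np

    def fuse_ranges(ranges, variances):
        # Inverse-variance weighting: each sensor's reading is weighted
        # by how trustworthy it is, and the fused variance shrinks.
        w = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(w * np.asarray(ranges, dtype=float)) / np.sum(w)
        return fused, 1.0 / np.sum(w)

    # Hypothetical readings: a depth camera (std 1 cm) and an
    # ultrasonic sensor (std 5 cm) ranging the same obstacle (metres).
    fused, var = fuse_ranges([0.82, 0.95], [0.01**2, 0.05**2])
    print(f"fused range: {fused:.3f} m, std: {var**0.5:.3f} m")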

For proper implementation of collision avoidance in an HRI scenario, the motion of the human co-worker needs to be predicted and their configuration captured. Previously, cobots have been equipped with sensors capturing only local information about their environment: ultrasonic sensors, capacitive sensors and laser scanner systems have all been tried in collision avoidance systems. However, the information provided by these sensors does not cover the whole scene, so such systems can make only a limited contribution to safety in human-robot collaboration tasks.
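
One common baseline for this kind of short-term motion prediction, sketched here under the assumption of a constant-velocity model (the time step and noise values are illustrative only, not the project's parameters), is the predict step of a Kalman filter:

    import numpy as np

    dt = 0.1  # illustrative time step between camera frames (s)

    # State [px, py, vx, vy]: position and velocity of the co-worker.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)

    x = np.array([1.0, 0.5, -0.2, 0.0])  # current estimate (m, m/s)
    P = np.eye(4) * 0.05                 # state covariance
    Q = np.eye(4) * 1e-3                 # process noise (tuning value)

    # Propagate 0.5 s ahead; the growing covariance P expresses how
    # uncertain the predicted position becomes, which a planner can
    # use to keep a larger safety margin around the human.
    for _ in range(5):
        x = F @ x
        P = F @ P @ F.T + Q

    print(f"predicted position in 0.5 s: ({x[0]:.2f}, {x[1]:.2f}) m")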

 

This project will address an important gap in the literature in the area of perception of the time-varying environment in which robotic systems operate. While cobots have been equipped with various types of sensors in the past for the detection of obstacles in their path, these do not cover the entire peripheral area and provide only a limited view of the dynamic 3D environment. This has so far hampered the uptake of cobot systems in manufacturing due to the inherent safety risks when interacting with humans.

The main contributions of this project will be:

  1. Development of a sensor network with fusion algorithms for both static and dynamic 3D object detection within a manufacturing environment

  2. Equipping the cobot with its own on-board sensing, so that human operators can work alongside it without any extra safety equipment

  3. Dynamic re-planning of the cobot's path once obstacles are detected (a minimal reactive-avoidance sketch follows this list)
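
For illustration, the sketch below shows one classic reactive scheme for such re-planning: an artificial potential field that attracts the end-effector to its goal while repelling it from detected obstacles. All positions and gains are hypothetical, and the project's actual planner may differ:

    import numpy as np

    def avoidance_velocity(ee, goal, obstacles,
                           k_att=1.0, k_rep=0.05, influence=0.4):
        # Attractive pull towards the goal position.
        v = k_att * (goal - ee)
        for obs in obstacles:
            diff = ee - obs
            d = np.linalg.norm(diff)
            if d < influence:  # repel only when the obstacle is close
                v += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
        return v  # velocity command for the end-effector

    ee = np.array([0.30, 0.00, 0.50])           # end-effector position (m)
    goal = np.array([0.60, 0.20, 0.50])
    obstacles = [np.array([0.45, 0.10, 0.50])]  # e.g. a detected hand
    print(avoidance_velocity(ee, goal, obstacles))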

 


[Image: KUKA iiwa cobot (image from kuka.com)]