
Closing the loop for robotic grasping

Date:
June 25, 2018
Source:
Queensland University of Technology
Summary:
Roboticists have developed a faster and more accurate way for robots to grasp objects, including in cluttered and changing environments, which has the potential to improve their usefulness in both industrial and domestic settings.

Roboticists at QUT have developed a faster and more accurate way for robots to grasp objects, including in cluttered and changing environments, which has the potential to improve their usefulness in both industrial and domestic settings.

  • The new approach lets a robot quickly scan its environment and, from a single depth image, map each pixel it captures to a grasp quality score.
  • Real-world tests achieved grasp success rates of up to 88% for dynamic grasping and up to 92% in static experiments.
  • The approach is based on a Generative Grasping Convolutional Neural Network.

QUT's Dr Jürgen Leitner said that while grasping and picking up an object was a basic task for humans, it had proved incredibly difficult for machines.

"We have been able to program robots, in very controlled environments, to pick up very specific items. However, one of the key shortcomings of current robotic grasping systems is the inability to quickly adapt to change, such as when an object gets moved," Dr Leitner said.

"The world is not predictable -- things change and move and get mixed up and, often, that happens without warning -- so robots need to be able to adapt and work in very unstructured environments if we want them to be effective," he said.

The new method, developed by PhD researcher Douglas Morrison, Dr Leitner and Distinguished Professor Peter Corke from QUT's Science and Engineering Faculty, is a real-time, object-independent grasp synthesis method for closed-loop grasping.

"The Generative Grasping Convolutional Neural Network approach works by predicting the quality and pose of a two-fingered grasp at every pixel. By mapping what is in front of it using a depth image in a single pass, the robot doesn't need to sample many different possible grasps before making a decision, avoiding long computing times," Mr Morrison said.

"In our real-world tests, we achieved an 83% grasp success rate on a set of previously unseen objects with adversarial geometry and 88% on a set of household objects that were moved during the grasp attempt. We also achieve 81% accuracy when grasping in dynamic clutter."

Dr Leitner said the approach overcame a number of limitations of current deep-learning grasping techniques.

"For example, in the Amazon Picking Challenge, which our team won in 2017, our robot CartMan would look into a bin of objects, make a decision on where the best place was to grasp an object and then blindly go in to try to pick it up," he said

"Using this new method, we can process images of the objects that a robot views within about 20 milliseconds, which allows the robot to update its decision on where to grasp an object and then do so with much greater purpose. This is particularly important in cluttered spaces," he said.

Dr Leitner said the improvements would be valuable for industrial automation and in domestic settings.

"This line of research enables us to use robotic systems not just in structured settings where the whole factory is built based on robotic capabilities. It also allows us to grasp objects in unstructured environments, where things are not perfectly planned and ordered, and robots are required to adapt to change.

"This has benefits for industry -- from warehouses for online shopping and sorting, through to fruit picking. It could also be applied in the home, as more intelligent robots are developed to not just vacuum or mop a floor, but also to pick items up and put them away."

The team's paper, "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach", will be presented this week at Robotics: Science and Systems, one of the most selective international robotics conferences, held at Carnegie Mellon University in Pittsburgh, USA.

The research was supported by the Australian Centre for Robotic Vision.


Story Source:

Materials provided by Queensland University of Technology. Note: Content may be edited for style and length.


