Synergy emergence in deep reinforcement motor learning
- Date: March 19, 2020
- Source: Tohoku University
- Summary:
- Human motor control has always been remarkably adept at executing complex movements naturally, efficiently, and with little conscious thought. This is because of the existence of motor synergy in the central nervous system (CNS). Motor synergy allows the CNS to use a smaller set of variables to control a large group of muscles, thereby simplifying the control of coordinated, complex movements.
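The dimensionality-reduction idea behind motor synergy can be illustrated with a short sketch. The toy data below is hypothetical (not from the study): eight "muscle" signals are driven by only two latent synergy signals, and a principal component analysis recovers that low-dimensional structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 "muscles" driven by only 2 latent synergy signals.
T, n_muscles, n_synergies = 500, 8, 2
latent = rng.standard_normal((T, n_synergies))           # synergy activations over time
weights = rng.standard_normal((n_synergies, n_muscles))  # fixed per-muscle weightings
activations = latent @ weights + 0.05 * rng.standard_normal((T, n_muscles))

# PCA via SVD: how much variance do the first two components explain?
X = activations - activations.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = (s**2) / (s**2).sum()

ratio_2 = float(explained[:2].sum())
print(f"variance explained by 2 components: {ratio_2:.3f}")
```

Because the eight signals are generated from two synergies (plus small noise), two components capture nearly all the variance; this is the sense in which a few control variables can stand in for many muscles.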
Now, researchers at Tohoku University have observed a similar concept in robotic agents using deep reinforcement learning (DRL) algorithms.
DRL allows robotic agents to learn the best possible action in their virtual environment. It allows complex robotic tasks to be solved whilst minimising manual intervention and achieving peak performance. Classical algorithms, on the other hand, require manual intervention to find specific solutions for every new task that appears.
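The core reinforcement learning loop, in which an agent improves its actions from reward alone, can be sketched in its simplest tabular form. This is a hypothetical toy example (not the study's deep algorithms): Q-learning on a five-state chain where the agent must discover that moving right reaches the goal.

```python
import random

random.seed(0)
n_states, goal = 5, 4
actions = [-1, +1]  # move left / move right

# Q-table: estimated return for each (state, action) pair
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):  # training episodes
    s = 0
    while s != goal:
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == goal else 0.0
        # Q-learning update: bootstrap from the best next action
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(n_states)]
print(policy)
```

Deep RL algorithms such as those in the study replace the Q-table with neural networks so the same trial-and-error principle scales to high-dimensional robot bodies.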
However, applying motor synergy from the human world to the robotic world is no small task. Even though many studies support the employment of motor synergy in human and animal motor control, the background process is still largely unknown.
In the current study, researchers from Tohoku University applied two DRL algorithms to walking robotic agents known as HalfCheetah and FullCheetah. The two algorithms were TD3 (Twin Delayed Deep Deterministic Policy Gradient), a classical DRL algorithm, and SAC (Soft Actor-Critic), a high-performing one.
The two robotic agents were tasked with running forward as far as possible within a given time. In total, the robotic agents completed 3 million training steps. No synergy information was provided to the DRL algorithms, yet the robotic agents demonstrated the emergence of motor synergy throughout their movements.
Mitsuhiro Hayashibe, Tohoku University professor and co-author of the study, notes, "We first confirmed in a quantitative way that motor synergy can emerge even in deep learning as humans do." Professor Hayashibe adds, "After employing deep learning, the robotic agents improved their motor performances while limiting energy consumption by employing motor synergy."
Going forward, the researchers aim to explore more tasks with different body models to further confirm their findings.
Story Source:
Materials provided by Tohoku University. Note: Content may be edited for style and length.
Journal Reference:
- Jiazheng Chai, Mitsuhiro Hayashibe. Motor Synergy Development in High-Performing Deep Reinforcement Learning Algorithms. IEEE Robotics and Automation Letters, 2020; 5 (2): 1271 DOI: 10.1109/LRA.2020.2968067