
Connective issue: AI learns by doing more with less

Sparsity and energy constraints guide learning and communications in silicon neuronal networks

Date:
August 3, 2021
Source:
Washington University in St. Louis
Summary:
New research reveals constraints can lead to learning in AI systems.

Brains have evolved to do more with less. Take a tiny insect brain, which has fewer than a million neurons yet shows a diversity of behaviors and is more energy-efficient than current AI systems. These tiny brains serve as models for computing systems that are becoming more sophisticated now that billions of silicon neurons can be implemented on hardware.

The secret to achieving energy efficiency lies in the silicon neurons' ability to learn to communicate and form networks, as shown by new research from the lab of Shantanu Chakrabartty, the Clifford W. Murphy Professor in the Preston M. Green Department of Electrical & Systems Engineering at Washington University in St. Louis' McKelvey School of Engineering.

Their results were published July 28, 2021, in the journal Frontiers in Neuroscience.

For several years, his research group has studied dynamical-systems approaches to close the neuron-to-network performance gap and provide a blueprint for AI systems that are as energy efficient as biological ones.

Previous work from his group showed that in a computational system, spiking neurons create perturbations which allow each neuron to "know" which others are spiking and which are responding. It's as if the neurons were all embedded in a rubber sheet formed by energy constraints; a single ripple, caused by a spike, would create a wave that affects them all. Like all physical processes, systems of silicon neurons tend to self-optimize to their least-energetic states, while also being affected by the other neurons in the network. These constraints come together to form a kind of secondary communication network, where additional information can be communicated through the dynamic but synchronized topology of spikes. It's like the rubber sheet vibrating in a synchronized rhythm in response to multiple spikes.

In the latest research result, Chakrabartty and doctoral student Ahana Gangopadhyay showed how the neurons learn to pick the most energy-efficient perturbations and wave patterns in the rubber sheet. They show that if the learning is guided by sparsity (less energy), it's as if the electrical stiffness of the rubber sheet is adjusted by each neuron so that the entire network vibrates in the most energy-efficient way. Each neuron does this using only locally available information, which can be communicated more efficiently. Communication between the neurons then becomes an emergent phenomenon guided by the need to optimize energy use.
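
To make the idea concrete, here is a rough sketch in Python of how a sparsity penalty can thin out activity in a population of coupled units that settle toward a shared energy minimum. This is not the Growth Transform neuron model from the paper; the coupling matrix, penalty weights, and thresholds are invented for illustration. The point is only that each unit can reach the sparse, low-energy configuration using its own locally available signals.

    # Toy sketch only (not the published model): coupled units settle toward the
    # minimum of a shared energy function, and an added sparsity penalty pushes
    # most of them toward silence, so the same input is represented by fewer
    # active units. All numbers below are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                   # size of the toy population
    Q = rng.normal(size=(n, n))
    Q = Q @ Q.T / n + np.eye(n)              # symmetric coupling: the "rubber sheet"
    b = rng.normal(size=n)                   # external drive to each unit

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def settle(sparsity, steps=2000, lr=0.01):
        """Minimize E(v) = 0.5*v'Qv - b'v + sparsity*sum|v| by proximal gradient steps.

        Unit i only needs its own weighted input (row i of Q times v) and its own
        drive b[i] -- locally available information -- to take its step.
        """
        v = np.zeros(n)
        for _ in range(steps):
            v = soft_threshold(v - lr * (Q @ v - b), lr * sparsity)
        return v

    for lam in (0.0, 0.5, 2.0):
        v = settle(lam)
        active = int(np.sum(np.abs(v) > 1e-9))   # units still "speaking" at the minimum
        print(f"sparsity weight {lam:3.1f}: {active:2d} of {n} units active")

Raising the hypothetical sparsity weight leaves fewer units active at the energy minimum, which is the loose analog of a network that spends less energy on communication while still responding to the same input.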

This result could have significant implications for how neuromorphic AI systems might be designed. "We want to learn from neurobiology," Chakrabartty said. "But we want to be able to exploit the best principles from both neurobiology and silicon engineering."

Historically, neuromorphic engineering -- modeling AI systems on biology -- has been based on a relatively straightforward model of the brain. Take some neurons and a few synapses, connect everything together and, voila, it's… if not alive, at least able to perform a simple task (recognizing images, for example) as efficiently as, or more efficiently than, a biological brain. These systems are built by connecting memory (synapses) and processors (neurons), each performing its own single task, as the brain was presumed to work. But this one-structure-to-one-function approach, though easy to understand and model, misses the full complexity and flexibility of the brain.

Recent brain research has shown tasks are not so neatly divided, and there may be instances in which the same function is being performed by different brain structures, or multiple structures working together. "There is more and more information showing that this reductionist approach we've followed might not be complete," Chakrabartty said.

The key to building an efficient system that can learn new things is the use of energy and structural constraints as a medium for computing and communications or, as Chakrabartty said, "Optimization using sparsity."

The situation is reminiscent of the "six degrees of Kevin Bacon" game: The challenge -- or constraint -- is to link any actor to Kevin Bacon through a chain of six or fewer people.

For a neuron physically located on a chip, the challenge -- or constraint -- is to complete its task within an allotted amount of energy. It might be more efficient for one neuron to communicate through intermediaries to reach the destination neuron. The challenge is how to pick the right set of "friend" neurons among the many choices that might be available. Enter energy constraints and sparsity.

Like a tired professor, a system whose energy is constrained will also seek the path of least resistance to complete an assigned task. Unlike the professor, an AI system can test all of its options at once, thanks to superposition techniques developed in Chakrabartty's lab using analog computing methods. In essence, a silicon neuron can attempt all communication routes at once, finding the most efficient way to connect in order to complete the assigned task.
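
As a rough, purely digital illustration of that route-picking problem, the sketch below searches a tiny, made-up graph of neurons for the chain of intermediary "friend" neurons that reaches a destination at the lowest total energy cost, then checks it against an energy budget. The analog superposition described above would, in effect, evaluate such routes in parallel rather than one by one; an ordinary shortest-path search merely stands in for it here, and the graph, link costs, and budget are hypothetical.

    # Hedged illustration: pick the lowest-energy chain of intermediary neurons
    # between a source and a destination, then test it against an energy budget.
    # The graph and costs are invented for the example.
    import heapq

    links = {                       # hypothetical energy cost of each directed link
        "A": {"B": 4.0, "C": 1.0},
        "B": {"D": 1.0},
        "C": {"B": 1.0, "D": 5.0},
        "D": {},
    }

    def cheapest_route(links, source, target):
        """Dijkstra's algorithm: lowest-energy path from source to target."""
        frontier = [(0.0, source, [source])]
        settled = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node in settled:
                continue
            if node == target:
                return cost, path
            settled.add(node)
            for nxt, link_cost in links[node].items():
                if nxt not in settled:
                    heapq.heappush(frontier, (cost + link_cost, nxt, path + [nxt]))
        return float("inf"), None

    energy_budget = 4.0
    cost, path = cheapest_route(links, "A", "D")
    print("cheapest route:", " -> ".join(path), "at energy", cost)   # A -> C -> B -> D, 3.0
    print("within budget" if cost <= energy_budget else "over budget")

In this toy case the roundabout route through two intermediaries costs less energy than the direct-looking one, which is the flavor of the trade-off the silicon neurons are learning to make.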

The current paper shows that a network of 1,000 silicon neurons can accurately detect odors with very few training examples. The long-term goal is to look for analogs in the brain of a locust, which has also been shown to be adept at classifying odors. Chakrabartty has been collaborating with Barani Raman, a professor in the Department of Biomedical Engineering, and Srikanth Singamaneni, the Lilyan & E. Lisle Hughes Professor in the Department of Mechanical Engineering & Materials Science, to create a sort of cyborg locust -- one with two brains, a silicon one connected to the biological one.

"This would be the most interesting and satisfactory aspect of this research if and when we can start connecting the two realms," Chakrabartty said. "Not just physically, but also functionally."


Story Source:

Materials provided by Washington University in St. Louis. Original written by Brandie Jefferson. Note: Content may be edited for style and length.


Journal Reference:

  1. Ahana Gangopadhyay, Shantanu Chakrabartty. A Sparsity-Driven Backpropagation-Less Learning Framework Using Populations of Spiking Growth Transform Neurons. Frontiers in Neuroscience, 2021; 15. DOI: 10.3389/fnins.2021.715451

