
Improving computer vision for AI

Date:
May 27, 2021
Source:
University of Texas at San Antonio
Summary:
Researchers have developed a new method that improves how artificial intelligence learns to see.

Researchers from UTSA, the University of Central Florida (UCF), the Air Force Research Laboratory (AFRL) and SRI International have developed a new method that improves how artificial intelligence learns to see.

Led by Sumit Jha, professor in the Department of Computer Science at UTSA, the team has changed the conventional approach to explaining machine-learning decisions, which relies on a single injection of noise into the input layer of a neural network.

The team shows that adding noise -- also known as pixelation -- along multiple layers of a network provides a more robust representation of an image that's recognized by the AI, and creates more robust explanations for AI decisions. This work aids in the development of what's been called "explainable AI," which seeks to enable high-assurance applications of AI such as medical imaging and autonomous driving.

"It's about injecting noise into every layer," Jha said. "The network is now forced to learn a more robust representation of the input in all of its internal layers. If every layer experiences more perturbations in every training, then the image representation will be more robust and you won't see the AI fail just because you change a few pixels of the input image."

Computer vision -- the ability to recognize images -- has many business applications. It can better identify areas of concern in the livers and brains of cancer patients, and the same type of machine learning can be employed in many other industries. Manufacturers can use it to detect defect rates, drones can use it to help detect pipeline leaks, and agriculturists have begun using it to spot early signs of crop disease to improve their yields.

Through deep learning, a computer is trained to perform behaviors such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through set equations, deep learning works from basic parameters about a data set and trains the computer to learn on its own by recognizing patterns across many layers of processing.

The team's work, led by Jha, is a major advancement over his previous work in this field. In a 2019 paper presented at the AI Safety workshop co-located with that year's International Joint Conference on Artificial Intelligence (IJCAI), Jha, his students and colleagues from Oak Ridge National Laboratory demonstrated how poor natural conditions can lead to dangerous failures in neural network performance. A computer vision system was asked to recognize a minivan on a road, and did so correctly. His team then added a small amount of fog and posed the same query again: the AI identified the minivan as a fountain. The paper was a best-paper candidate at the workshop.

In most models that rely on neural ordinary differential equations (ODEs), a machine is trained with one input through one network, and the signal then spreads through the hidden layers to create one response in the output layer. This team of UTSA, UCF, AFRL and SRI researchers uses a more dynamic approach known as stochastic differential equations (SDEs). Exploiting the connection between dynamical systems and deep networks, they show that neural SDEs lead to less noisy, visually sharper, and more quantitatively robust attributions than those computed using neural ODEs.
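
The contrast can be sketched with simple numerical integration: a neural ODE follows one deterministic trajectory through its hidden state, while a neural SDE adds a Brownian noise term at every integration step, effectively at every layer. The drift network, diffusion scale and step count below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 64))  # drift term
sigma, dt, steps = 0.1, 0.05, 20  # diffusion scale and Euler step size

def neural_ode_forward(h):
    # Deterministic Euler integration: one input, one trajectory.
    for _ in range(steps):
        h = h + f(h) * dt
    return h

def neural_sde_forward(h):
    # Euler-Maruyama integration: a Brownian increment perturbs the
    # hidden state at every step, so the model effectively sees a
    # cloud of nearby trajectories instead of a single one.
    for _ in range(steps):
        dW = torch.randn_like(h) * dt ** 0.5
        h = h + f(h) * dt + sigma * dW
    return h

h0 = torch.randn(8, 64)
out_ode, out_sde = neural_ode_forward(h0), neural_sde_forward(h0)
```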

The SDE approach learns not just from one image but from a set of nearby images, due to the injection of noise into multiple layers of the neural network. As more noise is injected, the machine learns evolving representations and finds better ways to produce explanations, or attributions, because the model is built on the evolving characteristics and conditions of the image. It's an improvement on several other attribution approaches, including saliency maps and integrated gradients.
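
For reference, integrated gradients, one of the attribution baselines mentioned above, can be sketched in a few lines: it averages the model's gradients along a straight path from a baseline image (here, all black) to the input. The model interface and step count are assumptions for illustration.

```python
import torch

def integrated_gradients(model, x, target, steps=50):
    # Approximate the path integral of gradients from baseline to input.
    baseline = torch.zeros_like(x)  # all-black reference image
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[0, target]  # class score at this path point
        score.backward()
        total += point.grad
    # Scale the averaged gradients by the input difference (per pixel).
    return (x - baseline) * total / steps

net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
attr = integrated_gradients(net, torch.rand(1, 1, 28, 28), target=3)
```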

Jha's new research is described in the paper "On Smoother Attributions using Neural Stochastic Differential Equations." Fellow contributors to this novel approach include UCF's Rickard Ewetz, AFRL's Alvaro Velasquez and SRI's Susmit Jha. The lab is funded by the Defense Advanced Research Projects Agency, the Office of Naval Research and the National Science Foundation. The research will be presented at the 2021 IJCAI, a conference with roughly a 14% acceptance rate for submissions; past presenters at this highly selective conference have included researchers from Facebook and Google.

"I am delighted to share the fantastic news that our paper on explainable AI has just been accepted at IJCAI," Jha added. "This is a big opportunity for UTSA to be part of the global conversation on how a machine sees."


Story Source:

Materials provided by University of Texas at San Antonio. Original written by Milady Nazir. Note: Content may be edited for style and length.


