
Using big data to analyze images, video better than the human brain

Date:
March 20, 2017
Source:
Uni Research
Summary:
Improved traffic safety, better health services and environmental benefits: Big Data experts see a wide range of possibilities for advanced image analysis and recognition technology.
FULL STORY

Improved traffic safety, better health services and environmental benefits: Big Data experts see a wide range of possibilities for advanced image analysis and recognition technology.

"Advanced image recognition by computers is the result of a great deal of very demanding work. You have to mimic the way the human brain distinguishes significant from unimportant information," says Eirik Thorsnes at Uni Research in Bergen, Norway.

Thorsnes heads a group within the company's Centre for Big Data Analysis, a focus area that develops strategies for using big data for research and commercial purposes. The Centre also works on developing advanced computing systems that work in the same complex way as the human brain.

In many areas, the human brain's fantastic capacity and working methods will continue to outperform computers, but there are some areas where computers can do things better.

"There has been a tremendous development in recent years, and we are now surpassing the human level in terms of image recognition and analysis. After all, computers never get tired of looking at near-identical images and may be capable of noticing even the tiniest nuances that we humans cannot see. In addition, as it gets easier to analyse large volumes of images and video, many processes in society can be improved and optimised," Thorsnes explains.

Recognise which objects are important

Thorsnes and his colleagues at the Centre for Big Data Analysis predict that image recognition and analysis will become increasingly important in areas such as health care, environmental monitoring, seabed surveys and satellite images.

Using big data in image analysis and recognition requires a combination of good hardware, algorithms (formulae) and software, as well as people who can identify the best approaches.

"The need for this kind of technology will only increase in coming years, but it is not 'plug and play'. Our researchers have developed specialised knowledge about handling huge amounts of data, and thus how essential knowledge can be identified," says Thorsnes.

Researchers in the Uni Research Computing department develop computer systems that learn to recognise objects and to determine which objects in an image are important.

Alla Sapronova is an expert in artificial intelligence, image recognition and machine learning:

"I train computers in the same way we teach children. I show the computer patterns of input signals and tell it what I expect the output signal to be. I repeat this process until the system begins to recognise the patterns. Then I show the computer an input signal, such as an image, that it has not seen before and test whether the system understands what it is," Sapronova explains.

For example, on a relatively simple level, this kind of machine learning has resulted in smile recognition technology for mobile phone cameras.

Autistic children undergoing music therapy

More advanced areas of application include medicine, with analysis of external bodily signs of illness, or the detection of positive/negative situations in consultation with a therapist.

"We have run a pilot project with GAMUT, with analysis of video footage of autistic children undergoing music therapy. Normally, the therapist would have to spend hours reviewing the footage to identify the exact moment that best reveals the status or progress of the patient. However, if we teach a computer what constitutes an interesting moment, it will be able to find and select them, although to date computers cannot rank them. There is great potential for further development in a subsequent project," says Thorsnes.

In another project, the researchers used a publicly available webcam at Danmarksplass, Bergen's busiest road intersection, as a starting point to teach computers to register how many and what types of vehicles passed through the junction during the course of the day.

This allows identification of traffic patterns, which can then be used in planning and decision-making. In addition, the air quality at Danmarksplass is at times very poor in winter, and Thorsnes envisages that better mapping of the traffic could also provide a basis for environmental improvements.
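To make the vehicle-counting step concrete, here is a hedged sketch of how a single webcam frame could be analysed with an off-the-shelf pretrained object detector. The model choice (a torchvision Faster R-CNN trained on COCO), the file name "webcam_frame.jpg" and the 0.7 confidence threshold are assumptions for illustration only; the article does not describe the project's actual pipeline.

```python
# Illustrative sketch: counting vehicle types in one webcam frame with a
# pretrained detector. Model choice, file name and threshold are assumptions;
# this is not the pipeline used in the Danmarksplass project.
from collections import Counter

import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# COCO category ids for the vehicle classes of interest.
VEHICLE_CLASSES = {3: "car", 4: "motorcycle", 6: "bus", 8: "truck"}

# Pretrained detector (COCO weights) in inference mode.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical saved webcam frame, converted to a float tensor in [0, 1].
frame = convert_image_dtype(read_image("webcam_frame.jpg"), torch.float)

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep confident detections of vehicle classes and tally them by type.
counts = Counter(
    VEHICLE_CLASSES[label.item()]
    for label, score in zip(detections["labels"], detections["scores"])
    if score > 0.7 and label.item() in VEHICLE_CLASSES
)
print(dict(counts))  # e.g. {'car': 12, 'bus': 1}
```

Run over frames sampled throughout the day, counts like these would yield the kind of traffic pattern described above.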

However, he believes that at the current time image analysis has the greatest potential in improving traffic safety, which is basically a matter of monitoring selected stretches of roads or tunnels. Computers could detect a range of different situations, including cars travelling in the wrong direction, fire, abandoned cars, people inside tunnels, etc.

"It will also be possible to get computers to monitor slopes susceptible to landslides along major roads, and teach the computers to recognise which changes in the landscape might imply an increased risk of a landslide," says Thorsnes.

Monitor the incidence of escapees from fish farms

Uni Research Computing and the Centre for Big Data Analysis, headed by research director Klaus Johannsen, have also worked on a project mapping the movements of salmon and trout at the mouth of a river. This work was done in collaboration with another department in the company, Uni Research Environment.

"A camera was installed at the mouth of the river, and the computer was trained to record what kind of fish passed, and whether it was a wild fish or a farmed fish. In this way, we can monitor the incidence of escapees from fish farms, among other things," says Thorsnes.

Part of the reason that detection technology has made such good headway in recent years is what Thorsnes calls a rediscovery of algorithms for artificial intelligence.

The industry's needs and some good old artificial intelligence ideas found one another at the same time as massive computing power and sophisticated graphics processors from the gaming industry became available for use in analyses.

"Traditionally, these kinds of analyses have been carried out by people who have to sit and watch hours of video footage, for example medical analysis or traffic in tunnels," says Thorsnes.

The algorithms that have enjoyed something of a renaissance come from what is now called 'deep learning': advanced processors and access to interesting material now provide enough computing power to train more advanced and 'deeper' algorithms.


Story Source:

Materials provided by Uni Research. Note: Content may be edited for style and length.


