
A cautionary tale of machine learning uncertainty

Date:
March 10, 2022
Source:
Springer
Summary:
A new analysis shows that researchers using machine learning methods could risk underestimating uncertainties in their final results.
FULL STORY


The Standard Model of particle physics offers a robust theoretical picture of the fundamental particles, and most of the fundamental forces, that compose the universe. All the same, there are several aspects of the universe that the model can't explain, from the existence of dark matter to the oscillating nature of neutrinos, suggesting that the mathematical descriptions it provides are incomplete. While experiments so far have been unable to identify significant deviations from the Standard Model, physicists hope that these gaps could start to appear as experimental techniques become increasingly sensitive.

A key element of these improvements is the use of machine learning algorithms, which can automatically improve upon classical techniques by using higher-dimensional inputs and extracting patterns from many training examples. Yet in a new analysis published in EPJ C, Aishik Ghosh at the University of California, Irvine, and Benjamin Nachman at the Lawrence Berkeley National Laboratory, USA, show that researchers using machine learning methods could risk underestimating uncertainties in their final results.

In this context, machine learning algorithms can be trained to identify particles and forces in the data collected by experiments, such as high-energy collisions within particle accelerators, and to flag new particles that don't match the theoretical predictions of the Standard Model. To train these algorithms, physicists typically use simulations of experimental data based on advanced theoretical calculations; the trained algorithms can then classify particles in real experimental data.
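
To make this workflow concrete, here is a minimal sketch in Python. It uses scikit-learn and invented toy Gaussian 'events' in place of real simulated collisions; the two-feature setup and all names are illustrative assumptions, not the tools or data used in the paper.

```python
# Minimal sketch: train a classifier on labelled *simulated* events,
# then apply it to unlabelled "experimental" data. Toy Gaussians stand
# in for real detector-level observables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def simulate(n, mean):
    """Toy event generator: two features per event."""
    return rng.normal(loc=mean, scale=1.0, size=(n, 2))

# Labelled training sample from the simulation: signal vs. background.
sig, bkg = simulate(5000, +0.5), simulate(5000, -0.5)
X_train = np.vstack([sig, bkg])
y_train = np.concatenate([np.ones(len(sig)), np.zeros(len(bkg))])

clf = GradientBoostingClassifier().fit(X_train, y_train)

# Apply the trained classifier to (mock) experimental data, whose true
# labels would be unknown in a real analysis.
X_data = simulate(2000, +0.5)
signal_scores = clf.predict_proba(X_data)[:, 1]  # per-event signal probability
print(f"mean signal score on 'data': {signal_scores.mean():.3f}")
```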

These training simulations may be incredibly accurate, but even so, they can only ever approximate what would actually be observed in an experiment. As a result, researchers need to estimate the possible differences between their simulations and nature itself, giving rise to theoretical uncertainties. In turn, these differences can weaken or even bias a classifier's ability to identify fundamental particles.
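
A common way to quantify such a theoretical uncertainty, sketched below under the same toy assumptions, is to re-evaluate the trained classifier on a simulation generated with plausibly varied settings and quote the resulting performance shift. The 'varied' setting here is invented purely for illustration and is not the authors' procedure.

```python
# Sketch: estimate a theory uncertainty as the performance shift between
# the nominal simulation and a plausibly varied one (toy data throughout).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_sample(n, sep):
    """Toy simulation; `sep` stands in for an uncertain theory setting."""
    sig = rng.normal(+sep, 1.0, size=(n, 2))
    bkg = rng.normal(-sep, 1.0, size=(n, 2))
    return np.vstack([sig, bkg]), np.concatenate([np.ones(n), np.zeros(n)])

# Train on the nominal simulation only.
X_train, y_train = make_sample(5000, sep=0.5)
clf = GradientBoostingClassifier().fit(X_train, y_train)

def auc(sep):
    X, y = make_sample(4000, sep)
    return roc_auc_score(y, clf.predict_proba(X)[:, 1])

# The spread between nominal and varied settings is the quoted uncertainty.
auc_nominal, auc_varied = auc(0.5), auc(0.4)
print(f"estimated theory uncertainty on AUC: {abs(auc_nominal - auc_varied):.3f}")
```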

Recently, physicists have increasingly explored machine learning approaches that are insensitive to these estimated theoretical uncertainties. The idea is to decorrelate the performance of the algorithms from imperfections in the simulations. If this could be done effectively, it would allow for algorithms whose uncertainties are far lower than those of traditional classifiers trained on the same simulations. But as Ghosh and Nachman argue, estimating theoretical uncertainties essentially involves well-motivated guesswork, making it crucial for researchers to be cautious about this insensitivity.
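
One simple decorrelation strategy, sketched below under the same toy assumptions, is data augmentation: train on a mixture of the nominal simulation and its estimated variation, so the classifier cannot lean on features that differ between the two. (Adversarial training is another common choice, not shown; the toy distributions are invented and do not reproduce the paper's setup.)

```python
# Sketch of decorrelation by augmentation: mix the nominal simulation
# with its estimated variation during training (toy data throughout).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_sample(n, smear):
    """Toy simulation; `smear` plays the role of an uncertain setting."""
    sig = rng.normal(+0.5, smear, size=(n, 2))
    bkg = rng.normal(-0.5, smear, size=(n, 2))
    return np.vstack([sig, bkg]), np.concatenate([np.ones(n), np.zeros(n)])

# Augmented training set: nominal simulation plus its estimated variation.
X_nom, y_nom = make_sample(5000, smear=1.0)
X_var, y_var = make_sample(5000, smear=1.2)
clf = GradientBoostingClassifier().fit(
    np.vstack([X_nom, X_var]), np.concatenate([y_nom, y_var]))

# The spread between these two numbers is what would now be quoted as
# the residual theory uncertainty.
for name, smear in [("nominal", 1.0), ("varied", 1.2)]:
    X, y = make_sample(4000, smear)
    print(f"AUC on {name}: {roc_auc_score(y, clf.predict_proba(X)[:, 1]):.3f}")
```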

In particular, the duo argues there is a real danger that these techniques will simply deceive the unsuspecting researcher by reducing only the estimate of the uncertainty, rather than the true uncertainty. A machine learning procedure that is insensitive to the estimated theory uncertainty may not be insensitive to the actual difference between nature and the approximations used to simulate the training data. This in turn could lead physicists to artificially underestimate their theory uncertainties if they aren't careful. In high-energy particle collisions, for example, it could cause a classifier to incorrectly confirm the presence of certain fundamental particles.
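
The sketch below extends the previous one (repeating its setup so it runs on its own) to illustrate this failure mode under invented assumptions: the spread across the estimated variation is small, yet the same classifier degrades far more on a 'nature' sample that differs in a direction nobody thought to vary.

```python
# Toy illustration of the cautionary tale: small quoted uncertainty,
# large unquoted degradation on "nature" (all distributions invented).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def make_sample(n, smear=1.0, sep=0.5):
    """Toy events. `smear` is the setting we remembered to vary;
    `sep` hides a mismodelling nobody thought to vary."""
    sig = rng.normal(+sep, smear, size=(n, 2))
    bkg = rng.normal(-sep, smear, size=(n, 2))
    return np.vstack([sig, bkg]), np.concatenate([np.ones(n), np.zeros(n)])

# Train on the mixture of nominal and varied simulations, as before.
Xa, ya = make_sample(5000, smear=1.0)
Xb, yb = make_sample(5000, smear=1.2)
clf = GradientBoostingClassifier().fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

def auc(**kwargs):
    X, y = make_sample(4000, **kwargs)
    return roc_auc_score(y, clf.predict_proba(X)[:, 1])

# The quoted uncertainty (spread across the estimated variation) is small...
print(f"AUC nominal:     {auc(smear=1.0):.3f}")
print(f"AUC variation:   {auc(smear=1.2):.3f}")
# ...but nature can differ in an unvaried direction, where the classifier
# degrades far more than the quoted uncertainty suggests.
print(f"AUC on 'nature': {auc(sep=0.2):.3f}")
```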

In presenting this 'cautionary tale', Ghosh and Nachman hope that future assessments of the Standard Model that use machine learning will not be caught out by incorrectly shrunken uncertainty estimates. This could enable physicists to better ensure the reliability of their results, even as experimental techniques become ever more sensitive. In turn, it could pave the way for experiments that finally reveal long-awaited gaps in the Standard Model's predictions.


Story Source:

Materials provided by Springer. Note: Content may be edited for style and length.


Journal Reference:

  1. Aishik Ghosh, Benjamin Nachman. A cautionary tale of decorrelating theory uncertainties. The European Physical Journal C, 2022; 82 (1) DOI: 10.1140/epjc/s10052-022-10012-w

