
When to trust an AI model

More accurate uncertainty estimates could help users decide how and when to use machine-learning models in the real world

Date:
July 12, 2024
Source:
Massachusetts Institute of Technology
Summary:
A new technique enables huge machine-learning models to efficiently generate more accurate quantifications of their uncertainty about certain predictions. This could help practitioners determine whether to trust the model when it is deployed in real-world settings.

Because machine-learning models can give false predictions, researchers often equip them with the ability to tell a user how confident they are about a certain decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.

But a model’s uncertainty quantifications are only useful if they are accurate. If a model says it is 49% confident that a medical image shows a pleural effusion, then 49% of the time, the model should be right.
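As a rough illustration of what that means in practice, the sketch below (not from the researchers' code; the function name, data, and numbers are hypothetical) bins a set of predictions by their stated confidence and checks whether the observed accuracy in each bin roughly matches it.

    import numpy as np

    def calibration_by_bin(confidences, correct, n_bins=10):
        # Group predictions by stated confidence and compare each bin's average
        # confidence with its observed accuracy; for a well-calibrated model the
        # two numbers should roughly match in every bin.
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        report = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences >= lo) & (confidences < hi)
            if mask.any():
                report.append((lo, hi, confidences[mask].mean(),
                               correct[mask].mean(), int(mask.sum())))
        return report

    # Simulated example: predictions stated near 50 percent confidence should be
    # right about half the time.
    rng = np.random.default_rng(0)
    conf = rng.uniform(0.40, 0.60, size=1000)
    outcomes = rng.random(1000) < conf
    for lo, hi, avg_conf, acc, n in calibration_by_bin(conf, outcomes):
        print(f"[{lo:.1f}, {hi:.1f}): stated {avg_conf:.2f} vs observed {acc:.2f} (n={n})")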

MIT researchers have introduced a new approach that can improve uncertainty estimates in machine-learning models. Their method not only generates more accurate uncertainty estimates than other techniques, but does so more efficiently.

In addition, because the technique is scalable, it can be applied to huge deep-learning models that are increasingly being deployed in health care and other safety-critical situations.

This technique could give end users, many of whom lack machine-learning expertise, better information they can use to determine whether to trust a model’s predictions or if the model should be deployed for a particular task.

“It is easy to see these models perform really well in scenarios where they are very good, and then assume they will be just as good in other scenarios. This makes it especially important to push this kind of work that seeks to better calibrate the uncertainty of these models to make sure they align with human notions of uncertainty,” says lead author Nathan Ng, a graduate student at the University of Toronto who is a visiting student at MIT.

Ng wrote the paper with Roger Grosse, an assistant professor of computer science at the University of Toronto; and senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems. The research will be presented at the International Conference on Machine Learning.

Quantifying uncertainty

Uncertainty quantification methods often require complex statistical calculations that don’t scale well to machine-learning models with millions of parameters. These methods also require users to make assumptions about the model and data used to train it.

The MIT researchers took a different approach. They use what is known as the minimum description length principle (MDL), which does not require the assumptions that can hamper the accuracy of other methods. MDL is used to better quantify and calibrate uncertainty for test points the model has been asked to label.

The technique the researchers developed, known as IF-COMP, makes MDL fast enough to use with the kinds of large deep-learning models deployed in many real-world settings.

MDL involves considering all possible labels a model could give a test point. If there are many alternative labels for this point that fit well, its confidence in the label it chose should decrease accordingly.

“One way to understand how confident a model is would be to tell it some counterfactual information and see how likely it is to believe you,” Ng says.

For example, consider a model that says a medical image shows a pleural effusion. If the researchers tell the model this image shows an edema, and it is willing to update its belief, then the model should be less confident in its original decision.

With MDL, if a model is confident when it labels a datapoint, it should use a very short code to describe that point. If it is uncertain about its decision because the point could have many other labels, it uses a longer code to capture these possibilities.

The amount of code used to label a datapoint is known as stochastic data complexity. If the researchers ask the model how willing it is to update its belief about a datapoint given contrary evidence, the stochastic data complexity should decrease if the model is confident.
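The paper defines this quantity formally; the toy sketch below only mirrors the intuition described above, treating the negative log-probability of a label as its code length and letting the complexity grow with how strongly the model can be talked into counterfactual labels. The functions and numbers are illustrative assumptions, not the researchers' implementation.

    import numpy as np

    def code_length_bits(prob):
        # MDL intuition: a label the model assigns high probability can be encoded
        # with few bits; an unlikely label needs a longer code.
        return -np.log2(prob)

    def toy_stochastic_complexity(prob_after_update):
        # prob_after_update[k] is the probability the model assigns to label k
        # after being told, counterfactually, that k is the correct answer.
        # A confident model barely budges, so the sum stays near 1 and the extra
        # code length stays near 0 bits; a pliable model believes every
        # alternative and the complexity grows toward log2(number of classes).
        return np.log2(np.sum(prob_after_update))

    # Hypothetical post-update probabilities for a three-class problem.
    confident = np.array([0.98, 0.03, 0.02])   # refuses to believe the wrong labels
    uncertain = np.array([0.90, 0.85, 0.80])   # willing to believe almost anything

    for name, probs in [("confident", confident), ("uncertain", uncertain)]:
        total_bits = code_length_bits(probs[0]) + toy_stochastic_complexity(probs)
        print(f"{name}: about {total_bits:.2f} bits to describe the chosen label")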

But testing each datapoint using MDL would require an enormous amount of computation.

Speeding up the process

With IF-COMP, the researchers developed an approximation technique that can accurately estimate stochastic data complexity using a special function, known as an influence function. They also employed a statistical technique called temperature-scaling, which improves the calibration of the model’s outputs. This combination of influence functions and temperature-scaling enables high-quality approximations of the stochastic data complexity.
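Neither ingredient is spelled out in the article, but the hypothetical sketch below shows their general shape: an influence-function step that approximates how the parameters would shift if a counterfactually labeled point were added (an inverse-Hessian-times-gradient product), and a temperature parameter that rescales logits before probabilities are read off. The function names and numbers are assumptions for illustration, not IF-COMP itself.

    import numpy as np

    def influence_shift(grad_counterfactual, hessian, damping=1e-3):
        # Classic influence-function approximation: rather than retraining after
        # adding a point with a counterfactual label, estimate the parameter change
        # as -H^{-1} g, where g is the loss gradient for the relabeled point and H
        # is the (damped) Hessian of the training loss at the current parameters.
        damped = hessian + damping * np.eye(hessian.shape[0])
        return -np.linalg.solve(damped, grad_counterfactual)

    def temperature_scaled_probs(logits, temperature):
        # Temperature scaling: divide logits by a scalar T (fit on held-out data)
        # before the softmax so the resulting probabilities are better calibrated.
        z = logits / temperature
        z = z - z.max()
        exp_z = np.exp(z)
        return exp_z / exp_z.sum()

    # Hypothetical numbers: a two-parameter model and one counterfactual gradient.
    H = np.array([[2.0, 0.3], [0.3, 1.5]])
    g = np.array([0.4, -0.2])
    print("approximate parameter shift:", influence_shift(g, H))
    print("calibrated probabilities:",
          temperature_scaled_probs(np.array([3.0, 1.0, 0.5]), temperature=2.0))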

In the end, IF-COMP can efficiently produce well-calibrated uncertainty quantifications that reflect a model’s true confidence. The technique can also determine whether the model has mislabeled certain data points or reveal which data points are outliers.

The researchers tested their system on these three tasks (producing calibrated uncertainty estimates, detecting mislabeled data points, and identifying outliers) and found that it was faster and more accurate than other methods.

“It is really important to have some certainty that a model is well-calibrated, and there is a growing need to detect when a specific prediction doesn’t look quite right. Auditing tools are becoming more necessary in machine-learning problems as we use large amounts of unexamined data to make models that will be applied to human-facing problems,” Ghassemi says.

IF-COMP is model-agnostic, so it can provide accurate uncertainty quantifications for many types of machine-learning models. This could enable it to be deployed in a wider range of real-world settings, ultimately helping more practitioners make better decisions.

“People need to understand that these systems are very fallible and can make things up as they go. A model may look like it is highly confident, but there are a ton of different things it is willing to believe given evidence to the contrary,” Ng says.

In the future, the researchers are interested in applying their approach to large language models and studying other potential use cases for the minimum description length principle.


Story Source:

Materials provided by Massachusetts Institute of Technology. Original written by Adam Zewe. Note: Content may be edited for style and length.


Journal Reference:

1. Nathan Ng, Roger Grosse, Marzyeh Ghassemi. Measuring Stochastic Data Complexity with Boltzmann Influence Functions. arXiv, 2024. DOI: 10.48550/arXiv.2406.02745

Cite This Page:

Massachusetts Institute of Technology. (2024, July 12). When to trust an AI model. ScienceDaily. Retrieved November 18, 2024 from www.sciencedaily.com/releases/2024/07/240712222151.htm
