
How personalized algorithms trick your brain into wrong answers

Algorithms can make you confidently wrong while believing you’ve learned more than you actually have.

Date:
November 25, 2025
Source:
Ohio State University
Summary:
Personalized algorithms may quietly sabotage how people learn, nudging them into narrow tunnels of information even when they start with zero prior knowledge. In the study, participants using algorithm-curated clues explored less, absorbed a distorted version of the truth, and became oddly confident in their wrong conclusions. The research suggests that this kind of digital steering doesn’t just shape opinions—it can reshape the very foundation of what someone believes they understand.

The personalized recommendation systems that curate content on platforms such as YouTube may also interfere with how people learn, according to new research. The study found that when an algorithm decided which information appeared during a learning task, participants who had no background knowledge on the topic tended to focus on only a small portion of what they were shown.

Because they explored less of the available material, these participants often answered questions incorrectly during later tests. Despite being wrong, they expressed high confidence in their responses.

These outcomes raise concerns, said Giwon Bahg, who conducted the work as part of his doctoral dissertation in psychology at The Ohio State University.

Algorithms Can Create Bias Even Without Prior Knowledge

Much of the existing research on personalized algorithms examines how they influence opinions about politics or social issues that people already know at least something about.

"But our study shows that even when you know nothing about a topic, these algorithms can start building biases immediately and can lead to a distorted view of reality," said Bahg, now a postdoctoral scholar at Pennsylvania State University.

The findings appear in the Journal of Experimental Psychology: General.

Brandon Turner, a study co-author and professor of psychology at Ohio State, said the results indicate that people may quickly take the limited information supplied by algorithms and draw broad, often unfounded conclusions.

"People miss information when they follow an algorithm, but they think what they do know generalizes to other features and other parts of the environment that they've never experienced," Turner said.

A Movie Recommendation Example

To illustrate how this bias might emerge, the researchers described a simple scenario: a person who has never watched movies from a certain country decides to try some. An on-demand streaming service offers recommendations.

The viewer selects an action-thriller because it appears at the top of the list. The algorithm then promotes more action-thrillers, which the viewer continues to choose.

"If this person's goal, whether explicit or implicit, was in fact to understand the overall landscape of movies in this country, the algorithmic recommendation ends up seriously biasing one's understanding," the authors wrote.

By seeing only one genre, the person may overlook strong films in other categories. They may also form inaccurate, overly broad assumptions about the culture or society represented in those movies, the authors noted.

Testing Algorithmic Effects With Fictional Creatures

Bahg and his research team explored this idea experimentally with 346 online participants. To ensure that no one brought in prior knowledge, the researchers used a completely fictional learning task.

Participants studied several types of crystal-like aliens, each defined by six features that varied across categories. For instance, one square-shaped part of the alien might appear dark in some types and pale gray in others.

The objective was to learn how to identify each alien type without knowing how many types existed.

How the Algorithm Guided Learning

In the experiment, the aliens' features were concealed behind gray boxes. In one condition, participants were required to click all the features to see a complete set of information for each alien.

In another condition, participants chose which features to examine, and a personalization algorithm prioritized the features each participant was most likely to sample, steering them toward repeatedly examining the same ones over time. They could still view any feature they wanted, but they were also free to skip others entirely.
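The study does not spell out the algorithm's exact rule, but the dynamic it describes, where past viewing choices make the same features more likely to surface again, can be sketched with a toy "rich-get-richer" simulation. Everything here is an illustrative assumption: the sample counts, the weighting rule, and the `concentration` measure are hypothetical stand-ins, not the researchers' actual method.

```python
import random

NUM_FEATURES = 6   # each alien type had six features, as in the study
NUM_SAMPLES = 60   # assumed number of feature views per simulated learner

def sample_views(personalized, seed):
    """Return per-feature view counts for one simulated learner.

    If personalized is True, a toy rich-get-richer rule reweights
    features by how often they have already been viewed; otherwise
    every feature is equally likely on each draw.
    """
    rng = random.Random(seed)
    counts = [0] * NUM_FEATURES
    for _ in range(NUM_SAMPLES):
        if personalized:
            weights = [c + 1 for c in counts]  # past views boost future odds
        else:
            weights = [1] * NUM_FEATURES      # flat, exploratory sampling
        feature = rng.choices(range(NUM_FEATURES), weights=weights)[0]
        counts[feature] += 1
    return counts

def concentration(counts):
    """Share of all views captured by the single most-viewed feature."""
    return max(counts) / sum(counts)

# Averaged over many simulated learners, the personalized loop funnels
# attention into fewer features than flat sampling does.
pers = sum(concentration(sample_views(True, s)) for s in range(200)) / 200
flat = sum(concentration(sample_views(False, s)) for s in range(200)) / 200
print(f"personalized: {pers:.2f}, flat: {flat:.2f}")
```

Under this sketch, the personalized condition consistently concentrates views on a narrow subset of features, mirroring the "patterned, selective" sampling the researchers observed.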

The results showed that those guided by the personalized algorithm viewed fewer features overall and did so in a patterned, selective manner. When they were later tested on new alien examples they had never seen before, they frequently sorted them incorrectly. Even so, participants remained confident in their answers.

"They were even more confident when they were actually incorrect about their choices than when they were correct, which is concerning because they had less knowledge," Bahg said.

Implications for Children and Everyday Learning

Turner noted that these findings carry real-world significance.

"If you have a young kid genuinely trying to learn about the world, and they're interacting with algorithms online that prioritize getting users to consume more content, what is going to happen?" Turner said.

"Consuming similar content is often not aligned with learning. This can cause problems for users and ultimately for society."

Vladimir Sloutsky, professor of psychology at Ohio State, was also a co-author.


Story Source:

Materials provided by Ohio State University. Note: Content may be edited for style and length.


Journal Reference:

  1. Giwon Bahg, Vladimir M. Sloutsky, Brandon M. Turner. Algorithmic personalization of information can cause inaccurate generalization and overconfidence. Journal of Experimental Psychology: General, 2025; 154 (9): 2503 DOI: 10.1037/xge0001763

Cite This Page:

Ohio State University. "How personalized algorithms trick your brain into wrong answers." ScienceDaily. ScienceDaily, 25 November 2025. <www.sciencedaily.com/releases/2025/11/251125081912.htm>.
