
Seen and 'herd': Collective motion in crowds is largely determined by participants' field of vision

Date:
March 22, 2022
Source:
Brown University
Summary:
Researchers have developed a new model to predict human flocking behavior based on optics and other sensory data.

Like flocks of birds or schools of fish, crowds of humans also tend to move en masse -- almost as if they're thinking as one. Scientists have proposed different theories to explain this type of collective pedestrian behavior.

A new model from researchers at Brown University takes the point of view of an individual crowd member, and is remarkably accurate at predicting actual crowd flow, its developers say.

The model, described in a Proceedings of the Royal Society B paper, illustrates the role of visual perception in crowd movement. It shows how crowd members who are visible from a participant's viewpoint determine how that participant follows the crowd and what path they take.

That approach is a departure from previous models, which operate from the point of view of an "omniscient observer," said study author William H. Warren, a professor of cognitive, linguistic and psychological sciences at Brown. In other words, the movement of the crowd was analyzed in prior studies from the perspective of someone observing the crowd from a distance.

"Most omniscient models were based on physics -- on forces of attraction and repulsion -- and didn't fully explain why humans in a group interact in the way that they do," Warren said.

In a series of experiments led by members of Warren's lab, in which researchers tracked the movements of people wearing virtual reality headsets, the team was able to predict an individual's movement based on their view of a virtual crowd.

"We are the first group to provide a sensory basis for this type of coordinated movement," Warren said. "The model provides a better understanding of what individuals in a crowd are experiencing visually, so we can make better predictions about how an entire group of people will behave."

Warren said that models of crowd movement have a variety of applications and can be used to inform the design of public spaces, transportation infrastructure and emergency response plans.

Tracking an individual to understand a group

In human crowds, as in many other animal groups, "flocking" behavior emerges from interactions between individuals, Warren explained. Understanding these interactions involves identifying rules of engagement that govern how an individual responds to their neighbors in a crowd, and how multiple neighbors are combined.

To produce a realistic individual trajectory of movement, the team conducted experiments through Warren's Virtual Environment Navigation Lab (VENLab). Study participants in a large open room wore virtual reality headsets that showed animated 3D virtual humans who were manipulated to move in different ways -- for example, some people within the virtual crowd might turn in one direction, while others continued as a group. The participants were instructed to walk with the crowd, while researchers tracked their movements and their path.

The researchers knew from their previous work on pairs of pedestrians that a follower tends to match the walking direction and speed of a leader. From their new experiments, they found that each pedestrian controls their walking direction and speed by using two visual motions. First, they turn so as to cancel the sideways motion of neighbors in their field of view. Second, they adjust their speed to cancel the visual expansion and contraction of neighbors, which occurs when a neighbor gets closer or farther away. By using these two variables to control walking, they end up matching the average direction and speed of the crowd.
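In control terms, this amounts to a simple nulling strategy. The sketch below is a hypothetical illustration of that idea, not the authors' published equations: the gain constants, body width and function names are assumptions. For each visible neighbor it computes the neighbor's sideways image motion and the expansion or contraction of its image, turns and accelerates to cancel them, and averages the responses across neighbors.

```python
import numpy as np

# Hypothetical sketch of a nulling strategy for following a crowd.
# Gains and body width are assumed placeholders, not published values.
K_HEADING = 1.0       # assumed gain on a neighbor's angular (sideways) motion
K_SPEED = 1.0         # assumed gain on a neighbor's optical expansion
NEIGHBOR_WIDTH = 0.5  # assumed shoulder width (m), sets the image's visual angle

def optical_variables(self_pos, self_vel, nbr_pos, nbr_vel):
    """Return (angular velocity, expansion rate) of one neighbor's image."""
    rel_pos = np.asarray(nbr_pos) - np.asarray(self_pos)
    rel_vel = np.asarray(nbr_vel) - np.asarray(self_vel)
    dist = np.linalg.norm(rel_pos)
    # Sideways motion: rate of change of the neighbor's bearing direction.
    ang_vel = (rel_pos[0] * rel_vel[1] - rel_pos[1] * rel_vel[0]) / dist**2
    # Expansion: the image grows as the distance to the neighbor shrinks.
    radial_speed = np.dot(rel_pos, rel_vel) / dist   # > 0 means moving apart
    expansion = -NEIGHBOR_WIDTH * radial_speed / dist**2
    return ang_vel, expansion

def steer(self_pos, self_vel, neighbors):
    """Average the two nulling responses over all visible neighbors.

    neighbors: list of (position, velocity) pairs for visible crowd members.
    Returns (heading change rate, speed change rate).
    """
    turn, accel = 0.0, 0.0
    for nbr_pos, nbr_vel in neighbors:
        ang_vel, expansion = optical_variables(self_pos, self_vel, nbr_pos, nbr_vel)
        turn += K_HEADING * ang_vel    # turn with the image's drift to null sideways motion
        accel += -K_SPEED * expansion  # slow down if the image expands, speed up if it contracts
    n = max(len(neighbors), 1)
    return turn / n, accel / n
```

Averaging the responses over visible neighbors is what produces the matching of the crowd's mean heading and speed described above.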

They also found that pedestrian participants responded less to virtual humans that were farther away, as might be expected -- but that falloff was driven by two visual factors, Warren said: the laws of optics (objects that are farther away produce smaller visual motions) and the principle of occlusion (neighbors who are farther away are likely to be partially blocked by nearer neighbors, making them harder to see -- and harder to follow).

Previous models had taken into account the effect of distance on crowd behavior, but not from a visual perspective. "We found that responses decrease with distance for two reasons that weren't previously fully understood or appreciated," Warren said, "and they both have to do with who the people in the crowd can see."
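As a rough illustration of the occlusion factor, the hypothetical helper below weights each neighbor by the unoccluded fraction of its visual angle; the optics factor (smaller visual motions at greater distances) already falls out of the distance-squared terms in the sketch above. The body width and the single-occluder simplification are assumptions for illustration, not part of the published model.

```python
import numpy as np

BODY_WIDTH = 0.5  # assumed body width in meters

def visibility_weights(self_pos, neighbor_positions):
    """Fraction of each neighbor's visual angle not blocked by nearer neighbors.

    Simplifications: each neighbor is a flat interval of visual angle, only the
    single widest occluder counts, and angle wraparound at +/- pi is ignored.
    """
    info = []
    for pos in neighbor_positions:
        rel = np.asarray(pos) - np.asarray(self_pos)
        dist = np.linalg.norm(rel)
        bearing = np.arctan2(rel[1], rel[0])
        half_angle = np.arctan2(BODY_WIDTH / 2, dist)  # half the image's angular size
        info.append((dist, bearing - half_angle, bearing + half_angle))

    weights = []
    for i, (dist_i, lo_i, hi_i) in enumerate(info):
        blocked = 0.0
        for j, (dist_j, lo_j, hi_j) in enumerate(info):
            if j == i or dist_j >= dist_i:
                continue  # only nearer neighbors can occlude this one
            overlap = min(hi_i, hi_j) - max(lo_i, lo_j)
            blocked = max(blocked, max(overlap, 0.0))
        total = hi_i - lo_i
        weights.append(max(total - blocked, 0.0) / total)
    return weights
```

These weights could then scale each neighbor's contribution before averaging, so that mostly hidden neighbors influence the walker less, consistent with the occlusion effect described above.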

When the researchers used the experiment results to create a new theory of collective motion, it successfully predicted individual trajectories in both virtual crowd experiments and real crowd data.

Warren explained that people in a group use visual information to guide their own walking -- to turn left or right, or speed up or slow down to avoid collisions. The way they use that information to control their movements is referred to as the visual coupling, he said. The other individuals in the group are also behaving according to the same principles.

Collective behavior in online crowds and virtual spaces

Warren added that the findings from case studies like this could be extrapolated to other situations in which people or animals unconsciously coordinate their behavior -- such as on social media.

Instead of being visually coupled as in a crowd in a public space, people in social networks are electronically coupled through the internet. In both situations, there is the same strong tendency for a person to imitate others around them, and follow those who are moving in a similar direction (ideologically as well as physically). But, as Warren and Brown graduate student Trent Wirth found in other experiments, when one group starts to diverge too much from a person's current "direction," the person will reject that group and follow another group moving in a less divergent direction.

"The visual network among people in a crowd isn't that dissimilar from a social network on social media, in terms of how people are interacting," he said. "You see analogous kinds of consensus and polarization."

Warren said that future studies from his lab will continue to explore crowd networks and collective decision-making, especially how groups decide to split or bifurcate to take different paths in physical space or in an online social network.

"There are all sorts of decisions being made at the individual level, but also collectively in groups," Warren said. "Our new study is just one case study of this self-organized collective behavior."

The paper is based on an idea developed by Gregory Dachner, a co-author of the study who earned his Ph.D. at Brown in 2020.

This research was supported by the National Institutes of Health (grants R01EY010923, R01EY029745 and T32EY018080) and the National Science Foundation (grants BCS-1431406 and BCS-1849446).


Story Source:

Materials provided by Brown University. Note: Content may be edited for style and length.


Journal Reference:

  1. Gregory C. Dachner, Trenton D. Wirth, Emily Richmond, William H. Warren. The visual coupling between neighbours explains local interactions underlying human 'flocking'. Proceedings of the Royal Society B: Biological Sciences, 2022; 289 (1970). DOI: 10.1098/rspb.2021.2089

