
Linguistic Research Moving In New Direction

Date:
March 10, 2005
Source:
University Of Arizona
Summary:
Some linguistics researchers are applying larger scientific principles that describe natural forces to the study of language. This represents a major shift in linguistics research done over the last several decades.


A new strand of research uses the principle of "self organization," a concept used in studying all kinds of complex systems, from thunderstorms to the human immune system, and not just language. Self-organization, in a nutshell, is when a system evolves a large structure from repeated small-scale interactions between its smaller elements, says Andrew Wedel, an assistant professor of linguistics at the University of Arizona in Tucson.

"Sand dunes in the desert or ripples at the bottom of a streambed come about from the air or water flowing over them and the way individual grains of sand happen to bounce against one another," Wedel said.

"No individual sand grain knows that it is part of a sand dune or streambed. It is these repeated, small-scale interactions that, over time, result in this big, global structure that has a lot of order but isn't preprogrammed into the sand grains in any direct sense."

Some parts of language structure, says Wedel, may be the dunes and the individual sand grains may be the countless conversations carried on between people and parents teaching their children for millennia. All of this cycling, says Wedel, is a prerequisite for self-organization.

For several decades, the dominant model in linguistics has assumed that human brains come built with a genetically specified 'universal grammar', and that the features of human languages derive from variations in how this universal grammar mechanism can operate. No matter how complicated or mutually unintelligible they are, all languages still follow basic guidelines common to them all.

He and others hypothesize that, rather than preprogramming language as directly as the universal grammar hypothesis proposes (the model for linguistics since the 1960s), genes instead set up more abstract conditions for language, and that self-organization then creates many of the subsequent patterns and structures. That might explain why linguists can find strong tendencies within the sweep of the world's languages, but fewer absolute truths that run through all of them.

"It is relatively hard to find true universals that every language exhibits – as when we say, if a language does 'this,' then it always does 'that,'" Wedel says. "If you look very hard, you can often find some group of people somewhere speaking a language that does it differently.

"The universal grammar hypothesis has to get more complicated if you say it can have lots of exceptions like that. However, this a pattern of tendencies, with exceptions, is straightforward under the SO hypothesis. Because you expect it. Structure formation is a stochastic process, random. It's all about tendencies, it's not about absolutes. You only find hard absolutes when you run up against an absolute structural inability to do a particular thing."

To test this, Wedel has devised an experiment with several computers that "speak" to one another. Using basic assumptions about human tendencies drawn from psycholinguistics and cognitive science, Wedel gives his computer "people" a small lexicon of words they can say and has them take turns saying them.

Their language is built out of sounds drawn from a vowel continuum that starts with "eee," which is made with the jaw closed, and runs down to "aaah," which is made with the jaw open. As the jaw lowers from "eee," the vowel changes, eventually approaching "aaah." Each sound on this continuum is assigned a number between zero and one, or the equivalent percentage.
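To make that idea concrete, here is a minimal sketch, not Wedel's actual code, of how a vowel on that continuum can be encoded as a number and produced with a little articulatory noise; the noise level and the clamping to the 0-to-1 range are assumptions for illustration.

```python
import random

def produce(vowel_target, noise=0.05):
    """Produce a vowel from the "eee"-to-"aaah" continuum.

    vowel_target: intended position in [0, 1], where 0.0 is roughly "eee"
    (jaw closed) and 1.0 is roughly "aaah" (jaw open). Articulation is
    noisy, so the realized sound drifts slightly from the target.
    """
    realized = vowel_target + random.gauss(0, noise)
    return min(1.0, max(0.0, realized))  # clamp to the continuum

# A word is then just a short sequence of continuum positions.
word = [0.1, 0.8]                    # roughly "ee-ah"
spoken = [produce(v) for v in word]
print(spoken)                        # e.g. [0.07, 0.83] -- a noisy realization
```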

At the beginning of these simulations, the computers start with random words made out of random combinations of vowels and don't understand what the others are saying, much like babies trying to understand and mimic speech for the first time.

"I do give the simulation some 'innate tools,'" says Wedel, "but they are general, rather than being tailored to solve the problems of these computer speakers, as one can argue 'universal grammar' would for humans. For example, I give them a mechanism for recognizing and categorizing sounds, but I don't give them anything that attempts to monitor words or sounds to make sure they are distinct from one another."

"These basic biases in categorization behavior could be innate. It could be part of our basic machinery for looking at the world," Wedel says. "And it might be language-specific and it might not be language-specific, but it is less highly specified than whether an adjective comes before the noun or after the noun."

After running for thousands of "conversations," the system begins to develop characteristics of human language that the simulations didn't begin with. The computers begin to recognize each other's sounds and agree on their meanings.

Eventually they develop a common vocabulary of words that mean certain things. Significantly, he says, they develop vocabularies that don't have homophones, words for different objects that sound the same.
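One way to check that property on a toy lexicon like the one sketched above is simply to count pairs of meanings whose forms are too close to tell apart; the tolerance value here is an arbitrary assumption.

```python
def count_near_homophones(lexicon, tolerance=0.05):
    """Count pairs of meanings whose vowel forms are too close to distinguish."""
    items = list(lexicon.items())
    clashes = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if abs(items[i][1] - items[j][1]) < tolerance:
                clashes += 1
    return clashes

# A hypothetical evolved lexicon: each meaning has drifted to a distinct form.
evolved = {"water": 0.12, "fire": 0.55, "tree": 0.91}
print(count_near_homophones(evolved))   # 0 -- no two meanings sound alike
```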

Homophones that do occur, such as the English word "bank," survive because context tells them apart, so that in the course of a real conversation a listener will know whether the word means the place that holds money or the edge of a river. Homophony or near-homophony in the same context, on the other hand, can cause difficulty in communication.

"For example, one bad place we have gotten ourselves in English are the '-teens' and '-ty's. People are mishearing these all the time. 'Did you say fourteen or forty?' That is exactly that is that sort of near homophony that is a real problem," he says.

In addition, he says, languages around the world - and this is a universal - are built out of reusable parts: an inventory, or set, of reusable sounds that make up words. Humans have the biological equipment to produce all kinds of sounds - consonants, vowels, clicks, whistles and so on - but each language settles on only a few.

That is also what Wedel's simulations do. They could use the entire vowel continuum from "eee" to "aaah," but they evolve toward the minimum number of sounds they need to express their set of meanings. Which sounds they settle on is also random, much the way English vowels differ from German or Spanish vowels. "That is one of the big results of these simulations. They develop just like a real language in these respects, but without any direct instructions to do so," Wedel says.
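A rough way to measure such an emergent inventory in a simulation of this kind is to group the vowel values the agents actually use and count the groups; the gap threshold below is an assumption for illustration, not a parameter from Wedel's work.

```python
def inventory_size(values, gap=0.1):
    """Count distinct vowel 'categories' by grouping values that sit
    within `gap` of each other on the 0-to-1 continuum."""
    categories = 0
    previous = None
    for v in sorted(values):
        if previous is None or v - previous > gap:
            categories += 1      # far enough from the last group: a new sound
        previous = v
    return categories

# All vowel values actually used across an agent's evolved lexicon.
used = [0.11, 0.13, 0.52, 0.55, 0.90]
print(inventory_size(used))      # 3 -- a small, reused set of sounds
```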

"This is one of the outcomes of this kind of work," Wedel says. "The basic 'universal grammar' model says that the lexicon and grammar algorithm are entirely separate from one another. They don't feed back to one another, they don't communicate much. This particular model on the other hand suggests that the particular features of a lexicon may influence how grammar evolves, and vice versa. Whether this turns out to be true, and to what degree, will require a lot more research."

"I think there is a big shift from the explanation from a single level, advocated by Noam Chomsky, that one grammar algorithm is coded in our genes, to a more layered set of explanations where structure gradually emerges in layers, over time through many cycles of talking and learning," he said.

"Languages are the ripples in the dunes and the grains of sand are our conversations, generations talking to each other and learning things and slowly creating these larger ripples in time."


Story Source:

Materials provided by University Of Arizona. Note: Content may be edited for style and length.


