How sign language users learn intonation
- Date: September 28, 2015
- Source: Linguistic Society of America
- Summary: A spoken language is more than just words and sounds. Speakers use changes in pitch and rhythm, known as prosody, to provide emphasis, show emotion, and otherwise add meaning to what they say. In a new study, three linguists look at intonation (a key part of prosody) in ASL and find that native ASL signers learn intonation in much the same way that users of spoken languages do.
A spoken language is more than just words and sounds. Speakers use changes in pitch and rhythm, known as prosody, to provide emphasis, show emotion, and otherwise add meaning to what they say. But a language does not need to be spoken to have prosody: sign languages, such as American Sign Language (ASL), use movements, pauses and facial expressions to achieve the same goals. In a study appearing in the September 2015 issue of Language, three linguists look at intonation (a key part of prosody) in ASL and find that native ASL signers learn intonation in much the same way that users of spoken languages do.
Diane Brentari (University of Chicago), Joshua Falk (University of Chicago), and George Wolford (Purdue University) studied how deaf children (ages 5-8) who were native learners of ASL used intonational features such as 'sign lengthening' and facial cues as they acquired the language. They found that children acquire these features in three stages of "appearance, reorganization, and mastery": first replicating them accurately in simpler contexts, then attempting them, unsuccessfully at first, in more challenging contexts, and finally using them accurately in all contexts once they have fully learned the rules of prosody. Previous research has shown that native learners of spoken languages acquire intonation following a similar pattern. Brentari et al. also found that young ASL signers use certain intonational features with different frequencies than adult ASL signers.
This study, "The acquisition of prosody in American Sign Language," is the first comparative analysis of prosody in ASL between children and adults who are native ASL signers, and helps demonstrate the similarities in language acquisition between signed and spoken languages. This research may also make it easier to accurately transcribe certain linguistic units of ASL, which could benefit automatic ASL translation through motion-capture software. Brentari et al.'s research was supported by grants from the National Science Foundation and the University of Chicago's Center for Gesture, Sign, and Language.
Story Source:
Materials provided by Linguistic Society of America. Note: Content may be edited for style and length.
Journal Reference:
- Diane Brentari, Joshua Falk, George Wolford. The acquisition of prosody in American Sign Language. Language, 2015; 91(3): e144. DOI: 10.1353/lan.2015.0042