New study suggests speakers of different languages perceive rhythm differently

Do the sounds of our native languages affect how we hear music and other non-language sounds? A team of American and Japanese researchers has found evidence that native languages influence the way people group non-language sounds into rhythms.

People in different cultures perceive different rhythms in identical sequences of sound, according to Drs. John R. Iversen and Aniruddh D. Patel of The Neuroscience Institute in San Diego and Dr. Kengo Ohgushi of the Kyoto City University of Arts in Kyoto, Japan. This provides evidence that exposure to certain patterns of speech can influence one’s perceptions of musical rhythms. In future work, they believe they may even be able to predict how people will hear rhythms based on the structures of their own languages. The researchers will present their findings Nov. 30 at the Fourth Joint Meeting of the Acoustical Society of America (ASA) and the Acoustical Society of Japan (ASJ), which will be held at the Sheraton Waikiki and Royal Hawaiian Hotels in Honolulu, Hawaii. The meeting will run from Nov. 28 through Dec. 2, and more than 1600 papers will be presented.

Researchers have traditionally tested how individuals group rhythms by playing simple sequences of tones. For example, listeners are presented with tones that alternate in loudness (…loud-soft-loud-soft…) or duration (…long-short-long-short…) and are asked to indicate their perceived grouping. Two principles established a century ago, and confirmed in numerous studies since, are widely accepted: a louder sound tends to mark the beginning of a group, and a lengthened sound tends to mark the end of a group. These principles have come to be viewed as universal laws of perception, underlying the rhythms of both speech and music. However, the cross-cultural data have come from a limited range of cultures, such as American, Dutch and French.
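To make the paradigm concrete, here is a minimal Python sketch of how an alternating-duration tone sequence of this kind might be synthesized. It is illustrative only, not the researchers' actual stimuli; the tone frequency, durations, and gap length are assumptions rather than reported values.

```python
# Illustrative sketch (not the study's actual stimuli): synthesize a repeating
# alternating-duration tone sequence of the kind used in grouping experiments.
# Frequency, durations, and gap length below are assumptions, not reported values.
import numpy as np
import wave

SR = 44100               # sample rate (Hz)
FREQ = 500.0             # tone frequency (Hz), assumed
SHORT, LONG = 0.2, 0.4   # alternating tone durations (s), assumed
GAP = 0.1                # silence between tones (s), assumed

def tone(duration):
    """Return a sine tone of the given duration at FREQ."""
    t = np.arange(int(SR * duration)) / SR
    return 0.5 * np.sin(2 * np.pi * FREQ * t)

silence = np.zeros(int(SR * GAP))

# ...short-long-short-long... with silent gaps; listeners report whether they
# hear repeating "short-long" or "long-short" groups.
pair = np.concatenate([tone(SHORT), silence, tone(LONG), silence])
sequence = np.tile(pair, 10)

with wave.open("alternating_duration.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SR)
    f.writeframes((sequence * 32767).astype(np.int16).tobytes())
```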

This new study suggests one of those so-called “universal” principles, perceiving a longer sound at the end of a group, may be merely a byproduct of English and other Western languages. In the experiments Iversen, Patel and Ohgushi performed, native speakers of Japanese and native speakers of American English both conformed to the loudness principle, hearing repeating “loud-soft” groups. However, the listeners showed a sharp difference when it came to the duration principle. English-speaking listeners most often perceived alternating short and long tones as repeating “short-long” groups. Japanese-speaking listeners, albeit with more variability, were more likely to perceive the tones as “long-short.” Since this finding was surprising and contradicted a widely held belief about perception, the researchers replicated and confirmed it with listeners from different parts of Japan.

One clue to why these differences exist may come from how musical phrases begin in the two cultures. For example, if most phrases in American music start with a short-long pattern, and most phrases in Japanese music start with a long-short pattern, then listeners might learn to use these patterns as cues for grouping sounds. To test this idea, the researchers examined phrase beginnings in American and Japanese children’s songs, 50 songs per culture. For each opening phrase they computed the duration ratio of the first note to the second note and counted how often phrases started with a short-long pattern versus other possibilities such as long-short or equal duration (a minimal sketch of this kind of tally appears below). They found that American songs showed no bias toward starting phrases with a short-long pattern, but Japanese songs did show a bias toward starting phrases with a long-short pattern, consistent with the perceptual findings.
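The sketch below shows one way such a classification and tally could be done. The phrase data, duration units, and the tolerance used to call two durations “equal” are hypothetical; this is not the authors' analysis code.

```python
# Illustrative sketch (not the authors' analysis): classify the start of each
# song phrase by the duration ratio of its first note to its second note.
from collections import Counter

def classify_phrase_start(durations, tol=0.05):
    """Label a phrase as 'short-long', 'long-short', or 'equal' from its
    first two note durations (in beats or seconds). tol is an assumed
    tolerance for treating the two durations as equal."""
    first, second = durations[0], durations[1]
    ratio = first / second
    if abs(ratio - 1.0) <= tol:
        return "equal"
    return "short-long" if ratio < 1.0 else "long-short"

# Hypothetical phrase-initial note durations (in beats) for a handful of songs.
phrases = [
    [0.5, 1.0, 1.0, 0.5],   # starts short-long
    [1.0, 0.5, 0.5, 1.0],   # starts long-short
    [1.0, 1.0, 0.5, 0.5],   # starts with equal durations
]

counts = Counter(classify_phrase_start(p) for p in phrases)
print(counts)   # e.g. Counter({'short-long': 1, 'long-short': 1, 'equal': 1})
```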

One basic difference between English and Japanese is word order. In English, short grammatical, or “function,” words such as “the,” “a,” and “to” come at the beginning of phrases and combine with longer meaningful, or “content,” words such as nouns or verbs. Function words are typically reduced in speech, having short duration and low stress. This creates frequent linguistic chunks that start with a short element and end with a long one, such as “to eat” and “a big desk.” This fact about English has long been exploited by poets in creating the English language’s most common verse form, iambic pentameter.

Japanese, in contrast, places function words at the ends of phrases. Common function words in Japanese include “case markers,” or short sounds which can indicate whether a noun is a subject, direct object, indirect object, etc. For example, in the sentence “John-san-ga Mari-san-ni hon-wo agemashita,” (“John gave a book to Mari”) the suffixes “ga,” “ni” and “wo” are case markers indicating that John is the subject, Mari is the indirect object and “hon” (book) is the direct object. Placing function words at the ends of phrases creates frequent chunks that start with a long element and end with a short one, which is just the opposite of the rhythm of short phrases in English.

In addition to potentially uncovering a new link between language and music, the researchers’ work demonstrates there is a need for cross-cultural research when it comes to testing general principles of auditory perception.
