Speech perception involves multiple senses, not just hearing

Washington, February 12: While people generally think of speech as something they hear, a new report suggests that speech perception involves multiple senses.

Published in the journal Current Directions in Psychological Science, the report says that the brain treats speech as something that people hear, see, and even feel.

Psychologist Lawrence D. Rosenblum of the University of California, Riverside, says that people receive a great deal of speech information through visual cues such as lip-reading, and that this kind of visual speech occurs across all cultures.

He says that listeners pick up not only information from a speaker's lips but also the movements of the teeth, tongue, and other non-mouth facial features.

The researcher says that human speech perception has likely evolved to integrate multiple senses; that is, speech is meant not just to be heard but also to be seen.

The McGurk Effect is a well-characterized example of the integration between what we see and what we hear when someone is speaking to us.

Rosenblum points out that the phenomenon occurs when a sound is dubbed over a video of a face making a different sound: for example, the audio may be playing "ba" while the face appears to be saying "va".

When confronted with this, we will usually hear "va" or a combination of the two sounds, such as "da."

The McGurk Effect occurs even when participants are aware of the dubbing or told to concentrate only on the audio.

Rosenblum says that this is evidence that once the senses are integrated, they cannot be separated.

According to him, recent studies show that this integration occurs very early in the speech process, even before the basic units of speech are established.

Rosenblum suggests that the physical movements of speech, those of the mouth and lips, create acoustic and visual signals with a similar form.

He argues that, as far as the speech brain is concerned, auditory and visual information are never really separate. This, he says, is why people integrate speech so readily, to the point that the audio and visual speech signals become indistinguishable from one another.

Rosenblum concludes that visual-speech research has a number of clinical implications, especially in the areas of autism, brain injury and schizophrenia and that "rehabilitation programs in each of these domains have incorporated visual-speech stimuli." (ANI)