Emotion 2004, Vol. 4, No. 1, 46–64

Copyright 2004 by the American Psychological Association, Inc. 1528-3542/04/$12.00 DOI: 10.1037/1528-3542.4.1.46

Decoding Speech Prosody: Do Music Lessons Help?
William Forde Thompson, E. Glenn Schellenberg, and Gabriela Husain
University of Toronto at Mississauga

Three experiments revealed that music lessons promote sensitivity to emotions conveyed by speech prosody. After hearing semantically neutral utterances spoken with emotional (i.e., happy, sad, fearful, or angry) prosody, or tone sequences that mimicked the utterances’ prosody, participants identified the emotion conveyed. In Experiment 1 (n = 20), musically trained adults performed better than untrained adults. In Experiment 2 (n = 56), musically trained adults outperformed untrained adults at identifying sadness, fear, or neutral emotion. In Experiment 3 (n = 43), 6-year-olds were tested after being randomly assigned to 1 year of keyboard, vocal, drama, or no lessons. The keyboard group performed equivalently to the drama group and better than the no-lessons group at identifying anger or fear.

In the past 10 years, the possibility of links between musical and nonmusical domains has generated excitement among researchers and the popular press. One line of research concerns short-term benefits in nonmusical domains that occur as a consequence of passive listening to music. In two widely cited studies (Rauscher, Shaw, & Ky, 1993, 1995), listening to music composed by Mozart led to temporary improvements in spatial abilities. As an instance of crossmodal priming, this result is remarkable because the priming stimulus (music) is seemingly unrelated to the task being primed (a spatial task). Subsequent research indicates, however, that the so-called Mozart effect has nothing to do with Mozart in particular or with music in general (Nantais & Schellenberg, 1999).

Author note: William Forde Thompson, E. Glenn Schellenberg, and Gabriela Husain, Department of Psychology, University of Toronto at Mississauga, Mississauga, Ontario, Canada. This research was supported by the International Foundation for Music Research. We thank the Royal Conservatory of Music for their cooperation in providing the lessons, Laura-Lee Balkwill for recording and providing the Tagalog sentences, Jane Campbell and Will Huggon for recruiting and testing the children, Patrik Juslin and Uli Schimmack for helpful comments on an earlier version of this article, and Raul Dudnic, Doug Gifford, Vlad Kosarev, and Cory Sand for technical assistance. Correspondence concerning this article should be addressed to William Forde Thompson, Department of Psychology, University of Toronto at Mississauga, Mississauga, Ontario, Canada L5L 1C6. E-mail: b.thompson@utoronto.ca

Rather, the apparent benefits of listening to music result from differences in mood and arousal induced by the testing conditions (Chabris, 1999; Thompson, Schellenberg, & Husain, 2001). Listening to fast-tempo music in a major key induces positive moods and arousal levels that facilitate performance on spatial tasks (Husain, Thompson, & Schellenberg, 2002; Thompson et al., 2001).

A second line of research concerns long-term effects of formal training in music. The cognitive implications of taking music lessons are distinct from the short-term effects of music listening (Schellenberg, 2003). Whereas transient effects of music listening on spatial abilities are said to be instances of priming (Rauscher et al., 1993, 1995; Shaw, 2000), beneficial effects of music lessons on nonmusical abilities are best classified as positive transfer effects. Transfer occurs when previous experience in problem solving makes solving a new problem easier (positive transfer) or more difficult (negative transfer). In our view, the issue of whether music lessons yield positive transfer effects is an open question that is amenable to empirical investigation. Although transfer effects depend critically on similarity between the training and transfer contexts (Barnett & Ceci, 2002; Postman, 1971), previous studies of transfer between music lessons and nonmusical skills have focused on domains that are not linked closely to music. For example, positive associations with music lessons have been reported for measures of general intelligence (Lynn, Wilson, & Gault, 1989; Schellenberg, in press), symbolic reasoning (Gromko & Poorman, 1998), reading (Lamb & Gregory, 1993), mathematical ability (Gardiner, Fox, Knowles, & Jeffrey, 1996), verbal recall (Ho, Cheung, & Chan, 2003;
Kilgour, Jakobson, & Cuddy, 2000), and spatial ability (for review, see Hetland, 2000). If music lessons were the actual source of these effects, the findings would represent instances of transfer between highly dissimilar contexts and domains, which are rare (Barnett & Ceci, 2002; Detterman, 1993). Although there is suggestive evidence that some of these associations are mediated by temporal-order processing skills (Jakobson, Cuddy, & Kilgour, 2003), the designs of most of these studies were correlational, which precludes determination of causal relations (for exceptions, see Gardiner et al., 1996; Schellenberg, in press). In the present investigation, we predicted that formal training in music enhances listeners’ ability to decode emotions conveyed by prosody in speech. Our prediction was motivated by theoretical and empirical links between music and emotion, similar links between speech prosody and emotion, and features and processes common to music and speech prosody. Because music and speech prosody represent two domains that are auditory, communicative, and linked with emotion, transfer effects between domains are much more likely than those noted above. Our prediction is also relevant to present conceptions of emotional intelligence (Mayer, Caruso, & Salovey, 1999; Mayer, Salovey, Caruso, & Sitarenios, 2001). Emotional intelligence consists of the following skills (ordered from lowest to highest level): (a) perceiving emotions, (b) using emotions to facilitate thought, (c) understanding and reasoning about emotions, and (d) managing emotions in self and others. We examined the possibility of transfer effects between music lessons and the lowest (most basic) level of the emotional intelligence hierarchy.

Emotion and Music
Several theorists have attempted to explain why music evokes emotional responses (for reviews, see Cook & Dibben, 2001; Davies, 2001; Scherer & Zentner, 2001; Sloboda & Juslin, 2001). Langer (1957) argued that music involves a number of dynamic patterns (i.e., motion and rest, tension and release, agreement and disagreement, and sudden or surprising change), which are inherently linked to emotion. Meyer (1956; see also Gaver & Mandler, 1987) claimed that violations of listeners’ musical expectations are arousing, which, in turn, leads to emotional responding. Cooke (1959) considered music to be a language of the emotions. He suggested that specific emotions are associated with particular melodic intervals or patterns.

Similarly, drawing from speculations by Darwin (1872) and others, Kivy (1980) argued that properties of music such as tempo, mode, and melodic motion resemble human emotional displays.

Empirical evidence of links between music and emotion is compelling and widespread (for a review, see Juslin & Sloboda, 2001). Early research by Hevner (1935a, 1935b, 1936, 1937) illustrated that listeners associate specific emotions with basic characteristics of music, such as tempo and pitch height. For example, music played at a fast tempo is labeled exciting and happy, whereas music played at a slow tempo is perceived to be serene and dreamy. By attending to such basic characteristics, listeners are able to judge the emotional meaning of music from unfamiliar cultures (Balkwill & Thompson, 1999). Even children are sensitive to the emotions conveyed by music (Cunningham & Sterling, 1988; Dalla Bella, Peretz, Rousseau, & Gosselin, 2001; Kratus, 1993; Terwogt & Van Grinsven, 1991). When presented with classical music that conveys one of four emotions (happiness, sadness, fear, or anger), 5-year-olds, 10-year-olds, and adults can decode the appropriate emotion, although happiness and sadness are easier to identify than fear and anger (Terwogt & Van Grinsven, 1991). The advantage for decoding happiness and sadness over fear and anger is relatively widespread in studies with musical stimuli (e.g., Bunt & Pavlicevic, 2001; Terwogt & Van Grinsven, 1988, 1991; Thompson & Robitaille, 1992), but contextual factors play a role. For example, anger is expressed well on an electric guitar but not on a flute (Gabrielsson & Juslin, 1996). More generally, music’s emotional connotations are dependent on a combination of factors, including instrumentation, musical structure, and performance expression (Gabrielsson & Lindström, 2001; Juslin, 1997, 2001). As one would expect, the ability to identify emotions expressed by music improves with age (Terwogt & Van Grinsven, 1988, 1991), and older children consider more factors when making their judgments. For example, when asked to decide whether a piece sounds happy or sad, 6- to 8-year-olds consider tempo (fast or slow) and mode (major or minor), but 5-year-olds are influenced only by tempo (Dalla Bella et al., 2001).

Music also evokes physical responses that accompany emotions such as tears, tingles down the spine (or “chills”), and changes in heart rate, breathing rate, blood pressure, and skin conductance levels (Goldstein, 1980; Krumhansl, 1997; Panksepp, 1995; Sloboda, 1991, 1992; Thayer & Levenson, 1983). Adults report that they frequently listen to music in order to
change their emotional state (Sloboda, 1992), which implies that these physiological changes are accompanied by changes in phenomenological experience. Responses to paper-and-pencil questionnaires confirm that musical properties such as tempo and mode affect listeners’ self-reported mood and arousal levels (Husain et al., 2002). In short, a large body of research confirms that listeners decode the intended emotional content of a musical piece, and that they respond emotionally to music.

Emotion and Speech Prosody
Speech prosody refers to the musical aspects of speech, including its melody (intonation) and its rhythm (stress and timing). Prosody is often used to convey a speaker’s emotions (Frick, 1985; Juslin & Laukka, 2001, 2003), a connection that was noted by Darwin (1872). Since then, theorists have suggested that prosody emerges from the prelinguistic use of pitch to signal emotion (Bolinger, 1978, 1986). Similar prosodic cues are used across cultures to convey emotions (Bolinger, 1978), such that sensitivity to particular meanings does not depend on verbal comprehension. Distinct patterns of vocal cues signal specific emotions as well as the intensity with which emotions are communicated (Juslin & Laukka, 2001). Prosodic patterns signaling happiness, sadness, anger, and fear are decoded well above chance levels, with anger and sadness decoded more reliably than happiness and fear (Banse & Scherer, 1996; Johnstone & Scherer, 2000). In some stimulus contexts, however, fear (Apple & Hecht, 1982) and happiness (Johnson, Emde, Scherer, & Klinnert, 1986; Juslin & Laukka, 2001) are decoded as well as anger and sadness. Happiness is associated with rapid tempo, high pitch, large pitch range, and bright voice quality; sadness is associated with slow tempo, low pitch, narrow pitch range, and soft voice quality; anger is associated with fast tempo, high pitch, wide pitch range, and rising pitch contours; and fear is associated with fast tempo, high pitch, wide pitch range, large pitch variability, and varied loudness (Scherer, 1986). Speakers’ use of prosody to convey emotions is particularly obvious in speech directed toward infants and young children, which is consistent with Bolinger’s (1978, 1986) suggestion that prosody has its roots in prelinguistic speech. Indeed, prosodic cues are the only way to convey emotion when speaking to young infants. Compared with speech directed toward adults, infant-directed speech has higher pitch, larger

pitch excursions, slower rate, shorter utterances, and longer pauses (e.g., Ferguson, 1964; Fernald & Mazzie, 1991). Such modifications are made across cultures and genders (e.g., Fernald & Simon, 1984; Fernald et al., 1989; Grieser & Kuhl, 1988). Infant-directed speech facilitates the language-acquisition process by highlighting important words and linguistic structures (e.g., Fernald & Mazzie, 1991; Kemler Nelson, Hirsh-Pasek, Jusczyk, & Wright Cassidy, 1989), by hyperarticulating vowels (Burnham, Kitamura, & Vollmer-Conna, 2002), and by eliciting and maintaining young listeners’ attention (Fernald, 1991; Werker & McLeod, 1989). It also promotes emotional bonding between the speaker and the listener (Trainor, Austin, & Desjardins, 2000).

Music and Speech Prosody
Emotions are expressed in music and speech through variations in rate, amplitude, pitch, timbre, and stress (Juslin & Laukka, 2003; Scherer, 1995). Evolutionary theories suggest that musical behavior evolved in conjunction with—or as an adaptation of—vocal communication (Brown, 2000; Dissanayake, 2000; Joseph, 1988; Pinker, 1995). For example, Dissanayake (2000) proposed that vocal interactions between mothers and infants provide the foundation for a system of emotional communication that is used in music and other arts. In her view, music, speech prosody, and facial expression share a common ancestry as temporal-spatial patterns used in affiliative interactions between mothers and infants. Pitch contour and rhythmic grouping are critical dimensions in both music and prosody (Patel, Peretz, Tramo, & Labrecque, 1998). In music, pitch and temporal relations define musical tunes, which retain their identities across transformations in pitch level and tempo. In speech, pitch variation provides an important source of semantic and emotional information, and temporal properties help listeners determine boundaries between words and phrases. Descending pitch contours and syllables or notes of long duration typically mark ends of phrases in speech (Price, Ostendorf, Shattuck-Hufnagel, & Fong, 1991) and in music (Narmour, 1990). Even young infants parse speech (Hirsh-Pasek et al., 1987) and music (Jusczyk & Krumhansl, 1993) using this information. Musical pitch and speech intonation are also processed preferentially by the right hemisphere, whereas rhythms in music and speech are less clearly lateralized (e.g., McKinnon & Schellenberg, 1997; Peretz, 2001; Snow, 2000; Van Lancker & Sidtis, 1992). Moreover,
music and speech share neural resources for combining their basic elements (i.e., musical tones and words, respectively) into rule-governed sequences (Patel, 2003).

The Present Study
Parallels between music and speech prosody raise the possibility that skills acquired through training in music lead to enhanced sensitivity to emotions conveyed by prosody. In other words, music lessons might nurture a basic skill of emotional intelligence by engaging, developing, and refining processes used for perceiving emotions expressed musically, which, in turn, could have consequences for perceiving emotions expressed in speech. Such effects would represent instances of transfer between training in music and a domain that is similar on several dimensions. Although effects of musical training on decoding emotions in music have been inconsistent (Gabrielsson & Juslin, 2003), two studies provided preliminary evidence consistent with our hypothesis that music lessons facilitate the ability to decode emotions conveyed by speech prosody. Nilsonne and Sundberg (1985) presented music and law students with tone sequences consisting of the fundamental frequencies of voice samples (i.e., no semantic cues) recorded from depressed and nondepressed individuals. Music students were superior at identifying the emotional state of the speakers. The authors suggested that, “The mastery of the expression of emotional information in music, which is a prerequisite for a competent musician, would then correspond to an enhanced ability to decode emotional information in speech” (p. 515). Thompson, Schellenberg, and Husain (2003) reported findings consistent with this perspective. In one experiment, musically trained and untrained participants heard an utterance spoken in a “happy” manner, followed by a sequence of tones that either matched or mismatched the prosody (pitch and duration) of the utterances. The musically trained participants were better at judging whether the utterances and tone sequences matched. In a second experiment, listeners heard “happy” or “sad” sounding utterances spoken in a foreign language (Spanish). Each utterance was followed by a low-pass filtered version of an utterance spoken with the same emotional tone. Again, musically trained listeners outperformed their untrained counterparts at judging whether the filtered version was derived from the preceding utterance. In the present investigation, we conducted three experiments that examined whether training in music

is predictive of increased sensitivity to emotions conveyed by prosody. In Experiments 1 and 2, we attempted to replicate and extend the findings of Nilsonne and Sundberg (1985) and Thompson et al. (in press). Adults were tested on their ability to decode the emotional meaning of spoken utterances or tone sequences that mimicked the prosody of those utterances. Some of the adults had extensive training in music that began in childhood; others had no music lessons. We predicted that adults who received musical training as children would be better than untrained adults at identifying emotions conveyed by the utterances and tone sequences. In Experiment 3, 6-year-olds were assigned to 1 year of keyboard, singing, drama, or no lessons and tested subsequently on their sensitivity to the emotions conveyed by spoken utterances and by tone sequences. There were two experimental groups (i.e., keyboard and singing) and two control groups (i.e., drama and no lessons). The ability of children in the no-lessons group to decode prosody should represent that of the average child. By contrast, vocal expression of emotion is central to drama training. Thus, children who received 1 year of drama lessons should be better than average at identifying emotions conveyed through prosodic cues. We also expected that training in music would lead to above-average abilities at decoding prosody in speech. It was unclear whether the music groups would perform as well as the drama group, or whether the singing group would perform as well as the keyboard group. On the one hand, singing lessons emphasize the use of the voice and might facilitate the ability to decode the emotional content of vocal utterances in general, and more so than keyboard lessons. On the other hand, singing lessons emphasize controlled use of the voice to produce a sequence of discrete pitches. This nonprosodic use of the voice could interfere with decoding prosodic expressions of emotion.

Experiment 1
Musically trained and untrained adults were assessed on their ability to decode the emotions conveyed by tone sequences that mimicked the pitch and temporal structure of spoken phrases. Unlike the “matching” judgments used by Thompson et al. (2003), our task required listeners to identify the corresponding emotion. We focused on pitch and temporal structures because they are the most musically relevant dimensions of prosody. To illustrate, a familiar song such as Happy Birthday can be identified regardless of timbre (e.g., sung, performed on the piano) or amplitude (e.g., soft, loud), provided the pitch and temporal relations among tones conform to those that define the tune.

Our review of the literature motivated two predictions. The primary prediction was that musically trained participants would outperform their untrained counterparts. We also expected that identification accuracy would differ across the four emotions. As noted, sadness and anger are typically easier to decode than happiness and fear for spoken stimuli, whereas sadness and happiness are easier to decode than fear and anger for musical stimuli. Because our tone sequences combined prosodic and musical features, we expected that identification accuracy would be relatively high for sad sequences but relatively low for fearful sequences.

Method
Participants. Twenty undergraduates (12 men and 8 women) participated in the study. They were recruited from introductory psychology classes and received course credit for participating. The musically trained group consisted of 4 women and 5 men who had at least 8 years of formal music lessons (M = 13.3 years, SD = 5.5 years). All of them began taking music lessons during childhood. On average, they had 1.7 years of college (SD = 0.6 years) and a mean grade point average (GPA) of 3.0 (SD = 0.6). The untrained group consisted of 4 women and 7 men. Ten had never taken music lessons; 1 had 1 year of lessons. The average participant in the untrained group had 2.3 years of college (SD = 0.7 years) and a GPA of 2.6 (SD = 0.7). The two groups did not differ in age, years of education, or GPA (ps > .2).

Apparatus. Stimuli were presented to participants under computer control (Macintosh G4). A customized software program created with Hypercard controlled the presentation of stimuli and the recording of responses. Participants listened to the stimuli over Sennheiser HD 480 (Sennheiser Communications, Tullamore, Ireland) headphones at a comfortable volume while sitting in a sound-attenuating booth. The stimuli were presented with a flute timbre from the Roland 110 sound library (Roland Canada Music Ltd., Richmond, British Columbia).

Stimuli. Tone sequences were melodic analogues of spoken sentences. Specifically, they were constructed to mimic the prosody of spoken sentences included in the Name Emotional Prosody test of the Florida Affect Battery (Bowers, Blonder, & Heilman, 1991). The test consists of four semantically neutral
sentences (e.g., “The chairs are made of wood”) uttered by a female speaker in four different renditions, with each rendition conveying one of four emotions: happiness, sadness, fear, or anger. Each of the 16 sentences (4 sentences × 4 emotions) was transformed into an analogous tone sequence in a manner similar to that of Patel et al. (1998). Tone “syllables” were created by calculating the modal pitch and duration of each spoken syllable. The modal pitch was established by locating the longest duration of pitch stability in the spoken syllable, ignoring variations up to 10 Hz (cycles/second) in frequency. When such variation was present, the mode was taken as the median of the frequencies. The selected pitch was verified by a musically trained assistant who compared the tone syllable with the spoken syllable presented in isolation. Tone syllables were combined to create a tone-sequence counterpart for each of the 16 sentences. Unlike natural speech, each tone syllable had equal amplitude. Moreover, spoken language has pitch glides (i.e., continuous transitions), whereas the tone sequences had discrete steps, although these did not conform to any musical scale. Discrete pitches were used to convey the syllabic segments that give spoken utterances their essential rhythmic character (i.e., segmentation in speech is conveyed poorly by continuous pitch changes when consonants are absent). In short, the translation from spoken sentences to tone sequences isolated pitch and timing dimensions of prosody, which are known to be important cues to the emotions conveyed by speakers (e.g., Juslin & Laukka, 2001, 2003). Table 1 (uppermost section) provides duration and pitch information for tone sequences used in each of the four emotion categories. The table confirms that these cues were typical of the intended emotions. For example, “happy” sequences were relatively quick (short duration) with a wide pitch (frequency) range, whereas “sad” sequences were slow with low pitch and a narrow pitch range.

Procedure. Participants were tested individually. They were told that they would hear a total of 16 tone sequences and that for each they should choose the emotion conveyed from a set of four alternatives. They were also told that each tone sequence mirrored the pitch and temporal information of a spoken phrase, and that the original phrases were spoken in a way that conveyed a happy, sad, fearful, or angry emotion. Before the test session began, practice trials were allowed until participants understood the task. The practice trials were drawn at random from the same set of stimuli as the test trials. Participants typically completed two or three practice trials before initiating the 16 test trials.

Table 1
Descriptive Statistics for the Tone Sequences Used as Stimuli in Experiments 1 (English) and 2 (English and Tagalog)

Variable                    Happy     Sad       Fearful   Angry
English
  Duration (s)              1.48      1.79      1.51      1.76
  Highest frequency (Hz)    462.14    267.19    290.98    244.40
  Lowest frequency (Hz)     178.73    184.99    240.18    130.62
Tagalog
  Duration (s)              1.55      1.85      1.70      1.31
  Highest frequency (Hz)    405.75    203.82    296.96    332.13
  Lowest frequency (Hz)     263.36    162.02    219.06    221.46

Note. The values are averaged across the four tone sequences used in each emotion category. Duration values are related inversely to the speed (tempo) of the sequences. Frequency values correspond to perceived pitch (higher frequency = higher perceived pitch).
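
To make the stimulus construction concrete, the mapping from a spoken syllable to a tone syllable (modal pitch plus duration) described above can be sketched in a few lines of code. This is only an illustrative sketch, not the authors' software: the 10-ms frame rate, the data layout (one list of F0 estimates per syllable, with unvoiced frames as None), and all function names are assumptions.

```python
# Illustrative sketch of the tone-syllable construction described above.
# Assumed input: one list of F0 estimates (Hz) per spoken syllable, sampled at a
# fixed frame rate, with unvoiced frames given as None. Not the authors' code.
from statistics import median
from typing import List, Optional, Tuple

FRAME_SEC = 0.01  # assumed 10-ms analysis frames


def modal_pitch(f0_frames: List[Optional[float]], tol_hz: float = 10.0) -> float:
    """Median frequency of the longest stretch whose variation stays within tol_hz."""
    best: List[float] = []
    run: List[float] = []
    for f0 in f0_frames:
        if f0 is None:          # an unvoiced frame breaks the stable stretch
            run = []
            continue
        run.append(f0)
        while max(run) - min(run) > tol_hz:   # shrink from the left until stable
            run.pop(0)
        if len(run) > len(best):
            best = list(run)
    if not best:
        raise ValueError("syllable contains no voiced frames")
    return median(best)


def tone_syllable(f0_frames: List[Optional[float]]) -> Tuple[float, float]:
    """Map one spoken syllable onto a (frequency in Hz, duration in s) tone syllable."""
    return modal_pitch(f0_frames), len(f0_frames) * FRAME_SEC


# A sentence becomes a sequence of tone syllables, one per spoken syllable:
toy_syllables = [
    [220.0, 221.5, 219.0, 230.0, None, 228.0],  # toy F0 tracks, not real data
    [180.0, 182.0, 181.0, 179.5],
]
print([tone_syllable(s) for s in toy_syllables])  # roughly [(220.0, 0.06), (180.5, 0.04)]
```

In the experiments, each resulting (frequency, duration) pair was then rendered at equal amplitude with the flute timbre noted in the Apparatus section.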

Results and Discussion
Each participant had four scores that represented the percentage of correct responses in each of the four conditions. Each condition corresponded to one of the four target emotions. The data are illustrated in Figure 1. The main analysis consisted of a 2 × 4 mixed-design analysis of variance (ANOVA), with musical training (trained or untrained) as the between-subjects factor and emotion (happiness, sadness, fear, or anger) as the within-subjects factor. The main effect of training was reliable, F(1, 18) = 8.26, p = .010. In line with our predictions, the musically trained group (M = 45% correct) performed better than the untrained group (M = 29%). Average levels of performance exceeded chance levels (25% correct) for the musically trained participants, t(8) = 4.72, p = .001, but not for their untrained counterparts. The main effect of emotion was also reliable, F(3, 54) = 2.82, p = .048. As expected, performance was best in the sad condition (M = 49% correct), worst in the fearful condition (M = 26%), and intermediate in the happy and angry conditions (Ms = 42% and 28%, respectively). Performance in the sad condition exceeded performance in the other three conditions, F(1, 54) = 4.75, p = .034, whereas performance in the fearful condition was marginally worse than performance in the other three conditions, F(1, 54) = 2.99, p = .090. The lack of a two-way interaction between musical training and emotion indicates that differences among emotions were similar for the two groups of participants (see Figure 1).

Figure 1. Mean levels of performance in Experiment 1 (adults) as a function of musical training and emotion. Error bars represent standard errors.

In summary, the findings are consistent with the hypothesis that music lessons are positively associated with decoding speech prosody. Previous findings indicate that musicians exhibit advantages in detecting whether a prosodic pattern came from a depressed person (Nilsonne & Sundberg, 1985), and in extracting the pitch and duration patterns from happy and sad sounding speech (Thompson et al., 2003). The present findings reveal an advantage for musically trained adults in identifying emotions conveyed by prosodic cues. The results also corroborate previous indications that sadness is expressed with relatively distinctive and salient cues whether it is conveyed musically or prosodically (Bunt & Pavlicevic, 2001; Sloboda & Juslin, 2001; Terwogt & Van Grinsven, 1988, 1991).

An alternative interpretation of these data is that trained listeners performed better than untrained listeners because the stimuli were tone sequences. Although the sequences were not tonal melodies (i.e., in
a recognizable key) and did not sound like music, musicians are known to be better than nonmusicians at processing unconventional musical sequences (e.g., Lynch, Eilers, Oller, Urbano, & Wilson, 1991). Moreover, the tone sequences may have seemed particularly odd for untrained listeners, which would make it difficult for them to distinguish among the sequences or to perceive them as abstract representations of speech prosody. Experiment 2 was designed to address these possibilities.
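
For readers who want to reproduce this kind of analysis, the 2 × 4 mixed-design ANOVA and the comparison against the 25% chance level reported above can be run with standard tools. The sketch below is a minimal illustration, assuming the pingouin and SciPy packages and a long-format data file with assumed column names; it is not the analysis code used in the study.

```python
# Minimal sketch of the Experiment 1 analysis: a 2 (training) x 4 (emotion)
# mixed-design ANOVA on percent-correct scores, plus one-sample t tests against
# the 25% chance level. File name and column names are assumptions.
import pandas as pd
import pingouin as pg
from scipy import stats

# Long format: one row per participant per emotion condition.
scores = pd.read_csv("exp1_scores.csv")  # columns: participant, training, emotion, pct_correct

aov = pg.mixed_anova(
    data=scores,
    dv="pct_correct",
    within="emotion",      # happy, sad, fearful, angry
    between="training",    # trained vs. untrained
    subject="participant",
)
print(aov.round(3))

# Does each group exceed the 25% chance level of the four-alternative task?
for group, sub in scores.groupby("training"):
    mean_per_participant = sub.groupby("participant")["pct_correct"].mean()
    t, p = stats.ttest_1samp(mean_per_participant, 25.0)
    print(f"{group}: t = {t:.2f}, p = {p:.3f}")
```

Follow-up contrasts among the emotion conditions (such as the sad-versus-other comparison reported above) would then be computed on the same long-format scores.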

Experiment 2
In Experiment 2, listeners judged the emotional meaning of tone sequences and spoken utterances. The rationale was as follows: If musically trained listeners in Experiment 1 demonstrated enhanced performance merely because they are skilled at processing tone sequences, then no advantage of training should be observed for spoken utterances. We also addressed the possibility that musically untrained listeners performed poorly in Experiment 1 because they could not imagine how tone sequences represent elements of speech prosody. Specifically, listeners heard the spoken utterances before the tone sequences to highlight the connection between the two types of stimuli. Although this procedure should lead to improvements in performance among untrained listeners, we still expected performance to be better among trained listeners. Another aim was to evaluate sensitivity to speech prosody in a foreign language. To this end, we presented listeners not only with English speech but also with speech samples from a language that was not understood by any of our participants: Tagalog. (Also called Pilipino, Tagalog is spoken by roughly 25% of people in the Philippines, an Asian country of 7,100 islands and islets off the southeast coast of mainland China.) Finally, we investigated whether musically trained listeners might outperform their untrained counterparts on our experimental tasks because they have superior cognitive abilities. This interpretation is consistent with recent evidence of effects of music lessons on IQ (Schellenberg, in press). Although we controlled for GPA in Experiment 1, we did not administer standard measures of intelligence. In the present experiment, participants completed a measure of fluid intelligence—the short form of the Raven’s Advanced Progressive Matrices (Bors & Stokes, 1998). If group differences in intelligence are driving the observed effects, individual differences in intelligence should predict individual differences in performance accuracy.

Trained and untrained adults were asked to identify the emotions conveyed by the prosody of spoken utterances as well as by tone sequences derived from those utterances. All of the utterances were semantically neutral. Some were in English; others were in Tagalog. We predicted that musically trained adults would have better decoding skills than musically untrained adults. We also expected that familiarity would lead to better performance with English compared with Tagalog stimuli, and with spoken utterances compared with tone sequences. On the basis of our review of the literature, we predicted that performance would be better for sad or angry sounding spoken utterances than for utterances that were happy or fearful sounding. For tone sequences (which have prosodic and musical properties), we predicted that decoding sadness would be particularly accurate, whereas decoding fear would be particularly inaccurate (as in Experiment 1).

Method
Participants. Fifty-six adults from a university community (18 men and 38 women) participated in the study. Some participants were recruited from introductory psychology classes and received course credit for participating. Others were recruited from the Faculty of Music and received token remuneration. The musically trained group consisted of 24 women and 4 men who had at least 8 years of music lessons (M = 12.5 years, SD = 3.5 years). All of them began taking music lessons during childhood. The untrained group consisted of 14 women and 14 men. None had ever taken private music lessons.

Apparatus. PsyScope software (Cohen, MacWhinney, Flatt, & Provost, 1993) installed on a Macintosh computer (iMac) was used to create a customized program that controlled presentation of stimuli and collection of responses. The auditory stimuli were presented through Telephonics TDH-39P headphones (Telephonics Corporation, Farmingdale, NY) at a comfortable listening level. As in Experiment 1, participants were tested in a sound-attenuating booth.

Stimuli. The stimuli consisted of 20 sentences uttered in English, 20 sentences uttered in Tagalog, and 32 tone sequences derived from the utterances. English sentences were taken from the speech prosody section of the Florida Affect Battery. Tagalog sentences were a subset of sentences used by Balkwill, Thompson, and Schubert (2002). In both languages, four sentences with semantically neutral content were uttered by a native female speaker in five different ways, corresponding to the four target emotions
(happy, sad, fearful, and angry) plus one with neutral emotion. Utterances with neutral emotion were included to avoid ceiling levels of performance on the emotion-identification task. Tone sequences were derived from utterances spoken in a happy, sad, fearful, or angry manner using the procedure described in Experiment 1. Tone sequences were not derived from emotionally neutral utterances because their inclusion could have made the emotion-identification task formidable, particularly for untrained listeners. The 16 tone sequences derived from English sentences were identical to those used in Experiment 1 (see Table 1, uppermost section). Sixteen additional tone sequences were derived from Tagalog sentences (see Table 1, lowermost section). In summary, there were 40 spoken utterances (2 languages × 4 sentences × 5 emotions) and 32 tone sequences (2 languages × 4 sentences × 4 emotions).

Procedure. Participants were tested individually in the sound-attenuating booth. They were told that they would hear 40 spoken utterances and 32 tone sequences and that for each they should select the emotion conveyed from the set provided. They were also told that each tone sequence mirrored the pitch and temporal information of a spoken phrase, and that the original phrases were spoken in a way that conveyed a happy, sad, fearful, angry, or neutral emotion. In the first task of the test phase, listeners heard the 40 spoken utterances presented in random order. On each trial, they decided whether the speaker sounded happy, sad, angry, fearful, or emotionally neutral by clicking one of the options displayed on the computer screen. In the second task, listeners heard the 32 tone sequences derived from happy, sad, fearful, and angry sounding spoken utterances. Listeners were told that tone sequences were derived from spoken utterances
presented in the first task and they were encouraged to imagine that each was a sentence spoken by the computer. On each trial, they decided whether the tone sequence was derived from happy, sad, fearful, or angry speech. Before the first (spoken utterances) and second (tone sequences) tasks began, participants had two and four practice trials, respectively, drawn randomly from the same sets of stimuli used in the actual tasks. Following the test phase, participants were given a maximum of 20 min to complete the short form of the Raven’s Advanced Progressive Matrices (Bors & Stokes, 1998). On average, participants took approximately 10 min to complete the Raven’s test and 20 min to complete the entire procedure.

Results and Discussion
Each listener had five scores representing the percentage of correct responses for the five conditions with spoken stimuli, and four scores for the four conditions with tone sequences. Descriptive statistics are provided in Table 2.

Table 2
Mean Percentage of Correct Responses (and Standard Deviations) in Experiment 2

                        Musically trained              Musically untrained
Variable                English         Tagalog        English         Tagalog
Spoken utterance
  Happy                 96.4 (13.1)     58.9 (20.7)    97.3 (7.9)      63.4 (24.1)
  Sad                   87.5 (21.0)     77.7 (26.7)    83.9 (24.7)     70.5 (23.6)
  Fearful               73.2 (21.5)     70.5 (19.3)    67.9 (23.4)     47.3 (29.9)
  Angry                 82.2 (17.8)     96.4 (8.9)     90.2 (19.7)     100.0 (0.0)
  Neutral               85.7 (18.6)     92.0 (16.8)    81.3 (20.0)     76.8 (22.5)
Tone sequence
  Happy                 51.8 (31.1)     39.3 (25.9)    60.7 (27.6)     50.0 (24.5)
  Sad                   53.6 (23.3)     75.0 (23.6)    44.7 (23.9)     60.7 (26.7)
  Fearful               34.8 (22.9)     39.3 (24.0)    33.0 (25.5)     33.0 (24.6)
  Angry                 32.2 (27.1)     37.5 (24.1)    27.7 (25.8)     40.2 (25.8)

Performance levels for both trained and untrained participants were reliably higher than chance levels in all conditions (ps < .001). The primary analysis was a 2 × 2 × 2 × 4 mixed-design ANOVA, with one between-subjects factor (musical training) and three within-subjects factors: language (English or Tagalog), modality (spoken utterances or tone sequences), and emotion (happy, sad, fearful, or angry). Responses to neutral spoken stimuli were analyzed separately. Significant main effects revealed superior performance with the English stimuli (M = 64% correct) over the Tagalog stimuli (M = 60%), F(1, 54) = 5.60, p = .022; and for the spoken utterances (M = 79%) over the tone sequences (M = 45%),
F(1, 54) = 351.76, p < .001, as predicted. Moreover, a main effect of emotion confirmed that some emotions were decoded more easily than others, F(3, 162) = 26.48, p < .001. Across languages and modes, decoding accuracy was highest for stimuli conveying sadness (M = 69%) and lowest for stimuli conveying fear (M = 50%). A significant two-way interaction between language and modality, F(1, 54) = 32.37, p < .001, stemmed from superior performance for English with speech stimuli but not with tone sequences. In other words, unfamiliar and unintelligible Tagalog phonemes and syllables interfered with performance in the speech conditions. The two-way interaction between modality and emotion was also significant, F(3, 162) = 31.98, p < .001. This finding was a consequence of relatively good performance with angry sounding spoken utterances but relatively poor performance with angry sounding tone sequences. A two-way interaction between language and emotion, F(3, 162) = 26.27, p < .001, indicated that prosodic cues to emotion varied between the English and Tagalog stimuli. Because a single female speaker uttered all of the sentences in both languages, we hesitate to attribute this finding to cultural rather than to individual differences in using prosody to express emotion. Finally, a reliable three-way interaction between language, modality, and emotion, F(3, 162) = 6.51, p < .001, revealed that the two-way interaction between language and modality (i.e., a decrement in performance for Tagalog compared with English spoken conditions, noted above) was not evident for the angry sounding stimuli. In other words, the speaker who uttered the Tagalog sentences in an angry manner was relatively successful at conveying her intended emotion to our sample of English-speaking Canadians, and her unintelligible words did not interfere in these instances.

Effects of training were less straightforward than those observed in Experiment 1 but generally consistent with our hypothesis. The main effect of musical training was not significant, but there was a significant interaction between emotion and musical training, F(3, 162) = 5.77, p = .001. Musical training did not interact with any other variable or combination of variables. Follow-up tests of the interaction between emotion and musical training showed enhanced decoding accuracy among trained adults for some emotions but not for others. Specifically, we examined each of the four emotion categories separately with a three-way (Musical Training × Language × Modality) mixed-design ANOVA.
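
As a concrete illustration of how condition scores like those in Table 2 (and the cell means entering these ANOVAs) can be derived from raw responses, the sketch below aggregates trial-level accuracy with pandas. The file name and column names are assumptions, not the authors' materials.

```python
# Sketch: compute Table 2-style cell means and SDs from trial-level responses.
# Assumed columns: participant, training, language, modality, emotion, correct (0/1).
import pandas as pd

trials = pd.read_csv("exp2_trials.csv")

# Percent correct per participant within each cell of the design...
cells = ["training", "language", "modality", "emotion"]
per_participant = (
    trials.groupby(cells + ["participant"])["correct"]
    .mean()
    .mul(100)
)

# ...then the mean and SD across participants, as reported in Table 2.
table2 = per_participant.groupby(cells).agg(["mean", "std"]).round(1)
print(table2)
```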

Figure 2 displays mean scores (percent correct) of trained and untrained listeners for each of the four emotions as well as for the neutral emotion condition. A significant performance advantage for the musically trained group was evident for the sad sounding stimuli, F(1, 54) = 5.05, p = .029, and for the fearful sounding stimuli, F(1, 54) = 5.94, p = .018. The groups performed similarly in the angry conditions (F < 1). Interestingly, the untrained group performed marginally better with the happy sounding stimuli, F(1, 54) = 3.88, p = .054. Although we do not have data that address this issue (data were recorded as correct or incorrect), one possibility is that untrained listeners had a bias to respond “happy” that would have inflated performance with happy sounding stimuli (see also Juslin & Laukka, 2001).

Figure 2. Mean levels of performance in Experiment 2 (adults) as a function of musical training and emotion. Error bars represent standard errors. Asterisks indicate that the effect of music training is significant at p < .05. Means for happy, sad, fearful, and angry emotions are averaged across English and Tagalog spoken utterances and tone sequences. Means for the neutral emotion are averaged across English and Tagalog spoken utterances.

We analyzed response patterns for spoken utterances with neutral emotion separately with a 2 × 2 (Musical Training × Language) mixed-design ANOVA. As with the sad and fearful sounding stimuli, the neutral sounding utterances were identified better by musically trained (M = 89% correct) than by untrained (M = 79%) listeners, F(1, 54) = 5.64, p = .021. There was no main effect of language, and language did not interact with musical training.

Although we observed benefits of musical training for some emotions but not for others, it is remarkable that enhanced performance levels were evident not only for tone sequences and for English speech but
also for utterances spoken in a language that our participants did not understand (Tagalog). Indeed, a separate analysis of responses to Tagalog spoken utterances revealed a significant main effect of training, F(1, 54) = 7.39, p = .009, with musically trained adults (M = 79% correct) outperforming untrained adults (M = 72%). To the best of our knowledge, this finding is the first to suggest that musical training is associated with enhanced sensitivity to emotions conveyed by prosody in a foreign language.

A final set of analyses examined scores on the short form of the Raven’s Advanced Progressive Matrices (Bors & Stokes, 1998). Although the musically trained group had higher scores, t(54) = 2.88, p = .006, as one might expect (Schellenberg, in press), these scores were not correlated with identification accuracy for English utterances (p > .8), Tagalog utterances (p > .2), English tone sequences (p > .9), or Tagalog tone sequences (p > .5). In fact, the association was negative (but nonsignificant) in one case. In short, it is highly unlikely that differences in fluid intelligence between musically trained and untrained adults were the source of differential responding on our emotion-identification tasks.

The results of Experiments 1 and 2 suggest that a personal history of music lessons in the childhood and teenage years may enhance one’s ability to decode prosody in adulthood. As with any quasi-experiment, however, it is impossible to determine the direction of causation. Nonetheless, the results cannot be explained by differences in years of education, GPA, or IQ. It is possible, however, that individuals with a naturally keen sensitivity to emotions expressed by prosody gravitate toward music lessons in their younger years, perhaps because they find the lessons more rewarding compared with their peers. In Experiment 3, we clarified this issue by adopting an experimental design. Specifically, we examined whether 6-year-olds assigned randomly to 1 year of music lessons would show similar advantages in decoding speech prosody.
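
The fluid-intelligence check described above reduces to a set of simple correlations between Raven's scores and identification accuracy. A minimal sketch, assuming per-participant accuracy and Raven's scores are already tabulated under assumed column names:

```python
# Sketch: correlate Raven's scores with identification accuracy for each
# stimulus type. File name and column names are assumptions.
import pandas as pd
from scipy import stats

adults = pd.read_csv("exp2_participants.csv")
# assumed columns: participant, ravens, acc_english_speech, acc_tagalog_speech,
#                  acc_english_tones, acc_tagalog_tones

for col in ["acc_english_speech", "acc_tagalog_speech",
            "acc_english_tones", "acc_tagalog_tones"]:
    r, p = stats.pearsonr(adults["ravens"], adults[col])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")
```

Nonsignificant correlations here are what license the conclusion above that fluid intelligence did not drive the training effect.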

Experiment 3
We assigned 6-year-olds randomly to one of four conditions and tested their prosody-decoding abilities 1 year later when they were 7 years of age. In three of the conditions, children took weekly arts lessons in the intervening year, during which they studied keyboard, singing, or drama in small groups; children in the fourth condition received no lessons. Because using the voice to convey emphasis, surprise, and emotion was central to the drama lessons, we expected that these children would be better than the no-lessons group at decoding prosody. We contrasted the abilities of the music (keyboard and singing) groups with the no-lessons group and with the drama group. As in Experiment 2, the children were asked to make emotional judgments about the prosody of spoken utterances (English and Tagalog) in addition to making judgments about tone sequences. We predicted better performance with English compared with Tagalog stimuli, and with spoken utterances compared with tone sequences. We also predicted that the music groups would have better decoding skills than the no-lessons group. We had no predictions about whether the music groups would match the drama group in decoding accuracy, or whether the two music groups (keyboard and singing) would show equivalent performance levels. As noted earlier, performance could be enhanced or impaired for the singing group.

The particular age group (6 years when the lessons began) was chosen on the basis of several factors. The children needed to be old enough so that the lessons could be relatively rigorous, yet young enough so that the experience would have maximum impact on development. In music conservatories, 6-year-olds are considered mature enough to begin serious instruction in music. Six-year-olds are also more sensitive than younger children at decoding the emotions conveyed by tunes that conform to the rules of their musical culture (Dalla Bella et al., 2001). Evidence of reduced plasticity for children slightly older is provided by studies of absolute pitch. Children who take music lessons before the age of 7 are more likely than other children to have this rare ability, which implies a critical period for its acquisition (Takeuchi & Hulse, 1993).

Compared with Experiments 1 and 2, the task was simplified for children by reducing the number of alternatives in the forced-choice response to 2 (from 4 or 5). On some trials, children heard either a happy or a sad sounding utterance or tone sequence. On other trials, the stimuli were fearful or angry sounding. In general, we predicted superior performance in the happy–sad conditions compared with the fearful–angry conditions.

Method
Participants. Forty-three 7-year-olds participated in the study (11 boys and 32 girls). Thirty had recently completed 1 year of formal training in keyboard (n = 10), singing (n = 11), or drama (n = 9). A fourth
no-lessons group had no training in music or drama (n = 13). The children were assigned at random to one of the four conditions as they entered 1st grade (at 6 years of age). The lessons were provided free of charge. Children in the no-lessons group received training the next year.

The sample was recruited from a larger group of 144 families who volunteered to participate in a large-scale study designed to investigate whether arts lessons are predictive of intellectual development (Schellenberg, in press). The children came from families in the local area who responded to a newspaper advertisement for “free arts lessons.” All of the families had a keyboard with full-sized keys, but none of the children had prior music or drama lessons. The original study consisted of a pretest (including the Wechsler Intelligence Scale for Children—Third Edition [WISC–III]; Wechsler, 1991; the Kaufman Test of Educational Achievement; Kaufman & Kaufman, 1985; and the Parent Rating Scale of the Behavioral Assessment System for Children; Reynolds & Kamphaus, 1992), the arts lessons (except for children in the no-lessons group), and a posttest (all tests readministered). Because the lessons were taught outside the laboratory (at the conservatory), all four groups spent an equivalent amount of time in the laboratory prior to the present study. Table 3 provides descriptive statistics for scores on the pretest measures. In each case, there were no differences among groups. After the posttest, families were invited to participate in the present study. Children in the present sample were tested in the summer months between 1st and 2nd grade. Each child received a gift certificate for participating.

Training. The children received weekly lessons at the Mississauga location of the Royal Conservatory of
Music (Toronto, Canada). The lessons were approximately 45 min in length, taught to groups of six children. The instructors (two each for keyboard, singing, and drama) were professional female teachers who were Conservatory affiliates or graduates. The keyboard lessons were designed by teachers at the Conservatory and consisted of traditional piano-training approaches using electronic keyboards. The children studied sight-reading, music notation, fingering patterns, playing from memory, and clapping rhythms. Children in the singing group received training in the Kodály method (Choksy, 1999), an intensive musical program that involves improvisation as well as singing, playing, and dancing to simple tunes and folk melodies. Children in the drama group studied simple scripts for plays, intonation, memorization of lines, staging, and acting. In all groups, children were expected to practice at home between lessons.

Apparatus. The apparatus was identical to Experiment 2, except that children responded by pressing one of two buttons on a button box connected to the computer.

Stimuli. The stimuli were identical to those of Experiment 2, except the spoken utterances with neutral prosody were excluded. Hence, there were 32 spoken sentences (2 languages × 4 sentences × 4 emotions) and 32 corresponding tone sequences.

Procedure. Children were tested individually by an assistant who was blind to group assignment. They were informed that they would participate in a series of short tasks and that each would require eight responses. For each task, they were told to choose between two answers and shown how to respond. In two tasks, children heard eight happy and sad sentences presented in random order, once in English

Table 3
Means (and Standard Deviations) and Group Comparisons on the Pretest Measures in Experiment 3

           Group
Measure    Keyboard    Singing     Drama       No lessons    F(3, 39)    p
VIQ        106 (12)    107 (11)    108 (10)    108 (11)
