Music surrounds us--and we wouldn't have it any other way. An
exhilarating orchestral crescendo can bring tears to our eyes and send
shivers down our spines. Background swells add emotive punch to movies
and TV shows. Organists at ballgames bring us together, cheering, to
our feet. Parents croon soothingly to infants.
And our fondness has deep roots: we have been making music since
the dawn of culture. More than 30,000 years ago early humans were
already playing bone flutes, percussive instruments and jaw harps--and
all known societies throughout the world have had music. Indeed, our
appreciation appears to be innate. Infants as young as two months will
turn toward consonant, or pleasant, sounds and away from dissonant
ones. And when a symphony's denouement gives us delicious chills, the
same pleasure centers of the brain light up as they do when we eat
chocolate, have sex or take cocaine.
Therein lies an intriguing biological mystery: Why is
music--universally beloved and uniquely powerful in its ability to
wring emotions--so pervasive and important to us? Could its emergence
have enhanced human survival somehow, such as by aiding courtship, as
Geoffrey F. Miller of the University of New Mexico has proposed? Or
did it originally help us by promoting social cohesion in groups that
had grown too large for grooming, as suggested by Robin M. Dunbar of
the University of Liverpool? On the other hand, to use the words of
Harvard University's Steven Pinker, is music just "auditory
cheesecake"--a happy accident of evolution that happens to tickle
the brain's fancy?
Neuroscientists don't yet have the ultimate answers. But in recent
years we have begun to gain a firmer understanding of where and how
music is processed in the brain, which should lay a foundation for
answering evolutionary questions. Collectively, studies of patients
with brain injuries and imaging of healthy individuals have
unexpectedly uncovered no specialized brain "center" for
music. Rather music engages many areas distributed throughout the
brain, including those that are normally involved in other kinds of
cognition. The active areas vary with the person's individual
experiences and musical training. The ear has the fewest sensory cells
of any sensory organ--3,500 inner hair cells occupy the ear versus 100
million photoreceptors in the eye. Yet our mental response to music is
remarkably adaptable; even a little study can "retune" the
way the brain handles musical inputs.
Inner Songs
Until the advent of modern imaging techniques, scientists gleaned
insights about the brain's inner musical workings mainly by studying
patients--including famous composers--who had experienced brain
deficits as a result of injury, stroke or other ailments. For example,
in 1933 French composer Maurice Ravel began to exhibit symptoms of
what might have been focal cerebral degeneration, a disorder in which
discrete areas of brain tissue atrophy. His conceptual abilities
remained intact--he could still hear and remember his old compositions
and play scales. But he could not write music. Speaking of his
proposed opera Jeanne d'Arc, Ravel confided to a friend,
"...this opera is here, in my head. I hear it, but I will never
write it. It's over. I can no longer write my music." Ravel died
four years later, following an unsuccessful neurosurgical procedure.
The case lent credence to the idea that the brain might not have a
specific center for music.
The experience of another composer additionally suggested that
music and speech were processed independently. After suffering a
stroke in 1953, Vissarion Shebalin, a Russian composer, could no
longer talk or understand speech, yet he retained the ability to write
music until his death 10 years later. Thus, the supposition of
independent processing appears to be true, although more recent work
has yielded a more nuanced understanding, relating to two of the
features that music and language share: both are a means of
communication, and each has a syntax, a set of rules that govern the
proper combination of elements (notes and words, respectively).
According to Aniruddh D. Patel of the Neurosciences Institute in San
Diego, imaging findings suggest that a region in the frontal lobe
enables proper construction of the syntax of both music and language,
whereas other parts of the brain handle related aspects of language
and music processing.
Imaging studies have also given us a fairly fine-grained picture of
the brain's responses to music. These results make the most sense when
placed in the context of how the ear conveys sounds in general to the
brain. Like other sensory systems, the one for hearing is arranged
hierarchically, consisting of a string of neural processing stations
from the ear to the highest level, the auditory cortex. The processing
of sounds, such as musical tones, begins with the inner ear (cochlea),
which sorts complex sounds produced by, say, a violin, into their
constituent elementary frequencies. The cochlea then transmits this
information along separately tuned fibers of the auditory nerve as
trains of neural discharges. Eventually these trains reach the
auditory cortex in the temporal lobe. Different cells in the auditory
system of the brain respond best to certain frequencies; neighboring
cells have overlapping tuning curves so that there are no gaps.
Indeed, because neighboring cells are tuned to similar frequencies,
the auditory cortex forms a "frequency map" across its
surface.
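The cochlea's sorting of a complex sound into its elementary frequencies behaves much like a Fourier analysis, and the cortical frequency map is its downstream echo. Here is a minimal sketch in Python with NumPy; the three-harmonic "violin" tone is an invented stand-in, not a real instrument spectrum:

```python
import numpy as np

# A "violin-like" complex sound: a 440 Hz fundamental plus two harmonics.
# (Frequencies and amplitudes are illustrative, not a real violin spectrum.)
rate = 8000                        # samples per second
t = np.arange(rate) / rate         # one second of samples
sound = (1.0 * np.sin(2 * np.pi * 440 * t)
         + 0.5 * np.sin(2 * np.pi * 880 * t)
         + 0.25 * np.sin(2 * np.pi * 1320 * t))

# A Fourier transform plays the cochlea's role here, sorting the
# mixture back into its constituent elementary frequencies.
spectrum = np.abs(np.fft.rfft(sound))
freqs = np.fft.rfftfreq(len(sound), d=1 / rate)
peaks = freqs[spectrum > spectrum.max() / 10]
print(peaks)   # -> the three component frequencies: 440, 880 and 1320 Hz
```

Each auditory nerve fiber, like each bin of the transform, carries a narrow band of frequencies; the ordered layout of those bands is the "frequency map."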
The response to music per se, though, is more complicated. Music
consists of a sequence of tones, and perception of it depends on
grasping the relationships between sounds. Many areas of the brain are
involved in processing the various components of music. Consider tone,
which encompasses both the frequencies and loudness of a sound. At one
time, investigators suspected that cells tuned to a specific frequency
always responded the same way when that frequency was detected.
But in the late 1980s Thomas M. McKenna and I, working in my
laboratory at the University of California at Irvine, raised doubts
about that notion when we studied contour, which is the pattern of
rising and falling pitches that is the basis for all melodies. We
constructed melodies consisting of different contours using the same
five tones and then recorded the responses of single neurons in the
auditory cortices of cats. We found that cell responses (the number of
discharges) varied with the contour. Responses depended on the
location of a given tone within a melody; a cell might fire more
vigorously when a tone was preceded by other tones than when it opened
the melody. Moreover, cells reacted differently to the same tone
when it was part of an ascending contour (low to high tones) than when
it was part of a descending or more complex one. These findings show
that the pattern of a melody matters: processing in the auditory
system is not like the simple relaying of sound in a telephone or
stereo system.
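The notion of contour used in these experiments can be made concrete: reduce a melody to the sequence of rises and falls between successive tones. A toy sketch in Python follows; the MIDI note numbers are illustrative, not the actual tones used in the cat study:

```python
def contour(pitches):
    """Reduce a melody to its contour: +1 for a rise, -1 for a fall,
    0 for a repeated tone."""
    return [(p2 > p1) - (p2 < p1) for p1, p2 in zip(pitches, pitches[1:])]

# Two melodies built from the same five tones (as MIDI note numbers)
# but arranged in different orders, as in the contour experiments:
ascending = [60, 62, 64, 67, 72]
reordered = [64, 60, 72, 62, 67]
print(contour(ascending))   # [1, 1, 1, 1]
print(contour(reordered))   # [-1, 1, -1, 1]
```

The same five tones yield different contours depending on their order, which is exactly the distinction the recorded neurons turned out to be sensitive to.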
Although most research has focused on melody, rhythm (the relative
lengths and spacing of notes), harmony (the relation of two or more
simultaneous tones) and timbre (the characteristic difference in sound
between two instruments playing the same tone) are also of interest.
Studies of rhythm have concluded that one hemisphere is more involved
than the other, although they disagree on which. The problem is that
different tasks and even different rhythmic stimuli can demand
different processing capacities. For example, the left temporal lobe
seems to process briefer stimuli than the right temporal lobe and so
would be more involved when the listener is trying to discern rhythm
while hearing briefer musical sounds.
The situation is clearer for harmony. Imaging studies of the
cerebral cortex find greater activation in the auditory regions of the
right temporal lobe when subjects are focusing on aspects of harmony.
Timbre also has been "assigned" a right temporal lobe
preference. Patients whose temporal lobe has been removed (such as to
eliminate seizures) show deficits in discriminating timbre if tissue
from the right, but not the left, hemisphere is excised. In addition,
the right temporal lobe becomes active in normal subjects when they
discriminate between different timbres.
Brain responses also depend on the experiences and training of the
listener. Even a little training can quickly alter the brain's
reactions. For instance, until about 10 years ago, scientists believed
that tuning was "fixed" for each cell in the auditory
cortex. Our studies on contour, however, made us suspect that cell
tuning might be altered during learning so that certain cells become
extra sensitive to sounds that attract attention and are stored in
memory.
To find out, Jon S. Bakin, Jean-Marc Edeline and I conducted a
series of experiments during the 1990s in which we asked whether the
basic organization of the auditory cortex changes when a subject
learns that a certain tone is somehow important. Our group first
presented guinea pigs with many different tones and recorded the
responses of various cells in the auditory cortex to determine which
tones produced the greatest responses. Next, we taught the subjects
that a specific, nonpreferred tone was important by making it a signal
for a mild foot shock. The guinea pigs learned this association within
a few minutes. We then determined the cells' responses again,
immediately after the training and at various times up to two months
later. The neurons' tuning preferences had shifted from their original
frequencies to that of the signal tone. Thus, learning retunes the
brain so that more cells respond best to behaviorally important
sounds. This cellular adjustment process extends across the cortex,
"editing" the frequency map so that a greater area of the
cortex processes important tones. One can tell which frequencies are
important to an animal simply by determining the frequency
organization of its auditory cortex.
The retuning was remarkably durable: it became stronger over time
without additional training and lasted for months. These findings
initiated a growing body of research indicating that one way the brain
stores the learned importance of a stimulus is by devoting more brain
cells to the processing of that stimulus. Although it is not possible
to record from single neurons in humans during learning, brain-imaging
studies can detect changes in the average magnitude of responses of
thousands of cells in various parts of the cortex. In 1998 Ray Dolan
and his colleagues at University College London trained human subjects
in a similar type of task by teaching them that a particular tone was
significant. The group found that learning produces the same type of
tuning shifts seen in animals. The long-term effects of learning by
retuning may help explain why we can quickly recognize a familiar
melody in a noisy room and also why people suffering memory loss from
neurodegenerative diseases such as Alzheimer's can still recall music
that they learned in the past.
Even when incoming sound is absent, we all can "listen"
by recalling a piece of music. Think of any piece you know and
"play" it in your head. Where in the brain is this music
playing? In 1999 Andrea R. Halpern of Bucknell University and Robert
J. Zatorre of the Montreal Neurological Institute at McGill University
conducted a study in which they scanned the brains of nonmusicians who
either listened to music or imagined hearing the same piece of music.
Many of the same areas in the temporal lobes that were involved in
listening to the melodies were also activated when those melodies were
merely imagined.
Well-Developed Brains
Studies of musicians have extended many of the findings noted above,
dramatically confirming the brain's ability to revise its wiring in
support of musical activities. Just as some training increases the
number of cells that respond to a sound when it becomes important,
prolonged learning produces more marked responses and physical changes
in the brain. Musicians, who usually practice many hours a day for
years, show such effects--their responses to music differ from those
of nonmusicians; they also exhibit hyperdevelopment of certain areas
in their brains.
Christo Pantev, then at the University of Münster in Germany, led
one such study in 1998. He found that when musicians listen to a piano
playing, about 25 percent more of their left-hemisphere auditory
regions respond than do so in nonmusicians. This effect is specific to
musical tones and does not occur with similar but nonmusical sounds.
Moreover, the authors found that this expansion of response area is
greater the younger the age at which lessons began. Studies of
children suggest that early musical experience may facilitate
development. In 2004 Antoine Shahin, Larry E. Roberts and Laurel J.
Trainor of McMaster University in Ontario recorded brain responses to
piano, violin and pure tones in four- and five-year-old children.
Youngsters who had received greater exposure to music in their homes
showed enhanced brain auditory activity, comparable to that of
unexposed kids about three years older.
Musicians may display greater responses to sounds, in part because
their auditory cortex is more extensive. Peter Schneider and his
co-workers at the University of Heidelberg in Germany reported in 2002
that the volume of this cortex was 130 percent greater in musicians
than in nonmusicians.
The percentages of volume increase were linked to levels of musical
training, suggesting that learning music proportionally increases the
number of neurons that process it.
In addition, musicians' brains devote more area toward motor
control of the fingers used to play an instrument. In 1995 Thomas
Elbert of the University of Konstanz in Germany and his colleagues
reported that the brain regions that receive sensory inputs from the
second to fifth (index to pinkie) fingers of the left hand were
significantly larger in violinists; these are precisely the fingers
used to make rapid and complex movements in violin playing. In
contrast, they observed no enlargement of the areas of the cortex that
handle inputs from the right hand, which controls the bow and requires
no special finger movements. Nonmusicians do not exhibit these
differences. Further, Pantev, now at the Rotman Research Institute at
the University of Toronto, reported in 2001 that the brains of
professional trumpet players react in such an intensified manner only
to the sound of a trumpet--not, for example, to that of a violin.
Musicians also must develop greater ability to use both hands,
particularly for keyboard playing. Thus, one might expect that this
increased coordination between the motor regions of the two
hemispheres has an anatomical substrate. That seems to be the case.
The anterior corpus callosum, which contains the band of fibers that
interconnects the two motor areas, is larger in musicians than in
nonmusicians. Again, the extent of increase is greater the earlier the
music lessons began. Other studies suggest that the actual size of the
motor cortex, as well as that of the cerebellum--a region at the back
of the brain involved in motor coordination--is greater in musicians.
Ode to Joy--or Sorrow
Beyond examining how the brain processes the auditory aspects of
music, investigators are exploring how it evokes strong emotional
reactions. Pioneering work in 1991 by John A. Sloboda of Keele
University in England revealed that more than 80 percent of sampled
adults reported physical responses to music, including thrills,
laughter or tears. In a 1995 study by Jaak Panksepp of Bowling Green
State University, 70 percent of several hundred young men and women
polled said that they enjoyed music "because it elicits emotions
and feelings." Underscoring those surveys was the result of a
1997 study by Carol L. Krumhansl of Cornell University. She and her
co-workers recorded heart rate, blood pressure, respiration and other
physiological measures during the presentation of various pieces that
were considered to express happiness, sadness, fear or tension. Each
type of music elicited a different but consistent pattern of
physiological change across subjects.
Until recently, scientists knew little about the brain mechanisms
involved. One clue, though, comes from a woman known as I. R.
(initials are used to maintain privacy), who suffered bilateral damage
to her temporal lobes, including auditory cortical regions. Her
intelligence and general memory are normal, and she has no language
difficulties. Yet she can neither make sense of nor recognize any music,
whether it is a previously known piece or a new piece that she has
heard repeatedly. She cannot distinguish between two melodies no
matter how different they are. Nevertheless, she has normal emotional
reactions to different types of music; her ability to identify an
emotion with a particular musical selection is completely normal! From
this case we learn that the temporal lobe is needed to comprehend
melody but not to produce an emotional reaction, which instead relies
on subcortical structures and aspects of the frontal lobes.
An imaging experiment in 2001 by Anne Blood and Zatorre of McGill
sought to better specify the brain regions involved in emotional
reactions to music. This study used mild emotional stimuli, those
associated with people's reactions to musical consonance versus
dissonance. Consonant musical intervals are generally those for which
a simple ratio of frequencies exists between two tones. An example is
middle C (about 260 hertz, or Hz) and middle G (about 390 Hz). Their
ratio is 2:3, forming a pleasant-sounding "perfect fifth"
interval when they are played simultaneously. In contrast, middle C
and C sharp (about 277 Hz) have a "complex" ratio of about
15:16 and are considered unpleasant, having a "rough" sound.
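These interval ratios are easy to verify arithmetically. The sketch below (Python, using the rounded frequencies given above) approximates each interval by the nearest small-denominator fraction, showing why the fifth counts as simple and the semitone as complex:

```python
from fractions import Fraction

def interval_ratio(f_low, f_high, max_denominator=16):
    """Approximate the frequency ratio of two tones as a simple fraction."""
    return Fraction(f_high, f_low).limit_denominator(max_denominator)

# Perfect fifth: middle C (~260 Hz) up to G (~390 Hz)
print(interval_ratio(260, 390))   # 3/2 -- a simple ratio, heard as consonant
# Semitone: middle C (~260 Hz) up to C sharp (~277 Hz)
print(interval_ratio(260, 277))   # 16/15 -- a complex ratio, heard as rough
```

Raising `max_denominator` only makes the semitone's best approximation more unwieldy, while the fifth stays locked at 3/2, which is the sense in which one ratio is "simple" and the other is not.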
What are the underlying brain mechanisms of that experience? PET
(positron emission tomography) imaging conducted while subjects
listened to consonant or dissonant chords showed that different
localized brain regions were involved in the emotional reactions.
Consonant chords activated the orbitofrontal area (part of the reward
system) of the right hemisphere and also part of an area below the
corpus callosum. In contrast, dissonant chords activated the right
parahippocampal gyrus. Thus, at least two systems, each dealing with a
different type of emotion, are at work when the brain processes
emotions related to music. How the different patterns of activity in
the auditory system might be specifically linked to these
differentially reactive regions of the hemispheres remains to be
discovered.
In the same year, Blood and Zatorre added a further clue to how
music evokes pleasure. When they scanned the brains of musicians who
had chills of euphoria when listening to music, they found that music
activated some of the same reward systems that are stimulated by food,
sex and addictive drugs.
Overall, findings to date indicate that music has a biological
basis and that the brain has a functional organization for music. It
seems fairly clear, even at this early stage of inquiry, that many
brain regions participate in specific aspects of music processing,
whether supporting perception (such as apprehending a melody) or
evoking emotional reactions. Musicians appear to have additional
specializations, particularly hyperdevelopment of some brain
structures. These effects demonstrate that learning retunes the brain,
increasing both the responses of individual cells and the number of
cells that react strongly to sounds that become important to an
individual. As research on music and the brain continues, we can
anticipate a greater understanding not only about music and its
reasons for existence but also about how multifaceted it really is.