This is the second of two articles by neuroscience student and musician Matt Gartry entitled ‘What is Music and What is Sound Art? A Neuro-Historical Approach’.

Recap & Overview
Music and sound art can both elicit powerful responses in the human brain, but they often take different approaches to achieving their effects. The first part of this series briefly introduced the difficulties of pinpointing the boundaries between music and sound art. This final article pulls together findings from recent neuroscience research to support a novel brain-based definition of music, against which sound art can be differentiated.

Hearing Music
“Sound is everything we hear and many things we don’t”
– Prof. Allen S. Weiss, New York University

Sound arises from an object’s vibrations. Sounds are detected and perceived by the auditory system – a series of brain connections that link your ears to brain regions involved in sound processing, memory, movement, intellectual thought, pleasure and emotion. It is here, in these highly connected neural networks, that music is distinguished from non-music.

Sounds heard as music activate much more of the brain than those heard as other noises. For instance, many of the circuits involved in the perception of speech and language are also used to process music, but not background noise. Even John Cage was aware of this difference between music and noise, saying: “When I hear what we call music, it seems to me that someone is talking… but when I hear traffic… I don’t have the feeling that anyone is talking. I have the feeling that sound is acting.” Moreover, musicians often use language as a metaphor when describing melodic phrases.

This isn’t surprising when you consider the wealth of similarities between music and speech that are not shared by background noise. These similarities are demonstrated rather nicely by Steve Reich’s early works experimenting with speech and phasing, such as Come Out (1966). As in Lucier’s I Am Sitting in a Room, these pieces gradually morph pure speech into rough, roaring soundscapes.

In both music and language, timing, volume and pitch are structured in accordance with complex sets of rules. People are born with a basic knowledge of these rules. Even newborn babies prefer in-key, consonant music to out-of-key, dissonant music [1] and can recognise tunes played to them over periods of weeks [2]. An understanding of the fundamental principles of music gives listeners a basis for deriving meaning from the sounds they hear.

Electrophysiological evidence suggests that the grammatical structures of both music and speech are analysed by a single brain region, called Broca’s area [3]. Broca’s area seems to be one part of the brain that pulls together related words or notes and lets you understand the meaning of a sentence or a musical phrase as a whole. Interestingly, it has been demonstrated that the rhythms and melodies of a culture’s instrumental music often reflect the unique patterns of rhythm and pitch in that culture’s native speech [4]. This phenomenon may underlie the observation of musicologist Jean-Jacques Nattiez that “the border between music and noise is always culturally defined.”

Image: Steve Reich’s ‘Come Out’ at MoMA – by Jodene

Every human culture in recorded history has shared both language and music. Music’s remarkable capacity for conveying and evoking emotion most likely evolved, at least in part, as a means of bonding groups of people and promoting safety in numbers. The same could be said of dance, which is intimately entwined with music and also dates back to the earliest human civilisations. Functional brain imaging reveals that simply listening to music, without moving to it, activates the brain’s motor cortex and cerebellum, which are involved in analysing rhythm and coordinating your body’s movements [5, 6].

The human brain automatically and involuntarily tracks periodic rhythm and melodic progression while listening to music and speech [7]. This creates a predictive framework against which expectations of upcoming rhythms/accents and changes in pitch are forged. It has been suggested that the balance between rhythmic and melodic predictability and surprise makes a fundamental contribution to the emotional experience of music [8].

Taken together, these findings suggest that the healthy human brain is tuned to appreciate the basic principles of music, such as melodic and rhythmic progression. Certain sequences of sound are innately pleasing, such as those that conform to a tonal key. The brain’s natural ability to make sense of music is fine-tuned throughout life and, at least historically, tends to accord with cultural norms.

At its core, music perception does not require book-learning or intelligent conscious thought (although these can influence perception separately). It is an innate ability, present in almost all people from very early life. Consequently, brain responses to a range of genres – from Vivaldi to Miles Davis to The Beatles – can be accurately predicted using functional imaging [9].

Music can therefore perhaps be defined, from a neuroscience perspective, as any sequence of sounds, structured in time and pitch, that stimulates the brain in a relatively predictable, stereotyped way – activating parts that process sound and language, parts that coordinate your body’s movements, parts that recall past memories and build expectations, and parts that give rise to judgement and emotion.

So, for example, by this definition John Cage’s 4’33” does not meet the criteria for music, since the quiet rustling and fidgeting of a respectful audience would not evoke a complex auditory response of this kind in the brain.

Towards a Distinction Between Music & Sound Art

The neuroscientific definition of music posited here (relating to the human brain’s innate, universal, stereotyped responses to certain progressions of sound) is admittedly narrower than some others. The composer Edgard Varèse simply defined music as “organised sound.” However, this brain-based definition is useful in teasing out the boundaries between music and sound art.

I would argue that the creative process of organising sounds into a structured, musical form is an art – just like organising paint on a canvas is an art – but this process alone is never sound art.

From a neuroscience perspective, if sound art is to feature music, it must use it in combination with some other artistic medium – such as spoken language, visual imagery or an underlying conceptual rationale – so that novel patterns of brain activity are produced and the percept created is ultimately different from that of music alone.

Sound art is an extremely heterogeneous grouping of artistic activity and is still in its infancy; the term was only coined in 1984. As the field gains exposure and popularity, what is and is not sound art is likely to become an increasingly debated matter.

I am neither so foolish nor so arrogant as to prescribe a complete definition of sound art. Instead, in this series, I have briefly set out an evidence-based framework against which future artists and thinkers can separate the identity of sound art from that of music and shape it uniquely over the coming decades.

References
1. Perani D, Saccuman MC, Scifo P, Spada D, Andreolli G, Rovelli R, Baldoli C, Koelsch S. Functional specializations for music processing in the human newborn brain. Proc Natl Acad Sci U S A. 2010; 107(10):4758-63.

2. Zatorre RJ. Music, the food of neuroscience? Nature. 2005; 434(7031):312-5.

3. Maess B, Koelsch S, Gunter TC, Friederici AD. Musical syntax is processed in Broca’s area: an MEG study. Nat Neurosci. 2001; 4(5):540-5.

4. Patel AD, Iversen JR, Rosenberg JC. Comparing the rhythm and melody of speech and music: the case of British English and French. J Acoust Soc Am. 2006; 119(5 Pt 1):3034-47.

5. Chen JL, Penhune VB, Zatorre RJ. Listening to musical rhythms recruits motor regions of the brain. Cereb Cortex. 2008; 18(12):2844-54.

6. Bengtsson SL, Ullén F, Ehrsson HH, Hashimoto T, Kito T, Naito E, Forssberg H, Sadato N. Listening to rhythms activates motor and premotor cortices. Cortex. 2009; 45(1):62-71.

7. Snyder JS, Large EW. Gamma-band activity reflects the metric structure of rhythmic tone sequences. Brain Res Cogn Brain Res. 2005; 24(1):117-26.

8. Levitin DJ, Chordia P, Menon V. Musical rhythm spectra from Bach to Joplin obey a 1/f power law. Proc Natl Acad Sci U S A. 2012; 109(10):3716-20.

9. Alluri V, Toiviainen P, Lund TE, Wallentin M, Vuust P, Nandi AK, Ristaniemi T, Brattico E. From Vivaldi to Beatles and back: predicting lateralized brain responses to music. Neuroimage. 2013; 83:627-36.

Links to mentioned work

Lucier’s I Am Sitting in a Room
http://www.lovely.com/titles/cd1013.html

Reich’s Come Out
http://www.youtube.com/watch?v=uGDo1YN_q3c

Cage’s 4’33”

Main article image is ‘Ear Anatomy’ by El Bibliomata
