School of Performing Arts
A Comparison Between Electronic Music Production in the MIDI
Based Studios of the late 1980’s/early 1990’s, and the Modern Day
DAW Studio.
Edmund Hull
May 2015
Music Technology Dissertation
I. Acknowledgements
I would like to thank Adam Collis, my project supervisor, for the observation and guidance throughout the construction and development of this paper.
I would also like to thank my housemates, friends, and family who have supported me throughout the three years of my Music Technology degree course at Coventry University.
Table of Contents
I. ACKNOWLEDGEMENTS
II. LIST OF FIGURES & TABLES
1. INTRODUCTION
2. RESEARCH
2.1 The MIDI Studio
2.1 a) Introduction to MIDI
2.1 b) How MIDI Messages Communicate
2.1 c) The Practical Uses of MIDI
2.1 d) MIDI Sequencers
2.1 e) MIDI Synchronization
2.1 f) MIDI Bandwidth
2.1 g) The Contrast between MIDI and Digital Audio
2.2 The DAW Studio
2.2 a) Introduction to DAWs
2.2 b) DAW Integration
2.2 c) What Defines a Typical DAW?
2.2 d) What New Audio Manipulation Tools are Offered by the DAW?
2.2 e) MIDI Implementation
2.3 Technology & Electronic Music
2.3 a) The Divide Between Computer Music & Synthesis
2.3 b) Technological Innovation Through Electronic Music
2.3 c) Psychology & Sound Perception
3. PRACTICAL
3.1) Rhythim is Rhythim (Derrick May)
3.2) Black Box
3.3) Burial & How We Interpret Rhythm
3.3a) Broken Home
3.3b) Homeless
4. CONCLUSION
4.1) What does MIDI & DAW Equipment Offer the User?
4.2) Users Versus Innovators
5. BIBLIOGRAPHY
II. List of Figures & Tables
List of Figures:
Figure 1: Basic MIDI connectivity/Daisy-chaining
Figure 2: Typical MIDI sequencer setup
Figure 3: Digital Audio/MIDI Comparison
Figure 4: Image of Logic Pro 9 Audio/Instrument/External MIDI integration
Figure 5: Image of Basic ‘Audio Interface’ DAW Setup
Figure 6: Image of ‘Audio Interface and Expansion Card’ DAW Setup
Figure 7: BPM Analysis – Strings of Life
Figure 7: BPM Analysis – Ride On Time
Figure 8: Broken Home Structure
Figure 9: Homeless Structure
Figure 10: Sound Forge 9 editing software (Walden, 2007)

List of Tables:
Table 1: MIDI data transmission
1. Introduction
Electronic music production has always been at the forefront of the use, development, and manipulation of new technological advancements. The last 20 years have seen a phenomenal development in the way electronic music producers approach music production and in the way technological equipment has influenced, sculpted, and even defined new musical styles and production techniques.
The technological equipment available at any given time has limited all pioneers of electronic music production, as far back as Pierre Schaeffer and Pierre Henry and their development of ‘Musique Concrète’ in 1948, whose famous use of self-recorded “sound effects, musical fragments, vocalizings, and other sounds and noises produced by man, his environment, and his artifacts” (Encyclopædia Britannica 02 December 2013) was processed through locked vinyl groove manipulation techniques. However, their limitation in equipment was what actually enabled them to create and define the new genre of Musique Concrète: without locked-groove vinyl technology, and their experimentation in the manipulation of vinyl technologies, their music would not exist as we know it, or perhaps not at all. As stated in ‘The Cambridge Companion to Electronic Music’: “the technologies that are used to make electronic music are a realisation of the human urge to originate, record and manipulate sounds” (Collins, N. & d’Escriván: 2010).
Every decade sees a vast change in the music technologies offered to electronic music producers, and in turn new genres, styles and production techniques develop as producers use, manipulate and innovate with the new equipment available.
Up until the recent development of the Digital Audio Workstation (DAW), every generation of electronic musicians was limited to some degree by the equipment available at the time. Hardware, no matter how advanced, always seemed to limit an electronic musician in some way. In recent years, however, the possibilities offered by the DAW and computer technology have become almost endless, seemingly offering electronic music producers and sound engineers alike the tools to produce, edit and manipulate almost anything they could possibly imagine. All that is required is an idea for a song structure or a particular sound, and all the tools to realise this vision are featured within a user’s DAW, where endless plugins and software synths can be accessed at the click of a mouse and intricate virtual edits and automations of the various parameters can be performed to create a sonic masterpiece.
Or is that really the case? Perhaps the possibilities of the DAW are too great, too daunting and too time-consuming for most electronic producers to fully appreciate, with few ever grasping the full scope of even the most basic DAW tools available to them. Previously, an electronic musician would realise their ideas through hardware, often multiple units, in order to create electronic music; a thorough knowledge of that hardware, and of how it could be used to create and manipulate sound, was essential to obtaining a desired result.
This paper will examine the most recent and most sweeping of these changes: from the hardware-based MIDI (Musical Instrument Digital Interface) studios that dominated the late 1980s to mid-1990s to the software-based DAW studios of the late 1990s to the present day. The aim of this paper is to highlight how changes in technology have affected the way in which music is produced, both sonically and musically, and how new music genres have developed as a result of the technological equipment available.
2. Research
Throughout the research section of this paper, both MIDI and DAW technologies will
be examined in turn, followed by an investigation into the relationship between electronic
music and technology.
2.1 The MIDI Studio
MIDI technology, and in turn the MIDI studio, was the most recent hardware-based technology prior to the DAW. Although MIDI is still frequently used and integrated within the DAW environment, it tends to serve primarily as a control surface, sequencing piano-roll data for virtual synthesizers and samplers to play back. Connecting a chain of MIDI devices is now rarely necessary, as the DAW is able to sequence all elements of production in one all-in-one environment. Although the age of the MIDI studio is largely over, much of the MIDI hardware is still sought after by electronic musicians who prefer a hardware-based workflow.
2.1 a) Introduction to MIDI
Introduced in 1983, MIDI was the first “digital protocol for interconnecting synthesisers” (Collins, N. and d’Escrivan, J 48:2010). MIDI enabled previously unavailable connectivity, allowing synthesisers and other hardware-based music equipment to be integrated with software. This was a huge step in synthesizer performance and meant that a number of synthesis and sampling modules could be controlled and triggered by one individual.
“Simply stated, Musical Instrument Digital Interface (MIDI) is a digital communications
language and compatible specification that allows multiple hardware and software
electronic instruments, performance controllers, computers and other related devices to
communicate with each other over a connected network.” (Huber, D. 1:2007)
At its simplest, MIDI allows the musician to play several instruments at once from a
single keyboard, rather than having to dash around the stage constantly adjusting and
monitoring all keyboard/sampler based elements.
MIDI provided a universal electronic instrument communications language; “Prior to
MIDI, there were some attempts at providing ways of connecting instruments, but none
were entirely standard and all were very limited” (White, P. 11:2003).
There was no reliable and universal digital music communication language enabling the
synchronization of hardware and software instruments, meaning that the integration of a
number of electronic instruments was significantly more difficult prior to MIDI
standardization.
A well-renowned rhythm machine, the Roland TR-808, is a perfect example of connectivity prior to MIDI. On the back of the TR-808, as well as numerous other pre-MIDI Roland synthesisers and rhythm machines, including the TR-606, the CR-8000, the TB-303 Bass Line, and the EP-6060 electronic piano (which featured an arpeggiator), “you’ll find a five pin DIN jack that a standard MIDI plug will fit into” (Vail, 2000). This, however, was not MIDI but its predecessor, the ‘DCB Bus’, developed by Roland. The DCB Bus was what the MIDI protocol was based on, as this quote from Roland’s then owner, Mr Kakehashi, explains:
“We had developed our own communications protocol,” he explains. “Inside, it was the
same as today’s MIDI. At the same time, Sequential Circuits was developing a MIDI-like
protocol. We called ours the DCB Bus; they called theirs by another name. Then we
discussed how to develop a common standard. Eventually MIDI came out, but actually
more than 80 or 90% of it was based on the DCB Bus. Of course I don’t want to say that
everything was developed by Roland, because that isn’t fair. It was a joint effort. Both
companies agreed to implement the best ideas from both companies, so we jointly
created MIDI. But when you compare it with the DCB Bus, you can see how similar they
are.” (Vail, 2000).
As the quote explains, Roland and Sequential Circuits - both of whom had been trying to develop their own universal digital connections prior to the creation of MIDI - developed MIDI collaboratively, implementing the best features from both technologies. This would explain why the Roland TR-909 (the successor to the TR-808 and Roland’s first analog-digital hybrid machine) was also Roland’s first drum machine to feature the newly developed MIDI ports on the rear of the device (Vail: 2000). Other notable features of the 909 included the ability to accent any percussive event occurring within the beat, and the ability to trigger 909 sounds from a MIDI controller, which enabled the user to obtain a wider dynamic range, as described in Vail: 2000.
2.1 b) How MIDI Messages Communicate
“MIDI isn’t about transmitting sounds; it’s about transmitting information that tells the
instrument what your fingers were doing on the keyboard” (White: 2003).
MIDI relies on the transmission of digital data, in order to communicate a wide range of
player entry information, including velocity, sustain time of each note, pitch bend and
modulation wheel parameters. “When a key is depressed on a MIDI keyboard, a signal
known as a Note On message is sent, along with a note number identifying the key. This
is how MIDI instruments know what note to play, when to play it and when to stop
playing it” (White: 2003). MIDI transmits this data as numbers in the range 0 to 127, which describe how each feature is interacted with or manipulated. For example, the scale is used to describe how hard the player presses down each key (velocity), where 1 would represent the lowest velocity and 127 the maximum (a Note On with velocity 0 is treated as a Note Off).
This information is then sent to and from other MIDI devices and controllers, or to a computer-based interface or sequencer, where it is decoded either to re-trigger another keyboard or to be recorded visually into a computer sequencer, from which it can be played back through the keyboard just as the player performed it.
“Like computers, the data is in a digital form - a sort of ultra-fast Morse Code for machines.” (White: 2003). The pedals and wheels of the keyboard (its expressive controls) are known as ‘Continuous Controllers’, meaning that their movements are constantly monitored by the MIDI system over a sequence of minute steps; these steps are so tiny, however, that the impression is one of continuous change.
The general MIDI controller numbers are as follows:
“Controllers 0 to 63 are used for continuous controllers, while 64 to 95 are used for
switches. 96 to 121 are as yet undefined and 122 to 127 are reserved for Channel Mode
messages.” (White: 2003).
Each channel message is made up of a status byte followed by up to two data bytes. The status byte carries the type of information being transferred - ‘Note On’, for example - together with the channel number; Data 1 in this instance would be the note number of the key being pressed, and Data 2 would carry the velocity data describing how hard the note is pressed (see Table 1).
Status            Data 1            Data 2
1 s s s n n n n   0 x x x x x x x   0 y y y y y y y

Each byte is 8 bits. The ‘sss’ bits of the MIDI message define the message type, the ‘nnnn’ bits define the channel number, and the ‘xxxxxxx’ and ‘yyyyyyy’ bits carry the message data.
Table 1: MIDI data transmission (Author’s Own)
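To make the byte layout of Table 1 concrete, the following Python sketch builds and decodes a Note On message. The helper functions are illustrative only, not part of any particular MIDI library:

```python
# A sketch of the three-byte Note On message described in Table 1.
# Function names are illustrative; a real project would use a MIDI library.

def note_on(channel, note, velocity):
    """Build a Note On message: a status byte, then two 7-bit data bytes."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    status = 0x90 | channel   # binary 1001nnnn: 'sss' = 001 (Note On), 'nnnn' = channel
    return bytes([status, note, velocity])

def decode(message):
    """Split a channel message back into message type, channel and data bytes."""
    status, data1, data2 = message
    return (status >> 4) & 0x7, status & 0xF, data1, data2   # sss, nnnn, x..., y...

msg = note_on(channel=0, note=60, velocity=100)   # middle C, played fairly hard
print(msg.hex())      # 903c64
print(decode(msg))    # (1, 0, 60, 100)
```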
2.1 c) The Practical Uses of MIDI
MIDI is a diverse and powerful tool with a variety of uses and applications, giving the user a wide range of possibilities for connecting devices. The first and foremost of these is the ability to link MIDI keyboards together. Linking MIDI instruments is accomplished by means of standard MIDI cables: twin-cored, screened cables with five-pin DIN plugs at either end.
The cables are plugged into the relevant ports depending on the desired function. For example, the typical ‘MIDI master/slave connection’ requires the user to run a MIDI cable from the ‘MIDI Out’ port of the master keyboard into the ‘MIDI In’ port of the slave keyboard; provided both instruments are set to the same MIDI channel, notes played on the master keyboard will also play on the slave keyboard.
Figure 1: Basic MIDI connectivity/Daisy-chaining (Author’s Own)
“The ability to link a second instrument via MIDI means that the sound of both instruments can be played from just one keyboard” (White: 2003); however, it doesn’t stop there. Using the ‘MIDI Thru’ port, slave keyboards can be ‘daisy-chained’, enabling the player to drive multiple slave keyboards at once from a single MIDI keyboard. The player simply assigns each slave to receive on a desired channel (up to 16), allowing the master instrument to communicate with one specific slave without all the others trying to play along.
The standard means of controlling MIDI usually revolves around a keyboard-based MIDI device; however, unlike a conventional electronic keyboard, a MIDI controller gives the player access to a much wider range of performance controls (velocity, pitch bend, modulation and other continuous controllers).
2.1 d) MIDI Sequencers
A “MIDI sequencer is really a multi-track MIDI recorder” (White, P. 33:2003), which records performance data from MIDI input and uses it to control synthesizers. Each track “may be edited, erased or re-recorded independently of the other parts” (White: 2003). A modern MIDI sequencer will provide at least 16 tracks.
Live recording followed by editing is the typical workflow offered by the MIDI sequencer: the idea is captured first, and the MIDI data - and in turn its playback - is then edited, perhaps by quantizing the notes so they sit closer to the beat (a minimal sketch of this step follows below). Having captured a MIDI recording, it is then possible to play back that same sequence of notes by plugging in any keyboard; however, the sound of the synthesizer is not captured in the recording unless it has also been recorded as an audio track alongside the MIDI track. It is also important to note that the sequencer itself cannot generally play back any of the recorded sounds without a synthesizer or keyboard connected to it via MIDI, the exception being sequencers with built-in synthesizers.
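As a minimal illustration of that quantization step - assuming, purely for illustration, that note start times are stored in beats - the logic reduces to rounding each time to the nearest grid line:

```python
# A minimal sketch of MIDI quantization: snap recorded note-on times,
# here assumed to be stored in beats, to the nearest grid line.

def quantize(times_in_beats, grid=0.25):
    """Move each note-on to the nearest multiple of `grid` (0.25 beats = 16th notes)."""
    return [round(t / grid) * grid for t in times_in_beats]

recorded = [0.02, 0.98, 1.51, 2.26, 3.74]   # slightly loose playing
print(quantize(recorded))                   # [0.0, 1.0, 1.5, 2.25, 3.75]
```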
MIDI sequencers are now found in the form of computer sequencing software in which you can set out and arrange your MIDI tracks. The sequencer will capture “velocity, pitch, modulation, aftertouch and other controller information, as well as MIDI Program Change, Bank Change, and Note On and Off messages” (White: 2003).

Figure 2: Typical MIDI sequencer setup (Author’s Own)
2.1 e) MIDI Synchronization
Because of the versatility that MIDI offers, it can be set up and synchronized with different pieces of equipment in order to optimize usability; the key element of such a setup is the ‘MIDI sync’ box. All MIDI sequencers and drum machines contain a ‘MIDI Clock’. The MIDI clock acts as a high-resolution metronome: “it provides the electronic sprockets and gears that allow two or more pieces of MIDI equipment to be run in perfect synchronization, with one device acting as a master (and thus dictating the tempo) and the others functioning as slaves” (White, P: 2000). This can be seen in Figure 2, where the master synth controls the two slave modules; the sequence could then be recorded to the sequencer in order to play back the phrase exactly as it was played through the synthesizer and the two slave modules.
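The MIDI Clock itself runs at 24 pulses per quarter note, so the tick rate follows directly from the tempo; a small worked sketch:

```python
# MIDI Clock sends a timing byte (0xF8) 24 times per quarter note (24 PPQN);
# slave devices derive the master's tempo from the spacing of these ticks.

PPQN = 24

def clock_interval_ms(bpm):
    """Milliseconds between successive MIDI Clock ticks at a given tempo."""
    return 60_000.0 / (bpm * PPQN)

for bpm in (120, 140):
    print(f"{bpm} BPM -> one clock tick every {clock_interval_ms(bpm):.2f} ms")
# 120 BPM -> one clock tick every 20.83 ms
# 140 BPM -> one clock tick every 17.86 ms
```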
2.1 f) MIDI Bandwidth
MIDI transmits at 31,250 bits per second; since each byte costs ten bits on the wire (a start bit, eight data bits and a stop bit), this works out at 3,125 bytes per second, or roughly 3 KB/s. This was adequate in 1983, but compared to the transmission rates of modern connections MIDI seems extremely slow and outdated in terms of connectivity and data transmission speed. USB 3.0 signals at 5 Gbit/s, some 160,000 times the MIDI rate, while Apple’s more recent Thunderbolt 2 connection runs at 20 Gbit/s - four times faster again, and around 640,000 times the rate of a MIDI connection (Apple: 2015). In terms of raw bandwidth, then, MIDI is almost insignificant next to modern interconnects, although the compactness of its messages has kept it usable for performance data.
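These figures matter in practice because every MIDI event has to queue on the same serial wire. A back-of-envelope sketch of the timing:

```python
# Back-of-envelope MIDI timing: the serial link runs at 31,250 baud and each
# byte costs 10 bits on the wire (start bit + 8 data bits + stop bit).

BAUD = 31_250
BITS_PER_BYTE = 10

def transmission_time_ms(num_bytes):
    return num_bytes * BITS_PER_BYTE / BAUD * 1000

print(f"{transmission_time_ms(3):.2f} ms")    # one 3-byte Note On: 0.96 ms
print(f"{transmission_time_ms(30):.1f} ms")   # a 10-note chord: up to 9.6 ms of smear
```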
2.1 g) The Contrast between MIDI and Digital Audio
It is important to explain the distinction between MIDI and digital audio: the two do not perform the same task. MIDI sequencing is entirely different from the recording of multiple channels of audio through digital equipment. When a MIDI-based song is played back, the instruments all behave according to the recorded MIDI data: which note to play, at what velocity, for how long, and so on. This is unlike digital audio, which essentially plays back a pre-recorded audio recording or sample exactly as it was recorded (or edited in DAW software). This is the main difference: MIDI data describes only the expressive detail of how the notes and chord sequences are played, and has no bearing on the actual sound of the instrument itself. Because the MIDI data is independent of the sound of the synthesiser or keyboard, the player is able to play back the musical phrase through any MIDI-based device.
Figure 3 shows how a keyboard supporting both MIDI and line outputs could be ‘recorded’ and played back exactly as the player performed the phrase, both as a digital audio recording and as a MIDI recording. The diagram demonstrates that for a MIDI recording to play back as it was played, the same keyboard that recorded the phrase must be connected in order to reproduce the original sound. If a different keyboard were connected, the same phrase would be played, but with the timbre and tonality of the new keyboard’s default or selected sound.
Figure 3: Digital Audio/MIDI Comparison (Author’s Own)
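The difference in what the two formats actually carry shows up in their data rates. The following arithmetic compares CD-quality stereo audio with the MIDI link’s 3,125 bytes per second:

```python
# CD-quality digital audio versus MIDI, in bytes per second. The audio stream
# carries the sound itself; MIDI carries only performance instructions.

sample_rate, bit_depth, channels = 44_100, 16, 2

audio_rate = sample_rate * (bit_depth // 8) * channels   # bytes per second
midi_rate = 3_125                                        # bytes per second

print(audio_rate)                 # 176400
print(audio_rate / midi_rate)     # ~56: the audio stream is ~56x denser
```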
2.2 The DAW Studio
The DAW studio is the modern norm for recording, editing, mixing and mastering across all genres and types of music, with almost all studios comprising a central computer and a DAW of some description.
2.2 a) Introduction to DAWs
Historically, DAWs have been part of the audio production process as far back as 1978, with the introduction of the first ‘DAW’ (Digital Audio Workstation) by Soundstream, although computer music itself dates back much further, to the development work at Bell Labs in the 1960s. Hard disks (very low in capacity by today’s standards) were used for storage and accommodated very basic editing of the recorded audio, in addition to mix-down and cross-fades (Langford, 2014: 9). However, because of their text-based disk operating systems (DOS), these early DAW systems were extremely unfriendly to navigate, which made them unpopular with most musicians, who did not understand how text-based production systems worked.
It was this text-based, hard-to-use interface that provoked the move to PCs in the 1980s. The PC operated with GUIs, or ‘Graphical User Interfaces’, in which onscreen icons and objects visually represented the user’s commands and interactions (see Figure 4). By the late 1980s, many affordable computer platforms using GUI operating systems were available, many of which already had sequencing software packages written for them, allowing the control, playback and recording of MIDI instruments, as described in Langford: 2014.
As the processing power of PCs increased, music sequencing software advanced with it, and it wasn’t long before the introduction of Digidesign’s ‘Sound Tools’ software in 1989. Sound Tools was a big step in music production, mainly due to its advanced editing features, one of which changed the way we view music on a DAW: the introduction of the ‘FFT window’, providing users with a Fast Fourier Transform view of the audio recording.
This gave a view similar to that of a spectrum analyser, with frequency on the horizontal axis and amplitude on the vertical axis, but it also showed how the spectrum changed over time. This provided the user with a ‘3D’ view of how a sound evolved and, in turn, an idea of what might need to be done to alter that sound in the desired way; “then the traditional tools would be used to actually make the changes” (Langford, 2014: 10).
2.2 b) DAW Integration
The DAW is now the most practical and widely used means of audio creation and production. The DAW is essentially a development of the PC acting as a sequencer in the typical MIDI setup of the early 1990s (see ‘2.1 d) MIDI Sequencers’). Most DAWs, on starting up, bring the user to a typical sequencer window, where a selection of channels can be added to begin a session. These channels typically include audio tracks, software instrument tracks, and external MIDI tracks.
Figure 4: Image of Logic Pro 9 Audio/Instrument/External MIDI integration (GUI)
It is the ability of the PC to let the user create, manipulate, mix and master internally, without any external sound source, that defines the ‘Digital Audio Workstation’. A track can now essentially be constructed and completed entirely in the virtual world of the DAW, without any outboard equipment other than a set of monitors to listen back on. The vast amounts of memory now available at low prices make it easy for even the bedroom producer to run what would once have been considered very powerful modelling synthesisers. For example, Native Instruments’ ‘Massive’, a wavetable synthesiser well known for its contribution to bass sound production, is often associated with the fairly recent genre of ‘Dubstep’, in which a heavily granulated, warbling bassline typically defines the genre beneath the slow half-tempo thud of 140 BPM drums.
2.2 c) What Defines a Typical DAW?
DAWs now have the power to “effectively replace and encapsulate much of or all of the functionality present in a traditional console-and-outboard-gear-based studio” (Leider, 2004: 46). This can be seen in almost all DAWs, which feature numerous software synthesisers and plugins enabling the user to mix and develop a track internally, without any outboard hardware modules to shape and sculpt the sound.
There are two typical DAW setups:
The first is the ‘audio interface’ based setup (see Figure 5), where the purpose of the audio interface is solely to act as a high-quality A/D and D/A converter for the computer. Typically this comprises two to six audio inputs - a mixture of XLR and 1/4-inch jack inputs - as well as a stereo monitor output and often a headphone mix. This form of DAW is inexpensive and often more practical for small-scale recording and production work, where the computer will be more than able to cope with all aspects of sound processing.
Figure 5: Image of Basic ‘Audio Interface’ DAW Setup (Firewire 800 Interface)
The second setup is the ‘audio interface and expansion card’ setup. Generally a much larger audio interface is assumed, typically with 16, 24 or 48 inputs, which offers the user far greater scope to record multiple sound sources at once. However, recording so many inputs simultaneously would not be possible without the expansion cards, which assume the role of audio processing, editing and mixing. “These systems free the host computer to concentrate on running its operating system and managing files and disk access.” (Leider, 2004: 46). However, the cost of these systems is high, and they can be expensive to upgrade once they have become outdated. Most of these systems will also require the user to incorporate a mixing desk module to work alongside the expansion card for audio recording control, and often for DAW control with digital desks (as seen in Figure 6, with the Control 24 mixing desk used to interact digitally with Pro Tools).
Figure 6: Image of ‘audio interface and expansion card’ DAW Setup
2.2 d) What New Audio Manipulation Tools are Offered by the DAW?
Audio manipulation is something every music producer seeks when building and creating a track. The DAW offers its users a vast range of audio manipulation tools that can be used to totally transform an audio recording. The most notable include the following:
• “Drum replacement”, in which the user replaces the hits of recorded drums with selected samples, retaining the style and arrangement of how the drums were recorded but replacing the hits with samples;
• “Time stretching”, where the length/duration of the audio file is altered but not the pitch;
• “Pitch shifting”, where the pitch is altered but not the duration of the audio file/sample;
• “Elastic audio”, which allows the user to ‘stretch’ audio recordings either with or without affecting the pitch of the recording. (Langford: 2014).
‘Drum replacement’ is an example of an audio manipulation tool offered by the DAW. Essentially, it enables the user to replace elements of recorded drums - for example, the total replacement of a track’s snare drum, where the DAW is set to analyse every recorded snare hit and then place a replacement sample in exactly the same position as each one. The velocity and volume of each hit can then be altered individually to lend a sense of realism to the replacement samples (a toy sketch of the underlying process follows below).
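As a toy sketch of that process, assuming the drum track is already available as a NumPy array; real DAW detectors are far more sophisticated, and the threshold values here are invented:

```python
import numpy as np

# A toy sketch of drum replacement: find hits in a recorded drum track with a
# crude threshold detector, then overlay a replacement sample at each hit,
# scaled to the level of the original hit. Threshold values are invented.

def detect_hits(track, threshold=0.5, min_gap=2000):
    """Return sample indices where the level first exceeds the threshold."""
    hits, last = [], -min_gap
    for i, x in enumerate(track):
        if abs(x) >= threshold and i - last >= min_gap:
            hits.append(i)
            last = i
    return hits

def replace_hits(track, sample, threshold=0.5, min_gap=2000):
    """Rebuild the track using only the replacement sample at each detected hit."""
    out = np.zeros_like(track)
    for i in detect_hits(track, threshold, min_gap):
        end = min(i + len(sample), len(out))
        out[i:end] += abs(track[i]) * sample[:end - i]   # match each hit's level
    return out
```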
‘Time stretching’ is the name given to the lengthening and shortening of audio files and samples. Previously, in the analog domain, the user would have to alter the playback speed of the audio sample - and, in turn, its pitch - whereas time stretching now “enables the user to expand or compress the length of the audio file without affecting the pitch” (Langford: 2014). Time stretching has been available for some time and has improved since its early development, although it still performs at its best with changes of less than around four seconds (shorter or longer) before the artefacts and ‘jitter’ of the stretched audio begin to be heard - usually an undesirable effect.
‘Pitch shifting’ appears to be the counterpart to time stretching: the pitch of the audio file or sample is altered, but the file is not lengthened (for a lower pitch) or shortened (for a higher pitch) as a result of the pitch change, as it would be when altering pitch in the analog domain.
However, pitch shifting was available prior to computer-based audio editing in the digital domain, in the form of the Eventide ‘H910 Harmonizer’, in which the user had to input a pitch ratio (to only two decimal places) rather than a pitch difference, “which meant it was difficult for users to achieve exact pitch matching” (Langford: 2014). The H910 was superseded by the ‘H3000’ in 1986, featuring MIDI connectivity, enabling the user to control the parameters of the device through MIDI. The H3000 also allowed the shift to be entered as a pitch difference (+1/-1 semitones).
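The two-decimal limitation is easy to quantify: in equal temperament a semitone is a frequency ratio of 2^(1/12) ≈ 1.0594631, so the nearest ratio a user could dial in sits slightly off pitch. A small sketch of the error, measured in cents:

```python
import math

# Why a two-decimal pitch ratio (as on the H910) cannot guarantee exact pitch
# matching: one equal-tempered semitone is 2**(1/12) ~ 1.0594631, so the best
# a user could dial in was 1.06, leaving a small residual error in cents.

def semitones_to_ratio(n):
    return 2 ** (n / 12)

def cents_error(semitones, rounded_ratio):
    return 1200 * math.log2(rounded_ratio / semitones_to_ratio(semitones))

print(round(semitones_to_ratio(1), 2))        # 1.06, the dialled-in value
print(f"{cents_error(1, 1.06):+.2f} cents")   # +0.88 cents sharp of a semitone
print(f"{cents_error(7, 1.50):+.2f} cents")   # a fifth up: +1.96 cents sharp
```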
Despite the arrival of pitch shifting in the digital domain, it wasn’t until Antares Audio Technologies released ‘Auto-Tune’ in 1997 - relying heavily on the technology of the H3000 - that a ‘quantize’ feature was offered, enabling the user to snap their audio to exact pitches. The technology became rapidly popular throughout the early 2000s due to the low cost of the plugin compared with hardware units such as the H3000, and Auto-Tune is arguably the most innovative early step in establishing the DAW as “the go-to medium for audio recording, production and manipulation” (Langford: 2014). The technology has since been developed further, and Celemony’s ‘Melodyne’ is now the industry-standard pitch-shifting plugin, allowing users to input and control the audio key through MIDI input, which is then mapped out in a GUI window.
‘Elastic audio’ is a very recent concept, a direct result of the huge increase in computer processing power over the last ten years. It is essentially a real-time and non-destructive way of time-stretching audio files, where the user quite literally drags elements within the audio track (such as transient points) to synchronise them with the tempo grid behind. Elastic audio, although not a new technology in its own right, is a much simpler, visual approach to time stretching and pitch editing. Truly elastic audio has two characteristics of flexibility - time manipulation and pitch manipulation - that is, the ability to alter audio length with or without altering pitch. The main difference between elastic audio and conventional time stretching is essentially ease of use: the user doesn’t need to be concerned about tempo ratios or audio file lengths; as long as the underlying tempo grid is correct, there will be no issue when stretching the audio. Another useful feature of elastic audio is the quantisation of audio events: using the quantisation feature exactly as one would quantise a MIDI event, the user is able to quantise every hit of a drum track precisely in time with the tempo grid, all through the use of transient markers and the versatility of elastic audio (a sketch of this step follows below).
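In outline, that audio-quantise step mirrors MIDI quantisation, only operating on transient times in seconds rather than notes in beats. A sketch, assuming transient detection has already been done:

```python
# A sketch of elastic-audio quantisation: snap detected transient times (in
# seconds) onto the tempo grid. Detection is assumed done; a real DAW would
# also time-stretch the audio between transients, not merely relocate them.

def snap_transients(transients, bpm, subdivision=4):
    """Snap transients to the nearest grid line; subdivision=4 gives 16th notes."""
    grid = 60.0 / bpm / subdivision      # seconds per grid line
    return [round(t / grid) * grid for t in transients]

hits = [0.01, 0.52, 0.98, 1.47]          # loosely played drum hits
print(snap_transients(hits, bpm=120))    # [0.0, 0.5, 1.0, 1.5]
```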
2.2 e) MIDI Implementation
As mentioned earlier, MIDI is still very much a part of the DAW setup. However, it is now used in three forms:
• The first use is the traditional MIDI keyboard input and sequencer setup, where the computer is used simply to record MIDI data and play it back as the user played it in - exactly as MIDI was traditionally used.
• The second use is as an input for internal software instruments, where, much like the traditional MIDI setup, the interactions of the user are interpreted as MIDI data; when the recorded data is played back, however, it is an internal software instrument that responds to the MIDI messages.
• The third use is as an extra control device, to control or automate the behaviour of plugins and audio-editing parameters. For example, this could be the use of the modulation wheel to automate the cutoff of a lowpass filter plugin, or, as mentioned in section 2.2 d), using the keys to control the polyphonic pitch response of Melodyne when tuning a vocal track (a sketch of such a mapping follows the list below).
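As a sketch of the third use: the modulation wheel arrives as Continuous Controller 1 with a 7-bit value, which the DAW maps onto a plugin parameter. The cutoff range and the exponential curve below are illustrative choices, not any particular plugin’s behaviour:

```python
# Mapping an incoming mod-wheel value (Continuous Controller 1, range 0-127)
# onto a lowpass filter cutoff. The 20 Hz - 20 kHz range and the exponential
# curve are illustrative choices; they make the sweep sound even to the ear.

def mod_wheel_to_cutoff(cc_value, lo_hz=20.0, hi_hz=20_000.0):
    t = cc_value / 127
    return lo_hz * (hi_hz / lo_hz) ** t

for cc in (0, 64, 127):
    print(cc, f"-> {mod_wheel_to_cutoff(cc):.0f} Hz")
# 0 -> 20 Hz, 64 -> ~650 Hz, 127 -> 20000 Hz
```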
2.3 Technology & Electronic Music
The relationship between technology and electronic music is inseparable: advancements in technology almost invariably result in advancements in the way electronic music is created, manipulated and produced. This section will examine the central arguments within electronic music regarding the use of technology for expressive electronic music production.
2.3 a) The Divide Between Computer Music & Synthesis
In Schrader’s ‘Introduction To Electronic Music’ he interviews Jean-Claude Risset, a French pioneer of early computer music. “Risset pioneered computer music and sound analysis, particularly of brass instruments, through the use of computers at ‘Bell Labs’ from 1964” (Schrader 1982). The interview between Schrader and Risset is based around Risset’s “Mutations I”, released in 1969, an experimental computer music piece featuring a number of unusual and discordant sounds. Within the interview Schrader asks, “Do you feel there are any fundamental differences between electronic music composed with synthesisers and computer music?”, to which Risset clearly states that he is “only interested in the kind of computer music that differs from electronic music composed with synthesisers” (Schrader 1982).
composed with synthesisers” (Schrader 1982). He then expands on this, explaining his
dislike for composers who simply use the computer as an ‘elaborate synthesiser’ stating
that the computer is a far more powerful, flexible and precise tool than the synthesiser.
He continues to set out his dislike for synthesisers, explaining that the synthesiser
“restricts the sonic possibilities” of the sounds created and that in the way synthesisers
are developed they bias the user towards “instrument like performances”, unlike the
computer which enables the user complete freedom of composition and sound shaping
and manipulation (Schrader 1982). Risset’s argument clearly illistrates the major
differences between computer music and electronic music through synthesis, and it
would still appear that such a divide exists, having analysed both MIDI and DAW studios
(sections 2.1 and 2.2). The MIDI studio - much like Risset description of earlier synthesis
- encourages the user to bias and structure their music as an ‘instrument like
performance’ with many synthesis arpeggiators mainly working in 4/4 timings. This is
unlike the total freedom of music expression offered by the computer. This t isn’t to say
that the DAW doesn’t encourage the user in a similar fashion, with the default editing
window of many DAW’s (both Logic and Pro Tools for example) being a tempo grid set
to a 4/4 timing at 120BPM. This automatically encourages the user to begin structuring
their music to conform within these parameters. However it is important to mention that
these parameters can easily be disabled, leaving the user with a ‘blank canvas’ from which
to begin their musical productions.
Later in the interview, Schrader asks another pertinent question: “You have been involved with computer music for several years and you have experienced several technological changes. How do you think the technology of computer music has affected your compositional style?” (Schrader: 1982). Risset states quite clearly that his compositional style relates closely to technological advancements in computer music. In terms of their implications, Risset explains his belief that the computer itself has given a totally new perspective on “completely formalized processes that can be easily automated” (Schrader 1982), where an individual can design almost all of the sonic constraints involved in their musical works (Schrader 1982). Essentially, he is explaining that, even when this interview was published in 1982, there was in his opinion still a significant difference between the possibilities offered by the hardware and software music production worlds.
2.3 b) Technological Innovation Through Electronic Music
“The barriers to electronic music have significantly dropped in the past twenty years; cost, size and speed are the three main factors in this revolution” (Collins, N. & d’Escriván: 2010). The time it now takes for a composer or producer to hear the results of their musical efforts is almost instantaneous; laptop- and PC-based audio workstations are now so powerful that the majority of musicians neither use nor fully appreciate their capacity. Furthermore, most DAWs are now priced so low that it is “not uncommon for musicians, even in developing countries, to own a number of machines” (Collins, N. & d’Escriván: 2010).
With the ease and low cost of acquiring a DAW, many would assume a direct increase in musical innovation, as the effect of new technology is multiplied by mass access to electronic music production equipment. However, it appears that this is often not the case: many electronic producers and musicians feel that innovation through technological equipment is not occurring in the ‘new’ digital and computerised age of music production. This argument is supported by Alejandro Viñao in his article “The Future of Technology in Music”, a feature within “The Cambridge Companion to Electronic Music” (Collins, N. & d’Escriván: 2010). In it he suggests that, in fact, most innovative electronic musicians are using ‘the technology of another time’ to realise their creative musical ideas. “They appear to have lost their lust for innovation through the new and latest technologies, and are instead only using what they feel comfortable with, not pushing the boundaries in their use of musical equipment” (Viñao: 2010). To a degree, the idea of ‘only using what you know’ makes sense in terms of music production, as it provides the producer or electronic musician with predictable results: they know how the equipment works and what to do in order to obtain a desired sound. Alternatively, it could be the equipment itself that actually defines the music, and the use of that equipment that many associate with a particular producer - for example, Jimi Hendrix and his use of the electric guitar, an instrument that he mastered and experimented with, and which as a result defined his distinct sound.
2.3 c) Psychology & Sound Perception
Sound perception, sound localisation and the psychology of music are essential to embrace when composing and producing a musical work, regardless of genre. It is all of these elements combined that help the listener to interpret the song as the producer intended, with the envisioned musical meaning passed on through the listening experience. However, because many of the sounds heard within electronic music are not ‘natural’ - i.e. acoustic - sounds, it is especially important for the electronic music producer to establish the environment the listener should associate with these synthesised sounds; this may revolve around the localisation of the sound (where it sits within the sound field), its timbre, pitch and rhythm.
The localisation of a sound is split into three key elements:
• Location of azimuth,
• Elevation and
• Distance.
The localisation of azimuth refers to the identification of sound on the horizontal plane.
In music production this would refer to left/right panning of the sound field in order to
obtain differentiation between the elements that make up the musical piece (Collins, N. &
d’Escriván: 2010).
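In a DAW mixer, azimuth placement typically comes down to a pan law. A sketch of the common equal-power law follows; individual DAWs differ in the exact curve they apply:

```python
import math

# An equal-power pan law, one common way azimuth placement is realised in a
# DAW mixer: total perceived level stays constant as the source moves L/R.

def equal_power_pan(pan):
    """pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left_gain, right_gain)."""
    angle = (pan + 1) * math.pi / 4          # sweep from 0 to pi/2
    return math.cos(angle), math.sin(angle)

for p in (-1.0, 0.0, 1.0):
    left, right = equal_power_pan(p)
    print(f"pan {p:+.1f}: L={left:.3f} R={right:.3f}")
# centre gives L=R=0.707, i.e. each channel sits 3 dB below full scale
```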
The localisation of elevation refers to where the sound source is in relation to the vertical
plane, i.e. high-pitched sounds appear to be located ‘above us’ whereas low-pitched
sounds appear ‘below us’. Although less accurate than the localisation of azimuth, “the
localisation of pitch is still an essential feature to consider when producing
music” (Collins, N. & d’Escriván: 2010).
Finally, the localisation of distance is interpreted through a mix of ‘loudness of the sound source’, ‘a knowledge of the sound itself’ and ‘the loudness ratio between direct and reverberant sound’. In electronic music production it tends to be predominantly the loudness ratio between direct and reverberant sound that is manipulated in order to give a sense of distance to a particular sound. This differs from acoustic music recording, where all of these techniques are used and incorporated into microphone technique in order to convey a sense of the instrument, the room it is in, and the intended distance of the listener from the sound source.
When listening to music, we automatically attend to what we need to hear and draw meaning from. This is known as ‘auditory streaming’, a phenomenon which enables us to concentrate and focus on single elements within a complex sound field - for example, picking out the speech of an individual in a loud, busy environment - and so make sense of the sounds around us. In music, this phenomenon enables us to “hear the music as a collection of its individual streams, vocals, bass lines, melodic lines, and rhythm” (Collins, N. & d’Escriván: 2010).
These ideas of auditory streaming are part of ‘Gestalt psychology’, pioneered by Christian von Ehrenfels and Max Wertheimer. Gestalt psychology can be split into the following principles:
• Principle of Common Fate: objects which move together are usually grouped together
• Principle of Closure: objects which appear to form ‘closed entities’ are usually grouped together
• Principle of Similarity: objects sharing similar characteristics are usually grouped together
• Principle of Proximity: objects that appear close together are usually grouped together
• Principle of Good Continuation: continuous forms tend to be preferred. (Collins, N. & d’Escriván: 2010).
Pitch perception is the interpretation of the pitch, and the pitch relations, of all elements within a musical piece, where “traditionally, most music is comprised of discrete pitches, or scales, instead of a continuum of them. Furthermore it is common that the scales repeat themselves after an octave or a frequency ratio of two” (Collins, N. & d’Escriván: 2010). In western music this scale is part of ‘12-Tone Equal Tempered Tuning’ (12TET), the tuning which almost all music in the western world, regardless of genre, tends to use. However, with today’s computer music technology it is fundamentally easy for the user to experiment with adaptive tunings, in which the intonation itself is modified to fit the current key of the piece.
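The structure of 12TET can be stated in one formula: each semitone multiplies frequency by 2^(1/12), so twelve semitones give exactly the octave ratio of two. A sketch, using the common reference of A4 = 440 Hz (MIDI note 69):

```python
# 12-tone equal temperament: each semitone multiplies frequency by 2**(1/12),
# so the scale repeats exactly at the octave (a frequency ratio of two).
# A4 = 440 Hz = MIDI note 69 is the usual reference point.

def midi_note_to_hz(note, a4=440.0):
    return a4 * 2 ** ((note - 69) / 12)

print(f"{midi_note_to_hz(60):.2f} Hz")   # middle C: 261.63 Hz
print(f"{midi_note_to_hz(81):.2f} Hz")   # A5, an octave above A4: 880.00 Hz
```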
Rhythm is essentially the ‘glue’ that holds a musical piece together, and “the ability to infer beat and metre from music is one of the basic activities of music cognition” (Collins, N. & d’Escriván: 2010). Even when a piece is rhythmically complex, with changing time signatures and note values, we are able to interpret its rhythmic patterns, because humans are readily able to infer pulses and beats. Electronic music - particularly computer music - gives the user the possibility of investigating alternative and non-standard musical structures that deviate from the common-metre practice of western music.
3. Practical
Over the course of this practical examination, a number of tracks from each technology period will be examined in detail, including the physical make-up and construction of the tracks, which will be broken down into their corresponding elements using Pro Tools, with anomalies and unusual findings recorded in the process. The technical equipment used to create each track will then be researched and examined to determine how the limitations and scope of that equipment would have influenced the structure and style of the tracks created.
This examination will demonstrate the differences in the technology offered by the MIDI and DAW studios respectively, outlining what each could offer producers in terms of creativity relative to its limitations, and exploring whether creativity and technological advancement develop together or whether one precedes and influences the other.
3.1) Rhythim is Rhythim (Derrick May)
“Strings of Life” is a definitive Detroit techno track released in 1987 by Rhythim is Rhythim (Derrick May). The track is a well-renowned and popular example of electronic music production of its time, although it would be considered extremely basic by today’s standards. May is renowned for his use of MIDI equipment, and it is reported that he uses “Korg sequencers, Roland sequencers, Roland drum machines” (May 2006). Reportedly, May’s equipment includes the likes of:
• Roland TR-808
• Roland TR-909
• Roland TR-727
• Yamaha DX100
• Kawai K3
• Sequential Circuits Pro One
• Nord Lead 1 Keyboard
• Memory Moog
• Waldorf Micro Q
• Yamaha DX21
• Ensoniq Mirage
• Korg Poly800
• Atari ST (computer)
It seems that musical innovation through the educated use of technological equipment is what May believes is essential when creating music, stating: “Now, with the age of technology, you don’t even have to be a ‘synthesist’. You don’t even have to know what a synthesizer is, to make music. I’m all for the future, 100%, but I just find the future not 100% into being creative. The future doesn’t have a creative agenda, we’re becoming less creative, not just in making music but in everything” (May 2006). This links in particularly well with the article ‘New Sounds, Old Technology’ (Voorvelt 2000), which states that “Musical innovation tends to precede technological innovation rather than the other way round” (Voorvelt: 2000). The article explains that it is innovative musicians who explore and abuse their old equipment and instruments who help push the boundaries in music production, often developing “new forms and styles, testing new musical ways of thinking and widening the range of expressive possibilities” (Voorvelt 2000). May himself describes artists who do not push the boundaries as “riding the coat tails of technology” (May 2006), an interesting point which fits Voorvelt’s description of typical pop production, where new technology is used in the production of new music, but only in traditional ways. Voorvelt describes the use of 1980s drum machines, stating that “the popular Roland TR-808 and the Linn drum machine, defined the drum sounds for genres such as new wave, electronic body music and acid house, but the actual sounds and drum patterns remained very similar to those developed in the 1950s and 1960s” (Voorvelt: 2000). In other words, the sounds the equipment created by default were ‘new sounds’ in popular music; the way in which the equipment was used, however, was not innovative in any way whatsoever. In keeping with Voorvelt, May makes an interesting point questioning what defines a musician: “What do you consider a musician? In other words, is it because you can program music on a computer? You have particular programs, you can edit on a particular program, does that make you a musician? Because you can actually make a good song? Or is it because you can actually play an instrument? Do you implement this into your music? Or do you just use the technology?” (May 2006). Is it the musician being led by the new technology, or the new technology being implemented by the musician? It seems that it is this question that defines the difference between innovative music production through the use of technology and music production guided by technology.
May feels very strongly about how technology should be used in music, stating: “I recommend that you don’t lean and depend on your technology 100%, it’s too easy to give up and not really use your imagination. I don’t want a computer to tell me what I can and can’t do, I don’t want to have to fight a machine to tell me that I can’t do something” (May 2006).
This perhaps explains May’s love of analogue electronic equipment, describing producing in the analogue domain as “working with your ears and your instincts” and urging new electronic musicians to “try and get as much analogue stuff as you can and implement it into your technology, you’ll find that there are advantages to doing that, it’s not a bad thing to hear a bit of history from that machine” (May 2006).
When analysing ‘Strings of Life’ it is clear that the production, by today’s standards, is extremely dated, and definitely a product of its time. It is immediately clear that the track revolves around the use of samplers, sequencers and a MIDI clock with which to structure the triggering of those samples. The entire piece focuses on the piano sequence that May’s friend Michael James had recorded for him, originally at 80 BPM. May increased the tempo of the piano recording, sliced it up into loops, and then added percussion and string samples to create ‘Strings of Life’ (Discogs 2015).
It is obvious that many of the string hits, for example, were recorded on only one note, so that when these string stabs are played back through the sampler and re-pitched up or down, the actual tonality of the string hits changes: “If a recorded sound is played back at a different speed, the timbre of the sound will be affected, since all the harmonics of the sound will be heard at correspondingly different harmonics” (Schrader 1982).
For example, when the string hits are pitched up, the sample becomes shorter and in turn the pitch increases. The envelope of the sample also becomes ‘compressed’, most noticeably shortening the transients - the attack and release phases of the sample - meaning that it eventually becomes more of a ‘hit’ than a ‘stab’ of the strings. Likewise, when pitched down, the transients become longer and more drawn out, and the “envelope of the sound will be lengthened” (Schrader 1982), so that the stabs become very slow and much less impactful. The overall result of this re-pitching is that the strings have no consistent characteristic to their sound throughout the piece, as the string sound is constantly morphed by the pitching effect of the re-sampling process (the relationship is worked through below).
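The trade-off the sampler imposes can be put into numbers: repitching by resampling multiplies playback speed by 2^(n/12) for a shift of n semitones, shrinking or stretching the whole envelope by the same factor. A small sketch, with an assumed 1.5-second stab:

```python
# The sampler re-pitching trade-off in numbers: playing a sample back n
# semitones higher multiplies its speed by 2**(n/12), shrinking its duration
# (and every envelope stage with it) by the same factor. The 1.5 s stab
# length is an assumed figure, purely for illustration.

def repitched_duration(original_seconds, semitones):
    speed = 2 ** (semitones / 12)
    return original_seconds / speed

stab = 1.5   # a 1.5-second string stab
for n in (-12, -5, 0, 5, 12):
    print(f"{n:+3d} semitones -> {repitched_duration(stab, n):.2f} s")
# an octave up halves the stab to 0.75 s; an octave down doubles it to 3.00 s
```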
In an interview with Red Bull Music, May was keen to emphasise that all the samples used in the production of ‘Strings of Life’ were ones he had collected personally, stating that:
“We weren’t using the sequencers, synthesisers or programs just as a ‘crutch’, they were
an asset. In other words, ‘Strings of Life’, the piano was real, it was performed. The
orchestra hits that you hear were recorded from various progressions of an orchestra. I
recorded these sounds to cassette, and I put these into an old ‘Mirage Sonic Sequencer’
and I played progressions on the keyboard to play the notes that you hear on the song, so
it’s actually completely performed.” (Red Bull Music 2006)
Another interesting finding drawn from this analysis is that, as with Black Box’s ‘Ride On Time’ (examined in section 3.2), the timing of the track is inconsistent.
Figure 7: BPM Analysis – Strings of Life
The track varies between 125.7 and 128.8 BPM over the course of the piece, which seems unusual for a MIDI-based track revolving around a central clock. Both of these tracks were created in the late 1980s (within two years of each other), and it is more than likely that they used much of the same or similar MIDI equipment. Perhaps MIDI clock devices were unstable or unreliable at this time; as MIDI was a relatively new connection standard, it is likely that faults in MIDI
"Strings of Life" BPM Change
Over Time
BPM
125
126.25
127.5
128.75
130
Time (SMPTE)
0:00:00	AM 0:00:48	AM 0:01:52	AM 0:03:10	AM 0:03:35	AM 0:04:54	AM
BPM
	30
equipment were still being worked through at this stage of its development. The notorious sound of the Roland TR-808 drum machine can be heard sequencing the drums, with the five percussion sounds that distinctly characterize the 808: “The hum kick, the ticky snare the tishy high hats (open and closed), and the spacey cowbell. Low, mid, and high toms, congas, a rim shot, claves, a handclap, maracas, and cymbal fill out the 808’s sonic complement” (Vail, M. 2000), all of which can be heard throughout ‘Strings of Life’.
However, the 808 itself synchronises with other equipment using the ‘DCB Bus’ connection, the predecessor to MIDI referred to in section 2.1 a) of this paper. This means that May would either have had to use a DCB-Bus-to-MIDI converter to synchronise the 808 to the central MIDI clock, or perhaps he pre-recorded the drum sequence(s) onto tape to play back and trigger from his Ensoniq Mirage sampler, bypassing the MIDI-to-DCB conversion issue he would otherwise have faced when using the TR-808.
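For reference, the drift plotted in Figure 7 can be measured by marking beat onsets in a DAW, exporting their timestamps and converting each inter-beat interval to an instantaneous BPM. A sketch with invented timestamps:

```python
# Measuring tempo drift of the kind shown in Figure 7: convert each interval
# between marked beat onsets into an instantaneous BPM. The onset times below
# are invented, purely to illustrate the calculation.

def local_bpm(beat_times):
    return [60.0 / (b - a) for a, b in zip(beat_times, beat_times[1:])]

beats = [0.000, 0.477, 0.951, 1.422, 1.890]   # hypothetical onset times in seconds
print([round(bpm, 1) for bpm in local_bpm(beats)])
# [125.8, 126.6, 127.4, 128.2] -- a creeping tempo like the one measured
```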
3.2) Black Box
The hit single “Ride On Time” by Italian house group Black Box, released in 1989, is a popular example of MIDI-based studio production in the late 80s, featuring in The Guardian’s ‘UK million-selling singles list’ with UK sales of 1.05 million since its release date, placing it 102nd among the best-selling singles in the UK (Sedghi 2012).
The song is renowned for its heavily sampled vocals from “Love Sensation” by Loleatta Holloway. These samples were uncredited when the song was released, and Black Box were sued by Loleatta Holloway and her writer/producer Dan Hartman. Because the samples had never been approved by Holloway and Hartman, Black Box had very little defence when the lawsuit regarding the intellectual and mechanical property of the vocals was issued following the international success of “Ride On Time”, and an undisclosed sum in damages was paid to both Holloway and Hartman (Independent: 2011). Although the success of the pop hit outweighed the issue of the lawsuit, this is still a good example of the dangers of unauthorised sampling.
In spite of the seriousness of the lawsuit relating to the vocals, it is the way in which the vocal samples are used that gives an insight into pop production through MIDI equipment. The samples are set to be triggered by a sequencer, synchronised in turn to the central MIDI clock, which gives the vocals an almost ‘percussive’ effect as they are ‘punched in’ to the track on each trigger. Sampling vocals in this way was a relatively new phenomenon: the vocals were triggered in the way most producers would trigger percussive samples in order to create a rhythmic sequence. Using MIDI technology to treat vocal samples as most would traditionally treat percussive samples is a perfect example of innovation through technological use, and because Black Box are classed as a ‘pop dance’ trio, this somewhat contradicts May’s view that such innovation does not occur in technology-led pop production. That being said, the rest of the track was very traditional in its use of MIDI equipment, not pushing the boundaries of what that equipment could offer.
Having analysed the structure of the track, it is clear that ‘Ride On Time’ was created using the MIDI studio. All events occur exactly in time with one another; the track is completely quantised throughout, with all musical events sitting exactly on the subdivisions of the 4/4 time grid. Interestingly, however, the timing/clock used to create the track seems to change throughout: as the track progresses it actually increases in tempo, starting at 118.4 BPM and finishing at 119.2 BPM (Figure 7 shows the tempo transition throughout the course of the track).
Figure 7: BPM Analysis – Ride On Time
"Ride On Time" BPM Change Over
Time
BPM
118
118.5
119
119.5
120
Time (SMPTE)
00:00:00 00:01:51 00:02:15 00:03:06 00:03:59 00:05:33
BPM
	32
The reason for this distinct increase in tempo is unclear. What can be determined is that all elements of the track remain in time relative to each other (no element drifts faster or out of time with another), suggesting that if the issue lay with the central clock, it affected the overall timing rather than the individual MIDI signals to the various synthesisers and MIDI hardware. The change was unexpected: “Ride On Time” is a dance track and would therefore be expected to hold a near-constant tempo so that a DJ can cue, mix and beat-match it against the outgoing record and then against the next as it reaches the end of its playback. A tempo drift of this size would be significant enough to inconvenience DJs who assume the tempo of the track remains consistent throughout.
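The drift itself is simple to quantify: a local tempo estimate can be taken from successive beat onsets. In the sketch below (Python; the onset times are invented to mimic the measured trend, not taken from the record), a slightly shrinking beat interval produces exactly the creep from 118.4 to 119.2BPM plotted in Figure 7:

```python
def local_bpm(beat_times):
    """Return a BPM estimate for each pair of consecutive beat onsets."""
    return [60.0 / (b - a) for a, b in zip(beat_times, beat_times[1:])]

# Illustrative onsets (seconds): each inter-beat gap is slightly shorter
onsets = [0.0000, 0.5068, 1.0127, 1.5169, 2.0203]
print([round(bpm, 1) for bpm in local_bpm(onsets)])
# -> [118.4, 118.6, 119.0, 119.2]
```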
!
3.3) Burial & How We Interpret Rhythm
“The borderline between composition and sound synthesis is becoming increasingly
blurred as sound synthesis becomes more sophisticated and as composers begin to
experiment with compositional structures that are less related to traditional musical
syntax” (Encyclopædia Britannica 02 December 2013).
This statement holds true of Burial’s production, where little relates to a typical musical syntax. The idea of song structure appears to have been left far behind in Burial’s ‘Broken Home’ and ‘Homeless’, where the listener is immersed in a complex, disorienting sonic environment and surrounded by rich, evolving textures in almost musique concrète style productions.
3.3a) Broken Home
‘Broken Home’ by Burial features a number of signature DAW-only manipulation techniques, including the time stretching of the guitar sample heard at the beginning of the track. Here the sample has been stretched so far from its original recorded tempo that audio ‘jitter’ can be heard. Pitch-shifted vocal samples are likewise used throughout the track. The track is extremely unstructured and does not follow any distinguishable time signature. A 4-bar loop can, however, be derived from the repetitive aspects of the song, based solely on when the melodic elements loop rather than on the drum beat, which even within this 4-bar loop still does not meet the gridlines or match any commonly used time signature. The melodic loop suggests that the tempo of ‘Broken Home’ is 140BPM, but there would be no way for a listener to conclude this from the song alone. The structure of the track (assuming the 140BPM tempo) appears to be as follows:
• 8 bar intro
• 4 bar verse
• 48 bar chorus
• 8 bar breakdown
• 8 bar verse
• 48 bar chorus
• 8 bar breakdown
• 32 bar chorus
• 8 bar outro.
!
Figure 8: Broken Home Structure
!
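To make the 140BPM assumption concrete, the section lengths listed above can be converted into approximate timestamps. This is a rough sketch (Python), assuming 4/4 throughout at the inferred tempo; at 140BPM one bar lasts roughly 1.71 seconds, so the outline accounts for about 295 seconds of music:

```python
BPM, BEATS_PER_BAR = 140, 4
BAR_SECONDS = BEATS_PER_BAR * 60.0 / BPM        # ~1.714 s per 4/4 bar

sections = [("intro", 8), ("verse", 4), ("chorus", 48), ("breakdown", 8),
            ("verse", 8), ("chorus", 48), ("breakdown", 8),
            ("chorus", 32), ("outro", 8)]

elapsed = 0.0
for name, bars in sections:
    minutes, seconds = divmod(elapsed, 60)
    print(f"{int(minutes)}:{seconds:04.1f}  {name} ({bars} bars)")
    elapsed += bars * BAR_SECONDS
print(f"total = {elapsed:.0f} s")               # 172 bars, roughly 295 s
```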
However, the transitions between the elements of the song are almost imperceptible, and the song itself remains dissonant and disjointed, leaving the listener with little sense of tempo. The significance of this is explained in an extract from ‘Rhythm, Music and the Brain’ by Michael H. Thaut:
“Rhythm organises time. In music, as a time-based acoustical language, rhythm assumes a
central syntactical role in organising musical events into coherent and comprehensible
patterns and forms. Thus the structure of rhythm communicates a great deal of the
actual, comprehensive “musical meaning” of a musical composition”. (Thaut, 2005).
!
This may suggest that the lack of an obvious rhythmic structure in ‘Broken Home’ leaves the listener struggling to interpret the ‘comprehensible patterns and forms’ within the music, such as melodic patterns and musical phrases. Does this in turn mean that it is difficult for the listener to interpret a ‘musical meaning’ in the track? That may well have been the exact purpose of creating such a rhythmically disjointed piece, which raises the question: what does the listener derive from ‘Broken Home’ in the absence of a rhythmic structure?
!
The lack of structure in Burial’s music poses questions about both the ‘syntactic and semantic meanings’ found in almost all types of Western music, which are examined in depth in Stefan Koelsch’s 2005 paper “Neural substrates of processing syntax and semantics in music”.
Koelsch explains that all music is guided by certain regularities, which constrain and organise how simultaneous tones (i.e. intervals and chords), individual tones, and durations of tones are arranged to create what can be interpreted as ‘meaningful musical phrases’ (Koelsch 2005). Koelsch emphasises that music inherently relies on some form of regularity in order to convey meaning to the listener. This conforms to Thaut’s earlier point that “Rhythm organizes time” and that “rhythm assumes a central syntactical role in organising musical events into coherent and comprehensible patterns and forms” (Thaut, 2005), suggesting that music fundamentally relies on regularities in patterns and phrases to convey the meaning of, and invite appreciation for, the musical ideas and works of artists and musicians.
Koelsch’s findings further support this idea, showing that even in listeners without any musical training in ‘tonic’ or ‘dominant’ chord structures, “Music-syntactically irregular chords elicit an early right anterior negativity (ERAN)” in the brain (Koelsch 2005). This suggests that the brain prefers predictability in music, both in chord structure and in rhythm, two characteristics to which the music of Burial does not conform.
In terms of ‘meaning’ within music, Koelsch explains that music transfers and communicates ‘meaningful information.’ However, for music to become meaningful, “the emergence of meaning based on the processing of musical structure requires integration of both expected and unexpected events into a larger, meaningful musical context” (Koelsch 2005). Therefore, regardless of whether the piece as a whole makes ‘musical sense,’ if a musical phrase works within that piece and its structure, listeners are able to derive meaning from it.
!
3.3b) Homeless
Burial’s track ‘Homeless’ appears somewhat more structured than ‘Broken Home’, with a distinct 4/4 shuffle pattern in the drums and a fairly steady tempo of 134.8BPM. Many of the drum hits nevertheless sit well off the gridlines at this tempo, suggesting little, if any, quantisation has been used. Compressed noise/vinyl crackle samples run throughout the track, a “drizzly crackle that has become one of his sonic signatures” (Fisher, M. 2007).
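The off-grid claim can be tested mechanically by snapping each hit to the nearest gridline and measuring the residue. A minimal sketch (Python; the hit times are invented for illustration, not measured from the record) at the observed 134.8BPM:

```python
BPM = 134.8
GRID = 60.0 / BPM / 4                 # 16th-note grid spacing, ~0.111 s

def off_grid_ms(hit_times):
    """Deviation of each hit from its nearest gridline, in milliseconds."""
    return [1000 * (t - round(t / GRID) * GRID) for t in hit_times]

hits = [0.000, 0.128, 0.231, 0.352, 0.436]       # seconds (illustrative)
print([round(d, 1) for d in off_grid_ms(hits)])
# consistent non-zero deviations suggest little or no quantisation
```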
Much like in ‘Broken Home’, the vocals in ‘Homeless’ are pitch shifted, but here they are also overdriven. In the breakdown section the vocals are time-stretched and processed to the point at which clipping distortion occurs (the track itself does not clip; only the vocal processing suggests it). In a 2007 interview with ‘The Wire’, Burial claims to “remove voices from biography and narrative” and to “pitch down female vocals so they sound male, and pitching up male vocals so they sound like a girl singing” (Fisher, M. 2007), which explains the strange tonality of the pitched vocals.
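For context, both manipulations can be sketched crudely. The toy functions below (Python with numpy; a naive overlap-add stretcher and a resampling pitch shift, not Sound Forge’s actual algorithms) show the basic mechanics. At extreme stretch ratios the first produces exactly the kind of audible artefacts described in ‘Broken Home’, and the second changes pitch and duration together, much like varispeed playback of a record:

```python
import numpy as np

def ola_stretch(x, ratio, grain=2048, hop=512):
    """Stretch mono signal x by `ratio` (>1 = longer) via toy overlap-add."""
    out = np.zeros(int(len(x) * ratio) + grain)
    window = np.hanning(grain)
    pos = 0.0                              # read position in the input
    for out_start in range(0, len(out) - grain, hop):
        g = x[int(pos):int(pos) + grain]
        if len(g) < grain:
            break
        out[out_start:out_start + grain] += g * window
        pos += hop / ratio                 # read input more slowly than we write
    return out

def pitch_shift_semitones(x, semitones):
    """Naive resampling shift: alters pitch and duration together."""
    factor = 2 ** (semitones / 12.0)
    idx = np.arange(0, len(x) - 1, factor)
    return np.interp(idx, np.arange(len(x)), x)
```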
Burial’s use of vocal manipulation and morphology, and the way in which the vocals are processed, is extremely unusual: he appears to use it to totally reinvent the sound of the individuals he samples. A voice can convey a number of traits to the listener, including the “age, sex and health-image of the utterer, the personality (or the pretended personality of the actor), the intent (friendly, malicious) and state of mind (angry, frightened)” (Wishart 2012), as well as the attitude or meaning a speaker aims to convey through the manner in which they sing or speak (Wishart 2012). The manner in which Burial approaches vocal production and manipulation, however, seems to defy and mystify the apparent character of the vocals. The listener can no longer distinguish any character traits from the original formant, pitch, or manner in which the vocals were spoken or sung. This isolates the words and phrases themselves, as the personality of the vocalist or speaker has long since been lost to the processing. The isolated vocal nevertheless appears to retain its meaning for the listener, if not to enhance or alter the apparent attitude of the actor or singer behind the original phrase. Burial’s extraction of “voices from biography and narrative” (Fisher, M. 2007) suggests that he takes poetic and narrative passages from spoken media and gives the phrases a new lease of life through his manipulative production processes, so that the vocals take on a musical tonality rather than that of the spoken voice.
!
Following on from the unconventional processing heard in the vocals, the track can also be heard to be cut at a transient point rather than at a zero-crossing point, most noticeably at the breakdown-to-chorus transition, where the first attack of the chorus has been edited so that all sounds start from a single point and the audio loses its attack.
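By contrast, the sketch below (Python with numpy) shows the zero-crossing search a conventional editor performs before cutting, the step these edits audibly skip; cutting at the returned index avoids the click of a waveform discontinuity while leaving the attack that follows intact:

```python
import numpy as np

def nearest_zero_crossing(x, cut_sample):
    """Index of the sign change in mono signal x closest to cut_sample."""
    signs = np.signbit(x)
    crossings = np.where(signs[:-1] != signs[1:])[0]
    if len(crossings) == 0:
        return cut_sample                  # no crossing found: cut as asked
    return int(crossings[np.argmin(np.abs(crossings - cut_sample))])
```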
In terms of structure, “Homeless” is much more clearly organised than “Broken Home”. The structure is as follows:
• 3 bar intro
• 8 bar verse
• 28 bar chorus + extended end
• 32 bar chorus + extended end
• 20 bar chorus
• 24 bar breakdown
• 24 bar chorus
• 8 bar transition section
• 34 bar outro
!
!
Figure 9: Homeless Structure
!
Following on from the analysis of the two Burial tracks, the element that stood out most was the lack of rhythmic structure, particularly in “Broken Home”, for which I was unable to determine a definitive tempo other than by measuring the repetition of the musical phrases. This prompted research into how Burial produces his music. Across many articles and interviews with the elusive ‘Burial’, it becomes clear that his working method is extremely unconventional: he claims to use ‘Sound Forge’ as his chosen DAW for production. Sound Forge, however, is an audio editing package predominantly used for finalising post-production work, with little to no MIDI integration or sequencing. Audio files are recorded or imported and arranged without a ‘tempo grid’.
“In essence, Sound Forge has always provided an efficient and well-featured environment
within which to perform detailed editing of mono and stereo audio files. Basic editing
tasks such as trimming, adding fades, normalizing and resampling can all be performed
accurately and with ease, and file output formats cover all the usual standards, including
MP3 encoding.” (Walden, J. 2007).
It is clear that Sound Forge is editing software, not production software, by design. The absence of a tempo grid, combined with audio-only import options (little MIDI integration), quickly explains why almost all tracks produced by Burial lack a regular rhythm and do not line up well when placed over a tempo grid in any other DAW. It seems feasible that Burial uses Sound Forge to arrange tracks, although using it as a sequencer would be extremely time-consuming and complex in terms of processing and resampling audio.
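A gridless workflow of this kind reduces to placing regions at arbitrary sample offsets chosen by ear. A loose sketch of the principle (Python with numpy; the sample rate, offsets, and stand-in noise signals are all invented for illustration):

```python
import numpy as np

SR = 44100                                     # assumed sample rate

def place(mix, region, offset):
    """Mix `region` into `mix` at an arbitrary sample offset, no grid."""
    region = region[: len(mix) - offset]       # clip anything past the end
    mix[offset: offset + len(region)] += region
    return mix

mix = np.zeros(SR * 10)                        # ten seconds of silence
crackle = np.random.randn(SR * 10) * 0.01      # stand-in vinyl-noise bed
vocal = np.random.randn(int(SR * 1.3)) * 0.1   # stand-in vocal phrase

mix = place(mix, crackle, 0)
mix = place(mix, vocal, int(SR * 2.37))        # placed by ear, off any grid
mix = place(mix, vocal, int(SR * 6.81))
```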
Could it be argued that, by producing in this way, through the resampling of recorded material in an unsequenced and disjointed manner, Burial is the modern pioneer of ‘Musique Concrète’, creating his own “vivid audio portrait of a wounded South London, a semi-abstract sound painting of a city’s disappointment and anguish” (Fisher, M. 2007)? Much of Burial’s music focuses on the sonic aspects of sound, treating them as a painting of a scene or situation rather than a song or musical work as such; his sound is a collective, mourning nostalgia for a past he feels has been lost in modern London.
This idea of a sonic painting, rather than a musical work, relates closely to the work of Schaeffer, who developed a theory of composition centred on the technology available, stating that any sound could be extracted from its environment and altered through manipulation techniques. Any sound, regardless of its source, was therefore available for use in a musical context. According to Schrader, Schaeffer manipulated a number of his sound sources using ‘locked groove discs’, which he essentially used to create loops within music “where the effect would be to alter aspects of the recorded event itself to create complex rhythmic patterns, giving it a new lease of life in a sonic and musical context” (Schrader 1982). He was also fond of manipulating the playback speed of records, which changes both the pitch and the envelopes of the recorded sound.
Many of these techniques relate very closely to the work of Burial, whose unusual samples, including vinyl crackle and recorded ambience, are manipulated and looped to create complex sonic textures, similar to the looping ideas and concepts in Schaeffer’s work. Pitch manipulation is another aspect the two appear to have in common: where Schaeffer would pitch recordings of vinyl records up and down at will, Burial uses DAW pitch-shifting and time-stretching techniques within Sound Forge to manipulate vocal recordings of spoken word and literature.
This ‘sonic concept’ runs through all aspects of his work, down to song titles such as ‘Night Bus’, ‘Distant Lights’ and ‘In McDonalds’, suggesting that the music is created to represent a particular environment, or a feeling tied to that environment.
!
Figure 10: Sound Forge 9 editing software (Walden, 2007)
!
Overall, Burial’s production style and unconventional construction of his music are an ideal demonstration of the power of the DAW. Almost none of the techniques involved in the production of his music would be possible without the editing capabilities of his chosen software. It would be impossible to recreate the rich and diverse soundscapes heard in these two tracks using the MIDI studio alone; effects such as time stretching and pitch shifting are not offered by MIDI equipment in the way that would be necessary to recreate the sounds heard throughout ‘Broken Home’ and ‘Homeless’. In that respect, these songs are a brilliant example of the vast capabilities the DAW studio offers producers.
!
4. Conclusion
4.1) What does MIDI & DAW Equipment Offer the User?
Following my practical investigation, it appears that there are clear differences between MIDI and DAW production, stemming from the equipment itself, its connectivity, its usability, and the scope of production it offers.
This study has demonstrated that the choice between hardware and software production mainly comes down to personal preference: what the individual prefers to use, and what they feel most creative using. Many producers who grew up using hardware synthesisers and MIDI equipment to create musical works continue to favour the equipment they first learned on rather than modern DAW-based systems. This was outlined by Derrick May in his views on incorporating analogue hardware into otherwise digital, software-based systems. He argued that DAWs and software-based systems are too easy to ‘lean on’ and, as a result, do not challenge producers to use their imagination, stating simply: “I don’t want a computer to tell me what I can and can’t do, I don’t want to have to fight a machine to tell me that I can’t do something.” (May 2007).
In contrast to May, the early computer music producer Jean-Claude Risset describes his embrace of new computer technologies in relation to his compositional aims, believing that as computer technology advances, the constraints on sound creation and manipulation diminish, and the scope for creativity grows in parallel with technological advancement (Risset: 1982).
It appears that the DAW offers the tools to break free of the ‘strict rhythm boundaries’ (as demonstrated by Burial’s use of ‘Sound Forge’) that constrain MIDI, whose equipment requires a central clock, tempo, and time signature in order to organise, create and sequence a track. As a result, MIDI-based tracks often follow a distinct and clear musical structure, as demonstrated by the analysis of “Ride On Time”, which is simple for the listener to interpret and predict in terms of structure and progression.
!
Despite this finding, most DAWs are clearly designed to encourage users to work within a time- and tempo-based organisation of track structures, most featuring a GUI (Graphical User Interface) built around a tempo grid on which audio, instrument, and MIDI tracks are placed, arranged and edited (see Figure 4). With certain DAW-only features (time stretching, elastic audio, pitch shifting, etc.), however, the user can create more within these guidelines than with MIDI equipment, where the user must source all of the MIDI modules in addition to the recording equipment required to capture their performances (a time-consuming and costly process). It is also the case that the physical connectivity of the modules may not be completely reliable, as demonstrated by the BPM inconsistencies in both “Strings of Life” and “Ride on Time”: despite encouraging the user to work within a strict time constraint, the synchronised equipment itself appears to have struggled with the demands of the MIDI producer and the amount of interconnected equipment being sequenced at one time.
!
These results suggest that in terms of reliability and scope for creativity, the DAW is the medium of production to be favoured. That said, it is also important to consider May’s idea of ‘knowing your equipment and how it behaves’, which is easier said than done given the vast capabilities of the DAW; in this respect it is easier and faster for an individual to learn a select set of MIDI modules than an entire DAW software package.
!
4.2) Users Versus Innovators
An interesting and unexpected finding of the research and analysis in this paper was the clear divide between ‘users of music technology’ and ‘innovators through the use of music technology.’
Throughout this study it has become clear that innovators, applying the technology already available to them in new and creative ways, are the pioneers of what new technology will come to offer. The techniques they develop through experimentation create exciting new developments in the music technology industry, which in turn filter through to mainstream pop production, where producers tend to apply well-established techniques to new technology rather than directly driving technical innovation. All of the artists examined in this investigation - Derrick May, Black Box, and Burial - show elements of technical innovation through the use of the technology available to them: May with his use of ‘out-dated’ MIDI and pre-MIDI hardware to manipulate and process his collected piano and string samples into a complex, pioneering track that inspired the development of the Detroit techno scene; Black Box with their inventive and original use of triggered vocal samples through MIDI-based samplers, treating the vocals as an almost percussive element rather than a melodic one; and Burial with his totally unconventional use of Sound Forge’s editing capabilities and plugins to create his unusual, eerie sonic soundscapes, demonstrating a modern approach to Musique Concrète through the DAW and computer technologies.
!
This leads on to psychology and musical meaning, an aspect that the production of Burial’s music in particular appears to challenge. Burial has questioned much of what is considered ‘essential’ in music production: his extensive use of dissonant sonic ambience, an uninterpretable rhythmic structure, and pitch-shifted spoken word whose spliced phrases make little literal sense, yet which he treats as musical material, a significant breakthrough. Despite breaking almost every rule of traditional popular music production through his unorthodox approach to track creation, his work remains extremely popular, renowned, and admired. This suggests that musical popularity is often based not on conventional musical meaning but on the creativity of an innovative individual in realising a vision, regardless of whether that vision conforms to traditional and commonly used production practices or equipment use.
!
To conclude, the research in this paper has highlighted the significant and vast change from the MIDI equipment and MIDI studio used to create electronic music in the late 80s and early 90s. Technological advancement in data transmission and bandwidth has reached the point where modern connections are up to 3,200,000 times faster than MIDI: Apple’s ‘Thunderbolt’ can stream up to 10GB/s, a phenomenal increase over MIDI’s 3.125KB/s.
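The figure follows directly from the transmission rates as the paper quotes them, taking both in bytes per second:

```python
midi_bps = 3_125                     # MIDI: 3.125KB/s
thunderbolt_bps = 10_000_000_000     # Thunderbolt: 10GB/s, as quoted
print(thunderbolt_bps // midi_bps)   # -> 3200000, i.e. 3,200,000 times faster
```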
!
The differences do not just lie in the speed at which equipment can now be connected. The DAW offers the ease of use of an ‘all in the box’ system, with a number of previously unavailable features, including elastic audio, drum replacement, and auto-tune/pitch shifting, all of which have contributed to the development of most modern production styles and techniques. More recently, the DAW also offers simple portability of music sessions, either on a laptop that lets the musician access their DAW on the go, or as a session file saved to an external hard drive or USB stick that can be opened on any computer running the same DAW. The studio itself has not fundamentally changed: the synthesisers, drum machines, samplers and sequencers that existed as MIDI hardware during the 1980s and 90s are still there, but they now exist in the virtual form of DAW plugins.
!
What this paper has demonstrated, regardless of MIDI or DAW use, is that the electronic musicians who innovate and push the boundaries of production technique do so with technology they know well, and often that equipment is not the latest or most technologically advanced available, as both Derrick May and Burial have shown. What creates advancements in music technology and its use are the innovators: those who know every aspect of the equipment they own, and who apply that knowledge to compose creatively and inspire new techniques and uses which might otherwise have been overlooked. What most musicians would describe as limitations of their equipment is often what innovators seek out and use to fuel the creativity that defines their musical style, much like Burial and his use of Sound Forge. The MIDI and DAW studios differ vastly in what they offer, but it is only users and their knowledge that limit the creative use of the equipment available to them.
5. Bibliography
!
Apple (N.D.) ‘Thunderbolt, The most advanced I/O ever.’ Available at: https://www.apple.com/thunderbolt/ (Accessed 19 April 2015)
!
Blackdown (21 March 2006) ‘Burial’ Available at: http://blackdownsoundboy.blogspot.co.uk/2006/03/soundboy-burial.html (Accessed 11 February 2015)
!
Clash Music (16 February 2012) ‘Untrue: Burial’ Available at: http://www.clashmusic.com/feature/untrue-burial (Accessed 11 February 2015)
!
Collins, N. and d’Escrivan, J. (2010) The Cambridge Companion to Electronic Music,
Cambridge: Cambridge University Press
!
Detroit Techno Militia (N.D.) ‘Derrick May – The Secret Of Techno’ Available at: http://www.detroittechnomilitia.com/main/index.php/techno-history/interviews/180-derrick-may-the-secret-of-techno (Accessed 7 February 2015)
!
Discogs (N.D.) ‘Rhythim Is Rhythim – Strings Of Life’ Available at: http://www.discogs.com/Rhythim-Is-Rhythim-Strings-Of-Life/master/695 (Accessed 17 March 2015)
!
Encyclopædia Britannica (02 December 2013) ‘Electronic Music’ Available at: http://www.britannica.com/EBchecked/topic/183823/electronic-music/27524/Establishment-of-electronic-studios (Accessed 23 March 2015)
!
FACT Magazine (1 July 2012) ‘Burial: “It’s quite a simple thing I want to do”’ Available at: http://www.factmag.com/2012/07/01/interview-burial/ (Accessed 11 February 2015)
!
Freaky Trigger (4 October 2010) ‘Black Box – “Ride On Time”’ Available at: http://freakytrigger.co.uk/popular/2010/10/black-box-ride-on-time/ (Accessed 17 April 2015)
!
Sedghi, A. (4 November 2012) ‘UK’s million-selling singles: the full list’ The Guardian. Available at: http://www.theguardian.com/news/datablog/2012/nov/04/uk-million-selling-singles-full-list (Accessed 17 April 2015)
!
The Guardian (26 October 2007) ‘“Only five people know I make tunes”: Is Burial the most elusive man in music?’ Available at: http://www.theguardian.com/music/2007/oct/26/urban (Accessed 30 October 2014)
!
Huber, D. (2007) The MIDI Manual, A Practical Guide to MIDI in the Project Studio, Oxford:
Linacre House
!
The Independent (25 March 2011) ‘Loleatta Holloway: Much-sampled disco diva who sued Black Box over their worldwide hit “Ride on Time”’ Available at: http://www.independent.co.uk/news/obituaries/loleatta-holloway-muchsampled-disco-diva-who-sued-black-box-over-their-worldwide-hit-lsquoride-on-timersquo-2252360.html (Accessed 15 January 2015)
!
Koelsch, S. (2005) Neural substrates of processing syntax and semantics in music, Available online
at: http://www.sciencedirect.com/science/article/pii/S0959438805000371 (Accessed 18
April 2015)
!
Langford, S. (2014) Digital Audio Editing Correcting and Enhancing Audio with DAWs,
Abingdon: Focal Press
!
Leider, C. (2004) Digital Audio Workstation, New York: McGraw-Hill Professional
!
Moylan, W. (2002) The Art of Recording, Woburn: Focal Press
!
Nokes, S. and Kelly, D. (2003) The Definitive Guide To Project Management, Harlow: Pearson Education Ltd
!
Roads, C. (1996) The Computer Music Tutorial, Massachusetts: The MIT Press
!
Rumsey, F. (2007) Desktop Audio Technology, Oxford: Focal Press
!
Red Bull Music Academy (2006) ‘Lecture: Derrick May (Melbourne 2006)’ Available at:
http://www.redbullmusicacademy.com/lectures/derrick-may--it-is-what-it-isnt (accessed 28
March 2015)
!
Russ, M. (2011) Sound Synthesis and Sampling (Third Edition), Oxford: Focal Press
!
Schrader, B. (1982) Introduction to Electro-Acoustic Music, London: Prentice-Hall
!
Sound on Sound (September 1996) ‘Liam Howlett: The Prodigy & Firestarter’ Available at: http://www.soundonsound.com/sos/1996_articles/sep96/prodigy.html (Accessed 30 October 2014)
!
Sound on Sound (October 2004) ‘Liam Howlett: Recording The Prodigy’s Always Outnumbered, Never Outgunned’ Available at: http://www.soundonsound.com/sos/Oct04/articles/prodigy.htm (Accessed 30 October 2014)
!
Thaut, M, H. (2005) Rhythm, Music and the Brain, Abingdon: Routledge
!
Vail, M. (2000) Vintage Synthesizers, San Francisco: Miller Freeman Books
!
Voorvelt, M. (2000) New Sounds, old Technology, Organised Sound
!
Viñao, A. (2010) ‘Artists’ Statements II’. In The Cambridge Companion to Electronic Music. ed. by Collins, N. and d’Escrivan, J. Cambridge: Cambridge University Press
!
Walden, J. (2007) Sony Sound Forge 9 [online image]. Available at: http://www.soundonsound.com/sos/jun07/articles/soundforge9.htm
!
Walden, J. (2007) ‘Sony Sound Forge 9’ Sound on Sound. Available at: http://www.soundonsound.com/sos/jun07/articles/soundforge9.htm (Accessed 25 March 2015)
!
White, Paul. (2000) The Sound On Sound Book of Desktop Digital Studio, London: Sanctuary
Publishing Limited
!
White, Paul. (2000) Basic MIDI, SMT, London: Bobcat Books Limited
!
White, Paul. (2003) MIDI For The Technophobe, London: SMT
!
The Wire (December 2007). ‘Burial’ The Wire 2007 (286) 28-31. London: The Wire
Magazine Limited.
!
The Wire (December 2012) ‘ Burial: Unedited Transcript’ Available at:
http://www.thewire.co.uk/in-writing/interviews/burial_unedited-transcript (accessed 11
February 2015)
!
Wired (2004) ‘Six Machines That Changed The Music World’ Available at: http://archive.wired.com/wired/archive/10.05/blackbox_pr.html (Accessed 21 February 2015)
!
Wishart, T. (2012) Sound Composition, York: Orpheus the Pantomime
!
More Related Content

Viewers also liked

Tecnologia de la informacion[1]
Tecnologia de la informacion[1]Tecnologia de la informacion[1]
Tecnologia de la informacion[1]Yuya Sanz
 
Presentation
PresentationPresentation
PresentationSteph_21
 
Equipo11 u3 a3_softwaredepresentacionesproblemas_tein050712
Equipo11 u3 a3_softwaredepresentacionesproblemas_tein050712Equipo11 u3 a3_softwaredepresentacionesproblemas_tein050712
Equipo11 u3 a3_softwaredepresentacionesproblemas_tein050712antonio perez
 

Viewers also liked (8)

Tecnologia de la informacion[1]
Tecnologia de la informacion[1]Tecnologia de la informacion[1]
Tecnologia de la informacion[1]
 
ECA Corporate Brochure
ECA Corporate BrochureECA Corporate Brochure
ECA Corporate Brochure
 
Guia01com218 2012
Guia01com218 2012Guia01com218 2012
Guia01com218 2012
 
Bitcoin
BitcoinBitcoin
Bitcoin
 
Linea del tiempoo
Linea del tiempooLinea del tiempoo
Linea del tiempoo
 
Presentation
PresentationPresentation
Presentation
 
Equipo11 u3 a3_softwaredepresentacionesproblemas_tein050712
Equipo11 u3 a3_softwaredepresentacionesproblemas_tein050712Equipo11 u3 a3_softwaredepresentacionesproblemas_tein050712
Equipo11 u3 a3_softwaredepresentacionesproblemas_tein050712
 
التعدين
التعدينالتعدين
التعدين
 

Similar to Final Doccument (Finished Copy)

Multimedia definitions
Multimedia definitionsMultimedia definitions
Multimedia definitionsGerhard Lock
 
The Evolution of Music Technology.pdf
The Evolution of Music Technology.pdfThe Evolution of Music Technology.pdf
The Evolution of Music Technology.pdfPeterYarrow4
 
MA: Presentation Seminar
MA: Presentation SeminarMA: Presentation Seminar
MA: Presentation Seminarelmidodd
 
Presentation Seminar
Presentation SeminarPresentation Seminar
Presentation Seminarelmidodd
 
Rloynd ig2 t1 ws
Rloynd ig2 t1 wsRloynd ig2 t1 ws
Rloynd ig2 t1 wsrosstapher
 
Menggabungkan audio ke dalam sajian multimedia 4.english
Menggabungkan audio ke dalam sajian multimedia 4.englishMenggabungkan audio ke dalam sajian multimedia 4.english
Menggabungkan audio ke dalam sajian multimedia 4.englishEko Supriyadi
 
Future Of The Music Industry
Future Of The Music IndustryFuture Of The Music Industry
Future Of The Music IndustrySheridanC
 
Digital audio formats
Digital audio formatsDigital audio formats
Digital audio formatsamels_john
 
At the ready, sheet music minus the sheets -- The New York Times
At the ready, sheet music minus the sheets -- The New York TimesAt the ready, sheet music minus the sheets -- The New York Times
At the ready, sheet music minus the sheets -- The New York TimesAdam Baer
 
Towards User-friendly Audio Creation
Towards User-friendly Audio CreationTowards User-friendly Audio Creation
Towards User-friendly Audio CreationJean Vanderdonckt
 
Ch8 Section A: Audio Basics
Ch8 Section A: Audio BasicsCh8 Section A: Audio Basics
Ch8 Section A: Audio BasicsAna Isabel Ramos
 
Cultural factors effecting ict
Cultural factors effecting ictCultural factors effecting ict
Cultural factors effecting ictAdam Heatherington
 
Richard_Final_Poster
Richard_Final_PosterRichard_Final_Poster
Richard_Final_PosterRichard Jung
 
Internet Of Cultures
Internet Of CulturesInternet Of Cultures
Internet Of CulturesArtur Serra
 
Electronic Music and Software Craftsmanship: analogue patterns.
Electronic Music and Software Craftsmanship: analogue patterns.Electronic Music and Software Craftsmanship: analogue patterns.
Electronic Music and Software Craftsmanship: analogue patterns.Guillaume Saint Etienne
 
MIDI for A2 music tech students
MIDI for A2 music tech studentsMIDI for A2 music tech students
MIDI for A2 music tech studentsmusic_hayes
 

Similar to Final Doccument (Finished Copy) (20)

Edm
EdmEdm
Edm
 
Multimedia definitions
Multimedia definitionsMultimedia definitions
Multimedia definitions
 
The Evolution of Music Technology.pdf
The Evolution of Music Technology.pdfThe Evolution of Music Technology.pdf
The Evolution of Music Technology.pdf
 
Midinote Presentation
Midinote PresentationMidinote Presentation
Midinote Presentation
 
MA: Presentation Seminar
MA: Presentation SeminarMA: Presentation Seminar
MA: Presentation Seminar
 
Presentation Seminar
Presentation SeminarPresentation Seminar
Presentation Seminar
 
Rloynd ig2 t1 ws
Rloynd ig2 t1 wsRloynd ig2 t1 ws
Rloynd ig2 t1 ws
 
Menggabungkan audio ke dalam sajian multimedia 4.english
Menggabungkan audio ke dalam sajian multimedia 4.englishMenggabungkan audio ke dalam sajian multimedia 4.english
Menggabungkan audio ke dalam sajian multimedia 4.english
 
Future Of The Music Industry
Future Of The Music IndustryFuture Of The Music Industry
Future Of The Music Industry
 
Digital audio formats
Digital audio formatsDigital audio formats
Digital audio formats
 
Open sourcepres eva2013
Open sourcepres eva2013Open sourcepres eva2013
Open sourcepres eva2013
 
At the ready, sheet music minus the sheets -- The New York Times
At the ready, sheet music minus the sheets -- The New York TimesAt the ready, sheet music minus the sheets -- The New York Times
At the ready, sheet music minus the sheets -- The New York Times
 
Towards User-friendly Audio Creation
Towards User-friendly Audio CreationTowards User-friendly Audio Creation
Towards User-friendly Audio Creation
 
Ch8 Section A: Audio Basics
Ch8 Section A: Audio BasicsCh8 Section A: Audio Basics
Ch8 Section A: Audio Basics
 
Cultural factors effecting ict
Cultural factors effecting ictCultural factors effecting ict
Cultural factors effecting ict
 
Richard_Final_Poster
Richard_Final_PosterRichard_Final_Poster
Richard_Final_Poster
 
Internet Of Cultures
Internet Of CulturesInternet Of Cultures
Internet Of Cultures
 
Electronic Music and Software Craftsmanship: analogue patterns.
Electronic Music and Software Craftsmanship: analogue patterns.Electronic Music and Software Craftsmanship: analogue patterns.
Electronic Music and Software Craftsmanship: analogue patterns.
 
MIDI for A2 music tech students
MIDI for A2 music tech studentsMIDI for A2 music tech students
MIDI for A2 music tech students
 
Audio Mixing Console
Audio Mixing ConsoleAudio Mixing Console
Audio Mixing Console
 

Final Doccument (Finished Copy)

  • 1. ! School of Performing Arts ! ! A Comparison Between Electronic Music Production in the MIDI Based Studios of the late 1980’s/early 1990’s, and the Modern Day DAW Studio. ! ! ! Edmund Hull ! May 2015 ! ! ! Music Technology Dissertation ! 1
  • 3. Table of Contents I. ACKNOWLEDGEMENTS 2 TABLE OF CONTENTS 3 II. LIST OF FIGURES & TABLES 4 1. INTRODUCTION 5 2. RESEARCH 7 2.1 The MIDI Studio 7 2.1 a) Introduction to MIDI 7 2.1 b) How MIDI Messages Communicate 8 2.2 c) The Practical Uses of MIDI 10 2.1 d) MIDI Sequencers 11 2.1 e) MIDI Synchronization 12 2.1 f) MIDI Bandwidth 13 2.1 g) The Contrast between MIDI and Digital Audio 13 2.2 The DAW Studio 15 2.2 a) Introduction to DAWs 15 2.2 b) DAW Integration 16 2.2 c) What DeSines a Typical DAW? 17 2.2 d) What New Audio Manipulation Tools are Offered by the DAW? 18 2.2 e) MIDI Implementation 20 2.3 Technology & Electronic Music 22 2.3 a) The Divide Between Computer Music & Synthesis 22 2.3 b) Technological Innovation Through Electronic Music 23 2.3 c) Psychology & Sound Perception 24 3. PRACTICAL 27 3.1) Rhythim is Rhythim (Derrick May) 27 3.2) Black Box 31 3.3) Burial & How We Interpret Rhythm 33 3.3a) Broken Home 33 3.3b) Homeless 36 4. CONCLUSION 41 4.1) What does MIDI & DAW Equipment Offer the User? 41 4.2) Users Versus Innovators 42 5. BIBLIOGRAPHY 45 3
  • 4. II. List of Figures & Tables ! List of Figures: Figure 1: Basic MIDI connectivity/Daisy-chaining Figure 2: Typical MIDI sequencer setup Figure 3: Digital Audio/MIDI Comparison Figure 4: Image of Logic Pro 9 Audio/Instrument/External MIDI integration Figure 7: BPM Analysis – Strings of Life Figure 7: BPM Analysis – Strings of Life Figure 8: Broken Home Structure Figure 9: Homeless Structure Figure 10: Sound Forge 9 editing software (Walden, 2007) ! List of Tables: Table 1: MIDI data Transmission ! ! ! ! ! ! ! ! ! 4
  • 5. 1. Introduction ! Electronic music production has always been at the forefront in the use, development, and manipulation of new technological advancements. The last 20 years have seen a phenomenal development in the way electronic music producers approach music production and the way in which technological equipment has influenced, sculpted, and even defined new music styles and production techniques. The technological equipment available at any given time period has limited all pioneers of electronic music production as far back as the likes of Pierre Schaeffer and Pierre Henry and their development of ‘Musique Concrète’ in 1948. Where their famous use of self recorded “sound effects, musical fragments, vocalizing’s, and other sounds and noises produced by man, his environment, and his artifacts” (Encyclopædia Britannica 02 December 2013) were processed through their use of locked vinyl groove manipulation techniques. However, their limitation in equipment was what actually enabled them to create and define their new genre of Musique Concrète, without the technology of locked-groove vinyl, and through their experimentation in the manipulation of vinyl technologies their music creation would not exist as we know it, or perhaps not at all. As stated in ‘The Cambridge Companion to Electronic Music;’ “the technologies that are used to make electronic music are a realisation of the human urge to originate, record and manipulate sounds” (Collins, N. & d’Escriván: 2010). Every decade sees a vast change in music technologies offered to electronic music producers, and in turn new genres, styles, and music production techniques are developed to use, manipulate, and innovate with new ideas through the use of new technological equipment available. Up until the recent developments of the Digital Audio Workstation (DAW) it seems that every generation of electronic musicians were limited to some degree by the equipment available to them at that time. Hardware, no matter how advanced always seemed to limit an electronic musician to some degree. However, in recent years, the possibilities offered by the DAW and computer technology are almost endless, seemingly offering electronic music producers and sound engineers alike; the tools to produce, edit and manipulate almost anything they could possibly imagine. All that is required is an imagination for a song structure or particular sound and all the tools to realise this vision are featured within a users’ DAW where endless plugins and software synths can be accessed at the click of a mouse, and intricate virtual edits and automations of the various parameters can be performed to create a sonic masterpiece. 5
  • 6. Or does it? Perhaps the possibilities of the DAW are too great, too daunting, and too time consuming for most electronic producers to fully appreciate and use to a full knowledge of the scope of even the most basic DAW tools available to them. Previously, an electronic musician would realise their ideas through the use of hardware, often multiple units, in order to create electronic music. A thorough knowledge of the hardware and how it could be used to create and manipulate sound was essential when manipulating hardware for a desired result. This paper will examine the most recent, and vast change from the hardware-based MIDI (Music Instrument Digital Interface) studios that dominated the late-1980s to mid-1990’s, compared with the software-based DAW studios of the late 1990’s to the present day. The aim of this paper is to highlight how changes in technology have affected the way in which music is produced, both sonically and musically, and how new music genres have developed as a result of the technological equipment available. ! ! 6
  • 7. 2. Research Throughout the research section of this paper, both MIDI and DAW technologies will be examined in turn, followed by an investigation into the relationship between electronic music and technology. 2.1 The MIDI Studio MIDI technology and in turn the MIDI studio was the most recent hardware-based technology prior to the DAW. Although MIDI is still frequently used and integrated with the DAW environment, it tends to be used primarily as simply a control surface to sequence MIDI piano roll data for virtual synthesizers and samplers to play back. It seems that the connection of a number of MIDI devices is now no longer needed as the DAW is able to sequence all elements of production in an all-in-one feature. Although the age of the MIDI studio is somewhat over, much of the MIDI hardware technology is still sought after by many electronic musicians who prefer a hardware-based workflow. 2.1 a) Introduction to MIDI Introduced in 1983 MIDI was the first “digital protocol for interconnecting synthesisers” (Collins, N. and d’Escrivan, J 48:2010). MIDI enabled previously unavailable connectivity for synthesisers and other hardware-based music equipment to be integrated with software. This was a huge step in synthesizer performance and meant that a number of synthesis/sampling modules etc. could be controlled and triggered by one individual. “Simply stated, Musical Instrument Digital Interface (MIDI) is a digital communications language and compatible specification that allows multiple hardware and software electronic instruments, performance controllers, computers and other related devices to communicate with each other over a connected network.” (Huber, D. 1:2007) At its simplest, MIDI allows the musician to play several instruments at once from a single keyboard, rather than having to dash around the stage constantly adjusting and monitoring all keyboard/sampler based elements. MIDI provided a universal electronic instrument communications language; “Prior to MIDI, there were some attempts at providing ways of connecting instruments, but none were entirely standard and all were very limited” (White, P. 11:2003). There was no reliable and universal digital music communication language enabling the synchronization of hardware and software instruments, meaning that the integration of a number of electronic instruments was significantly more difficult prior to MIDI standardization. 7
  • 8. A well renowned rhythm machine, the ‘Roland 808’, is a perfect example of connectivity prior to MIDI. On the back of the Roland 808, as well as numerous other pre-1981 MIDI Roland synthesisers and rhythm machines including the TR-606, the CR-8000, the TB-303 Bass line, and finally the EP-6060 electronic piano (which featured an arpeggiator) “you’ll find a five pin DIN jack that a standard MIDI plug will fit into” (Vail, 2000). This however was not MIDI, but its predecessor the ‘DCB Bus’, developed by Roland. The DCB Bus was what the MIDI protocol was based on, as this quote from Roland’s then owner ‘Mr Kakehashi’ explains: “We had developed our own communications protocol,” he explains. “Inside, it was the same as today’s MIDI. At the same time, Sequential Circuits was developing a MIDI-like protocol. We called ours the DCB Bus; they called theirs by another name. Then we discussed how to develop a common standard. Eventually MIDI came out, but actually more that 80 or 90% of it was based on the DCB Bus. Of course I don’t want to say that everything was developed by Roland, because that isn’t fair. It was a joint effort. Both companies agreed to implement the best ideas from both companies, so we jointly created MIDI. But when you compare it with the DCB Bus, you can see how similar they are.” (Vail, 2000). As the quote explains, both Roland and Sequential Circuits - both of whom had been trying to develop their own digital universal connections prior to the creation of MIDI technology - developed MIDI collaboratively, implementing the best features from both technologies. This would explain why the Roland TR-909 (the successor to the TR-808 and Roland’s first analog-digital hybrid machine) was also Roland’s first drum machine to feature the ‘newly developed’ MIDI ports on the rear of the device (Vail: 2000). Other notable features of the 909 included; the ability to accent any percussive events occurring within the beat, and the ability to trigger 909 sounds from a MIDI controller which actually enabled the user to obtain a wider dynamic range as described in Vail: 2000. ! 2.1 b) How MIDI Messages Communicate “MIDI isn’t about transmitting sounds; it’s about transmitting information that tells the instrument what your fingers were doing on the keyboard” (White: 2003). MIDI relies on the transmission of digital data, in order to communicate a wide range of player entry information, including velocity, sustain time of each note, pitch bend and modulation wheel parameters. “When a key is depressed on a MIDI keyboard, a signal known as a Note On message is sent, along with a note number identifying the key. This is how MIDI instruments know what note to play, when to play it and when to stop 8
  • 9. playing it” (White: 2003). MIDI transmits this data as a range of numbers (between 1 and 127), which are used to describe how each feature is interacted with or manipulated. For example, the scale could be used to describe how hard the player presses down each key (velocity), where 1 would represent low velocity and 127 would represent maximum velocity. This information is then sent between/to and from, other MIDI devices/controllers or to a computer based interface or sequencer where it is decoded to either re-trigger another keyboard or visually recorded into a computer sequencer, where it could be played back through the keyboard just as the player had played it. “Like computers, the data is in a digital form - a sort of ultra-fast Morse Code for machines.” (White: 2003). Pedals and wheels (expressive mediums) of the keyboard are known as “Continuous Controller” meaning that their movements are constantly monitored by the MIDI system, which operates over a sequence of minute steps, however these steps are so tiny that the imprint is one of continuous change. ! The general MIDI controller numbers are as follows: “Controllers 0 to 63 are used for continuous controllers, while 64 to 95 are used for switches. 96 to 121 are as yet undefined and 122 to 127 are reserved for Channel Mode messages.” (White: 2003). ! The messages are made up of four key sections: Message, Status, Data 1, and Data 2. The message is the type of information, which is being transferred, this could be ‘note on’ for example. The status represents the channel number. Data 1 in this instance would be which note(s) are being pressed on the keyboard (note number). Data 2 would transmit the velocity data how hard each note is pressed (see table 1) ! ! ! ! ! ! Status Data 1 Data 2 The ‘sss’ bits of the MIDI message define the message type. 1 s s s n n n n 0 x x x x x x x 0 y y y y y y y 9 8 Bits
  • 10. The ‘nnnn’ bits are used to define channel number Both ‘xxxxxxx’ and ‘yyyyyyy’ are used to carry the message data. Table 1: MIDI data transmission (Authors Own) 2.2 c) The Practical Uses of MIDI MIDI is a diverse and powerful tool, with a variety of uses and applications, enabling the user a wide range of possibilities as to the connectivity of devices. The first and foremost of these being the ability to link MIDI keyboards together: Linking MIDI instruments is accomplished by means of standard MIDI cables - twin-cored, screened cables with five-pin DIN plugs on either end. The cables are plugged into the relevant ports depending on the desired function, for example, the typical ‘MIDI master/slave connection’ requires the user to plug the MIDI cable into the ‘MIDI Out’ port of the master keyboard into the ‘MIDI In’ port of the slave keyboard, providing both instruments are set to the same MIDI channel, notes played on the master keyboard will also play on the slave keyboard. Figure 1: Basic MIDI connectivity/Daisy-chaining (Author’s Own) ! “The ability to link a second instrument via MIDI means that the sound of both instruments can be played from just one keyboard” (White: 2003), however, it doesn’t stop there. The use of the ‘MIDI Thru’ port slave keyboards can be ‘daisy-chained’ enabling the player to play multiple slave keyboards at once from just one MIDI 10
  • 11. keyboard, the player just assigns each slave to input on the desired channel (up to 16) to allow the master instrument to communicate with one specific slave without all the others trying to play along. The standard means of controlling MIDI usually revolve around the use of a keyboard based MIDI device; however, unlike a conventional electronic keyboard MIDI enables the player functionalities ! 2.1 d) MIDI Sequencers A “MIDI sequencer is really a multi-track MIDI recorder” (White, P. 33:2003) which uses input sources from live audio and MIDI to control synthesizers. Each track “may be edited, erased or re-recorded independently of the other parts ”(White: 2003). A modern MIDI sequencer will provide at least 16 tracks as a minimum. Live recording and editing is the typical workflow offered by the MIDI sequencer, usually to capture the idea and then to edit the MIDI data and in turn its playback. This could be achieved by perhaps by quantizing the notes so they sit closer to the beat. Having captured a MIDI recording, it is then possible to play back that same sequence of notes by plugging in any keyboard, however the sound of the synthesizer would not be captured in the recording unless it had been recorded as an audio track as well as a MIDI track. It is also important to note that the sequencer itself cannot generally play back any of the recorded sounds without a synthesizer or keyboard connected via MIDI to it, with the exception of sequencers with ‘built in’ synthesizers. ! 11
  • 12. MIDI sequencers are now found in the form of computer sequencing software in which you can set out and arrange your MIDI tracks. The sequencer will capture ! Figure 2: Typical MIDI sequencer setup (Author’s Own) ! “velocity, pitch, modulation, aftertouch and other controller information, as well as MIDI Program Change, Bank Change, and Note On and Off messages” (White: 2003). ! ! ! 2.1 e) MIDI Synchronization Because of the versatility that MIDI offers, it means that it can be set up and synchronized with different pieces of equipment in order to optimize usability. The key element of MIDI use is the ‘MIDI sync’ box. All MIDI sequencers, and drum machines contain a ‘MIDI Clock.’ The MIDI clock acts as a high resolution metronome: “it provides the electronic sprockets and gears that allow two or more pieces of MIDI equipment to be run in perfect synchronization, with one device acting as a master (and thus dictating the tempo) and the others functioning as slaves” (White, P: 2000) This demonstration can be seen in ‘Figure 2’ where the master 12
  • 13. synth controls the two slave modules, this sequence could then be recorded to the sequencer in order to play back the phrase exactly as it was played through the synthesizer and two slave modules. ! 2.1 f) MIDI Bandwidth MIDI bandwidth is a transmission rate of 3125 bytes per second or 3.125KB/s (kilobytes per second). This may sound significant, but when compared to the transmission rates of modern connections such as USB 3.0 which has a transmission rate of 400MB/s (Megabytes per second) and is 128,000 times faster than a MIDI message, the MIDI message seems somewhat slow and outdated in terms of connectivity and data transmission speed. In terms of bandwidth speed, MIDI is extremely slow when compared to bandwidths of data transmission of devices seen in the modern day. For example when compared to Apple’s most recent development of ‘Thunderbolt’ MIDI is almost insignificant, with Thunderbolt offering a data transmission rate of 10GB/s (Gigabytes per second, equivalent to 10 million kilobytes per second) which is 25 times faster than USB 3.0 and 3,200,000 times faster than the transmission of MIDI data (Apple: 2015). ! 2.1 g) The Contrast between MIDI and Digital Audio It is important to explain the distinction between both MIDI and digital audio. MIDI and digital audio do not perform the same task. MIDI sequencing is entirely different to the recording of multiple channels of audio through use of digital equipment. When a MIDI based song is played back, the instruments will all behave depending on the recorded and set MIDI data, which note to play, what velocity to play it, how long to hold the note etc. This is unlike digital audio, which essentially plays back a pre-recorded audio recording or audio sample exactly as it has been recorded (or edited in DAW software). This is the main difference, the MIDI data is only data related to the expressivity of how the notes/ chord sequences are played, and has no input on the actual sound of the instrument itself. This means that the MIDI data is independent of the actual sound of the synthesiser or keyboard, and so the player is able to play back the musical phrase through any MIDI based device. Figure 3 shows how a keyboard device supporting both MIDI and Line outputs could be ‘recorded’ and played back as the player had played in the phrase or sound, both as a digital audio recording and a MIDI recording. The diagram demonstrates, that for a 13
  • 14. MIDI recording to play back as it had been played, it requires the same keyboard that played in the sound to be connected in order to play back the phrase with the original sound. However, if a different keyboard were to be connected then the same phrase would be played, but with a different timbre/tonality of the sound of the new keyboard’s defaulted or selected sound type. ! ! Figure 3: Digital Audio/MIDI Comparison (Authors Own) ! 14
  • 15. 2.2 The DAW Studio The DAW studio is the modern norm for both recording, editing, mixing and mastering for all genres and types of music, with almost all studios comprising a central computer and a DAW of some description. 2.2 a) Introduction to DAWs Historically DAWs have been a part of the audio production process as far back as 1978 with the introduction of the first ‘DAW’ (Digital Audio Workstation) made by ‘Soundstream’, although computer music has dated back much further in the development of computer music in Bell Labs in the 1960’s. Hard disks (very low in capacity by today’s standards) were used for storage, and accommodated, very basic editing of the recorded audio in addition to mix-down and cross-fades (Langford, 2014: 9). However, because of the text-based DOS (Disk Operating Systems) these early DAW systems were extremely non-user friendly to navigate competently, which made them extremely unpopular with most musicians who did not understand how text based production systems work. It was this text-based system and hard to use interface, which provoked the move to PC’s in the 1980s. The PC operated with GUIs; otherwise known as ‘Graphical User Interfaces’ in which onscreen icons and objects visually represented the commands and interactions by the user (see Figure 4). By the late 1980s, many affordable computer platforms using GUI operating systems were available, many of which already had sequencing software packages written for them allowing the control, playback and recording of MIDI instruments as described in Langford: 2014. As processing power of PC’s increased; the evolution of music sequencing software advanced, and it wasn’t long before the introduction of Digidesign’s ‘SoundTools’ software in 1989. SoundTools was a big step in music production mainly due to its ‘advanced editing features’ but one in particular changed the way we view music on a DAW. This advancement was the introduction of the ‘FFT Window’, providing users with a Fast Fourier Transform view of the audio recording. This gave a similar view to that of a spectrum analyser with frequency on the horizontal axis and amplitude on the vertical axis, but it also showed how the spectrum changed over time. This provided the user with a ‘3D’ view of how a sound would change over time and in turn an idea of what might need to be done to alter that sound in its desired way, “then the traditional tools would be used to actually make the changes” (Langford, 2014: 10). ! 15
• 16. 2.2 b) DAW Integration The DAW is now the most practical and widely used form of audio creation and production. The DAW is essentially a development and advancement of the PC acting as a sequencer in the typical MIDI setup of the early 1990s (see ‘2.1d - MIDI Sequencers’). Most DAWs, when starting up, will bring the user to a typical sequencer window, where they can add a selection of channels to begin a session. These channels typically include: audio tracks, software instrument tracks, and external MIDI tracks. ! Figure 4: Image of Logic Pro 9 Audio/Instrument/External MIDI integration (GUI) ! It is this ability to create, manipulate, mix and master internally, without any external sound source, that defines the ‘Digital Audio Workstation.’ A track can now essentially be constructed and completed entirely in the virtual world of the DAW without the need for any outboard equipment other than a set of monitors to listen back on. The vast amounts of processing power and memory now available at low prices mean it is easy for even the bedroom producer to run software synthesisers that would once have been considered extremely powerful. For example, Native Instruments’ ‘Massive’, a wavetable synthesiser well known for its contribution to bass sound production, is often associated with the fairly recent genre of ‘Dubstep’, in which a large granulated warbling bassline typically defines the genre, below the slow half tempo thud of the 140BPM drums. ! 16
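As an illustration of why such synthesisers are no longer computationally daunting, the sketch below implements the core idea of wavetable synthesis: reading a stored single-cycle waveform at a rate set by the desired pitch. This is a minimal sketch and not Massive's actual engine; the table contents, pitch and 'wobble' rate are illustrative values only.

```python
import numpy as np

def wavetable_osc(table, freq, duration, sr=44100):
    """Read a single-cycle wavetable at 'freq' Hz using a phase
    accumulator, with linear interpolation between table entries."""
    n = int(duration * sr)
    phase = (np.arange(n) * freq * len(table) / sr) % len(table)
    idx = phase.astype(int)
    frac = phase - idx
    return (1 - frac) * table[idx] + frac * table[(idx + 1) % len(table)]

sr = 44100
saw = np.linspace(-1.0, 1.0, 2048)                    # single-cycle sawtooth table
bass = wavetable_osc(saw, freq=55.0, duration=2.0)    # A1, the dubstep bass register

# A crude stand-in for the genre's 'wobble': a 2 Hz LFO on amplitude.
# (Real patches typically modulate a filter, but the periodic motion is the point.)
t = np.arange(len(bass)) / sr
wobble = bass * (0.6 + 0.4 * np.sin(2 * np.pi * 2.0 * t))
```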
• 17. 2.2 c) What Defines a Typical DAW? DAWs now have the power to “effectively replace and encapsulate much of or all of the functionality present in a traditional console-and-outboard-gear-based studio” (Leider, 2004: 46). This can be seen in almost all DAWs, which feature numerous software synthesisers and plugins that enable the user to mix and develop a track internally without the need for any outboard hardware modules to shape and sculpt the sound. There are two typical DAW setups. The first is the ‘audio interface’ based setup (see figure 5), where the purpose of the audio interface is solely to act as a high quality A/D and D/A converter for the computer. Typically this comprises two to six audio inputs, a mixture of XLR and 1/4 inch jack inputs, as well as a stereo monitor output and often a headphone mix. This form of DAW is inexpensive and often more practical for small scale recording and production work, where the computer will be more than able to cope with all sound processing aspects. ! Figure 5: Image of Basic ‘Audio Interface’ DAW Setup (Firewire 800 Interface) ! The second setup is the ‘audio interface and expansion card’ setup. Generally a much larger audio interface is used, typically with 16, 24 or 48 inputs, which offers the user a much larger scope to record multiple sound sources at once. However, the multiple 17
• 18. recording of so many inputs would not be possible without the expansion cards, which assume the roles of audio processing, editing, and mixing. “These systems free the host computer to concentrate on running its operating system and managing files and disk access.” (Leider, 2004: 46). However, the cost of these systems is high, and they can be expensive to upgrade once they become outdated. Most of these systems will also require the user to incorporate a mixing desk module to work alongside the expansion card for audio recording control, and often for DAW control in the case of digital desks (as seen in figure 6, with the Control 24 mixing desk used to interact digitally with Pro Tools). ! Figure 6: Image of ‘audio interface and expansion card’ DAW Setup ! ! 2.2 d) What New Audio Manipulation Tools are Offered by the DAW? Audio manipulation is something every music producer seeks when building and creating a track. The DAW offers its users a vast scope of audio manipulation tools that can be used to totally transform an audio recording. The most notable audio manipulation tools include the following: 18
• 19. “Drum replacement”, in which the user replaces the hits of recorded drums with selected samples, retaining the style and arrangement of how the drums were recorded; • “Time stretching”, where the length/duration of the audio file is altered but not the pitch; • “Pitch shifting”, where the pitch is altered, but not the duration of the audio file/sample; • “Elastic audio”, which allows the user to ‘stretch’ audio recordings either with or without affecting the pitch of the recording. (Langford: 2014). ‘Drum replacement’ enables the user to replace elements of recorded drums. This could be, for example, the total replacement of the snare drum of a track, where the DAW is set to analyse every recorded snare drum hit and then place a replacement sample in exactly the same position as each recorded hit. The velocity and volume of each hit can then be altered individually to create a sense of realism in the replacement drum samples. ‘Time stretching’ is the name given to the lengthening and shortening of audio files/samples. Previously, in the analogue domain, the user would have to alter the playback speed of the audio sample, and in turn the pitch; time stretching now “enables the user to expand or compress the length of the audio file without affecting the pitch” (Langford: 2014). Time stretching is a feature that has been available for some time, and has improved since its early development, although it still performs at its best within around 4 seconds of change (shorter/longer) before the audio artefacts and ‘jitter’ of the stretched audio sample begin to be heard (usually an undesirable effect). ‘Pitch shifting’ is the counterpart to time stretching, in which the pitch of the audio file/sample is altered but the file is not lengthened (lower pitch) or shortened (higher pitch) as a result of the pitch change, as it would be when altering pitch in the analogue domain. Pitch shifting was, however, available prior to computer based audio editing, in the digital domain, in the form of the Eventide “H910 Harmonizer”, in which the user had to input a ratio difference (only to 2 decimal places) rather than a pitch difference amount, “which meant it was difficult for users to achieve exact pitch matching” (Langford: 2014). The H910 was superseded by the “H3000” in 1986, featuring MIDI connectivity, enabling the user to control the parameters of the device 19
• 20. through MIDI. The H3000 also featured a pitch based difference amount (+1/-1 semitones). Despite the arrival of pitch shifting in the digital domain, it wasn’t until Antares Audio Technologies’ release of “Auto-Tune” in 1997 (a product which relied heavily on the technology of the H3000) that a ‘quantize’ feature was offered, enabling the user to synchronise their audio to exact pitches. This technology rapidly became popular throughout the early 2000s due to the low cost of the plugin when compared to hardware units such as the H3000, and Auto-Tune is arguably the most innovative early DAW technology step in establishing the DAW as “the go-to medium for audio recording, production and manipulation” (Langford: 2014). This technology has since been developed further, and Celemony’s “Melodyne” is now the industry standard pitch shifting plugin, allowing users to input and control the audio key through MIDI input, which is then mapped out in a GUI window. ‘Elastic audio’ is an extremely recent concept, a direct result of the huge increase in computer processing power over the last 10 years, and is essentially a real-time and non-destructive way of time-stretching audio files, where the user quite literally drags elements within the audio track (such as transient points) to synchronise them with the tempo grid behind. Elastic audio, although not a new technology in its own right, is a much simpler, visual approach to time stretching/pitch editing. Truly elastic audio will have two characteristics of flexibility: time manipulation and pitch manipulation, that is, the ability to alter audio length with or without altering the pitch. The main difference between elastic audio and time stretching is essentially the ease of use of elastic audio: the user doesn’t need to be concerned about tempo ratios or audio file length; as long as the tempo grid behind is correct, there will be no issue when time stretching the audio file through elastic audio. Another useful feature of elastic audio is the quantisation of audio events: using the quantisation feature identically to the way in which one would quantise a MIDI event, the user is able to quantise every hit of a drum track exactly in time with the tempo grid, all through the use of transient markers and the versatility of elastic audio. ! ! 2.2 e) MIDI Implementation As mentioned earlier, MIDI is still very much a part of the DAW setup. However, it is now interpreted in three forms: 20
• 21. • The first use is as a traditional MIDI keyboard input and sequencer setup, where the computer is simply used to record MIDI data and play it back as the user played it in, exactly as MIDI was traditionally used. • The second use is as an internal software instrument input where, much like the traditional MIDI setup, the interactions of the user are interpreted as MIDI data. However, when the recorded MIDI data is played back, it is an internal software instrument which responds to the MIDI messages. • The third use is as an extra control device to control/automate the behaviour of plugins and audio editing parameters. For example, this could be the use of the modulation wheel to automate the cutoff of a lowpass filter plugin or, as mentioned in section 2.2d, using the keys to control the polyphonic pitch response of Melodyne when tuning a vocal track (see the sketch after this list). ! 21
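The sketch below illustrates the third use. It is a minimal, hypothetical mapping rather than any particular plugin's API: a 7-bit controller value (0-127) arriving from the modulation wheel (MIDI CC 1) is mapped onto a lowpass cutoff frequency. The exponential curve is a common convention, not a MIDI requirement, chosen so that equal wheel movements feel like equal musical intervals.

```python
def mod_wheel_to_cutoff(cc_value, low_hz=80.0, high_hz=12000.0):
    """Map a 7-bit controller value (0-127) onto a filter cutoff.
    Exponential mapping: equal wheel moves give equal pitch-like steps
    rather than equal steps in Hz."""
    ratio = max(0, min(cc_value, 127)) / 127.0
    return low_hz * (high_hz / low_hz) ** ratio

# e.g. a stream of (controller number, value) events from a MIDI port
for cc_num, value in [(1, 0), (1, 64), (1, 127)]:
    if cc_num == 1:                      # CC 1 = modulation wheel
        print(f"wheel {value:3d} -> cutoff {mod_wheel_to_cutoff(value):7.1f} Hz")
```

Wheel at rest gives the 80 Hz floor, a half-raised wheel lands around 1 kHz, and the full wheel opens the filter to 12 kHz, exactly the kind of automation gesture described in the bullet above.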
• 22. 2.3 Technology & Electronic Music The relationship between technology and electronic music is inseparable: advancements in technology almost always result in advancements in the way electronic music is created, manipulated and produced. This section will examine the central arguments within electronic music regarding the use of technology for expressive electronic music production. 2.3 a) The Divide Between Computer Music & Synthesis In Schrader’s ‘Introduction To Electronic Music’ he interviews Jean-Claude Risset, a French pioneer of early computer music. “Risset pioneered computer music and sound analysis, particularly of brass instruments, through the use of computers at ‘Bell Labs’ from 1964” (Schrader 1982). The interview between Schrader and Risset is based around his work “Mutations I”, released in 1969, an experimental computer music piece featuring a number of unusual and discordant sounds. Within the interview Schrader asks: “Do you feel there are any fundamental differences between electronic music composed with synthesisers and computer music?”, to which Risset clearly states that he is “only interested in the kind of computer music that differs from electronic music composed with synthesisers” (Schrader 1982). He then expands on this, explaining his dislike for composers who simply use the computer as an ‘elaborate synthesiser’, stating that the computer is a far more powerful, flexible and precise tool than the synthesiser. He continues to set out his dislike for synthesisers, explaining that the synthesiser “restricts the sonic possibilities” of the sounds created and that, in the way synthesisers are developed, they bias the user towards “instrument like performances”, unlike the computer, which allows the user complete freedom of composition, sound shaping and manipulation (Schrader 1982). Risset’s argument clearly illustrates the major differences between computer music and electronic music through synthesis, and it would still appear that such a divide exists, having analysed both MIDI and DAW studios (sections 2.1 and 2.2). The MIDI studio - much like Risset’s description of earlier synthesis - encourages the user to structure their music as an ‘instrument like performance’, with many synthesis arpeggiators mainly working in 4/4 timings. This is unlike the total freedom of musical expression offered by the computer. That isn’t to say that the DAW doesn’t encourage the user in a similar fashion, with the default editing window of many DAWs (both Logic and Pro Tools, for example) being a tempo grid set to a 4/4 timing at 120BPM. This automatically encourages the user to begin structuring their music to conform within these parameters. However, it is important to mention that 22
• 23. these parameters can easily be disabled, leaving the user with a ‘blank canvas’ from which to begin their musical productions. Furthermore, during the interview Schrader asks another poignant question: “You have been involved with computer music for several years and you have experienced several technological changes. How do you think the technology of computer music has affected your compositional style?” (Schrader: 1982). Risset quite clearly states that his compositional style relates closely to technological advancements in computer music. In terms of the implications of technological advances for computer music, Risset explains that he believes the computer itself has given a totally new perspective towards “completely formalized processes that can be easily automated” (Schrader 1982), where an individual can design almost all of the sonic constraints involved within their musical works (Schrader 1982). Essentially he is explaining that, even when this interview was published in 1982, there was still a significant difference, in his opinion, between the possibilities offered by the hardware and software music production worlds. ! 2.3 b) Technological Innovation Through Electronic Music “The barriers to electronic music have significantly dropped in the past twenty years; cost, size and speed are the three main factors in this revolution” (Collins, N. & d’Escriván: 2010). The time it now takes for a composer or producer to hear the results of their musical efforts is almost instantaneous, and laptop and PC based audio workstations are now so powerful that the majority of musicians do not use, or even appreciate, their full capacity. Furthermore, most DAWs are now priced so low that it is “not uncommon for musicians, even in developing countries, to own a number of machines” (Collins, N. & d’Escriván: 2010). With the ease and low cost of acquiring a DAW, many would assume that this correlates with a direct increase in musical innovation, as a result of new technology multiplied by the accessibility of electronic music production equipment to the masses. However, it appears that often this is not the case. Many electronic producers and musicians alike feel that innovation through technological equipment is not occurring in the ‘new’ digital and computerised age of music production. This argument is supported by Alejandro Viñao in his article “The Future of Technology in Music”, a feature within “The Cambridge Companion to Electronic Music” (Collins, N. & d’Escriván: 2010). In this he suggests that, in fact, most innovative electronic musicians are using ‘the technology of another time’ to realise their creative musical ideas. “They appear to have lost their lust 23
• 24. for innovation through the new and latest technologies, and are instead only using what they feel comfortable with, not pushing the boundaries in their use of musical equipment” (Viñao: 2010). To a degree, the idea of ‘only using what you know’ does make sense in terms of music production, as it provides the music producer/electronic musician with predictable results: they know how the equipment works and what to do in order to obtain a desired sound/result. Alternatively, it could be the equipment itself that actually defines the music, and it is the use of that equipment that many associate with a particular music producer. An example is Jimi Hendrix and his use of the electric guitar, an instrument that he mastered and experimented with, and which as a result defined his distinct sound. ! 2.3 c) Psychology & Sound Perception Sound perception, sound localisation and psychology within music are essential to embrace when composing and producing a musical work, regardless of its genre. It is all of these elements combined which help the listener to interpret the song as the producer intended, with the envisioned musical meaning passed on to the listener through his/her listening experience. However, because many of the sounds heard within electronic music are not ‘natural’ - i.e. acoustic - sounds, it is especially important for the electronic music producer to establish the environment the listener should associate with these synthesised sounds; this may revolve around the localisation of the sound (where the sound is situated within the sound field), its timbre, pitch and rhythm. ! The localisation of a sound is split into three key elements: • location of azimuth, • elevation, and • distance. The localisation of azimuth refers to the identification of sound on the horizontal plane. In music production this corresponds to left/right panning of the sound field in order to obtain differentiation between the elements that make up the musical piece (Collins, N. & d’Escriván: 2010); a pan law sketch is given below. The localisation of elevation refers to where the sound source is in relation to the vertical plane, i.e. high-pitched sounds appear to be located ‘above us’ whereas low-pitched sounds appear ‘below us’. Although less accurate than the localisation of azimuth, “the localisation of pitch is still an essential feature to consider when producing music” (Collins, N. & d’Escriván: 2010). 24
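The sketch below shows one common way azimuth placement is implemented in a DAW mixer: constant-power panning, where the two channel gains trace a quarter circle so the source keeps roughly the same perceived loudness as it moves across the stereo field. The exact pan law varies between DAWs, so this is illustrative rather than any specific implementation.

```python
import math

def constant_power_pan(pan):
    """pan: -1.0 (hard left) .. 0.0 (centre) .. +1.0 (hard right).
    Returns (left_gain, right_gain). Because left^2 + right^2 == 1,
    total acoustic power, and so perceived loudness, stays constant."""
    angle = (pan + 1.0) * math.pi / 4.0          # maps pan onto 0 .. pi/2
    return math.cos(angle), math.sin(angle)

for p in (-1.0, 0.0, 1.0):
    l, r = constant_power_pan(p)
    print(f"pan {p:+.1f}: L={l:.3f} R={r:.3f}")  # centre gives ~0.707 / 0.707
```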
• 25. Finally, the localisation of distance is interpreted through a mix of ‘loudness of the sound source’, ‘a knowledge of the sound itself’ and ‘the loudness ratio between direct and reverberant sound’. In electronic music production it tends to be predominantly the loudness ratio between direct and reverberant sound that is manipulated in order to give a sense of distance to a particular sound. This is different from acoustic music recording, where all of these techniques will be used and incorporated into microphone technique in order to create a sense of the instrument, the room it is in, and the intended distance the listener will be from the sound source. ! When listening to music, our auditory system automatically picks out what we need to hear and acquire meaning from. This is known as ‘auditory streaming’, a phenomenon which enables us to concentrate and focus on single elements within a complex sound field, e.g. picking out the speech of an individual in a loud/busy environment. This enables us to make sense of the sounds around us. In music this phenomenon enables us to “hear the music as a collection of its individual streams, vocals, bass lines, melodic lines, and rhythm” (Collins, N. & d’Escriván: 2010). ! These ideas of auditory streaming are part of ‘Gestalt psychology’, founded on the work of Christian von Ehrenfels and Max Wertheimer. Gestalt psychology can be split into the following principles: • Principle of Common Fate: objects which move together are usually grouped together • Principle of Closure: objects which appear to form ‘closed entities’ are usually grouped together • Principle of Similarity: objects sharing ‘similar characteristics’ are usually grouped together • Principle of Proximity: objects that appear close to one another are usually grouped together • Principle of Good Continuation: continuous forms tend to be preferred. (Collins, N. & d’Escriván: 2010). ! Pitch perception is the interpretation of the pitch and pitch relations of all elements within a musical piece, where “traditionally, most music is comprised of discrete pitches, or scales, instead of a continuum of them. Furthermore it is common that the scales repeat themselves after an octave or a frequency ratio of two” (Collins, N. & 25
• 26. d’Escriván: 2010). In western music this scale is part of the ’12 Tone Equal Tempered Tuning (12TET)’, a tuning which almost all music in the western world, regardless of genre, tends to use. However, with today’s computer music technology it is relatively easy for the user to experiment with adaptive tunings, in which the intonation itself is modified to fit the current key of the piece. ! Rhythm is essentially the ‘glue’ that holds together a musical piece, and “the ability to infer beat and metre from music is one of the basic activities of music cognition” (Collins, N. & d’Escriván: 2010). Even if a piece is rhythmically complex, with changing time signatures and note values, we are able to interpret its rhythmic patterns, because humans are readily able to perceive pulses and beats. Electronic music - particularly computer music - allows the user the possibility to investigate alternative and non-standard musical structures that deviate from the common metre practice of western music. ! 26
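The 12TET system mentioned above follows directly from the octave's 2:1 frequency ratio: each of the twelve equal steps multiplies frequency by the twelfth root of two. A short worked sketch, taking the customary A4 = 440 Hz reference:

```python
def tet12_frequency(semitones_from_a4, a4=440.0):
    """Frequency of a pitch n equal-tempered semitones above/below A4."""
    return a4 * 2 ** (semitones_from_a4 / 12.0)

# One octave of the chromatic scale starting at A4: the twelfth step
# (n = 12) lands exactly on the 2:1 octave ratio, as the text describes.
for n in range(13):
    print(n, round(tet12_frequency(n), 2))    # 0 -> 440.0 ... 12 -> 880.0
```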
• 27. 3. Practical ! Over the course of this practical examination a number of tracks from each technology period will be examined in detail, including the physical makeup and construction of the tracks, which will be broken down into their corresponding elements using Pro Tools, recording anomalies and unusual findings in the process. The technical equipment used to create each track will then be researched and examined to determine how the limitations and scope offered by that equipment would have influenced the structure and style of the tracks created. This examination will demonstrate the differences in technology offered by the MIDI and DAW studios respectively, outlining what the technology could offer producers in terms of creativity in relation to its limitations, and exploring whether a direct link between creativity and technological advancement exists, or whether either creativity or technology precedes and influences the other. ! 3.1) Rhythim is Rhythim (Derrick May) “Strings of Life” is a definitive Detroit Techno track released in 1987 by ‘Rhythim is Rhythim’ (Derrick May). The track is a well renowned and popular example of electronic music production of its time, although it would be considered extremely basic by today’s standards. May is well known for his use of MIDI equipment, and it is reported that he uses “Korg sequencers, Roland sequencers, Roland drum machines” (May 2006). Reportedly May’s equipment includes the likes of: • Roland TR808 • Roland TR909 • Roland TR727 • Yamaha DX100 • Kawai K3 • Sequential Circuits Pro One • Nord Lead 1 Keyboard • Memory Moog • Waldorf Micro Q • Yamaha DX21 • Ensoniq Mirage • Korg Poly800 27
• 28. • Atari ST (computer) ! It seems that musical innovation through educated use of technological equipment is what May believes is essential when creating music, stating that: “Now, with the age of technology, you don’t even have to be a ‘synthesist’. You don’t even have to know what a synthesizer is, to make music. I’m all for the future, 100%, but I just find the future not 100% into being creative. The future doesn’t have a creative agenda, we’re becoming less creative, not just in making music but in everything” (May 2006). This links in particularly well with the article ‘New Sounds, Old Technology’ (Voorvelt 2000), which states throughout that “Musical innovation tends to precede technological innovation rather than the other way round” (Voorvelt: 2000). The article explains that it is innovative musicians who explore and abuse their old equipment and instruments who help push the boundaries in music production, and who often develop “new forms and styles, testing new musical ways of thinking and widening the range of expressive possibilities” (Voorvelt 2000). May himself describes artists who do not push the boundaries as “riding the coat tails of technology” (May 2006), an interesting point which fits Voorvelt’s description of typical pop production, where new technology is used in the production of new music, but only in traditional ways. Voorvelt describes the use of 1980s drum machines, stating that “the popular Roland TR-808 and the Linn drum machine, defined the drum sounds for genres such as new wave, electronic body music and acid house, but the actual sounds and drum patterns remained very similar to those developed in the 1950s and 1960s.” (Voorvelt: 2000). In other words, the sounds that the equipment created as a default were ‘new sounds’ in popular music; however, the way in which the equipment was used was not innovative in any way whatsoever. May makes an interesting point regarding the use of new technology, in keeping with Voorvelt, questioning what defines a musician: “What do you consider a musician? In other words, is it because you can program music on a computer? You have particular programs, you can edit on a particular program, does that make you a musician? Because you can actually make a good song? Or is it because you can actually play an instrument? Do you implement this into your music? Or do you just use the technology?” (May 2006). Is it the musician being led by the new technology, or is the new technology being implemented by the musician? It seems that it is this question that defines the difference between innovative music production through the use of technology and music production guided by technology. May feels very strongly about how technology should be used in music, stating: “I recommend that you don’t lean 28
• 29. and depend on your technology 100%, it’s too easy to give up and not really use your imagination, I don’t want a computer to tell me what I can and can’t do, I don’t want to have to fight a machine to tell me that I can’t do something” (May 2006). This perhaps explains May’s love of analogue electronic equipment, describing producing in the analogue domain as “working with your ears and your instincts” and urging new electronic musicians to “try and get as much analogue stuff as you can and implement it into your technology, you’ll find that there are advantages to doing that, it’s not a bad thing to hear a bit of history from that machine” (May 2006). ! When analysing ‘Strings of Life’ it is clear that the production, by today’s standards, is extremely dated, and definitely a product of its time. It is immediately clear that this track revolves around the use of samplers, sequencers and a MIDI clock with which to structure the triggering of these samples. The entire piece focuses on the piano sequence that May’s then-friend Michael James had recorded for him, originally at 80 BPM. May increased the tempo of the piano recording, sliced it up into loops, and then added percussion and string samples to create Strings of Life (Discogs 2015). ! It is obvious that many of the string hits, for example, have been recorded at only one pitch, so that when these string stabs are played back through the sampler and re-pitched up/down, the actual tonality of the string hits changes: “If a recorded sound is played back at a different speed, the timbre of the sound will be effected, since all the harmonics of the sound will be heard at correspondingly, different harmonics” (Schrader 1982). For example, when the string hits are pitched up, the sample becomes shorter and in turn the pitch increases. The envelope of the sample also becomes ‘compressed’, most noticeably shortening the transients, the attack and release phases of the sample, meaning that it eventually becomes more of a ‘hit’ than a ‘stab’ of the strings. Likewise, when pitched down, the transients become longer and more drawn out, and the “envelope of the sound will be lengthened” (Schrader 1982). This in turn means that the stabs become very slow and much less impactful. The overall result of this pitching is that the strings have no consistent characteristic to their sound throughout the piece, as the string sound is constantly being morphed as a result of the pitching effect of the re-sampling process. 29
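The coupling of pitch, duration and envelope described above falls straight out of how samplers of the period repitched audio: by simply reading the same data faster or slower. The sketch below demonstrates the effect numerically (assuming NumPy; the 'stab' is a stand-in signal, not May's actual sample). Breaking exactly this coupling is what the DAW time-stretching tools of section 2.2d were later designed to do.

```python
import numpy as np

def repitch_by_resampling(sample, semitones):
    """Old-school sampler repitch: play the same data faster or slower.
    Pitch and duration change together, so the envelope is compressed
    when pitched up and stretched out when pitched down, as heard in
    the string stabs of 'Strings of Life'."""
    ratio = 2 ** (semitones / 12.0)                  # playback speed factor
    src_positions = np.arange(0, len(sample) - 1, ratio)
    return np.interp(src_positions, np.arange(len(sample)), sample)

sr = 44100
stab = np.random.randn(sr)                 # 1-second stand-in 'string stab'
up = repitch_by_resampling(stab, +7)       # a fifth up: lasts ~0.67 s
down = repitch_by_resampling(stab, -7)     # a fifth down: lasts ~1.5 s
print(len(up) / sr, len(down) / sr)        # durations scale with the pitch ratio
```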
• 30. In an interview with ‘Red Bull Music’ May was keen to emphasise that all samples used in the production of ‘Strings of Life’ were ones which he had collected personally, stating that: ! “We weren’t using the sequencers, synthesisers or programs just as a ‘crutch’, they were an asset. In other words, ‘Strings of Life’, the piano was real, it was performed. The orchestra hits that you hear were recorded from various progressions of an orchestra. I recorded these sounds to cassette, and I put these into an old ‘Mirage Sonic Sequencer’ and I played progressions on the keyboard to play the notes that you hear on the song, so it’s actually completely performed.” (Red Bull Music 2006) ! Another interesting finding drawn from this analysis process is that, similar to Black Box’s “Ride On Time” (examined in section 3.2), the timing of this track is inconsistent. ! Figure 7: BPM Analysis – Strings of Life ! The track varies between 125.7 and 128.8 BPM over the course of the piece, which seems unusual for a MIDI based track revolving around a central clock. Both of these tracks were created in the late 1980s (within 2 years of each other), and it is more than likely that they would have used much of the same or similar MIDI equipment. Perhaps MIDI clock devices were unstable or unreliable at this time; as MIDI was a relatively new data connectivity standard, it is likely that faults in MIDI 30
• 31. equipment would still have been at the troubleshooting stage of development. The notorious sound of the Roland TR 808 drum machine can be heard sequencing the drums, with the five percussion sounds that distinctly characterise the 808: “The hum kick, the ticky snare, the tishy high hats (open and closed), and the spacey cowbell. Low, mid, and high toms, congas, a rim shot, claves, a handclap, maracas, and cymbal fill out the 808’s sonic complement” (Vail, M. 2000), all of which can be heard orchestrating throughout ‘Strings of Life.’ However, the 808 itself predates MIDI and synchronises with other equipment via Roland’s pre-MIDI ‘DIN sync’ connection, one of the predecessors to MIDI technology referred to in section 2.1a of this paper. This means that May would either have had to use a sync-to-MIDI type converter to synchronise the 808 to the central MIDI clock, or perhaps he pre-recorded the drum sequence(s) onto tape to use with his ‘Ensoniq Mirage’ sampler, playing them back and triggering them from the sampler and thereby bypassing the synchronisation issue he would otherwise have faced when using the TR 808. ! 3.2) Black Box The hit single “Ride On Time” by Italian House group ‘Black Box’, released in 1989, is a popular example of MIDI based studio production in the late 80s, featuring in The Guardian’s ‘UK million-selling singles list’ with UK sales of 1.05 million since its release date, placing it 102nd in the rankings of best-selling singles in the UK (Sedghi 2012). The song is renowned for its heavily sampled vocals from “Love Sensation” by Loleatta Holloway. These samples were un-credited when the song was released, and Black Box were sued by Loleatta Holloway and her writer/producer Dan Hartman. Because these samples were never approved by Holloway and Hartman, Black Box had very little defence when the lawsuit regarding the intellectual and mechanical property of the vocals was issued, following the international success of “Ride On Time.” This led to the payment of an undisclosed sum in damages to both Holloway and Hartman (Independent: 2011). Although the success of the pop hit outweighed the issue of the lawsuit, this is still a good example of the dangers of unauthorised sampling. In spite of the seriousness of the lawsuit relating to the vocals, it is the way in which the vocal samples are used that gives an insight into pop production through the use of MIDI equipment. The samples have been set to be triggered by a sequencer. This in turn gives the actual sound of the vocals an almost ‘percussive’ effect, as the vocals are 31
• 32. ‘punched in’ to the track when triggered by the sequencer, which would in turn be synchronised to the central MIDI clock. Sampling vocals in this way was a relatively new phenomenon, as the vocals were triggered in the way most producers would trigger percussive samples in order to create a rhythmic sequence. This innovative use of MIDI technology, treating vocal samples as most would traditionally treat percussive samples, is a perfect example of innovation through technological use. Because Black Box are classed as a ‘Pop Dance’ trio, this somewhat contradicts May’s view that pop acts merely ride the coat tails of technology rather than innovating with it. That being said, the rest of the track was very traditional in its use of MIDI equipment, not pushing the boundaries in terms of what that equipment could offer. ! Having analysed the structure of the track it is clear that Ride On Time was created using the MIDI studio. All events occur exactly in time with each other; the track itself is completely quantised throughout, with all musical events sitting exactly on the quarter- and eighth-note divisions of the 4/4 time grid. Interestingly, however, the timing/clock used to create the track seems to change throughout, suggesting that as the track progresses it actually increases in tempo, starting at 118.4BPM and finishing at 119.2BPM (Figure 7 shows the tempo transition throughout the course of the track). ! ! Figure 7: BPM Analysis – Ride On Time 32
• 33. ! It is unclear what the reason for this distinct increase in tempo is; however, what can be determined is that all elements of the track remain in time and relative to each other (no element becomes faster or falls out of time with another). This suggests that if the issue was related to the central clock, it was the overall timing that was affected rather than the individual MIDI signals to the various synthesisers and MIDI hardware. This change in timing was not expected, given that “Ride On Time” is a dance track and would therefore be expected to retain a fairly constant tempo in order for the DJ to cue up, mix and beat-match it against the outgoing track, and then again with the next track as it reaches the end of its playback time. This distinct change in tempo would be significant enough to inconvenience DJs, who would assume that the tempo of the track remains fairly consistent throughout its playback. ! 3.3) Burial & How We Interpret Rhythm “The borderline between composition and sound synthesis is becoming increasingly blurred as sound synthesis becomes more sophisticated and as composers begin to experiment with compositional structures that are less related to traditional musical syntax” (Encyclopædia Britannica 02 December 2013). This statement is true of Burial’s production, where little can be seen to relate to a typical musical syntax. The idea of song structure appears to have been left far behind when listening to Burial’s ‘Broken Home’ and ‘Homeless’, where the listener is immersed in a complex and confusing sonic environment and surrounded by rich, diverse, developing textures throughout these almost ‘musique concrète’ style productions. 3.3a) Broken Home ‘Broken Home’ by Burial showcases a number of signature DAW-only sound manipulation techniques, including the time stretching of the guitar sample heard at the beginning of the track. Here the sample has been time stretched to the point at which audio ‘jitter’ can be heard, due to the sample being stretched so far from its original recorded tempo. Similarly, the use of pitch shifted vocal samples can be heard throughout the track. The track is extremely unstructured and does not follow any distinguishable time signature. However, a 4 bar loop can be identified from the repetitive aspects of the song, based solely on when the melodic elements loop, rather than on the actual drum beat, which even within this 4 bar loop still doesn’t meet the gridlines or match up to any typically used time 33
• 34. signatures. The melodic loop suggests that the tempo of Broken Home is 140BPM, but there would be no way for an individual to conclude this by listening to the song alone. The structure of the track (assuming the 140BPM tempo) appears to be as follows: • 8 bar intro • 4 bar verse • 48 bar chorus • 8 bar breakdown • 8 bar verse • 48 bar chorus • 8 bar breakdown • 32 bar chorus • 8 bar outro. ! Figure 8: Broken Home Structure ! However, the transitions between the elements of the song are extremely transparent, with the song itself remaining extremely dissonant and disjointed, leaving the listener with little idea of tempo. The issue with this is explained in an extract from ‘Rhythm, Music and the Brain’ by Michael H. Thaut, which states: “Rhythm organises time. In music, as a time-based acoustical language, rhythm assumes a central syntactical role in organising musical events into coherent and comprehensible patterns and forms. Thus the structure of rhythm communicates a great deal of the actual, comprehensive “musical meaning” of a musical composition”. (Thaut, 2005). ! This may suggest that the lack of an obvious rhythmic structure in Broken Home leaves the listener struggling to interpret the ‘comprehensible patterns and forms’ within the music, such as melodic patterns and musical phrases. Does this in turn mean that it is difficult for the listener to interpret a ‘musical meaning’ in this track? This may well have been the exact purpose of creating such a rhythmically 34
• 35. disjointed track. This may raise the question: what does the listener derive from ‘Broken Home’ as a result of the lack of a rhythmic structure? ! The lack of structure in Burial’s music poses questions about both the ‘syntactic and semantic meanings’ found in almost all types of western music. A 2005 paper by Stefan Koelsch entitled “Neural substrates of processing syntax and semantics in music” examines syntactic and semantic meanings in music in depth. Koelsch explains that all music is guided by certain regularities, which constrain and organise how simultaneous tones (i.e. intervals and chords), individual tones and the durations of tones are arranged to create what can be interpreted as ‘meaningful musical phrases’ (Koelsch 2005). Koelsch emphasises that music inherently relies on some sort of regularities in order to convey meaning to the listener. This conforms to the earlier theory put forward by Thaut that “Rhythm organizes time” and in turn that “rhythm assumes a central syntactical role in organising musical events into coherent and comprehensible patterns and forms” (Thaut, 2005), suggesting that music fundamentally relies on regularities in patterns and phrases to convey meaning and foster appreciation of the musical ideas and work(s) of artists/musicians. Koelsch’s findings further support this idea, showing that even listeners without any musical training in ‘tonic’ or ‘dominant’ chord structures exhibit a response in which “Music-syntactically irregular chords elicit an early right anterior negativity (ERAN)” in the brain (Koelsch 2005). This suggests that the brain prefers predictability in music, both in terms of chord structure and rhythm: two characteristics to which the music of Burial does not conform. In terms of ‘meaning’ within music, Koelsch explains that music transfers and communicates ‘meaningful information.’ However, for the music to become meaningful, “the emergence of meaning based on the processing of musical structure requires integration of both expected and unexpected events into a larger, meaningful musical context” (Koelsch 2005). Therefore, regardless of whether the piece as a ‘whole’ makes ‘musical sense’, if a musical phrase works within that piece and its structure then listeners are able to gain meaning from the piece of music. ! 35
• 36. ! 3.3b) Homeless Burial’s track ‘Homeless’ appears somewhat more structured than ‘Broken Home’, with a distinct 4/4 shuffle pattern heard in the drums and a fairly steady tempo of 134.8BPM. However, many of the drum hits still sit very much off the gridlines at this tempo, suggesting that very little, if any, quantisation has been used. Compressed noise/vinyl crackle samples can be heard throughout the track, the “drizzly crackle that has become one of his sonic signatures” (Fisher, M. 2007). Much like in ‘Broken Home’, the vocals heard in ‘Homeless’ are pitch shifted, yet here they are also overdriven. In the breakdown section the vocals can be heard to be time-stretched and processed to the point at which clipping distortion occurs (the track itself does not clip; only the vocal processing suggests this). In a 2007 interview with ‘The Wire’, Burial claims to “remove voices from biography and narrative” and to “pitch down female vocals so they sound male, and pitching up male vocals so they sound like a girl singing” (Fisher, M. 2007), which explains the strange tonality of the pitched vocals. Burial’s use of vocal manipulation and morphology, and the way in which the vocals are processed, is extremely unusual, as he appears to use it to totally re-invent the sound of the individuals he samples. When listening to a vocal there are a number of traits a voice can convey to the listener, including the “age, sex and health-image of the utterer, the personality (or the pretended personality of the actor), the intent (friendly, malicious) and state of mind (angry, frightened)” (Wishart 2012), as well as the attitude or meaning a speaker aims to convey through the manner in which they sing or speak (Wishart 2012). However, the manner in which Burial approaches his vocal production and manipulation seems to totally defy and mystify the apparent character of the vocals. The listener is no longer able to distinguish any character traits from the original formant, pitch, or manner in which the vocals are spoken/sung. This isolates the words/phrases themselves, as the personality of the vocalist/speaker has long since been lost to the processing effects. The isolated vocal, however, still appears to retain its meaning for the listener, if not perhaps enhancing or altering the perceived manner or opinion of the actor or singer behind the original phrase. The earlier mention of Burial’s extraction of “voices from biography and narrative” (Fisher, M. 2007) suggests that he is taking poetic/narrative passages from spoken media and giving the phrases a new lease of life through his manipulative production processes, meaning that the vocals now have a musical tonality to them rather than that of the spoken voice. 36
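The gender-blurring shifts Burial describes reduce, in resampling terms, to simple frequency ratios. The figures below are a rough illustration only: the speech fundamentals are approximate typical ranges, and Burial's actual settings are unknown.

```python
def shift_fundamental(f0_hz, semitones):
    """New fundamental frequency after a shift of n semitones."""
    return f0_hz * 2 ** (semitones / 12.0)

FEMALE_F0 = 210.0          # roughly typical female speech fundamental, Hz
for shift in (-3, -5, -7):
    print(shift, round(shift_fundamental(FEMALE_F0, shift), 1))
# A -5 semitone shift moves ~210 Hz down to ~157 Hz, entering a roughly
# typical male range (~85-155 Hz), which is why the pitched voice reads
# as ambiguous rather than clearly female or male.
```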
• 37. ! Following on from the unconventional audio processing heard in the vocals, the track can also be heard to be cut at a transient point rather than at a zero crossing point, most noticeably at the breakdown-to-chorus transition, where the first attack of the chorus has been edited and cut as one region, meaning that all sounds start at an exact single point and the audio has lost its attack. In terms of structure, “Homeless” has a much more noticeable structure than that heard in “Broken Home”. The structure is as follows: • 3 bar intro • 8 bar verse • 28 bar chorus + extended end • 32 bar chorus + extended end • 20 bar chorus • 24 bar breakdown • 24 bar chorus • 8 bar transition section • 34 bar outro ! ! Figure 9: Homeless Structure ! Following on from the analysis of the two Burial tracks, the element that stood out most was the lack of rhythmic structure, particularly in “Broken Home”, in which I was unable to determine a definitive tempo other than by measuring the repetition of the musical phrases. This led to research into how Burial produces his music. Throughout many articles and interviews with the elusive Burial, it has become clear that the way in which he works is extremely unconventional to say the least, with Burial claiming to use ‘Sound Forge’ as his chosen ‘DAW’ for production. However, this is audio editing software predominantly used for finalising post-production work, with little to no MIDI 37
• 38. integration/sequencing. Audio files are recorded or imported and arranged without a ‘tempo grid’. “In essence, Sound Forge has always provided an efficient and well-featured environment within which to perform detailed editing of mono and stereo audio files. Basic editing tasks such as trimming, adding fades, normalizing and resampling can all be performed accurately and with ease, and file output formats cover all the usual standards, including MP3 encoding.” (Walden, J. 2007). It is clear that Sound Forge is editing software, not production software, by design, and given the absence of a tempo grid combined with audio-only import options (little MIDI integration), it quickly becomes clear why almost all tracks produced by Burial lack regular rhythm and do not match up well when placed over a tempo grid in any other DAW. It seems feasible that Burial may well use Sound Forge when arranging tracks, although to use it as a sequencer would be extremely time consuming and complex in terms of the processing of audio and re-sampling. Could it be argued that, in producing in this way, through the re-sampling of recorded material in an un-sequenced and disjointed way, Burial could be seen as the modern pioneer of ‘Musique Concrète’, creating his own “vivid audio portrait of a wounded South London, a semi-abstract sound painting of a city’s disappointment and anguish” (Fisher, M. 2007)? Much of Burial’s music appears to focus on the sonic aspects of sound, treating them as a painting of a scene or situation rather than as a song or musical work as such; his sound is a collective mourning nostalgia for a past which he feels has been lost in modern London. This idea of a sonic painting, rather than a musical work, relates very much to the work of Schaeffer, who developed his theory of composition centred around what technology was available, stating that any sound could be extracted from its environment and altered through manipulation techniques. This meant that any sound, regardless of its source, was available for use in a musical context. According to Schrader, Schaeffer manipulated a number of his sound sources through the use of ‘locked groove discs’, which he essentially used to create loops within music “where the effect would be to alter aspects of the recorded event itself to create complex rhythmic patterns, giving it a new lease of life in a sonic and musical context” (Schrader 1982). Similarly, he was also fond of speed-change manipulation of records, which would effectively change both the pitch and the envelopes of the recorded sound. Many of these techniques appear to relate very closely to the work of Burial, whose unusual samples, including vinyl crackle and recorded ambience, are used, 38
• 39. manipulated, and looped to create complex sonic textures within his music, similar to the looping ideas and concepts used in Schaeffer’s work. Pitch manipulation, similarly, is another aspect Schaeffer and Burial appear to have in common: where Schaeffer would pitch recordings of vinyl records up and down at will, Burial uses DAW pitch-shifting and time-stretching techniques within Sound Forge to manipulate vocal recordings of spoken word and literature. This ‘sonic concept’ is integrated throughout all aspects of his work, even down to a number of the song titles, including the likes of ‘Night Bus’, ‘Distant Lights’ and ‘In McDonalds’, suggesting that the music he produces is created to represent a particular environment, or a feeling related to that environment. ! Figure 10: Sound Forge 9 editing software (Walden, 2007) ! Overall, Burial’s production style and unconventional construction of his music is an ideal demonstration of the power of the DAW. Almost all of the techniques involved in the production of his music would not be possible without the editing capabilities offered by his chosen DAW. It would be impossible to re-create the rich and diverse soundscape heard within these two tracks using the MIDI studio alone. Effects such as time-stretching and pitch shifting are not offered by MIDI equipment in the way in which it 39
  • 40. would be necessary to re-create the sounds heard throughout both ‘Broken Home’ and ‘Homeless’, so in that respect, these songs are a brilliant example of the vast capabilities offered to producers by the DAW studio. ! ! ! ! 40
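As a footnote to the editing detail observed in 3.3b, where Burial's cuts land on transients rather than zero crossings, the conventional click-free edit snaps the cut point to the nearest zero crossing. A minimal sketch of that operation, assuming NumPy and a test tone standing in for real material:

```python
import numpy as np

def nearest_zero_crossing(audio, cut_index):
    """Return the sample index of the zero crossing closest to cut_index.
    Cutting where the waveform passes through zero avoids the click that
    a cut landing mid-transient (a sudden jump in level) produces."""
    signs = np.sign(audio)
    crossings = np.where(np.diff(signs) != 0)[0]   # indices just before a sign flip
    if len(crossings) == 0:
        return cut_index                            # silence/DC: nothing to snap to
    return int(crossings[np.argmin(np.abs(crossings - cut_index))])

sr = 44100
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 100 * t)                  # 100 Hz test tone
print(nearest_zero_crossing(wave, cut_index=12345)) # snaps to a nearby crossing
```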
• 41. 4. Conclusion 4.1) What does MIDI & DAW Equipment Offer the User? Following my practical investigation it appears that there are clear differences between MIDI and DAW production, down to the equipment, its connectivity, usability and the production scope offered. This study has demonstrated that, in terms of hardware versus software for production, it mainly comes down to the individual and their personal preference as to what they prefer to use, or what they feel most creative using. Many producers who have grown up using hardware synthesisers/MIDI equipment to create musical works often continue to favour the equipment they first learned with rather than the modern DAW based equipment. This was outlined by Derrick May in his views regarding the implementation of analogue hardware equipment into what would otherwise be digital DAW software based systems. He supported the feeling that the DAW and software based systems are too easy to ‘lean on’, and as a result do not challenge the producer to use his/her imagination in their production method, stating simply: “I don’t want a computer to tell me what I can and can’t do, I don’t want to have to fight a machine to tell me that I can’t do something.” (May 2006). However, in contrast to May, early computer music producer Jean-Claude Risset describes his embrace of new computer technologies in relation to his compositional style, believing that as computer technology advances, the sonic design constraints of sound creation and manipulation diminish, and as a result the scope for creativity increases in parallel with technological advancements (Risset: 1982). It appears that the DAW offers the tools to break free of the ‘strict rhythm boundaries’ (as demonstrated by the work of Burial and his use of ‘Sound Forge’) that constrain MIDI, where the equipment requires a central clock, tempo and time signature in order to organise, create and sequence a track. As a result, MIDI based tracks often follow a distinct and clear musical structure, as demonstrated by the analysis of “Ride On Time”, which is simple for the listener to interpret and predict in terms of its structure and progression. ! Despite this finding, it is obvious that most DAWs are developed to initially encourage users to work within a ‘time and tempo’ based organisation of track structures, most featuring a GUI (Graphical User Interface) based around a tempo grid on which audio/instrument/MIDI tracks are placed, arranged and edited (see figure 4). However, with certain DAW-only features (time stretching, elastic audio, pitch shifting etc.) the user 41
• 42. is able to create more within these guidelines compared with MIDI equipment, where the user must source all of the MIDI modules, in addition to the recording equipment required to record their performances (a time consuming and costly process). It is also the case that the physical connectivity of the modules may not be completely reliable, as demonstrated by the BPM inconsistencies seen in both “Strings of Life” and “Ride On Time”, which suggest that, despite encouraging the user to work within a strict time constraint, the equipment itself, once synchronised, actually struggled with the demands of the MIDI producer and the amount of interconnected synchronised equipment being sequenced at one time. ! These results would suggest that, in terms of reliability and scope for creativity, the DAW is the medium of production that should be favoured. With that in mind, it is also important to consider May’s idea of ‘knowing your equipment and how it behaves’, which is easier said than done when considering the vast capabilities of the DAW; in this respect it is easier and faster for an individual to learn a select set of MIDI modules than a whole DAW software program. ! 4.2) Users Versus Innovators An interesting and unexpected finding of the research and analysis processes in this paper was the clear divide between ‘users of music technology’ and ‘innovators through use of music technology.’ Throughout this study it has become clear that innovators, those who use the technology already available to them in new and creative ways, are the pioneers of the advancements in what new technology will offer. The techniques developed by innovators through experimentation with technology are what create exciting new developments in the music technology industry. These in turn filter through to become techniques used in mainstream pop production, which tends to use well-established techniques with new technology rather than directly promoting technical innovation. The artists examined throughout this investigation - Derrick May, Black Box, and Burial - all show elements of technical innovation through the use of the technology available to them to some degree. May with his use of ‘out-dated’ MIDI and pre-MIDI hardware equipment, used to manipulate and process his selection of collected piano and string samples in order to create a complex pioneering track which inspired the development of the Detroit Techno scene. Similarly, Black Box with their inventive and original use of ‘triggered vocal samples’ through MIDI based samplers in order to treat the 42
• 43. vocals as almost a percussive type element rather than a musical element. And finally Burial, with his totally unconventional use of Sound Forge’s capabilities and plugins to create his unusual, eerie sonic soundscapes, demonstrating the modern approach to Musique Concrète through the use of the DAW and computer technologies. ! This then leads on to psychology and musical meaning, an aspect that the production of Burial’s music in particular appeared to challenge. Burial has questioned all that is considered ‘essential’ in music production. His wide use of dissonant sonic ambience, an un-interpretable rhythmic structure and pitch shifted spoken word makes very little sense in terms of the spliced vocal phrases, yet Burial treats the sonic content as a musical tool, a significant breakthrough. Despite breaking pretty much every rule of traditional popular music production through his unorthodox approach to track creation, his work is still extremely popular, well renowned and admired. This therefore suggests that music popularity is often based not on musical meaning but on the creative innovation of an individual in realising their creative vision, regardless of whether that vision conforms to traditional and commonly used production practices or equipment use. ! To conclude, the research throughout this paper has highlighted the significant and vast change from the MIDI equipment and the MIDI studio used to create electronic music in the late 80s and early 90s. Data transmission bandwidth alone has advanced to the point where a modern connection such as Apple’s ‘Thunderbolt’, streaming at 10Gbit/s, is in the region of 400,000 times faster than MIDI’s 3.125KB/s: a phenomenal increase in data transmission. ! The differences don’t just lie in the speed at which one can now connect equipment. The DAW offers the ease of use of an ‘all in the box’ system, with a number of features which were previously unavailable, including elastic audio, drum replacement and auto-tune/pitch shifting, which have all contributed to the development of most modern production styles and techniques. The DAW also (more recently) offers simple portability of music sessions, either on a laptop that enables the musician to access their DAW on the go as and when they need to, or saved to an external hard drive/USB as a session file which can be run on any computer running the same DAW as the saved session. 43
This leads on to psychology and musical meaning, an aspect that the production of Burial's music in particular appears to challenge. Burial has questioned much of what is considered 'essential' in music production: his extensive use of dissonant sonic ambience, rhythmic structures that resist interpretation, and pitch-shifted spoken word whose spliced phrases carry little literal meaning all treat sonic content itself as a musical tool, a significant breakthrough. Despite breaking almost every rule of traditional popular music production through his unorthodox approach to track creation, his work remains extremely popular, renowned, and admired. This suggests that the popularity of music is often based not on its literal musical meaning but on the creativity of an individual in realising their vision, regardless of whether that vision conforms to traditional production practices or conventional equipment use.

To conclude, the research throughout this paper has highlighted the significant and vast change from the MIDI equipment and the MIDI studio used to create electronic music in the late 80s and early 90s to the modern DAW studio. Data transmission alone has advanced phenomenally: Apple's 'Thunderbolt' can stream up to 10 Gbit/s per channel, roughly 320,000 times MIDI's 31.25 kbit/s serial rate, which after framing delivers only about 3,125 bytes of data per second.
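As a sanity check on that comparison, a two-line calculation is given below (an illustrative sketch; only the two published rates are taken from the sources). A far larger ratio appears if Thunderbolt's bit rate is divided by MIDI's byte throughput, so the comparison is deliberately kept bit-for-bit.

```python
# Sketch: first-generation Thunderbolt versus MIDI 1.0 bandwidth.
# Both rates are published figures; the 10-bits-per-byte framing
# (1 start + 8 data + 1 stop bit) follows the MIDI 1.0 specification.

midi_bps = 31_250                  # MIDI 1.0 serial rate, bits per second
thunderbolt_bps = 10_000_000_000   # 10 Gbit/s per Thunderbolt channel

midi_bytes_per_s = midi_bps / 10   # ~3,125 usable bytes per second
ratio = thunderbolt_bps / midi_bps # like-for-like, in bits

print(f"MIDI throughput:  {midi_bytes_per_s:,.0f} bytes/s")
print(f"Speed ratio:      {ratio:,.0f}x")   # ~320,000x
```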
The differences do not just lie in the speed at which equipment can be connected. The DAW offers the convenience of an 'all in the box' system, with a number of features that were previously unavailable, including elastic audio, drum replacement, and auto-tune/pitch shifting, all of which have contributed to the development of most modern production styles and techniques. The DAW also, more recently, offers simple portability of music sessions, whether on a laptop that lets the musician access their DAW on the go, or as a session file saved to an external hard drive or USB stick that can be opened on any computer running the same DAW.

The studio itself has not fundamentally changed: the synthesizers, drum machines, samplers, and sequencers that were once there in the form of MIDI equipment during the 1980s and 90s are still there, but they now exist in the virtual form of DAW plugins.

What this paper has demonstrated, regardless of MIDI or DAW use, is that the electronic musicians who innovate and push the boundaries of music production techniques do so with technology that they know well, and often this equipment is not the latest or most technologically advanced, something both Derrick May and Burial have demonstrated. What creates advancements in music technology and its use are the innovators: those who know every aspect of the equipment they own, and who apply that knowledge to compose creatively and to inspire new techniques and uses for that equipment which might otherwise have been overlooked. What most musicians would describe as limitations in their equipment is often precisely what innovators seek to challenge and use to fuel the creativity that defines their musical style, much like Burial and his use of Sound Forge. The MIDI and DAW studios differ vastly in what they offer the user; ultimately, however, it is only the user and their knowledge that limits the creative use of the equipment available to them.
5. Bibliography

Apple (n.d.) 'Thunderbolt, The most advanced I/O ever.' Available at: https://www.apple.com/thunderbolt/ (Accessed 19 April 2015)

Blackdown (21 March 2006) 'Burial' Available at: http://blackdownsoundboy.blogspot.co.uk/2006/03/soundboy-burial.html (Accessed 11 February 2015)

Clash Music (16 February 2012) 'Untrue: Burial' Available at: http://www.clashmusic.com/feature/untrue-burial (Accessed 11 February 2015)

Collins, N. and d'Escrivan, J. (2010) The Cambridge Companion to Electronic Music, Cambridge: Cambridge University Press

Detroit Techno Militia (n.d.) 'Derrick May – The Secret of Techno' Available at: http://www.detroittechnomilitia.com/main/index.php/techno-history/interviews/180-derrick-may-the-secret-of-techno (Accessed 7 February 2015)

Discogs (n.d.) 'Rhythim Is Rhythim – Strings of Life' Available at: http://www.discogs.com/Rhythim-Is-Rhythim-Strings-Of-Life/master/695 (Accessed 17 March 2015)

Encyclopædia Britannica (2 December 2013) 'Electronic Music' Available at: http://www.britannica.com/EBchecked/topic/183823/electronic-music/27524/Establishment-of-electronic-studios (Accessed 23 March 2015)

FACT Magazine (1 July 2012) 'Burial: "It's quite a simple thing I want to do"' Available at: http://www.factmag.com/2012/07/01/interview-burial/ (Accessed 11 February 2015)

Freaky Trigger (4 October 2010) 'Black Box – "Ride On Time"' Available at: http://freakytrigger.co.uk/popular/2010/10/black-box-ride-on-time/ (Accessed 17 April 2015)

The Guardian (4 November 2012) 'UK's million-selling singles: the full list' Available at: http://www.theguardian.com/news/datablog/2012/nov/04/uk-million-selling-singles-full-list (Accessed 17 April 2015)
The Guardian (26 October 2007) '"Only five people know I make tunes": Is Burial the most elusive man in music?' Available at: http://www.theguardian.com/music/2007/oct/26/urban (Accessed 30 October 2014)

Huber, D. (2007) The MIDI Manual: A Practical Guide to MIDI in the Project Studio, Oxford: Linacre House

The Independent (25 March 2011) 'Loleatta Holloway: Much-sampled disco diva who sued Black Box over their worldwide hit "Ride on Time"' Available at: http://www.independent.co.uk/news/obituaries/loleatta-holloway-muchsampled-disco-diva-who-sued-black-box-over-their-worldwide-hit-lsquoride-on-timersquo-2252360.html (Accessed 15 January 2015)

Koelsch, S. (2005) 'Neural substrates of processing syntax and semantics in music' Available at: http://www.sciencedirect.com/science/article/pii/S0959438805000371 (Accessed 18 April 2015)

Langford, S. (2014) Digital Audio Editing: Correcting and Enhancing Audio with DAWs, Abingdon: Focal Press

Leider, C. (2004) Digital Audio Workstation, New York: McGraw-Hill Professional

Moylan, W. (2002) The Art of Recording, Woburn: Focal Press

Nokes, S. and Kelly, D. (2003) The Definitive Guide to Project Management, Harlow: Pearson Education Ltd

Red Bull Music Academy (2006) 'Lecture: Derrick May (Melbourne 2006)' Available at: http://www.redbullmusicacademy.com/lectures/derrick-may--it-is-what-it-isnt (Accessed 28 March 2015)

Roads, C. (1996) The Computer Music Tutorial, Massachusetts: The MIT Press

Rumsey, F. (2007) Desktop Audio Technology, Oxford: Focal Press
Russ, M. (2011) Sound Synthesis and Sampling (Third Edition), Oxford: Focal Press

Schrader, B. (1982) Introduction to Electro-Acoustic Music, London: Prentice-Hall

Sound on Sound (September 1996) 'Liam Howlett: The Prodigy & Firestarter' Available at: http://www.soundonsound.com/sos/1996_articles/sep96/prodigy.html (Accessed 30 October 2014)

Sound on Sound (October 2004) 'Liam Howlett: Recording Always Outnumbered, Never Outgunned' Available at: http://www.soundonsound.com/sos/Oct04/articles/prodigy.htm (Accessed 30 October 2014)

Thaut, M. H. (2005) Rhythm, Music and the Brain, Abingdon: Routledge

Vail, M. (2000) Vintage Synthesizers, San Francisco: Miller Freeman Books

Viñao, A. (2010) 'Artists Statements II' in Collins, N. and d'Escrivan, J. (eds.) The Cambridge Companion to Electronic Music, Cambridge: Cambridge University Press

Voorvelt, M. (2000) 'New Sounds, Old Technology', Organised Sound

Walden, J. (2007) 'Sony Sound Forge 9' [online image] Available at: http://www.soundonsound.com/sos/jun07/articles/soundforge9.htm (Accessed 25 March 2015)

White, P. (2000) The Sound On Sound Book of Desktop Digital Studio, London: Sanctuary Publishing Limited

White, P. (2000) Basic MIDI (SMT), London: Bobcat Books Limited

White, P. (2003) MIDI For The Technophobe, London: SMT
The Wire (December 2007) 'Burial', The Wire 286, 28-31, London: The Wire Magazine Limited

The Wire (December 2012) 'Burial: Unedited Transcript' Available at: http://www.thewire.co.uk/in-writing/interviews/burial_unedited-transcript (Accessed 11 February 2015)

Wired (2004) 'Six Machines That Changed The Music World' Available at: http://archive.wired.com/wired/archive/10.05/blackbox_pr.html (Accessed 21 February 2015)

Wishart, T. (2012) Sound Composition, York: Orpheus the Pantomime