Archiving and disseminating sound archives – 3. Analysis and treatment of sound data
1. Alexandre Abergel, Radio France
alexandre.abergel@gmail.com
Véronique Ginouvès, MMSH
veronique.ginouves@univ-amu.fr
Archiving and
disseminating
sound archives – 3:
Analysis and
treatment of sound data
2. Our program
1. Processes and procedures in digitizing
sound materials
2. Managing sound data
3. Analysis and treatment of sound data
4. Collecting sound data for digital storage
and dissemination
3. 3. Analysis and treatment of
sound data
1. Acquire the skills to design a sound
(listening) archive
2. Master Audacity by analyzing sound files
3. Practice common standards used for
sound archives
4. Practice treating sound files with Audacity
5. Learn advanced sound treatment
techniques
4. What is a sound archive?
How would you summarize our
previous exchanges?
5. DO YOU REMEMBER ?!
There are different kinds of media: analog
discs (78/33/45 rpm), reel-to-reel tapes
recorded at different speeds, audiocassettes,
MiniDiscs, DATs.
Handling media: each item is unique, and its
fragility demands careful handling
Identifying: with your eyes and your ears
Inventorying & Prioritizing
And… What are the 6 values not to forget ?
Ethics rules; Collection integrity; Contextualization; Rights of present users as
well as of future users; Rational use of available tools; Standards and catalogs.
6. 3. ANALYSIS AND TREATMENT
OF SOUND DATA
3.1. Concrete example of document
treatment involved in listening to a sound
archive.
3.2. Master Audacity by analyzing sound files
- A technical course in using Audacity to
analyze sound.
3.3. Learn common standards used in sound
archival practices.
7. LISTEN TO AN ARCHIVE
RECORDED BY TBC
AND CATALOGUE IT
8. ABOUT METADATA
The word "metadata" means "data about data".
Metadata articulates a context for objects of
interest -- "resources" such as Wave or MP3 files,
library books, images… -- in the form of "resource
descriptions".
As a tradition, resource description dates back to
the earliest archives and library catalogs. The
modern "metadata" field that gave rise to Dublin
Core and other recent standards emerged with the
Web revolution of the mid-1990s.
http://dublincore.org/metadata-basics
9. Inventory the files
Give a name to your files (unique
accession number)
Organize the files according to TBC
use.
Physical items like discs,
cassettes, tapes, DAT, minidiscs
are different from the content.
10. Prioritize your digitization
You have to choose what to digitize first, for
example :
- Collections from Tanzania ;
- Collections about struggles in Africa ;
- Older collections ;
- Sounds which can easily be disseminated
online (with legal and ethical issues resolved) ;
- Etc.
Organize your digitizing campaign by collection :
define them before digitizing.
11. What kind of content to
catalogue ?
You can choose the cataloging level according to your
priorities:
- The collection (An artificial accumulation of materials
devoted to a single theme, person, event, or type of
document acquired from a variety of sources)
- The item (a speech, a concert, an enquiry…)
- The track (a song, a tale, a proverb…).
Digitization makes it possible to listen to recordings in
their entirety, or piece by piece. Always remember that
although sound is continuous information, digitization makes
it possible to pinpoint the exact passage you are interested in,
much like turning to a page in a book.
12. Inventory unpublished
sound recordings
You have to inventory two
elements :
●The physical media
●The content
The most important element is what the media
contains (an interview, a concert, a radio show, a
speech, a proverb, an instrumental piece,
sayings, anecdotes…)
13. Today, we are witnessing the
death of physical media:
tapes, discs, MiniDiscs, DATs,
compact discs…
Only file data will last.
What identifies a digitized file is its content, not the
serial (shelf) number of the physical media.
REMEMBER : You can have many different kinds of
content items on one physical media form.
Inventory the file with a unique accession number.
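Accession numbers can be generated mechanically once a scheme is agreed on. The Python sketch below assumes a hypothetical scheme (collection code + year of digitization + zero-padded sequence); your archive's own convention may differ.

```python
# Minimal sketch of a unique accession-number generator.
# The scheme (CODE_YEAR_NNNN) is an invented example, not a standard.
def accession_number(collection_code, year, sequence):
    """e.g. TAN_2012_0001: collection code, year, zero-padded sequence."""
    return f"{collection_code}_{year}_{sequence:04d}"

# Number three digitized files from a hypothetical Tanzania collection.
numbers = [accession_number("TAN", 2012, n) for n in range(1, 4)]
print(numbers)  # ['TAN_2012_0001', 'TAN_2012_0002', 'TAN_2012_0003']
```

Zero-padding keeps the files in order when sorted alphabetically, which matters once a collection grows past a few dozen items.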
14. DUBLIN CORE METADATA
HTTP://DUBLINCORE.ORG/DOCUMENTS/DCES
Titre : Title - Title of the document (collection, item, track)
Responsabilités : Creator - An entity primarily responsible for making the
resource (informant, interviewer).
Droits : Rights - Ethical and legal issues
Langue : Language - A language of the resource (using a chosen, "monitored"
vocabulary)
Date : Date - YYYY-MM-DD
Type de doc : Type - Sound
Format : Format - Information about the physical document (analog and digital)
Localisation : Coverage - The geographic or temporal topic of the resource,
the spatial applicability of the resource, or the jurisdiction under which the
resource is relevant.
Identifiant : Identifier - An unambiguous reference to the resource within a
given context.
15. Résumé : Description - Description may include but is not limited to: an
abstract, a table of contents, a graphical representation, or a free-text
account of the resource.
Descripteurs : Subject - The topic of the resource. Typically, the subject
will be represented using keywords, key phrases, or classification codes.
Recommended best practice is to use a chosen, monitored vocabulary.
Notice en lien : Relation - A related resource. Recommended best practice is
to identify the related resource by means of a string conforming to a formal
identification system.
Source : Source - A related resource from which the described resource is
derived, in whole or in part. Recommended best practice is to identify the
related resource by means of a string conforming to a formal identification
system.
Dublin Core metadata
http://dublincore.org/documents/dces
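A record built from the elements above can be serialized in a few lines. This Python sketch uses the standard-library `xml.etree` module; only the element names come from the DCES list, and all field values are invented examples.

```python
# Sketch: serialize a minimal Dublin Core record for one sound item.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"  # Dublin Core element namespace

def dublin_core_record(fields):
    """fields: dict mapping DC element names (title, creator, ...) to values."""
    ET.register_namespace("dc", DC_NS)
    root = ET.Element("record")
    for element, value in fields.items():
        child = ET.SubElement(root, f"{{{DC_NS}}}{element}")
        child.text = value
    return ET.tostring(root, encoding="unicode")

# Invented example values for a hypothetical catalogued interview.
xml = dublin_core_record({
    "title": "Interview on pastoral life",
    "type": "Sound",
    "date": "2012-06-15",
    "identifier": "TAN_2012_0001",
})
print(xml)
```

Keeping the record as a plain dictionary first makes it easy to target other output formats (CSV inventories, a catalog database) from the same data.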
17. WHAT IS SOUND?
There's no sound in space.
We hear sounds because our ears are sensitive to these pressure waves.
Perhaps the easiest type of sound wave to understand is a short, sudden
event like a clap. When you clap your hands, the air that was between your
hands is pushed aside.
This increases the air pressure in the space near your hands, because more
air molecules are temporarily compressed into less space. The high pressure
pushes the air molecules outwards in all directions at the speed of sound,
which is about 340 meters per second. When the pressure wave reaches your
ear, it pushes on your eardrum slightly, causing you to hear the clap.
Sound is a pressure wave in air.
If there were no air, we would not be able to hear sounds.
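The 340 m/s figure makes distances easy to reason about. A minimal Python sketch using the speed of sound quoted above (the 17 m distance is an arbitrary example):

```python
# Sketch: how long a clap takes to reach a listener,
# using the speed of sound from the slide (~340 m/s in air).
SPEED_OF_SOUND = 340.0  # meters per second

def travel_time(distance_m):
    """Seconds for a pressure wave to cover distance_m meters."""
    return distance_m / SPEED_OF_SOUND

print(round(travel_time(17.0), 3))  # 0.05 -> a clap 17 m away arrives 50 ms later
```

This delay is why a distant hand clap is heard noticeably after it is seen.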
18. A hand clap is a short event that causes a single pressure
wave that quickly dies out. The image above shows the
waveform for a typical hand clap. In the waveform, the
horizontal axis represents time, and the vertical axis is for
pressure. The initial high pressure is followed by low
pressure, but the oscillation quickly dies out.
19. HOW IS SOUND RECORDED?
A microphone consists of a small membrane that is free to
vibrate, along with a mechanism that translates movements
of the membrane into electrical signals.
So acoustical waves are translated into electrical waves by
the microphone.
Typically, higher pressure corresponds to higher voltage,
and vice versa.
A tape recorder translates the waveform yet again - this time
from an electrical signal on a wire, to a magnetic signal on a
tape. When you play a tape, the process gets performed in
reverse, with the magnetic signal transforming into an
electrical signal, and the electrical signal causing a speaker
to vibrate.
20. HOW IS SOUND RECORDED DIGITALLY ?
Recording onto a tape is an example of analog recording.
Audacity deals with digital recordings - recordings that have been
sampled so that they can be used by a digital computer, like the
one you will be using.
Digital recording has a lot of benefits over analog recording.
Digital files can be copied as many times as you want, with no loss
in quality, and they can be burned to an audio CD or shared via the
Internet.
Digital audio files can also be edited much more easily than analog
tapes.
The main device used in digital recording is an Analog-to-Digital
Converter (ADC), present in a sound card.
21. HOW IS SOUND RECORDED DIGITALLY ?
The ADC captures a snapshot of the electric
voltage.
By capturing the voltage thousands of times per
second, you can get a very good approximation of
the original audio signal:
Each dot in the figure above represents one audio sample.
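The snapshot idea can be imitated in code. This Python sketch "samples" a 440 Hz sine wave at 8000 Hz; each returned value corresponds to one dot on the sampled waveform. The two frequencies are arbitrary choices for the example, not values from the slides.

```python
# Sketch of what an ADC does in principle: take amplitude snapshots
# of a continuous signal at a fixed sample rate.
import math

def sample_sine(freq_hz, sample_rate_hz, n_samples):
    """Return n_samples amplitude snapshots of a sine wave."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

samples = sample_sine(440, 8000, 8)  # eight "dots" of a 440 Hz tone
print(len(samples), samples[0])
```

A real ADC snapshots a voltage rather than a mathematical function, but the output is the same kind of object: a list of numbers taken at regular time intervals.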
22. HOW IS SOUND RECORDED DIGITALLY ?
There are two factors that determine the quality of a digital
recording:
Sample rate: the rate at which the samples are captured or
played back, measured in Hertz (Hz), or samples per second.
Bit depth: essentially the number of digits in the
digital representation of each sample.
Higher sampling rates allow a digital recording to accurately
record higher frequencies of sound.
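The two factors can be put into numbers. A small Python sketch, assuming the common CD values of 16-bit depth and a 44,100 Hz sample rate (these values are standard facts about CD audio, not taken from the slides):

```python
# Bit depth fixes how many distinct amplitude levels exist;
# sample rate fixes the highest representable frequency
# (the Nyquist limit, half the sample rate).
def quantization_levels(bit_depth):
    """Number of distinct amplitude values at a given bit depth."""
    return 2 ** bit_depth

def nyquist_limit(sample_rate_hz):
    """Highest frequency a given sample rate can represent, in Hz."""
    return sample_rate_hz / 2

print(quantization_levels(16))  # 65536 levels at CD bit depth
print(nyquist_limit(44100))     # 22050.0 Hz, just above human hearing
```

This is why 44,100 Hz became a standard: its Nyquist limit sits comfortably above the ~20,000 Hz ceiling of human hearing.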
23. STANDARD FILE FORMATS
FOR PCM AUDIO
There are two main types of audio files on a computer:
- PCM files. PCM stands for Pulse Code Modulation. Common examples of
PCM files are WAV files, AIFF files, and Sound Designer II files.
Audacity supports WAV, AIFF, and many other PCM formats.
- Compressed files. These use sophisticated algorithms to represent the
essential frequencies of the audio signal in far less space.
Examples include MP3 (MPEG-1, Layer 3), Ogg Vorbis,
and WMA (Windows Media Audio).
Please remember that MP3 does not store uncompressed
PCM audio data.
When you create an MP3 file, you are deliberately losing
some quality in order to use less disk space.
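As an illustration of uncompressed PCM, Python's standard-library `wave` module can write a WAV file directly. The tone, sample rate, and filename below are invented for the example.

```python
# Sketch: write one second of a 440 Hz tone as 16-bit mono PCM
# to a WAV file, then read the header back.
import math
import struct
import wave

SAMPLE_RATE = 8000  # samples per second (invented example value)
BIT_DEPTH = 16      # bits per sample -> 2 bytes each

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)               # mono
    wav.setsampwidth(BIT_DEPTH // 8)  # bytes per sample
    wav.setframerate(SAMPLE_RATE)
    for n in range(SAMPLE_RATE):      # one second of audio
        value = int(32767 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE))
        wav.writeframes(struct.pack("<h", value))  # little-endian 16-bit

with wave.open("tone.wav", "rb") as wav:
    print(wav.getnframes(), wav.getframerate())  # 8000 8000
```

Every sample is stored verbatim, which is exactly what makes PCM files large compared with MP3 or Ogg Vorbis, and also what makes them the right master format for archiving.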
29. Crédits
Slides 1 and 8: Sound archives Marceau Gast, MMSH, photogr.
Laure Principaud, January 2010.
Slide 1: Digitizing sound, MMSH, photogr. Serge Mercier, 2012.
Slides 1 and more : July 12, 1967 in northwest Burundi, Sekere,
Emile Mworoha and Jean-Pierre Chrétien
Slide 11: Introduction to Archival Terminology by Maygene F.
Daniels (1984)
http://www.archives.gov/research/alic/reference/archives-
resources/terminology.html
Slides about Audacity: Wikipedia
Slide 18: Sharat Ganapati, Hands Clapping at the Game, 2006 (CC
BY 2.0), https://www.flickr.com/photos/frozenchipmunk/186912531
Slide 49: Nationaal Archief, Opname van een hoorspel /
Recording a radio play, 1949, no known copyright restrictions