
Fundamentals of sound module (basic level)


The "Fundamental of sound" module is focused to adults learners interested in exploring the possibilities of managing digital sound.

This module is part of a set of materials designed and developed within the Telecentre Multimedia Academy project (Lifelong Learning - Grundtvig, 2012-2014).

The Telecentre Multimedia Academy is a project in which Fundación Esplai worked with a consortium of 8 partners from Croatia, Latvia, Lithuania, Romania, Serbia and Hungary, coordinated by Telecentre Europe.

You can learn more about the Telecentre Multimedia Academy project at:
http://fundacionesplai.org/e-inclusion-internacional/tma/



  1. FUNDAMENTALS OF SOUND MODULE - ADVANCED COURSE OF MEDIA LITERACY
  2. BASIC COURSE OF MEDIA LITERACY, AUGUST 2014
  Authors: Skaidrite Bukbãrde, Žarko Čižmar, Antra Skinča, Ivan Stojilović.
  Partners: Telecentre Europe, DemNet, Fundatia EOS - Educating for An Open Society, IAN, Telecentar, LIKTA, Langas ateit, Fundación Esplai.
  Coordination of the content development: Alba Agulló
  Graphic design: Fundación Esplai (www.fundacionesplai.org) & Niugràfic (www.niugrafic.com)
  Licensed under Creative Commons Attribution-NonCommercial-ShareAlike (by-nc-sa). To obtain permission beyond this license, contact http://tma.telecentre-europe.org/contacts
  Access to the Multimedia Toolkit: http://tma.telecentre-europe.org/toolkit
  LEGAL NOTICE: This project has been funded with support from the European Commission. This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
  3. Index
  2.1 Introduction
  2.2 The basics of sound: 2.2.1 Definition; 2.2.2 The characteristics of sound; 2.2.3 Volume; 2.2.4 Pitch; 2.2.5 Tone; 2.2.6 The spreading of sound; 2.2.7 Spreading speed; 2.2.8 Noise-signal ratio; 2.2.9 Dynamics
  2.3 Sound design: 2.3.1 Analogue and digital signals; 2.3.2 The sound of the computer, the sound card; 2.3.3 Making a sound recording; 2.3.4 Cutting and mixing sound; 2.3.5 Audio output devices
  2.4 Sound recording tools: 2.4.1 The sound transmission chain; 2.4.2 Microphones; 2.4.3 Cables; 2.4.4 Sound mixer; 2.4.5 Amplifiers; 2.4.6 Sound card; 2.4.7 Computer; 2.4.8 The structure of a simple digital radio
  2.5 Microphones: 2.5.1 The grouping of microphones; 2.5.2 Wireless microphones
  2.6 Audio editing: 2.6.1 Obtaining sound - Digital recording; 2.6.2 Cutting the audio material; 2.6.3 The relationship of music and speech; 2.6.4 Background music on the radio; 2.6.5 Audio effects; 2.6.6 Archiving and converting sound files; 2.6.7 If we think we are finished...
  2.7 Distribution of sound files: 2.7.1 Compressed sound files; 2.7.2 Distribution on the internet
  2.8 Radio: 2.8.1 The history of radio; 2.8.2 Types of radio shows; 2.8.3 The characteristics of public service radios; 2.8.4 The characteristics of commercial radios; 2.8.5 Characteristics of small community radios; 2.8.6 Genre theory
  2.9 Activities
  4. 2.1 Introduction
  We all listen to music and the radio, and we have all probably attended concerts or weekend parties with music. Our entertainment suffers a great loss if what we hear is of poor quality: the microphone shrieks, you can't make out the lyrics, or the sound is crackling. Since sound is perceived through hearing, it is very important that whatever we sense with our ears should be pleasant. Modern technology offers us opportunities to hear sound at normal quality with the help of various means of sound storage. To make the most of all this, you might need a training course where you can learn the basics a sound engineer needs to know.
  The task of the sound engineer is to record sound in a mechanical or electronic way, to modify it if necessary, and then to present it. This is a very creative activity, since the ways of recording, its tools, post-production and the mixing of sound tracks all allow for individual ideas. Among many other things, you must be familiar with sound storage formats, analogue or digital multitrack recording devices, workstations, tools which help modify sound (e.g. compressor, limiter, equaliser) and power amplifiers; and you will also need some computer skills.
  This basic-level course offers all of these things, besides preparation for radio editing and presenting. By the end of the course, you will be able to tell the difference between an interview and a report, you will know how to edit news, and you will be familiar with the various genres of radio. The main objective of our basic audio training is to master and fully understand radio work processes, to improve digital skills and creativity, and to become familiar with the media. All in all, you will learn the basics of a creative activity, which will enable you to work for a community radio station or a newspaper.
  2.2 The basics of sound
  2.2.1 Definition
  Sound is a collective concept with several meanings. It can mean a wave phenomenon independent of perception, but it also means the subjective perception of sound. According to present-day physics, sound is a wave which occurs in an elastic medium and is related to mechanical vibrations.
  (Figure: The spreading of sound)
  5. A source of sound can generally be anything that creates mechanical vibration in the medium. We differentiate between primary and secondary sound sources.
  2.2.1.1 Primary sound sources
  These are mostly elastic solid bodies: strings, rods, discs or air columns. Other things can also cause sudden pressure differences, for example an explosion, or streaming air hitting an obstacle; this is what we experience, for instance, when the wind blows.
  2.2.1.2 Secondary sound sources
  Often the sound source itself creates such a weak sound wave that it is very hard to perceive, but it causes another, well-radiating body to vibrate, and this amplifies the sound. This second body functions as a secondary sound source. We see secondary sound sources in action in the case of many instruments, where the body of the instrument works as a secondary sound source.
  2.2.2 The characteristics of sound
  According to measurements, the perception of sound is usually created in humans by mechanical vibrations (of sufficient amplitude) whose frequency lies between 16 Hz and 20,000 Hz. Vibration with a lower frequency is called infrasound, and vibration with a higher frequency is called ultrasound.
  2.2.3 Volume
  In a sound wave, as in every kind of wave, there is an energy flow, since at a specific point in space the particles of the medium are caused to vibrate. The energy flow is most easily characterised by how much sound energy flows through a unit of surface during a unit of time.
  Our ears are not equally sensitive to sounds of different frequencies. That is why, at different frequencies, the hearing threshold appears at different amplitudes.
  2.2.4 Pitch
  Pitch depends on the frequency of the sound wave (the vibration produced by the particles of the medium) in such a way that a higher sound corresponds to a higher frequency. Consequently, frequency is the objective measure of pitch.
  2.2.5 Tone
  We can differentiate between the sounds of various instruments even if they play at the same pitch. We also recognise people's voices, even if they are singing a note at the same pitch. The explanation is that we notice a "shade" of difference between their voices; this shade is the tone (timbre).
  6. 2.2.6 The spreading of sound
  Sound, just like every wave, means the spreading of a state of vibration within a medium. Thus the spreading of sound is only possible if there is some kind of medium (air, gas, fluid or solid material) in which the state of vibration may spread. Sound cannot spread in a vacuum, for example; in outer space an explosion or the sound of a spaceship's engine cannot be heard.
  The sound wave loses energy to different degrees in different materials; as we say, it is absorbed to a different extent in different materials. Intensity is reduced in every material, only the degree of reduction differs. For audible sounds this largely depends on frequency: higher sounds are absorbed sooner than deeper ones. That is why, when music is played loudly at the neighbour's place, we only hear the lower, "stomping" sounds. Only materials which absorb energy even over a short distance are good for sound-proofing. The best sound insulator is a vacuum, i.e. evacuated space.
  2.2.7 Spreading speed
  In air at 0 °C, with normal pressure and moisture content, the spreading speed of sound is 331.5 m/s; in air at 15 °C it is 340 m/s. In fluids, the speed of sound is usually higher than in gases. Compared to fresh water, the speed in sea water is higher, and it increases with depth.
  2.2.8 Noise-signal ratio
  During operation, a small random voltage, which we call noise voltage, is generated inside the electronic parts of the amplifier. Noise voltage deteriorates sound quality, as it is added to the useful signal. We can observe something like this when music is played at high volume and the music player switches itself off, but the speakers are still buzzing. The signal-to-noise ratio is therefore an important parameter of high-quality sound recording.
  In the technical sense, it is the quotient of two powers: the signal (information) power and the background noise power. It is expressed on the logarithmic decibel scale, i.e. SNR(dB) = 10 · log10(P_signal / P_noise).
  2.2.9 Dynamics
  The dynamics of a transmission channel is expressed by the ratio of two values: the maximum value of the impeccably reproducible output signal, and the maximum value of the output signal still perceived as "soundless". Dynamics is limited from above by the maximum drive level, and from below by the noise floor. That is to say, in the case of a given sound system, we get a higher dynamic value (and with it, better sound quality) if, during amplification, the difference between the maximum output and the noise floor is large.
  The chart below shows examples of volumes encountered in practice. In this chart, the relevant dB values were determined by comparing measured Sound Pressure Levels (SPL).
  7. dB (SPL): Source (with distance)
  194: theoretical upper limit of sound waves at 1 atmosphere of pressure
  180: missile engine from 30 m; the explosion of the Krakatoa volcano from 160 km (100 miles) in the air
  150: jet engine from 30 m
  140: a gunshot from 1 m
  120: pain threshold; train horn from 10 m
  110: accelerating motorcycle from 5 m; chainsaw from 1 m
  100: jackhammer from 2 m; inside a disco
  90: a noisy workshop; heavy truck from 1 m
  80: vacuum cleaner from 1 m; sidewalk of a busy street in heavy traffic
  70: heavy traffic from 5 m
  60: inside an office or restaurant
  50: inside a quiet restaurant
  40: populated area at night
  30: completely silent theatre
  10: human breath from 3 m
  0: human hearing threshold (for healthy ears); the sound of a mosquito's flight from 3 m
  2.3 Sound design
  2.3.1 Analogue and digital signals
  Analogue signals change continuously in signal level, time and amplitude alike. A digital signal consists of a series of impulses, as opposed to the continuous nature of the analogue signal.
  (Figure: The digitisation (blue) of the analogue signal (green))
  8. This means that digitised sound never contains every detail of the original analogue sound, only samples of it. Since sound can be divided into an endless number of instants in time, we would not be able to store such an amount of samples. Digitised sound, even though it does not contain the entire original sound, generally makes an impression of better quality than the original analogue sound; strictly speaking, of course, its quality is not better. The cause of the better and fuller effect is the greater signal-to-noise ratio and the larger dynamic range.
  The characteristics of digital sound are the following:
  • impervious to temperature and voltage fluctuation;
  • quite impervious to the noises of the transmission channel;
  • great signal transmission speed;
  • it can be copied any number of times without degradation of quality;
  • greater signal-to-noise ratio and dynamic range;
  • no distortion of the signal;
  • the digital signal is sensitive to data loss, which requires the use of error-correction circuits;
  • the circuits that process the signals are complicated.
  2.3.1.1 The process of sound digitisation
  In the course of sound digitisation, analogue signals are transformed into a series of discrete impulses in time. The information content of the amplitude values is carried by a binary-coded series of code words. The process is called Pulse Code Modulation (PCM) and consists of four steps.
  (Figure: Steps of sound digitisation: analogue signal → band limiting → sampling → quantisation → coding → PCM signal)
  9. The quality of digitisation is determined by two factors:
  • sampling frequency: this indicates how frequently samples are taken from the continuously changing signal (number of samples per second);
  • sample size (resolution): how many bits each sample consists of.
  Sampling. At given intervals, we sample the analogue signal and read its voltage. These values are not yet usable for digital processing, since we still receive continuous information.
  The signal can be fully restored if the sampling frequency is at least twice as high as the highest frequency component occurring within the signal. This sampling theorem requires some explanation, but it is easy to get the idea.
  (Figure: Sampling)
  The frequency range of human hearing is between 16 Hz and 20,000 Hz. This means that the greatest frequency occurring in the analogue signal is 20,000 Hz. Since the theorem says that we must sample at at least double this frequency, the sampling frequency will be 40,000 Hz, which means that we must take a minimum of 40,000 samples from the sound per second. According to the Hi-Fi standard, 44,100 Hz is the reference value, but in professional digitisation the applied values can be 48 kHz, 96 kHz or even 192 kHz. Of course, the higher the sampling frequency, the higher the quality.
  Quantisation. The second step of digitisation is quantisation; in its course we determine the resolution of the samples. The more levels the voltage range of the analogue signal is divided into, the more accurately it can be reconstructed after the A/D conversion. Today's sound cards are able to work at 16-24 bit (in extreme cases, 64 bit) resolution, but based on the Hi-Fi standard, a 16 bit resolution is sufficient for restoring the original sound.
  Analogue-to-digital conversion. In the third step of sound digitisation, the values taken in the course of sampling are passed to the digitising algorithm; at this stage the values are represented in the decimal system.
  Coding. In the course of coding, the momentary decimal values of the samples taken from the sound are converted into binary codes.
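The four PCM steps above can be sketched in a few lines of code. This is only an illustration, not part of the module's toolkit: it assumes the third-party NumPy package and digitises a synthetic 1 kHz tone at the Hi-Fi reference values (44,100 Hz, 16 bit) described above.

```python
# A minimal sketch of sampling, quantisation and coding (band limiting is
# assumed to have been done beforehand). Requires the third-party NumPy
# package; the "analogue" input here is a synthetic 1 kHz sine wave.
import numpy as np

SAMPLE_RATE = 44_100          # samples per second (Hi-Fi reference value)
BIT_DEPTH = 16                # bits per sample
DURATION = 0.01               # seconds of signal to digitise

# 1) Sampling: read the signal value at regular intervals.
t = np.arange(0, DURATION, 1 / SAMPLE_RATE)
analogue = np.sin(2 * np.pi * 1000 * t)           # values between -1.0 and 1.0

# 2) Quantisation: map each sample onto one of 2**16 = 65,536 levels.
levels = 2 ** (BIT_DEPTH - 1) - 1                 # 32,767 for 16-bit audio
quantised = np.round(analogue * levels).astype(np.int16)

# 3) + 4) A/D conversion and coding: each quantised value becomes a binary
# code word; the resulting stream is what a PCM (e.g. WAV) file stores.
first_code_word = format(int(quantised[0]) & 0xFFFF, "016b")
print(f"{len(quantised)} samples taken, first code word: {first_code_word}")
```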
  10. 2.3.2 The sound of the computer, the sound card
  The general sound tool of the computer is the sound card. Sound cards offer several functions, but their two main ones are making digital sound files audible and digitising speech or other sounds. In order to record at good and reliable quality, professional users must purchase sound cards developed for special needs, and the digitising tools of a library should also be of this kind.
  Compressed and uncompressed sound formats: the WAV format is one of the data formats for digital audio files. As opposed to MP3 and other data formats, the WAV format usually does not compress the audio data.
  2.3.3 Making a sound recording
  2.3.3.1 Making a sound recording with a free online service
  Vocaroo is a very practical little free online programme with a simple interface where we can make quick sound recordings even without knowing English. The recording can easily be shared with others, we can embed it anywhere, or we can download it in various sound formats. The best thing is that we don't even have to register to make a recording! http://vocaroo.com/
  SoundCloud is a professional sound distribution portal where, after registration, we can store and share our sound materials. At the same time, it is an online sound recording programme, because we can even make sound recordings with it directly through the mic of our computer. We can store about two hours of sound on SoundCloud completely free of charge. https://soundcloud.com/
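If a recording made with one of these services is saved as an uncompressed WAV file, its parameters (the sampling frequency and resolution discussed in 2.3.1.1) can be checked with Python's built-in wave module. The file name below is a placeholder.

```python
# Inspect the parameters of an (assumed) uncompressed WAV recording;
# "recording.wav" is a placeholder file name.
import wave

with wave.open("recording.wav", "rb") as wav:
    channels = wav.getnchannels()         # 1 = mono, 2 = stereo
    sample_width = wav.getsampwidth()     # bytes per sample (2 = 16 bit)
    frame_rate = wav.getframerate()       # sampling frequency in Hz
    n_frames = wav.getnframes()           # total number of samples per channel

    duration = n_frames / frame_rate
    bytes_per_second = channels * sample_width * frame_rate

    print(f"{channels} channel(s), {8 * sample_width} bit, {frame_rate} Hz")
    print(f"length: {duration:.1f} s, data rate: {bytes_per_second} bytes/s")
```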
  11. After a quick registration process, Audioboo is also accessible for free online recording, sound storage and distribution; with it, online sound recordings of up to five minutes can be produced quite simply. The best thing is that during recording we can stop and continue as we like. The finished sound can then be embedded and/or shared on social media pages (Twitter, Facebook, Tumblr). https://audioboo.fm/
  AudioPal has a unique feature: we can make a recording of up to one minute with it, not only online but also through the phone, and the link to the recording, plus other sharing/embedding data, are sent to us via email. http://audiopan.com/
  Record MP3, as its name shows, helps us record online in mp3 format; it is a very simple programme. http://www.recordmp3.org/
  Hungarian video instructions on the use of the above sound recording programmes can be accessed at: http://www.youtube.com/watch?v=4rj0vypLOy0
  2.3.3.2 Making a recording on the computer
  For recording sound on the computer, we can use the built-in sound recording programme. To start it, we select All programs / Accessories / Entertainment, and then Sound recording (in the case of Windows 7 it is found directly under All programs / Accessories). The Sound recording programme then starts.
  12. To start the recording, we select the record button with the red circle. To stop it, we click on the stop button with the black rectangle. We can listen to the recording by selecting the start button with the arrow pointing to the right; as we click it, the programme plays back the previously recorded material.
  Effects
  The recorded material can be modified with effects. The sound recording programme has effects for increasing the volume, increasing the speed and playing backwards. To apply these effects, we select the proper option within the Effects menu, and as we replay the sound material, we will hear the added effect.
  2.3.3.3 Audio recording with a digital recorder
  Zoom H1 V2. The Zoom H1 has two mics in an X/Y configuration. With the dictaphone, conversations, conferences and personal notes can be recorded in WAV and MP3 formats. The Zoom H1 has 2 GB of storage, which can be extended with a micro SD card. Thanks to the built-in speaker, recorded material can be listened to immediately, without having to connect the device to a computer. With the Zoom H1 digital dictaphone you can make stereo recordings. The backlit LCD display of the device makes managing information and navigating between functions much easier. The Zoom H1 has USB and microphone line inputs.
  You can view the introductory video at: http://www.youtube.com/watch?v=cLvESdlgHAk
  User's manual: http://www.zoom.co.jp/downloads/h1/manual/
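For completeness, the same kind of capture that the built-in recorder or a device like the Zoom H1 performs can also be scripted. This is only a hedged sketch: it assumes the third-party sounddevice and soundfile packages are installed, which are not part of the module's materials.

```python
# Record a few seconds from the default microphone and save it as a WAV
# file, assuming the third-party "sounddevice" and "soundfile" packages
# are installed (pip install sounddevice soundfile).
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 44_100      # Hz, the Hi-Fi reference value from section 2.3.1.1
SECONDS = 5               # length of the recording

print("Recording...")
recording = sd.rec(int(SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=2)
sd.wait()                                 # block until the recording is done
sf.write("test_recording.wav", recording, SAMPLE_RATE)
print("Saved to test_recording.wav")
```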
  13. 2.3.4 Cutting and mixing sound
  The two most basic editing operations are cutting and mixing. Cutting practically means removing selected bits from the audio material and inserting them into another file. Mixing means crossfading two sounds on top of each other in such a way that both remain audible. This option is often used in radio shows; of course one sound, for example the background music, is quieter than the other, e.g. the human speech.
  2.3.5 Audio output devices
  2.3.5.1 Speakers, sound boxes
  Let us start with the most common electro-acoustic transducer, the widely used dynamic speaker, which is an inverted form of the dynamic microphone. Within the device, the electrical input signal is fed to a coil, which, due to the interaction, starts moving in the field of a permanent magnet. By attaching a membrane to the coil and suspending it in a flexible way, the membrane sets the air particles in motion exactly at the rhythm and frequency of the sound signal.
  (Figure: Dynamic speaker)
  However, a single speaker cannot reproduce low, medium and high signals equally well, which is why speakers of various sizes have to be placed inside a sound box (speaker cabinet). The big ones ensure the even reproduction of low sounds (20 Hz to 400-600 Hz). Middle-sized speakers (10-20 cm in diameter) are suitable for medium sounds (~400-6,000 Hz). Small speakers, the so-called "domes", are specifically designed for reproducing high sounds.
  This sounds quite simple, but with the band filters (the crossover) we also have to make sure that each frequency band reaches the proper speaker. This part, unfortunately, introduces further possibilities for error. And we haven't even mentioned the effects of the box itself (closed or with a reflex opening), as without a good sound box good sound does not exist. That is why engineers have to experiment in anechoic rooms to make sure that sound boxes work with relatively smooth frequency transfer, and that the newly manufactured sound boxes are good even by subjective measures. This applies even more to the audio monitors in studios.
  2.3.5.2 Headphones
  Headphones provide a different acoustic experience than sound boxes. They do not carry the acoustics of a room, which means that they give a more sterile, clearer impression of the sound. The acoustic coupling between the two sides is missing, and headphones can be dimensioned better than a sound box, as sound box designers do not know in advance in what room and in what arrangement the equipment will be used.
  Headphones can have an open or closed structure. For our purposes headphones should be closed, so that only the sound material coming from them gets into our ears, and outside noises are locked out.
  14. We expect headphones to transmit the whole audible range of sounds with the smallest possible fluctuation and, of course, without distortion.
  Headphones are the exclusive, or at least the most useful, monitoring tool for recordings on location, since they lock out the disturbing noises coming from outside. We use them less often for editing in the studio, but they are used every day for listening to material or searching for something. Attached to the sound mixer, they let us listen to material without disturbing the editor, the director, and the work in progress in the studio.
  2.4 Sound recording tools
  2.4.1 The sound transmission chain
  In the main figure of this chapter, we see some sound transmission devices. They can be categorised into four groups: input devices, sound processing equipment, storage devices and output units.
  (Figure: Elements of the sound transmission chain)
  Input devices get the signal to the processing units: these are usually the sound channels of microphones, optical players (CD, DVD), video players or other tape players, dictaphones, telephones or cameras.
  15. Sound processing equipment can be the following: preamplifiers, mixers, correctors, tone controllers, high-performance devices, home cinema amplifiers, computers, sound editing tools, etc.
  Groups of storage devices: optical recording systems, magnetic recording systems, hard disk recording systems, flash memories.
  Output devices: headphones, speakers, sound boxes, stereoscopic sound box components.
  As we examine the sound transmission chain, in every case there is always a starting point and a finishing point.
  2.4.2 Microphones
  Sound energy always spreads via a medium. To keep it simple, let us take an example where the source is the human voice, and the vocal cords cause the air particles to vibrate. The produced sound can only be stored by using a device which records the momentary pressure, displacement, speed and frequency of the particles in their temporary state. With the help of a membrane, the microphone transforms sound energy first into mechanical, then into electrical energy.
  2.4.3 Cables
  In sound technology, it is probably most difficult to be familiar with the various cables and connectors. This field always generates arguments.
  2.4.3.1 Asymmetric (unbalanced) cables
  Simple asymmetric connections (hot core + shielding) are used everywhere to transmit low-power signals.
  (Figure: asymmetric signal; core 1: earth/shield, core 2: hot)
  The most typical applications are the following:
  • outputs of instruments,
  • connecting instruments and effects,
  • the insert points of mixers (input and output points on certain mixers).
  (Figure: asymmetric cable)
  16. Asymmetric systems are sensitive to both low and high-frequency disturbances, which is why only short cables should be used when transmitting instrument or LINE signals. Over longer distances, it is more practical to use a symmetric-asymmetric transformer.
  2.4.3.2 Symmetric (balanced) cables
  Two-core shielded cables (most often referred to as microphone cables: two signal cores + shielding) are used where noise immunity is important. Mostly the inputs and outputs of long signal cables and mixers require symmetric signal transmission. On stage, the power amplifiers and the lights generate a lot of low-frequency noise, which can easily be picked up by long cables.
  (Figure: symmetric signal; core 1: earth/shield, core 2: hot, core 3: cold; symmetric cable)
  In the case of low signal levels (microphone signals), the difference between symmetric and asymmetric signal transmission can be perceived very clearly.
  The cables are exposed to a high level of mechanical stress, so their durability is important: they must be mechanically strong and flexible at the same time.
  17. 2.4.3.3 Speaker cables
  Speaker cables must be suitable for transmitting high voltages across the whole band of audio frequencies. This requires copper wires with the proper cross-section and chemical composition. As a rule of thumb, up to 400 W and 10 m of length, 1.5 mm² cores are used; for longer runs, 2.5 mm² cables are the most practical.
  2.4.4 Sound mixer
  The sound mixer is an audio-frequency amplifier fitted with several inputs, which allows the simultaneous transmission of two or more audio-frequency voltage sources at a fixed or variable ratio. The audio-frequency voltage levels can be set to the required level at the input with the help of potentiometers. Depending on its intended purpose, the sound mixer can be used for studio work or public events, or it can support the work of music bands or amateurs.
  18. The three basic types are the following:
  • Analogue mixers. Analogue mixers work only in the analogue domain; they do not convert the sound into digital form, so their output cannot be fed directly into a computer.
  • Digital mixers. They have both digital and analogue inputs. They convert the sound into digital form as soon as it enters the mixer; the sound then runs through a series of effects and processors until it is transformed back into analogue form. More recent types have a USB or FireWire interface (or both).
  • Newer analogue mixers with a built-in USB or FireWire audio interface, which combine an analogue signal path with a digital connection to the computer.
  2.4.5 Amplifiers
  Loosely speaking, every device capable of raising the level of a signal is simply called an amplifier. The basic types of audio amplifier are the following: two-channel amplifier, amplifier with several inputs, mixing amplifier, multi-channel (home cinema) amplifier, power amplifier, and high-power amplifier.
  2.4.6 Sound card
  The sound card is an expansion card for the computer which receives and outputs sound under the control of computer programmes. Its typical fields of use are the following: multimedia applications, sound and video editing, and entertainment (watching movies, listening to music, playing games). In most computers today this device is integrated on the motherboard, but in certain older machines it has to be installed separately. Professional users also buy separate sound cards because of their much better quality and performance.
  The sound card contains an analogue-to-digital (A/D) and a digital-to-analogue (D/A) converter, so that we can digitise the incoming analogue signals and convert the outgoing digital signals back to analogue.
  19. The interfaces of most sound cards manufactured after 1999 are coloured according to the Microsoft PC 99 standard in the following way:
  Colour: Function
  Pink: analogue microphone input
  Light blue: analogue line input
  Green: analogue output for the main speakers or headphones
  Black: analogue output for the rear speakers (in the case of 4.0 or more)
  Silver: analogue output for the side speakers (in the case of 7.1)
  Orange: S/PDIF digital output (sometimes used as an analogue output for the subwoofer and the centre speaker)
  2.4.7 Computer
  In sound recording, the computer has a much bigger role than just functioning as an A/D or D/A converter. There are various software tools which can turn the computer into a virtual sound studio. These are, for example:
  • sound editors
  • virtual effects
  • virtual instruments
  • sound libraries and sound samples
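The A/D and D/A converters of the sound card appear to the operating system as input and output devices. As a small illustration (not part of the module), they can be listed from Python, assuming the third-party sounddevice package is installed:

```python
# List the input (A/D) and output (D/A) devices exposed by the sound card(s),
# assuming the third-party "sounddevice" package is installed.
import sounddevice as sd

for index, device in enumerate(sd.query_devices()):
    directions = []
    if device["max_input_channels"] > 0:
        directions.append(f"in x{device['max_input_channels']}")
    if device["max_output_channels"] > 0:
        directions.append(f"out x{device['max_output_channels']}")
    print(f"{index:2d}  {device['name']}  "
          f"({', '.join(directions)}, {device['default_samplerate']:.0f} Hz)")
```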
  20. 2.4.8 The structure of a simple digital radio
  (Figure: the structure of a simple digital radio; microphone, CD player and computer with sound card, MP3 player software, data recorder software and transmission control, FM transmitter and antenna, speakers and headphones for broadcast monitoring)
  2.5 Microphones
  In order to record, amplify or otherwise electronically manipulate the sound of an event, we first have to convert it into electrical signals. This is done by the first converting element of the sound system (mentioned among the basics as the acoustic-to-electrical energy transducer), which we call the microphone. The operating principle of the simplest microphone capsule is identical to that of the commonly used loudspeaker, except that the dimensions are different (these are the dynamic microphones).
  2.5.1 The grouping of microphones
  2.5.1.1 Grouping of microphones by the principle of acoustic-to-electrical signal conversion
  The most commonly used mics are categorised in the following way:
  Dynamic microphones: In terms of structure, these mics are based on the phenomenon of electromagnetic induction, whereby a voltage is induced in an electrical conductor that moves in a magnetic field. In dynamic microphones, a voice coil attached to a diaphragm moves in the air gap of a magnetic circuit (it is caused to move by the sound waves), and a voltage corresponding to the sound is induced in it, which we then use in amplified form.
  21. The mechanical and electrical parameters of the voice coil are not always suitable for our electrical requirements. For this reason, a special transformer is often built into the microphone body in order to match the outgoing signal and to make it symmetrical.
  The dynamic microphone is one of the most common types because of its simple and reliable nature, its robustness and its good sound quality.
  Condenser microphones: The condenser microphone, as indicated by its name, is none other than an air-insulated capacitor with a capacitance of a few pF (p = pico = very small). One of its surfaces is the diaphragm, which is usually a plastic foil with a vapour-deposited metal coating. Its other surface is a piece of ceramic or metal, also coated with metal.
  A DC voltage is applied across the capacitor. The sound waves cause the diaphragm to move; this changes the distance between the capacitor surfaces, which changes the capacitance, and thus the voltage between the surfaces also changes. This change of voltage corresponds to the sound waves. Unfortunately, the power of this electrical signal is very low, so a preamplifier is always used inside condenser mics.
  The sound quality of condenser microphones surpasses that of dynamic mics, because the diaphragm does not have to move the mass of a voice coil; therefore the impulse response of condenser mics is much better. On the other hand, condenser mics are more sensitive to shock and to the parameters of the environment (temperature, relative humidity) than dynamic microphones, which is why they are used less often in live sound reinforcement.
  22. Electret microphones: These microphones are condenser mics whose diaphragm was polarised in the course of production (an electric charge is "frozen" into it), so they do not require an external polarising voltage. The electret microphone also works with a high impedance and requires preamplification. This amplification is powered by phantom power (see later) or by a battery placed inside the microphone housing.
  Electret microphones are simpler than condenser mics and relatively cheap too, which is why they are used in sound reinforcement and in studios more and more often. They can be made very small, which makes some special applications possible (buttonhole microphones).
  2.5.1.2 Microphone categories according to their mounting
  Handheld microphones: One of the most frequently used microphone types; it was designed so that a singer or speaker can hold it in their hand during a performance. Their drawback is that besides the required signal, they also pick up the small noises caused by the hand rubbing against the microphone, which is annoying. In today's mics, efforts have been made to eliminate this harmful effect. A simple way to do that is to place the microphone on a mic stand.
  (Figures: handheld microphone; clip-on microphone)
  Clip-on microphones: In the case of singing or dancing shows, or other interactive performances or video shoots, we face the problem that the performer's head cannot stay in one place in front of the mic on the stand. That is why clip-on mics were invented, which can be fixed directly onto the speaker's or singer's clothes. Sometimes, especially in music or dance shows, a flesh-coloured mic is glued to the singer's forehead. The advantage here is that during the dance, the distance between the singer's mouth and forehead always remains the same, so the character of the sound does not change.
  Clip-on mics are also fixed onto certain instruments, where the use of a stand would be inconvenient or would restrict the artist's freedom (e.g. saxophone or other woodwind instruments).
  23. Contact microphones: These microphones are fixed directly onto the sound source, and they pick up the vibration of its surface. They are often mounted on the instrument itself (e.g. a guitar piezo pickup, or a contact microphone on the bridge foot of a double bass). The shop-window alarm systems of diamond shops sometimes operate on the same principle: if the window is hit with a certain force, the contact mic senses the vibration and the alarm switches on.
  Shotgun microphones: They were given this name because of their shape. These mics are very directional, which is achieved by the acoustic (or sometimes electrical) cancellation of sound waves arriving from the sides. These microphones are mostly used in film shooting, where the microphone and the stand must be kept out of sight.
  Parabolic microphones: The sound is focused onto a microphone with the help of an acoustic parabolic mirror, through which its directivity and absolute sensitivity greatly increase. It is used when the sounds of nature are recorded.
  (Figures: contact microphone; parabolic microphone; shotgun microphone)
  2.5.1.3 The grouping of microphones according to directional sensitivity
  Based on their structure, microphones show various degrees of sensitivity at different frequencies and from different directions. This is called the directional characteristic, and it is usually displayed on a polar diagram. The polar diagram is a circular chart, where the deviation from the main direction of the microphone is shown on the degree scale, and sensitivity is indicated by the distance from the centre. In most cases a microphone has different directional sensitivity curves at different frequencies. Since microphones are usually cylindrically symmetric, it is enough to plot these polar diagrams for one longitudinal section, but other options also exist.
  Spherical (omnidirectional) microphones: As the name indicates, this mic is equally sensitive to sounds coming from every direction in space. For sound reinforcement it is not very practical, as it does not offer any protection against feedback (howling). It is true, however, that due to its even frequency response it has no peaks, which means it has no frequencies that are especially prone to howling. This type is usually used for sound recording.
  Cardioid (kidney/heart) mics: The name refers to the fact that this mic's directional characteristic curve looks like a heart according to the English, and resembles a kidney according to the Germans. This is the most commonly used mic type.
  24. Due to its directional sensitivity it is very good for amplification, and howling can be avoided with its help. Unfortunately, its frequency response is not as even as that of the spherical mics, especially if the sound source is not on its axis.
  (Figures: polar diagrams of spherical (omnidirectional), cardioid and figure-of-eight microphones)
  Two-way or figure-of-eight pattern microphones: Their name comes from the fact that their directional characteristic looks like a figure of eight. These mics are immune to signals from the sides, but they are approximately equally sensitive to signals coming from the two opposite directions. This works well if we want to record two sound sources facing each other, for example in an interview situation or for a stereo recording.
  2.5.1.4 Other characteristics of microphones
  Frequency transmission: The frequency transfer of mics is not even. This, however, need not be a problem, as every sound source is characterised by different frequencies. That is why there are microphones that are used specifically for one instrument or sound source, e.g. for voice, drums, flutes, guitar, or low-pitched instruments (kick drum or double bass). The frequency transfer of mics is usually specified between 20 Hz and 20 kHz, and an approximate frequency transfer curve is provided for reference.
  (Figure: a typical microphone frequency response curve, sensitivity in dB re 1 V/Pa from 20 Hz to 20,000 Hz)
  Sensitivity: By microphone sensitivity we mean how large the output level is for a given input sound pressure.
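Data sheets usually express this sensitivity in dB relative to an output of 1 V at a sound pressure of 1 Pa (about 94 dB SPL). As a quick worked example with a hypothetical microphone that delivers 2 mV per pascal:

```python
# Convert a (hypothetical) microphone sensitivity of 2 mV/Pa into the
# dB re 1 V/Pa figure usually printed on data sheets. 1 Pa corresponds
# to a sound pressure level of roughly 94 dB SPL.
import math

output_voltage_per_pascal = 0.002                      # volts at 1 Pa (2 mV/Pa)
sensitivity_db = 20 * math.log10(output_voltage_per_pascal / 1.0)
print(f"Sensitivity: {sensitivity_db:.1f} dB re 1 V/Pa")   # about -54 dB
```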
  25. Overload: Good microphones are usually capable of transmitting a sound pressure level of 140 dB SPL without distortion. This sound pressure level is 10 dB above the pain threshold! Such sound pressure levels may occur, for example, when close-miking a drum set. At these sound pressure levels the output signal of the mic can be so high that it overloads the input amplifier stage.
  Symmetric and asymmetric interfaces: Professional mics are almost always fitted with 3-pin XLR connectors. The three pins are wired in the following way:
  1: earth/shield
  2: signal +
  3: signal -
  Symmetric signal conduction provides a high degree of immunity to disturbance.
  Cheaper mics might be fitted with 6.3 mm jack plugs instead. They are wired like this:
  Tip: signal +
  Ring (if there is one): signal -
  Sleeve (shield): earth
  Phantom power: Certain microphones (e.g. condenser mics) need a supply voltage for operation. This is necessary partly to polarise the capacitor and partly to power the preamplifier. In practice we use a DC voltage of 9-48 V, which is fed into the mic through the mic cable.
  26. Mic foam: Mics used for vocals and speech are either equipped with some foam inside the grille, or an outer foam cover can be bought that is pulled onto the mic. The foam lets sound waves through, but holds back the noise of wind and the extra noise produced when plosive sounds such as t or p are uttered; it also protects the mic from tiny drops of saliva.
  In studios especially, so-called pop filters are used. Placed 8-15 cm from the mic, they save us from the unpleasant extra noise of p and t sounds, they protect against overloading, and our mic is spared from damage caused by small saliva drops.
  2.5.2 Wireless microphones
  Wireless microphones consist of a VHF or UHF FM radio transmitter placed either in the same housing as the microphone capsule or in a separate box. These mics do not need a connecting cable, which provides a lot of freedom; also, the mess of cables is no longer an eyesore. Their disadvantage is that, due to the use of radio channels, they are sensitive to interference, and because of the limited channel bandwidth the frequency swing is restricted, so the dynamics transfer is limited.
  When using wireless mics, it is important that a given frequency at a given place should be used by one device only (e.g. at a press conference several filming crews use similar wireless mics). They are mostly powered by a battery, which must be replaced or recharged before use.
  27. 2.6 Audio editing
  In the course of digital audio editing the quality of the recording theoretically does not deteriorate, although in practice it is still impossible to keep the entire production chain digital. Moreover, the broadcast itself usually takes place in analogue form anyway (except for terrestrial DAB, digital satellite broadcasting, or material published on the internet).
  2.6.1 Obtaining sound - Digital recording
  Sound recorded with a microphone (one's own recording) on a digital sound recorder can be copied onto the computer as a file.
  2.6.2 Cutting the audio material
  For simple cutting we should use a one-track editing programme (e.g. SoundForge for cutting a report); for complex cutting we need a multi-track editing tool, e.g. CoolEdit Pro. With multi-track programmes we can layer various sounds on top of each other while controlling their volume. The finished, mixed sound can be saved as a basic wav file, but we can also compress it when saving. (A short scripted cutting example follows after the checklist below.)
  After the recording is finished, it has to be cut and edited so that it can be aired. When we listen to it, we have to pay attention to the following:
  • Does the interviewee say "er" or "ehm" too often? Do they pause too long? Do they repeat the same word, sound, sentence or idea too many times? (Unwanted noises and long silences should be cut out, unless they have a function, e.g. they tell us that the speaker is stuck and takes time to think.)
  • Have we saved the original file? (The uncut version must always be kept until the final material is made, and to avoid any misunderstandings it is worth keeping it even after that.)
  As we edit the material, we must check the cuts in the following ways:
  • Does the sentence still make sense after the cutting?
  • Does the sentence or train of thought remain unchanged after the cutting?
  • Do the sentences make sense, do they have a function?
  • Did the same sentence end up in the final cut twice by accident?
  • Can you hear a click at the cuts? If yes, it can be eliminated by zooming in on the waveform and evening out the wave line with a fine cut.
  • Is the cut distracting? If it is more disturbing than the "ehm" sounds, then we should keep the original.
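As promised above, here is a minimal scripted version of such a cut. It is only a sketch: it assumes the third-party pydub package (which itself relies on ffmpeg) and uses placeholder file names, and it is not a replacement for the graphical editors mentioned in 2.6.2.

```python
# A minimal scripted cut, assuming the third-party "pydub" package and
# placeholder file names. Times are given in milliseconds.
from pydub import AudioSegment

interview = AudioSegment.from_wav("interview_raw.wav")   # keep this original!

# Remove an unwanted "ehm" between 12.0 s and 13.5 s by joining the
# material before and after it.
cut = interview[:12_000] + interview[13_500:]

# Export the edited version; the untouched original stays on disk.
cut.export("interview_cut.wav", format="wav")
```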
  28. 2.6.3 The relationship of music and speech
  Long ago it was natural that songs were announced before they were played, and while the music was on, there was no talking (it never even crossed the presenters' minds to speak then). Today it is more common to have some kind of music even while the news is read; it has become a tool to fill space, a kind of radio "horror vacui" (the old idea that nature abhors emptiness and fills up every vacuum; on the radio today it means rather a fear of silence, or of breaking the continuous rhythm).
  At the beginning of the 90s, the "just music, no speech" editing style of commercial radio channels became more common. More and more programme blocks were made in which only music was heard for a few tracks, or for 10-20 minutes. In the 2000s, especially in the morning shows, more and more speech appeared, and this happened on the music radios! The 3-minute limit for talking is no longer in force: commercial radio shows go on for ten or more minutes without music (or background music), and during that time the presenters are simply having a discussion.
  2.6.4 Background music on the radio
  Just as with film soundtracks, in radio shows we often find that the music is not in the foreground but provides some kind of background, which works best if the listener is not fully aware of its presence. The style of music (or its presence or absence) tells us something about the values and type of the radio channel.
  Music under speech works if the speech does not require the close attention of the listener. If we really want to pay attention to the discussion, background music can be annoying. So we have to use background music carefully and in good proportion. A few examples of background music use:
  • during the news (recurring, rhythmical),
  • during the weather forecast (new age, chirping birds),
  • during a scientific documentary (electronic, maybe just chords),
  • during the announcement of events and programmes,
  • to fill space (if there is still time before the next show: neutral jazz which can be faded down at any time).
  2.6.5 Audio effects
  SoundCli.ps is a free sound effect sharing portal where anybody can upload sound effects, and we can also download other people's effects free of charge. On the SoundCli.ps page we don't even have to register to download a free sound effect, but registration is required for uploading. There are various pages of this kind on the internet, e.g. http://sweetsoundeffects.com/
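Building on the pydub sketch above, background music (or a downloaded effect) can be laid under a speech track at a reduced level, as section 2.6.4 describes. File names are again placeholders.

```python
# Lay quiet background music under a speech track (simple "ducking"),
# assuming the third-party "pydub" package and placeholder file names.
from pydub import AudioSegment

speech = AudioSegment.from_wav("interview_cut.wav")
music = AudioSegment.from_wav("background_music.wav")

quiet_music = music - 18                  # lower the music by 18 dB
bed = quiet_music[:len(speech)]           # trim the music to the speech length
mixed = speech.overlay(bed)               # both tracks remain audible

# Exporting to mp3 requires ffmpeg to be installed alongside pydub.
mixed.export("show_segment.mp3", format="mp3", bitrate="192k")
```

The 18 dB reduction is only an illustrative starting point; in practice the level is set by ear so that the speech stays clearly intelligible.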
  29. 2.6.6 Archiving and converting sound files
  Our large wav files and complete programmes can also be archived in mp3 format. This requires a wav-to-mp3 converter programme (CDex, Waver, Rightclickmp3, etc.); some are completely free, others are free for 30 days. A good-quality mp3 conversion should have at least the following parameters:
  • stereo: 192 kbps,
  • mono: 96 kbps.
  (Anything smaller than these is too noisy and crackly.)
  2.6.7 If we think we are finished
  We have to listen to the final, edited show once again. It is even better if other people give their feedback as well: they might notice mistakes that we would never hear.
  2.7 Distribution of sound files
  2.7.1 Compressed sound files
  2.7.1.1 Why compression is necessary
  Digital audio files fill the hard disk very quickly: they take up about 10.5 MB per minute. With compression procedures we can shrink them to a more practical size, as small as half or a quarter of the original.
  Audio files are significantly different from other data, which is why we must apply a different compression technique. Charts, documents made with a word processor and, in general, files containing text and numbers contain a lot of repeated characters (e.g. spaces). These can be compressed with general-purpose compression tools such as PKZIP or ARJ, and as a result we can get down to one fifth of the original size. Such compression programmes, however, are not good for audio: sound, by nature, keeps changing; it does not contain repeated patterns or redundant data.
  Another question is what type of sound material we would like to compress. Speech, for example, due to its repeated silences, can be compressed much more successfully than music. In music material, the so-called "silent periods" also contain sound.
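The figure of roughly 10.5 MB per minute follows directly from the CD-quality parameters introduced in 2.3.1.1 (44,100 Hz, 16 bit, stereo); whether it comes out as about 10.6 or 10.1 depends only on how a megabyte is counted.

```python
# Where the "about 10.5 MB per minute" figure comes from, using the
# CD-quality parameters from section 2.3.1.1 (44,100 Hz, 16 bit, stereo).
SAMPLE_RATE = 44_100       # samples per second
BYTES_PER_SAMPLE = 2       # 16 bit = 2 bytes
CHANNELS = 2               # stereo

bytes_per_minute = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * 60
print(bytes_per_minute)                    # 10,584,000 bytes
print(bytes_per_minute / 1_000_000)        # ~10.6 MB per minute (decimal MB)
print(bytes_per_minute / (1024 * 1024))    # ~10.1 MiB per minute (binary MB)
```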
  30. Thus we need a more complicated process, one which exploits the characteristics of sound and the hearing mechanism of the human ear. Unfortunately, these procedures involve data loss. When using a particular compression technique, the selected degree of compression is always a compromise between transfer quality and the required transfer capacity. That is why our goal is to find an efficient compression procedure which "drops" only such data from the material as does not noticeably affect the quality of the resulting sound. Another important aspect is the speed of compression, as compressed sound is more practical to store if we can listen to it without extracting it first, with the help of so-called real-time player programmes. The intensive development of sound compression procedures started in the 80s and continues to this day.
  2.7.1.2 Types of sound compression
  Lossless compression: The essence of lossless compression is that the size of the data is reduced without any deterioration of quality; data loss is avoided. The better programmes of this type are the ones that achieve a greater degree of compression. The principle of these procedures is that specially designed algorithms exploit the main features of sound files. The size reduction without loss is significantly smaller (15-30%) than with loss (typically 50-60%). Such compression procedures are, for example:
  • Meridian Lossless Packing (MLP)
  • Free Lossless Audio Codec (FLAC)
  Lossy compression: Most sound compression procedures involve some loss, that is, information is lost in the process. The goal is to make sure that this loss of information does not result in audible deterioration, or that the deterioration of quality is as small as possible. How well this goal is achieved (that is, to what extent the quality gets worse for a given compression ratio) determines the success and quality of the procedure. (Strictly speaking, these procedures are not data compression but irrelevance coding, as they cause loss of data.)
  The essence of this method is psycho-acoustics: the human ear cannot hear every existing sound, and it is not equally sensitive to every pitch. The procedures attempt to omit the parts that are not heard well or not audible at all, or to merge them with the better audible parts, in order to reduce the amount of data. The efficiency of these methods is largely determined by the quality of their psycho-acoustic model. If the model is faulty, audible parts will be missing from the sound material, which causes a deterioration of quality.
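As a hedged illustration of the two families just described, the free ffmpeg tool (assumed to be installed and on the PATH) can produce a lossless and a lossy version of the same recording; the input file name is a placeholder.

```python
# Create a lossless (FLAC) and a lossy (192 kbps MP3) version of the same
# recording by calling ffmpeg, which is assumed to be installed.
# "input.wav" is a placeholder file name.
import subprocess

# Lossless: typically a 15-30% size reduction (see above), and the decoded
# audio is bit-for-bit identical to the original.
subprocess.run(["ffmpeg", "-i", "input.wav", "-c:a", "flac", "output.flac"],
               check=True)

# Lossy: much smaller, but some information is discarded for good, so a
# lossy file should not be re-encoded into another lossy format.
subprocess.run(["ffmpeg", "-i", "input.wav",
                "-c:a", "libmp3lame", "-b:a", "192k", "output.mp3"],
               check=True)
```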
  31. As this coding involves loss of data, and different procedures cause different degrees of data loss, any sound material coded with such lossy procedures deteriorates significantly in quality if it is re-encoded or converted again. Such compression procedures are the following:
  • MP3
  • RealAudio
  • Windows Media Audio (WMA)
  2.7.2 Distribution on the internet
  2.7.2.1 Uploading a sound file onto a community portal
  Since YouTube is not suitable for storing purely audio files (.MP3, .WAV etc.) or purely image files (.JPG, .PNG etc.), before uploading they have to be combined into a format that works on YouTube. Free programmes such as Windows Live Movie Maker help us add an image to the sound track, and thus we can convert it into the .WMV video file format. After the conversion, we can upload the file into the YouTube system.
  Step 1: Select your audio and image files
  1. On the upper bar click on Add photos and videos, and choose an image from your computer. This image will be displayed on the screen.
  2. Click on Add music, and select your audio track.
  Step 2: Match the image with the audio file
  To make sure that the file can be uploaded into the YouTube system, set the length of the picture to match the length of the sound track on the timeline.
  1. Double-click on the green sound track on the timeline, which will display the "end point". Note down the number (e.g. 261.49).
  2. Double-click on the picture thumbnail on the timeline and enter a time length that matches the length of the sound track. If, for example, the length of the sound track is 261.49, then the time length of the picture must also be set to 261.49.
  Step 3: Save the file
  1. Select the option Recommended for this project.
  2. After giving your video a name, click on Save.
  2.7.2.2 Radios on the internet
  Only a limited number of radio programmes can be transmitted over the air. As part of frequency management, the ITU (International Telecommunication Union) assigns a specified number of frequencies to every country, within which the radio stations can operate. That is why the use (rent) of frequency bands is rather costly.
  32. At the same time, there is no such limit for operating radios on the internet. The edited programme stream is converted into some kind of compressed streaming format (for example rm). To make it accessible, it is sufficient to operate or rent web hosting on a server with enough bandwidth to provide the continuous broadcast stream. The upload bandwidth of the server determines how many people can listen to the broadcast at the same time. If we calculate with a better-quality data stream, e.g. 256 kbps, then for a hundred people to be able to listen to the programme at the same time we need a hundred times this value, i.e. about 25.6 Mbit/s of upload bandwidth. The only restriction on internet radio is that copyright must be respected.
  "Streaming media" means that the data arrives in packets which become playable as soon as they are decoded, so there is no need to download the whole set of data first. This is how so-called media servers enable quick delivery. Of course, our computer must have the decoder for the codec that was used.
  2.7.2.3 Podcasting
  Podcasting technology enables the serial publication of sound, video and other files on the internet in such a way that users may subscribe to the channel containing the episodes of a specific material. Podcasting became popular at the end of 2004, when portable music players came into common use.
  Its use: With the help of podcasting technology (the word is a blend of iPod and broadcasting), users can subscribe to news channels and thus be informed immediately, for example, about a new development in a scientific topic, or they can download information in any field of interest: the latest news, currency rates, etc. After subscription, the RSS (Really Simple Syndication) reader built into the browser indicates when new items appear, which the user can then read at any time on the computer or on the phone. In our age of portable media players, smartphones and other gadgets this is a rather fast way of acquiring information, so we don't always have to keep an eye on our favourite news portal.
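Under the hood, a podcast is simply an RSS feed whose items point to audio files through enclosure tags. A minimal hand-made feed might look like the sketch below; every title and URL in it is a placeholder.

```python
# Build a minimal podcast RSS feed by hand; all titles and URLs are
# placeholders. Podcast clients subscribe to the feed and download the
# audio files referenced in the <enclosure> tags.
FEED_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Community Radio Weekly</title>
    <link>http://example.org/podcast</link>
    <description>Weekly highlights of our community radio shows.</description>
    <item>
      <title>{title}</title>
      <enclosure url="{url}" length="{length}" type="audio/mpeg"/>
      <pubDate>{date}</pubDate>
    </item>
  </channel>
</rss>
"""

feed = FEED_TEMPLATE.format(
    title="Episode 1: Fundamentals of sound",
    url="http://example.org/podcast/episode1.mp3",
    length=12_345_678,                       # file size in bytes
    date="Mon, 04 Aug 2014 10:00:00 GMT",
)

with open("podcast.xml", "w", encoding="utf-8") as f:
    f.write(feed)
```

A podcast client pointed at the published podcast.xml would then list the episode and fetch the MP3 referenced in the enclosure tag.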
33. 33. FUNDAMENTALS OF SOUND MODULE ADVANCED COURSE OF MEDIA LITERACY 33

2.8 Radio

2.8.1 The history of radio
The history of radio began in the 19th century. Hertz's resonator is considered the predecessor of radio: with it, Hertz proved the existence of electromagnetic waves in the course of an experiment in 1887-88. In 1894, the Italian Marconi demonstrated the practical application of radio waves with the wireless telegraph. In 1896, the Russian Popov succeeded in exchanging wireless telegraph messages over a distance of 250 m, and five years later Marconi established a wireless connection across the Atlantic, between England and America. In 1906, Lee de Forest invented the three-electrode tube, the so-called triode, with which a much better receiver could be built. This made it possible for radio to transmit not only signals but the human voice as well.

The first radio show was aired in 1914 in Laeken, Belgium, and in 1921 the first regular broadcasting was launched in Pittsburgh. Radio broadcasting soon became common in Europe as well: in 1922 the British Broadcasting Co. Ltd. (BBC) was founded, and in Germany two private companies started broadcasting.

Going back to the earliest days of radio, the very first receivers were wireless crystal detector sets. This type was invented by the American G. W. Pickard in 1906. The detector radio had the advantage that neither mains power nor batteries were required for its operation. Its disadvantage, however, was that it could only receive nearby, high-power broadcasts, and its ability to separate stations was weak. What is more, only one person could listen to the device, through a single earphone. If more people wanted to hear the programme (e.g. other members of the family), that would have required loudspeakers, which the crystal detector could not drive on its own. The signals caught from the ether had to be amplified. For this purpose the electron tube was used, whose task was to amplify the received signals, multiplying them by ten or even twenty times their value. Electron tube devices were already being manufactured during the First World War for military purposes, but in Hungary they only became common in 1925, under the name "lamp" (valve) radios. The production of "multi-lamp" radios started in the 1930s. These were "super" (superheterodyne) receivers with several wave bands, as besides the long wave and medium wave, the short wave (HF – high frequency) band was also introduced.

The electron tube was eventually standardised in 1930, and so was the look of radio sets: on one side they had the speaker, on the other a dial, the buttons and the "magic eye". The latter was introduced in 1938-39 to indicate the strength of the received signal. In Western Europe, tube devices were manufactured from the early 1920s; these consisted of a flat box with the radio tubes on top, and a curved horn speaker (exponential horn), also known as the "swan's neck", formed part of the set.

Source: vecsmo.eoldal.hu
34. 34. FUNDAMENTALS OF SOUND MODULE ADVANCED COURSE OF MEDIA LITERACY 34

2.8.2 Types of radio shows
Radio shows can be categorised according to many aspects.

Based on content, there are domestic full-service radios and specific "format" radio stations (within this category, some specialise in music styles, others in discussion shows); and there are propaganda stations broadcasting for listeners abroad. Mission/religious radio stations form a separate category. Just as with a magazine, the image and success of a radio station are determined by properly selected items, good editing and packaging. Minority or nationality radios can be categorised as commercial, community or public, and as general or specialised.

Based on coverage area, they can be small community stations (covering a few km2, e.g. a housing estate), local (10-30 km2, for a town), regional (30-100 km2, for a region), national (a network of stations covering a whole country), or intended specifically for audiences abroad, for an international or global community (e.g. BBC World Service).

In terms of financing, they can be commercial or non-commercial. In the case of the latter, financial support may come partly or fully from the state, from listeners (in the form of a subscription/licence fee or voluntary support) or from other organisations (church, foundation, etc.).

In terms of ownership, radios can be in the hands of the state, a church, a community/foundation/university, the local government or a private individual. The latter may be independent private stations, where the radio is operated by a local enterprise, or part of a radio network, where the owner operates not just a local station but a nationwide network; the owner may also be a foreign company.

According to distribution technology, we can differentiate between analogue and digital programmes. Within analogue broadcasting, there are medium-wave or long-wave (AM) and short-wave services, which can usually cover a larger area, so they are mostly used for nationwide or international broadcasts. Within the VHF band, FM broadcasting is suitable for airing stereo music, but it can only be received over a smaller area; for nationwide coverage, more transmitters would be required.

According to another categorisation, we differentiate between terrestrial, satellite, cable and internet broadcasting. Internet broadcasting can only be digital; the others can be either analogue or digital. Since the distribution technology of radio stations relies on a limited number of resources (frequencies), the number of radio stations on air must be regulated, so stations can only operate with a state permit. The permit is usually obtained via tenders and competitions in which the important aspects are content (public interest) and the funds offered for broadcasting. However, the problem of frequency shortage is often exaggerated and used as an excuse.

The above categories can be combined in many ways, so in practice we can identify a few basic types.
35. 35. FUNDAMENTALS OF SOUND MODULE ADVANCED COURSE OF MEDIA LITERACY 35

Public service radio can be financed by the state, or it can be in private hands. The main difference between a public service radio and a purely state radio is that the public service one provides unbiased information; it does not serve state propaganda. Its mission is to inform, educate and entertain. A good example of a public service radio is the BBC. A public service radio station can normally provide several parallel programmes. These can be full-service programmes covering all genres (the original BBC concept), and there can be specialised programmes (the most common being classical music and literature programmes).

Public service broadcasting, in the classic sense, airs shows of lasting value, which is why "valueless" mass programming is left out of it. Nowadays the concept of public service is being re-evaluated: commercial radios also consider themselves public service radios, since they serve a larger audience than the traditional public service stations. The purpose of public service is to address the largest possible audience of listeners. A radio station airing various types of high-quality shows used to be suitable for this purpose – until rivals appeared with the aim of serving commercial interests. Radios then moved towards lighter shows instead of quality programmes. In this environment, the task of traditional public service stations is to maintain quality – this is a kind of mission.

2.8.3 The characteristics of public service radios:
• an obligatory minimum of information supply
• very few comments
• operating in the form of "announcements"; a one-way information flow: providing information
• a palpable distance between listeners and staff, a more reserved style
• an even share of the workload

Besides public service stations, commercial radios form the other major category of the dual media system. Their only aim is to make a profit; everything is subordinated to this purpose. Their source of income is commercials, which are meant to reach as many solvent individuals as possible with the help of the music and discussion shows aired in between. Media laws may oblige these stations to air a specific proportion of public service programme items. Commercial radios target a specific, smaller group of listeners whose needs can easily be predicted. A certain format is used for this, in which specifically defined music styles are played.

2.8.4 The characteristics of commercial radios:
• to reach the widest possible audience at all costs
• attracting attention with commercial campaigns, sales, games and impressive show elements
• in their style they strive to be as smooth and catchy as possible
• involving celebrities
36. 36. FUNDAMENTALS OF SOUND MODULE ADVANCED COURSE OF MEDIA LITERACY 36

• they turn communication into a commodity
• their main ambition: to fill the time slots between commercials

A development of the last few decades is the three-part media system, in which the third component is community radio. Community radios give voice to a smaller community in much more informal circumstances than public service or commercial radios. They are usually maintained by a community, and the staff works without a salary. Their goal is to put the microphone into the community's hands. The number of their listeners is usually very low, as their specific shows are aired for a small group.

2.8.5 Characteristics of small community radios:
• they do not want to address everybody at all costs
• they support subcultures, minorities and specific groups of society
• they are mostly based on voluntary work
• they are characterised by a personal, open attitude
• accessibility: they are made by the people they are intended for
• they consider their listeners their partners
• radio and listener affect one another mutually
• non-profit operational framework: any income is reused for maintenance; they are cheap to run
• the listener participates in the birth of a radio show
• the local aspect is significant
• they strive to strengthen local culture and language
• they mostly identify with the topics they tackle
• they focus not on stars but on everyday people
• work roles and tasks strongly overlap; roles are unified
• their topics are not international or national events, nor party politics, but events of local interest (or news from neighbouring locations); the issues of local people and communities, their pleasures and conflicts, or events and topics important for a group of like-minded people
• their values: a basis in local culture, the presentation of local organisations, ongoing issues, trends, works of art, etc.

2.8.6 Genre theory

2.8.6.1 Interview
A personal conversation with a set form, recorded and presented with the means of journalism. The conversation takes place between press representative(s) and individual(s) from whom statement(s) are elicited for the purpose of publication. The aim of a classic or informative interview is to publish exact information obtained from the interviewee, a reliable and authentic source. The goal of a portrait or person-focused interview is to present the personality of the interviewee in detail.

2.8.6.2 The concept of the interview
It is a conversation between two or more people in which information is elicited from the interviewee regarding a specific subject. The interview is a widely used technique which is often applied, for example, in scientific research (e.g. the in-depth interview), in the media, in market research and in selecting employees.
37. 37. FUNDAMENTALS OF SOUND MODULE ADVANCED COURSE OF MEDIA LITERACY 37

2.8.6.3 Interview types
When conducting an interview, two basic considerations pull against each other: the subtlety of the information obtained and how easily it can be processed. The interview can therefore follow one of two major types, or something in between:

Unstructured interview: the subject of the interview may elaborate freely on their views about the given topic. They can express themselves subtly and informatively, although this makes it more difficult to compare the interview with other interviews.

Structured interview: the subject of the interview can only respond to predefined questions, under specified circumstances. This makes it easier to compare the interview with others, but the interviewee's means of self-expression are restricted, so the answers may be less informative.

2.8.6.4 Report
The report is a complex piece of news coverage which includes features of several genres; a phenomenon, or a group of phenomena, is reflected in it as filtered through the journalist's personality and shown in its social-psychological context. The journalist's personality is manifested in the choice of topic, which shows their social and psychological sensitivity. Their journalistic expertise shines through as they shape the factual material and occasionally express an opinion. The reporter questions the subjects or interviewees on location or in the studio. The interview, the news coverage and the portrait are not reports.

The concept of the report: the report gives an account of an interesting event. It is a human-centred genre: a newspaper, radio or TV piece made on location or based on experiences related to the location, by eliciting information from people involved in an event. It is pragmatic, it explains reality, and the plot defines it. It is a transitional genre, a cross between factual and opinion-based journalism.

The report explores, evaluates and generalises. Besides the event, the location and the characters are also important. The report is freer than the informative genres. It may even include dialogues or descriptions. It does not analyse, but forms a subjective opinion; the journalist can make an appearance in it. The author can even depict the featured personalities with the means of fiction.

Types of report:
• pragmatic, fact-finding
• judging, analysing
• report on an event or a condition (state)
• documentary report

2.8.6.5 News coverage
The news coverage is a faithful imprint of reality in an edited and condensed form; it is important that the reporter is on location and makes the coverage based on their own experiences and impressions. The reporter gives factual and exact information and helps us feel the atmosphere. The most essential difference between a news coverage and a news clip is the personal tone.
38. 38. FUNDAMENTALS OF SOUND MODULE ADVANCED COURSE OF MEDIA LITERACY 38

The news coverage is actually a piece of news, yet it is more than that. Besides presenting the main facts, it attempts to convey the atmosphere, as it is made on location (e.g. war coverage). It differs from the news clip in that it is not impersonal: the events are seen through the eyes of the journalist. The news coverage gives an account of the same things as a news clip (what happened to whom, when and where), but it also talks about the how. It does not usually discuss the why, because that belongs to the report. It is a member of the factual genre family.

News show: at least 90 percent of its airtime consists of items presenting recent domestic and international events – not including traffic news, the weather forecast and sports news. This is the most important subgroup of information shows; it mostly informs the audience through daily news shows and weekly summaries about the news of the world.

The concept of news: news is composed of pieces of information coming from checked and authentic sources; it concerns the majority of the population, and it is the result of professional journalistic research.

The structure and content of news: news is short, concise, clear and objective press material. All in all, it must answer six questions: "Who? When? Where? What? How? Why?" With a perfectionist approach, we can say that a well-written news item does not contain a single unnecessary word, meaning that removing any word would make the news impossible to understand. The news starts with a lead which summarises the most important pieces of information. The news body elaborates on the lead. The structure of news looks like a triangle turned upside down (in press jargon it is often referred to as the inverted pyramid). The widest part, that is, the beginning of the news, contains the most important pieces of information, whereas the least significant facts are left to the tip of the triangle (the end of the news item).

Objectivity in reporting news: in most cases, news organisations are expected to be impartial, which is often a challenge. Journalists often make the mistake of reporting on a topic influenced by their own opinion or political conviction, thus losing their objectivity: their credibility will be questioned, and they will become targets of ethical criticism.

Factors increasing the value of information: the pieces of information that make it into the news are those selected by press professionals (the so-called gatekeepers). The selection takes place on the basis of news value. This is the characteristic of an item of information that makes it worth using as news, so that it can reach as many people as possible. From another angle: news value is the additional characteristic of information that makes it sellable. Determining news value is a strictly professional task. News value is increased by the following factors:
• the up-to-date nature and freshness of the event or action
• the position or influence of the people involved, and whether they are famous
• geographical and cultural proximity
39. 39. FUNDAMENTALS OF SOUND MODULE ADVANCED COURSE OF MEDIA LITERACY 39

• the uniqueness of the event or action, its unusual, surprising or shocking nature
• the ability to place it into a thematic frame, that is, whether the news has an antecedent
• the interesting aspect of the information, its entertaining nature

2.8.6.6 Magazine shows
A magazine show is usually a series of live studio discussions linked together by the personality of the show host (or hosts). The live discussions with invited guests and the studio scenes are made more colourful by film clips, i.e. pre-recorded interviews, excerpts from films, and videos. Most of these shows are so-called service programmes which inform the public about important matters in an entertaining way, for example about health topics, parenting, the weather or traffic. Most of these shows are aired every day, mainly early in the morning; they typically incorporate all the programming offered during that time of day.

2.8.6.7 Background programme
This programme type is based on the detailed analysis of news or topics of public interest. As opposed to the mostly factual short news shows, which, due to their genre, must separate fact and opinion, these shows aim to provide the audience not only with the bare facts or the condensed news-clip version of events, but with images of the events that speak for themselves, with commentary, and often with conflicting opinions. The background show – most often a studio report or a debate – usually starts with a clip related to the theme, and it is often framed by a fact-finding report.

2.8.6.8 Sports broadcast
Live sports broadcasts are aired either simultaneously with the event, with a delay, or in edited form from a recording in the audiovisual media. The commentator or presenter plays a highlighted role in this genre, because besides following the events, they must provide the viewers or listeners with extra information and interesting facts in order to involve them in the event; they often voice their own opinion as well.

2.8.6.9 Cultural and educational shows
Educational and/or scientific programmes aim to cultivate the values of our cultural heritage or to promote cultural versatility. Previously, "school TV"* aired programmes that were linked to the standard educational curriculum. Today we have thematic scientific channels along with others that air literary programmes, classical music concerts, reports with artists or opera performances.

*In the 60s, 70s and 80s, TV programmes were often incorporated into class work at schools, and the programme structure of television took school schedules into consideration. Shows related to school material were often taken out of the TV archives for repeat broadcasts. The age of "school television" was a very important era. The "school TV" was a show in which learning and the teachers' preparation work were supported by the televised presentation of school material. http://www.youtube.com/watch?v=8qXO0qFO-Ls
40. 40. FUNDAMENTALS OF SOUND MODULE ADVANCED COURSE OF MEDIA LITERACY 40

2.9 Activities

1. The basics of sound
A. Practice on the basics of sound
1. After hearing some sound samples, try to determine their frequency. You might want to check yourself with the help of a smartphone application.
2. Determine the noise/signal ratio and dynamics of various audio files.

2. Sound design
B. Practice on sound design
3. Determine whether certain audio files were prepared at better or worse compression ratios.
4. Listen to a sound recording through speakers, and then with a headset. What differences can you hear? What are the advantages and disadvantages of each device?

3. Sound recording tools
C. Practice on sound recording tools
5. Copy the recorded audio file onto the computer, open it with the sound editor and listen to it.

4. Microphones
D. Practice on microphones
6. Make sound recordings in various situations with the microphones available. Try to select the most suitable microphone for the given situation and environment.

5. Audio editing
E. Practice on audio editing
7. Do the following with the file copied onto the computer:
8. Edit it so that its full length is between 75 and 90 seconds.
9. Cut it.
10. If the software allows it, use tone control, compression or a normaliser.

6. Distribution of sound files
F. Practice on distribution of sound files
11. Compress the recording at various settings (a small command-line sketch follows this list).
12. Select the lowest setting at which the sound quality is still enjoyable.
13. Upload it onto a community page.

7. Radio
G. Practice on radio
14. As a situational exercise, take turns being reporters and interviewees.
15. Hold an improvised crew meeting, do some brainstorming, prepare a radio show and distribute the tasks.
16. In groups of two or three, go on location and, based on the material of the genre theory lecture, make reports and interviews.
17. Edit and cut the recorded material.
18. Listen to the final materials together, and share your opinions on the good and bad solutions.
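As a starting point for activities 11 and 12, here is a minimal, non-authoritative sketch of how the compression step could be scripted. It assumes the free ffmpeg tool is installed and on your PATH; "recording.wav" and the list of bitrates are only placeholders, so substitute your own file and settings.

```python
# Minimal sketch for activities 11-12: compress one recording at several
# MP3 bitrates so the results can be compared by ear. Assumes ffmpeg is
# installed; the source file name and bitrates are placeholders.
import subprocess

SOURCE = "recording.wav"
BITRATES = ["64k", "128k", "192k", "256k"]   # settings to compare

for bitrate in BITRATES:
    target = f"recording_{bitrate}.mp3"
    subprocess.run([
        "ffmpeg", "-y",            # overwrite existing output without asking
        "-i", SOURCE,              # the uncompressed recording
        "-codec:a", "libmp3lame",  # MP3 encoder
        "-b:a", bitrate,           # constant bitrate for this pass
        target,
    ], check=True)
    print(f"Wrote {target}")
```

Listening to the resulting files side by side makes it easier to judge at which bitrate the loss of quality starts to become audible.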
  41. 41. BASIC COURSE OF MEDIA LITERACY Project supported by: This project has been funded with support from the European Commission
