These notes are based on the JNVU Jodhpur syllabus for BCA students.
Prepared by:
Assistant Professor
Gajendra Jingar
For more updates, connect with me: 9166304153 (WhatsApp)
1. VG Group Of Education
ASSISTANT PROFESSOR :GAJENDRA JINGAR(9166304153)
1
UNIT-II
Sound: Sound and its attributes, Mono vs Stereo sound, Sound channels, Sound and its effects in multimedia, Analog
vs Digital sound, Basics of digital sound - sampling, frequency, sound depth, channels; Sound on PC, Sound
standards on PC, Capturing and editing sound on PC, Overview and use of sound recording/editing software.
Overview of various sound file formats on PC - WAV, MP3, MP4, Ogg Vorbis, etc.
Animation: Basics of animation, principles and use of animation in multimedia, effect of resolution, pixel depth
and image size on quality and storage. Overview of 2-D and 3-D animation techniques and software - Animator Pro, 3D
Studio and Paint Shop Pro Animator.
Animation on the Web - features and limitations; creating simple animations for the Web using GIF Animator and
Flash.
Sound and Its Attributes
The amount of noise in a sound varies independently of its amplitude; you can have a
nice, clean, resonant sound, or you can have a wheezier sound. In a waveform display,
you have to know how to distinguish these: noise is irregular, whereas pitched sounds
are regular. If the degree of noisiness could be computed, it could be displayed
separately: a clean sound and a noisy one can share a similar envelope yet differ
clearly in the regularity of their waveforms.
The characteristics (Attributes) of sound are frequency, wavelength, amplitude and
velocity.
Sound - Frequency:
The frequency is the number of air pressure oscillations per second at a fixed point
occupied by a sound wave. One single oscillatory cycle per second corresponds to 1 Hz.
Sound-Wavelength:
The wavelength is the distance between two successive crests and is the distance that a
wave travels in the time of one oscillatory cycle.
The wavelength of a sound wave of frequency f and travelling at speed c is given by c/f.
Given a speed of 343 m/s, a 20 kHz sound wave has a wavelength of about 17 mm.
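The wavelength formula above can be checked with a minimal Python sketch (not part of the original notes):

```python
# Wavelength of a sound wave: lambda = c / f
# c = speed of sound (m/s), f = frequency (Hz)

def wavelength(speed_m_s: float, frequency_hz: float) -> float:
    """Return the wavelength in metres for a wave of the given frequency."""
    return speed_m_s / frequency_hz

# In air at 20 degrees C (c = 343 m/s):
print(wavelength(343, 20_000))  # 20 kHz -> 0.01715 m (about 17 mm)
print(wavelength(343, 20))      # 20 Hz  -> 17.15 m
```

The 20 Hz to 20 kHz span here covers the usual range of human hearing, so audible wavelengths in air run from about 17 mm up to about 17 m.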
Sound - Amplitude
The amplitude is the magnitude of the sound pressure change within the wave, essentially
the maximum pressure deviation at any point in the sound wave. A sound wave consists of
regions of increased pressure (the crests, or compressions, mentioned above) followed by
regions of decreased pressure (rarefactions) that trail them.
Amplitude is the maximal displacement of the particles of the medium, reached in the
compressions; it is more often expressed as sound pressure level and
measured in decibels.
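The decibel figure mentioned above can be computed from a pressure value. This sketch assumes the standard 20 micropascal reference pressure used for dB SPL; it is an illustration, not part of the original notes:

```python
import math

# Sound pressure level (SPL) in decibels, relative to the standard
# reference pressure of 20 micropascals (the threshold of hearing).
P_REF = 20e-6  # pascals

def spl_db(pressure_pa: float) -> float:
    """Convert an RMS sound pressure (Pa) to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # the reference pressure itself -> 0.0 dB
print(spl_db(0.02))   # 1000x the reference -> 60.0 dB (conversation level)
```

The logarithmic scale is why a 1000-fold pressure increase reads as only 60 dB.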
Sound - Velocity
Sound's propagation speed depends on the type, temperature and pressure of the medium
through which it propagates. Under normal conditions, however, because air is nearly a
perfect gas, the speed of sound does not depend on air pressure. In dry air at 20 °C
(68 °F) the speed of sound is approximately 343 m/s (approximately 1 meter every 2.9
milliseconds). The speed of sound relates frequency to wavelength.
Sound Channels and Sound Depth:
The audio signal is split into multiple channels so that different sound information comes
out of the various speakers. Mono has only one channel, whereas stereo sound has two
channels, left and right.
If one instrument or voice is only produced in the left channel, it will seem to originate
from the left side of the listening area. If a particular sound is only slightly louder in one
of the channels, that sound will seem to originate off center slightly toward the channel in
which the sound is louder. The two channels in stereo are used to give the audio a sense
of depth.
Stereo Vs Mono:
Stereo sound has two independent channels, one left and one right. The left and right
signals of the stereo signal are similar but not exactly the same. The two channels are
used to give the audio a sense of depth. If one instrument or voice is only produced in the
left channel, it will seem to originate from the left side of the listening area. If a particular
sound is only slightly louder in one of the channels, that sound will seem to originate off
center slightly toward the channel in which the sound is louder. If you have two speakers
but supply a mono signal to both of them, there will be no sense of separation or depth. If a
mono signal is fed to both channels of a stereo amplifier, with a speaker on each channel,
the output is still mono.
Note:
In the following descriptions, 'x', 'y', and 'z' are the different sounds (instruments,
vocals, etc.) in the audio.
Mono with one speaker:
With a single speaker directly in front of the listening position, the
audio appears to (and does) originate from the speaker.
Mono with 2 speakers:
Here the same signal is reproduced by both speakers. Since
the signal content going to each speaker is precisely the same, this is a mono system. If
the level of the signal is the same in both speakers, the signal will appear to originate
precisely in the center of the speakers.
If the signal content from each speaker is the same but slightly
louder in the right channel, the sound will seem to originate a little to the right of
center.
Stereo audio:
Here the 'x' portion of the audio is reproduced equally in both
channels and appears to originate in the center of the 2 speakers. The 'y' portion of the
audio is only in the left speaker and appears to originate from the left speaker's position.
The 'z' portion of the audio is only reproduced by the right speaker. This means that it
will appear to originate from the right speaker's position.
If the 'y' portion of the audio is produced in both channels but at
a reduced level in the right channel, the 'image' of the y part of the audio
signal will appear to originate from left of center (not the far left or the center). This is how
the audio 'stage' is reproduced with a stereo signal (different signals are
recorded/reproduced at different levels in each of the speakers).
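The level-difference positioning described above can be sketched in Python. The constant-power panning law used here is one common choice for computing the two channel gains; it is an illustration, not something the notes specify:

```python
import math

def pan_gains(pan: float) -> tuple[float, float]:
    """Constant-power panning: pan = -1.0 (full left) .. +1.0 (full right).
    Returns (left_gain, right_gain); the combined power stays constant."""
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

for pan in (-1.0, 0.0, 0.5, 1.0):
    left, right = pan_gains(pan)
    print(f"pan={pan:+.1f}  L={left:.3f}  R={right:.3f}")
```

At pan = 0 both gains are about 0.707, which makes the sound appear to come from dead center; a small gain difference shifts the apparent position toward the louder channel, exactly as the text describes.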
Analog vs Digital
Analog and digital are both ways of encoding information. Digital lends itself to computers
and other electronic equipment by recording information as 1s and 0s. This data can then
be read by electronic instruments and turned into something familiar that we can
understand, such as words, pictures or sound.
Analog, on the other hand, consists of continuous, variable electrical waves that
represent an infinite number of values. When sound is recorded digitally, it is broken
into 0s and 1s; each group of bits represents a tiny piece of the sound, and put back
together they reproduce the full sound. Analog records sound just as it hears it; it does
not break the signal into separate pieces - it is continuous.
Analog recording therefore captures the sound as it is produced, arguably giving a
deeper, richer representation of the original. Even so, as everything becomes more
computerized, audio is going digital, because digital offers many practical improvements
over analog.
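The "breaking sound into 0s and 1s" described above is sampling plus quantization. A minimal sketch follows; the 8 kHz sample rate and 8-bit depth are arbitrary choices for illustration, not values from the notes:

```python
import math

# Digitizing an analog signal: sample a 440 Hz sine wave at 8 kHz and
# quantize each sample to 8 bits (integer codes 0..255), as an ADC would.
SAMPLE_RATE = 8_000   # samples per second
FREQ = 440            # Hz (concert A)
BITS = 8

def digitize(num_samples: int) -> list[int]:
    levels = 2 ** BITS
    samples = []
    for n in range(num_samples):
        t = n / SAMPLE_RATE                            # time of the n-th sample
        analog = math.sin(2 * math.pi * FREQ * t)      # continuous value in [-1, 1]
        code = round((analog + 1) / 2 * (levels - 1))  # map to an integer 0..255
        samples.append(code)
    return samples

data = digitize(16)
print(data)  # 16 integer codes - the "0s and 1s" the text describes
```

More bits per sample (sound depth) and more samples per second (sampling rate) both make the digital copy a closer approximation of the continuous analog wave.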
Surround Sound Basics
The main thing that sets a home theater apart from an ordinary television setup is the
surround sound. For a proper surround-sound system, you need two to three speakers in
front of you and two to three speakers to your sides or behind you. The audio signal is
split into multiple channels so that different sound information comes out of the various
speakers.
The most prominent sounds come out of the front speakers. When someone or something
is making noise on the left side of the screen, you hear it more from a speaker to the left
of the screen. When something is happening on the right, you hear it more from a speaker
to the right of the screen.
The third speaker sits in the center, just under or above the screen. This center speaker is
very important because it anchors the sound coming from the left and right speakers -- it
plays all the dialogue and front sound effects so that they seem to be coming from the
center of your television screen, rather than from the sides.
The speakers behind you fill in various sorts of background noise in the movie -- dogs
barking, rushing water, the sound of a plane overhead. They also work with the speakers
in front of you to give the sensation of movement -- a sound starts from the front and then
moves behind you.
Today, there are two main sources for surround-sound formats -- Dolby Laboratories and
Digital Theater Systems. Dolby Laboratories formats include various versions of Dolby
Digital® and Dolby Pro Logic®. Digital Theater Systems has created a range of DTS
Digital Theater Sound formats.
DTS encoding uses less compression than Dolby encoding. This means that DTS sound is
clearer and sharper.
However, DTS encoding is also less commonly used on DVDs and television broadcasts.
Most DVDs have some Dolby sound options, and some also offer choices for DTS sound.
The most common options are 5.1, 6.1 and 7.1 surround, named for the number of
channels. The ".1" indicates a channel for a subwoofer. The subwoofer channel carries
low-frequency sound to give a bass boost and create a rumbling effect for certain special
effects sounds, such as explosions and trains.
Importance of Sound on multimedia
Sound effects are present in almost every medium you see and hear on a daily basis.
Television, movies, web sites, and digital music use sound effects to help our brains
grasp the topic or environment easily. Their importance can easily be measured by
their absence: if we see a gun and watch it fire but do not hear the
gunshot, we feel that the experience is somehow broken, fake, or just doesn't make
sense.
Having exactly the right sound for your images can be crucial for getting the attention of
the audience. Sometimes the sounds are isolated or symbolic, like the ceiling fan at the
beginning of "Apocalypse Now". Most of the other sound effects are removed to focus on
the ceiling fan, whose sound is a combination of blades moving quickly past the
microphone and the blades of a helicopter.
Sometimes the sounds are more collaborative and are mixed together to make a
scene sound realistic. For example, the sound of a busy airport can create the realism
that your brain unconsciously expects.
But you will rarely find a prefabricated sound bite that has everything
you need laid in at exactly the right time. That is why clean, individual sounds are
vital for creating the feel you need.
The importance of sound in your production is paramount. Cheap sounds pull your
audience out of the realism of the experience. This is why high quality sounds should be
collected and used.
SOUND AND ITS EFFECT IN MULTIMEDIA
There are mainly five (5) sound/audio effects available in almost all audio editing
software. These are the following:
1. Amplitude Effect
2. Delay effects
3. Time/Pitch effects
4. Reverse effect
5. Invert effect
1. Amplitude Effects: Amplitude effects are classified into the following eight (8) groups:
1) Amplify
2) Fade In/ Fade Out
3) Normalize
4) Compressor
5) Expander
6) Envelope
7) Mute
8) Vibrato
Amplify: The Amplify effect is used to increase or decrease the amplitude of the sound in
the media file. If you select a part of the file with the mouse, this effect will amplify or
attenuate that exact part of the file; otherwise the sound of the entire file will be
amplified or diminished.
Fade In and Fade Out:
Use the Fade In effect to fade in the sound in the media file. If you select a part of the file
with the mouse, this effect will fade in the sound of this exact part of the file. Otherwise
the sound of the beginning of the file will be faded in.
Use the Fade Out effect to fade out the sound in the media file. If you select a part of the
file with the mouse, this effect will fade out the sound of this exact part of the file.
Otherwise the sound of the end of the file will be faded out.
Normalize: Use this effect to achieve the maximum amount of amplification that will not
result in clipping. If you select a part of the file with the mouse, this effect will amplify
the highlighted selection to the specified percentage of the maximum level; otherwise the
sound of the entire file will be normalized.
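The Amplify and Normalize effects above can be sketched directly on raw samples. The list-of-floats representation (values in -1.0 to 1.0) is an assumption for illustration:

```python
# Amplify scales every sample by a gain factor; Normalize finds the gain
# that brings the loudest peak to a target level without clipping.

def amplify(samples: list[float], gain: float) -> list[float]:
    return [s * gain for s in samples]

def normalize(samples: list[float], target_peak: float = 1.0) -> list[float]:
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples[:]          # silence: nothing to scale
    return amplify(samples, target_peak / peak)

quiet = [0.1, -0.25, 0.2, -0.05]
print(amplify(quiet, 2.0))   # every sample doubled
print(normalize(quiet))      # peak of 0.25 scaled up to 1.0
```

Normalize is just Amplify with an automatically computed gain, which is why it can never clip: the loudest sample lands exactly on the target level.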
Compressor: Compressor effect is used to reduce the dynamic range of an audio signal.
For example, compressors can be used to remove the variations in the peaks of an electric
bass signal by clamping them to a constant level (thus providing an even, solid bass line.)
Compressors can also be useful in compensating for the wide variations in the level of a
signal produced by a singer who moves frequently or has an unreliable dynamic range.
Expander: The Expander effect is used to expand the dynamic range of an audio signal.
An expander boosts high-level signals and attenuates low-level signals.
Envelope: The Envelope effect is used to change the audio file's amplitude according to
specified coordinates, allowing the sound to swell or die away gradually. It is
generally used to smooth the beginning or end of a sound. This effect is also useful for
creating audio loops and samples.
Mute: Mute effect is used to switch off the sound in the edited audio file.
Vibrato: Vibrato is a cyclic variation in the frequency of the input signal.
2. Delay Effects:
Delay effects are classified into the following five (5) groups:
1) Delay
2) Phaser
3) Flanger
4) Chorus
5) Reverb
Delay: This effect permits you to create an echo by replaying
the sounds of the selected audio portion after a certain period of time. Applying this
filter can bring life to dull mixes and widen and fill out an instrument's sound.
You can use this function to create single echoes, as well as a number of other effects.
Delays of 35 milliseconds (ms) or more will be perceived as discrete echoes, while those
falling within the 15-35 ms range can be used to create a simple chorus or flanging
effect. (These effects will not be as effective as the actual Chorus or Flanger effects,
because the delay settings are fixed and do not change over time.)
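The echo behaviour described above can be sketched as a feedback delay line. The 3-sample delay below is purely for readability; a real 35 ms echo at 44.1 kHz would use a delay of roughly 1543 samples:

```python
# A simple delay (echo) effect: each output sample is the input plus an
# attenuated copy of the output from `delay_samples` earlier (feedback echo).

def echo(samples: list[float], delay_samples: int, decay: float = 0.5) -> list[float]:
    out = list(samples)
    for i in range(delay_samples, len(out)):
        out[i] += decay * out[i - delay_samples]
    return out

# An impulse followed by silence shows the repeating, decaying echoes clearly.
signal = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(echo(signal, delay_samples=3))
# -> [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

Because the effect feeds its own output back in, each repeat is quieter by the decay factor, which is what gives the characteristic trailing echo.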
Phaser: The Phaser filter makes the selected portion of your audio thinner or fuller
by mixing the automatically filtered and unfiltered audio signals. You can apply this
filter to give a "synthesized" or electronic effect to natural sounds.
The Phaser achieves its distinctive sound by creating one or more notches in the
frequency domain that eliminate sounds at the notch frequencies.
It is very similar to flanging. If two identical signals that are exactly out of phase are
added together, they cancel each other out. If, however, they are only
partially out of phase, partial cancellations and partial enhancements occur. This
leads to the phasing effect.
Flanger: The Flanger effect is another elaborate audio effect, created by
mixing a signal with a slightly delayed copy of itself, where the length of the delay is
constantly changing. With the Flanger filter you can "shape" the sound by
controlling how much delayed signal is added to the original. Use it if you want to create
a "whooshing" sound effect in some fragment of your audio track.
Flanging is a special case of the Chorus effect and is created in the same way. In days
gone by, flanging was created by sound engineers who put a
finger on the tape reel's flange, thus slowing it down: two identical recordings were
played back simultaneously, with one slowed down to give the flanging effect.
A Flanger gives a "whooshing", pulsating sound; it is essentially an
exaggerated Chorus.
Chorus: The Chorus effect allows you to make your audio sound fuller. It can make a
single instrument sound like there are actually several instruments being played. It adds
some thickness to the sound, and can be described as 'lush' or 'rich'.
The Chorus effect is so named because it makes the recording of a vocal track sound like
it was sung by two or more people singing in chorus. This is achieved by adding a single
delayed signal (echo) to the original input.
The Chorus differs from the Flanger in only a couple of ways. One difference is the
amount of delay that is used: the delay times in a Chorus are larger than in a Flanger.
This longer delay does not produce the characteristic sweeping sound of the Flanger. The
Chorus also differs from the Flanger in that there is generally no feedback used.
Reverb: The Reverberation filter helps you apply the particular effect when the sound
stops but the reflections continue, decreasing in amplitude, until they can no longer be
heard.
You can use this function to set a Reverb effect that simulates an acoustic space and
consists of both early reflections and echoes that are so closely spaced that they are
perceived as a single fading sound. Reverb differs from a basic echo in
that the delays are not repeated at regularly spaced intervals. The Reverb function can
create a wide range of high-quality reverb effects.
It is the sound you hear in a room with hard surfaces where sound bounces around the
room for a while after the initial sound stops. Reverb is used to simulate the acoustical
effect of rooms and enclosed buildings. In a room, for instance, sound is reflected off the
walls, the ceiling and the floor. The sound heard at any given time is the sum of the sound
from the source, as well as the reflected sound.
3. Time/Pitch Effects:
Time/Pitch effects are classified into the following two (2) groups:
1) Time Stretch
2) Pitch Shift
Time Stretch: The Time Stretch effect permits you to change the tempo (rhythm) while
keeping the pitch the same throughout. If you select a part of the file with the mouse, this effect
will change the tempo of this exact part of the file. Otherwise the tempo of the whole file
will be changed.
Pitch Shift: The Pitch Shift effect shifts the frequency spectrum of the input signal. It can
be used to mask a person's voice, or make the voice sound like that of the "chipmunks",
through to "Darth Vader". It is also used to create harmony in lead passages, although it
is an ''unintelligent" harmonizer.
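A naive pitch change can be sketched by resampling. Note that, unlike the real Pitch Shift effect described above, this simple approach also changes the duration; it only illustrates the frequency-scaling idea:

```python
# Reading the samples faster raises the pitch (and shortens the clip);
# reading them slower lowers the pitch (and lengthens it).

def resample(samples: list[float], rate: float) -> list[float]:
    """rate > 1.0 raises pitch, rate < 1.0 lowers it (linear interpolation)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Interpolate between neighbouring samples at the fractional position.
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += rate
    return out

tone = [0.0, 1.0, 0.0, -1.0] * 4        # a short square-ish wave
up_octave = resample(tone, 2.0)          # twice the pitch, half the samples
print(len(tone), len(up_octave))         # 16 8
```

A true Pitch Shift effect combines this frequency scaling with a time-stretch stage so the overall duration stays the same.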
4. Reverse effect:
With the help of this function you can make a selection play backwards by reversing the
order of its samples. It is useful for creating special effects.
If you select a part of the file with the mouse, this effect will be applied to this exact part
the file. Otherwise the sound of the whole file will be changed.
5. Invert effect:
With the help of this function you can simply invert the samples, so that all positive
offsets are negative and all negative offsets are positive. Inverting does not produce an
audible effect, but it can be useful in lining up amplitude curves when creating loops, or
pasting. On stereo waveforms, both channels are inverted. If you select a part of the file
with the mouse, this effect will be applied to this exact part of the file. Otherwise the
sound of the whole file will be altered.
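The Reverse and Invert effects above can be sketched directly on sample lists (again assuming a float-sample representation):

```python
# Reverse plays the selection backwards; Invert flips the sign of every
# sample (inaudible on its own, but useful for aligning waveforms).

def reverse(samples: list[float]) -> list[float]:
    return samples[::-1]

def invert(samples: list[float]) -> list[float]:
    return [-s for s in samples]

clip = [0.5, -0.2, 0.8, 0.0]
print(reverse(clip))  # samples in the opposite order
print(invert(clip))   # every positive offset made negative, and vice versa
```

Inverting twice returns the original signal, which is consistent with the text's point that inversion changes nothing audible on its own.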
Sound Standards
A sound card produces sound, and sound cards have an audio connector for CD-audio
output. When a sound card is attached to a system, the system becomes a multimedia
system. The hardware configuration of the AdLib sound card was the first sound card
standard.
Later, the sound card developed by Creative Labs (the SoundBlaster) set the standard for
digital audio on the PC.
Creative Labs first developed an 8-bit sound card and later a 16-bit one, followed
by the 32-bit AWE32, which satisfied the requirements of PC users. The AWE32 was sold
to PC manufacturers as an OEM kit, which helped bring down the price and raise the standard.
The AWE64, launched in late 1997 and offering 64-note polyphony from a single MIDI
device (32 notes handled in hardware and 32 in software), became the benchmark.
Most sound cards sold today support the SoundBlaster and General MIDI standards
and can record and play digital audio at 44.1 kHz stereo. This is
the resolution at which CD audio is recorded, which is why such sound cards are often
described as having "CD-quality" sound.
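The 44.1 kHz, 16-bit stereo figures above translate directly into a data rate, which this small calculation makes explicit:

```python
# Data rate of uncompressed CD-quality audio:
# 44,100 samples per second, 16 bits (2 bytes) per sample, 2 channels.
SAMPLE_RATE = 44_100   # Hz
BYTES_PER_SAMPLE = 2   # 16-bit
CHANNELS = 2           # stereo

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
print(bytes_per_second)                   # 176400 bytes per second
print(bytes_per_second * 60 / 1_000_000)  # roughly 10.6 MB per minute
```

That roughly 10 MB per minute is why uncompressed WAV files are so large, and why compressed formats such as MP3 (covered later) matter.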
Surround sound for the movies is pre-recorded and delivered consistently to the ear, no
matter what cinema or home it is replayed in. Just about the only thing Dolby cares about
is how far away the rear speakers are from the front and from the listener. Beyond that it's
the same linear delivery, without any interaction from the listener - the same as listening
to music.
This is obviously no good for games, where the sound needs to interactively change with
the on-screen action in real time. What now seems like a very long time ago, Creative
Labs came up with its SoundBlaster mono audio standard for DOS games on PCs. As the
standard matured, realism improved with stereo capability (SoundBlaster Pro), and
sound quality rose to CD resolution (SoundBlaster 16). When you started a
game, you would select the audio option that matched your sound card. Microsoft, however,
changed the entire multimedia standards game with its DirectX standard in Windows 95.
The idea was that DirectX offered a load of commands, also known as APIs, which did
things like "make a sound on the left" or "draw a sphere in front". Games would then
simply make DirectX calls and the hardware manufacturers would have to ensure their
sound and graphics card drivers understood them.
Sound on PC
The sound in a desktop PC is usually produced by a sound card (also known as an
audio card) installed in an adapter slot on the computer's motherboard, or built into
the motherboard itself, in which case it is known as an integrated sound card. The sound
card, in turn, is attached to other peripheral devices such as a CD/DVD drive in order to
produce sound via the operating system, which uses the card's software driver and a
sound player such as Windows Media Player (WMP), or a third-party player such as
Winamp, RealPlayer, or Apple's QuickTime.
A sound card can be added to a laptop computer via a PC Card in a
CardBus/PCMCIA slot (older technology) or via an ExpressCard/54 slot (newer
technology).
External USB or FireWire sound devices, sometimes also called sound cards, plug into a
USB or FireWire port on the computer. They work outside the
computer in the same way as an internal sound card.
In a desktop PC, the internal or external connections for sound devices are found on the
sound card (in the form of a PCI or PCI Express adapter card) or they are provided
from the computer's motherboard. For example, if the motherboard has a built-in sound
chip, on its external ports panel, the Line Out or Speaker Output port is used for the
speakers and headphones, and the Line In port is used for an external CD player. In order
to provide sound, an internal CD/DVD drive would be connected by an internal cable
connected to an internal header on the motherboard. If the computer has an internal or
external sound card, the connections go to it.
If you want to use a video/graphics card with a HDMI (High Definition Multimedia
Interface) output that combines sound and video, you have to cable the digital S/PDIF
surround-sound output from the sound card or motherboard into the graphics card to
provide HDMI with both sound and picture so as to take advantage of its full capabilities.
However, note that if you are only using a 2.1 stereo speaker set for the sound output, you
will only get two-channel simulated surround sound from it. A 5.1 or 7.1 surround-
sound speaker system is required for actual surround sound.
If you plug an external USB or FireWire sound card into a USB or FireWire port on
a desktop or laptop PC, you should remove any internal PCI sound card
(installed in a PCI slot of a desktop PC's motherboard) so that Windows stops installing
its device driver at startup.
Sound cards are available in two current motherboard standards - PCI and PCI Express.
The ISA standard has become obsolete, but you can still purchase ISA sound cards that
fit into the ISA slots on old motherboards, which provide an ISA slot for the use of what is
known as legacy (obsolete) hardware. All new sound cards fit into a PCI or a PCI Express
x1 slot on the motherboard.
A PCI Express card, such as the Creative Sound Blaster X-Fi Xtreme Audio sound card, uses a
x1 PCI Express slot on a motherboard that provides one or more of them.
The Sound Blaster X-Fi Xtreme Audio sound card can turn downloaded music into a personal
concert, allows you to watch DVDs or downloaded videos with full cinematic surround
sound, and provides 3D audio and EAX effects in PC games.
The image below shows the slot arrangement on a typical motherboard.
Remember that new motherboards no longer have any ISA slots. Also note that
motherboards are coming out now that use their own colour schemes for the slots instead
of the standard black (ISA), white (PCI), and brown (AGP and PCI Express) colours.
The slots can be any colour.
Note that some motherboards have a PCI-X slot. PCI-X is the extended PCI standard,
both of which have been replaced by the PCI Express standard. The 64-bit PCI-X bus
slot has double the maximum throughput of PCI, at a maximum speed of 3 Gbps. Most
PCI-X cards are backwards compatible with PCI bus slots, which means that you can
install a PCI-X card in a PCI slot provided that it has the correct voltage keying for the
slot and that there is enough free space directly behind the PCI slot to
accommodate the additional length of PCI-X cards.
The AGP graphics standard is no longer used on most new motherboards, having been
replaced by the PCI Express and PCI Express 2 standards. PCI Express x1 slots are
used for devices, such as some graphics cards, sound cards, and Ethernet network cards.
The following diagram shows the PCI Express x16 and x1 slots, and the two standard
PCI slots on a Gigabyte GA-MA78GM-S2H motherboard.
Sound Recording System
Today, there are two main providers of sound recording systems -- Dolby Laboratories
and Digital Theater Systems.
Dolby Laboratories formats include various versions of Dolby Digital® and Dolby Pro
Logic®.
Digital Theater Systems has created a range of DTS Digital Theater Sound formats.
Sound/Audio Editing Software
Audio editing software allows you to open, edit, manipulate, transform and save digital
audio sound files in various formats. Sound Forge XP, Audacity, and CoolEdit are some
examples.
Audio editing software presents a pictorial view of the audio waveform and allows
editing to be done by selecting specific points or ranges on the waveform
with the mouse and keyboard and choosing editing functions from a menu.
The following are common features:
Opening an existing sound file.
Playing the whole file or a selected portion.
Accurately positioning the playback head by specifying a time or by using the zoom feature.
Copying and pasting portions of a file.
Magnifying and zooming.
Mixing sounds, cross-fading, and sound effects.
Converting between mono and stereo sound.
Changing the sampling rate and bit depth.
Recording sound.
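A tiny example of editing sound programmatically, using Python's standard wave module: read a WAV file, reverse the order of its frames, and save the result. The file names are placeholders:

```python
import wave

def reverse_wav(in_path: str, out_path: str) -> None:
    """Read a WAV file, reverse its audio frames, and write a new WAV file."""
    with wave.open(in_path, "rb") as src:
        params = src.getparams()            # channels, sample width, rate...
        frames = src.readframes(src.getnframes())

    width = params.sampwidth * params.nchannels   # bytes per frame
    # Split the raw bytes into whole frames, reverse their order, and rejoin.
    frame_list = [frames[i:i + width] for i in range(0, len(frames), width)]
    reversed_frames = b"".join(reversed(frame_list))

    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)               # keep the same format as the input
        dst.writeframes(reversed_frames)

# Usage (file names are hypothetical):
# reverse_wav("input.wav", "output.wav")
```

Note that frames are reversed whole rather than byte by byte; reversing individual bytes would scramble the 16-bit samples and the channel interleaving.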
Multimedia Sound Formats
The MIDI Format
The MIDI (Musical Instrument Digital Interface) is a format for sending music
information between electronic music devices like synthesizers and PC sound cards.
The MIDI format was developed in 1982 by the music industry. It is very
flexible and can be used for everything from very simple to professional music
making.
MIDI files do not contain sampled sound, but a set of digital musical instructions
(musical notes) that can be interpreted by your PC's sound card.
The downside of MIDI is that it cannot record sounds (only notes). Or, to put it another
way: It cannot store songs, only tunes.
The upside of the MIDI format is that since it contains only instructions (notes), MIDI
files can be extremely small; a MIDI file of only 23 KB can play for
nearly 5 minutes.
The MIDI format is supported by many different software systems over a large range of
platforms. MIDI files are supported by all the most popular Internet browsers.
Sounds stored in the MIDI format have the extension .mid or .midi.
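To see why MIDI files are so small, consider that a complete note event is only a few bytes. This sketch builds the raw "note on"/"note off" channel messages by hand (the helper functions are illustrative, not part of any library):

```python
# A MIDI "note on" message is three bytes: a status byte (0x90 + channel),
# a note number (middle C = 60), and a velocity (how hard the key is hit).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    return bytes([0x80 | channel, note, 0])

msg = note_on(channel=0, note=60, velocity=100)
print(msg.hex())  # 903c64 - three bytes describe an entire note event
```

Three bytes per note event, versus 176,400 bytes per second for CD-quality sampled audio, is the whole story behind the "tunes, not songs" trade-off described above.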
The RealAudio Format
The RealAudio format was developed for the Internet by RealNetworks. The format also
supports video. The format allows streaming of audio (on-line music, Internet radio) with
low bandwidths. Because of the low bandwidth priority, quality is often reduced.
Sounds stored in the RealAudio format have the extension .rm or .ram.
The AU Format
The AU format is supported by many different software systems over a large range of
platforms. Sounds stored in the AU format have the extension .au.
The AIFF Format
The AIFF (Audio Interchange File Format) was developed by Apple.
AIFF files are not cross-platform and the format is not supported by all web browsers.
Sounds stored in the AIFF format have the extension .aif or .aiff.
The SND Format
The SND (Sound) was developed by Apple.
SND files are not cross-platform and the format is not supported by all web browsers.
Sounds stored in the SND format have the extension .snd.
The WAVE Format
The WAVE (waveform) format was developed by IBM and Microsoft.
It is supported by all computers running Windows and by all the most popular web
browsers. Sounds stored in the WAVE format have the extension .wav. It is an
expandable format that supports multiple data formats and compression schemes. It is
used for uncompressed 8-, 12- and 16-bit audio, both mono and multi-channel, at a
variety of sampling rates including 44.1 kHz. WAV can also carry audio compressed with
codecs such as DPCM and ADPCM, but professional users prefer uncompressed WAV
for maximum audio quality. WAV audio can also be edited and manipulated with relative
ease using software.
The MP3 Format (MPEG)
MP3 files are actually MPEG files, but the MPEG format was originally developed for
video by the Moving Picture Experts Group. We can say that MP3 files are the sound
part of the MPEG video format.
MP3 is one of the most popular sound formats for music recording. The MP3 encoding
system combines good compression (small files) with high quality. Expect all your future
software systems to support it.
Sounds stored in the MP3 format have the extension .mp3 or .mpga (for MPG Audio).
The file can be encoded at a variety of bit rates, and provides good results at bit rates of
96 kbps and above.
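Because MP3 uses a chosen bit rate, the size of a compressed file follows from simple arithmetic: size in bytes = bit rate (bits/second) x duration (seconds) / 8. A small sketch (the function name and the 4-minute track are illustrative):

```python
def mp3_size_bytes(bitrate_kbps, seconds):
    """Estimated size of a constant-bit-rate audio file in bytes."""
    return bitrate_kbps * 1000 * seconds // 8

# A 4-minute (240-second) track at the 96 kbps rate quoted above:
print(mp3_size_bytes(96, 240))  # 2880000 bytes, i.e. about 2.88 MB
```

Compare this with uncompressed CD-quality WAV (44100 samples/s x 2 bytes x 2 channels x 240 s, about 42 MB) to see why MP3's compression matters.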
The Ogg Vorbis
Ogg Vorbis is a completely free and open audio compression project from the
Xiph.org Foundation, and is part of their Ogg effort to create free and open multimedia
and signal-processing standards. The format is popular among open-source communities,
which argue that its higher fidelity and completely free nature make it a natural
replacement for the MP3 format. In games, Ogg Vorbis has largely replaced MP3 as the
de facto standard audio codec, with many newer video game titles employing Ogg Vorbis
instead of MP3. Vorbis uses the modified discrete cosine transform (MDCT) to convert
the sound data from the time domain to the frequency domain and back. Given 44.1 kHz
stereo input, the current encoder will produce output from 45 to 500 kbps depending on
the specified quality setting.
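As a rough sketch of the transform Vorbis builds on, a direct (unoptimized) MDCT can be written in a few lines of Python: 2N time-domain samples go in, N frequency-domain coefficients come out. Real encoders use fast, windowed, overlapping implementations; this is purely illustrative.

```python
import math

def mdct(x):
    """Direct MDCT: 2N input samples -> N coefficients.

    X[k] = sum_{n=0}^{2N-1} x[n] * cos[(pi/N) * (n + 1/2 + N/2) * (k + 1/2)]
    """
    N = len(x) // 2
    return [
        sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
            for n in range(2 * N))
        for k in range(N)
    ]

# An 8-sample block yields 4 coefficients (half as many, thanks to overlap).
print(len(mdct([1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0, 0.5])))  # prints: 4
```

The halving of output size per block, combined with 50% overlap between adjacent blocks, is what lets MDCT-based codecs avoid blocking artifacts at frame boundaries.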
MP4
MP4, or rather MPEG-4, was developed by ISO (the International Organization for
Standardization). It is a format designed specifically for multimedia; the most common
uses are for digital audio and video, and it is a container that holds all this information.
It can also contain other data such as subtitles and still images. You should also note that
MPEG-4 is closely based on Apple's QuickTime MOV format. In fact, almost any kind
of data can be inserted into MP4.
MP4 is getting more and more popular; it is now widely used by owners of the latest
iPod players and PSPs. This type of file compression provides a way to store
DVD-quality movies at a very small size, and it is known as the MPEG-4 format.
Several other extensions are used for MP4 files: .m4p (protected audio from the
iTunes Store), .m4a (audio only) and .m4b (audiobook files, which can include chapter
markers, images and hyperlinks, and which can bookmark playback position on iPods,
where .m4a files cannot). Most of the time, a file with the extension .mp4 or .m4v
contains both audio and video.
MP4 can contain a variety of codecs as well: audio, video and still images (pictures),
and all kinds of other data, making it a versatile container to use.
MP4 competes with other container technologies, e.g. Ogg, VOB, RatDVD, the DivX
media format, Matroska (MKV) and others, to name a few.
MP4 works with a variety of software and hardware. Software includes Amarok, Banshee
music player, 3ivx, foobar2000, GOM Player, iTunes, Media Player Classic, QuickTime
Player, RealPlayer, VLC media player, and so forth; hardware includes the KiSS 1600,
Apple iPod, PSP (PlayStation Portable), PlayStation 3, Xbox 360 and Nokia devices.
So what is MP4? In short, it is a flexible container format that makes entertainment
much more convenient than ever before. The next time you download your favorite
movies to transfer to a portable multimedia device, consider MP4; just make sure your
multimedia devices are capable of reading the MP4 format.
What Format To Use?
The WAVE format is one of the most popular sound formats on the Internet, and it is
supported by all popular browsers. If you want recorded sound (music or speech) to be
available to all your visitors, you should use the WAVE format.
The MP3 format is the new and upcoming format for recorded music. If your website is
about recorded music, the MP3 format is the choice of the future.
Principles of Animation:
By definition, animation is the act of making something come alive. Visual effects such
as wipes, fades, zooms and dissolves are a simple form of animation. In animation, a
series of images is changed very slightly and very rapidly, one after the other, giving the
visual illusion of movement.
We often think of animation as full-length Disney movies and Saturday-morning
cartoons in which illustrated heroes and villains, and especially animal characters, come
to life. Television programs, movies and videos are part of our daily lives.
Usages of Animation
Animation plays a huge role in entertainment and provides action and realism.
Animation also plays a huge role in education and training programs, where it provides
visualization and demonstration.
The perception of motion in an animation is an illusion. The movement we see is, like a
movie, made up of many still images, each in its own frame. Movies on video run at
about 30 frames per second, but computer animations can be effective at 12 to 15 fps;
anything less results in jerky motion, as the eye detects the changes from one frame to
the next.
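The frame rates above translate directly into the number of still images an animator must produce. A trivial Python sketch using the figures quoted in the text:

```python
def frame_count(fps, seconds):
    """Number of still frames needed for a clip at a given frame rate."""
    return fps * seconds

# A 10-second clip:
print(frame_count(30, 10))  # video at ~30 fps: 300 frames
print(frame_count(12, 10))  # minimal computer animation at 12 fps: 120 frames
```

This is one reason computer animation targets 12 to 15 fps: it needs well under half the frames (and storage) of full video rate while still appearing smooth.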
Basics of Animation
Animation is normally of two types: 2-d and 3-d.
2-d animation:
There are two types of 2-d animation: cel and path.
Cel animation is based on the changes that occur from one frame to another to give the
illusion of movement. "Cel" comes from the word celluloid (a transparent sheet material),
which was first used to draw the images and place them over a stationary background. In
cel animation the background remains stationary whereas the object changes its position
from frame to frame. You can have more than one object move against a fixed
background.
Path animation moves an object along a predetermined path on the screen. The path
could be a straight line or it could include any number of curves. Often the object does
not change, although it might be resized or reshaped. Some programs allow motion
tweening and shape tweening for this purpose.
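Motion tweening as described above boils down to interpolating an object's position between two keyframes. A minimal linear-tween sketch in Python (the function name and the 5-frame example are illustrative, not taken from any particular animation package):

```python
def tween(p0, p1, frames):
    """Linear motion tween: intermediate (x, y) positions from p0 to p1.

    The animator supplies only the two keyframes p0 and p1; the
    in-between frames are computed automatically.
    """
    return [
        (p0[0] + (p1[0] - p0[0]) * t / (frames - 1),
         p0[1] + (p1[1] - p0[1]) * t / (frames - 1))
        for t in range(frames)
    ]

path = tween((0, 0), (100, 50), 5)
print(path)
# [(0.0, 0.0), (25.0, 12.5), (50.0, 25.0), (75.0, 37.5), (100.0, 50.0)]
```

Curved paths work the same way, with the straight-line interpolation replaced by a curve (e.g. a Bezier) evaluated at each frame; shape tweening interpolates the object's outline points rather than its position.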
3-d animation:
3-d animation is the foundation upon which many multimedia CD games and adventure
titles are constructed. Top-selling products such as Myst and The 7th Guest use 3-d
animation to bring the user into the setting and make him or her seem a part of the
action. Whether opening doors, climbing stairs, or exploring mysterious rooms, the user
is a participant, not a spectator. Creating 3-d animation is considerably more complex
than 2-d animation and involves three steps:
Modeling is the process of creating 3-d objects and scenes. Modeling involves drawing
various views of an object (top, side, cross-section) by setting points on a grid. These
views are used to define the object's shape.
The animation step involves defining the object's motion and how the lighting and view
change during the animation.
Rendering is the final step in creating 3-d animation and involves giving objects
attributes such as colors, surface textures, and degrees of transparency. During testing,
animators use a quicker, lower-resolution rendering process; then they use a slower,
higher-quality process for the finished animation. Strata Pro 3D, Swivel 3D and 3D
Studio are examples of 3-d animation programs.
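The three steps can be illustrated on a single point: model a 3-d coordinate, animate it by rotating it about the y-axis, and "render" it by projecting it onto a 2-d screen with a simple perspective divide. This Python sketch is a toy illustration of the pipeline, not how packages like 3D Studio work internally; the viewer distance of 4.0 is an arbitrary choice.

```python
import math

def rotate_y(point, angle):
    """Animate: rotate a 3-d point about the y-axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c + z * s, y, -x * s + z * c)

def project(point, viewer_distance=4.0):
    """Render: project a 3-d point to 2-d with a simple perspective divide."""
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)  # nearer points look bigger
    return (x * scale, y * scale)

p = (1.0, 0.0, 0.0)                      # Model: one point of an object
for frame in range(4):                   # four frames of a quarter-turn-per-frame spin
    angle = frame * math.pi / 2
    print(project(rotate_y(p, angle)))
```

A real modeler applies the same rotation and projection to every vertex of a mesh, and the rendering step additionally evaluates color, texture and transparency per pixel, which is why final-quality renders are so much slower than test renders.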
Animation on the web:
Incorporating animation is an excellent way to increase the appeal of a web site and help
ensure return visits. Animations can be as simple as blinking text, marquee-like scrolling
headlines, rotating logos and other 2-d figures performing actions, or as complex as 3-d
animation.
Animated text: using the HTML <blink> tag you can cause text to flash on and off.
Another way to animate text is to use a scrolling, marquee-like action, or to use
JavaScript to animate the text.
Animated GIF: the GIF graphics file format is a standard on the web. GIFs are still
images that can be combined to create an animation. A program called GIF Builder
allows you to create an animation by displaying a series of GIF files, and lets you adjust
the speed of the animation and how many times it is played.
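At the byte level the GIF format begins with a fixed, easily inspected header: a 6-byte signature ("GIF89a" for the version that supports animation) followed by the logical screen width and height as little-endian 16-bit integers. This Python sketch builds and parses those first 10 bytes; the 320x240 dimensions are arbitrary.

```python
import struct

# Construct the first 10 bytes of a GIF89a file: signature + screen descriptor.
header = b"GIF89a" + struct.pack("<HH", 320, 240)

# Parse them back, as an image viewer would.
signature = header[:6].decode("ascii")
width, height = struct.unpack("<HH", header[6:10])
print(signature, width, height)  # prints: GIF89a 320 240
```

In an animated GIF, the frames and their per-frame delay and loop count follow this header as separate image blocks and extension blocks, which is exactly what tools like GIF Builder write for you.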
Director movie: a Director animation can be played using the Shockwave plug-in. This is
a way to create somewhat sophisticated animations and have them delivered via the web.
Macromedia Flash and QuickTime are also useful for animation on the web.
3-D environments: the computer language used to create 3-d environments on the web
that allow the user to move through a space or explore an object is called the Virtual
Reality Modeling Language (VRML). VRML technology is useful in creating games and
educational titles. A browser that supports VRML, or a plug-in, is required to display
VRML applications.
Limits and features of Web Animation/Design issues of web animation
Originally, the WWW was designed as a simple method for delivering text and graphics.
Make your web pages look good on minimal systems; today, layouts should target
800x600 to 1024x768 resolutions.
Transferring a multimedia file of 1 MB can take approximately 5 minutes on a slow
Internet connection. Therefore, keep file sizes small and use file-compression techniques
where useful.
Limit animated GIFs to small images, and use a more capable plug-in for animations
over larger areas.
Give the user control over whether or not to display an animation on a web page. An
icon loads much faster than an animation file, so the icon can be displayed first and the
animation shown only when the user clicks a button or link.
Allow the user to stay active while a graphic image or animation is being displayed. A
graphic or animation can be displayed in stages (streaming) while the user is reading
text, scrolling the page, or selecting a button hyperlinked to another web page.
Provide feedback to the user, using a timer or progress bar, showing how much of the
graphic or animation has been downloaded. This helps the user decide whether to
continue with the download and/or complete another task while waiting for it to finish.
The user may not have a plug-in or helper application to play an animation file; in this
case a series of images can be displayed, or a button with text can be shown informing
the user what kind of plug-in or helper application is required to play the animation.
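The transfer-time guideline above can be checked with simple arithmetic: transfer time = file size in bits / connection speed in bits per second. This sketch assumes a 28.8 kbps dial-up modem as the "slow Internet connection":

```python
def transfer_seconds(size_bytes, speed_bps):
    """Ideal transfer time in seconds for a file over a given connection."""
    return size_bytes * 8 / speed_bps  # 8 bits per byte

# A 1 MB file over a 28.8 kbps dial-up modem:
t = transfer_seconds(1_000_000, 28_800)
print(round(t / 60, 1))  # prints: 4.6  (minutes, i.e. roughly the 5 minutes quoted)
```

Real transfers are slower still because of protocol overhead and line noise, which is why compressing multimedia files before putting them on a page matters so much.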