1. MULTIMEDIA COMMUNICATIONS [18EC743]
Amrutha R, Assistant Professor, Dept. of ECE, MyCEM 1
MODULE-2
INFORMATION REPRESENTATION
Introduction
All types of multimedia information are stored and processed within a computer in a
digital form. For example, when we enter strings of characters from a keyboard, each
character is represented by a unique collection of a fixed number of bits known as a
codeword.
Computer generated graphical images are made up of a mix of lines, circles, squares and
many more, each represented in a digital form.
Also, devices such as microphones and many video cameras produce electrical signals
whose amplitude varies with time.
The conversion of an analog signal into digital form is carried out using an electronic
circuit known as a signal encoder, and similarly the conversion of the stored digitized
samples back into the corresponding time varying analog form is done by a signal decoder.
Digitization principles
1. Analog signals
The general properties of any time varying analog signal are shown in figure 1 below.
Figure 1: Signal properties: (a) time varying analog signal, (b) sinusoidal frequency
components, (c) signal bandwidth examples, (d) effect of limited bandwidth transmission
channel
In (a), the amplitude of the signal varies continuously with time. A mathematical
technique known as Fourier analysis can be used to show that any time varying analog
signal is made up of an infinite number of single frequency sinusoidal signals whose
amplitude and phase vary continuously with time.
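The Fourier idea can be checked numerically: a square wave, for example, is the sum of its odd sine harmonics, and adding more harmonics brings the sum closer to the ideal waveform. A minimal sketch (the square wave and its series are a standard textbook example, not taken from figure 1):

```python
import math

def square_wave_approx(t, n_harmonics, f0=1.0):
    """Partial Fourier series of a square wave of frequency f0:
    (4/pi) * sum of sin(2*pi*(2k-1)*f0*t) / (2k-1) over the odd harmonics."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * k - 1) * f0 * t) / (2 * k - 1)
        for k in range(1, n_harmonics + 1)
    )

# At t = 0.25 s the ideal 1 Hz square wave equals +1; more harmonics get closer:
print(round(square_wave_approx(0.25, 1), 3))    # fundamental alone overshoots
print(round(square_wave_approx(0.25, 50), 3))   # 50 harmonics, much closer to 1
```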
The highest and lowest frequency components of the signal are shown in the figure 1(a)
and (b).
The range of frequencies of the sinusoidal components that make up a signal is called the
signal bandwidth which is shown in the figure 1 (c).
In terms of speech, humans produce sounds which are converted into electrical signals by
a microphone; these are made up of a range of sinusoidal signals with frequencies varying
between 50 Hz and 10 kHz.
In the case of a music signal, the range of frequencies is wider and varies between 15 Hz
and 20 kHz, which is the range to which the ear is sensitive.
When an analog signal is being transmitted through a network, the bandwidth of the
transmission channel (the range of frequencies the channel passes) should be equal to or
greater than the bandwidth of the signal.
If the bandwidth of the channel is less than this, some of the low and high frequency
components will be lost, which degrades the quality of the received signal. This type of
transmission channel is called a bandlimiting channel and its effect is shown in figure 1(d).
2. Encoder design
Describe the function of signal encoder with the associated waveforms. (8M, 10M)
The conversion of a time varying analog signal (such as an audio signal) into a digital
form is carried out using an electronic circuit known as a signal encoder.
Figure 2: Signal encoder design: (a) circuit components, (b) waveform
The principles of an encoder are shown in figure 2 (a). It consists of two main circuits: a
bandlimiting filter and an analog to digital converter (ADC), the latter made up of a
sample and hold circuit and a quantizer.
A typical waveform for a signal encoder is shown in figure 2 (b).
The bandlimiting filter is used to remove selected higher frequency components from the
source signal (A).
The output of the filter (B) is then fed to the sample and hold circuit which is used to
sample the amplitude of the filtered signal at regular time intervals (C) and to hold the
sample amplitude between samples (D).
The output of sample and hold circuit is fed to the quantizer circuit which converts each
sample amplitude into a binary value known as a codeword (E).
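The sample-and-hold and quantizer stages can be sketched in a few lines. This is an illustrative model only: the function names, the 8-bit codeword width and the 1 V amplitude are assumptions, not values from the text.

```python
import math

def quantize(sample, v_max=1.0, n_bits=8):
    """Quantizer: map an amplitude in [-v_max, +v_max] to an n-bit codeword.
    Each interval has size q = 2*v_max / 2**n (see Quantization intervals)."""
    q = 2 * v_max / (1 << n_bits)
    level = int((sample + v_max) / q)        # which interval the sample falls in
    return min(level, (1 << n_bits) - 1)     # clamp the top edge of the range

def encode(signal, sample_every):
    """Sample-and-hold plus quantizer: keep every `sample_every`-th
    amplitude (C/D in figure 2) and convert each to a codeword (E)."""
    held = signal[::sample_every]
    return [quantize(s) for s in held]

# One cycle of a (pre-filtered) sine wave, then sample and quantize it:
analog = [math.sin(2 * math.pi * t / 100) for t in range(100)]
codewords = encode(analog, sample_every=10)
print(codewords)
```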
Sampling rate
The Nyquist sampling theorem states that, “In order to obtain an accurate representation of
a time varying analog signal, its amplitude must be sampled at a minimum rate that is
equal to or greater than twice the highest sinusoidal frequency component that is present
in the signal”.
This is known as the Nyquist rate and is normally expressed in either hertz (Hz) or
samples per second (sps).
Sampling a signal at a rate lower than the Nyquist rate results in additional frequency
components being generated that are not present in the original signal, which causes the
original signal to become distorted.
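This distortion, known as aliasing, is easy to demonstrate: sampling a 3 Hz cosine at only 4 samples per second (below its Nyquist rate of 6 sps) produces exactly the same samples as a 1 Hz cosine. The frequencies here are chosen purely for illustration:

```python
import math

fs = 4  # samples per second: below the Nyquist rate for a 3 Hz tone (needs >= 6)

def samples(f_hz, n=8):
    """Sample cos(2*pi*f*t) at fs samples per second."""
    return [round(math.cos(2 * math.pi * f_hz * k / fs), 6) for k in range(n)]

# The undersampled 3 Hz tone is indistinguishable from a 1 Hz tone:
print(samples(3) == samples(1))
```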
Quantization intervals
If Vmax is the maximum positive and negative signal amplitude and n is the number of
binary bits used, then the magnitude of each quantization interval q is given by
q = 2Vmax / 2^n
The difference between the actual signal amplitude and its quantized amplitude is called
the quantization error. The error values vary randomly from sample to sample, and hence
quantization error is also known as quantization noise.
The ratio of the peak amplitude of a signal to its minimum amplitude is known as the
dynamic range of the signal, D:
D = 20 log10(Vmax/Vmin) dB
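Both formulas can be exercised directly. The 1 V amplitude, 8-bit codewords and 1000:1 amplitude ratio below are illustrative values, not ones fixed by the text:

```python
import math

def quantization_interval(v_max, n_bits):
    """q = 2*Vmax / 2**n: the size of each quantization interval."""
    return 2 * v_max / (2 ** n_bits)

def dynamic_range_db(v_max, v_min):
    """D = 20*log10(Vmax/Vmin), expressed in dB."""
    return 20 * math.log10(v_max / v_min)

print(quantization_interval(1.0, 8))   # 1 V amplitude, 8-bit codewords
print(dynamic_range_db(1000, 1))       # a 1000:1 amplitude ratio is 60 dB
# Each extra codeword bit adds about 20*log10(2) ~ 6 dB of dynamic range:
print(round(20 * math.log10(2), 2))
```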
3. Decoder design
Analog signals are processed and transmitted in a digital form; prior to their output they
must be converted back into their analog form. The electronic circuit that performs this
conversion is known as a signal decoder.
The circuit components of the decoder and the signal waveforms are shown in figure 3.
Figure 3: Signal decoder design: (a) circuit components, (b) signal waveform
Each digitized codeword is converted into an equivalent analog sample using a circuit
called a digital to analog converter or DAC.
This produces a time varying signal consisting not only of the sinusoidal frequency
components that make up the original analog signal but also of additional higher
frequency components.
To recover the original signal, the output of the DAC is passed through a low pass filter
which passes only those low frequency components that made up the original filtered
signal. The low pass filter is therefore also known as a recovery or reconstruction filter.
Since most multimedia applications involve both audio and video, the communication
channel is two-way simultaneous, and the terminal equipment must therefore support
input and output at the same time. The audio/video signal encoder and decoder in each
terminal are combined into a single unit called an audio/video encoder-decoder, or
audio/video codec.
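A decoder can be sketched as the inverse of the encoder: a DAC maps each codeword back to an amplitude, and a crude moving average stands in here for the reconstruction low-pass filter (a real recovery filter would be a properly designed analog low-pass; all names and values below are illustrative):

```python
def dac(codewords, v_max=1.0, n_bits=8):
    """Digital-to-analog converter: map each codeword back to the midpoint
    amplitude of its quantization interval."""
    q = 2 * v_max / (1 << n_bits)
    return [(c + 0.5) * q - v_max for c in codewords]

def lowpass(samples, width=3):
    """Moving average as a stand-in for the reconstruction (low-pass)
    filter: it attenuates the higher frequency staircase components."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - width + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

recovered = lowpass(dac([128, 200, 255, 200, 128, 55, 0, 55]))
print([round(v, 3) for v in recovered])
```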
TEXT
Illustrate the different types of text data representation. (6M,10M)
There are three types of text data representations:
1. Unformatted text
This is also known as plaintext and enables pages to be created which consist of
strings of fixed-sized characters from a limited character set.
All the normal alphabetic, numeric and punctuation characters are referred to as printable
characters. The total ASCII character set also includes a number of control
characters. These include:
Format control characters: BS (backspace), SP (space), DEL (delete), ESC
(escape) etc.
Information separators: FS (file separator) and RS (record separator).
Transmission control characters: SOH (start of heading), STX (start of text), ETX
(end of text), ACK (acknowledge), NAK (negative acknowledge) etc.
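The codeword idea can be seen directly in Python, where ord() returns a character's ASCII code; the printable characters and the control characters listed above all map to 7-bit codewords:

```python
# Every printable character maps to a 7-bit ASCII codeword:
for ch in "Ab9!":
    print(ch, ord(ch), format(ord(ch), '07b'))

# Some of the control characters named above (codes below 32, plus DEL = 127):
controls = {'SOH': 1, 'STX': 2, 'ETX': 3, 'ACK': 6, 'BS': 8,
            'NAK': 21, 'ESC': 27, 'DEL': 127}
print(all(code < 32 or code == 127 for code in controls.values()))
```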
2. Formatted text
It enables documents to be created that consist of characters of different styles and of
variable size and shape, each of which can be plain, bold or italicized.
This is also known as richtext and enables pages and complete documents to be created
which consist of strings of characters of different styles, sizes and shapes, with tables,
graphics and images inserted at appropriate points.
A variety of document formatting options are supported to enable an author to structure a
document into chapters, sections and paragraphs, each with different headings and with
tables, graphics and pictures inserted at appropriate points.
3. Hypertext
This enables an integrated set of documents, referred to as pages, to be created which have
defined linkage points, referred to as hyperlinks, between them. Documents linked
together in this way form a hypertext document.
Each document has a unique address known as Uniform Resource Locator (URL). The
first page of the document is known as home page.
A standard format used for writing such documents is the Hypertext Markup
Language (HTML). The linked set of pages stored on a particular server is
accessed and viewed using a client program known as a browser.
IMAGES
Images include computer generated images referred to as computer graphics or simply
graphics and digitized images of both documents and pictures.
These images are displayed in the form of a two dimensional matrix of individual picture
elements known as pixels.
1. Graphics
Software packages provide easy to use tools to create graphics that are composed of all
kinds of visual objects including lines, arcs, squares, rectangles, circles, ovals, diamonds,
stars and so on, as well as any form of hand drawn (freeform) objects.
These are produced by drawing the desired shape on the screen by manipulating a cursor
symbol with a mouse or other pointing device.
Textual information can also be included in a graphic, together with tables, graphs,
and digitized pictures and photographs.
A computer’s display screen can be considered as being made up of a two dimensional
matrix of individual picture elements each of which can have a range of colors associated
with it.
2. Digitized documents
An example of a digitized document is that produced by the scanner associated with a
facsimile or fax machine.
Figure 4: Schematic of Fax machine
The scanner associated with a fax machine operates by scanning each complete page from
left to right to produce a sequence of scan lines that start at the top of the page and end at
the bottom.
The vertical resolution of the scanning procedure is either 3.85 or 7.7 lines per millimeter
which is equivalent to approximately 100 or 200 lines per inch.
As each line is scanned, the output of the scanner is digitized to a resolution of
approximately 8 picture elements, known as pels, per millimeter.
3. Digitized pictures
In the case of digitized pictures, more than a single bit is used to digitize each picture
element. For example, a good quality black-and-white picture can be obtained by using
8 bits per picture element.
Color principles
What do you understand by the terms
i) Color gamut
ii) Additive color mixing
iii) Subtractive color mixing
Give application of both color mixing methods. (8M)
The whole spectrum of colors that can be produced by mixing different proportions of the
three primary colors red (R), green (G) and blue (B) is known as the color gamut.
This principle is as shown in below figure 5
Figure 5: Color principles: (a) additive color mixing (b) subtractive color mixing
The mixing technique used in figure (a) is known as additive color mixing. Since black is
produced when all three primary colors are zero, it is particularly useful for producing a
color image on a black surface, as in display applications.
It is also possible to perform the complementary subtractive color mixing operation to
produce a similar range of colors which is shown in the figure (b).
In subtractive color mixing, white is produced when the three chosen primary colors
cyan (C), magenta (M) and yellow (Y) are all zero.
Hence this choice of colors is useful for producing a color image on a white surface as in
the case of printed applications.
The same principle is used in the picture tubes associated with color television sets,
with the three primary colors R, G and B. Most computer monitors also use the same
picture tubes as television sets.
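The complementary relationship between the two mixing methods can be sketched in a few lines of Python; normalizing the color components to the range [0, 1] is an illustrative assumption, not part of any standard:

```python
# Sketch of the complement between additive (RGB) and subtractive (CMY)
# mixing, assuming color components normalized to [0, 1] for illustration.

def rgb_to_cmy(r, g, b):
    # Each subtractive primary is the complement of one additive primary.
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    return (1 - c, 1 - m, 1 - y)

# Additive mixing: R = G = B = 0 gives black (an unlit display surface),
# corresponding to C = M = Y = 1 (full coverage of a white page).
assert rgb_to_cmy(0, 0, 0) == (1, 1, 1)
# Subtractive mixing: C = M = Y = 0 leaves the white surface untouched,
# corresponding to R = G = B = 1 (white light).
assert cmy_to_rgb(0, 0, 0) == (1, 1, 1)
```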
Raster scan principles
Explain Raster scan operation associated waveform. (10M)
The picture tubes used in most television sets operate using raster scan. This involves a
finely focused electron beam being scanned over the complete screen in a pattern known as
a raster.
Figure 6: Television/computer monitor principles: (a) schematic, (b) Raster scan principles,
(c) pixel format on each scan line
Each complete scan consists of a number of discrete horizontal lines, the first of which
starts at the top left corner of the screen and the last of which ends at the bottom right
corner.
At this point the beam is deflected back again to the top left corner and scanning operation
repeats in the same way. This type of scanning is called progressive scanning and is
shown in the figure (b).
Each complete set of horizontal scan lines is called a frame, and each frame is made up of
N individual scan lines.
The inside of the display screen of the picture tube is coated with a light sensitive
phosphor that emits light when energized by the electron beam.
The amount of light emitted (brightness) is determined by the power in the electron beam
at that instant.
During each horizontal (line) and vertical (frame) retrace period the electron beam is
turned off and to create an image on the screen, the level of power in the beam is changed
as each line is scanned.
In the case of black-and-white picture tubes, just a single electron beam is used with a
white-sensitive phosphor. Color tubes use three separate, closely located beams and a
2-D matrix of pixels.
Each pixel consists of a set of three color-sensitive phosphors; the set associated with
each pixel is called a phosphor triad, as shown in figure (c).
Television picture tubes were designed to display moving images. The persistence of the
light/color produced by the phosphor is designed to decay very quickly and hence it is
necessary to refresh the screen continuously.
In practice, each pixel has the shape of a spot that merges with those of its neighbors, so
that when viewed from a sufficient distance a continuous color image is seen.
The frame refresh rate must be high enough to avoid the television picture visibly
decaying. Flicker is caused by a low refresh rate, which allows the previous image to fade
from the eye's retina before the next image is displayed. To avoid flicker, a refresh rate
of at least 50 times per second is required.
Video RAM is a separate block of memory used to store the pixel image. Because this
area of computer memory makes up the image, the pixel image must be accessed
continuously as each line is scanned.
Graphics programs write the pixel images into the video RAM whenever selected pixels or
the total image change.
The display controller is a program that takes a sequence of display commands and
converts them into displayed objects by writing the appropriate pixel values into the
video RAM (also known as the frame/display/refresh buffer).
The video controller is a hardware subsystem that reads the pixel values stored in the
video RAM in time with the scanning process and converts each set of pixel values into the
equivalent set of R, G and B analog signals for output to the display.
Figure 7: Raster scan display architecture
Pixel depth
The number of bits per pixel is known as pixel depth which determines the range of
different colors that can be produced.
The amount of memory required to store an image can be reduced by maintaining a color
look-up table (CLUT), which contains the corresponding color values; each pixel then
stores only an index into the table.
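The memory saving from a CLUT can be sketched as follows; the 640 x 480 screen size, 24-bit color entries and 8-bit (256-entry) indices are illustrative assumptions:

```python
# Sketch of the memory saved by a color look-up table (CLUT), assuming a
# hypothetical 640 x 480 display with 24-bit color values.

width, height = 640, 480
pixels = width * height

# Direct storage: 24 bits (3 bytes) of color per pixel.
direct_bytes = pixels * 3

# CLUT storage: an 8-bit index per pixel (256 selectable colors) plus the
# table itself, holding 256 entries of 24 bits (3 bytes) each.
clut_bytes = pixels * 1 + 256 * 3

print(direct_bytes, clut_bytes)  # roughly a threefold saving
```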
Aspect Ratio
Aspect ratio is the ratio of the screen width to the screen height.
Both the number of pixels per scanned line and the number of lines per frame vary, and
can be determined from the aspect ratio of the display screen.
The aspect ratio of older television tubes and PC monitors is 4/3, while widescreen
television tubes use 16/9.
In the United States, the standard for color television has been defined by the National
Television Standards Committee (NTSC), while in Europe three color standards are used,
namely PAL (UK), CCIR (Germany) and SECAM (France).
NTSC standard uses 525 scan lines per frame while three European standard all use 625
lines per frame.
To display a square of (N x N) pixels without distortion on a screen with a 4/3 aspect
ratio, it is necessary to have 640 pixels (480 x 4/3) per line with an NTSC monitor and
768 pixels (576 x 4/3) per line with a European monitor.
Calculate the time to transmit the following digitized images at both 64kbps and 1.5Mbps
i) A 640 x 480 x 8 VGA-compatible image
ii) A 1024 x 768 x 24 SVGA-compatible image (6M, 10M)
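The question above can be worked through in a few lines of Python; here "64 kbps" is taken to mean 64 x 10^3 bit/s and "1.5 Mbps" to mean 1.5 x 10^6 bit/s:

```python
# Worked sketch: transmission time = total bits / channel bit rate.

def transmit_time(width, height, bits_per_pixel, bit_rate):
    return width * height * bits_per_pixel / bit_rate

# (i) 640 x 480 x 8 VGA-compatible image: 2,457,600 bits in total.
print(transmit_time(640, 480, 8, 64e3))   # 38.4 s
print(transmit_time(640, 480, 8, 1.5e6))  # 1.6384 s

# (ii) 1024 x 768 x 24 SVGA-compatible image: 18,874,368 bits in total.
print(transmit_time(1024, 768, 24, 64e3))   # 294.912 s
print(transmit_time(1024, 768, 24, 1.5e6))  # ~12.58 s
```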
Digital Cameras and Scanners
Describe with a neat schematic color image capture using Digital Cameras and Scanner.
(8M)
A typical arrangement that is used to capture and store a digital image produced by a
scanner or a digital camera (either a still image camera or a video camera) is shown in the
below figure 8.
Figure 8: Color image capture: (a) schematic, (b) RGB signal generation
In the figure it is assumed that, the captured image is transferred to the computer directly
as it is produced.
In the case of digital camera, a set of digitized images can be stored within the camera
itself and then downloaded into the computer later.
The image sensor is the device that captures the image within the camera. In digital
cameras this is a silicon chip consisting of a 2-D grid of light-sensitive cells called
photosites.
When the camera shutter is activated, each photosite stores the level of intensity of the
light that fell on it.
A widely used image sensor is the charge-coupled device (CCD). This consists of an array
of photosites on its surface and operates by converting the level of light intensity that
falls on each photosite into an equivalent electrical charge.
The charge stored at each photosite position is then read out and converted into a digital
value using an ADC.
The three methods to obtain color associated with each photosites and pixel position are as
follows:
(i) In this method, the surface of each photosite is coated with either a red, green or blue
filter so that its charge is determined by the level of red, green or blue light that falls
on it. The coatings are arranged in a 3x3 grid structure as shown in the figure. The color
associated with each photosite (R, G or B) is determined by its own output together with
those of its 8 neighbors: the level of the other two colors in each pixel is estimated by an
interpolation procedure involving all nine values. This method is used in most consumer
quality cameras.
(ii) This method involves the use of three separate exposures of a single image sensor, the
first through a red filter, the second through a green filter and the third through a blue
filter. The color associated with each pixel position is determined by the charge obtained
with each of the three filters, R, G and B. Since three separate exposures are required,
this approach cannot be used with video cameras and is restricted to locations such as
photographic studios where the camera can be attached to a tripod.
(iii) This method uses three separate image sensors, one with its photosites coated with a
red filter, the second coated with a green filter and the third coated with a blue filter. A
single exposure is used, with the incoming light split into three beams, each of which
exposes a separate image sensor. This method is used in professional quality, high
resolution still and moving image cameras.
Once each image has been captured and stored on the image sensor, the charge stored at
each photosite location is read out and digitized.
With a CCD, the set of charges on the matrix of photosites is read out a single row at a
time into a readout register. Once in the readout register, the charge at each photosite
position is shifted out, amplified and digitized using an ADC.
AUDIO
There are two types of audio signals: speech signal as used in a variety of interpersonal
applications including telephony and video telephony and music quality audio as used in
applications such as CD on demand and broadcast television.
Audio can be produced by means of a microphone or electronically using some form of
synthesizer.
In case of synthesizer, the audio is created in a digital form and stored within the computer
memory.
A microphone generates a time-varying analog signal. In order to store such signals in the
memory of a computer, and to transmit them over digital networks, they must be converted
into a digital form using an audio signal encoder.
Loudspeakers operate using an analog signal, so the output of all digitized audio signals
must be converted back into analog form using an audio signal decoder.
The bandwidth of a typical speech signal is from 50Hz to 10kHz and that of music signal
from 15Hz to 20kHz.
Assuming the bandwidth of a speech signal is from 50 Hz through to 10 kHz and that of a
music signal is from 15 Hz through to 20kHz, derive the bit rate that is generated by the
digitization procedure in each case assuming the Nyquist sampling rate is used with 12 bits
per sample for the music signal. Derive the memory required to store a 10 minute passage
of stereophonic music. (10M)
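The question above can be sketched numerically as follows. The Nyquist rate is twice the highest frequency component; the question states 12 bits per sample for music, and as an assumption the same figure is used here for speech:

```python
# Worked sketch of the digitization question, under the assumptions above.

def bit_rate(max_freq_hz, bits_per_sample, channels=1):
    nyquist_rate = 2 * max_freq_hz              # samples per second
    return nyquist_rate * bits_per_sample * channels

speech = bit_rate(10e3, 12)                     # 240 kbps (assumed 12 bits)
music_stereo = bit_rate(20e3, 12, channels=2)   # 960 kbps for two channels

# Memory for a 10 minute stereophonic passage, in bytes.
memory = music_stereo * 10 * 60 / 8             # 72 x 10^6 bytes
print(speech, music_stereo, memory)
```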
1. PCM speech
Explain the principle of operation of PCM speech code with block diagram. (6M, 10M)
With the aid of block diagram explain PCM signal encoding and decoding principle. (8M)
Interpersonal applications involving speech use the PSTN for communication purposes.
Pulse Code Modulation (PCM) is the digitization procedure used; the international
standard relating to it is ITU-T Recommendation G.711, which is shown in figure 9.
Figure 9: PCM principles: (a) signal encoding and decoding schematic, (b) compressor
characteristic
Figure (a) shows the block diagram of a PCM speech circuit. It consists of a compressor
(encoder) and an expander (decoder).
The quantization noise level is the same for both low amplitude (quiet) signals and
high amplitude (loud) signals.
The ear is more sensitive to noise on quiet signals than loud signals. Hence to reduce the
effect of quantization noise in a PCM system, the quantization intervals are made non
linear.
This can be achieved by compressor circuit and the reverse operation is performed by
expander circuit. The overall operation is known as companding.
The input-output relationships of both circuits are shown in figures (b) and (c): figure (b)
shows the compression characteristic and figure (c) the expansion characteristic.
Prior to the input signal being sampled and converted into digital form by the ADC, it is
passed through the compressor circuit, which compresses the amplitude of the input signal.
The level of compression, and hence the size of the quantization intervals, increases as
the amplitude of the input signal increases.
The resulting compressed signal is then passed to the ADC which performs linear
quantization on the compressed signal.
Similarly at the destination, each received codeword is fed into a linear DAC. The analog
output from the DAC is then passed to the expander circuit which performs the reverse
operation of the compressor circuit.
2. CD Quality audio
CD players and CD-ROMs are digital storage devices for stereophonic music and general
multimedia information streams. The standard associated with these devices is known as
CD-digital audio (CD-DA).
The sampling rate used is 44.1 ksps, which means the signal is sampled at approximately
23 microsecond intervals.
Since the bandwidth of a recording channel on a CD is large, a high number of bits per
sample can be used.
Bit rate per channel = sampling rate x bits per sample
= 44.1 x 10^3 x 16 = 705.6 kbps
Total bit rate = 2 x 705.6 = 1.411 Mbps
Assuming the CD-DA standard is being used, derive :
(i) the storage capacity of a CD-ROM to store a 60 minute multimedia title.
(ii) the time to transmit a 30 second portion of the title using a transmission channel of bit
rate: (1) 64kbps (2)1.5Mbps (6M)
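The question above follows directly from the CD-DA figures derived in the text (44.1 ksps, 16 bits per sample, 2 stereo channels):

```python
# Worked sketch of the CD-DA storage and transmission question.

total_bit_rate = 44.1e3 * 16 * 2            # 1.4112 Mbps

# (i) Storage for a 60 minute title, in bytes.
storage = total_bit_rate * 60 * 60 / 8      # 635.04 x 10^6 bytes
print(storage)

# (ii) Time to transmit a 30 second portion of the title.
bits_30s = total_bit_rate * 30              # 42.336 x 10^6 bits
print(bits_30s / 64e3)                      # 661.5 s at 64 kbps
print(bits_30s / 1.5e6)                     # ~28.2 s at 1.5 Mbps
```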
3. Synthesized audio
Synthesized audio can be defined as sound generated by electronic signals of different
frequencies.
Sound can be synthesized by the use of sound synthesizers. The synthesizers use different
algorithms to synthesize different sound waveforms.
Synthesized audio is often used in multimedia applications, since the memory required is
between two and three orders of magnitude less than that required to store the equivalent
digitized waveform.
In addition, it is much easier to edit synthesized audio and mix several passages together.
Figure 10: Schematic of Audio/sound synthesizer
The three main components are the computer, the keyboard and a set of sound generators.
The computer takes input commands from the keyboard and outputs these to the sound
generators, which in turn produce the corresponding sound waveforms via DACs to drive
the speakers.
Pressing a key on the keyboard of a synthesizer has a similar effect to pressing a key on
the keyboard of a computer.
For each key that is pressed, a different codeword, known as a message, is generated and
read by the computer program.
The control panel contains a range of different switches and sliders which allow the user
to indicate to the computer program additional information such as the volume of the
generated output and the sound effects to be associated with each key.
The secondary storage interface allows the sequence of messages to be saved on
secondary storage such as a floppy disk.
The sequencer program associated with the synthesizer ensures that the resulting
integrated sequence of messages is synchronized, and combines the outputs of the sound
generators to create a merged passage.
VIDEO
Video features in a range of multimedia applications such as:
1. Entertainment: broadcast television and VCR/DVD recordings
2. Interpersonal: video telephony and videoconferencing
3. Interactive: window containing short video clips
1. Broadcast television
A color picture/image is produced from varying mixtures of three primary colors red,
green and blue.
The screen of the picture tube is coated with a set of three different phosphors, each of
which is activated by a separate electron beam.
The three electron beams are scanned from left to right with a resolution of either 525
lines (NTSC) or 625 lines (PAL).
The total screen contents are then refreshed at a rate of either 60 or 50 frames per second.
The computer monitors used with most personal computers use the same picture tubes as
those in broadcast television receivers and hence operate in similar way.
The three digitized color signals that make up a stored picture/image are read from the
computer memory in time synchronous with the scanning operation of the display tube.
After each complete scan of the display, the procedure repeats producing a flicker free
color image on the screen.
Scanning sequence
It is necessary to use a minimum refresh rate of 50 times per second to avoid flicker. To
produce smooth motion, a refresh rate of 25 times per second is sufficient.
In order to minimize the transmission bandwidth, the image associated with each frame is
transmitted in two halves, each of which is known as a field.
The first field consists of only the odd scan lines and the second of the even scan lines.
The two fields are then integrated together in the television receiver using a technique
known as interlaced scanning.
Figure 11: Interlace scanning principles
Color signals
The signals associated with a color TV broadcast had to be such that they could be used
by an existing monochrome TV set to produce the same picture in high quality
monochrome.
Conversely, a color TV had to be able to produce black-and-white pictures from
monochrome broadcasts.
The three main properties of a color source that the eye makes use of are:
1. Brightness: the amount of energy that stimulates the eye; it varies on a gray scale from
black (lowest) through to white (highest).
2. Hue: the actual color of the source; each color has a different frequency/wavelength
and the eye determines the color from this.
3. Saturation: the strength or vividness of the color. A saturated color such as red has no
white light in it.
Luminance refers to the brightness of a source, while hue and saturation, because they
are concerned with its color, are referred to as its chrominance characteristics.
A range of colors can be produced by mixing three primary colors R, G and B. In similar
way, a range of colors can be produced on a television display screen by varying the
magnitude of the three electrical signals that energize red, green and blue phosphors.
If the magnitudes of the three signals are in the proportion
0.299R + 0.587G + 0.114B
then the color white is produced on the display screen. Since the luminance of a source is
a function only of the amount of white light it contains, for any color source its luminance
can be determined by summing together the three primary components that make up the
color in this proportion:
Ys = 0.299 Rs + 0.587 Gs + 0.114 Bs
where Ys is the amplitude of the luminance signal and Rs, Gs and Bs are the magnitudes of
the three color component signals that make up the source.
The signals blue chrominance (Cb) and red chrominance (Cr) are used to represent the
coloration of hue and saturation of the source.
Cb = Bs – Ys and Cr = Rs - Ys
Since the Y signal has been subtracted in both cases, neither signal contains any brightness
information. Also, since Y is a function of all three colors, G can be computed from these
two signals. In this way, the combination of the three signals Y, Cb and Cr contains all the
information needed to describe a color signal.
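The relationships above can be sketched in Python; normalizing R, G and B to the range [0, 1] is an illustrative convention, not part of the broadcast standards:

```python
# Sketch of the luminance/chrominance relationships defined above.

def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance Y
    cb = b - y                              # blue chrominance Cb
    cr = r - y                              # red chrominance Cr
    return y, cb, cr

def recover_g(y, cb, cr):
    # G can be computed from Y, Cb and Cr because Y is a function of all
    # three primaries: first recover R and B, then solve the Y equation.
    r, b = cr + y, cb + y
    return (y - 0.299 * r - 0.114 * b) / 0.587

# White (R = G = B = 1) gives Y = 1 with zero chrominance, confirming that
# Cb and Cr carry no brightness information.
y, cb, cr = rgb_to_ycbcr(1.0, 1.0, 1.0)
assert abs(y - 1.0) < 1e-9 and abs(cb) < 1e-9 and abs(cr) < 1e-9
```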
2. Digital Video
Digitization of video signals has been carried out in television studios for many years in
order to perform conversion from one video format into another.
In order to standardize this process and to make the exchange of television programmes
internationally easier, the international body for television standards, the International
Telecommunications Union- Radiocommunication Branch (ITU-R) formerly known
as the Consultative Committee for International Radiocommunications (CCIR)
defined a standard for the digitization of video pictures known as Recommendation
CCIR-601.
4:2:2 format
Explain 4:2:2 and 4:2:0 digitization formats. (8M)
This is the original digitization format used in Recommendation CCIR-601 for use in
television studios.
The three component (analog) video signals from a source in the studio can have
bandwidths of up to 6 MHz for the luminance signal and less than half of this for the
two chrominance signals.
To digitize these signals, it is necessary to use bandlimiting filters of 6 MHz for the
luminance signal and 3 MHz for the two chrominance signals, with minimum sampling
rates of 12 MHz (12 Msps) and 6 MHz (6 Msps) respectively.
In the standard, a line sampling rate of 13.5 MHz for luminance and 6.75 MHz for the
two chrominance signals was selected, both of which are independent of the particular
scanning standard (NTSC, PAL and so on).
Figure 12: Sample positions with 4:2:2 digitization format
Derive the bit rate and the memory requirements to store each frame that result
from the digitization of both a 525-line and a 625 line system assuming a 4:2:2
format. Also find the total memory required to store a 1.5 hour movie/video. (8M)
Solution:
525-line system:
The number of samples per line is 720 and the number of visible lines is 480. Hence the
resolutions of the luminance (Y) and two chrominance (Cb and Cr) signals are:
Y = 720 x 480
Cb = Cr = 360 x 480
Bit rate: The line sampling rate is fixed at 13.5 MHz for Y and 6.75 MHz for both Cb and
Cr, all with 8 bits per sample. Hence:
Bit rate = 13.5 x 10^6 x 8 + 2(6.75 x 10^6 x 8) = 216 Mbps
Memory required: Memory required per line = 720 x 8 + 2(360 x 8)
= 11520 bits or 1440 bytes
Hence memory per frame, each of 480 lines = 480 x 11520
= 5.5296 Mbits or 691.2 kbytes
and memory to store 1.5 hours, assuming 60 frames per second = 691.2 x 60 x 1.5 x 3600 kbytes
= 223.9488 Gbytes
625-line system:
The number of samples per line is 720 and the number of visible lines is 576. Hence the
resolutions of the luminance (Y) and two chrominance (Cb and Cr) signals are:
Y = 720 x 576
Cb = Cr = 360 x 576
Bit rate: The line sampling rate is fixed at 13.5 MHz for Y and 6.75 MHz for both Cb and
Cr, all with 8 bits per sample. Hence:
Bit rate = 13.5 x 10^6 x 8 + 2(6.75 x 10^6 x 8) = 216 Mbps
Memory required: Memory required per line = 720 x 8 + 2(360 x 8)
= 11520 bits or 1440 bytes
Hence memory per frame, each of 576 lines = 576 x 11520
= 6.63552 Mbits or 829.44 kbytes
and memory to store 1.5 hours, assuming 50 frames per second = 829.44 x 50 x 1.5 x 3600 kbytes
= 223.9488 Gbytes
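The 4:2:2 figures derived above can be checked in a few lines of Python (8 bits per sample throughout, with the frame rates used in the solution):

```python
# Sketch verifying the 4:2:2 bit rate and storage figures.

# Line sampling rates: 13.5 MHz for Y and 6.75 MHz for each of Cb and Cr.
bit_rate = (13.5e6 + 2 * 6.75e6) * 8        # 216 Mbps, both systems
print(bit_rate / 1e6)                        # 216.0

# Memory per line: 720 Y samples plus 2 x 360 chrominance samples (1 byte each).
line_bytes = 720 + 2 * 360                   # 1440 bytes

def storage(lines_per_frame, frames_per_sec, seconds):
    # Total bytes needed to store the digitized video.
    return line_bytes * lines_per_frame * frames_per_sec * seconds

print(storage(480, 60, 1.5 * 3600) / 1e9)   # ~223.9 Gbytes (525-line, 60 fps)
print(storage(576, 50, 1.5 * 3600) / 1e9)   # ~223.9 Gbytes (625-line, 50 fps)
```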
4:2:0 format
This format is used in digital video broadcast applications. It has been found to give good
picture quality.
Since it is intended for broadcast applications, interlaced scanning is used, and the
absence of chrominance samples in alternate lines is the origin of the term 4:2:0.
The resolution of luminance and chrominance signals for both 525-line system and 625-
line system are:
525-line system: Y = 720 x 480
Cb = Cr = 360 x 240
625-line system: Y = 720 x 576
Cb = Cr = 360 x 288
The bit rate in both systems with this format is:
13.5 x 10^6 x 8 + 2(3.375 x 10^6 x 8) = 162 Mbps
High Definition Television (HDTV) formats
The resolution of those which relate to the older 4/3 aspect ratio tubes can be up to 1440 x
1152 pixels, and the resolution of those which relate to the newer 16/9 widescreen tubes
can be up to 1920 x 1152 pixels.
In both cases, the number of visible lines per frames is 1080 which produces a square
pixel lattice structure with both tube types.
Both use either the 4:2:2 digitization format for studio applications or the 4:2:0 format
for broadcast applications.
The corresponding frame refresh rate is either 50/60 Hz with the 4:2:2 format or 25/30 Hz
with the 4:2:0 format.
Source Intermediate Format (SIF)
This format is used to give a picture quality comparable with that obtained with video
cassette recorders (VCRs). The digitization format is 4:1:1.
It uses half the spatial resolution, in both the horizontal and vertical directions, of that
used in the 4:2:0 format, a technique known as subsampling. It also uses half the refresh
rate (temporal resolution).
The frame refresh rate is 30 Hz for a 525 line system and 25 Hz for a 625 line system.
Since SIF is intended for storage applications, progressive (non-interlaced) scanning is
used.
The resolution is given by
525 line system: Y = 360 x 240
Cb = Cr = 180 x 120
625 line system: Y = 360 x 288
Cb = Cr = 180 x 144
The worst-case bit rate in both systems with this format is:
6.75 x 10^6 x 8 + 2(1.6875 x 10^6 x 8) = 81 Mbps
Common Intermediate Format (CIF)
The Common Intermediate Format (CIF) has been defined for use in videoconferencing
applications.
This is derived from the SIF and uses a combination of the spatial resolution used for the
SIF in the 625 line system and the temporal resolution used in the 525 line system.
The resolution is : Y = 360 x 288
Cb = Cr = 180 x 144
This has a temporal resolution of 30 Hz using progressive scanning.
The positions of the sampling instants per frame are the same as for SIF, and hence the
digitization format is 4:1:1. Similarly, the worst-case bit rate is 81 Mbps.
To convert to the CIF, a 525 line system needs a line rate conversion and 625 line system
a frame rate conversion.
Quarter CIF (QCIF)
The QCIF has been defined for use in video telephony applications.
It is derived from the CIF and uses half spatial resolution of CIF in both horizontal and
vertical directions and the temporal resolution is divided by either 2 or 4.
The spatial resolution of: Y = 180 x 144
Cb = Cr = 90 x 72
with a temporal resolution of either 15 or 7.5 Hz. The worst-case bit rate with this
format is:
3.375 x 10^6 x 8 + 2(0.84375 x 10^6 x 8) = 40.5 Mbps
It has the same 4:1:1 digitization format as CIF.
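The worst-case bit rates of the formats described above all follow the same pattern, (luminance sampling rate + 2 x chrominance sampling rate) x 8 bits per sample, which can be checked in a short sketch:

```python
# Sketch comparing the worst-case bit rates of the digitization formats,
# using the sampling rates (in MHz) quoted in the text.

def bit_rate_mbps(y_mhz, c_mhz):
    # 8 bits per sample for Y and for each of the two chrominance signals.
    return (y_mhz + 2 * c_mhz) * 8

print(bit_rate_mbps(13.5, 6.75))     # 216.0 -> 4:2:2
print(bit_rate_mbps(13.5, 3.375))    # 162.0 -> 4:2:0
print(bit_rate_mbps(6.75, 1.6875))   # 81.0  -> SIF / CIF
print(bit_rate_mbps(3.375, 0.84375)) # 40.5  -> QCIF
```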