AI-BRT

This slide deck contains valuable information about BRT.

  • © Copyright 2001 Global Wireless Education Consortium All rights reserved. This module, comprising presentation slides with notes, exercises, projects and Instructor Guide, may not be duplicated in any way without the express written permission of the Global Wireless Education Consortium. The information contained herein is for the personal use of the reader and may not be incorporated in any commercial training materials or for-profit education programs, books, databases, or any kind of software without the written permission of the Global Wireless Education Consortium. Making copies of this module, or any portion, for any purpose other than your own, is a violation of United States copyright laws.     Trademarked names appear throughout this module. All trademarked names have been used with the permission of their owners.
  • Partial support for this curriculum material was provided by the National Science Foundation's Course, Curriculum, and Laboratory Improvement Program under grant DUE-9972380 and Advanced Technological Education Program under grant DUE-9950039. GWEC EDUCATION PARTNERS: This material is subject to the legal License Agreement signed by your institution. Please refer to this License Agreement for restrictions of use.
  • After completing this module and all of its activities, participants will be able to:
    - Describe the functions performed in baseband signal processing for analog and digital transmission.
    - Describe the conversion of analog to digital signals.
    - Characterize the differences among speech coders.
    - Summarize the methods of channel coding and error correction.
    - Summarize the basic techniques used in modulation and demodulation of baseband signals.
    - Describe the techniques used for channel multiplexing and multiple access for different wireless technologies.
  • This slide provides a definition and notations concerning baseband signaling, as presented in Federal Standard 1037C, “Telecommunications: Glossary of Telecommunication Terms”, National Communications System Technology and Standards Division, http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm. The term “baseband” is generally used to refer to the signal prior to modulation (at the transmitting end) or after demodulation (at the receiving end). It is also sometimes used to refer to the signal after modulation but prior to multiple access multiplexing (at the transmitting end) or after multiple access demultiplexing but prior to demodulation (at the receiving end). In this module, “baseband” will be used to refer to a voice or data signal prior to modulation at the transmitting end and after demodulation at the receiving end. The term “signal” is used here to refer to any voice conversation or data transmission taking place across a wireless network. The following slides discuss the processes by which a baseband signal is transformed from a raw audio or data signal at the transmitting end and recovered at the receiving end of a radio connection. In order to provide a full view of the processes involved, operations performed on the signal before and after modulation are discussed.
  • Baseband Signaling Many of the functions described here are discussed in the following sections in terms of the transmitting end, which may be the mobile station or the base station. Each of the functions at the transmitting end has a corresponding function at the receiving end:
    - Analog to Digital Conversion requires Digital to Analog Conversion.
    - Channel Coding requires Channel Decoding (including error detection and correction).
    - Multiplexing requires Demultiplexing.
    - Modulation requires Demodulation, including filtering and conversion to and from Intermediate Frequencies.
    - Combining simultaneous conversations via Multiple Access techniques requires a corresponding separation of those conversations.
    - Transmission of the signal at one end implies a receiving function at the other end.
    The slide above shows the baseband processing functions required to carry a voice conversation from one mobile subscriber to another mobile subscriber. In a connection through an actual network, these functions may be performed several times as a call is carried from a mobile subscriber on one cellular or Personal Communications Service (PCS) provider’s network, through the base station and mobile switching center to either the Public Switched Telephone Network (PSTN) or directly to another cellular or PCS provider’s network, to the mobile subscriber at the other end. As the baseband processing functions are discussed in the following slides, a simple model is used to describe the baseband processing required to send a call from a mobile station to a base station. The “transmit” functions are assumed to take place at the mobile station, and the “receive” functions are assumed to take place at the base station. The following slides discuss variations in baseband processing at the mobile station and the base station.
  • Transmit & Receive Baseband processing functions take place at both the mobile station and the base station. Depending on the direction of transmission (mobile station to base station or base station to mobile station), each component may perform either the transmitting or receiving set of functions described in the preceding slide. An actual call involving a wireless mobile station and base station may also connect to another base station through the wireless operator’s network, or to the Public Switched Telephone Network (PSTN) for connection to a wireline phone, or to another mobile service provider’s network. Both the backhaul links from the base station and the connecting trunks in the rest of the network path are likely to be digital, at T1 rate (1.544 Mbps) or higher. In many cases, a digital signal sent from a mobile station to a base station will be demodulated for processing at the base station, but will be recoded and transmitted as a digital signal on the wireline backhaul network without undergoing conversion back to an analog signal until it reaches the party at the other end of the conversation (either a mobile or a wireline station). In the case of digital transmission from a base station to a mobile station, the incoming signal from the wireline network to the base station may already be digital. If the radio technology being used is analog rather than digital (e.g., Advanced Mobile Phone Service (AMPS)), the signal to be transmitted from the base station to the mobile station must undergo digital to analog conversion at the base station before analog baseband processing begins.
  • Analog versus Digital The table above summarizes the baseband processing functions associated with analog versus digital radio technologies in regard to the voice and data formats of originating signals. As seen in the table, some combinations of analog versus digital signal input and analog versus digital radio technology are not compatible (labeled “N/A” in the table). Digitally formatted voice or data signals that arrive at a base station to be transmitted to a mobile station on an analog radio network must be converted to a compatible analog signal before baseband processing begins. On the other hand, transmission of analog data on digital radio technology is also incompatible. Since the speech coding techniques used for transmission of voice on digital networks have been optimized for voice and not for non-voice (e.g., voice-band modem) signals, the transmission of analog data on a digital radio network requires an “interworking function” that will convert the signal into digital data for transmission on the air interface. Interworking is not possible if the required modem data rate is higher than the data rate available on the air interface. This module makes reference to the following analog and digital radio technologies:
    - Analog radio technologies: Advanced Mobile Phone Service (AMPS); Narrowband AMPS (NAMPS).
    - Digital radio technologies: Time Division Multiple Access (IS-136 TDMA); Code Division Multiple Access (IS-95 CDMA); Global System for Mobile Communication (GSM).
    More detailed information about each of these technologies can be found in the individual technology modules.
  • Analog Baseband Processing In an analog radio system, the voice signal is transmitted from the mobile station to the base station as an analog signal. Many of the Baseband Processing functions associated with the conversion of analog to digital signals, and the processing of digital signals, are not applicable in an analog radio technology such as AMPS. In an analog radio system, baseband processing of voice signals prior to modulation consists of:
    - Compression/Expansion - A form of noise reduction based on amplifying the original signal into one with a higher average level and a smaller dynamic range. Weak signals are amplified greatly; strong signals are amplified only slightly. “Compression” at the transmitter is coupled with “expansion” at the receiver (at the cell site) to restore the original signal.
    - Pre-emphasis/De-emphasis - “Pre-emphasis” at the transmitter increases the amplitude of the higher frequency components of a Frequency Modulated (FM) signal (the type of modulation used in AMPS and NAMPS transmission) in order to reduce overall noise levels, improving the signal-to-noise ratio (S/N). It is coupled with “de-emphasis” at the receiver to restore the original signal.
    - Limiting - Uses filters to reduce transmitted bandwidth and interference between adjacent channels in an FDMA system.
    Modulation, multiplexing, and multiple access of voice signals in an analog radio system are described in later sections of this module.
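The pre-emphasis/de-emphasis pair described above can be sketched as a simple first-order digital filter and its inverse. This is an illustrative model only: the filter form, the coefficient value, and the function names are assumptions for demonstration, not AMPS specifications.

```python
# Illustrative first-order pre-emphasis filter and its inverse.
# The coefficient alpha is an assumed value, not a standardized one.
def pre_emphasize(samples, alpha=0.95):
    """Boost high-frequency components: y[n] = x[n] - alpha * x[n-1]."""
    out, prev = [], 0.0
    for x in samples:
        out.append(x - alpha * prev)
        prev = x
    return out

def de_emphasize(samples, alpha=0.95):
    """Invert pre-emphasis at the receiver: x[n] = y[n] + alpha * x[n-1]."""
    out, prev = [], 0.0
    for y in samples:
        prev = y + alpha * prev
        out.append(prev)
    return out
```

Applying `de_emphasize` to the output of `pre_emphasize` recovers the original sample sequence, mirroring how de-emphasis at the receiver restores the original signal.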
  • Analog to Digital Conversion The following are the basic steps in conversion of an analog voice signal to a digital signal:
    - Initial analog signal (for example, analog voice) - The analog voice signal is created when a subscriber speaks into a mobile station or wireline phone.
    - Sampling - Measurement at a constant rate (samples per second) of the amplitude of the analog signal in terms of a range of possible values (levels 0 through 7 in the example).
    - Quantization - Conversion of the sample values to a finite number of values (i.e., rounded to the nearest full level); it is during this process that discrete values are assigned.
    - Encoding - Each quantized level is represented as a string of ones and zeroes (up to 3 bits, for the example of 8 possible levels), as follows: a quantized value of 0 is coded as 000; a sample >0 but <=1 as 001; >1 but <=2 as 010; >2 but <=3 as 011; >3 but <=4 as 100; >4 but <=5 as 101; >5 but <=6 as 110; >6 but <=7 as 111.
    - Transmission of the digital signal - The digital signal is then modulated onto a carrier frequency, as appropriate to the technology being considered, and transmitted to a receiver (for example, from a mobile station to a base station). The signal is then demodulated and decoded at the receiver so that the original quantized values can be recovered.
  • The slide above is a visual representation of the conversion of an analog signal to a digital signal. The charted line represents the original analog voice signal. The black dashed lines and decimal numbers represent the sampling of the amplitude of the analog signal at intervals of time T, using a range of values from 0 through 7 to represent the wave form. The columns represent the quantizing of each sample to discrete quantities. As illustrated by the last three samples, values of 2.9, 3.2, and 2.5 will all be quantized as a value of 3. It can be seen from this example that there is the potential for transmitting a digitally encoded signal from which it is impossible to recover the original signal exactly at the receiver end, since some detail of the original signal can be lost when sample values are quantized. There is, however, a fundamental theorem (known as the “sampling theorem”) that states that an analog signal may be uniquely represented by discrete samples taken at intervals no larger than one over twice the bandwidth. For human speech, which ranges in frequency from 300 Hertz (that is, cycles per second) to 3300 Hertz, the bandwidth (B) is 3,000 Hertz (3 KHz). This means that samples must be taken at least once every six-thousandth of a second (1/(2B)) in order to adequately represent a voice signal. The common sampling rate actually used in wireline telephony is 8,000 samples per second. The encoded ones and zeroes specify the form of the digital signal that now represents the original analog signal. While the example shown uses 8 quantized values (0 through 7), digital encoding of voice in wireline telephony applications generally uses 256 levels of quantization (values of 0 through 255, represented in binary form as a string of 8 ones or zeroes). The need to transmit 8 bits of data 8,000 times a second means that this form of digital voice encoding requires a transmission rate of 8 x 8,000 = 64,000 bits (or 64 Kilobits) per second.
This form of analog to digital conversion for voice signals is known as Pulse Code Modulation (PCM). At the receiving end, this digital signal can be converted back into the quantized values, from which the original analog signal (or a close approximation) can be recovered.
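The quantize-and-encode steps above can be sketched in a few lines. The 8-level example and the 64 Kbps arithmetic come from the slide; the helper names and the round-half-up rule are illustrative assumptions.

```python
import math

def quantize(sample, levels=8):
    """Round an amplitude to the nearest integer level, clamped to 0..levels-1.
    Rounds halves up (an assumed convention) so 2.5 quantizes to 3 as in the slide."""
    return min(levels - 1, max(0, int(math.floor(sample + 0.5))))

def encode(level, bits=3):
    """Represent a quantized level as a fixed-width binary string."""
    return format(level, f"0{bits}b")

# The last three samples from the slide: 2.9, 3.2, and 2.5 all quantize to 3.
samples = [2.9, 3.2, 2.5]
codes = [encode(quantize(s)) for s in samples]   # each becomes "011"

# Wireline PCM: 8 bits per sample x 8,000 samples per second = 64,000 bits/s.
bit_rate = 8 * 8000
```

The receiver reverses `encode` and maps each level back to an amplitude, recovering a close approximation of the original waveform.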
  • Digital Speech Coding The discussion of analog to digital conversion used an example that represented the process of sampling, quantizing, and encoding, which would be used in a simple analog to digital conversion such as PCM. This type of digital speech coding, while representative of the basic principles of analog to digital conversion, requires a fairly high bandwidth for transmission (64 Kbps), and is consequently seldom used in wireless applications. In addition, wireless transmission is a “hostile” environment compared to wireline: the higher error rates associated with wireless transmission require the implementation of error detection and recovery techniques as part of the analog to digital conversion process. Classes of speech coders (which are sometimes called coders/decoders, or “codecs”) that are commonly used on the air interface in wireless networks include:
    - Waveform coding algorithms - These are compression techniques that exploit the redundant characteristics of the waveform itself. While PCM represents a relatively high bandwidth form of waveform coding, this category also includes a type of codec called a “differential encoder”, which quantizes and transmits only the “error signals” generated from the difference between the input speech signal and the results of a model that predicts what that signal should be.
    - Linear predictive coding algorithms (known as “vocoders”) - These operate on the principle that speech can be represented by a set of parameters that can drive a model that will synthesize the original speech signal at the receiving end. Vocoders tend to operate at low bit rates (typically 2.4 Kbps) and produce synthetic-sounding speech at the receiving end.
    - Hybrid coders (combining waveform coding techniques and vocoder techniques) - Various forms of hybrid coders use linear predictive coding to formulate a difference signal that is quantized and transmitted (as in the differential encoder variation on waveform coding).
The following slide lists in more detail the specific methods of digital speech coding that fall under each category of codec.
  • Coding Techniques
    - Waveform codecs: Pulse Code Modulation (PCM) - a high bit rate codec (64 Kbps). Adaptive Differential Pulse Code Modulation (ADPCM) and Adaptive Predictive Coding (APC) are differential coders that achieve better signal-to-quantization-noise performance than PCM at bit rates of 16 to 32 Kbps.
    - Linear Predictive codec (LPC): Generally very low bit rates (2.4 Kbps) but low quality speech output.
    - Hybrid codecs:
      - Residual-Excited LPC (RELP) - Uses linear prediction to formulate a difference signal; good speech quality can be achieved at 8 Kbps.
      - Code-Excited LPC (CELP) - Uses a “codebook” of vectors such that the error between the original signal and the predicted signal is minimized. It can achieve good speech quality at low bit rates (a codebook is defined as an ordered collection of all possible values that can be assigned to a scalar or vector variable, called “codewords”).
      - Algebraic Code-Excited LPC (ACELP) - An algebraic code is used to populate the codebooks for CELP speech coders, resulting in more efficient codebook search algorithms.
      - Vector-Sum Excited LPC (VSELP) - A type of speech coder using an excitation signal generated from the output of a long term or pitch filter and two codebooks; operates at a rate of 8 Kbps.
      - Multi-pulse, multi-level quantization (MP-MLQ) - Provides higher quality sound at low bit rates than ACELP-based codecs, but with high delay.
  • Standards The International Telecommunication Union (ITU) is the Telecommunications arm of the United Nations. The ITU Telecommunication Standardization Bureau (ITU-T) is responsible for studying technical, operating, and tariff questions and issuing recommendations on them, with the goal of standardizing telecommunications worldwide. The European Telecommunications Standards Institute (ETSI) establishes telecommunications standards for Europe. ETSI guidelines are voluntary and almost always comply with standards produced by international bodies. The ITU-T and ETSI standards summarized above represent a full range of speech coding rates for digital radio technologies. Individual radio technologies have generally selected one or more of these coding techniques according to the parameters (e. g., speech quality, coding/decoding complexity) considered most important. The following chart summarizes the characteristics of each of the codecs described above.
  • Codec Advantages versus Disadvantages The table above summarizes the advantages and disadvantages of each of the codecs defined by the ITU-T and ETSI. While G.711 (PCM) has the highest speech quality, it also has the highest bit rate. In general and within the limits of acceptable speech quality, low bit rate, low delay, and low complexity are considered advantages. The development of a good codec is a balancing act; an attempt to decrease the required bit rate, while maintaining good speech quality at the receiver, generally leads to increased complexity in the codec. Increased complexity can, in turn, translate into unacceptable cost of development and manufacture or an unacceptable delay in processing time, which itself causes a deterioration in perceived speech quality. Different digital technologies have chosen different codecs and different coding rates:
    - IS-136 TDMA uses VSELP at 8 Kbps.
    - IS-95 CDMA uses a version of CELP developed by Qualcomm, at Rate Set 1 (9.6 Kbps) or Rate Set 2 (14.4 Kbps) for voice.
    - GSM uses GSM EFR (based on ACELP) at 13 Kbps for voice.
  • Coding Channel coding is used only with digitally encoded signals, to insert redundancy that will allow error detection and correction when the signal is received and decoded. Because the radio air interface is a more “hostile” environment than wireline, potentially introducing a high error rate, a number of complex channel coding schemes have been developed for use with wireless technologies. The channel coding scheme is a characteristic of the radio technology being used, and is consistent among service providers using the same technology.
  • Channel Coding Types The digital bit stream (voice, data, or call control) is typically segmented into blocks, with channel coding used to add redundancy (i.e., additional bits) to each individual block. Categories of channel coding:
    - Block Codes: Each k-digit input block is mapped into an n-digit block of output digits, with the additional (n - k) bits known as “parity bits”. Block codes are “memoryless” in that the output code word depends only on one source k-bit block and not on any preceding blocks of digits.
    - Convolutional Codes: Involve “memory” through the use of a binary register that produces an output code based on the previous m memory blocks.
    - Interleaving: Intended to detect and correct “bursts” of errors; may be used in conjunction with block codes or convolutional codes.
  • Error Detection/Correction
    - Automatic Repeat Request (ARQ) – Detects errors at the receiving end, but requests re-transmission of the data rather than correcting the error. ARQ is generally used for data transmission rather than voice; however, the presence of ARQ techniques in protocols traditionally used for data presents some problems of inefficiency when those protocols are applied to a wireless environment (e.g., the implementation of Voice over IP over wireless).
    - Forward Error Correction (FEC) – A more complex coding scheme that allows errors to be detected and corrected at the receiving end without re-transmission of the bit stream. There are various forms of FEC, including some simple examples:
      - Repetition codes: Each bit is sent an odd number of times, and the receiver determines the correct value by the “majority” of the values received; this method is used in AMPS for error correction of control channel messages.
      - Hamming codes: Can detect and correct single errors in a four-bit sequence by the addition of 3 redundant bits; Hamming codes are a form of “block codes”, as described in the preceding slide.
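The Hamming code mentioned above can be made concrete with a short sketch: 4 data bits are protected with 3 parity bits (a Hamming(7,4) code), and the receiver locates and flips any single corrupted bit. The function names are illustrative.

```python
def hamming74_encode(d):
    """Encode 4 data bits with 3 parity bits (Hamming(7,4), positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the three parity checks; the syndrome gives the 1-based
    position of a single flipped bit (0 means no error detected)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the erroneous bit back
    return c
```

Any single-bit error in the 7-bit codeword produces a nonzero syndrome that points directly at the corrupted position, so no re-transmission is needed.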
  • Modulation Modulation allows the overlay of a signal containing “information” (speech, data, or signaling) on a carrier frequency; that is, modulation modifies a carrier wave of constant amplitude and frequency so that the necessary information can be reconstructed at the receiving end of the transmission. There are multiple reasons for modulating a signal prior to transmission on a radio network. From a purely practical point of view, transmission of signals in the frequency range of audible speech (up to around 3 KHz) presents certain physical problems: a low frequency signal would have a very large wavelength. Since the desirable length for an antenna is half the wavelength, the antenna would have to be very large to accommodate the voice signal (on the order of 50 kilometers in length). Modulation of the signal to the carrier frequency, which is normally much higher than the frequency of the baseband signal, allows the use of a much smaller antenna. The second reason for modulating a radio signal in a mobile wireless network is that the legal and regulatory structure requires service providers to transmit and receive in certain frequency bands specified either by their licensing agreements or by the laws governing certain types of transmission. Information (speech, data, or call control signals) can only be conveyed by transmitting it on the carrier frequencies assigned to a particular service provider. The use of specific carrier frequencies provides a means of separating simultaneous calls by mobile subscribers to the same carrier, as well as separating out the simultaneous calls made by customers of one wireless carrier versus customers of all other wireless carriers (and other wireless services, such as broadcast radio) operating in the same geographic area.
  • Modulation Techniques This slide provides a simplified summary of analog and digital modulation techniques. A more detailed discussion of modulation techniques, including more complex forms of digital modulation, is contained in a separate GWEC module (AI-MOD). The application of these modulation techniques is shown in the following slides.
  • Analog Modulation This slide illustrates the forms of modulation commonly used for voice and analog data transmission in an analog radio technology such as AMPS. The Baseband Voice Signal is the analog signal originating at the mobile station. Because this is assumed to be a network deploying an analog radio technology, no analog to digital conversion or speech coding has been performed on this signal. The carrier wave is the transmit frequency assigned by the base station to the mobile station, selected from among those frequencies for which the analog service provider is licensed in a given geographic area.
    - In Amplitude Modulation (AM), the amplitude of the carrier wave is modified according to the amplitude of the baseband signal.
    - In Frequency Modulation (FM), the frequency of the carrier wave is modified according to the amplitude of the baseband signal. When the amplitude of the baseband signal is positive, the frequency of the carrier wave is modulated to be greater than the center frequency or “resting frequency”. When the amplitude of the baseband signal is negative, the frequency of the carrier wave is modulated to be less than the resting frequency.
    These modulation techniques represent the form of modulation that would occur for voice and analog data signals in an AMPS or NAMPS network. However, in addition to the methods of analog signal modulation described above, some types of call control messages in analog radio technologies such as AMPS use a form of Frequency Shift Keying (described in the following slide) to represent a digital bit stream.
  • Digital Modulation This slide illustrates three basic methods of digital signal modulation, that is, the modulation of a digital data signal or of an analog voice signal that has been converted to a digital signal by using analog to digital conversion and speech coding techniques. The input in this case, rather than an analog voice signal as in the preceding slide, is a digital signal (a sequence of ones and zeroes represented by abrupt changes in the amplitude of the signal). The carrier wave can be modulated in three ways to represent the digital signal in such a way that it can be recovered at the receiving end of the transmission:
    - Amplitude Shift Keying (ASK) - The frequency and phase of the carrier wave remain constant; ones and zeroes in the digital signal are represented by changes in the amplitude of the carrier wave.
    - Frequency Shift Keying (FSK) - The amplitude and phase of the carrier wave remain constant; ones and zeroes in the digital signal are represented by changes in the frequency of the carrier wave.
    - Phase Shift Keying (PSK) - The amplitude and frequency of the carrier wave remain constant; ones and zeroes in the digital signal are represented by sudden shifts in the phase of the carrier wave.
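The three keying methods above differ only in which carrier parameter carries the bit. The following sketch makes that explicit; the carrier frequency, symbol duration, sample rate, and frequency offsets are illustrative assumptions, not values from any wireless standard.

```python
import math

def modulate(bits, scheme, f_c=1000.0, duration=0.001, rate=8000):
    """Sketch of binary ASK/FSK/PSK: return one sampled waveform segment
    per bit. All numeric parameters are illustrative only."""
    segments = []
    for b in bits:
        amp, freq, phase = 1.0, f_c, 0.0
        if scheme == "ASK":
            amp = 1.0 if b else 0.0                 # amplitude carries the bit
        elif scheme == "FSK":
            freq = f_c + (200.0 if b else -200.0)   # frequency offset carries it
        elif scheme == "PSK":
            phase = math.pi if b else 0.0           # 180-degree phase shift
        n = int(duration * rate)
        segments.append([amp * math.cos(2 * math.pi * freq * t / rate + phase)
                         for t in range(n)])
    return segments
```

For PSK, the waveform for a one is exactly the negation of the waveform for a zero, which is what a coherent receiver exploits to distinguish the two symbols.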
  • Intermediate Frequency (IF) As part of the modulation and demodulation processes, most radio technologies use an Intermediate Frequency (IF) prior to modulation to the carrier frequency at the transmitter and before demodulation to the baseband frequency at the receiver. This technique allows amplification of the signal with less introduced noise than if the same amplification were performed at carrier frequencies. The use of an IF also allows for more efficient equipment design and fine tuning of components for specific frequencies. Mixing two signals together to produce a third frequency is called “heterodyning”. If two waveforms of different frequency are combined using the appropriate "mixer" circuit, an output will be produced that contains four waveforms at different frequencies. Two of the waveforms will be at the original two frequencies. A third waveform will have a frequency equal to the sum of the two original frequencies, and a fourth will have a frequency equal to the difference of the two. The frequency equal to the difference between the carrier frequency and the frequency introduced by the local oscillator in the mixer is used as the Intermediate Frequency (IF), and can be made constant across all carrier frequencies received as input. Because the amplifiers can be customized for a particular IF, they can be designed to minimize the introduction of additional noise. Also, by limiting the set of IFs used in the design of wireless equipment, economies of scale in the manufacture of frequency-specific components can be realized. The IF is used in modulation at the transmitting end and in demodulation at the receiving end of a radio transmission, both from the mobile station to the base station and from the base station to the mobile station. It is used in both analog and digital technologies.
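The sum and difference frequencies produced by a mixer follow directly from the product-to-sum trigonometric identity, which can be checked numerically. The frequencies below are illustrative values in arbitrary units, not real carrier or oscillator frequencies; an ideal multiplying mixer is assumed (real mixers also pass the original frequencies).

```python
import math

f_rf, f_lo = 100.0, 90.0     # illustrative input and local-oscillator frequencies
f_if = abs(f_rf - f_lo)      # the difference frequency is used as the IF

def mixer_output(t):
    """Ideal multiplying mixer: the product of the two input waveforms."""
    return math.cos(2 * math.pi * f_rf * t) * math.cos(2 * math.pi * f_lo * t)

def sum_diff(t):
    """Equivalent form: cos(a)cos(b) = 1/2 cos(a-b) + 1/2 cos(a+b),
    i.e., half-amplitude components at the difference and sum frequencies."""
    return (0.5 * math.cos(2 * math.pi * (f_rf - f_lo) * t)
            + 0.5 * math.cos(2 * math.pi * (f_rf + f_lo) * t))
```

A bandpass filter centered on `f_if` then keeps only the difference component, and because the local oscillator tracks the carrier, that IF stays constant across all input channels.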
  • Demodulation Demodulation of a carrier wave basically involves reversing the modulation process to recover the original signal. Because a signal can be degraded in transmission, particularly in wireless transmission, there are several stages to the demodulation process. The preceding sections discussed modulation and demodulation, intermediate frequencies and amplification in the demodulation process. The remaining stage is filtering. Because a digital signal is susceptible to distortion, filtering becomes a critical stage in restoring the digital signal at the receiver so that it can be decoded and the original analog speech signal can be retrieved. The following section discusses the issues and techniques associated with filtering digital baseband signals.
  • Filtering Filtering is applied to baseband signals for a variety of reasons in analog and digital radio technologies, to limit the frequencies to be passed, or to improve the signal quality. This section deals with the filtering of digital baseband signals required to restore the clear sequence of ones and zeroes after demodulation, given that Inter-symbol Interference (ISI) may have distorted the signal. As discussed in the section on Digital Signal Processing, filtering is one of the functions performed on the received signal that may be done either with an analog filter or with a Digital Signal Processor, depending on whether the received signal has been converted from analog format (the carrier wave used for transmission on the air interface) to a digital signal format.
  • Filtering Effects Filtering provides for more efficient operation, and removes the distortion introduced by modulation and transmission over the air interface. Filtering:
    - Allows the transmitted bandwidth to be significantly reduced without losing the content of the digital data.
    - Eliminates much of the distortion of the received signal, known as Intersymbol Interference (ISI).
    - Removes high frequency replicas of the signal that arise due to modulation.
    - Removes as much noise as possible, while affecting the information signal as little as possible.
  • Filter Types There are several types of filters used in digital signal transmission:
    - Raised Cosine filters are commonly used in digital data communication systems to limit ISI.
    - Root Raised Cosine filters are used in cases where the overall Raised Cosine response is split equally between the transmitter and the receiver. These are "root" (square root) filters in the frequency domain, and are a form of Nyquist filter.
    - A Gaussian filter does not have zero ISI, resulting in a small blurring of symbols in GSM. However, Gaussian filters have advantages in carrier power, occupied bandwidth, and symbol-clock recovery. Because the effects of a Gaussian filter in the time domain are relatively short, each symbol will interact (causing ISI) only with the symbol immediately preceding or following it.
    - Chebyshev low pass FIR filter: IS-95 CDMA has a channel spacing of 1.25 MHz and a chip rate of 1.2288 Mcps, so it is important to reduce “leakage” to adjacent frequency channels. This is accomplished by using a filter with a very sharp shape factor. This type of filter also does not have zero ISI. However, in IS-95 CDMA, a correlation of the 64 chips is used to make a symbol decision, making ISI less of an issue than in other digital technologies.
    Information taken from Agilent Technologies Application Note (AN) 1298, “Digital Modulation in Communications Systems - An Introduction”.
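The zero-ISI property of the Raised Cosine filter can be seen directly from its impulse response, which crosses zero at every nonzero multiple of the symbol period, so neighboring symbols do not interfere at the sampling instants. The roll-off factor below is an assumed illustrative value.

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised-cosine impulse response with symbol period T and roll-off beta
    (beta=0.35 is an illustrative choice, not tied to any standard)."""
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    if abs(denom) < 1e-12:
        # Limiting value at the singular points t = +/- T / (2 * beta)
        return (math.pi / 4.0) * sinc(1.0 / (2.0 * beta))
    return sinc(t / T) * math.cos(math.pi * beta * t / T) / denom
```

Sampling the response at integer multiples of `T` gives 1 at the symbol center and (numerically) 0 everywhere else, which is exactly the Nyquist zero-ISI criterion the slide describes.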
  • Multiplexing There are several levels of multiplexing and multiple access that occur in baseband processing. These procedures are applied before or after modulation, according to the radio technology being used. The first level of multiplexing is called channel multiplexing, and is used to merge speech with call control signaling, synchronization, etc. into a single digital bit stream. This type of multiplexing is always applied prior to modulation, so that the complete digital bit stream can be represented by the modulated carrier wave. Subsequent to channel multiplexing, multiple access techniques are applied. Multiple access is defined according to the characteristic used to separate multiple simultaneous calls:
    - Frequency Division Multiple Access (FDMA) - Separates simultaneous calls by carrier frequency; can be used for either analog or digital radio technologies.
    - Time Division Multiple Access (TDMA) - Separates simultaneous calls by assigning a unique timeslot to each call, within a particular carrier frequency; applicable only to digital radio technologies.
    - Code Division Multiple Access (CDMA) - Separates simultaneous calls by assigning a unique code to each call, within a particular carrier frequency; applicable only to digital radio technologies.
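To make channel multiplexing concrete, the sketch below merges speech bits with call-control bits and a sync pattern into one bit stream. The frame layout (sync | control | speech) and the field sizes are purely hypothetical, for illustration only — they are not the format of any real air-interface standard.

```python
def build_frame(speech_bits, control_bits, sync=(1, 0, 1, 1)):
    """Channel multiplexing sketch: merge one block of speech bits with
    call-control signaling and a sync pattern into a single bit stream.
    Hypothetical frame layout: sync | control | speech."""
    return list(sync) + list(control_bits) + list(speech_bits)

def split_frame(frame, n_sync=4, n_control=3):
    """Receiver side: recover the control and speech fields."""
    control = frame[n_sync:n_sync + n_control]
    speech = frame[n_sync + n_control:]
    return control, speech

frame = build_frame(speech_bits=[1, 1, 0, 0, 1], control_bits=[0, 1, 0])
```

After demodulation, the receiver uses the sync pattern to find frame boundaries and then demultiplexes the fields in the same fixed order.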
  • Multiple Access Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA) multiplexing techniques are applied prior to modulation, to create a signal carrying multiple conversations that can then be modulated onto a single carrier frequency. Frequency Division Multiple Access (FDMA) is applied after the signal has been modulated up to the carrier frequency, since that is the frequency that will be used to distinguish between different calls (or multiplexed combinations of calls) in transmission. It should be noted that all of the analog and digital radio technologies in common use (i. e., AMPS, N-AMPS, GSM, IS-136 TDMA and IS-95 CDMA), use a form of FDMA subsequent to modulation, in the sense that simultaneous conversations in a given area are separated by the frequency channel on which they are carried, as well as by timeslot or by code. The following slide illustrates the sequence of processing for multiple access according to the multiple access method used.
  • Multiple Access Multiplexing This graphic illustrates the sequence of processing for multiple access according to the multiple access method used. It does not show channel multiplexing of speech, synchronization, and call control signals. These occur prior to the multiple access multiplexing or modulation steps shown. In order to show the assignment of calls to different frequencies, the sequence shown above represents baseband processing as it would occur at the base station, rather than at a mobile station. For a radio technology, such as AMPS, that uses FDMA, each individual call would be modulated up to a unique carrier frequency. As discussed in the separate AMPS module (AI-AMPS), the potential for a mobile station to receive a transmission from multiple base stations on the same frequency has resulted in the inclusion of additional identifiers in the call control signals. For a radio technology that uses TDMA, simultaneous calls are multiplexed together according to the number of timeslots per carrier frequency that each technology is designed to support. The carrier wave is then modulated to represent the digital bit stream that contains the signals for all the multiplexed calls on that frequency. Similarly, a CDMA network assigns a unique code to each simultaneous call, combines the speech signals with synchronization and paging signals, and modulates the carrier wave at the assigned frequency to represent the combined signals carried. The assignment of frequencies to individual calls or multiplexed combinations of calls is under the control of the base station. The list of frequencies available to a given service provider at a given base station is a function of legal and regulatory constraints and of the frequency reuse plan implemented by the service provider. The base station communicates with a given mobile station on a specific frequency, a specific frequency and timeslot, or a specific frequency and code.
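The TDMA branch of the processing described above can be illustrated with a toy round-robin multiplexer. Real systems assign whole bursts of bits (and guard times) to each timeslot rather than single bits, so this is only the interleaving idea, not a real frame structure.

```python
def tdma_multiplex(calls):
    """TDMA sketch: interleave the bit streams of several calls so that
    each frame carries one bit per call, one call per timeslot."""
    stream = []
    for frame in zip(*calls):      # one frame per symbol period
        stream.extend(frame)       # slot i within the frame carries call i
    return stream

def tdma_demultiplex(stream, n_slots):
    """Receiver side: recover each call's bits from its assigned timeslot."""
    return [list(stream[slot::n_slots]) for slot in range(n_slots)]

calls = [[1, 1, 0], [0, 1, 1], [1, 0, 0]]
stream = tdma_multiplex(calls)
```

The combined stream is what would then be modulated onto a single carrier frequency, as shown in the slide.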
  • Digital Signal Processing Digital Signal Processing operates on signals of interest (speech or data) as sequences of binary numbers, using numeric techniques, as opposed to analog processing, which operates on the waveform. "Digital" radio technologies are still partly analog:
    - After modulation, the signal transmitted between the mobile station and the base station is an analog representation of the digital signal (speech or data) - the modulated carrier wave is entirely an analog signal, even though it carries the information (the ones and zeroes) that made up the digital signal.
    - Operations performed in baseband processing may be a combination of analog techniques (e.g., filtering) and digital techniques (e.g., channel coding and decoding). For example, digital filtering, as described previously, may be performed using analog filtering techniques or in a Digital Signal Processor, operating on the sampled and quantized form of the baseband signal.
    - The characteristics that distinguish different radio technologies lie both in the analog realm (e.g., carrier frequency) and in the digital realm (e.g., speech coding and channel coding formats). However, depending on how close to the antenna a received analog carrier wave can be converted to digital form, it may eventually be possible to build radios that can be programmed to operate on any of a number of carrier frequencies, increasing the speed at which wireless networks can be upgraded.
    The result of this variability in the changeover from analog to digital signaling is that individual technologies, and even individual manufacturers' products, may apply Digital Signal Processing to differing extents in the performance of baseband signal processing functions.
  • Digital Signal Processing Hardware There are three main hardware options by which digital signaling functions may be performed:
    - Application Specific Integrated Circuit (ASIC) - A chip designed for a particular purpose. ASICs can be designed by combining circuit "building blocks" from an existing library, reducing the time it takes to produce a new ASIC, but are not programmable. An ASIC can only perform a single function, and may be analog or digital.
    - Digital Signal Processor (DSP) - A special-purpose processing unit designed to perform the mathematics required to manipulate information in digital format (speech or data). DSPs are generally programmable.
    - Field Programmable Gate Array (FPGA) - An integrated circuit that can be programmed in the field after manufacture. FPGAs are used by engineers in the design of specialized integrated circuits that can later be produced and hard-wired in mass quantities for faster performance and lower cost. FPGAs are not considered "real time" programmable, in that it takes about a half second to change the functionality, and the whole system must be taken down while this occurs (http://www.commsdesign.com/story/OEG20001012S0023).
    There are advantages and disadvantages to each of the options outlined above:
    - ASICs can provide a custom architecture optimized for a particular application, and may perform faster than a DSP where speed is essential; however, there can be significant delay in initial ASIC development due to design and manufacturing issues.
    - DSPs offer maximum flexibility, since they are programmable, but their speed of operation may be much slower.
    - FPGAs can operate significantly faster than DSPs, yet still offer the flexibility of being programmable in the field without physically changing out the hardware. Issues remain concerning their ability to reliably implement complex DSP algorithms.
  • Software Defined Radio (SDR) One of the most significant applications of Digital Signal Processing is likely to be Software Defined Radio (SDR, or "soft radio"). Potential developments include the ability to reprogram base stations and mobile stations to communicate on different frequencies or using different wireless technology standards. However, differing baseband processing requirements among radio standards mean that flexibility and processing power are issues, even if a completely flexible radio frequency front end could be developed. Some of the companies currently developing SDR products are:
    - Chameleon Systems Inc (http://www.chameleonsystems.com/)
    - Morphics (http://www.morphics.com/)
    - Xilinx (http://www.xilinx.com)
    Current technology cannot yet support analog to digital conversion at carrier frequencies. Therefore, a signal at the receiver must be down-converted to an Intermediate Frequency before it can be converted to a digital signal for numeric processing. Because of this limitation, it is not yet feasible to implement a completely flexible RF front end that can produce a baseband signal regardless of the format transmitted. However, the range of functions addressed by SDR technology is increasing, and the deployment of 3rd Generation ("3G") wireless may provide a driver for additional SDR development to help contain the high cost of network evolution.
  • The following technology characteristics are considered in comparing baseband processing:
    - Voice Transmission - AMPS and N-AMPS are analog technologies, while the others shown are digital.
    - Codec:
      - Not applicable for AMPS/N-AMPS (voice signal remains analog)
      - IS-136 TDMA uses VSELP
      - IS-95 CDMA uses a version of CELP developed by Qualcomm
      - GSM uses GSM EFR (based on ACELP)
    - Coding Rates:
      - AMPS and N-AMPS send voice as an analog signal across the air interface
      - IS-136 TDMA uses 8 Kbps voice
      - IS-95 CDMA uses Rate Set 1 (9.6 Kbps) or Rate Set 2 (14.4 Kbps) for voice
      - GSM uses 13 Kbps voice
    - Modulation:
      - AMPS and N-AMPS use FM, including FSK to encode digital data
      - IS-136 TDMA uses differentially encoded quadrature phase shift keying (DQPSK)
      - IS-95 CDMA uses direct sequence code modulation and quadrature phase shift keying (QPSK)
      - GSM uses Gaussian minimum shift keying (GMSK)
    - Multiple Access - The different technologies separate multiple users either by frequency (FDMA), by timeslots (TDMA and GSM), or by coding (CDMA).
    - Digital Filter (used to restore the shape of the digital signal at the receiver):
      - Not applicable for AMPS/N-AMPS
      - Square Root Raised Cosine (IS-136 TDMA)
      - Gaussian (GSM)
      - Chebyshev equiripple Finite Impulse Response (FIR) filter (IS-95 CDMA)
  • This slide summarizes the functions considered in Baseband Radio Transmission, including the issues of Digital Signal Processing and Software Defined Radio.

    1. 1. Air Interface-Baseband Radio Transmission (AI-BRT)
    2. 2. AI-BRT <ul><li>© Copyright 2001 Global Wireless Education Consortium </li></ul><ul><li>All rights reserved. This module, comprising presentation slides with notes, exercises, projects and Instructor Guide, may not be duplicated in any way without the express written permission of the Global Wireless Education Consortium. The information contained herein is for the personal use of the reader and may not be incorporated in any commercial training materials or for-profit education programs, books, databases, or any kind of software without the written permission of the Global Wireless Education Consortium. Making copies of this module, or any portion, for any purpose other than your own, is a violation of United States copyright laws. </li></ul><ul><li>Trademarked names appear throughout this module. All trademarked names have been used with the permission of their owners . </li></ul>
    3. 3. AI-BRT <ul><li>Partial support for this curriculum material was provided by the National Science Foundation's Course, Curriculum, and Laboratory Improvement Program under grant DUE-9972380 and Advanced Technological Education Program under grant DUE‑9950039. </li></ul><ul><li>GWEC EDUCATION PARTNERS: This material is subject to the legal License Agreement signed by your institution. Please refer to this License Agreement for restrictions of use. </li></ul>
    4. 4. Table of Contents <ul><li>Overview 5 </li></ul><ul><li>Learning Objectives 6 </li></ul><ul><li>Baseband Signaling 7 </li></ul><ul><li>Analog to Digital Conversion 13 </li></ul><ul><li>Digital Speech Coding 16 </li></ul><ul><li>Channel Coding and Error Correction 21 </li></ul><ul><li>Modulation and Demodulation 25 </li></ul><ul><li>Baseband Filtering for Digital Signals 32 </li></ul><ul><li>Multiplexing and Multiple Access 36 </li></ul><ul><li>Digital Signal Processing 40 </li></ul><ul><li>Summary 44 </li></ul><ul><li>Contributors 47 </li></ul>
    5. 5. Overview <ul><li>This module covers the following topics: </li></ul><ul><li>Baseband Signaling </li></ul><ul><li>Analog to Digital Conversion </li></ul><ul><li>Digital Speech Coding </li></ul><ul><li>Channel Coding and Error Correction </li></ul><ul><li>Modulation/Demodulation </li></ul><ul><li>Multiplexing and Multiple Access Techniques </li></ul><ul><li>Digital Signal Processing </li></ul><ul><li>Summary of Baseband Signaling </li></ul>
    6. 6. Learning Objectives <ul><li>After completing this module participants will be able to: </li></ul><ul><li>Describe the functions performed in baseband signal processing for analog and digital transmission </li></ul><ul><li>Describe the conversion of analog to digital signals </li></ul><ul><li>Characterize the differences among speech coders </li></ul><ul><li>Summarize the methods of channel coding and error correction </li></ul><ul><li>Summarize the basic techniques used in modulation and demodulation of baseband signals </li></ul><ul><li>Describe the techniques used for channel multiplexing and multiple access for different wireless technologies </li></ul>
    7. 7. Baseband Signaling
    8. 8. Baseband Signaling <ul><li>What is the baseband signal? </li></ul><ul><li>The original band of frequencies produced by a transducer, such as a microphone, telegraph key, or other signal-initiating device, prior to initial modulation. </li></ul><ul><ul><li>Note 1: In transmission systems, the baseband signal is usually used to modulate a carrier. </li></ul></ul><ul><ul><li>Note 2: Demodulation re-creates the baseband signal. </li></ul></ul><ul><ul><li>Note 3: Baseband describes the signal state prior to modulation, prior to multiplexing, following demultiplexing, and following demodulation. </li></ul></ul><ul><ul><li>Note 4: Baseband frequencies are usually characterized by being much lower in frequency than the frequencies that result when the baseband signal is used to modulate a carrier or subcarrier. </li></ul></ul>
    9. 9. Steps in Baseband Signal Processing [Figure: transmit path - Analog to Digital Conversion -> Channel Coding -> Multiplexing -> Modulation -> Multiple Access -> Transmit; receive path - Receive -> Multiple Access -> Demodulation -> Demultiplexing -> Channel Decoding -> Digital to Analog Conversion]
    10. 10. Transmit versus Receive <ul><li>Baseband processing takes place at both the mobile station and the base station </li></ul><ul><li>The unmodified (baseband) signal can be: </li></ul><ul><ul><li>Analog voice: human speech received at the mobile station and delivered from the mobile station on the receiving end </li></ul></ul><ul><ul><li>Analog data: (i. e., modem data) transmitted from a mobile station to a base station or from a base station to a mobile station </li></ul></ul><ul><ul><li>Digital voice: the signal received by a base station from another base station or from the PSTN to be transmitted to a mobile station may already be digitally encoded voice </li></ul></ul><ul><ul><li>Digital data: transmitted from a mobile station to a base station or from a base station to a mobile station </li></ul></ul>[Figure: mobile stations communicating with base stations]
    11. 11. Analog versus Digital Technologies
    12. 12. Baseband Processing (Analog Technologies) <ul><li>In an analog radio technology such as AMPS, baseband processing does very little conditioning to the raw audio (baseband) signal for voice before sending it to modulation </li></ul><ul><li>AMPS baseband processing functions include: </li></ul><ul><ul><li>Compression/Expansion </li></ul></ul><ul><ul><li>Pre-emphasis/De-emphasis </li></ul></ul><ul><ul><li>Limiting </li></ul></ul><ul><li>Modulation and demodulation of analog signals takes place as described in the Modulation and Demodulation Section </li></ul><ul><li>Multiplexing and Multiple Access techniques are applied as appropriate (e. g., AMPS uses Frequency Division Multiple Access (FDMA) after modulation) </li></ul>
    13. 13. Analog to Digital Conversion
    14. 14. Analog to Digital Conversion <ul><li>Conversion of an analog (continuous) signal to a digital (discrete) signal at the transmitting end requires the following: </li></ul><ul><ul><li>Initial analog signal (for example, analog voice) </li></ul></ul><ul><ul><li>Sampling </li></ul></ul><ul><ul><li>Quantization </li></ul></ul><ul><ul><li>Encoding </li></ul></ul><ul><ul><li>Transmission of the digital signal </li></ul></ul><ul><li>At the receiving end, the original analog signal is reconstructed by decoding the digital signal </li></ul>
    15. 15. Analog to Digital Conversion [Figure: an analog signal is sampled at intervals T; sample values such as 7.0, 4.8, 4.2, 3.1, 2.4, ... are quantized to levels 0-7 and encoded as 3-bit codes (111, 100, ...), yielding the digital signal]
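The sample-quantize-encode steps on this slide can be sketched numerically. The 3-bit, 8-level scale below mirrors the slide's example; a real codec would also handle negative amplitudes and use many more levels.

```python
def quantize(sample, n_bits=3, v_max=8.0):
    """Map an analog sample (0 <= v < v_max) to one of 2**n_bits
    discrete quantization levels."""
    levels = 2 ** n_bits
    level = int(sample / v_max * levels)
    return min(level, levels - 1)      # clamp full-scale input

def encode(level, n_bits=3):
    """Encode a quantization level as an n-bit binary code."""
    return format(level, '0{}b'.format(n_bits))

# Samples like those shown on the slide, taken every T seconds
samples = [7.0, 4.8, 1.1, 6.2, 2.4]
codes = [encode(quantize(s)) for s in samples]
```

At the receiver, decoding reverses the encoding step, and the analog waveform is reconstructed (approximately) from the quantized levels.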
    16. 16. Digital Speech Coding
    17. 17. Digital Speech Coding <ul><li>Pulse Code Modulation (PCM) is a form of speech coding known as “waveform coding”, and is commonly used to convert analog voice and data to digital transmission in the wireline network </li></ul><ul><li>In a wireless network, due to the constraints of the air interface (between the mobile station and the base station): </li></ul><ul><ul><li>PCM (which requires a transmission rate of 64 Kbps) is an inefficient use of scarce bandwidth resources </li></ul></ul><ul><ul><li>Higher error rates for wireless versus wireline transmission require the adoption of error recovery techniques as part of digital transmission </li></ul></ul><ul><li>Classes of speech coders (coders/decoders, or “codecs”) that may be used on the air interface in wireless networks include: </li></ul><ul><ul><li>Waveform coding algorithms </li></ul></ul><ul><ul><li>Linear predictive coding algorithms (known as “vocoders”) </li></ul></ul><ul><ul><li>Hybrid coders (combining waveform coding techniques and vocoder techniques) </li></ul></ul>
    18. 18. Digital Speech Coding Techniques <ul><li>Waveform codec: </li></ul><ul><ul><li>Pulse Code Modulation (PCM) </li></ul></ul><ul><ul><li>Adaptive Differential Pulse Code Modulation (ADPCM) </li></ul></ul><ul><ul><li>Adaptive Predictive Coding (APC) </li></ul></ul><ul><li>Linear Predictive codec (LPC): </li></ul><ul><ul><li>Models speech by encoding and transmitting a few key parameters, which are used at the receiver to synthesize the original speech signal </li></ul></ul><ul><li>Hybrid codec: </li></ul><ul><ul><li>Residual-Excited LPC (RELP) </li></ul></ul><ul><ul><li>Code-Excited LPC (CELP) </li></ul></ul><ul><ul><li>Algebraic Code-Excited LPC (ACELP) </li></ul></ul><ul><ul><li>Vector-Sum Excited LPC (VSELP) </li></ul></ul><ul><ul><li>Multi-pulse, multi-level quantization (MP-MLQ) </li></ul></ul>
    19. 19. Standardization of Coding Techniques <ul><li>ITU-T G-series standards: </li></ul><ul><ul><li>G.711: Describes 64 Kbps PCM voice coding, including A-law and μ-law encoding laws </li></ul></ul><ul><ul><li>G.726: Describes ADPCM coding at 40, 32, 24, and 16 Kbps </li></ul></ul><ul><ul><li>G.728: Describes 16 Kbps low-delay variation of CELP (LD-CELP) </li></ul></ul><ul><ul><li>G.729: Describes 8 Kbps CELP (CS-ACELP) (G.729 and G.729 Annex A are similar standards that differ in computational complexity) </li></ul></ul><ul><ul><li>G.723.1: Describes a compression technique for speech or audio signal components at very low bit rates (5.3 Kbps (based on ACELP) or 6.3 Kbps (based on MP-MLQ)) </li></ul></ul><ul><li>ETSI standards for GSM: </li></ul><ul><ul><li>GSM EFR: Compresses 8 KHz sampled speech to 13 Kbps (based on ACELP algorithm) </li></ul></ul>
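The μ-law companding mentioned under G.711 can be illustrated with the continuous μ-law curve. Note this is only the underlying idea: the actual G.711 codec uses a segmented (piecewise-linear) approximation of this curve with 8-bit codewords, which is not implemented here.

```python
import math

MU = 255  # the North American mu-law parameter

def mu_law_compress(x, mu=MU):
    """Continuous mu-law companding curve for -1 <= x <= 1:
    small amplitudes are boosted so they survive uniform quantization."""
    return math.copysign(math.log(1 + mu * abs(x)) / math.log(1 + mu), x)

def mu_law_expand(y, mu=MU):
    """Inverse of the compression curve, applied at the receiver."""
    return math.copysign(((1 + mu) ** abs(y) - 1) / mu, y)
```

Compressing before quantization and expanding after decoding keeps the signal-to-quantization-noise ratio roughly constant across loud and quiet speech.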
    20. 20. Comparison of Codecs
    21. 21. Channel Coding and Error Correction
    22. 22. Channel Coding <ul><li>The purpose of channel coding in digital transmission of voice or data is to introduce additional bits into the information bit stream that will allow errors to be detected and, in some cases, corrected at the receiving end </li></ul><ul><li>The radio air interface is more “hostile” than wireline: errors in transmission occur due to noise, co-channel interference (from users in adjacent cells), multipath fading (cancellation of the signal due to interference by multiple reflections of the signal) </li></ul><ul><li>Shannon’s Channel Capacity Theorem indicates that it is possible, in principle, to devise a coding technique such that the probability of error of information transmitted at a rate R less than the channel capacity C can be made “arbitrarily small” </li></ul><ul><li>In practice, there is a tradeoff: Reduction in error rate generally means a reduction in throughput as well, increasing the cost per subscriber in a capacity-limited network </li></ul>
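Shannon's bound from the slide above can be computed directly. The 30 kHz bandwidth and 20 dB signal-to-noise ratio below are illustrative numbers chosen for this sketch, not the parameters of any particular system.

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity C = B * log2(1 + S/N), in bits/second.
    No coding scheme can reliably exceed this rate R < C."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# An AMPS-style 30 kHz channel at a 20 dB signal-to-noise ratio
snr = 10 ** (20 / 10)               # 20 dB -> factor of 100
c = channel_capacity(30_000.0, snr)
```

The bound says nothing about *how* to code — practical codes trade throughput (redundant bits) for a lower error rate, as noted above.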
    23. 23. Types of Channel Coding <ul><li>The digital bit stream (voice, data, or call control) is typically segmented into blocks </li></ul><ul><li>Channel coding is used to add redundancy to each individual block </li></ul><ul><li>Categories of channel coding: </li></ul><ul><ul><li>Block Codes: </li></ul></ul><ul><ul><ul><li>Input block is mapped into output block containing parity bits </li></ul></ul></ul><ul><ul><ul><li>Considered “memoryless”, i. e., dependent only on the individual code </li></ul></ul></ul><ul><ul><li>Convolutional Codes: </li></ul></ul><ul><ul><ul><li>Incorporates “memory” - output is based on the previous m memory blocks </li></ul></ul></ul><ul><ul><li>Interleaving: </li></ul></ul><ul><ul><ul><li>Corrects for “bursts” of errors </li></ul></ul></ul><ul><ul><ul><li>May be used in conjunction with block codes or convolutional codes </li></ul></ul></ul>
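Two of the categories above can be sketched together: a trivial block code (a single even-parity bit, which can detect but not correct one bit error) and a block interleaver (write row by row, read column by column, so a burst of channel errors is spread across many code blocks). Both are minimal illustrations, not the codes any standard uses.

```python
def add_parity(block):
    """Block code sketch: append one even-parity bit to the block."""
    return block + [sum(block) % 2]

def parity_ok(coded):
    """Receiver check: an even bit-sum means no (odd-weight) error."""
    return sum(coded) % 2 == 0

def interleave(bits, rows, cols):
    """Block interleaver: write row by row, read column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse permutation applied at the receiver."""
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

After deinterleaving, a burst of adjacent channel errors lands as isolated single errors in separate blocks, where the per-block code can handle them.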
    24. 24. Error Detection and Correction <ul><li>Two main approaches exist for error correction and detection: </li></ul><ul><ul><li>Automatic Repeat Request (ARQ) </li></ul></ul><ul><ul><li>Forward Error Correction (FEC) </li></ul></ul>
    25. 25. Modulation and Demodulation
    26. 26. Modulation <ul><li>Modulation allows the overlay of a signal containing “information” (speech, data, or signaling) on a carrier wave in a different frequency band from the original signal </li></ul><ul><li>There are multiple reasons for modulating a signal prior to transmission on a radio network: </li></ul><ul><ul><li>Higher frequency transmission allows the use of a smaller antenna: for example, radio signals in the range of audible speech (about 3 KHz) would require an antenna on the order of 50 kilometers in length </li></ul></ul><ul><ul><li>Licensing and/or statutory requirements constrain wireless service providers to transmit and receive in specific frequency bands. Transmission by the subscribers of one service provider is separated by frequency from all other radio transmission in the same geographic area </li></ul></ul>
    27. 27. Modulation Techniques <ul><li>Modulation techniques vary with the technology supported: </li></ul><ul><ul><li>Analog radio technology (e. g., AMPS) uses analog modulation techniques </li></ul></ul><ul><ul><ul><li>Amplitude Modulation (AM) </li></ul></ul></ul><ul><ul><ul><li>Frequency Modulation (FM) </li></ul></ul></ul><ul><ul><ul><li>Phase Modulation (PM) (considered a variation of Frequency Modulation) </li></ul></ul></ul><ul><ul><li>Digital radio technology (digitized voice or digital data input) such as GSM, IS-95 CDMA, or IS-136 TDMA uses digital modulation techniques: </li></ul></ul><ul><ul><ul><li>Amplitude Shift Keying (ASK) </li></ul></ul></ul><ul><ul><ul><li>Frequency Shift Keying (FSK) </li></ul></ul></ul><ul><ul><ul><li>Phase Shift Keying (PSK) </li></ul></ul></ul><ul><li>Modulation techniques, including more complex forms of digital modulation, are discussed in detail in a separate module (AI-MOD) </li></ul>
    28. 28. Analog Modulation [Figure: amplitude-vs-time plots of a carrier wave, a baseband voice signal, and the resulting Amplitude Modulation (AM) and Frequency Modulation (FM) waveforms]
    29. 29. Digital Modulation [Figure: the binary digits 0 0 1 1 1 0 as a digital signal, a carrier wave, and the resulting Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), and Phase Shift Keying (PSK) waveforms]
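A minimal Phase Shift Keying sketch: each bit is sent as one carrier cycle, with a 0 transmitted 180 degrees out of phase, and the receiver decides each bit by correlating the received samples against the reference carrier. The parameters are toy values, and no noise or filtering is modeled.

```python
import math

def bpsk_modulate(bits, samples_per_bit=8):
    """Binary PSK: each bit lasts one carrier cycle; a 0 flips the
    carrier phase by 180 degrees (i.e., negates the waveform)."""
    wave = []
    for bit in bits:
        sign = 1.0 if bit else -1.0
        for i in range(samples_per_bit):
            wave.append(sign * math.cos(2 * math.pi * i / samples_per_bit))
    return wave

def bpsk_demodulate(wave, samples_per_bit=8):
    """Coherent detection: correlate each symbol interval with the
    reference carrier and decide the bit from the sign."""
    bits = []
    for start in range(0, len(wave), samples_per_bit):
        corr = sum(wave[start + i] * math.cos(2 * math.pi * i / samples_per_bit)
                   for i in range(samples_per_bit))
        bits.append(1 if corr > 0 else 0)
    return bits
```

ASK and FSK differ only in what the bit changes (amplitude or frequency instead of phase); the correlate-and-decide structure of the receiver is similar.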
    30. 30. Intermediate Frequency <ul><li>An Intermediate Frequency is defined as a frequency to which a carrier frequency is shifted as an intermediate step in transmission or reception </li></ul><ul><ul><li>Intermediate frequencies in a wireless system are generally in the tens or low hundreds of MHz range </li></ul></ul><ul><ul><li>The purpose of modulating a signal to an Intermediate Frequency prior to modulation to the carrier frequency at the transmitter, and prior to demodulation to voiceband frequencies at the receiver, is that amplification of the signal can be accomplished more efficiently than if the same functions were performed at carrier frequencies </li></ul></ul><ul><ul><li>Intermediate Frequencies are used in modulation and demodulation in both analog and digital wireless technologies </li></ul></ul>
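Down-conversion to an Intermediate Frequency works by multiplying (mixing) the received carrier with a local oscillator, which produces components at the difference frequency (the IF, which is kept) and the sum frequency (which is filtered off). The toy numbers below (30 Hz RF, 20 Hz LO, 10 Hz IF) are chosen only to make the arithmetic visible at a small sample count.

```python
import math

F_RF, F_LO, F_IF = 30.0, 20.0, 10.0   # toy frequencies, in Hz
N = 1000                               # samples over one second

rf = [math.cos(2 * math.pi * F_RF * n / N) for n in range(N)]
lo = [math.cos(2 * math.pi * F_LO * n / N) for n in range(N)]
mixed = [r * l for r, l in zip(rf, lo)]  # the mixer output

def tone_level(signal, freq_hz):
    """Estimate the amplitude of the component at freq_hz by
    projecting the signal onto a cosine at that frequency."""
    n_samples = len(signal)
    return (2.0 / n_samples) * sum(
        s * math.cos(2 * math.pi * freq_hz * n / n_samples)
        for n, s in enumerate(signal))
```

The product cos(A)cos(B) = ½[cos(A−B) + cos(A+B)] puts half the energy at the IF (10 Hz) and half at the sum frequency (50 Hz), while the original 30 Hz component vanishes.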
    31. 31. Demodulation <ul><li>Demodulation of a carrier wave to recover the original signal involves several stages: </li></ul><ul><ul><li>Intermediate Frequency </li></ul></ul><ul><ul><li>Amplification </li></ul></ul><ul><ul><li>Demodulation </li></ul></ul><ul><ul><li>Filtering </li></ul></ul>
    32. 32. Baseband Filtering for Digital Signals
    33. 33. Filtering in Digital Baseband Processing <ul><li>Filters can be applied at both the transmit and receive end, for analog and digital radio technologies </li></ul><ul><li>Depending on whether the signal at the receiver has been sampled and converted to digital, filtering can be done using Digital Signal Processing as well as with an analog filter </li></ul><ul><li>Equalization involves application of a filter to the received signals in order to reverse the time dispersion caused by multi-path effects </li></ul><ul><li>Time dispersion causes inter-symbol interference (ISI), or, more generally, distortion of the received signal </li></ul>
    34. 34. Effects of Filtering <ul><li>Allows the transmitted bandwidth to be significantly reduced without losing the content of the digital data </li></ul><ul><li>Eliminates much of the distortion of the received signal, Intersymbol Interference (ISI) </li></ul><ul><li>Removes high frequency replicas of the signal that arise due to modulation </li></ul><ul><li>Removes as much noise as possible, while affecting the information signal as little as possible </li></ul>
    35. 35. Types of Filters in Digital Transmission <ul><li>Raised Cosine filter </li></ul><ul><li>Square Root Raised Cosine filter (IS-136 TDMA) </li></ul><ul><li>Gaussian filter (GSM) </li></ul><ul><li>Chebyshev lowpass Finite Impulse Response (FIR) filter (IS-95 CDMA) </li></ul>
    36. 36. Multiplexing and Multiple Access
    37. 37. Multiplexing and Multiple Access <ul><li>Channel multiplexing is used to merge speech with call control signaling, synchronization, etc. into a single digital bit stream prior to modulation </li></ul><ul><li>Multiple Access techniques multiplex active calls from multiple users onto a single frequency channel by using: </li></ul><ul><ul><li>Frequency Division Multiple Access (FDMA) </li></ul></ul><ul><ul><li>Time Division Multiple Access (TDMA) </li></ul></ul><ul><ul><li>Code Division Multiple Access (CDMA) </li></ul></ul>
    38. 38. Multiple Access <ul><li>Multiple Access techniques may be applied before or after modulation, depending on the technique used: </li></ul><ul><ul><li>FDMA: applied after the signals have been modulated up to the carrier frequencies to be used for transmission </li></ul></ul><ul><ul><li>TDMA: applied before modulation, so that the combined bit stream for all active calls on a given frequency channel can be modulated onto the same carrier frequency </li></ul></ul><ul><ul><li>CDMA: applied before modulation, so that all active calls on a given frequency channel can be modulated onto the same carrier frequency </li></ul></ul>
    39. 39. Multiple Access Multiplexing [Figure: FDMA - each call is modulated onto its own carrier frequency f1...fn, and the modulator outputs are combined; TDMA - calls are combined in a time division multiplexer, and each multiplexed stream is then modulated onto a carrier frequency f1...fn; CDMA - calls are combined in a code division multiplexer, and each multiplexed stream is then modulated onto a carrier frequency f1...fn]
    40. 40. Digital Signal Processing
    41. 41. Digital Signal Processing <ul><li>Digital Signal Processing operates on signals of interest (speech or data) as sequences of binary numbers, using numeric techniques </li></ul><ul><li>“Digital” radio technologies are still partly analog </li></ul><ul><li>The result is that Digital Signal Processing is applied to differing extents across technologies and products </li></ul>
    42. 42. Hardware Options for Digital Signal Processing <ul><li>Application Specific Integrated Circuit (ASIC) </li></ul><ul><li>Digital Signal Processor (DSP) </li></ul><ul><li>Field Programmable Gate Array (FPGA) </li></ul>
    43. 43. Software Defined Radio (SDR) <ul><li>Software defined radio (SDR) or “soft radio”: </li></ul><ul><ul><li>Will use Digital Signal Processing to allow service providers to reprogram base stations as standards change or to develop mobile stations that will be able to communicate with any wireless technology base station in any frequency band </li></ul></ul><ul><li>SDRs currently use a combination of DSP, ASIC, and FPGA technology with hardware support </li></ul><ul><ul><li>Ultimate goal is to have all processing done by software </li></ul></ul><ul><ul><li>Battery power, size, weight, and cost requirements are all issues, especially in handheld mobile stations </li></ul></ul><ul><ul><li>The closer to the antenna that an incoming signal can be sampled and converted back to a digital data stream, the more baseband processing functions can be programmed into software </li></ul></ul>
    44. 44. Summary
    45. 45. Summary of Baseband Processing by Technology
    46. 46. Summary of Baseband Radio Transmission <ul><li>Analog to Digital Conversion </li></ul><ul><li>Speech Coding </li></ul><ul><li>Channel Coding and Error Detection </li></ul><ul><li>Modulation, Demodulation, and Filtering </li></ul><ul><li>Multiplexing and Multiple Access </li></ul><ul><li>Digital Signal Processing/Software Defined Radio </li></ul>
    47. 47. Industry Contributors <ul><li>Telcordia Technologies, Inc ( http://www.telcordia.com ) </li></ul>The following companies provided materials and resource support for this module:
    48. 48. Individual Contributors <ul><li>The following individuals and their organization or institution provided materials, resources, and development input for this module: </li></ul><ul><li>Dr. Cheng Sun </li></ul><ul><ul><li>Cal Poly </li></ul></ul><ul><ul><li>http://www.calpoly.edu </li></ul></ul><ul><li>Dr. David Voltmer </li></ul><ul><ul><li>Rose-Hulman Institute of Technology </li></ul></ul><ul><ul><li>http://www.rose-hulman.edu </li></ul></ul>