Low-Complexity Localized Walsh Decoding for CDMA Systems
Albert M. Chan, Jon Feldman, Raghu Madyastha
Vanu, Inc., Cambridge, MA, USA 02142
Email: {chanal,jonfeld,raghu}@vanu.com

Piotr Indyk and David Karger
MIT CSAIL, Cambridge, MA, USA 02139
Email: {indyk,karger}@mit.edu

Abstract— As software radio becomes increasingly popular for wireless infrastructure, intelligent signal processing algorithms are required to make the most efficient use of the computational cycles available on general-purpose processors. In particular, the design of CDMA receiver algorithms for software radio is challenging because signals must be processed at rates much higher than other wireless standards. While traditional hardware implementations of the CDMA standard IS-95 [1] despread signals received at the base station prior to Walsh decoding, in this paper we take advantage of the flexibility afforded by software implementations and develop three classes of Walsh decoding algorithms that do not require full despreading of incoming signals at the base station. Two proposed classes of algorithms exploit the fact that Walsh codes are locally decodable codes, which have the surprising property that any bit of the message can be recovered (with some probability) by examining only a small number of symbols of the codeword. We also describe a third class of algorithms based on code puncturing. All these algorithms enable trading off computation for performance, and we use them to dynamically minimize the computational requirements of the despreader and subsequent Walsh decoder such that a target bit-error rate is maintained with changing channel conditions. These algorithms are applicable to other CDMA-based systems that use Walsh codes for orthogonal modulation, and the third class is also applicable to CDMA-based systems that use codes for channelization, including 1xRTT, EV-DO, and W-CDMA.

I. INTRODUCTION

The design of code division multiple access (CDMA) receiver algorithms for software radio systems is challenging. The high spreading factor typical of CDMA air interface standards requires the signal processing subsystem to process a high rate of input samples. For example, in the IS-95 reverse link [1], the received signal at the base station must be despread at a rate of 1.2288 Megachips/sec using a mobile-specific long code prior to Walsh decoding. The computation rate for long code generation, receiver despreading, and Walsh decoding is thus roughly some multiple of 1.2288 MHz. Because these processes are repeated for each of the strongest received paths for each mobile in the system, a significant portion of the available computational resources on a multi-GHz processor is consumed by these processing stages.

In this paper, we take advantage of the flexibility afforded by software implementations and introduce classes of techniques to minimize the processing of IS-95 reverse link signals at the base station. These techniques are also applicable to other advanced CDMA-based receivers that use Walsh codes for orthogonal modulation, and the code puncturing technique can also be modified to a partial correlation technique for other CDMA-based receivers that use codes for channelization, such as 1xRTT, EV-DO, and W-CDMA. In particular, we develop methods that process only a fraction of the despread signal to decode Walsh codewords. In turn, only a fraction of the received signal needs to be despread, and only a fraction of the long code needs to be computed.

This paper is organized as follows. In Section II, we review Walsh orthogonal modulation and its usage in IS-95. Three classes of Walsh decoding algorithms that use only a fraction of the despread signal are introduced in Section III.
The first is local decoding—a well-known concept in the computational complexity and cryptography communities [2], but here we present the novel application of local decoding to a noncoherent receiver. The second algorithm class is generalized local decoding, a novel and elegant generalization of local decoding. The third is punctured decoding, which follows from the observation that the generator matrix of a Walsh code contains all distinct columns. The different classes of techniques are characterized by different tradeoffs between computational complexity and bit-error rate (BER), evaluated by simulation over additive white Gaussian noise (AWGN) channels. Section IV explores a realistic wireless scenario in which base stations encounter reverse link channels with fluctuating instantaneous SNR. We present Walsh decoders based on local decoding, generalized local decoding, and punctured decoding that adapt the amount of computation according to the instantaneous SNR, so that a target BER is achieved with minimal computation.

II. WALSH ORTHOGONAL MODULATION

Unlike the forward link of IS-95, the reverse link does not utilize a pilot channel, and thus signal processing at the base station is noncoherent; i.e., the random phase associated with the received carrier is unknown. Binary antipodal modulation, for example, cannot be used because the quadratic operations inherent in noncoherent demodulation destroy signs. One alternative is to use orthogonal modulation, which not only facilitates noncoherent detection at the base station, but also provides additional coding gain. In the IS-95 reverse link, the Walsh code of length 64 is used for orthogonal modulation.
[Fig. 1. IS-95 reverse link traffic channel at the mobile for the 9.6 kbps data rate of Rate Set I. The structure is similar for other vocoder rates and for the reverse access channel.]

[Fig. 2. Optimal noncoherent receiver structure for one IS-95 signal path of the reverse link traffic channel. The structure is similar for other vocoder rates and for the reverse access channel.]

The Walsh code of length n = 2^k is a set of mutually orthogonal codewords that can be defined and generated by the rows of a 2^k × 2^k Hadamard matrix [3]. Starting with a 1×1 matrix H_1 = [0], higher-order binary Hadamard matrices can be recursively generated via

    H_{2^k} = [ H_{2^{k-1}}  H_{2^{k-1}} ]
              [ H_{2^{k-1}}  H̄_{2^{k-1}} ],

where H̄ denotes the binary complement of H. For example, H_4 and the associated Walsh codewords of length 4 in bipolar (±1) form are

    H_4 = [ 0 0 0 0 ]        w_0 = [+1, +1, +1, +1]
          [ 0 1 0 1 ]        w_1 = [+1, −1, +1, −1]
          [ 0 0 1 1 ]        w_2 = [+1, +1, −1, −1]
          [ 0 1 1 0 ]        w_3 = [+1, −1, −1, +1].

Figure 1 shows the physical layer processing of the IS-95 reverse traffic channel at the mobile. Note that after interleaving, the coded bits are mapped six bits at a time into w_j, one of 64 possible Walsh codewords of length 64. Under the assumption that proper timing synchronization has been achieved, the optimal noncoherent receiver structure for one path of the IS-95 reverse link [4] is depicted in Fig. 2.
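The recursive Hadamard construction above is straightforward to implement. The following sketch (illustrative Python/NumPy, not from the paper) builds H_{2^k} by repeated tiling, complementing the lower-right block at each step, and maps the rows to bipolar Walsh codewords:

```python
import numpy as np

def hadamard(k):
    """Build the 2^k x 2^k binary Hadamard matrix H_{2^k} recursively.

    Each step tiles the previous matrix into a 2x2 block layout and
    complements the lower-right block, per the recursion in the text.
    """
    H = np.zeros((1, 1), dtype=int)          # H_1 = [0]
    for _ in range(k):
        H = np.block([[H, H], [H, 1 - H]])   # 1 - H is the binary complement
    return H

# Walsh codewords of length 4 in bipolar form: map bit 0 -> +1, bit 1 -> -1
walsh4 = 1 - 2 * hadamard(2)
print(walsh4)   # rows are w_0 .. w_3, e.g. w_3 = [+1, -1, -1, +1]
```

The same call with k = 6 yields the 64 codewords used on the IS-95 reverse link; mutual orthogonality of the bipolar rows follows directly from the construction.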
It is sufficient for our purposes to note that if the transmitted Walsh codeword is w_j, estimates of w_j cos θ and w_j sin θ are correlated with each of the 64 possible Walsh codewords at the receiver, and the corresponding pairs of correlations are squared and summed to eliminate any dependency on the unknown θ. The 64 squared correlation values indicate the likelihood of each of the 64 possible Walsh codewords. Taking into account the interleaver and the six bits that each Walsh codeword represents, the set of 64 correlation values can be converted into soft information to be fed into a convolutional decoder [5].

Because of the special structure inherent in Walsh codes, it is well known that the correlations of a sequence with the 2^k Walsh codewords can be efficiently computed in O(n log n) operations using the fast Hadamard transform (FHT) [3]. The FHT has the same butterfly structure as the fast Fourier transform (FFT) [6], but the coefficients for the FHT are ±1 rather than complex exponentials. Thus, the correlations in the optimal noncoherent receiver of Fig. 2 can be performed by two FHTs of length 64.
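The FHT butterfly can be sketched as follows (illustrative Python; the function name `fht` is ours). For a bipolar input vector, the output components are exactly the correlations with the n Walsh codewords:

```python
import numpy as np

def fht(x):
    """Fast Hadamard transform: O(n log n) butterflies with +/-1 coefficients.

    Same radix-2 structure as the FFT, but each butterfly is just an
    add/subtract pair, so no complex multiplications are needed.
    """
    y = np.asarray(x, dtype=float).copy()
    n = len(y)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b   # butterfly
        h *= 2
    return y

# Correlating w_2 = [+1, +1, -1, -1] against all length-4 Walsh codewords:
print(fht([1, 1, -1, -1]))   # peak of 4 at index 2
```

In the IS-95 receiver of Fig. 2, two such length-64 transforms (one per quadrature branch) replace the bank of 64 correlators on each branch.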
III. SUBCODE-BASED WALSH DECODING

Traditionally, Walsh codes are decoded optimally with fixed computational complexity. In this section we introduce algorithms that trade off computational complexity and BER based on the mathematical structures underlying Walsh codes. These algorithms form the basis of the dynamic adaptation algorithms of Section IV.

The binary codewords of the (64,6) Walsh code used in IS-95 can be generated by computing c = xG for all 1 × 6 binary row vectors x, where G is the generator matrix whose columns are the 64 distinct 6 × 1 binary vectors; i.e.,

    G = [ 0 0 0 0 0 0 ··· 1 ]
        [ 0 0 0 0 0 0 ··· 1 ]
        [ 0 0 0 0 0 0 ··· 1 ]
        [ 0 0 0 0 1 1 ··· 1 ]
        [ 0 0 1 1 0 0 ··· 1 ]
        [ 0 1 0 1 0 1 ··· 1 ].

The special structure of this generator matrix makes it possible to efficiently decode the Walsh codeword without examining the entire codeword. We now describe three such techniques.

A. Local Decoding

The special structure of Walsh codes permits an estimate of one of the six input bits by observing only two of the 64 symbols of the received Walsh codeword, via a process known as local decoding, which is well known in the computational complexity and cryptography communities [2]. To understand this technique, we label the symbols of the codeword as c = [c_0, c_1, c_2, ..., c_63], where the subscripts are the decimal values of the corresponding binary column vectors in G. Suppose we wish to decode the zeroth bit of x = [x_0, x_1, x_2, x_3, x_4, x_5] in the absence of receiver noise. We can obtain the value of x_0 by modulo-2 adding any pair of codeword components c_i and c_j for which the binary representations of i and j (and the corresponding columns in G) differ only in the zeroth bit:

    c_i ⊕ c_j = (⊕_{s=0}^{5} x_s G_{si}) ⊕ (⊕_{t=0}^{5} x_t G_{tj})
              = ⊕_{s=0}^{5} x_s (G_{si} ⊕ G_{sj})
              = x_0 (G_{0i} ⊕ G_{0j}) ⊕ (⊕_{s=1}^{5} x_s (G_{si} ⊕ G_{sj}))
              = x_0,

since G_{0i} ⊕ G_{0j} = 1 while G_{si} ⊕ G_{sj} = 0 for s ≥ 1. For example, c_2 and c_34 form one of the 32 possible pairs that can be used to decode x_0.
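The pair identity above is easy to check numerically. A minimal sketch (illustrative Python; the index convention — row 0 of G holding the most significant bit — is chosen so that the c_2/c_34 example from the text works out):

```python
import numpy as np

# Generator matrix of the (64,6) Walsh code: column i is the 6-bit binary
# expansion of i, with row 0 holding the most significant bit.
G = np.array([[(i >> (5 - s)) & 1 for i in range(64)] for s in range(6)])

def encode(x):
    """c = xG over GF(2)."""
    return (np.array(x) @ G) % 2

def local_decode_bit(c, t, i):
    """Recover x_t from the single pair (c_i, c_j), where j is i with bit t flipped."""
    j = i ^ (1 << (5 - t))
    return int(c[i] ^ c[j])

x = [1, 0, 1, 1, 0, 1]
c = encode(x)
print(local_decode_bit(c, 0, 2))   # uses the pair (c_2, c_34); prints x_0 = 1
```

Any of the 32 choices of i with bit 0 clear gives the same answer in the noiseless case, which is what makes the soft-voting extension below possible.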
In a practical situation in which Walsh codewords are represented in bipolar (±1) format as opposed to binary format, the modulo-2 addition is replaced by normal multiplication of c_i and c_j. In the presence of noise, the product is quantized to ±1. The signal-to-noise ratios (SNRs) encountered in practice are usually too low to reliably decode a bit by examining only one pair of codeword symbols. Instead, the sum of products of up to 32 bipolar pairs can be quantized to ±1 to decode one bit in a "soft voting" procedure.

Our use of local decoding in a noncoherent system is novel. The resulting receiver structure is the same as in Fig. 2 except that the two Walsh correlators are replaced by two local decoders that sum the products of the same codeword symbol pairs, and the subsequent squarers are eliminated since the multiplication of symbol pairs already squares cos θ and sin θ. The soft votes from the two local decoders are then added together to eliminate any dependency on θ, and the sum is quantized to ±1 to decode one bit.

[Fig. 3. BER of 2-point local Walsh decoding for AWGN channels as a function of SNR per information bit.]

The simulated AWGN BER for local decoding in the IS-95 reverse link as a function of both the number of pairs used to decode each bit and the SNR per information bit is depicted in Fig. 3. (For the purposes of computing SNR per information bit in this paper, "information bit" shall refer to bits at the input of the Walsh encoder rather than the convolutional encoder.) Although local decoding using 32 pairs of codeword symbols makes use of the entire Walsh codeword, its performance is nearly 3 dB worse than optimal decoding because the relationships between symbols in different pairs are not fully exploited.
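The soft-voting rule described above can be sketched as follows. This is a simplified coherent AWGN sketch in illustrative Python (the paper's noncoherent receiver runs one such decoder per quadrature branch and adds the two votes before quantizing); `soft_vote_bit` and the noise level are our choices:

```python
import numpy as np

G = np.array([[(i >> (5 - s)) & 1 for i in range(64)] for s in range(6)])

def soft_vote_bit(r, t, num_pairs, rng):
    """Decode x_t by summing the products of bipolar symbol pairs."""
    mask = 1 << (5 - t)
    bases = [i for i in range(64) if not i & mask]          # 32 candidate pairs
    chosen = rng.choice(bases, size=num_pairs, replace=False)
    vote = sum(r[i] * r[i ^ mask] for i in chosen)          # soft vote
    return 0 if vote > 0 else 1                             # quantize

rng = np.random.default_rng(0)
x = [0, 1, 1, 0, 1, 0]
b = 1 - 2 * ((np.array(x) @ G) % 2)                         # bipolar codeword
r = b + 0.5 * rng.standard_normal(64)                       # noisy reception
print([soft_vote_bit(r, t, 8, rng) for t in range(6)])
```

Noiselessly, every pair product equals (−1)^{x_t}, so the vote is ±num_pairs; noise makes individual products unreliable, which is why averaging over more pairs lowers the BER in Fig. 3.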
Each reduction of the number of local decoding pairs by a factor of two causes an additional loss of 2 dB.

B. Generalized Local Decoding

To decode two input bits using local decoding, one pair of symbols in the Walsh codeword can decode one bit and another pair can decode the other bit. While local decoding can decode bits using few computational operations and examining potentially few codeword symbols, one might expect to get a lower BER by jointly decoding both input bits when constrained to examine only 4 of the 64 symbols of a received Walsh codeword.

In what we call generalized local decoding, certain groups of four symbols in the Walsh codeword can jointly decode two input bits at low BERs. Whereas local decoding determines the bit x_t using two symbols c_i and c_j such that the binary representations of i and j differ only in the tth bit position, generalized local decoding determines the pair of bits x_s and x_t using four symbols c_g, c_h, c_i, and c_j such that the binary representations of g, h, i, and j differ only in the sth and tth bit positions.
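One such group of four symbols can jointly recover (x_s, x_t) by running a 4-point FHT over the group and reading off the index of the peak. A sketch in illustrative Python, under the same index convention as before (row 0 of G is the most significant bit); `gen_local_decode` is our name, and the base index is assumed to have bits s and t clear:

```python
import numpy as np

G = np.array([[(i >> (5 - s)) & 1 for i in range(64)] for s in range(6)])

def fht4(v):
    """4-point fast Hadamard transform."""
    a = [v[0] + v[1], v[0] - v[1], v[2] + v[3], v[2] - v[3]]
    return np.array([a[0] + a[2], a[1] + a[3], a[0] - a[2], a[1] - a[3]])

def gen_local_decode(b, s, t, base=0):
    """Jointly decode (x_s, x_t), s < t, from one group of four bipolar
    symbols whose indices differ only in bit positions s and t.
    `base` must have bits s and t clear."""
    ws, wt = 1 << (5 - s), 1 << (5 - t)
    idx = [base, base + wt, base + ws, base + ws + wt]   # g < h < i < j
    z = fht4([b[i] for i in idx]) ** 2   # squaring removes any common sign
    m = int(np.argmax(z))
    return m >> 1, m & 1                 # two-bit index of the peak

x = [1, 0, 1, 1, 0, 1]
b = 1 - 2 * ((np.array(x) @ G) % 2)
print(gen_local_decode(b, 0, 1))   # -> (1, 0), i.e. (x_0, x_1)
```

The squaring step stands in for the noncoherent combining used in the actual receiver, where the FHT outputs from the two quadrature branches are squared and added.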
These four symbols are processed in a way that generalizes local decoding. In local decoding, x_t is determined by taking the product of codeword components c_i and c_j in bipolar format and quantizing to ±1. This process is equivalent to performing a 2-point FHT on the two-component vector [c_i, c_j] where i < j, and determining x_t to be the binary index of the maximum FHT component. That is, if the zeroth FHT component is largest then decode as 0; otherwise the first FHT component is largest and decode as 1. In generalized local decoding, a 4-point FHT is performed on the four-component vector [c_g, c_h, c_i, c_j] where g < h < i < j, and x_s and x_t are decoded as the two-bit binary index of the maximum FHT component, where s < t.

Generalized local decoding also works for groups of 8, 16, and 32 codeword symbols, for which 3, 4, and 5 bits are decoded, respectively. Note that at one extreme, generalized local decoding using groups of 2 codeword symbols is simply local decoding, while at the other extreme, generalized local decoding using the entire 64 symbols of the Walsh codeword is optimal Walsh decoding implemented by a 64-point FHT. All six input bits can be decoded using combinations of various-sized FHTs. For example, one could use three 4-point FHTs or two 8-point FHTs.

As in the previous section, multiple groups of codeword symbols can be processed and combined to decode input bits. In local decoding, up to 32 pairs of codeword symbols can be used to decode one input bit because there are 32 choices for the five fixed bits in the binary representation of the codeword symbol indices. In general, up to 2^(6−p) groups of 2^p codeword symbols can be used to decode p input bits because there are 2^(6−p) choices for the fixed 6−p bits in the binary representation of the codeword symbol indices. Because the reverse link of IS-95 is noncoherent, the components of the 2^(6−p) FHTs are squared before being added component-wise.

[Fig. 4. BER of generalized 8-point local Walsh decoding for AWGN channels as a function of SNR per information bit.]

Figure 4 shows the simulated AWGN BER for generalized 8-point local decoding. From Figs. 3 and 4, it is clear that if the number of codeword symbols examined to decode each input bit is fixed, better performance is obtained with higher-order FHTs.

C. Punctured Decoding

Thus far, we have been exploiting (2^p, p) codes that are found within the (64,6) Walsh code—that is, we have been using a subset of the codeword symbols to decode a subset of the input bits. In this section, we use a subset of the codeword symbols to decode all six input bits. Since the columns of the generator matrix G for the (64,6) Walsh code include all possible 6-bit binary vectors, any (n,6) binary linear code with n < 64 and distinct generator matrix columns can be obtained by puncturing those symbols in the (64,6) Walsh code that correspond to unwanted columns in G.

For a given n < 64, the set of all possible (n,6) linear codes created by distinct puncturing patterns has a wide range of BERs, and we wish to find one that leads to low BERs. In noncoherent systems such as IS-95, the (n,6) code that maximizes the minimum distance does not necessarily lead to the best BER. Take for instance the (32,6) biorthogonal code, which is the (32,6) linear code with the maximum minimum Hamming distance [7]. However, the complement of each codeword is also a codeword, so pairs of indistinguishable codewords exist in a noncoherent system with unknown carrier phase.
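Maximum-likelihood decoding of a punctured (n,6) subcode reduces to zero-filling the punctured positions and running a single full-length FHT, since zeroed positions contribute nothing to any correlation. A sketch in illustrative Python (for testability we puncture a fixed set of 16 symbols here; the paper selects the n retained symbols at random):

```python
import numpy as np

G = np.array([[(i >> (5 - s)) & 1 for i in range(64)] for s in range(6)])

def fht(x):
    """O(n log n) fast Hadamard transform."""
    y = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(y):
        for i in range(0, len(y), 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b
        h *= 2
    return y

def punctured_decode(b, keep):
    """ML-decode an (n,6) punctured Walsh code: zero-fill the punctured
    positions, then take all 64 correlations with one 64-point FHT."""
    r = np.zeros(64)
    r[keep] = b[keep]
    return int(np.argmax(fht(r) ** 2))   # squared: noncoherent-style metric

x = [1, 0, 1, 1, 0, 1]                   # message bits 101101 = 45
b = (1 - 2 * ((np.array(x) @ G) % 2)).astype(float)
keep = np.arange(48)                     # an illustrative (48,6) puncturing
print(punctured_decode(b, keep))         # -> 45
```

The FHT cost is unchanged, but only n despread symbols (and n long-code chips) are needed, which is where the computational savings in the receiver chain come from.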
A more appropriate criterion for selecting a code for a noncoherent system is to simultaneously maximize the minimum distance and minimize the maximum distance. While we are not aware of research done to identify codes that simultaneously optimize both metrics, it is well known in coding theory that randomly generated codes are generally good codes with high probability, especially for long codeword lengths (large n) [8]. In light of this, we can randomly select a subset of n codeword symbols from the (64,6) Walsh code, thereby effectively obtaining an (n,6) code that, with high probability, has good BER performance.

The maximum-likelihood decoder for the resulting (n,6) code is the minimum-distance decoder which, in the case of bipolar codewords, is equivalent to taking the correlations of the 64 possible codewords of length n with the punctured received codeword, and selecting the codeword corresponding to the maximum. This set of 64 correlations can still be efficiently computed using an FHT on the received codeword with the 64 − n unwanted codeword symbols set to zero. In a noncoherent system such as IS-95, the receiver structure is similar to Fig. 2, except that puncturing is performed prior to the Walsh correlations.

[Fig. 5. BER of punctured Walsh decoding for AWGN channels as a function of SNR per information bit.]

Figure 5 shows the simulated AWGN BER curves for randomly punctured (60,6), (56,6), (48,6), (32,6), and (16,6) codes.

IV. ADAPTIVE SUBCODE-BASED WALSH DECODING

Like typical wireless channels, the IS-95 reverse link encounters large fluctuations in instantaneous SNR. The fluctuations due to path loss and shadowing are partially mitigated by power control, but the rapid SNR fluctuations due to multipath fading remain, especially at high mobile speeds. The techniques presented in the previous section provide a variety of operating points in the tradeoff plane between BER and computational complexity at a fixed SNR, so in theory, if we knew the instantaneous SNR, we could choose the technique (and thus, operating point) at that time that achieves the target BER with as little computation as possible. However, this kind of information is not necessarily available or accurate at the base station.

To address this issue, we can test a series of suboptimal Walsh decoding algorithms with operating points that simultaneously move towards higher computational complexity and lower BERs until the target BER is achieved. Although we cannot know the true BER, we can get an indication by monitoring a reliability metric generated by the Walsh decoding algorithm until it exceeds a threshold. The suboptimal Walsh decoding techniques can be chosen such that only an incremental amount of computation is required for the next, slightly more complex technique.

We test adaptive algorithms based on local decoding, generalized local decoding, and punctured decoding by simulating the IS-95 reverse link. Each adaptive Walsh decoding technique at a particular stopping threshold corresponds to a point in the tradeoff plane between BER and computational complexity, averaged over many IS-95 frames experiencing independent fades. Due to implementation-specific software optimizations, it is not very meaningful or realistic to count the total number of algorithmic operations for long code generation, despreading, and Walsh decoding as a measure of computational complexity.
Rather, we take as a measure the average number of Walsh codeword symbols used to decode all six input bits, because the computation is roughly linear in this quantity.

The tradeoff between BER and average computational complexity for adaptive local decoding is shown in Fig. 6. In this algorithm, the number of pairs used to locally decode one of the six input bits is increased by increments of one until the magnitude of the sum of the pair products exceeds a threshold. Since there are 32 possible terminations to the algorithm, we call this a 32-stage algorithm.

[Fig. 6. Tradeoff between BER and average computational complexity for the three classes of adaptive subcode-based Walsh decoding. The SNR per information bit is lognormally distributed with mean 4.7 dB and variance 1.5 dB.]

Each point was generated by assuming a specific stopping threshold and assuming that the order in which the pairs are examined is chosen at random. The curve is traced out from left to right as the threshold increases from 0 to infinity; at low thresholds the algorithm typically stops after looking at few pairs, while at high thresholds the algorithm typically stops after looking at many pairs. The reason the tradeoff curve here is not a very good one is that even if few pairs are used to decode each of the six bits, most of the pairs for each of the six bits do not correspond to the same Walsh codeword symbols, and the computational complexity becomes very high.

Figure 6 also shows the tradeoff curve for an adaptive generalized local decoding algorithm using 8-point FHTs. This 8-stage algorithm increments the number of 8-point FHTs used to decode three of the six input bits until the metrics for the three bits all cross a threshold.
When each additional 8-point FHT is used, its squared correlations are added to the sum of squared correlations of the 8-point FHTs already used. The metric for each bit, for the purposes of algorithm termination, is the absolute difference between the maximum sum of squared correlations over all three-bit input patterns with a zero in that bit position and the maximum sum of squared correlations over all those patterns with a one in that bit position. This adaptive generalized local decoding algorithm creates a better tradeoff than adaptive local decoding.

The tradeoff curve for adaptive punctured decoding is also shown in Fig. 6, corresponding to a 64-stage algorithm in which up to all 64 Walsh codeword symbols are used to decode all six input bits. When an additional codeword symbol is used, the length of the 64 correlations is extended by that additional symbol. The metric for each bit, for the purposes of stopping the algorithm, is the absolute difference between the maximum squared correlation over all six-bit input patterns with a zero in that bit position and the maximum squared correlation over all those patterns with a one in that bit position. This 64-stage algorithm approaches the BER of optimal noncoherent decoding (the BER of the rightmost point on the curve) with lower computational complexity. Note that adaptive punctured decoding has the better tradeoff at low BERs, while adaptive generalized 8-point local decoding has the better tradeoff at higher BERs.

[Fig. 7. BER and average computational complexity of adaptive random punctured decoding as a function of average SNR per information bit. The variance of the SNR is 1.5 dB.]

In Fig. 7, we compare the BER and computational complexity of optimal noncoherent decoding specifically to adaptive random punctured decoding. While optimal decoding requires a fixed amount of computation regardless of SNR, similar BERs can be achieved by adaptive random punctured decoding with an amount of computation that decreases as SNR increases. If the goal is to achieve a target BER of 10^−2, the required computation is even less.

Although we have used the same threshold at each stage of these multi-stage algorithms for convenience, there is no reason that different thresholds cannot be used at different stages.
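The per-bit reliability metric and incremental stopping rule of the 64-stage adaptive punctured decoder can be sketched as follows (illustrative Python; `adaptive_punctured_decode` is our name, and for clarity we recompute the full FHT at each stage, whereas the paper extends the correlations incrementally):

```python
import numpy as np

G = np.array([[(i >> (5 - s)) & 1 for i in range(64)] for s in range(6)])

def fht(x):
    y = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(y):
        for i in range(0, len(y), 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b
        h *= 2
    return y

def adaptive_punctured_decode(b, order, threshold):
    """Reveal codeword symbols one at a time (in `order`) until every bit's
    reliability metric crosses `threshold`, then decode.

    The metric for bit t is |max squared correlation over patterns with a 0
    in that position - max over patterns with a 1 in that position|."""
    r = np.zeros(64)
    used = 0
    for i in order:
        r[i] = b[i]
        used += 1
        z = fht(r) ** 2
        ok = True
        for t in range(6):
            mask = 1 << (5 - t)
            z0 = max(z[m] for m in range(64) if not m & mask)
            z1 = max(z[m] for m in range(64) if m & mask)
            if abs(z0 - z1) < threshold:
                ok = False
                break
        if ok:
            break
    return int(np.argmax(z)), used

x = [1, 0, 1, 1, 0, 1]                    # message bits 101101 = 45
b = (1 - 2 * ((np.array(x) @ G) % 2)).astype(float)
decoded, used = adaptive_punctured_decode(b, range(64), threshold=64.0)
print(decoded, used)                      # decodes 45 once all metrics clear
```

A higher threshold forces more symbols to be examined before termination, which is exactly how the stopping threshold sweeps out the tradeoff curve in Fig. 6.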
With these additional degrees of freedom, the achievable points in the tradeoff plane for a particular adaptive algorithm form a region rather than a curve. While it is difficult to analytically or numerically determine the optimal thresholds that achieve points on the lower left border of the achievable region, it is possible to find distinct thresholds for each stage that generate improved tradeoff curves. This is an interesting area for future research.

Furthermore, adaptive algorithms with fewer stages may be more practical. For example, a 2-stage adaptive local decoding algorithm looks at a certain number of pairs, and then decides whether to examine the remaining pairs. If designed correctly, the reduction in the number of stages can simplify the algorithm and yet retain a similar tradeoff.

REFERENCES

[1] EIA/TIA/IS-95 Mobile Station-Base Station Compatibility Standard for Dual-Mode Wideband Spread Spectrum Cellular System, Telecommunications Industry Association, July 1993.
[2] L. Trevisan, "Some Applications of Coding Theory in Computational Complexity," Quaderni di Matematica, vol. 13, pp. 347–424, 2004.
[3] R. K. Yarlagadda and J. E. Hershey, Hadamard Matrix Analysis and Synthesis, with Applications to Communications and Signal/Image Processing. Boston: Kluwer Academic, 1997.
[4] J. S. Lee and L. E. Miller, CDMA Systems Engineering Handbook. Boston: Artech House, 1998.
[5] A. J. Viterbi, CDMA: Principles of Spread Spectrum Communication. Reading, MA: Addison-Wesley, 1995.
[6] A. V. Oppenheim and R. W. Schafer, with J. R. Buck, Discrete-Time Signal Processing, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 1999.
[7] V. S. Pless and W. C. Huffman, eds., Handbook of Coding Theory, Vol. 1. Amsterdam: Elsevier Science, 1998.
[8] S. J. MacMullan and O. M. Collins, "A Comparison of Known Codes, Random Codes, and the Best Codes," IEEE Trans. Inform. Theory, vol. 44, no. 7, pp. 3009–3022, Nov. 1998.