# Ieeetcomgpcdmav12

### Transcript

• 5. B. The GP solution: a nonlinear MMSE

The MMSE criterion minimizes

$$ f_{\mathrm{mmse}}(\cdot) = \arg\min_{f(\cdot)} E\big[(b - f(\mathbf{r}))^2\big], \qquad (11) $$

in which $b$ and $\mathbf{r}$ are, respectively, the transmitted and received symbols and $f(\cdot)$ is the function that estimates $b$ from $\mathbf{r}$. It is widely known [42] that the minimum of (11) for AWGN channels is given by the conditional mean of $b$ given $\mathbf{r}$:

$$ \hat{b}_* = f_{\mathrm{mmse}}(\mathbf{r}_*) = E[b_* \mid \mathbf{r}_*], \qquad (12) $$

which is a nonlinear function of the inputs, unless the inputs are Gaussian distributed.

In DS-CDMA, linear MMSE estimators are typically used, due to their simplicity and nearly optimal results in many standard scenarios. The linear MMSE detector, $f(\mathbf{r}) = \mathbf{r}^\top\mathbf{w}$, can be expressed as a Wiener filter [1]:

$$ \mathbf{w}_{\mathrm{mmse}} = \arg\min_{\mathbf{w}} E\big[(b - \mathbf{r}^\top\mathbf{w})^2\big] = \mathbf{R}_{rr}^{-1}\,\mathbf{R}_{rb}, \qquad (13) $$

where $\mathbf{R}_{rr}$ and $\mathbf{R}_{rb}$ are, respectively, the autocorrelation of the inputs and the cross-correlation between the inputs and outputs. Under mild conditions, the MMSE linear detector can be built from the spreading codes of every user and the channel's impulse response:

$$ \mathbf{w}_{\mathrm{mmse}} = (\mathbf{P}\mathbf{P}^\top + \sigma_n^2\mathbf{I})^{-1}\mathbf{p}_j, \qquad (14) $$

where $\mathbf{p}_j$ denotes the $j$th column of $\mathbf{P}$. If the other users' spreading codes or the channel state information are unknown, we need to replace $\mathbf{R}_{rr}$ and $\mathbf{R}_{rb}$ by their sampled versions:

$$ \mathbf{w}_{\mathrm{smmse}} = (\mathbf{R}\mathbf{R}^\top/L)^{-1}\mathbf{R}\mathbf{b}/L, \qquad (15) $$

where the columns of $\mathbf{R}$ are the received chips at every symbol in the training set. We denote this solution as sampled MMSE (SMMSE).

The GPR-MUD estimates the output for a new incoming sample using (7), which can be simplified, if we use a linear kernel ($\phi(\mathbf{r}) = \mathbf{r}$), to

$$ \mathbf{w}_{\mathrm{gpr}} = \boldsymbol{\mu}_w = \sigma_w^2\,\mathbf{R}\big(\sigma_w^2\,\mathbf{R}^\top\mathbf{R} + \sigma_\nu^2\mathbf{I}\big)^{-1}\mathbf{b}. \qquad (16) $$

After some algebraic transformations we can express the linear GPR solution, up to a scaling constant, as

$$ \mathbf{w}_{\mathrm{gpr}} = \Big(\mathbf{R}\mathbf{R}^\top + \frac{\sigma_\nu^2}{\sigma_w^2}\mathbf{I}\Big)^{-1}\mathbf{R}\mathbf{b}. \qquad (17) $$

Since $\mathbf{R}\mathbf{R}^\top$ and $\mathbf{R}\mathbf{b}$ are, respectively, the empirical estimates of the correlation matrices $\mathbf{R}_{rr}$ and $\mathbf{R}_{rb}$, this expression is equal to (15) except for the term $(\sigma_\nu^2/\sigma_w^2)\mathbf{I}$. This term endows the SMMSE solution in (15) with ridge regularization [14]. Ridge regression is a standard tool to avoid overfitting and improve the performance of the SMMSE detector. The regularization term must fade away as the number of training samples increases [10], thereby achieving the optimal solution for infinite samples. For short training sequences, the estimated correlation matrices represent $\mathbf{R}_{rr}$ and $\mathbf{R}_{rb}$ poorly, and the regularization term should be kept high to avoid overfitting. A good example of a regularized solution is the CMOE detector in [11],

$$ \mathbf{w}_{\mathrm{cmoe}} = (\mathbf{R}\mathbf{R}^\top/L + \alpha\mathbf{I})^{-1}\mathbf{R}\mathbf{b}/L, \qquad (18) $$

where $\alpha$ is set using an ad-hoc procedure. Other examples of more involved regularization methods can be found in [22], [15]. A common drawback of these approaches is that they do not clearly indicate how the regularization parameter is set and how it affects the solution. GPR provides a simple method, based on maximum likelihood, to compute the regularization. GPR trains its regularization parameter so as to automatically choose the best linear detector between the matched filter ($\sigma_\nu^2/\sigma_w^2 \to \infty$) and the SMMSE ($\sigma_\nu^2/\sigma_w^2 = 0$), improving on the performance of the MUD-SMMSE.

The nonlinear GPR-MUD inherits this property from the linear case, as the nonlinear GPR can be understood as a linear solution in a transformed space. Therefore, if the proposed kernel is flexible enough to approximate any desired function, the GPR predictions are optimal in the MMSE sense. The kernel we proposed in Section IV-A is universal and is able to approximate (12) and produce optimal decisions.
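To make the relationship between (15), (17), and (18) concrete, here is a minimal NumPy sketch of the three linear detectors, assuming $\mathbf{R}$ is stored as a $D \times L$ array whose columns are the received chip vectors and $\mathbf{b}$ holds the $L$ training symbols. The function names and the fixed `noise_to_signal` ratio are illustrative only; in the GPR-MUD this ratio is not set by hand but learned by maximizing the marginal likelihood, as discussed above.

```python
import numpy as np

def smmse_detector(R, b):
    """Sampled MMSE detector of Eq. (15).

    R : (D, L) array, columns are the received chip vectors of the training symbols.
    b : (L,) array of transmitted training symbols.
    """
    D, L = R.shape
    Rrr = R @ R.T / L                 # empirical input autocorrelation
    Rrb = R @ b / L                   # empirical input/output cross-correlation
    return np.linalg.solve(Rrr, Rrb)

def linear_gpr_detector(R, b, noise_to_signal):
    """Linear-kernel GPR detector of Eq. (17), up to a scaling constant.

    noise_to_signal plays the role of sigma_nu^2 / sigma_w^2 (fixed here by hand,
    whereas the GPR-MUD learns it from the marginal likelihood).
    """
    D = R.shape[0]
    return np.linalg.solve(R @ R.T + noise_to_signal * np.eye(D), R @ b)

def cmoe_detector(R, b, alpha):
    """CMOE detector of Eq. (18) with an ad-hoc regularizer alpha."""
    D, L = R.shape
    return np.linalg.solve(R @ R.T / L + alpha * np.eye(D), R @ b / L)
```

Setting `noise_to_signal` to zero recovers the SMMSE direction, while letting it grow drives the weights toward a scaled matched filter $\mathbf{R}\mathbf{b}$, which is exactly the trade-off that the learned regularization in (17) navigates.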
C. Computational load

GPR uses the 'kernel trick' [31], [41], [24] to achieve nonlinear solutions without explicitly describing the nonlinear transformation to the feature space, which can be infinite dimensional. The computational complexity of GPR is dominated by the inversion of the covariance matrix, which is O(L³). This inversion is needed in every step of any gradient-based optimization method used to estimate the hyperparameters [24] in the training stage. The result of the training stage is then used to estimate the output for any new received data at a computational cost given just by the computation of k in (3). For large training sequences this computational complexity can be prohibitively large, and we need to approximate the GPR solution to deal with thousands of samples, as we show shortly. In the linear GPR solution, if the input dimension is lower than the number of training samples (L > D), the computational complexity reduces to O(D³), as we derive next, which makes the complexity of this algorithm independent of the number of training examples and similar to that of the linear SMMSE-MUD.

1) The general case: There are several proposals in the literature to reduce the complexity of the GPR training and prediction stages [43]–[47]. A detailed review of these methods can be found in [48]. The main idea of these approaches is to represent the covariance matrix of the GP with a reduced set of J samples (or pseudo-samples) much smaller than L, while retaining the accuracy of the whole training dataset. The algorithms in [43]–[47] propose different ways of selecting the J samples in the reduced set. Some of them train the locations of the pseudo-samples, which yields a further reduced set at a larger computational cost during training. Other approaches use heuristics to select the pseudo-samples from the training set. The pseudo-samples can also be chosen incrementally until a desired accuracy is achieved.
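The algorithms in [43]–[47] differ mainly in how the J pseudo-samples are obtained; as a hedged illustration of the shared idea, the sketch below fixes a set of J pseudo-inputs Z and uses a subset-of-regressors style predictive mean, so the dominant training cost becomes O(LJ²) rather than O(L³). The squared-exponential kernel, the function names, and the way Z is chosen are assumptions made for this example, not the procedure of any specific reference.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential kernel; rows of X1 and X2 are input vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def reduced_set_gpr_mean(X, y, Z, Xstar, noise_var=0.1):
    """Predictive mean of a GPR approximated with J pseudo-samples Z.

    X     : (L, d) training inputs, y : (L,) training targets.
    Z     : (J, d) pseudo-inputs with J << L.
    Xstar : (M, d) test inputs.
    The (J, J) linear system below replaces the (L, L) inversion of the full GPR.
    """
    Kzz = se_kernel(Z, Z)                       # (J, J)
    Kxz = se_kernel(X, Z)                       # (L, J)
    Ksz = se_kernel(Xstar, Z)                   # (M, J)
    A = noise_var * Kzz + Kxz.T @ Kxz           # (J, J) system
    return Ksz @ np.linalg.solve(A, Kxz.T @ y)

# A simple heuristic choice of Z is a random subset of the training inputs;
# other schemes select Z incrementally or optimize its locations during training.
```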
• 6. 2) Linear GPR: If we are interested only in linear detectors, the complexity of each iteration of the optimization procedure can be significantly reduced from O(L³) to O(min{L, D}) without loss of accuracy. We propose a very simple algorithm for computing the linear GPR-MUD. For a linear kernel the covariance function becomes

$$ \mathbf{C}_\theta = e^{\theta_1}\mathbf{X}^\top\mathbf{X} + e^{\theta_2}\mathbf{I} = e^{\theta_1}\big(\mathbf{X}^\top\mathbf{X} + e^{\theta_2-\theta_1}\mathbf{I}\big), \qquad (19) $$

where the columns of $\mathbf{X}$ are the sample inputs in (8) and $\theta_1$, $\theta_2$ are the hyperparameters to be estimated. We can write $\mathbf{X}^\top\mathbf{X}$ in terms of its eigenvalues:

$$ \mathbf{X}^\top\mathbf{X} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^\top, \qquad (20) $$

where $\mathbf{U}$ is an orthonormal matrix containing the eigenvectors of $\mathbf{X}^\top\mathbf{X}$, and $\boldsymbol{\Lambda}$ is a diagonal matrix with its eigenvalues. This representation is useful to compute the inverse of $\mathbf{C}_\theta$ as

$$ \mathbf{C}_\theta^{-1} = e^{-\theta_1}\,\mathbf{U}\big(\boldsymbol{\Lambda} + e^{\theta_2-\theta_1}\mathbf{I}\big)^{-1}\mathbf{U}^\top. \qquad (21) $$

This eigenvalue–eigenvector representation simplifies the gradient with respect to $\theta_i$ as follows:

$$ \frac{\partial l(\boldsymbol{\theta})}{\partial \theta_1} = -\frac{1}{2}\sum_{i=1}^{D} \frac{\lambda_i}{\lambda_i + e^{\theta_2-\theta_1}} + \frac{1}{2}\sum_{i=1}^{D} \frac{\lambda_i z_i^2\, e^{-\theta_1}}{\big(\lambda_i + e^{\theta_2-\theta_1}\big)^2}, \qquad (22) $$

$$ \frac{\partial l(\boldsymbol{\theta})}{\partial \theta_2} = -\frac{1}{2}\sum_{i=1}^{D} \frac{e^{\theta_2-\theta_1}}{\lambda_i + e^{\theta_2-\theta_1}} + \frac{1}{2}\sum_{i=1}^{D} \frac{e^{\theta_2-2\theta_1} z_i^2}{\big(\lambda_i + e^{\theta_2-\theta_1}\big)^2}, \qquad (23) $$

where $\mathbf{z} = \mathbf{U}^\top\mathbf{y}$. At most, the number of nonzero eigenvalues is min(L, D). We have used D in the previous equations since in most cases L > D.

The complexity of the linear SMMSE-MUD is O(D³), as it needs to invert a D × D matrix. The complexity of training the GPR is linear in D once we have computed its eigenvalue decomposition, which is O(D³). Each optimization step of the GPR is insignificant with respect to the matrix inversion. Therefore, the complexity of the GPR detector with a linear kernel is of the same order of magnitude as that of the SMMSE-MUD.

V. EXPERIMENTAL RESULTS

A. Regularization

We first propose the same scenario as in [15], with K = 10 active users and Gold spreading sequences of length N = 31. The amplitudes of the interferers were equal to that of the user of interest. We include the average over 300 simulated chip-spaced channels of length Mc = 15 with equally distributed zero-mean random Gaussian channel paths. We used Q = 2 in (8). We compare the BER of the SMMSE detector in (15) to the GPR with linear kernel (α1 = 0 in (10)), the SVM with linear kernel and soft margin 0.5, the CMOE in (18), and the tapered regularized method (CMT) in [15]. The regularization parameters for the CMOE and the CMT were set as described in [49], where the covariance matrix was estimated with L = 372 samples. The training sequences were generated randomly for every channel. The BER was estimated with test inputs different from the training sequences.

Fig. 1. BER along the SNR in a CDMA scenario with K = 10 users and Gold sequences with N = 31 for linear SVM (∗), CMT, CMOE, SMMSE (◦), and linear GPR MUD for L = 64 training samples.

Fig. 2. BER along the number of training data in a CDMA scenario with K = 10 users and Gold sequences with N = 31 for linear SVM (∗), CMT, CMOE, SMMSE (◦), and linear GPR MUD for an SNR of 15 dB.
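As a rough illustration of how (20)–(23) keep each optimization step cheap, the sketch below factorizes X once (an economy SVD, which yields the nonzero eigenvalues of X'X and the projected targets z = U'y) and then evaluates the gradients in O(min(L, D)) per call. The function names are illustrative, and the gradient expressions implement Eqs. (22)–(23) with l(θ) taken to be the log marginal likelihood of the training symbols.

```python
import numpy as np

def precompute_linear_gpr(X, y):
    """One-off factorization for the linear-kernel GPR of Eqs. (19)-(21).

    X : (D, L) matrix whose columns are the training inputs of (8).
    y : (L,) vector of training symbols.
    Returns the nonzero eigenvalues of X'X (Eq. 20) and the components
    z = U'y along the corresponding eigenvectors.
    """
    _, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = W S V'
    lam = s ** 2          # at most min(L, D) nonzero eigenvalues of X'X
    z = Vt @ y            # projections of y onto those eigen-directions
    return lam, z

def grad_log_evidence(theta, lam, z):
    """O(min(L, D)) evaluation of the gradients in Eqs. (22)-(23)."""
    theta1, theta2 = theta
    a = lam + np.exp(theta2 - theta1)        # lambda_i + e^(theta2 - theta1)
    quad = z ** 2 / a ** 2
    d1 = -0.5 * np.sum(lam / a) + 0.5 * np.exp(-theta1) * np.sum(lam * quad)
    d2 = -0.5 * np.exp(theta2 - theta1) * np.sum(1.0 / a) \
         + 0.5 * np.exp(theta2 - 2.0 * theta1) * np.sum(quad)
    return np.array([d1, d2])
```

The factorization costs O(D³) once (for L > D), after which any off-the-shelf gradient-based optimizer can iterate on these two derivatives, so the overall training cost stays of the same order as that of the SMMSE-MUD, as stated above.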
We report the BER as a function of the SNR for L = 64 samples in Fig. 1, and the BER as a function of the number of training samples for an SNR of 15 dB in Fig. 2. The linear GPR-MUD receiver clearly outperforms the other detectors for short training sequences over the SNR range of interest, see Fig. 1. The SVM is the second best procedure, although its performance is significantly poorer than that of the GPR detector. The CMOE and CMT use a fixed regularization procedure that precludes them from performing well for all training