
Massive MIMO and Random Matrix



  1. Mini Seminar on Massive MIMO and Random Matrix. Department of Electronics & Communication Engineering, National Institute of Technology, Rourkela. Varun Kumar (514EC1005), under the supervision of Prof. Sarat Kumar Patra.
  2. Massive MIMO and Random Matrix Theory (RMT).
     Massive MIMO: Massive MIMO makes a clean break with current practice through the use of a very large number of service antennas (e.g., hundreds or thousands) that are operated fully coherently and adaptively. The extra antennas help by focusing the transmission and reception of signal energy into ever-smaller regions of space.
     Random Matrix Theory: a wide range of existing mathematical results relevant to the analysis of the statistics of random matrices arising in wireless communications.
     • In this treatment, complex Gaussian random variables are always circularly symmetric, i.e., with uncorrelated real and imaginary parts, and complex Gaussian vectors are always proper complex.
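As a quick numerical illustration (a sketch using `numpy`; the sample size is an arbitrary choice), a circularly symmetric complex Gaussian has vanishing pseudo-variance $E[z^2]$ while $E[|z|^2]$ equals its variance:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000  # hypothetical sample size

# i.i.d. CN(0, 1): real and imaginary parts N(0, 1/2), uncorrelated
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

pseudo_var = np.mean(z ** 2)       # circular symmetry: E[z^2] -> 0
power = np.mean(np.abs(z) ** 2)    # E[|z|^2] -> 1
```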
  3. Most Important Classes of Random Matrices:
     • Gaussian
     • Wigner
     • Wishart
     • Haar matrices
     We also collect a number of results that hold for arbitrary matrix sizes.
  4. Wireless Channel Model:
     $y = \sqrt{\rho}\, G x + w, \qquad G = H D^{1/2}$
     where $\rho$ is the transmit signal-to-noise ratio, $G \in \mathbb{C}^{M \times K}$ is the channel gain matrix in the uplink scenario, $x \in \mathbb{C}^{K \times 1}$ is the transmitted symbol vector of the $K$ users, and $w \in \mathbb{C}^{M \times 1}$ is the noise vector. $D \in \mathbb{C}^{K \times K}$ is the diagonal large-scale fading matrix
     $D = \mathrm{diag}(\beta_1, \ldots, \beta_K), \qquad \beta_k = \frac{z_k}{(r_k / r_h)^n}$
     where $z_k$ is a lognormally distributed shadow-fading variable, $r_k$ is the distance of the $k$-th user from the base station, $r_h$ is the reference ("cell-hole") distance, and $n$ is the path-loss exponent. The users in a given cell are distributed as a Poisson point process.
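The uplink model above (with a $\sqrt{\rho}$ scaling on the transmit SNR, an assumption of this sketch) can be simulated directly; all parameter values below are hypothetical illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 8          # base-station antennas, single-antenna users
rho = 10.0            # transmit SNR
n_exp = 3.8           # path-loss exponent
r_h = 100.0           # reference ("cell-hole") distance

# Large-scale fading: beta_k = z_k / (r_k / r_h)^n, z_k lognormal shadowing
r_k = rng.uniform(r_h, 1000.0, size=K)          # user distances from the BS
z_k = rng.lognormal(mean=0.0, sigma=1.0, size=K)
beta = z_k / (r_k / r_h) ** n_exp
D = np.diag(beta)

# Fast fading: i.i.d. CN(0, 1) entries
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
G = H @ np.sqrt(D)    # composite channel G = H D^(1/2)

x = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
w = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = np.sqrt(rho) * G @ x + w    # received M x 1 vector
```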
  5. $H$ is the fast-fading channel matrix. The primary assumptions of the mathematical model are as follows:
     $H \in \mathbb{C}^{M \times K}$ with i.i.d. $\mathcal{CN}(0, 1)$ entries, so that $\tfrac{1}{M} H^H H \to I_K$,
     $w \sim \mathcal{CN}(0, \sigma_w^2 I), \qquad E[\|x\|^2] = 1$
     Capacity Formulation (Shannon channel capacity):
     $C = E[\log(1 + \gamma X)] = E[\log(1 + \mathrm{SNR})]$
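The ergodic capacity $C = E[\log(1 + \mathrm{SNR})]$ can be estimated by Monte Carlo. A minimal sketch assuming Rayleigh fading, so the instantaneous SNR factor $X = |h|^2$ is exponentially distributed; the average SNR $\gamma = 10$ and the base-2 logarithm are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
gamma = 10.0          # hypothetical average SNR
trials = 100_000

# Rayleigh fading: |h|^2 ~ Exp(1) for h ~ CN(0, 1)
X = rng.exponential(1.0, trials)
C = np.mean(np.log2(1 + gamma * X))   # ergodic capacity, bits/channel use
```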
  6. Gamma Distribution. Density function:
     $f(\rho) = \frac{\rho^{\alpha-1} e^{-\rho/\beta}}{\beta^{\alpha} \Gamma(\alpha)}, \qquad 0 < \rho < \infty$
     $\mathrm{mean} = \alpha\beta, \quad \mathrm{variance} = \alpha\beta^2, \quad \mathrm{SD} = \sqrt{\alpha\beta^2}$
     Special cases of the Gamma distribution:
     • Chi-squared distribution: $\alpha = d/2$, $\beta = 2$ ($d$ = degrees of freedom)
     • Exponential distribution: $\alpha = 1$
     Role of the Singular Values:
     $\frac{1}{M} I(x; y \mid H) = \frac{1}{M} \log\det\!\left(I + \rho H H^H\right) = \frac{1}{M} \sum_{i=1}^{M} \log\!\left(1 + \rho \lambda_i(H H^H)\right) = \int_0^{\infty} \log(1 + \rho x)\, dF_{H H^H}(x)$
     with transmitted signal-to-noise ratio $\rho = \frac{M\, E[\|x\|^2]}{K\, E[\|w\|^2]}$.
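The identity between the log-determinant and the sum over eigenvalues of $H H^H$ can be checked numerically; a sketch with arbitrary sizes and SNR, using `numpy`:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, rho = 32, 8, 5.0   # hypothetical sizes and SNR

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
HHh = H @ H.conj().T

# Direct evaluation of (1/M) log det(I + rho * H H^H)
_, logdet = np.linalg.slogdet(np.eye(M) + rho * HHh)
I_det = logdet / M

# Same quantity via the eigenvalues lambda_i of H H^H
lam = np.linalg.eigvalsh(HHh)
I_eig = np.sum(np.log(1 + rho * lam)) / M
```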
  7. Some important properties of random vectors and random matrices:
     • Let $p \triangleq [p_1, p_2, \ldots, p_n]^T$ and $q \triangleq [q_1, q_2, \ldots, q_n]^T$ be mutually independent $n \times 1$ vectors whose elements are i.i.d. zero-mean random variables (RVs) with $E[|p_i|^2] = \sigma_p^2$ and $E[|q_i|^2] = \sigma_q^2$, $i = 1, 2, \ldots, n$. Then, from the law of large numbers,
     $\frac{1}{n} p^H p \to \sigma_p^2 \quad \text{and} \quad \frac{1}{n} p^H q \to 0 \quad \text{as } n \to \infty$
     From the Lindeberg-Lévy central limit theorem,
     $\frac{1}{\sqrt{n}} p^H q \xrightarrow{d} \mathcal{CN}(0, \sigma_p^2 \sigma_q^2) \quad \text{as } n \to \infty$
     where $\xrightarrow{d}$ denotes convergence in distribution.
     • Gaussian Matrix: A standard real/complex Gaussian $m \times n$ matrix $H$ has i.i.d. real/complex zero-mean Gaussian entries with identical variance $\sigma^2 = \frac{1}{m}$. The p.d.f. of a complex Gaussian matrix with i.i.d. zero-mean Gaussian entries with variance $\sigma^2$ is
     $(\pi\sigma^2)^{-mn} \exp\!\left(-\frac{\mathrm{tr}(H H^H)}{\sigma^2}\right)$
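A quick sketch of the law-of-large-numbers limits above; the dimension and variances are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000                   # hypothetical large dimension
sigma_p, sigma_q = 1.0, 2.0

def cn(n, sigma, rng):
    # i.i.d. CN(0, sigma^2) samples, so that E|x_i|^2 = sigma^2
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * sigma / np.sqrt(2)

p, q = cn(n, sigma_p, rng), cn(n, sigma_q, rng)

lln_pp = (p.conj() @ p).real / n   # -> sigma_p^2
lln_pq = (p.conj() @ q) / n        # -> 0 (independent vectors)
```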
  8. Wigner Matrices: Let $W$ be an $n \times n$ symmetric matrix whose (diagonal and upper-triangle) entries are i.i.d. zero-mean Gaussian with unit variance. Then, its p.d.f. is
     $2^{-n/2}\, \pi^{-n^2/2}\, \exp\!\left(-\frac{\mathrm{tr}(W^2)}{2}\right)$
     while the joint p.d.f. of its ordered eigenvalues $\lambda_1 \ge \ldots \ge \lambda_n$ is
     $\frac{1}{(2\pi)^{n/2}}\, e^{-\frac{1}{2}\sum_{i=1}^{n} \lambda_i^2} \prod_{i=1}^{n-1} \frac{1}{i!} \prod_{i<j}^{n} (\lambda_i - \lambda_j)^2$
     Wishart Matrices: The $m \times m$ random matrix $A = H H^{\dagger}$ is a (central) real/complex Wishart matrix with $n$ degrees of freedom and covariance matrix $\Sigma$ ($A \sim \mathcal{W}_m(n, \Sigma)$) if the columns of the $m \times n$ matrix $H$ are zero-mean independent real/complex Gaussian vectors with covariance matrix $\Sigma$. The p.d.f. of a complex Wishart matrix $A \sim \mathcal{W}_m(n, \Sigma)$ for $n \ge m$ is
     $f_A(B) = \frac{\pi^{-m(m-1)/2}}{\det^{n}\Sigma \prod_{i=1}^{m} (n-i)!}\, \exp\!\left(-\mathrm{tr}(\Sigma^{-1} B)\right) \det B^{\,n-m}$
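The Wigner construction can be sketched directly (the size is an arbitrary choice): fill the diagonal and upper triangle with i.i.d. N(0, 1) entries and mirror them, giving a symmetric matrix with real eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6   # hypothetical size

# Wigner matrix: i.i.d. N(0, 1) on the diagonal and upper triangle, symmetric
A = rng.standard_normal((n, n))
W = np.triu(A) + np.triu(A, 1).T   # mirror the strict upper triangle

sym = np.allclose(W, W.T)                        # symmetric by construction
real_spectrum = np.all(np.isreal(np.linalg.eigvalsh(W)))  # real eigenvalues
```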
  9. Some useful properties of the Wishart matrix:
     • $E[\mathrm{tr}(W)] = mn$ for $W \sim \mathcal{W}_m(n, I)$
     • $E[\mathrm{tr}(W^2)] = mn(m + n)$
     • $E[\mathrm{tr}^2(W)] = mn(mn + 1)$
     For a central Wishart matrix $W \sim \mathcal{W}_m(n, I)$ with $n > m$,
     $E[\mathrm{tr}(W^{-1})] = \frac{m}{n - m}$
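The trace identities above lend themselves to a Monte Carlo check; a sketch with hypothetical sizes $m = 4$, $n = 8$ (so $E[\mathrm{tr}(W)] = 32$ and $E[\mathrm{tr}(W^{-1})] = 1$):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, trials = 4, 8, 20_000   # hypothetical sizes; n > m so W^-1 exists

tr_W, tr_Winv = 0.0, 0.0
for _ in range(trials):
    # Complex Wishart W ~ W_m(n, I): H has i.i.d. CN(0, 1) entries
    H = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    W = H @ H.conj().T
    tr_W += np.trace(W).real
    tr_Winv += np.trace(np.linalg.inv(W)).real

mean_tr_W = tr_W / trials        # theoretical value: m*n
mean_tr_Winv = tr_Winv / trials  # theoretical value: m/(n - m)
```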
  10. Other Applications:
      • Single-user matched filter
      • Decorrelator
      • MMSE
      • Optimum
      • Iterative nonlinear
  11. Conclusion:
      • Random matrix results have been used to characterize the fundamental limits of the various channels that arise in wireless communications.
      • Random matrix theory is very useful when matrix dimensions grow large: instead of carrying out element-wise multiplications, additions, and subtractions over a large number of channel realizations, many quantities of interest converge to their deterministic expected values.
  12. References:
      1. A. M. Tulino and S. Verdú, "Random matrix theory and wireless communications," Foundations and Trends in Communications and Information Theory, vol. 1, no. 1, pp. 1–182, Jun. 2004.
      2. H. Q. Ngo, E. G. Larsson, and T. L. Marzetta, "Energy and spectral efficiency of very large multiuser MIMO systems," IEEE Trans. Commun., vol. 61, no. 4, pp. 1436–1449, Apr. 2013.
      3. A. Goldsmith, Wireless Communications, Cambridge University Press, 2005.
      4. A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin, Bayesian Data Analysis, 3rd ed., Chapman & Hall/CRC Texts in Statistical Science, 2013.
      5. M. Matthaiou, M. R. McKay, P. J. Smith, and J. A. Nossek, "On the condition number distribution of complex Wishart matrices," IEEE Trans. Commun., vol. 58, no. 6, pp. 1705–1717, Jun. 2010.
      6. G. Stewart, Matrix Algorithms: Basic Decompositions. Philadelphia, PA: SIAM, 1998.