Capacity of Multi-antenna Gaussian Channels


  • American University of Science and Technology, Department of Computer and Communication Engineering

    Mohamad Wehbe, Hassan Harajli

    Capacity of Multi-antenna Gaussian Channels

    Presented to: Dr. Mustafa El Halabi

    Monday, 8th of June, 2015


  • Abstract

    In this project we will discuss the use of multiple transmitting and receiving antennas for single-user communication over an additive Gaussian channel without fading. We derive formulas for the capacity of the channel, and describe computational procedures to evaluate such formulas. The capacity has been derived using two different methods:

    1. Singular Value Decomposition
    2. Mutual Information


  • Outline

    Introduction

    Preliminaries

    Gaussian Channel with Fixed Transfer Function


  • Introduction

    We will consider a single-user Gaussian channel with multiple transmitting and receiving antennas. We will denote the number of transmitting antennas by $t$ and the number of receiving antennas by $r$. We will exclusively deal with the linear model in which the received vector $y \in \mathbb{C}^r$ depends on the transmitted vector $x \in \mathbb{C}^t$ via

    $y = Hx + n$

    where $H$ is an $r \times t$ complex matrix and $n$ is zero-mean complex Gaussian noise with independent, equal-variance real and imaginary parts; we assume $E[nn^{\dagger}] = I_r$.


  • Introduction

    The matrix $H$ can be:

    1. Deterministic
    2. Random, changing over time and chosen according to a probability distribution
    3. Random, but fixed once chosen

    The main focus of this paper is on the last two of these cases.


  • Preliminaries

    For any $z \in \mathbb{C}^n$ and $A \in \mathbb{C}^{n \times m}$, define

    $\hat{z} = \begin{bmatrix} \operatorname{Re}(z) \\ \operatorname{Im}(z) \end{bmatrix}, \qquad \hat{A} = \begin{bmatrix} \operatorname{Re}(A) & -\operatorname{Im}(A) \\ \operatorname{Im}(A) & \operatorname{Re}(A) \end{bmatrix}$
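    As a quick numerical check of these definitions (a minimal numpy sketch; the helper names hat_vec and hat_mat are ours, not from the paper), the hat map turns complex multiplication into real multiplication, i.e. $\widehat{Az} = \hat{A}\hat{z}$:

      import numpy as np

      def hat_vec(z):
          # z-hat: stack the real part over the imaginary part of a complex vector
          return np.concatenate([z.real, z.imag])

      def hat_mat(A):
          # A-hat: real 2n x 2m representation of a complex n x m matrix
          return np.block([[A.real, -A.imag],
                           [A.imag,  A.real]])

      rng = np.random.default_rng(0)
      A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
      z = rng.normal(size=2) + 1j * rng.normal(size=2)

      # hat(A) @ hat(z) equals hat(A @ z)
      assert np.allclose(hat_mat(A) @ hat_vec(z), hat_vec(A @ z))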

  • Preliminaries

    The importance of circularly symmetric complex Gaussians is due to the following lemma: circularly symmetric Gaussians are entropy maximizers.

    Lemma 2: Let $x$ be a zero-mean complex random vector with $E[xx^{\dagger}] = Q$. Then the entropy of $x$ satisfies $H(x) \le \log\det(\pi e Q)$, with equality if and only if $x$ is a circularly symmetric complex Gaussian.

    Lemma 3: If $x$ is a circularly symmetric complex Gaussian, then so is $y = Ax$. Let $Q = E[xx^{\dagger}]$.
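    Lemma 2 gives a closed form for the entropy of a circularly symmetric complex Gaussian. As a small illustrative computation (the test covariance $Q$ is our own choice):

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
      Q = A @ A.conj().T               # a valid covariance: non-negative definite

      # Entropy (in nats) of a circularly symmetric complex Gaussian with E[xx'] = Q
      H_x = np.log(np.linalg.det(np.pi * np.e * Q)).real
      print(H_x)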


  • Preliminaries

    $E[\hat{y}\hat{y}^{T}] = \hat{A}\,E[\hat{x}\hat{x}^{T}]\,\hat{A}^{T} = \tfrac{1}{2}\hat{A}\hat{Q}\hat{A}^{T} = \tfrac{1}{2}\hat{K}$, where $K = AQA^{\dagger}$.

    Lemma 4: If $y$ and $x$ are independent circularly symmetric complex Gaussians, then

    $z = x + y$

    is a circularly symmetric complex Gaussian.


  • Gaussian Channel with Fixed Transfer Function

    1. Using Singular Value Decomposition

    We will maximize the average mutual information $I(x; y)$ between the input and the output of the channel over the choice of the distribution of $x$:

    $y = Hx + n$

    where $n \sim \mathcal{CN}(0, I_r)$. In the case of 2 antennas, the matrix $H$ and the input $x$ take the form

    $H = \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$

    Using the singular value decomposition, any matrix $H \in \mathbb{C}^{r \times t}$ can be written as

    $H = UDV^{\dagger}$
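    A small numpy sketch of this factorization (the matrix sizes and values are our own test data):

      import numpy as np

      rng = np.random.default_rng(2)
      r, t = 4, 3
      H = rng.normal(size=(r, t)) + 1j * rng.normal(size=(r, t))

      # numpy returns H = U @ diag(d) @ Vh, where Vh is V-dagger
      U, d, Vh = np.linalg.svd(H, full_matrices=False)
      assert np.allclose(U @ np.diag(d) @ Vh, H)

      # The singular values d_i are the non-negative square roots of the
      # eigenvalues of H-dagger H (eigvalsh returns them in ascending order)
      eigs = np.linalg.eigvalsh(H.conj().T @ H)
      assert np.allclose(np.sort(d**2), eigs)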


  • Where $U$ and $V$ are unitary and $D$ is diagonal, its entries being the non-negative square roots of the eigenvalues of $HH^{\dagger}$.

    - The columns of $U$ are eigenvectors of $HH^{\dagger}$.
    - The columns of $V$ are eigenvectors of $H^{\dagger}H$.

    Then

    $y = UDV^{\dagger}x + n$

    Let $\tilde{y} = U^{\dagger}y$, $\tilde{x} = V^{\dagger}x$, and $\tilde{n} = U^{\dagger}n$. Thus, the original channel is equivalent to the channel

    $\tilde{y} = D\tilde{x} + \tilde{n}$
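    A numpy sketch of this equivalence (a toy check with our own random $H$; the tilde quantities are written y_t, x_t, n_t):

      import numpy as np

      rng = np.random.default_rng(3)
      r, t = 3, 3
      H = rng.normal(size=(r, t)) + 1j * rng.normal(size=(r, t))
      U, d, Vh = np.linalg.svd(H)

      x = rng.normal(size=t) + 1j * rng.normal(size=t)
      n = (rng.normal(size=r) + 1j * rng.normal(size=r)) / np.sqrt(2)
      y = H @ x + n

      # Rotate into SVD coordinates: y~ = U'y, x~ = V'x, n~ = U'n
      y_t = U.conj().T @ y
      x_t = Vh @ x                  # Vh is already V-dagger
      n_t = U.conj().T @ n

      # Parallel scalar channels: y~_i = sqrt(lambda_i) x~_i + n~_i
      assert np.allclose(y_t, d * x_t + n_t)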


  • Since $H$ is of rank at most $\min(r, t)$, we can write the instantaneous value of $\tilde{y}$ as:

    $\tilde{y}_i = \lambda_i^{1/2}\tilde{x}_i + \tilde{n}_i, \qquad 1 \le i \le \min(r, t)$

    To maximize the mutual information, we need to choose $(\tilde{x}_i : 1 \le i \le \min(r, t))$ to be independent, with each $\tilde{x}_i$ having independent Gaussian, zero-mean real and imaginary parts. The variances need to be chosen via water-filling as

    $E\!\left[\operatorname{Re}(\tilde{x}_i)^2\right] = E\!\left[\operatorname{Im}(\tilde{x}_i)^2\right] = \tfrac{1}{2}\left(\mu - \lambda_i^{-1}\right)^+$

    where $\mu$ is chosen to meet the power constraint. The power $P$ and capacity $C$ can thus be parametrized as:

    $P(\mu) = \sum_i \left(\mu - \lambda_i^{-1}\right)^+$
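    A minimal water-filling sketch (our own implementation, via bisection on $\mu$; the slides only state the parametrization):

      import numpy as np

      def water_fill(lam, P, iters=100):
          # Find mu with sum_i (mu - 1/lambda_i)^+ = P; return mu and the powers
          lam = np.asarray(lam, dtype=float)
          lo, hi = 0.0, P + (1.0 / lam).max()
          for _ in range(iters):
              mu = 0.5 * (lo + hi)
              if np.maximum(mu - 1.0 / lam, 0.0).sum() > P:
                  hi = mu
              else:
                  lo = mu
          return mu, np.maximum(mu - 1.0 / lam, 0.0)

      mu, p = water_fill([2.0, 1.0, 0.1], P=1.0)
      print(mu, p, p.sum())   # p sums to ~1.0; weak modes may get zero power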


  • So the capacity is calculated as follows. Each parallel subchannel is a scalar Gaussian channel contributing

    $C = \log(1 + \mathrm{SNR})$

    where the SNR of subchannel $i$ is $\lambda_i$ times the power allocated to $\tilde{x}_i$ (the noise $\tilde{n}_i$ has unit variance). Splitting the total power $P$ equally across the $\min(r, t)$ subchannels gives

    $C = \sum_{i=1}^{\min(r,t)} \log_2\!\left(1 + \frac{\lambda_i P}{\min(r, t)}\right)$

    while the water-filling allocation gives

    $C(\mu) = \sum_{i=1}^{\min(r,t)} \log_2\!\left(1 + \lambda_i\left(\mu - \lambda_i^{-1}\right)^+\right) = \sum_{i=1}^{\min(r,t)} \left(\log_2(\mu\lambda_i)\right)^+$
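    Putting the pieces together (a sketch reusing the water_fill helper defined above; $H$ and $P$ are arbitrary test values):

      import numpy as np

      def mimo_capacity(H, P):
          # Capacity in bits/use of y = Hx + n, n ~ CN(0, I), tr(Q) <= P
          lam = np.linalg.svd(H, compute_uv=False) ** 2   # eigenvalues of H'H
          lam = lam[lam > 1e-12]                          # drop null modes
          mu, powers = water_fill(lam, P)
          return np.sum(np.log2(1.0 + lam * powers))

      rng = np.random.default_rng(4)
      H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
      print(mimo_capacity(H, P=10.0))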


  • Mutual Information

    The mutual information $I(x; y)$ can be written as:

    $I(x; y) = H(y) - H(y \mid x) = H(y) - H(n)$

    If $x$ is zero-mean with covariance $E[xx^{\dagger}] = Q$, then $y$ is zero-mean with covariance $E[yy^{\dagger}] = HQH^{\dagger} + I_r$, and by Lemma 2, among such $y$ the entropy is largest when $y$ is a circularly symmetric complex Gaussian, which is the case when $x$ is a circularly symmetric complex Gaussian. In this case the mutual information is given by:

    $I(x; y) = \log\det(I_r + HQH^{\dagger}) = \log\det(I_t + QH^{\dagger}H)$

    and it only remains to choose $Q$ to maximize this quantity subject to the constraints $\operatorname{tr}(Q) \le P$ and that $Q$ is non-negative definite.
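    The second equality is the determinant identity $\det(I + AB) = \det(I + BA)$, which is easy to verify numerically (a toy check with our own random matrices):

      import numpy as np

      rng = np.random.default_rng(5)
      r, t = 4, 2
      H = rng.normal(size=(r, t)) + 1j * rng.normal(size=(r, t))
      Q = rng.normal(size=(t, t)) + 1j * rng.normal(size=(t, t))
      Q = Q @ Q.conj().T                # make Q non-negative definite

      lhs = np.linalg.det(np.eye(r) + H @ Q @ H.conj().T)
      rhs = np.linalg.det(np.eye(t) + Q @ H.conj().T @ H)
      assert np.allclose(lhs, rhs)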


  • Since $H^{\dagger}H = U\Lambda U^{\dagger}$, with unitary $U$ and non-negative diagonal $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_t)$:

    $\det(I_r + HQH^{\dagger}) = \det(I_t + \Lambda^{1/2}U^{\dagger}QU\Lambda^{1/2})$

    Observe that $\tilde{Q} = U^{\dagger}QU$ is non-negative definite when and only when $Q$ is, and that $\operatorname{tr}(\tilde{Q}) = \operatorname{tr}(Q)$; thus the maximization over $Q$ can be carried out equally well over $\tilde{Q}$.

    For any non-negative definite matrix $A$, $\det(A) \le \prod_i A_{ii}$; thus

    $\det(I_t + \Lambda^{1/2}\tilde{Q}\Lambda^{1/2}) \le \prod_i \left(1 + \tilde{Q}_{ii}\lambda_i\right)$

    with equality when $\tilde{Q}$ is diagonal. Thus we see that the maximizing $\tilde{Q}$ is diagonal, and the optimal diagonal entries can be found via water-filling to be

    $\tilde{Q}_{ii} = \left(\mu - \lambda_i^{-1}\right)^+, \qquad i = 1, \ldots, t$

    where $\mu$ is chosen to satisfy

    $\sum_i \tilde{Q}_{ii} = P$
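    A sketch that builds the optimal input covariance this way and evaluates the resulting mutual information (again reusing the water_fill helper sketched earlier):

      import numpy as np

      rng = np.random.default_rng(6)
      r, t, P = 4, 3, 5.0
      H = rng.normal(size=(r, t)) + 1j * rng.normal(size=(r, t))

      # H'H = U Lambda U'
      lam, U = np.linalg.eigh(H.conj().T @ H)
      mu, q = water_fill(lam, P)          # optimal diagonal entries of Q~

      Q = U @ np.diag(q) @ U.conj().T     # rotate back: Q = U Q~ U'
      assert np.isclose(np.trace(Q).real, P)

      C = np.log2(np.linalg.det(np.eye(r) + H @ Q @ H.conj().T)).real
      print(C)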


  • The corresponding maximum mutual information is given by

    $C = \sum_i \left(\log_2(\mu\lambda_i)\right)^+$
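    which is the same water-filling capacity obtained from the SVD approach. Continuing the previous sketch, the closed form agrees with the log-det expression:

      import numpy as np

      # Continuing the previous sketch (mu, lam, C still in scope)
      C_closed = np.sum(np.maximum(np.log2(mu * lam), 0.0))
      assert np.isclose(C, C_closed)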
