
On the Capacity of the Block-Memoryless Phase-Noise Channel

Giuseppe Durisi, Senior Member, IEEE

Abstract—Bounds are presented on the capacity of the block-memoryless phase-noise channel. The bounds capture the first two terms in the asymptotic expansion of capacity for SNR going to infinity and turn out to be tight for a large range of SNR values of practical interest. Through these bounds, the capacity dependency on the coherence time of the phase-noise process is determined.

This work has been partly supported by the Swedish Agency for Innovation Systems (VINNOVA), within the project P36604-1 MAGIC.

G. Durisi is with the Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden (e-mail: [email protected]).

I. INTRODUCTION

The AWGN channel with phase noise is a widely used model to capture imperfect carrier-phase tracking in wireless communications and certain impairments in fiber-optic communications [1]. The surging data-rate demands in microwave backhaul links, which can be accurately modeled as AWGN channels impaired by phase noise, have recently motivated a renewed interest in characterizing the capacity of phase-noise channels.

When the phase-noise process varies slowly, i.e., when its coherence time is much larger than the inverse of the signal bandwidth, the phase-noise samples in the discretized channel input-output (I/O) relation are correlated. A simple way to model this correlation is to assume that the phase-noise samples remain constant over a block of N ≥ 1 samples before changing to an independent realization [2]. The resulting channel model is commonly referred to as the block-memoryless phase-noise channel. This model is attractive because correlation is captured by a single parameter, namely the coherence time N of the phase-noise process.

To date, the capacity of the block-memoryless phase-noise channel is not known in closed form. Nuriyev and Anastasopoulos [2] proved that the capacity-achieving input distribution exhibits a circular symmetry and that the resulting input-amplitude distribution is discrete with an infinite number of mass points. They also showed that in the low-SNR regime one can approximate capacity accurately by using only a few mass points, whose positions can be found by numerically solving a nonconvex optimization problem. In the medium- and high-SNR regimes (SNR above 15 dB), however, this numerical approach is unfeasible due to the large number of mass points needed to approximate capacity accurately.

Contributions: In this letter, we derive bounds on the capacity of the block-memoryless phase-noise channel that are tight over a large range of SNR values of practical interest. Specifically, the bounds allow us to identify the first two terms in the asymptotic expansion of capacity for SNR going to infinity, and, hence, to characterize capacity accurately at high SNR.

Notation: Uppercase boldface letters denote matrices and lowercase boldface letters designate vectors. The N × N identity matrix is denoted by I_N; Γ(·) stands for the Gamma function and ψ(·) is Euler's digamma function. For two functions f(x) and g(x), the notation f(x) = O(g(x)), x → ∞, means that lim sup_{x→∞} |f(x)/g(x)| < ∞, and f(x) = o(g(x)), x → ∞, means that lim_{x→∞} f(x)/g(x) = 0. We denote expectation by E[·], and use the notation E_s[·] to stress that expectation is taken with respect to the random variable s. With CN(0, R) we designate the distribution of a circularly-symmetric complex Gaussian random vector with covariance matrix R. We say that a random variable r has Gamma distribution with parameters α > 0 and β > 0, and write r ∼ Gamma(α, β), if its probability density function (pdf) q_r(r) is given by

    q_r(r) = r^{α−1} e^{−r/β} / (β^α Γ(α)),   r ≥ 0.   (1)

Finally, log(·) indicates the natural logarithm.

II. SYSTEM MODEL

We consider a discrete-time AWGN channel impaired by phase noise. The phase-noise process is assumed to stay constant over a block of N samples and to change independently from block to block. Within one block, the channel I/O relation is given by

    y_k = e^{jθ} x_k + w_k,   k = 1, …, N.

Here, θ denotes the phase noise, which is assumed uniformly distributed on [0, 2π). Stacking the symbols transmitted within a block in a vector x = [x_1 ··· x_N] and, similarly, stacking the noise and output signals in the corresponding vectors w and y enables us to write the I/O relation in vector form as follows:

    y = e^{jθ} x + w.   (2)

We assume that w ∼ CN(0, I_N) is independent of θ, and that x is independent of θ and w. We focus on the scenario where coding is performed over multiple blocks. For this scenario, the relevant performance metric is the channel ergodic capacity, which, as a consequence of the block-memoryless assumption, is given by

    C(ρ) = (1/N) sup_{Q_x} I(x; y).   (3)

The supremum in (3) is over the set of input probability distributions Q_x that satisfy the average-power constraint

    E[||x||^2] ≤ Nρ.   (4)

Because the noise variance is normalized, ρ is equal to the SNR.
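The I/O relation (2) is simple to simulate. The following Monte Carlo sketch is illustrative and not part of the letter; the block length N, the SNR ρ, and the number of trials are arbitrary choices. It verifies that, with the normalized noise variance, E[||y||^2] = E[||x||^2] + N, a fact used later in Section III-C.

```python
# Illustrative Monte Carlo sketch of the I/O relation (2):
# y = e^{j*theta} x + w, with w ~ CN(0, I_N) and theta uniform on [0, 2*pi).
import cmath
import math
import random

random.seed(0)

def sample_block(x):
    """One realization of y = e^{j*theta} x + w for a block of length N."""
    phase = cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
    # CN(0,1) noise: real and imaginary parts are i.i.d. N(0, 1/2)
    return [phase * xk
            + complex(random.gauss(0.0, math.sqrt(0.5)),
                      random.gauss(0.0, math.sqrt(0.5))) for xk in x]

N, rho, trials = 4, 10.0, 50_000
# deterministic input with ||x||^2 = N*rho: power constraint (4) met with equality
x = [complex(math.sqrt(rho), 0.0)] * N

avg_out_power = sum(sum(abs(yk) ** 2 for yk in sample_block(x))
                    for _ in range(trials)) / trials
print(avg_out_power)  # close to N*rho + N
```

The empirical output power concentrates around N(ρ + 1), consistent with the normalization under which ρ equals the SNR.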


No closed-form expression for C(ρ) is available to date. For the case N = 1, Lapidoth [3] determined the first two terms in the asymptotic expansion of C(ρ) for ρ → ∞. Specifically, he showed that

    C(ρ) = (1/2) log(ρ) − (1/2) log(2) + o(1),   ρ → ∞.   (5)

Non-asymptotic capacity bounds for N = 1 are presented in [4]. For the general case N ≥ 1, the following asymptotic capacity expansion is available [2]:

    C(ρ) = (1 − 1/(2N)) log(ρ) + O(1),   ρ → ∞.   (6)

Note that the asymptotic capacity expansion in (6) (for the general case N ≥ 1) is less accurate than the one in (5) (for the special case N = 1) because in (6) the second term in the expansion of C(ρ) for ρ → ∞ is not determined explicitly.

In this letter, we present non-asymptotic bounds on C(ρ) for the general case N ≥ 1. The bounds turn out to be tight for a large range of SNR values of practical interest. Furthermore, they allow us to refine (6) and determine the second term in the asymptotic expansion of C(ρ) for ρ → ∞. Specifically, we establish the following result.

Theorem 1: The capacity of the channel (2) is given by

    C(ρ) = (1 − 1/(2N)) log(ρ) + c_N + o(1),   ρ → ∞   (7)

where

    c_N ≜ (1 − 1/(2N)) log(2N/(2N − 1)) + (1/N) [log(Γ(N − 1/2)/Γ(N)) − (1/2) log(4π)].   (8)

Proof: See Section III-D.

Note that by setting N = 1 in (7) and (8), and recalling that Γ(1/2) = √π, one recovers (5).

The rest of this letter is organized as follows: in Section III-A we provide some intuition on the structure of the capacity-achieving input distribution at high SNR. Then, we use this intuition to construct a capacity lower bound (Section III-B) and an upper bound (Section III-C) that agree up to a o(1) term and, hence, allow us to establish Theorem 1. In Section IV, we present an additional capacity lower bound that, although not asymptotically tight in the sense of (7), yields (together with the upper bound in Section III-C) an accurate capacity characterization for a large range of SNR values of practical interest.

III. BOUNDING CAPACITY AT HIGH SNR

A. Geometric Intuition

We start by providing some geometric intuition that sheds light on the way the capacity bounds used to establish Theorem 1 are constructed. Let the capacity pre-log χ be defined as the asymptotic ratio between capacity and the logarithm of SNR as SNR grows to infinity, i.e.,

    χ = lim_{ρ→∞} C(ρ)/log(ρ).

The capacity pre-log can be interpreted as the fraction of signal-space dimensions available for communication [5]. We shall next heuristically determine this fraction for the block-memoryless phase-noise channel in (2). The multiplication of the input vector x ∈ C^N by the phase-noise term e^{jθ} makes one of the 2N real parameters characterizing x not recoverable (from e^{jθ} x) at the receiver. This means that, even in the absence of the additive noise w, the received signal carries only 2N − 1 real parameters describing x. Hence, the fraction of signal-space dimensions available for communication is (2N − 1)/(2N) = 1 − 1/(2N), in agreement with (6) and (7).

On the basis of this observation, we choose the following distribution to evaluate the mutual information on the right-hand side (RHS) of (3) and, hence, obtain a capacity lower bound: we take x isotropically distributed, to exploit the circular symmetry of the I/O relation (2), and ||x||^2 distributed as the sum of the squares of 2N − 1 independent real Gaussian random variables. This results in a Gamma distribution. To obtain a capacity upper bound that matches the lower bound (up to a o(1) term), we use the duality approach, a technique introduced in [6] to characterize the capacity of fading channels under no a priori channel knowledge at the transmitter and the receiver. The essence of duality is that it allows one to bypass the supremization in (3) and obtain tight capacity upper bounds by choosing an appropriate probability distribution on the output y. As distribution we choose the one induced on the noiseless channel output e^{jθ} x by the probability distribution on x used to obtain the lower bound. The approach just outlined generalizes to N ≥ 1 the proof technique used in [3] for the case N = 1.

B. A Lower Bound on Capacity

To obtain a capacity lower bound, we evaluate the mutual information on the RHS of (3) for the probability distribution introduced in Section III-A. Specifically, let x = ||x|| · v_x, where v_x = x/||x||. We take v_x uniformly distributed on the unit sphere in C^N. Furthermore, we choose ||x||^2 = Nρ s/(N − 1/2), where s ∼ Gamma(N − 1/2, 1) is independent of v_x. As E[s] = N − 1/2, the average-power constraint (4) is satisfied with equality. Next, we use that, by definition, I(x; y) = h(y) − h(y | x) and bound the two differential entropy terms separately. For the first term, we proceed as follows:

    h(y) ≥(a) h(y | w) =(b) h(e^{jθ} x)
         =(c) h(||x||^2) + log(π^N / Γ(N)) + (N − 1) E[log ||x||^2]
         =(d) N log(2Nρ/(2N − 1)) + h(s) + log(π^N / Γ(N)) + (N − 1) E[log(s)]
         =(e) N log(2Nρ/(2N − 1)) + log(Γ(N − 1/2)/Γ(N)) + (N − 1/2) + log(π^N) + (1/2) ψ(N − 1/2).   (9)

Here, (a) follows because conditioning reduces entropy [7, Sec. 8.6]; in (b) we used that differential entropy is invariant to translations [7, Thm. 8.6.3] and that w and (x, θ) are independent; in (c) we used the change-of-variable lemma to compute h(e^{jθ} x) in polar coordinates [6, Lem. 6.17 and Lem. 6.15]; we also exploited that e^{jθ} x is isotropically distributed; (d) follows


because ||x||^2 = Nρ s/(N − 1/2) and from [7, Eq. (8.66)]; finally, (e) follows because, for z ∼ Gamma(α, 1),

    E[log(z)] = ψ(α)
    h(z) = (1 − α) ψ(α) + α + log Γ(α).

We next bound h(y | x). Let {w̃_l}, l = 1, …, N, be independent and identically distributed CN(0, 1) random variables. Furthermore, let ỹ be an N-dimensional random vector with entries ỹ_1 = e^{jθ} ||x|| + w̃_1 and ỹ_l = w̃_l, l = 2, …, N. As y and ỹ are related by a unitary transformation, we have that

    h(y | x) = h(ỹ | x) = log(πe)^{N−1} + h(ỹ_1 | ||x||).   (10)

Because the phase of w̃_1 is uniformly distributed on [0, 2π), the random variable ỹ_1 = e^{jθ} ||x|| + w̃_1 has the same distribution as ŷ_1 = e^{jθ} [||x|| + ŵ_1], where ŵ_1 ∼ CN(0, 1). Let now θ̂_1 denote the phase of ŷ_1. Then,

    h(ỹ_1 | ||x||) = h(ŷ_1 | ||x||)
                   =(a) h(θ̂_1 | |ŷ_1|^2, ||x||) − log(2) + h(|ŷ_1|^2 | ||x||)
                   =(b) log(π) + h(|ŷ_1|^2 | ||x||)   (11)
                   ≤(c) log(π) + (1/2) E_s[log(2πe (1 + 4Nρ s/(2N − 1)))].   (12)

Here, (a) follows from [6, Lem. 6.16]; in (b) we used that θ is uniformly distributed on [0, 2π) and, hence, θ̂_1 is uniformly distributed on [0, 2π) as well, and independent of |ŷ_1|^2 and ||x||; finally, (c) follows because |ŷ_1|^2 has variance 1 + 2||x||^2 given ||x||, and because the Gaussian distribution maximizes differential entropy under a variance constraint [7, Thm. 8.6.5]. Substituting (12) into (10), subtracting (10) from (9), and then dividing by N, we obtain C(ρ) ≥ L_1(ρ), where

    L_1(ρ) ≜ log(2Nρ/(2N − 1)) + (1/N) log(Γ(N − 1/2)/Γ(N))
           + (1/(2N)) [ψ(N − 1/2) − log(2π) − E_s[log(1 + 4Nρ s/(2N − 1))]]   (13)

with s ∼ Gamma(N − 1/2, 1).

C. An Upper Bound on Capacity

Let q_y(y) denote an arbitrary pdf on y. By duality [6, Thm. 5.1], for every input probability distribution Q_x we have that

    I(x; y) ≤ −E_y[log(q_y(y))] − h(y | x).   (14)

The expectation on the RHS of (14) is with respect to the probability distribution induced on y by Q_x through (2). Note also that for every input distribution Q_x satisfying (4)

    1 − E[||x||^2 + N] / (N(ρ + 1)) ≥ 0.   (15)

Fix now λ ≥ 0 and an arbitrary pdf q_y(y) on y. We can upper-bound C(ρ) in (3) using (14) and (15) as follows:

    C(ρ) ≤ (1/N) sup_{Q_x} { −E_y[log(q_y(y))] − h(y | x) + λ (1 − E[||x||^2 + N] / (N(ρ + 1))) }.   (16)

As in (3), the supremum is over the set of Q_x satisfying (4). Let y = √r · v_y, where r = ||y||^2 and v_y = y/||y||. To evaluate the first term on the RHS of (16), we take q_y(y) so that r ∼ Gamma(α, β), with α to be optimized later, and β = N(ρ + 1)/α. Furthermore, we take v_y uniformly distributed on the unit sphere in C^N and independent of r. Let q_r(r) denote the resulting pdf of r. By using polar coordinates,

    −E_y[log(q_y(y))] = −E_y[log q_y(√r · v_y)]
                      =(a) −E_y[log(q_r(r))] + log(π^N / Γ(N)) + (N − 1) E_y[log(r)]
                      =(b) (N − α) E_y[log(r)] + α E_y[r] / (N(ρ + 1))
                           + log(π^N Γ(α)/Γ(N)) + α log(N(ρ + 1)/α).   (17)

Here, in (a) we used that

    q_{r,v_y}(r, v_y) = q_y(√r · v_y) · r^{N−1}/2

as a consequence of the change-of-variable theorem, and that

    q_{r,v_y}(r, v_y) = q_r(r) · Γ(N)/(2π^N)

by construction; (b) follows from (1) with β = N(ρ + 1)/α. Let

    d_{λ,α} ≜ log(Γ(α)/Γ(N)) + λ − N + 1.

Substituting (17) and (10) into (16), and then using (11) and that E_y[r] = E[||y||^2] = E[||x||^2] + N, we obtain

    C(ρ) ≤ (1/N) sup_{Q_x} { α log(N(ρ + 1)/α) + d_{λ,α}
           + (N − α) E_y[log(r)] − h(|ŷ_1|^2 | ||x||)
           + (α − λ) (E[||x||^2] + N) / (N(ρ + 1)) }.   (18)

To eliminate the supremum over Q_x, we next bound the last three terms on the RHS of (18) (the only terms that depend on Q_x) as follows. Let

    g_{λ,α}(s, ρ) ≜ (N − α) E_{θ,w}[log(r) | ||x|| = √s]
                  − h(|ŷ_1|^2 | ||x|| = √s) + (α − λ) (s + N) / (N(ρ + 1)).

Then

    (N − α) E_y[log(r)] − h(|ŷ_1|^2 | ||x||) + (α − λ) (E[||x||^2] + N) / (N(ρ + 1)) ≤ max_{s≥0} g_{λ,α}(s, ρ).   (19)

Substituting (19) into (18), and minimizing the resulting bound over α > 0 and λ ≥ 0, we obtain C(ρ) ≤ U(ρ), where

    U(ρ) = min_{α>0} min_{λ≥0} (1/N) { α log(N(ρ + 1)/α) + d_{λ,α} + max_{s≥0} g_{λ,α}(s, ρ) }.   (20)
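The constant c_N in (8) and the lower bound L_1(ρ) in (13) lend themselves to a quick numerical sanity check. The sketch below is illustrative and not part of the letter: it verifies that c_1 = −(1/2) log 2, so that (7) recovers (5) for N = 1, and that L_1(ρ) approaches the asymptote (1 − 1/(2N)) log(ρ) + c_N at high SNR. The digamma function ψ is approximated by a central difference of math.lgamma, and E_s[·] is estimated by Monte Carlo; N, ρ, and the sample size are arbitrary choices.

```python
# Numerical sanity check (illustrative) of c_N in (8) and L_1(rho) in (13).
import math
import random

random.seed(1)

def digamma(x, h=1e-5):
    # central-difference approximation of psi(x) = d/dx log Gamma(x)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def c_const(N):
    # c_N from (8)
    return ((1.0 - 1.0 / (2 * N)) * math.log(2 * N / (2 * N - 1.0))
            + (math.lgamma(N - 0.5) - math.lgamma(N)
               - 0.5 * math.log(4 * math.pi)) / N)

def L1(rho, N, trials=200_000):
    # lower bound (13); E_s[.] estimated by sampling s ~ Gamma(N - 1/2, 1)
    es = sum(math.log(1.0 + 4 * N * rho
                      * random.gammavariate(N - 0.5, 1.0) / (2 * N - 1.0))
             for _ in range(trials)) / trials
    return (math.log(2 * N * rho / (2 * N - 1.0))
            + (math.lgamma(N - 0.5) - math.lgamma(N)) / N
            + (digamma(N - 0.5) - math.log(2 * math.pi) - es) / (2 * N))

print(c_const(1), -0.5 * math.log(2.0))  # the two values agree

N, rho = 2, 1e6
gap = (1 - 1 / (2 * N)) * math.log(rho) + c_const(N) - L1(rho, N)
print(gap)  # vanishes as rho grows, consistent with Theorem 1
```

The shrinking gap at large ρ mirrors the o(1) term in (7): the Gamma-distributed input norm already achieves the asymptote of Theorem 1.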


D. Proof of Theorem 1

To prove Theorem 1, we show that the lower bound L_1(ρ) in (13) and a refined version of the upper bound U(ρ) in (20) have the same asymptotic expansion as the one in (7). For L_1(ρ), it is sufficient to note that

    E_s[log(1 + 4Nρ s/(2N − 1))] = E[log(s)] + log(4Nρ/(2N − 1)) + o(1),   ρ → ∞   (21)

with E[log(s)] = ψ(N − 1/2), and to substitute (21) into (13).

To refine U(ρ), we exploit the fact that the high-SNR behavior of C(ρ) does not change if we constrain the input distribution to be supported outside a sphere of arbitrary radius. This result, known as the escape-to-infinity property of the capacity-achieving input distribution [6, Def. 4.11], is formalized in the following lemma (see [6], [5] for an intuitive interpretation).

Lemma 2: Fix an arbitrary s_0 > 0 and let K = {x ∈ C^N : ||x||^2 ≥ s_0}. Denote by C^(K)(ρ) the capacity of the channel (2) when the input signal is subject to the average-power constraint (4) and to the additional constraint that x ∈ K almost surely. Then

    C(ρ) = C^(K)(ρ) + o(1),   ρ → ∞

with C(ρ) given in (3).

Proof: The lemma follows directly from [8, Thm. 8] and [6, Thm. 4.12].

Fix s_0 > 0. By performing the same steps leading to (20), but accounting for the additional constraint that x ∈ K almost surely, and also setting α = N − 1/2 (as for the lower bound) and λ = α, we obtain C^(K)(ρ) ≤ U^(K)(ρ), where

    U^(K)(ρ) ≜ (1 − 1/(2N)) log(2N(ρ + 1)/(2N − 1))
             + (1/N) [log(Γ(N − 1/2)/Γ(N)) + 1/2 + max_{s≥s_0} g̃(s)]   (22)

with g̃(s) ≜ g_{λ,α}(s, ρ)|_{α=λ=N−1/2}. As lim_{s→∞} g̃(s) = −(1/2) log(4πe) (see [3, Eq. (9)] and [6, App. X]), we can make (22) arbitrarily close to (7) by choosing s_0 sufficiently large. This concludes the proof of Theorem 1.

IV. THE LOW- AND MEDIUM-SNR REGIMES

Differently from the upper bound U(ρ), the lower bound L_1(ρ) turns out not to be accurate for small ρ values. The lack of tightness of L_1(ρ) is due to the inequality (a) in (9), which is rather crude at low SNR. To avoid (a), one can take x ∼ CN(0, ρ I_N), which yields

    h(y) = N log(1 + ρ) + log(πe)^N.   (23)

Combining (23) with (10) and proceeding as in (12), one gets C(ρ) ≥ L_2(ρ), where

    L_2(ρ) ≜ log(1 + ρ) − (1/(2N)) {log(2πe) + E_s[log(1 + 2ρs)]}   (24)

with s ∼ Gamma(N, 1). We remark that the Gaussian input distribution yielding (24) was also used in [2] to establish (6). Also note that L_2(ρ) is not asymptotically tight in the sense of (7).
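The Gaussian-input bound L_2(ρ) in (24) involves only a one-dimensional expectation over s ∼ Gamma(N, 1) and is therefore easy to evaluate numerically. The following sketch is illustrative and not from the letter (the letter gives no numerical recipe); the SNR value and the sample size are arbitrary choices.

```python
# Monte Carlo evaluation of the Gaussian-input lower bound L_2(rho) in (24),
# with s ~ Gamma(N, 1) and E_s[.] estimated by sampling.
import math
import random

random.seed(2)

def L2(rho, N, trials=200_000):
    es = sum(math.log(1.0 + 2.0 * rho * random.gammavariate(N, 1.0))
             for _ in range(trials)) / trials
    return (math.log(1.0 + rho)
            - (math.log(2 * math.pi * math.e) + es) / (2 * N))

results = {}
for N in (2, 10):
    results[N] = L2(10.0, N)
    print(N, results[N])
```

As N grows, the penalty term in (24) is scaled by 1/(2N) and shrinks, so L_2(ρ) moves toward log(1 + ρ); this is consistent with the observation in Section V that the gap between L_2(ρ) and capacity decays quickly in N.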

V. NUMERICAL RESULTS AND CONCLUSIONS

Fig. 1. The capacity lower bounds L_1(ρ) and L_2(ρ) (solid lines) and the capacity upper bound U(ρ) (dashed line) as a function of the SNR ρ for N = 2 and N = 10.

Fig. 1 shows the capacity lower bounds L_1(ρ) and L_2(ρ) and the upper bound U(ρ) as a function of ρ for N = 2 and N = 10. The bounds L_2(ρ) and U(ρ) are surprisingly tight over the entire range of SNR values considered in the figure and, hence, describe capacity accurately. Although L_2(ρ) is not asymptotically tight in the sense of (7), the asymptotic gap between L_2(ρ) and C(ρ) decays quickly as a function of N. For the case N = 10, this gap is smaller than 7 × 10^{-5}.

Concluding remarks: We conclude by observing that the capacity bounds presented in this letter are derived under the assumption of uniform phase noise. An interesting open issue is whether our analysis can be generalized to other phase-noise distributions commonly used in the wireless and fiber-optic communities (e.g., the wrapped Gaussian, truncated Gaussian, and von Mises/Tykhonov distributions).

REFERENCES

[1] B. Goebel, R.-J. Essiambre, G. Kramer, P. J. Winzer, and N. Hanik, "Calculation of mutual information for partially coherent Gaussian channels with applications to fiber optics," IEEE Trans. Inf. Theory, vol. 57, no. 9, pp. 5720–5736, Sep. 2011.
[2] R. Nuriyev and A. Anastasopoulos, "Capacity and coding for the block-independent noncoherent AWGN channel," IEEE Trans. Inf. Theory, vol. 51, no. 3, pp. 866–883, Mar. 2005.
[3] A. Lapidoth, "On phase noise channels at high SNR," in Proc. IEEE Inf. Theory Workshop (ITW), Bangalore, India, Oct. 2002, pp. 1–4.
[4] M. Katz and S. Shamai (Shitz), "On the capacity-achieving distribution of the discrete-time noncoherent and partially coherent AWGN channels," IEEE Trans. Inf. Theory, vol. 50, no. 10, pp. 2257–2270, Oct. 2004.
[5] G. Durisi and H. Bölcskei, "High-SNR capacity of wireless communication channels in the noncoherent setting: A primer," Int. J. Electron. Commun. (AEÜ), vol. 65, no. 8, pp. 707–712, Aug. 2011, invited paper.
[6] A. Lapidoth and S. M. Moser, "Capacity bounds via duality with applications to multiple-antenna systems on flat-fading channels," IEEE Trans. Inf. Theory, vol. 49, no. 10, pp. 2426–2467, Oct. 2003.
[7] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. New York, NY, USA: Wiley, 2006.
[8] A. Lapidoth and S. M. Moser, "The fading number of single-input multiple-output fading channels with memory," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 437–453, Feb. 2006.
