Private Extremum Estimation

Denis Nekipelov∗ and Evgeny Yakovlev

Department of Economics, University of California at Berkeley, Berkeley, California, 94720

This Version: June 2011

Abstract: Private data analysis has recently become an important topic in the statistics and computer science literature. Current results in statistics range from comparative analyses of different approaches to privacy to investigations of possible privacy breaches in particular data releases. However, little attention has been paid to privacy analysis applied to concrete estimation procedures. In this paper we bridge this gap by considering the behavior of M-estimators under a particular definition of privacy. Under general conditions, we provide convergence rate and distribution results. In addition, for a particular class of objective functions, we show that our procedure attains the minimax rate over the class of estimators satisfying our notion of privacy.

Keywords: Differential privacy, weak convergence, minimax rates

1 Introduction

Privacy is becoming a growing concern in the storage and analysis of socio-economic datasets. While earlier work focused heavily on the storage and release of entire databases, as reviewed in Adam and Worthmann (1989), modern advances in technology have prompted a reconsideration of the ways a researcher can access and display relevant statistics from private databases. The earlier idea of maintaining privacy by perturbing the database via noise addition or alternative randomization techniques has given way to the idea of releasing to the researcher only the result of a particular query rather than the entire database. Techniques for releasing tabulation and counting-type queries have been discussed in Cox (1980), Cox (1987), Duncan and Fienberg (1997), Fischetti and Salazar (1998), Fienberg (1999), and Chowdhury, Duncan, Krishnan, Roehrig, and Mukherjee (1999), among others. However, many of the previously proposed techniques for disclosure limitation have not provided universal privacy guarantees for arbitrary queries. A feasible solution to this problem has been offered in Dwork and Nissim (2004), Dwork (2006), and the subsequent literature.

∗ Corresponding Author. Email: [email protected]. We would like to thank Cynthia Dwork for insightful comments and inspiration.


However, despite its attractive properties from the point of view of protecting the database from privacy disclosures, the statistical properties of query "sanitization" corresponding to this approach have not been investigated for large classes of queries. Few exceptions include Wasserman and Zhou (2010), who discuss the private release of data histograms; Dwork and Lei (2009), who discuss the privacy properties of certain classes of robust statistics; and Smith (2008), who analyzes the class of parametric maximum likelihood procedures for regular differentiable likelihood functions. In our paper we focus on the setup of M-estimators with general (possibly discontinuous) sample objective functions. Discontinuous queries turn out to be the most problematic from the privacy viewpoint. We suggest a class of private M-estimators for the cases where the objective function can be discontinuous. We establish that the private M-estimators converge in probability to the parameter of interest and find the convergence rate for those estimators. Finally, we demonstrate that these estimators are optimal in the minimax sense for objective functions defined by indicators.

We consider the model where the data are represented by a sample {z_i}_{i=1}^n of n i.i.d. realizations of a random variable Z with compact support \mathcal{Z}. Let z^{(n)} = (z_1, \ldots, z_n) be the random vector whose entries are equal to the sample observations. We assume that the object of interest is the population parameter associated with the maximum of the objective function g(\theta) = E[g(Z; \theta)] over a compact parameter space \Theta \subset \mathbb{R}^p. We will assume that the distribution of Z is continuous with a bounded density function. In the standard M-estimation setting, where the distribution of the random variable Z is not known, one would estimate the population maximizer \theta_0 by evaluating the maximum of the sample analog
\[
\hat g(\theta) = \frac{1}{n}\sum_{i=1}^{n} g(z_i; \theta). \qquad (1.1)
\]
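As a purely illustrative aside (not part of the formal development), the following Python sketch computes the sample analog (1.1) for the indicator objective g(z, θ) = 1{z ∈ [θ − a, θ + a]} used in the examples below and maximizes it over a grid; the data-generating distribution, the grid resolution, and the value of a are arbitrary choices made only for the illustration.

```python
import numpy as np

def sample_objective(theta_grid, z, a=0.25):
    # Sample analog (1.1) of the indicator objective g(z, theta) = 1{z in [theta - a, theta + a]},
    # evaluated at every grid point: rows index theta values, columns index observations.
    inside = np.abs(z[None, :] - theta_grid[:, None]) <= a
    return inside.mean(axis=1)

def m_estimator(z, a=0.25, grid_size=501):
    # Non-private M-estimator: the grid point that maximizes the sample objective.
    theta_grid = np.linspace(0.0, 1.0, grid_size)
    return theta_grid[np.argmax(sample_objective(theta_grid, z, a))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.beta(2.0, 5.0, size=1000)  # illustrative continuous data on [0, 1]
    print("non-private estimate:", m_estimator(z))
```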

We consider the setting where the data sample {z_i}_{i=1}^n may contain private information and uncontrolled release of sample statistics may lead to the leakage of this private information. Such a privacy breach, for instance, may be associated with the release of extreme statistics of the sample distribution. The computer science literature suggests one approach to guaranteeing the absence of privacy breaches (or data sanitization) via a specific setup where the data sample is stored on a secured server. The researcher can submit queries to the server requesting specific sample statistics, and the server releases a "sanitized" result of each query. There could be multiple ways to establish the requirements for the set of sanitized queries. However, a major problem is to define a data release algorithm that provides an ad omnia guarantee against privacy breaches. In this paper we consider a class of data release mechanisms that output a randomized query outcome. A randomized query outcome is a random variable \kappa_n defined on the probability space (\Omega, \mathcal{B}, P_{\kappa|z^{(n)}}). The information regarding this probability space is assumed to be public. The property of randomized responses to queries that leads to a computationally feasible data release mechanism has been established in Dwork and Nissim (2004), is also described in Dwork (2006) and Dwork and Smith (2010), and is referred to as differential privacy.

DEFINITION 1. A randomized query \kappa_n(z^{(n)}) delivers \epsilon-differential privacy if, for some \epsilon > 0,

and for all z^{(n)} and z^{(n)\prime} that differ by exactly one element,
\[
\sup_{z^{(n)} \in \mathcal{Z}^n,\; d_H(z^{(n)}, z^{(n)\prime}) = 1}\; \sup_{K \in \mathcal{B}}\; \frac{P\left(\kappa_n(z^{(n)}) \in K \mid z^{(n)}\right)}{P\left(\kappa_n(z^{(n)\prime}) \in K \mid z^{(n)\prime}\right)} \le e^{\epsilon}.
\]

Here we use the notion of the Hamming metric d_H(\cdot, \cdot), which counts the number of pairwise different entries in two vectors. Dwork, McSherry, Nissim, and Smith (2006) establish the existence of a class of randomized query outcomes that gives a universal guarantee of \epsilon-differential privacy. The randomization is based on the notion of the global sensitivity of the query.

DEFINITION 2. Consider a query f : \mathcal{Z} \times \ldots \times \mathcal{Z} \mapsto \mathbb{R}^p. Define the global sensitivity of this query as
\[
\Delta f = \sup_{x^{(n)},\; d_H(x^{(n)}, x^{(n)\prime}) = 1} \left\| f(x^{(n)}) - f(x^{(n)\prime}) \right\|_1.
\]

Using the notion of global sensitivity, Dwork, McSherry, Nissim, and Smith (2006) established the following result, which suggests the structure of privacy-preserving randomized query outcomes \kappa_n(\cdot, z^{(n)}; f).

THEOREM 1. For a finite-dimensional query f(\cdot), the randomization algorithm that adds independently generated noise from the double exponential distribution with parameter \Delta f / \epsilon to each component of the query outcome satisfies \epsilon-differential privacy.

Following Dwork (2006) and Wasserman and Zhou (2010), we call the mechanism that adds the sensitivity-calibrated double exponential noise "the Laplace mechanism". We now consider the class of private M-estimators associated with the application of the Laplace mechanism to the queries corresponding to the maximum of the sample objective function. The query corresponding to the maximum of (1.1) in the public dataset is the maximizer \hat\theta. The randomization of the query outcome is based on global sensitivity, and we would expect this sensitivity to be high for discontinuous objective functions, which can even make the results of the sanitized query not useful. We illustrate this point with the following example.

Example: Suppose that \mathcal{Z} = [0, 1] and g(z, \theta) = 1\{z \in [\theta - a, \theta + a]\} for some a < 1/2, with parameter space \Theta = [0, 1]. To compute the global sensitivity, consider the sample of size 2n with z_i = 0 for i = 1, \ldots, n and z_i = 1 for i = n + 1, \ldots, 2n. Then in the sample where one z_i is switched from 0 to 1, the sample objective function \hat g(\theta) = \frac{1}{2n}\sum_{i=1}^{2n} 1\{z_i \in [\theta - a, \theta + a]\} is maximized at 1, and in the sample where it is switched back to 0, the objective function is maximized at 0. This means that the global sensitivity of the maximizer is \Delta f = 1 for the considered function. As a result, as the sample size increases, the variance of the randomized query outcome will not decline and the resulting estimator will not be consistent.

Given that the high global sensitivity of M-estimation queries is driven by the lack of smoothness of the objective function, we provide a generalization of the Laplace mechanism that operates with


a transformed objective function. To transform the objective function we use a kernel function, which is a known fixed function K(\cdot) that is symmetric, bounded, and continuous, with \int K(t)\, dt = 1, \int |K(t)|^2\, dt < \infty, and \lim_{|t| \to \infty} K(t) = 0. In practice, kernel functions are frequently chosen as standard distribution densities (such as the normal or triangular densities). Consider the transformed function
\[
g_h(z, \theta) = \int_{t \in \mathcal{Z}} g(t, \theta)\, \frac{1}{h} K\!\left(\frac{t - z}{h}\right) dt. \qquad (1.2)
\]
Also denote g_{n,h}(\theta) = P_n g_h(\cdot, \theta) = \frac{1}{n}\sum_{i=1}^{n} g_h(z_i, \theta). We note that if K(\cdot) is a proper density, the new objective function is defined via a convolution. In this case we can treat this approach to the analysis of non-smooth objective functions as a generalization of the Bayesian-based approach to privacy in Williams and McSherry (2010). It turns out that, by slightly changing the objective function, we are able to bring the sensitivity of the objective function to global changes of its arguments closer to its sensitivity to local changes, which has been studied for certain classes of queries in Nissim, Raskhodnikova, and Smith (2007). We can now characterize the class of estimators that are the focus of the current paper.

DEFINITION 3. For a smoother (1.2) that is globally differentiable with respect to its arguments with bounded second derivatives, define the private M-estimator associated with maximization of (1.1) as
\[
\hat\theta_{n,h} = \tau_{n,h} + \nu_{n,h},
\]
where \tau_{n,h} = \arg\max_{\theta \in \Theta} g_{n,h}(\theta) and \nu_{n,h} is a double exponential random variable with parameter \Delta \tau_{n,h} / \epsilon.

In this definition \tau_{n,h} is the extremum estimator corresponding to the smoothed objective function. With this definition, we note that by construction the considered class of estimation procedures guarantees \epsilon-differential privacy. Smoothing of the objective function may be important even in cases where the original objective function is smooth. One can construct pathological examples of infinitely differentiable functions with bounded support for which the global sensitivity does not decrease with the sample size. In those cases, smoothing plays the role of delivering guarantees on the lower bounds of the derivatives of the smoothed objective function.
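The following sketch illustrates the construction in Definition 3 for the indicator objective: the convolution (1.2) with a Gaussian kernel has the closed form Φ((z − θ + a)/h) − Φ((z − θ − a)/h), the smoothed objective is maximized on a grid, and double exponential (Laplace) noise is added. The sensitivity value fed to the noise generator is only a placeholder of the order 1/(nh²) discussed in Section 4, not a formally derived bound, and the bandwidth and privacy parameter are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def smoothed_objective(theta_grid, z, a=0.25, h=0.1):
    # Gaussian-kernel smoothing of the indicator objective: the convolution (1.2) has the
    # closed form Phi((z - theta + a) / h) - Phi((z - theta - a) / h).
    d = z[None, :] - theta_grid[:, None]
    return (norm.cdf((d + a) / h) - norm.cdf((d - a) / h)).mean(axis=1)

def private_m_estimator(z, eps=1.0, a=0.25, h=0.1, grid_size=501, rng=None):
    # Private M-estimator of Definition 3: smoothed maximizer plus double exponential noise.
    rng = rng or np.random.default_rng()
    theta_grid = np.linspace(0.0, 1.0, grid_size)
    tau = theta_grid[np.argmax(smoothed_objective(theta_grid, z, a, h))]
    # Placeholder sensitivity of order 1 / (n h^2); a formal bound would carry the
    # constants from the Section 4 derivation.
    sensitivity = 1.0 / (len(z) * h ** 2)
    return tau + rng.laplace(scale=sensitivity / eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    z = rng.beta(2.0, 5.0, size=2000)
    print("private estimate:", private_m_estimator(z, eps=0.5, rng=rng))
```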


[Figure 1 about here: four panels plotting the global sensitivity of the smoothed objective function against \epsilon, against the smoothing parameter h, and (with h = 10/N) against the sample size.]

Figure 1: Global sensitivity of smoothed Chernoff’s objective function as a function of smoothing parameter and sample size

We focus our attention on a specific class of bounded functions that includes the class of finite sums of indicator functions, which were shown above to create problems with global sensitivity and, as a result, with the consistency of sensitivity-calibrated differentially private extremum estimators. In the subsequent results we adopt the normalization g(\cdot, \theta_0) = 0. Our results rely on the smoothed empirical process results in van der Vaart (1994) and Yukich (1992) and more recent findings in Radulovic and Wegkamp (2000), Kozek (2003), and Giné and Nickl (2008). The following theorem provides the convergence result under strong requirements on the smoothing parameter but applies to a broad class of functions. Further below, we demonstrate the results for narrower classes of functions, which allows us to weaken the restriction on smoothing.

THEOREM 2. Suppose that F_\epsilon = \{g(\cdot, \theta),\; d(\theta, \theta_0) < \epsilon\} is a translation-invariant VC-class (that is, if f(\cdot) is in this class then f(\cdot + t) is also in this class) such that \sup_{f \in F_\epsilon} |f| \le C < \infty almost everywhere, and that g_h(\theta, z) = \int g(t, \theta)\, \frac{1}{h} K\!\left(\frac{t - z}{h}\right) dt, with the kernel function additionally satisfying \int K(t)\, t^2\, dt = 0, is twice differentiable in both arguments with first derivatives bounded away from zero and infinity on \mathcal{Z} \times \Theta. Then, if
\[
\sup_{d(\theta, \theta_0) < \epsilon} E\left[g_{h_n}(Z, \theta) - g(Z, \theta)\right]^2 \longrightarrow 0 \quad \text{as } n \to \infty, \qquad (1.3)
\]


for \sqrt{n}\, h_n^2 \to \infty with h_n \to 0, we have
\[
\sup_{d(\theta, \theta_0) < \epsilon} |g_{n,h_n}(\theta) - g(\theta)| = o_p(1).
\]
Consequently, if g(\theta_0) attains a unique global maximum and g_{n,h_n}(\tau_{n,h_n}) \ge \sup_{\theta \in \Theta} g_{n,h_n}(\theta) - o_p(1), then
\[
\hat\theta_{n,h_n} \xrightarrow{\;p\;} \theta_0.
\]
Here we use the standard notation x_n = o_p(1) to denote that for any \mu > 0, P(|x_n| > \mu) \to 0 as n \to \infty.

Proof: Given the assumptions regarding the considered class of functions, we can apply the result in van der Vaart (1994) and Theorem 2 in Giné and Nickl (2008). In fact, using the fact that \int K(u)\, du = 1, we make a change of variables and represent
\[
g_h(Z, \theta) - g(Z, \theta) = \int \left[ g(Z + y, \theta) - g(Z, \theta) \right] \frac{1}{h} K\!\left(\frac{y}{h}\right) dy.
\]
Then by Theorem 2 in Giné and Nickl (2008), condition (1.3) is necessary and sufficient to guarantee that
\[
\sup_{d(\theta, \theta_0) < \epsilon} \left| g_{h_n}(z_i, \theta) - g(z_i, \theta) - P g_{h_n}(\cdot, \theta) + P g(\cdot, \theta) \right| = o_p\!\left(\frac{1}{\sqrt{n}}\right).
\]
Provided the P-Donsker assumption regarding the original class of functions, we can conclude that
\[
\sup_{d(\theta, \theta_0) < \epsilon} \left| \hat g(\theta) - P g(\cdot, \theta) \right| = o_p\!\left(\frac{1}{\sqrt{n}}\right),
\]
as in van der Vaart and Wellner (1996). Finally, for the second-order kernel we can use the evaluation in Lemma 3 of Giné and Nickl (2008), which implies that
\[
\left| P g_{h_n}(\cdot, \theta) - P g(\cdot, \theta) \right| = O\!\left(h_n^2\right).
\]
Finally, an application of the triangle inequality delivers the result of interest: \sup_{d(\theta, \theta_0) < \epsilon} |g_{n,h_n}(\theta) - g(\theta)| = o_p(1). We note that, given the slow order of convergence of h_n to zero, the magnitude of the decay will be determined by h_n^2. Then an application of Corollary 3.2.3 in van der Vaart and Wellner (1996) delivers the convergence of the non-private estimator \tau_{n,h_n}. Next, we consider the construction of the private estimator \hat\theta_{n,h_n}. According to the argument in Dwork (2006), we notice that the global sensitivity is a deterministic feature of g_{n,h}(\cdot). As we discuss in Section 4, by global differentiability of this function with respect to \theta and z with bounded derivatives and additive separability, the global sensitivity has deterministic order O(\frac{1}{n h^2}). As a result, given that \Delta g_{n,h_n} \to 0 provided that n h_n^4 \to \infty, we have \nu_{n,h_n} \xrightarrow{a.s.} 0. Q.E.D.


Theorem 2 provides the consistency result under a relatively strong condition requiring the decay of the "smoothing bandwidth" to be slower than n^{-1/4}. This assumption can be relaxed for particular classes of functions, and the fastest admissible rate of decay for h_n will depend on the variance of the sample objective function and the complexity of its functional class. Such precise rate results are given in Giné and Nickl (2008), for instance, for Hölder classes with degree of Hölder continuity exceeding 1/2. We note that the variance condition (1.3) may not be satisfied for large classes of functions that are potentially problematic. In Example 1 we displayed a non-private extremum estimator. We can look at the class of functions with g(z, \theta) = 1\{z \in [\theta - a, \theta + a]\} and F_\epsilon = \{g(\cdot, \theta) - g(\cdot, \theta_0),\; d(\theta, \theta_0) < \epsilon\} for some sufficiently small \epsilon. Then we can see that for f \in F_\epsilon we can evaluate
\[
\int f(t)\, \frac{1}{h} K\!\left(\frac{t - z}{h}\right) dt = (-1)^{1\{\theta < \theta_0\}} \int_{\theta_0 + a}^{\theta + a} \frac{1}{h} K\!\left(\frac{t - z}{h}\right) dt + (-1)^{1\{\theta > \theta_0\}} \int_{\theta_0 - a}^{\theta - a} \frac{1}{h} K\!\left(\frac{t - z}{h}\right) dt.
\]
For sufficiently small \epsilon these integrals can be evaluated as
\[
\int f(t)\, \frac{1}{h} K\!\left(\frac{t - z}{h}\right) dt = O\!\left( \frac{\epsilon}{h} K\!\left(\frac{\theta_0 + a - z}{h}\right) - \frac{\epsilon}{h} K\!\left(\frac{\theta_0 - a - z}{h}\right) \right).
\]
This means that
\[
E\left[ \int f(t)\, \frac{1}{h} K\!\left(\frac{t - Z}{h}\right) dt - f(Z) \right]^2 = O\!\left( \frac{\epsilon^2}{h} \right).
\]

For a shrinking bandwidth this suggests that, if the diameter of the neighborhood is fixed, condition (1.3) is violated. We conclude this section by providing a consistency result for classes of functions for which condition (1.3) is not satisfied.

THEOREM 3. Assume that
\[
G_n = \left\{ h_n g_{h_n}(\theta, z) = \int g(t, \theta)\, K\!\left(\frac{t - z}{h_n}\right) dt,\; d(\theta, \theta_0) < \epsilon,\; h_n \to 0 \right\}
\]
forms a polynomial class as in Nolan and Pollard (1987) (formally, the uniform covering number of the class G_n(\epsilon) = \{f_n(\theta) \in G_n,\; |\theta - \theta_0| < \epsilon\} is polynomial in \epsilon with polynomial degree and coefficients that do not depend on n). In addition, \sup_{f \in G_n} |f| \le C, \sup_{f \in G_n} P f^2 = O(h_n), and all elements of G_n are twice differentiable in both arguments with first derivatives bounded away from zero and infinity on \mathcal{Z} \times \Theta. Then, provided that n h_n / \log n \to \infty with h_n \to 0 and g_{n,h_n}(\tau_{n,h_n}) \ge \sup_{\theta \in \Theta} g_{n,h_n}(\theta) - o_p(1), we obtain
\[
\hat\theta_{n,h_n} \xrightarrow{\;p\;} \theta_0.
\]

Proof: We use Theorem 37 in Pollard (1984) to produce the result. We evaluate the probability
\[
P\left( \sup_{f \in G_n} |P_n f - P f| > 8 C \epsilon h_n \right)
\]

for some \epsilon > 0. We note that \mathrm{var}(P_n f)/(4 \epsilon h_n)^2 \ll (\log n)^{-1} for sufficiently large n. Then an application of the symmetrization inequality in combination with Lemma 33 in Pollard (1984) delivers the result
\[
P\left( \sup_{f \in G_n} |P_n f - P f| > 8 C \epsilon h_n \right) \xrightarrow[n \to \infty]{} 0.
\]
This means that
\[
\sup_{d(\theta, \theta_0) < \epsilon} |g_{n,h_n}(\theta) - P g_{n,h_n}(\cdot, \theta)| = o_p(1).
\]
Then we note that an application of the result in Lemma 3 of Giné and Nickl (2008) allows us to evaluate
\[
|P g_{n,h_n}(\cdot, \theta) - P g(\cdot, \theta)| = \left| P \int g(t, \theta)\, \frac{1}{h_n} K\!\left(\frac{t - \cdot}{h_n}\right) dt - P g(\cdot, \theta) \right| \le C' h_n,
\]
for some constant C'. Combining this result with the previously obtained convergence result and the triangle inequality produces
\[
\sup_{d(\theta, \theta_0) < \epsilon} |g_{n,h_n}(\theta) - P g(\cdot, \theta)| = o_p(1).
\]
The remaining argument repeats the argument in the proof of Theorem 2. Q.E.D.

2 Convergence results

Before proceeding with the further discussion, we impose some assumptions regarding the estimation problem. We aim at investigating the properties for particular classes of objective functions. Our lead case corresponds to the class of finite linear combinations of indicator functions. First of all, we impose the following assumption on the population behavior of the objective function.

ASSUMPTION 1. Assume that

(i) \sup_{\theta \notin S} g(\theta) < 0 for any open set S \subset \Theta that contains \theta_0.

(ii) The density of the distribution of Z has r continuous derivatives. There exists \delta > 0 such that for all d(\theta, \theta_0) < \delta and all k = 0, \ldots, r, \int_{\mathcal{Z}} |g(\theta, z)|\, |f_z^{(k)}(z)|\, dz < \infty. The population objective is twice differentiable in \theta with second derivative g''(\theta_0) \le -H < 0.

(iii) For some small \delta > 0 and \theta_1 and \theta_2 from the \delta-neighborhood of \theta_0, P|g(\cdot, \theta_1) - g(\cdot, \theta_2)| = O(|\theta_1 - \theta_2|).

The next assumption concerns the behavior of the sample objective function. We consider objective functions that can be discontinuous in the parameter, in the spirit of Kim and Pollard (1990) and the results in van der Vaart and Wellner (1996).

ASSUMPTION 2. Assume that


(i) g(\cdot, \theta_0) = 0 as a normalization.

(ii) The class of functions G_{\epsilon,H} = \{h\, g_h(\theta, \cdot) = \int g(t, \theta) K(\frac{t - \cdot}{h})\, dt,\; d(\theta, \theta_0) < \epsilon,\; 0 < |h| < H\} is polynomial in the sense of Nolan and Pollard (1987), with the class \{|f|,\; f \in G_{\epsilon,H}\} and the class \{f_1 - f_2,\; f_1, f_2 \in G_{\epsilon,H}\} permissible and manageable in the sense of Kim and Pollard (1990). Moreover, the metric entropy integral of G_{\epsilon,H} is the same as that of the second class and is not smaller than 1/6 of the metric entropy integral of the last class.

(iii) The class G_{\epsilon,H} has an envelope \mathcal{G}_{\epsilon,H} such that |\mathcal{G}_{\epsilon,H}| < C, P \mathcal{G}_{\epsilon,H}^2 = O(H \epsilon^2), and for each \delta > 0 there exists K such that P \mathcal{G}_{\epsilon,H}^2 1\{\mathcal{G}_{\epsilon,H} > K\} < \delta.

We note that Assumption 2 (iii) narrows the class of functions of interest, as compared to the classes considered in Theorem 3, to those whose smoothed versions have a "non-trivial" envelope. The presence of such an envelope is an artifact of the underlying indicator-like behavior of the functions g(\cdot, \cdot), which suggests that the envelope of the smoothed functions decays whenever either the neighborhood around the parameter value \theta_0 or the bandwidth shrinks relatively fast.

[Figure 2 about here: panel 2.a plots the smoothed re-centered objective g_h(z, \theta) for bandwidths h = 0.1, 0.2, 0.3, 0.4, 0.5; panel 2.b plots it for \epsilon = 0.1, 0.2, 0.3, 0.4, 0.5.]

Figure 2: Smoothed re-centered Chernoff's objective function as a function of \epsilon and of the smoothing parameter.

The next assumption we make concerns the kernel function. We consider higher-order kernels that are "adapted" to the smoothness class of the sample objective function. As we already noted, the kernel plays two separate roles. First, it smoothes the discontinuities in the objective function. Second, it restricts potentially large changes in the derivatives of a smooth objective function, thereby decreasing its global sensitivity.

ASSUMPTION 3. Assume that the kernel function K(\cdot) has at least two continuous derivatives and is symmetric about the origin of order r. Moreover, for some \delta > 0,
\[
\int K(u)\, du = 1, \qquad \int u^k K(u)\, du = 0 \;\text{ for } k = 1, \ldots, r,
\]
\[
\int |u|^r |K(u)|\, du < \infty, \qquad \int |u|^{r+\delta} |K(u)|^{1+\delta}\, du < \infty.
\]

Finally, K(\cdot) is adapted to the smoothness class of g(\cdot, \theta) in the sense that \int g(t, \theta) K\!\left(\frac{t - z}{h}\right) dt is twice continuously differentiable in \theta for some \delta > 0 and |\theta - \theta_0| < \delta. Moreover,
\[
\inf_{z \in \mathcal{Z},\; \theta \in \Theta} \left| \frac{\partial}{\partial \theta} \int g(t, \theta) K\!\left(\frac{t - z}{h}\right) dt \right| > M h,
\]

for each given h, with the constant M being the same over all bandwidths.

Next we establish the convergence rate result for the private M-estimator in the considered function class.

THEOREM 4. Under Assumptions 1, 2, and 3, take a sequence h_n \to 0 with n h_n^{r+1} \to 0 and n h_n / \log n \to \infty. Then if \tau_{n,h_n} is the consistent non-private estimator for \theta_0 with
\[
P_n g_{n,h_n}(\cdot, \tau_{n,h_n}) \ge \sup_{\theta \in \Theta} P_n g_{n,h_n}(\cdot, \theta) - O_p\!\left((n h_n)^{-1}\right)
\]
and \nu_{n,h_n} is sensitivity-calibrated double exponential noise, the resulting private estimator for \theta_0 satisfies
\[
d\!\left(\hat\theta_{n,h_n}, \theta_0\right) = O_p\!\left((n h_n)^{-1/2}\right).
\]

Proof: Denote the rate of convergence by r_n. To prove the result we use Rao's "slicing device" as in the proof of Theorem 3.2.5 in van der Vaart and Wellner (1996) and partition the parameter space into shells A_{n,j} = \{\theta \in \Theta : 2^{j-1} < r_n d(\theta, \theta_0) \le 2^j\}. We select some \delta > 0; from the consistency of \hat\theta_{n,h_n} for \theta_0 it follows that P(2 d(\hat\theta_{n,h_n}, \theta_0) > \delta) \to 0. Then for some M we evaluate the probability
\[
P\!\left( r_n d(\hat\theta_{n,h_n}, \theta_0) > 2^{M+1} \right) \le P\!\left( r_n d(\tau_{n,h_n}, \theta_0) > 2^{M} \right) + P\!\left( r_n d(\nu_{n,h_n}, 0) > 2^{M} \right)
\]
\[
\le \sum_{j \ge M,\; r_n \delta \ge 2^j} P\!\left( \sup_{\theta \in A_{n,j}} g_{n,h_n}(\theta) \ge 0 \right) + P\!\left( r_n d(\nu_{n,h_n}, 0) > 2^{M} \right) + P\!\left( 2 d(\tau_{n,h_n}, \theta_0) > \delta \right).
\]

The last two probabilities converge to zero, respectively, by the Chebyshev inequality and by the consistency of \tau_{n,h_n}. In fact, given that the noise is calibrated by the global sensitivity, which has deterministic order O(\frac{1}{n h_n}), we have r_n d(\nu_{n,h_n}, 0) = O_p\!\left(\frac{r_n}{n h_n}\right) = o_p(1) if r_n grows more slowly than n h_n.

Provided that the objective function is locally quadratic with a negative second derivative, we conclude that for sufficiently small \delta we have g(\theta) \le -H d^2(\theta, \theta_0). Also, given the evaluation from Lemma 3 in Giné and Nickl (2008), we can evaluate |g_{n,h_n}(\theta) - g(\theta)| \le C' h_n^r. This means that the key evaluation that we need to make concerns the probability
\[
P\!\left( \sup_{\theta \in A_{n,j}} |g_{n,h_n}(\theta) - P g_{n,h_n}(\cdot, \theta)| \ge H \frac{2^{2j-2}}{r_n^2} + C' h_n^r \right).
\]


Provided the argument in Kim and Pollard (1990), we can evaluate using the maximal inequality
\[
\sqrt{n h_n}\; P \sup_{\theta \in A_{n,j}} |g_{n,h_n}(\theta) - P g_{n,h_n}(\cdot, \theta)| \le J(1) \sqrt{h_n \frac{2^{2j-2}}{r_n^2}}.
\]
As a result, given that n h_n / \log n \to \infty, we obtain
\[
P\!\left( \sup_{\theta \in A_{n,j}} |g_{n,h_n}(\theta) - P g_{n,h_n}(\cdot, \theta)| \ge H \frac{2^{2j-2}}{r_n^2} + C' h_n^r \right) \le \frac{K\, 2^{j-1} r_n}{\sqrt{n h_n}\, \left(H 2^{2j-2} + C' h_n^r r_n^2\right)}.
\]
Thus, for sufficiently large n, provided that n h_n^{r+1} \to 0, the rate r_n = \sqrt{n h_n} leads to a sum of probabilities that converges to zero for sufficiently large M. This proves that r_n = \sqrt{n h_n}.

We make an additional note regarding the rate requirement for the bandwidth parameter. In fact, for the considered class of functions the uniform entropy integral has order J(\delta) = O\!\left(\delta \sqrt{\log \frac{1}{\delta}}\right). As a result, the condition n h_n / \log n \to \infty guarantees that the integral is finite. This can also be verified via a two-step argument similar to the rate result in Nolan and Pollard (1987). Q.E.D.

One important conclusion from the proof of Theorem 4 is that the privacy guarantee \epsilon can be adapted to the sample size. Provided that the noise is calibrated by the global sensitivity as \Delta \tau_{n,h_n} / \epsilon, the rate of decay of \epsilon cannot exceed the convergence rate. In fact, if \epsilon_n is a function of the sample size, then the stochastic order of the noise is O_p\!\left(\frac{1}{\epsilon_n \sqrt{n h_n}}\right). This means that the condition on the privacy guarantee is that n h_n \epsilon_n^2 \to \infty as \epsilon_n \to 0. We can contrast this condition with the conditions provided in Smith (2008), where no restrictions were imposed on the rate of decay of the privacy guarantee with the sample size. We can clearly see in our results that an uncontrollably fast decay of the privacy guarantee can result in a loss of consistency of the considered estimator.

Next we consider the distribution convergence result and at the same time establish the bounds on the rate of shrinkage of the smoothing bandwidth parameter. To do so, following Romano (1988) and Kim and Pollard (1990), we define for some numeric sequence h_n \to 0 the process
\[
Z_n(t) = n h_n\, g_{n,h_n}\!\left(\theta_0 + \frac{t}{\sqrt{n h_n}}\right) 1\!\left\{\theta_0 + \frac{t}{\sqrt{n h_n}} \in \Theta\right\}.
\]

We also define the re-centered process
\[
W_n(t) = n h_n\, g_{n,h_n}\!\left(\theta_0 + \frac{t}{\sqrt{n h_n}}\right) 1\!\left\{\theta_0 + \frac{t}{\sqrt{n h_n}} \in \Theta\right\} - n h_n\, P g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t}{\sqrt{n h_n}}\right).
\]
To evaluate the convergence properties we first establish stochastic equicontinuity of the re-centered process.

THEOREM 5. Suppose that n h_n / \log n \to \infty and n h_n^{r+1} \to 0. Then the re-centered process W_n(t) satisfies the stochastic equicontinuity condition: for each \epsilon > 0, \eta > 0, and M < \infty, there exists \delta > 0 such that
\[
\limsup_{n \to \infty}\; P^{*}\!\left\{ \sup_{|s - t| < \delta,\; |s|, |t| \le M} |W_n(s) - W_n(t)| > \eta \right\} < \epsilon.
\]


Proof: Consider the class
\[
\mathcal{H}_n = \left\{ h_n g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t_1}{\sqrt{n h_n}}\right) - h_n g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t_2}{\sqrt{n h_n}}\right),\;\; |t_1 - t_2| < \delta_n,\; |t_1|, |t_2| < C \right\}.
\]
This class has envelope H_n = 2 \mathcal{G}_{\epsilon_n, h_n}, where \epsilon_n = \frac{C}{\sqrt{n h_n}} is the diameter of the shrinking neighborhood of \theta_0. We can note that P H_n^2 = O\!\left(\frac{1}{n}\right).

Following Kim and Pollard (1990), we define the sequences X_n = n P_n H_n^2 and Y_n = \sup_{h \in \mathcal{H}_n} P_n h^2. Then we can note that P X_n = O(1). Also, invoking Lemma 33 in Pollard (1984) and Lemma 4.6 in Kim and Pollard (1990), we conclude that
\[
P \sup_{h \in \mathcal{H}_n} P_n h^2 = o\!\left(\frac{1}{n}\right),
\]
provided that P h^2 = o\!\left(\frac{1}{n}\right) for sufficiently large n. Next, we can apply the tail bound in Lemma 19.34 in van der Vaart (2000), corresponding to Theorem 2.14.2 in van der Vaart and Wellner (1996): for a class \mathcal{F} with a measurable envelope F, given \eta > 0 we set
\[
a(\eta) = \eta \|F\|_{P,2} \Big/ \sqrt{1 + \log N_{[\,]}\!\left(\eta \|F\|_{P,2}, \mathcal{F}, L_2(P)\right)}.
\]
Then, if \|f\|_{P,2} < \delta \|F\|_{P,2}, we can evaluate
\[
\sqrt{n}\, P \sup_{f \in \mathcal{F}} \|P_n f - P f\| \le J_{[\,]}(\delta, \mathcal{F}, L_2(P))\, \|F\|_{P,2} + \sqrt{n}\, P F\, 1\{F > \sqrt{n}\, a(\delta)\}.
\]
Using this result in combination with the established property of the envelope function, we finally obtain the evaluation
\[
n\, P \sup_{h \in \mathcal{H}_n} |P_n h - P h| \le P X_n\, J\!\left(\sqrt{\frac{n Y_n}{X_n}}\right) = o(1).
\]
Provided that we constructed the class \mathcal{H}_n from the functions h_n g_{h_n}(\cdot, \theta), this result proves the statement of the theorem. Q.E.D.

Next, we establish the distribution result for the considered functional class. To do that, we impose additional assumptions regarding the behavior of the objective function. These assumptions will allow us to characterize the behavior of the covariance function of the limiting process.

ASSUMPTION 4. Assume that:

(i) \theta_0 is an interior point of \Theta.

(ii) The function \varphi(\cdot, z_1, z_2, \tau_1, \tau_2) = g(\cdot + z_1, \theta_0 + \tau_1)\, g(\cdot + z_2, \theta_0 + \tau_2) is integrable and, for some \delta > 1, E|\varphi(z, z_1, z_2, \tau_1, \tau_2)|^{1+\delta} < \infty for |z_1 - z_2| < \epsilon and |\tau_1 - \tau_2| < \epsilon with |\tau_1|, |\tau_2| < M. Moreover, \int\!\!\int \varphi(z, z_1/h, z_2, \tau_1/h, \tau_2)\, K\!\left(\frac{z_1}{h}\right) K\!\left(\frac{z_2}{h}\right) dz_1\, dz_2 < \infty and its absolute value has finite expectation with respect to z.


(iii) For each z_1, z_2 \in \mathcal{Z} and t_1, t_2 \in \mathbb{R} there exist a constant C and a function
\[
H(t_1, t_2, \tau_1, \tau_2) = \lim_{\frac{\alpha^2}{\beta} \ge C,\; \beta \to \infty} \alpha^2 \beta \int\!\!\int P\!\left[ g\!\left(\cdot + \frac{z_1}{\beta},\; \theta_0 + \frac{t_1}{\tau_1 \alpha}\right) g\!\left(\cdot + \frac{z_2}{\beta},\; \theta_0 + \frac{t_2}{\tau_2 \alpha}\right) \right] K(z_1) K(z_2)\, dz_1\, dz_2 < \infty,
\]
which is continuous in both arguments and bounded in (\tau_1, \tau_2) for each pair (t_1, t_2).

(iv) For each z_1 \in \mathcal{Z} and t_1 \in \mathbb{R} and each \epsilon > 0,
\[
\lim_{\frac{\alpha^2}{\beta} \ge C,\; \beta \to \infty} \alpha^2 \beta\, P\!\left[ g_{1/\beta}\!\left(\cdot,\; \theta_0 + \frac{t_1}{\alpha}\right)^2 1\!\left\{ g_{1/\beta}\!\left(\cdot,\; \theta_0 + \frac{t_1}{\alpha}\right)^2 > \alpha^2 \epsilon / \beta \right\} \right] = 0.
\]

Using Assumption 4 we can characterize the limiting process corresponding to the scaled objective function.

THEOREM 6. Under Assumptions 1 and 4, provided that n h_n^3 is a non-decreasing sequence, the finite-dimensional projections of the scaled process Z_n(t) converge in distribution to the finite-dimensional projections of the process
\[
Z(t) = W(t) - \frac{1}{2} t' H t. \qquad (2.4)
\]

Here W(t) is a centered Gaussian process with covariance kernel H(t_1, t_2, +\infty, +\infty)/K if h_n = \tau \gamma_n with n \gamma_n^3 = K, and H(t_1, t_2, \tau_1, \tau_2) if h_n = \tau \gamma_n and n \gamma_n^3 \to \infty.

Proof: We note that for sufficiently large n, and given that \theta_0 is in the interior of \Theta, the centered process can be characterized as
\[
W_n(t) = \sum_{i=1}^{n} h_n \left[ g_{h_n}\!\left(z_i,\; \theta_0 + \frac{t}{\sqrt{n h_n}}\right) - P g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t}{\sqrt{n h_n}}\right) \right].
\]
Analyzing the covariance function, we find that
\[
\mathrm{cov}\left(W_n(t_1), W_n(t_2)\right) = n h_n^2\, P\!\left[ g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t_1}{\sqrt{n h_n}}\right) g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t_2}{\sqrt{n h_n}}\right) \right] - n h_n^2\, P g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t_1}{\sqrt{n h_n}}\right) P g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t_2}{\sqrt{n h_n}}\right).
\]
We note that
\[
P g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t_1}{\sqrt{n h_n}}\right) = O\!\left(\frac{1}{n h_n} + h_n^r\right),
\]
meaning that the last term converges to zero. For the first term we note that
\[
g_h\!\left(z,\; \theta_0 + \frac{t}{\sqrt{n h}}\right) = \int g\!\left(z + h\zeta,\; \theta_0 + \frac{t}{\sqrt{n h}}\right) K(\zeta)\, d\zeta.
\]


Therefore, we can specify the object of interest as the expectation
\[
n h_n^2\, E\!\left[ g\!\left(z + h_n z_1,\; \theta_0 + \frac{t_1}{\sqrt{n h_n}}\right) g\!\left(z + h_n z_2,\; \theta_0 + \frac{t_2}{\sqrt{n h_n}}\right) \right].
\]
Let \alpha_n = n h_n^2. Then, if n h_n^3 = K, we can see that h_n = \sqrt{K}/\alpha_n and \sqrt{n h_n} = \alpha_n/\sqrt{K}. Then
\[
n h_n^2 \int\!\!\int E\!\left[ g\!\left(z + h_n z_1,\; \theta_0 + \frac{t_1}{\sqrt{n h_n}}\right) g\!\left(z + h_n z_2,\; \theta_0 + \frac{t_2}{\sqrt{n h_n}}\right) \right] K(z_1) K(z_2)\, dz_1\, dz_2
\]
\[
= \alpha_n \int\!\!\int E\!\left[ g\!\left(z + \frac{\sqrt{K} z_1}{\alpha_n},\; \theta_0 + \frac{\sqrt{K} t_1}{\alpha_n}\right) g\!\left(z + \frac{\sqrt{K} z_2}{\alpha_n},\; \theta_0 + \frac{\sqrt{K} t_2}{\alpha_n}\right) \right] K(z_1) K(z_2)\, dz_1\, dz_2 \;\longrightarrow\; H(\sqrt{K} t_1, \sqrt{K} t_2, +\infty, +\infty).
\]
Next, consider the case where n h_n^3 \to \infty. In that case we can denote \alpha_n = \sqrt{n h_n} and \beta_n = \frac{1}{h_n}. Then, by assumption, \alpha_n^2/\beta_n \to \infty. As a result,
\[
n h_n^2 \int\!\!\int E\!\left[ g\!\left(z + h_n z_1,\; \theta_0 + \frac{t_1}{\sqrt{n h_n}}\right) g\!\left(z + h_n z_2,\; \theta_0 + \frac{t_2}{\sqrt{n h_n}}\right) \right] K(z_1) K(z_2)\, dz_1\, dz_2
\]
\[
= \alpha_n^2 \beta_n \int\!\!\int E\!\left[ g\!\left(z + \frac{z_1}{\beta_n},\; \theta_0 + \frac{t_1}{\alpha_n}\right) g\!\left(z + \frac{z_2}{\beta_n},\; \theta_0 + \frac{t_2}{\alpha_n}\right) \right] K(z_1) K(z_2)\, dz_1\, dz_2 \;\longrightarrow\; H(t_1, t_2; \tau_1, \tau_2).
\]
Finally, we can note that for the centering sequence
\[
n h_n\, P g_{h_n}\!\left(\cdot,\; \theta_0 + \frac{t}{\sqrt{n h_n}}\right) \to -\frac{1}{2} t' H t,
\]
given that n h_n^{r+1} \to 0. Q.E.D.

We note that the structure of the covariance function of the centered process W_n(t) changes when h_n = O(n^{-1/3}). This rate for the bandwidth generates the slowest convergence rate for the considered estimator. In cases where the smoothing bandwidth converges to zero at a slower rate (and the rate of convergence of the considered estimator is faster), the limiting re-centered process has a constant covariance function. In the concluding part of this section, we combine our results to establish the limiting distribution of the estimator of interest.

THEOREM 7. Under Assumptions 1, 2, 3, and 4, take a sequence h_n \to 0 with n h_n^{r+1} \to 0 and n h_n^3 non-decreasing. Then if \tau_{n,h_n} is the consistent non-private estimator for \theta_0 with
\[
P_n g_{n,h_n}(\cdot, \tau_{n,h_n}) \ge \sup_{\theta \in \Theta} P_n g_{n,h_n}(\cdot, \theta) - O_p\!\left((n h_n)^{-1}\right)
\]

and \nu_{n,h_n} is sensitivity-calibrated double exponential noise with n h_n \epsilon_n^2 \to \infty, then \sqrt{n h_n}\,(\hat\theta_{n,h_n} - \theta_0) converges in distribution to the random vector that maximizes Z(t).


Proof: The result of the theorem follows from the application of Theorem 3.2.2 in van der Vaart and Wellner (1996) to the random element \sqrt{n h_n}\,(\hat\theta_{n,h_n} - \theta_0). Q.E.D.

3 Minimax rates for classes of indicators

In the previous section we established general distribution results for private M-estimators applied to classes of functions with particular properties of their envelopes (the variance of the envelope decays both when the parameter neighborhood shrinks and when the smoothing bandwidth approaches zero). In this section we relate the discovered rate results to private M-estimators applied to indicators of compact connected sets on the real line. We use our example of the mode estimator, which, in its original form, is not compatible with \epsilon-differential privacy. The private M-estimator applied to the considered objective function, according to Theorem 7, will not only be consistent but also admits a lower privacy constant \epsilon_n in larger datasets. An important and relevant question that remains unanswered is whether the procedure based on global sensitivity can be optimal for indicators in the minimax sense as compared to alternative procedures.

We focus on sample objective functions with the structure g(z; \theta) = \sum_{l=1}^{L} t_l 1\{z < \theta + a_l\}. We assume that L, as well as the numbers t_l and a_l, are fixed for l = 1, \ldots, L. Given that the remaining parameters are fixed, without loss of generality we can simply focus on the class of indicators:
\[
Q_\epsilon = \{ 1\{\theta - a \le z < \theta + a\} - 1\{\theta_0 - a \le z < \theta_0 + a\},\; d(\theta, \theta_0) < \epsilon \}.
\]
We consider a general class of private extremum estimators, which we define as smooth functionals of the sample distribution
\[
T_n(F_n, \xi) = \int \varphi_n(z, \xi)\, F_n(dz).
\]
We assume that \varphi_n(z, \cdot) is a Lipschitz continuous and monotone (increasing) function of the random element \xi \sim U[0, 1]. Here we deliberately refrain from using the double exponential signal, as it can be generated as a continuous transformation of the uniform noise. Moreover, this allows us to consider cases with alternative noise distributions. In the considered setup, where the support of the empirical distribution is fixed and the estimator is represented by a linear functional, we can substitute the privacy definition in Definition 1 by the requirement that
\[
\limsup_{n \to \infty}\; \sup_{u \in [0,1]}\; \sup_{Q_n, P_n} \left| \log T_n^{-1}(Q_n, u) - \log T_n^{-1}(P_n, u) \right| \le \epsilon \qquad (3.5)
\]
for all empirical probability measures on \mathcal{Z}. Given this definition of privacy, we can define the class


of estimators within which we will be looking for the optimal one:
\[
\mathcal{T}_n = \left\{ T_n(F_n, \xi) = \int \varphi_n(z, \xi)\, F_n(dz) : \varphi_n(z, \cdot) \text{ is monotone increasing and Lipschitz},\; \xi \sim U[0, 1],\; \text{and (3.5) is satisfied} \right\}. \qquad (3.6)
\]
Then we provide the following result in the spirit of Ibragimov and Khasminskii (1979).

THEOREM 8. Consider the population objective function, which is an element of the class of functions Q_\epsilon indexed by \theta \in \Theta. Suppose that Assumption 1 holds. Denote by C^r(\mathcal{Z}) the space of all density functions that have r continuous derivatives with support on \mathcal{Z}. Then for any bounded sub-convex loss function \ell(\cdot),
\[
\liminf_{n \to \infty}\; \inf_{T_n \in \mathcal{T}_n}\; \sup_{f \in C^r(\mathcal{Z})} E_{f, T_n}\!\left[ \ell\!\left( n^{\frac{r}{2r+1}} (T_n - \theta_0) \right) \right] > 0, \qquad (3.7)
\]

where we take the expectation also with respect to the random element in T_n.

Proof: We use a two-step argument here, which follows the argument in Theorem IV.5.1 in Ibragimov and Khasminskii (1979). Following that argument, we consider a class of moment function perturbations defined by a function \gamma(\cdot) with compact support and r continuous derivatives such that \int \gamma(z)\, dz = 0 and \gamma(0) \neq 0, such as
\[
f_n(z, \theta) = f(z) + \frac{\theta - \theta_0}{n^{\frac{r}{2r+1}}}\, \gamma\!\left( z\, n^{\frac{1}{2r+1}} \right),
\]
with d(\theta, \theta_0) < \delta. This parametrizes the function of interest while remaining within the class of interest. Then consider the behavior of the population maximizer, which corresponds to the solution of the first-order condition
\[
\frac{\partial}{\partial \theta} \int_{\theta - a}^{\theta + a} f_n(z, \theta)\, dz = 0.
\]
This implies that the corresponding estimand is \theta_f = \theta_0 + \frac{\theta - \theta_0}{n^{\frac{r}{2r+1}} \left( f'(\theta_0 + a) - f'(\theta_0 - a) \right)} + o\!\left(n^{-\frac{r}{2r+1}}\right). Then we can characterize the object of interest as
\[
\liminf_{n \to \infty}\; \inf_{T_n \in \mathcal{T}_n}\; \sup_{d(\theta, \theta_0) < \delta} E_{f, T_n}\!\left[ \ell\!\left( n^{\frac{r}{2r+1}} \left( T_n - \frac{\theta - \theta_0}{f'(\theta_0 + a) - f'(\theta_0 - a)} \right) \right) \right] > 0.
\]
By Ibragimov and Khasminskii (1979) the corresponding information is finite and can be expressed as
\[
I_0 = \int \gamma^2(y)\, dy \Big/ f(0).
\]


Then, by the Lipschitz structure of the kernel in T_n, we can guarantee that (3.5) is satisfied whenever
\[
\sup_{P_n} E_\xi \left\| T_n(\cdot, \xi) - \frac{\theta - \theta_0}{f'(\theta_0 + a) - f'(\theta_0 - a)} \right\|_{P_n, 2} \le \epsilon.
\]
This is assured by the appropriate choice of the neighborhood of \theta_0. In fact, we can see that
\[
T_n(\cdot, \xi) - \frac{\theta - \theta_0}{f'(\theta_0 + a) - f'(\theta_0 - a)} = O_p\!\left( n^{-\frac{r}{2r+1}} \right).
\]
Thus, for any \epsilon > 0, sufficiently large n, and sufficiently small \delta > 0 such that d(\theta, \theta_0) < \delta, we can provide the \epsilon-privacy guarantee. Then, by sub-convexity of the loss function and the Hölder inequality, we can fix the random element in T_n. Then T_n becomes a regular non-parametric estimator, and an application of Lemma IV.5.1 in Ibragimov and Khasminskii (1979) delivers the result of the theorem. Q.E.D.

4 Smoothing and the location of the maximum

We assumed that convolving the objective function with the kernel generates a sufficiently smooth function. However, in addition to smoothing per se, such a convolution can be used to "regularize" the objective function. Commonly used regularization techniques, such as Tikhonov regularization, are frequently employed to address the discontinuity of solutions of inverse problems. In our case we use convolution to assure that the global sensitivity of the sample objective function declines with the sample size. We expressed the smoothed objective function via g_h(z, \theta) = \int \frac{1}{h} K\!\left(\frac{t - z}{h}\right) g(t, \theta)\, dt. Given that the function is sufficiently smooth, the necessary condition for the maximum can be expressed as
\[
\frac{1}{n} \sum_{i=1}^{n} \frac{\partial g_h(z_i, \theta)}{\partial \theta} = 0.
\]
If the solution to this equation is unique, the analysis of global sensitivity reduces to the analysis of the behavior of this solution under changes in z. Let \theta_1 solve the first-order condition with the original set of z_i. Now we take some i and substitute z_i with some z'. Then the first-order condition can be written as
\[
\frac{\partial \hat g_h(\theta)}{\partial \theta} + \frac{1}{n} \left( \frac{\partial g_h(z', \theta)}{\partial \theta} - \frac{\partial g_h(z_i, \theta)}{\partial \theta} \right) = 0.
\]
Denote the solution of this equation by \theta_2 and then use a Taylor-series expansion at \theta_1 to express
\[
\theta_2 - \theta_1 = \left( \frac{\partial^2 \hat g_h(\theta^*)}{\partial \theta \partial \theta^T} - \frac{1}{n} \left[ \frac{\partial^2 g_h(z_i, \theta^*)}{\partial \theta \partial \theta^T} - \frac{\partial^2 g_h(z', \theta^*)}{\partial \theta \partial \theta^T} \right] \right)^{-1} \frac{1}{n} \left( \frac{\partial g_h(z_i, \theta_1)}{\partial \theta} - \frac{\partial g_h(z', \theta_1)}{\partial \theta} \right),

where θ∗ is in the neighborhood of θ1 of radius d(θ1 , θ2 ). We consider the behavior of the difference θ1 − θ2 for sufficiently large sample n. As before, we consider the example of indicator functions.


For simplicity we focus on g(z, \theta) = 1\{z \in [\theta - a, \theta + a]\} for a given a. Then, writing \bar K(t) = \int_{-\infty}^{t} K(q)\, dq for the antiderivative of the kernel, we can represent
\[
g_h(z, \theta) = \bar K\!\left(\frac{z - \theta + a}{h}\right) - \bar K\!\left(\frac{z - \theta - a}{h}\right).
\]
Using smoothness and the Taylor expansion, for some point \bar z we can represent
\[
\frac{\partial g_h(z_i, \theta_1)}{\partial \theta} - \frac{\partial g_h(z', \theta_1)}{\partial \theta} = (z_i - z') \frac{\partial^2 g_h(\bar z, \theta_1)}{\partial \theta \partial z}.
\]
As a result, provided that
\[
\frac{\partial^2 g_h(\bar z, \theta_1)}{\partial \theta \partial z} = -\frac{1}{h^2} \left( K^{(1)}\!\left(\frac{z - \theta + a}{h}\right) - K^{(1)}\!\left(\frac{z - \theta - a}{h}\right) \right),
\]
the numerator has order \frac{1}{n h^2}. For the denominator we can use an analogous Taylor series expansion to produce
\[
\frac{\partial^2 g_h(z_i, \theta^*)}{\partial \theta \partial \theta^T} - \frac{\partial^2 g_h(z', \theta^*)}{\partial \theta \partial \theta^T} = (z_i - z') \frac{\partial^3 g_h(\tilde z, \theta^*)}{\partial \theta \partial \theta^T \partial z}.
\]
We note that
\[
\frac{\partial^3 g_h(\tilde z, \theta^*)}{\partial \theta \partial \theta^T \partial z} = -\frac{1}{h^3} \left( K^{(2)}\!\left(\frac{z - \theta + a}{h}\right) - K^{(2)}\!\left(\frac{z - \theta - a}{h}\right) \right).
\]
On the other hand, using the results in Romano (1988), we can evaluate
\[
\frac{\partial^2 \hat g_h(\theta^*)}{\partial \theta \partial \theta^T} = \frac{\partial^2 \hat g(\theta^*)}{\partial \theta \partial \theta^T} + o_p\!\left(\frac{1}{n h^2}\right).
\]
This means that the denominator can be expressed as \frac{\partial^2 \hat g(\theta^*)}{\partial \theta \partial \theta^T} + o\!\left(\frac{1}{n h^3}\right). Therefore, the condition that n h^3 is non-decreasing assures the boundedness of the global sensitivity. Moreover, the condition n h^2 \to \infty assures that the global sensitivity converges to zero as the sample becomes larger.
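A simple numerical check of this argument is sketched below (a rough illustration only: the supremum over neighboring samples is approximated by swapping each observation with the endpoints 0 and 1, the kernel is Gaussian, and the bandwidth choice h_n = n^{-1/3} keeps n h^2 growing; the data-generating distribution is arbitrary). Under these assumptions the measured change in the smoothed maximizer should shrink as n grows, in line with the pattern displayed in Figure 1.

```python
import numpy as np
from scipy.stats import norm

def smoothed_maximizer(z, a=0.25, h=0.1, grid_size=401):
    # Maximizer of the Gaussian-smoothed indicator objective on a grid.
    theta_grid = np.linspace(0.0, 1.0, grid_size)
    d = z[None, :] - theta_grid[:, None]
    g = (norm.cdf((d + a) / h) - norm.cdf((d - a) / h)).mean(axis=1)
    return theta_grid[np.argmax(g)]

def empirical_sensitivity(z, a=0.25, h=0.1, swaps=(0.0, 1.0)):
    # Crude lower bound on the global sensitivity of the smoothed maximizer:
    # the largest change observed when a single observation is replaced by 0 or 1.
    base = smoothed_maximizer(z, a, h)
    worst = 0.0
    for i in range(len(z)):
        for v in swaps:
            z_alt = z.copy()
            z_alt[i] = v
            worst = max(worst, abs(smoothed_maximizer(z_alt, a, h) - base))
    return worst

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    for n in (50, 100, 200):
        z = rng.beta(2.0, 5.0, size=n)
        print(n, empirical_sensitivity(z, h=n ** (-1.0 / 3.0)))
```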

5 Alternative estimators

Throughout our discussion we focused on the class of procedures that, first, approximate the potentially ill-behaved sample objective function with its smoothed analog and, second, introduce noise to the query based on the global sensitivity of the smoothed objective. We can consider procedures that deliver privacy guarantees in an alternative way. In this section we provide two examples of procedures that may be used as alternatives to the smoothing procedure.

Subsampling-based procedure. We consider a procedure, proposed in Smith (2008), where one constructs H_n random subsamples J_k from the original sample, such that the probability of each observation being included in a given subsample is equal to \lambda_n. Then we compute the estimator \bar\theta_{n, H_n, \lambda_n} by the following two steps (a code sketch of this procedure is given below):


(i) computing \nu_{n,k,\lambda_n} = \arg\max_{\theta} \frac{1}{\# J_k} \sum_{i \in J_k} g(z_i, \theta);

(ii) aggregating the obtained estimates,
\[
\bar\theta_{n, H_n, \lambda_n} = \frac{1}{H_n} \sum_{k=1}^{H_n} \nu_{n,k,\lambda_n}.
\]
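A sketch of this subsampling-based procedure follows. The number of subsamples, the inclusion probability, and the data-generating distribution are illustrative choices; a full private release in the spirit of Smith (2008) would additionally add calibrated noise to the aggregate, which is omitted here.

```python
import numpy as np

def indicator_maximizer(z, a=0.25, grid_size=401):
    # Maximizer of the raw indicator sample objective on a grid.
    theta_grid = np.linspace(0.0, 1.0, grid_size)
    g = (np.abs(z[None, :] - theta_grid[:, None]) <= a).mean(axis=1)
    return theta_grid[np.argmax(g)]

def subsample_aggregate(z, n_subsamples=50, lam=0.5, a=0.25, rng=None):
    # Subsampling-based estimator: each observation enters a given subsample independently
    # with probability lam; the subsample maximizers are then averaged.
    rng = rng or np.random.default_rng()
    estimates = []
    for _ in range(n_subsamples):
        keep = rng.random(len(z)) < lam
        if keep.any():                       # skip the (rare) empty subsample
            estimates.append(indicator_maximizer(z[keep], a))
    return float(np.mean(estimates))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    z = rng.beta(2.0, 5.0, size=1000)
    print("subsample-aggregate estimate:", subsample_aggregate(z, rng=rng))
```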

We note that this procedure can be represented as a Rademacher-type randomization where one draws random variables \delta_{i,k} with values -1 and 1 taken with probabilities 1 - \lambda_n and \lambda_n. Then the objective function in subsample k can be written as
\[
\hat g_k(\theta) = \frac{1}{2} \left[ \hat g(\theta) + \frac{1}{n} \sum_{i=1}^{n} \delta_{i,k}\, g(z_i, \theta) \right].
\]
Then, given the sample \{z_i\}_{i=1}^n, the maximum \nu_{n,k,\lambda_n} is a random variable depending on \lambda_n, and the aggregation over k can be treated as smoothing with respect to the distribution of this random variable. We note that, for a sufficiently large simulation sample, the extremum estimator obtained using the randomized subsamples and averaged over all random subsamples will approach its expectation with respect to the distribution over all randomized samples. This expectation plays the role of smoothing in this environment.

Numerical optimization. An additional example of a possible alternative to smoothing the objective function via convolution to attain privacy comes from numerical optimization. Frequently, in order to find the maximum of the objective function, one tries to find a solution to the first-order condition \nabla g(\theta) = 0, where the gradient is computed via some numerical approximation. We can consider the case where the numerical approximation is based on computing finite differences of the objective function. The kth derivative of g(x) can be approximated by a linear operator, denoted by L^{\epsilon}_{k,p} g(x), that makes use of a pth-order two-sided formula:
\[
L^{\epsilon}_{k,p} g(x) = \frac{1}{\epsilon^{k}} \sum_{l=-p}^{p} c_l\, g(x + l\epsilon).
\]
The usual two-sided derivative corresponds to the case p = 1; when p > 1, these are called higher-order differentiation formulas. For a given p, when the weights c_l, l = -p, \ldots, p, are chosen appropriately, the error in approximating g^{(k)}(x) with L^{\epsilon}_{k,p} g(x) is small: L^{\epsilon}_{k,p} g(x) - g^{(k)}(x) = O(\epsilon^{2p+1-k}). Now consider the example of Chernoff's objective function
\[
g(\theta) = E\left[ 1\{z \in [\theta - a, \theta + a]\} \right],
\]
with the sample analog
\[
\hat g(\theta) = \frac{1}{n} \sum_{i=1}^{n} 1\{z_i \in [\theta - a, \theta + a]\}.
\]

The sample objective is non-smooth in \theta, which may complicate the search for the maximum of the objective with respect to \theta. We can apply a randomized numerical-gradient approach to construct a more manageable estimation technique. Suppose that \xi is a random draw from the uniform distribution with support on [s, 1]; then, for a deterministic sequence h_n (equivalent to the smoothing sequence we used before), we can express the numerical gradient condition as
\[
L^{\xi h_n}_{1,2} \hat g(\theta) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h_n \xi} \left[ U\!\left(\frac{z_i + \theta - a}{h_n \xi}\right) - U\!\left(\frac{z_i + \theta + a}{h_n \xi}\right) \right] = o_p\!\left(\frac{\log n}{h_n n}\right),
\]
where U(\cdot) is the uniform kernel. As a solution, we can take the smallest root of this equation. We note that the procedure of solving the numerical first-order condition is similar to the procedure that results from smoothing the sample objective function. According to the result in Hong, Mahajan, and Nekipelov (2010), the resulting estimator will be consistent. Moreover, the support of the uniform random variable \xi can play the role of the privacy control.
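A sketch of this numerical-gradient approach for the indicator objective follows. It uses the standard two-sided difference (p = 1 with weights ±1/2), a random step h\xi with \xi drawn uniformly from [s, 1], and a crude sign-change scan over a grid in place of a root-finding routine; the grid, s, and the bandwidth are illustrative choices rather than the tuning prescribed by the theory.

```python
import numpy as np

def sample_objective(theta, z, a=0.25):
    # Sample analog of the Chernoff-type objective E[1{z in [theta - a, theta + a]}].
    return np.mean(np.abs(z - theta) <= a)

def numerical_gradient(theta, z, step, a=0.25):
    # Standard two-sided difference (p = 1): (g(theta + step) - g(theta - step)) / (2 * step).
    return (sample_objective(theta + step, z, a)
            - sample_objective(theta - step, z, a)) / (2.0 * step)

def numerical_gradient_estimator(z, h, s=0.5, a=0.25, grid_size=401, rng=None):
    # Smallest grid point at which the randomized numerical gradient changes sign
    # from positive to negative (a crude stand-in for a root-finding routine).
    rng = rng or np.random.default_rng()
    xi = rng.uniform(s, 1.0)                 # random scaling of the step, xi ~ U[s, 1]
    theta_grid = np.linspace(0.0, 1.0, grid_size)
    grads = np.array([numerical_gradient(t, z, h * xi, a) for t in theta_grid])
    crossings = np.where(np.diff(np.sign(grads)) < 0)[0]
    return theta_grid[crossings[0]] if crossings.size else theta_grid[np.argmax(grads)]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    z = rng.beta(2.0, 5.0, size=1000)
    print("numerical-gradient estimate:", numerical_gradient_estimator(z, h=0.05, rng=rng))
```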

References

Adam, N., and J. Worthmann (1989): "Security-control methods for statistical databases: a comparative study," ACM Computing Surveys (CSUR), 21(4), 515–556.

Chowdhury, S., G. Duncan, R. Krishnan, S. Roehrig, and S. Mukherjee (1999): "Disclosure detection in multivariate categorical databases: Auditing confidentiality protection through two new matrix operators," Management Science, pp. 1710–1723.

Cox, L. (1980): "Suppression methodology and statistical disclosure control," Journal of the American Statistical Association, 75(370), 377–385.

Cox, L. (1987): "A constructive procedure for unbiased controlled rounding," Journal of the American Statistical Association, 82(398), 520–524.

Duncan, G., and S. Fienberg (1997): "Obtaining information while preserving privacy: A Markov perturbation method for tabular data," in Joint Statistical Meetings, pp. 351–362.

Dwork, C. (2006): "Differential privacy," Automata, Languages and Programming, pp. 1–12.

Dwork, C., and J. Lei (2009): "Differential privacy and robust statistics," in Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pp. 371–380. ACM.

Dwork, C., F. McSherry, K. Nissim, and A. Smith (2006): "Calibrating noise to sensitivity in private data analysis," Theory of Cryptography, pp. 265–284.

Dwork, C., and K. Nissim (2004): "Privacy-preserving datamining on vertically partitioned databases," in Advances in Cryptology–CRYPTO 2004, pp. 134–138. Springer.

Dwork, C., and A. Smith (2010): "Differential privacy for statistics: What we know and what we want to learn," Journal of Privacy and Confidentiality, 1(2), 2.

Fienberg, S. (1999): "Fréchet and Bonferroni bounds for multi-way tables of counts with applications to disclosure limitation," in Statistical Data Protection (SDP98) Proceedings, pp. 115–129.

Fischetti, M., and J. Salazar (1998): "Computational experience with the controlled rounding problem in statistical disclosure control," Journal of Official Statistics, 14(4), 553–565.

Giné, E., and R. Nickl (2008): "Uniform central limit theorems for kernel density estimators," Probability Theory and Related Fields, 141(3), 333–387.

Hong, H., A. Mahajan, and D. Nekipelov (2010): "Extremum estimation and numerical derivatives," working paper, UC Berkeley and Stanford University.

Ibragimov, I., and R. Khasminskii (1979): Asymptotic Theory of Estimation (in Russian). Nauka, Moscow.

Kim, J., and D. Pollard (1990): "Cube root asymptotics," The Annals of Statistics, 18(1), 191–219.

Kozek, A. (2003): "On M-estimators and normal quantiles," The Annals of Statistics, 31(4), 1170–1185.

Nissim, K., S. Raskhodnikova, and A. Smith (2007): "Smooth sensitivity and sampling in private data analysis," in Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, pp. 75–84. ACM.

Nolan, D., and D. Pollard (1987): "U-processes: rates of convergence," The Annals of Statistics, 15(2), 780–799.

Pollard, D. (1984): Convergence of Stochastic Processes. Springer.

Radulovic, D., and M. Wegkamp (2000): "Weak convergence of smoothed empirical processes: beyond Donsker classes," High Dimensional Probability II, 47, 89–105.

Romano, J. (1988): "On weak convergence and optimality of kernel density estimates of the mode," The Annals of Statistics, 16(2), 629–647.

Smith, A. (2008): "Efficient, differentially private point estimators," arXiv preprint arXiv:0809.4794.

van der Vaart, A. (1994): "Weak convergence of smoothed empirical processes," Scandinavian Journal of Statistics, 21(4), 501–504.

van der Vaart, A. (2000): Asymptotic Statistics. Cambridge University Press.

van der Vaart, A., and J. Wellner (1996): Weak Convergence and Empirical Processes. Springer Verlag.

Wasserman, L., and S. Zhou (2010): "A statistical framework for differential privacy," Journal of the American Statistical Association, 105(489), 375–389.

Williams, O., and F. McSherry (2010): "Probabilistic inference and differential privacy," in Neural Information Processing Systems (NIPS).

Yukich, J. (1992): "Weak convergence of smoothed empirical processes," Scandinavian Journal of Statistics, 19(3), 271–279.
