Random Sampling and Probability

Richard Bass, University of Connecticut, [email protected], www.math.uconn.edu/~bass

Richard Bass (University of Connecticut)

Random sampling and probability

June 2010

1 / 34

This is joint work with Karlheinz (Charlie) Gröchenig, University of Vienna.


The problem

The problem: given $x_j$ and $y_j = f(x_j)$, find $f$. This is
• Classical
• Useful
• Topical (Candès, Romberg, and Tao; Cucker and Smale; Smale and Zhou)
• Easy


Refining the problem

We need to restrict the class of $f$'s somehow. We assume our samples lie in $[0,1]^d$ and we look at band-limited $f$'s:

$$f(x) = \sum_{|k| \le M,\ k \in \mathbb{Z}^d} a_k e^{2\pi i k \cdot x}.$$


If the $x_n$'s are on a lattice, things are easier, but one wants to allow other cases. For example, when recording a picture for a passport, one wants more detail in certain places, such as the face.


The case of one dimension is well understood, but $d > 1$ is poorly understood. The reason is that a lot is known about the zeros of an analytic function of one complex variable, but very little about the zeros of a holomorphic function on $\mathbb{C}^n$. Yet there are algorithms that work, although no one understands why they do.


The method

The method to find $f$ is this. We have a collection of linear equations:

$$y_n = f(x_n) = \sum_{|k| \le M,\ k \in \mathbb{Z}^d} a_k e^{2\pi i k \cdot x_n}.$$

If we let $U_{nk} = e^{2\pi i k \cdot x_n}$, and define the vectors $a$ and $y$ in the obvious way, we need to solve $Ua = y$ for $a$. It is common to look instead at $U^* U a = U^* y$; then $T = U^* U$ is a Toeplitz matrix.
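As a concrete sketch of this solve (the parameter choices $d = 1$, $M = 3$, $r = 20$ and all variable names are ours, not from the talk), one can form $U$, build $T = U^*U$, and solve the normal equations:

```python
import numpy as np

# Sketch (assumptions ours): d = 1, M = 3, r = 20 i.i.d. uniform samples.
rng = np.random.default_rng(0)
M, r = 3, 20
ks = np.arange(-M, M + 1)                       # frequencies |k| <= M
x = rng.random(r)                               # sample points in [0, 1]
a_true = rng.standard_normal(len(ks)) + 1j * rng.standard_normal(len(ks))

U = np.exp(2j * np.pi * np.outer(x, ks))        # U_{nk} = e^{2 pi i k x_n}
y = U @ a_true                                  # the samples y_n = f(x_n)

# Normal equations: T a = U* y with T = U* U (Toeplitz when d = 1).
T = U.conj().T @ U
a_hat = np.linalg.solve(T, U.conj().T @ y)
print(np.allclose(a_hat, a_true))               # coefficients recovered
```

Since $r = 20$ comfortably exceeds the $2M + 1 = 7$ unknowns, $U$ has full column rank with probability 1 and the recovery is exact up to round-off.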


First result

Question 1: For what $x_n$'s can we solve this equation?

Theorem 1. If the $x_j$'s are chosen independently and the distribution of each $x_j$ has a density, then $T$ is invertible with probability 1.



The idea of the proof is as follows. $T$ is a $(2M+1)^d \times (2M+1)^d$ matrix. We look at the $N \times N$ upper left hand corner and show by induction that each of these is invertible. Suppose the $N \times N$ matrix is invertible. If we let $b_i$ be the vector consisting of the first $N$ entries of the $i$th row, then $b_{N+1}$ is a unique linear combination of $b_1, \ldots, b_N$. So the only way the $(N+1) \times (N+1)$ upper left hand corner fails to be invertible is if $a_{N+1,N+1}$ is the same linear combination of the $a_{i,N+1}$. Sorting this out, this means that $x_{N+1}$ is a zero of a particular fixed trigonometric polynomial. The set of zeros of a trigonometric polynomial has measure 0, so with probability 1, $x_{N+1}$ is not a zero.
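The induction can be illustrated numerically (a sketch with parameters of our choosing: $d = 1$, $M = 2$, and the minimal $r = 2M+1$ samples) by checking that every upper left hand corner of $T$ is nonsingular:

```python
import numpy as np

# Numerical illustration of the induction (assumptions ours: d = 1, M = 2,
# r = 2M + 1 i.i.d. uniform samples, the minimum needed for invertibility).
rng = np.random.default_rng(1)
M = 2
ks = np.arange(-M, M + 1)
x = rng.random(len(ks))
U = np.exp(2j * np.pi * np.outer(x, ks))
T = U.conj().T @ U

# Every N x N upper left hand corner of T is invertible with probability 1:
# its smallest singular value is strictly positive.
smallest = [np.linalg.svd(T[:N, :N], compute_uv=False)[-1]
            for N in range(1, len(ks) + 1)]
print("smallest singular values:", smallest)
print("all invertible:", all(s > 1e-10 for s in smallest))
```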


The sampling inequality

Question 2: If one can solve the system of equations, can one do it practically? What is the condition number of $T$? The condition number is the ratio of the largest to the smallest eigenvalue. The larger the condition number, the harder it is to solve the linear equations numerically.


What we prove is known as a sampling inequality:

$$A \|f\|_2^2 \le \sum_{j=1}^r |f(x_j)|^2 \le B \|f\|_2^2.$$


If we have a sampling inequality, we know three things.
1. $\kappa(T) \le B/A$, where $\kappa$ is the condition number. (Recall the equation $y = Ua$. The middle expression in the sampling inequality is $\|y\|_2^2$, and the left and right hand sides are multiples of $\|a\|_2^2$.)
2. Uniqueness: if $f_1(x_j) = f_2(x_j)$ for all $j$, the left hand inequality applied to $f_1 - f_2$ implies $f_1 = f_2$.
3. Stability: if we vary $f$ a little, the samples will only vary a little. This comes from the right hand inequality.
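Point 1 can be read off numerically (a sketch; the observation that the sharpest constants are $A = \lambda_{\min}(T)$ and $B = \lambda_{\max}(T)$, and all parameter choices, are ours): since $\sum_j |f(x_j)|^2 = \|Ua\|^2$ and $\|f\|_2^2 = \|a\|_2^2$, the extreme eigenvalues of $T$ give the best $A$ and $B$.

```python
import numpy as np

# Sketch (assumptions ours): estimate the sampling-inequality constants as the
# extreme eigenvalues of T = U* U, with d = 1, M = 3, r = 200 uniform samples.
rng = np.random.default_rng(2)
M, r = 3, 200
ks = np.arange(-M, M + 1)
x = rng.random(r)
U = np.exp(2j * np.pi * np.outer(x, ks))
T = U.conj().T @ U

eig = np.linalg.eigvalsh(T)          # T is Hermitian, so eigenvalues are real
A, B = eig[0], eig[-1]
print("A =", A, " B =", B, " kappa(T) =", B / A)
# Both A and B concentrate near r = 200, so kappa(T) is close to 1.
```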


Second result

Theorem 2. If the $x_j$ are i.i.d. with a uniform distribution, then

$$(1-\varepsilon)\, r \|f\|_2^2 \le \sum_{j=1}^r |f(x_j)|^2 \le (1+\varepsilon)\, r \|f\|_2^2$$

holds with probability at least

$$1 - c_1 e^{-c_2 r \varepsilon^2/(1+\varepsilon)}.$$


A corollary is that

$$\limsup_{r \to \infty} \frac{\sup_f \left| \sum_{j=1}^r |f(x_j)|^2 - r\|f\|_2^2 \right|}{\sqrt{r \log\log r}\ \|f\|_2^2} = c \quad \text{a.s.}$$

So

$$\kappa(T) \approx 1 + c \left( \frac{\log\log r}{r} \right)^{1/2}.$$

For the lattice case, we have the same rate without the double logs, so we don't lose much.


I'll sketch two proofs. One gives explicit bounds, the other generalizes. Let $\delta = \inf\{s : [0,1]^d \subset \cup_{j=1}^r B(x_j, s)\}$. We show that $\delta$ will be small when $r$ is large enough, and then apply a theorem of Beurling.



Divide the unit cube into subcubes of side $1/N$. If none are empty, then $\delta(r) \le \sqrt{d}/N$. The probability that a sample misses a given cube is $1 - N^{-d}$, so the probability that all $r$ samples miss a given cube is $(1 - N^{-d})^r$. There are $N^d$ cubes, so the probability that $\delta > \sqrt{d}/N$ is bounded by $N^d (1 - N^{-d})^r$. It turns out this sort of analysis is sharp.
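The union bound above can be evaluated and sanity-checked by simulation (a sketch; the choices $d = 2$, $N = 10$, $r = 1000$ are ours):

```python
import numpy as np

# Evaluate the bound N^d (1 - N^{-d})^r on P(delta > sqrt(d)/N), then check it
# by Monte Carlo (assumptions ours: d = 2, N = 10, r = 1000).
d, N, r = 2, 10, 1000
bound = N**d * (1 - N**(-d))**r
print("union bound:", bound)

rng = np.random.default_rng(3)
trials = 200
misses = 0
for _ in range(trials):
    pts = rng.random((r, d))
    cells = np.minimum((pts * N).astype(int), N - 1)   # subcube index of each point
    if len({tuple(c) for c in cells}) < N**d:          # some subcube left empty
        misses += 1
print("empirical frequency of an empty subcube:", misses / trials)
```

The empirical frequency stays below the bound, consistent with the claim that this analysis is essentially sharp.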


The other proof uses three ideas.

First, since $f = \sum a_k e^{2\pi i x \cdot k}$, the $L^2$ norm of $f$ is the same as the $\ell^2$ norm of $a$. By Cauchy-Schwarz, the $L^\infty$ norm is comparable (with a constant $(2M+1)^{d/2}$, which can be bad, but is fixed). By interpolation all the $L^p$ norms of $f$ are comparable.

Second, Bernstein's inequality (the one for sums of independent random variables, not the one for the derivative of a Fourier series):

$$P(S_n > \lambda) \le \exp\left( -\frac{\lambda^2}{2\sigma^2 + \lambda M/3} \right).$$
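Bernstein's inequality can be checked against simulation (a sketch; the choice of uniform variables on $[-M, M]$, $n = 1000$, and $\lambda = 60$ is ours):

```python
import numpy as np

# Check Bernstein's inequality for sums of bounded i.i.d. mean-zero variables
# (assumptions ours: X_i ~ Uniform(-Mb, Mb), so |X_i| <= Mb and E X_i = 0).
rng = np.random.default_rng(4)
n, Mb, trials = 1000, 1.0, 20000
X = rng.uniform(-Mb, Mb, size=(trials, n))
S = X.sum(axis=1)
sigma2 = n * (2 * Mb)**2 / 12              # Var(S_n): Uniform(-Mb, Mb) has var (2Mb)^2/12

lam = 60.0
empirical = (S > lam).mean()
bernstein = np.exp(-lam**2 / (2 * sigma2 + lam * Mb / 3))
print("empirical tail:", empirical, " Bernstein bound:", bernstein)
```

The empirical tail probability sits below the Bernstein bound, as it must.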


Third is the technique of metric entropy. To get a bound for the sup of $Y_f$ over a class of $f$'s, one first gets a bound for the sup over a small finite class $C_1$ of $f$'s. Then one gets a bound for the sup over a slightly larger finite class $C_2$ by comparing every $f$ in $C_2$ to the nearest element of $C_1$. Continuing, one gets a bound on the sup over a countable dense subset of $f$'s.


Here is a little more detail. For each $i$, let $C_i$ be a finite set of points, suppose the $C_i$ increase, and the union of the $C_i$ is dense in the space. Write

$$P\left( \sup_{C_{i+1}} |Y_f| > \lambda_i + \sup_{C_i} |Y_f| \right) \le P\left( \sup_{f \in C_{i+1},\, g \in C_i} |Y_f - Y_g| > \lambda_i \right).$$

One can be a little more efficient by realizing that the last sup can be restricted to $f$'s and $g$'s that are close together. By balancing the selection of the $C_i$ against the values of $\lambda_i$, one can get very good estimates.


Generalizations

We can let our class of functions be $\{\sum_k a_k e_k\}$ for some other basis $\{e_k\}$. We can look at almost periodic functions:

$$\sum_k a_k e^{2\pi i \lambda_k \cdot x},$$

where the $\lambda_k$ are not necessarily elements of $\mathbb{Z}^d$. We can also look at algebraic polynomials, at shift invariant spaces (e.g., wavelets), and at spherical harmonics.


The infinite-dimensional problem

Now what if we want to let $x_n \in \mathbb{R}^d$, $y_n = f(x_n)$? A suitable class of functions to consider is the band-limited functions, those whose Fourier transform has support in $[-\frac12, \frac12]^d$. We are now in an infinite dimensional situation.


Negative results

(a) Suppose in each cube we pick $r$ points uniformly. In this case the sampling inequality fails. Look at $d = 1$. Find $f$ such that the zeros of $f$ are near the even integers. By Borel-Cantelli there will be a long string of intervals where all the samples are near the even integers. A shift of $f$ gives a function with a poor constant for the sampling inequality. One can construct a sequence of functions $f_k$ such that

$$\sum_j |f_k(x_j)|^2 \le \frac{\|f_k\|_2^2}{k}.$$


(b) Another idea for random sampling is the spatial Poisson process. Let $\lambda$ be Lebesgue measure; we want the number of samples in a set $A$ to be Poisson with parameter $\lambda(A)$, with the numbers of points in disjoint sets independent. By Borel-Cantelli there will be a large hole with no samples, and by results of Landau a sampling inequality cannot hold.
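The holes Borel-Cantelli produces are easy to see by simulation (a sketch in $d = 1$; the window lengths and the log-scale comparison are our illustration, not a statement from the talk):

```python
import numpy as np

# A unit-intensity Poisson process on [0, L] typically has holes whose length
# grows with L (on the order of log L), which is what defeats the sampling
# inequality (illustration; parameter choices ours).
rng = np.random.default_rng(5)
for L in (100, 1000, 10000):
    n = rng.poisson(L)                       # number of points in [0, L]
    pts = np.sort(rng.uniform(0.0, L, n))    # given n, the points are i.i.d. uniform
    gaps = np.diff(np.concatenate(([0.0], pts, [L])))
    print(f"L = {L:6d}  largest hole = {gaps.max():.2f}  log L = {np.log(L):.2f}")
```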


We could let $d\lambda/dx = o(1 + \log^+ |x|)$, and there is the same difficulty, but if $d\lambda/dx = c(1 + \log^+ |x|)$ for large enough $c$, then things are OK.


Positive results

We look at functions most of whose energy is not too far away from the origin. We let

$$B_{R,\delta} = \left\{ f \text{ band-limited} : \int_{[-R/2,R/2]^d} |f(x)|^2\,dx \ge (1-\delta)\|f\|_2^2,\ \|f\|_2 \le 1 \right\}.$$

Then the sampling inequality holds.


The idea is to use metric entropy. The spaces $B_{R,\delta}$ are compact. When we discussed metric entropy, we said we balanced the size of the $C_i$'s against the $\lambda_i$'s. (We looked at pairs $f, g$ with $f \in C_{i+1}$, $g \in C_i$, and $f$ and $g$ close.) Given a compact metric space, the covering numbers $N(\varepsilon)$ are defined by $N(\varepsilon) = \log M(\varepsilon)$, where $M(\varepsilon)$ is the smallest number of balls of radius $\varepsilon$ that cover the space. The size of the $C_i$'s is related to the covering numbers.


To get the covering numbers, we estimate eigenvalues. Let $A_R = Q P_R Q$, where

$$\widehat{Qf} = 1_{[-1/2,1/2]^d}\, \widehat{f}, \qquad P_R f = 1_{[-R/2,R/2]^d}\, f.$$

Let $\varphi_n$ be an orthonormal basis for $L^2$ of eigenfunctions of $A_R$. These are products of prolate spheroidal functions. An argument counting the number of eigenvalues of $A_R$ less than $\varepsilon$ leads to a covering number for $B_{R,\delta} \cap B$, where $B$ is the unit ball in $L^2$.


The argument goes something like this. Let $\lambda_n$ be the eigenvalues and let

$$S_\delta = \left\{ c \in \ell^2 : \|c\|_2 \le 1,\ \sum_n \lambda_n |c_n|^2 \ge 1 - \delta \right\}.$$

Since

$$\sum_n \lambda_n |c_n|^2 = \sum_{\lambda_n \ge \varepsilon/2} + \sum_{\lambda_n < \varepsilon/2},$$

it suffices to get a covering number for

$$S_\delta^\varepsilon = \left\{ c \in \ell^2 : \|c\|_2 \le 1,\ \sum_{\lambda_n \ge \varepsilon/2} \lambda_n |c_n|^2 \ge 1 - \delta - \varepsilon/2 \right\}.$$


From analysis, there are good estimates for the number of eigenvalues larger than $\varepsilon/2$. So $S_\delta^\varepsilon$ is a subset of the unit ball in $\mathbb{C}^N$ for the appropriate $N$. And the covering number for the unit ball in $\mathbb{C}^N$ is easy to compute.
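For concreteness, the standard volumetric estimate (a known fact we are supplying, not something stated in the talk) says the unit ball of $\mathbb{C}^N$, viewed as a ball in $\mathbb{R}^{2N}$, is covered by at most $(1 + 2/\varepsilon)^{2N}$ balls of radius $\varepsilon$, so its covering number satisfies $N(\varepsilon) \le 2N \log(1 + 2/\varepsilon)$:

```python
import numpy as np

# Volumetric covering bound for the unit ball of C^N (a standard estimate;
# the function name is ours): log M(eps) <= 2N log(1 + 2/eps).
def covering_number_bound(N: int, eps: float) -> float:
    """Upper bound on the metric entropy of the unit ball of C^N at scale eps."""
    return 2 * N * np.log(1 + 2 / eps)

for eps in (0.5, 0.1, 0.01):
    print(f"eps = {eps:5.2f}  bound = {covering_number_bound(10, eps):8.2f}")
```

Note the bound grows only logarithmically in $1/\varepsilon$, which is what makes the balancing of the $C_i$'s against the $\lambda_i$'s work.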


Once one has the covering numbers, one can apply the metric entropy argument. The result one gets is that the sampling inequality holds, except on an event whose probability goes to 0 exponentially fast in the number of samples.


Some open problems

1. A practical algorithm is not known for reconstructing $f$ in the infinite dimensional case.
2. In the finite dimensional case, our results are asymptotic in the number of samples $r$. What if $r$ is just barely large enough to ensure invertibility of the Toeplitz matrix? What impact does increasing $r$ by 1, or by a factor of 2, have?

