IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 55, NO. 3, MARCH 2007


Generalized Mean-Median Filtering for Robust Frequency-Selective Applications

Tuncer Can Aysal, Student Member, IEEE, and Kenneth E. Barner, Senior Member, IEEE

Abstract—Huber proposed the family of ε-contaminated normal distributions to model environments characterized by heavy-tailed distributions. Based on this two-component mixture distribution, mean-median (MEM) filters were proposed. The MEM filter output is a combination of the sample mean and the sample median, where the observation samples are weighted uniformly. This property constrains MEM filters to the class of smoothers, which lack frequency-selective filtering capabilities. This paper extends MEM filtering to the weighted sum-median (WSM) filtering structure admitting real-valued weights, thereby enabling more general filtering characteristics, i.e., bandpass and high-pass filtering. The proposed filter structure is also well motivated by a presented maximum likelihood (ML) estimate analysis under ε-contaminated statistics. The ML analysis demonstrates the need for a combination of weighted sum (WS) and weighted median (WM) type filters for the processing of signals corrupted by ε-contaminated noise. The WSM filter is statistically analyzed through the determination of the filter output variance and breakdown probability. The combination parameter is optimized to minimize the filter output variance, which is a measure of noise attenuation capability. Moreover, filter design procedures that yield a desired spectral response are detailed. Finally, the proposed WSM filter structure is tested in signal processing applications including low-pass, bandpass, and high-pass filtering and in image processing applications including image sharpening and denoising, evaluating and comparing the WSM filter performance to that of the WS, WM, and MEM filters.

Index Terms—ε-contaminated, maximum likelihood estimate, mixed filtering, nonlinear filtering, weighted mean, weighted median.

I. INTRODUCTION

LINEAR filtering techniques have been used in many signal processing applications, and their popularity mainly stems from their mathematical simplicity and their efficiency in the presence of additive Gaussian noise. It is known that under Gaussian statistics (all samples having the same variance), the maximum likelihood estimate of location is the sample mean, thereby inspiring linear filtering. However, mean filtering fails to effectively remove heavy-tailed noise and performs poorly in the presence of signal-dependent noise. It is also known that under heavy-tailed Laplacian statistics (all samples having the same variance), the maximum likelihood estimate of location is the sample median. Median filtering, with its fine detail preservation and impulsive noise removal characteristics, has


Manuscript received November 1, 2005; revised June 9, 2006. The associate editor coordinating the review of this paper and approving it for publication was Prof. Ioan Tabus. The authors are with the Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716 USA (e-mail: [email protected]. udel.edu; [email protected]). Digital Object Identifier 10.1109/TSP.2006.888882

taken its place in many signal and image processing applications [1], [2]. An important shortcoming of the median that has hampered its use in many other fields is that the filter output is always constrained, by definition, to be one of the samples in the input window. Although this "selection" characteristic is very desirable in image processing applications [3], it results in efficiency losses that are unacceptable for many other applications. It is well known, for example, that the median loses as much as 40% efficiency over the sample mean when used as a location estimator in Gaussian environments [4]. Accordingly, mixture distributions have been proposed to model the underlying Gaussian and impulsive noise characteristics encountered in many applications.

Assume that the noise probability distribution is a scaled version of a known member of the family of ε-contaminated normal distributions proposed by Huber [5]

$$\mathcal{F}_{\epsilon} = \{F : F = (1-\epsilon)\Phi + \epsilon H,\ H \in \mathcal{S}\} \qquad (1)$$

where $\Phi$ is the standard normal distribution, $\mathcal{S}$ is the set of all probability distributions symmetric with respect to the origin (i.e., such that $H(-x) = 1 - H(x)$), and $0 \le \epsilon < 1$ is the known fraction of "contamination." The presence of outliers in a nominally normal sample can be modeled by a distribution with tails that are heavier than those of the normal distribution. Huber found that the least favorable distribution in $\mathcal{F}_{\epsilon}$, which maximizes the asymptotic variance (or, equivalently, minimizes the Fisher information [5]), is given by a Gaussian probability density function (pdf) in the center and a Laplacian pdf in the tails, switching from one distribution to the other at a point whose value depends on the fraction of contamination ε. In this structure, larger contamination fractions correspond to smaller switching points and vice versa.

Inspired by this distribution, and noting that the ML estimates under i.i.d. Gaussian and Laplacian statistics are given by the sample mean and sample median, respectively, a filtering structure based on the convex combination of the sample mean and sample median is proposed in [6]. The MEM filter is defined as follows.

Definition 1: Let $\mathbf{x} = [x(1), x(2), \ldots, x(N)]^T$ denote an observation vector. The MEM filter output is given by

$$y = (1-\lambda)\,\mathrm{MEAN}(\mathbf{x}) + \lambda\,\mathrm{MED}(\mathbf{x}) \qquad (2)$$

where $0 \le \lambda \le 1$, and $\mathrm{MEAN}(\cdot)$ and $\mathrm{MED}(\cdot)$ denote the sample mean and sample median, respectively.
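As a concrete illustration of Definition 1, the following Python sketch computes the MEM output over a sliding window. The function name, window length, boundary handling, and the convention that λ = 0 yields the mean and λ = 1 yields the median are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mem_filter(x, window=11, lam=0.5):
    """Mean-median (MEM) filter sketch: convex combination of the running
    sample mean and running sample median (cf. Definition 1).
    lam = 0 gives pure mean filtering; lam = 1 gives pure median filtering."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    padded = np.pad(x, half, mode="edge")        # simple boundary handling
    y = np.empty_like(x)
    for n in range(len(x)):
        w = padded[n:n + window]                 # observation window
        y[n] = (1.0 - lam) * w.mean() + lam * np.median(w)
    return y
```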



Note that the sample mean and sample median filters are constrained to positive weights, and all the samples in each filter are uniformly weighted. The MEM filter is thus constrained to the class of smoothers, lacking the capability of more general filtering characteristics, such as bandpass and high-pass. Also, note that in [6], the MEM filter is not related to any statistical estimation framework, such as ML. In addition, the combination parameter, in this original formulation, is simply optimized using the well-known asymptotic variances of the sample mean and sample median under the assumption that the subfilter outputs are independent, which is clearly not the case.

Note that the ε-contaminated pdfs exhibit tails heavier than that of the Gaussian and lighter than that of the Laplacian pdf. For noise distributions whose pdf tail weight lies between those of the Gaussian and Laplacian pdfs, one can employ L-filters, defined as follows [7], [8].

Definition 2: The L-filter output is given by

$$y = \sum_{i=1}^{N} w_i\, x_{(i)} \qquad (3)$$

where $w_i$ and $x_{(i)}$ denote the $i$th filter weight and the $i$th-order statistic [9], respectively.

A special case of L-filters, called "trimmed mean" filters, is robust against outliers present in an underlying Gaussian distribution. The coefficients, in this case, are given by [10], [11]

$$w_i = \begin{cases} \dfrac{1}{N - 2\lfloor \alpha N \rfloor}, & \lfloor \alpha N \rfloor + 1 \le i \le N - \lfloor \alpha N \rfloor \\[4pt] 0, & \text{otherwise} \end{cases} \qquad (4)$$

where $\lfloor \cdot \rfloor$ denotes the greatest integer function and $\alpha$ is the trimming fraction. As in the case of the MEM filter, these filter structures are also constrained to the class of smoothers [12]. Although these methods have proven useful, they are nonetheless ad hoc and often too cumbersome for practical use.

In this paper, we propose a novel weighted sum-median (WSM) filtering structure. The proposed filter structure is well motivated by a presented maximum likelihood (ML) estimate analysis under ε-contaminated statistics. The WSM filter admits real-valued weights, enabling general filtering characteristics. Also, the optimization of the combination parameter is performed through the minimization of the output variance, which is often used to measure the noise attenuation capability of filters. Robustness of the proposed WSM filter is analyzed through the determination of the breakdown probability. In addition, spectral design procedures for setting the WSM weights are presented. Finally, signal and image processing application results are presented showing that the WSM filter outperforms weighted sum (WS), weighted median (WM), and MEM filtering schemes.

The remainder of this paper is organized as follows. The relations between ML estimates and WS and WM filtering are presented in Section II, and these approaches are extended to ε-contaminated statistics. The WSM filter is defined in Section III along with the output variance analysis, robustness analysis, and spectral design procedure. Section IV includes simulations evaluating the proposed filter structure and comparing it to WS and WM filters in various applications. Finally, conclusions are drawn in Section V.

II. MAXIMUM LIKELIHOOD ESTIMATION AND FILTERING

Consider the location estimation problem of estimating the constant $\beta$ from the noisy observation data

$$x(i) = \beta + v(i), \qquad i = 1, 2, \ldots, N \qquad (5)$$

where the $v(i)$ are independent and identically distributed zero-mean noise samples. In the following, we consider the optimal combination of samples approached from an ML perspective. ML estimation of location is first reviewed for a set of Gaussian and Laplacian distributed samples, and the concepts are then extended to Huber's impulsive noise model.

A. Gaussian Distribution Case

Consider a set of independent samples, each obeying a Gaussian pdf with (possibly) different variances $\sigma_i^2$. The ML estimate maximizing the likelihood function is given by [13]

$$\hat{\beta} = \frac{\sum_{i=1}^{N} x(i)/\sigma_i^2}{\sum_{i=1}^{N} 1/\sigma_i^2}. \qquad (6)$$

This is simply a standard normalized FIR filter $y = \sum_{i=1}^{N} w_i x(i) / \sum_{i=1}^{N} w_i$, where $y$ is the output and the terms $w_i = 1/\sigma_i^2$ are the FIR filter weights. Enforcing a positivity constraint on the weights constrains the resulting filters to be smoothers. In general practice, however, this constraint is relaxed, enabling FIR filters to take on a wide array of spectral characteristics, such as bandpass and high-pass.

B. Laplacian Distribution Case

A similar connection between ML estimation and filtering is established in the Laplacian pdf case [1], [14]. The estimate maximizing the likelihood function is given by [1], [14]

$$\hat{\beta} = \mathrm{MEDIAN}\big(w_1 \diamond x(1),\ w_2 \diamond x(2),\ \ldots,\ w_N \diamond x(N)\big) \qquad (7)$$

where the $w_i$ are nonnegative weights inversely related to the Laplacian scale parameters and $\diamond$ is the replication operator, defined as $w_i \diamond x(i) = \underbrace{x(i), x(i), \ldots, x(i)}_{w_i\ \text{times}}$. The weight positivity constraint again restricts the defined class of filters to smoothers, but, as in the FIR filter case, this constraint can be relaxed to enable more general filtering characteristics [1]. The filter output in the more general case is given by

$$y = \mathrm{MEDIAN}\big(|w_1| \diamond \mathrm{sgn}(w_1)x(1),\ \ldots,\ |w_N| \diamond \mathrm{sgn}(w_N)x(N)\big) \qquad (8)$$

where

$$\mathrm{sgn}(w) = \begin{cases} 1, & \text{when } w > 0 \\ 0, & \text{when } w = 0 \\ -1, & \text{when } w < 0. \end{cases}$$

Remark 1: Note that the Gaussian distribution case leads to a weighted sum combination of observation samples, while the heavier tailed Laplacian distribution leads to sample selection based on rank order. Rank-order-based selection of the output sample is much more robust than output methodologies based on weighted sums. Indeed, outlier samples, even if infinitely valued, are suppressed by WM filtering as long as the number of outliers is sufficiently small that they are localized in the extremes of the ordered set. Considerable analysis is available in the literature on the detail preservation and outlier rejection characteristics of WM filters [1], [14]–[17].
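To make the sign coupling and replication in (7) and (8) concrete, the sketch below implements a weighted median admitting real-valued weights, following the standard construction from the WM literature: sort the sign-coupled samples, accumulate the weight magnitudes, and select the sample at which the running sum first reaches half the total weight. The function name and interface are illustrative assumptions.

```python
import numpy as np

def weighted_median(x, w):
    """Weighted median with real-valued weights (cf. (8)).
    Negative weights couple their sign to the sample; the weight magnitude
    acts as the (possibly non-integer) replication factor."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    s = np.sign(w) * x                         # sign-coupled samples
    a = np.abs(w)                              # replication magnitudes
    order = np.argsort(s)                      # sort the coupled samples
    s, a = s[order], a[order]
    csum = np.cumsum(a)
    k = np.searchsorted(csum, 0.5 * a.sum())   # first index reaching half the total weight
    return s[min(k, len(s) - 1)]
```

For example, weighted_median([1.0, 100.0, 2.0], [0.3, 0.2, 0.5]) returns 2.0, whereas a weighted mean with the same weights would be pulled toward the outlier value 100.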


C. ε-Contaminated Distribution Case

Consider the case with the $x(i)$s given in (5), where the noise samples are

$$v(i) = (1 - b(i))\, v_G(i) + b(i)\, v_L(i) \qquad (9)$$

where $v_G(i)$ and $v_L(i)$ are RVs with Gaussian pdf, $f_G(\cdot)$, and Laplacian pdf, $f_L(\cdot)$, respectively, and $b(i) \in \{0, 1\}$ with $\Pr\{b(i) = 1\} = \epsilon$, where ε is the fraction of contamination. This two-component model is the basis of a number of robust estimators in the literature [4], [5]. The assumption that the noise is ε-contaminated Gaussian and Laplacian distributed is mainly due to the fact that tails heavier than the Gaussian are provided by the Laplacian pdf, which is used as a contaminant of the Gaussian pdf. The presence of outliers in a nominally normal sample is thus modeled by a Laplacian pdf with tails that are heavier than those of the Gaussian pdf.

Since the ML estimate of location is based on the pdf of the underlying statistics, the CDF and pdf of ε-contaminated statistics are discussed next. The CDF of $v(i)$ is easily derived by conditioning on the mixing RVs. Moreover, differentiating the CDF yields the pdf of the contaminated distribution

$$f_v(v) = (1 - \epsilon)\, f_G(v) + \epsilon\, f_L(v). \qquad (10)$$

Remark 2: Huber [5] proposed the ε-contaminated normal set for modelling heavy-tailed environments and found that the least favorable distribution in this set, which maximizes the asymptotic variance (or, equivalently, minimizes the Fisher information-inverse Cramer-Rao bound [5]), is given by (10). This pdf is Gaussian in the center and Laplacian in the tails, and switches from one to the other at a point whose value depends on the fraction of contamination ε.

The likelihood function, in this case, is given by

$$L(\beta) = \prod_{i=1}^{N} \left[(1 - \epsilon) f_G(x(i) - \beta) + \epsilon f_L(x(i) - \beta)\right]. \qquad (11)$$

Unfortunately, the sum density function formulation makes the likelihood function in (11) intractable in the estimation problem [12]. Note, however, that in (9) each generated sample is Gaussian distributed with probability $(1-\epsilon)$ or Laplacian distributed with probability $\epsilon$. For $N$ samples, hence, we can assume that there are $k$ samples obeying a Gaussian pdf and $N - k$ samples obeying a heavy-tailed Laplacian pdf. To overcome the drawbacks of the sum density function likelihood formulation, we consider the following equivalent likelihood function:1

$$L(\beta) = \prod_{i=1}^{k} f_G(x(i) - \beta)\ \prod_{i=k+1}^{N} f_L(x(i) - \beta). \qquad (12)$$

1We assume, without loss of generality, that the first $k$ samples are generated from the Gaussian pdf and the remaining $(N - k)$ are generated from the heavy-tailed Laplacian pdf.

The ML estimate of the location is given, in this case, by

$$\hat{\beta} = \arg\max_{\beta}\ L(\beta). \qquad (13)$$

Taking the natural log and eliminating constants yields

$$\hat{\beta} = \arg\min_{\beta}\left[\frac{1}{2\sigma_G^2}\sum_{i=1}^{k} (x(i) - \beta)^2 + \frac{1}{\nu}\sum_{i=k+1}^{N} |x(i) - \beta|\right] \qquad (14)$$

where $\sigma_G^2$ and $\nu$ denote the variance of the Gaussian component and the scale parameter of the Laplacian component, respectively. The cost function in (14) is a sum of convex terms and is therefore convex in $\beta$, possessing a global minimum. Note that the above ML estimate has two terms: 1) the sum of the squared deviations for the Gaussian distributed terms and 2) the sum of the absolute deviations for the Laplacian distributed terms. The ML estimate, hence, is referred to as the combined ML estimate (CML).

Remark 3: The CML converges to a weighted mean as $k \to N$. Note that $k = N$ corresponds to the case where all samples are Gaussian distributed, under which the ML estimate is given by the weighted mean. Similarly, the CML converges to a weighted median as $k \to 0$, which corresponds to the case where all samples are Laplacian distributed, under which the ML estimate is given by the weighted median. The CML estimate thus inspires a combination of weighted-sum and weighted-median type filters for the processing of signals with ε-contaminated statistics. Next, a filtering structure based on a convex combination of weighted-sum and weighted-median type filters is proposed and statistically analyzed.

III. WSM FILTERING

Recall that in Section II it is shown that the ML estimate under Gaussian and Laplacian statistics reduces to a weighted sum and a weighted median combination of samples, respectively. In addition, the ML estimate under ε-contaminated distributions reduces to the CML, which incorporates both the sum of squared deviations of the Gaussian distributed terms and the sum of absolute deviations of the Laplacian distributed terms in the minimization problem. The CML suggests that a combination of WS and WM type filters is required for the processing of signals under ε-contaminated statistics. In practice, however, determining the distribution of a given sample is a very difficult task. The performance of the CML, thus, needs to be approximated. A convex combination of WS and WM filters is accordingly defined as follows.

Definition 3: Given the observation sample vector $\mathbf{x} = [x(1), x(2), \ldots, x(N)]^T$, the output of the WSM filter is given by

$$y = (1 - \lambda) \sum_{i=1}^{N} w_s(i)\, x(i) + \lambda\, \mathrm{MEDIAN}\big(|w_m(1)| \diamond \mathrm{sgn}(w_m(1))x(1),\ \ldots,\ |w_m(N)| \diamond \mathrm{sgn}(w_m(N))x(N)\big) \qquad (15)$$

where $w_s(i)$ and $w_m(i)$ for $i = 1, 2, \ldots, N$ denote the sub-WS and sub-WM filter coefficients, respectively, and $0 \le \lambda \le 1$.
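A minimal sketch of the WSM output in Definition 3, reusing the weighted_median routine from the earlier listing. The function names, boundary handling, and the convention that λ = 0 yields the sub-WS output and λ = 1 the sub-WM output are illustrative assumptions consistent with (15).

```python
import numpy as np

def wsm_output(x, w_s, w_m, lam):
    """WSM filter output for one observation window (cf. (15)):
    convex combination of a weighted-sum (WS) subfilter and a
    weighted-median (WM) subfilter, both with real-valued weights."""
    x = np.asarray(x, dtype=float)
    ws_out = np.dot(w_s, x)                   # sub-WS (linear FIR) output
    wm_out = weighted_median(x, w_m)          # sub-WM output, defined earlier
    return (1.0 - lam) * ws_out + lam * wm_out

def wsm_filter(x, w_s, w_m, lam):
    """Apply the WSM filter with a sliding window of length len(w_s)."""
    N = len(w_s)
    half = N // 2
    padded = np.pad(np.asarray(x, dtype=float), half, mode="edge")
    return np.array([wsm_output(padded[n:n + N], w_s, w_m, lam)
                     for n in range(len(x))])
```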


Fig. 1. The WSM filtering framework: WS and WM subfilters process the observation samples, followed by a convex combination of the subfilter outputs.

Note that the WSM filter reduces to the MEM filter when all the observation samples are weighted uniformly. The WSM filtering structure is illustrated in Fig. 1, which clearly shows the WS and WM processing of the observation samples (admitting real-valued weights) followed by a convex combination of the subfilter outputs. The following property of the WSM is analogous to the properties of the CML estimate derived in Section II, where, for $k = N$ and $k = 0$, the CML estimate reduces to the ML estimate under Gaussian and Laplacian statistics, respectively.

Property 1: The WSM filter reduces to the WS and WM filters for the $\lambda = 0$ and $\lambda = 1$ cases, respectively.

The WSM filter can thus be tuned with the parameter $\lambda$ to adjust to the changing statistics of the noise. Specifically, $\lambda$ can be set closer to zero to obtain linear filtering characteristics under Gaussian statistics, or be set to unity to obtain robustness to heavy-tailed noise. Note that unlike the MEM filtering structure proposed in [6], which is simply a combination of the sample mean and the sample median, the observation samples in the WSM filter are weighted utilizing real-valued weights in each subfilter, yielding more general filtering characteristics in the WSM filtering structure.

A. Statistical Properties and Optimization of λ

Properties of the WSM structure are studied here in order to gain a better appreciation of the filter characteristics. We consider statistics of the WSM filter, deriving the filter output variance and the breakdown probability illustrating the robustness of the WSM filter. Also, the combination parameter $\lambda$ is optimized to yield the filter with minimum output variance.

1) Output Variance: The second-order central output moment is quite often used to measure the noise attenuation capability of a filter, as it quantifies the spread of the output samples with respect to their mean value. The WSM filter output variance, $\sigma_y^2(\lambda)$, is a function of $\lambda$ and is given by

$$\sigma_y^2(\lambda) = E\{(y - E\{y\})^2\} \qquad (16)$$

where $E\{\cdot\}$ denotes statistical expectation. Utilizing (15) and performing algebraic manipulations yields

$$\sigma_y^2(\lambda) = (1-\lambda)^2 \sigma_s^2 + \lambda^2 \sigma_m^2 + 2\lambda(1-\lambda)\sigma_{sm} \qquad (17)$$

where $\sigma_s^2$, $\sigma_m^2$, and $\sigma_{sm}$ denote the sub-WS filter output variance, the sub-WM filter output variance, and the covariance of the subfilter outputs, respectively. The determination of these parameters is extensively studied in the literature [9], [15], [18]–[20]. Hence, these statistics are readily determined for a given pdf and are explicitly utilized later in the paper in the breakdown probability calculation section.

The noise attenuation capability of the filters, determined by the filter output variance, is an important measure of filter performance. Given the filter weights $\mathbf{w}_s$ and $\mathbf{w}_m$, the WSM filter attaining the minimum output variance (i.e., minimized over $\lambda$) is calculated as

$$\lambda^* = \arg\min_{0 \le \lambda \le 1} \sigma_y^2(\lambda) \qquad (18)$$

the solution to which provides the best filtering performance. The following corollary gives the parameter value minimizing the filter output variance.

Corollary 1: The $\lambda$ value minimizing the WSM filter output variance is given by

$$\lambda^* = \frac{\sigma_s^2 - \sigma_{sm}}{\sigma_s^2 + \sigma_m^2 - 2\sigma_{sm}}. \qquad (19)$$

Proof: Differentiating (17) with respect to $\lambda$ yields

$$\frac{d\sigma_y^2(\lambda)}{d\lambda} = -2(1-\lambda)\sigma_s^2 + 2\lambda\sigma_m^2 + 2(1-2\lambda)\sigma_{sm}. \qquad (20)$$

Setting (20) to zero gives the desired result. In addition, the second derivative is given by

$$\frac{d^2\sigma_y^2(\lambda)}{d\lambda^2} = 2\left(\sigma_s^2 + \sigma_m^2 - 2\sigma_{sm}\right). \qquad (21)$$

Note that the second derivative is always nonnegative since

$$\sigma_{sm} \le \sigma_s \sigma_m \le \frac{\sigma_s^2 + \sigma_m^2}{2} \qquad (22)$$

which implies that

$$\sigma_s^2 + \sigma_m^2 - 2\sigma_{sm} \ge 0 \qquad (23)$$

and indicates that $\sigma_y^2(\lambda)$ is a convex function [18]. Thus, $\lambda^*$ is the global minimum of $\sigma_y^2(\lambda)$.

Note that, given the weights, the optimum $\lambda^*$ is dependent on the noise statistics. This is in contrast to the MEM filter, which has a fixed combination parameter [6]. This optimization of the WSM filter yields a more robust and flexible structure for tuning the filter to varying noise characteristics.

To visually illustrate the convexity property of the WSM filter output variance function, we plot $\sigma_y^2(\lambda)$ in Fig. 2 for a WSM filter of window size $N = 11$ operating on ε-contaminated noise with $\epsilon = 0.4$ and Gaussian and Laplacian component parameters 1 and 4, respectively. In this example, 10 000 ε-contaminated distributed samples are generated and passed through the WSM filter with varying $\lambda$ parameters, and the filter output variance is reported in Fig. 2. The observation samples are weighted uniformly for both subfilter formulations, which gives the most smoothing and robust performance. The theoretical calculations of $\sigma_s^2$, $\sigma_m^2$, and $\sigma_{sm}$ yield, through (19), the optimal $\lambda^*$. Note that the curve is a convex function of $\lambda$. In addition, the simulated results agree with the theoretical results, as the plot minimum is attained at $\lambda^*$.
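The Fig. 2 experiment can be reproduced in spirit with the following Monte Carlo sketch, which sweeps λ and measures the output variance of a uniformly weighted WSM filter on ε-contaminated noise. The window size N = 11 and ε = 0.4 follow the figure caption; the exact Gaussian and Laplacian scales are assumptions, and the helper routines come from the earlier sketches.

```python
import numpy as np

def eps_contaminated(n, eps=0.4, sigma_g=1.0, scale_l=4.0, rng=None):
    """Draw n samples from (1 - eps) * Gaussian + eps * Laplacian (cf. (10))."""
    rng = np.random.default_rng(rng)
    is_impulse = rng.random(n) < eps
    g = rng.normal(0.0, sigma_g, n)
    l = rng.laplace(0.0, scale_l, n)
    return np.where(is_impulse, l, g)

N = 11
w_uniform = np.ones(N) / N                      # uniform sub-WS weights
noise = eps_contaminated(10_000)

lambdas = np.linspace(0.0, 1.0, 21)
variances = []
for lam in lambdas:
    y = wsm_filter(noise, w_uniform, np.ones(N), lam)   # wsm_filter defined earlier
    variances.append(np.var(y))

lam_star = lambdas[int(np.argmin(variances))]           # empirical minimizer of (18)
print(f"empirical lambda* ~= {lam_star:.2f}")
```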


Fig. 2. Filter output variance as a function of λ. The noise parameters are ε = 0.4, with Gaussian and Laplacian component parameters 1 and 4, respectively. The WSM filter window size is N = 11. The observation samples are weighted uniformly for both subfilter formulations.

Fig. 3. Breakdown probability analysis: the BDPs of the WS, WM, and WSM filters are given in solid, dashed, and dotted lines, respectively. The noise parameters are ε = 0.4, with Gaussian and Laplacian component parameters 1 and 4, respectively.

2) Breakdown Probability: A direct measure of filter robustness is given by the breakdown probability (BDP), which is defined as the probability of an impulse occurring at the filter output [19], [20]. The BDP is derived by first selecting a threshold $T$, such that if a noise or output sample exceeds this level in magnitude, the sample is regarded as an impulse. Also, let the symmetric distribution of the i.i.d. input samples be $F(\cdot)$. Then the probability of an input sample being an impulse (positive or negative) is $p = 2[1 - F(T)]$.

The BDP of selection-type filters, such as WM filters, can be established utilizing the rank selection probabilities (RSPs), where the $j$th RSP is defined as the probability that the filter output is the $j$th ranked sample, i.e., $P_j = \Pr\{y = x_{(j)}\}$ for $j = 1, 2, \ldots, N$ [20]. The RSPs can be established for any WM filter with integer-valued weights [19]–[21], and any WM filter with real-valued weights can be represented by an equivalent WM filter with integer-valued weights [20], [21]. Thus, for any WM filter, the RSPs can be established. Last, since scaling an RV trivially alters its distribution, a normalized scale is assumed for simplicity.

The BDP of the WSM filter is statistically related to the filter output density, $f_y(\cdot)$. In order to calculate the output distribution of the WSM filter, the subfilter output distributions are first calculated. Let $f_{y_s}(\cdot)$ and $f_{y_m}(\cdot)$ denote the output pdfs of the sub-WS and sub-WM filters, respectively. For i.i.d. observation samples and window size $N$, $f_{y_s}(\cdot)$ is given by

$$f_{y_s}(y) = \frac{1}{|w_s(1)|} f\!\left(\frac{y}{w_s(1)}\right) * \cdots * \frac{1}{|w_s(N)|} f\!\left(\frac{y}{w_s(N)}\right) \qquad (24)$$

where $f(\cdot)$ and $*$ denote the input pdf and the convolution operation, respectively. The output pdf of the sub-WM filter is given by

$$f_{y_m}(y) = \sum_{j=1}^{N} P_j\, f_{(j)}(y) \qquad (25)$$

where $f_{(j)}(\cdot)$ is the pdf of the $j$th-order statistic [9]

$$f_{(j)}(y) = \frac{N!}{(j-1)!\,(N-j)!}\, F(y)^{j-1}\,[1 - F(y)]^{N-j}\, f(y). \qquad (26)$$

The outputs of the subfilters are clearly coupled, making an exact derivation of the final output pdf unwieldy. Thus, we assume that the subfilter outputs are independent. Although this is a somewhat crude assumption, it makes the analysis tractable and yields fairly accurate results. Under the independence assumption, the WSM filter output distribution is expressed as

$$f_y(y) = (f_a * f_b)(y), \quad f_a(t) = \frac{1}{1-\lambda} f_{y_s}\!\left(\frac{t}{1-\lambda}\right), \quad f_b(t) = \frac{1}{\lambda} f_{y_m}\!\left(\frac{t}{\lambda}\right). \qquad (27)$$

The BDP of the WSM filter, $P_B$, is then simply given by

$$P_B = \Pr\{|y| > T\} = 2\left[1 - F_y(T)\right] \qquad (28)$$

where $F_y(\cdot)$ denotes the output CDF. The BDPs of the WS and WM filters, along with the BDP of the WSM filter, are given in Fig. 3 for ε-contaminated mixture distributed noise with ε = 0.4 and Gaussian and Laplacian component parameters 1 and 4, respectively. The BDP of a WS filter is given by $2[1 - F_{y_s}(T)]$. In addition, the BDP of a WM filter is determined as

$$P_B^{\mathrm{WM}} = \sum_{j=1}^{N} P_j\left[1 - F_{(j)}(T) + F_{(j)}(-T)\right] \qquad (29)$$

where $F_{(j)}(\cdot)$ is the CDF of the $j$th-order statistic given by [9] $F_{(j)}(y) = \sum_{l=j}^{N} \binom{N}{l} F(y)^l [1 - F(y)]^{N-l}$. The observation samples in all filter formulations are uniformly weighted, yielding the most smoothing and robust filtering characteristics.
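The BDP comparison in Fig. 3 can also be approximated empirically. The sketch below estimates the probability that the magnitude of a filter's output exceeds a threshold T under ε-contaminated noise, reusing the helper routines from the earlier listings. The threshold value, the chosen λ for the WSM case, and the mixture parameters are illustrative assumptions; the analytic expressions (24)–(29) remain the authoritative route.

```python
import numpy as np

def empirical_bdp(filter_fn, T=3.0, n=50_000, rng=0):
    """Monte Carlo breakdown probability: fraction of output samples whose
    magnitude exceeds the impulse threshold T (cf. (28))."""
    noise = eps_contaminated(n, eps=0.4, sigma_g=1.0, scale_l=4.0, rng=rng)
    y = filter_fn(noise)
    return np.mean(np.abs(y) > T)

N = 11
w_u = np.ones(N) / N
bdp_ws  = empirical_bdp(lambda x: wsm_filter(x, w_u, np.ones(N), 0.0))  # pure WS
bdp_wm  = empirical_bdp(lambda x: wsm_filter(x, w_u, np.ones(N), 1.0))  # pure WM
bdp_wsm = empirical_bdp(lambda x: wsm_filter(x, w_u, np.ones(N), 0.6))  # WSM, assumed lambda
print(bdp_ws, bdp_wm, bdp_wsm)
```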


TABLE I SUB-WS AND SUB-WM FILTER WEIGHTS FOR LOW-PASS FILTERING

Notice that the WS filter is the most sensitive to the impulsive components of the noise, yielding the largest BDP compared to the WM and WSM filters. In addition, the WM filter, well known for its impulsive-type noise rejection, yields greater rejection of impulsive components. The WSM filter yields the lowest BDP with the optimal $\lambda^*$, since the WSM filter is more efficient than 1) the WM filter when most of the observation samples obey the Gaussian distribution and 2) the WS filter when outliers are present in the observation samples.

B. Design of WSM Filters

The filter parameters $\mathbf{w}_s$, $\mathbf{w}_m$, and $\lambda$ can be jointly designed according to a statistical error criterion, such as the mean absolute error (MAE) or mean-square error (mse), with a straightforward extension of methods readily available in the literature for optimizing weighted sum [22] and weighted median filters [12]. However, a large number of engineering applications require low-pass, bandpass, or high-pass frequency filtering characteristics. Equalization, deconvolution, prediction, beamforming, and system identification are example applications where filters having low-pass, bandpass, or high-pass characteristics are of fundamental importance. The WSM filter weight design procedure detailed here is thus aimed at obtaining such desired frequency-selective characteristics.

Recall that the WSM filter is a convex combination of WS and WM subfilter outputs. Accordingly, the design of the WSM filter weights reduces to the design of the subfilters and the setting of $\lambda$ (previously addressed). The sub-WS filter can be set through a number of optimization and design tools readily available for linear filters. In addition, the sub-WM filter can be designed utilizing the recently proposed synthesis algorithm [23]. This algorithm provides a closed-form solution for the spectral design of WM filters, avoiding the need for adaptive algorithms requiring training data sets. The following provides a formal statement and review of the synthesis algorithm. Following this, the developed WSM filter spectral design procedure is given.

The linear FIR smoother coefficients have been shown by Mallows to have an intimate relationship to the statistical characteristics of the corresponding nonlinear smoother [24]. In fact, the coefficients of the linear smoother closest to a given nonlinear smoother satisfy $h_i = p_i$ for $i = 1, 2, \ldots, N$, where $p_i$ is the probability that the output value of the nonlinear smoother is equal to the $i$th input sample $x(i)$. Accordingly, $p_i$ is referred to as the sample selection probability (SSP) [20]. The WM smoother weights are found by minimizing the mse cost function

$$G(\mathbf{w}) = \sum_{i=1}^{N} \left[h_i - p_i(\mathbf{w})\right]^2 \qquad (30)$$

where $\mathbf{p}(\mathbf{w})$ denotes the SSP vector for a given weight vector $\mathbf{w}$. The optimization is carried out with a gradient-based algorithm designed to

find the WM smoother closest to a specified WS smoother in the mse sense. The extension of this algorithm to WS and WM filters admitting real-valued weights is accomplished as follows [23].
• Given the desired frequency characteristics, design the best WS filter $\mathbf{h}$ using one of the traditional design tools for linear filters.
• Decouple the signs of the coefficients to form the vectors $\mathbf{h}_{\mathrm{sgn}} = \mathrm{sgn}(\mathbf{h})$ and $\mathbf{h}_{\mathrm{abs}} = |\mathbf{h}|$.
• After normalizing the vector $\mathbf{h}_{\mathrm{abs}}$, use the synthesis algorithm defined for smoothers to find the closest WM smoother weights $\mathbf{w}_{\mathrm{abs}}$.
• The WM filter with the spectral response closest to the desired one is given by recoupling the signs, i.e., taking the element-wise product of $\mathbf{h}_{\mathrm{sgn}}$ and $\mathbf{w}_{\mathrm{abs}}$.

The WSM filter, with the desired spectral response and noise attenuation characteristics, is thus designed as follows.
1) Design the best sub-WS component $\mathbf{w}_s$ of the filter using the traditional tools already available for WS filters.
2) Use the above synthesis algorithm to obtain the sub-WM component weight vector $\mathbf{w}_m$.
3) Given the designed subfilter weights, obtain the optimal $\lambda^*$ by using (19).

Hence, the desired frequency-selective characteristics are accomplished through the subfilter weight design algorithm, and the noise attenuation is achieved by optimizing the combination parameter such that the filter output variance is minimized. The WSM filter is thus equipped with both the desired frequency-selective and noise attenuation features. A code sketch of this design flow is given below.

The following experiment is used to test the proposed WSM filter design algorithm. A linear FIR (WS) filter with low-pass characteristics is designed using MATLAB's fir1 command. In addition, its WM counterpart is designed using the synthesis algorithm [23] reviewed here. Similar procedures are repeated for the bandpass and high-pass filtering cases. The designed sub-WS and sub-WM filter weights are given in Tables I, II, and III for the low-pass, bandpass, and high-pass filtering cases, respectively. The cut-off frequencies are 0.25, [0.35 0.65], and 0.75 for the low-pass, bandpass, and high-pass filtering cases, respectively. Note that both filter formulations, in all considered cases, exhibit similar weight distributions. This is expected, since it is shown in [1] and [23] that WS and WM filters with similar weighting manifest similar filtering characteristics.

The frequency characteristics of the formed WSM filters are estimated as follows: 50 realizations of 1000-sample standard white ε-contaminated noise sequences are fed into the filter, and the spectra of the outputs are estimated using the Welch method [25]. The results are averaged to yield the frequency responses shown in Fig. 4.
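The design flow above can be sketched in Python as follows. The sub-WS low-pass prototype uses scipy.signal.firwin in place of MATLAB's fir1; the WM synthesis algorithm of [23] is not reproduced here, so the sub-WM magnitudes are simply approximated by the normalized FIR magnitudes, which is only a rough stand-in for the SSP-matching optimization. The cutoff and window length follow the low-pass case in the text; everything else is an illustrative assumption.

```python
import numpy as np
from scipy.signal import firwin, welch

# 1) Sub-WS design: linear-phase low-pass FIR prototype (normalized cutoff 0.25).
N = 11
h = firwin(N, 0.25)                       # plays the role of MATLAB's fir1
w_s = h

# 2) Sub-WM design: decouple signs and magnitudes of the prototype.
h_sgn = np.sign(h)
h_abs = np.abs(h) / np.abs(h).sum()       # normalized magnitudes
# The synthesis algorithm of [23] would map h_abs to the closest WM smoother
# weights; here h_abs is reused directly as a crude placeholder.
w_m = h_sgn * h_abs

# 3) The optimal lambda from (19) would use the subfilter output statistics;
#    a fixed illustrative value is used here.
lam = 0.3

# Estimate the WSM frequency response by filtering white eps-contaminated
# noise and averaging Welch spectra, as described in the text.
spectra = []
for trial in range(50):
    x = eps_contaminated(1000, eps=0.1, sigma_g=1.0, scale_l=1.0, rng=trial)
    y = wsm_filter(x, w_s, w_m, lam)      # helpers from the earlier sketches
    f, Pyy = welch(y, nperseg=256)
    spectra.append(Pyy)
response = np.mean(spectra, axis=0)
```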


TABLE II SUB-WS AND SUB-WM FILTER WEIGHTS FOR BANDPASS FILTERING

TABLE III SUB-WS AND SUB-WM FILTER WEIGHTS FOR HIGH-PASS FILTERING

In addition to the WSM filter output spectra, the output spectra of the WS (FIR) and WM filters are shown in Fig. 4. The plots show that the WSM filter can take on a wide range of frequency-selection characteristics. The characteristics of the filters are very similar in the passband, whereas the major difference is the range of attenuation provided by the WS and WM filters. The robust WSM filter provides better attenuation than the WM filter, but worse than that of the WS filter. In all cases, the combination parameter is set to the optimal value obtained via (19).

IV. SIMULATION RESULTS

Although the statistical analysis points to the fact that the WSM filter outperforms WS and WM filters, their performance is best shown through examples. The performance of the proposed convex combination of WS and WM filters is tested and compared here to traditional WS, WM, linear combination of weighted medians (LCWM) [26], and MEM filtering through simulations. Considered here are the signal processing applications of low-pass, bandpass, and high-pass filtering, and the image processing applications of image sharpening and image denoising. Also, the validity of the proposed filtering structure is verified by comparing it to the true ML estimate.

The choice of criteria by which to measure the performance of filters presents certain difficulties. In particular, it is clear that a global performance measure, such as mse, only gives a partial picture of true performance. For instance, one filter may perform well when operating on Gaussian samples but poorly on outliers, whereas another may perform poorly on Gaussian samples but well on outliers; yet, the two could have the same mse. MAE, in contrast, tends to give less influence to large errors. In addition, the second-order central output moment is quite often used to measure the noise attenuation capability of filters, as it quantifies the spread of the output samples with respect to their mean value. To assess the performance of the filters, the output variance and the MAE and mse between the filtered and the desired signals are evaluated to quantitatively compare the filters.

A. Frequency-Selective Filtering

The frequency-selection capabilities of the filters, together with their noise rejection potential, are tested as follows. The sum of two sinusoids, with frequencies chosen such that one is in the passband and one is in the stopband of each filter, is corrupted with additive ε-contaminated mixed

Gaussian and Laplacian noise (with the contamination setting of [6]) and with different levels of impulsiveness (1, 2, and 4). The normalized frequencies of the sinusoids are 0.1 and 0.5 for the low-pass and bandpass filtering cases and 0.5 and 0.9 for the high-pass filtering case. The WSM filters designed previously are utilized in this set of simulations (a code sketch of this test setup follows the high-pass filtering discussion below).

1) Low-Pass Filtering: The two-tone input signal with normalized frequencies 0.1 and 0.5 is utilized in this experiment. Table IV shows the quantitative filter output variance and L1- and L2-norm error measurements for varying impulsiveness of the noise (1, 2, 4). It can be seen from the table that the WSM filter yields the lowest output variance, MAE, and mse in all cases. In addition, as the impulsiveness of the noise increases, the performance gain provided by the WSM filter over the WS and WM filters, in the MAE and mse senses, increases. It is also noted that although the WS filter yields a lower variance than the WM filter in the less impulsive cases, as the impulsiveness increases, the WM filter yields higher noise attenuation than the WS filter. The LCWM yields poor results, especially in the sense of variance, since its structure is based on weighted sum combinations of sub-WM filters operating on subsample sets.

2) Bandpass Filtering: The WS, WM, LCWM, and WSM filters are also evaluated in a bandpass frequency-selective filtering scenario. The quantitative filter output variances and L1- and L2-norm error measurements are tabulated in Table V for the bandpass filtering case. Similar to the low-pass filtering case, it can be seen from Table V that the WSM filter yields better frequency selection with less variance. In all cases, the WSM filter provides better noise attenuation with less output variance and better frequency-selection capabilities with lower MAE and mse values.

3) High-Pass Filtering: The WS, WM, LCWM, and WSM filters are also tested in a high-pass filtering application. The two-tone input signal with normalized frequencies 0.5 and 0.9 is utilized in this experiment. Similar to the previous cases, the corrupted two-tone input signal is passed through high-pass WS, WM, LCWM, and WSM filters. The quantitative filter output variances and L1- and L2-norm error measurements are tabulated in Table VI for the high-pass filtering case. As in the low-pass and bandpass filtering cases, the WSM filter provides the best noise attenuation and frequency-selection capabilities.
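A sketch of the frequency-selective test setup described above: a two-tone signal with one component in the passband and one in the stopband, corrupted by ε-contaminated Gaussian-Laplacian noise and then low-pass filtered with the WSM design from the earlier listing. The amplitudes, sequence length, and exact noise scales are illustrative assumptions; the normalized tone frequencies follow the low-pass case in the text.

```python
import numpy as np

def two_tone_test_signal(n=1000, f1=0.1, f2=0.5):
    """Sum of two sinusoids at normalized frequencies f1 (passband) and
    f2 (stopband), as used in the low-pass filtering experiment.
    Frequencies are normalized to the Nyquist rate (Nyquist = 1)."""
    t = np.arange(n)
    return np.sin(np.pi * f1 * t) + np.sin(np.pi * f2 * t)

clean = two_tone_test_signal()
noisy = clean + eps_contaminated(len(clean), eps=0.1, sigma_g=1.0, scale_l=2.0)

# Low-pass WSM filtering of the corrupted two-tone signal, reusing w_s, w_m,
# lam, and wsm_filter from the design sketch above.
filtered = wsm_filter(noisy, w_s, w_m, lam)
mae = np.mean(np.abs(filtered - clean))
mse = np.mean((filtered - clean) ** 2)
```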


TABLE IV AVERAGE VARIANCE, MAE, AND MSE RESULTS OF 50 TRIALS FOR -CONTAMINATED MIXED GAUSSIAN AND LAPLACIAN NOISE FOR LOW-PASS FILTERING

4) Influence of the Contamination Parameter ε: Note that the parameter ε in the mixture noise determines the impulsiveness of the distribution. When ε = 0, the mixed noise is purely Gaussian, and when ε = 1, it is purely Laplacian. Fig. 5 shows the influence of the parameter ε on the WS, WM, and WSM filtering performances. The formulation and configuration of the low-pass filtering case is used. As expected, in low-fraction-of-contamination cases, WS filtering provides better noise attenuation than WM filtering, and WM filtering provides higher noise attenuation than WS filtering in high-fraction-of-contamination cases. The WSM filter, however, provides the most noise attenuation, yielding the smallest output variance in all cases. In an error-norm sense, the WS filter outperforms the WM filter, since the WM filter distorts the signal and yields a poor frequency-selective operation. On the other hand, WSM filtering combines the superior frequency-selective characteristics of the WS filter with the robustness characteristics of the WM filter to yield the most accurate results in all cases, other than some isolated low-fraction contamination cases. This is a result of the optimization criterion, which is the minimization of the filter output variance, being different from the evaluation criteria.

5) Effect of the Impulsiveness Parameter: Another important parameter of the ε-contaminated distribution is the impulsiveness, determined by the Laplacian scale. The higher its value, the heavier the tails of the mixture distribution. The effect of this parameter on the filter performances is tested. Fig. 6 shows the influence of the mixture-distribution impulsiveness on the filter output variance, MAE, and mse values. Note that the WSM filter provides the best performance in all senses. Also, note that as the impulsiveness increases, the sub-WM filter yields a smaller output variance than the sub-WS filter, and the λ parameter, hence, converges to 1. This corresponds to the case in which the WSM filter reduces to the WM filter. This effect is also apparent in the plots.

B. Image Processing Applications

The proposed filtering structure is also evaluated in the image processing applications of unsharp masking [27] and denoising.

1) Unsharp Masking: In practice, image sharpening consists of adding a scaled version of a high-pass filtered image to the original image (Fig. 7). The sharpening operation can be represented by

Fig. 4. Estimated frequency responses of WS (solid), WM (dashed), and WSM (dotted) filters. (a) Low-pass. (b) Bandpass. (c) High-pass.

$$y(m, n) = x(m, n) + \gamma\, \mathcal{F}_{\mathrm{HP}}\{x(m, n)\} \qquad (31)$$

where $x(m, n)$ is the original pixel value at the coordinates $(m, n)$, $\mathcal{F}_{\mathrm{HP}}\{\cdot\}$ is the high-pass filtering operation, $\gamma$ is a tuning parameter such that $\gamma \ge 0$, and $y(m, n)$ is the sharpened pixel at the coordinates $(m, n)$. The value taken by $\gamma$ depends on the grade of sharpness desired. Increasing $\gamma$ yields a more sharpened image. If background noise is present, however, increasing $\gamma$ will rapidly amplify the noise.


TABLE V AVERAGE VARIANCE, MAE, AND MSE RESULTS OF 50 TRIALS FOR -CONTAMINATED MIXED GAUSSIAN AND LAPLACIAN NOISE FOR BANDPASS FILTERING

TABLE VI AVERAGE VARIANCE, MAE, AND MSE RESULTS OF 50 TRIALS FOR -CONTAMINATED MIXED GAUSSIAN AND LAPLACIAN NOISE FOR HIGH-PASS FILTERING

Fig. 5. Influence of the contamination fraction ε on WS (solid), WM (dashed), and WSM (dotted) filtering performances. (a) Filter output variance. (b) MAE. (c) mse.

Fig. 6. Influence of the impulsiveness parameter on WS (solid), WM (dashed), and WSM (dotted) filtering performances. (a) Filter output variance. (b) MAE. (c) mse.

The key point in the effective sharpening process lies in the choice of the high-pass filtering operation. Traditionally, WS filters have been used to implement the high-pass filter. However, linear techniques can lead to rapid performance degradation should the input image be corrupted with noise. To overcome this drawback and utilize the selectivity property, WM filters have recently been introduced into the unsharp masking

application [12], [28]. Hence, the WSM filter can be utilized to provide performance improvements in environments corrupted by contaminated statistics.2

2Although more elaborate WM sharpeners/denoisers are reported in the literature, such as the permutation WM [28] and center WM [12], we utilize the conventional WM methods, since any such extensions to WM-type methods can simply be fused with a WS structure utilizing the convex combination technique to obtain performance improvements under contaminated statistics.


Fig. 7. Image sharpening by high-frequency emphasis: unsharp masking.

Fig. 9. (a: top-left) Original image corrupted by ε-contaminated noise, sharpened with (b: top-right) the WS sharpener, (c: bottom-left) the WM sharpener, and (d: bottom-right) the WSM sharpener.

Fig. 8. (a: top-left) Original image sharpened with (b: top-right) the WS sharpener, (c: bottom-left) the WM sharpener, and (d: bottom-right) the WSM sharpener.

The high-pass mask used for the unsharp masking applications [12], [27], [29], with (unnormalized) high-pass WS and WM filters, is the filter kernel of the Laplacian operator: a 3 × 3 kernel with a positive center weight surrounded by negative weights (32). Thus, in the WSM formulation, both the sub-WS and sub-WM filter masks are chosen as in (32).

In Fig. 8, the performance of WSM filter image sharpening is compared with that of traditional image sharpening based on WS and WM filters. The parameter $\gamma$ is set to 0.5 [29], 2 [12], and 1.5 for the WS, WM, and WSM sharpener cases, respectively. It is observed that the WS and WSM filters provide better sharpening than the WM filter in the noise-free case.

The filters are also tested in an ε-contaminated noise environment. The original image corrupted by ε-contaminated noise is shown in Fig. 9(a). The noisy image is sharpened using the WS, WM, and WSM filters in the unsharp masking application, and the corresponding output images are shown in Fig. 9(b), (c), and (d), respectively. The parameter $\gamma$ is set to 0.25 [29], 1.75 [12], and 0.5 for the WS, WM, and WSM sharpener cases, respectively. Note that sharpening with the WSM filter does not suffer from noise amplification to the extent that sharpening with the WM or WS filters does.
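A sketch of the unsharp masking operation (31) using a WSM high-pass filter. The specific 3 × 3 Laplacian kernel below is an assumed example of the mask referred to in (32) (the paper's exact kernel is not reproduced here), and the weighted-median high-pass construction reuses the weighted_median helper from the earlier listing.

```python
import numpy as np

# Assumed example Laplacian kernel (positive center, negative neighbors).
LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def wsm_unsharp(img, gamma=0.5, lam=0.5):
    """Unsharp masking (cf. (31)) with a WSM high-pass filter: the high-pass
    response is a convex combination of a linear Laplacian response and a
    weighted-median response using the same mask as the weights."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    out = img.copy()
    rows, cols = img.shape
    w = LAPLACIAN.ravel()
    for m in range(rows):
        for n in range(cols):
            window = padded[m:m + 3, n:n + 3].ravel()
            hp_ws = np.dot(w, window)                # sub-WS high-pass response
            hp_wm = weighted_median(window, w)       # sub-WM high-pass response
            out[m, n] = img[m, n] + gamma * ((1 - lam) * hp_ws + lam * hp_wm)
    return np.clip(out, 0, 255)
```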

2) Image Denoising: Recall that the MEM filter [6] exhibits only smoothing characteristics and, thus, is utilized strictly to remove ε-contaminated noise from images. Also, the MEM combination parameter is optimized to minimize the filter output asymptotic variance under the assumption that the subfilter outputs are independent. Let $\lambda_{\mathrm{MEM}}$ and $\lambda^*_{\mathrm{WSM}}$ denote the optimal combination parameters for the MEM and WSM filtering schemes, respectively. It is clear that the MEM filter is a special case of the proposed WSM structure where all observation samples are weighted uniformly. In the following, the effectiveness of $\lambda^*_{\mathrm{WSM}}$ over $\lambda_{\mathrm{MEM}}$ is shown through image denoising simulations.

In order to evaluate the performances of the WSM and MEM filters in the presence of ε-contaminated noise, the image shown in Fig. 10(a) is corrupted by ε-contaminated noise with the parameters of [6], the result of which is shown in Fig. 10(b). The corrupted images are filtered using the WS, WM, MEM (with $\lambda_{\mathrm{MEM}}$), and WSM (with $\lambda^*_{\mathrm{WSM}}$) schemes. The outputs are given in Fig. 10(c)-(f) for the WS, WM, MEM, and WSM filters, respectively. The quantitative results for different impulsiveness levels are given in Table VII. Comparisons of the MAE and mse results show that the WSM filter outperforms the WS, WM, and MEM filters in all cases and that the performance gain increases as the impulsiveness of the noise increases.

C. Comparison With the CML Estimate

The proposed filter structure is compared to the true ML estimate of the ε-contaminated statistics to evaluate the proximity of the MEM and WSM filters to the true ML estimate under ε-contaminated statistics. The ML estimate of the Gaussian-Laplacian mixture statistics, referred to as the CML, is given in (14). Based on the CML, the following filtering structure is formed:

$$y = \arg\min_{\beta}\left[\frac{1}{2\sigma_G^2}\sum_{i \in \mathcal{G}} (x(i) - \beta)^2 + \frac{1}{\nu}\sum_{i \in \mathcal{L}} |x(i) - \beta|\right] \qquad (33)$$

where $\mathcal{G}$ and $\mathcal{L}$ denote the sets of window samples generated from the Gaussian pdf with variance $\sigma_G^2$ and from the Laplacian pdf with scale parameter $\nu$, respectively. To illustrate and compare the performance of the filtering structure given in (33), the single-tone signal with normalized frequency 0.02 shown in Fig. 11 is used. The single-tone input signal is corrupted by additive ε-contaminated noise.
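A sketch of the CML-based filtering structure in (33). Because the CML requires knowing which window samples are Gaussian and which are Laplacian, the sketch assumes oracle labels (available in a simulation where the noise is generated synthetically); the convex cost is minimized numerically over a grid of candidate β values. The parameter names and the grid-search strategy are illustrative assumptions.

```python
import numpy as np

def cml_location(x, is_laplacian, sigma_g=1.0, nu=2.0):
    """Combined ML (CML) location estimate for one window (cf. (14), (33)):
    squared deviations for Gaussian-labeled samples plus absolute deviations
    for Laplacian-labeled samples, minimized over beta."""
    x = np.asarray(x, dtype=float)
    g = x[~is_laplacian]
    l = x[is_laplacian]
    # The minimizer lies between the smallest and largest sample; evaluate
    # the convex cost on a fine grid (adequate for a sketch).
    betas = np.linspace(x.min(), x.max(), 501)
    cost = np.array([np.sum((g - b) ** 2) / (2 * sigma_g ** 2)
                     + np.sum(np.abs(l - b)) / nu for b in betas])
    return betas[int(np.argmin(cost))]

def cml_filter(x, is_laplacian, window=11, **kw):
    """Sliding-window CML filtering with oracle Gaussian/Laplacian labels."""
    half = window // 2
    xp = np.pad(np.asarray(x, float), half, mode="edge")
    lp = np.pad(np.asarray(is_laplacian, bool), half, mode="edge")
    return np.array([cml_location(xp[n:n + window], lp[n:n + window], **kw)
                     for n in range(len(x))])
```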


Fig. 11. Comparison of the true CML estimate filtering with the MEM and WSM filtering under ε-contaminated statistics. From top to bottom: single-tone input signal, corrupted input signal, and the MEM, WSM, and CML filtering results, respectively.

TABLE VIII MAE AND MSE RESULTS OF 50 TRIALS FOR ε-CONTAMINATED MIXED GAUSSIAN AND LAPLACIAN NOISE FOR SINGLE-TONE INPUT SIGNAL DENOISING


Fig. 10. Filtering results using a 3 × 3 square window for ε-contaminated noise: (a: top-left) original Lena image, (b: top-right) corrupted image, (c: middle-left) output of the WS filter, (d: middle-right) output of the WM filter, (e: bottom-left) output of the MEM filter, and (f: bottom-right) output of the WSM filter.

TABLE VII MAE AND MSE RESULTS OF 50 TRIALS FOR ε-CONTAMINATED MIXED GAUSSIAN AND LAPLACIAN NOISE FOR IMAGE DENOISING (THE "LENA" IMAGE IS USED)

The corrupted input signal (where the plot, not the signal, is clipped for visualization purposes) is also shown in Fig. 11. The noisy signal is used as the input to the MEM, WSM, and CML filtering structures for denoising purposes. The observation samples in the WSM filter formulation are weighted uniformly, as in the MEM case. The resulting filter outputs are given in Fig. 11. Note that the CML estimate filtering structure provides the best result, with better detail preservation and noise attenuation characteristics. The WSM filter, however, provides better performance than the MEM filter in the sense of detail preservation and noise attenuation. The time-domain plots are consistent with the quantitative L1- and L2-norm comparisons given in Table VIII. Each entry in the table reports the ensemble average of 50 trials. The quantitative results agree with the time-domain plot observations. As expected, the filtering based on the strict CML estimate yields the best results. Also, the WSM filtering structure yields results

closer to the true CML estimate filtering result than the MEM filtering structure.

V. CONCLUSION

A generalized mean-median filtering structure (WSM) admitting real-valued weights is proposed. Noting that weighted sum and weighted median filters are motivated by ML analysis under Gaussian and Laplacian statistics, respectively, the proposed filtering structure is motivated by an ML analysis under ε-contaminated statistics. The proposed filter is statistically analyzed through the determination of the filter output variance and breakdown probability. The combination parameter is optimized such that the filter output variance is minimized. The breakdown probability indicates that the WSM filter provides better impulse rejection capabilities than WS and WM filters under contaminated statistics. In addition, a spectral design method is proposed to achieve desired filtering characteristics. With the aid of the overall proposed design framework, the desired frequency-selective characteristics are accomplished through the subfilter weight design algorithm, and the noise attenuation is achieved by optimizing the combination parameter such that the filter output variance is minimized. The WSM filter is thus equipped with both the desired frequency-selective and noise attenuation features. The simulation results provided for different types of frequency-selective signal processing applications, including low-pass, bandpass, and high-pass filtering, and image processing applications, including image sharpening and denoising, show the superiority of the proposed WSM filter


over FIR, WM, and MEM filters. Also, it is shown through simulations that WSM filtering (when constrained to uniformly distributed weights) provides results closer than MEM filtering to the true CML estimate in the MAE and mse senses.

REFERENCES

[1] G. R. Arce, "A general weighted median filter structure admitting negative weights," IEEE Trans. Signal Process., vol. 46, no. 12, pp. 3195–3205, Dec. 1998.
[2] J. Astola and P. Kuosmanen, Fundamentals of Nonlinear Digital Filtering. Boca Raton, FL: CRC, 1997.
[3] J. G. Gonzalez, D. L. Lau, and G. R. Arce, "Toward a general theory of robust nonlinear filtering: Selection filters," presented at the IEEE Int. Conf. Acoust., Speech, Signal Process., Munich, Germany, Apr. 21–24, 1997.
[4] F. Hampel, E. Ronchetti, P. Rousseeuw, and W. Stahel, Robust Statistics: The Approach Based on Influence Functions. New York: Wiley, 1986.
[5] P. J. Huber, Robust Statistics. New York: Wiley, 1981.
[6] A. B. Hamza and H. Krim, "Image denoising: A nonlinear robust statistical approach," IEEE Trans. Signal Process., vol. 49, no. 12, pp. 3045–3054, Dec. 2001.
[7] A. C. Bovik, T. S. Huang, and D. C. Munson, "A generalization of median filtering using linear combinations of order statistics," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-31, no. 6, pp. 1025–1037, Dec. 1983.
[8] I. Pitas and A. Venetsanopoulos, Nonlinear Digital Filters: Principles and Applications. Norwell, MA: Kluwer Academic, 1990.
[9] H. A. David and H. N. Nagaraja, Order Statistics. New York: Wiley, 2003.
[10] R. Oten and R. J. P. de Figueiredo, "An efficient method for L-filter design," IEEE Trans. Signal Process., vol. 51, no. 1, pp. 193–203, Mar. 1996.
[11] R. Oten and R. J. P. de Figueiredo, "Adaptive alpha-trimmed mean filters under deviations from assumed noise model," IEEE Trans. Image Process., vol. 13, pp. 627–639, 2004.
[12] G. R. Arce, Nonlinear Signal Processing: A Statistical Approach. New York: Wiley, 2005.
[13] P. G. Hoel, Introduction to Mathematical Statistics, 3rd ed. New York: Wiley, 1962.
[14] L. Yin, R. Yang, M. Gabbouj, and Y. Neuvo, "Weighted median filters: A tutorial," IEEE Trans. Circuits Syst., vol. 43, no. 3, pp. 157–192, Mar. 1996.
[15] O. Yli-Harja, J. Astola, and Y. Neuvo, "Analysis of the properties of median and weighted median filters using threshold logic and stack filter representation," IEEE Trans. Signal Process., vol. 39, no. 2, pp. 395–409, Feb. 1991.
[16] I. Shmulevich and G. R. Arce, "Spectral design of weighted median filters admitting negative weights," IEEE Signal Process. Lett., vol. 8, no. 12, pp. 313–316, Dec. 2001.
[17] I. Shmulevich, O. Yli-Harja, J. Astola, and A. Korshunov, "On the robustness of the class of stack filters," IEEE Trans. Signal Process., vol. 50, no. 7, pp. 1640–1649, Jul. 2002.
[18] A. Papoulis, Probability, Random Variables, and Stochastic Processes. New York: McGraw-Hill, 1984.
[19] R. Yang, L. Yin, M. Gabbouj, J. Astola, and Y. Neuvo, "Optimal weighted median filters under structural constraints," IEEE Trans. Signal Process., vol. 43, no. 3, pp. 591–604, Mar. 1995.
[20] M. K. Prasad and Y. H. Lee, "Stack filters and selection probabilities," IEEE Trans. Signal Process., vol. 42, no. 10, pp. 2628–2642, Oct. 1994.
[21] M. K. Prasad, "Stack filter design using selection probabilities," IEEE Trans. Signal Process., vol. 53, no. 3, pp. 1025–1037, Mar. 2005.
[22] S. Haykin, Adaptive Filter Theory, 4th ed. Prentice-Hall, 2002.
[23] S. Hoyos, J. Bacca, and G. R. Arce, "Spectral design of weighted median filters: An iterative approach," IEEE Trans. Signal Process., vol. 53, no. 3, pp. 1045–1056, Mar. 2005.

[24] C. L. Mallows, "Some theory of nonlinear smoothers," Ann. Statist., vol. 8, no. 4, pp. 695–715, 1980.
[25] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, 3rd ed. New York: Macmillan, 1996.
[26] K.-S. Choi, A. W. Morales, and S.-J. Ko, "Design of linear combination of weighted medians," IEEE Trans. Signal Process., vol. 49, no. 9, pp. 1940–1952, Sep. 2001.
[27] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[28] M. Fischer, J. L. Paredes, and G. R. Arce, "Weighted median image sharpeners for the World Wide Web," IEEE Trans. Image Process., vol. 11, pp. 717–727, 2002.
[29] S. Thurnhofer and S. K. Mitra, "A general framework for quadratic Volterra filters for edge enhancement," IEEE Trans. Image Process., vol. 5, no. 6, pp. 950–963, Jun. 1996.

Tuncer Can Aysal (S'05) was born in Ankara, Turkey, on February 4, 1981. He received the B.E. degree (high honors) from Istanbul Technical University, Istanbul, Turkey, in 2003. He is currently working toward the Ph.D. degree with the Department of Electrical and Computer Engineering, University of Delaware, Newark. His research interests include statistical signal and image processing, robust signal processing, polynomial processing, nonlinear signal processing, decentralized estimation/detection, wireless sensor networks, and visualization of scientific data in haptic environments. Mr. Aysal is the recipient of the UD Competitive Graduate Student Fellowship, the Signal Processing and Communications Graduate Faculty Award (presented to an outstanding graduate student in this research area), and a University Dissertation Fellowship.

Kenneth E. Barner (S'84–M'92–SM'00) was born in Montclair, NJ, on December 14, 1963. He received the B.S.E.E. degree (magna cum laude) from Lehigh University, Bethlehem, PA, in 1987, and the M.S.E.E. and Ph.D. degrees from the University of Delaware, Newark, in 1989 and 1992, respectively. For his dissertation "Permutation filters: A group theoretic class of non-linear filters," he received the Allan P. Colburn Prize in Mathematical Sciences and Engineering for the most outstanding doctoral dissertation in the engineering and mathematical disciplines. He was the duPont Teaching Fellow and a Visiting Lecturer at the University of Delaware in 1991 and 1992, respectively. From 1993 to 1997, he was an Assistant Research Professor with the Department of Electrical and Computer Engineering, University of Delaware, and a Research Engineer at the duPont Hospital for Children. He is currently a Professor with the Department of Electrical and Computer Engineering, University of Delaware. His research interests include signal and image processing, robust signal processing, nonlinear systems, communications, haptic and tactile methods, and universal access. Dr. Barner is the recipient of a 1999 NSF CAREER award. He was Co-Chair of the 2001 IEEE-EURASIP Nonlinear Signal and Image Processing (NSIP) Workshop and a Guest Editor for a Special Issue of the EURASIP Journal on Applied Signal Processing on Nonlinear Signal and Image Processing. He is a member of the Nonlinear Signal and Image Processing Board and is coeditor of the book Nonlinear Signal and Image Processing: Theory, Methods, and Applications. He is a member of Tau Beta Pi, Eta Kappa Nu, and Phi Sigma Kappa. He was the Technical Program Co-Chair for ICASSP 2005. He is also serving as an Associate Editor of the IEEE TRANSACTIONS ON SIGNAL PROCESSING, the IEEE SIGNAL PROCESSING MAGAZINE, and the IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, and is a member of the Editorial Board of the EURASIP Journal on Applied Signal Processing.

Kenneth E. Barner (S’84–M’92–SM’00) was born in Montclair, NJ, on December 14, 1963. He received the B.S.E.E. degree (magna cum laude) from Lehigh University, Bethlehem, PA, in 1987. He received the M.S.E.E. and Ph.D. degrees from the University of Delaware, Newark, in 1989 and 1992, respectively. For his dissertation “Permutation filters: A group theoretic class of non-linear filters,” he received the Allan P. Colburn Prize in Mathematical Sciences and Engineering for the most outstanding doctoral dissertation in the engineering and mathematical disciplines. He was the duPont Teaching Fellow and a Visiting Lecturer at the University of Delaware in 1991 and 1992, respectively. From 1993 to 1997, he was an Assistant Research Professor with the Department of Electrical and Computer Engineering, University of Delaware and a Research Engineer at the duPont Hospital for Children. He is currently a Professor in with the Department of Electrical and Computer Engineering, University of Delaware. His research interests include signal and image processing, robust signal processing, nonlinear systems, communications, haptic and tactile methods, and universal access. Dr. Barner is the recipient of a 1999 NSF CAREER award. He was the Co-Chair of the 2001 IEEE-EURASIP Nonlinear Signal and Image Processing (NSIP) Workshop and a Guest Editor for a Special Issue of the EURASIP Journal of Applied Signal Processing on Nonlinear Signal and Image Processing. He is a member of the Nonlinear Signal and Image Processing Board and is the coeditor of the book Nonlinear Signal and Image Processing: Theory, Methods, and Applications. He is a member of Tau Beta Pi, Eta Kappa Nu, and Phi Sigma Kappa. He is the Technical Program Co-Chair for ICASSP 2005. He is also serving as an Associate Editor of the IEEE TRANSACTIONS ON SIGNAL PROCESSING, the IEEE SIGNAL PROCESSING MAGAZINE, and the IEEE TRANSACTION ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING. He is also a member of the Editorial Board of the EURASIP Journal of Applied Signal Processing.
