INEQUIVALENT MANIFOLD RANKING FOR CONTENT-BASED IMAGE RETRIEVAL

Fan Wang, Guihua Er and Qionghai Dai
Department of Automation, Tsinghua University, Beijing, 100084, P.R. China

ABSTRACT

We propose to improve the effectiveness and scalability of graph-based manifold ranking methods in image retrieval applications by emphasizing reliable images while damping the effect of noisy or irregular ones. Label information is first passed between the most reliable data points, and then propagated to less reliable ones along the manifold structure. By treating these images inequivalently, the undesirable effect of noisy samples is greatly reduced, and the effectiveness of manifold ranking algorithms is enhanced. Moreover, the graph size, in terms of the number of nodes and edges, is dramatically reduced, resulting in a great speed-up of the algorithm. Our experiments on a real-world image data set demonstrate the effectiveness of the proposed approach.

Index Terms— Image retrieval, manifold ranking, scalability, clustering

1. INTRODUCTION

With the explosion of digital image data sets, content-based image retrieval (CBIR) has become indispensable, because there is usually little textual or categorical information available [1]. Relevance feedback is a typical way of making use of labeled images to improve retrieval performance, but the labeled information is always limited. With huge databases at hand, making use of the information in the whole data set, instead of only the labeled data, becomes crucial. Semi-supervised learning algorithms, especially manifold ranking algorithms [2, 3], have been demonstrated to be promising in exploiting the unlabeled data in several recent works [4]. Data points and their relationships are represented by a graph, and label information is propagated among the data points according to the structure of the graph. By making use of the distribution of the unlabeled data points, the manifold structure is exploited and the performance of ranking algorithms is enhanced.
Two difficulties limit the application of traditional manifold ranking algorithms in the real world. First, real image sets are highly heterogeneous and irregular, contaminated by many noisy samples. Since most existing manifold ranking algorithms are sensitive to noise, their effectiveness is greatly reduced in such circumstances. Second, typical manifold ranking algorithms have time complexity of at least O(n²), where n denotes the total number of images in the set, which is too time-consuming for real-world data sets.

In this paper, we propose to treat image samples inequivalently, as "backbone images" and "non-backbone images". This discrimination is obtained by estimating a "reliability score" for each image in the offline stage, which reflects the noise level and importance of the image. Backbone images are defined as those that are reliable and important enough to preserve the structure of the original image space. Noisy images are those with low reliability scores that lie outside the main part of the data distribution. In the query stage, we first construct a "backbone graph" from the backbone images, with the edges re-weighted according to the reliability scores. Label information is propagated on this much smaller yet trustworthy graph; noisy images are thus prevented from damaging the ranking values of the backbone images, and ranking is greatly accelerated. After the ranking values of the backbone images are obtained, the ranking values of the other images are computed by linearly propagating labels within neighborhoods, which is also highly efficient. By treating images inequivalently according to their reliability, we obtain improvements in both effectiveness and efficiency.

This work is supported by NSFC projects No.60772048, No.60721003, and No.60432030.

2. INEQUIVALENT MANIFOLD RANKING

In traditional manifold ranking algorithms, each data item is treated equally. The performance of these algorithms is limited when noisy data exist, which may destroy the manifold continuity and make the data distribution unreasonable.
We propose to treat data points inequivalently, giving more weight to "reliable" points, i.e., those with high confidence of lying on the manifold, while damping noisy points with low confidence of doing so. However, directly eliminating the noisy points without making use of the data distribution information is unwise and difficult. In our approach, we build the whole graph with all the data points, and prune it by first reducing the number of nodes through optimizing an objective function (Sec. 2.1), and then eliminating edges (Sec. 2.2). This preserves the manifold structure while reducing the graph scale. Ranking values are obtained through a two-level propagation process, first on the backbone graph level (Sec. 2.2), then on the sub-graph level (Sec. 2.3).

2.1. Backbone Graph Construction

It is observed that natural image data sets often have cluster structure at the local level: despite the irregular distribution of the whole database, small groups of mutually similar images exist locally. Suppose the image set contains N images in total. We identify the local structure of the graph by looking for backbone images, together with the strongest link from each image i to these backbone images. Denote the strongest link of i as the link from i to j(i), with j(i) being the backbone image for i; backbone images have their strongest link to themselves. A natural structure of the images is discovered by simultaneously optimizing the backbone images and the links. This intuition is summarized as the task of maximizing the following objective:

J = Σ_{i=1}^{N} { s[i, j(i)] + δ(i) }    (1)

where s[i, j(i)] is the similarity between i and j(i), and the penalty term δ(i) is defined as:

δ(i) = −∞  if j(i) ≠ i but ∃k : j(k) = i,
δ(i) = 0   otherwise,    (2)

which forces image i to select itself as its backbone image whenever it serves as the backbone image of another image (∃k : j(k) = i). This objective can be optimized through Loopy Belief Propagation [5], yielding a clustering result equivalent to that of Affinity Propagation Clustering [6].

2.1.1. Optimizing the objective

Two kinds of "messages" are defined and propagated among images: responsibilities and availabilities. The responsibility r(i, k), sent from i to k, reflects the accumulated evidence for how well-suited image k is to serve as the backbone image for image i. The availability a(i, k), sent from k to i, reflects the accumulated evidence for how appropriate it would be for image i to choose k as its backbone image.

The availabilities are initialized to zero. The responsibilities are computed as:

r(i, k) ← s(i, k) − max_{k′ ≠ k} { a(i, k′) + s(i, k′) }    (3)

where the self-responsibility r(k, k) reflects the accumulated evidence that image k is a backbone image. The availabilities are updated as:

a(i, k) ← min{ 0, r(k, k) + Σ_{i′ ∉ {i, k}} max{0, r(i′, k)} }    (4)

with the self-availability a(k, k) updated differently as:

a(k, k) ← Σ_{i′ ≠ k} max{0, r(i′, k)}.    (5)

These two kinds of messages are updated iteratively until convergence (proved in [6]); then availabilities and responsibilities are combined to identify the backbone images. For image i, its corresponding backbone image is obtained as:

k* = arg max_k { a(i, k) + r(i, k) }.    (6)

This either identifies image i itself as a backbone image (if k* = i), or identifies image k* as the backbone image for i. The final backbone images are expected to be representative of the whole image space.

2.1.2. Calculating Reliability Scores

After the optimization, all images are grouped into clusters; each cluster is represented by a backbone image, and each image is assigned to a backbone image. The "reliability score" of an image should decrease as its distance to its backbone image increases, and can be defined as:

p_i = p(x_i) = exp( 1 − d[i, j(i)] / ( (1/N) Σ_{k=1}^{N} d[k, j(k)] ) )    (7)

where j(i) denotes the backbone image of i, and d[i, j(i)] is the distance between i and its backbone image. This reliability score emphasizes images that are close to their backbone images, and damps those far away from the main data distribution, which are more likely to be noisy points. Images with higher reliability scores are believed to be more representative than those with lower scores, and the backbone images themselves have the maximum reliability score. Finally, the original space is compressed into a subspace spanned by the smaller set of backbone images (via the Representer Theorem), without losing much of the structural information of the original space. These nodes form a graph called the "backbone graph", which represents the basic structure of the original space.

2.2. Manifold Ranking on Backbone Graph

The initial backbone graph consists of all backbone images and labeled images. After each round of relevance feedback, newly labeled images are added to the backbone graph as well.
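As a concrete illustration of the clustering step of Sec. 2.1, the message-passing updates (3)–(5), the assignment rule (6), and the reliability scores of Eq. (7) can be sketched in Python/NumPy as follows. This is a simplified sketch under assumed inputs (a similarity matrix `S` whose diagonal holds self-similarity preferences, and a distance matrix `D`), not the authors' implementation; a damping factor is added for numerical stability, as is common for affinity propagation:

```python
import numpy as np

def backbone_images(S, max_iter=200, damping=0.5):
    """Identify the backbone image j(i) of every image i via the
    responsibility/availability message passing of Sec. 2.1.1.
    S is an N x N similarity matrix; S[k, k] holds the preference
    of image k to be a backbone image (e.g. the median similarity)."""
    N = S.shape[0]
    R = np.zeros((N, N))  # responsibilities r(i, k)
    A = np.zeros((N, N))  # availabilities a(i, k), initialized to zero
    idx = np.arange(N)
    for _ in range(max_iter):
        # Eq. (3): r(i,k) <- s(i,k) - max_{k' != k} {a(i,k') + s(i,k')}
        AS = A + S
        best = AS.argmax(axis=1)
        first = AS[idx, best]
        AS[idx, best] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[idx, best] = S[idx, best] - second
        R = damping * R + (1 - damping) * R_new
        # Eqs. (4)-(5): the diagonal of Rp keeps r(k,k) in the column sums
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        A_new = Rp.sum(axis=0)[None, :] - Rp   # excludes the i' = i term
        self_avail = A_new.diagonal().copy()   # Eq. (5): a(k,k)
        A_new = np.minimum(A_new, 0)           # Eq. (4): clip at zero
        np.fill_diagonal(A_new, self_avail)
        A = damping * A + (1 - damping) * A_new
    # Eq. (6): k* = argmax_k {a(i,k) + r(i,k)}
    return (A + R).argmax(axis=1)

def reliability_scores(D, j):
    """Eq. (7): p_i = exp(1 - d[i, j(i)] / ((1/N) * sum_k d[k, j(k)]))."""
    d = D[np.arange(D.shape[0]), j]
    return np.exp(1.0 - d / d.mean())
```

Here `S` would typically be the negative squared feature distance, with the diagonal preference controlling how many backbone images emerge.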

The affinity matrix W on this graph is computed as:

w_ij = p_i p_j exp( −d²(i, j) / σ² )    (8)

which is an adjusted weight taking the reliability scores of the images into account, where p denotes the reliability scores obtained by Eq. (7), d(i, j) is the distance between images i and j, and σ is a hyperparameter controlling the variance. For pairs i and j with w_ij < T, which indicates a weak link, we sparsify the graph by removing the edge between i and j. Here T is a predefined threshold controlling sparsity; in our experiments, we set T = (1/N²) Σ_{i,j} w_ij.
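A minimal sketch of this construction (assuming a distance matrix `D` and reliability vector `p`; the diagonal handling is our assumption, since the paper does not state whether self-edges enter the threshold):

```python
import numpy as np

def backbone_affinity(D, p, sigma):
    """Eq. (8): w_ij = p_i * p_j * exp(-d(i,j)^2 / sigma^2), followed by
    removal of weak edges below T = (1/n^2) * sum_ij w_ij (Sec. 2.2)."""
    W = np.outer(p, p) * np.exp(-(D ** 2) / sigma ** 2)
    np.fill_diagonal(W, 0.0)   # assumption: no self-loops in the graph
    T = W.sum() / W.size       # predefined sparsity threshold
    W[W < T] = 0.0             # drop weak links w_ij < T
    return W
```

Because the reliability scores enter the weight multiplicatively, edges touching low-reliability images are more likely to fall below T and be pruned, which is exactly how noisy images are damped.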

This backbone graph has two advantages:

1. It preserves the information of the data distribution while reducing the scale of the graph, because the backbone images are identified through analysis of all the data and are representative of the whole database.

2. It is robust to noisy images to some extent, since only confident images and strong edges are preserved.

Most manifold ranking algorithms can be performed on this backbone graph. In our experiments, we first compute the normalized matrix S = D^{−1/2} W D^{−1/2}, where D is a diagonal matrix whose (i, i)-th element equals the sum of the i-th row of W. The label information is summarized in Y, with Y_ij = 1 if the i-th image is labeled as y_i = j and Y_ij = 0 otherwise. The ranking values on the backbone graph, i.e., the vector F*, are obtained through F* = (I − αS)^{−1} Y, where α ∈ (0, 1) balances the information an image receives from its neighbors against its initial label information. We simply fix α = 0.99 in our experiments to emphasize the information from neighbors.

2.3. Manifold Ranking on Sub-Graphs

With the ranking values of the backbone images, we can infer the ranking of all other images according to the manifold structure. One option is to regard the manifold structure as locally linear and interpolate linearly between backbone images and the other images. A more sophisticated alternative is to construct local graphs in the same way as the backbone graph (Sec. 2.2); these are much smaller than the whole graph, and manifold ranking is performed on each of them, taking the ranking values of the backbone images as input. Since these graphs are small sub-graphs of the original graph, the time cost of both constructing and ranking on them is low.
We use the latter method in our experiments, since this step also makes use of the data distribution information and propagates label information locally on the manifold.
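The closed-form ranking step on the backbone graph can be sketched as follows (a sketch under assumed variable names, with `Y` the label information described above):

```python
import numpy as np

def manifold_rank(W, Y, alpha=0.99):
    """Ranking on the backbone graph: F* = (I - alpha*S)^{-1} Y,
    with S = D^{-1/2} W D^{-1/2} (Sec. 2.2). W is the sparsified
    affinity matrix, Y the initial label information."""
    deg = W.sum(axis=1)
    inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1e-12)), 0.0)
    S = inv_sqrt[:, None] * W * inv_sqrt[None, :]
    n = W.shape[0]
    # Solve the linear system rather than forming the inverse explicitly
    return np.linalg.solve(np.eye(n) - alpha * S, Y)
```

For the sub-graph level (Sec. 2.3), the same routine can be run on each small local graph, with the backbone ranking values entered into Y as the initial labels.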

3. EXPERIMENTS

3.1. Setup

To evaluate the proposed approach, we conduct experiments on a subset of the Corel gallery, a real-world image database containing 100 semantic categories with 100 images each, 10,000 images in total. Several commonly used low-level image features are adopted: a 64-d HSV color histogram, 9-d LUV color moments, a 128-d HSV color coherence vector, a 6-d coarseness vector and an 8-d directionality feature. We use the Chi-square distance for histogram features, and the Euclidean distance for the others. The distances on the different features are first normalized and then combined to form the final distance between two images.

We compare the proposed Inequivalent Manifold Ranking (IMR) with Manifold Ranking (MR) [2], Harmonic [3], and naive kNN ranking. MR and Harmonic are both representative semi-supervised learning algorithms and have been applied experimentally to small-scale CBIR, e.g., in [4]. In the tests, images are randomly selected from each category as queries, and the system simulates the relevance feedback process based on the ground truth. All benchmark criteria are averaged over 100 random runs on a PC with 1024 MB RAM and a 3.0 GHz CPU.

3.2. Analysis

Fig. 1 shows the average time cost of one query session for the four methods. Although IMR is based on manifold ranking and makes use of the whole database, its time cost is much lower (less than 1 s) than that of traditional manifold ranking methods such as MR [2] and Harmonic [3] (more than 30 s). The cost is acceptable, and even comparable with the fastest method, kNN. This is due to the sparsity resulting from both reducing the whole data set to the backbone images and the reduced number of edges.
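The feature-distance combination described in Sec. 3.1 can be sketched as follows (our illustration, not the authors' code; the normalization scheme is not specified in the paper, so max-normalization is assumed here):

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histogram features."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def combined_distance(dist_mats):
    """Combine per-feature distance matrices: normalize each
    (here by its maximum, an assumed scheme) and sum them."""
    return sum(D / D.max() for D in dist_mats)
```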
Fig. 1. Time cost comparison among the different algorithms.

Besides the great reduction in computational cost, the search results are improved by the backbone graph and sub-graph
structure, as shown in Fig. 2, which compares the average precision over the top 100 returned images across 3 rounds of relevance feedback. In each round of feedback, 3 new images are labeled, and the presented result is the average over 100 random runs. It can be observed that, although naive kNN is fast enough, it performs the worst among the compared methods because it does not use the information in the unlabeled data. Among the three manifold ranking algorithms, our IMR yields the best result. This supports our argument that by treating data points "inequivalently", the undesirable effect of noisy data is reduced and better results are obtained.

Fig. 3. Average recall-precision curves of the different algorithms.







Fig. 2. Performance comparison in different rounds of relevance feedback.

Furthermore, the average recall-precision curves over 100 random runs are plotted in Fig. 3 for the case of a single round of feedback with 3 labeled images. It can again be observed that our algorithm IMR outperforms the other three methods. To sum up, among the four methods, our algorithm achieves the best retrieval performance at a much lower, or at least comparable, time cost, offering the best combination of speed and accuracy in retrieval.

4. CONCLUSIONS AND FUTURE WORKS

In this paper, a new perspective on graph-based manifold ranking is proposed to boost both the effectiveness and the scalability of traditional methods. Reliability scores are assigned to images, and a backbone graph is constructed based on them. The new graph is smaller because noisy and irregular images are filtered out and weak edges are removed. Manifold ranking on the backbone graph not only preserves the manifold structure, but also improves robustness and efficiency. Ranking values for images other than the backbone images are obtained through manifold ranking on local sub-graphs, which is also highly efficient. Advantages in both aspects have been demonstrated in our experiments on a real-world image data set. We plan to extend the current framework to other graph-based methods, performing noise identification and scale reduction at the same time. Online learning capability is also under investigation, to allow the backbone images to be updated as new images are added to the database.

5. REFERENCES

[1] A. Smeulders, M. Worring, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, pp. 1349–1380, Dec. 2000.

[2] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf, "Learning with local and global consistency," in Advances in Neural Information Processing Systems 16, 2003.

[3] X. Zhu, Z. Ghahramani, and J. Lafferty, "Semi-supervised learning using Gaussian fields and harmonic functions," in Proc. 20th International Conference on Machine Learning (ICML), 2003.

[4] J. He, M. Li, H. Zhang, H. Tong, and C. Zhang, "Manifold-ranking based image retrieval," in Proc. ACM Multimedia, 2004, pp. 9–13.

[5] K. P. Murphy, Y. Weiss, and M. I. Jordan, "Loopy belief propagation for approximate inference: An empirical study," in Proc. Uncertainty in AI, 1999, pp. 467–475.

[6] B. J. Frey and D. Dueck, "Clustering by passing messages between data points," Science, vol. 315, pp. 972–976, Feb. 2007.
