Scene Image Clustering Based on Boosting and GMM

Khiem Ngoc Doan
Academic Affairs Department, Vietnam National University, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
[email protected]

Toan Thanh Do
Department of Computer Science, University of Science HCMC, 227 Nguyen Van Cu, District 5, Ho Chi Minh City, Vietnam
[email protected]

Thai Hoang Le
Department of Computer Science, University of Science HCMC, 227 Nguyen Van Cu, District 5, Ho Chi Minh City, Vietnam
[email protected]

ABSTRACT
The Gaussian Mixture Model (GMM) is widely used in unsupervised learning tasks. In this paper, we propose the boost-GMM algorithm, which uses GMMs to cluster real-world scenes. First, the gist feature is extracted from the images to build the data set. At each boosting iteration, a new training set is constructed by weighted sampling from the original data set, and a GMM is used to provide a new data partitioning. The final clustering solution is produced by aggregating the multiple clustering results. Experiments on real-world scene sets indicate that boost-GMM achieves better results than the other algorithms.

Keywords
Clustering, Gaussian Mixture Model, boosting, scene feature.

1. INTRODUCTION
Data clustering is one of the most important techniques in data mining. It is used to understand and analyze the structure of unlabeled data. An image is a special kind of data that carries a lot of information, and the clustering problem becomes harder because different metrics lead to different results. The majority of clustering algorithms are based on the four most popular clustering approaches: iterative square-error partitional clustering, hierarchical clustering, grid-based clustering, and density-based clustering [1].

The partitional clustering methods can be classified into hard clustering methods and soft clustering methods. In hard clustering methods, each sample is assigned to only one cluster; in soft clustering methods, each sample can be associated with several clusters. K-means, the most widely used partitional clustering algorithm, is based on the square-error criterion. This algorithm is computationally efficient and yields good results if the clusters are compact, hyper-spherical in shape, and well separated in feature space. Numerous attempts have been made to improve the performance of the simple k-means algorithm, for example by using the Mahalanobis distance to detect hyper-ellipsoidal clusters [3], by incorporating a fuzzy criterion function, which leads to the fuzzy c-means algorithm [2], or by using boosting to aggregate multiple k-means results [4]. A different partitional clustering approach is based on probability density function (pdf) estimation with the Gaussian mixture model, whose parameters are estimated with the expectation-maximization (EM) algorithm [5]. Hierarchical clustering methods can be displayed in the form of a dendrogram, or tree [7], and can be classified as agglomerative or divisive. A hierarchical agglomerative (bottom-up) method treats each sample as a singleton cluster at the outset and gradually merges clusters into larger clusters until all samples are ultimately in a single cluster (the root node). A hierarchical divisive (top-down) method starts with all samples in one cluster, splits the cluster into smaller clusters, and applies this procedure recursively until each sample is in its own singleton cluster.

Grid-based clustering algorithms are mainly proposed for spatial data mining. These algorithms quantize the space into a finite number of cells and then perform all operations on the quantized space. Density-based clustering algorithms, on the other hand, are based on density conditions.

However, many of the above clustering methods require additional user-specified parameters, such as the optimal number and shapes of clusters, similarity thresholds, and stopping criteria. Moreover, many algorithms, such as k-means and GMM, can produce different solutions due to their random initialization.

Boosting is one of the most important developments in classification methodology: a linear combination of weak learners is formed to create a stronger learner. It can also be used as a multi-clustering approach. Multi-clustering approaches were introduced by Fred [8] and Frossyniotis et al. [4], where multiple clusterings (using k-means) are aggregated to get a better result. Another multi-clustering approach is BoGMM [9]. In that paper, the method is divided into two parts. The first part is Boosting GMM, in which the BIC (Bayesian Information Criterion) is used to choose the size of each GMM. The second part is Boosted Clustering, in which the clusters of the GMMs are combined into C clusters (a user-specified parameter). This method was shown to give good results on small, low-dimensional databases, but poor results on large, high-dimensional ones because of the large number of clusters generated by BIC (45 in the case of four input clusters).


For clustering images, we suggest using the gist feature for the feature extraction step and propose boost-GMM for the clustering step. In our clustering method, a GMM is used to model the data set at each iteration, so each cluster is represented by a Gaussian. After several iterations, the Gaussians that represent the same cluster are aggregated; consequently, the combination of these Gaussians (with compatible weights) behaves like a GMM. We tested the method on the scene database of Aude Oliva (MIT). Experimental results show that our method gives good results in scene image clustering.

2. FEATURE EXTRACTION
The advantage of Haar features is that they are fast to compute and known to work well in image processing tasks such as face detection. For clustering, however, an image contains a lot of information and Haar features are so simple that a single image yields a very large number of Haar windows; without a labeled data set, we would not be able to choose good features for clustering.

Edges can give a rough idea of a scene, but in some cases they are not effective. For example, a tall-building image shows many long vertical edges and possibly numerous horizontal edges, and similarly a forest image shows a uniform distribution of dominant long vertical edges. Coast, open country and highway scenes give relatively few edges, with horizontal lines as the prominent ones. Mountains, on the other hand, can be expected to give more diagonal edges due to their characteristic shape; similarly, streets and highways have characteristic edges marking the outlines of the road.

Color features have been shown to give good results. However, they are not efficient when images of different scenes have similar colors. For example, images of forests and of mountains in winter are both white, images of the coast and of open country are both yellow at sunset, and images of tall buildings with gray walls and blue skies are similar to images of highways, which have gray roadbeds and blue skies.

The gist feature was proposed by Aude Oliva in [12], and it is effective in scene classification. The gist method has two parts: the first step is pre-filtering and the second step is gist feature extraction. The pre-filtering part consists of four steps: padding the image to reduce boundary artifacts, whitening, local contrast normalization, and cropping the output image back to the size of the input image.

Figure 1. Illustration of the gist feature: (a) input image, (b) prefiltered image, (c) 960-dimensional gist feature.
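The following is a minimal sketch of this pre-filtering stage, assuming only NumPy. The log-intensity transform, the Gaussian cutoff fc and the constant 0.2 in the normalization are illustrative choices, not the exact parameters of [12]; the function operates on a single channel and can be applied to each subchannel in turn.

```python
import numpy as np

def prefilter(channel, fc=4, pad=32):
    """Pre-filtering sketch: pad, whiten, locally normalize contrast, crop."""
    img = np.log1p(channel.astype(float))                  # log intensities (assumed, not stated in the text)
    img = np.pad(img, pad, mode="symmetric")               # 1. padding to reduce boundary artifacts
    h, w = img.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    lowpass = np.exp(-(fx ** 2 + fy ** 2) / (fc / min(h, w)) ** 2)   # Gaussian low-pass in the frequency domain
    low = np.real(np.fft.ifft2(np.fft.fft2(img) * lowpass))
    out = img - low                                        # 2. whitening: keep the high-frequency residual
    local = np.sqrt(np.abs(np.real(np.fft.ifft2(np.fft.fft2(out ** 2) * lowpass))))
    out = out / (0.2 + local)                              # 3. local contrast normalization
    return out[pad:-pad, pad:-pad]                         # 4. crop back to the input size
```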

Figure 2. Gabor filters at 3 scales (8-8-4).

In the gist feature extraction part, we apply 20 Gabor filters at 3 spatial scales (8-8-4) to the three color subchannels (R, G, B) of the prefiltered image. For each filtered subchannel, average values over a predefined 4-by-4 grid (16 values) are computed, for a total of 960 raw gist values (3 subchannels x 20 Gabor filters x 16 values = 960).

Figure 3. Illustration of applying the Gabor filters to an image.

We suggest using the gist feature for scene image clustering. However, a 960-dimensional vector is too large for clustering, so we use Principal Component Analysis (PCA) to reduce it to a 30-dimensional vector. The gist feature has an advantage over the other features because it combines color and texture, and the texture is extracted over all orientations. Its weakness is that the extraction time is longer than for the other features.
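A minimal sketch of the descriptor computation is shown below, assuming scikit-image and scikit-learn. The Gabor frequencies are illustrative assumptions, while the 8-8-4 orientation counts, the 4x4 grid and the 30 PCA components follow the text; the input is assumed to be a prefiltered RGB image.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA

def gist_descriptor(prefiltered_rgb, frequencies=(0.1, 0.2, 0.3),
                    orientations=(8, 8, 4), grid=4):
    """Average Gabor energy on a grid x grid partition of each RGB subchannel."""
    feats = []
    for c in range(3):                                   # the three color subchannels
        img = prefiltered_rgb[:, :, c].astype(float)
        for freq, n_orient in zip(frequencies, orientations):   # 8 + 8 + 4 = 20 filters
            for k in range(n_orient):
                real, imag = gabor(img, frequency=freq, theta=np.pi * k / n_orient)
                energy = np.hypot(real, imag)
                h, w = energy.shape
                for gy in range(grid):                   # 4 x 4 grid -> 16 averages per filter
                    for gx in range(grid):
                        cell = energy[gy * h // grid:(gy + 1) * h // grid,
                                      gx * w // grid:(gx + 1) * w // grid]
                        feats.append(cell.mean())
    return np.asarray(feats)                             # 3 * 20 * 16 = 960 raw gist values

def reduce_gist(gist_matrix, n_components=30):
    """Project a (n_images, 960) gist matrix onto its first 30 principal components."""
    return PCA(n_components=n_components).fit_transform(gist_matrix)
```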

3. BOOST-GMM
We propose a new algorithm for clustering images based on boosting and the Gaussian Mixture Model (GMM), which we call boost-GMM. The algorithm iteratively re-weights and re-samples the training examples, producing multiple clusterings that are combined into a common partition. At each iteration, the weights of the instances in the original data set are computed and a new training set is constructed based on these weights; a GMM is then applied to partition the new training set. The final clustering solution is produced by aggregating the obtained partitions using weighted voting, where the weight of each partition is a measure of its quality.


In this problem, we assume a given set X of N d-dimensional instances x_i, the required number of clusters C, and the maximum number of iterations T of boost-GMM. The clustering obtained at iteration t is denoted H^t, and H_{ag}^t denotes the aggregate partition obtained from the clusterings H^i, i = 1, ..., t. Consequently, the final solution H^{final} is produced by the clustering obtained at iteration T: H^{final} = H_{ag}^T. The basic feature of this technique is that, at each iteration, a weight w_i^t is computed for each instance x_i such that the higher the weight, the more difficult the instance is to cluster. At the beginning, the weights of all instances are initialized equally, w_i^1 = 1/N. In accordance with the boosting methodology, the weight of each instance x_i is then updated for iteration t+1. The algorithm is summarized below.

Algorithm boost-GMM.
Given: an input sequence of N instances (x_1, ..., x_N), x_i \in R^d, i = 1, ..., N; the number C of clusters to partition the data set; and the maximum number of iterations T.

1. Initialize w_i^1 = 1/N, i = 1, ..., N; t = 1; \epsilon^{max} = 0; ic = 0.
2. Iterate while t \le T:
   - Produce the new training set from the original data set based on the weights:
     o Compute the number of samples of the new training set:
         N_t = Round( \exp( -\sum_{i=1}^{N} w^t(x_i) \log w^t(x_i) ) )    (1)
     o Produce the new training set by taking the N_t samples whose weights are largest.
   - Call GMM to partition the training data and obtain the partition H^t.
   - Get the cluster hypothesis H_i^t = (h_{i,1}^t, h_{i,2}^t, ..., h_{i,C}^t) for all i = 1, ..., N, where h_{i,j}^t is the membership degree of instance i to cluster j.
   - If t > 1, renumber the cluster indexes of H^t according to the highest matching score, given by the fraction of instances shared with the clusters provided by the boost-clustering so far, using H_{ag}^{t-1}.
   - Calculate the pseudoloss
         \epsilon^t = \frac{1}{2} \sum_{i=1}^{N} w_i^t CQ_i^t    (2)
     where CQ_i^t is an index used to evaluate the clustering quality of instance x_i for the partition H^t:
         CQ_i^t = 1 - h_{i,good}^t + h_{i,bad}^t    (3)
     where h_{i,good}^t is the maximum membership degree of x_i to a cluster and h_{i,bad}^t is the minimum membership degree of x_i to a cluster.
   - Stopping criteria:
     o If \epsilon^t > 0.5 then set T = t - 1 and go to step 3.
     o If \epsilon^t < \epsilon^{max} then ic = ic + 1; if ic = 3 then set T = t and go to step 3.
     o Else set ic = 0 and \epsilon^{max} = \epsilon^t.
   - Set \beta_t = (1 - \epsilon^t) / \epsilon^t.
   - Update the distribution W:
         w_i^{t+1} = \frac{w_i^t \beta_t^{CQ_i^t}}{Z_t},  i = 1, ..., N    (4)
     where Z_t is the normalization constant such that W^{t+1} is a distribution: Z_t = \sum_{i=1}^{N} w_i^t \beta_t^{CQ_i^t}.
   - Compute the aggregate cluster hypothesis:
         H_{ag,i}^t = \arg\max_{k=1,...,C} \sum_{\tau=1}^{t} \frac{\log(\beta_\tau)}{\sum_{j=1}^{t} \log(\beta_j)} h_{i,k}^{\tau}    (5)
   - t = t + 1.
3. Output the number of iterations T and the final cluster hypothesis H_{ag}^T.

At the first iteration, the training set is the original data set; that is, all instances are used for the first clustering. At each iteration t = 2, ..., T, the data set X^t is constructed from the original data set by taking the N_t instances whose weights are highest. A GMM is then fitted to X^t, and a partitioning result H^t is produced by applying the GMM parameters to the original data set. For each instance x_i, i = 1, ..., N, we get a cluster hypothesis H_i^t = (h_{i,1}^t, h_{i,2}^t, ..., h_{i,C}^t), where h_{i,j}^t denotes the membership degree of instance i to cluster j, with \sum_{j=1}^{C} h_{i,j}^t = 1 for all i. It must be emphasized that the membership degrees h_{i,j}^t of each iteration are what is used to produce the final partitioning, not the GMM parameters themselves. This gives flexibility when applying other boosting methods to this problem.
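To make the iteration concrete, the following is a minimal sketch of the boost-GMM loop in Python, assuming scikit-learn's GaussianMixture as the base clusterer. It follows equations (1)-(5) but omits the cluster-index renumbering discussed below, which is needed in practice to keep cluster labels consistent across iterations before the votes are accumulated; all function and variable names are our own, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def boost_gmm(X, n_clusters, n_iter=10, random_state=0):
    """Sketch of the boost-GMM loop: weighted resampling + GMM, weighted-vote aggregation."""
    rng = np.random.RandomState(random_state)
    N = len(X)
    w = np.full(N, 1.0 / N)                       # instance weights, initially uniform
    votes = np.zeros((N, n_clusters))             # log(beta)-weighted membership votes

    for t in range(n_iter):
        # (1) training-set size from the entropy of the current weights
        n_t = int(round(np.exp(-np.sum(w * np.log(w + 1e-12)))))
        n_t = min(max(n_t, n_clusters), N)
        idx = np.argsort(w)[-n_t:]                # the n_t instances with the largest weights

        gmm = GaussianMixture(n_components=n_clusters,
                              random_state=rng.randint(1 << 30)).fit(X[idx])
        H = gmm.predict_proba(X)                  # membership degrees h_{i,j} on the full set

        cq = 1.0 - H.max(axis=1) + H.min(axis=1)  # (3) clustering quality per instance
        eps = 0.5 * np.sum(w * cq)                # (2) pseudoloss
        if eps >= 0.5 or eps <= 0.0:              # simplified stopping criterion
            break
        beta = (1.0 - eps) / eps

        w = w * beta ** cq                        # (4) raise weights of badly clustered points
        w = w / w.sum()                           # renormalize so W stays a distribution

        # (5): the common 1/sum(log beta) factor does not change the argmax, so it is dropped
        votes += np.log(beta) * H

    return votes.argmax(axis=1)                   # final aggregate partition
```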

In the above methodology, the most critical issue is how to evaluate the clustering quality of an instance x_i for the partition H^t. Based on H_i^t, the clustering quality is computed using (3). We can rewrite (3) as CQ_i^t = 1 - (h_{i,good}^t - h_{i,bad}^t): the larger h_{i,good}^t - h_{i,bad}^t is, the smaller CQ_i^t is, i.e., the better the clustering quality of instance x_i. Based on the CQ index, at each iteration t the pseudoloss \epsilon^t is computed using (2), and the weight distribution w_i^{t+1} for the next iteration is computed using (4). With this formula, the weights of well-clustered points (whose clustering quality is high) are reduced and the weights of badly clustered points are raised. Thus, in the first iteration the boost-GMM algorithm partitions the original data set, while in the next iteration it clusters the data points that were hard to cluster in the first iteration (the N_t most badly clustered instances), and each following iteration partitions the instances that were hard to cluster in the previous iterations. For early stopping, two criteria are used: the algorithm terminates if the GMM has a pseudoloss \epsilon^t greater than 0.5 (in which case the partitioning result of the last iteration is not taken into account), or if the pseudoloss does not increase further after three iterations.

Many distributions occurring in nature are close to the normal distribution, so when the characteristics of the data set are not well known, GMM is a better choice than k-means. However, if only one GMM is used for clustering, each cluster is represented by a single Gaussian. Our method uses boosting to represent each cluster by several Gaussians combined with compatible weights (like a GMM), so that each cluster is itself represented by a mixture of Gaussians, which represents the data points better than a single Gaussian.

Another issue in the above methodology is how to renumber the cluster indexes at iteration t. Because the initialization of GMM is random, different runs may produce different partitions, so the cluster indexes of H^t must be renumbered according to the highest matching score with the cluster indexes of H_{ag}^{t-1}. To solve this problem we use the Hungarian algorithm. Assume that P_1 and P_2 are two data partitions. First, from the two partitions P_1 and P_2 we build a matrix M:

    M_j^i(k) = 1 if x_k \in C_j^i, and 0 if x_k \notin C_j^i,  k = 1, ..., N,

where i = 1 or 2 is the index of the partition and C_j^i is the list of samples in the j-th cluster of partition i. From the matrix M we build the matching cost matrix between the clusters of the two partitions:

    MC_{i,j} = \frac{(X_i^1)^T X_j^2}{(X_i^1)^T X_i^1 + (X_j^2)^T X_j^2 - (X_i^1)^T X_j^2},  i, j = 1, ..., C,    (6)

where X_i^1 and X_j^2 are the rows of M corresponding to cluster i of P_1 and cluster j of P_2 (the binary indicator vectors of the two clusters), so that MC_{i,j} is the number of shared instances divided by the size of the union of the two clusters. We apply the Hungarian algorithm to the matrix MC to determine the C best matching pairs of clusters between P_1 and P_2, and then renumber the cluster indexes of P_2 according to these matching pairs. Note that this technique is a heuristic: it yields a good, but not necessarily the best, renumbering. The best result could be obtained by brute-force search over all permutations, but this is infeasible when the number of clusters is large.
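Below is a small sketch of this renumbering step, assuming SciPy: the matching scores of equation (6) are computed from the indicator vectors, and scipy.optimize.linear_sum_assignment (the Hungarian algorithm) finds the best one-to-one matching. Names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def renumber_clusters(labels_prev, labels_new, n_clusters):
    """Permutation that renumbers the clusters of labels_new to best match labels_prev."""
    # binary indicator vectors: one row per cluster, one column per instance (the matrix M)
    A = np.array([(labels_prev == j).astype(float) for j in range(n_clusters)])
    B = np.array([(labels_new == j).astype(float) for j in range(n_clusters)])

    inter = A @ B.T                                       # numbers of shared instances
    union = A.sum(axis=1)[:, None] + B.sum(axis=1)[None, :] - inter
    mc = inter / (union + 1e-12)                          # matching score matrix, equation (6)

    rows, cols = linear_sum_assignment(-mc)               # Hungarian algorithm, maximize similarity
    perm = np.empty(n_clusters, dtype=int)
    perm[cols] = rows                                     # new cluster index -> matched old index
    return perm

# usage: perm = renumber_clusters(prev_labels, new_labels, C); relabeled = perm[new_labels]
```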

4. EXPERIMENTAL RESULTS
To test boost-GMM, we use Accuracy and Normalized Mutual Information (NMI) to measure the quality of the final clustering solutions. Accuracy is the number of correctly clustered samples divided by the total number of test samples. The Normalized Mutual Information between the labels X of the testing set and the final clustering solution Y is

    NMI(X, Y) = \frac{I(X, Y)}{\sqrt{H(X) H(Y)}},

where I(X, Y) = H(X) + H(Y) - H(X, Y); H(X) is the entropy of X, H(Y) is the entropy of Y, and H(X, Y) is the joint entropy of X and Y.

Figure 4. Illustration of Mutual Information I(X, Y).

We tested our algorithm on the MIT dataset. This dataset contains 8 outdoor scene categories: coast, mountain, forest, open country, street, inside city, tall buildings and highways. There are 2688 color images of 256x256 pixels.

Figure 5. Examples of the 8 outdoor scene categories.

To construct the testing sets, we mix images from several categories. For example, 2x100 denotes the first 100 images of coast and of mountain, and 4x200 denotes the first 200 images of coast, mountain, forest and open country.
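For reference, a minimal sketch of the two evaluation measures, assuming SciPy and scikit-learn. Mapping predicted clusters to ground-truth classes with the Hungarian algorithm before counting correct samples is a common convention that we assume here; the geometric averaging option reproduces the sqrt(H(X) H(Y)) normalization above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Accuracy after the best one-to-one matching of clusters to classes (Hungarian)."""
    classes = np.unique(y_true)
    clusters = np.unique(y_pred)
    overlap = np.zeros((len(clusters), len(classes)))
    for i, c in enumerate(clusters):
        for j, k in enumerate(classes):
            overlap[i, j] = np.sum((y_pred == c) & (y_true == k))
    rows, cols = linear_sum_assignment(-overlap)          # maximize the matched counts
    return overlap[rows, cols].sum() / len(y_true)

def nmi(y_true, y_pred):
    """NMI with the sqrt(H(X) * H(Y)) normalization used in the text."""
    return normalized_mutual_info_score(y_true, y_pred, average_method="geometric")
```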

Table 1. Time of feature extraction and running algorithms (in seconds)

Input   Feature Extraction   Kmeans    GMM       boost-Kmeans   boost-GMM
2x100    90.339491           0.0588    0.0613    0.0879          0.1341
2x150   132.091160           0.0901    0.0958    0.1289          0.2576
2x200   171.601068           0.1399    0.1391    0.1724          0.3252
4x100   171.913414           0.1443    0.1467    0.2226          1.0720
4x150   251.784309           0.1950    0.2153    0.3297          2.3643
4x200   337.243440           0.2687    0.2712    0.3872          3.0982
8x100   337.572913           0.3293    0.3411    0.6023          6.5913
8x150   567.190846           0.5122    0.5560    0.8947         17.3812
8x200   734.957941           0.7108    0.7642    1.3500         25.7541

Because the initialization of GMM is random, the output of our algorithm varies from run to run. Therefore, we executed each algorithm ten times and computed the mean and the standard deviation (std):

    mean = \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i,        std = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 }
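As an illustration of this protocol (not the authors' code), the snippet below runs a clustering algorithm ten times with different random seeds and reports the mean and population standard deviation of the NMI; k-means on synthetic data stands in for any of the compared methods.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import normalized_mutual_info_score

X, y_true = make_blobs(n_samples=400, centers=4, n_features=30, random_state=0)

scores = []
for seed in range(10):                        # ten runs with different initializations
    y_pred = KMeans(n_clusters=4, n_init=1, random_state=seed).fit_predict(X)
    scores.append(normalized_mutual_info_score(y_true, y_pred, average_method="geometric"))

scores = np.asarray(scores)
print(f"NMI = {scores.mean():.4f} +/- {scores.std():.4f}")   # np.std uses 1/n, as in the text
```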

In the first test, we constructed the testing sets from 2 categories up to 8 categories with 100 images per category. The results are shown in Table 2, Figure 6 and Table 3, Figure 7.

Table 2. Accuracy of partitioning on the testing sets of 100 images per category

          Kmeans            GMM               boost-Kmeans      boost-GMM
Input     mean     std      mean     std      mean     std      mean     std
2x100     0.9750   0.0000   0.9850   0.0000   0.9750   0.0000   0.9850   0.0000
3x100     0.6893   0.0649   0.7307   0.0720   0.7207   0.0387   0.7613   0.0065
4x100     0.7365   0.0470   0.7580   0.0690   0.7600   0.0000   0.7925   0.0000
5x100     0.5916   0.0654   0.6248   0.0761   0.6268   0.0721   0.6588   0.0463
6x100     0.5457   0.0436   0.5510   0.0534   0.5520   0.0556   0.5953   0.0432
7x100     0.5486   0.0371   0.5683   0.0524   0.5626   0.0254   0.5940   0.0189
8x100     0.5262   0.0410   0.5205   0.0432   0.5300   0.0337   0.5625   0.0291

Figure 6. The Accuracy of partitioning on the testing sets of 100 images per category.

Table 3. NMI of partitioning on the testing sets of 100 images per category

          Kmeans            GMM               boost-Kmeans      boost-GMM
Input     mean     std      mean     std      mean     std      mean     std
2x100     0.8381   0.0000   0.8888   0.0000   0.8381   0.0000   0.8888   0.0000
3x100     0.4727   0.0152   0.5150   0.0232   0.4725   0.0151   0.5240   0.0032
4x100     0.5188   0.0123   0.5692   0.0249   0.5250   0.0000   0.5817   0.0000
5x100     0.4507   0.0224   0.4972   0.0343   0.4555   0.0182   0.4964   0.0184
6x100     0.3927   0.0134   0.4143   0.0115   0.3885   0.0226   0.4232   0.0135
7x100     0.4243   0.0097   0.4576   0.0170   0.4302   0.0036   0.4683   0.0066
8x100     0.4147   0.0168   0.4524   0.0278   0.4098   0.0135   0.4619   0.0186

Figure 7. The NMI of partitioning on the testing sets of 100 images per category.

In the second test, we constructed the testing sets from 2 categories up to 8 categories with 150 images per category. The results are shown in Table 4, Figure 8 and Table 5, Figure 9.

Table 4. Accuracy of partitioning on the testing sets of 150 images per category

          Kmeans            GMM               boost-Kmeans      boost-GMM
Input     mean     std      mean     std      mean     std      mean     std
2x150     0.9767   0.0000   0.9900   0.0000   0.9767   0.0000   0.9900   0.0000
3x150     0.6907   0.0627   0.7142   0.0931   0.7089   0.0809   0.7253   0.0927
4x150     0.7183   0.0467   0.7563   0.1198   0.7377   0.0033   0.8093   0.0098
5x150     0.5499   0.0259   0.5880   0.0263   0.5579   0.0251   0.6165   0.0194
6x150     0.4698   0.0497   0.5456   0.0392   0.4802   0.0443   0.5731   0.0337
7x150     0.4808   0.0202   0.5503   0.0198   0.5084   0.0159   0.5674   0.0061
8x150     0.4603   0.0399   0.5113   0.0301   0.4652   0.0235   0.5428   0.0258

Figure 8. The Accuracy of partitioning on the testing sets of 150 images per category.

Table 5. NMI of partitioning on the testing sets of 150 images per category

          Kmeans            GMM               boost-Kmeans      boost-GMM
Input     mean     std      mean     std      mean     std      mean     std
2x150     0.8433   0.0000   0.9291   0.0000   0.8433   0.0000   0.9291   0.0000
3x150     0.4864   0.0257   0.5218   0.0275   0.4861   0.0254   0.5282   0.0242
4x150     0.5056   0.0188   0.6076   0.0561   0.5110   0.0032   0.6214   0.0133
5x150     0.3815   0.0154   0.4558   0.0103   0.3799   0.0167   0.4566   0.0135
6x150     0.3396   0.0049   0.4032   0.0107   0.3386   0.0042   0.4042   0.0183
7x150     0.3649   0.0049   0.4138   0.0033   0.3574   0.0054   0.4146   0.0041
8x150     0.3486   0.0149   0.4177   0.0237   0.3553   0.0162   0.4224   0.0127

Figure 9. The NMI of partitioning on the testing sets of 150 images per category.

In the third test, we constructed the testing sets from 2 categories up to 8 categories with 200 images per category. The results are shown in Table 6, Figure 10 and Table 7, Figure 11.

Table 6. Accuracy of partitioning on the testing sets of 200 images per category

          Kmeans            GMM               boost-Kmeans      boost-GMM
Input     mean     std      mean     std      mean     std      mean     std
2x200     0.9675   0.0000   0.9725   0.0000   0.9650   0.0000   0.9725   0.0000
3x200     0.6920   0.0977   0.7420   0.0686   0.7413   0.0016   0.7890   0.0061
4x200     0.7007   0.0391   0.7652   0.1170   0.7000   0.0644   0.8007   0.0410
5x200     0.5274   0.0368   0.6030   0.0553   0.5528   0.0345   0.6598   0.0315
6x200     0.4828   0.0353   0.5762   0.0161   0.4888   0.0244   0.6037   0.0108
7x200     0.4774   0.0155   0.5499   0.0219   0.4820   0.0214   0.5806   0.0132
8x200     0.4707   0.0368   0.5449   0.0353   0.4851   0.0114   0.5579   0.0159

Figure 10. The Accuracy of partitioning on the testing sets of 200 images per category.

Table 7. NMI of partitioning on the testing sets of 200 images per category

          Kmeans            GMM               boost-Kmeans      boost-GMM
Input     mean     std      mean     std      mean     std      mean     std
2x200     0.7944   0.0000   0.8267   0.0000   0.7831   0.0000   0.8267   0.0000
3x200     0.5089   0.0137   0.5072   0.0350   0.5036   0.0022   0.5446   0.0083
4x200     0.5067   0.0046   0.6057   0.0581   0.5092   0.0031   0.6176   0.0255
5x200     0.3907   0.0019   0.4805   0.0170   0.3977   0.0058   0.4871   0.0130
6x200     0.3402   0.0026   0.4018   0.0109   0.3465   0.0048   0.4196   0.0056
7x200     0.3624   0.0041   0.4398   0.0085   0.3686   0.0082   0.4484   0.0043
8x200     0.3615   0.0137   0.4456   0.0160   0.3742   0.0160   0.4479   0.0050

Figure 11. The NMI of partitioning on the testing sets of 200 images per category.

The results of testing on the whole MIT set are shown in Table 8.

Table 8. Performance of partitioning on the whole MIT set

                 Accuracy   NMI
Kmeans           0.5022     0.3630
GMM              0.6187     0.4671
boost-Kmeans     0.5141     0.3668
boost-GMM        0.6533     0.4925

From Tables 2 to 8, we can see that the results of boost-GMM are mostly better than those of the other methods, but our method costs much more time, as can be seen in Table 1. Another problem is that the results for 3 categories are lower than for 4 categories; this is because we used PCA to reduce the dimension from 960 to 30, and PCA cannot preserve all of the important features.

We also tested object images. Although boost-GMM is still better than the other methods, the results are low, because the gist feature is not efficient for clustering object images.

5. CONCLUSION
In this paper, we proposed the boost-GMM algorithm, which is based on boosting, to cluster scenes. Although our algorithm has a longer running time than the other algorithms, its results are better. Because the amount of information in an image is huge, in future work we will study better features and better measures for clustering images, as well as dimensionality reduction methods other than PCA. Another task is to apply boost-GMM to clustering object images.

6. REFERENCES


[1] Halkidi, M., Batistakis, Y., Vazirgiannis, M., 2001. Clustering algorithms and validity measures. In Proceedings of the 13th International Conference on Scientific and Statistical Database Management, July 18-20, IEEE Computer Society, George Mason University, Fairfax, Virginia, USA.
[2] Bezdek, J., 1981. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York.
[3] Bezdek, J., Pal, S., 1992. Fuzzy Models for Pattern Recognition: Methods that Search for Structures in Data. IEEE CS Press.
[4] Frossyniotis, D., Likas, A., Stafylopatis, A., 2004. A clustering method based on boosting. Pattern Recognition Letters, 25, pp. 641-654.
[5] Dempster, A.P., Laird, N.M., and Rubin, D.B., 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.
[6] Bishop, C.M., 2006. Pattern Recognition and Machine Learning. Springer.
[7] Boundaillier, E., Hebrail, G., 1998. Interactive interpretation of hierarchical clustering. Intelligent Data Analysis, 2(3).
[8] Fred, A., 2001. Finding consistent clusters in data partitions. In Proceedings of the Second International Workshop on Multiple Classifier Systems (MCS 2001), Lecture Notes in Computer Science, 2096, Springer, Cambridge, UK, pp. 309-318.
[9] Tang, H. and Huang, T.S., 2008. Boosting Gaussian mixture models via discriminant analysis. In Proceedings of the International Conference on Pattern Recognition (ICPR'08), Tampa, FL, December 2008.
[10] Miranda, A.A., Le Borgne, Y.A., and Bontempi, G., 2008. New routes from minimal approximation error to principal components. Neural Processing Letters, Springer, 27(3), June 2008.
[11] Jain, A.K., Murty, M.N., and Flynn, P.J., 1999. Data clustering: a review. ACM Computing Surveys, 31(3), September 1999, pp. 264-323.
[12] Oliva, A., and Torralba, A., 1999. Semantic organization of scenes using discriminant structural templates. In Proceedings of the International Conference on Computer Vision (ICCV99), Corfu, Greece, pp. 1253-1258.
[13] Oliva, A., and Torralba, A., 2002. Scene-centered description from spatial envelope properties. In Proceedings of the Second International Workshop on Biologically Motivated Computer Vision, Lecture Notes in Computer Science, Springer-Verlag, Tuebingen, Germany, pp. 263-272.
[14] Oliva, A., Itti, L., Rees, G., and Tsotsos, J.K., 2005. Gist of the scene. In the Encyclopedia of Neurobiology of Attention, San Diego, CA, pp. 251-256.
[15] Oliva, A., and Ross, M.G., 2010. Estimating perception of scene layout properties from global image features. Journal of Vision, 10(1):2, 1-25.
[16] Oliva, A., and Torralba, A., 2010. Scene image database: 8 classes, 2688 images. http://cvcl.mit.edu/database.htm or http://people.csail.mit.edu/torralba/code/spatialenvelope/ (accessed 15/5/2010).
[17] Cai, D., He, X., Li, Z., Ma, W.-Y., and Wen, J.-R., 2004. Hierarchical clustering of WWW image search results using visual, textual and link information. In Proceedings of ACM Multimedia 2004, New York, New York, USA.
[18] Wang, F., Zhang, C. and Lu, N., 2005. Boosting GMM and its two applications. In Proceedings of the Sixth International Workshop on Multiple Classifier Systems (MCS 2005), Lecture Notes in Computer Science, vol. 3541, Springer-Verlag, Seaside, California, USA, June 13-15, 2005, p. 12.
[19] Strehl, A., and Ghosh, J., 2002. Cluster ensembles: a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research (JMLR), 3:583-617, December 2002.
[20] Ding, C., and He, X., 2004. K-means clustering via principal component analysis. In Proceedings of the International Conference on Machine Learning (ICML 2004), pp. 225-232, July 2004.
[21] Domeniconi, C., Papadopoulos, D., Gunopulos, D., Ma, S., 2004. Subspace clustering of high dimensional data. In Proceedings of the SIAM International Conference on Data Mining (SDM 2004).
[22] Gunopulos, D., Vazirgiannis, M., Halkidi, M., 2006. Novel aspects in unsupervised learning: semi-supervised and distributed algorithms. Tutorial at the 17th European Conference on Machine Learning and the 10th European Conference on Principles and Practice of Knowledge Discovery in Databases (2006).
[23] Nistér, D. and Stewenius, H., 2006. Scalable recognition with a vocabulary tree. In Proceedings of Computer Vision and Pattern Recognition (CVPR 2006).
[24] Kargupta, H., Huang, W., Sivakumar, K., and Johnson, E., 2001. Distributed clustering using collective principal component analysis. Knowledge and Information Systems, 3(4), 2001.
[25] Davidson, I., and Ravi, S.S., 2005. Clustering under constraints: feasibility results and the k-means algorithm. In Proceedings of the SIAM International Conference on Data Mining (SDM 2005).
[26] Davidson, I., and Ravi, S.S., 2005. Hierarchical clustering with constraints: theory and practice. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (2005).
[27] Jolliffe, I.T., 2002. Principal Component Analysis, Springer Series in Statistics, 2nd ed., Springer, New York, 2002.
[28] Philbin, J., Chum, O., Isard, M., Sivic, J., and Zisserman, A., 2007. Object retrieval with large vocabularies and fast spatial matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007).
[29] Parsons, L., Haque, E., and Liu, H., 2004. Subspace clustering for high dimensional data: a review. ACM SIGKDD Explorations Newsletter, 6(1):90-105, 2004.
[30] Qiao, M. and Li, J., 2010. Two-way Gaussian mixture models for high dimensional classification. Statistical Analysis and Data Mining, 3(4), pp. 259-271.
[31] Viola, P. and Jones, M., 2001. Rapid object detection using a boosted cascade of simple features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2001), pp. 511-518.
