Semantic Image Retrieval and Auto-Annotation by Converting Keyword Space to Image Space

Erbug Celebi
Dokuz Eylul University, Department of Computer Engineering, 35100 Izmir, Turkey
[email protected]

Adil Alpkocak
Dokuz Eylul University, Department of Computer Engineering, 35100 Izmir, Turkey
[email protected]

Abstract
In this paper, we propose a novel strategy at an abstract level that combines textual and visual clustering results to retrieve images using semantic keywords and to auto-annotate images based on their similarity with existing keywords. Our main hypothesis is that images that fall into the same text cluster can be described with the common visual features of those images. To implement this hypothesis, we estimate the common visual features of the textually clustered images. Given an un-annotated image, we find the best image match in the different textual clusters by processing their low-level features. Experiments have demonstrated the good accuracy of the proposal and its high potential for use in the annotation of images and for the improvement of content-based image retrieval.

1. Introduction

The emergence of multimedia technology and the rapidly expanding multimedia collections on the Internet have attracted significant research efforts in providing tools for the effective retrieval and management of multimedia data. Image retrieval is based on the availability of a representation scheme of image content. Image content descriptors may be visual features such as color, texture, shape, and spatial relationships, or semantic primitives. Conventional information retrieval was based on text, and those approaches to textual information retrieval have been transferred to image retrieval in a variety of ways. However, "a picture is worth a thousand words": image contents are much more versatile than text, and the amount of visual data is already enormous and still expanding very rapidly. To deal with these special characteristics of visual data, content-based image retrieval methods have been introduced. It has been widely recognized that the family of image retrieval techniques should become an integration of both low-level visual features, addressing the more detailed perceptual aspects, and high-level semantic features, underlying the more general conceptual aspects of visual data. Neither of these two types of features alone is sufficient to retrieve or manage visual data in an effective or efficient way [5]. Although efforts have been devoted to combining these two aspects of visual data, the gap between them is still a huge barrier for researchers, and intuitive and heuristic approaches do not provide satisfactory performance. Therefore, there is an urgent need to find the latent correlation between low-level features and high-level concepts and to merge them from a different perspective. How to find this new perspective and bridge the gap between visual features and semantic features has been a major challenge in this research field.

There is a considerable amount of research in the literature on this subject. James Z. Wang et al. [4] present an image retrieval system, SIMPLIcity, which uses integrated region matching based on image segmentation. Their system classifies images into semantic categories such as textured vs. non-textured and graph vs. photograph, and they have developed a series of statistical image classification methods for searching images. Duygulu et al. and Hofmann have used probabilistic approaches to find latent classes of an image corpus [3][6]. Extracting the semantics of images can also be considered as the auto-annotation of images with keywords. One approach to automatically annotating images is to look at the probability of associating words with image regions. Mori et al. [11] used a co-occurrence model, in which they look at the co-occurrence of words with image regions created using a regular grid. More recently, a few other researchers have examined the problem using machine learning approaches; in particular, Duygulu et al. [3] proposed to describe images using a vocabulary of blobs. There are also many studies [6][1] on translating one language to another using probabilistic approaches such as Probabilistic Latent Semantic Analysis (PLSA). Statistically oriented approaches assume that machines can learn (about) natural language from training data such as document collections and text corpora.

In this study our main hypothesis is that images that fall into the same text cluster can be described with the common visual features of those images. In this approach, images are first clustered according to their text annotations using the C3M clustering technique. The images are also segmented into regions and then clustered based on low-level visual features, using the k-means clustering algorithm on the image regions. The feature vector of each image is then transformed to a dimension equal to the number of visual clusters, where each entry of the new feature vector signifies the contribution of the image to that visual cluster. A matrix is then created for each textual cluster, where each row of the matrix is the new feature vector of an image in that textual cluster. A feature vector is also created for the query image and appended to the matrix of each textual cluster; the images in the textual cluster that gives the highest coupling coefficient are considered for retrieval, and the annotations of the images in that textual cluster are considered as candidate annotations for the query image.

The main contribution of this paper is a new strategy (1) to retrieve images using semantic keywords and (2) to auto-annotate images based on similarity with existing keywords, thereby bridging the gap between low-level visual features and the lack of semantic knowledge in multimedia information retrieval. Our solution works at an abstract level and combines the results of textual and visual clustering algorithms. The main idea behind this strategy is that images within the same text cluster should also share common visual features and could be stored in the same visual cluster. In our study, the C3M and k-means clustering algorithms are used for clustering the textual annotations of images and the low-level visual features, respectively.

The remainder of the paper is organized as follows. The next section gives a review of the C3M algorithm, and the use of our strategy combining textual and visual clustering properties is discussed in detail in Section 3. In Section 4, experimental results are presented, and Section 5 concludes the paper and provides an outlook on our future studies on this subject.

2. C3M

The Cover Coefficient-based Clustering Methodology (C3M) was originally proposed by Can and Ozkarahan [2] to cluster text documents. The base concept of the algorithm, the cover coefficient (CC), provides a means of estimating the number of clusters within a document database and relates indexing and clustering analytically. The CC concept is also used to identify the cluster seeds and to form clusters around these seeds. Retrieval experiments show that the information-retrieval effectiveness of the algorithm is compatible with that of the very demanding complete-linkage clustering method, which is known to have good retrieval performance.

C3M forms document clusters from cluster seeds and member documents. Cluster seeds are selected using the seed power concept: the documents with the highest seed power are chosen as seed documents. Can and Ozkarahan [2] showed that the complexity of C3M is better than that of most other clustering algorithms, whose complexities range from O(m^2) to O(m^3); their experiments also show that C3M is time efficient and suitable for very large databases, and its low complexity has been validated experimentally. C3M has all the desirable properties of a good clustering algorithm. It is a partitioning-type clustering algorithm (clusters cannot share documents). A generally accepted strategy for generating a partition is to choose a set of documents as seeds and to assign the ordinary (non-seed) documents to the clusters initiated by the seed documents; this is the strategy used by C3M.

The cover coefficient, CC, is the base concept of C3M clustering. The CC concept serves to (i) identify the relationships among the documents of a database by means of the CC matrix, (ii) determine the number of clusters that will result in a document database, (iii) select cluster seeds using a new concept, the cluster seed power, (iv) form clusters with respect to C3M using concepts (i)-(iii), and (v) correlate the relationships between clustering and indexing.

C3M is a seed-based, partitioning-type clustering scheme that basically consists of two steps: cluster seed selection and cluster construction. The input to C3M is the D matrix, which represents the documents and their terms; it is assumed that each document contains n terms and the database consists of m documents. In order to employ cluster seeds, C3M constructs the C matrix, a document-by-document matrix whose entries c_ij (1 <= i, j <= m) indicate the probability of selecting any term of d_i from d_j. In other words, the C matrix expresses the relationship between documents through a two-stage probability experiment that randomly selects terms from documents in two stages: the first stage randomly chooses a term t_k of document d_i, and the second stage chooses the selected term t_k from document d_j. To calculate c_ij, one first selects an arbitrary term of d_i, say t_k, and then uses this term to try to select document d_j from it, that is, to check whether d_j contains t_k. Each row of the C matrix summarizes the results of this two-stage experiment. Let s_ik denote the event of selecting t_k from d_i at the first stage, and let s'_jk denote the event of selecting d_j from t_k at the second stage. In this experiment, the probability of the simple event "s_ik and s'_jk", that is, P(s_ik, s'_jk), can be represented as P(s_ik) x P(s'_jk). To simplify the notation, we write s_ik and s'_jk for P(s_ik) and P(s'_jk), respectively, where

s_{ik} = \frac{d_{ik}}{\sum_{h=1}^{n} d_{ih}}, \qquad s'_{jk} = \frac{d_{jk}}{\sum_{h=1}^{m} d_{hk}}, \qquad 1 \le i, j \le m, \; 1 \le k \le n.

By considering document d_i, we can represent the D matrix with respect to this two-stage probability model. Each element c_ij of the C matrix (the probability of selecting a term of d_i from d_j) can be found by summing the probabilities of the individual paths from d_i to d_j:

c_{ij} = \sum_{k=1}^{n} s_{ik} \, s'_{jk}.

This can be rewritten as

c_{ij} = \alpha_i \sum_{k=1}^{n} d_{ik} \, \beta_k \, d_{jk}, \qquad 1 \le i, j \le m,

where \alpha_i and \beta_k are the reciprocals of the i-th row sum and the k-th column sum of D, respectively:

\alpha_i = \frac{1}{\sum_{j=1}^{n} d_{ij}}, \quad 1 \le i \le m, \qquad \beta_k = \frac{1}{\sum_{j=1}^{m} d_{jk}}, \quad 1 \le k \le n.
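To make the formulas above concrete, the following minimal Python sketch computes the C matrix, together with the decoupling and coupling coefficients introduced below, from a document-by-term matrix D. The function and variable names are ours, not the paper's; columns of D with a zero sum are simply ignored.

    import numpy as np

    def cover_coefficients(D):
        """Return (C, delta, psi) for an m x n document-by-term matrix D.

        C[i, j] = alpha_i * sum_k D[i, k] * beta_k * D[j, k], where alpha_i is
        the reciprocal of the i-th row sum and beta_k the reciprocal of the
        k-th column sum of D.
        """
        D = np.asarray(D, dtype=float)
        alpha = 1.0 / D.sum(axis=1)                       # row-sum reciprocals
        col = D.sum(axis=0)
        beta = np.divide(1.0, col, out=np.zeros_like(col), where=col > 0)
        C = alpha[:, None] * ((D * beta) @ D.T)           # m x m cover-coefficient matrix
        delta = np.diag(C)                                # decoupling coefficients c_ii
        psi = 1.0 - delta                                 # coupling coefficients
        return C, delta, psi

    # Tiny check on an arbitrary 3x3 D matrix: each row of C sums to 1,
    # in agreement with property (ii) listed below.
    C, delta, psi = cover_coefficients([[2, 1, 0], [0, 1, 3], [1, 0, 1]])
    print(np.round(C, 2), C.sum(axis=1))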

Properties of the C matrix. The following properties hold for the C matrix:
(i) For i != j, 0 <= c_ij <= c_ii, and c_ii > 0.
(ii) c_i1 + c_i2 + c_i3 + ... + c_im = 1.
(iii) If none of the terms of d_i is used by the other documents, then c_ii = 1; otherwise c_ii < 1.
(iv) If c_ij = 0 then c_ji = 0, and similarly, if c_ij > 0 then c_ji > 0; but in general c_ij != c_ji.
(v) c_ii = c_jj = c_ij = c_ji if and only if d_i and d_j are identical.

From these properties of the C matrix and from the CC relationships between two document vectors, c_ij can be seen to have the following meaning:

c_ij = extent to which d_i is covered by d_j, for i != j (coupling of d_i with d_j);
     = extent to which d_i is covered by itself, for i = j (decoupling of d_i from the rest of the documents).

As can be seen from the foregoing discussion, if d_i (1 <= i <= m) is relatively more distinct in a D matrix (i.e., if d_i contains fewer terms that are common with other documents), then c_ii takes higher values. For this reason, c_ii is called the decoupling coefficient, \delta_i, of d_i. (Notice that \delta_i is a "measure" of how much the document is not related to the other documents, which is why the word coefficient is used.) The sum of the off-diagonal entries of the i-th row indicates the extent of coupling of d_i with the other documents of the database and is referred to as the coupling coefficient, \psi_i, of d_i. From the properties of the C matrix, the following equations can be written:

\delta_i = c_{ii} : decoupling coefficient of d_i,
\psi_i = 1 - \delta_i : coupling coefficient of d_i,

\delta = \sum_{i=1}^{m} \frac{\delta_i}{m}, \quad 0 < \delta < 1, \qquad \psi = \sum_{i=1}^{m} \frac{\psi_i}{m}, \quad 0 \le \psi \le 1.

3. Image Auto-Annotation

The main idea of our strategy for the auto-annotation of images is very simple. We presume that images with similar annotations must share at least some similar low-level features. If this is true, can this correlation be used to associate low-level visual features, addressing the more detailed perceptual aspects, with high-level semantic features? More formally, images that fall into the same text cluster can be described by their common visual features and could be stored in the same color cluster. One can easily find annotated images that serve as counter-examples and do not obey this hypothesis; however, it cannot be said that images with similar annotations never share similar low-level features. Clearly, our approach depends heavily on the training set, and the images must be annotated with care. On the other hand, our main hypothesis relies on the intersection of the textual and the low-level visual features.

In our approach, images are first clustered according to their text annotations. The images are also segmented into regions and then clustered based on the low-level visual features of the image regions. The feature vector of each image is then transformed to a dimension equal to the number of visual clusters, where each entry of the new feature vector signifies the contribution of the image to that visual cluster. A matrix is then created for each textual cluster, where each row of the matrix is the new feature vector of an image in that textual cluster. A feature vector is also created for the query image and appended to the matrix of each textual cluster; the images in the textual cluster that gives the highest coupling coefficient are considered for retrieval, and the annotations of the images in that textual cluster are considered as candidate annotations for the query image.

3.1. Training

The training phase of our approach, which is based on the combination of textual and visual clustering, has three main steps: textual clustering, visual clustering, and replacing. In the first step, all training images T are clustered according to their textual annotations using C3M. Secondly, all image regions are clustered according to their visual similarity with the k-means clustering algorithm, where the number of clusters is n_{c-color} for the color features. Let K(t) be the k-means function and T_s the set of regions; the clustering can be formally defined as

K(t): T_s \rightarrow M_{c_i}, \qquad 0 < i \le n_{c\text{-}color},

where T_s = {t : t is a segment of image I, for all I in T}, and K(t) gives the corresponding color-cluster id of region t. After the K(t) transformation, the dimension of the image feature vectors equals the number of cluster sets M_{c_i}. Each image I_j is then represented as a vector in the n_{c-color}-dimensional space:

I_j = < i_{j1}, i_{j2}, ..., i_{j n_{c-color}} >.

Each entry of the new feature vector signifies the contribution of the corresponding color cluster to image j. Formally, let i_jk denote the k-th entry of the vector I_j of the j-th image in the collection; an arbitrary entry of I_j is defined as

i_{jk} = \begin{cases} \sum_{t} w_t, & \text{if } K(s_t) = m_k \text{ and } K(s_p) = m_k \text{ for some } s_p \in I_j,\ p \ne t, \\ w_t, & \text{if } K(s_t) = m_k \text{ and } K(s_p) \ne m_k \text{ for all } s_p \in I_j,\ p \ne t, \end{cases}

where w_t is the weight of segment s_t. The vector is normalized so that the entries of I_j sum to 1. In other words, in this step each image is transformed into a new space, called the region space. We have thus constructed a new feature vector for each image in the training set by using k-means clustering; the new features of each image consist of the cluster ids that represent the segments of the image. At the end of the first two steps of the training phase we have two sets of clusters: the first contains the clusters of images based on text annotations, and the second contains the clusters of images based on the visual features of their regions. The last step of training replaces the image vectors of the textual clusters with the visual features; more precisely, we keep the textual clustering of the images, but each image within a cluster is represented by its visual features for annotation and retrieval. This concludes the training phase and yields a combination of the textual and visual features of the image collection. It is the most important phase of our approach and rests on the hypothesis that images with similar annotations should also have similar low-level features, that is, images that fall into the same text cluster should also have common visual features and could be stored in the same color cluster.
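The piecewise definition of i_jk above amounts to summing the area weights of all regions of an image that were assigned to the same color cluster and then normalising the resulting vector. The following is a minimal sketch under that reading; the names are ours, and 0-based cluster ids are assumed.

    import numpy as np

    def image_vector(regions, nc_color):
        """Build the region-space vector of one image.

        regions: list of (weight, cluster_id) pairs, one per segmented region,
        where cluster_id is the 0-based k-means color cluster K(s_t).
        """
        v = np.zeros(nc_color)
        for w, k in regions:
            v[k] += w                     # regions in the same color cluster are summed
        s = v.sum()
        return v / s if s > 0 else v      # normalise so the entries sum to 1

    # Image 1 of the worked example in Section 3.3: four regions with area
    # weights 40, 30, 10, 20 (%) assigned to color clusters 2, 5, 5, 5.
    v1 = image_vector([(40, 1), (30, 4), (10, 4), (20, 4)], nc_color=7)
    print(v1)  # contribution 0.4 to color cluster 2 and 0.6 to color cluster 5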

3.2. Annotation and image retrieval

After training the system, we have clusters of images in which each image is represented by visual features. In the annotation phase, a feature vector of visual properties is prepared for the image to be annotated or retrieved, as explained in the previous subsections. This vector is then appended as a new member to every cluster obtained in the training phase, the C matrix is calculated for each cluster, and we measure which of the images in each cluster are closest to the query image. Recall that the diagonal entries of the C matrix give the decoupling coefficient of an image, that is, how distinct the image is from the others in the cluster; the C matrix therefore also tells us, for each image in the cluster, how probable it is that the image is similar to the query image. The images having the highest values are retrieved, and the annotations are organized as described in the following sections.
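A sketch of this ranking step, reusing the cover-coefficient computation of Section 2 and the per-image score dist_i = c_im / c_mm that is worked out in the example of Section 3.3; the function names are ours.

    import numpy as np

    def cover_coefficients(D):
        """m x m C matrix: C[i, j] = alpha_i * sum_k D[i, k] * beta_k * D[j, k]."""
        D = np.asarray(D, dtype=float)
        col = D.sum(axis=0)
        beta = np.divide(1.0, col, out=np.zeros_like(col), where=col > 0)
        return (1.0 / D.sum(axis=1))[:, None] * ((D * beta) @ D.T)

    def score_cluster(cluster_matrix, query_vec):
        """Append the query vector to one textual cluster and return, for every
        image in the cluster, dist_i = C[i, m] / C[m, m], m being the query row."""
        M = np.vstack([cluster_matrix, query_vec])
        C = cover_coefficients(M)
        m = M.shape[0] - 1                      # index of the appended query row
        return C[:m, m] / C[m, m]

The images of the best-scoring cluster are then retrieved, and their keywords become the candidate annotations of the query image.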

3.3. An Example

Let us consider a trivial example to demonstrate clearly how our approach works and how the C3M algorithm is used. In this subsection, the variable values (e.g., the Mi values and the C3M clusters) are hypothetical and are not the result of our real experiments or calculations. Assume that we have 8 images that are clustered into 4 clusters (n_{c-text} = 4) according to their annotations by C3M, as shown in Table 1. In this example we consider only the color features as the low-level features of the images. Assume that the training set T has 8 images, that the total number of regions belonging to T is 30, and that an image has a minimum of 2 and a maximum of 5 regions. The visual features of the training images are clustered by k-means into 7 clusters (n_{c-color} = 7).

Table 1: Images, their annotations and clusters for the simple example.

Image Id   C3M Cluster Id   Image Annotation
1          1                sky, sun, water
2          1                sky, sun, trees
3          2                tiger, grass
4          2                tiger, grass, trees
5          2                tiger, grass, water
6          3                car, road, people
7          4                horse, mare, trees
8          4                horse, trees

Table 2 shows the images and their regions: the percentage of the image area covered by each region (W), the average RGB color values of the region's pixels, and the corresponding cluster id (Mi). C3M considers the percentage of a region within the entire image as the region's weight within the image. Notice that more than one region of the same image may be classified into the same cluster; for example, regions 2, 3 and 4 of image 1 fall into cluster 5, so they are in the same cluster and their weight is the sum of the W values of the regions with the same Mi value.

Table 2. Image segments and their corresponding k-means clusters.

Image ID   Segment ID   W    R    G    B    Mi
1          1            40   255  117  0    2
1          2            30   228  217  200  5
1          3            10   228  216  195  5
1          4            20   220  210  200  5
2          1            30   225  119  0    2
2          2            70   37   20   6    6
3          1            30   2    1    0    7
3          2            50   7    5    0    7
3          3            10   228  218  194  5
3          4            10   22   29   6    6
4          1            50   6    5    4    1
4          2            20   5    3    1    1
4          3            10   41   31   30   6
4          4            10   210  105  62   3
4          5            5    219  97   57   3
4          6            5    215  100  60   3
5          1            20   36   18   5    6
5          2            30   33   19   4    6
5          3            50   255  111  4    2
6          1            40   250  121  6    2
6          2            20   210  216  200  5
6          3            10   214  216  202  5
6          4            20   45   30   5    4
6          5            5    202  107  59   3
6          6            5    80   65   37   4
7          1            80   53   30   7    4
7          2            20   7    5    2    1
8          1            30   55   30   8    4
8          2            65   205  104  60   3
8          3            5    52   27   9    4

A new m x n_{c-color} feature matrix I (Figure 1) is created, where each entry i_jk (1 <= j <= m, 1 <= k <= n_{c-color}) holds the total weight of the regions of image j represented by color cluster k. Combining this matrix with the annotation-based clustering of the images shown in Table 1, we obtain the clusters generated with C3M as shown in Figure 2.

         0  40   0   0  60   0   0
         0  30   0   0   0  70   0
         0   0   0   0  10  10  80
    I = 70   0   0   0   0  10  20
         0  50   0   0   0  50   0
         0  40   5  25  30   0   0
        20   0   0  80   0   0   0
         0   0  65  35   0   0   0

Figure 1. Image x color-cluster matrix; each entry denotes the weight of the region(s) of the image represented by the corresponding color cluster.

    Ic1 = [  0  40   0   0  60   0   0
             0  30   0   0   0  70   0 ]

    Ic2 = [  0   0   0   0  10  10  80
            70   0   0   0   0  10  20
             0  50   0   0   0  50   0 ]

    Ic3 = [  0  40   5  25  30   0   0 ]

    Ic4 = [ 20   0   0  80   0   0   0
             0   0  65  35   0   0   0 ]

Figure 2. Clusters generated with C3M for the images shown in Table 1. The images in each cluster Ici are distinct.

Assume now that we are given a query image Q and are asked to find the best annotations for it and/or to retrieve the images most similar to it, where the properties of Q are as shown in Table 3.

Table 3. Query image and its segments for the simple example.

Segment ID   W    R    G    B    Mi
1            20   253  111  1    2
2            20   223  214  201  5
3            30   235  121  3    2
4            30   35   23   7    6

For our simple example, assume that we have calculated the query vector for the given query image as follows:

q' = <0, 50, 0, 0, 20, 30, 0>.

The Mi values of the query image are evaluated with the K(t) function exactly as in the training phase, and the new feature vector q' is obtained for the query image Q. Continuing with our simple example, q' is made a member of each cluster by appending the query vector to the Ic matrices as the last row, as shown in Figure 3.

    I'c1 = [  0  40   0   0  60   0   0
              0  30   0   0   0  70   0
              0  50   0   0  20  30   0 ]

    I'c2 = [  0   0   0   0  10  10  80
             70   0   0   0   0  10  20
              0  50   0   0   0  50   0
              0  50   0   0  20  30   0 ]

    I'c3 = [  0  40   5  25  30   0   0
              0  50   0   0  20  30   0 ]

    I'c4 = [ 20   0   0  80   0   0   0
              0   0  65  35   0   0   0
              0  50   0   0  20  30   0 ]

Figure 3. Cluster representation after adding the query vector to each cluster.

Then, the correlation of the query image q' with each cluster is calculated; in other words, we determine from the C3M properties how the query image is correlated with the other images in each cluster. As described in Section 2, the C3M algorithm expresses this correlation through the coupling coefficient \psi_{q'} of q', and the resulting C matrices of each I' are shown in Figure 4.

    C1 = [ 0.58  0.10  0.32        C2 = [ 0.68  0.17  0.05  0.10
           0.10  0.56  0.34               0.17  0.75  0.05  0.03
           0.32  0.34  0.34 ]             0.05  0.05  0.50  0.40
                                          0.10  0.03  0.40  0.47 ]

    C3 = [ 0.66  0.34              C4 = [ 0.76  0.24  0.00
           0.34  0.66 ]                   0.24  0.76  0.00
                                          0.00  0.00  1.00 ]

Figure 4. C matrices for each cluster in Figure 3.

The coupling value indicates how much an image is related to the others within the same collection; in other words, an image that shares many common features with the other documents has a high coupling coefficient and a low decoupling coefficient [12]. The diagonal entries of a C matrix are the decoupling coefficients of the images. We define the distance of each image i to the query image as follows:

dist_i = c_{im} \cdot \frac{1}{c_{mm}}, for each image i in cluster C,

where m is the number of images in cluster C after the query vector has been appended (i.e., the index of the query row).

Table 4: Query results for the example.

Image   1     2   3     4     5     6     7   8
dist    0.94  1   0.21  0.06  0.85  0.51  0   0

From these results, the cluster having the highest correlation, the first cluster, is chosen for annotation/retrieval. We can therefore say that I1 and I2 are the images most similar to the query image Q (depending on the threshold), and the query image Q can be annotated with the keywords sky, sun, water and trees. This concludes the trivial example.
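As a small sanity check of the example, the following sketch recomputes the C matrix of the first cluster from I'c1 and the per-image scores; it reproduces Figure 4 and Table 4 up to rounding. The names are ours.

    import numpy as np

    def cover_coefficients(D):
        D = np.asarray(D, dtype=float)
        col = D.sum(axis=0)
        beta = np.divide(1.0, col, out=np.zeros_like(col), where=col > 0)
        return (1.0 / D.sum(axis=1))[:, None] * ((D * beta) @ D.T)

    # Cluster 1 of the example (images 1 and 2) with the query vector appended.
    I_c1 = np.array([[0, 40, 0, 0, 60,  0, 0],
                     [0, 30, 0, 0,  0, 70, 0],
                     [0, 50, 0, 0, 20, 30, 0]])
    C1 = cover_coefficients(I_c1)
    print(np.round(C1, 2))          # close to the C1 of Figure 4 (0.58, 0.10, 0.32, ...)
    dist = C1[:2, 2] / C1[2, 2]     # dist_i = c_im / c_mm for images 1 and 2
    print(np.round(dist, 2))        # ~[0.91, 0.96]; Table 4 reports 0.94 and 1, which
                                    # match dividing the rounded Figure 4 entries
                                    # (0.32/0.34 and 0.34/0.34).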

4. Experiments

In this section we describe the experiments performed to assess the strengths and weaknesses of our system. We used 4500 images from the Corel image dataset to train the system and selected 500 images, distinct from the training set, for evaluation. For each image, the 10 largest regions are extracted and each region is represented by 13 low-level features. We obtained these feature sets from the publicly available dataset of Duygulu et al. [3]; using this dataset allows us to compare the performance of similar models in the literature in a controlled manner.

4.1. Training and query

In the training phase, the images are first clustered according to their text annotations with C3M. In our experiments, C3M estimated n_{c-text} (the number of clusters) as 89 for the training set's annotations. However, since each image in the training set is annotated with at least 1 and at most 5 keywords, C3M produced a few huge clusters. Because of this, we set n_{c-text} to 315, which is the maximum number of non-empty clusters. Secondly, the image regions are clustered according to their selected low-level features with k-means; we set the number of clusters n_{c-color} to 200 experimentally.
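For illustration only, the sketch below shows how the 200 visual clusters could be produced with an off-the-shelf k-means implementation. scikit-learn's KMeans is our stand-in here, not the clustering package actually used (see the acknowledgment), and the feature array is a random placeholder for the 13-dimensional Corel region features.

    import numpy as np
    from sklearn.cluster import KMeans

    # Placeholder for the training region features: 4500 images x 10 regions, 13 features each.
    region_features = np.random.rand(45000, 13)
    kmeans = KMeans(n_clusters=200, random_state=0, n_init=10).fit(region_features)
    cluster_ids = kmeans.labels_      # K(t): the color-cluster id of every region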

4.2. Image retrieval and auto-annotation

During the query phase, the images that are most similar to the query image according to our proposed methodology are retrieved. The retrieved images are ranked and the first 7 images are selected as the query result. The annotations of the retrieved images are taken as candidate annotations, and we select the 5, 7 or 10 most frequent keywords (three distinct experiments) from the candidate annotations to auto-annotate the query image. A total of 260 one-word queries are possible in the test dataset. In our experiments we used precision and recall tests to evaluate the auto-annotation results. Precision and recall tests for image retrieval are not an easy task in the absence of test beds for the image database used; they are also not easy for auto-annotation, because of the semantic similarity of keyword pairs such as sunset and sky, or horse and mare. For that reason we need to find the synonyms of the keywords (if any) in the dataset to make the evaluation results more meaningful. We constructed a thesaurus for the keywords used in the dataset with C3M: the C' matrix of the C3M algorithm [2] is used to form term clusters, so we can use it to find term similarities that yield synonym terms. For each keyword in the dataset, we select a synonym keyword as follows, with a threshold of 0.05:

Synonym(keyword_i) = keyword_j, \quad where \ c'_{ij} = \max_{k \ne i} c'_{ik} \ and \ c'_{ij} \ge threshold.

Once we have constructed a thesaurus specific to the dataset, we modify the annotations of the test images by adding the synonym of each keyword. Examples from the generated thesaurus can be seen in Table 5.
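A sketch of the synonym selection and annotation expansion described above, assuming a term-by-term similarity matrix (the C' of [2]) and the keyword vocabulary are already available; the helper names, and the exclusion of the word itself from the maximum, are our reading of the rule.

    import numpy as np

    def build_thesaurus(Cp, vocab, threshold=0.05):
        """Cp: term-by-term similarity matrix aligned with vocab (a list of keywords)."""
        Cp = np.asarray(Cp, dtype=float)
        thesaurus = {}
        for i, word in enumerate(vocab):
            sims = Cp[i].copy()
            sims[i] = -np.inf                     # do not pick the word itself
            j = int(np.argmax(sims))
            if sims[j] >= threshold:              # accept only sufficiently similar terms
                thesaurus[word] = vocab[j]
        return thesaurus

    def expand(annotation, thesaurus):
        """Add the synonym of each keyword to an image's annotation."""
        return sorted(set(annotation) | {thesaurus[w] for w in annotation if w in thesaurus})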

4.3. Experimental Results

As in previous studies on automatic image annotation, the quality of the automatic annotation is measured by the performance of retrieving auto-annotated images with single-word queries. For each single-word query, precision and recall are computed using the retrieval results and the original test-image annotations of the dataset, modified as described in the previous section. Accuracy of the image auto-annotations also implies accuracy of image retrieval, because the annotations are obtained from the retrieval results. We have named our methodology TSIS, which stands for "text space to image space" conversion. We performed queries on all of the images with the TSIS-5, TSIS-7 and TSIS-10 methods individually; the results are presented in Table 6.

Table 5: Synonym word examples from the thesaurus generated with C3M.

word      synonym     word           synonym
ladder    buildings   vendor         people
girl      people      white-tailed   deer
crowd     people      shirt          people
African   people      polar          bear
nest      birds       arctic         fox
woman     people      Fawn           deer
Jet       plane       straightaway   cars
runway    plane       f-16           jet
Boeing    plane       grizzly        bear
perch     birds       ocean          coral
hawk      branch      writing        sign

As the results of our experiments show, the TSIS-10 method (color features with the 10 most frequent keywords) obtains the best results among the three when the thesaurus is used. However, we use TSIS-5 to compare our results with other studies, because they use 5 keywords to annotate the query image. A few of our auto-annotation results are shown in Figure 7.

Table 6: Results of TSIS-5, TSIS-7 and TSIS-10.

                        TSIS-5   TSIS-7   TSIS-10
#words with recall>0    76       83       94
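A sketch of the per-word evaluation used here: for every single-word query, the auto-annotated test images are treated as the retrieved set and compared against the (thesaurus-expanded) ground-truth annotations. The mean precision, mean recall and number of words with positive recall are the quantities reported in Tables 6 and 7; the function and argument names are ours.

    def per_word_scores(predicted, truth, vocabulary):
        """predicted / truth: dict image_id -> set of keywords."""
        precisions, recalls, with_recall = [], [], 0
        for w in vocabulary:
            retrieved = {i for i, kw in predicted.items() if w in kw}
            relevant = {i for i, kw in truth.items() if w in kw}
            hits = len(retrieved & relevant)
            precisions.append(hits / len(retrieved) if retrieved else 0.0)
            recalls.append(hits / len(relevant) if relevant else 0.0)
            if hits > 0:
                with_recall += 1
        n = len(vocabulary)
        return sum(precisions) / n, sum(recalls) / n, with_recall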

4.4. Model Comparison

We compare the annotation performance with similar models in the literature that used the same dataset as our study. We annotate each test image with 5 keywords (TSIS-5) using our methodology, as in the other studies. Table 6 shows the results obtained on the complete set of 260 words that appear in the test set; the recall and precision values were averaged over the set of test words, as suggested by [13, 14]. Table 7 presents results (borrowed from [13, 14]) obtained with various other methods under the same experimental setting. In particular, we consider the Co-occurrence Model [11], the Translation Model [3], Cross-Media Relevance Models (CMRM) [15], the Multiple-Bernoulli Relevance Model (MBRM) [14] and Mix-Hier [13]. MBRM and Mix-Hier perform better than the proposed method if we consider the number of words with positive recall. On the other hand, another important issue is the complexity of the annotation process: in our experiments over the set of 500 test images, the average annotation time was 14 seconds, whereas it is 268 seconds for Mix-Hier and 371 seconds.

5. Conclusion and Future Works

In this study, we presented a new solution to (1) semantically retrieve images using keywords and (2) auto-annotate images based on their similarity with existing annotated images. Our main hypothesis is that images that fall into the same text cluster can be described with the common visual features of those images. The system relies heavily on the overlap between the textually and visually similar parts of an image, and although this hypothesis may seem strong and may hold only for constrained image sets, we have shown that our proposal can be used for the auto-annotation of images and improves retrieval effectiveness. The system was trained with a testbed containing 4500 images from the Corel image database and tested with 500 images from outside the training database. The experiments demonstrated the good accuracy of the proposal and its high potential for use in the auto-annotation of images and for the improvement of content-based image retrieval. In this study we used only color features as the low-level descriptors, with constant parameters. We are working on improving the performance of our solution under different parameters; in future work we plan to experiment with different numbers of clusters and observe the results, as well as to consider the conditional probabilities of keyword occurrence. In the longer term, we expect this solution to lead us to new research directions, including the semantic web, semantic indexing, the automatic development of image ontologies, and an extension to video.

Acknowledgment

We would like to thank M. J. L. de Hoon et al. for allowing us to use their clustering package, and Kobus Barnard and Pinar Duygulu for making their dataset available.

Table 7. Performance comparison on the task of automatic image annotation on the Corel dataset (single-word query results on all 260 words, as in [13, 14, 15]).

Model                   Co-occurrence   Translation   CMRM   MBRM   Mix-Hier   TSIS
#words with recall>0    19              49            66     122    137        76
Mean recall             0.02            0.04          0.09   0.25   0.29       0.09
Mean precision          0.03            0.06          0.10   0.24   0.23       0.10

Image id:113067 Corel: foals, grass, horses, mare Combine annotation: field, cat, foals, horses, mare, tiger, grass

Image id: 22013 Corel annotations: bridge, water, wood Combine annotation: cars, water, boats, tracks, coast, buildings, sky

Image id:152059 Corel annotations: close-up, leaf, plants Combine annotation: birds, leaf, plants, flowers, nest, garden, tree

Image id:122098 Corel annotations: mountain, rocks Combine annotation: stone, people, pillar, pyramid, clouds, sculpture, ruins

Image id:153056 Corel annotations: people, pool, swimmers, water Combine annotation: people, coral, swimmers, ocean, pool, water, reefs

Image id:142057 Corel annotations: close-up, flowers, mountain, valley Combine annotation: cars, field, tracks, foals, horses, mare, turn

Figure 7. Auto-annotation of images 113067, 152059, 153056, 22013, 122098 and 142057 with TSIS-7

6. References

1. Brown, P., Pietra, S., Pietra, V., Mercer, R.: "The Mathematics of Machine Translation: Parameter Estimation", Computational Linguistics, 19, pp. 263-312, 1993.
2. Can, F., Ozkarahan, E.A.: "Concepts and Effectiveness of the Cover Coefficient Based Clustering Methodology for Text Databases", ACM Transactions on Database Systems, Vol. 15, No. 4, 1990.
3. Duygulu, P., Barnard, K., Freitas, J.F.G., Forsyth, D.A.: "Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary", European Conference on Computer Vision (ECCV 2002), 2002.
4. Wang, J.Z., Li, J., Wiederhold, G.: "SIMPLIcity: Semantics-Sensitive Integrated Matching for Picture Libraries", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 9, 2001.
5. Hofmann, T.: "Unsupervised Learning by Probabilistic Latent Semantic Analysis", Machine Learning, 2001.
6. Zhao, R., Grosky, W.I.: "Bridging the semantic gap in image retrieval", Distributed Multimedia Databases: Techniques & Applications, 2002.
7. Monay, F., Gatica-Perez, D.: "On Image Auto-Annotation with Latent Space Model", ACM MM'03, 2003.
8. de Hoon, M.J.L., Imoto, S., Nolan, J., Miyano, S.: Open source clustering software, http://bonsai.ims.utokyo.ac.jp/~mdehoon/software/cluster/
9. Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., Jain, R.: "Content-Based Image Retrieval at the End of the Early Years", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, 2000.
10. Town, C.P., Sinclair, D.: "Content based Image Retrieval using Semantic Visual Category", Society for Manufacturing Engineers, Technical Report MV01-211, 2001.
11. Mori, Y., Takahashi, H., Oka, R.: "Image-to-word transformation based on dividing and vector quantizing images with words", First International Workshop on Multimedia Intelligent Storage and Retrieval Management, 1999.
12. Ozkarahan, E.: Database Machines and Database Management, Prentice Hall, 1986.
13. Carneiro, G., Vasconcelos, N.: "Formulating Semantic Image Annotation as a Supervised Learning Problem", IEEE CVPR, 2005.
14. Feng, S.L., Manmatha, R., Lavrenko, V.: "Multiple Bernoulli relevance models for image and video annotation", IEEE CVPR, 2004.
15. Jeon, J., Lavrenko, V., Manmatha, R.: "Automatic Image Annotation and Retrieval using Cross-Media Relevance Models", ACM SIGIR, 2003.
