Gabor Filters and Grey-level Co-occurrence Matrices in Texture Classification

Jing Yi Tou¹, Yong Haur Tay¹ and Phooi Yee Lau²

¹ Computer Vision & Intelligent Systems (CVIS) Group, Faculty of Information & Communication Technology, Universiti Tunku Abdul Rahman (UTAR), MALAYSIA.
² Instituto de Telecomunicações, PORTUGAL.
Email: [email protected]

Abstract

Texture classification has been studied and tested with many different methods because of its value in various pattern recognition problems, such as wood recognition and rock classification. Grey-level Co-occurrence Matrices (GLCM) and Gabor filters are both popular techniques for texture classification. This paper combines the two techniques in order to increase the accuracy. We used 32 textures from the Brodatz texture dataset with 1,024 training samples and 1,024 testing samples. GLCM achieved a recognition rate of 84.00% and Gabor filters achieved 79.58%, while the combination of GLCM and Gabor filters achieved 88.53%, better than either method alone. The experiments showed that the best result is achieved by a GLCM with a grey level of 16 and a spatial distance of one pixel, combined with Gabor features decomposed to six features.

Key words: Texture classification, Gabor filters, Grey-level Co-occurrence Matrices

1. Introduction

Texture classification is a frequently studied field because it applies to many problems, such as wood species identification [1], rock texture classification [2] and defect inspection. Since texture analysis techniques can be implemented in various machine learning problems, studying texture classification algorithms also lets us apply them to other tasks involving texture-like subjects, such as text detection and face detection.

The main objective of this paper is to examine the performance of GLCM and Gabor filters in texture classification, and to combine both techniques to enhance the accuracy. Section 2 introduces the techniques used in this paper. Section 3 describes the experimental dataset and the experiments conducted. Section 4 presents the experimental results and a discussion of their analysis. Section 5 concludes the paper and outlines future work.

2. GLCM and Gabor Filters

2.1. GLCM

The GLCM was proposed by Haralick et al. in 1973 [3] and has since been widely used in various texture analysis applications, such as texture classification [4], rock texture classification and wood classification. A GLCM is generated by accumulating the counts of grey-level pixel pairs in an image. Each GLCM is defined by a spatial distance d and an orientation, which can be 0, 45, 90 or 135 degrees, at a selected grey level G; the resulting GLCM is of size G × G. Once the GLCM is constructed, Cd(r,n) holds the number of pixel pairs in which r is the reference pixel value and n is the neighbouring pixel value under the chosen spatial distance and orientation. The joint probability density function p(r,n) normalizes the GLCM by dividing each pixel-pair count by the total number of pixel pairs, as shown in Eq. (1) [5].

p(r,n) = Cd(r,n) / ∑_{r=0}^{G−1} ∑_{n=0}^{G−1} Cd(r,n)    (1)

Textural features are extracted from the GLCMs for the classification process. There are a total of fourteen features for the GLCM [1]; the most commonly used ones are shown in Eq. (2) to Eq. (5) [5].

Energy:      ∑_{r=0}^{G−1} ∑_{n=0}^{G−1} p(r,n)²    (2)

Entropy:     −∑_{r=0}^{G−1} ∑_{n=0}^{G−1} p(r,n) log p(r,n)    (3)

Contrast:    (1/(G−1)²) ∑_{r=0}^{G−1} ∑_{n=0}^{G−1} (r − n)² p(r,n)    (4)

Homogeneity: ∑_{r=0}^{G−1} ∑_{n=0}^{G−1} p(r,n) / (1 + |r − n|)    (5)

2.2. Gabor Filters

The Gabor filters, also known as Gabor wavelets, are inspired by the mammalian simple cortical cells [6]. The Gabor filter is given by Eq. (6), where x and y are the pixel position in the spatial domain, w0 is the radial centre frequency, θ is the orientation of the Gabor filter, and σ is the standard deviation of the Gaussian function along the x- and y-axes, with σx = σy = σ [6].

Ψ(x,y,w0,θ) = (1/(2πσ²)) e^{−((x cos θ + y sin θ)² + (−x sin θ + y cos θ)²)/(2σ²)} × [e^{i(w0 x cos θ + w0 y sin θ)} − e^{−w0²σ²/2}]    (6)

The Gabor filter can be decomposed into two equations, one representing the real part and the other the imaginary part, as shown in Eq. (7) and Eq. (8) respectively [6].

Ψr(x,y,w0,θ) = (1/(2πσ²)) exp{−(x′² + y′²)/(2σ²)} × [cos(w0 x′) − e^{−w0²σ²/2}]    (7)

Ψi(x,y,w0,θ) = (1/(2πσ²)) exp{−(x′² + y′²)/(2σ²)} × sin(w0 x′)    (8)

where

x′ = x cos θ + y sin θ
y′ = −x sin θ + y cos θ

In this paper, we used σ = π / w0. Gabor features are derived from the convolution of the Gabor filter Ψ with the image I, as shown in Eq. (9) [6].

CΨI = I(x,y) * Ψ(x,y,w0,θ)    (9)

The term Ψ(x,y,w0,θ) in Eq. (9) can be replaced by Eq. (7) or Eq. (8) to obtain the real and imaginary parts of the response, denoted CΨIr and CΨIi respectively. The real and imaginary parts are combined to compute the local properties of the image using Eq. (10) [6].

CΨI(x,y,w0,θ) = √(‖CΨIr‖² + ‖CΨIi‖²)    (10)
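To make the GLCM computation concrete, a minimal NumPy sketch of Eqs. (1)-(5) follows. This is our own illustration, not the authors' code: the function name, the grey-level quantization step and the small epsilon guarding the logarithm are our choices.

```python
import numpy as np

def glcm_features(img, d=1, angle=0, G=16):
    """Build a normalized GLCM (Eq. 1) from an 8-bit image and return
    the four textural features of Eqs. (2)-(5)."""
    # Quantize the 8-bit image down to G grey levels.
    q = (img.astype(np.uint32) * G // 256).astype(np.int64)
    # Pixel offset (row, col) for the chosen distance and orientation.
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dr, dc = offsets[angle]
    h, w = q.shape
    # Reference and neighbour pixel values for every valid pair.
    r0, r1 = max(0, -dr), min(h, h - dr)
    c0, c1 = max(0, -dc), min(w, w - dc)
    ref = q[r0:r1, c0:c1]
    nbr = q[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
    # Accumulate pair counts into a G x G matrix, then normalize (Eq. 1).
    C = np.bincount((ref * G + nbr).ravel(), minlength=G * G).reshape(G, G)
    p = C / C.sum()
    r, n = np.indices((G, G))
    energy = np.sum(p ** 2)                             # Eq. (2)
    entropy = -np.sum(p * np.log(p + 1e-12))            # Eq. (3)
    contrast = np.sum((r - n) ** 2 * p) / (G - 1) ** 2  # Eq. (4)
    homogeneity = np.sum(p / (1 + np.abs(r - n)))       # Eq. (5)
    return np.array([contrast, homogeneity, energy, entropy])
```

For a perfectly uniform image the normalized GLCM has a single non-zero entry, so energy and homogeneity approach 1 while contrast and entropy approach 0, which is a quick sanity check on the implementation.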

The convolution can be performed efficiently by applying the Fast Fourier Transform (FFT), point-to-point multiplication, and the Inverse Fast Fourier Transform (IFFT). It is performed at three radial centre frequencies (scales) wn and eight orientations θm, defined in Eq. (11), where n ∈ {0, 1, 2} and m ∈ {0, 1, 2, …, 7} [6]. This reduces the computation compared with the conventional method of sliding a smaller subwindow over the whole image.

wn = π / (2 · 2^{n/2}),    θm = (π/8) m    (11)
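The filter bank and the FFT-based convolution can be sketched as follows. This is our own illustrative NumPy code, not the authors' implementation: it builds the complex kernel of Eq. (6) with σ = π/w0, sweeps the scales and orientations of Eq. (11), and takes magnitudes as in Eq. (10); the 64 × 64 filter size matches the one used in the experiments.

```python
import numpy as np

def gabor_kernel(size, w0, theta):
    """Complex Gabor filter of Eq. (6) with sigma = pi / w0."""
    sigma = np.pi / w0
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half]
    xp = x * np.cos(theta) + y * np.sin(theta)    # x' of Eq. (7)/(8)
    yp = -x * np.sin(theta) + y * np.cos(theta)   # y'
    gauss = np.exp(-(xp ** 2 + yp ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    # Complex carrier minus the DC-compensation term of Eq. (6).
    carrier = np.exp(1j * w0 * xp) - np.exp(-(w0 ** 2) * (sigma ** 2) / 2)
    return gauss * carrier

def gabor_magnitudes(img, size=64):
    """Convolve via FFT at the 3 scales and 8 orientations of Eq. (11)
    and return the 24 magnitude responses of Eq. (10)."""
    F = np.fft.fft2(img, s=(size, size))
    out = []
    for n in range(3):                       # w_n = pi / (2 * 2^(n/2))
        w0 = np.pi / (2 * 2 ** (n / 2))
        for m in range(8):                   # theta_m = (pi / 8) * m
            theta = np.pi / 8 * m
            K = np.fft.fft2(gabor_kernel(size, w0, theta), s=(size, size))
            resp = np.fft.ifft2(F * K)       # point-to-point multiplication
            out.append(np.abs(resp))         # sqrt(re^2 + im^2), Eq. (10)
    return out
```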

2.2.1. Reducing Dimensionality. The Gabor features lie in a high-dimensional space, and a high dimensionality hampers the learning of the classifier. Down-sampling can be performed by omitting values from the Gabor features with a factor of ρ. The down-sampled responses are then concatenated to form a feature vector as shown in Eq. (12) [6].

C(ρ) = (CΨI(ρ)(x,y,w0_1,θ_1), CΨI(ρ)(x,y,w0_1,θ_2), …, CΨI(ρ)(x,y,w0_1,θ_m), …, CΨI(ρ)(x,y,w0_n,θ_m))^T    (12)
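The down-sampling step can be sketched as below. This is an assumption-laden illustration: we read "omitting values with a factor of ρ" as keeping every ρ-th pixel in each direction, which with 24 responses of size 64 × 64 and ρ = 4 reproduces the 6,144 features reported in Section 4.2.

```python
import numpy as np

def gabor_feature_vector(responses, rho=4):
    """Down-sample each magnitude response by keeping every rho-th
    pixel in both directions, then concatenate into one column
    vector as in Eq. (12)."""
    parts = [r[::rho, ::rho].ravel() for r in responses]
    return np.concatenate(parts)
```

With 24 responses of 64 × 64 pixels and ρ = 4, each response contributes 16 × 16 = 256 values, giving a vector of 24 × 256 = 6,144 features.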

A Principal Component Analysis (PCA) can be performed to further reduce the Gabor feature size. Singular Value Decomposition (SVD) offers a fast way to decompose the feature matrix. The SVD decomposes an i × j matrix C into three matrices as shown in Eq. (13),

C = u × λ × vT

(13)

where p is the minimum of i and j, u is a matrix of dimension i × p, λ is a diagonal matrix of dimension p × p, and v is a matrix of dimension j × p [7]. A feature size s is selected, and the matrix u is reduced from i × p to i × s by discarding the columns beyond s, giving Φ. The final Gabor feature matrix Γ after the decomposition is shown in Eq. (14) [6].

Γ(ρ) = Φ^T (C(ρ) − Ψ)

(14)
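A sketch of the SVD-based reduction of Eqs. (13)-(14) follows. It rests on two assumptions of ours: that the columns of C are the training feature vectors, and that the term Ψ subtracted in Eq. (14) is the training mean (the usual PCA centring step); the function names are our own.

```python
import numpy as np

def pca_by_svd(train_feats, s):
    """train_feats: (n_features, n_samples) matrix whose columns are
    Gabor feature vectors.  Returns the mean vector and the basis Phi
    made of the first s left singular vectors of Eq. (13)."""
    mean = train_feats.mean(axis=1, keepdims=True)   # assumed Psi of Eq. (14)
    u, lam, vt = np.linalg.svd(train_feats - mean, full_matrices=False)
    phi = u[:, :s]                # keep only the first s columns of u
    return mean, phi

def project(feat, mean, phi):
    """Eq. (14): Gamma = Phi^T (C - mean)."""
    return phi.T @ (feat - mean)
```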

2.3. k-Nearest Neighbor (k-NN)

The k-NN is used as the classifier for the textures. It compares each testing sample with all the training samples and chooses the k training samples with the smallest Euclidean distance; the class holding the majority among these k samples wins. Because the method compares against every training sample, it is slower for larger training sets, but too small a training set degrades the results. In this paper, each experiment setting is run with k ∈ {1, 2, 3, …, 10}.
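A minimal sketch of the k-NN rule described above (our own illustration; in the experiments this would be run for each k ∈ {1, …, 10}):

```python
import numpy as np

def knn_classify(train_X, train_y, test_x, k=3):
    """Classify one sample by majority vote among the k training
    samples with the smallest Euclidean distance."""
    d = np.linalg.norm(train_X - test_x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    # Winning class = the most frequent label among the k neighbours.
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```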

3. Experiments

The 32 textures were selected from the Brodatz texture dataset [8], as used in [10] and [11]. Only part of the dataset was used because classification over the entire dataset is considerably harder [9]. Each texture was divided into 16 sub-images of size 64 × 64, and each sub-image has four variations: the original image, a rotated image, a scaled image, and an image both rotated and scaled. For each texture, eight of the sixteen sets were randomly picked for training and the remaining eight for testing [10][11]. For the GLCM method, four GLCMs were generated for the four orientations, and the four features (contrast, homogeneity, energy and entropy) were extracted from each of them. For the Gabor filters, a 64 × 64 filter was used and the responses were down-sampled with ρ = 4. Finally, the GLCM features were combined with the Gabor features. All experiments were run on ten different training/testing splits, each obtained by randomly selecting eight sub-images (and their variants) per class for training and keeping the rest for testing, as described above. The results reported in the next section are the average recognition rates over the ten splits.

4. Results and Discussion

4.1. GLCM

The GLCM method was run with different settings of grey level and spatial distance. The grey levels used were 8, 16, 32, 64, 128 and 256, and the spatial distances one to five pixels. The recognition results are shown in Table 1, where the columns give the spatial distance d and the rows the grey level G. Each entry is the best result obtained over the values of k of the k-NN.

Table 1. Results of GLCM (%)

  G \ d     1      2      3      4      5
    8     61.48  62.13  54.97  51.32  46.31
   16     74.57  65.60  56.87  49.23  46.25
   32     78.07  72.70  65.68  59.04  52.24
   64     83.40  84.00  79.40  71.00  65.10
  128     81.70  83.20  78.50  70.06  64.28
  256     80.04  83.21  77.15  69.20  61.83

The GLCM results show that both the spatial distance and the grey level affect the results; a grey level of 64 gives the best result. The most suitable spatial distance may differ across datasets and applications, depending on which distance reveals significant pixel pairs; here, distances of one or two pixels are the most effective. A grey level of 64 produces better results than 128 or 256 because degrading the original 256-level grey-scale image merges several similar grey values into one. The best result achieved by the GLCM method is 84.00%.

4.2. Gabor Filters

The Gabor filters produced 6,144 features after down-sampling. The Gabor features were further decomposed using PCA; different feature sizes s were tested, with the results shown in Table 2.

Table 2. Results of Gabor filters (%)

   s    Recognition rate
  30        71.45
  20        75.07
  19        75.61
  18        76.05
  17        76.42
  16        76.44
  15        77.06
  14        77.40
  13        77.62
  12        78.01
  11        78.24
  10        79.09
   9        79.40
   8        79.50
   7        78.65
   6        79.58
   5        75.20

The results show that as the feature size decreases, the recognition rate increases until it reaches a feature size of six, which gives the best recognition rate. This suggests that the larger the dimensionality of the feature space, the harder it is to achieve a good classification result; however, the feature size cannot be too small either, or it carries too little information for classification. The results also indicate that the more significant values are concentrated at the front of the Gabor feature vector, since features at the back are discarded and yet the recognition rate improves. Reducing the feature space is therefore important: it cuts computation and improves the results. Nevertheless, the best Gabor recognition rate, 79.58%, is still poorer than that of the GLCM method.

4.3. GLCM and Gabor Filters

The 16 GLCM features were concatenated directly with the Gabor features into a single feature vector, without modifying the values of the original features, and classified by the k-NN. Table 3 shows the recognition rates for the combination, where the rows give the grey level G and the columns the feature size s.

Table 3. Results for GLCM + Gabor (%)

 G \ s   30     20     19     18     17     16     15     14     13     12     11     10      9      8      7      6      5
   8   77.88  81.86  82.11  82.47  83.04  83.33  83.69  84.20  84.65  85.00  85.35  85.68  85.98  86.04  86.29  85.70  86.51
  16   77.92  82.04  82.50  83.16  83.57  83.74  84.04  84.57  85.00  85.65  86.02  86.58  87.01  87.08  87.18  88.53  88.15
  32   77.12  81.04  81.73  82.13  82.66  82.73  83.47  84.09  84.42  84.81  85.06  85.61  86.39  86.11  86.05  87.40  86.30
  64   75.29  79.67  79.97  80.40  80.85  81.31  81.77  82.30  82.61  83.22  83.81  84.35  84.89  85.16  84.56  85.70  83.02
 128   75.81  79.92  80.21  80.62  81.13  81.47  81.89  82.35  82.86  83.40  83.75  84.31  84.65  84.94  84.21  85.28  82.58

Table 4 compares the combined GLCM and Gabor results for d = 1 and d = 2. Each entry is the best result obtained over the grey level G of the GLCM and the value of k of the k-NN; the rows give the method and the columns the feature size s.

Table 4. Comparison of GLCM + Gabor (%)

          s            30     20     19     18     17     16     15     14     13     12     11     10      9      8      7      6      5
 GLCM + Gabor (d = 1) 77.92  82.04  82.50  83.16  83.57  83.74  84.04  84.57  85.00  85.65  86.02  86.58  87.01  87.08  87.18  88.53  88.15
 GLCM + Gabor (d = 2) 74.71  78.19  78.48  79.00  79.20  79.78  80.21  80.55  81.05  81.68  81.88  82.78  83.52  83.48  82.97  84.61  83.11

The experimental results show that the GLCM method outperforms the Gabor filters alone, but the combination of the two is better than either method on its own. The combination works well because each method extracts useful but, by itself, insufficient features from the image; together they complement each other to achieve higher accuracy. The best result, a recognition rate of 88.53%, is obtained with a GLCM grey level of 16, a spatial distance of one pixel, and a Gabor feature size of six.

5. Conclusion

In this work we have shown that both GLCM and Gabor filters are useful for texture classification, and that combining the two techniques yields higher accuracy than either alone; both extract useful features from the images, and accumulating these features improves the accuracy. We also found that the appropriate Gabor feature size is relatively small, because a large feature dimensionality makes classification harder; yet the feature size cannot be too low, or it contains too little information for classification. The best feature size tested here for the Gabor filters is six. For the GLCM, the best spatial distances are usually one or two pixels, which best describe the relationship of neighbouring pixels, while the ideal grey level is usually lower than 256: a lower grey level merges several grey levels into one, reducing the influence of slightly different grey values within the same region caused by differences in orientation during acquisition. A lower grey level also produces a smaller GLCM, which speeds up computation. Our future work will deploy the application on an embedded platform, using a classifier that stores less training information.

6. Acknowledgement

We would like to thank Wooi Hen Yap for sharing and discussing the Gabor filters used in [6]. This research is partly funded by the Malaysian MOSTI ScienceFund 01-02-11-SF0019.

7. References

[1] Y. L. Lew, "Design of an Intelligent Wood Recognition System for the Classification of Tropical Wood Species", Master of Engineering (Electrical) thesis, Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Malaysia, 2005.
[2] M. Partio, B. Cramariuc, M. Gabbouj, and A. Visa, "Rock Texture Retrieval using Gray Level Co-occurrence Matrix", Proceedings of the 5th Nordic Signal Processing Symposium, 2002.
[3] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural Features for Image Classification", IEEE Transactions on Systems, Man, and Cybernetics, 1973, pp. 610-621.
[4] M. Tuceryan and A. K. Jain, "Texture Analysis", The Handbook of Pattern Recognition and Computer Vision, 2nd Ed., World Scientific Publishing Co., 1998.
[5] M. Petrou and P. G. Sevilla, "Image Processing: Dealing with Texture", Wiley, 2006.
[6] W. H. Yap, M. Khalid, and R. Yusof, "Face Verification with Gabor Representation and Support Vector Machines", IEEE Proc. of the First Asia International Conference on Modelling & Simulation, 2007.
[7] V. C. Klema and A. J. Laub, "The Singular Value Decomposition: Its Computation and Some Applications", IEEE Transactions on Automatic Control, 1980, pp. 164-176.
[8] P. Brodatz, "Textures: A Photographic Album for Artists and Designers", Dover, New York, 1966.
[9] R. W. Picard, T. Kabir, and F. Liu, "Real-time Recognition with the Entire Brodatz Texture Database", Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, New York, 1993, pp. 638-639.
[10] K. Valkealahti and E. Oja, "Reduced Multidimensional Co-occurrence Histograms in Texture Classification", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, pp. 90-94.
[11] T. Ojala, K. Valkealahti, and M. Pietikäinen, "Texture Discrimination with Multidimensional Distributions of Signed Gray Level Differences", Pattern Recognition 34, 2001, pp. 727-739.
