One-dimensional Grey-level Co-occurrence Matrices for Texture Classification

Jing Yi Tou 1, Yong Haur Tay 1, Phooi Yee Lau 2
1 Computer Vision and Intelligent Systems (CVIS) Group, Universiti Tunku Abdul Rahman (UTAR), Malaysia
2 Instituto de Telecomunicacoes, Portugal
[email protected]

Abstract

Grey-level Co-occurrence Matrices (GLCM) have been widely used in texture analysis implementations and have provided satisfying results. The conventional GLCM is two-dimensional, as it records the co-occurrence of specific pixel pairs. The one-dimensional GLCM reduces the matrix to a single dimension by focusing only on the difference in grey level between the pixels of each pair. Experiment results on 32 Brodatz textures show that, under the same settings, the one-dimensional GLCM achieved a recognition rate of 83.01% while the conventional GLCM achieved 81.35%. The results show that the one-dimensional GLCM can perform as well as the conventional GLCM, with fewer computations involved.

1. Introduction

Texture classification is not a recent topic, as it has been studied by many researchers over a long period of time. The topic remains important because it is useful not only for classifying or differentiating textures, but also in many other pattern recognition problems where the patterns to be classified can be viewed as textures, such as wood recognition [1][2], rock classification [3], face recognition, text detection and face detection. In some object recognition and classification problems, we can again view the objects as specific textures [4], such as classifying wood species through their cross-section surfaces, which uniquely regards each wood species as a different texture. However, if different textures in the same problem share many similar properties, as in the case of wood classification, classification becomes difficult because two classes may present very similar textures; a precise classifier therefore needs to be used.

978-1-4244-2328-6/08/$25.00 © 2008 IEEE

The Grey-level Co-occurrence Matrices (GLCM) technique has been popular for texture classification [5] ever since its introduction in 1973 by Haralick et al. [6]. However, the GLCM method involves extensive calculations. When all 256 grey levels are used to generate the GLCMs, each GLCM generated will be 256 × 256 in size. Textural features are extracted from the GLCM through calculations involving every element of the matrix, so the larger the matrix, the more calculations are performed. In our previous work, we implemented the GLCM method on five different wood species from the CAIRO dataset of the Centre for Artificial Intelligence and Robotics (CAIRO), UTM, and achieved 72% on 25 training samples and 25 testing samples. This shows that the GLCM is useful for the classification of wood species and can be used for wood species recognition [2], but further improvement is needed. The main objective of this paper is therefore to reduce the computations of the GLCM method by reducing the matrix from two dimensions to one dimension; this also improves the recognition performance of the GLCM. Section 2 introduces the reduction of the GLCM size and dimension. Section 3 elaborates on the experiment datasets and the tools used. Section 4 presents the results of the experiments and the findings drawn from them. Section 5 concludes the paper.
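As a rough back-of-the-envelope check of the saving (a sketch; it assumes the one-dimensional variant introduced in Section 2, which keeps one bin per possible grey-level difference):

```python
# Elements visited when extracting features from one matrix:
# a conventional GLCM has G x G cells, the one-dimensional
# variant has 2G - 1 bins (one per signed grey-level difference).
G = 256
conventional = G * G      # 65536 elements
one_dim = 2 * G - 1       # 511 bins
print(conventional, one_dim, conventional // one_dim)  # roughly 128x fewer
```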

2. One-dimensional GLCM

2.1. Grey-level Selection

When all 256 grey levels are used in generating a GLCM, the GLCM will have a size of 256 × 256, covering every possible pixel pair. This is a disadvantage, as generating such a matrix consumes computational time. Furthermore, calculating textural features from the GLCMs involves every element of the matrices, so many computational steps are spent on these calculations. Selecting a lower grey level reduces the size of the GLCM, and the computational time, accordingly. For example, if we select a grey level of 32, the GLCM generated will only be 32 × 32, which is much smaller than a 256 × 256 GLCM.

Besides reducing the matrix size, in a real-world situation the same object can have different grey-scale pixel values in the same area due to the orientation and lighting conditions during image acquisition. The same object looks darker when the light is dimmer, and the computer will perceive the two objects as distinct because of the differences in grey value. Histogram equalization can often be used to normalize the grey levels, but it may not produce exactly the same grey value for the same pixel across two different acquisitions; the values will only be similar. To make the problem more illumination invariant, we can group several similar grey values into one during the calculations. If we choose a grey level of less than 256, several grey values are grouped together and treated as one. For example, when the grey level is reduced from 256 to 32, every 8 consecutive grey values are regarded as a single grey value.
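The grouping described above amounts to a simple quantisation step. A minimal sketch, assuming 8-bit input images (the function name and the integer-scaling rule are ours; the paper does not specify how the grouping is implemented):

```python
import numpy as np

def quantize(image, levels):
    """Map 8-bit grey values (0-255) into `levels` groups, so that e.g.
    levels=32 treats every 8 consecutive grey values as one grey value."""
    image = np.asarray(image, dtype=np.uint8)
    # widen before multiplying to avoid uint8 overflow
    return (image.astype(np.uint16) * levels // 256).astype(np.uint8)

# grey values 0 and 7 fall into group 0, 8 into group 1, 255 into group 31
print(quantize(np.array([[0, 7, 8, 255]], dtype=np.uint8), 32))
```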

2.2. Matrix Dimension Reduction

To reduce computations, the GLCM can be reduced from two dimensions to one dimension by combining certain values of the matrix. By focusing only on the differences in grey level, we obtain a one-dimensional GLCM with a significantly smaller size of only 2G − 1, where G represents the grey level, compared to G × G for a conventional two-dimensional GLCM. With fewer values involved in the calculations, the features can be computed faster.

In the conventional GLCM, C_d(m,n) represents the number of pixel pairs, where d represents the spatial distance, m represents the reference pixel value and n represents the neighbouring pixel value according to the spatial distance and direction defined. The joint probability density function normalizes the GLCM by dividing each count by the total number of pixel pairs used, and is represented by p(m,n) as shown in Eq. 2.1 [7]. The one-dimensional GLCM is similar, but focuses only on the difference in grey value between the two pixels of each pair; x denotes this difference, as shown in Eq. 2.2.

p(m,n) = \frac{C_d(m,n)}{\text{total number of pixel pairs}}    (2.1)

p(x) = \frac{C_d(x)}{\text{total number of pixel pairs}}    (2.2)

The feature formulas must be modified to suit the one-dimensional GLCM, because the original feature extraction functions involve two-dimensional data from the GLCM, as shown in Eq. 2.3 to Eq. 2.6 [7]. The correlation feature of the conventional GLCM is omitted: it involves calculations on specific pixel pairs, but the one-dimensional GLCM merges pixel pairs with the same grey difference into one bin and therefore loses the information on specific pixel pairs.

Energy: \sum_{m=0}^{G-1} \sum_{n=0}^{G-1} p(m,n)^2    (2.3)

Entropy: -\sum_{m=0}^{G-1} \sum_{n=0}^{G-1} p(m,n) \log p(m,n)    (2.4)

Contrast: \frac{1}{(G-1)^2} \sum_{m=0}^{G-1} \sum_{n=0}^{G-1} (m-n)^2 \, p(m,n)    (2.5)

Homogeneity: \sum_{m=0}^{G-1} \sum_{n=0}^{G-1} \frac{p(m,n)}{1+|m-n|}    (2.6)

In the modified textural features, the summation over every value of the GLCM runs over only one dimension, and the joint probability density p(m,n) is replaced by p(x). The grey-value difference (m − n) of the conventional GLCM is represented by x in the one-dimensional GLCM. After the modification, the values of contrast and homogeneity remain identical to those of the conventional GLCM, but the values of energy and entropy differ. The modified features are shown in Eq. 2.7 to Eq. 2.10.

Energy: \sum_{x=-(G-1)}^{G-1} p(x)^2    (2.7)

Entropy: -\sum_{x=-(G-1)}^{G-1} p(x) \log p(x)    (2.8)

Contrast: \frac{1}{(G-1)^2} \sum_{x=-(G-1)}^{G-1} x^2 \, p(x)    (2.9)

Homogeneity: \sum_{x=-(G-1)}^{G-1} \frac{p(x)}{1+|x|}    (2.10)
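The two matrices and the modified features can be sketched as follows. This is a sketch, not the authors' code: the slicing-based pair extraction and the function names are ours, with (dx, dy) giving the neighbour offset in image coordinates (rows grow downwards):

```python
import numpy as np

def pixel_pairs(img, dx, dy):
    """Reference/neighbour arrays for offset (dx, dy): nbr[i, j] is the
    pixel displaced by (dy, dx) from ref[i, j]."""
    h, w = img.shape
    ref = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    nbr = img[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    return ref.astype(int), nbr.astype(int)

def glcm_2d(img, dx, dy, G):
    """Conventional GLCM: joint probability p(m, n) over G x G cells (Eq. 2.1)."""
    ref, nbr = pixel_pairs(img, dx, dy)
    C = np.zeros((G, G))
    np.add.at(C, (ref.ravel(), nbr.ravel()), 1)   # count co-occurrences
    return C / C.sum()

def glcm_1d(img, dx, dy, G):
    """One-dimensional GLCM: p(x) over the 2G - 1 possible signed grey
    differences x = m - n (Eq. 2.2)."""
    ref, nbr = pixel_pairs(img, dx, dy)
    C = np.bincount((ref - nbr).ravel() + G - 1, minlength=2 * G - 1)
    return C / C.sum()

def features_1d(p, G):
    """Modified features of Eq. 2.7-2.10: energy, entropy, contrast, homogeneity."""
    x = np.arange(-(G - 1), G)
    nz = p > 0                       # skip empty bins so log is defined
    return (np.sum(p ** 2),
            -np.sum(p[nz] * np.log(p[nz])),
            np.sum(x ** 2 * p) / (G - 1) ** 2,
            np.sum(p / (1 + np.abs(x))))
```

For direction 0 degrees at distance d, (dx, dy) = (d, 0); 45 degrees maps to (d, -d), 90 degrees to (0, -d) and 135 degrees to (-d, -d), so the four one-dimensional GLCMs used later in the experiments come from four calls to glcm_1d.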

2.3. Parameter Selection

The selection of parameters for the one-dimensional GLCMs is important, as each implementation requires different parameters. As discussed in Section 2.1, the grey level affects the results of the GLCMs, and the best grey level differs for each application; experiments are therefore required to find a suitable grey level for a given problem. The spatial distance and direction are also important. Usually all four directions are considered once a spatial distance is selected, producing four different GLCMs. The selection of the spatial distance depends on the input images; there is no generic spatial distance that suits every case, as the important co-occurring pairs may occur at different spatial distances for different types of textures. The spatial distance is usually kept small, because the larger it is, the weaker the relationship between the pixels of a pair.

3. Experiments

The experiments in this paper are conducted on two different datasets: the Brodatz texture dataset [8] with 32 textures and the CAIRO wood dataset with 5 species. In the experiments, 16 features are extracted from each input sample using four one-dimensional


GLCMs, with a spatial distance of 1 pixel and four directions: 0, 45, 90 and 135 degrees. The textural features extracted from each one-dimensional GLCM are contrast, energy, entropy and homogeneity. The extracted features are fed into a classifier; the classifiers used in the experiments are k-Nearest Neighbour (k-NN) and the multi-layer perceptron (MLP).

For the Brodatz experiments, we use a subset of 32 grey-scale textures of size 256 × 256 from the full Brodatz texture dataset of 112 textures, the same subset used in [9], as shown in Figure 1. Only part of the entire dataset is used because of the limited number of training samples compared to the large number of classes [10]. Each of the 32 textures is segmented into 16 disjoint images of size 64 × 64. Three further images are derived from each of these originals: the first is rotated by 90 degrees, the second is a 64 × 64 image scaled up from a 45 × 45 sub-image extracted from the middle of the original sample, and the last is both rotated and scaled. Eight sub-images from each texture, together with their respective variations, are randomly selected for training while the other eight are used for testing [9]. This provides a total of 1024 training samples and 1024 testing samples for the whole dataset.
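The patch extraction and the derived variants can be sketched as below. This is a sketch: the paper does not state the interpolation used for scaling, so nearest-neighbour is assumed, and the function name is ours:

```python
import numpy as np

def prepare_samples(texture):
    """Split a 256 x 256 texture into 16 disjoint 64 x 64 patches and add,
    for each patch, a 90-degree rotation, a 64 x 64 image scaled up from
    the central 45 x 45 sub-image, and a rotated-and-scaled version."""
    patches = [texture[r:r + 64, c:c + 64]
               for r in range(0, 256, 64) for c in range(0, 256, 64)]

    def scaled(p):
        crop = p[9:54, 9:54]                 # central 45 x 45 window
        idx = np.arange(64) * 45 // 64       # nearest-neighbour index map
        return crop[np.ix_(idx, idx)]        # back up to 64 x 64

    samples = []
    for p in patches:
        samples += [p, np.rot90(p), scaled(p), np.rot90(scaled(p))]
    return samples  # 64 samples per texture: 16 originals + 48 variants
```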

Figure 1. Samples of the Brodatz texture dataset.

The CAIRO wood dataset is used to test a real application treated as a texture classification problem. Five species are used from a dataset of wood cross-section surfaces obtained from the Centre for Artificial Intelligence and Robotics (CAIRO), UTM, Malaysia, as shown in Figure 2. Each species has fifty images for training and fifty for testing, giving a total of 250 training samples and 250 testing samples. The five selected wood species are:
1. Keledang (Artocarpus kemando)
2. Nyatoh (Palaquium impressinervium)
3. Punah (Tetramerista glabra)
4. Ramin (Gonystylus bancanus)
5. Melunak (Pentace triptera)

Figure 2. Samples of the CAIRO wood dataset.

4. Results and Discussion

4.1. Experiment 1: Brodatz Texture Dataset

In this experiment, 16 features are extracted from the four one-dimensional GLCMs for all four directions at a given spatial distance and grey level. The k-NN is used as the classifier, with k from one to ten. The first experiment tests 6 different grey levels, namely 8, 16, 32, 64, 128 and 256, to find the best grey level for the problem. The spatial distance used to generate the one-dimensional GLCMs is one pixel. Table 1 shows the results, where the columns give the grey level and the rows give the value of k used in the k-NN.

Table 1. Experiment results for grey levels of 8, 16, 32, 64, 128 and 256 with a spatial distance of one pixel.

k  |   8   |  16   |  32   |  64   |  128  |  256
1  | 79.30 | 80.86 | 81.54 | 80.66 | 78.81 | 83.01
2  | 79.30 | 80.86 | 81.54 | 80.66 | 78.81 | 83.01
3  | 76.66 | 77.54 | 79.30 | 77.15 | 77.93 | 81.35
4  | 76.37 | 77.73 | 79.39 | 77.64 | 78.32 | 81.35
5  | 71.48 | 76.07 | 77.34 | 75.59 | 78.52 | 79.39
6  | 70.12 | 73.83 | 77.05 | 75.49 | 77.34 | 79.00
7  | 67.38 | 71.68 | 74.12 | 71.48 | 77.73 | 77.54
8  | 65.72 | 71.09 | 73.34 | 72.46 | 76.27 | 78.42
9  | 63.96 | 69.43 | 71.00 | 70.21 | 75.59 | 76.66
10 | 63.38 | 67.87 | 72.27 | 68.95 | 75.49 | 74.71

The next experiment tests 5 different spatial distances for the same grey level. The spatial distances tested are 1, 2, 3, 4 and 5 pixels. Table 2 shows the results for a grey level of 256, where the columns give the spatial distance and the rows give the value of k used in the k-NN, from one to ten.

Table 2. Experiment results for spatial distances of 1, 2, 3, 4 and 5 pixels with a grey level of 256.

k  |   1   |   2   |   3   |   4   |   5
1  | 83.01 | 76.07 | 74.22 | 67.68 | 58.20
2  | 83.01 | 76.07 | 74.22 | 67.68 | 58.20
3  | 81.35 | 75.49 | 73.93 | 66.70 | 61.43
4  | 81.35 | 74.90 | 74.80 | 67.68 | 62.30
5  | 79.39 | 74.12 | 75.59 | 68.36 | 63.09
6  | 79.00 | 74.71 | 75.39 | 67.97 | 63.09
7  | 77.54 | 74.32 | 74.02 | 67.19 | 64.45
8  | 78.42 | 74.32 | 73.73 | 66.02 | 65.43
9  | 76.66 | 73.93 | 72.56 | 68.55 | 64.65
10 | 74.71 | 73.54 | 72.27 | 68.55 | 65.23

The last experiment compares the one-dimensional GLCM with the conventional GLCM at a grey level of 256. Both use four GLCMs generated with a spatial distance of one pixel for all four directions, and four features are extracted from each: contrast, energy, homogeneity and entropy. Table 3 shows the comparison, where the columns give the value of k used in the k-NN, from one to five.

Table 3. Comparison of results for the one-dimensional GLCM and the conventional GLCM with a grey level of 256.

k        |   1   |   2   |   3   |   4   |   5
1D GLCM  | 83.01 | 83.01 | 81.35 | 81.35 | 79.39
GLCM     | 80.96 | 80.96 | 80.57 | 81.35 | 80.57

4.2. Experiment 2: CAIRO Wood Dataset

In this experiment, the dataset is tested on both the conventional and the one-dimensional GLCM; the classifiers used are k-NN and MLP. The main objective of this experiment is to show that an application such as wood species recognition can be treated as a texture classification problem.

The experiments are conducted for both the conventional GLCM method and the one-dimensional GLCM method; both use four GLCMs with a spatial distance of one pixel, a grey level of 256 and all four directions. Five features are extracted from each conventional GLCM: contrast, energy, entropy, homogeneity and correlation. Four features are extracted from each one-dimensional GLCM: the same set except correlation. The MLP has 20 input neurons for the conventional GLCM and 16 input neurons for the one-dimensional GLCM, with 20 hidden neurons and 5 output neurons. The k-NN is used with values of k from one to ten. Table 4 shows the comparison of the experiment results for both GLCM methods and both classifiers.

Table 4. Comparison of experiment results for the conventional and one-dimensional GLCM.

                      |  MLP  | k-NN
GLCM (20 features)    | 56.80 | 58.40 (k = 2)
1D GLCM (16 features) | 72.80 | 63.60 (k = 4)
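The k-NN classifier used throughout can be sketched as a straightforward Euclidean nearest-neighbour vote. This is a sketch with toy data; in the experiments the input vectors are the GLCM features described above:

```python
import numpy as np

def knn_classify(train_X, train_y, x, k):
    """Assign x the majority label among its k nearest training
    feature vectors under Euclidean distance."""
    dists = np.linalg.norm(train_X - x, axis=1)   # distance to every sample
    nearest = np.argsort(dists)[:k]               # indices of k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority vote

# toy 2-D feature vectors for two classes
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_classify(X, y, np.array([0.05, 0.05]), 3))  # -> 0
```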

A further experiment increases the number of training samples to 90 per species while reducing the testing samples to 10 per species. Using the one-dimensional GLCM with 16 features, a recognition rate of 80% is achieved with a 3-NN classifier. The last experiment compares different grey levels and classifiers. The grey levels used are 8, 16, 32, 64, 128 and 256, while the classifiers are an MLP with 16 input neurons, 20 hidden neurons and 5 output neurons, and the k-NN. For the k-NN results, the value of k producing the best result for k from one to ten is shown in brackets after the recognition rate. Table 5 shows the results, where the columns give the grey level and the rows give the classifier.

Table 5. Experiment results for grey levels of 8, 16, 32, 64, 128 and 256 using the MLP and k-NN; the number in brackets for the k-NN is the value of k with the highest recognition rate for k from one to ten.

      |     8     |    16     |    32     |    64     |    128    |    256
MLP   |   56.00   |   70.00   |   77.00   |   70.00   |   75.00   |   72.00
k-NN  | 62.00 (8) | 57.20 (3) | 55.60 (3) | 57.20 (3) | 57.20 (3) | 63.20 (4)

4.3. Discussion

The experiment results above show that the grey level and the spatial distance affect the recognition rate. The best grey levels are 32 and 256 in Experiment 1, while in Experiment 2, 256 is the best grey level for the k-NN and 32 is the best for the MLP. This shows that the suitable grey level is not generic and depends on the implementation. However, the ideal grey level is usually less than 256 and more than 4. When the grey level is 256, all 256 grey values of the image are considered, yet the exact values may differ between acquisitions when the lighting conditions differ. When the grey level is as low as 2 or 4, 128 or 64 grey values respectively are merged into a single grey value; information about the pixel pairs is then lost, which is not useful for the implementation.

Experiment 1 also shows that the spatial distance affects the result: the recognition rate declines as the spatial distance increases. The spatial distance should not be too large, because the larger it is, the less related the pixels are; smaller spatial distances are therefore usually preferred. The comparison between the conventional and one-dimensional GLCM shows that the one-dimensional GLCM achieves a recognition rate of 83.01%, whereas the conventional GLCM achieves only 81.35%. This shows that the one-dimensional GLCM can achieve better results while, at the same time, reducing computations.

In Experiment 2, we used the wood dataset to implement the one-dimensional GLCM on a texture classification problem. The wood species present very similar textures to each other, making them more challenging to classify. The comparison between the conventional and one-dimensional GLCM shows that even though fewer features are used, the results can be better; this implies that correlation does not contribute significantly to recognition in this problem, and adding it as an input dimension may even hurt the results. On the other hand, only the contrast and homogeneity values are identical between the two methods; energy and entropy differ, which suggests that the values obtained with the one-dimensional GLCM perform better in this problem. However, the k-NN performs poorly in this experiment: the wood species have very similar features by nature, and since the k-NN is based on the Euclidean distance between two samples, this causes confusion. The MLP performs better, as it can learn to discriminate the wood species. When the experiment settings are changed to include more training samples, the k-NN performs well: the more training samples there are to compare against, the more accurate the results can be.

5. Conclusion

The GLCM has long been proven a useful technique for texture classification problems. In this paper, we have shown that the one-dimensional GLCM is also useful for texture classification, even though the GLCMs carry less information and fewer features can be obtained from the technique. The one-dimensional GLCM is more efficient to compute, which is especially valuable where resources and computational power are restricted, such as on an embedded device. This paper also shows that the one-dimensional GLCM can be applied to other texture-based applications, such as wood species recognition.

6. Acknowledgement The authors would like to thank Y. L. Lew and the Centre for Artificial Intelligence and Robotics (CAIRO) of Universiti Teknologi Malaysia (UTM) for sharing the wood images. This research is partly funded by Malaysian MOSTI ScienceFund 01-02-11-SF0019.

7. References

[1] Y.L. Yew, “Design of an intelligent wood recognition system for the classification of tropical wood species”, Universiti Teknologi Malaysia, 2005.
[2] J.Y. Tou, P.Y. Lau, and Y.H. Tay, “Computer vision-based wood recognition system”, Proc. Int'l Workshop on Advanced Image Technology (IWAIT), 2007.
[3] M. Partio, B. Cramariuc, M. Gabbouj, and A. Visa, “Rock texture retrieval using gray level co-occurrence matrix”, Proc. 5th Nordic Signal Processing Symposium, 2002.
[4] J.Y. Tou, Y.H. Tay, and P.Y. Lau, “Gabor filters and grey-level co-occurrence matrices in texture classification”, MMU International Symposium on Information and Communications Technologies (M2USIC), 2007.
[5] M. Tuceryan and A.K. Jain, “Texture analysis”, The Handbook of Pattern Recognition and Computer Vision, 2nd ed., World Scientific Publishing Co., 1998.


[6] R.M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification”, IEEE Transactions on Systems, Man, and Cybernetics, 1973, pp. 610-621.
[7] M. Petrou and P.G. Sevilla, “Image Processing: Dealing with Texture”, Wiley, 2006.
[8] P. Brodatz, “Textures: A Photographic Album for Artists and Designers”, Dover, New York, 1966.
[9] R.W. Picard, T. Kabir, and F. Liu, “Real-time recognition with the entire Brodatz texture database”, Proc. IEEE Conference on Computer Vision and Pattern Recognition, New York, 1993, pp. 638-639.
[10] T. Ojala, K. Valkealahti, and M. Pietikainen, “Texture discrimination with multidimensional distributions of signed gray level differences”, Pattern Recognition 34, 2001, pp. 727-739.
