Fruit and vegetable recognition by fusing color and texture features of the image using machine learning

Shiv Ram Dubey*, Anand Singh Jalal
Department of Computer Engineering and Applications, GLA University, Mathura, Uttar Pradesh, India
E-mail: {shivram1987, anandsinghjalal}@gmail.com
*Corresponding Author

This work is published in International Journal of Applied Pattern Recognition, Inderscience.

Abstract: Efficient and accurate recognition of fruits and vegetables from images is one of the major challenges for computers. In this paper, we introduce a framework for the fruit and vegetable recognition problem which takes images of fruits and vegetables as input and returns their species and variety as output. The input image contains fruit or vegetable of a single variety, in arbitrary position and in any number. The whole process consists of three steps: (1) background subtraction, (2) feature extraction, and (3) training and classification. K-means clustering based image segmentation is used for background subtraction. We extracted different state-of-the-art color and texture features and combined them to achieve a more efficient and discriminative feature description. A multi-class support vector machine is used for training and classification. The experimental results show that the proposed combination scheme of color and texture features supports accurate fruit and vegetable recognition and performs better than stand-alone color and texture features.

Keywords: Multiclass SVM; Machine Learning; Global Color Histogram; Color Coherence Vector; Fruit Recognition; LBP; LTP; CLBP; Texture.

Reference to this paper should be made as follows: Dubey, S. R. and Jalal, A. S. (2014) 'Fruit and vegetable recognition by fusing color and texture features of the image using machine learning', Int. J. Applied Pattern Recognition, Vol. XX, No. XX, pp. XXX–XXX.

Biographical notes: Shiv Ram Dubey was a research fellow in the Computer Vision lab, CSE, IIT Madras. His research was funded by DST, Govt. of India. He received his B.Tech in Computer Science and Engineering in 2010 from Gurukul Kangri Vishwavidyalaya, Haridwar, India and his M.Tech in Computer Science and Engineering in 2012 from GLA University, Mathura, India.
His current research interests are Image Processing, Computer Vision and Machine Learning. Anand Singh Jalal received his M.Tech degree in Computer Science from Devi Ahilya Vishwavidyalaya, Indore, India. He received his PhD in the area of Computer Vision from the Indian Institute of Information Technology (IIIT), Allahabad, India. He has 14 years of teaching and research experience and is currently working as Professor and Head of the Department of Computer Engineering and Applications, GLA University, Mathura, India. His research interests include image processing, computer vision and pattern recognition.

Copyright © 2014 Inderscience Enterprises Ltd.

1 Introduction

Pattern recognition and contemporary vision problems such as DNA sequencing, fingerprint identification, image categorization, and face recognition often require an arbitrarily large number of properties and classes to be considered. Recognition is a 'grand challenge' for computer vision: to achieve near-human levels of recognition. In agricultural science, images are an important source of data and information. Until recently, photography was the only method to reproduce and report such data, and photographic data are difficult to treat or quantify mathematically. Digital image analysis and image processing technology circumvent these problems, building on advances in computers and microelectronics associated with traditional photography. This tool helps to improve images from the microscopic to the telescopic range and offers scope for their analysis. Image categorization, in general, relies on a combination of structural, statistical and spectral approaches. Structural approaches describe the appearance of the object using well-known primitives, for example, patches of important parts of the object. Statistical approaches represent the objects using local and global descriptors such as mean, variance, and entropy. Finally, spectral approaches use some spectral space representation to describe the objects, for example, the Fourier spectrum (Gonzalez and Woods, 2007). The proposed method exploits statistical color and texture properties to recognize fruits and vegetables in a multi-class scenario. The aim of this paper is to perform automatic image processing in the field of agriculture. Several applications of image processing technology have been developed for agricultural operations (Nasir, Rahman and Mamat, 2012). Fruit and vegetable classification can be used in supermarkets, where prices can be determined automatically for the fruits purchased by a customer.
Recognizing different kinds of fruit and vegetable is a regular task in supermarkets, where the cashier must be able to identify not only the species of a particular fruit or vegetable (i.e., banana, apple, pear) but also its variety (i.e., Golden Delicious, Jonagold, Fuji), in order to determine its price. This problem has been solved by using barcodes for packaged products, but most consumers want to pick their own produce, which cannot be packaged and so must be weighed. Issuing codes for each kind of fruit and vegetable is a common solution to this problem, but this approach has its own drawbacks: memorization is hard, which may be a source of errors in pricing. As an aid to the cashier, a small book with pictures and codes is issued in many supermarkets; the problem with this approach is that flipping through the booklet is time-consuming. This research reviews several image descriptors in the literature and presents a system to solve the fruit and vegetable recognition problem by adapting a camera at the supermarket that recognizes fruits and vegetables on the basis of color and texture cues. Formally, the system must output a list of possible species and varieties for an image of fruits or vegetables of a single variety, in random position and in any number. Objects inside a plastic bag can add hue shifts and specular reflections. Given the variety of produce and the impossibility of predicting which types of fruits and vegetables will be sold, training must be done on site by someone having little or no technical knowledge. Therefore, the system must be able to achieve a high level of accuracy using only a few training examples. Fruit and vegetable classification can also be used (1) in computer vision for the automatic sorting of fruits from a set consisting of different kinds of fruit, and (2) for the automatic design of agricultural operations through remote images.
There are a number of challenges that must be addressed to perform automatic recognition of different kinds of fruits or vegetables using images from a camera. Many types of fruit and vegetable are subject to significant variation in texture and color depending on how ripe they are. Color and texture play an important role in visual perception because they are fundamental characteristics of natural images. Instead of considering color and texture features separately, we fuse them to construct a more efficient and robust feature description. The rest of the paper is structured as follows: Section 2 gives a brief overview of related work in object recognition and image categorization; Section 3 introduces the approach used for the fruit and vegetable recognition problem as well as a color and texture feature fusion scheme; Section 4 reports the experimental results; and finally, Section 5 draws the conclusion and future directions.

2 Related works

In this section, we focus on previous work by several researchers in the areas of image categorization and fruit and vegetable recognition. Recently, there has been a lot of activity in the area of image categorization. Previous approaches considered patterns in color, edge and texture properties (Pass, Zabih and Miller, 1997; Stehling, Nascimento and Falcao, 2002; Unser, 1986). Veggie-vision (Bolle et al., 1996) was the first attempt at the fruit and vegetable recognition problem. The system uses texture, color and density (thus requiring some extra information beyond the images). Because it was created some time ago, this system does not take advantage of recent developments. The reported accuracy was around 95% in some scenarios, but it used the top four responses to achieve that result. Rocha et al. (2010) presented a unified approach that can combine many features and classifiers. The authors approached the multi-class classification problem as a set of binary classification problems, in such a way that one can assemble diverse features and classifier approaches custom-tailored to parts of the problem. They achieved good classification accuracy in some scenarios, but they used the top two responses to achieve it, and their method shows poor results for some types of fruit and vegetable, such as Fuji Apple. A framework for fruit and vegetable recognition and classification has been proposed (Dubey and Jalal, 2012a; Dubey and Jalal, 2013; Dubey, 2013). They considered images of 15 different types of fruit and vegetable collected from a supermarket. Their approach first segments the image to extract the region of interest and then calculates image features from that segmented region, which are further used for training and classification by a multi-class support vector machine. In general, the fruit and vegetable recognition problem can be seen as an instance of object categorization.
In Turk and Pentland (1991) the authors employed Principal Component Analysis (PCA) and used the reconstruction error of projecting the whole image onto a subspace and then returning to the original image space. However, this depends heavily on pose, shape and illumination. A new image descriptor for image categorization, Progressive Randomization (PR), was introduced by Rocha and Goldenstein (2007); it uses perturbations on the values of the Least Significant Bits (LSB) of images. The methodology captures the changing dynamics of the artifacts inserted by a perturbation process in each of the broad image classes. The major drawback of PR is that it uses only the LSB of the images, thereby discarding the information contained in the Most Significant Bits (MSB). Cutzu, Hammoud and Leykin (2005) used color, edge, and texture properties for differentiating photographs of paintings from photographs of real scenes. Using a single feature they achieved 70–80% correct discrimination, whereas they achieved more than 90% correct discrimination using multiple features. Low- and middle-level features have been used to distinguish broad classes of images (Lyu and Farid, 2005; Serrano, Savakis and Luo, 2002). In addition, an approach to establish image categories automatically using histogram, shape and color descriptors with an unsupervised learning method was presented by Heidemann (2005). Recently, Agarwal, Awan and Roth (2004) and Jurie and Triggs (2005) adopted approaches that treat the categorization problem as the recognition of specific parts that are characteristic of each object class. Marszalek and Schmid (2006) extended category classification with bag-of-features, which represents an image as an orderless distribution of features. They gave a method to exploit spatial relations between the features by utilizing object boundaries during supervised training.
They increase the weight of features that agree on the shape and position of the objects and suppress the weight of features that lie in the background, but they achieve lower results than expected in some cases. K-means clustering based image segmentation approaches have shown very accurate segmentation results (Dubey et al., 2013); in this paper too, we use the K-means clustering approach for background extraction. The image features used in (Singh et al., 2012; Kumar et al., 2010; Gupta et al., 2013; Dubey and Jalal, 2014a; Dubey and Jalal, 2014b) could also be integrated into fruit and vegetable recognition to improve the efficiency of the approach. Sets of local features that are invariant to image transformations are used effectively when comparing images (Kumar et al., 2011). These techniques, generally called bag-of-features, showed good results even though they do not attempt to use spatial constraints among features (Grauman and Darrel, 2005; Sivic et al., 2005). Fruit diseases have also been recognized using image processing techniques (Dubey and Jalal, 2012b; Dubey and Jalal, 2012c): the defective region is first detected by a K-means clustering based image segmentation technique, then features are extracted from that segmented defective region and used by a multi-class support vector machine for training and classification. Another interesting technique was proposed by Berg, Berg and Malik (2005). They exploited the concept of feature points in a gradient image; the points are connected by joining a path, and a match is finalized if the contour found is similar enough to one present in the database. A serious drawback of this method is that it requires a nonlinear optimization step to find the best contour; it also relies too heavily on silhouette cues, which are not very informative for fruits like lemons, melons and oranges. Using a generative constellation model, Weber (2000) took spatial constraints into account. The algorithm copes with occlusion very well but is very costly (exponential in the number of parts). Further work by Fei-Fei, Fergus and Perona (2006) introduced prior knowledge into the estimation of the distribution, reducing the number of training examples to around 10 images while retaining a good recognition rate. The problem of exponential growth in the number of parts makes it impractical for the classification problem presented here.

3 Fruit and Vegetable Recognition

The framework used for the fruit and vegetable recognition system, shown in Figure 1, operates in three phases (i.e., background subtraction, feature extraction, and training and classification). In the first step, fruit images are segmented into foreground and background using the K-means clustering technique. In the second step, color and texture features are extracted from the segmented image (i.e., the foreground of the image). In the last step, fruits and vegetables are classified into one of the classes using a trained Multi-class Support Vector Machine (MSVM). In this section we also combine color and texture features to achieve a more accurate result for fruit and vegetable classification using machine learning.

Figure 1 Fruit and vegetable recognition system

[Figure 1 shows two parallel pipelines: training images and test images each pass through background subtraction and then color and texture feature extraction and fusion; the training features are used to train the multi-class support vector machine, which then classifies the test features and outputs the recognized fruit or vegetable.]

3.1 Background Subtraction

Image segmentation is a convenient and effective method for detecting foreground objects in images with a stationary background. Background subtraction is a commonly used class of techniques for segmenting objects of interest in a scene and has been widely studied in the literature (Rocha et al., 2010). Background subtraction techniques can be seen as two-object image segmentation and often need to cope with illumination variations and sensor capture artifacts such as blur. Specular reflections, background clutter, shading and shadows in the images are major factors that must be addressed. Therefore, in order to reduce the scene complexity, it is useful to perform image segmentation focusing on the object's description only. For a real application in a supermarket, background subtraction needs to be fast, requiring only fractions of a second to carry out the operation. The best channel in which to perform the background subtraction is the S channel of HSV-stored images. This is intuitive given that the S channel is much less sensitive to lighting variations than any of the RGB color channels.
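To illustrate why the S channel is robust to lighting, here is a small helper computing HSV saturation from an 8-bit RGB pixel using the standard formula; the function name is ours, not from the paper. Note that rescaling a pixel's brightness leaves S unchanged, which is exactly the lighting insensitivity exploited above.

```python
def saturation(r, g, b):
    """S channel of the HSV representation of an 8-bit RGB pixel,
    returned in [0, 1]: S = (max - min) / max, with S = 0 for black."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else (mx - mn) / mx
```

For example, a bright red pixel (255, 0, 0) and the same pixel darkened to (100, 0, 0) both have S = 1.0, while any gray pixel has S = 0.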

Algorithm for image segmentation using K-means:
1. I ← down-sample the image, using simple linear interpolation, to 25% of its original size to speed up the processing.
2. Extract the S channel of I and represent it as a 1-d vector V of pixel intensity values.
3. Perform clustering D ← K-Means(V, k = 2).
4. M ← map D back to image space by a linear scan of D.
5. UP ← up-sample the generated binary mask M to the input image size.
6. Close small holes on UP using the closing morphological operator with a disk structuring element of radius 7 pixels.

Figure 2 Extracting region of interest from the images (a) before segmentation, (b) after segmentation

Figure 3 Background subtraction results under partial occlusions and cropping effect (a) before segmentation, (b) after segmentation

Figure 4 Background subtraction results under noisy and blurring effect (a) before segmentation, (b) after segmentation

We use a background subtraction method based on the K-means clustering technique (Rocha et al., 2010). Among the several image segmentation techniques, K-means based image segmentation offers a good trade-off between segmentation quality and cost. Some examples of image segmentation are shown in Figure 2. More background subtraction examples are depicted in Figure 3 under partial occlusion and cropping conditions, and in Figure 4 under noisy and blurred conditions. Note that our algorithm only extracts the foreground (i.e., removes the background); it does not separate different objects in a single image, because a particular image in the data set contains only one type of fruit, though in any number and orientation. At present, we have not considered foreground segmentation from a mixture of fruits and vegetables.
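The clustering step of the segmentation algorithm (k = 2 on the 1-d vector of S-channel intensities) can be sketched without any libraries as follows. The toy data, the fixed iteration count, and the rule that the brighter-saturation cluster is foreground are illustrative assumptions, not the authors' implementation.

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Cluster a 1-d vector of pixel intensities (e.g., the S channel)
    into k groups by plain Lloyd iterations."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)          # pick k initial centers
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute each center as the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def segment(values, centers):
    """Binary mask M: 1 for pixels whose nearest center is the brighter
    (higher-saturation) one, taken here as foreground."""
    fg = max(range(len(centers)), key=lambda i: centers[i])
    return [1 if min(range(len(centers)),
                     key=lambda i: abs(v - centers[i])) == fg else 0
            for v in values]

sat = [10, 12, 11, 200, 210, 205]   # toy S-channel vector V
centers = kmeans_1d(sat, k=2)
mask = segment(sat, centers)        # binary mask M
```

In the full pipeline, `mask` would be reshaped back to image space, up-sampled, and cleaned with a morphological closing, as in steps 4–6 of the algorithm.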

3.2 Feature Extraction

In this sub-section, we extract features that have shown promising results in image categorization problems. We use several state-of-the-art color and texture features and fuse them to validate the accuracy and efficiency of the proposed approach. The color features used for the fruit and vegetable recognition problem are the Global Color Histogram, Color Coherence Vector, and Color Difference Histogram, while the texture features are the Structure Element Histogram, Local Binary Pattern, Local Ternary Pattern, and Completed Local Binary Pattern.

Global Color Histogram (GCH)

The Global Color Histogram (GCH) is the simplest approach to encoding the information present in an image (Gonzalez and Woods, 2007). A GCH is a set of ordered values, one for each distinct color, representing the probability of a pixel being of that color. Uniform normalization and quantization are used to avoid scaling bias and to reduce the number of distinct colors (Gonzalez and Woods, 2007). We extracted a 64-bin GCH color feature.

Color Coherence Vector (CCV)

An approach to comparing images based on color coherence vectors was presented by Pass, Zabih and Miller (1997). They define color coherence as the degree to which image pixels of a given color are members of a large region with homogeneous color; such regions are referred to as coherent regions. Coherent pixels belong to some sizable contiguous region, whereas incoherent pixels do not. To compute the CCVs, the method blurs and discretizes the image's color space to eliminate small variations between neighboring pixels. Then, it finds the connected components in the image in order to classify the pixels of a given color bucket as either coherent or incoherent. After classifying the image pixels, CCV computes two color histograms: one for coherent pixels and another for incoherent pixels. The two histograms are stored as a single 64-bin histogram.
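As an illustration, a 64-bin GCH (4 uniform quantization levels per RGB channel, 4³ = 64 colors) can be sketched as follows. The uniform per-channel quantization layout is our assumption; the paper specifies only the 64-bin dimension.

```python
def global_color_histogram(pixels, bins_per_channel=4):
    """64-bin GCH: quantize each 8-bit RGB channel into 4 levels
    (4^3 = 64 quantized colors) and return, per color, the fraction
    of pixels falling in that bin (a probability distribution)."""
    n = bins_per_channel
    hist = [0.0] * (n ** 3)
    for (r, g, b) in pixels:
        # Map each 0..255 channel value to one of n levels.
        qr, qg, qb = r * n // 256, g * n // 256, b * n // 256
        hist[qr * n * n + qg * n + qb] += 1
    total = len(pixels)
    return [h / total for h in hist]   # normalize to probabilities
```

For example, an image that is three-quarters pure red and one-quarter pure green yields a histogram with mass 0.75 in the red bin and 0.25 in the green bin.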

Color Difference Histogram (CDH)

A feature descriptor called the color difference histogram (CDH) is designed using the color differences of neighboring pixels at a certain distance (Liu and Yang, 2013). The unique characteristic of CDH is that it counts the perceptually uniform color difference between two points under different backgrounds with regard to colors and edge orientations in the L*a*b* color space. It pays particular attention to color, edge orientation and perceptually uniform color differences, and encodes them via a feature representation in a manner similar to the human visual system. The dimension of the CDH descriptor is 54 bins in this paper.

Structure Element Histogram (SEH)

A texture descriptor called the structure elements' histogram (SEH) has been proposed to encode the small local structures of the image (Xingyuan and Zongyu, 2013). SEH describes images with local features in the HSV color space (quantized to 72 bins). SEH integrates the advantages of both statistical and structural texture description methods, and it can represent the spatial correlation of local textures. We used a 60-bin SEH feature description in this paper.

Local Binary Pattern (LBP)

Given a pixel in the input image, LBP is computed by comparing it with its neighbors (Ojala, Pietikäinen and Mäenpää, 2002):

LBP_{N,R} = \sum_{n=0}^{N-1} s(v_n - v_c)\,2^n, \quad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \quad (1)

where v_c is the value of the central pixel, v_n is the value of its neighbors, R is the radius of the neighborhood and N is the total number of neighbors. If the coordinate of v_c is (0, 0), then the coordinates of v_n are (R\cos(2\pi n/N), R\sin(2\pi n/N)). The values of neighbors that do not fall exactly on the image grid may be estimated by interpolation. Let the size of the image be I×J. After the LBP code of each pixel is computed, a histogram is created to represent the texture image:

H(k) = \sum_{i=1}^{I} \sum_{j=1}^{J} f(LBP_{N,R}(i, j), k), \quad k \in [0, K], \quad f(x, y) = \begin{cases} 1, & x = y \\ 0, & \text{otherwise} \end{cases} \quad (2)

where K is the maximal LBP code value. In this experiment the values of N and R are set to 8 and 1 respectively to compute the LBP feature. The LBP feature is then resized to a dimension of 64 bins.

Local Ternary Pattern (LTP)

The local ternary pattern is a natural extension of the original LBP. In (Tan and Triggs, 2010), Tan et al. proposed to use a base-3 pattern to represent the region. As a computationally efficient local image texture descriptor, LTP has been used with considerable success in a number of visual recognition tasks. The LTP replaces the sign function of LBP with a three-valued function:

s'(P(n), P(0), \tau) = \begin{cases} 1, & P(n) \ge P(0) + \tau \\ 0, & |P(n) - P(0)| < \tau \\ -1, & P(n) \le P(0) - \tau \end{cases} \quad (3)

where P(0) is the intensity of the center pixel, P(n) is the intensity of the nth neighbor, and \tau is a pre-defined threshold. The dimension of the LTP feature is 85 bins in this experiment.

Completed Local Binary Pattern (CLBP)

The LBP feature considers only the signs of local differences (i.e., the difference of each pixel with its neighbors), whereas the CLBP feature considers both the signs (S) and magnitudes (M) of local differences as well as the original center gray level (C) value (Guo, Zhang and Zhang, 2010). CLBP is the combination of three components, namely CLBP_S, CLBP_M, and CLBP_C. CLBP_S is the same as the original LBP and codes the sign information of local differences. CLBP_M codes the magnitude information of local differences:

CLBP\_M_{N,R} = \sum_{n=0}^{N-1} t(m_n, c)\,2^n, \quad t(x, c) = \begin{cases} 1, & x \ge c \\ 0, & x < c \end{cases} \quad (4)

where c is a threshold, set to the mean value of the input image in this experiment. CLBP_C codes the information of the original center gray level value:

CLBP\_C_{N,R} = t(g_c, c_I), \quad t(x, c) = \begin{cases} 1, & x \ge c \\ 0, & x < c \end{cases} \quad (5)

where the threshold c_I is set to the average gray level of the input image. In this experiment the values of N and R are set to 8 and 1 respectively to compute the CLBP feature. The CLBP feature is then resized to a dimension of 64 bins.
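A minimal sketch of LBP with N = 8 and R = 1 follows; it uses the 8 immediate grid neighbors (so no interpolation is needed) and skips border pixels for simplicity. This is an illustrative simplification of equation (1) and its histogram (equation (2)), not the authors' implementation.

```python
def lbp_image(img):
    """LBP codes for a 2-d grayscale image (list of lists), with N=8, R=1.
    Each neighbor contributes one bit: 1 if its value >= the center value."""
    h, w = len(img), len(img[0])
    # The 8 immediate neighbors, in a fixed circular order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i][j]
            code = sum(1 << n for n, (di, dj) in enumerate(offsets)
                       if img[i + di][j + dj] >= c)
            codes.append(code)
    return codes

def lbp_histogram(codes, k=256):
    """Histogram H(k) of LBP codes: one bin per possible code value."""
    hist = [0] * k
    for code in codes:
        hist[code] += 1
    return hist
```

A bright center surrounded by darker neighbors gives code 0 (no neighbor passes the `>=` test), while a dark center surrounded by brighter neighbors gives code 255 (all eight bits set).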

3.3 Training and Classification

Recently, Rocha et al. (2010) presented a unified approach that can combine many features and classifiers. The authors approach the multi-class classification problem as a set of binary classification problems, in such a way that one can assemble diverse features and classifier approaches custom-tailored to parts of the problem. A class binarization is defined as a mapping of a multi-class problem onto two-class problems (divide-and-conquer), and the binary classifier is referred to as a base learner. For an N-class problem, N × (N − 1)/2 binary classifiers are needed, where N is the number of different classes. The ijth binary classifier uses the patterns of class i as positive and the patterns of class j as negative. The minimum distance of the generated vector to the binary pattern (ID) representing each class yields the final outcome: a test case belongs to the class for which the distance between its ID and the vector of binary outcomes is minimum.

Table 1 Unique ID of each class

Class   xy   xz   yz
x       +1   +1    0
y       -1    0   +1
z        0   -1   -1

The approach can be understood through a simple three-class problem. Let the three classes be x, y, and z. Three binary classifiers, each over two classes (i.e., xy, xz, and yz), are used as base learners, and each binary classifier is trained with the training images. Each class receives a unique ID as shown in Table 1. Populating the table is straightforward. First, we perform the binary comparison xy and tag class x with the outcome +1, class y with −1, and set the remaining entries in that column to 0. Thereafter, we repeat the procedure for xz, tagging class x with +1, class z with −1, and the remaining entries in that column with 0. Finally, we repeat this procedure for the binary classifier yz, tagging class y with +1, class z with −1, and setting the remaining entries in that column to 0, where an entry of 0 means a "don't care" value. Each row then represents the unique ID of that class (e.g., y → [−1, 0, +1]). Each binary classifier returns a binary response for any input example. For instance, if the outcomes for the binary classifiers xy, xz, and yz are +1, −1, and +1 respectively, then the input example belongs to the class whose ID has the minimum distance from the vector [+1, −1, +1]. So the final answer is given by the minimum distance of



min { dist([+1, −1, +1], [+1, +1, 0]), dist([+1, −1, +1], [−1, 0, +1]), dist([+1, −1, +1], [0, −1, −1]) }



In this experiment, we have used Multi-class Support Vector Machine (MSVM) as a set of binary Support Vector Machines (SVMs) for the training and classification.
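As an illustration of the ID-table scheme described above, the following library-free sketch builds the per-class ID vectors for the one-vs-one binarization and decodes a vector of binary outcomes by minimum squared distance. It is a toy reconstruction of the decision rule only; the binary base learners (the SVMs) are omitted, and the helper names are ours.

```python
from itertools import combinations

def build_id_table(classes):
    """Build Table 1: for each class pair (i, j), class i is tagged +1,
    class j is tagged -1, and every other class gets 0 ("don't care")."""
    pairs = list(combinations(classes, 2))
    ids = {c: [0] * len(pairs) for c in classes}
    for col, (i, j) in enumerate(pairs):
        ids[i][col] = +1
        ids[j][col] = -1
    return pairs, ids

def decode(outcomes, ids):
    """Assign the class whose ID vector has the minimum squared
    Euclidean distance to the vector of binary classifier outcomes."""
    def dist2(id_vec):
        return sum((o - e) ** 2 for o, e in zip(outcomes, id_vec))
    return min(ids, key=lambda c: dist2(ids[c]))

pairs, ids = build_id_table(['x', 'y', 'z'])
# pairs is [('x','y'), ('x','z'), ('y','z')]; ids reproduces Table 1.
predicted = decode([-1, -1, +1], ids)   # outcomes favor class y
```

In practice, each of the N(N − 1)/2 positions of `outcomes` would be the ±1 output of one trained binary SVM on the test example's feature vector.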

Figure 5 Data set used having 15 different kinds of fruit and vegetable (Agata Potato, Asterix Potato, Granny Smith Apple, Honeydew Melon, Orange, Plum, Fuji Apple, Cashew, Kiwi, Spanish Pear, Onion, Nectarine, Taiti Lime, Watermelon, Diamond Peach)

4 Results and discussion

In this section, we describe the data set of fruits and vegetables, evaluate the proposed approach over the 15 types of fruits and vegetables and discuss various issues regarding the performance and efficiency of the system. First, we describe the data set used in this experiment and highlight several difficulties present in the data set. Second, the performance of different color, texture and fused features is compared for the fruit and vegetable recognition problem.

4.1 Data set

To demonstrate the performance of the proposed approach, we used a supermarket data set of fruits and vegetables comprising 15 different categories: Plum (264), Agata Potato (201), Asterix Potato (182), Cashew (210), Onion (75), Orange (103), Taiti Lime (105), Kiwi (151), Fuji Apple (212), Granny Smith Apple (155), Watermelon (192), Honeydew Melon (145), Nectarine (247), Spanish Pear (159), and Diamond Peach (211), totaling 2612 images. Figure 5 depicts the classes of the data set. (The data set is available at http://www.ic.unicamp.br/~rocha/pub/downloads/tropical-fruits-DB-1024x768.tar.gz.)

Figure 6 Illumination differences, Kiwi category

Figure 6 shows an example of the Kiwi category under different lighting. Figure 7 shows examples of the Cashew category with differences in pose. Figure 8 shows the variability in the number of elements for the Orange category. Figure 9 shows examples of cropping and partial occlusion. The presence of these variations makes the data set more realistic.

4.2 Result Discussion

To evaluate the accuracy of the proposed approach quantitatively, we used the GCH, CCV, CDH, SEH, LBP, LTP, and CLBP feature descriptors. In this experiment, we use different numbers of images per class for the training.

Figure 7 Pose differences, Cashew category

Figure 8 Variability in the number of elements, Orange category

Figure 9 Examples of cropping and partial occlusion

Figure 10 (a) Comparison of GCH, CCV and CDH (i.e., color features) using MSVM as a base learner, (b) comparison of SEH, LBP, LTP and CLBP (i.e., texture features) using MSVM as a base learner, in terms of accuracy and AUC plot

Figure 11 Average accuracy for the fruit and vegetable recognition problem using MSVM by fusing color and texture features: (a) GCH fused with LBP (i.e., GCH + LBP) and CCV fused with LTP (i.e., CCV + LTP); (b) CDH fused with CLBP (i.e., CDH + CLBP), CDH fused with SEH (i.e., CDH + SEH), and CDH fused with SEH and CLBP (i.e., CDH + SEH + CLBP); (c) comparison among the fused descriptors GCH + LBP, CCV + LTP, CCV + CLBP, CDH + CLBP, CDH + SEH, and CDH + SEH + CLBP

Figure 10 and 11 shows the average accuracy achieved with its Area Under Curve (AUC) while testing the introduced system for the fruits and vegetables classification considering different features. The x-axis represents the number of images per class for the training and y-axis represents the average accuracy. The average accuracy is computed by the following equation, ( )

( )

We calculated the average accuracy for the color and texture features using MSVM for training and classification. Figure 10(a) shows the results for the color features GCH, CCV and CDH together with the AUC plot. Among the color features, CCV performs best because it exploits the concept of coherent and incoherent regions, whereas CDH performs worst because it incorporates a perceptually uniform color difference, which is better suited to retrieval problems. GCH yields average performance. The performance of the texture features (i.e. SEH, LBP, LTP and CLBP) is plotted in Figure 10(b) in terms of average accuracy and AUC. LTP performs best among the texture features because, by dividing the local difference into two parts, it gains more discriminative ability and robustness. The accuracy of the SEH descriptor is lower than that of the other texture descriptors because the basic structures of SEH lack some properties, such as rotation invariance and the ability to capture fine structures. LBP and CLBP perform better than SEH (i.e. average performance). Figure 11 depicts the average accuracy, with the AUC curve, obtained after fusing the color and texture features. The fusion of two features is carried out by concatenating one after the other: if feature A is fused with feature B, then the dimension of the fused feature A + B is the dimension of A plus the dimension of B. The results of GCH + LBP (i.e. the fusion of GCH and LBP) are much better than those of the stand-alone GCH or LBP features, and the results of CCV + LTP (i.e. the fusion of CCV with LTP) are likewise much better than those of CCV and LTP alone, as shown in Figure 11(a). The fusion results of CDH + CLBP (i.e. CDH fused with CLBP), CDH + SEH (i.e. the fusion of CDH and SEH) and CDH + SEH + CLBP (i.e. CDH fused with SEH and CLBP) are reported in Figure 11(b). It is evident that the performance of each fused feature is better than that of the individual color and texture features. The results of CDH and SEH alone are poor, but the result of CDH + SEH is comparable. Figure 11(c) compares the fused features: CCV + LTP performs best among the tested combinations, whereas CDH + SEH performs worst. GCH + LBP also shows performance comparable to CCV + LTP in terms of both accuracy and AUC.

Figure 12 Performance comparison between the MSVM and KNN classifiers using the fused (a) GCH + LBP and (b) CCV + LTP features in terms of accuracy and AUC
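The concatenation-based fusion described above can be sketched as follows (the descriptor dimensions are illustrative placeholders, not the exact sizes of the GCH and LBP descriptors used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 4
gch = rng.random((n_images, 64))   # e.g. a 64-bin global color histogram per image
lbp = rng.random((n_images, 59))   # e.g. a 59-bin uniform-LBP histogram per image

# Fused feature A + B: simple concatenation, so dim(A + B) = dim(A) + dim(B)
fused = np.hstack([gch, lbp])
print(fused.shape)                 # (4, 123)
```

Each classifier then sees a single 123-dimensional vector per image in this toy setup.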


We also evaluated the fused GCH + LBP and CCV + LTP features using the k-nearest neighbor (KNN) classifier, as shown in Figure 12. The classification accuracy of the MSVM classifier is better than that of the KNN classifier for each fused feature, with a larger AUC for MSVM. We therefore used MSVM as the base learner in this paper to train and classify fruit and vegetable images. Another important measurement is the recognition accuracy and AUC per category, which may point out the categories that need more attention. Figure 13 depicts the category-wise accuracy and AUC for each of the 15 categories of fruits and vegetables using the CCV + LTP fused feature with MSVM as the classifier. For better visualization, we divided it into two plots, one with the high-accuracy categories (see Figure 13(a)) and the other with the lower-accuracy categories (see Figure 13(b)). The accuracy of the cashew category is the best among all categories. The performance on the kiwi category is the worst because the visual appearance of kiwi is similar to that of nectarine. Table 2 shows the accuracy (%) of the fruit and vegetable classification problem when MSVM is trained with 60 images per class for the different combinations of color and texture features (i.e. GCH + LBP, CCV + LTP, CCV + CLBP, CDH + CLBP, CDH + SEH, and CDH + SEH + CLBP). From this table it is clear that the GCH + LBP and CCV + LTP features show better results than the other combinations. For the categories cashew, agata potato, granny smith apple, onion and taiti lime, very good recognition accuracy is achieved by every combination, whereas for the categories kiwi, nectarine, fuji apple and spanish pear a very poor recognition rate is reported.
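The MSVM-versus-KNN comparison can be reproduced in outline with scikit-learn (a sketch on synthetic stand-in data, not the authors' dataset or implementation; `SVC` realizes the multi-class SVM via a one-vs-one scheme):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for fused feature vectors (dimensions are illustrative)
X, y = make_classification(n_samples=600, n_features=100, n_informative=30,
                           n_classes=5, random_state=0)
# 60 training examples per class, as in the paper's largest training set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=60 * 5,
                                          stratify=y, random_state=0)

for name, clf in [("MSVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te) * 100
    print(f"{name}: {acc:.1f}%")
```

On real descriptors the margin between the two classifiers would of course depend on the features and kernel chosen.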

Figure 13 Recognition accuracy and AUC per category using the CCV + LTP fused feature: (a) categories having high accuracy (agata potato, asterix potato, cashew, diamond peach, granny smith apple, onion, taiti lime, watermelon), and (b) categories having lower accuracy (fuji apple, honneydew melon, kiwi, nectarine, orange, plum, spanish pear); accuracy per category (%) and AUC are plotted against the number of training examples per class (20–60)

Table 2 Recognition accuracy (%) when the system is trained with 60 images per class for the different combinations of color and texture features

Fruit               GCH+LBP  CCV+LTP  CCV+CLBP  CDH+CLBP  CDH+SEH  CDH+SEH+CLBP
Agata Potato         100.0    99.50    99.50     93.03     96.02      99.00
Asterix Potato        95.60   99.45    98.35     95.05     96.15      96.15
Cashew               100.0   100.0    100.0      99.52     99.52      99.52
Diamond Peach         99.53  100.0    100.0      96.21     89.57      96.21
Fuji Apple            74.53   67.45    62.74     59.43     60.38      70.75
Granny Smith Apple   100.0   100.0    100.0      97.42     96.13      99.35
Honneydew Melon       99.31   99.31    98.62     98.62     90.34      97.93
Kiwi                  70.20   74.83    70.20     74.83     75.50      76.82
Nectarine             87.04   88.26    83.81     80.97     85.43      89.88
Onion                100.0   100.0    100.0     100.0      98.67     100.0
Orange                99.03   99.03    98.06     98.06     97.09      98.06
Plum                  99.24   99.62    98.48     97.35     97.73      98.86
Spanish Pear          83.65   75.47    85.53     83.02     69.81      85.53
Taiti Lime           100.0   100.0    100.0     100.0      96.19      99.05
Watermelon            99.48  100.0     99.48     98.44     98.96      99.48
Average               93.84   93.53    92.98     91.46     89.83      93.77

Table 3 Statistical analysis (mean μ and standard deviation σ) of the accuracy of each type of fruit and vegetable for the different combinations of color and texture features

Fruit               GCH+LBP      CCV+LTP      CCV+CLBP     CDH+CLBP     CDH+SEH      CDH+SEH+CLBP
                    μ      σ     μ      σ     μ      σ     μ      σ     μ      σ     μ      σ
Agata Potato        99.7   0.4   98.9   0.8   99.0   0.7   91.0   2.4   92.5   3.3   96.4   2.2
Asterix Potato      95.2   1.5   99.0   0.9   96.2   1.3   90.2   3.0   92.1   3.6   95.2   1.5
Cashew              99.9   0.2   99.6   0.2  100.0   0.0   99.3   0.6   99.5   0.0   99.5   0.2
Diamond Peach       98.9   0.5   99.7   0.2   99.7   0.2   95.7   1.9   90.4   2.6   95.5   2.3
Fuji Apple          74.9   2.8   71.0   4.2   62.6   0.9   67.3   5.7   62.0   2.6   73.6   3.1
Granny Smith Apple  99.5   0.4   99.5   0.3   99.6   0.3   97.3   1.4   97.7   0.9   99.6   0.3
Honneydew Melon     94.1   6.3   98.1   2.1   95.9   3.1   87.3  10.7   79.2  10.1   89.7   9.5
Kiwi                49.3  15.5   55.3  14.3   52.4  12.6   47.0  15.6   48.3  17.9   49.7  17.7
Nectarine           85.5   4.2   87.8   2.6   83.2   2.7   79.7   3.0   83.4   3.4   88.7   4.0
Onion               98.7   1.6   99.4   0.7   98.4   2.0   94.5   6.6   89.8   8.9   95.1   5.1
Orange              93.0   8.9   97.6   2.8   96.9   1.9   94.1   5.2   86.2  12.3   94.0   9.2
Plum                87.5  15.8   83.8  19.9   83.6  19.8   91.1   8.9   87.0  13.0   90.4  11.8
Spanish Pear        70.5  12.3   72.4   7.1   74.2  10.5   68.6  12.2   60.1   6.8   67.4  14.0
Taiti Lime          99.7   0.7   98.6   2.3   99.8   0.4   97.6   4.1   96.8   0.7   97.2   3.4
Watermelon          98.8   1.0   99.0   1.6   98.8   1.0   96.9   3.0   97.7   1.6   99.1   1.2
Average             89.7   4.8   90.6   4.0   89.4   3.8   86.5   5.6   84.2   5.8   88.7   5.7

Figure 14 Probability distribution of the deviation with respect to the zero mean of accuracy for each fused feature (GCH+LBP, CCV+LTP, CCV+CLBP, CDH+CLBP, CDH+SEH, CDH+SEH+CLBP); probability is plotted against the deviation of accuracy from the zero mean (−20 to 20)

We also analyzed the accuracy of each fused feature over each class statistically. The mean (μ) and standard deviation (σ) of the accuracies obtained using different numbers of training images are reported in Table 3. The highest mean accuracy is 90.6%, achieved by the CCV + LTP descriptor, whereas the minimum standard deviation is 3.8, achieved by the CCV + CLBP descriptor. This means that CCV + LTP has the better overall performance, but CCV + CLBP shows the more uniform performance. This can also be seen in Figure 14, where the probability distribution is plotted against the deviation from the zero mean of the accuracy of each descriptor: CCV + CLBP has the highest amplitude at zero mean with the least deviation. Figure 15 depicts some examples of fruit and vegetable images that are very difficult to recognize but for which our method is still able to correctly detect the type of fruit or vegetable present. Nearly all of these images exhibit severe illumination differences, yet our method still recognizes them correctly. In the 7th and 12th images, some portions of the objects are cropped, but they are still accurately recognized by the CCV + LTP combination of features when MSVM is trained with 60 images per class.
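The per-class statistics in Table 3 are simply the mean and standard deviation of the accuracies obtained across the different training-set sizes; a minimal sketch (the accuracy values below are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical per-class accuracies (%) for training sizes 20, 30, 40, 50, 60
acc = np.array([38.0, 44.5, 52.0, 55.0, 57.0])

mu = acc.mean()          # mean accuracy over the training-set sizes
sigma = acc.std(ddof=0)  # population standard deviation
print(f"mu = {mu:.1f}, sigma = {sigma:.1f}")
```

Repeating this per class and per fused feature reproduces the layout of Table 3.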

Figure 15 Some difficult images that are correctly classified when MSVM is trained with 60 images per category

Figure 16 Some images of the data set which are misclassified when MSVM is trained with 60 images per class

Table 4 Comparison with existing fruit recognition approaches in terms of the accuracy rate

Reference                  Fruit                    Pre-Processing                   Analysis                             Accuracy Rate
Proposed                   15 types of fruits       K-means clustering               Multiclass Support Vector Machine    93.84%
Arivazhagan et al. (2010)  Same dataset of 15       Conversion to HSV colour space   Minimum Distance Criterion           86.0%
                           types (as in our case)
Seng and Mirisaee (2009)   7 types of fruits        Manual area segmentation         K-Nearest Neighbours Algorithm       90.0%
Unay and Gosselin (2005)   Apple                    Thresholding the filter's        Adaptive Boosting &                  90.3%
                                                    intensity value on the image     Support Vector Machine
Jamil et al. (2009)        Palm oil                 Binary conversion                Fuzzy Logic                          73.3%
Lu et al. (2012)           Citrus                   Conversion to HSV                Morphological reconstruction using   92.6%
                                                                                    chromatic aberration map and hue map

Some images of fruits and vegetables misclassified by the CCV + LTP feature when MSVM is trained with 60 examples per category are shown in Figure 16. Two images are misjudged due to the presence of a hand in the image. Most of the misclassified images belong to the kiwi, fuji apple or nectarine category, because the color and texture of these categories are very similar and in some cases it is difficult even for a human to distinguish between them. The rest of the images in Figure 16 are subject to blurring, cropping or illumination variation, which leads to misclassification. We have also compared our results with other existing fruit recognition methods in Table 4 and found that the proposed method outperforms the others in terms of recognition accuracy. Arivazhagan et al. (2010) also worked on the same dataset (as used in this paper) and achieved 86.0% accuracy, whereas our method achieves nearly 94%. From the experimental results, it can be deduced that the proposed approach can be used effectively to recognize fruits and vegetables from images.

5 Conclusion

This paper introduced and evaluated an approach to recognize fruits and vegetables from images. The described framework operates in three steps: background subtraction, feature extraction, and training and classification. Background subtraction is performed using a K-means clustering based segmentation technique. We extracted several state-of-the-art color and texture features from the foreground image and fused them together. The fusion of color and texture information makes the resulting feature more discriminative than the color and texture features individually. This paper uses a multi-class support vector machine for training and classification. We also compared the performance of the fused features with the support vector machine and nearest neighbor classifiers, which indicates that the support vector machine is the better choice for training and classification. The experimental results suggest that the introduced method supports accurate recognition of fruits and vegetables from images. One future direction of this work is to classify, from images, the diseases present in fruits.
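As a closing illustration, the K-means based background subtraction step can be sketched as a 2-cluster k-means on pixel intensities, keeping the cluster that differs from the (assumed bright, uniform) background; this is a simplified stand-in, not the exact segmentation procedure used in the paper:

```python
import numpy as np

def kmeans_foreground_mask(img, iters=10):
    """2-means on pixel intensities; returns True where a pixel is 'foreground'.
    Assumes a roughly uniform light background (a simplifying assumption)."""
    x = img.reshape(-1).astype(float)
    c = np.array([x.min(), x.max()])  # initialize the two centers at the extremes
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute the centers
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    dark = c.argmin()                 # the darker cluster is the foreground here
    return (labels == dark).reshape(img.shape)

# Toy image: a dark 'fruit' blob on a bright background
img = np.full((6, 6), 240.0)
img[2:4, 2:4] = 30.0
mask = kmeans_foreground_mask(img)
print(mask.sum())                     # 4 foreground pixels
```

Real images would be clustered in a color space rather than on grayscale intensities, but the mechanism is the same.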

References

Agarwal, S., Awan, A. and Roth, D. (2004) 'Learning to detect objects in images via a sparse, part-based representation', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 11, pp. 1475–1490.
Arivazhagan, S., Shebiah, R.N., Nidhyanandhan, S.S. and Ganesan, L. (2010) 'Fruit recognition using color and texture features', Journal of Emerging Trends in Computing and Information Sciences, Vol. 1, No. 2, pp. 90–94.
Berg, A., Berg, T. and Malik, J. (2005) 'Shape matching and object recognition using low distortion correspondences', in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, Vol. 1, pp. 26–33.
Bolle, R.M., Connell, J.H., Haas, N., Mohan, R. and Taubin, G. (1996) 'Veggievision: a produce recognition system', in Proceedings of the Third IEEE Workshop on Applications of Computer Vision, Sarasota, USA, pp. 1–8.
Cutzu, F., Hammoud, R. and Leykin, A. (2005) 'Distinguishing paintings from photographs', Computer Vision and Image Understanding, Vol. 100, No. 3, pp. 249–273.
Dubey, S.R. (2012) Automatic Recognition of Fruits and Vegetables and Detection of Fruit Diseases, Master's Thesis, GLA University Mathura, India.
Dubey, S.R. and Jalal, A.S. (2012a) 'Robust approach for fruit and vegetable classification', Procedia Engineering, Vol. 38, pp. 3449–3453.
Dubey, S.R. and Jalal, A.S. (2012b) 'Detection and classification of apple fruit diseases using complete local binary patterns', in Proceedings of the 3rd International Conference on Computer and Communication Technology, MNNIT Allahabad, India, pp. 346–351.
Dubey, S.R. and Jalal, A.S. (2012c) 'Adapted approach for fruit disease identification using images', International Journal of Computer Vision and Image Processing, Vol. 2, No. 3, pp. 51–65.
Dubey, S.R. and Jalal, A.S. (2013) 'Species and variety detection of fruits and vegetables from images', International Journal of Applied Pattern Recognition, Vol. 1, No. 1, pp. 108–126.
Dubey, S.R., Dixit, P., Singh, N. and Gupta, J.P. (2013) 'Infected fruit part detection using K-means clustering segmentation technique', International Journal of Interactive Multimedia and Artificial Intelligence, Vol. 2, No. 2, pp. 65–72.
Dubey, S.R. and Jalal, A.S. (2014a) 'Automatic fruit disease classification using images', Computer Vision and Image Processing in Intelligent Systems and Multimedia Technologies, pp. 82–100.
Dubey, S.R. and Jalal, A.S. (2014b) 'Fruit disease recognition using improved sum and difference histogram from images', International Journal of Applied Pattern Recognition.
Fei-Fei, L., Fergus, R. and Perona, P. (2006) 'One-shot learning of object categories', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 4, pp. 594–611.
Gonzalez, R. and Woods, R. (2007) Digital Image Processing, 3rd ed., Prentice-Hall.
Grauman, K. and Darrel, T. (2005) 'Efficient image matching with distributions of local invariant features', in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, Vol. 2, pp. 627–634.
Guo, Z., Zhang, L. and Zhang, D. (2010) 'A completed modeling of local binary pattern operator for texture classification', IEEE Transactions on Image Processing, Vol. 19, No. 6, pp. 1657–1663.
Gupta, J.P., Singh, N., Dixit, P., Semwal, V.B. and Dubey, S.R. (2013) 'Human activity recognition using gait pattern', International Journal of Computer Vision and Image Processing, Vol. 3, No. 3, pp. 31–53.
Heidemann, G. (2005) 'Unsupervised image categorization', Image and Vision Computing, Vol. 23, No. 10, pp. 861–876.
Jamil, N., Mohamed, A. and Abdullah, S. (2009) 'Automated grading of palm oil fresh fruit bunches (FFB) using neuro-fuzzy technique', in International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, pp. 245–249.
Jurie, F. and Triggs, B. (2005) 'Creating efficient codebooks for visual recognition', in Proceedings of the Tenth IEEE International Conference on Computer Vision, Washington, DC, USA, Vol. 1, pp. 604–610.
Liu, G.H. and Yang, J.Y. (2013) 'Content-based image retrieval using color difference histogram', Pattern Recognition, Vol. 46, No. 1, pp. 188–198.
Lu, J., Sang, N., Ou, Y., Huang, Z. and Shi, P. (2012) 'Detecting citrus fruits with shadow within tree canopy by a fusing method', in 5th International Congress on Image and Signal Processing, Chongqing, China, pp. 1229–1232.
Lyu, S. and Farid, H. (2005) 'How realistic is photorealistic', IEEE Transactions on Signal Processing, Vol. 53, No. 2, pp. 845–850.
Kumar, K.S., Prasad, S., Banwral, S. and Semwal, V.B. (2010) 'Sports video summarization using priority curve algorithm', International Journal on Computer Science & Engineering.
Kumar, K.S., Semwal, V.B., Prasad, S. and Tripathi, R.C. (2011) 'Generating 3D model using 2D images of an object', International Journal of Engineering Science and Technology, Vol. 3, No. 1, pp. 406–415.
Marszalek, M. and Schmid, C. (2006) 'Spatial weighting for bag-of-features', in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, Vol. 2, pp. 2118–2125.
Nasir, A.F.A., Rahman, M.N.A. and Mamat, A.R. (2012) 'A study of image processing in agriculture application under high computing environment', International Journal of Computer Science and Telecommunications, Vol. 3, No. 8, pp. 16–24.
Ojala, T., Pietikäinen, M. and Mäenpää, T. (2002) 'Multiresolution gray-scale and rotation invariant texture classification with local binary patterns', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, pp. 971–987.
Pass, G., Zabih, R. and Miller, J. (1997) 'Comparing images using color coherence vectors', in Proceedings of the Fourth ACM International Conference on Multimedia, New York, USA, pp. 65–73.
Rocha, A. and Goldenstein, S. (2007) 'PR: more than meets the eye', in Proceedings of the Eleventh IEEE International Conference on Computer Vision, pp. 1–8.
Rocha, A., Hauagge, C., Wainer, J. and Siome, D. (2010) 'Automatic fruit and vegetable classification from images', Computers and Electronics in Agriculture, Vol. 70, No. 1, pp. 96–104.
Seng, W.C. and Mirisaee, S.H. (2009) 'A new method for fruits recognition system', in Proceedings of the International Conference on Electrical Engineering and Informatics, pp. 130–134.
Serrano, N., Savakis, A. and Luo, A. (2002) 'A computationally efficient approach to indoor/outdoor scene classification', in Proceedings of the 16th International Conference on Pattern Recognition, Vol. 4, pp. 146–149.
Singh, N., Dubey, S.R., Dixit, P. and Gupta, J.P. (2012) 'Semantic image retrieval by combining color, texture and shape features', in Proceedings of the International Conference on Computing Sciences, pp. 116–120.
Sivic, J., Russell, B., Efros, A., Zisserman, A. and Freeman, W. (2005) 'Discovering objects and their location in images', in Proceedings of the Tenth IEEE International Conference on Computer Vision, pp. 370–377.
Stehling, R., Nascimento, M. and Falcao, A. (2002) 'A compact and efficient image retrieval approach based on border/interior pixel classification', in Proceedings of the Eleventh International Conference on Information and Knowledge Management, New York, USA, pp. 102–109.
Tan, X. and Triggs, B. (2010) 'Enhanced local texture feature sets for face recognition under difficult lighting conditions', IEEE Transactions on Image Processing, Vol. 19, No. 6, pp. 1635–1650.
Turk, M. and Pentland, A. (1991) 'Eigenfaces for recognition', Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71–86.
Unay, D. and Gosselin, B. (2005) 'Artificial neural network-based segmentation and apple grading by machine vision', in IEEE International Conference on Image Processing, pp. 630–633.
Unser, M. (1986) 'Sum and difference histograms for texture classification', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 1, pp. 118–125.
Weber, M. (2000) Unsupervised Learning of Models for Object Recognition, PhD Thesis, Caltech, Pasadena, USA.
Xingyuan, W. and Zongyu, W. (2013) 'A novel method for image retrieval based on structure elements descriptor', Journal of Visual Communication and Image Representation, Vol. 24, No. 1, pp. 63–74.
