Automatic CAD System for HEp-2 Cell Image Classification

Shahab Ensafi*, Shijian Lu†, Ashraf A. Kassim* and Chew Lim Tan‡
* Electrical and Computer Engineering Dept., National University of Singapore, Email: {shahab.ensafi, ashraf}@nus.edu.sg
† Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Email: [email protected]
‡ School of Computing, National University of Singapore, Email: [email protected]

Abstract—It has been estimated that autoimmune diseases are among the top ten leading causes of death among women in all age groups up to 65 years. However, their detection by indirect immunofluorescence (IIF) image analysis depends heavily on the experience of the physician. An accurate and automatic Computer Aided Diagnosis (CAD) system would greatly help the classification of Human Epithelial type 2 (HEp-2) cell images with little human intervention. In this paper we present an automatic HEp-2 cell image classification technique that exploits image representations at different spatial scales and sparse coding of SIFT features. Additionally, spatial max pooling of the sparse codes at different scales is used to boost the classification performance. The proposed method is tested on the ICPR 2012 contest dataset, and experiments show that it clearly outperforms state-of-the-art techniques at both the cell and image levels as well as at the two intensity levels.

I. INTRODUCTION

Computer Aided Diagnosis (CAD) systems are widely used for different tasks in medicine, such as proofreading, increasing diagnosis speed, and training physicians for specialized tasks. One such system has recently been proposed for automatic HEp-2 cell classification, which is important for the detection of antibodies in human serum. If antibodies are present, they bind to the antigens on the cells; in the case of antinuclear antibodies (ANAs), the antibodies bind to the nucleus. To visualize the antibodies, a fluorescently tagged anti-human antibody binds to them. The fluorescent dye is usually fluorescein isothiocyanate (FITC) or rhodamine B [1]. To see the molecule under the microscope, light of a specific wavelength is shone on it; the fluorescent dye reacts and makes the molecule visible. Depending on the antibodies present in the human serum and the localization of the antigen in the cell, characteristic patterns are seen on the HEp-2 cells [2].

This paper presents an automatic HEp-2 cell image classification technique that leverages recent advances in computer vision. In particular, grid SIFT descriptors are used for feature detection, where SIFT features are extracted from fixed-size patches at the grid points and concatenated to provide the feature vectors of each cell image. Grid SIFT is used because certain types of cell images, such as the one labeled Homogeneous in Fig. 2, are homogeneous and have very sparse feature points. A Bag of Words (BoW) model is adopted for classification, which quantizes the appearance descriptors of an image into visual words [3]. These discrete words form the dictionary that represents the elements of the image, and are widely used in object recognition and scene classification [4] [5]. To capture the spatial order of descriptors, which is discarded in the traditional

BoW model [6], Spatial Pyramid Matching (SPM) [7] is used to capture the spatial information of different image patches. In addition, sparse coding is adopted to achieve a lower reconstruction error and to capture the salient properties of images [8] [9].

The publicly available MIVIA HEp-2 images dataset from the International Conference on Pattern Recognition (ICPR) 2012 contest¹ is used for the experiments. In this competition, 28 competitors submitted their methods and reported the classification accuracy at the cell and image levels as well as at two intensity levels, for six classes namely Centromere, Coarse speckled, Fine speckled, Cytoplasmatic, Homogeneous and Nucleolar [10]. Sample images of the six classes are shown in Fig. 2 and Fig. 3. Overall, the classification accuracies ranged from 20% to 68.7% on the same test dataset. However, the best obtained accuracy was still 4.6% lower than the specialist accuracy (73.3%), where cell images are classified manually by domain experts. There is thus still a clear gap between the submitted CAD techniques and the domain experts, which could be reduced by further improving the feature selection and classification technology used in the CAD systems.

The paper is organized as follows: the related works are examined next, our proposed method is described in Section II, the experimental results are presented and analyzed in Section III, and concluding remarks are given in Section IV.

A. Related Works

Cell classification is one of the hot topics in medical science and clinical practice, from both the research and application perspectives, e.g., mitosis detection and classification in breast cancer histopathology images [11] [12], detection and classification of white blood cells [13] [14], and identification and classification of chromosome aberrations in stem cells. Various features and classification methods are used in these studies. For feature representation, shape and texture features such as morphological, SIFT and HoG features are widely used. Feature extraction methods such as Linear Discriminant Analysis (LDA) [15], Principal Component Analysis (PCA) [16] and Independent Component Analysis (ICA) [17] are also widely used to reduce the dimensionality of the features and improve the classification performance. Additionally, a variety of supervised and unsupervised classifiers are implemented by

¹ http://nerone.diiie.unisa.it/hep2contest/index.shtml

Fig. 1. Illustration of the algorithm. In the training stage, the feature descriptors are calculated using grid SIFT, the dictionary is learned, and a multiclass SVM is trained. In the testing stage, the sparse feature vectors are calculated using the learned dictionary and the SVM model classifies the images.

using generative or discriminative models such as graphical modelling [18], Neural Networks [19] [20], Convolutional Neural Networks [12] and Support Vector Machines (SVM) [21] [22] [23].

Recently, attention toward automatic HEp-2 cell classification led to the first dataset of such cell images, which was provided in the ICPR 2012 contest, where 28 competitors submitted their methods and results. Among the submitted methods, Nosaka's method [21] achieved the best classification accuracy at 68.7%. In this method, an extension of the Local Binary Pattern (LBP), CoALBP, is used for the features, and a linear SVM is used for classification. The next best performance was by Kong and Li [24], who represent the images by the frequency histogram of textons on top of texture features and use a k-NN classifier with the χ2 distance to categorize the images. Ersoy et al. [25] used robust structure tensors and texture features; for classification, they used ShareBoost, which uses a single re-sampling distribution determined by the features whose training error is minimal. Ghosh et al. [22] used multiple features, namely HoG, SURF, Gray Level Co-occurrence Matrix (GLCM) and Region of Interest (RoI) texture features, and classified the images using an SVM. Most of the methods used morphological, LBP, HoG, Discrete Cosine Transform (DCT) and GLCM features, with different types of SVM (linear, and with RBF and χ2 kernels), k-NN, AdaBoost and Neural Network classifiers.

In this paper a sparse coding based approach is designed to weight the feature vectors of the learned dictionary. Because patch based feature selection such as grid SIFT is sparse by itself, this approach is well suited to the image classification task. Moreover, the computational time of the testing stage is decreased due to the sparse representation of the cell images in the decision making procedure.

II. METHOD

The framework of the proposed method is illustrated in Fig. 1 and consists of two stages. In the training stage, the dictionary is learned by using grid SIFT features sampled from all the training images. Sparse coding is applied to learn the dictionary and the sparse codes iteratively. To generate the feature vectors, max pooling is then performed on the histograms of the local descriptors at three different scales. Finally, a multiclass Support Vector Machine (SVM) is learned for image classification. In the testing stage, the same protocol is performed: the sparse codes are obtained by using the learned dictionary, and the classification is done by using the trained SVM model. Each stage is explained in this section.

A. Features

The features to be used should be scale and rotation invariant to represent the characteristics of the images. Therefore, SIFT descriptors are computed in a grid fashion: a grid mesh of equal spacing is laid over the image and a patch around each grid point is selected. The histogram of SIFT descriptors in each patch is calculated to represent the descriptors of that point. Grid SIFT is used to gather the information of the whole cell for classification, not only the descriptors at the corners of the image. For example, in the homogeneous cells the Harris corner detector does not work well, whereas grid SIFT finds the descriptors in a preordered fashion.
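As an illustration, the following is a minimal sketch of grid SIFT extraction using OpenCV; the grid spacing and patch size are assumptions, since the paper does not specify them. Grid points falling outside the provided cell mask can simply be discarded before computing the descriptors.

```python
import cv2

def grid_sift(gray, step=8, patch_size=16):
    """Compute SIFT descriptors on a regular grid instead of detected corners.

    `step` (grid spacing) and `patch_size` (descriptor support) are assumed
    values; the paper does not state them.
    """
    sift = cv2.SIFT_create()
    # Place one keypoint at every grid node; `size` sets the patch scale.
    keypoints = [cv2.KeyPoint(float(x), float(y), float(patch_size))
                 for y in range(step, gray.shape[0] - step, step)
                 for x in range(step, gray.shape[1] - step, step)]
    keypoints, descriptors = sift.compute(gray, keypoints)
    return descriptors  # shape: (number of grid points, 128)
```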

B. Descriptor Representation

One of the most promising approaches to classification problems is the Bag of Words (BoW) model. The idea was first applied to text documents and later extended to images [6]. The model learns a dictionary of features to represent the classes; the classes are then defined according to the coefficients of the dictionary words. To this end, the image is divided into overlapping regions and the features of each region are calculated. An unsupervised method is then used to cluster the feature space into a predefined number of clusters, which defines the dimension of the dictionary; each cluster is a word of the dictionary. Using these predefined words, we can represent the features of a test image.

One of the most popular unsupervised methods for dictionary learning is Vector Quantization (VQ) [26] using k-means. Let F = [F_1, F_2, ..., F_N]^T ∈ R^{N×D} be a set of N features in a D-dimensional space, and let D = [D_1, D_2, ..., D_K]^T be the K words (cluster centers) of our dictionary. In VQ we learn D by optimizing

\min_{D} \sum_{n=1}^{N} \min_{k=1,\dots,K} \left\| F_n - D_k \right\|^2    (1)

where \|\cdot\| denotes the L2-norm. In this problem, every point becomes a member of exactly one of the cluster centers. If we introduce the indicator matrix Z = [z_1, z_2, ..., z_N]^T, whose rows are the weights of the cluster centers, we can reformulate (1) as

\min_{Z,D} \sum_{n=1}^{N} \left\| F_n - z_n D \right\|^2 \quad \text{s.t.} \ \mathrm{Card}(z_n) = 1,\ |z_n| = 1,\ z_n \ge 0,\ \forall n    (2)

The cardinality constraint on z_n means that only one element of z_n can be nonzero; moreover, this element must be nonnegative and the L1-norm (the sum of all elements) of z_n must equal one. Because of these hard constraints on the cardinality and L1-norm of z_n, the dictionary learning is hard and the reconstruction error is adversely affected.
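For reference, the following is a minimal sketch of the VQ baseline of (1)-(2) using scikit-learn's k-means; the dictionary size K is taken from the later discussion, and the helper names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_vq_dictionary(features, K=1024, seed=0):
    """Learn the K cluster centers D of Eq. (1) with k-means."""
    return KMeans(n_clusters=K, n_init=4, random_state=seed).fit(features)

def hard_assignment(kmeans, features):
    """Hard VQ coding of Eq. (2): Card(z_n) = 1, |z_n| = 1, z_n >= 0."""
    labels = kmeans.predict(features)
    Z = np.zeros((features.shape[0], kmeans.n_clusters))
    Z[np.arange(features.shape[0]), labels] = 1.0  # one active word per point
    return Z
```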

To relax these hard constraints, the sparse coding method was proposed [8]:

\min_{Z,D} \sum_{n=1}^{N} \left\| F_n - z_n D \right\|^2 + \lambda |z_n| \quad \text{s.t.} \ \| D_k \| \le 1,\ \forall k = 1, 2, \dots, K    (3)

Here the L1-norm of the weights is moved into the objective function using a Lagrange multiplier, and only an L2-norm constraint on the dictionary remains. This constraint prevents all the elements of the dictionary from becoming zero, which would be a trivial minimum of the objective function. Because (3) is not convex, it is solved iteratively until the sparsest solution is obtained [27]: first D is randomly initialized and Z is calculated; then, with Z fixed, a new dictionary D is optimized, and the two steps alternate until convergence.

The dimensionality of the dictionary, K, plays an important role. If K is close to D (the dimensionality of the feature vectors), the dictionary is said to be critically complete, and a small change of the feature set results in a large change of the weights. Therefore, K should be larger than D, in which case the dictionary is said to be overcomplete, to ensure a proper representation of the input features. This is biologically inspired by the human visual cortex, which is estimated to be overcomplete by a factor of 500; for example, a 14×14 input patch is coded by about 100,000 neurons. Because the feature dimension of our method is 128 (SIFT descriptors), K is chosen to be 1024, which gives the best results.

To represent each image by the codes obtained from sparse coding, statistics of the codes such as histograms can be used. Here, max pooling of the codes is used, which has been shown to be more effective than mean pooling [28]. In addition, the pooled histograms are computed at different spatial scales to exploit the locality of the features.
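A minimal sketch of the sparse coding and spatial max pooling steps follows, using scikit-learn's dictionary learner as a stand-in for the alternating optimization of (3). The sparsity weight `alpha`, the pyramid levels (1×1, 2×2, 4×4) and the variable `sampled_descriptors` are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Alternating sparse coding / dictionary update for Eq. (3); K = 1024
# overcomplete atoms for 128-D SIFT descriptors. alpha (lambda) is assumed.
coder = MiniBatchDictionaryLearning(n_components=1024, alpha=0.15,
                                    transform_algorithm='lasso_lars',
                                    random_state=0)
coder.fit(sampled_descriptors)  # descriptors sampled from training images

def spm_max_pool(codes, xy, shape, levels=(1, 2, 4)):
    """Max-pool sparse codes over a 3-level spatial pyramid.

    `codes`: (N, K) sparse codes of one image's grid descriptors.
    `xy`: (N, 2) integer grid coordinates (x, y); `shape`: (height, width).
    """
    h, w = shape
    pooled = []
    for level in levels:
        row = xy[:, 1] * level // h   # pyramid cell index along y
        col = xy[:, 0] * level // w   # pyramid cell index along x
        for i in range(level):
            for j in range(level):
                cell = codes[(row == i) & (col == j)]
                pooled.append(cell.max(axis=0) if len(cell)
                              else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)  # (1 + 4 + 16) * K image feature vector
```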

C. Classifier

After calculating the feature vectors of the training images, a multiclass linear SVM with a one-versus-all strategy is used to train a classification model. Given a set of features x and their class labels y, we have {(x_i, y_i)}_{i=1}^{n}, y_i ∈ Y = {1, ..., L}, and y_i^c ∈ {+1, −1}, where n is the number of features, Y is the set of labels and L is the number of labels. Because we have L classes, the SVM learns L linear functions to classify each feature vector into its corresponding class. For a test image represented by x, the SVM predicts its class label by

y = \max_{c \in Y} \langle w_c, x \rangle    (4)

where \langle \cdot,\cdot \rangle is the inner product of two vectors. To realize the one-versus-all strategy for the multiclass classifier, the optimization problem (5) is solved for each class c:

\min_{w_c} J(w_c) = \| w_c \|^2 + C \sum_{i=1}^{n} \ell(w_c; y_i^c, x_i)    (5)

Here ℓ is the hinge loss; y_i^c = 1 if y_i belongs to class c, and y_i^c = −1 otherwise. The optimization problem (5) is solved using the conjugate gradient method. Finally, in the testing stage, the feature vector of each image is calculated and classified using the trained model.
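A minimal sketch of the one-versus-all linear SVM of (4)-(5) with scikit-learn is given below. Note that LinearSVC uses the squared hinge loss and a coordinate-descent style solver rather than the conjugate gradient method, and the value of C is an assumption.

```python
import numpy as np
from sklearn.svm import LinearSVC

# One-vs-rest linear SVM: one weight vector w_c per class, as in Eq. (5).
svm = LinearSVC(C=1.0, multi_class='ovr')
svm.fit(train_vectors, train_labels)  # pooled sparse-code feature vectors

# Prediction implements Eq. (4): y = argmax_c <w_c, x>.
scores = svm.decision_function(test_vectors)            # <w_c, x> per class
predicted = svm.classes_[np.argmax(scores, axis=1)]     # same as svm.predict()
```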

III. EXPERIMENTS AND RESULTS

A. Dataset

The MIVIA HEp-2 images dataset [10] is a publicly available dataset of Human Epithelial type 2 cells. It contains fluorescence images at both the image and cell levels. At the image level, there are 28 images acquired with a unit consisting of a fluorescence microscope (40-fold magnification) coupled with a 50W mercury vapor lamp and a digital camera. The camera has a CCD with square pixels of side 6.45 µm. The images have a resolution of 1388×1038 pixels and a color depth of 24 bits, and they are stored in an uncompressed format. Each image contains just one of the staining patterns. The staining patterns fall into six classes, namely homogeneous, fine speckled, coarse speckled, centromere, nucleolar and cytoplasmatic. Fig. 2 shows a sample of each pattern at the image level.

Fig. 2. Image level slides of the dataset containing the six classes: Centromere, Coarse speckled, Fine speckled, Homogeneous, Cytoplasmatic and Nucleolar.

To provide the ground truth, specialists manually segmented and annotated each cell at a workstation monitor and reported data on fluorescence intensity and staining pattern. Annotation was carried out in two phases: first, a biomedical engineer segmented the cells using a tablet PC; then, each image was reviewed and annotated by a medical doctor specialized in immunology. In total, 11 cells were dubious to the specialists; these cells were omitted from the dataset by the contest organizers to provide a reliable dataset.

Fig. 3 shows the cell level images. The 1st and 3rd rows show the positive and intermediate intensity levels of the cells for the different classes, and the 2nd and 4th rows show the heat maps of the corresponding images after the masks are applied. The heat map represents the intensity values in each image clearly. As can be seen in Fig. 3 and reported in [10], the classification of Centromere and Cytoplasmatic cells is easier than the others due to their distinctive textures. On the other hand, the textures of the three classes Coarse speckled, Fine speckled and Homogeneous are close to each other, which makes their classification challenging. The masks are also provided with the dataset, so the goal is purely a classification problem.

Fig. 3. Cell level slides. Each column shows one type of cell: Centromere, Coarse speckled, Cytoplasmatic, Fine speckled, Homogeneous and Nucleolar, respectively. The 1st and 3rd rows show the positive and intermediate intensity images of the six classes; the 2nd and 4th rows show the heat maps of the corresponding cells, multiplied by their masks.

In the dataset, the distribution of cell types is almost equal, and it is divided into training and test sets. In total there are 1455 cells in the 28 images, divided into 721 cell images for training and 734 for testing. TABLE I shows the number of cells in each image for the different patterns and intensities.

B. Cell and Image Level Classification

The proposed technique has been tested on the ICPR 2012 contest dataset. Two experiments are conducted, following the contest protocol. The first is on each cell image in a slide without looking at the other cells of that specific slide; this is called cell level classification. To ensure a fair comparison with the ICPR 2012 contest, we used the same training and test sets. The dictionary of our method is learned using the 721 cell images of the training set, and the sparse coding method codes each image over the words of the dictionary. A linear SVM classifier is trained to capture the training data characteristics. In the test stage, the sparse codes of the test images are calculated using the learned dictionary and the SVM model classifies the test images. The confusion matrix of the classification is shown in TABLE II. As TABLE II shows, the classification performance for the Centromere, Homogeneous and Nucleolar classes is better than that for Coarse speckled and Fine speckled.

TABLE II. CONFUSION MATRIX (%) FOR TEST IMAGES AT CELL LEVEL FOR SIX CLASSES.

      Ce    CS    Cy    FS     H     N
Ce  88.6   0.0   0.0   0.0   0.0  11.4
CS   6.9  62.4   4.0  20.8   5.0   1.0
Cy   0.0   2.0  98.0   0.0   0.0   0.0
FS   4.4  28.9   0.9  29.8  36.0   0.0
H    1.7   1.1   0.0  15.0  82.2   0.0
N    6.5   4.3   1.4   0.7  11.5  75.5
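The row-normalized percentages of TABLE II can be reproduced from the true and predicted cell labels as in the sketch below; `y_true` and `y_pred` are placeholders for the label arrays, and the class abbreviations match the table.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ['Ce', 'CS', 'Cy', 'FS', 'H', 'N']  # true classes as rows
cm = confusion_matrix(y_true, y_pred, labels=labels)
cm_percent = 100.0 * cm / cm.sum(axis=1, keepdims=True)  # each row sums to 100
print(np.round(cm_percent, 1))
```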

The second experiment is conducted at the image level. As stated in TABLE I, 14 images are used for each of the training and testing stages. All the cells in an image are of the same type, but the number of cells varies between images. Using the codebook learned in the cell level experiment, all the cells in each test image are classified, and the majority label of the cells in an image is taken as the class label of the whole image. The image level accuracy is 85.8%, and the confusion matrix is shown in TABLE III.

TABLE III. CONFUSION MATRIX (%) FOR TEST IMAGES AT IMAGE LEVEL.

       Ce     CS     Cy     FS      H      N
Ce  100.0    0.0    0.0    0.0    0.0    0.0
CS    0.0   66.7    0.0   33.3    0.0    0.0
Cy    0.0    0.0  100.0    0.0    0.0    0.0
FS    0.0   50.0    0.0   50.0    0.0    0.0
H     0.0    0.0    0.0    0.0  100.0    0.0
N     0.0    0.0    0.0    0.0    0.0  100.0
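The image level decision rule used above is a simple majority vote over the slide's cell level predictions; a minimal sketch:

```python
from collections import Counter

def image_label(cell_labels):
    """Assign the most frequent cell-level label to the whole image."""
    return Counter(cell_labels).most_common(1)[0][0]

# Example: image_label(['H', 'FS', 'H', 'H']) returns 'H'.
```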

These experiments are fully aligned with the test configuration of the ICPR 2012 competition, so the accuracies are comparable with the results stated in [10]. The accuracies are shown in Fig. 4. As Fig. 4 shows, the proposed technique clearly outperforms all participant methods at the cell level and obtains an accuracy of 72.8%, which is very close to the specialist accuracy of 73.3% achieved by domain experts. This specialist is different from the one who provided the ground truth of the dataset. At the image level, the proposed technique obtains an accuracy of 85.8%, which is almost the same as the accuracy obtained by Nosaka's method [21] and the specialist.

TABLE I. THE NUMBER OF IMAGES (AND CELLS) FOR INTERMEDIATE AND POSITIVE INTENSITIES AND DIFFERENT CELL TYPES. IN TOTAL, 14 IMAGES (721 CELLS) ARE USED FOR TRAINING AND 14 IMAGES (734 CELLS) FOR TESTING, OUT OF 28 IMAGES (1455 CELLS) OVERALL.

                       Training set              Test set                  Total
                  Intermediate  Positive   Intermediate  Positive   Intermediate  Positive    Overall
Centromere           2 (119)     1 (89)       1 (65)      2 (84)       3 (184)     3 (173)    6 (357)
Coarse speckled      1 (41)      1 (68)       1 (33)      2 (68)       2 (74)      3 (136)    5 (210)
Cytoplasmatic        1 (24)      1 (34)       1 (13)      1 (38)       2 (37)      2 (72)     4 (109)
Fine speckled        1 (48)      1 (46)       1 (63)      1 (51)       2 (111)     2 (97)     4 (208)
Homogeneous          1 (47)      2 (103)      1 (61)      1 (119)      2 (108)     3 (222)    5 (330)
Nucleolar            1 (46)      1 (56)       1 (66)      1 (73)       2 (112)     2 (129)    4 (241)
Total                7 (325)     7 (396)      6 (301)     8 (433)     13 (626)    15 (829)   28 (1455)
                         14 (721)                  14 (734)                  28 (1455)

Fig. 4. The accuracy at the cell and image levels for all methods submitted to the ICPR 2012 contest. Our cell level accuracy (second column) is close to that of the human expert.

Fig. 5. Accuracy on the test set for intermediate and positive intensity images. Our cell level accuracy (second column) is as close as the human expert's, and for intermediate and positive intensity images our accuracy is 13% and 2% higher than the human expert, respectively.

C. Cell Classification Using Intensity Levels

In this experiment, we use prior knowledge of the intensity level of the images. Following the ICPR contest protocol, the dataset is divided into two sets of intermediate and positive intensity images, which contain 626 and 829 cell images respectively, as listed in TABLE I. In the intermediate intensity set, 325 images are used for training and 301 for testing; similarly, 396 and 433 images are used for training and testing at the positive intensity level. A separate dictionary is learned and tested for each intensity group.

The accuracy at the intermediate intensity level is 62.46%, which is almost 13% higher than the human expert accuracy (49.5%) and almost 5% higher than the best result reported in the ICPR contest. For the positive intensity images, the human expert accuracy is 79.5% and we achieve 81.5%, an improvement of 2%. Overall, the cell level accuracy obtained by considering the intensity level of the cell images is 73.57%, which is slightly better than the human expert result. Fig. 5 shows the accuracies at the two intensity levels for all the methods that were submitted to the ICPR contest.
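A sketch of the intensity level protocol under stated assumptions: the per-sample dict layout is illustrative, and a separate dictionary, pooling and SVM pipeline (as in Section II) would then be fit on each returned group.

```python
from collections import defaultdict

def split_by_intensity(samples):
    """Group samples by their annotated intensity level.

    Each sample is assumed to be a dict with 'features', 'label' and
    'intensity' keys; this layout is illustrative, not the dataset's format.
    """
    groups = defaultdict(list)
    for sample in samples:
        groups[sample['intensity']].append(sample)
    return groups  # e.g. {'intermediate': [...], 'positive': [...]}
```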

IV. CONCLUSION

In this paper an automatic CAD system is proposed to classify HEp-2 cell images. The proposed method uses a sparse coding representation of SIFT features on the cell images. Because of the sparse nature of the patches in the grid SIFT calculation and the low reconstruction error of sparse coding, this method obtains superior performance in comparison with the vector quantization method. After sparse coding, max pooling of the scaled images at three scales is used. For classification, a multiclass linear SVM is used, trained with a one-versus-all strategy on the training images. The proposed method is evaluated on the publicly available MIVIA HEp-2 images dataset at both the cell and image levels, and classification using the two intensity levels is also examined.

At the cell level, the accuracy achieved is 72.8%, which is almost the same as the human expert. This experiment is done by looking at each cell image without considering the other cells in that specific specimen. Additionally, at the image level, all the cells in a specimen are considered to label the image as one of the six classes; the accuracy achieved at the image level is 85.8%, the same as the result of the human expert.

By utilizing prior knowledge of the intensity level of the cell images, the accuracy achieved at the intermediate intensity level (62.46%) is almost 13% and 5% higher than the human expert and the best contest result, respectively. Additionally, for the positive intensity cell images, an accuracy of 81.5% is achieved, which is almost 2% higher than the human expert accuracy. To further improve the classification results on this dataset, we suggest working on discriminative features that represent the characteristics of the Fine speckled and Coarse speckled classes and distinguish them from the Homogeneous class, because the experiments show that the classification errors for the Coarse speckled and especially the Fine speckled classes are the largest.

REFERENCES

[1] W. B. Storch, Immunofluorescence in Clinical Immunology: A Primer and Atlas. Springer, 2000.
[2] J. M. González-Buitrago and C. González, "Present and future of the autoimmunity laboratory," Clinica Chimica Acta, vol. 365, no. 1, pp. 50–57, 2006.
[3] T. Joachims, Text Categorization with Support Vector Machines: Learning with Many Relevant Features. Springer, 1998.
[4] J. Winn, A. Criminisi, and T. Minka, "Object categorization by learned universal visual dictionary," in Proc. Tenth IEEE International Conference on Computer Vision (ICCV), vol. 2, 2005, pp. 1800–1807.
[5] E. Nowak, F. Jurie, and B. Triggs, "Sampling strategies for bag-of-features image classification," in Computer Vision–ECCV 2006. Springer, 2006, pp. 490–503.
[6] Y. Zhang, R. Jin, and Z.-H. Zhou, "Understanding bag-of-words model: a statistical framework," International Journal of Machine Learning and Cybernetics, vol. 1, no. 1-4, pp. 43–52, 2010.
[7] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2006, pp. 2169–2178.
[8] J. Yang, K. Yu, Y. Gong, and T. Huang, "Linear spatial pyramid matching using sparse coding for image classification," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 1794–1801.
[9] D. Zonoobi and A. A. Kassim, "On the reconstruction of sequences of sparse signals–the weighted-CS," Journal of Visual Communication and Image Representation, vol. 24, no. 2, pp. 196–202, 2013.
[10] P. Foggia, G. Percannella, P. Soda, and M. Vento, "Benchmarking HEp-2 cells classification methods," IEEE Transactions on Medical Imaging, vol. 32, no. 10, pp. 1878–1889, 2013.
[11] R. Ludovic, R. Daniel, L. Nicolas, K. Maria, I. Humayun, K. Jacques, C. Frédérique, G. Catherine, L. Gilles, N. Metin et al., "Mitosis detection in breast cancer histological images: an ICPR 2012 contest," Journal of Pathology Informatics, vol. 4, no. 1, p. 8, 2013.
[12] H. Irshad, S. Jalali, L. Roux, D. Racoceanu, L. J. Hwee, G. Le Naour, and F. Capron, "Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach," Journal of Pathology Informatics, vol. 4, no. Suppl, 2013.
[13] L. H. Lee, A. Mansoor, B. Wood, H. Nelson, D. Higa, and C. Naugler, "Performance of CellaVision DM96 in leukocyte classification," Journal of Pathology Informatics, vol. 4, 2013.
[14] P. Hiremath, P. Bannigidad, and S. Geeta, "Automated identification and classification of white blood cells (leukocytes) in digital microscopic images," IJCA Special Issue on "Recent Trends in Image Processing and Pattern Recognition" RTIPPR, pp. 59–63, 2010.
[15] J. W. Chan, D. K. Lieu, T. Huser, and R. A. Li, "Label-free separation of human embryonic stem cells and their cardiac derivatives using Raman spectroscopy," Analytical Chemistry, vol. 81, no. 4, pp. 1324–1331, 2009.
[16] M. E. Plissiti and C. Nikou, "Cervical cell classification based exclusively on nucleus features," in Image Analysis and Recognition. Springer, 2012, pp. 483–490.
[17] Y. Yang, A. Wiliem, A. Alavi, and P. Hobson, "Classification of human epithelial type 2 cell images using independent component analysis," in Proc. IEEE International Conference on Image Processing (ICIP), 2013.
[18] B. Misselwitz, G. Strittmatter, B. Periaswamy, M. C. Schlumberger, S. Rout, P. Horvath, K. Kozak, and W.-D. Hardt, "Enhanced CellClassifier: a multi-class classification tool for microscopy images," BMC Bioinformatics, vol. 11, no. 1, p. 30, 2010.
[19] T. Kiyan, "Breast cancer diagnosis using statistical neural networks," IU-Journal of Electrical & Electronics Engineering, vol. 4, no. 2, 2011.
[20] T. W. Nattkemper, H. J. Ritter, and W. Schubert, "A neural classifier enabling high-throughput topological analysis of lymphocytes in tissue sections," IEEE Transactions on Information Technology in Biomedicine, vol. 5, no. 2, pp. 138–149, 2001.
[21] R. Nosaka and K. Fukui, "HEp-2 cell classification using rotation invariant co-occurrence among local binary patterns," Pattern Recognition, 2013.
[22] S. Ghosh and V. Chaudhary, "Feature analysis for automatic classification of HEp-2 florescence patterns: Computer-aided diagnosis of auto-immune diseases," in Proc. 21st International Conference on Pattern Recognition (ICPR), 2012, pp. 174–177.
[23] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[24] X. Kong, K. Li, J. Cao, Q. Yang, and L. Wenyin, "HEp-2 cell pattern classification with discriminative dictionary learning," Pattern Recognition, 2013.
[25] I. Ersoy, F. Bunyak, J. Peng, and K. Palaniappan, "HEp-2 cell classification in IIF images using ShareBoost," in Proc. 21st International Conference on Pattern Recognition (ICPR), 2012, pp. 3362–3365.
[26] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, "Lost in quantization: Improving particular object retrieval in large scale image databases," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
[27] D. Zonoobi, A. A. Kassim, and Y. V. Venkatesh, "Gini index as sparsity measure for signal reconstruction from compressive samples," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 5, pp. 927–932, 2011.
[28] Y. Boureau, N. Le Roux, F. Bach, J. Ponce, and Y. LeCun, "Ask the locals: multi-way local pooling for image recognition," in Proc. IEEE International Conference on Computer Vision (ICCV), 2011, pp. 2651–2658.
