Thinned ECOC Decomposition for Gene Expression Based Cancer Classification

Nima Hatami Department of Electrical Engineering, Shahed University, Tehran, Iran [email protected]

Abstract

Cancer classification using gene expression data is of great importance in bioinformatics and is known to hold keys to addressing the fundamental problems of cancer diagnosis and drug discovery. Error correcting output coding (ECOC) is a method for designing Multiple Classifier Systems (MCS) that decomposes a multi-class problem into a set of binary sub-problems. A key issue in the design of any ECOC ensemble is defining an optimal code matrix with maximum discrimination power and a minimum number of columns. This paper introduces a heuristic method for the application dependent design of an optimal ECOC matrix, based on the thinning algorithm used in ensemble design. The key idea of the proposed method, called Thinned ECOC, is to successively remove redundant and unnecessary columns of any initial code matrix, based on a metric defined for each column. Experimental results on two real datasets show the robustness of Thinned ECOC in comparison with other existing code generation methods.

Keywords: Multiple Classifier Systems (MCS), Thinning, Diversity, Error Correcting Output Codes (ECOC), Support Vector Machine, Cancer classification, Gene expression data.

1. Introduction

In pattern recognition systems the ultimate goal is to achieve the best possible classification performance for the task at hand. Since no single classifier handles all classification problems well, combining multiple classifiers has been proposed. It has been shown that combining independent classifiers of acceptable accuracy achieves better performance, so we try to increase diversity among accurate base classifiers [1, 2]. Machine learning offers several approaches to generating diverse classifiers, such as boosting [3], mixture of experts [4] and ECOC [5], each with its own strategy. In boosting, the distribution of the input samples is changed for each classifier so that it concentrates on points that are difficult to learn. In mixture of experts, each expert is specialized on part of the input space and is responsible for its own task. ECOC manipulates the class labels and has achieved promising results on both synthetic and real datasets [6, 7, 16, 28]. In this approach, we first define a discrete decomposition matrix (code matrix) for the multiclass problem at hand. The problem is then decomposed into a number of binary sub-problems (dichotomies) according to the sequences of symbols in the columns of the code matrix. After training one binary classifier on each dichotomy and testing all of them on an incoming sample, a binary output vector is created, and the label of the class whose codeword has the smallest distance to this vector is assigned. Since the performance of the decomposition stage depends strongly on the structure of the code matrix, the problem of generating an optimal code has attracted attention, and various code generation methods have been proposed in the literature [5, 8]. The algebraic BCH codes [9], the dense and sparse random methods, pairwise coupling (1 vs. 1) and 1 vs. all are well-known code generation methods with good reported results [5, 6, 8, 10]. In almost all of these methods, the goal is the greatest possible distance between any pair of codewords, for more error correcting capability, together with low correlation among matrix columns (binary sub-problems), for independence among the dichotomies. All these coding strategies are fixed in the ECOC design step and defined independently of the problem domain and the classification performance; in fact, very little attention has been paid in the literature to the coding process of the ECOC matrix. The first approach to ECOC coding design was proposed by Utschick et al. [11], who optimize a maximum-likelihood objective function by means of the expectation-maximization algorithm in order to improve the process of binary coding. Crammer et al. [12] also reported improvements in the design of ECOC codes; as an alternative, they proposed a heuristic search for the optimal coding matrix, obtained by relaxing the representation of the code matrix from discrete to continuous values. Pujol et al. [13] proposed embedding discriminant tree structures derived from the problem domain in the ECOC framework. With this method, called Discriminant ECOC, a multiclass problem is decomposed into C-1 binary problems, yielding a compact discrete coding matrix with a small but fixed number of dichotomizers and very high accuracy. In [14], Pujol et al. proposed a method that improves the performance of any initial output coding by extending it in a sub-optimal way; their strategy creates new dichotomizers by minimizing the confusion matrix among classes, guided by a validation subset. Despite this progress in the design of optimal ECOC, an open issue is how to design the code matrix so that high discrimination power is balanced against minimum code length. The performance of ECOC ensembles has previously been investigated on many multi-class problems such as face recognition, cell detection in bright field images and OCR [15-17].

The bioinformatics problem of classifying different tumor types is of great importance in cancer diagnosis and drug discovery. Accurate prediction of different tumor types has great value in providing better treatment and minimizing toxicity for patients. However, most previous cancer classification studies are clinical-based and have limited diagnostic ability. Cancer classification using gene expression data is known to contain the keys for addressing the fundamental problems relating to cancer diagnosis and drug discovery. The recent advent of the DNA microarray technique has made simultaneous monitoring of thousands of gene expressions possible, and with this abundance of gene expression data, researchers have started to explore the possibilities of cancer classification based on it. Such data raise the issue of "high dimension, low sample size" (referred to as the HDLSS problem in statistics) [18, 19]. Quite a number of methods have been proposed in recent years with promising results, but there are still many issues that need to be addressed and understood.

In this paper, we propose a new method for the automatic design of an application dependent optimal code matrix, i.e. an ECOC with high discrimination power and appropriate code length for the problem at hand. In this approach we take advantage of basic concepts of ensemble building, such as classifier diversity, and of the thinning method, for the design of an optimal code matrix. Thinning is a framework that measures diversity and accuracy and then uses them explicitly in the process of building the ensemble; the main idea of the thinning algorithm is to find which classifier is most often incorrect on the points the ensemble misclassifies and remove it from the ensemble. Similarly, we first produce an initial ECOC matrix; the proposed method, called Thinned ECOC, is then used to successively remove its redundant and unnecessary columns based on a metric defined for each column.

The paper is organized as follows. Section 2 provides a brief introduction to error correcting output codes. Section 3 first describes the existing thinning method for building classifier ensembles and then introduces the proposed method, which uses the thinning algorithm for the automatic design of an optimal ECOC. In Section 4, we apply the proposed method to the classification of tissue types based on gene expression

2. Error-Correcting Output Codes

The main idea of ECOC is to create a codeword for each of the $N_c$ classes. Arranging the codewords as rows of a matrix, we define a "code matrix" $M \in \{-1, 0, +1\}^{N_c \times L}$, where $L$ is the code length. From the learning point of view, $M$ specifies $N_c$ classes to train $L$ classifiers (dichotomizers), $f_1, \ldots, f_L$. The classifier $f_l$ is trained according to the column $M(\cdot, l)$: if $M(N, l) = +1$, all examples of class $N$ are positive; if $M(N, l) = -1$, all examples of class $N$ are negative; and if $M(N, l) = 0$, none of the examples of class $N$ participates in the training of classifier $f_l$. Let $y = [y_1, \ldots, y_L]$, $y_l \in \{-1, +1\}$, be the output vector of the $L$ classifiers in the ensemble for a given input $x$. In the decoding step, the class output is selected that maximizes some similarity measure $S$ between $y$ and row $M(N, \cdot)$:

Class label $= \arg\max_N \, S(y, M(N, \cdot))$   (1)

Concerning the similarity measures, two of the most common techniques are Hamming decoding, equation (2), used when the classifier outputs are hard decisions, and margin decoding, equation (3), used when the outputs are soft:

$S_H(y, M(N, \cdot)) = 0.5 \times \sum_{l=1}^{L} \left(1 + y_l \, M(N, l)\right)$   (2)

$S_M(y, M(N, \cdot)) = \sum_{l=1}^{L} y_l \, M(N, l)$   (3)
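For concreteness, the decoding step of equations (1)-(3) can be sketched in a few lines of Python/NumPy; this is a minimal sketch, and the function and variable names are ours, not from any library:

import numpy as np

def hamming_decode(y, M):
    # Eq. (2): 0.5 * sum_l (1 + y_l * M(N, l)) counts agreeing positions
    # between hard outputs y in {-1,+1}^L and each codeword (row) of M.
    return 0.5 * np.sum(1 + y * M, axis=1)

def margin_decode(y, M):
    # Eq. (3): margin similarity for soft (real-valued) outputs.
    return np.sum(y * M, axis=1)

def classify(y, M, similarity=hamming_decode):
    # Eq. (1): pick the class whose codeword is most similar to y.
    return int(np.argmax(similarity(np.asarray(y), M)))

# Toy 4-class one-versus-all code matrix (rows = codewords).
M = np.where(np.eye(4, dtype=int) == 1, 1, -1)
print(classify([1, -1, -1, -1], M))  # -> 0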

The matrix construction stage codifies the different partitions of classes that are considered by each dichotomizer. Most of the popular coding strategies to date are based on pre-designed, problem-independent codeword construction satisfying the requirement of high separability between rows and columns. These strategies include: one-versus-all, where each classifier is trained to discriminate a given class from the remaining classes, using $N_c$ dichotomizers; random techniques, which can be divided into the dense random strategy, consisting of a binary matrix with high distance between rows and an estimated length of $10 \log_2(N_c)$ bits per code, and the sparse random strategy, which includes the ternary symbol and whose estimated optimal length is about $15 \log_2(N_c)$; and one-versus-one, one of the most well-known coding strategies, with $N_c(N_c - 1)/2$ dichotomizers covering all the combinations of pairs of classes [20]. Finally, BCH codes are based on algebraic techniques from Galois field theory; while their implementation is fairly complex, they have advantages such as generating ECOC codewords separated by a minimum, configurable Hamming distance, and they scale well to hundreds or thousands of categories. All these codification strategies are defined independently of the data set and satisfy two properties (a short generation sketch follows the list):

• Row separation. In order to avoid misclassifications, the codewords should be as far apart from one another as possible; we can then still recover the correct label for $x$ even if several classifiers guess wrongly. A measure of the quality of an error-correcting code is the minimum Hamming distance, $H_c$, between any pair of codewords. The number of errors that the code is guaranteed to correct is $\lfloor (H_c - 1)/2 \rfloor$.

• Column separation. It is important that the dichotomies given as the assignments to the ensemble members are as different from each other as possible too. This will drive the ensemble towards low correlation between the classification errors (high diversity) which will hopefully increase the ensemble accuracy [5].
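The standard strategies above are straightforward to generate programmatically. The sketch below, under the same $\{-1, 0, +1\}$ convention, builds one-versus-all, one-versus-one and dense random matrices and measures the row separation $H_c$; the helper names are ours:

import itertools
import numpy as np

def one_vs_all(n_classes):
    # Column l: class l positive (+1), all other classes negative (-1).
    return np.where(np.eye(n_classes, dtype=int) == 1, 1, -1)

def one_vs_one(n_classes):
    # One ternary column per pair of classes; 0 means "not used".
    pairs = list(itertools.combinations(range(n_classes), 2))
    M = np.zeros((n_classes, len(pairs)), dtype=int)
    for l, (i, j) in enumerate(pairs):
        M[i, l], M[j, l] = 1, -1
    return M

def dense_random(n_classes, seed=0):
    # Dense random strategy with the suggested 10*log2(Nc) columns.
    rng = np.random.default_rng(seed)
    L = int(np.ceil(10 * np.log2(n_classes)))
    return rng.choice([-1, 1], size=(n_classes, L))

def min_row_hamming(M):
    # Minimum Hamming distance H_c between any pair of codewords (rows).
    return min(np.sum(a != b) for a, b in itertools.combinations(M, 2))

M = dense_random(9)
Hc = min_row_hamming(M)
print(M.shape, Hc, (Hc - 1) // 2)  # columns, H_c, correctable errors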

3. Thinning ensembles for the design of an optimal code

3.1 Thinning algorithm for building the ensemble

Common intuition suggests that the classifiers in an ensemble should be as accurate as possible and should not make coincident errors. This simple statement explains the importance of accuracy and diversity among the members of a multiple classifier system, and methods for building ensembles that induce accuracy and diversity in an intuitive manner are very successful. Thinning is a method for designing an ensemble with a high recognition rate and minimum size, based on both the accuracy and the diversity of the base classifiers. The original idea of thinning is to improve ensemble accuracy by removing classifiers that cause misclassifications. In [21] an ensemble is thinned by attempting to include the most diverse and accurate classifiers: subsets of similar classifiers (those that make similar errors) are created, and the most accurate classifier is chosen from each subset. In [22], the McNemar test was used to determine whether to include a decision tree in an ensemble; this pre-thinning allowed the ensemble to be kept to a smaller size. Banfield et al. [23] introduced two new methods for removing classifiers from an initial ensemble based on diversity calculations. In accuracy-in-diversity (AID) thinning, the classifiers that are most often incorrect on examples misclassified by many classifiers are removed from the ensemble. The other method, called the concurrency thinning algorithm, is based on the correctness of both the ensemble and the classifier with regard to a thinning set: a classifier is rewarded for a correct decision, rewarded more for a correct decision when the ensemble is incorrect, and penalized when both the ensemble and the classifier are incorrect. The procedure starts with all classifiers, and the desired ensemble size is reached by removing one classifier at each step. The concurrency thinning algorithm is shown in Fig. 1.

For each classifier C_i
    For each sample
        If Ensemble Incorrect and Classifier Incorrect
            Metric_i = Metric_i - 2
        If Ensemble Incorrect and Classifier Correct
            Metric_i = Metric_i + 2
        If Ensemble Correct and Classifier Correct
            Metric_i = Metric_i + 1
    End for
End for
Remove C_i with the lowest Metric_i

Fig. 1 The concurrency thinning algorithm.
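In Python, the scoring loop of Fig. 1 can be transcribed directly; this is a minimal sketch, assuming boolean correctness vectors have already been computed on the thinning set:

def concurrency_metric(ensemble_correct, classifier_correct):
    # Concurrency score of one classifier over the thinning set (Fig. 1).
    # Both arguments are sequences of booleans, one entry per sample.
    metric = 0
    for ens_ok, clf_ok in zip(ensemble_correct, classifier_correct):
        if not ens_ok and not clf_ok:
            metric -= 2   # penalize failing together with the ensemble
        elif not ens_ok and clf_ok:
            metric += 2   # big reward: correct while the ensemble is wrong
        elif ens_ok and clf_ok:
            metric += 1   # small reward: correct with a correct ensemble
    return metric

# The classifier with the lowest metric is removed, e.g.:
# worst = min(range(n_classifiers),
#             key=lambda i: concurrency_metric(ens_ok, clf_ok[i]))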

Note that all of the thinning algorithms will tend to overfit the thinning set, negatively affecting the generalization accuracy of the ensemble; a potential remedy is to use a separate thinning set to determine the optimal ensemble configuration for the thinning algorithms. As shown in [23], each thinning algorithm scores a higher Borda count and a larger accuracy increase than random assembly, and concurrency thinning shows this particularly well, beating the other algorithms in every category. We therefore use the concurrency thinning algorithm in our proposed method to create a Thinned ECOC matrix with the minimum possible number of columns and maximum discrimination power, which leads to the most efficient and effective MCS.

3.2 Thinned ECOC

In this subsection, we propose Thinned ECOC, which designs the code matrix for ECOC by choosing the codewords using the intrinsic information in the training data. The key idea of Thinned ECOC is to selectively remove columns from the initial code matrix based on a metric we define for each column. This measure determines how likely we are to remove a column, and its corresponding base classifier in the MCS, from the initial ECOC classifier. The summarized steps of the Thinned ECOC approach are shown below; note that the process iterates until the minimum size of the desired optimal matrix is reached.

Thinned-ECOC algorithm
Given an initial code matrix M_init and the minimum size Θ of the desired optimal matrix:
Train the base classifiers using M_init and build an initial MCS
While size(M_init) ≥ Θ:
    1. Calculate Metric_i for each column of M_init
    2. Find the column with the lowest metric on the thinning set
    3. Remove this column from M_init
    4. Calculate the accuracy of the new M_init on the thinning set
    5. If the accuracy of the new MCS improved, M_opt = M_init
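A compact sketch of this loop is given below; column_metric and mcs_accuracy are assumed placeholders for the retraining and evaluation machinery, not part of the original formulation:

import numpy as np

def thin_ecoc(M_init, column_metric, mcs_accuracy, theta):
    # Sketch of the Thinned-ECOC loop (names ours): repeatedly drop the
    # column with the lowest concurrency metric, keeping the best matrix
    # seen so far.
    #   column_metric(M): per-column concurrency metrics (thinning set)
    #   mcs_accuracy(M):  ensemble accuracy on the thinning set
    #   theta:            minimum allowed number of columns
    M, M_opt = M_init.copy(), M_init.copy()
    best_acc = mcs_accuracy(M)
    while M.shape[1] > theta:                      # stop at size Θ
        worst = int(np.argmin(column_metric(M)))   # steps 1-2
        M = np.delete(M, worst, axis=1)            # step 3
        acc = mcs_accuracy(M)                      # step 4
        if acc > best_acc:                         # step 5
            M_opt, best_acc = M.copy(), acc
    return M_opt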

Since the generated Thinned ECOC is a subset of the given initial code matrix, the length of this matrix must be large enough to provide a sufficiently rich search region in column space. When the number of classes is relatively high, exhaustive search optimization is computationally unfeasible; for instance, the exhaustive code matrix for a 15-class problem has as many as $2^{14} - 1$ columns. In this case, we propose creating the initial matrix as a compound matrix, composed of several popular code matrices unified into one. Fig. 2 shows the initial compound matrix for a 4-class problem, which is the combination of the 1 vs. 1, 1 vs. all and BCH-7 code matrices.

Fig. 2. The initial code matrix is the combination of the 1 vs. 1, 1 vs. all and BCH-7 code matrices.
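For illustration, such a compound matrix can be assembled by stacking the individual code matrices column-wise. The sketch below reuses the one_vs_one and one_vs_all helpers from Section 2; the 7-bit rows are illustrative codewords standing in for the exact BCH-7 block of Fig. 2:

import numpy as np

M_1v1 = one_vs_one(4)    # 6 ternary columns (helper from Section 2)
M_1vA = one_vs_all(4)    # 4 binary columns (helper from Section 2)
# Illustrative length-7 codewords (pairwise Hamming distance 4),
# standing in for the BCH-7 block; not the exact published matrix.
bits = np.array([[0, 0, 0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 1, 0, 0],
                 [1, 0, 1, 1, 0, 1, 0],
                 [1, 1, 0, 1, 0, 0, 1]])
M_bch = 2 * bits - 1     # map {0, 1} -> {-1, +1}
M_init = np.hstack([M_1v1, M_1vA, M_bch])
print(M_init.shape)      # (4, 17): 6 + 4 + 7 columns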

Fig. 3. The flow of the training, thinning and testing algorithms for Thinned ECOC.

Fig. 3 describes the flow of calculating the metric and selecting the base learners to remove, which is the core of the Thinned ECOC algorithm. Thinned ECOC is thus a problem-dependent approach to designing the code matrix: instead of using a preset matrix, it adaptively generates the code matrix based on the structure of the given training and thinning data.

4. Experimental Results

To evaluate the usefulness of the proposed approach, we carried out experiments on two multi-class data sets of gene expression profiles that are popular in the research literature: NCI [24, 25] and Lymphoma [26]. The details of these data sets are summarized in Table 1. We note that the number of tissue samples per class is generally small (e.g. <10 for the NCI data) and unevenly distributed (e.g. from 46 down to 2 in the Lymphoma data). This, together with the large number of classes (e.g., 9 for the Lymphoma data), makes the classification task more complex.

       NCI data set               Lymphoma data set
Class  Class name   # samples    Class name                      # samples
1      NSCLC        9            Diffuse large B cell lymphoma   46
2      Renal        9            Chronic lymphocytic leukemia    11
3      Breast       8            Activated blood B               10
4      Melanoma     8            Follicular lymphoma             9
5      Colon        7            Resting/activated T             6
6      Leukemia     6            Transformed cell lines          6
7      Ovarian      6            Resting blood B                 4
8      CNS          5            Germinal center B               2
9      Prostate     2            Lymph node/tonsil               2

Total # of samples  60                                           96
Dimensionality      9703                                         4026

Table 1. Multi-class gene expression data sets for different tissue types.

As shown in Table 1, these data sets originally contain expression levels of thousands of genes, i.e., the dimensionality of the data is very high, which raises the issue of "high dimension, low sample size". We therefore first applied a dimension reduction method, Principal Component Analysis (PCA) [27], to reduce the dimensionality of the data. We assess classification performance using leave-one-out cross-validation (LOOCV), since CV accuracy provides a more realistic assessment of how well classifiers generalize to unseen data. For presentation clarity, we report the LOOCV error rates in Tables 2 and 3 and compare Thinned ECOC with five common decomposition methods: one-vs-all, one-vs-one, BCH, dense random and sparse random. Tables 2 and 3 show the error rates of the different decomposition strategies with Support Vector Machine (SVM), Multi-Layer Perceptron (MLP) and decision tree base classifiers on the NCI and Lymphoma data sets, respectively. All experiments were carried out with two different input feature sizes, 30 and 55.
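A minimal sketch of this evaluation protocol with scikit-learn is shown below; OutputCodeClassifier implements a random ECOC and stands in here for Thinned ECOC, and the parameter values are illustrative:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut
from sklearn.multiclass import OutputCodeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def loocv_error_rate(X, y, n_components=30):
    # LOOCV error rate (%) for a PCA + ECOC-SVM pipeline.
    # X: (n_samples, n_genes) expression matrix; y: tissue labels.
    # PCA is refit inside each fold so no test information leaks.
    errors = 0
    for train, test in LeaveOneOut().split(X):
        model = make_pipeline(
            PCA(n_components=n_components),
            OutputCodeClassifier(SVC(), code_size=2, random_state=0))
        model.fit(X[train], y[train])
        errors += int(model.predict(X[test])[0] != y[test][0])
    return 100.0 * errors / len(y)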

Table 2. Error rate (%) on the NCI data set (Dense = dense random, Sparse = sparse random).

30 features:
Code matrix               1 vs 1   1 vs all   BCH    Dense   Sparse   Thinned ECOC
ECOC with SVM             18.3     16.0       22.7   17.7    18.0     14.1
ECOC with MLP             23.8     18.6       17.2   18.1    14.2     13.3
ECOC with Decision tree   14.8     18.0       14.1   15.1    14.5     11.3

55 features:
Code matrix               1 vs 1   1 vs all   BCH    Dense   Sparse   Thinned ECOC
ECOC with SVM             12.2     18.1       16.3   15.5    14.1     10.0
ECOC with MLP             16.1     24.3       18.4   17.6    18.2     15.7
ECOC with Decision tree   12.4     19.2       14.7   14.9    14.0     11.1

Table 3. Error rate (%) on the Lymphoma data set (Dense = dense random, Sparse = sparse random).

30 features:
Code matrix               1 vs 1   1 vs all   BCH    Dense   Sparse   Thinned ECOC
ECOC with SVM             8.2      14.1       10.0   8.5     10.0     7.1
ECOC with MLP             8.7      15.7       10.8   9.1     10.1     7.5
ECOC with Decision tree   8.9      14.9       10.5   9.3     10.7     7.9

55 features:
Code matrix               1 vs 1   1 vs all   BCH    Dense   Sparse   Thinned ECOC
ECOC with SVM             6.0      10.3       7.3    6.0     7.1      5.9
ECOC with MLP             7.7      11.1       7.9    7.2     8.0      6.7
ECOC with Decision tree   7.3      11.9       7.9    6.9     7.5      6.9

As shown in Tables 2 and 3, Thinned ECOC outperforms the other code generation algorithms. Another observation is that, for these two databases, using SVM as the base classifier achieves higher accuracy than the MLP and decision tree algorithms. Note that, unlike the other coding methods, the length and structure of the Thinned ECOC code matrix are variable and may differ from run to run of the algorithm.

5. Conclusion

In this paper, Thinned ECOC was introduced as a heuristic method for the application dependent design of error correcting output codes. Thinned ECOC brings the thinning algorithm for ensemble building to the problem of designing an optimal code matrix. As a result, a compact matrix with high discrimination power is obtained; these properties lead to an ECOC ensemble with high accuracy and a minimum number of base classifiers. The Thinned ECOC algorithm has been applied successfully to the problem of cancer classification using gene expression data. From the different experiments, we observe that the building process of the ECOC matrix is of great importance. Finally, we conclude that the Thinned ECOC design is a very promising alternative to other ECOC methods, frequently outperforming most of them.

References

[1] K.M. Ali and M.J. Pazzani, "On the link between error correlation and error reduction in decision tree ensembles," Technical Report 95-38, ICS-UCI, 1995.
[2] N. Hatami and R. Ebrahimpour, "Combining multiple classifiers: diversify with boosting and combining by stacking," IJCSNS International Journal of Computer Science and Network Security, 7(1), 2007, 127-131.
[3] Y. Freund and R.E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, 55(1), 1997, 119-139.
[4] R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton, "Adaptive mixtures of local experts," Neural Computation, 3, 1991, 79-87.
[5] T.G. Dietterich and G. Bakiri, "Solving multiclass learning problems via error-correcting output codes," Journal of Artificial Intelligence Research, 2, 1995, 263-286.
[6] R. Ghaderi, Arranging Simple Neural Networks to Solve Complex Classification Problems, Ph.D. thesis, Centre for Vision, Speech and Signal Processing, University of Surrey, 2000.
[7] T. Windeatt and R. Ghaderi, "Binary labelling and decision level fusion," Information Fusion, 2, 2001, 103-112.
[8] E.L. Allwein, R.E. Schapire, and Y. Singer, "Reducing multiclass to binary: a unifying approach for margin classifiers," Journal of Machine Learning Research, 1, 2000, 113-141.
[9] S. Lin and D.J. Costello, Error Control Coding, 2nd edition, Prentice-Hall, 2004.
[10] W.W. Peterson and E.J. Weldon, Error-Correcting Codes, MIT Press, Cambridge, MA, 1972.
[11] W. Utschick and W. Weichselberger, "Stochastic organization of output codes in multiclass learning problems," Neural Computation, 13(5), 2001, 1065-1102.
[12] K. Crammer and Y. Singer, "On the learnability and design of output codes for multiclass problems," Machine Learning, 47(2), 2002, 201-233.
[13] O. Pujol, P. Radeva, and J. Vitria, "Discriminant ECOC: a heuristic method for application dependent design of error correcting output codes," IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(6), 2006, 1001-1007.
[14] O. Pujol, S. Escalera, and P. Radeva, "An incremental node embedding technique for error correcting output codes," Pattern Recognition, 41, 2008, 713-725.
[15] X. Long, W.L. Cleveland, and Y.L. Yao, "Multiclass cell detection in bright field images of cell mixtures with ECOC probability estimation," Image and Vision Computing, 26, 2008, 578-591.
[16] N. Hatami, R. Ebrahimpour, and R. Ghaderi, "ECOC-based training of neural networks for face recognition," 3rd IEEE International Conference on Cybernetics and Intelligent Systems (CIS-RAM 2008), 2008 (to appear).
[17] N. Hatami, S. Seyedtabaii, and M. Mikaili, "Combining classifiers for handwritten digit recognition," Proceedings of the 3rd International Conference on Information & Knowledge Technology (IKT07), 2007.
[18] J.S. Marron and M. Todd, "Distance weighted discrimination," Technical Report No. 1339, School of Operations Research and Industrial Engineering, Cornell University, July 2002.
[19] S.J. Raudys and A.K. Jain, "Small sample size effects in statistical pattern recognition: recommendations for practitioners," IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(3), 1991, 252-264.
[20] T. Hastie and R. Tibshirani, "Classification by pairwise coupling," The Annals of Statistics, 26(2), 1998, 451-471.
[21] G. Giacinto and F. Roli, "An approach to the automatic design of multiple classifier systems," Pattern Recognition Letters, 22, 2001, 25-33.
[22] P. Latinne, O. Debeir, and C. Decaestecker, "Limiting the number of trees in random forests," 2nd International Workshop on Multiple Classifier Systems, 2001, 178-187.
[23] R.E. Banfield, L.O. Hall, K.W. Bowyer, and W.P. Kegelmeyer, "Ensemble diversity measures and their application to thinning," Information Fusion, 6, 2005, 49-62.
[24] D.T. Ross, U. Scherf, et al., "Systematic variation in gene expression patterns in human cancer cell lines," Nature Genetics, 24(3), 2000, 227-235.
[25] U. Scherf, D.T. Ross, et al., "A gene expression database for the molecular pharmacology of cancer," Nature Genetics, 24(3), 2000, 236-244.
[26] A.A. Alizadeh, et al., "Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling," Nature, 403, 2000, 503-511.
[27] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd edition, Academic Press, Boston, 1990.
[28] N. Hatami and S. Seyedtabaii, "Error correcting output codes using genetic algorithm-based decoding," 4th International Conference on Networked Computing and Advanced Information Management (NCM 2008), IEEE CS Proceedings, 2008 (to appear).
