Pattern Recognition 34 (2001) 315–321

Dynamic generation of prototypes with self-organizing feature maps for classifier design

Arijit Laha, Nikhil R. Pal*

Electronics and Communication Science Unit, Indian Statistical Institute, Calcutta 700 035, India

Received 2 June 1999; accepted 1 October 1999

Abstract

We propose a new scheme for designing a nearest-prototype classifier using Kohonen's self-organizing feature map (SOFM). The net starts with the minimum number of prototypes, which is equal to the number of classes. Then, on the basis of the classification performance, new prototypes are generated dynamically. The algorithm merges similar prototypes and deletes less significant prototypes. If prototypes are deleted or new prototypes appear, they are fine-tuned using Kohonen's SOFM algorithm with a winner-only update strategy. This adaptation continues until the system satisfies a termination condition. The classifier has been tested on several well-known data sets and the results obtained are quite satisfactory. © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

Keywords: Nearest-prototype classifier; Dynamic prototype generation; Self-organizing feature map; Split-merge technique

* Corresponding author. Tel.: +91-33-477-8085; fax: +91-33-577-6680. E-mail address: [email protected] (N.R. Pal).

1. Introduction

Kohonen's self-organizing feature map (SOFM) has been successfully used in numerous fields of application such as speech recognition [1], robotics [2,3], industrial process control [5], image compression [4], etc. The design of classifiers [6] and of other pattern recognition systems based on SOFM [7] is among its most successful areas of application. SOFM [8,9] has the interesting property of achieving a distribution of the weight vectors that approximates the distribution of the input data. This property of the SOFM can be exploited for designing nearest-prototype classifiers, and here we propose a new approach for doing so. Although our training data are labeled, the SOFM is trained without using the class information. When the training is over, the weight vectors are converted into labeled prototypes of a classifier using the class information. The performance of the classifier is then evaluated. Based on the evaluation results a tuning

step consisting of deletion, merging, splitting and retraining of the net is performed. The evaluation and tuning are repeated until the number of prototypes stabilizes or the performance of the classifier reaches a satisfactory level. For highly overlapped class boundaries it is usually very difficult to estimate the adequate number of prototypes: a small number of prototypes suffers from a large error rate, while at the other extreme a large number of prototypes makes the system expensive. Here the tuning strategy is designed to strike a compromise between the classification performance and the number of prototypes for such data. These prototypes can also be used to generate fuzzy rules for a fuzzy rule-based pattern recognition system.

2. Self-organizing feature map

We view the self-organizing feature map as an algorithmic transformation A_SOFM : R^p → V(R^q) that is often advocated for visualization of metric-topological relationships and distributional density properties of feature vectors (signals) X = {x_1, ..., x_N} in R^p [9].

0031-3203/00/$20.00 © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved. PII: S0031-3203(99)00232-0


Fig. 1. The SOFM architecture.

SOFM is implemented through a neural-like network architecture as shown in Fig. 1, and it is believed to be similar in some ways to the biological neural network. The visual display produced by A_SOFM helps to form hypotheses about the topological structure present in X. Although in this article we concentrate on (m×n) displays in R^2, in principle X can be transformed onto a display lattice in R^q for any q. In practice, visual displays can be made only for q ≤ 3 and are usually made on a linear or planar configuration arranged as a rectangular or hexagonal lattice.

As shown in Fig. 1, input vectors x ∈ R^p are distributed by a fan-out layer to each of the (m×n) output nodes in the competitive layer. Each node in this layer has a weight vector (prototype) w_ij attached to it. Let O_p = {w_ij} ⊂ R^p denote the set of m×n weight vectors. O_p is (logically) connected to a display grid O_d ⊂ V(R^2). (i, j) in the index set {1, 2, ..., m} × {1, 2, ..., n} is the logical address of the cell. There is a one-to-one correspondence between the m×n p-vectors w_ij and the m×n cells {(i, j)}, i.e., O_p ↔ O_d. In the literature display cells are sometimes called nodes, or even neurons, and we shall use both terms.

The feature mapping algorithm starts with a (usually random) initialization of the weight vectors w_ij. For notational clarity we suppress the double subscripts. Now let x ∈ R^p enter the network and let t denote the current iteration number. Find w_{r,t-1} that best matches x in the sense of minimum Euclidean distance in R^p. This vector has a (logical) "image" which is the cell in O_d with subscript r. Next, a topological (spatial) neighborhood N_t(r) centered at r is defined in O_d, and its display-cell neighbors are located. A 3×3 window N(r) centered at r corresponds to updating nine prototypes in R^p. Finally, w_{r,t-1} and the other weight vectors associated with cells in the spatial neighborhood N_t(r) are updated using the rule

    w_{i,t} = w_{i,t-1} + h_{ri}(t)(x − w_{i,t-1}).    (1)

Here r is the index of the "winner" prototype,

    r = arg min_i {||x − w_{i,t-1}||},    (2)

and || · || is the Euclidean norm on R^p. The function h_{ri}(t), which expresses the strength of interaction between cells r and i in O_d, usually decreases with t, and for a fixed t it decreases as the distance (in O_d) from cell r to cell i increases. h_{ri}(t) is usually expressed as the product of a learning parameter α_t and a lateral feedback function g_t(dist(r, i)). A common choice for g_t is g_t(dist(r, i)) = exp(−dist²(r, i)/σ_t²). α_t and σ_t both decrease with time t. The topological neighborhood N_t(r) also decreases with time. This scheme, when repeated long enough, usually preserves spatial order in the sense that weight vectors which are metrically close in R^p generally have, at termination of the learning procedure, visually close images in the viewing plane.

We next provide a schematic description of the algorithm.

Algorithm A_SOFM (Kohonen):
Begin
  Input X /** unlabeled data set X = {x_i ∈ R^p : i = 1, 2, ..., N} **/
  Input m, n /** the display grid size; a rectangle of size m×n is assumed **/
  Input maxstep /** maximum number of updating steps **/
  Input N_0 /** initial neighborhood size **/
  Input α_0 /** the initial step size (learning coefficient) **/
  Input σ_0 and σ_f /** parameters to control effective step size **/
  /** Learning phase **/
  Randomly generate initial weight vectors {w_ij ∈ R^p : i = 1, 2, ..., m; j = 1, 2, ..., n}
  t ← 0
  While (t < maxstep)
    Select randomly x(t) from X; find r = arg min_i {||x(t) − w_i(t)||}
    /** r and i stand for two-dimensional indices that uniquely identify a weight vector in O_p **/

A. Laha, N.R. Pal / Pattern Recognition 34 (2001) 315}321

    w_i(t+1) ← w_i(t) + α_t g_t(dist(r, i))[x(t) − w_i(t)]  ∀ i ∈ N_t(r)
    w_i(t+1) ← w_i(t)  ∀ i ∉ N_t(r)
    /** dist(r, i) is the Euclidean distance between the centers of nodes r and i on the display lattice; g_t(d) is the lateral feedback function, usually g_t(d) = exp(−d²/σ_t²) **/
    t ← t + 1
    α_t ← α_0 (1 − t/maxstep)
    N_t ← N_0 − t(N_0 − 1)/maxstep
    σ_t ← σ_0 − t(σ_0 − σ_f)/maxstep
    /** there are other ways to readjust α_t, N_t and σ_t, and many choices for g_t **/
  End While
  /** Display phase **/
  For each x ∈ X find

    r = arg min_i {||x − w_i||}, and mark the associated cell r in O_d.
End.
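As an illustration, the learning phase of the algorithm can be sketched in Python for the one-dimensional lattice used later in this paper. This is a minimal sketch, not the authors' implementation: the function name `train_sofm` and its default parameter values are assumptions, while the winner search of Eq. (2), the Gaussian lateral feedback, and the linear decay schedules follow the algorithm as stated above.

```python
import numpy as np

def train_sofm(X, n_nodes, maxstep=1000, alpha0=0.5, N0=3,
               sigma0=2.0, sigmaf=0.1, seed=0):
    """Minimal 1-D SOFM: winner search plus Gaussian neighborhood update."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    # Random initial prototypes inside the bounding box of the data.
    W = rng.uniform(X.min(axis=0), X.max(axis=0), size=(n_nodes, p))
    for t in range(maxstep):
        x = X[rng.integers(len(X))]                    # random training vector
        r = np.argmin(np.linalg.norm(W - x, axis=1))   # winner index, Eq. (2)
        alpha = alpha0 * (1 - t / maxstep)             # linearly decaying step size
        Nt = N0 - t * (N0 - 1) / maxstep               # shrinking neighborhood radius
        sigma = sigma0 - t * (sigma0 - sigmaf) / maxstep
        for i in range(n_nodes):
            d = abs(i - r)                             # lattice distance on the 1-D grid
            if d <= Nt:                                # update only cells inside N_t(r)
                h = alpha * np.exp(-d * d / sigma**2)  # h_ri(t) of Eq. (1)
                W[i] += h * (x - W[i])
    return W
```

A call such as `train_sofm(data, n_nodes=3)` returns three prototypes whose distribution approximates that of `data`, which is the starting point of Section 3.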

3. Labeling of SOFM prototypes

In this investigation we use a 1-D SOFM, but the algorithm can be extended to a 2-D SOFM also. First we train a one-dimensional SOFM using the training data, of course without using the class information of the input data. Initially the number of nodes in the SOFM is the same as the number of classes c. This is motivated by the fact that the smallest number of prototypes that may be required is equal to the number of classes. At the end of the training, the weight vector distribution of the SOFM will reflect the distribution of the input data. These unlabeled prototypes are then labeled using the class information. For each of the N input feature vectors we identify the prototype closest to it, i.e., the winner node. Since no class information is used during the training, it is only natural that some prototypes may become the winner for data from more than one class. For each prototype v_i we compute a score D_ij, which is the number of data points from class j for which v_i is the closest prototype. Due to the strong interaction among neighboring nodes of the SOFM during training, some prototypes may be so placed that they are the closest prototype for no input data point, i.e., D_ij is 0 for all j. Naturally we reject such prototypes. For the remaining prototypes the class label C_i of the prototype v_i is determined as

    C_i = arg max_j {D_ij}.    (3)

This scheme will assign a label to each of the c prototypes, but such a set of prototypes may not classify the data satisfactorily. For example, from (3) it is clear that Σ_{j≠C_i} D_ij data points will be wrongly classified by the

prototype v_i. Hence we need further refinement of the initial set of prototypes V_0 = {v_1, v_2, ..., v_c} ⊂ R^p, which we do next.

3.1. Refinement of prototypes

The prototypes generated by the SOFM algorithm represent the overall distribution of the input data. A set of prototypes useful for the classification job must be capable of dealing with class-specific characteristics (such as class boundaries) of the data. We present a strategy for modifying the initial set of prototypes V_0 leading to enhancement of the performance of the classifier. This process of modification is repeated till the number of prototypes and their performance stabilize at an acceptable level. In the mth iteration the prototype set V_{m-1} from the previous iteration is used to generate the new set of prototypes V_m.

The labeled prototypes V_{m-1} are used to classify a set of training data and their performance is monitored. Let W_i be the number of training data points to which prototype v_i is the closest one. Let S_i = max_j {D_ij} = D_{iC_i}. Thus when v_i is labeled as a prototype for class C_i, S_i training data points will be correctly classified by v_i and F_i = Σ_{j≠C_i} D_ij data points will be incorrectly classified. Thus,

    W_i = S_i + F_i  and  W_i = Σ_j D_ij.

Let X = {x_1, ..., x_N} be the set of training data and let N_j be the number of training data points from class j. The refinement stage uses (c+1) parameters, a global retention parameter α and a set of class-wise retention parameters β_k (one for each class), to evaluate the performance of each prototype. α and the β_k are computed dynamically (not fixed) for the mth iteration using the following formulae:

    α_m = 1 / (K_1 |V_{m-1}|),    β_{mk} = 1 / (K_2 |V_{m-1}|).
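The labeling scheme of Section 3 (the score matrix D_ij, rejection of prototypes that win no data, and the majority-class label of Eq. (3)) can be sketched as follows. This is an illustrative sketch only; the function name and array layout are assumptions, not from the paper.

```python
import numpy as np

def label_prototypes(V, X, y, n_classes):
    """Compute D[i, j] = number of points of class j whose nearest
    prototype is v_i, drop prototypes that win no points, and label
    each survivor with its majority class (Eq. (3))."""
    D = np.zeros((len(V), n_classes), dtype=int)
    for x, cls in zip(X, y):
        i = np.argmin(np.linalg.norm(V - x, axis=1))  # nearest prototype wins
        D[i, cls] += 1
    keep = D.sum(axis=1) > 0          # reject prototypes with D_ij = 0 for all j
    V, D = V[keep], D[keep]
    labels = D.argmax(axis=1)         # C_i = argmax_j D_ij
    return V, D, labels
```

From the returned score matrix, the quantities of Section 3.1 follow directly: `W = D.sum(axis=1)`, `S = D.max(axis=1)` and `F = W - S`.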

Merging a labeled prototype: To merge v_i w.r.t. class k we identify the prototype v_l closest to v_i for which C_l = k (i.e., v_l also is a prototype for class k). Let X_ij denote the set of training data vectors from class j whose nearest prototype is v_i. When we merge v_i with v_l w.r.t. class k, v_l is updated according to the equation

    v_l = (W_l v_l + Σ_{x ∈ X_ik} x) / (W_l + D_ik).    (4)

Note that we do not say here when to merge; this will be discussed later.

Modifying a labeled prototype: A prototype v_i is modified according to the following equation:

    v_i = (Σ_{x ∈ X_iC_i} x) / D_iC_i.    (5)

Splitting a prototype: A prototype v_i is split into r new prototypes for r different classes according to the following rule. For each of the r new prototypes v_l of class C_l we compute

    v_l = (Σ_{x ∈ X_iC_l} x) / D_iC_l.    (6)
The prototype v_i is deleted, so after the splitting the number of prototypes is increased by r − 1.

Deleting a prototype: The prototype v_i is deleted, so that the number of prototypes is reduced by one.

Now we are in a position to schematize the evaluation and enhancement strategy for the prototypes as follows.

Repeat for all v_i ∈ V_{m-1} until the termination condition is satisfied:

If W_i ≠ D_iC_i and W_i < αN and there is at least one other prototype for class C_i, then delete v_i. (Global deletion)
/* If a prototype is not a pure one (i.e., it represents data from more than one class) and does not represent a reasonable number of points, it fails to qualify as a prototype. However, if there is no other prototype for class C_i the prototype is retained. */

Else if W_i > αN but D_ij < β_mj N_j for all classes j, then merge v_i for the classes for which D_ij > 0 and delete v_i. (Merge and delete)
/* The prototype represents a reasonable number of points, but not a reasonable number of points from any particular class, so it cannot qualify as a prototype for any particular class. But we cannot ignore the prototype completely. We logically first split v_i into s prototypes v_{i1}, v_{i2}, ..., v_{is}, s ≤ c, where s is the total number of classes for which D_ij > 0, and then merge each v_{ij} into its closest prototype from class j. v_i is then deleted. */

Else if W_i > αN and D_iC_i > β_mC_i N_C_i but D_ij < β_mj N_j for all j ≠ C_i, then merge v_i with respect to all the classes other than C_i for which D_ij > 0 using (4), and modify v_i using (5). (Merge and modify)
/* The prototype represents points from more than one class; however, only the points from one class are well represented by it. According to our labeling scheme the prototype is labeled with the most represented class. Thus we merge v_i with respect to the classes other than C_i using (4) and then modify v_i by (5). */

Else if W_i > αN and D_ij > β_mj N_j for more than one class, then merge v_i w.r.t. the classes for which D_ij < β_mj N_j by (4), and split v_i into new prototypes for the classes for which D_ij > β_mj N_j by (6). Add these new prototypes to the new set of prototypes V_m. (Merge and split)
/* The prototype represents points reasonably well from more than one class. So we merge the prototype with respect to the classes whose data are not represented reasonably well, and split the prototype into one new prototype for each class whose data are reasonably well represented by v_i. */

Let V_m be the union of the unaltered prototypes of V_{m-1} and the modified as well as the new prototypes. Run the SOFM algorithm on V_m with the winner-only update strategy (i.e., no neighbor is updated) using the same training data as input.
/* At this stage we want only to fine-tune the prototypes. If the neighbors were also updated, the prototypes might again migrate to represent points from more than one class. */

Termination conditions: The algorithm may terminate under any one of the following three conditions:
(i) a satisfactory recognition score, defined in terms of the percentage of correct classifications (ε), is reached;
(ii) the set of prototypes becomes stable;
(iii) a maximum number of iterations (I_max) is reached.
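The evaluate-and-tune cascade above can be sketched as a single pass over the prototypes. This is a deliberately simplified sketch: it only decides which branch each prototype falls into (global deletion, merge-and-delete, keep/merge-and-modify, or merge-and-split), using the retention parameters in the reconstructed form α_m = 1/(K_1 |V_{m-1}|) and β_mk = 1/(K_2 |V_{m-1}|); the actual vector updates of Eqs. (4)-(6) are omitted, and all names are illustrative.

```python
import numpy as np

def refine(V, D, labels, N_j, K1=3, K2=6):
    """One pass of the evaluate-and-tune cascade. Returns the indices of
    prototypes to keep (possibly after merge-and-modify) and a list of
    (index, classes) pairs to be split into one prototype per class."""
    N = int(D.sum())                      # total number of training points
    alpha = 1.0 / (K1 * len(V))           # assumed global retention parameter
    beta = 1.0 / (K2 * len(V))            # assumed class-wise retention parameter
    keep, split_into = [], []
    for i in range(len(V)):
        W_i = D[i].sum()
        c = labels[i]
        pure = (W_i == D[i, c])           # represents data from a single class?
        others = [k for k in range(len(V)) if k != i and labels[k] == c]
        if not pure and W_i < alpha * N and others:
            continue                      # global deletion
        # classes whose data this prototype represents reasonably well
        well = [j for j in range(D.shape[1]) if D[i, j] > beta * N_j[j]]
        if W_i > alpha * N and len(well) == 0:
            continue                      # merge-and-delete (merging omitted here)
        if W_i > alpha * N and len(well) > 1:
            split_into.append((i, well))  # merge-and-split: one prototype per class
            continue
        keep.append(i)                    # unaltered, or merge-and-modify case
    return keep, split_into
```

After such a pass, the surviving and newly split prototypes would form V_m and be fine-tuned with the winner-only SOFM update, as described above.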

Proper use of condition (i) requires some knowledge of the data. However, even if we do not have such knowledge, we can always set a high (conservative) percentage for ε, say 95%. Condition (ii) can be checked with a parameter δ_p using the following condition:

    | |V_{m-1}| − |V_m| | / |V_{m-1}| < δ_p,    (7)

where |V_m| denotes the number of prototypes in V_m. Thus the algorithm terminates when the number of prototypes does not change significantly between two successive iterations.
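Condition (7) amounts to a relative-change test on the prototype count. With δ_p = 0.2, the value used in Section 4, it can be written as a one-line check (the function name is illustrative):

```python
def prototype_count_stable(n_prev, n_curr, delta_p=0.2):
    """Condition (7): relative change in the number of prototypes
    between two successive iterations falls below delta_p."""
    return abs(n_prev - n_curr) / n_prev < delta_p
```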


Condition (iii) is used to protect against infinite looping of the algorithm for some data with highly overlapped structures, for which the chosen values of ε and δ_p may not be reachable.

4. Results

Several data sets have been used to judge the performance of the algorithm, but we report here results for five data sets: Iris, Glass, Breast Cancer, Vowel and Norm4. Iris data [10] have 150 points in four dimensions from three classes, each with 50 points. Glass data [11] consist of 214 samples with nine attributes from six classes. Breast Cancer data [12] have 569 points in 30 dimensions from two classes. The Vowel data set [15] consists of 871 discrete phonetically balanced speech samples for the Telugu vowels in consonant–vowel nucleus–consonant (CNC) form. These samples were generated by three male informants (in the age group of 25–30 yr) on an AKAI-type recorder; the spectrographic analysis was done on a Kay Sonograph Model 7029-A. The data have three features, the first three formant frequencies. The data set Norm4 [13] is a sample of 800 points, consisting of 200 points from each of the four components of a mixture of four 4-variate normals. All our reported results are obtained on the entire data sets. Table 1 summarizes the classification performances. We used the values K_1 = 3, K_2 = 6, δ_p = 0.2, ε = 95% and I_max = 10.

It is well known that classes 2 and 3 of Iris have some overlap, and the typical re-substitution error with a nearest-prototype classifier defined by three prototypes obtained by some clustering algorithm is 15–16 points (i.e., about 10% error with three prototypes). Our algorithm terminated with seven prototypes in three iterations. The performance of the proposed system with seven prototypes is quite good, resulting in only 2.66% error. The Breast Cancer data have been used in Ref. [12] to train a linear programming-based diagnostic system by a

Table 1
Performance of the classifier for different data sets

Data set        Size   No. of prototypes     No. of       % of
                       Initial    Final      iterations   error
Iris            150    3          7          4            2.66%
Glass           214    6          30         7            21.29%
Breast cancer   569    2          5          6            11.07%
Vowel           871    6          15         5            21.01%
Norm4           800    4          4          1            3.75%

Fig. 2. Scatterplot of the glass data along the two most significant principal components.

variant of the multisurface method (MSM) called MSM-tree, and about 97.5% accuracy was obtained. Breast cancer data of a similar kind have also been used in a recent study [14], with 74.0% accuracy with 100 rules. Our classifier could achieve as low as 11.07% error with only five prototypes, which is quite good. The Glass data show a high percentage of error; this is possibly unavoidable, because a scatterplot (Fig. 2) of the two most significant principal components shows that the data for class 3 are almost randomly distributed among the data points from the other classes. In fact, the points from class 3 (represented by +) are not visible in the scatterplot. In Ref. [14] the recognition score for the Glass data is 64.4%, i.e., about 35% error. Our classifier could realize more than 78% accuracy with 30 prototypes generated in seven iterations of the algorithm. Although the Vowel data set has three features, we used only the first two. The Bayes classifier for this data set [16] gives an overall recognition score of 79.2%. Fig. 3, the scatterplot of the Vowel data, depicts substantial overlap among the different classes, and hence some misclassification is unavoidable. The proposed classifier could achieve nearly 79% correct classification with 15 prototypes. The performance on Norm4 [13] with only four prototypes, i.e., one prototype per class, is excellent too: the SOFM-based classifier could achieve up to 96% accuracy with only four prototypes.

5. Conclusions

We have proposed a simple but powerful approach for finding a set of reliable prototypes for designing


Fig. 3. Scatterplot of the vowel data.

nearest-prototype classifiers. The algorithm first finds a set of representative prototypes from the training data using the SOFM, disregarding the class information. These prototypes are labeled following a "most-likely class" heuristic. Subsequent tuning cycles fine-tune the prototype set to realize better class discrimination. Depending on the performance of the classifier, prototypes are deleted, merged, modified or split. The retention parameters try to strike a compromise between the error rate and the number of prototypes. The global retention parameter α (K_1) prevents an uncontrolled increase in the number of prototypes, while the class-wise retention parameters β_k (K_2) try to generate prototypes as pure as possible, resulting in an increase in the number of prototypes. A proper choice of K_1 and K_2 is needed for balancing the opposing tendencies generated by α and the β_k's. It is found that for data with well-separated classes (such as Norm4 used here) the process is comparatively insensitive to changes in the values of K_1 and K_2, but for data with highly overlapped classes (like Glass and Vowel) the performance of the system varies considerably with the values of K_1 and K_2, especially K_2. The process, to some extent, depends on δ_p also. Further investigation is required to provide a guideline for the selection of these parameters. Since the proposed scheme can generate a small number of good prototypes for a 1-NP classifier, they can also be used to extract fuzzy rules for classifier design. This is currently under investigation.

References

[1] T. Kohonen, K. Torkkola, M. Shozakai, J. Kangas, O. Venta, Microprocessor implementation of a large vocabulary speech recognizer and phonetic typewriter for Finnish and Japanese, Proceedings of the European Conference on Speech Technology, Edinburgh, 1987, pp. 377–380.
[2] D.H. Graf, W.R. Lalonde, A neural controller for collision-free movement of general robot manipulators, Proceedings of the IEEE International Conference on Neural Networks, ICNN-88, San Diego, CA, 1988, pp. I-77–I-80.
[3] D.H. Graf, W.R. Lalonde, Neuroplanners for hand/eye coordination, Proceedings of the International Joint Conference on Neural Networks, IJCNN-89, Washington, DC, 1989, pp. II-543–II-548.
[4] N.M. Nasrabadi, Y. Feng, Vector quantization of images based on Kohonen self-organizing feature maps, Proceedings of the IEEE International Conference on Neural Networks, ICNN-88, San Diego, CA, 1988, pp. I-101–I-108.
[5] K.M. Marks, K.F. Goser, Analysis of VLSI process data based on self-organizing feature maps, Proceedings of Neuro-Nimes'88, Nimes, France, 1988, pp. 337–347.
[6] S. Mitra, S.K. Pal, Self-organizing neural network as a fuzzy classifier, IEEE Trans. Systems Man Cybernet. 24 (3) (1994) 385–398.
[7] Z. Chi, J. Wu, H. Yan, Handwritten numeral character recognition using self-organizing maps and fuzzy rules, Pattern Recognition 28 (1) (1995) 59–66.
[8] T. Kohonen, The self-organizing map, Proc. IEEE 78 (9) (1990) 1464–1480.
[9] J.C. Bezdek, N.R. Pal, A note on self-organizing semantic maps, IEEE Trans. Neural Networks 6 (5) (1995) 1029–1036.
[10] E. Anderson, The IRISes of the Gaspe peninsula, Bull. Amer. IRIS Soc. 59 (1935) 2–5.
[11] R.C. Holte, Very simple classification rules perform well on most commonly used data sets, Mach. Learning 11 (1993) 63–91.
[12] O.L. Mangasarian, W.N. Street, W.H. Wolberg, Breast cancer diagnosis and prognosis via linear programming, Oper. Res. 43 (4) (1995) 570–577.
[13] N.R. Pal, J.C. Bezdek, On cluster validity for the fuzzy c-means model, IEEE Trans. Fuzzy Systems 3 (3) (1995) 370–379.
[14] H. Ishibuchi, T. Nakashima, T. Murata, Performance evaluation of fuzzy classifier systems for multi-dimensional pattern classification problems, IEEE Trans. Systems Man Cybernet. 29 (5) (1999) 601–618.
[15] S.K. Pal, D. Dutta Majumder, Fuzzy sets and decision making approaches in vowel and speaker recognition, IEEE Trans. Systems Man Cybernet. SMC-7 (1977) 625–629.
[16] S.K. Pal, S. Mitra, Multilayer perceptron, fuzzy sets, and classification, IEEE Trans. Neural Networks 3 (5) (1992) 683–697.

About the Author: ARIJIT LAHA obtained a B.Sc. with honors in physics and an M.Sc. in physics from the University of Burdwan in 1991 and 1993, respectively. He obtained his M.Tech. (Comp. Sc.) from the Indian Statistical Institute in 1997. From August 1997 to May 1998 he worked at Wipro Infotech Global R&D as a senior software engineer. Currently he is a software engineer in the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Calcutta, involved in a real-time expert system development project. His research interests include pattern recognition, neural networks, fuzzy systems and expert systems.


About the Author: NIKHIL R. PAL obtained a B.Sc. with honors in physics and a Master of Business Management from the University of Calcutta in 1979 and 1982, respectively. He obtained his M.Tech. (Comp. Sc.) and Ph.D. (Comp. Sc.) from the Indian Statistical Institute in 1984 and 1991, respectively. Currently he is a Professor in the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Calcutta. From Sept. 1991 to Feb. 1993, July 1994 to Dec. 1994, and Oct. 1996 to Dec. 1996 he was with the Computer Science Department of the University of West Florida. He has also been a guest faculty member at the University of Calcutta. His research interests include image processing, pattern recognition, fuzzy sets theory, measures of uncertainty, neural networks, genetic algorithms, and fuzzy logic controllers. He has coauthored a book titled Fuzzy Models and Algorithms for Pattern Recognition and Image Processing, Kluwer Academic Publishers. He is an associate editor of the International Journal of Fuzzy Systems, the International Journal of Approximate Reasoning and the IEEE Transactions on Fuzzy Systems.
