Auto-segmentation of normal and target structures in head and neck CT images: A feature-driven model-based approach

Arish A. Qazi(a)
Radiation Medicine Program, Princess Margaret Hospital, Toronto, Ontario M5G 2M9, Canada

Vladimir Pekar
Philips Research North America, Markham, Ontario L6C 2S3, Canada

John Kim, Jason Xie, Stephen L. Breen, and David A. Jaffray
Radiation Medicine Program, Princess Margaret Hospital, Toronto, Ontario M5G 2M9, Canada

(Received 31 May 2011; revised 7 September 2011; accepted for publication 30 September 2011; published 26 October 2011)

Purpose: Intensity modulated radiation therapy (IMRT) allows greater control over dose distribution, which leads to a decrease in radiation-related toxicity. IMRT, however, requires precise and accurate delineation of the organs at risk and target volumes. Manual delineation is tedious and suffers from both interobserver and intraobserver variability. State-of-the-art auto-segmentation methods are either atlas-based, model-based, or hybrid; however, robust fully automated segmentation is often difficult due to the insufficient discriminative information provided by standard medical imaging modalities for certain tissue types. In this paper, the authors present a fully automated hybrid approach which combines deformable registration with the model-based approach to accurately segment normal and target tissues from head and neck CT images.

Methods: The segmentation process starts by using an average atlas to reliably identify salient landmarks in the patient image. The relationship between these landmarks and the reference dataset serves to guide a deformable registration algorithm, which allows for a close initialization of a set of organ-specific deformable models in the patient image, ensuring their robust adaptation to the boundaries of the structures. Finally, the models are automatically fine-adjusted by our boundary refinement approach, which models the uncertainty in model adaptation using a probabilistic mask. This uncertainty is subsequently resolved by voxel classification based on local low-level organ-specific features.

Results: To quantitatively evaluate the method, the authors auto-segment several organs at risk and target tissues from 10 head and neck CT images and compare the segmentations to the manual delineations outlined by an expert. The evaluation is carried out by estimating two common quantitative measures on the 10 datasets: the volume overlap fraction, or Dice similarity coefficient (DSC), and a geometrical metric, the median symmetric Hausdorff distance (HD), which is evaluated slice-wise. The method achieves an average overlap of 93% for the mandible, 91% for the brainstem, 83% for the parotids, 83% for the submandibular glands, and 74% for the lymph node levels.

Conclusions: Our automated segmentation framework is able to segment anatomy in the head and neck region with high accuracy within a clinically acceptable segmentation time. © 2011 American Association of Physicists in Medicine. [DOI: 10.1118/1.3654160]

Key words: radiation therapy planning, segmentation, head and neck, image registration

I. INTRODUCTION

Intensity modulated radiation therapy (IMRT) is a modern radiation therapy treatment method which enables delivery of the therapeutic dose to the tumor with very high precision. Accurate delineation (segmentation) of the target and risk structures is, however, an essential step for successful implementation of IMRT. This procedure is mostly carried out manually; it requires high concentration and is very time-consuming and tedious for the operator. According to a recent study, the average physician's time to fully contour a single head and neck case is approximately 2.7 h.1 Another disadvantage of manual contouring is the potential error arising from both interobserver and intraobserver variability in delineation, which may even exceed planning and setup errors.2


The use of fully automated segmentation methods in radiation therapy planning (RTP), however, is often limited by insufficient image content, in particular due to low soft tissue contrast in CT data and various image artifacts, and is therefore extremely challenging. State-of-the-art auto-segmentation methods in RTP can be broadly categorized into (i) model-based, (ii) atlas-based, or (iii) hybrid approaches. Model-based segmentation has been demonstrated to be a powerful image analysis technique, where the lack of reliable image information can be, to a certain extent, compensated by imposing prior shape constraints in the segmentation process.3 This is typically done by statistical analysis of reference ground truth segmentations. Deformable models of anatomical structures are often represented by flexible triangulated surface meshes, where the


shape is designed to be close to the average shape of the anatomical structure of interest and may also cover possible shape variations by using a dedicated parameterization, e.g., principal component analysis. In the context of radiation therapy, model-based segmentation has been applied to delineation of the male pelvis4 and 4D thorax,5 among others. In these cases, prior knowledge about the appearance of the structures of interest was used besides shape knowledge: characteristic gray-value range, gradient direction and strength, etc. Applying deformable organ models for segmentation requires close initialization to the target anatomy. In practice, this is often done by a manual drag-and-drop operation and may also require complex manual editing, including nonrigid model deformations using special mesh manipulation tools.

An alternative way to approach automated delineation is the use of atlas-based deformable registration, where the patient dataset is registered with some reference image or averaged population containing ground truth segmentations. The derived transformation is then applied to the presegmented regions of interest (ROIs), which transfers them to the patient dataset.6–8 An advantage of this method is that it can work in a fully unsupervised manner. On the other hand, significant variability across patients and image artifacts often do not allow for reliable evaluation of similarity between the datasets, which is the essential prerequisite. To overcome this problem, various approaches based on optimal atlas selection and on multiatlas segmentation and fusion have been proposed.9–13 Hybrid approaches combine registration and segmentation into a common framework,14–16 where, for example, evolution of deformable models can serve as a registration constraint or be used to compensate for the residual differences after the registration step.

In this paper, we present a hybrid approach which combines deformable registration and organ-specific model-based segmentation (MBS) and introduces a probabilistic refinement step to further improve the results of MBS. We validate the method on automated delineation of several structures in the head and neck region. The anatomy of the head and neck region is extremely complex, containing structures, such as the lymph node regions, whose boundaries are not visible in the image. In terms of clinical burden, manual delineation of the lymph node levels is the most time-consuming part of head and neck RTP. Their automated delineation is extremely challenging, since image content alone is not enough for their robust identification; rather, it depends mostly on the prior expert knowledge of the physician. Our aim is fully automated delineation of both the normal structures (organs at risk) and target tissues (lymph node levels I–IV) by combining registration-based initialization and model-based segmentation into a common framework and utilizing organ-specific local information, thereby replicating the physician's way of contouring complex structures in the head and neck region.

The method starts by creating an average atlas which allows us to reliably identify certain landmarks in the patient image. The relationship between these landmarks and the reference dataset serves to guide a deformable registration


algorithm which allows for a very close initialization of a set of deformable organ models in the patient image and their subsequent adaptation. Finally, the adapted models are automatically fine-adjusted by our boundary refinement approach, which utilizes local information to segment the structures of interest. We validate our approach by comparing the segmentation results with those obtained by manual expert delineations. The results for most of the segmented normal structures are additionally compared to the best performer12,17 of the recently held MICCAI segmentation challenges for organs at risk in the head and neck region.18,19

This paper is organized as follows. The technical background is presented in the first part of the paper. It includes the atlas creation method using manually annotated training data and a stochastic optimization technique applied to robustly detect a set of predefined landmarks in the patient dataset. Further on, the deformable registration algorithm is described, which yields the transformation used to transfer the set of deformable organ models from the reference to the patient dataset. Finally, fully automated model adaptation and boundary refinement are presented. The second part of the paper is devoted to the quantitative validation of the proposed method using 10 datasets not present in the atlas and refinement training.

II. METHODS

The auto-segmentation method developed in this work is a multistep approach, which can be divided into two major steps. (i) Model initialization, where as a first step, a low-dimensional transformation is determined compensating for global differences between the reference and the patient dataset, such as size, flexion of the neck, etc. Next, this global transformation is used to initialize a dense deformable registration method, which further initializes organ-specific deformable models. (ii) Model adaptation and refinement, where the initialized deformable organ models (associated with a reference dataset) are deformed with respect to the patient anatomy, and the adapted boundaries within an uncertainty band, as defined by a probabilistic mask, are further refined using voxel classification, thus compensating for the residual local differences. The complete segmentation pipeline is presented in Fig. 1.

II.A. Model initialization

The model initialization consists of three steps: (i) offline creation of an averaged gray-value atlas by nonrigid registration and merging of N training datasets, (ii) low-dimensional matching of the atlas to the unseen patient image using the mean positions and spatial variability modes of specific landmarks,20 and (iii) dense deformable registration (one displacement vector for each voxel position) to recover local deformations.

II.A.1. Atlas construction (offline)

The aim of using an atlas in our framework is to reliably identify certain stable point landmarks in the patient image.


FIG. 1. Flowchart of the segmentation algorithm.

The atlas construction process starts with manual annotation of the training datasets. We have chosen 14 landmarks distributed within the head and neck area and corresponding to the global patient pose: they are located at the principal bones and along the spinal cord, see Fig. 2. One dataset from the training data is selected as a reference. The coordinate system of the reference dataset, which also contains manually delineated ROIs, is chosen as the reference coordinate frame, and the landmarks from all training datasets are registered with those of the reference dataset by using a method based on singular value decomposition.21 Next, the mean landmark positions and the covariance matrix are computed as

\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad Q = \frac{1}{N-1} \sum_{i=1}^{N} d_i d_i^T,    (1)

where x_i = \{x_{i1}, x_{i2}, x_{i3}\} is the landmark position in the i-th training dataset, and d_i = x_i - \bar{x}. The eigenvectors of matrix Q corresponding to the first M largest eigenvalues are considered as principal modes of spatial variation of the landmarks. Finally, all training images are registered by thin-plate spline (TPS) transformations22 derived from the landmark correspondences, where the average landmark positions are used as a reference. The deformed images are averaged to form the gray-value atlas, denoted as T, see Fig. 2. Note that the created atlas is fuzzy except in the areas around the average landmark positions. This property is useful when matching the atlas to the patient dataset, as described in Sec. II A 2.
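To make Eq. (1) concrete, the following sketch computes the mean landmark positions and the principal modes of spatial variation with NumPy; the array layout and function name are illustrative and not part of the original implementation.

```python
import numpy as np

def landmark_variation_modes(landmarks, n_modes):
    """Mean landmark positions and principal variation modes, Eq. (1).

    landmarks: (N, L, 3) array holding the L co-registered landmarks of the
    N training datasets (layout assumed for this sketch).
    """
    n_sets = landmarks.shape[0]
    flat = landmarks.reshape(n_sets, -1)           # stack each landmark set into one vector
    mean = flat.mean(axis=0)                       # mean positions, x-bar in Eq. (1)
    d = flat - mean                                # d_i = x_i - x-bar
    Q = d.T @ d / (n_sets - 1)                     # covariance matrix Q of Eq. (1)
    eigval, eigvec = np.linalg.eigh(Q)             # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1][:n_modes]     # keep the M largest modes
    modes = eigvec[:, order].T.reshape(n_modes, -1, 3)
    return mean.reshape(-1, 3), modes
```

II.A.2. Atlas matching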

In order to detect the landmarks in an unseen patient image, a low-dimensional nonrigid transformation is optimized which yields the optimal similarity between small image patches located around each individual landmark. This transformation maps the model landmarks (mean landmark positions) into the image and is parameterized as

T_l(x; p) = sR\left( x + \sum_{j=1}^{M} w_j q_j \right) + t,    (2)

where s is the scale factor, R denotes the rotation matrix, t is the translation vector, q_j are the principal variation modes of Q (the eigenvectors corresponding to its largest eigenvalues), and w_j are the corresponding weights. The parameter vector to optimize is thus defined as p = (r_x, r_y, r_z, s, t_x, t_y, t_z, w_1, ..., w_M)^T, where the first seven parameters define, correspondingly, the rotations, scaling factor, and translations. The atlas matching problem can now be formalized as finding the optimal transformation parameters

FIG. 2. Gray-value atlas with the mean landmark positions. The off-plane landmarks are marked by smaller dots.

T_o = \arg\min_{T_l} \frac{1}{L} \sum_{i=1}^{L} \sum_{k=1}^{G_i} \left| g_k(x_i) - \bar{g}_k(T_l(x_i)) \right|,    (3)


where L is the number of landmarks, G_i is the number of voxels in the patch around the i-th landmark, g_k(x_i) are the gray values from the patch around the i-th landmark in the patient image, and \bar{g}_k(x_i) are the gray values from the corresponding patch in the atlas image, warped using a TPS transformation derived from the landmark correspondences. Since the transformation (2) is low-dimensional, the optimization can be done efficiently using global stochastic optimization methods, such as controlled random search23 or differential evolution.24

The atlas matching step enables us to identify the landmark positions in the patient dataset. The correspondences between these landmarks and those in the reference dataset are then used to derive a TPS transformation. This transformation warps the reference dataset and is used as an initialization for the dense deformable registration step presented next.
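Before moving on, the following sketch shows how the optimization of Eqs. (2) and (3) could be set up with SciPy's differential evolution. It simplifies the patch comparison (no TPS warping of the atlas patches), and the parameter bounds and patch-sampling callables are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.transform import Rotation

def match_atlas(mean_landmarks, modes, atlas_patches, patient_patch_at, n_modes=5):
    """Optimize the low-dimensional transform of Eq. (2) by minimizing the
    patch dissimilarity of Eq. (3).  atlas_patches[i] holds the gray values
    of the patch around the i-th mean landmark; patient_patch_at(point)
    samples the corresponding patch in the patient image (placeholders)."""
    n_landmarks = len(mean_landmarks)

    def transform(p):
        # p = (r_x, r_y, r_z, s, t_x, t_y, t_z, w_1, ..., w_M), cf. Eq. (2)
        rot = Rotation.from_euler('xyz', p[0:3]).as_matrix()
        s, t, w = p[3], np.asarray(p[4:7]), np.asarray(p[7:7 + n_modes])
        shape = mean_landmarks + np.tensordot(w, modes[:n_modes], axes=1)
        return s * shape @ rot.T + t

    def cost(p):                                   # Eq. (3): mean absolute patch difference
        pts = transform(p)
        return sum(np.abs(patient_patch_at(pts[i]) - atlas_patches[i]).sum()
                   for i in range(n_landmarks)) / n_landmarks

    bounds = ([(-0.3, 0.3)] * 3 + [(0.8, 1.2)] + [(-50.0, 50.0)] * 3
              + [(-3.0, 3.0)] * n_modes)           # illustrative parameter ranges
    return differential_evolution(cost, bounds, maxiter=100, seed=0)
```

II.A.3. Dense deformable registration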

The atlas matching procedure described above works well to recover large displacements. In order to resolve local deformations, we apply an enhanced version of the "Demons" deformable registration algorithm,8 originally proposed by Thirion.25 The registration is essentially voxel-based, utilizing an optical flow based diffusion model. The enhanced Demons works by adding an "active" force to the iterative diffusion process, which has the effect of making the registration more efficient, both in terms of convergence and computation time. Since Demons relies on resolving small deformations, the method is implemented, as suggested in Ref. 26, in a multiresolution scheme, where the number of levels is fixed to four. Prior to registration, the original images are resampled to isotropic voxels of size 3 × 3 × 3 mm^3. In order to have a smooth and stable solution, in every iteration the deformation field is regularized by a Gaussian filter,25 with standard deviation σ, which controls the degree of smoothing of the deformation map. The regularization is performed at all levels of the multiresolution pyramid, and σ is varied from 3 to 0.7 mm (coarsest to finest).
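A minimal sketch of such a multiresolution scheme is given below, using SimpleITK's basic Demons filter as a stand-in for the enhanced Demons of Ref. 8; the shrink-factor/smoothing schedule only approximates the four-level setup described above.

```python
import SimpleITK as sitk

def demons_register(fixed, moving, levels=((8, 3.0), (4, 2.0), (2, 1.2), (1, 0.7))):
    """Multi-resolution Demons registration (illustrative sketch of Sec. II A 3).

    fixed/moving: sitk.Image volumes already resampled to 3 mm isotropic voxels
    and roughly aligned by the TPS transform from atlas matching.
    levels: (shrink factor, field-smoothing sigma in mm) per resolution level.
    """
    disp = None
    for shrink, sigma in levels:
        f = sitk.Shrink(fixed, [shrink] * 3) if shrink > 1 else fixed
        m = sitk.Shrink(moving, [shrink] * 3) if shrink > 1 else moving
        demons = sitk.DemonsRegistrationFilter()
        demons.SetNumberOfIterations(50)
        demons.SetStandardDeviations(sigma)        # Gaussian regularization of the field
        if disp is None:
            disp = demons.Execute(f, m)
        else:
            # upsample the displacement field from the previous level and refine it
            disp = sitk.Resample(disp, f, sitk.Transform(), sitk.sitkLinear)
            disp = demons.Execute(f, m, disp)
    return sitk.DisplacementFieldTransform(disp)
```

II.B. Model-based segmentation and boundary refinement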

Consecutive application of the TPS transformation from the atlas matching step and the vector field from the dense registration step is used to initialize the organ-specific models in the patient image, which are then adapted to the patient anatomy using MBS. The organ models used for adaptation are created from manual expert segmentations of the reference dataset and are represented as triangular surface meshes. The implemented MBS approach uses the iterative energy minimization technique adapted from Ref. 4, where the adaptation process is carried out by minimizing the sum of an external energy, attracting the model to prominent features in the image, such as edges, and an internal energy, maintaining the consistency of the model shape. The total energy to be minimized is defined as4

E = \underbrace{\sum_{i=1}^{N_{\mathrm{triangles}}} w_i \left[ e_i \cdot (\hat{x}'_i - p_i) \right]^2}_{E_{\mathrm{external}}} + \underbrace{\sum_{j=1}^{N_{\mathrm{vertices}}} \sum_{k \in NB(j)} \left[ (x'_j - x'_k) - sR\Delta_{jk} \right]^2}_{E_{\mathrm{internal}}},    (4)

where, in the external energy, p_i is the attractor point in the image, e_i is the unit vector in the direction of the gradient at the point p_i, and \hat{x}'_i = \frac{1}{3}(x'_{i1} + x'_{i2} + x'_{i3}) is the new position of the barycenter of the i-th triangle. In the internal energy, x'_j and x'_k denote the vertices of the deforming mesh, NB(j) is the set of vertices connected to the j-th vertex, and \Delta_{jk} denotes the difference vector between the corresponding vertices in the reference shape. The attractor points p_i are determined in each iteration by a feature search along each triangle's normal using a set of predefined organ-specific properties, such as gray value range, gradient strength and direction, etc. The scaling factor s and the rotation matrix R between the reference surface mesh and the deforming mesh are recomputed in each step using point-based registration. Energy minimization is performed using the efficient conjugate gradient method.27

After adaptation, we further refine the segmentations by employing a scheme based on quantification of the uncertainty in the boundary, as segmented by MBS. The uncertainty is represented by probabilistic masks, whose construction is described in Sec. II B 1.
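For concreteness, the sketch below evaluates the total energy of Eq. (4) for a triangulated mesh with NumPy; the data layout is assumed, and neither the feature search for the attractor points nor the conjugate gradient minimization is shown.

```python
import numpy as np

def mbs_energy(verts, tris, attractors, grad_dirs, feat_weights,
               ref_deltas, edges, s, R):
    """Total energy of Eq. (4) for one adaptation step (illustrative sketch).

    verts        : (Nv, 3) current vertex positions x'_j
    tris         : (Nt, 3) vertex indices per triangle
    attractors   : (Nt, 3) attractor points p_i found along the triangle normals
    grad_dirs    : (Nt, 3) unit gradient directions e_i at the attractor points
    feat_weights : (Nt,)   feature weights w_i
    ref_deltas   : (Ne, 3) reference-shape difference vectors Delta_jk
    edges        : (Ne, 2) vertex pairs (j, k) over which E_internal is summed
    s, R         : scale factor and 3x3 rotation between reference and mesh
    """
    # External energy: barycenter-to-attractor vectors projected on the
    # gradient direction, squared and weighted.
    bary = verts[tris].mean(axis=1)                               # barycenters x'_i
    proj = np.einsum('ij,ij->i', grad_dirs, bary - attractors)
    e_ext = np.sum(feat_weights * proj ** 2)

    # Internal energy: deviation of mesh edge vectors from the scaled and
    # rotated reference-shape edge vectors.
    j, k = edges[:, 0], edges[:, 1]
    diff = (verts[j] - verts[k]) - s * (ref_deltas @ R.T)
    e_int = np.sum(diff ** 2)
    return e_ext + e_int
```

II.B.1. Construction of the probabilistic masks (offline)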

Expert delineations are available in the form of binary volumes and are also converted to triangular surface meshes. The first step in constructing the masks is point-based affine registration of all expert-delineated organ-specific surface meshes with an unbiased mean mesh. Analogously to Ref. 7, the unbiased mesh is constructed by registering all meshes to an arbitrary reference mesh to create an initial mean mesh and then iterating the process, each time taking the resulting mean mesh from the previous iteration as the reference mesh for registration. On convergence, we apply the resulting transformations to align the expert binary volumes and average the results to construct a probabilistic mask. The areas of intersection of all binary volumes are assigned a probability of 1, and the regions with probabilities between 0 and 1 represent the areas of uncertainty with respect to affine transformations, see Fig. 3. To smooth the borders, the probabilistic masks are blurred with a Gaussian kernel (σ = 1 mm), which also has the effect of simulating organ variability.
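A minimal sketch of this construction, assuming the expert binary volumes have already been affinely aligned to the unbiased mean mesh, is given below; the function and argument names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_probabilistic_mask(aligned_binaries, voxel_spacing_mm, blur_sigma_mm=1.0):
    """Average aligned expert binary volumes into a probabilistic mask and blur
    it with a 1 mm Gaussian kernel, as in Sec. II B 1 (illustrative sketch).

    aligned_binaries: list of equally sized binary arrays already registered to
    the unbiased mean mesh coordinate frame.
    """
    prob = np.mean([b.astype(np.float32) for b in aligned_binaries], axis=0)
    sigma_vox = [blur_sigma_mm / s for s in voxel_spacing_mm]   # mm -> voxels
    return gaussian_filter(prob, sigma=sigma_vox)
```

II.B.2. Refinement of the probabilistic mask by voxel classification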

The mean meshes of the organs, as constructed above, are registered with the adapted reference meshes using affine


FIG. 3. Axial slices of the probabilistic masks for the mandible (left), brain stem (middle), and left parotid (right). Brighter areas correspond to higher probabilities.

transformations. The resulting transformations are then applied to the probabilistic masks to transfer them to the patient image. In the refinement step, voxels in the transformed probabilistic masks with probabilities between 0 and 1, representing the uncertainty around the segmented boundary, are classified into an organ class and a background class using local low-level image features. For classification, a fast implementation of a kNN classifier based on approximate nearest neighbor search is used,28 with k chosen to be 100. The kNN classifier is based on lazy learning, or instance-based learning, which means that it stores all the training data and postpones the generalization process to the time of classification. kNN is one of the simplest yet powerful machine learning methods; it is nonparametric, easy to use, and requires no prior knowledge about the distribution of the data.

For each voxel v to be classified, we compute a feature vector F_v. The posterior probability of v belonging to class w_i is then computed as p(w_i | F_v) = k_i / k_T, where k_T is the total number of neighbors and k_i, determined by the classifier, is the number of nearest neighbors belonging to class w_i. Computation of posterior probabilities leads to a "soft" classification of the voxels. In order to obtain a hard segmentation, we define a cutoff point T_c, chosen to be 0.5. The probabilistic mask provides a measure of registration accuracy coupled with anatomical variability. Inclusion of the probabilistic mask enforces a shape prior and leads to a segmentation that is relatively smooth at the boundaries. We combine the two probabilities by estimating their weighted mean and thresholding at 0.5. Thus, the decision rule for classifying a voxel is

v \in \begin{cases} w_o, & \text{if } \mu_1\, p(w_o \mid F_v) + \mu_2\, p(m) > 0.5, \\ w_b, & \text{otherwise,} \end{cases}

where w_o is the organ class, w_b is the background class, p(m) is the probability from the mask, and \mu_1 and \mu_2 are the weighting factors, both chosen to be 0.5.
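The refinement and decision rule can be sketched as follows, with scikit-learn's exact kNN standing in for the approximate nearest neighbor implementation of Ref. 28; the array layouts and label convention (1 = organ, 0 = background) are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def refine_uncertain_voxels(train_feats, train_labels, feats, mask_probs,
                            k=100, mu1=0.5, mu2=0.5, cutoff=0.5):
    """Classify voxels in the uncertainty band (0 < p(m) < 1) and fuse the
    kNN posterior with the probabilistic mask (illustrative sketch).

    feats      : (Nv, Nf) feature vectors F_v of the uncertain voxels
    mask_probs : (Nv,)    mask probabilities p(m) at those voxels
    """
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
    p_organ = knn.predict_proba(feats)[:, 1]       # p(w_o | F_v) = k_i / k_T
    fused = mu1 * p_organ + mu2 * mask_probs       # weighted mean of the decision rule
    return fused > cutoff                          # True -> organ, False -> background
```

II.B.3. Features and feature selection (offline)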

The performance of the classifier is highly dependent on how well the features are able to discriminate a certain tissue type from the surrounding structures. The image features used in our method are discussed below; all are computed on a per-voxel basis:

Raw image intensity and 3D position: The importance of these two features comes from the observation that, when manually segmenting an organ of interest, a radiologist always considers both the location and the intensity of the organ in the image.

Smoothed image intensities and derivatives: These features are computed on three different scales (1.7, 2.0, and 2.25 mm) representing the amount of smoothing applied to the image, specifically the standard deviation of the Gaussian kernel, as in the scale-space paradigm.29 The Gaussian-smoothed intensities on the three scales are included as candidate features, as well as local structural features based on the first and second order derivatives.

Local texture and structure properties: These features are based on the gray-level distribution, as represented by the local intensity histogram. The computed features are the first four moments of the histogram: mean, standard deviation, skewness, and kurtosis. Additionally, local entropy and uniformity are included. Local features based on second order texture properties, quantified using a 2D gray level co-occurrence matrix (GLCM), are also included. The GLCM is a statistical method that computes the relationship between pixel pairs in the image in order to measure the variation in intensity at the pixel under consideration;30 in our case, it is based on quantification of three-dimensional voxel neighborhood relationships. For each voxel, a set of 13 GLCM matrices, representing the independent combinations of spatial voxel-to-voxel relationships, is computed and averaged to yield a single GLCM matrix. Four textural descriptors30 are then computed from this matrix: texture contrast (CON), texture homogeneity (HOM), texture energy (ENGY), and cluster tendency (CTE) (the shortened notations are used in Table I). All texture features are computed using neighborhood sizes of 3.0, 5.0, and 7.0 mm.

Shape based features: The eigenvalues (λ1, λ2, λ3) of the Hessian matrix provide important information about the local shape of a structure.31 The magnitude of the eigenvalues, as a measure of object contrast, is included. Several rotationally invariant features, such as the Laplacian, Gaussian curvature, and first order gradient magnitude, are also included, where the Laplacian and Gaussian curvature are computed using the eigenvalues of the Hessian. The structure tensor, also known as the second-moment matrix, is an important tool for analyzing the local coherence of


structures.29,32 The eigenvalues of the structure tensor are included as potential features. All shape based features are computed on three different scales (1.7, 2.0, and 2.25 mm). Each voxel is thus represented by an initial set of 91 features; a complete list of the candidate features is given in Table I. Since all features have different ranges, they are normalized to zero mean and unit variance to ensure that the classifier is not sensitive to the scaling of the features.

II.B.3.a. Feature selection. A high-dimensional feature space not only increases the computational time but can also degrade the classification performance. Moreover, each organ may have different characteristics, such as texture, shape, etc. Therefore, to filter out irrelevant and redundant features, we employ a feature selection step based on sequential forward floating selection (SFFS),33 which results in a subset of organ-specific features. The performance of SFFS is comparable to that of the optimal branch and bound algorithm, while being more computationally efficient. SFFS consists of a forward selection (FS) step followed by backward selection (BS). FS starts from an empty set and adds features sequentially as long as the performance criterion improves. Subsequently, BS iteratively removes the least significant features according to the performance criterion. The outcome of the performance criterion is evaluated at each iteration, and we stop iterating when the dimensionality of the feature space reaches a point after which the improvement is not significant. For the performance criterion, we maximize the area under the receiver operating characteristic (ROC) curve. The


ROC curve is determined by varying the threshold of the classifier and plotting the ratio of false positives vs the ratio of true positives.34 The feature selection is carried out by randomly dividing the training data into two subsets. The classifier is trained for a certain combination of features on the first set, and the performance is then evaluated on the second set. After the feature selection, the classifier is constructed from all the training data, using the optimally selected features only. To avoid redundant data and to speed up the computation, 60% of randomly selected voxels are sampled from each expert segmentation and from the background to train the classifier.

II.B.3.b. Feature weighting. Our boundary refinement is based on the kNN classifier. The prediction quality of kNN is known to be highly dependent on how well the features are able to discriminate between two different instances. Since the discrimination is based on the Euclidean distance function, the kNN classifier is quite sensitive to the presence of redundant, irrelevant, and noisy features. This is circumvented to some extent by the dimensionality reduction performed by the feature selection, as described in Sec. II B 3 a. Feature selection, however, assigns a binary weight to each selected feature; while this is useful for filtering out irrelevant and redundant features, it has been shown that if the features vary in their relevance, classification accuracy can be further improved by feature weighting.35 We propose to find the optimal relevance weight of features that have already been selected by the SFFS method. In this way, we eliminate the computational burden of

TABLE I. Candidate feature list for voxel classification. I is the image, H(i) is the local histogram, L is the number of gray levels, M is the GLCM matrix, and G_\sigma is a Gaussian kernel with standard deviation \sigma.

Voxel intensity and position: I(x, y, z); x, y, z.
1st order texture features: mean \mu = \sum_{i=0}^{L-1} i H(i); standard deviation \sqrt{\sum_{i=0}^{L-1} (i-\mu)^2 H(i)}; skewness \sum_{i=0}^{L-1} (i-\mu)^3 H(i); kurtosis \sum_{i=0}^{L-1} (i-\mu)^4 H(i).
Entropy and uniformity: -\sum_{i=0}^{L-1} H(i) \log H(i); \sum_{i=0}^{L-1} H(i)^2.
GLCM texture features: CON = \sum_{i,j} (i-j)^2 M[i,j]; HOM = \sum_{i,j} M[i,j] / (1 + |i-j|); ENGY = \sum_{i,j} M^2[i,j]; CTE = \sum_{i,j} (i+j-2\mu)^2 M[i,j], with sums over i, j = 0, ..., L-1.
Intensity smoothed: I_\sigma(x, y, z).
1st and 2nd order derivatives: I_x, I_y, I_z, I_xx, I_yy, I_zz, I_xy, I_xz, I_yz.
Gradient magnitude: \sqrt{I_x^2 + I_y^2 + I_z^2}.
Curvature and Laplacian: \lambda_1 \lambda_2 \lambda_3; \lambda_1 + \lambda_2 + \lambda_3.
Magnitude of eigenvalues: \sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}.
Structure tensor: G_\sigma * (\nabla I \nabla I^T), i.e., the Gaussian-smoothed matrix of the products I_x I_x, I_x I_y, ..., I_z I_z.


relevance weighting for both redundant and irrelevant features. Our feature weighting is based on a line search technique. Line search methods are typically used to find the minimum value of a function in one dimension.36 Since the line search is one-dimensional, while searching for the optimal weight for a particular feature, the weights of all other features are kept constant. All weights are initialized to 1. This process is repeated for all the selected features. We repeat the line search step for several iterations until the optimization criterion cannot be further improved, where the relevance criterion is the maximization of the AUC. If the criterion is not improved for a specific feature during the line search, its weight remains unchanged. For the implementation of the line search, we use Brent's method,37 which first finds a bracket containing the desired optimum. A bracket is a triple (x, y, z) such that AUC(x) < AUC(y) > AUC(z). Once the bracket is found using the interpolation/golden section method,37 we iteratively search for the optimal weight. Figure 4 illustrates the improvement in classification accuracy for the same features selected using SFFS and then weighted using our line search technique. It shows that in the first iteration of the line search procedure the AUC increases considerably, demonstrating the effect of feature weighting, and in successive iterations the AUC converges to an optimal value.

II.B.3.c. Organ-specific relevant features. The position feature was the only feature commonly selected for all the organs. With feature relevance weighting, we found that although the position feature was the most commonly selected, it was not necessarily the most dominant feature, depending on the organ type. For example, the most dominant feature (with the largest relevance weight) for the brainstem was the standard deviation of the intensities (a texture feature), whereas the position feature was most relevant for the mandible. Features selected for the brainstem, parotids, submandibular glands, and lymph node levels were quite similar, and primarily consisted of local mean intensity, second order derivatives, and Gaussian curvature. Features selected for the mandible, however, consisted of second order


textural features, first order derivatives, gradient magnitude, and the eigenvalues of the structure tensor.
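Returning to the weighting procedure of Sec. II B 3 b, a sketch of it is given below, using SciPy's Brent line search and scikit-learn's kNN and AUC in place of the original implementation; the bracket, iteration count, and validation split are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import KNeighborsClassifier

def weight_features(X_train, y_train, X_val, y_val, n_iter=5, k=100):
    """One-dimensional line search over the weight of each SFFS-selected
    feature, maximizing the validation AUC (sketch of Sec. II B 3 b)."""
    w = np.ones(X_train.shape[1])                  # all weights initialized to 1

    def auc_for(weights):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_train * weights, y_train)
        return roc_auc_score(y_val, knn.predict_proba(X_val * weights)[:, 1])

    for _ in range(n_iter):                        # repeat until no further improvement
        for f in range(len(w)):
            def neg_auc(wf, f=f):
                trial = w.copy()
                trial[f] = wf
                return -auc_for(trial)
            res = minimize_scalar(neg_auc, bracket=(0.5, 2.0), method='brent')
            if -res.fun > auc_for(w):              # keep the new weight only if AUC improves
                w[f] = res.x
    return w
```

II.B.4. Computation time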

In order to give an estimate of the computation time, a distinction is made between "offline" and "online" calculations. The offline calculations, consisting of atlas creation, probabilistic mask construction, and feature selection and weighting, are performed only once. Online calculations are carried out for each new patient and include the automatic initialization of the models, their adaptation, and mask refinement. The entire online calculation for the five organs took about 10 min on a 2.8 GHz dual-core AMD processor, of which the classification step took approximately 120 s.

II.B.5. Quantitative evaluation

To quantitatively evaluate the automatic segmentations, we compare them to the manual segmentations outlined by the expert. This is carried out by estimating two common measures: the volume overlap fraction, or Dice similarity coefficient (DSC),38 and a geometrical metric, the median symmetric Hausdorff distance (HD), which is evaluated slice-wise. The DSC measure is defined by the following equation:

DSC = \frac{2\, |V_{\mathrm{expert}} \cap V_{\mathrm{automatic}}|}{|V_{\mathrm{expert}}| + |V_{\mathrm{automatic}}|},

where V_expert is the expert delineation and V_automatic is the result of auto-segmentation. The DSC measure varies between 0 and 1, where 0 implies no overlap and 1 represents two identical regions with perfect overlap. Statistical volumetric measures, such as the DSC, can give a good estimate of expert agreement; however, they are insensitive to the exact position of errors in the segmentation. The Hausdorff distance, on the other hand, estimates the degree of mismatch by measuring the distance between the expert and auto-segmented contours.
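Both measures can be sketched as follows; note that the Hausdorff distance is computed here on the voxel sets of each axial slice rather than on extracted contours, which is a simplification of this illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(v_expert, v_auto):
    """Dice similarity coefficient between two binary volumes."""
    inter = np.logical_and(v_expert, v_auto).sum()
    return 2.0 * inter / (v_expert.sum() + v_auto.sum())

def median_slicewise_hausdorff(v_expert, v_auto, spacing_xy):
    """Median over axial slices of the symmetric Hausdorff distance in mm
    (illustrative sketch of the HD metric of Sec. II B 5)."""
    spacing_xy = np.asarray(spacing_xy, dtype=float)
    dists = []
    for z in range(v_expert.shape[0]):
        a = np.argwhere(v_expert[z]) * spacing_xy   # voxel coordinates -> mm
        b = np.argwhere(v_auto[z]) * spacing_xy
        if len(a) == 0 or len(b) == 0:
            continue                                # skip slices missing a contour
        d = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
        dists.append(d)
    return float(np.median(dists))
```

III. RESULTS

III.A. Data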

Following a research ethics approved protocol for retrospective analyses, head and neck CT images of 25 patients were acquired at Princess Margaret Hospital in Toronto, Canada, using a standard field of view; the images did not contain large neck deformations due to disease (only N0 necks). The scan resolution for all datasets was approximately 1 × 1 × 2 mm^3. For atlas construction and refinement training, 15 datasets were used, and the remaining 10 datasets were used to validate the method. The manual delineations of the lymph node levels (I–IV) and of four important organs at risk in the head and neck region (mandible, brainstem, submandibular glands, and parotid glands) were done by an expert, following the guidelines in Ref. 39.

III.B. Validation

FIG. 4. Plot showing the increase in AUC by feature weighting. The AUC value at iteration 0 is achieved from the floating feature selection method.

Tables II and III list the results of the quantitative measures used to evaluate the performance of our method on a set


TABLE II. Segmentation results for all the structures validated on 10 patient images: the table lists the DSC of our automated approach vs manual expert segmentations; the last column gives the mean DSC overlap. Sub.: sub-mandibular glands, L: left, R: right.

Structure              1     2     3     4     5     6     7     8     9     10    Mean
Mandible              0.92  0.92  0.90  0.93  0.94  0.92  0.94  0.94  0.93  0.93  0.93
Brainstem             0.90  0.92  0.87  0.90  0.91  0.91  0.91  0.92  0.92  0.91  0.91
L-parotid             0.80  0.78  0.82  0.87  0.76  0.86  0.86  0.81  0.88  0.87  0.83
R-parotid             0.74  0.78  0.83  0.89  0.78  0.86  0.87  0.78  0.90  0.91  0.83
L-sub. glands         0.82  0.80  0.85  0.89  0.79  0.87  0.83  0.83  0.86  0.81  0.84
R-sub. glands         0.67  0.74  0.82  0.86  0.84  0.84  0.80  0.76  0.83  0.89  0.81
L-node level IB       0.69  0.69  0.71  0.80  0.77  0.81  0.77  0.75  0.82  0.79  0.76
R-node level IB       0.75  0.67  0.81  0.79  0.79  0.85  0.78  0.72  0.83  0.76  0.78
L-node levels II–IV   0.69  0.61  0.65  0.77  0.66  0.70  0.74  0.66  0.72  0.64  0.68
R-node levels II–IV   0.77  0.58  0.77  0.75  0.73  0.69  0.77  0.73  0.76  0.76  0.73

of 10 datasets. Figures 5 and 6 illustrate the segmentation results for various structures on a sample dataset. Comparison to other methodologies is difficult due to the lack of availability of common validation data. The aim of the recently organized Head and Neck Auto-Segmentation Challenges18,19 was to evaluate the performance of automated state-of-the-art algorithms in segmenting organs at risk, such as the mandible, brainstem, and parotid glands, from head and neck CT image data used in radiotherapy planning. The performance criterion was based on two metrics (Sec. II B 5): the DSC measure and the Hausdorff distance. In this paper, the results are validated on the same images used for evaluation of the automated methods in the challenges. Although not validated in the same competitive setting, our method fully satisfies the time constraints posed to the participants of the challenge. Since the challenges included three organs, it is only possible to compare the results for a subset of the structures included in this paper. When compared to the best performing group of the head and neck auto-segmentation challenges,12,17 our average segmentation overlap, or DSC, was slightly better for the mandible (0.92 vs 0.93) and brainstem (0.88 vs 0.91); for the parotids it was slightly worse, left parotid (0.85 vs 0.83) and right parotid (0.86 vs 0.83).

IV. DISCUSSION AND CONCLUSIONS

We have presented a fully automated approach which combines registration and model-based segmentation in a

common framework to segment various organs at risk and target volumes (lymph node levels) in head and neck CT images used in radiation therapy planning. Our framework relies on quantification of the uncertainty in boundary delineation by a probabilistic atlas and then uses classification based on local information to resolve the uncertainty on a per-voxel basis. The driving principle behind the segmentation process is to start at a global level (atlas registration) and then refine the segmentation down to the voxel level (classification refinement). The method also incorporates a feature selection step, which automatically selects the optimal organ-specific features from a pool of textural and geometrical features at different scales. To improve the accuracy of the classifier, we introduce a feature weighting methodology, which has the additional advantage of quantifying the degree of feature importance. Our method is generic and can fundamentally be applied to any application involving segmentation based on deformable models. Combining the advantages of both local low-level features and global high-level prior shape information is a first step toward achieving a more reliable and robust segmentation.

The quantitative validation results demonstrate that for all organs at risk which were part of the MICCAI Head and Neck Auto-segmentation Challenges,18,19 in terms of segmentation accuracy, our method is comparable to one of the best performing groups.12,17 Much of the success of our method is attributed to its ability to operate at the voxel level, which is particularly helpful in regions of low soft

TABLE III. Segmentation results for all the structures validated on 10 patient images: the table lists the median HD (mm) of our automated approach vs manual expert segmentations; the last column gives the average median HD. Sub.: sub-mandibular, L: left, R: right.

Structure              1      2      3      4      5      6      7      8      9      10     Mean
Mandible              2.76   3.91   3.01   2.18   2.18   2.18   2.18   2.18   2.76   3.09   2.64
Brainstem             2.93   2.18   3.34   2.93   2.18   2.76   3.91   2.18   2.76   2.85   2.80
L-parotid             4.98   8.59   6.36   5.52   5.94   5.56   4.38   9.21   3.56   4.14   5.82
R-parotid             5.26   7.04   5.21   4.03   7.87   7.11   5.69   6.84   3.09   4.88   5.70
L-sub. glands         2.93   4.03   3.73   2.76   4.37   3.09   3.52   3.09   3.52   4.14   3.52
R-sub. glands         7.02   4.02   3.81   2.93   3.09   3.52   4.03   5.26   4.88   3.09   4.17
L-node level IB       8.30   7.87   9.76   6.25   5.86   7.04   7.72   10.6   6.9    7.87   7.82
R-node level IB       7.96   9.00   5.01   6.25   6.17   4.14   8.57   9.49   7.04   8.34   7.20
L-node levels II–IV   9.62   9.21   15.5   9.00   13.1   8.57   9.77   15.3   11.3   12.5   11.3
R-node levels II–IV   5.26   6.25   8.47   11.1   8.40   12.4   8.4    12.7   10.5   7.87   9.14


FIG. 5. Qualitative evaluation of our method vs manual segmentations: results superimposed on a sample patient image: axial slice (top), sagittal slice (bottom left), coronal slice (bottom right). The manual contours are shown in green; auto-contours are depicted in blue (mandible), purple (parotids), and red (brainstem).

tissue contrast. An important remark here is that we have used a single, randomly selected atlas image to deform and initialize the models in the patient image. An additional step that could potentially further improve the segmentation accuracy would be to select an atlas image from the

database that is most similar to the patient image being segmented. This is also known as "atlas selection", where the selection can be based on intensity-based or deformation-based similarity measures.9–11 Alternatively, a more popular technique is to use several atlas images from the training

FIG. 6. Qualitative evaluation of our method vs manual segmentations: results superimposed on a sample patient image: axial slice (top), sagittal slice (bottom left), coronal slice (bottom right). The manual contours are shown in green; auto-contours are depicted in blue (lymph node level I), and red (lymph node levels II–IV).


database, resulting in multiple segmentations of the structures, which can then be fused into a single segmentation. The multiatlas and fusion technique was successfully used by the majority of the algorithms (including the winner) submitted to the segmentation challenges,12,17,40 where the fusion step was carried out using the STAPLE method.41 Given the high degree of anatomical variability in the head and neck region, even when using a single atlas image the accuracy of our method is comparable to that of multiatlas-based state-of-the-art approaches. We therefore believe that integration of multiatlas-based segmentation would further benefit our segmentation methodology, although at the cost of reduced computational efficiency.

Comparison to other studies with respect to segmentation of the lymph node levels is difficult due to the lack of common validation data. In a comparable approach combining registration and active shape models, Chen et al.14 reported an average DSC of 0.698 for levels II–IV using a set of 15 patient images, with an average maximum distance error of 9.59 mm. Gorthi et al.15 combined active contours with atlas-based registration to segment levels IA, IB, IIA, IIB, III, and IV; using a leave-one-out strategy on 10 validation patient images, the authors reported mean DSCs of 0.40, 0.61, 0.53, 0.46, 0.43, and 0.36, respectively, resulting in a large range of average HD (7.96–21.81 mm). In comparison, we achieve an average DSC of 0.77 for level IB and 0.71 for levels II–IV, with average median HDs of 7.5 and 10.2 mm, respectively. Commowick et al.7 proposed atlas-based registration to segment lymph node levels. Their method is validated on a database of 45 images; however, the metrics used for


evaluation are sensitivity and specificity, and it is therefore difficult to relate or compare to their results.

We have validated our method on various structures in the head and neck region that are delineated on a daily basis in radiation therapy planning; however, we are able to automatically segment more structures (see Fig. 7), such as the brain, both eyes, and the spinal cord, which is one of the most important organs at risk during radiation therapy treatment. These organs were excluded from this study due to the unavailability of sufficient manual expert delineations. Therefore, in the future we will focus on segmentation and validation of the structures that have been excluded from this study. Also, in this work segmentation of the lymph node levels is limited to N0 nodes; future work will involve evaluating the method on N+ nodes, which is an extremely challenging task due to the large deformations caused by the tumor. Additionally, our results are validated against one observer only; given the significant variation in target delineation, as investigated by Hong et al.,42 a multicenter/multiobserver study would be quite beneficial in giving insight into the robustness, reliability, and stability of our automated approach.

In short, we have shown that our automated segmentation framework is able to segment anatomy in the head and neck region with high accuracy within a clinically acceptable segmentation time.

ACKNOWLEDGMENTS

The authors would like to thank Dr. Stéphane Allaire, Dr. Claudia Leavens, Karl Bzdusek, and Dr. Michael Kaus for their help and support in developing the presented approach. This research has been funded by Philips Healthcare and was carried out under a research agreement between University Health Network and Philips Healthcare.

FIG. 7. Three-dimensional visualization of the segmentation results of our method. The structures validated in this paper are shown: mandible, brainstem, parotids, and lymph node levels, along with structures, such as the spinal cord, eyes, and brain, which have been auto-segmented but have not been validated.

(a) Author to whom correspondence should be addressed. Electronic mail: [email protected]
1 P. M. Harari, S. Song, and W. A. Tomé, "Emphasizing conformal avoidance versus target definition for IMRT planning in head-and-neck cancer," Int. J. Radiat. Oncol., Biol., Phys. 77, 950–958 (2010).
2 E. Weiss and C. F. Hess, "The impact of gross tumor volume (GTV) and clinical target volume (CTV) definition on the total accuracy in radiotherapy," Strahlenther. Onkol. 179, 21–30 (2003).
3 T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models-their training and application," Comput. Vis. Image Underst. 61, 38–59 (1995).
4 V. Pekar, T. R. McNutt, and M. R. Kaus, "Automated model-based organ delineation for radiotherapy planning in prostatic region," Int. J. Radiat. Oncol., Biol., Phys. 60, 973–980 (2004).
5 D. Ragan, G. Starkschall, T. McNutt, M. Kaus, T. Guerrero, and C. W. Stevens, "Semiautomated four-dimensional computed tomography segmentation using deformable models," Med. Phys. 32, 2254–2261 (2005).
6 X. Han, M. S. Hoogeman, P. C. Levendag, L. S. Hibbard, D. N. Teguh, P. Voet, A. C. Cowen, and T. K. Wolf, "Atlas-based auto-segmentation of head and neck CT images," Medical Image Computing and Computer-Assisted Intervention (Springer, New York, 2008), pp. 434–441.
7 O. Commowick, V. Grégoire, and G. Malandain, "Atlas-based delineation of lymph node levels in head and neck computed tomography images," Radiother. Oncol. 87, 281–289 (2008).
8 H. Wang, L. Dong, M. F. Lii, A. L. Lee, R. de Crevoisier, R. Mohan, J. D. Cox, D. A. Kuban, and R. Cheung, "Implementation and validation of a three-dimensional deformable registration algorithm for targeted prostate cancer radiotherapy," Int. J. Radiat. Oncol., Biol., Phys. 61, 725–735 (2005).


9 O. Commowick and G. Malandain, "Efficient selection of the most similar image in a database for critical structures segmentation," Medical Image Computing and Computer-Assisted Intervention (Springer-Verlag, Berlin, 2007), pp. 203–210.
10 T. Rohlfing, R. Brandt, R. Menzel, and C. R. Maurer, "Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains," Neuroimage 21, 1428–1442 (2004).
11 L. Ramus and G. Malandain, "Assessing selection methods in the context of multi-atlas based segmentation," IEEE International Symposium on Biomedical Imaging: From Nano to Macro (IEEE, New York, 2010), pp. 1321–1324.
12 X. Han, L. S. Hibbard, N. O'Connell, and V. Willcut, "Automatic segmentation of head and neck CT images by GPU-accelerated multi-atlas fusion," MIDAS Journal: http://www.insight-journal.org/browse/publication/685 (2009).
13 O. Commowick, S. K. Warfield, and G. Malandain, "Using Frankenstein's creature paradigm to build a patient specific atlas," Medical Image Computing and Computer-Assisted Intervention (Springer-Verlag, Berlin, 2009), pp. 993–1000.
14 A. Chen, M. A. Deeley, K. J. Niermann, L. Moretti, and B. M. Dawant, "Combining registration and active shape models for the automatic segmentation of the lymph node regions in head and neck CT images," Med. Phys. 37, 6338–6346 (2010).
15 S. Gorthi, V. Duay, N. Houhou, M. Bach Cuadra, U. Schick, M. Becker, A. S. Allal, and J. P. Thiran, "Segmentation of head and neck lymph node regions for radiotherapy planning using active contour-based atlas registration," IEEE J. Sel. Top. Signal Process. 3, 135–147 (2009).
16 A. A. Qazi, J. J. Kim, D. A. Jaffray, and V. Pekar, "Probabilistic refinement of model-based segmentation: Application to radiation therapy planning of the head and neck," Medical Imaging and Augmented Reality (Springer-Verlag, Berlin, 2010), p. 403.
17 X. Han, L. S. Hibbard, N. P. O'Connell, and V. Willcut, "Automatic segmentation of parotids in head and neck CT images using multi-atlas fusion," Medical Image Analysis for the Clinic: A Grand Challenge (CreateSpace, Seattle, 2010), pp. 297–304.
18 V. Pekar, S. Allaire, J. J. Kim, and D. A. Jaffray, "Head and neck auto-segmentation challenge," MIDAS Journal: http://www.midasjournal.org/browse/publication/703 (2009).
19 V. Pekar, S. Allaire, A. A. Qazi, J. J. Kim, and D. A. Jaffray, "Head and neck auto-segmentation challenge: Segmentation of the parotid glands," Medical Image Analysis for the Clinic: A Grand Challenge (CreateSpace, Seattle, 2010), pp. 273–280.
20 C. Leavens, T. Vik, H. Schulz, S. Allaire, J. Kim, L. Dawson, B. O'Sullivan, S. Breen, D. Jaffray, and V. Pekar, "Validation of automatic landmark identification for atlas-based segmentation for radiation treatment planning of the head-and-neck region," Proc. SPIE 6914, 69143G (2008).
21 K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-squares fitting of two 3D point sets," IEEE Trans. Pattern Anal. Mach. Intell. 9, 698–700 (1987).
22 F. L. Bookstein, "Principal warps: Thin-plate splines and the decomposition of deformations," IEEE Trans. Pattern Anal. Mach. Intell. 11, 567–585 (1989).


23 W. L. Price, "Global optimization by controlled random search," J. Optim. Theory Appl. 40, 333–348 (1983).
24 R. Storn and K. Price, "Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces," J. Global Optim. 11, 341–359 (1997).
25 J. P. Thirion, "Image matching as a diffusion process: an analogy with Maxwell's demons," Med. Image Anal. 2, 243–260 (1998).
26 P. Castadot, J. A. Lee, A. Parraga, X. Geets, B. Macq, and V. Grégoire, "Comparison of 12 deformable registration strategies in adaptive radiation therapy for the treatment of head and neck tumors," Radiother. Oncol. 89, 1–12 (2008).
27 G. H. Golub and C. F. Van Loan, Matrix Computations (Johns Hopkins University Press, Baltimore, 1996).
28 S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu, "An optimal algorithm for approximate nearest neighbor searching in fixed dimensions," J. ACM 45, 891–923 (1998).
29 T. Lindeberg, Scale-Space Theory in Computer Vision (Springer, New York, 1994).
30 R. M. Haralick, K. Shanmugam, and I. H. Dinstein, "Textural features for image classification," IEEE Trans. Syst. Man Cybern. 3, 610–621 (1973).
31 A. F. Frangi, W. J. Niessen, K. L. Vincken, and M. A. Viergever, "Multiscale vessel enhancement filtering," Medical Image Computing and Computer-Assisted Intervention (Springer-Verlag, Berlin, 1998), pp. 130–137.
32 J. Weickert, Anisotropic Diffusion in Image Processing (Teubner-Verlag, Wiesbaden, 1998).
33 P. Pudil, J. Novovičová, and J. Kittler, "Floating search methods in feature selection," Pattern Recogn. Lett. 15, 1119–1125 (1994).
34 M. H. Zweig and G. Campbell, "Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine," Clin. Chem. 39, 561–577 (1993).
35 D. Wettschereck, D. W. Aha, and T. Mohri, "A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms," Artif. Intell. Rev. 11, 273–314 (1997).
36 J. Nocedal and S. J. Wright, Numerical Optimization (Springer-Verlag, Berlin, 1999).
37 W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes (Cambridge University Press, Cambridge, 2007).
38 L. R. Dice, "Measures of the amount of ecologic association between species," Ecology 26, 297–302 (1945).
39 V. Grégoire, A. Eisbruch, M. Hamoir, and P. Levendag, "Proposal for the delineation of the nodal CTV in the node-positive and the post-operative neck," Radiother. Oncol. 79, 15–20 (2006).
40 J. Yang, Y. Zhang, L. Zhang, and L. Dong, "Automatic segmentation of parotids from CT scans using multiple atlases," Medical Image Analysis for the Clinic: A Grand Challenge (CreateSpace, Seattle, 2010), pp. 323–330.
41 S. K. Warfield, K. H. Zou, and W. M. Wells, "Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation," IEEE Trans. Med. Imaging 23, 903–921 (2004).
42 T. S. Hong, W. A. Tome, R. J. Chappell, and P. M. Harari, "Variations in target delineation for head and neck IMRT: An international multi-institutional study," Int. J. Radiat. Oncol., Biol., Phys. 60, S157–S158 (2004).
