Current Opinion in Solid State and Materials Science 17 (2013) 93–106

Computational methods for materials characterization by electron tomography

Jose-Jesus Fernandez
National Centre for Biotechnology, National Research Council (CNB-CSIC), Campus UAM, Darwin 3, Cantoblanco, 28049 Madrid, Spain

Article history: Available online 30 March 2013
Keywords: Materials science; Electron tomography; Image processing; Computational methods; Tomographic reconstruction; Segmentation

Abstract

Electron tomography (ET) is a powerful imaging technique that enables thorough three-dimensional (3D) analysis of materials at the nanometre and even atomic level. Recent technical advances have established ET as an invaluable tool for carrying out detailed 3D morphological studies and deriving quantitative structural information. Originally developed in the life sciences, ET was rapidly adapted to materials research and has already provided new and unique insights into a variety of materials. The principles of ET are based on the acquisition of a series of images of the sample from different views, which are subsequently processed and combined to yield the 3D volume or tomogram. Thereafter, the tomogram is subjected to 3D visualization and post-processing for proper interpretation. Computation is of utmost importance throughout the process, and the development of advanced specific methods is proving essential to take full advantage of ET in materials science. This article comprehensively reviews the computational methods involved in these ET studies, from image acquisition to tomogram interpretation, with special focus on the emerging methods.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Electron tomography (ET) allows determination of the three-dimensional (3D) structure and detailed characterization of materials at the nanometre scale [1]. ET has become an invaluable tool that materials scientists use routinely to derive qualitative and quantitative structural information and analyze physicochemical properties at the nanoscale [2,3]. Thanks to the latest technical developments, there are exciting prospects for pushing the resolution attainable by ET towards the atomic level [4]. This would enable a full understanding of materials, which is essential for many areas of nanoscience and nanotechnology.

ET was originally pioneered in the life sciences forty years ago. There, the technological advances of the last few years have allowed the realization of its unique potential to visualize the molecular organization of native cells at a resolution of a few nanometres. As a consequence, ET is now established as an important imaging technique in structural cell biology [5,6]. The development of alternative imaging modes in the electron microscope enabled the adaptation of the technique to the analysis of inorganic materials at the beginning of the past decade [7–11]. Since then, ET has progressively consolidated its position as a key tool in materials science. Thus, in the past decade a number of major breakthroughs have been possible thanks to ET, which have provided important 3D structural and chemical information about a variety of materials [12–16].

ET is based on the acquisition of a series of projection images of a single sample from different views. The 3D volume (usually known as the tomogram) is then computed from these images. Afterwards, the tomogram is subjected to visualization and interpretation. Therefore, computational methods play an essential role in the data processing and analysis. Many of the methods currently applied have been adopted directly from those used in life sciences [17]. However, lately there have been remarkable developments that deal with the particularities found in materials science and that have turned out to be central to making the most of ET [2,4].

This article reviews the computational methods applied in ET studies in materials science. The standard methods required for the different stages, from image acquisition to tomogram interpretation, are explained in detail. Furthermore, recent advanced developments (e.g. discrete and compressed sensing tomographic reconstruction) are thoroughly described.

2. Automated image acquisition

The principle of ET is 3D reconstruction from a series of projection images acquired with the microscope. In transmission electron microscopy (TEM), different imaging modes can be operated. Although the mode most often used for ET in life sciences (bright-field, BF-TEM) provides projection images of the relatively thin specimens being imaged [5], this does not hold for inorganic materials because of diffraction contrast [11]. Alternative imaging modes have thus been developed that are better suited to the properties of these materials and meet the so-called projection requirement [11]. The two most common imaging modes are high-angle annular dark-field scanning TEM (HAADF-STEM) and energy-filtered TEM (EF-TEM). In HAADF-STEM (also known as Z-contrast imaging), images are formed by high-angle, incoherently scattered electrons resulting from interactions close to the atomic nucleus. The contrast in these images is roughly proportional to the square of the atomic number Z and varies monotonically with the sample thickness, thereby fulfilling the projection requirement [11]. In EF-TEM, energy-filtered images are recorded by selecting only electrons that have lost a particular range of energy. As the target signal is much lower than the background, it is necessary to pre-process the images to remove the latter and thus isolate the compositional information [18]. To a reasonable approximation, EF-TEM images also satisfy the projection requirement [11]. Therefore, images acquired with these modes can be considered projections of the specimen, i.e. the 3D information of the original structure is collapsed onto 2D images.

In ET, a sample is introduced into the electron microscope and a series of images (the so-called tilt-series) is recorded by tilting the sample to different angles, typically around a single axis perpendicular to the electron beam. A typical acquisition session yields a tilt-series with images over a tilt range of ±60° or ±70° at small increments of 1–2° (Fig. 1).

The establishment of ET as an important tool in nanoscience has been driven by the advent of computer-automated image collection [19,20]. Software programs take charge of the microscope optics, sample stage and camera with the aim of automatically acquiring the images of the tilt-series from the same position on the sample and under the same imaging conditions, using an electron dose fractionation scheme. This involves a loop consisting of tilting, tracking, focusing and imaging for each view to be taken of the specimen. Due to mechanical imperfections, samples undergo shifts in X, Y and Z during acquisition, which are compensated by means of tracking (X, Y) and focusing (Z). Modern image collection systems are equipped with predictive abilities (based on specimen movement precalibration, a geometrical rotation model, or the actual shifts in the current acquisition session), which allow rapid estimation of and compensation for the shifts without the need for tracking and focusing ([17] and references therein). Thus, predictive image collection significantly increases throughput. In addition to commercial packages from manufacturers of microscopes and related systems (e.g. FEI Xplore3D, JEOL TEMography, Gatan Digital Micrograph 3D Tomography, TVIPS EMMenu), several software packages for tilt-series collection have been developed by the ET community [17]: Leginon, SerialEM, TOM, UCSF Tomography.

The tilt range and the angular increment strongly influence the resolution of the 3D reconstruction, as illustrated elsewhere [4,11,21]. Due to technical limitations of electron microscopes, the maximum tilt range in ET is around ±70°. The lack of specimen views at high tilt angles causes artefacts in the tomograms, as described below. Sometimes another tilt-series is taken with the specimen rotated by 90° (the double-axis tilting geometry) for better angular coverage and a substantial reduction of these artefacts [22,23]. As far as the angular increment is concerned, radiation damage to the sample is the limiting factor. The tilt increment is typically in the range 1–2°, or slightly lower, which yields a number of images usually in the range 60–200. Some work has proposed computationally estimating intermediate views by means of adaptive interpolation, thus increasing the angular sampling [24]. This improves the resolution of the final tomogram by reducing reconstruction artefacts, which may be particularly strong when a low number of images is available. Alternatively, the use of emerging powerful reconstruction algorithms (see below) may be particularly convenient under such circumstances.

Fig. 1. Single-tilt axis data acquisition geometry. The specimen is imaged in the microscope by tilting it over a range of typically ±60° or ±70° in small tilt increments around a single axis that is perpendicular to the electron beam. As a result, a set of projection images (the so-called tilt-series) is collected.

3. Tilt-series alignment

During acquisition, the imperfections of the mechanical tilt system produce distortions in the images (e.g. shifts). As mentioned above, the larger component of these distortions is compensated by the automated collection procedure. However, a more accurate alignment of the images is needed afterwards. The computational alignment is intended to bring the images into a common coordinate system by correcting for shifts, rotations and other distortions [5,25]. This step is of paramount importance if ultra-high resolution is to be achieved.

The main alignment procedure in materials science is performed by cross-correlation (CC) of adjacent images [11]. For the computation, the images may be stretched in the direction perpendicular to the tilt axis by an amount equal to the cosine of the tilt angle. This stretching aims to better represent the spatial relationship of the content of consecutive images so as to yield a more precise CC coefficient. One disadvantage of this approach is that only translational alignment is carried out. Accurate determination of the direction of the tilt axis and compensation for other distortions (rotational misalignment, magnification changes, etc.) are still required, which is normally done by manual intervention [11].

The other main alignment technique is based on tracking of fiducial markers [17]. These markers are specifically added to the sample during preparation (typically colloidal gold beads), or some particular structures present in the sample (e.g. nanoparticles) may be used for this purpose [26]. The procedure usually starts from the result of the CC-based alignment. Then, the coordinates of the fiducial markers are determined manually, automatically or semi-automatically throughout the images of the tilt-series. From these coordinates, the images are then mutually aligned by means of a least squares procedure that minimizes the alignment error as a function of the shifts, rotations and other parameters included in the projection model.
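As an illustration of the CC-based translational alignment with cosine stretching described above, the following is a minimal numpy/scipy sketch. The function names are illustrative, the correlation is a plain (unfiltered) cross-correlation, and shifts are estimated only to integer precision; real implementations add filtering, windowing and subpixel interpolation.

```python
import numpy as np
from scipy import ndimage

def cosine_stretch(img, tilt_deg):
    """Stretch an image by 1/cos(tilt) along X (axis 1), about the image
    centre, so that its content matches a less-tilted neighbour. The tilt
    axis is assumed to run along Y (axis 0)."""
    c = np.cos(np.deg2rad(tilt_deg))
    ny, nx = img.shape
    # affine_transform maps output coords to input coords:
    # x_in = c * (x_out - cx) + cx, i.e. the output is stretched by 1/c.
    matrix = np.array([[1.0, 0.0], [0.0, c]])
    centre = np.array([(ny - 1) / 2.0, (nx - 1) / 2.0])
    offset = centre - matrix @ centre
    return ndimage.affine_transform(img, matrix, offset=offset, order=1)

def cc_shift(ref, img):
    """Integer (dy, dx) displacement of `img` relative to `ref`, from the
    peak of the FFT-based cross-correlation."""
    cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(cc)), cc.shape), float)
    shape = np.array(cc.shape)
    peak[peak > shape / 2] -= shape[peak > shape / 2]  # wrap to signed shifts
    return peak

def align_adjacent(tilt_series, tilt_angles_deg):
    """Accumulate pairwise shifts between adjacent (stretched) images."""
    shifts = [np.zeros(2)]
    for i in range(1, len(tilt_series)):
        a = cosine_stretch(tilt_series[i - 1], tilt_angles_deg[i - 1])
        b = cosine_stretch(tilt_series[i], tilt_angles_deg[i])
        shifts.append(shifts[-1] + cc_shift(a, b))
    return np.array(shifts)  # apply with scipy.ndimage.shift
```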

Alignment based on fiducial markers is usually much more accurate than the CC-based one. However, it is not always possible to count on such markers on the sample.

There is a family of alignment techniques often applied in life sciences that has barely been employed in materials science (e.g. [27]), despite its potential usefulness in this field. Here, in cases where the use of gold beads is not possible or convenient, the alignment is instead based on virtual markers, i.e. specific features, or simply patches, present in the images [17]. These features or patches are tracked automatically throughout the tilt-series, usually by cross-correlation. The coordinates collected this way are then input to the least squares procedure referred to in the previous paragraph. These alignment techniques can be precise, particularly with high-contrast images such as those frequently found in materials science.

Currently, there is a pressing need for the development of alignment procedures suitable for atomic-scale ET, and there have already been some proposals. The authors of [28] presented a marker-less refinement algorithm that, starting from an initial CC-aligned tilt-series, performs an iterative loop in which, at each iteration, an optimum solution is selected from trial tomograms calculated for an ensemble of image shifts and other parameter values. The objective function to maximize is the contrast in subvolumes containing sharp features, which proves to be reasonably indicative of the tomogram resolution. The optimization process may be accomplished by means of an exhaustive grid search or more efficient global optimization strategies. For the sake of improved robustness, the method has the option of aligning the images to projections calculated from the optimal tomogram in each iteration (this strategy, commonly known as projection-matching in life sciences [17], has scarcely been used in materials science, e.g. [29]).

On the other hand, a strategy based on alignment of the centre-of-mass has been proposed recently [16]. The approach first aligns the images in the direction parallel to the tilt axis. For this, the images in the tilt-series are projected onto the tilt axis (without loss of generality, let the tilt axis run along the Y axis), and the resulting 1D curves are all aligned to that of the untilted image, which is taken as a reference. Alignment in the direction perpendicular to the tilt axis (the X axis) is then based on the property of the Radon transform which states that the projection of the centre-of-mass is the centre-of-mass of the projection. Here, the images are projected onto the X axis, and the centre-of-mass of each projection is calculated. The images are then shifted so that their centre-of-mass is at the origin of the X axis, thereby making them all mutually aligned. As a result, the reconstructed 3D tomogram will also have its centre-of-mass placed at the origin of the X axis. A prerequisite for this strategy is to have the tilt axis running along one of the major image axes. This strategy has already been used in several atomic-scale ET works [16,30]. Fig. 2 sketches how the centre-of-mass method works on an example case.

Fig. 2. Alignment method based on the centre-of-mass. (a) A 3D model used for illustrative purposes. (b) A tilt-series of 71 images in the range ±70° at an interval of 2° was simulated. The tilt axis runs along the Y axis. In the simulation, misalignment of the images was imposed. Several projection images of the tilt-series are shown here. (c) The centre-of-mass method relies on projection of the images onto the Y axis, and later onto the X axis. This panel shows the stack of images in the tilt-series (only that at 50° is explicitly shown) and the results of the projections onto the X and Y axes. Misalignment of the projections, in both the X and Y axes, is apparent. (d) The method first aligns the images in the direction parallel to the tilt axis by projecting them onto the Y axis, and the resulting projections are aligned to that of the untilted image (0°). This panel shows the stack of aligned images (only that at 0° is explicitly shown) and their projections onto the Y axis. Those Y-aligned images are then subjected to the centre-of-mass method for X-alignment. They are projected onto the X axis and the centre-of-mass is calculated. The images are then shifted in the X direction to place the centre-of-mass at the origin of the X axis. This panel shows the projections onto the X axis of the aligned images, highlighting the centre-of-mass (CM in the panel) placed at the origin of that axis.
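A minimal numpy/scipy sketch of the X-alignment step of the centre-of-mass strategy just described (it assumes the Y-alignment against the untilted reference has already been done, takes the image centre as the common origin, and presumes prior background subtraction so that the centre of mass is meaningful):

```python
import numpy as np
from scipy import ndimage

def com_align_x(tilt_series):
    """Align a (Y-aligned, background-subtracted) tilt-series along X
    using the Radon-transform property that the centre of mass of a
    projection is the projection of the centre of mass. The tilt axis
    is assumed to run along Y (axis 0)."""
    nx = tilt_series[0].shape[1]
    x = np.arange(nx)
    aligned = []
    for img in tilt_series:
        profile = img.sum(axis=0)                   # project onto the X axis
        com = (profile * x).sum() / profile.sum()   # centre of mass along X
        dx = (nx - 1) / 2.0 - com                   # move the CM to the centre
        aligned.append(ndimage.shift(img, (0.0, dx), order=1))
    return np.stack(aligned)
```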

4. Tomographic reconstruction

The reconstruction problem in ET is to obtain the 3D volume from the set of aligned images in the tilt-series. The mathematical principles of tomographic reconstruction are based upon the central section theorem, which states that the Fourier transform (FT) of a 2D projection of a 3D object is a central section of the 3D FT of the object [5,11,31]. Therefore, the 3D FT of the specimen can be computed by assembling the 2D FTs of the images in the tilt-series, and an inverse FT then yields the 3D structure (Fig. 3). One problem of this approach is the non-trivial interpolation in Fourier space [5,11,31]. As a consequence, other methods are applied in practice.

Weighted backprojection (WBP) is a standard tomographic reconstruction method. It is essentially equivalent to the Fourier approach just described but works in real space [5,11]. WBP assumes that the projection images represent the amount of mass density encountered by imaging rays. The method simply distributes that specimen mass evenly over computed backprojection rays (Fig. 3). When this process is repeated for all the projection images in the tilt-series, backprojection rays from the different images intersect and reinforce each other at the points where mass is found in the original structure. Therefore, the 3D mass of the specimen is reconstructed from a series of 2D projection images. The backprojection process involves an implicit low-pass filtering that makes reconstructed volumes strongly blurred. In practice, in order to compensate for the transfer function of the backprojection process, a high-pass filter (i.e. weighting) is first applied to the projection images, hence the term "weighted backprojection". This weighting is necessary to properly represent the high-frequency information in the reconstruction. The weighting filter commonly follows a linear ramp, from 0 at the origin to a maximum value at the maximum frequency. Sometimes a falloff curve is also used to reduce the contribution of the high-frequency components, where noise is predominant. For a detailed description of the method, refer to [5,11].

The relevance of WBP in ET mainly stems from its computational simplicity. Its disadvantage is its sensitivity to the conditions found in ET, namely the limited tilt range and noise.

There exist alternative real-space reconstruction algorithms that formulate the 3D reconstruction problem as a large system of linear equations to be solved by iterative methods [31]. They are robust against the particularities of limited-angle data and noise in ET. In essence, these methods refine the volume progressively by minimizing the error between the experimental projection images and the equivalent projections calculated from the reconstructed volume [11]. Many iterative algorithms share this common idea but differ in the way the minimization is accomplished. A well-accepted iterative method in the ET field is SIRT, which stands for Simultaneous Iterative Reconstruction Technique [32]. In every iteration of SIRT, (1) projections from the current volume are computed; (2) the error between the experimental projections and those computed from the volume is calculated; and (3) the volume is refined by backprojection of the average error (Fig. 3). Under double-tilt axis geometry, this iterative loop should cover the projections from the two orthogonal tilt-series, or alternate between them [23].

Fig. 3. Principles and methods of tomographic reconstruction. (left) The Fourier transforms (FTs) of the acquired images are central sections of the 3D FT of the object, and can be assembled in Fourier space. An inverse FT would yield the 3D structure of the object. The grey triangles denote the missing wedge, a region where no information is available due to the lack of views at those tilt angles. The tilt axis is oriented perpendicular to the sheet. (centre) WBP. The projection images in the tilt-series are projected back into the volume to be reconstructed. (right) SIRT. The reconstruction is progressively refined by minimizing the average error between the experimental and the calculated projections. Forward projection denotes the process of calculating projections from the volume at the current iteration. Backward projection denotes the process of backprojecting the average error.
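To make the loop concrete, below is a toy SIRT for a single 2D slice in Python/numpy. The rotate-and-sum projector, the crude relaxation factor and the function names are illustrative stand-ins for this sketch; production codes (e.g. TOMO3D, IMOD) use proper ray-driven projectors and per-ray normalization.

```python
import numpy as np
from scipy import ndimage

def forward_project(slice2d, angles_deg):
    """Toy parallel-beam projector: rotate the slice and sum its rows."""
    return np.stack([ndimage.rotate(slice2d, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def back_project(sino, angles_deg, shape):
    """Adjoint-like operator: smear each 1D projection and rotate it back."""
    bp = np.zeros(shape)
    for row, a in zip(sino, angles_deg):
        bp += ndimage.rotate(np.tile(row, (shape[0], 1)), a,
                             reshape=False, order=1)
    return bp / len(angles_deg)

def sirt(sino, angles_deg, shape, n_iter=30):
    """Minimal SIRT: backproject the average projection error and update."""
    relax = 1.0 / shape[0]   # crude step size matched to this toy projector
    x = np.zeros(shape)
    for _ in range(n_iter):
        err = sino - forward_project(x, angles_deg)        # (2) error
        x += relax * back_project(err, angles_deg, shape)  # (3) refinement
        np.clip(x, 0.0, None, out=x)   # optional positivity constraint
    return x
```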

In materials science, SIRT was rapidly adopted as a standard method because of its robustness against the experimental conditions in ET (e.g. [11,13,33,34]). Fig. 4 shows a comparison of the performance of WBP and SIRT on a dataset of Pt/Ru nanoparticles within MCM-41 mesoporous silica. The improvement in contrast and in the definition of features and edges with SIRT is apparent.

One of the disadvantages of iterative methods is their computational demands. Approximately, every iteration takes twice the time of WBP, and the optimal number of iterations is in the range 30–50 [35], which makes these methods up to two orders of magnitude slower than WBP. Fortunately, these algorithms are very well suited to parallel processing, so high-performance computers have traditionally been used to cope with these demands [36]. Recently, the advent of powerful multicore desktop computers and the availability of optimized implementations have made it possible to obtain SIRT tomograms in a matter of minutes [37]. Exploitation of the power of graphics processing units (so-called GPU processing) improves the speed by an extra factor [38,39].

The limited tilt range in ET results in a region empty of information in the Fourier space of the 3D reconstruction (the so-called 'missing wedge', Fig. 3). A ±70° tilt range implies that 22% of the information is missing. The use of double-tilt axis acquisition geometry significantly reduces the missing information (down to 7% for a ±70° tilt range), which is then confined to a pyramid instead of a wedge, and reduces the associated artefacts [22]. The missing wedge makes the resolution of the reconstruction anisotropic (i.e. direction-dependent). In real space, it produces artefacts such as blurring of spatial features in the beam direction. As a result, some features look elongated in that direction (i.e. there is a significant loss of resolution in the Z-direction), features oriented perpendicular to the tilt axis tend to fade from view, and others are not resolved at all (see Fig. 4).

Estimation of the actual resolution attained in a tomogram is not an easy task. The Crowther criterion [40] has been widely used to give rough estimates [11]. Under the assumption of a perfectly aligned tilt-series and a fully covered ±90° tilt range, it states that the resolution along the tilt axis (Y-direction) is that of the projection images and the resolution along the X-direction is given by d_x = πD/N, where D is the diameter of the reconstructed volume and N is the number of images in the tilt-series. The resolution along the beam direction would be d_z = e_xz · d_x, which accounts for the resolution loss due to the missing wedge through the elongation factor e_xz = √[(α + sin α cos α) / (α − sin α cos α)], where α denotes the maximum tilt angle [11]. Recently, an approach to estimate the actual resolution directly from the tomogram has been proposed [35]. It is based on characterization of the PSF (point spread function) estimated at the edges of reconstructed features with sharp boundaries, such as gold particles. The results point out that the Crowther criterion may be too conservative.
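As a worked illustration of these estimates, the short sketch below computes d_x, d_z and the missing-information fraction 1 − α/90° for an assumed example dataset; for a ±70° tilt range it reproduces the 22% figure quoted above.

```python
import numpy as np

def crowther_estimates(D, N, alpha_deg):
    """Crowther-style resolution estimates for single-axis ET.
    D: diameter of the reconstructed volume; N: number of images;
    alpha_deg: maximum tilt angle in degrees."""
    d_x = np.pi * D / N                              # resolution along X
    a = np.deg2rad(alpha_deg)
    e_xz = np.sqrt((a + np.sin(a) * np.cos(a)) /
                   (a - np.sin(a) * np.cos(a)))      # elongation factor
    missing = 1.0 - alpha_deg / 90.0                 # missing wedge fraction
    return d_x, e_xz * d_x, missing

# Example (assumed values): D = 100 nm, 71 images over a +/-70 deg range
d_x, d_z, missing = crowther_estimates(100.0, 71, 70.0)
print(f"d_x = {d_x:.1f} nm, d_z = {d_z:.1f} nm, missing = {missing:.0%}")
# -> d_x = 4.4 nm, d_z = 5.8 nm, missing = 22%
```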

WBP and SIRT constitute the most common methods used so far. Although SIRT performs much better, it is still sensitive to the missing information in Fourier space, as shown in Fig. 4. The resulting artefacts hinder proper quantitative analysis of the tomograms. Lately, a number of methods have appeared in the field that yield tomograms better suited to the extraction of quantitative structural information. These emerging methods are covered in more detail in a section below.

Fig. 4. Three-dimensional reconstruction with WBP (left) and 50 iterations of SIRT (right) of a dataset of Pt/Ru nanoparticles within MCM-41 mesoporous silica. The XZ, XY and ZY planes are shown in the top-left, bottom-left and bottom-right panels, respectively. The planes containing the Z axis clearly show the effect of the missing wedge, in the form of severe blurring, elongation and fading-out of features. Furthermore, it is clearly seen that the contrast in the SIRT reconstruction is much better. The tilt-series contained images in the range ±70° at an interval of 2°. TOMO3D was used to compute these reconstructions [37].

5. Visualization and interpretation of tomograms

Once the tomogram is computed, the next set of computational stages is related to visualization and interpretation of the information within. In general, structures are visualized in 3D by means of surface rendering, often complemented by 2D cross-sectional views. One key step prior to 3D visualization is segmentation, which aims to decompose the tomogram into its structural components by identifying the sets of voxels that constitute them and extracting them from the background. Quantitative interpretation of tomograms also relies on segmentation. This task is severely hampered by the low signal-to-noise ratio (SNR) of tomograms, the artefacts due to the missing wedge in Fourier space, and the complexity of the structures under study. Moreover, segmentation is strongly case-specific and largely depends on the structures under investigation.

5.1. Preprocessing for segmentation

The low SNR may pose a significant hurdle for segmentation. As a consequence, noise reduction techniques are normally applied as a pre-processing step before segmentation and visualization, and a myriad of techniques exist [17]. The simplest techniques employed in experimental works are based on low-pass filtering, normally implemented as a weighted average (e.g. following a Gaussian function) of the voxels in a neighbourhood [41–43]. Though noise is reduced, these techniques blur features of interest. This blurring is sometimes compensated for by means of edge enhancement techniques [43]. The median filter is a non-linear method with particularly good abilities to reduce shot noise while preserving edges; despite this, it has seldom been applied in the field [34]. It consists of substituting a voxel by the median of its neighbours (the values are sorted and the value placed in the middle is picked as the new value of the voxel).
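Both of these filters are one-liners with scipy; a minimal sketch (the input file name and parameter values are placeholders):

```python
import numpy as np
from scipy import ndimage

tomo = np.load('tomogram.npy')  # hypothetical 3D tomogram array

# Gaussian low-pass filter: weighted average over each voxel's
# neighbourhood; reduces noise at the expense of blurring edges.
smoothed = ndimage.gaussian_filter(tomo, sigma=1.5)

# Median filter over a 3x3x3 neighbourhood: good at suppressing
# shot noise while preserving edges.
despeckled = ndimage.median_filter(tomo, size=3)
```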

The most sophisticated and powerful noise reduction method in the field is anisotropic nonlinear diffusion (AND) [44]. AND is inspired by diffusion, the physical process whereby concentration differences are equilibrated as a function of time without creating or destroying mass. In AND the density values play the role of concentration, and the aim is to diffuse (smooth) the density while preserving the edges as much as possible. AND achieves feature preservation and enhancement because the strength and direction of the filtering are adaptively tuned to the local structure around each voxel. This local structure is estimated by eigen-analysis of the structure tensor, which yields three orthogonal eigenvectors (v_i) and their corresponding eigenvalues (λ_i). The first eigenvector points in the direction of maximum variation, whereas the third points in the direction of minimum variation. Based on the relative values of the eigenvalues, basic local structures can be recognized, which are then used to tune the filtering accordingly (Fig. 5). In linear features, the filtering should be applied along the direction pointed to by the third eigenvector v3, while the strength of the smoothing along the other eigenvectors depends on the gradient (the higher the value, the lower the strength). In locally planar features, however, it should be applied along the directions with lower density variation (i.e. those pointed to by the eigenvectors v2 and v3). This type of feature is predominant in ET due to the blurring effect of the missing wedge [44]. AND has been successfully applied in this field, particularly to enhance mesoporous silica [33,45,46] (Fig. 6).

Fig. 5. Basic local structures considered in anisotropic nonlinear diffusion. The local structure around each voxel is determined based on an eigen-analysis that provides three orthogonal eigenvectors (v_i) and their corresponding eigenvalues (λ_i). This analysis allows identification of basic geometric figures, as shown, and the strength and direction of the smoothing are adaptively tuned so that the edges of these structures are preserved and enhanced. Therefore, in linear structures the filtering should be applied along the major direction (v3), while in planar ones it should be applied across the plane defined by (v2, v3).

Fig. 6. Noise reduction. A comparison of slices (left, as reconstructed; right, after noise reduction with anisotropic nonlinear diffusion) taken from tomograms of MCM-48 (top) and MCM-41 (bottom) silica. The noise reduction and feature enhancement are apparent.

Other preprocessing operations occasionally applied are intended to improve the contrast and enhance the boundaries of the structures [47]. Thus, some operators well known in the image processing field have been used, such as histogram equalization [41], the Sobel filter [14], unsharp masking [48], Difference of Gaussians (DoG) [49] or combinations of morphological operations [43]. However, the contribution of these steps might be minor if noise reduction has been done with AND.

5.2. Segmentation

Thresholding is one of the most common approaches to segmentation. The technique yields a binary volume where all voxels with density lower than a given threshold are masked out as background. If several structures with different density ranges are present in the tomogram, several threshold values can be used to extract them all. Fig. 7 shows an example of 3D visualization after thresholding.

Setting the value of the threshold(s) is central. The procedure often includes the calculation of a histogram of the tomogram densities (i.e. a curve showing the voxel count as a function of density) because the valleys of the histogram are potential candidates for threshold values. Though manual tuning is still employed [34,50], automated methods for objective threshold determination, such as Otsu's [51], are now common [41,48,52,53]. Under the assumption that the tomogram has a bi-modal histogram (i.e. the tomogram contains two classes of voxels, e.g. foreground and background), the Otsu method calculates the optimal threshold separating the two classes so that their intra-class variance is minimal and the inter-class variance is maximal. The extension to multiple thresholds is straightforward (referred to as the Multi-Otsu method).

One disadvantage of global thresholding techniques is that they cannot account for local density variations that appear in tomograms, for instance due to the effects of the missing wedge, among other factors. Alternative local thresholding methods try to find local thresholds by taking into account the statistics or density ranges in the neighbourhood of every point. One of these methods determines the local thresholds by comparing the experimental projection data with reprojections from the segmented tomogram, showing promising results in which the missing wedge hardly affected the segmentation of synthetic volumes [54]. However, these methods have not yet been applied extensively in the field.

One drawback of thresholding is that it segments based on density alone, with no consideration of the spatial relationship among voxels. So the voxels segmented by thresholding need not be contiguous, spurious voxels may be included in the selection, and there might be holes within the segmented regions. Furthermore, these problems are exacerbated by the residual noise present in the tomogram. As a consequence, segmentation by thresholding is often followed by sequences of morphological operations (dilation, erosion, closing, opening, etc.) aimed at refining the results [41,52] (see the sketch below).

Thus far, thresholding has been invaluable to produce 3D views of datasets where the features of interest appear relatively isolated and have a readily identifiable density range in comparison to the rest of the structures in the tomogram (e.g. nanoparticles embedded in a silica support [33,45]). However, as ET is applied to more and more varied materials (e.g. with low contrast, dense particle packing, intricate structures, or small density differences between structures), the limitations of thresholding are becoming evident; such materials are even challenging for more sophisticated automated segmentation methods.
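A minimal sketch of such a thresholding-plus-morphology pipeline with scikit-image and scipy (the input file, structuring-element sizes and minimum component size are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.morphology import ball, binary_closing, binary_opening

denoised = np.load('denoised_tomogram.npy')  # hypothetical 3D array

# Global Otsu threshold separating foreground from background.
mask = denoised > threshold_otsu(denoised)

# Morphological refinement: opening removes spurious voxels,
# closing fills small holes inside the segmented regions.
mask = binary_opening(mask, ball(1))
mask = binary_closing(mask, ball(2))

# Discard tiny connected components as residual noise.
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
keep = np.arange(1, n + 1)[sizes >= 50]
mask = np.isin(labels, keep)
```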

Thus, manual segmentation has been the method of choice in a number of cases [34,55,56]. Nonetheless, this is a time-consuming, tedious and, more importantly, subjective task. As a consequence, ad hoc semi-automated strategies that combine different segmentation methods and other image processing operations are devised for specific problems.

Several examples of these ad hoc approaches have been reported. A combination of thresholding, the Watershed transform and the Hough transform was designed to detect and characterize spheres associated with nanoparticles [43]; the Hough transform is applied to the individual regions found by the Watershed transform to find the parameters (location and radius) describing a sphere. In [41] a sequence of several morphological operations was used to detect them. The approach devised in [49] to detect nanoparticles relies on thresholding followed by a Watershed-based refinement stage (Fig. 7B). Here, the Watershed transform is applied to a Euclidean distance map computed from the thresholding result, in which the distance of every voxel to the nearest background voxel is represented. A similar strategy was used by Grothausmann et al. [52]. Estimation of the radii of the nanoparticles is then derived from the volume of the regions, as is standard in the field [34,49,52,55]. In all these cases, preprocessing by noise filtering and/or edge enhancement was applied. These effects are, however, obtained directly by denoising with AND, whose results can then be fed to a Watershed segmentation algorithm to determine the centres of the nanoparticles [45]. In that work, surface patches around the nanoparticles were also extracted for further characterization and analysis [45].

The previous paragraph shows that the Watershed transform has the potential to become an important segmentation tool in this field. Here, the gradient is interpreted as a topographical relief. The relief is progressively flooded by a fluid, which drains into the catchment basins. The limits between adjacent basins then form the watersheds of the relief, i.e. the boundaries of the different segmented objects (Fig. 7C). One disadvantage of the Watershed method is its sensitivity to noise, which makes the method prone to oversegmentation. The preprocessing steps described above overcome this drawback (a minimal sketch of the distance-map variant is given below).

Recently, a trend has arisen in life sciences towards the development of automated methods for segmentation of specific structural features in tomograms (e.g. membranes, filaments, etc.) [17]. The lack of success of generic segmentation techniques has been the driving force behind this trend. It is likely that materials science will also move in this direction, as exemplified by the nanoparticle detection approaches reviewed above. Fig. 7D shows the application of an algorithm devised for the detection of locally planar structures, which succeeds in segmenting the walls of nanopores in MCM-41 mesoporous silica, thus revealing its lattice structure. The algorithm relies on a local detector designed from the first and second order derivatives, followed by morphological operations [57]. Simple thresholding was enough to identify the nanoparticles.
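A minimal sketch of the distance-map/Watershed separation of touching particles with scikit-image and scipy (parameter values and file name are illustrative):

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

mask = np.load('particle_mask.npy')  # hypothetical binary segmentation

# Euclidean distance map: every foreground voxel stores its distance to
# the nearest background voxel, so particle centres appear as peaks.
dist = ndimage.distance_transform_edt(mask)

# One marker per local maximum, i.e. per presumed particle centre.
coords = peak_local_max(dist, min_distance=5, labels=mask)
markers = np.zeros(mask.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# Flooding the inverted distance map separates touching particles.
labels = watershed(-dist, markers, mask=mask)
```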

Fig. 7. Tomogram segmentation and 3D visualization. (A) Surface rendering of catalyst nanoparticles (red) supported on and within disordered mesoporous silica (grey). 3D visualization was achieved by segmentation based on thresholding techniques. The histogram in the inset displays the two thresholds that separate the three peaks corresponding to the background, silica and nanoparticles. A 2D cross-sectional view of the tomogram is shown below. (B) 2D slice of a tomogram of a cluster of GaPd2 catalyst nanoparticles and the result of the segmentation (colour-coded), which was done with thresholding followed by a Watershed-based refinement stage (image kindly provided by Leary). (C) Watershed segmentation algorithm. The gradient computed from the voxels of the tomogram is considered as a topographical relief. The algorithm progressively floods the relief with a fluid, which drains into the catchment basins. The watersheds are 'dams' built between adjacent catchment basins. In this example, five segmented regions result from the algorithm. (D) Lattice structure of ordered mesoporous silica MCM-41 revealed by an algorithm to detect locally planar structures. (left) The segmentation result is shown in blue overlying a slice of the original tomogram (see Fig. 6). (centre) Volume texture highlighting the lattice structure. (right) 3D view of the segmented structure with the silica (yellow) and the nanoparticles (red). The nanoparticles were segmented via thresholding.

5.3. Quantitative analysis

Segmentation is central to extracting quantitative structural information from tomograms (nanometrology, to some extent), which in turn is crucial to understanding the physical and chemical properties of the materials under study. The type of analysis depends on the material and the specific question. Some works on catalyst materials are representative examples. After detection of nanoparticles, either manually [34,55] or through automated methods [34,41,43,45,49,52], characterization parameters (centre-of-mass, radius, anisotropy, location, etc.) are extracted by working with the density in the segmented regions in different ways, as described above. Statistical analysis of these parameters allows thorough studies of the size and shape of the nanoparticles, their spatial distribution and arrangement, inter-particle distances, etc. [34,41,43,49,52,55,58]. Quantitative information has also been obtained about pores and porosity [45,46,52,59]. In [46], subvolumes associated with the pores were manually boxed out, and the parameters describing them (centre, radius) were estimated from the density levels, which showed an interesting irregular variation along the pores. Other similarly detailed analyses have revealed the fractal dimension of catalyst supports and the preferred distribution of nanoparticles according to a local-curvature study of surface patches [45] (Fig. 8). Numerous other ET studies have made 3D nanometrology possible for the characterization of complex nanostructures [30,48,53,60].

There is an approach known as single-particle ET, often used in life sciences, that has the potential to be useful for extracting quantitative information in materials science [17]. Here, subvolumes extracted from the raw tomograms are mutually aligned, and an average subvolume is computed. The alignment and averaging procedures must account for the effects of the missing wedge. The resulting average turns out to be free from distortions and exhibits much higher resolution than the raw tomogram and the raw subvolumes, thereby potentially making nanometrology measurements more reliable. Classification of the subvolumes could also reveal statistically significant differences among them. The lack of reproducibility at the required scale may, however, limit the application of this approach to the physical sciences.
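As a small illustration of such nanometrology, equivalent-sphere radii and centroids can be derived from a labelled tomogram with scikit-image (the input file and voxel size are assumptions):

```python
import numpy as np
from skimage.measure import regionprops

labels = np.load('particle_labels.npy')  # hypothetical labelled volume
voxel_nm = 0.5                           # assumed voxel size in nm

for p in regionprops(labels):
    volume = p.area * voxel_nm ** 3      # 'area' is the voxel count in 3D
    radius = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)  # equiv. sphere
    cz, cy, cx = p.centroid
    print(f"particle {p.label}: r = {radius:.2f} nm, "
          f"centroid = ({cx:.1f}, {cy:.1f}, {cz:.1f}) voxels")
```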

Fig. 8. Classification of surface patches extracted from a tomogram of catalyst nanoparticles on and within disordered silica (see Fig. 7A) based on local curvature [45]. The results suggested that nanoparticles within the interior of the support showed preferential anchoring to saddle points, whereas those on the exterior surfaces also demonstrated a strong preference for concave 'cup-like' regions. Nanoparticles are shown in red. Scale bar: 20 nm. The panel on the right shows an enlarged view of the surface (scale bar: 5 nm).

6. Emerging tomographic reconstruction methods

Accurate interpretation of tomograms and objective assessment require high-quality, undistorted 3D data. However, the tomograms obtained by means of the standard reconstruction methods, namely WBP and SIRT, are prone to artefacts due to the limited angular sampling and the missing wedge, as shown in Fig. 4. As a consequence, the development of tomographic reconstruction methods much better suited to quantification has become an active area of research in the last few years. Several methods have been proposed in the field, and some of them exhibit outstanding abilities to facilitate precise and quantitative characterization of nanostructures. They all refine the tomogram iteratively, and prior knowledge about the sample is exploited to constrain the solution. Some works have even proposed combining them to fully exploit their particular advantages [61]. In the following, these advanced methods are described in more detail.

6.1. Equally Sloped Tomography (EST)

The equally sloped tomography (EST) method, along with other innovations used in the workflow (e.g. image preprocessing based on background subtraction, and centre-of-mass image alignment), seems to have been one of the decisive factors in achieving atomic-scale resolution in ET [16]. EST acquires the images in the tilt-series with equal slope increments rather than the more traditional constant angular spacing. This enables the use of the pseudopolar fast Fourier transform (PPFFT) to switch between the pseudopolar grid in Fourier space and the Cartesian grid in real space (recall the central section theorem and the Fourier-inversion reconstruction method explained at the beginning of Section 4).

However, some stringent conditions preclude direct application of the inverse PPFFT to derive the 3D reconstruction. Among other conditions, the tilt range must be completely covered ([−90°, 90°]) and the number of experimental projections is strictly fixed in relation to the size of the reconstruction. To circumvent this, an iterative tomographic reconstruction method is applied instead (Fig. 9). First, the FTs of the views in the aligned tilt-series are assembled in the pseudopolar Fourier space. The missing, unknown views, which will be progressively estimated as the iterations proceed, are initially set to zero. The process then iterates back and forth between Fourier space and Cartesian real space through the PPFFT and its inverse. In each iteration, constraints are applied in real space (structure boundary; density positivity). In Fourier space, the FTs of the experimentally available views are imposed on the current iterate, whereas those of the other views are kept unchanged. This loop typically runs for a few hundred cycles. Owing to the correlation existing among the Fourier components of a finite-sized structure, this iterative refinement process succeeds in filling the missing wedge with some information [16]. Although the EST method exhibits potential, broad acceptance in the field has yet to be confirmed.

Fig. 9. Algorithm for Equally Sloped Tomography (EST). After acquisition of images with equal slope tilt increments, and after precise mutual alignment, their FTs are assembled in the pseudopolar grid in Fourier space. The process then follows iterative steps switching back and forth between Fourier and real space. In real space, constraints on the structure boundary and density positivity are enforced. In Fourier space, the FT resulting from the current object iterate is shown with red lines. The FTs of the experimental views are then imposed (highlighted with black lines), keeping those of the missing views unchanged (in red). In this figure, the unavailable views are confined to the missing wedge (denoted in grey). However, this need not be the general case and there may be more unavailable views.

6.2. Direct Iterative Reconstruction of Computed Tomography Trajectories (DIRECTT)

DIRECTT has recently been proposed as a promising alternative to the standard methods in ET [62] and has already been used in experimental ET studies [52]. The method seems to provide tomograms with contrast of quality similar to SIRT, but with sharper and better resolved features owing to its apparently greater robustness against the particularities of ET [62]. DIRECTT iteratively refines the reconstruction using a loop similar to that of SIRT (Section 4). The important difference resides in the refinement stage (the 'backward projection' step in SIRT sketched in Fig. 3), where at each iteration only a pre-defined percentage of the voxels of the tomogram is selected to be updated. The number of chosen voxels is very limited at first and progressively increases with the iterations; this acts as a regularizing mechanism to avoid the appearance of artefacts. The selection of voxels is based on the magnitude of their density error calculated along their projection trajectories throughout the tilt-series (i.e. the average difference between the experimental projections and those calculated from the current iterate). The updating operation consists of a weighted addition of the error, similar to that occurring in SIRT.

Therefore, the whole procedure resembles SIRT, but the density in the projection images is introduced slowly into the tomogram in a controlled fashion, giving a higher contribution to features that consistently appear denser throughout the tilt-series. Alternatively to the density, the contrast (obtained by means of high-pass filtering) may be used as the criterion for voxel selection [63]. Actually, a combination of both may be convenient, with the first iterations based on density and the last ones on contrast [63]. DIRECTT runs for a maximum number of iterations or until the error converges to zero.

The good results of DIRECTT stem from the selection procedure, which favours the contribution of high-density/high-contrast features to the tomogram. A weakness of the method appears when fiducial markers are present in the sample. In order to avoid misleading results, a pre-processing step is required to clip the high densities of these markers down to the level of the genuine features (e.g. catalyst nanoparticles) [62]. The two methods described in the following may be susceptible to this weakness as well.

6.3. Discrete Algebraic Reconstruction Technique (DART)

Discrete tomography addresses the 3D reconstruction of samples that contain only a few different homogeneous phases. Under such conditions, it is possible to apply the number of compositions as a constraint during the reconstruction process. In practice, this means that the tomogram will be composed of a small discrete set of grey levels, each representing one of the compositions present in the sample. Imposing such prior knowledge succeeds in substantially attenuating the artefacts due to the missing wedge [64], or effectively reduces the number of views required for a good tomogram [15,64]. Moreover, the resulting tomogram is automatically segmented, because each of the different phases of the sample is represented by its own grey level; quantitative and objective analysis is therefore straightforward. Other sources of prior information that can be exploited are that the sample may be composed only of discrete constituents (e.g. atoms) and that these may be arranged on a particular grid [15].

An algorithm presented a few years ago, the Discrete Algebraic Reconstruction Technique (DART) [64], has been widely accepted in the field [1,2,4] and has played an important role in recent breakthroughs [15]. Internally, DART makes use of SIRT as a continuous (i.e. non-discrete) reconstruction algorithm. First, a SIRT reconstruction is computed from the aligned tilt-series. From that, estimates of the number of compositions in the sample, as well as their average grey levels in the tomogram, are obtained. Then the DART loop follows. The SIRT tomogram is thresholded with proper values (according to the grey levels initially estimated for the compositions) to produce a guess of the discrete tomogram. Then the voxels located at the boundaries of the segmented regions are identified by means of simple morphological operations. The non-boundary voxels are fixed to the grey level estimated for the regions they belong to. The SIRT algorithm is now applied again, but only the boundary voxels are allowed to change. And the DART loop iterates again. Fig. 10 shows a flowchart of the algorithm and example results.

Fig. 10. Discrete Algebraic Reconstruction Technique (DART). (left) Flowchart of the DART algorithm: an initial SIRT reconstruction and estimation of the number and grey levels of the compositions, followed by a loop of segmentation by thresholding, identification of boundary (B) and non-boundary (N) voxels, fixing of the voxels in N, and SIRT refinement of B with N kept fixed. (right) Reconstruction of a gold nanoparticle from 15 projection images by DART and comparison with SIRT. The missing wedge had an angular extension of about 60°. The nanoparticle had a diameter of around 45 nm in XY. At the top, orthogonal slices in the XY and XZ planes of the reconstructions obtained with SIRT and DART are shown. In SIRT, the effects of the missing wedge are apparent in the Z direction. In DART, however, the boundaries of the structure are well outlined. At the bottom, 3D visualization of the reconstructions with surface rendering is displayed. Images kindly provided by Batenburg et al. [64].
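The structure of the DART loop can be sketched compactly as follows (a toy rendering, not the authors' code; `sirt_update` is a hypothetical helper that refines only the voxels flagged as free, in the spirit of the SIRT sketch in Section 4):

```python
import numpy as np
from scipy import ndimage

def dart(sino, angles, shape, grey_levels, n_outer=10, n_sirt=20):
    """Toy DART loop after [64]. `sirt_update(x, sino, angles, free, n)`
    (hypothetical) runs n SIRT iterations updating only voxels where
    `free` is True."""
    levels = np.sort(np.asarray(grey_levels, dtype=float))
    thresholds = (levels[:-1] + levels[1:]) / 2.0
    # Initial continuous reconstruction with all voxels free.
    x = sirt_update(np.zeros(shape), sino, angles,
                    np.ones(shape, dtype=bool), n_sirt)
    for _ in range(n_outer):
        # 1. Segment onto the discrete grey levels by thresholding.
        seg = levels[np.digitize(x, thresholds)]
        # 2. Interior voxels: all neighbours share the same grey level.
        interior = (ndimage.minimum_filter(seg, size=3) ==
                    ndimage.maximum_filter(seg, size=3))
        # 3. Fix interior voxels to their segmented grey level...
        x = np.where(interior, seg, x)
        # 4. ...and let SIRT refine only the boundary voxels.
        x = sirt_update(x, sino, angles, ~interior, n_sirt)
    return levels[np.digitize(x, thresholds)]
```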

DART successfully deals with the effects of the missing wedge and produces output segmented tomograms with well-defined boundaries between the different compositions in the sample [64]. Moreover, it has been demonstrated that high-fidelity tomograms are obtained even with a substantially limited number of projection images [64]. Nevertheless, an important limitation of DART is that the set of discrete grey levels must be known in advance. They can be estimated from the initial SIRT reconstruction, as just described, and these guesses may be improved by masking out the background at each iteration [65]. Even so, this estimation may be a non-trivial task depending on the sample and imaging conditions. Some progress is being made to overcome this restriction so that only the number of compositions in the tomogram would be required [66].

On the other hand, the assumption that the whole sample is fully discrete in terms of grey levels may severely restrict the samples for which the method is suitable. Recent developments on partially discrete DART (PDART) [67] relax this constraint so as to make it appropriate for, in particular, datasets containing dense nanoparticles within a light matrix support. Here, the algorithm assumes a high-density constituent (the nanoparticles) that is treated discretely, whereas the remaining compositions are processed in continuous form. PDART is actually a slightly modified version of SIRT in which the voxels identified as nanoparticles through thresholding are segmented (i.e. their density is set to a specific grey level) and kept fixed in subsequent iterations. The remaining voxels are processed as usual in SIRT. The parameters (the threshold and the segmentation grey level) may be tuned manually or via automated procedures [67].

6.4. Compressed Sensing (CS)

The advent of CS has brought excitement to the field of ET (CS-ET) in materials science over the last couple of years [2,4,30,48,68–70]. Originating from the field of information theory, CS has attracted significant attention in many disciplines [71], in particular those related to imaging. The principle of CS is the reconstruction of a signal from highly undersampled data. CS manages to recover the signal provided that it is 'sparse' in a known domain and that this sparsity is used as prior knowledge. A signal is said to be sparse if it is represented by a few non-zero components in comparison to the total number of components comprising the signal. Often the signal is not exactly sparse, but rather compressible: such a signal has a few predominant elements in which most of the energy and information are concentrated, whereas the other elements are not exactly zero but negligible. By using only those few main components and discarding the rest, a compressible signal can also be treated as sparse without much error. Another prerequisite for CS to be applicable is that the measurements must be sampled 'incoherently' with respect to the domain in which the signal is sparsely represented. This incoherence means that (1) the information contained in the original signal is spread out over rather random measurements in that domain and (2) the artefacts due to incomplete measurements are not sparse, but distributed across that domain, and somewhat below the true signal. Under such conditions, CS yields the sparsest (i.e. the simplest) solution that is consistent with the measured data by means of a convex optimization process [72].

In CS-ET, the signal to recover is the tomogram, and there are several transformations in which to look for sparsity. It can be found in the reconstruction domain itself (i.e. only a few voxels will have non-zero values), which reduces the missing wedge artefacts and the contribution of the background. Sparsity in the gradient (i.e. total variation) domain can additionally be applied to enhance tomograms of samples that exhibit sharp transitions between regions, thereby producing piecewise-constant volumes [48,68]. Other transformations, e.g. the discrete wavelet transform or the discrete cosine transform, might be more appropriate for complex structures. The incoherence prerequisite is fulfilled because all projections in the tilt-series contain information about every part of the tomogram, and the typical artefacts due to limited angular sampling are spread throughout.

Several different implementations have been presented and used in materials science. Thanks to the prior knowledge about sparsity, they all manage to reduce the effects of the missing wedge and yield tomograms more suitable for segmentation and objective assessment. The work in [48] exploited sparsity in both the image and the gradient domains: the implementation minimized the l1 norm of the tomogram, in terms of density and gradient, subject to consistency with the available data in the tilt-series.

This consistency term was based upon the l2 difference between the Fourier transforms of the experimental projections and those calculated from the tomogram iterate. Even with greatly undersampled data, they succeeded in obtaining high-quality tomograms that turned out to be amenable to the extraction of quantitative information. Fig. 11 shows illustrative results obtained with this implementation. The implementations in [68,69] also focused on sparsity in the gradient domain, obtaining very good results as well; they simultaneously minimized the l2 distance between experimental and calculated projections and the l1 norm of the discrete gradient of the tomogram. The work in [30] took advantage of the sparsity of the sample itself to faithfully obtain an atomic-scale reconstruction: only a limited number of voxels contained an atom whereas the others corresponded to vacuum, and the algorithm simultaneously minimized the projection distance and the l1 norm of the tomogram (i.e. the sum of the absolute values of all voxel densities). Note that the use of the l1 norm (sum of absolute values), in contrast to the l2 (Euclidean) norm, is important to recover a sparse solution in the chosen domain [70], as applied in all reported implementations [30,48,68].

CS-ET has great potential to become a major player in materials science. It exhibits outstanding abilities to reduce the artefacts arising from the missing wedge. Another significant feature is that good tomograms are obtained even from relatively few projection images, which makes it especially attractive for studies of specimens that are sensitive to electron dose. Although these aspects are somewhat shared with DART, CS-ET presents the important advantage that no prior knowledge about the grey levels is required, thereby making it applicable to specimens of unknown composition.
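The role of the l1 norm can be demonstrated on a toy problem. The sketch below recovers a sparse vector from undersampled linear measurements with the classical iterative soft-thresholding algorithm (ISTA); in an actual CS-ET code the matrix A would be the tomographic projection operator and the sparsifying transform might be the gradient or a wavelet transform. All sizes and values here are illustrative.

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=500):
    """Solve argmin_x 0.5*||Ax - b||_2^2 + lam*||x||_1 by iterative
    soft-thresholding: a gradient step on the data-fidelity term
    followed by the shrinkage operator induced by the l1 penalty."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# Toy example: a 5-sparse, length-100 signal from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = 1.0
x_rec = ista(A, A @ x_true)
print("max reconstruction error:", np.abs(x_rec - x_true).max())
```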


Fig. 11. Reconstruction of concave iron oxide nanoparticles by Compressed Sensing Electron Tomography (CS-ET) and comparison with SIRT. (A) An expanded area from a projection image of the original tilt-series [48]. (B) Reconstruction with CS-ET. Shown are a 3D visualization of the same area with volume texture (green) and a surface rendering of a selected nanoparticle (orange arrow); on the right, an XZ slice of the tomogram taken at the position marked by the white arrow. (C) Reconstruction with SIRT. The artefacts due to the missing wedge are particularly evident in the XZ slice, and are also noticeable in the 3D visualization. The nanoparticles were around 30 nm in diameter. Images courtesy of Z. Saghi.

7. Software tools for electron tomography

This section lists the software packages most commonly used in ET studies of materials (see Table 1). Xplore3D from FEI is the primary tool for acquisition of tilt-series, though some groups use other commercial solutions from JEOL or Gatan. Academic acquisition packages (see Section 2 and [17]) do not seem to have been used thus far. For image preprocessing, tilt-series alignment and tomographic reconstruction with WBP and SIRT, two packages are by far the most frequently employed: IMOD and Xplore3D (specifically, the Inspect3D tool within this package). Application of other tools (e.g. TomoJ, Xmipp) [17] is scarce. For 3D reconstruction with SIRT in particular, the package Tomo3D [37] is gaining interest in the field because of its high speed. In the past, implementations of SIRT in IDL were also applied. For reconstruction with DART and CS-ET, the authors reportedly used programs developed in MATLAB [48,64]. For 3D visualization, Amira and Avizo are the predominant tools in the field. Utilization of other packages such as 3D Slicer (http://www.slicer.org), Blender (www.blender.org) or Chimera (www.cgl.ucsf.edu/chimera) has been minor. In addition to Amira and Avizo, ImageJ and MATLAB (its image processing toolbox) are common solutions for segmentation and extraction of quantitative information. It should also be noted that numerous free academic packages exist for ET in life sciences [17], some of which could be successfully applied to materials.

8. Conclusions and perspectives

The significant technical and computational advances of the last decade have established electron tomography as a tool of paramount importance for the three-dimensional characterization of materials. This review has described the computational methods that are routinely used in this field, as well as some advanced tomographic reconstruction techniques that have recently appeared on the scene. Since ET was originally pioneered in life sciences, the materials science community rapidly adopted the methods used there for its own studies. However, as ET matured in this field, the need for specifically devised methods that cope with the particularities of these structures became clear. This has driven the development of sophisticated techniques for tomographic reconstruction and original strategies for data interpretation and quantification. At present, materials and life sciences cross-fertilize ideas and developments, which will surely stimulate the future progress of ET to address the challenges ahead.

New computational methods and algorithms are expected to appear in the short term to overcome the current limitations. Advances towards precise alignment of tilt-series in an automated and objective manner are required to ensure ultra-high resolution details at the tomogram level. The search for reconstruction algorithms better suited for quantitative analysis will likely continue. In this regard, the CS-ET technique has brought much enthusiasm to the field and, in the coming years, may be established as a standard tool. Nonetheless, there is still a general lack of thorough assessment of these emerging methods, aimed at analyzing their benefits and limitations and thus determining their range of applicability. Another area where progress is anticipated is segmentation, which is currently a major bottleneck in the field. Automated and objective tools are definitely needed to facilitate tomogram interpretation and subsequent quantification. Last but not least, the availability of public domain software packages is rather limited compared to the situation in life sciences, so efforts should be made to make the new methods publicly available if widespread use is to be achieved. There is hope that all these developments, along with improvements in instrumentation, will contribute to the advancement of the technique and the realization of the potential of 3D atomic imaging for a broad variety of structures.

Table 1
Common software packages for electron tomography in materials science.

Package                            Commercial/academic   Web site
--------------------------------------------------------------------------------------------
Image acquisition
  FEI Xplore3D (a)                 Commercial            http://www.fei.com
  JEOL TEMography                  Commercial            http://www.jeolusa.com/Default.aspx?tabid=332
  Gatan Digital Micrograph         Commercial            http://www.gatan.com/products/software/
Calculation of tomogram
  IMOD (a)                         Academic              http://bio3d.colorado.edu/imod
  FEI Xplore3D (a)                 Commercial            http://www.fei.com
Visualization and quantification
  Amira (a)                        Commercial            http://www.amira.com
  Avizo (a)                        Commercial            http://www.vsg3d.com/avizo/
  ImageJ                           Academic              http://rsb.info.nih.gov/ij/
  MATLAB                           Commercial            http://www.mathworks.com/products/matlab/

(a) Packages by far most frequently used in the field.

Acknowledgements

The author thanks Z. Saghi and R. Leary for critical reading of the manuscript. He is also very grateful to Paul Midgley and members of his group, Z. Saghi, R. Leary, E. Ward, T.J.V. Yates, for kindly providing data and/or images to prepare the illustrative figures in this review. The author also thanks K.J. Batenburg for material provided for Fig. 10. Funding from Grants TIN2008-01117 (MCI), P10-TIC6002 (JA), TIN2012-37483-C03-02 (MEC) is acknowledged.

References

[1] Midgley PA, Dunin-Borkowski RE. Electron tomography and holography in materials science. Nat Mater 2009;8:272–80.
[2] Leary R, Midgley PA, Thomas JM. Recent advances in the application of electron tomography to materials chemistry. Accounts Chem Res 2012;45:1782–91.
[3] van Tendeloo G, Bals S, van Aert S, Verbeeck J, van Dyck D. Advanced electron microscopy for advanced materials. Adv Mater 2012;24:5655–75.
[4] Saghi Z, Midgley PA. Electron tomography in the (S)TEM: from nanoscale morphological analysis to 3D atomic imaging. Annu Rev Mater Res 2012;42:59–79.
[5] Frank J, editor. Electron tomography: methods for three-dimensional visualization of structures in the cell. New York: Springer; 2006.
[6] Fridman K, Mader A, Zwerger M, Elia N, Medalia O. Advances in tomography: probing the molecular architecture of cells. Nat Rev Mol Cell Biol 2012;13:736–42.
[7] Koster AJ, Ziese U, Verkleij AJ, Janssen AH, de Jong KP. Three-dimensional transmission electron microscopy: a novel imaging and characterization technique with nanometer scale resolution for materials science. J Phys Chem B 2000;104:9368–70.
[8] Midgley PA, Weyland M, Thomas JM, Johnson BFG. Z-contrast tomography: a technique in three-dimensional nanostructural analysis based on Rutherford scattering. Chem Commun 2001:907–8.
[9] Mobus G, Inkson BJ. Three-dimensional reconstruction of buried nanoparticles by element-sensitive tomography based on inelastically scattered electrons. Appl Phys Lett 2001;79:1369–71.
[10] de Jong KP, Koster AJ. Three-dimensional electron microscopy of mesoporous materials – recent strides towards spatial imaging at the nanometer scale. ChemPhysChem 2002;3:776–80.
[11] Midgley PA, Weyland M. 3D electron microscopy in the physical sciences: the development of Z-contrast and EFTEM tomography. Ultramicroscopy 2003;96:413–31.
[12] Arslan I, Yates TJV, Browning ND, Midgley PA. Embedded nanostructures revealed in three dimensions. Science 2005;309:2195–8.
[13] Barnard JS, Sharp J, Tong JR, Midgley PA. High-resolution three-dimensional imaging of dislocations. Science 2006;313:319.
[14] Li H, Xin HL, Muller DA, Estroff LA. Visualizing the 3D internal structure of calcite single crystals grown in agarose hydrogels. Science 2009;326:1244–7.
[15] van Aert S, Batenburg KJ, Rossell MD, Erni R, van Tendeloo G. Three-dimensional atomic imaging of crystalline nanoparticles. Nature 2011;470:374–7.
[16] Scott MC, Chen CC, Mecklenburg M, Zhu C, Xu R, Ercius P, et al. Electron tomography at 2.4 Å resolution. Nature 2012;483:444–7.
[17] Fernandez JJ. Computational methods for electron tomography. Micron 2012;43:1010–30.
[18] Verbeeck J, van Dyck D, van Tendeloo G. Energy-filtered transmission electron microscopy: an overview. Spectrochim Acta Part B – Atom Spectrosc 2004;59:1529–34.
[19] Dierksen K, Typke D, Hegerl R, Koster AJ, Baumeister W. Towards automatic electron tomography. Ultramicroscopy 1992;40:71–87.
[20] Ziese U, Janssen AH, Murk JL, Geerts WJC, van der Krift T, Verkleij AJ, et al. Automated high-throughput electron tomography by pre-calibration of image shifts. J Microsc 2002;205:187–200.
[21] Koster AJ, Grimm R, Typke D, Hegerl R, Stoschek A, Walz J, et al. Perspectives of molecular and cellular electron tomography. J Struct Biol 1997;120:276–308.
[22] Arslan I, Tong JR, Midgley PA. Reducing the missing wedge: high-resolution dual axis tomography of inorganic materials. Ultramicroscopy 2006;106:994–1000.
[23] Tong J, Arslan I, Midgley PA. A novel dual-axis iterative algorithm for electron tomography. J Struct Biol 2006;153:55–63.
[24] Cao M, Zhang HB, Lu Y, Nishi R, Takaoka A. Formation and reduction of streak artefacts in electron tomography. J Microsc 2010;239:66–71.
[25] Amat F, Castano-Diez D, Lawrence A, Moussavi F, Winkler H, Horowitz M. Alignment of cryo-electron tomography datasets. Methods Enzymol 2010;482:343–67.
[26] Roiban L, Hartmann L, Fiore A, Djurado D, Chandezon F, Reiss P, et al. Mapping the 3D distribution of CdSe nanocrystals in highly oriented and nanostructured hybrid P3HT-CdSe films grown by directional epitaxial crystallization. Nanoscale 2012;4:7212–20.
[27] Kwon OH, Zewail AH. 4D electron tomography. Science 2010;328:1668–73.
[28] Houben L, bar Sadan M. Refinement procedure for the image alignment in high-resolution electron tomography. Ultramicroscopy 2011;111:1512–20.
[29] Leschner J, Biskupek J, Chuvilin A, Kaiser U. Accessing the local three-dimensional structure of carbon materials sensitive to an electron beam. Carbon 2010;48:4042–8.
[30] Goris B, Bals S, van den Broek W, Carbo-Argibay E, Gomez-Grana S, Liz-Marzan LM, et al. Atomic-scale determination of surface facets in gold nanorods. Nat Mater 2012;11:930–5.
[31] Herman GT. Image reconstruction from projections: the fundamentals of computerized tomography. 2nd ed. London: Springer; 2009.


[32] Gilbert P. Iterative methods for the 3D reconstruction of an object from projections. J Theor Biol 1972;36:105–17.
[33] Yates TJV, Thomas JM, Fernandez JJ, Terasaki O, Ryoo R, Midgley PA. Three-dimensional real-space crystallography of MCM-48 mesoporous silica revealed by scanning transmission electron tomography. Chem Phys Lett 2006;418:540–3.
[34] Prieto G, Zecevic J, Friedrich H, de Jong KP, de Jongh PE. Towards stable catalysts by controlling collective properties of supported metal nanoparticles. Nat Mater 2013;12:34–9.
[35] Mezerji HH, van den Broek W, Bals S. A practical method to determine the effective resolution in incoherent experimental electron tomography. Ultramicroscopy 2011;111:330–6.
[36] Fernandez JJ. High performance computing in structural determination by electron cryomicroscopy. J Struct Biol 2008;164:1–6.
[37] Agulleiro JI, Fernandez JJ. Fast tomographic reconstruction on multicore computers. Bioinformatics 2011;27:582–3.
[38] Vazquez F, Garzon EM, Fernandez JJ. A matrix approach to tomographic reconstruction and its implementation on GPUs. J Struct Biol 2010;170:146–51.
[39] Agulleiro JI, Vazquez F, Garzon EM, Fernandez JJ. Hybrid computing: CPU + GPU co-processing and its application to tomographic reconstruction. Ultramicroscopy 2012;115:109–14.
[40] Crowther RA, de Rosier DJ, Klug A. The reconstruction of a three-dimensional structure from projections and its applications to electron microscopy. Proc R Soc London A 1970;317:319–40.
[41] Gommes CJ, de Jong K, Pirard JP, Blacher S. Assessment of the 3D localization of metallic nanoparticles in Pd/SiO2 cogelled catalysts by electron tomography. Langmuir 2005;21:12378–85.
[42] Andersson BV, Herland A, Masich S, Inganas O. Imaging of the 3D nanostructure of a polymer solar cell by electron tomography. Nano Lett 2009;9:853–5.
[43] Thiedmann R, Spettl A, Stenzel O, Zeibig T, Hindson JC, Saghi Z, et al. Networks of nanoparticles in organic–inorganic composites: algorithmic extraction and statistical analysis. Image Anal Stereol 2012;31:27–42.
[44] Fernandez JJ, Li S. An improved algorithm for anisotropic nonlinear diffusion for denoising cryo-tomograms. J Struct Biol 2003;144:152–61.
[45] Ward EPW, Yates TJV, Fernandez JJ, Vaughan DEW, Midgley PA. Three-dimensional nanoparticle distribution and local curvature of heterogeneous catalysts revealed by electron tomography. J Phys Chem C 2007;111:11501–5.
[46] Gommes CJ, Friedrich H, Wolters M, de Jongh PE, de Jong KP. Quantitative characterization of pore corrugation in ordered mesoporous materials using image analysis of electron tomograms. Chem Mater 2009;21:1311–7.
[47] Ortalan V, Herrera M, Morgan DG, Browning ND. Application of image processing to STEM tomography of low-contrast materials. Ultramicroscopy 2009;110:67–81.
[48] Saghi Z, Holland DJ, Leary R, Falqui A, Bertoni G, Sederman AJ, et al. Three-dimensional morphology of iron oxide nanoparticles with reactive concave surfaces. A compressed sensing-electron tomography (CS-ET) approach. Nano Lett 2011;11:4666–73.
[49] Leary R, Saghi Z, Armbruster M, Wowsnick G, Schlogl R, Thomas JM, et al. Quantitative high-angle annular dark-field scanning transmission electron microscope (HAADF-STEM) tomography and high-resolution electron microscopy of unsupported intermetallic GaPd2 catalysts. J Phys Chem C 2012;116:13343–52.
[50] Oosterhout SD, Wienk MM, van Bavel SS, Thiedmann R, Koster LJ, Gilot J, et al. The effect of three-dimensional morphology on the efficiency of hybrid polymer solar cells. Nat Mater 2009;8:818–24.
[51] Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 1979;9:62–6.
[52] Grothausmann R, Zehl G, Manke I, Fiechter S, Bogdanoff P, Dorbandt I, et al. Quantitative structural assessment of heterogeneous catalysts by electron tomography. J Am Chem Soc 2011;133:18161–71.
[53] Hindson JC, Saghi Z, Hernandez-Garrido JC, Midgley PA, Greenham NC. Morphological study of nanoparticle-polymer solar cells using high-angle annular dark-field electron tomography. Nano Lett 2011;11:904–9.
[54] Batenburg KJ, Sijbers J. Adaptive thresholding of tomograms by projection distance minimization. Pattern Recognit 2009;42:2297–305.
[55] Friedrich H, Sietsma JRA, de Jongh PE, Verkleij AJ, de Jong KP. Measuring location, size, distribution, and loading of NiO crystallites in individual SBA-15 pores by electron tomography. J Am Chem Soc 2007;129:10249–54.
[56] Hungria AB, Eder D, Windle AH, Midgley PA. Visualization of the three-dimensional microstructure of TiO2 nanotubes by electron tomography. Catal Today 2009;143:225–9.
[57] Martinez-Sanchez A, Garcia I, Fernandez JJ. A differential structure approach to membrane segmentation in electron tomography. J Struct Biol 2011;175:372–83.
[58] Jinnai H, Kaneko T, Nishioka H, Hasegawa H, Nishi T. Spatial arrangement of metal nanoparticles supported by porous polymer substrates studied by transmission electron microtomography. Chem Rec 2006;6:267–74.
[59] Biermans E, Molina L, Batenburg KJ, Bals S, van Tendeloo G. Measuring porosity at the nanoscale by quantitative electron tomography. Nano Lett 2010;10:5014–9.
[60] Perassi EM, Hernandez-Garrido JC, Moreno MS, Encina ER, Coronado EA, Midgley PA. Using highly accurate 3D nanometrology to model the optical properties of highly irregular nanoparticles: a powerful tool for rational design of plasmonic devices. Nano Lett 2010;10:2097–104.


[61] Goris B, Roelandts T, Batenburg KJ, Mezerji HH, Bals S. Advanced reconstruction algorithms for electron tomography: from comparison to combination. Ultramicroscopy 2013;127:40–7.
[62] Lange A, Kupsch A, Hentschel MP, Manke I, Kardjilov N, Arlt T, et al. Reconstruction of limited computed tomography data of fuel cell components using direct iterative reconstruction of computed tomography trajectories. J Power Sources 2011;196:5293–8.
[63] Luck S, Kupsch A, Lange A, Hentschel MP, Schmidt V. Statistical analysis of tomographic reconstruction algorithms by morphological image characteristics. Image Anal Stereol 2010;29:61–77.
[64] Batenburg KJ, Bals S, Sijbers J, Kubel C, Midgley PA, Hernandez JC, et al. 3D imaging of nanomaterials by discrete tomography. Ultramicroscopy 2009;109:730–40.
[65] Zurner A, Doblinger M, Cauda V, Wei R, Bein T. Discrete tomography of demanding samples based on a modified SIRT algorithm. Ultramicroscopy 2012;115:41–9.
[66] van Aarle W, Batenburg KJ, Sijbers J. Automatic parameter estimation for the discrete algebraic reconstruction technique (DART). IEEE Trans Image Process 2012;21:4608–21.
[67] Roelandts T, Batenburg KJ, Biermans E, Kubel C, Bals S, Sijbers J. Accurate segmentation of dense nanoparticles by partially discrete electron tomography. Ultramicroscopy 2012;114:96–105.
[68] Goris B, van den Broek W, Batenburg KJ, Mezerji HH, Bals S. Electron tomography based on a total variation minimization reconstruction technique. Ultramicroscopy 2012;113:120–30.
[69] Altantzis T, Goris B, Sanchez-Iglesias A, Grzelczak M, Liz-Marzan LM, Bals S. Quantitative structure determination of large three-dimensional nanoparticle assemblies. Part Part Syst Charact 2013;30:84–8.
[70] Thomas JM, Leary R, Midgley PA, Holland DJ. A new approach to the investigation of nanoparticles: electron tomography with compressed sensing. J Colloid Interface Sci 2013;392:7–14.
[71] Eldar YC, Kutyniok G. Compressed sensing: theory and applications. Cambridge University Press; 2012.
[72] Candes EJ, Romberg J, Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 2006;52:489–509.
