h-adaptive FEM technique – An application to phase field modeling for the study of the ICVI process for Silicon Carbide

Shameek Ganguly
Indian Institute of Technology, Guwahati, India
May-July 2009

Abstract

This work deals with the application of an adaptive grid generation technique to the industrial problem of phase-field modeling of the ICVI process for SiC reinforcement of SiC matrices. Adaptation is achieved using Rivara's longest edge bisection algorithm. A new weighted-energy-based a posteriori error estimator is developed to define the adaptation criteria. Simulation results are presented to demonstrate the advantages of the adaptive grid strategy over uniform refinement of the mesh, followed by a discussion.


Contents

1. Introduction to present work
2. Adaptive grid generation
   2.1. Desired traits of adaptive grid algorithms
   2.2. Review of some widely used adaptive grid strategies
   2.3. Post-processing techniques
   2.4. Adaptive strategies employed in present work
3. Finite element method in brief
   3.1. Introduction
   3.2. Formulations
   3.3. Non-linearity in equations
   3.4. Elements employed in present work
4. Phase-field modeling of Silicon Carbide in ICVI process
   4.1. The SiC ICVI process
   4.2. Mathematical modeling
   4.3. Phase field model
   4.4. FEM formulations
   4.5. Numerical simulation and results
5. Error estimation in FEM analysis
   5.1. True error and error estimates
   5.2. A posteriori error estimation
   5.3. Hybrid error estimator for convection-diffusion-reaction type partial differential equations
   5.4. Adaptive grid strategy using error estimator
   5.5. Implementation issues
   5.6. Simulation of the phase-field model of SiC ICVI process using adaptive grid generation
6. Discussions and conclusion
7. Acknowledgment and Bibliography

1. Introduction to the present work

In this work, adaptive grid generation is applied as a method for controlling computational cost while enhancing accuracy when solving, by the finite element method (FEM), the coupled convection-diffusion-reaction type non-linear system of PDEs that describes the phase-field model of the CVI process for SiC reinforcement of a SiC matrix. For this purpose, a wide study of the scientific literature in this field is first conducted in order to develop or adopt a suitable adaptation algorithm that allows nested coarsening techniques to be applied simultaneously, so as to fully account for the dynamic nature of the SiC deposition process.

Preliminary experiments using model elliptic partial differential equations (PDEs) with node-based adaptation algorithms showed promise on rectangular grids and allowed a high degree of reversibility in terms of coarsening. However, because such algorithms are boundary non-conforming in the case of curved boundaries, a new technique was adopted based on Rivara's 4-triangle longest edge bisection algorithm. To undo Rivara's refinement, an algorithm for nested mesh coarsening based on geometric properties imparted during the refinement process itself is proposed and developed. Experiments based on the model elliptic equations validate the applicability of the chosen refinement-coarsening pair of algorithms to dynamic problems such as deposition processes or moving fronts.

In the next phase of the work, the finite element method as a tool for solving systems of PDEs is investigated in detail. Second order finite element formulations are studied and implemented in the software package MATLAB. The phase-field model of the ICVI process of SiC is then studied and its FEM formulations are developed using second order triangular elements. The convection-diffusion-reaction equations of the phase-field model are then solved using these formulations as an initial step towards the adaptive solution.

The final problem, generating an optimally bounding error estimator, is then addressed in order to construct solid criteria for mesh refinement and coarsening. To this end, a literature study is undertaken in the field of a posteriori error estimators, and a weighted energy norm based hybrid error estimator is derived for the static non-linear convection-diffusion-reaction type equations. Finally, using an iteration based second order FEM formulation, the error estimator is applied in each iterative step in the continuous time domain, and adaptive grid refinement and coarsening is used to achieve improved convergence. Numerical simulations and results comparing the performance of uniform mesh refinement with that of the dynamic adaptive grid generation technique are presented to validate the work.


2. Adaptive grid generation

Adaptive grid generation is the method of locally refining or coarsening the grid on which the solution to the problem is being evaluated, in a particular region of the solution domain, so as to reduce the discretization error. The criterion for the selection of the target region is usually based on some form of error estimate of the solution. This technique has recently been used in various works based on finite element formulations for achieving faster computations and, at the same time, higher solution accuracy compared to uniform mesh refinement ([1-10]). In this context, it is referred to as the h-adaptive FEM technique. Results have in general proved this method to be accurate and of great use in problems which involve domain or solution singularities, moving fronts with high gradients [1][2] such as shock waves and pipe flows, layer based processes such as in the field of semiconductor microelectronics [3], dynamically evolving processes such as terrain modeling [5] and interface modeling [6], and other general cases where the solution shows a steep gradient in some regions of the domain.

Since the fundamental process in h-adaptive FEM depends on the formulation of an algorithm to locally restructure the grid in order to reduce (for refinement) or increase (for coarsening) the local grid size, a large amount of scientific literature has also been devoted to the development of such algorithms ([11-20]), which usually revolve around preserving certain desirable traits of the grid, including:

1. Conforming elements
2. Usability of unstructured grids
3. Boundary conforming
4. Small angle restriction
5. Delaunay structure of the refined/coarsened grid
6. Uniform elements
7. Improvement of bad quality triangles
8. Computational speed
9. Molecular adaptation
10. Nested coarsening formulations

These fundamental properties are discussed at length in the following section. However, all adaptive techniques revolve around the same algorithmic flow (a minimal code sketch of this loop is given below):

1. Create an initial coarse mesh
2. Solve the system of equations on the mesh (using FEM, FVM or BEM)
3. Estimate the error at each node (for node based adaptation), each element (for element based adaptation) or each edge (for edge based adaptation)
4. Adapt the grid based on the adaptivity criteria generated from the error estimates
5. Interpolate the previous solution onto the new mesh and repeat from the second step until the end of the solution
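The loop above can be summarized in a short driver routine. The sketch below is illustrative only (the work itself was implemented in MATLAB): solve_fem, estimate_error, adapt and interpolate are hypothetical placeholders standing in for the FEM solver, the error estimator and the adaptation routines described in the later sections.

# Minimal Python sketch of the generic adaptive solution loop (hypothetical helpers).
def adaptive_solve(mesh, solve_fem, estimate_error, adapt, interpolate,
                   refine_tol=1e-2, coarsen_tol=1e-4, max_cycles=20):
    """adapt(mesh, to_refine, to_coarsen) returns the locally adapted mesh."""
    u = solve_fem(mesh, u_init=None)                     # step 2 on the initial coarse mesh
    for _ in range(max_cycles):
        eta = estimate_error(mesh, u)                    # step 3: error indicator per entity
        to_refine = [e for e, err in eta.items() if err > refine_tol]
        to_coarsen = [e for e, err in eta.items() if err < coarsen_tol]
        if not to_refine and not to_coarsen:
            break                                        # grid adequate everywhere, stop adapting
        new_mesh = adapt(mesh, to_refine, to_coarsen)    # step 4: local refinement/coarsening
        u = solve_fem(new_mesh, u_init=interpolate(mesh, new_mesh, u))  # step 5, then step 2
        mesh = new_mesh
    return mesh, u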


2.1. Desired traits of adaptive grid algorithms

1. Conforming elements: Conforming elements are defined as elements which share either a vertex or a complete edge with neighboring elements (figure 2). In other words, no vertex of a triangle may be located on the edge of another triangle. Studies in the error analysis of FEM formulations [16][17] have established this as an important requirement of any adaptation algorithm, since they showed that the error increases and is often unbounded in the presence of non-conforming elements. As a result, several algorithms which were computationally very cheap and which largely preserved the quality of the triangulation after adaptation have been rendered unusable. For example, both the traditional longest-edge bisection of triangular elements and the division of a triangle into four similar triangles suffer from this defect (section 2.2). Several modifications have been introduced to work around this problem, such as refinement and coarsening propagation, which may be either recursive or non-recursive [13].

2. Usability of unstructured grids: Unstructured grids allow far more flexibility, both in the use of lower order simplices and in the use of powerful adaptation techniques for grid improvement [2][16][21][22]. Structured grids are mostly based on rectangular elements (figure 3(a)), which increases their computational cost. Hence, most adaptive grid algorithms in the recent past have focused on applicability to unstructured grids.


3. Boundary conforming: Traditional adaptive grid algorithms disallowed the use of boundary points in the adaptive refinement/coarsening process, since it necessitated the reformulation of boundary conditions. However, this caused non-convergence of the adaptation process whenever singularities or high discretization errors existed on the boundary. To work around this, the grid had to be pre-refined to a minimum discretization error at the boundary even before the solution was initialized. Moreover, conformity of the elements could not be guaranteed because the boundary elements were not adaptable. Hence, it is now generally accepted practice to allow the splitting of boundary edges, which allows a better decomposition of the boundary [18]. This issue is of even greater importance in parallel implementations of grid adaptation techniques, where the presence of internal boundaries leads to increased communication between the parallel processors, which is highly undesirable. Work in this direction is ongoing in the scientific community [23].


4. Small angle restriction: The traditional small angle restriction for discrete elements of the domain in grid generation carries over to the problem of grid adaptation. Angles smaller than about 30 degrees may lead to very skinny triangles (figure 5), which cause near-singular element matrices in the FEM formulation and hence highly non-physical errors in the solution ([24][25][26]). This is a basic problem addressed in almost every adaptive grid generation technique, and it typically leads to solutions such as post-processing of the grid (section 2.3), local re-triangulation (section 2.2.1) based on Delaunay triangulation, or non-conforming strategies which ensure similarity of the new triangles to their parent triangles after adaptation (section 2.2.3). However, no single well-defined solution to this problem exists as yet, and most works take 30° as their lower limit ([12][13][24]). The problem is highly dependent on both the initial triangulation and the adaptation algorithm, and calls for a heavy compromise between triangle quality and computational cost.

5. Delaunay structure of the refined/coarsened grid: This property encompasses both the small angle restriction and a restriction on the longest edge of an element, leading to a general restriction on 'bad quality' triangles [25]. Delaunay triangulation (figure 6) and its relationship to the Voronoi diagram were originally introduced for the problem of grid generation and have been extended locally to the problem of mesh adaptation ([14][17][25]). While most adaptive algorithms compromise on the true Delaunay nature of the triangulation in order to achieve simpler and faster adaptation, they attempt to enforce a local and passive Delaunay criterion either by means of geometric properties or by algorithmic post-processing (section 2.3). Large deviations from the Delaunay property are undesirable for adaptive grid generation, due to the issues addressed in the previous point.


6. Uniform elements: The problem of non-conforming elements in an adaptation algorithm is often addressed by the use of mixed elements instead of the uniform use of the same element type throughout the grid ([7][22]). This approach has proved to be very effective in the case of mixed quadrilateral and triangular elements (figure 7). However, it drastically increases the computational effort needed and, at the same time, compromises the two previously discussed properties (properties 4 and 5). The flexibility it offers in terms of possible ways of adaptation is nevertheless promising for future research.

7. Improvement of bad quality triangles: While property (4) deals with retaining the quality of triangles after refinement/coarsening, it is also desirable to improve bad quality triangles present in the initial mesh. Strategies such as local re-meshing and longest edge bisection naturally possess this trait (section 2.2), while other strategies implement it using post-processing techniques (section 2.3). This is not an essential property but rather a control technique for regulating the overall quality of the grid. It may even be applied separately to the entire grid, before or after the adaptation algorithm, by means of a suitable post-processing technique.


8. Computational speed: This property is especially essential for commercial software packages that implement adaptive grid refinement/coarsening, since their use for real-time design and optimization analysis demands it. The problem often reduces to the use of advanced data structures, especially for the definition of higher level global topology [8]. Such structures are aimed not only at faster searches and insertions/deletions but also at the quick determination of nodal, edge based or element based connectivity, which is essential both for the grid refinement/coarsening process itself and for accurate error estimation. The best practically achieved strategies are O(n) in time, although it is theoretically possible to achieve O(log n) ([8], [12]). Effort in this direction is a continual process.

9. Molecular adaptation: This property refers to the retention of the molecular structure (section 2.4.2.7) at a vertex after iterative adaptations, as shown in figure 22. It is not an essential property, but it allows refinement propagation to smooth the grid after a local adaptation so that there is no sharp discontinuity in grid size between the locally adapted and non-adapted regions. Several studies ([12][24]) suggest that such a property can be desirable in the study of moving fronts: it provides preconditioning in the form of mild adaptation in the regions the front is about to enter or has just left, which reduces computational effort in later time steps. Furthermore, the generation of a stable molecule bounds the number of times a particular vertex is divided, effectively bounding the smallest angle of the triangulation once more. It is a property commonly displayed by the modified longest edge bisection strategies such as 4T-LE (section 2.4.2) or 7T-LE ([24]).

10. Nested coarsening formulations: Nested adaptive grid formulations involve hierarchical refinement/coarsening of the mesh, which allows quick algorithmic detection of parents during coarsening ([5][15][19][27]) (figure 8). Depending on the coarsening algorithm used, one may use nested nodal formulations, nested element formulations or even nested edges. Nested formulations involve the assignment of a hierarchical level tag to each entity. Since this greatly reduces computational cost at some price in storage (which is usually not the main consideration), it has been the main technique for coarsening formulations for several years.


Though all of these properties are what define a 'good' adaptation technique, achieving all of them optimally is nearly impossible, and they are often mutually contradictory in one way or another. In the following section, some of the common adaptive grid refinement techniques are discussed, followed by the ones employed in the current work.

2.2. Review of some widely used grid adaptation techniques

2.2.1. Local re-triangulation methods
This method is based on selecting a local region of the mesh and re-triangulating it using a suitable algorithm such as Ruppert's Delaunay meshing algorithm ([14]), with constraints on the permissible length of the longest edge in each triangle ([4][9][10]). A suitable error estimator is used to calculate the required adaptive grid length as well as the diameter of the circle that encloses the region of refinement. Overlap between adjacent circles is avoided so as to prevent over-refinement at certain places. A suitable data structure is then generated by traversing the region and identifying its internal boundary points. Finally, the existing nodes and elements which lie completely inside the region are deleted and the region is re-triangulated. The same approach is used for coarsening, where the newly evaluated permissible size of the triangulation is larger than the current size.
Though this is a powerful method of grid adaptation which ensures a very high quality re-triangulation, the need for recursive Delaunay meshing makes it very expensive in time. Furthermore, quick traversal of the considered region requires powerful data structures, which are not available in lower level languages and which also increase the programming complexity. Some notable works applying this method are mentioned in [6].

2.2.2. Traditional longest-edge bisection methods
The longest-edge bisection method has been a computationally preferred choice for adaptive meshing over the past few decades owing to its simple nature, easy implementation ([13][20]) and the fact that it improves the quality of obtuse triangles. The procedure involves dividing a triangle into two generally dissimilar triangles by adding a bisector edge from the midpoint of the longest side of the triangle (figure 9). This ensures the division of the largest angle of the triangle and, for bad quality (obtuse angled) triangles, its reduction by at least an amount equal to the smallest angle of the triangle ([12]). The addition of an extra node on the side of a neighboring triangle causes the triangulation to become non-conforming, and therein lies the main disadvantage of this method. Several works in the literature have proposed proceeding with the solution of the PDEs on such a non-conforming mesh, since this method allows at most a single non-conforming node on every edge given an initial Delaunay mesh. However, as discussed before, even a small amount of non-conformity can give rise to highly non-physical errors in the solution (section 2.1). Hence, several other works have proposed a variety of ways to generate a conforming mesh, as discussed below.
In most cases, the non-conformity of the resulting triangulation is handled by bisecting both triangles that share the edge in consideration [13]. Though this is the easiest way to deal with the problem of non-conformity, it causes the bisection of a neighboring triangle

with no guarantee of the quality of the new triangulation in that triangle, since the shared edge may, in the worst case, be the smallest edge of that triangle. A better method involves the recursive longest edge division of the neighboring triangle, which preserves the quality of the new triangulation and at the same time propagates the refinement, ensuring a smooth refinement process [28]. However, these traditional longest edge bisection methods suffer from non-directional refinement, since the longest edge is not necessarily the edge connecting the elements with the maximum estimated error in the locality. To overcome this, a uniform division into 4 triangles was proposed by Rivara, which is discussed in section 2.4.2. In all of these cases, coarsening of the mesh is only possible using nested element/node techniques.

2.2.3. Similarity division methods
These methods assume a high quality initial grid and retain its quality by generating similar triangles during adaptation. The most common of these techniques is the 4-triangle similarity division, which divides the selected triangle by interconnecting the mid-points of all its edges ([13]). This results in the formation of 4 new similar triangles, thus retaining the earlier quality of the mesh (figure 10). However, this method also suffers seriously from the non-conforming nature of the resulting mesh.


Propagation based conformity techniques using similarity division of the neighboring triangles are not possible, since they result in refinement of the entire mesh: the propagation can only terminate at the boundary of the mesh. Other methods for restoring conformity have been proposed, such as bisection of the adjacent triangles or recursive longest edge division of the neighboring triangles, but they fail to exactly retain the quality of the mesh and rather defeat the purpose of using the computationally costlier similarity division. A unique method was proposed recently [13] involving the temporary bisection of the neighboring triangles while obtaining the solution of the PDEs, with the bisecting edges removed again during the next adaptation of the mesh. Though this is slightly better than the former methods, it still affects the solution. Additionally, post-processing is needed to add the temporary (bisecting) edges and, inversely, to remove them after the solution has been obtained. Finally, an additional data structure is needed to store the labels of the temporary edges.

2.2.4. Centroidal division technique
This is the simplest of all the methods discussed so far and results in completely conforming meshes, with three new triangles replacing the original triangle. The procedure involves dividing the triangle by joining all its vertices to its centroid, resulting in three new triangles (figure 11). However, this method predictably destroys the quality of the triangulation and is hence not preferred ([12]).


2.2.5. Mixed element adaptation
This technique involves the use of mixed elements, such as triangles and quadrilaterals, and results in conforming meshes. Being a relatively new and computationally complex technique, the literature in this field is limited. Some works using this technique are presented in [7] and [22].

2.3. Post-processing techniques These techniques which can either operate in a local internal region or the entire mesh and function to improve the quality of the triangulation using either the redistribution of points Steiner points ([17]) or using passively enforced Delaunay criteria which are described below. They are stand alone techniques independent of the method of adaptation used and often predictably improve the quality. However, they involve the traversal of the mesh in the specified region which is a time consuming procedure unless a powerful data structure is used. Some of the commonly used post processing techniques are mentioned below. 2.3.1. Jiggling This method is used to improve the distribution of points in the mesh by adjusting the positions of the nodes slightly. Each mesh point that is not located on an edge segment is moved toward the center of mass of the polygon formed by the adjacent triangles. As a result, the quality of the triangulation is usually increased. This method has been discussed and applied in the recent work in [29]. Figure 12 shows the efficiency of the jiggling procedure. Note the reduction in the cyanshaded bad-quality triangles.


2.3.2. Edge swapping
This method tries to enforce the Delaunay criterion on a triangular mesh by swapping an edge whose diametral circle contains other nodes. The resulting triangulation is usually better, as shown in figure 13. A recent work using this technique is presented for the 3-dimensional case in [30].
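As an illustration, the sketch below (Python, illustrative only) tests the swap criterion described above for a single interior edge: the edge is flagged for swapping when a vertex of either adjacent triangle lies inside the circle having the edge as its diameter. Note that the classical Delaunay flip uses the circumcircle of one adjacent triangle; the diametral-circle form shown here follows the description in the text.

import numpy as np

def inside_diametral_circle(p, a, b):
    """True if point p lies strictly inside the circle with segment a-b as diameter
    (equivalently, the angle a-p-b is obtuse)."""
    return np.dot(a - p, b - p) < 0.0

def should_swap(edge_ab, opposite_c, opposite_d):
    """edge_ab = (a, b): endpoints of the shared edge; opposite_c, opposite_d:
    vertices of the two adjacent triangles opposite that edge. The edge is
    swapped (replaced by c-d) if either opposite vertex lies inside the
    diametral circle of a-b."""
    a, b = edge_ab
    return (inside_diametral_circle(opposite_c, a, b) or
            inside_diametral_circle(opposite_d, a, b))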

2.4. Adaptive strategies employed in this work
Before moving on to the specific formulations of the adaptation strategies and algorithms applied to the present task, we lay the mathematical groundwork for the notation used ahead. Let Ω be a continuous and smooth domain and Γ = ∂Ω be its piecewise polygonal boundary. Let τ be an initial triangulation of the domain. Then, for any triangle T ⊂ τ as shown in figure 14:

i) its included angles are α, β, γ, where γ ≥ β ≥ α;
ii) its edges are l1, l2 and l3, with lengths l1 ≥ l2 ≥ l3.

Additionally, a set of operations on the mesh is defined as follows:

i) A refinement operation on a node N of the mesh is defined as
   fR(N, τi) = τi+1
   where (τ0, ..., τi, ..., τn) form a set of nested triangulations and τi is obtained by a local refinement of τi-1.
ii) A refinement operation on a triangle T of the mesh is similarly defined as
   fR(T, τi) = τi+1
iii) A coarsening operation at a node N or a triangle T is defined as
   fC(N, τi) = τi-1 and fC(T, τi) = τi-1 respectively.

With these definitions, we can now review the adaptation procedures used in this work.

2.4.1. Node-based adaptation: Diamond-Box longest edge bisection

2.4.1.1. Introduction
Node-based adaptation strategies are ideal when the error estimator is node-based (usually a priori), which is in most cases economical with FEM since all values are evaluated at the nodes. Using element-based adaptation techniques with node-based error estimators results either in gross over-adaptation, because all elements connected to a high-error node are adapted, or in under-adaptation, if a directional selection is made for the element to be adapted. With this as the basic motivation, an adaptation process has been developed using a node-labeling technique for use with right-triangle bintree (RTIN) triangulations (figure 15), which are very efficient on rectangular domains ([4][5]). The categorical labeling of nodes allows extremely simple and highly efficient refinement as well as coarsening in a completely local manner, given an appropriate data structure that uses the adjacency of node types discussed in the following section.


2.4.1.2. Formulation
A categorical labeling of the nodes in a right-triangle nested bintree (RTIN) mesh is carried out based on node connectivity, element connectivity and simple geometric considerations. There exist four distinct types of symmetrically associated nodes in such a mesh, as shown in figure 16:

Type 1 (N1): N is adjacent to 8 other nodes and lies at the intersection of four squares (except on the boundary), each having intersecting diagonals.
Type 2 (N2): N is adjacent to 4 other nodes and lies at the intersection of the diagonals of a square.
Type 3 (N3): N is adjacent to 8 other nodes and lies at the intersection of four squares (except on the boundary) which have at least one diagonal terminating at vertex N.
Type 4 (N4): N is adjacent to 4 other nodes and lies at the intersection of the diagonals of a diamond (a square oriented at 45°).

Thus the general global problem of finding a local refinement/coarsening at N for the triangulation τ becomes the local problem of finding a refinement/coarsening for Nk. This is achieved using the node associativity of the different node types, which in the completely non-adapted case can be symbolized as follows:

N1 → {4(N2), 4(N1)}
N2 → {4(N1)}
N3 → {4(N4), 4(N1)}
N4 → {4(N3)}

With this knowledge, we proceed to use the longest edge bisection algorithm in an auto-conformal manner by a diamond-box recursive procedure. The adaptation of each node type is depicted in figure 17.


2.4.1.3. Properties
1. It is noted that the transformations can be summarized as follows:
2. N1 → N3 + 4(N4)
3. N2 → N3 + 4(N4)
4. N3 → N1 + 4(N2)
5. N4 → N1 + 4(N2)
6. The functional relationships between the four types can be clearly seen from remark (1), which shows that N1 and N2 exhibit a box nature in their transformation while N3 and N4 exhibit a diamond nature; hence the name 'Diamond-Box longest edge bisection'.
7. The transformation is not one-one but many-one, which is why the process cannot simply be inverted for coarsening. This issue is addressed by encoding the history of the node using decimal digits, as discussed in the mesh coarsening section.
8. N1 and N3 nodes cannot be created or destroyed; they can only be refined or coarsened. On the other hand, N4 and N2 nodes can be freely created and deleted during the refinement and coarsening, respectively, of the adjacent N1 and N4 nodes.
9. Due to the local nature of the refinement/coarsening process, it is computationally cheap. This point is further addressed in section 2.4.1.8.
10. A conforming mesh is guaranteed at every step.
11. The quality of the mesh is 100% retained (section 2.4.1.7).
12. Recursive calls due to refinement propagation are limited to a depth of at most two, since it is adequate to refine any two oppositely located adjacent nodes in order to refine the selected node while retaining the conforming nature of the triangulation.

2.4.1.4. Refinement algorithm
In all cases the basic step of the refinement procedure is the same as in the original longest edge bisection method:

• Bisect the longest edge l1 of all associated elements which are connected to adjacent nodes at the current nesting level of the given node.

2.4.1.5. Coarsening algorithm
Since the direct inversion of the refinement process is not possible, as discussed before (section 2.4.1.3), we encode the history of the evolution of the node using a decimal encoding, illustrated as follows. Consider the following transformation history of a node N due to repeated refinements:

N4 → N1 → N3 → N1

This transformation history is stored as a tag for the node in the following form:

4131

Here, the unit's position of the tag represents the present type, while the digit in the next higher place represents the previous state. This is an extremely inexpensive way of concatenating the history of the adaptation process and storing it in a single value. It is now a trivial process to reverse the adaptation and call the coarsening function on the current high-error node as a local problem. A small code sketch of this encoding is given below.
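The following Python sketch (illustrative only; the helper names are hypothetical) shows how such a decimal history tag can be extended on refinement and stripped back on coarsening.

# Illustrative sketch of the decimal history tag: the units digit is the current
# node type; refining appends a digit, coarsening strips it off again.

def push_type(tag, new_type):
    """Record a refinement to new_type, e.g. push_type(413, 1) -> 4131."""
    return tag * 10 + new_type

def pop_type(tag):
    """Undo the last refinement; returns (previous_tag, previous_type).
    e.g. pop_type(4131) -> (413, 3), i.e. the node reverts to type N3."""
    previous_tag = tag // 10
    return previous_tag, previous_tag % 10

# Example: the history N4 -> N1 -> N3 -> N1 is built up as 4, 41, 413, 4131.
tag = 4
for t in (1, 3, 1):
    tag = push_type(tag, t)
assert tag == 4131
assert pop_type(tag) == (413, 3)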


2.4.1.6. Implementation issues
1. It is possible for a node to get adapted without being marked for adaptation if any two of its oppositely adjacent nodes are adapted (figure 18). Hence it is necessary to check for passive adaptation of all nodes connected to the node marked for adaptation after the process has been completed. This is a local post-processing step.
2. A data structure must be used to store the nested hierarchy level of each node, since this is required for nested refinement and coarsening, as well as for resolving issues of multiple adaptation during a single call for adaptation of a set of nodes R(N), where passive and active adaptation occur at the same time.
3. Anti-clockwise local ordering of nodes must be maintained in all elements after both adaptation processes.
4. All measurements of distances within the mesh need to be made with a certain tolerance. For RTIN meshes, we select the multiplicative tolerance parameter as

tolerance = (H/h) · 2^level

where H is the size of the mesh, h is the local size of the triangulation and level is the local hierarchical level of the nested triangulation. Any distance measured for purposes of comparison is then represented by

distance = round(tolerance · measured distance) / 2^level

where round() is a function that rounds off decimal places to give the nearest integer. A small code sketch of this tolerance rule is given below.
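A literal reading of the tolerance rule can be sketched as follows (Python, illustrative only; interpreting the garbled "2level" of the source as 2^level is an assumption).

# Illustrative sketch of tolerance-based distance comparison on an RTIN mesh.
# H, h and level follow the definitions above; reading "2level" as 2**level is
# an assumption made while reconstructing the formula.

def snapped_distance(measured, H, h, level):
    tolerance = (H / h) * 2 ** level
    return round(tolerance * measured) / 2 ** level

def equal_within_tolerance(d1, d2, H, h, level):
    """Two measured distances are treated as equal if they snap to the same value."""
    return snapped_distance(d1, H, h, level) == snapped_distance(d2, H, h, level)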


2.4.1.7. Retention of quality
The adaptation procedure outlined here works on the RTIN mesh and hence assures similarity of all triangles, thus retaining the quality of the initial triangulation 100%.

2.4.1.8. Computational time cost
When implemented with the data structure proposed earlier, this algorithm is local in space and hence linear in time, given the necessity of scanning all elements to mark them for adaptation as a pre-processing step. However, recursive calls are possible, though not to all elements. Hence the average time complexity is roughly O(n log n) rather than O(n), where n is the number of nodes adapted; the latter is often claimed by other algorithms because recursive calls for conformity are not accounted for [12].

2.4.1.9. Post-processing
No additional post-processing is needed, since the triangulation is assured to be Delaunay at all times.

2.4.1.10. Simulation and results
1. Solving the Laplace equation with temperature varying from 0 on one edge of a 2D square domain to 100 on the opposite edge. The solution has been calculated using linear finite elements and is presented here to demonstrate refinement with the Diamond-Box adaptation algorithm. The adaptation criterion is based on the absolute error.

Figure shows obtained solution

Figure shows an adapted mesh based on the previous solution

2. Solving the Laplace (heat conduction) equation to steady state with Dirichlet boundary conditions as in problem (1), starting from zero temperature over the whole domain. The backward Euler time difference formula is used to approximate the time derivative. The solution is marched until convergence to demonstrate the effectiveness of the adaptation algorithm.

Figure shows the solution at some intermediate time

Figure shows the adapted grid based on an error estimate (section 5)

Figure shows the solution on the adapted mesh; observe the smoothening of the mesh


2.4.2. Element based adaptation strategy: Rivara's 4T-LE algorithm

2.4.2.1. Introduction
Although the Diamond-Box algorithm produces high quality results on rectangular domains (section 2.4.1) and is highly compatible with a priori error estimators, its use in more general and complex cases is difficult (issues of boundary tessellation). Furthermore, the practical use of a priori error estimators is very limited and confined to research purposes, such as the testing of new mesh generation, adaptation or solution strategies (section 5). For practical simulations, a posteriori error estimators are used. Additionally, most a posteriori error estimators are based on residual calculations (section 5.2) and are hence evaluated over each element; it is therefore more natural to use an element-based adaptation strategy in such cases. The four-triangle (4T) longest edge bisection algorithm proposed by Rivara combines the salient features of the traditional longest edge algorithm with non-directionality and higher resolution ([12]). Moreover, its nested refinement allows the use of computationally local coarsening algorithms based on nested meshes and geometric properties, with the use of an efficient data structure (section 2.4.2.6). The quality of the triangulation is also improved.

2.4.2.2. Formulations
The fundamental step of this algorithm is the division of a target triangle into four triangles by bisecting the longest edge (LE) of the target triangle as well as those of the two newly generated triangles. The process is non-recursive in itself at the local level. However, ensuring the conforming nature of the triangulation calls for a recursive longest edge bisection of the neighboring triangles, as discussed in section 2.4.2.4. The process is depicted in figure 19. It is further observed that of the four resulting triangles, two are similar to the parent triangle while two are in general dissimilar and of improved quality (section 2.4.2.3).

Another interesting property of this algorithm lies in the fact that it can be easily used to form nested Delaunay triangulation strategies since it naturally benefits from a good point distribution and insertion of new points [12][14].


2.4.2.3. Mathematical properties
For the given target triangle T (⊂ τi), the task is to find the next nested triangulation τi+1 by iteratively partitioning T using repeated longest edge bisection: first obtaining T½1 and T½2, and then T¼11, T¼12, T¼21 and T¼22 (as shown in figure 19), by inserting a new node on edge l1 of triangle T and then on the longest edges of T½1 and T½2. Let l4 be the new edge added to a triangle (between the opposite vertex and the mid-point of its longest edge) following its LE bisection. In the remainder of this document we shall always consider the triangle T¼12 (the unique distinct triangle obtained after the first 4-triangle partition of T), which has the midpoints of l1 and l2 as vertices (see figure 19). The properties of the 4T-LE refinement algorithm are as follows (detailed proofs can be found in [12]):

1. T¼11 ≡ T¼22 and T¼12 ≡ T¼21, but T¼12 is in general dissimilar to T¼11.
2. Furthermore, T¼11 ≡ T in all cases.
3. If T is non-obtuse, then T¼12 is obtuse (or right angled), such that γ(T) + γ(T¼12) = π, and the 4T-LE bisection of T¼12 iteratively produces only triangles similar to T and T¼12.
4. If T is obtuse, then α(T¼12) > α(T) and γ(T¼12) < γ(T) − α(T¼12). In addition, if T¼12 is non-obtuse, then property (3) holds.
5. Thus, the algorithm is found to improve the quality of obtuse triangles greatly while bounding the smallest angle in all possible cases (both obtuse and non-obtuse T).

2.4.2.4. Refinement algorithm
The 4T-LE refinement procedure can be stated algorithmically as follows (a code sketch follows):

// Given a current nested triangulation τ and a triangle To of it marked for refinement:
1. Longest edge bisection of To (produces T½1 and T½2)
2. Longest edge bisection of T½1
3. Longest edge bisection of T½2

// Longest edge bisection of a triangle T:
1. Locate the mid-point of the longest edge l1 of T.
2. Add a node N at that location if it does not already exist.
3. Add the bisector edge l4 and form the new triangles T½1 and T½2.
4. Update the data structures.
5. Check the neighbors for conformity and perform longest edge bisection on them if they are not conformal.
6. Return T½1 and T½2.

// The propagation of the refinement to ensure conformity is demonstrated in figure 20.
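The sketch below illustrates the longest edge bisection step with conformity propagation (Python, illustrative only; the mesh representation and helper names are hypothetical, and the actual work was implemented in MATLAB). It uses the backward form of the propagation: an incompatible neighbor is refined first so that the shared edge becomes the longest edge of both triangles, after which the pair is bisected together and the mesh stays conforming. The 4T-LE refinement then applies this step to the target triangle and to its two children, as in the procedure above.

# Illustrative sketch of recursive longest-edge bisection with conformity propagation.
import numpy as np

class Mesh:
    def __init__(self, nodes, triangles):
        self.nodes = [np.asarray(p, dtype=float) for p in nodes]
        self.triangles = [tuple(t) for t in triangles]   # triples of node indices (None = retired)

    def longest_edge(self, tri):
        edges = [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]
        return max(edges, key=lambda e: np.linalg.norm(self.nodes[e[0]] - self.nodes[e[1]]))

    def neighbor_across(self, t, edge):
        a, b = edge
        for s, tri in enumerate(self.triangles):
            if s != t and tri is not None and a in tri and b in tri:
                return s
        return None

    def midpoint(self, a, b):
        m = 0.5 * (self.nodes[a] + self.nodes[b])
        for i, p in enumerate(self.nodes):
            if np.allclose(p, m):
                return i                                  # reuse an existing node
        self.nodes.append(m)
        return len(self.nodes) - 1

    def split(self, t, a, b, m):
        """Replace triangle t by the two children obtained by bisecting edge (a, b) at m."""
        c = [v for v in self.triangles[t] if v not in (a, b)][0]
        self.triangles[t] = None
        self.triangles.extend([(a, m, c), (m, b, c)])
        return len(self.triangles) - 2, len(self.triangles) - 1

    def refine(self, t):
        """Longest-edge bisection of triangle t; returns the indices of its two children."""
        while True:
            a, b = self.longest_edge(self.triangles[t])
            n = self.neighbor_across(t, (a, b))
            if n is None or set(self.longest_edge(self.triangles[n])) == {a, b}:
                m = self.midpoint(a, b)
                children = self.split(t, a, b, m)
                if n is not None:
                    self.split(n, a, b, m)                # bisect the neighbor too: stays conforming
                return children
            self.refine(n)                                # incompatible neighbor: refine it first

def refine_4T_LE(mesh, t):
    """Rivara 4T-LE refinement: bisect the target triangle, then each of its two children."""
    c1, c2 = mesh.refine(t)
    for child in (c1, c2):
        if mesh.triangles[child] is not None:             # may already be split by propagation
            mesh.refine(child)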

2.4.2.5. Computational cost
Rivara points out in [12] that the time cost analysis of the algorithm should study its asymptotic behavior in obtaining triangles of a specified size over a desired refinement region R. From this point of view the refinement algorithm performs two essential tasks:
(i) Task 1: insertion of N vertices inside the region R (refinement of the target triangles), which ensures that the tolerance condition on the diameters of the triangles is satisfied.
(ii) Task 2: insertion of K vertices outside the region R (by refinement of some non-target triangles), which ensures the conformity of the refined mesh.
Thus it is enough to show that the time cost of inserting a point into a single triangle is constant in order to prove that the overall cost is linear in the number of points to be inserted. This follows from a closer look at the algorithm detailed in the previous section, which shows the problem to be local to the concerned triangle, requiring no global searches or traversals of the data structures (section 2.4.2.6). It follows that the overall time cost for a set of P triangles to be refined is linear, O(P). However, recursive calls for ensuring conformity of the mesh may cause a larger number of points to be selected: for each of the K points of task 2 there may be another K' associated points which need to be inserted in order to maintain conformity. Hence, the overall cost of the algorithm can be roughly estimated as O(N log N) ([12]).

2.4.2.6. Data structures
To facilitate easy local traversal of the domain, we use the topological edge connectivity of each node (figure 21) and the topological element connectivity of each edge as additional data structures, besides the storage of the node, edge and element data. As demonstrated in the previous section, this allows both the refinement process and the coarsening process (section 2.4.2.9) to be completely local in nature.
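A minimal sketch of such connectivity tables is shown below (Python dictionaries, illustrative only). Node-to-edge and edge-to-element maps of this kind are what make the neighbor lookups in the refinement and coarsening procedures local.

# Illustrative sketch: build node->edges and edge->elements connectivity tables
# from a triangle list, so that neighbor queries become local dictionary lookups.
from collections import defaultdict

def build_connectivity(triangles):
    """triangles: iterable of 3-tuples of node indices."""
    node_to_edges = defaultdict(set)
    edge_to_elements = defaultdict(set)
    for t, tri in enumerate(triangles):
        for i, j in ((0, 1), (1, 2), (2, 0)):
            edge = tuple(sorted((tri[i], tri[j])))   # undirected edge key
            edge_to_elements[edge].add(t)
            node_to_edges[tri[i]].add(edge)
            node_to_edges[tri[j]].add(edge)
    return node_to_edges, edge_to_elements

# Example: the neighbor of triangle t across edge e is the other member of
# edge_to_elements[e]; boundary edges have exactly one incident element.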


2.4.2.7. Iterative generation and retention of stable molecules As mentioned earlier, the 4T-LE algorithm leads to the generation of stable molecules. A formal definition of the term follows: For any conforming triangulation τ and any vertex N of τ, we shall call the stable molecule associated with vertex N to be the fixed partition of the plane around vertex N, induced by the iterative use of the longest-side refinement algorithm, for refining the triangulation around the vertex N. The size of the stable molecule will be the number of angles converging in N.

The examples of Figures 22 and 23 illustrate these ideas. More specifically, the shadowed polygons of Figures 22(a) and 22(b), respectively correspond to the stable molecule associated with the vertices C and P (the angles converging in C and P are not refined anymore). In Figure 23, the longest-side refinement algorithm was

used over a one-triangle initial triangulation (triangle ABC) in order to refine the triangulation around the vertices C and B respectively. Notice that the first iterations of the algorithm around vertex C construct the stable molecule, while subsequent iterations only produce a fractal geometry around this vertex. In the case of vertex B, in contrast, the algorithm immediately "finds" the fractal geometry, since the stable molecule of vertex B is exactly defined by the sides AB and BC.
As a result of the above discussion, the following observation is summarized. Let τ be any conforming triangulation and consider any vertex N of τ. The use of the longest-side refinement algorithm to refine the triangulation around the vertex N produces triangulations with the following characteristics:
1. After a finite number of iterations, the algorithm produces a triangulation τ* such that the stable molecule associated with vertex N is obtained.
2. Subsequent iterations of the algorithm do not partition the angles of the stable molecule, but only introduce a set of new vertices distributed in geometric progression along the sides of the stable molecule of N.
Part (2) follows from the fact that the triangles sharing vertex N in each intermediate triangulation have their longest side coincident with one of the straight line segments that define the stable molecule, with N as one of its end points; thus the new points inserted are always the midpoints of these sides.

2.4.2.8. Post-processing
The 4T-LE algorithm is not Delaunay in itself, though it can be combined with point insertion strategies which assure a good quality Delaunay triangulation (at a higher computational cost) [12][24]. Hence, it may appear desirable to use suitable post-processing techniques such as jiggling or edge swapping, which were discussed in section 2.3. However, these destroy the geometric properties of the triangulation imparted by the refinement process, rendering the coarsening algorithm used in this work (section 2.4.2.9) impossible, and they are hence not advisable here.

2.4.2.9. Coarsening procedure
The coarsening procedure outlined here closely follows the work of [19] and is based on the following assumptions:
1. There exists a unique conforming triangulation τi-1 for every triangulation τi in a sequence of hierarchical nested triangulations (τ0, ..., τi-1, τi, ...) generated by repeated use of the 4T-LE refinement.
2. New nodes are inserted on the longest edge of a triangle; hence each such node is located at the intersection of at least two edges which form a straight line.
3. Furthermore, these edges are lower in hierarchy than the other sides of the stable molecule associated with this vertex, since they are 'older'.
With these assumptions, the devised algorithm is presented as follows:
// Given a series of hierarchical nested triangulations (τ0, ..., τi-1, τi, ...), where τi

With these assumptions, the algorithm hereby devised is presented as follows: //Given a series of hierarchical nested triangulations (τ0, …., τi-1, τi,...), where τi 26

is a 4T-LE refinement of τi-1, the task is to coarsen a triangle T belonging to τi such that the conforming nature of the triangulation is retained:
1. Locate the node N* of highest hierarchical level among the vertices of T.
2. Apply the 4T-LE coarsening algorithm (below) at N*.

// 4T-LE coarsening algorithm for any vertex N:
1. Let R = (N1, N2, ...) be the set of vertices converging in N.
2. Coarsen (4T-LE coarsen) all elements of R which are at a higher hierarchical level than N.
// This ensures the conformity of the final triangulation.
// Following step (2), only those elements of R should remain which either share straight edges with N or form the bisectors of the triangles which are to remain after the coarsening procedure (figure 24).
3. Detect the edges converging in N which form a pair of collinear segments, e1 and e2.
4. Merge the triangles on either side of e1 and e2.
5. Delete vertex N and update all data structures.

A sketch of the collinearity test used in step (3) is given below.
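The collinearity detection of step (3) can be sketched as follows (Python, illustrative only; the vertex coordinates and the list of edges converging in N are assumed to be available from the data structures of section 2.4.2.6).

# Illustrative sketch of step (3): among the edges converging at vertex N, find a
# pair (e1, e2) whose far endpoints A and B are collinear with N and on opposite
# sides of it, i.e. N lies on the straight segment A-B.
import numpy as np

def collinear_pair_at(N, neighbor_points, tol=1e-9):
    """N: coordinates of the vertex; neighbor_points: list of (index, coords) of the
    far endpoints of the edges converging in N. Returns a pair of indices or None."""
    N = np.asarray(N, dtype=float)
    pts = [(i, np.asarray(p, dtype=float)) for i, p in neighbor_points]
    for k, (ia, A) in enumerate(pts):
        for ib, B in pts[k + 1:]:
            u, v = A - N, B - N
            cross = u[0] * v[1] - u[1] * v[0]
            if abs(cross) < tol and np.dot(u, v) < 0:   # collinear and on opposite sides
                return ia, ib
    return None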

Some of the properties of the algorithm are:
1. The 4T-LE coarsening algorithm works solely using the principles of nested triangulations and the geometric properties of the 4T-LE refinement.
2. Since it is local in nature, the algorithm is linear in time.
3. The algorithm results in a mild reduction of the mesh and not in a complete reduction, as shown in figure 25. This assures a highly bounded region of coarsening propagation (needed to ensure conformity). It also results in a slower decay of the mesh than its rate of growth during refinement. This is an intuitively beneficial feature, especially in dynamic problems, where the variation of the solution over time can be completely arbitrary, making it difficult to predict the next region of sudden error growth that will need to be refined again.



2.4.2.10. Simulation and results
1. Solving the Laplace equation with temperature varying from -100 on one edge of a square domain to 100 on the opposite edge.

Figure shows obtained solution

Figure shows the adapted grid based on the obtained solution

Figure shows the solution on the adapted mesh

2. Solving the Laplace equation to steady state with Dirichlet boundary conditions as in problem (1), starting from zero temperature over the whole domain.

Figure shows an adapted grid based on the solution obtained at some intermediate time

Figure shows the solution on the adapted grid

3. The Finite Element Method in brief

3.1. Introduction
The finite element method (FEM) has been used for the past several decades for solving systems of partial differential equations over simple to complex domains in one to three dimensions. It involves the solution of an auxiliary equation based on the weighted residual of the PDE system; alternatively, it may be formulated using the functional of the system of PDEs. It is preferred over the finite difference method due to its increased accuracy and flexibility and the possibility of solving over unstructured meshes. The following section discusses some of the mathematical aspects of FEM.


3.2. Formulation
As a model problem, consider a partial differential equation of the form

L(u) = f in Ω    (1)

where the (possibly nonlinear) differential operator L may consist of both spatial and time derivatives. The variational form of eqn (1) is derived by multiplying the residual of this equation by a weighting function w and integrating over the domain:

∫Ω w {L(u) − f} dx = 0    (2)

It is well known that the solution to (2) exists and is unique. Let the exact solution u be approximated by means of finite elements:

u ≈ uh = ∑j uj Φj    (3)

where the Φj denote the basis functions spanning the finite-dimensional subspace, in this case a triangle T of the triangulation τ of the domain Ω, and j runs over the local nodes associated with each finite element T. Furthermore, we associate the subscript 'h' with vectors defined over all nodes of τ. Equation (2) can now be written in its discrete form

∫T w {L(∑j uj Φj) − f} dx = 0    (4)

for each element T. The weak formulation of (4) may be obtained by applying Green's formula to reduce the order of L. The sum of the contributions of all elements builds up what are known as the stiffness matrix K and the nodal force vector f (both terms derived from structural mechanics, where FEM is frequently used to compute resultant stresses and strains). Thus the final approximate solution to (1) is obtained from

K uh = f    (5)

where the solution is defined at all nodes of the triangulation. For the continuous Galerkin method, the weighting functions are chosen as

wj = ∂uh/∂uj    (6)

and thus

wj = Φj    (7)

Transient analysis: For time dependent problems, problem (1) can be restated as

∂u/∂t + L*(u) = f    (8)

where L := (∂/∂t + L*). The variational formulation then becomes

∫T w { ∑j Φj (∂uj/∂t) } dx + ∫T w { L*(∑j uj Φj) − f } dx = 0    (9)

Discretization in the time variable is achieved using the finite difference method:

∂uj/∂t = (uj^t − uj^t′) / Δt    (10)

where uj^t is the solution at time level t (the current time), uj^t′ is the solution at time level t′ (the previous time) and Δt is the discrete time step. Thus, after using proper weighting functions w, equations (9)-(10) can be written in the same form as equation (5) by introducing the time element matrix M:

Forward difference method:
M (u^t − u^t′) + Δt K u^t′ = Δt f    (11)

Backward difference method:
M (u^t − u^t′) + Δt K u^t = Δt f    (12)

Both formulations are first order accurate in time. Other schemes such as the Crank-Nicolson method are second order in time, whereas the Runge-Kutta algorithm is 4th order in time [31].

3.3. Non-linearity in equations
The non-linear force terms in equation (11) or (12) are often dealt with by solving iteratively, with forward substitution of the non-linear terms, until convergence. This can be outlined by the following algorithm (a code sketch follows):

// The solution of equation (11)/(12) with non-linear f at time t is to be found, given the solution at t = t′
// E(.) is some form of relative error estimate between consecutive iterative steps
// i is the iteration counter
1. While E(u^i − u^{i−1}) > Eo, repeat steps 2-3.
2. Set i = i + 1 and evaluate f(u^{i−1}).
3. Solve equation (11)/(12) with the value of f from step 2 to obtain u^i.

Here Eo represents a user defined numerical threshold. This process is repeated for every time step.
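A compact sketch of this iteration, combined with the backward difference formula (12), is given below (Python with NumPy, illustrative only; M, K and the non-linear force function are assumed to be assembled elsewhere, and the real implementation of this work was in MATLAB).

# Illustrative sketch: one backward-difference time step of M(u^t - u^t') + dt*K*u^t = dt*f(u^t),
# with the non-linear force term lagged and updated by fixed-point iteration.
import numpy as np

def backward_euler_step(M, K, f_of_u, u_prev, dt, tol=1e-8, max_iter=50):
    """M, K: (n, n) arrays; f_of_u: callable returning the (n,) force vector;
    u_prev: solution at the previous time level."""
    A = M + dt * K                       # system matrix, constant over the iterations
    u = u_prev.copy()                    # initial guess: previous time level
    for _ in range(max_iter):
        rhs = M @ u_prev + dt * f_of_u(u)       # lagged non-linear force
        u_new = np.linalg.solve(A, rhs)
        if np.linalg.norm(u_new - u) <= tol * max(np.linalg.norm(u_new), 1.0):
            return u_new
        u = u_new
    return u                             # return the last iterate if not converged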

3.4. Finite elements tested in this work

3.4.1. Linear triangular elements


As seen in figure 26, T(p1, p2, p3) is the triangular finite element considered in this section. In the physical x-y plane, its vertices have coordinates p1(x1, y1), p2(x2, y2) and p3(x3, y3). We must define shape functions Φj(P(x, y)), j = 1, 2, 3, for the element such that

Φj(pi) = 1 if j = i and 0 if j ≠ i

and additionally

∑j Φj = 1

A valid candidate for the shape functions are the area coordinates

Φ1 = A1/∆,  Φ2 = A2/∆,  Φ3 = A3/∆

where Aj(P) is the area of the sub-triangle opposite to vertex pj for the evaluation point P, as shown in figure 26, and ∆ is the area of T(p1, p2, p3), since

A1 + A2 + A3 = ∆,  Aj(pj) = ∆,  Aj(pi) = 0 for i ≠ j.

With the shape functions defined, we can proceed from eqn (3). As can be clearly seen, these shape functions give a linear variation of uh over the element T.
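As an illustration, the area coordinates of a point, and hence the linear shape function values, can be computed as below (Python, illustrative only; this is a standard construction, not code from the report).

# Illustrative sketch: area coordinates (= linear shape functions) of a point P
# inside triangle (p1, p2, p3), using signed triangle areas.
import numpy as np

def signed_area(a, b, c):
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def linear_shape_functions(P, p1, p2, p3):
    """Returns (Phi1, Phi2, Phi3) with Phi_j = A_j / Delta; they sum to 1."""
    delta = signed_area(p1, p2, p3)
    A1 = signed_area(P, p2, p3)      # sub-triangle opposite vertex p1
    A2 = signed_area(p1, P, p3)      # sub-triangle opposite vertex p2
    A3 = signed_area(p1, p2, P)      # sub-triangle opposite vertex p3
    return A1 / delta, A2 / delta, A3 / delta

# Example: at vertex p1 the values are (1, 0, 0); at the centroid, (1/3, 1/3, 1/3).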


3.4.2. Quadratic triangular elements (using isoparametric elements)
Though it is a very simple tool for implementing the finite element method, the linear triangular element is often insufficient in terms of solution accuracy and requires a very fine mesh, increasing the computational cost in both time and space. Instead, it is advisable to work with the second order triangular element, which is discussed in this section. Beyond second order, the computational complexity and the chance of algorithmic error increase greatly, and hence most works that use uniform elements are restricted to second order. A notable exception is the p-adaptive FEM technique, which uses higher order FEM approximations over target elements that display a high estimated error. Moreover, the use of second order elements allows highly unstructured grids to be used. Further reading can be found in [32][33][34].
The formulation of the quadratic element involves the placement of three new nodes at the mid-points of the sides of the triangle. These formulations are extremely cumbersome in the physical plane (figure 27(a)) and are hence carried out using the isoparametric element technique, wherein the computations are bijectively mapped into a special computational space (figure 27(b)).


As with the linear element, we once again use the area co-ordinates here in the computational space as seen in figure 27(b). Thus,

ξ1 = A1/Δ,  ξ2 = A2/Δ,  ξ3 = A3/Δ

where the notation on the right hand side is the same as defined before. Observe that

ξ1 + ξ2 + ξ3 = 1

so the number of independent variables is two (ξ1 and ξ2), with ξ3 treated as a dependent variable in the computational space. The first step is to formulate the mapping from the physical space to the computational space. Since we are using quadratic formulations, the mapping rule is

x = a1 + a2 ξ1 + a3 ξ2 + a4 ξ1² + a5 ξ2² + a6 ξ1 ξ2

where x = [x y]T and ai = [aix aiy]T. Using the mapping conditions at the known points p1, p2, p3, p4, p5 and p6 shown in figure 27, we obtain

a1k = k3
a2k = 4k6 − k1 − 3k3
a3k = 4k5 − k2 − 3k3
a4k = 2k1 − 4k6 + 2k3
a5k = 2k2 − 4k5 + 2k3
a6k = 4(k4 − k6 + k3 − k5)

for k = x, y. Substituting back into the mapping rule gives

x = Φ ẍ

where ẍ = [x1 x2 x3 x4 x5 x6]T and Φ is the vector of required shape functions, obtained as

Φ = [ξ1(2ξ1 − 1)  ξ2(2ξ2 − 1)  ξ3(2ξ3 − 1)  4ξ1ξ2  4ξ2ξ3  4ξ3ξ1]

where we must remember that ξ3 = 1 − ξ1 − ξ2. This can be substituted directly into equation (3). To simplify later formulations, we also introduce the Jacobian matrix J of the transformation, defined by

[∂/∂ξ1  ∂/∂ξ2]T = J [∂/∂x  ∂/∂y]T

Following simple chain rules of differentiation, and using the fact that p4, p5 and p6 are the mid-points of the sides of the triangle, we get

J = [ (x1 − x3)  (y1 − y3) ]
    [ (x2 − x3)  (y2 − y3) ]

It is now trivial to show that the determinant of J is given by

det(J) = 2∆

Also, it is useful to note that we can transform domain integrals from the physical space to the computational space using the transformation:

dx dy = det(J) dξ1 dξ2
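The quantities above can be sketched in code as follows (Python, illustrative only); the node ordering p1..p6 and the placement of the mid-side nodes follow the description in the text.

# Illustrative sketch: quadratic (6-node) triangle shape functions in area
# coordinates, and the Jacobian of the map for a straight-sided element with
# mid-side nodes at the edge midpoints.
import numpy as np

def quadratic_shape_functions(xi1, xi2):
    """Returns the 6 shape functions at (xi1, xi2), with xi3 = 1 - xi1 - xi2.
    Ordering: corner nodes p1, p2, p3, then mid-side nodes p4 (p1-p2), p5 (p2-p3), p6 (p3-p1)."""
    xi3 = 1.0 - xi1 - xi2
    return np.array([xi1 * (2 * xi1 - 1),
                     xi2 * (2 * xi2 - 1),
                     xi3 * (2 * xi3 - 1),
                     4 * xi1 * xi2,
                     4 * xi2 * xi3,
                     4 * xi3 * xi1])

def jacobian(p1, p2, p3):
    """J = [[x1-x3, y1-y3], [x2-x3, y2-y3]]; det(J) = 2 * (element area)."""
    J = np.array([[p1[0] - p3[0], p1[1] - p3[1]],
                  [p2[0] - p3[0], p2[1] - p3[1]]])
    return J, np.linalg.det(J)

# Sanity check: the shape functions sum to 1 everywhere, e.g. at the centroid.
assert abs(quadratic_shape_functions(1/3, 1/3).sum() - 1.0) < 1e-12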

4. Phase field modeling of SiC deposition in the ICVI process

4.1. The SiC ICVI process
Ceramic-matrix composite materials made of Silicon Carbide (SiC) are gaining increasing importance in industrial applications because of useful properties such as improved fracture toughness, resistance to corrosion and light weight. They are widely used in the space industry as well as in the defense industry (figure 28); their use has so far largely been limited to these fields, but this limitation is slowly being eliminated as such composites become a preferred choice of material in several other industries which affect our day to day life, such as the automotive industry. Large scale production of SiC composites is, however, still restricted by cost.
One of the major ways of producing SiC fiber reinforced SiC matrix material (widely used in the defense industry, for example in bullet proof jackets) is the Isothermal Chemical Vapor Infiltration (ICVI) process, a special form of the Chemical Vapor Deposition (CVD) process. In a typical CVD process, the substrate is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit (figure 29). Frequently, volatile by-products are also produced, which are removed by the gas flow through the reaction chamber. In the ICVI process for SiC, methyltrichlorosilane, CH3SiCl3 (MTS), is used as the precursor together with H2, and HCl is formed as a by-product along with the SiC which is deposited on the SiC preforms (matrix), which are maintained at constant temperature and pressure. The coating on the fibers, which exhibits different thermal and mechanical properties, combines the excellent mechanical properties of the SiC fibers with the improved resistance of the SiC matrix towards aggressive environments.


4.2. Mathematical modeling of the ICVI process of SiC
The successive growth of the SiC matrix during the CVI process can lead to fibre interlocking and finally to pore blockage, which increases the porosity, and hence the volume-to-mass ratio of the material, which is highly undesirable for most industrial purposes. As a result, a great deal of scientific research has been directed towards the mathematical modeling of the deposition process and towards the study of the topological growth of the SiC matrix, and especially of the pores formed, in order to establish a correlation between the process conditions and the resulting pore micro-structure. Two main methodologies are used to simulate the evolution of the growth profile of the deposited matrix: the sharp interface model and the level-set method. In the sharp interface model, which has been researched extensively, the growth profile is described mathematically as a moving boundary problem.

However, there are two main disadvantages of the sharp interface model: first, the field variables are discontinuous across the interface that separates the domain; second, from a computational point of view, the sharp interface model requires accurate and stable tracking and description of the moving interface, which demands complicated re-meshing of the computational domain for systems with topological changes. The level-set method, on the other hand, describes the movement of the interface with an advection equation in which the interface is represented by the zero contour of a level-set function. It can handle numerical computations involving curves, surfaces and topological changes on a fixed grid. From the computational point of view, however, the level-set method suffers from numerical instability. Compared with the level-set method, the phase-field model uses a system of nonlinear reaction-diffusion type equations to describe the interface evolution. The idea is to replace the interface by a diffuse region in which the level-set function φ, acting as the phase-field parameter, is introduced to implicitly describe the interface between the two phases. This parameter changes continuously across this region, from its value in the solid phase on the solid-diffuse boundary to its value in the gas phase on the diffuse-gas boundary. It has been proved by asymptotic analysis that the solution of the phase-field equation reduces to the sharp-interface solution as the thickness of the interface tends to zero.

4.3. The phase-field model
The derivation of the phase-field model is carried out on a unidirectional fiber bundle (figure 30). The domain consists of the region Ωgas, in which the homogeneous gas-phase processes (both convection and diffusion) take place, and the region Ωsolid, which represents the fibers in the fiber bundle (figure 30):

Ω = Ωgas(t) ∪ Ωsolid(t), where t is the evolution time.

The diffuse interface is neither solid nor gas but a mixture of them. The phase-field parameter is defined as:

φ(x,t) = 1,      x ∈ Ωgas(t)
φ(x,t) = g(x,t), x ∈ Γ(t)
φ(x,t) = −1,     x ∈ Ωsolid(t)

where g(x,t) is a function varying continuously in x. The main idea of the phase-field model is to couple the interfacial free energy with the convection-diffusion-reaction equations and thus to couple the phase-field parameter


with a relevant variable describing the deposition process, which in this case is the concentration of the SiC deposit.

4.3.1. System of partial differential equations
Following mathematical analysis of the parameters described above, the following system of coupled non-linear convection-diffusion-reaction type equations is obtained in two dimensions:

τ ∂φ/∂t = ξ² {(1/l²) ∂²φ/∂x² + (1/ξ²) ∂²φ/∂y²} – 4Aφ(φ² – 1) + (B/2) cSiC    (13)

∂cMTS/∂t = {(1 + φ)/2} {(–vx/l) ∂cMTS/∂x + ∂/∂x[(DMTS,H2/l²) ∂cMTS/∂x] + ∂/∂y[(DMTS,H2/ξ²) ∂cMTS/∂y]} + {(1 – φ)/2} δ(φ) rMTS    (14)

∂cSiC/∂t = {(1 – φ)/2} δ(φ) rSiC    (15)

where:
φ = phase field parameter
cMTS = concentration of MTS
cSiC = concentration of SiC
l = length of the fiber
ξ = thickness of the interface
x = normalized X co-ordinate = x*/l
y = normalized Y co-ordinate = y*/ξ
t = time of evolution
τ = relaxation time for the free-energy modeling
A, B = phenomenological constants
vx = convection velocity of MTS longitudinal to the SiC fiber
DMTS,H2 = binary diffusivity of MTS in H2
δ(φ) = 1 for φ > –1 and 0 otherwise
rMTS = heterogeneous reaction rate for MTS, given by

rMTS = – Sv k(T) cMTSⁿ

where
Sv = surface-to-volume ratio of the microstructure
k(T) = Arrhenius rate of the reaction
T = absolute temperature of the reaction
n = reaction order
rSiC = heterogeneous reaction rate for SiC, given by

rSiC = k(T) cMTSⁿ

The surface to volume ratio of the micro-structure is given as,

Sv = 2(1 – ϵ)/(ϵ r)

where ϵ = porosity of the microstructure and r = radius of the SiC fibre. The Arrhenius rate of reaction is given by

k(T) = ko exp{–Ea/(RT)}

where ko = pre-exponential rate constant for the reaction, Ea = activation energy for the reaction, R = universal gas constant and T = absolute temperature. (A short numerical sketch of these rate terms is given after the boundary conditions below.)

4.3.2. Initial conditions
All the initial conditions listed below are applied on the domain Ω\Γ at time t = 0:

φ(x,0) = 0,  cMTS(x,0) = 0,  cSiC(x,0) = 0

4.3.3. Boundary conditions
All boundary conditions are shown diagrammatically in the following figure:

In the figure, c stands for both cMTS and cSiC.
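To make the reaction terms of section 4.3.1 concrete, the following is a minimal numerical sketch in Python of the surface-to-volume ratio, the Arrhenius rate and the heterogeneous reaction rates. The parameter values are illustrative assumptions only, not the data used in the simulations (which follow [35]).

```python
import math

def surface_to_volume(porosity, fibre_radius):
    """Sv = 2(1 - eps) / (eps * r), the surface-to-volume ratio of the microstructure."""
    return 2.0 * (1.0 - porosity) / (porosity * fibre_radius)

def arrhenius(k0, Ea, T, R=8.314):
    """k(T) = k0 * exp(-Ea / (R T))."""
    return k0 * math.exp(-Ea / (R * T))

def reaction_rates(c_mts, porosity, fibre_radius, k0, Ea, T, n=1.0):
    """Heterogeneous rates r_MTS = -Sv k(T) c_MTS^n and r_SiC = k(T) c_MTS^n."""
    Sv = surface_to_volume(porosity, fibre_radius)
    kT = arrhenius(k0, Ea, T)
    r_mts = -Sv * kT * c_mts ** n
    r_sic = kT * c_mts ** n
    return r_mts, r_sic

# Illustrative (assumed) values only.
print(reaction_rates(c_mts=1.0, porosity=0.6, fibre_radius=5e-6,
                     k0=1.0e3, Ea=1.9e5, T=1300.0))
```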


4.4. Finite element formulations
The finite element modeling of the equations described above is carried out on a uniform triangular mesh which conforms to the domain Ω and its boundary Γ, shown in the figure above. Quadratic elements, as discussed earlier, are used to model each sub-region of the triangulation. Using the notation introduced there, the term-by-term weak formulations of equations (13), (14) and (15) can be presented as follows.

For the phase-field parameter (equation 13):

∫T w ξ² {(1/l²) ∂²φ/∂x² + (1/ξ²) ∂²φ/∂y²} dx = – ∫T ξ² {(1/l²) (∂φ/∂x)(∂w/∂x) + (1/ξ²) (∂φ/∂y)(∂w/∂y)} dx

since ∂φ/∂n is zero on all of ΓNφ, the Neumann boundary for φ, as indicated in the figure above. The non-linear second and third terms are evaluated iteratively until convergence by the procedure discussed earlier. Let fφ denote the force term (the remaining two terms in eq. 13). Then,

fφ = – 4Aφ(φ² – 1) + (B/2) cSiC

No weak formulation is needed for fφ; it is integrated directly.

For the MTS concentration (equation 14): let GMTS := (1 + φ)/2. Then the weak formulations for the different terms can be summarized as follows:

∫T GMTS w {∂/∂x[(DMTS,H2/l²) ∂cMTS/∂x] + ∂/∂y[(DMTS,H2/ξ²) ∂cMTS/∂y]} dx = – ∫T GMTS DMTS,H2 {(1/l²) (∂cMTS/∂x)(∂w/∂x) + (1/ξ²) (∂cMTS/∂y)(∂w/∂y)} dx + ∫Γgas/solid w (∂cMTS/∂n) dΓ

Since the boundaries Γgas and Γsolid are insulated, the corresponding boundary condition applied on them gives

∂cMTS /∂n = – ( vx/l ) cMTS

Substituting back into the weak formulations outlined earlier gives the complete formulation for the diffusive term in eq 14. The convective term is not included in the weak formulations and is integrated directly. Similar to the build up of the force term for eq 13, we can define

fcMTS = {(1 – φ)/2} rMTS

This term is likewise excluded from the weak formulation and integrated directly. Finally, since there is no diffusive term in eq. 15, no weak formulation is needed for it.

Thus we only define

fcSiC = {(1 – φ)/2} rSiC

Combined with the time variation, the net weak FEM formulations for eqs. 13-15 can be written as:

∫T τ w (φh^t – φh^t') / Δt dx + ∫T ξ² {(1/l²) (∂φh/∂x)(∂w/∂x) + (1/ξ²) (∂φh/∂y)(∂w/∂y)} dx – ∫T w fφ,h dx = 0

∫T w (cMTS,h^t – cMTS,h^t') / Δt dx + ∫T GMTS DMTS,H2 {(1/l²) (∂cMTS,h/∂x)(∂w/∂x) + (1/ξ²) (∂cMTS,h/∂y)(∂w/∂y)} dx – ∫Γgas/solid w (vx/l) cMTS,h dΓ – ∫T w fcMTS,h dx = 0

∫T w (cSiC,h^t – cSiC,h^t') / Δt dx – ∫T w fcSiC,h dx = 0

where all notations hold their previously defined meanings (superscripts t and t' denote the current and previous time levels, and subscript h the finite element approximation).
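Equation 15 contains no spatial derivatives, so with a lumped (nodal) treatment of the right-hand side its discrete form reduces to a pointwise update in time. The following Python sketch illustrates that simplified update; the nodal arrays and the lumping are assumptions of the sketch, not the second-order FEM assembly actually used in this work.

```python
import numpy as np

def delta(phi):
    """delta(phi) = 1 where phi > -1, else 0 (as defined in section 4.3.1)."""
    return (phi > -1.0).astype(float)

def step_c_sic(c_sic, phi, r_sic, dt):
    """One backward-difference step of eq. 15 at the nodes:
    (c^t - c^t') / dt = (1 - phi)/2 * delta(phi) * r_SiC,
    with the right-hand side evaluated from the current iterate."""
    return c_sic + dt * 0.5 * (1.0 - phi) * delta(phi) * r_sic

# Assumed nodal arrays for illustration only.
phi = np.array([-1.0, 0.2, 1.0])
c_sic = np.zeros(3)
r_sic = np.array([0.0, 3.0e-4, 5.0e-4])
print(step_c_sic(c_sic, phi, r_sic, dt=1.0))
```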

4.5. Numerical simulation and results
The numerical data for the simulation are the same as proposed in [35].

At time t = 0 (from top to bottom: φ, cMTS, cSiC):


At time t = 11232 (from top to bottom: φ, cMTS, cSiC):


5. Error estimates in the finite element method
Since finite element shape functions are used to approximate the solution over each subspace T of the domain Ω, a domain discretization error arises. This is not the only source of error: integration errors, round-off errors, implementation errors, algorithmic errors, etc. may also need to be accounted for.

5.1. True error and error estimates
If the evaluated solution of equation (5) is uh and the exact solution is u, the error in the FEM analysis of equation (1) can be estimated by a suitable norm of the actual error

ε = u – uh    (16)

For example, we may set the estimated error

EΩ = || ε ||0,Ω

which is simply the zero norm (the standard L2 norm) of the error over the domain Ω. Similarly, we may evaluate the estimate of the error over each triangle,

ET = || ε ||0,T    (17)

Estimates of this kind, which accurately bound the error in the solution, are called a priori error estimators and, as seen in equation (17), require prior knowledge of the exact solution u. However, for most practical purposes, while solving complex equations, the exact solution is unavailable and hence cannot be used to evaluate the error in equation (17) ([36][37][38][39]). In such cases, we devise an a posteriori error estimator, which works by providing a suitable bound on the actual error represented by equation (17). This makes these estimators suitable for most industrial as well as research applications and an active field of development in the scientific world ([9][10][36][37][38]).

5.2. a posteriori error estimation
Starting with the pioneering work of Babuska [40], theories and methods of a posteriori error estimation have been developed extensively, as mentioned earlier. In particular, the h-version and hp-version of the finite element method require reliable and computable estimates of the error that depend, if possible, only on the approximate solution just computed. This is the basis for local mesh refinement/coarsening in the h-version. Hence, it is not only essential to have an estimate of the error; to assure optimal working of the adaptive procedure, the estimate must closely bound the actual error and not be a gross over-estimate [37]. Two primary types of error estimators have been devised in the scientific literature since the first attempts outlined in [40]: residual based error estimators ([9]) and the class of recovery based error estimators ([41]). Residual based error estimators work on the principle of local residual calculations,

as the name suggests, and have been found to be very effective. Recovery based error estimators, on the other hand, work on the principle of gradient recovery, or recovery of a 'better' solution, using mass lumping methods, averaged projections, or the patch recovery technique devised by Zienkiewicz and Zhu [42]. These methods are independent of the actual system of equations being solved (unlike residual based error estimators) and are still at a nascent stage of development, although very promising since they can potentially slash computational costs [39]. The basic algorithm for using a posteriori error estimators for h-adaptation is the same as outlined in section 2.

5.3. Hybrid error estimator for convection-diffusion-reaction systems of equations
Since equations 13-15 are CDR type PDEs, we outline here the development of an error estimator for this kind of equation, which has been studied less in the scientific literature. Furthermore, since the system is solved iteratively, we consider only a static system of equations and follow the work of Korotov [37]:

– ∇·(ϵ∇u) + b·∇u + cu = f    (18)

on a continuous domain Ω with Lipschitz boundary Γ = ∂Ω, where ϵ > 0, b ∈ W∞¹(Ω, R²) and c ∈ L∞(Ω). Here ϵ∇u denotes the element-wise product of the vectors ϵ and ∇u. The weak formulation of equation (18) then reads

a(u,w) = F(w),  where w ∈ H0¹(Ω)    (19)

Here, the bilinear form a(·,·) and the functional F(·) are given by:

a(v,w) = ∫Ω (ϵ∇v)·∇w dx + ∫Ω (b·∇v) w dx + ∫Ω c v w dx    (20)

F(w) = ∫Ω f w dx    (21)

It is well known that the solution u to equation (19) exists and is unique provided:

ć(x) := c(x) – {∇·b(x)}/2 ≥ 0    (22)

Let uh be the approximate solution to the problem obtained by the FEM analysis, which is thus defined over H0¹(Ω). We can then use definition (16) for the global error in the solution. We also define the global weighted energy norm as follows:

|||ε|||²λ,μ,Ω := λ ∫Ω |∇ε|² dx + μ ∫Ω ć ε² dx    (23)

Furthermore, it can be proved that

a(ε,ε) = ||ε||²1,Ω = |||ε|||²ϵ,1,Ω    (24)

It follows that evaluating a(ε,ε) provides a suitable starting point for the construction of the required a posteriori estimate. Thus,

a(ε,ε) = a(u–uh, u–uh) = ∫Ω {ϵ∇(u–uh)}·∇(u–uh) dx + ∫Ω {b·∇(u–uh)}(u–uh) dx + ∫Ω c(u–uh)² dx

Using a(u, u–uh) = F(u–uh), we have

a(ε,ε) = ∫Ω f(u–uh) dx – ∫Ω (ϵ∇uh)·∇(u–uh) dx – ∫Ω (b·∇uh)(u–uh) dx – ∫Ω c uh (u–uh) dx
       = ∫Ω (f – b·∇uh – c uh)(u–uh) dx – ∫Ω (ϵ∇uh)·∇(u–uh) dx

At this point, we introduce a parametric vector function y* ∈ H(div, Ω):

a(ε,ε) = ∫Ω (f – b·∇uh – c uh)(u–uh) dx – ∫Ω (ϵ∇uh – y*)·∇(u–uh) dx – ∫Ω y*·∇(u–uh) dx    (25)

Using Green's formula on the last term on the right-hand side of equation (25), we have

a(ε,ε) = ∫Ω (f – b·∇uh – c uh + ∇·y*)(u–uh) dx + ∫Ω (y* – ϵ∇uh)·∇(u–uh) dx    (26)

since the Neumann boundary conditions are assumed to be satisfied by the FEM solution. Now, another parametric function v ∈ H0¹(Ω) is introduced as follows:

a(ε,ε) = ∫Ω (f – b·∇uh – c uh + ∇·y* – b·∇v – c v)(u–uh) dx + ∫Ω (y* – ϵ∇uh + ϵ∇v)·∇(u–uh) dx + ∫Ω {(b·∇v + c v)(u–uh) – ϵ∇v·∇(u–uh)} dx    (27)

Equation (27) can now be broken up into three distinct integrals T1, T2 and T3, each corresponding to one term of the whole integral. Thus,

a(ε,ε) := T1 + T2 + T3

The global problem of estimating a bound for a(ε,ε) has thus been reduced to finding a corresponding bound for each of the three sub-integrals.

Bound for T1: Using |(a,b)| ≤ ||a|| ||b||, where (a,b) represents the standard inner product and ||·|| the standard L2 norm, we have

T1 ≤ ||f – b·∇uh – c uh + ∇·y* – b·∇v – c v||0,Ω ||u–uh||0,Ω    (28)

Using the Friedrichs-Poincaré inequality

||q||0,Ω ≤ CΩ ||∇q||0,Ω

where CΩ is a constant depending only on the size of the domain,

T1 ≤ CΩ ||f – b·∇uh – c uh + ∇·y* – b·∇v – c v||0,Ω ||∇(u–uh)||0,Ω    (29)

Similarly, we have for T2:

T2 ≤ ||y* – ϵ∇uh + ϵ∇v||0,Ω ||∇(u–uh)||0,Ω    (30)

Now, using the simple inequality (for any α > 0 and p, q ≥ 0)

p q ≤ (α/2) p² + q²/(2α)

Hence,

T1 + T2 ≤ (α/2) ||∇(u–uh)||²0,Ω + (CΩ ||f – b·∇uh – c uh + ∇·y* – b·∇v – c v||0,Ω + ||y* – ϵ∇uh + ϵ∇v||0,Ω)² / (2α)

Now, using the Euler inequality (for γ > 0 and p, q ≥ 0)

(p + q)² ≤ (1 + γ) p² + (1 + 1/γ) q²

we have

T1 + T2 ≤ (α/2) ||∇(u–uh)||²0,Ω + {(1 + 1/γ) C²Ω ||f – b·∇uh – c uh + ∇·y* – b·∇v – c v||²0,Ω + (1 + γ) ||y* – ϵ∇uh + ϵ∇v||²0,Ω} / (2α)    (31)

Estimated bound for T3:

T3 = ∫Ω {(b·∇v + c v)(u–uh) – ϵ∇v·∇(u–uh)} dx

Using a(u,v) = F(v), we have

T3 = ∫Ω {b·(∇v)(u–uh) + c v(u–uh) + ϵ∇v·∇uh – f v + (b·∇u) v + c u v} dx
   = ∫Ω {b·(∇v)(u–uh) + 2 c v(u–uh) + (b·∇(u–uh)) v} dx + ∫Ω {ϵ∇v·∇uh – f v + (b·∇uh) v + c v uh} dx
   = ∫Ω {b·∇(v(u–uh)) + 2 c v(u–uh)} dx + ∫Ω {ϵ∇v·∇uh – f v + (b·∇uh) v + c v uh} dx

Using Green's formula again on the first term,

T3 = ∫Ω {(–∇·b)(v(u–uh)) + 2 c v(u–uh)} dx + ∫Ω {ϵ∇v·∇uh – f v + (b·∇uh) v + c v uh} dx
   = ∫Ω {ϵ∇v·∇uh – f v + (b·∇uh) v + c v uh} dx + 2 ∫Ω ć (v(u–uh)) dx    (32)

Thus we can once more split it as follows:

T3 := T3,1 + T3,2

Here, T3,1 is independent of u and can be evaluated directly, while T3,2 cannot be used directly. Hence, the following inequality is applied to T3,2:

T3,2 ≤ (1/β) ||√ć (u–uh)||²0,Ω + β ||√ć v||²0,Ω    (33)

Thus using (24), (27), (31) and (33), we finally obtain:

(||ϵ|| – α/2) ||∇(u–uh)||²0,Ω + (1 – 1/β) ||√ć (u–uh)||²0,Ω ≤ {(1 + 1/γ) C²Ω ||f – b·∇uh – c uh + ∇·y* – b·∇v – c v||²0,Ω + (1 + γ) ||y* – ϵ∇uh + ϵ∇v||²0,Ω} / (2α) + ∫Ω {ϵ∇v·∇uh – f v + (b·∇uh) v + c v uh} dx + β ||√ć v||²0,Ω    (34)

Defining the right-hand side of equation (34) to be the required estimate ESTα,β,γ(uh, y*, v), we have

ESTα,β,γ(uh, y*, v) ≥ |||ε|||²(||ϵ|| – α/2), (1 – 1/β), Ω    (35)

which proves that the evaluated functional is indeed a bound for the actual error. It is further proved in [37] that this estimate is not grossly larger than the actual value but is, in fact, quasi-sharp.

5.4. Grid adaptation criteria based on error estimation
The global error estimate of eqn (34) can instead be calculated over each element, leading to an element-based estimate. Then,

EΩ = ∑T⊂τ ET

Thus, we evaluate ET for each element and form criteria based on max(ET) as follows:
• If ET > θR max(ET), mark the element for refinement.
• Else, if ET < θC max(ET), mark the element for coarsening.
Here θR and θC are user-defined thresholds for refinement and coarsening respectively, lying in (0,1].

5.5. Implementation issues
As can be seen in eqn (35), EST is a function of several unknowns (α, β, γ, y*, v). In this section we outline how each of these parameters is selected in order to make the estimate EST quasi-sharp.
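Before turning to those parameter choices, the marking rule of section 5.4 can be written out concretely. The following is a minimal sketch in Python; the array of element estimates and the threshold values are illustrative assumptions, not values used in the simulations.

```python
import numpy as np

def mark_elements(E_T, theta_R=0.5, theta_C=0.05):
    """Flag elements for refinement/coarsening from element-wise error estimates E_T.

    Follows the rule of section 5.4: refine if E_T > theta_R * max(E_T),
    coarsen if E_T < theta_C * max(E_T)."""
    E_max = np.max(E_T)
    refine = E_T > theta_R * E_max
    coarsen = E_T < theta_C * E_max
    return refine, coarsen

# Assumed example estimates for five elements.
E_T = np.array([0.01, 0.20, 0.90, 0.03, 0.55])
refine, coarsen = mark_elements(E_T)
print(refine)    # [False False  True False  True]
print(coarsen)   # [ True False False  True False]
```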

α, β: We can easily observe that for α = 0 and β → ∞, the right-hand side of eqn (35) reduces to the standard one-norm of the error as defined in eqn (24). Further analysis (as in [37]) proves that an upper bound of at most twice this error norm is obtained for α = ||ϵ|| and β = 1. Thus the values of α and β must lie within this range.

γ: Since γ is a free parameter, it can be chosen arbitrarily and acts as a weight on the gradient error term in equation (34) (the second term on the right-hand side). A high value of γ thus increases the weight of the gradient error and decreases the weight of the residual error (the first term on the right-hand side of eqn (34)). An optimally chosen value results in equal weighting of both error terms.

y* and v: It can easily be shown that for y* := ϵ∇u and v := u – uh, the error estimate reduces to the bounds:

|||ε|||²(||ϵ|| – α/2), (1 – 1/β), Ω ≤ ESTα,β,γ ≤ |||ε|||²{2(1+γ)||ϵ||²/α – ||ϵ||}, (β – 1), Ω    (36)

Since these values cannot be implemented in practice (they require the exact solution u), we use

y* := ϵ G(∇uh)  and  v := uh^(i+1) – uh^(i)

where G is a suitable gradient averaging operator ([39]), uh^(i+1) is the current iterative solution and uh^(i) is the previous iterative solution. During implementation, we use a recovery technique to evaluate ∇uh as follows:

∇uh = ∑j ∇Φj uj

for each element, using equation (3).

Then y* can be calculated at each node N as the G-average of (∇uh)N, the contributions to ∇uh at node N from all elements sharing a vertex at N. For this work we select

G(a,b) = {(sign(a) + sign(b))/2} max(|a+b|/2, 2|a|, 2|b|)

where sign(·) denotes the signum function.
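A minimal Python sketch of this averaging operator follows. G is defined above for a pair of values; combining more than two element contributions by a left-to-right reduction, and the example numbers, are assumptions of this sketch rather than a description of the original implementation.

```python
import numpy as np
from functools import reduce

def G(a, b):
    """Pairwise averaging operator from section 5.5:
    G(a,b) = ((sign(a) + sign(b)) / 2) * max(|a+b|/2, 2|a|, 2|b|)."""
    return 0.5 * (np.sign(a) + np.sign(b)) * max(abs(a + b) / 2.0, 2.0 * abs(a), 2.0 * abs(b))

def nodal_average(contributions):
    """Combine the element contributions (grad u_h)_N at a node by repeated
    application of G (an assumption of this sketch)."""
    return reduce(G, contributions)

# Assumed gradient contributions (one component) from elements sharing a node.
print(nodal_average([0.8, 1.1, 0.9]))
print(G(0.5, -0.5))   # opposite signs give zero weight, so the average vanishes
```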

5.6. Numerical implementation and results

At time t = 0:

Simulation of error in phase field parameter

Simulation of the adapted grid based on the error in the phase field parameter

Iteration 1:

Simulation of error in phase field parameter

Simulation of the adapted grid based on the error in the phase field parameter

Iteration 2:

Simulation of error in phase field parameter

Simulation of the adapted grid based on the error in the phase field parameter

Iteration 3:

Simulation of error in phase field parameter

Simulation of the adapted grid based on the error in the phase field parameter

At time t = 3600 (after a single iteration):

Simulation of error in phase field parameter

Simulation of the adapted grid based on the error in the phase field parameter

The simulation at time t = 0 shows the iterative convergence of the solution at the first time step. Convergence is found to be almost twice as fast as on the initial mesh. The efficiency of the adaptive grid scheme can also be seen from the close resemblance of the grid spacing to the error density. The simulation at time t = 3600 shows the solution after 3600 seconds. Note the evolution of the phase-field interface and the corresponding evolution of the error, which indicates that the error density is highest at the phase-field interface. Furthermore, the solution is found to converge in a single iterative step.

6. Conclusions
The present work demonstrates the numerous merits of the adaptive grid generation technique when applied to the phase-field modeling of the ICVI process of SiC reinforcement of SiC matrices. As a concluding remark, it is noted that future work in this direction should be aimed both at an improved approach towards the phase-field model itself and at more complex and generalized, possibly 3D, solution domains, where the evolution of the pores can be clearly visualized. Work may also be directed towards an additional adaptive time-stepping procedure, which would further improve the computational speed of the simulations.

7.1. Acknowledgment
The author is sincerely grateful for the financial support of the Institute for Solid Mechanics, University of Karlsruhe, and for the kind guidance of Prof. Dr.-Ing. Eckart Schnack and Ms. Fengwen Wang throughout the duration of the work.

7.2. References
[1] Jose Castillo, Erik M. Pedersen, 'Solution adaptive direct variational grids for fluid flow calculations', Journal of Computational and Applied Mathematics 67 (1996) 343-370
[2] A. Jahangirian, Y. Shoraka, 'Adaptive unstructured grid generation for engineering computation of aerodynamic flows', Mathematics and Computers in Simulation 78 (2008) 627-644
[3] Bruno Baccus, Dominique Collard, and Emmanuel Dubois, 'Adaptive Mesh Refinement for Multilayer Process Simulation Using the Finite Element Method', IEEE Transactions on Computer-Aided Design, Vol. 11, No. 3, March 1992
[4] Murat Guven, Birsen Yazici, Kiwoon Kwon, Eldar Giladi and Xavier Intes, 'Effect of discretization error and adaptive mesh generation in diffuse optical absorption imaging: II', Inverse Problems 23 (2007) 1135-1160
[5] J.P. Suárez, A. Plaza, 'Four-triangles adaptive algorithms for RTIN terrain meshes', Mathematical and Computer Modelling 49 (2009) 1012-1020
[6] Sutthisak Phongthanapanich, Pramote Dechaumphai, 'Adaptive Delaunay triangulation with object-oriented programming for crack propagation analysis', Finite Elements in Analysis and Design 40 (2004) 1753-1771


[7] Ronald H.W. Hoppe, Barbara Wohlmuth, 'Multilevel iterative solution and adaptive mesh refinement for mixed finite element discretizations', Applied Numerical Mathematics 23 (1997) 97-117
[8] Hua Ji, Fue-Sang Lien, Eugene Yee, 'A robust and efficient hybrid cut-cell/ghost-cell method with adaptive mesh refinement for moving boundaries on irregular domains', Comput. Methods Appl. Mech. Engrg. 198 (2008) 432-448
[9] Karel Segeth, 'A review of some a posteriori error estimates for adaptive finite element methods', Mathematics and Computers in Simulation xxx (2009) xxx-xxx
[10] R. Verfurth, 'A posteriori error estimation and adaptive mesh-refinement techniques', Journal of Computational and Applied Mathematics 50 (1994) 67-83
[11] A. Bowyer, 'Computing Dirichlet Tessellations', The Computer Journal, Vol. 24, No. 2, 1981
[12] Maria-Cecilia Rivara and Gabriel Iribarren, 'The 4-triangles longest-side partition of triangles and linear refinement algorithms', Mathematics of Computation, Volume 65, Number 216, October 1996
[13] Miguel A. Padrón, José P. Suárez, Ángel Plaza, 'Refinement based on longest-edge and self-similar four-triangle partitions', Mathematics and Computers in Simulation 75 (2007) 251-262
[14] Jim Ruppert, 'A Delaunay Refinement algorithm for quality 2-dimensional mesh generation', Journal of Algorithms 18, 548-585, 1995
[15] A. Plaza, R. Montenegro & L. Ferragut, 'An improved derefinement algorithm of nested meshes', Advances in Engineering Software (1996) 51-57
[16] Svetozar Margenov, Nikola Kosturski, 'MIC(0) preconditioning of 3D FEM problems on unstructured grids: Conforming and non-conforming elements', Journal of Computational and Applied Mathematics 226 (2009) 288-297
[17] David Cohen-Steiner, Éric Colin de Verdière, Mariette Yvinec, 'Conforming Delaunay triangulations in 3D', Computational Geometry 28 (2004) 217-233
[18] B. V. Saunders, 'A Boundary Conforming Grid Generation System for Interface Tracking', Computers Math. Applic. Vol. 29, No. 10, pp. 1-17, 1995
[19] A. Plaza, R. Montenegro & L. Ferragut, 'An improved derefinement algorithm of nested meshes', Advances in Engineering Software (1996) 51-57
[20] Maria-Cecilia Rivara, 'Lepp-bisection algorithms, applications and mathematical properties', Applied Numerical Mathematics 59 (2009) 2218-2235
[21] Gary L. Miller, Dafna Talmor, Shang-Hua Teng, 'Optimal Coarsening of Unstructured Meshes', Journal of Algorithms 31 (1999) 29-65
[22] Matthias Möller, 'Hierarchical grid adaptation for hybrid meshes', 5th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2008), June 30 - July 5, 2008
[23] T. Coupez, H. Digonnet, R. Ducloux, 'Parallel meshing and remeshing', Applied Mathematical Modelling 25 (2000) 153-175


[24] Ángel Plaza, Alberto Márquez, Auxiliadora Moreno-González, José P. Suárez, 'Local refinement based on the 7-triangle longest-edge partition', Mathematics and Computers in Simulation 79 (2009) 2444-2457
[25] Jonathan Richard Shewchuk, 'Delaunay refinement algorithms for triangular mesh generation', Computational Geometry 22 (2002) 21-74
[26] I. Babuška and A. K. Aziz, 'On the angle condition in the finite element method', SIAM J. Numer. Anal., Vol. 13, No. 2, April 1976
[27] A. Plaza, R. Montenegro, L. Ferragut, 'An improved derefinement algorithm of nested meshes', Advances in Engineering Software (1996) 51-57
[28] José P. Suárez, Ángel Plaza, Graham F. Carey, 'The propagation problem in longest-edge refinement', Finite Elements in Analysis and Design 42 (2005) 130-151
[29] J. U. Brackbill, 'A method to suppress the Finite-Grid instability in plasma simulations', Journal of Computational Physics 114, 77-84, (1994)
[30] Rashid Mahmood, Peter K. Jimack, 'Locally optimal unstructured finite element meshes in 3 dimensions', Computers and Structures 82 (2004) 2105-2116
[31] Kevin Burrage, 'Efficiently Implementable Algebraically Stable Runge-Kutta Methods', SIAM Journal on Numerical Analysis, Vol. 19, No. 2 (Apr., 1982), pp. 245-258
[32] J.E. Akin, M. Singh, 'Object-oriented Fortran 90 P-adaptive finite element method', Advances in Engineering Software 33 (2002) 461-468
[33] Masaaki Yokoyama, Yasuhiro Umehara, 'A p-adaptive three dimensional boundary element method for elastostatic problems using quasi-Lagrange interpolation', Advances in Engineering Software 34 (2003) 577-585
[34] F. Tin-Loi, N.S. Ngo, 'Performance of a p-adaptive finite element method for shakedown analysis', International Journal of Mechanical Sciences 49 (2007) 1166-1178
[35] E. Schnack, F.W. Wang, A.J. Li, 'Phase-field model for the chemical vapor infiltration of Silicon Carbide', (under review)
[36] I. Babuška, Miloslav Feistauer, Pavel Solin, 'On one approach to a posteriori error estimation for evolution problems solved by the method of lines', Numerische Mathematik, 2001, vol. 89, no. 2, pp. 225-256
[37] Sergey Korotov, 'Global a posteriori error estimates for convection-reaction-diffusion problems', Applied Mathematical Modelling 32 (2008) 1579-1586
[38] Biyue Liu, 'An error analysis of a finite element method for a system of nonlinear advection-diffusion-reaction equations', Applied Numerical Mathematics 59 (2009) 1947-1959
[39] M. Moller and D. Kuzmin, 'Adaptive mesh refinement for high-resolution finite element schemes', International Journal for Numerical Methods in Fluids 2006; 52:545-569
[40] I. Babuska, I. Lee, C. Schwab, 'On the a posteriori estimation of the modeling error for the heat conduction in a plate and its use for adaptive hierarchical modeling', Applied Numerical Mathematics 14 (1994) 5-21


[41] Stéphane Bordas, Marc Duflot, 'Derivative recovery and a posteriori error estimate for extended finite elements', Comput. Methods Appl. Mech. Engrg. 196 (2007) 3381-3399
[42] O.C. Zienkiewicz, B. Boroomand, J.Z. Zhu, 'Recovery procedures in error estimation and adaptivity Part I: Adaptivity in linear problems', Comput. Methods Appl. Mech. Engrg. 176 (1999) 111-125
