BIODEGRADABLE POLYMERIC DRUG DELIVERY: PARALLEL SIMULATION AND OPTIMAL DRUG RELEASE PROFILES

BY ASHLEE NICOLE FORD B.S., University of Oklahoma, Norman, 2005

THESIS Submitted in partial fulfillment of the requirements for the degree of Master of Science in Chemical Engineering in the Graduate College of the University of Illinois at Urbana-Champaign, 2009

Urbana, Illinois

Adviser: Professor Richard D. Braatz

To My Family


Acknowledgments

I would like to thank my advisor Professor Richard Braatz, all of my labmates, particularly Nicholas Kee, Paul Arendt, and Masako Kishida, and my collaborators Professor Daniel Pack and Kalena Stovall for their guidance. I also am thankful for the assistance of Jim Davenport, Yolanda Small, J. J. Sestrich, Eugene Frankfurt, Shaun Joy, and Professor Laxmikant Kale. I am extremely grateful for the loving support of my dear friends and family who have encouraged me throughout the research and thesis writing process. This work was made possible through support of the Department of Energy Computational Science Graduate Fellowship provided under grant number DE-FG02-97ER25308.


Table of Contents

Chapter 1  Introduction

Chapter 2  Analysis of Optimal Drug Release from Biodegradable Polymer Microsphere Arrays
    2.1  Introduction
    2.2  Minimization Problem
    2.3  Results
    2.4  Conclusions

Chapter 3  Parallel Simulation of Drug Release from Biodegradable Polymer Matrices
    3.1  Introduction
    3.2  Model and Parallel Algorithm
    3.3  Results and Discussion
    3.4  Conclusions

Chapter 4  Conclusions and Future Work
    4.1  Summary
    4.2  Future Work

Appendix A  HAP Crystal Growth in Polycarbonate Membrane Pores
    A.1  Introduction
    A.2  Experimental Methods
    A.3  Results and Conclusions

Appendix B  Molecular Dynamics Simulations of Benzene Binding to Glucose
    B.1  Introduction
    B.2  Molecular Dynamics
    B.3  Model and Simulation Details
    B.4  Discussion and Conclusions

References


Chapter 1

Introduction

Controlled-release drug delivery systems are being developed as alternatives to conventional medical drug therapy regimens for pharmaceuticals that require frequent administration. The drugs of particular interest are deactivated quickly, owing to short in vivo half-lives, and are eliminated from the body before the active agent can be completely utilized. Traditional drug regimens include oral, topical, and injection dosage forms. Figure 1.1 illustrates the differences between concentration profiles for controlled-release and traditional drug delivery systems. For traditional drug delivery, the concentration peaks shortly after the dose is administered. To prevent peaks at toxic concentrations, traditional therapies must use sufficiently low dosages. The concentration diminishes with time as the drug is used by or expelled from the body. The drug must be re-administered to maintain the concentration in the therapeutic regime and to minimize the time elapsed at low, ineffective concentrations. With controlled-release drug delivery, the drug concentration can be sustained at desired levels within the therapeutic regime for extended periods of time. Controlled-release systems have the potential to provide better control of drug concentrations, reduce side effects caused by concentration extremes and repeated administrations, and improve patient compliance as compared to conventional regimens [1–3]. Despite these advantages, the implementation of controlled-release drug delivery devices for human patients has been gradual because the design of controlled-release devices depends heavily on trial-and-error experiments due to


Figure 1.1: Typical release profiles for controlled-release and traditional drug delivery systems. (The plot shows drug concentration in blood plasma versus time, with toxic, therapeutic, and ineffective concentration regimes marked; the traditional delivery system curve peaks after each repeated dosage, while the controlled-release curve remains within the therapeutic regime.)

incomplete understanding of the mechanisms that regulate the release processes. Computational modeling can help develop this understanding and can be used to predict drug release behavior for a wide range of conditions. Such predictive capability would be a useful tool for designing microspheres that achieve desired drug release profiles. Poly(lactic-co-glycolic acid) (PLGA) microspheres have been extensively studied for controlled-release drug delivery, mainly because the polymer is biodegradable and bioabsorbable and because the microsphere size range provides release rates appropriate for many drug compounds. These qualities allow for the passive degradation of the polymer in aqueous environments such as living tissues and for the harmless incorporation of degradation products into the surrounding


medium [4]. Many experimental studies have been conducted to characterize the PLGA microsphere degradation and erosion processes and the release of drug molecules from the microspheres [5–21]. Several processes contribute to the overall rate of drug release from PLGA microspheres, including chemical degradation of the polymer by the autocatalytic ester hydrolysis reaction, polymer erosion, pore structure evolution as a result of mass erosion, and drug release by diffusion through the polymer matrix and the aqueous pore structure. In this thesis, the term degradation refers to the process during which the polymer chains are hydrolyzed to form oligomers and monomers. The term erosion refers to the loss of mass due to diffusion of small oligomers and monomers out of the polymer matrix. These definitions are the same as those given by Göpferich [4] and have been widely adopted in the literature. Figure 1.2 shows a schematic for the overall drug release process from microspheres made of bulk-eroding polymers such as PLGA. The drug molecules are initially distributed throughout the polymer matrix in a manner dependent on the microparticle fabrication technique. Before the drug release is initiated, water from bodily fluids in vivo or from the buffer medium in vitro must hydrate the polymer matrix of the microspheres. The hydration is a relatively fast process compared with the timescales for polymer degradation and erosion: on the order of a few minutes compared to weeks or months [22]. The water in the polymer matrix can hydrolyze the polymer chains to break them into smaller fragments. Small PLGA oligomers are capable of diffusing out of the bulk, leaving void volumes in the polymer due to the oligomer mass loss. The void volumes can be connected as pores. The drug is transported through the polymer matrix and through pores due to a concentration gradient.
The drug diffuses more readily in aqueous pores than in the polymer matrix, so the effective drug diffusivity increases as the pore network develops due to the hydrolysis of the polymer. The hydrolytic polymer degradation, the pore network development, and the drug transport are illustrated in Steps 3 and

Figure 1.2: Drug release process for bulk-eroding biodegradable polymer microspheres. (1) A polymer microsphere initially contains some drug molecules. (2) Water moves into the microsphere. (3) Pores form in the microsphere as the polymer chemically reacts with the water. (4) Drug molecules travel out of the polymer through the pores.

4 in Figure 1.2 and occur in concert for the duration of the drug release process. There also can be an initial burst effect in which a large percentage of the drug is released within the first day. This effect has been reported for some formulations of PLGA microspheres; however, as this process can be diminished or eliminated by adjusting the fabrication technique [23], it is not considered in this work. Due to the relatively short time required for hydration, the erosion for PLGA and similar linear polyester materials is referred to as “bulk erosion,” which is characterized by the reaction and diffusion processes occurring throughout the microspheres rather than progressing as a front from the surface inward. It has been observed that the microsphere bulk also can erode in a heterogeneous fashion, from

the particle center outward, depending on the particle size; larger microspheres have been shown to experience faster erosion in their centers than smaller microspheres [17–20, 24]. The cause of the heterogeneous mass loss is generally attributed to the autocatalytic hydrolysis reaction by which the polymer chemically degrades [20, 21, 24, 25]. The hydrolysis reaction is catalyzed by acids, including the carboxylic acid end groups of the polymer chains and protons present in the buffer medium or in bodily fluids. The hydrolysis reaction proceeds by cleavage of the ester bonds of the PLGA polymer chains. At the onset of degradation, all particle sizes hydrolyze PLGA at similar rates while generating acidic byproducts. Hydrolysis eventually leads to erosion when sufficiently small water-soluble oligomer fragments from degraded PLGA are transported away from the reaction site. If the diffusion process controlled the drug release without influence by polymer degradation, larger microspheres would be expected to have smaller relative release rates than smaller particles, as the diffusion pathways would be longer and the concentration gradients would be smaller. Contrary to this intuitive diffusion-controlled behavior, polymer degradation does influence the drug release rates for different sized particles. In domains close to the external surfaces of microspheres, the diffusion lengths are sufficiently small for the acidic oligomer hydrolysis byproducts to diffuse out of the particles; in smaller microspheres, the entire volume can have short diffusion lengths. Acidic polymer fragments which remain in particles have hindered mobility in regions farther from the external surfaces where transport is limited by greater diffusion lengths. This leads to an accumulation of acidic degradation byproducts in the interior of larger microspheres, which results in a change in the microenvironment pH.
The acidic end groups further catalyze the hydrolysis reaction, leading to accelerated degradation, particularly in the interior of large microspheres due to the limited acid transport out of the center. Over time, the autocatalytic effect becomes more pronounced, and microspheres can form heterogeneous, hollow interiors [25]. Small

microspheres without long diffusion lengths are less susceptible to acidic buildup and heterogeneous degradation. Experimentalists have reported evidence of local pH drop due to accumulation of the acidic byproducts of the polymer hydrolysis and have detected degradation rates which increase with polymer particle size as a result of autocatalysis [6, 11–15, 20, 24, 26, 27]. The autocatalytic effects compete with the diffusion effects for control of the drug release rates [6]. Controlled-release drug concentration profiles can be determined through experiments for certain conditions or can be modeled to predict release behavior for a wider set of design parameters. One problem associated with controlled-release drug delivery is determining the optimal release profile desired for a particular application. Once the desired release profile is known, the microsphere formulation conditions can be modified to achieve the drug concentration profile. Chapter 2 of this thesis contains the derivation for the optimal release profile needed by living cells for tissue regeneration when the polymer microspheres are arranged in a linear array. Modeling controlled-release drug delivery is important for investigating large ranges of conditions that are not accessible experimentally and for predicting the conditions that would be profitable to explore without wasting laboratory materials. An accurate computational model could augment the understanding of the release mechanisms and serve as an aid for planning experiments, designing drug delivery systems to be tested, and manufacturing pharmaceutical products for actual patient use [28, 29]. It is highly desirable that computational models be applicable to a large number of chemical species in the system, include multiple physics simultaneously, and have fine spatial and temporal resolutions to simulate the system accurately. 
The development of a parallel algorithm for implementing a simple model for autocatalytic degradation and drug release from PLGA matrices is described in Chapter 3. This algorithm could be applied readily to more sophisticated models, allowing faster code execution than sequential algorithms. Faster code execution

would be important for applying the model to explore large ranges of particle design conditions in order to obtain the optimal conditions for desired release profiles. The final chapter in this thesis gives conclusions and highlights future work for modeling controlled-release drug delivery from biodegradable polymers. The appendices include reports on other projects tangential to this work that were completed during the course of the MS thesis project.


Chapter 2

Analysis of Optimal Drug Release from Biodegradable Polymer Microsphere Arrays

2.1 Introduction

The mathematical analysis presented in this chapter was performed by Ashlee N. Ford in Fall 2006. The framework and equations were shared with Masako Kishida as an introduction to the physical problem of controlled-release drug delivery for biological tissue regeneration, and she expanded on these results in her own research. Controlled-release drug delivery using polymer microspheres as described in this thesis has been used to deliver different types of small molecules and proteins both in vitro and in vivo over extended periods of time [30–33]. It is hypothesized that proteins such as growth factors could be released in a prescribed manner from a three-dimensional array of polymer microspheres in order to maintain the concentration needed by living cells to regenerate human tissue. An example application is the release of growth factor to ameloblast cells so that human tooth enamel crystals could grow in a controlled manner, provided the environment accurately simulates the natural conditions of tooth formation. The analysis presented here provides an expression for the optimal release of growth factor from a collection of polymer microspheres to provide linear uptake at the cell-medium interface.


2.2 Minimization Problem

In this analysis, the total concentration of growth factor released from a one-dimensional polymer microsphere array is investigated. Future work should consider the details of the arrangement of the particles needed to produce the optimal overall concentration profile in three dimensions. The objective function for the optimization of growth factor release and subsequent uptake is

    \min_{C(0,t)} \int_0^\infty \left[ J_{des}(t) - k C(1,t) \right]^2 dt    (2.1)

subject to the partial differential equation for one-dimensional diffusion of growth factor between the microsphere array and the cells separated by unit length,

    \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2}, \quad 0 < x < 1, \; t > 0    (2.2)

with the initial condition

    C(x,0) = 0    (2.3)

and boundary conditions

    C(0,t) = C_0    (2.4)

and

    k C(1,t) + D \left. \frac{\partial C}{\partial x} \right|_{x=1} = 0    (2.5)

where C(x,t) is the concentration of growth factor between the source polymer microsphere array and the target cells, D is the diffusivity of growth factor through the medium, k is the mass transfer coefficient at the interface between the medium and the cells, J_{des}(t) is the biological rate of uptake of growth factor by the cells, and C_0 is the constant growth factor concentration released from the microsphere array.


2.3 Results

Solution to the Partial Differential Equation

Homogeneous boundary conditions simplify the separation of variables technique for solving partial differential equations. The boundary condition at x = 0, Equation 2.4, is nonhomogeneous. To homogenize the boundary conditions, let C(x,t) \equiv C_I(x) + C_{II}(x,t), where C_I is the steady-state solution and C_{II} is the time-variant solution. Because C_I is the steady-state value, \partial C_I / \partial t = 0. For C_I the partial differential equation reduces to the ordinary differential equation

    \frac{d^2 C_I}{dx^2} = 0, \quad 0 < x < 1    (2.6)

with boundary conditions

    C_I(0) = C_0    (2.7)

and

    k C_I(1) + D \left. \frac{d C_I}{dx} \right|_{x=1} = 0.    (2.8)

The general solution to the differential equation for C_I is

    C_I(x) = A x + B    (2.9)

where A and B are constants determined by the boundary conditions. At x = 0,

    C_I = C_0 = A \cdot 0 + B.    (2.10)

Therefore, B = C_0. At x = 1,

    \frac{d C_I}{dx} = A.    (2.11)

Substituting the expressions in terms of A and B into Equation 2.8 gives

    k(A(1) + B) + D A = 0.    (2.12)

Solving for A gives

    A = \frac{-k C_0}{k + D}.    (2.13)

With the values of A and B inserted into Equation 2.9, the solution to the ordinary differential equation in Equation 2.6 becomes

    C_I(x) = C_0 - \frac{k C_0}{k + D} x.    (2.14)

Substituting the combination of steady-state C_I from Equation 2.14 and time-variant C_{II} terms for C(x,t) into Equation 2.2 yields

    \frac{\partial C}{\partial t} = \frac{\partial C_I}{\partial t} + \frac{\partial C_{II}}{\partial t} = \frac{\partial C_{II}}{\partial t}    (2.15)

for the left-hand side and

    D \frac{\partial^2 C}{\partial x^2} = D \left( \frac{\partial^2 C_I}{\partial x^2} + \frac{\partial^2 C_{II}}{\partial x^2} \right) = D \frac{\partial^2 C_{II}}{\partial x^2}    (2.16)

for the right-hand side. Putting Equations 2.15 and 2.16 together with the initial condition and the boundary conditions in Equations 2.3-2.5 gives

    \frac{\partial C_{II}}{\partial t} = D \frac{\partial^2 C_{II}}{\partial x^2}, \quad 0 < x < 1, \; t > 0    (2.17)

with initial condition

    0 = C(x,0) = C_I(x) + C_{II}(x,0) \implies C_{II}(x,0) = -C_I(x)    (2.18)

and boundary conditions

    C(0,t) = C_0 = C_I(0) + C_{II}(0,t) = C_0 + C_{II}(0,t) \implies C_{II}(0,t) = 0    (2.19)

and

    0 = k C(1,t) + D \left. \frac{\partial C}{\partial x} \right|_{x=1} = k C_I(1) + D \left. \frac{d C_I}{dx} \right|_{x=1} + k C_{II}(1,t) + D \left. \frac{\partial C_{II}}{\partial x} \right|_{x=1} \implies k C_{II}(1,t) + D \left. \frac{\partial C_{II}}{\partial x} \right|_{x=1} = 0.    (2.20)

The partial differential equation for C_{II}(x,t) in Equations 2.17-2.20 is a homogeneous boundary value problem that can be solved using separation of variables. Substituting C_{II}(x,t) = \phi(x) G(t) into Equation 2.17 gives the separable equation

    G' \phi = D G \phi''.    (2.21)

Separating variables and setting the result equal to a constant yields

    \frac{G'}{D G} = \frac{\phi''}{\phi} = -\alpha^2.    (2.22)

The Sturm-Liouville problem in the x-direction is

    \phi'' + \alpha^2 \phi = 0    (2.23)

with boundary conditions

    \phi(0) = 0    (2.24)

and

    \left. \frac{d\phi}{dx} \right|_{x=1} + \frac{k}{D} \phi(1) = 0.    (2.25)

The solution is of the form

    \phi(x) = A \cos(\alpha x) + B \sin(\alpha x).    (2.26)

Applying the boundary condition at x = 0,

    \phi(0) = 0 = A + 0.    (2.27)

Therefore, A = 0. At x = 1,

    \left. \frac{d\phi}{dx} \right|_{x=1} = \alpha B \cos\alpha.    (2.28)

Substituting the expressions in terms of B and \alpha into Equation 2.25 gives

    \alpha B \cos\alpha + \frac{k}{D} B \sin\alpha = B \left( \alpha \cos\alpha + \frac{k}{D} \sin\alpha \right) = 0.    (2.29)

Equation 2.29 holds for all values of \alpha such that

    \tan\alpha = \frac{-\alpha D}{k}.    (2.30)

The differential equation in time is

    G' = -\alpha_n^2 D G    (2.31)

where \alpha_n are the roots of Equation 2.30. The solution to the differential equation is of the form

    G(t) = g \exp(-\alpha_n^2 D t)    (2.32)

where g is a constant. By the principle of superposition,

    C_{II}(x,t) = \sum_{n=1}^{\infty} B_n \sin(\alpha_n x) \exp(-\alpha_n^2 D t).    (2.33)

Applying the initial condition from Equation 2.18 gives

    C_{II}(x,0) = -C_I(x) = \frac{k C_0}{k + D} x - C_0 = \sum_{n=1}^{\infty} B_n \sin(\alpha_n x).    (2.34)

By orthogonality, the coefficients B_n can be expressed as

    B_n = \frac{\int_0^1 C_0 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx}{\int_0^1 \sin^2(\alpha_n x) \, dx}.    (2.35)

Finally, combining both terms for C(x,t) from Equation 2.14 and Equation 2.33 gives

    C(x,t) = C_I(x) + C_{II}(x,t) = C_0 - \frac{k C_0}{k + D} x + \sum_{n=1}^{\infty} B_n \sin(\alpha_n x) \exp(-\alpha_n^2 D t)    (2.36)
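The eigenvalue condition in Equation 2.30 and the series solution in Equation 2.36 are straightforward to evaluate numerically. The sketch below is not part of the original analysis; the parameter values D, k, and C_0 are assumed purely for illustration. It brackets one root of tan α = −αD/k in each interval ((n − 1/2)π, nπ) and evaluates the coefficient integrals of Equation 2.35 in closed form:

```python
import numpy as np
from scipy.optimize import brentq

D, k, C0 = 1.0, 1.0, 1.0   # illustrative values (assumed, not from the thesis)
N_TERMS = 200              # truncation of the infinite series

def char(a):
    # k*sin(a) + D*a*cos(a) = 0 is Eq. 2.30, tan(a) = -a*D/k, cleared of the tangent
    return k * np.sin(a) + D * a * np.cos(a)

# one root alpha_n lies in each interval ((n - 1/2)*pi, n*pi)
alphas = np.array([brentq(char, (n - 0.5) * np.pi + 1e-9, n * np.pi - 1e-9)
                   for n in range(1, N_TERMS + 1)])

def B_n(a):
    # Eq. 2.35 with both integrals evaluated analytically
    c = C0 * k / (k + D)   # slope of C0*(k*x/(k+D) - 1)
    num = c * (np.sin(a) / a**2 - np.cos(a) / a) - C0 * (1.0 - np.cos(a)) / a
    den = 0.5 - np.sin(2.0 * a) / (4.0 * a)
    return num / den

Bs = np.array([B_n(a) for a in alphas])

def C(x, t):
    # Eq. 2.36: steady-state profile plus decaying transient series
    transient = np.sum(Bs * np.sin(alphas * x) * np.exp(-alphas**2 * D * t))
    return C0 - k * C0 * x / (k + D) + transient
```

At t = 0 the truncated series approximately recovers the zero initial condition in the interior of the domain, and as t → ∞ the transient decays so that the interface concentration approaches the steady-state value C(1, ∞) = C_0 D/(k + D).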

where \alpha_n are the roots of Equation 2.30 and the B_n coefficients are given by Equation 2.35.

Solution to Minimization Problem

Recall Equation 2.1,

    \min_{C(0,t)} \int_0^\infty \left[ J_{des}(t) - k C(1,t) \right]^2 dt

where J_{des}(t) is the desired uptake concentration profile at the cell-medium interface located at x = 1 and C(x,t) is the actual concentration profile with the boundary condition C(0,t) = C_0 from the source. Evaluating the concentration profile in Equation 2.36 at x = 1,

    C(1,t) = C_0 - \frac{k C_0}{k + D} + 2 \sum_{n=1}^{\infty} \sin\alpha_n \exp(-\alpha_n^2 D t) \int_0^1 \left( \frac{k C_0 x}{k+D} - C_0 \right) \sin(\alpha_n x) \, dx.    (2.37)

The integral in Equation 2.1 is minimized by setting its derivative with respect to C_0 equal to zero:

    \frac{\partial}{\partial C_0} \int_0^\infty \left[ J_{des}(t) - k C(1,t) \right]^2 dt = 0    (2.38)

    \int_0^\infty \frac{\partial}{\partial C_0} \left[ J_{des}(t) - k C(1,t) \right]^2 dt = 0    (2.39)

    \int_0^\infty 2 \left[ J_{des}(t) - k C(1,t) \right] \frac{\partial}{\partial C_0} \left[ J_{des}(t) - k C(1,t) \right] dt = 0    (2.40)

    \int_0^\infty \left[ J_{des}(t) - k C(1,t) \right] \left( \frac{\partial J_{des}(t)}{\partial C_0} - k \frac{\partial C(1,t)}{\partial C_0} \right) dt = 0.    (2.41)

Differentiating C(1,t) with respect to C_0 gives

    \frac{\partial C(1,t)}{\partial C_0} = 1 - \frac{k}{k+D} + 2 \sum_{n=1}^{\infty} \sin\alpha_n \exp(-\alpha_n^2 D t) \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx.    (2.42)

By comparing Equation 2.42 to Equation 2.37, the expression can be simplified to

    \frac{\partial C(1,t)}{\partial C_0} = \frac{C(1,t)}{C_0}.    (2.43)

Substituting Equation 2.43 into Equation 2.41 yields

    \int_0^\infty \left[ J_{des}(t) - k C(1,t) \right] \left( \frac{\partial J_{des}(t)}{\partial C_0} - k \frac{C(1,t)}{C_0} \right) dt = 0.    (2.44)

The desired concentration profile at the cell-medium interface, J_{des}(t), must be specified before the optimization can proceed. Here consider a J_{des}(t) that depends linearly on the growth factor concentration at the cell-medium interface; other forms for J_{des}(t) can be considered in the future. Let J_{des}(t) = A C(1,t) + B where A and B are constants. If A = k, the mass transfer coefficient, then the minimization problem objective function, Equation 2.1, becomes

    \min_{C_0} \int_0^\infty B^2 \, dt.    (2.45)

The minimum value is zero, which corresponds to B = 0; this matches the intuitive reasoning that there should be no dependence on the source concentration when the desired uptake rate is identical to the actual uptake rate at the cell-medium interface. For A \neq k, Equation 2.44 is used to determine the optimal C_0 value. With the linear form of J_{des}(t), the differentiated term of J_{des}(t) in Equation 2.44 becomes

    \frac{\partial J_{des}}{\partial C_0} = A \frac{\partial C(1,t)}{\partial C_0} = \frac{A C(1,t)}{C_0}.    (2.46)

Substituting this and the linear expression for J_{des}(t) into Equation 2.44 yields

    \int_0^\infty \left[ A C(1,t) + B - k C(1,t) \right] \left( \frac{A C(1,t)}{C_0} - k \frac{C(1,t)}{C_0} \right) dt = 0    (2.47)

which is equivalent to

    B (A - k) \int_0^\infty \frac{C(1,t)}{C_0} \, dt + C_0 (A - k)^2 \int_0^\infty \left( \frac{C(1,t)}{C_0} \right)^2 dt = 0.    (2.48)

Solving for C_0 outside the integrals gives

    C_0 = \frac{-B \int_0^\infty C(1,t)/C_0 \, dt}{(A - k) \int_0^\infty \left( C(1,t)/C_0 \right)^2 dt}.    (2.49)

The integral in the numerator of Equation 2.49 can be expanded as

    \int_0^\infty \frac{C(1,t)}{C_0} \, dt = \int_0^\infty \left[ 1 - \frac{k}{k+D} + 2 \sum_{n=1}^{\infty} \sin\alpha_n \exp(-\alpha_n^2 D t) \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx \right] dt.    (2.50)

Evaluating the integral in time gives

    \int_0^\infty \frac{C(1,t)}{C_0} \, dt = \left[ \left( 1 - \frac{k}{k+D} \right) t \right]_0^\infty + 2 \sum_{n=1}^{\infty} \left[ \frac{-\sin\alpha_n \exp(-\alpha_n^2 D t)}{\alpha_n^2 D} \right]_0^\infty \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx    (2.51)

which is equivalent to

    \int_0^\infty \frac{C(1,t)}{C_0} \, dt = \lim_{t \to \infty} \left( 1 - \frac{k}{k+D} \right) t + \sum_{n=1}^{\infty} \frac{2 \sin\alpha_n}{\alpha_n^2 D} \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx.    (2.52)

The integral in the denominator of Equation 2.49 can be expanded as

    \int_0^\infty \left( \frac{C(1,t)}{C_0} \right)^2 dt = \int_0^\infty \Bigg[ \left( 1 - \frac{k}{k+D} \right)^2 + \left[ 2 \sum_{n=1}^{\infty} \sin\alpha_n \exp(-\alpha_n^2 D t) \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx \right]^2 + 4 \left( 1 - \frac{k}{k+D} \right) \sum_{n=1}^{\infty} \sin\alpha_n \exp(-\alpha_n^2 D t) \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx \Bigg] dt.    (2.53)

Evaluating the time integral term by term (the cross terms of the squared series integrate to bounded values and do not affect the limit taken below) gives

    \int_0^\infty \left( \frac{C(1,t)}{C_0} \right)^2 dt = \left[ \left( 1 - \frac{k}{k+D} \right)^2 t \right]_0^\infty + 4 \sum_{n=1}^{\infty} \sin^2\alpha_n \left[ \frac{-\exp(-2\alpha_n^2 D t)}{2 \alpha_n^2 D} \right]_0^\infty \left[ \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx \right]^2 + 4 \left( 1 - \frac{k}{k+D} \right) \sum_{n=1}^{\infty} \sin\alpha_n \left[ \frac{-\exp(-\alpha_n^2 D t)}{\alpha_n^2 D} \right]_0^\infty \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx    (2.54)

which is equivalent to

    \int_0^\infty \left( \frac{C(1,t)}{C_0} \right)^2 dt = \lim_{t \to \infty} \left( 1 - \frac{k}{k+D} \right)^2 t + \sum_{n=1}^{\infty} \frac{2 \sin^2\alpha_n}{\alpha_n^2 D} \left[ \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx \right]^2 + 4 \left( 1 - \frac{k}{k+D} \right) \sum_{n=1}^{\infty} \frac{\sin\alpha_n}{\alpha_n^2 D} \int_0^1 \left( \frac{kx}{k+D} - 1 \right) \sin(\alpha_n x) \, dx.    (2.55)

Combining Equations 2.52 and 2.55 to evaluate C_0 with Equation 2.49 yields the indeterminate form \infty/\infty. Using l'Hospital's rule, C_0 can be expressed as the ratio of the derivatives of the numerator and the denominator with respect to t,

    C_0 = \lim_{t \to \infty} \frac{-B \left( 1 - \frac{k}{k+D} \right)}{(A - k) \left( 1 - \frac{k}{k+D} \right)^2} = \frac{-B}{(A - k) \left( 1 - \frac{k}{k+D} \right)}.    (2.56)

Equation 2.56 gives the optimal source concentration, C_0, for A \neq k and constant values of the diffusivity D and the mass transfer coefficient k, that minimizes the difference between the desired linear uptake rate, J_{des}(t) = A C(1,t) + B, and the actual uptake rate at the cell-medium interface, k C(1,t).
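Numerically, Equation 2.56 reduces to a one-line computation. A minimal sketch with assumed illustrative values (these are not parameters from the thesis):

```python
# Optimal source concentration from Eq. 2.56, evaluated for assumed values.
D, k = 1.0, 1.0        # diffusivity and mass transfer coefficient (assumed)
A, B = 0.25, 0.5       # desired linear uptake law J_des(t) = A*C(1,t) + B, A != k

C0_opt = -B / ((A - k) * (1.0 - k / (k + D)))
print(C0_opt)   # 4/3: a positive source concentration for this choice of A and B
```

Note that 1 − k/(k + D) = D/(k + D), so the optimum can equivalently be written as C_0 = −B(k + D)/[(A − k)D]; a positive source concentration requires B and (A − k) to have opposite signs.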


2.4 Conclusions

In this chapter, the total concentration of growth factor released from a one-dimensional polymer microsphere array was optimized to best approximate a desired linear uptake rate for target living cells with constant values for the diffusivity and the mass transfer coefficient. Extensions of this work should consider the details of the arrangement of the particles needed to produce the optimal overall concentration profile in three dimensions; additionally, a variety of desired uptake rates should be explored.


Chapter 3

Parallel Simulation of Drug Release from Biodegradable Polymer Matrices

3.1 Introduction

This chapter describes parallelization efforts for a simplified model of biodegradable polymer degradation and drug release. This work was completed by Ashlee N. Ford, Eugene Frankfurt, Shaun Joy, and J. J. Sestrich as the final project for the Spring 2008 CS 420 Parallel Programming course at UIUC taught by Professor Laxmikant V. Kale; Ashlee N. Ford contributed the model and collaborated with the others on the parallel algorithm and implementation. The project was presented as a poster by Ashlee N. Ford at the 2008 Scientific Discoveries in Advanced Computing Conference. A sequential model for autocatalytic PLGA degradation and drug release is described in the following section. A parallel algorithm is detailed in this chapter for the drug release model. Parallelizing the code allows many more design conditions to be explored in a reasonable time frame than would be possible with only a sequential algorithm for the model. For a system size of 260 chemical species, 10 spatial points, and 18,000 time iterations, the computation of the sequential version of the model required 1.2 MB of storage and took 875 seconds to complete with an explicit algorithm. Other numerical methods for solving the system of partial differential equations have been considered, including using the method of lines to reduce the problem to a system of ordinary differential equations in time that could be solved by Runge-Kutta 2nd and 3rd order or 4th and 5th order methods. Although the number of time iterations was reduced by the relaxed time step size

restriction, the MATLAB solvers for these methods had storage requirements that exceeded the virtual memory of the single processor machine on which the code was run (Dell Intel Xeon 3.6 GHz CPU with 3 GB of RAM). Parallel algorithms for solving the diffusion equation and other linear parabolic partial differential equations by explicit and implicit finite difference methods or some combination of the methods have been described in the literature [34, 35]. Other research has focused on numerical solutions for parabolic partial differential equations that include nonlinear nondifferential terms that arise in reacting chemical systems [36]. The parallel implementation used in this project combines aspects of the parallel explicit finite difference schemes and solutions for the nonlinear equations along with a Jacobi relaxation algorithm for solving the system of partial differential equations for the drug delivery simulation model. The simulation model described in this chapter is a simplistic model for the autocatalytic polymer degradation reaction and release of dispersed drug molecules from cubic PLGA matrices. The objective of the model is to describe size-dependent behavior observed experimentally. Recently, other researchers have suggested that autocatalytic polymer degradation is the main mechanism by which the diffusive drug release is accelerated, and this process depends strongly on particle size [12, 13, 19]. Simultaneously modeling the mathematics of diffusion and autocatalytic chemical reactions and coupling the two phenomena through degradation-dependent effective drug diffusivity rather than independently modeling the processes in a purely sequential non-coupled manner is an important step towards accurately describing the actual overall release process.


3.2 Model and Parallel Algorithm

Model

Diffusion of chemical species through a rectangular region is a fundamental transport problem modeled by the Fickian diffusion equation. The simulation model considers the simultaneous diffusion of multiple chemical species that interact through reactions. The polymer chains in the PLGA matrices degrade via the autocatalytic hydrolysis reaction

    P_n + W + H^+ \rightarrow P_r + P_{n-r} + H^+    (3.1)

where P_n is a polymer chain of length n, H^+ is the acid catalyst, W is water, N is the maximum degree of polymerization of the polymer, 2 \le n \le N, and 1 \le r < n. The only source of H^+ is assumed to be the carboxylic acid end group on each polymer chain, COOH. The dispersed pharmaceutical molecules are transported out of the polymer matrix by diffusion with an effective diffusivity that increases as the degradation reaction proceeds. The simulation model expresses this functionally by increasing the drug effective diffusivity exponentially as the average molecular weight of the polymer decreases. This diffusivity dependence is

    D_{drug} = D_{drug,0} \exp\left( \alpha t \frac{M_{n,0}}{M_n} \right) = D_{drug,0} \exp\left( \alpha t \frac{\sum_{i=1}^{N} i M c_{P_i,0}}{\sum_{i=1}^{N} i M c_{P_i}} \right)    (3.2)

where D_{drug} is the diffusivity of the drug, D_{drug,0} is the initial diffusivity of the drug, \alpha is a parameter for the dependence of the diffusivity on the polymer molecular weight change, t is time, M_n is the number-average molecular weight of the polymer, M_{n,0} is the initial polymer molecular weight, M is the molecular weight of the monomer, c_{P_i} is the concentration of P_i, and c_{P_i,0} is the initial concentration of P_i.
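The exponential coupling in Equation 3.2 can be evaluated directly once the molecular weight ratio is known. A minimal sketch with assumed illustrative numbers (none of these are fitted values from the thesis):

```python
import numpy as np

# Eq. 3.2 for an assumed degradation state: the effective drug diffusivity
# grows exponentially as the molecular weight ratio M_n,0 / M_n rises.
D_drug0 = 1e-12              # initial drug diffusivity (assumed)
alpha = 0.05                 # molecular-weight sensitivity parameter (assumed)
t = 10.0                     # elapsed time (assumed)
Mn0, Mn = 50000.0, 25000.0   # initial and current number-average MW (assumed)

D_drug = D_drug0 * np.exp(alpha * t * Mn0 / Mn)
print(D_drug)   # an e-fold increase over D_drug0, since alpha*t*Mn0/Mn = 1
```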


A system of partial differential equations for three-dimensional reaction and diffusion in Cartesian coordinates describes the transport of the species throughout the domain as functions of position and time. Equation 3.3 shows the general form of each equation in the system, with c_j being the concentration of the jth species (P_i for 1 \le i \le N, COOH, and drug), D_j the diffusivity of the jth species, and g_j(\vec{c}) the net generation of the jth species from the hydrolysis reaction, which can depend on all of the other species present:

    \frac{\partial c_j}{\partial t} = \nabla \cdot (D_j \nabla c_j) + g_j(\vec{c}).    (3.3)

The drug is assumed to be inert, so g_{drug} = 0. The generation terms for the COOH and P_i species are derived from material balances on the chemical reaction given by Equation 3.1. The reaction term for P_i is shown in Equation 3.4 and that for COOH is shown in Equation 3.5, where k is the kinetic rate constant for the reactions:

    g_{P_i} = -(i-1) k c_{COOH} c_{P_i} + 2 k c_{COOH} \sum_{n=i+1}^{N} c_{P_n}    (3.4)

    g_{COOH} = k c_{COOH} \sum_{i=1}^{N} (i-1) c_{P_i}.    (3.5)

Algorithm

The problem lends itself particularly well to parallelization since the computation of spatial discretization points can be distributed among the various parallel nodes. The main consideration is determining the best way to divide the points in the three-dimensional space among the computational nodes in order to maximize performance and scalability. The amount of time a node spends on computation must be balanced with the amount of time that node spends communicating with other processors. This same balance has to be attainable when utilizing a large number of processors with a minimum problem size.

The Jacobi relaxation algorithm serves as a useful template for parallelizing this model since it involves the same spatial decomposition concerns. Because the computation at each point requires data from the neighboring points and the number of points is allowed to be much greater than the number of processors, groups of adjacent points are kept on the same processors. The block decomposition version of the Jacobi relaxation scheme is used here because it has the lowest isoefficiency among the alternative decompositions. Decomposition by blocks is also the most intuitive way to conceptualize the problem: since individual points communicate with their adjacent neighbors, it is natural to extend this behavior to groups of points.

The decomposition divides the space into a hierarchy of units. The unit at the lowest level of the hierarchy is called a point, and calculations are carried out on the points. Each node on the machine contains a region or collection of regions, with each region containing an identical number of points. This setup is illustrated in Figure 3.1. Virtual processors are shown rather than physical processors because CHARM++, the parallel language used in this project, is an object-oriented language that assigns work to virtual processors that can be located on any of the physical processors in order to balance the processor workloads. Initially, all processor nodes contain the same number of region objects, but regions can be moved between nodes for load balancing and for cases of node failure. Calculations involving points whose neighbors all lie within their own region require no communication between processors; however, the calculations for points located on the edges of regions need values from neighboring regions. Each region is responsible for communicating the values of the points located on its boundaries to the adjacent regions.
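The block decomposition can be illustrated with a small Python sketch; these helpers are hypothetical, not the CHARM++ implementation, and assume the grid side length is divisible by the number of regions per side. The first function maps a global point index to its owning region, and the second lists the face-adjacent regions with which a region exchanges border data:

```python
def region_of(x, y, z, points_per_side, regions_per_side):
    """Map a global point index (x, y, z) to the cubic region that owns it.

    The n x n x n grid is split into r x r x r equal cubic regions
    (block decomposition); assumes points_per_side is divisible by
    regions_per_side.
    """
    b = points_per_side // regions_per_side  # points per region side
    return (x // b, y // b, z // b)

def region_neighbors(r, regions_per_side):
    """Face-adjacent regions that region r must exchange border data with:
    six for an interior region, fewer for regions on the global boundaries.
    """
    x, y, z = r
    out = []
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        n = (x + dx, y + dy, z + dz)
        if all(0 <= c < regions_per_side for c in n):
            out.append(n)
    return out
```

For the twenty-seven-region example discussed later, a corner region has three neighbors, an edge region four or five, and the single interior region the full six.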
All regions are programmed similarly with only minor differences for those regions located on the global boundaries. Each region sends its border data for the current time step to its neighboring regions and waits to receive its neighbors' values for the same time

Figure 3.1: Schematic of regions and points as conceptualized for the algorithm: virtual processors contain regions, regions exchange boundary values, and local calculations are performed on the points within each region.

step. Once a region receives the border data from all of its neighboring regions, it proceeds with calculations, and the cycle repeats for subsequent time steps. Each point has coordinates x, y, and z; molecular weight; diffusivities for drug, polymer, and carboxylic acid; an array of species concentrations; and other physical parameters. At each point and each time step, the molecular weight, the drug diffusivity, and the generation term for each species are calculated using the array of species concentrations at that point. To update the species concentrations at a particular time step, the six nearest neighbor points' concentration values are used for the three-dimensional finite difference scheme. For points in the interior of a region, this involves no communication, as the necessary data is available on the same processor. The border values communicated from neighboring regions are required for the finite difference calculations for points on the boundaries of regions. A significant percentage of the calculations for a region are performed for the majority of the points without requiring values communicated by neighboring


regions; this allows the communication time and the computation time to overlap: the communication controller handles the transmission of data while the math unit computes the updated local values. Because sending and receiving of data are done asynchronously in CHARM++, it is possible for regions to get one time step ahead of neighboring regions. To handle this, each region is capable of buffering data for its current time step and the next time step. The following example for two adjacent regions, called regions 1 and 2, illustrates this point. Consider the case where region 1 sends its data, receives the necessary data, and begins the calculations for the current time step. Meanwhile, region 2 finishes the calculation for the current time step before region 1, which causes it to send its border data to region 1 before region 1 needs the data for the next time step. Region 1 buffers this data until it finishes its current calculation and is able to move on to the next time step. According to the algorithm, region 2 is unable to proceed until region 1 also moves on to the next time step and communicates its border data to region 2; therefore, a buffer holding one time step's worth of data is sufficient. This method of buffering reduces communication latency by staggering the communications between processors along the databus, which lowers communication congestion and contention between nodes for access to the databus. Nodes accomplish more of their tasks before waiting to receive data, at the cost of a higher memory footprint.

The algorithm chosen for this work prevents significant load imbalances from developing. All nodes are initialized with the same number of regions. Most regions receive nearly uniform amounts of computational work because the number of points in each region is uniform and each point not located on the global boundaries requires the same amount of computation.
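The one-step-ahead buffering of border data described above can be sketched as a small bookkeeping structure. This Python analogue of the CHARM++ message handling is hypothetical and only illustrates the logic:

```python
class BorderBuffer:
    """Buffers border data from neighbors for the current step and the next.

    Because sends are asynchronous, a neighbor may deliver data for step t+1
    before this region finishes step t; one extra step of buffering absorbs
    that, since no neighbor can ever run more than one step ahead.
    """

    def __init__(self, neighbors):
        self.neighbors = set(neighbors)
        self.pending = {}  # step -> {neighbor: border data}

    def receive(self, step, neighbor, data):
        """Store a neighbor's border data for the given time step."""
        self.pending.setdefault(step, {})[neighbor] = data

    def ready(self, step):
        """True once every neighbor's border data for `step` has arrived."""
        return set(self.pending.get(step, {})) == self.neighbors

    def take(self, step):
        """Consume and return all border data for a completed step."""
        return self.pending.pop(step)
```

A region would call `ready` before starting its update for a step, buffering any early arrivals for the following step in the meantime.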
Those regions containing points on the global boundaries perform fewer calculations as the concentration values on the global boundaries are held constant and no update calculations are performed for

those points. This slight difference in the amount of computation for each region should not result in a significant imbalance of work between the regions with points located along the borders of the global domain and the regions with all points in the interior of the domain. Redistribution of the regions to different nodes in order to balance the workload per node is minimized, so few processing cycles and databus transactions are used to gather processor load data and to migrate regions between nodes using the built-in load balancer in CHARM++. Another function of the automatic load balancer is to reduce communication latency by moving virtual objects that communicate with each other frequently close to one another. Each region needs to communicate with six or fewer other regions depending on its position in relation to the global boundaries. The best scenario is to have neighboring regions located on the same node or on nodes in close proximity to one another. A parallel machine with a three-dimensional mesh topology would be ideal for this application since it would mirror the spatial decomposition of the problem. The assignment of neighboring regions to nearby processor nodes is performed during the initialization period rather than dynamically throughout the runtime like a typical load balancer, since the communication patterns are known a priori. This mitigates the potential costs of dynamic load balancing for communication efficiency. A concern for parallelizing this problem is to determine the optimal number of points to allocate to each region for different numbers of regions.
Having a large concentration of points in each region increases the calculation workload for each region, which translates to more computations being done locally before communication with other regions is required; but it also increases the number of points on the borders of regions, leading to larger amounts of data being sent to and received from each region in each time step. A smaller number of points per region lowers the communication overhead but might underutilize the processor nodes, thereby reducing the overall effective speedup of

the parallel program. It is important to balance these effects carefully when choosing the number of points per region, since poor choices can create bottlenecks and cause the program to run significantly below its potential performance level. In the ideal case for the entire range of numbers of nodes, each region would maximally utilize its node while keeping the communication time a small fraction of the computation time. For any interior region, the ratio of computation to communication as a function of the number of points per region side, P, is P³/(6P²) = P/6. There is also a physical limit to the resolution of the spatial discretization, since it is not useful for the drug release application to decrease the discretization size much beyond the relevant length scales. The inability to decrease the discretization size indefinitely to increase the number of points in the domain limits the scalability of the program to very large machines, because the minimum discretization size forces regions to contain fewer than the optimal number of points when the program is run on sufficiently large numbers of nodes.
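Assuming a cubic interior region with P points per side performs P³ point updates per step while exchanging 6P² border values across its six faces, the computation-to-communication ratio grows linearly in P and can be computed directly (the function name is illustrative):

```python
def comp_to_comm_ratio(p):
    """Computation-to-communication ratio for an interior cubic region with
    p points per side: p**3 point updates per time step versus 6 * p**2
    border values exchanged across the six faces, giving a ratio of p/6.
    """
    return p**3 / (6 * p**2)
```

Doubling the region side doubles this ratio, which is why larger regions amortize communication better, at the cost of more data crossing each face per step.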

3.3 Results and Discussion

The system of partial differential equations modeling drug release was solved numerically using the explicit finite difference method with uniform initial concentrations of one for all species and constant concentrations of zero for all species on the boundary surfaces of the three-dimensional polymer matrix. The concentrations at each spatial discretization point depended on the neighboring points for the diffusion term and on the other species concentrations at the same point for the reaction term. The drug effective diffusivity at each point depended on the average molecular weight over all the polymer chain lengths at that point. The explicit finite difference scheme was preferred for this application since the amount of computation per iteration was much less than for the implicit scheme, which would have required the solution of a global

nonlinear algebraic system of equations at each time step. Each side length of the cubic polymer matrix measured 100 µm. A spatial refinement of one million increments reached the continuity limit for the polymer chain concentrations, as the monomer length was approximately 5×10−4 µm. For numerical stability of the explicit finite difference method, the time step had to be less than or equal to half of the square of the spatial discretization size. The algorithm operated most efficiently when it was run on cubic numbers of processors; each processor was able to receive an identical number of the regions of the three-dimensional domain, which resulted in an even distribution of computational work. Having similar workloads minimized the amount of time processors needed to wait on one another for their border data and resulted in good performance. Performance was maximized when each processor had regions on the global boundaries as well as regions entirely in the interior of the domain, since this made the interprocessor wait times uniform and helped reduce the computation differences between global boundary regions and interior regions.

The algorithm performed mostly as expected. The processing load was spread fairly evenly among the processors, with nodes generally deviating from the average load by at most 10% in runs on small numbers of nodes and even less in runs on large numbers of nodes. The largest load differences were exhibited in a relatively small subset of nodes due to the slight load imbalance caused primarily by the difference in computation time between the regions on the edges and corners of the global domain and the interior regions. Since the edge and corner regions had less to compute per time step due to the constant concentration boundary conditions, they finished each time step quickly and remained idle while they waited on the interior regions to finish in order to share updated neighboring data.
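A minimal sketch of the explicit finite-difference update used in the simulation, assuming unit diffusivity and fixed (Dirichlet) boundary values, and omitting the reaction and variable-diffusivity terms for brevity (the function name and array layout are illustrative assumptions):

```python
import numpy as np

def explicit_diffusion_step(c, dx, dt, D=1.0):
    """One explicit finite-difference step of dc/dt = D * laplacian(c) on a
    3-D grid with fixed boundary values, as in the simulation described in
    the text. The full model adds a reaction term g_j and a molecular-
    weight-dependent diffusivity at each point."""
    # seven-point stencil: six nearest neighbors minus six times the center
    lap = (
        c[2:, 1:-1, 1:-1] + c[:-2, 1:-1, 1:-1]
        + c[1:-1, 2:, 1:-1] + c[1:-1, :-2, 1:-1]
        + c[1:-1, 1:-1, 2:] + c[1:-1, 1:-1, :-2]
        - 6.0 * c[1:-1, 1:-1, 1:-1]
    ) / dx**2
    c_new = c.copy()
    c_new[1:-1, 1:-1, 1:-1] += dt * D * lap  # boundaries stay fixed
    return c_new
```

Only interior points are updated; the boundary planes keep their prescribed zero concentrations, matching the constant boundary conditions stated above.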
The load imbalances due to relative domain positioning were especially evident when the program was run with twenty-seven regions where there were eight

corner regions with three planes on the global boundaries, twelve edge regions with two planes on the global boundaries, six edge regions with one plane each on the global boundaries, and one interior region with no intersection with the global boundaries. Figures 3.2 and 3.3 summarize the utilization of the processors for this example with twenty-seven regions, and the figures show clear differences between the processor utilization of the different subgroups. In these figures, the lighter the color displayed, the more heavily the processor was being used at that point; the rows contain data for individual processors, and the horizontal axis is the progression of iterations. A corner region, such as the one on the bottom row of Figure 3.2, had a much larger percentage of dark coloration indicating a significant amount of idle time, which makes sense because the corner region often had to wait on its neighboring regions with fewer global boundary planes to finish computing their concentration updates for each time step. The regions toward the middle of the domain have very little dark shading and almost no black in the figure since they were the bottlenecks to the computation. The middle row of the figure represents the only completely interior region in this case, and it was utilized fully at almost all time iterations. Figure 3.4 displays the average processor usage summary: the average processor utilization level was about 88%, the interior region used roughly 98% of its processor (processor 13), the edge regions used between 85 and 95%, and the corner regions used between 75 and 85% of the maximum processor usage. This usage distribution held true for cubic numbers of processor nodes of twenty-seven or larger with one region per virtual processor. When the program was run with cubic numbers of processors and one region per virtual processor, there was roughly a 7% scaling penalty for each increase in the number of processors.
For example, running the program on sixty-four processors was 7% slower than running the program on twenty-seven processors with the same number of points per region. This decrease in performance was most likely due to the increased communication costs when using more nodes. The larger

Figure 3.2: Processor utilization summary for simulation using 27 processor nodes. Horizontal rows represent data for individual processors, and the horizontal axis is the progression of iterations. Lighter colors represent higher CPU usage, and darker colors represent lower CPU usage.

Figure 3.3: Magnified view of a brief interval of the processor utilization summary for simulation using 27 processor nodes.


Figure 3.4: Average processor usage summary for simulation using 27 processor nodes.

Figure 3.5: Processor utilization summary for simulation using 8 processor nodes with 8 regions and 3 × 3 points per region.


the number of nodes used, the greater the average distance between nodes, which increased the time to communicate with the other nodes. This scalability penalty likely could be reduced if the regions were mapped to nodes in a topologically aware manner that matched the algorithm's virtual region topology. The scalability of the program was good overall because the program was characterized by a large number of computations per data element transmitted for any non-trivial number of points per region. This was demonstrated by the heavy processor utilization for extended periods between the short idle periods shown in Figure 3.2. Even when the number of points was reduced to 3 × 3 × 3 per region, the processors remained very busy, as shown in Figure 3.5. Since the problem size grew at a faster rate than the communication overhead, the performance of the algorithm was limited by datasets at the low end of the problem size spectrum rather than at the high end, given the goal of maintaining the optimal point concentration per region.

A key issue with the scalability of this algorithm was its large memory footprint. Numerous test runs with high densities of points and regions on each processor failed due to a lack of available heap memory, with the malloc memory allocation function unable to complete successfully. Therefore, sufficiently large problem sizes will require a certain minimum number of nodes in order to handle the memory requirements. The maximum memory limitation was reached when using fairly large concentrations of data at nodes, but this limit could be avoided for larger system sizes if the program used the available memory more carefully. The decision to buffer extra data at each of the regions for performance gains increased memory requirements more than predicted.

A special simulation case involved eight processors with one region per processor.
In this case, no region was entirely interior, since dividing a cube evenly into eight smaller cubes leaves no interior regions. This eight-processor case yielded a more even work distribution than other cases since the computation times of the regions were essentially uniform,

unlike in cases with larger cubic numbers of regions. In fact, the average processor usage for this case jumped 10%, from 85% on runs with more than eight nodes to 95% on runs with eight nodes, for regions with 100 × 100 × 100 points. This case shows that processor utilization can be increased with a smaller number of processors; however, the memory limitations for communication come into effect when the number of points is large. Also, having few processors with many points can diminish the speedup of the parallel algorithm, since each processor must perform many calculations, as in the case of running the sequential algorithm on a single processor. The algorithm developed in this chapter was intended for massive parallelization with a large number of processors, and code execution speedup is a higher priority than maximizing the utilization of individual processors. Therefore, it is recommended to use more than eight processors for running the program and to explore the upper limit of the number of processors that can be used while maintaining efficient performance.

Using the built-in dynamic load balancer in CHARM++ when the program was run with cubic numbers of processors and one region per processor actually hindered the performance of the simulation. With the load balancer turned on, the program ran about 5% slower on average than without it for small numbers of processors. For large numbers of processors, the load balancer became even more of an impediment: when run on 100 processors, the load balancer caused a 65% slowdown. The potential benefits of running the load balancer did not justify the cost of its overhead, since the workload was fairly evenly distributed by the computational algorithm and communication latency was minimized a priori by distributing neighboring regions onto the same or adjacent nodes.
Even though the processor utilization summaries did reflect more evenly balanced loads after using the load balancer, the performance slowdown required to obtain this balance was too great, especially since balancing the workload was a lower priority than speedup and parallel efficiency.

3.4 Conclusions

The parallel program implemented in this project shows promise for simulating drug release from polymer matrices in parallel. Its performance is much improved over the sequential MATLAB application for the model used here. The parallel algorithm could be modified to consider a more sophisticated model and could be used to increase the efficiency of simulations of that model. The algorithm could also be modified to consider spherical polymer particle geometry.


Chapter 4

Conclusions and Future Work

4.1 Summary

This thesis investigates the derivation of the polymer microsphere drug release profile required to match the cellular uptake rate for a tissue regeneration application and the development of a parallel algorithm for simulating drug release from a cubic matrix of biodegradable polymer. The present work can be extended to determine the optimal drug release rate for other applications with different uptake rates or geometries and can be used to parallelize more sophisticated computational models of polymer degradation and drug release.

4.2 Future Work

Many mathematical models have been developed to predict degradation, erosion, and drug release from PLGA microparticles [28, 29, 37–45]. Researchers have proposed drug diffusion and dissolution models based on Fickian diffusion and solubility effects [20, 46], microstructural models involving degradation leading to erosion in pore networks [28, 47, 48], and polymer degradation models which consider the kinetics of random and chain-end scission mechanisms or intermediate combinations of the two mechanisms for PLGA in solution [28, 49]. Often the models focus on only one of the processes involved in PLGA drug delivery or treat the processes independently rather than in a coupled manner [28, 37, 40], and the few models that have included autocatalytic effects have not addressed all of the

processes involved in the overall drug release mechanism for the PLGA microsphere system [38, 39, 41–43]. The discrepancy between theoretical predictions made from existing models considering uncatalyzed depolymerization kinetics and actual drug release experimental data has been attributed to the failure to adequately model autocatalysis [20]. The drug delivery processes interact in a complicated way, making it critical to model the effects in a coupled manner rather than independently in order to produce models that are predictive, not correlative [4]. Correlative models that fit the specific data used to verify them are useful only for future studies with physical conditions similar to those of the experiment on which the correlation is based. In contrast, predictive mechanistic models can apply generally to multiple data sets and can be used to model behavior under varied physical conditions. Despite the size dependence of autocatalytic PLGA degradation and the observed spatial variations in pH, there is no published model that tracks the hydrogen ion concentration as a function of space and time for determination of the intraparticle pH while also modeling degradation kinetics, polymer molecular weight distribution variation, and drug transport with varying diffusivity, all of which influence drug release and polymer degradation throughout the polymer microspheres. The mathematics of autocatalytic hydrolysis kinetics have been described for other autocatalytic reactions in solution [50, 51] but have not yet been applied to the hydrolytic degradation of solid PLGA microspheres [41].
The hypothesis of the proposed work is that simultaneously modeling the mathematics of diffusion, autocatalytic chemical reactions, chemical equilibria, and pore formation (the phenomena that contribute to the degradation of polymer particles and the release of drug molecules), rather than modeling any of these phenomena independently in a purely sequential manner, would accurately describe the actual overall release process. Such a mechanistic model would use kinetic expressions derived from molar balances on each reacting species to account for the

autocatalytic PLGA hydrolysis. The concentrations of protons from the release medium and from the autocatalytic end groups, the distribution of polymer chains of different molecular weights, and the concentration of drug molecules would be tracked as functions of time and radial position in the microspheres using Fickian diffusion with variable effective drug diffusivity which depends on the extent of the erosion of the microspheres. The proposed microsphere drug release model would include complete kinetics for polymer degradation in PLGA microparticles with autocatalytic effects and fully coupled reaction-diffusion equations for all species. The model equations would consist of a system of nonlinear partial differential equations to be solved numerically for sufficiently small computational time steps for an extended period of time in order to capture an entire release profile. Such a model should be thoroughly validated with experimental data. The numerical algorithm for performing the drug delivery simulations could be parallelized and other robust numerical algorithms could be explored. The high resolution simulation of the coupling between reaction and diffusion should capture important dynamical phenomena observed in experiments that cannot be modeled with the models in the literature that have simpler numerical solutions but do not take the coupling or the autocatalysis into account. The inclusion of the spatial variation of autocatalytic effects would be a unique contribution. The mechanistic model being developed for PLGA microspheres could be extended to apply to core-shell microparticles and microcapsules, which are important options for encapsulating drugs for delivery in a multi-stage pulsatile release fashion or for protecting proteins by suspension in an aqueous core for time-delayed delivery after polymer microcapsule degradation to prevent protein deactivation due to interactions with the polymer. 
The predictions for the core-shell microparticles and microcapsules would be compared to experimental results. The model would determine the pH as a function of position within the microparticle,

which could be used to design microparticles that limit the microenvironment pH within the particles to ranges in which the released molecule is stable. Further extensions of this work could aid in the development of a rigorous mechanistic model that could simulate many possible polymer microparticle fabrication designs to predict drug release profiles for a variety of formulation variables in order to optimally design biodegradable polymeric microparticles for controlled release. Changing the design variables of core diameter and shell thickness along with the distribution of molecular weights and pore sizes would enable the design of microparticles to produce a large spectrum of obtainable release profiles. These profiles would include zeroth-order release and pulsatile release with a range of shapes for the individual pulses. The model could be used to compute an optimal distribution of microparticles, which can be relevant when restrictions are placed on the microenvironment pH, particle drug loading, or particle size. Determining the optimum fabrication design for obtaining a desired drug release profile would be important for experimentalists and manufacturers making microparticles for use in patients.


Appendix A

HAP Crystal Growth in Polycarbonate Membrane Pores

A.1 Introduction

The experimental study presented in this appendix was performed by Ashlee N. Ford in Fall 2006. The proof-of-concept findings were shared with Nicholas C. S. Kee and Paul D. Arendt, who expanded on these results in their own research. Crystals of calcium hydroxyapatite, Ca5(PO4)3OH (HAP), occur naturally in human teeth, forming a three-dimensional structure similar to a basket-weave pattern. It is hypothesized that a synthetic crystal structure can be formed to mimic the human teeth crystal structure by growing crystals in the pores of a three-dimensional polymer. It may be possible to form specific pore geometries by selectively dissolving one of the components of a block copolymer that forms a desired three-dimensional microstructure. As a proof of concept, experiments were conducted to determine whether HAP crystals can form inside the pores of a polymer membrane film. The work was based on experiments by Loste et al. [52] in which calcium carbonate crystals formed in 0.2-10 µm diameter pores of track-etch polycarbonate membranes.

A.2 Experimental Methods

Track-etch polycarbonate membranes (Whatman Filtration Products) 10 µm thick with 3 µm diameter pores were used. The pore size was selected for use in this experimental study because it was shown to be the optimal size for single crystal

Figure A.1: HAP crystal growth experimental apparatus.

calcium carbonate formation in pores [52]. Solutions of Ca2+ and HPO4^2− were prepared from calcium chloride dihydrate (CaCl2·2H2O) and potassium dihydrogen phosphate (KH2PO4) solids (Sigma-Aldrich), respectively. The apparatus shown in Figure A.1 was used as the experimental vessel. The apparatus consisted of two 105° glass adapter bends joined together. A track-etch membrane was attached and sealed to the inner part of one of the adapter bends with household adhesive before the adapter bends were joined and wrapped with Teflon tape. The membrane was oriented perpendicular to the surface on which the apparatus rested. Five milliliters of 0.2 M Ca2+ was added to one side of the apparatus, and an equal volume of 0.1 M HPO4^2− was added to the other side. These concentrations were chosen to approach the solubility of HAP in water. The liquid volumes in the apparatus were allowed to equilibrate for one hour. Then 0.5 mL of 1 M Ca2+ and


Figure A.2: Optical microscope image of HAP crystals on polycarbonate membrane: (a) 20X zoom, (b) 40X zoom. Small black dots are pores, and larger objects in the foreground are crystals.

0.5 mL of 1 M HPO4^2− were added simultaneously via pipette less than 1 cm from their respective sides of the membrane in the apparatus. Fifteen minutes later, the process was repeated. After an additional 15 minutes, the process was repeated again. Thirty minutes after the final addition, the liquid was drained from the apparatus, and the membrane was removed. The membrane was allowed to air dry for a few minutes before it was analyzed under an optical microscope. This experiment was repeated to demonstrate reproducibility.

A.3 Results and Conclusions

Crystals were observed on the surface of each membrane after removal from the apparatus. Figure A.2 shows some of the images captured with an optical microscope with magnification up to 40X. From the images, it was not possible to determine if crystals formed in the pores or merely on the surface of the membrane. Analysis with SEM was postponed until further experiments were completed by other experimentalists. It was concluded that the 3 µm diameter pores are too small to detect crystallization with 40X zoom using an optical microscope. It is recommended to use 10 µm pore diameter membranes for further experiments. Ideally, these pores


will be sufficiently large to visualize the interior of the pores with the optical microscope. An alternative technique has been proposed that could eliminate the effects of concentration gradients in the apparatus used in this experiment. The membrane could be dipped in a supersaturated solution of Ca2+ and HPO4^2− and then dried to remove solvent. It is suggested to use 0.5 M Ca2+ and 0.3 M HPO4^2− solutions to obtain supersaturation conditions with the stoichiometric ratios sufficient for seed crystals to form in the pores of the membrane with this technique. These were the concentrations used by Spanos et al. [53] to form HAP seed crystals. The hypothesis is that seed particles would form inside the pores and that subsequent dips would induce growth on the initial seed particles. Crystal growth in the pores could be observed as a function of time if this hypothesis is correct.


Appendix B

Molecular Dynamics Simulations of Benzene Binding to Glucose

B.1 Introduction

This appendix describes molecular dynamics simulations that were completed by Ashlee N. Ford as part of the Department of Energy Computational Science Graduate Fellowship research practicum at Brookhaven National Laboratory (BNL) during Summer 2007. The work was supervised by Yolanda A. Small and James W. Davenport of the BNL Computational Science Center.

The conversion of cellulose to ethanol is a key process for producing renewable biofuels as an alternative energy source. Cellulose is a polymer of glucose that resists breakdown into its monomeric subunits, making it difficult to ferment into ethanol and other biofuels. A worldwide effort aims to improve the performance of known hydrolytic enzymes and to develop new enzymes to hydrolyze cellulose. Cellulose binds specifically to certain regions of proteins that are complementary to the carbohydrate functional groups; therefore, protein structure is vital for recognition of cellulose substrates. Cellulose has hydrophilic hydrogen bonding interactions with water solvent molecules due to the hydroxyl groups of its glucose constituents. Protein sites that bind to cellulose often do so by replacing the solvent-cellulose interactions with hydrogen bonding through acidic side chains. In contrast, the surfaces formed by the aliphatic carbon-hydrogen bonds of cellulose are hydrophobic and typically form the top and bottom faces of the cellulose polymer. Hydrophobic aromatic protein residues arranged parallel to the binding groove may

stabilize substrate binding [54]. Figure B.1 illustrates the structure of a protein known to hydrolyze cellulose [55]. The locations of the aromatic residues are highlighted in red and shown as three-dimensional representations. The phenyl rings are generally oriented parallel to the binding domain.

A simplified model system was considered to better understand the effects of aromatic groups arranged parallel to the aliphatic surfaces of cellulose. One benzene molecule, which is chemically similar to the aromatic rings in the protein residues tryptophan and phenylalanine, was positioned in a stacked orientation relative to a glucose molecule, the subunit of cellulose. The glucose chair configuration was used, in which the equatorial positions are occupied by the hydrophilic hydroxyl groups and the axial hydrogen positions render the top and bottom surfaces of the molecule hydrophobic. This system was previously studied and simulated using molecular dynamics (MD) [54]. A weakness of the previous study is that dynamics were calculated for only 150 ps; currently, typical MD simulations collect data for 5-10 ns of simulated time.

Thermodynamic properties of a system are determined by considering the contributions of each of the microstates that the system can experience. These microstates enter the ensemble average, calculated using Equation B.1, where the ensemble average of some property A is denoted by ⟨A⟩, β is the reciprocal of the product of the Boltzmann constant and the temperature, H is the Hamiltonian, q_i for 1 ≤ i ≤ M represent the positions, p_i for 1 ≤ i ≤ N represent the momenta, and τ ranges over all the states:

    ⟨A⟩ = ∫ A e^(−βH(q_1,…,q_M,p_1,…,p_N)) dτ / ∫ e^(−βH(q_1,…,q_M,p_1,…,p_N)) dτ.        (B.1)

The time average of the property, denoted by Ā, is given by Equation B.2:

    Ā = lim_{t→∞} (1/t) ∫_{τ=0}^{t} A dτ ≈ (1/M) Σ_{i=1}^{M} A_i.        (B.2)

Figure B.1: Catalytic domain of the thermophilic endocellulase E2 from Thermomonospora fusca (PDB code 1TML), with the binding groove indicated. Aromatic residues are highlighted in red and shown as three-dimensional representations.


For sufficiently long sampling times, a system is assumed to have sampled all the allowable configurations in phase space. Therefore, the ensemble average of a property should be equivalent to its time average, that is, ⟨A⟩ = Ā. This is called the ergodic hypothesis [56]. Because 150 ps is a very short sampling period, the goal of this project was to repeat the simulations of Palma et al. [54] for longer time periods in order to determine whether the original sampling period was sufficient for the ergodic hypothesis to hold.

From the simulations, the potential of mean force (PMF) for the approach of glucose and benzene can be calculated as a function of the separation distance between those molecules. The PMF is an effective free energy potential that depends on the state variables of the system, represents the force required to bring molecules from infinite separation to a specified distance, and captures the effects of solute rearrangement in the medium [57]. Obtaining the plot of PMF versus molecular separation distance was the main objective of this project, because comparing the plots for the two simulation durations can be used to evaluate the effects of longer computational sampling time.
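The ergodic hypothesis can be illustrated numerically. The sketch below is illustrative only: the two-state energies and β are made-up reduced-unit values, not quantities from the simulations in this appendix. It computes a Boltzmann-weighted ensemble average (Equation B.1 for a discrete system) and a time average over a Metropolis trajectory (Equation B.2) and shows that they agree for long sampling:

```python
import math
import random

# Hypothetical two-state system with energies 0 and 1 (reduced units).
BETA = 1.0            # beta = 1/(kB*T)
ENERGIES = [0.0, 1.0]

def ensemble_average(energies, beta):
    """Boltzmann-weighted average <A> = sum(A_i e^{-beta E_i}) / sum(e^{-beta E_i})."""
    weights = [math.exp(-beta * e) for e in energies]
    return sum(e * w for e, w in zip(energies, weights)) / sum(weights)

def time_average(energies, beta, n_steps, seed=0):
    """Time average of the energy over a Metropolis trajectory; it approaches
    the ensemble average as n_steps grows (ergodicity)."""
    rng = random.Random(seed)
    state = 0
    total = 0.0
    for _ in range(n_steps):
        trial = 1 - state                      # propose the other state
        dE = energies[trial] - energies[state]
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            state = trial                      # accept with Metropolis criterion
        total += energies[state]
    return total / n_steps

exact = ensemble_average(ENERGIES, BETA)       # = e^{-1}/(1 + e^{-1}) ≈ 0.2689
sampled = time_average(ENERGIES, BETA, 200_000)
print(exact, sampled)                          # the two averages agree for long sampling
```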

B.2  Molecular Dynamics

Molecular dynamics (MD) is a computer simulation technique involving the integration of Newton's equations of motion for a system of particles, typically the atoms of interest in a molecular-scale process. Newton's second law, in the form used to solve for the position and velocity of each particle i, is

    F_{x_i} = m_i (d²x_i/dt²).        (B.3)
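As a minimal illustration of integrating Equation B.3, the sketch below propagates one particle in a harmonic well with the velocity Verlet scheme commonly used in MD codes. The mass, force constant, and time step are arbitrary reduced units chosen here for illustration, not values from this work; a real MD code applies the same update to every atom using the force-field forces described below.

```python
# One particle in a harmonic well, F(x) = -k x (illustrative force only).
def force(x, k=1.0):
    return -k * x

def velocity_verlet(x, v, m, dt, n_steps):
    """Integrate m d2x/dt2 = F(x) (Equation B.3) with velocity Verlet."""
    f = force(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / m) * dt * dt   # position update
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / m * dt         # velocity update with mean force
        f = f_new
    return x, v

m, dt = 1.0, 0.001                     # small time step, analogous to the ~1 fs MD step
x, v = velocity_verlet(1.0, 0.0, m, dt, 10_000)
energy = 0.5 * m * v**2 + 0.5 * x**2
print(x, v, energy)                    # total energy stays near the initial 0.5
```

The scheme is symplectic, which is why the total energy shows no long-term drift; this is the property that makes velocity Verlet the standard integrator in packages such as GROMACS.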

MD simulations are carried out with very small time steps, on the order of 1 fs, and continued for a certain amount of time. Generally, the system is equilibrated until its properties are stable with time. Computer experiments are then performed for longer times to study the behavior of the system under specified conditions. Thermodynamic properties can be calculated using the position and velocity data of the particles. The size of the system and the simulation time are limited by the available computational resources [58].

For this project, MD simulations were performed using the GROMACS software package. For any MD simulation, a force field must be specified to describe the interactions that contribute to the total force on each particle. Several force fields exist, with parameters that reflect different types of interactions for many categories of particles. The GROMOS united-atom force field 53A6 for carbohydrates was chosen for the benzene and glucose system. This force field was parameterized using free enthalpy of hydration data for apolar solvents and is well suited for carbohydrates such as glucose [59].

The force on an atom is the negative derivative of the potential energy with respect to the atom's position coordinates. The potential energy, V(r⃗; s), relates the interaction energy of the atoms to the atom coordinates, r⃗, and the force field parameters, s. Considering only physical interactions, the potential energy terms are given by [59]

    V(r⃗; s) = V^bonded(r⃗; s) + V^nonbonded(r⃗; s),        (B.4)

    V^bonded(r⃗; s) = V^bonds(r⃗; s) + V^angles(r⃗; s) + V^improper dihedrals(r⃗; s) + V^proper dihedrals(r⃗; s),        (B.5)

    V^nonbonded(r⃗; s) = V^van der Waals(r⃗; s) + V^electrostatics(r⃗; s).        (B.6)

The individual terms of the bonded potential energy are given by Equations B.7-B.10, where N_b is the number of covalent bond interactions; b_n is the actual bond length of the nth bond between atoms i and j; N_θ is the number of covalent bond-angle interactions; θ_n is the actual value of the nth angle between atoms i, j, and k; N_ξ is the number of improper harmonic dihedral-angle interactions; ξ_n is the actual value of a dihedral angle between atoms i, j, k, and l; N_φ is the number of proper torsional dihedral-angle interactions; φ_n is the actual value of the torsional dihedral angle between atoms i, j, k, and l; δ_n is the phase shift; m_n is the multiplicity of the torsional dihedral angle; and K_b, b_0, K_θ, θ_0, K_ξ, ξ_0, and K_φ are parameters specified by the force field.

    V^bonds(r⃗; K_b, b_0) = Σ_{n=1}^{N_b} (1/4) K_{b_n} [b_n² − b_{0,n}²]²        (B.7)

    V^angles(r⃗; K_θ, θ_0) = Σ_{n=1}^{N_θ} (1/2) K_{θ_n} [cos(θ_n) − cos(θ_{0,n})]²        (B.8)

    V^improper dihedrals(r⃗; K_ξ, ξ_0) = Σ_{n=1}^{N_ξ} (1/2) K_{ξ_n} [ξ_n − ξ_{0,n}]²        (B.9)

    V^proper dihedrals(r⃗; K_φ) = Σ_{n=1}^{N_φ} K_{φ_n} [1 + cos(δ_n) cos(m_n φ_n)]        (B.10)

The terms of the nonbonded potential energy are van der Waals interactions, expressed by the Lennard-Jones interaction function in Equation B.11, and electrostatic interactions, shown in Equation B.12, consisting of contributions from the direct Coulombic interaction, a cutoff distance-dependent reaction field, and a distance-independent reaction field. In Equations B.11 and B.12, C12 and C6 are Lennard-Jones interaction parameters that depend on the types of atoms involved and the interaction between them; r_ij is the distance between atoms i and j; q_i and q_j are the partial charges on atoms i and j; ε_0 is the dielectric permittivity of vacuum; ε_1 is the relative permittivity of the medium; R_rf is the reaction-field cutoff distance, used to determine the interaction of atom i with the field induced in a continuous dielectric medium outside the cutoff distance by the presence of atom j; and C_rf is a parameter for the reaction-field contribution to the electrostatics.

    V^van der Waals(r⃗; C12, C6) = Σ_{pairs i,j} [ C12_ij/r_ij¹² − C6_ij/r_ij⁶ ]        (B.11)

    V^electrostatics(r⃗; q) = Σ_{pairs i,j} (q_i q_j)/(4π ε_0 ε_1) [ 1/r_ij − (1/2) C_rf r_ij²/R_rf³ − (1 − (1/2) C_rf)/R_rf ]        (B.12)

B.3  Model and Simulation Details

The first step in modeling the binding of glucose and benzene was to create computational representations of the molecules in the protein database file format using Discovery Studio Visualizer. A benzene ring was positioned parallel to the plane containing the three carbon atoms of β-D-glucopyranose, the standard form of glucose, that are bonded to axial hydrogen atoms. The intermolecular separation distance was varied from 3.25 Å to 15.5 Å in increments of 0.25 Å. The starting configuration is shown in Figure B.2.

Each of the different spacings between the molecules is referred to as a "window of λ," or simply a "window." The name refers to the coupling parameter, λ, used for the free energy (FE) calculations; the FE can be used to obtain the plot of PMF versus distance. The energy function U was modified with λ between 0 and 1 as the energy transitioned from state A to state B, where the states refer to the minimum and maximum separation distances, respectively. The linear relationship for the transition between states is

    U(λ) = (1 − λ)U_A + λU_B.        (B.13)
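A quick check of the setup above: the 0.25 Å spacing from 3.25 Å to 15.5 Å yields the 50 windows used for the free energy calculation, and Equation B.13 reduces to U_A and U_B at the endpoints. The energies below are arbitrary placeholders, not values from the simulations:

```python
def U_lambda(lam, U_A, U_B):
    """Linear coupling U(lambda) = (1 - lambda)*U_A + lambda*U_B (Equation B.13)."""
    return (1.0 - lam) * U_A + lam * U_B

# Window count implied by the stated spacing: 3.25 A to 15.5 A in 0.25 A steps.
n_windows = int(round((15.5 - 3.25) / 0.25)) + 1
print(n_windows)                                           # 50

# Endpoint check with placeholder energies U_A = -2.0, U_B = 1.0:
print(U_lambda(0.0, -2.0, 1.0), U_lambda(1.0, -2.0, 1.0))  # recovers U_A and U_B
```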

Figure B.2: Benzene molecule (left) separated from glucose molecule (right) by 3.25 Å.

The system was equilibrated at each of the 50 λ values, and the production phase of the simulation was then performed for each window. After the production phase, the free energy difference, ∆A(λ_i → λ_{i+1}), was calculated as the ensemble average over neighboring λ windows given by Equation B.14, where k_B is the Boltzmann constant, T is the temperature, and ∆U_i = U_{i+1} − U_i. The total free energy change between state A and state B is the sum of the free energy changes for all values of λ [60] and can be calculated using

    ∆A(λ_i → λ_{i+1}) = −k_B T ln ⟨exp(−∆U_i/(k_B T))⟩.        (B.14)
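Equation B.14 amounts to exponential averaging of the energy differences sampled in a window. A small sketch, using synthetic ∆U samples rather than data from these simulations:

```python
import math

def delta_A(dU_samples, kB_T):
    """Equation B.14: -kB*T * ln< exp(-dU/(kB*T)) > over samples of U_{i+1}-U_i."""
    avg = sum(math.exp(-dU / kB_T) for dU in dU_samples) / len(dU_samples)
    return -kB_T * math.log(avg)

kB_T = 2.494  # approximately kB*T at 300 K in kJ/mol

# Sanity check: if every sample has the same dU, the estimate equals that dU.
print(delta_A([1.5] * 100, kB_T))   # ≈ 1.5

# The total change from state A to state B is the sum over all windows;
# the per-window dU lists here are made-up placeholder values.
window_dAs = [delta_A([0.2, 0.3, 0.25], kB_T) for _ in range(3)]
print(sum(window_dAs))
```

Note that the exponential average weights low-energy tails heavily, which is why, in practice, neighboring windows must overlap well for the estimate to converge.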

A different topology file was needed for each separation distance between the molecules. The positions of each atom in the system, along with all the types of particle interactions, were specified in the topology file for each separation distance. GROMACS provides tools that convert protein database files to topology files. Because the force field parameters for phenylalanine (PHE) and glucose initiator (GLCI) were used for the conversion, some editing of the topology files was needed to accurately represent benzene and glucose. The manipulations of the topology file were judged sufficient when all bond, angle, and dihedral interactions were specified and the molecular structure maintained its form for the entire duration of a sample simulation.


The alignment of the molecules was considered very important, so care was taken to ensure that the perpendicular separation distances remained constant during the simulations and that the molecules remained parallel to each other. To keep the benzene and glucose molecules aligned in parallel without any horizontal shifting, the distances between the three aliphatic carbon atoms of glucose with axially bonded hydrogen atoms and the corresponding carbon atoms of the benzene molecule were constrained, and the angles between the two molecules were restrained.

The structure for each separation distance was solvated and minimized separately. Vacuum simulations were also performed for comparison. The topology files for the vacuum simulations were the same as their solvated counterparts except that no solvent molecules were present. Each structure for the vacuum simulations was minimized and run separately, as the solvated structures were.

The simulations were run with the GROMACS molecular dynamics software package and the GROMOS 53A6 force field. For each window of λ, the solute pair was contained in a cubic box extending 1.7 Å beyond the solute in all directions. The computational boxes had periodic boundary conditions and were solvated with water unless the simulations were for vacuum conditions. The LINCS algorithm was invoked to constrain bonds to hydrogen atoms. The Particle Mesh Ewald method was chosen for long-range electrostatics with a cutoff distance of 8 Å. The van der Waals switching distance was set to 6 Å with a cutoff distance of 7 Å.

In each window, the simulation consisted of an equilibration scheme followed by a production run for data collection. The reference temperature was 300 K, and the time step was 1 fs. The equilibration scheme had four parts: 10,000 steps of steepest descent minimization; 10,000 steps of L-BFGS minimization; 50 ps of simulation in the NVT ensemble with a temperature coupling time constant of 0.1; and 50 ps of simulation in the NPT ensemble with a pressure coupling time constant of 0.5 and a reference pressure of 1 atm. Production data collection consisted of 5 ns of NVT simulation with stochastic Langevin dynamics. The simulations were performed in fifty windows of λ to calculate the free energy (FE).
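Assuming the 1 fs time step applies to both the equilibration and production MD phases, and counting the minimization iterations as given, the protocol above implies the following step counts, which indicate the overall computational cost of the study:

```python
# Back-of-envelope step counts for the protocol described above (1 fs time step).
fs_per_step = 1
nvt_steps = 50_000 // fs_per_step            # 50 ps NVT equilibration
npt_steps = 50_000 // fs_per_step            # 50 ps NPT equilibration
production_steps = 5_000_000 // fs_per_step  # 5 ns production
minimization_steps = 10_000 + 10_000         # steepest descent + L-BFGS iterations

per_window = minimization_steps + nvt_steps + npt_steps + production_steps
total = 50 * per_window                      # fifty lambda windows
print(per_window, total)                     # 5,120,000 per window; 256,000,000 total
```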

B.4  Discussion and Conclusions

The data were analyzed to determine a number of properties, including the FE profile needed to construct the plot of PMF versus separation distance. After analyzing the radial distribution data and observing the long-term behavior of several properties, including the density and the constraints as functions of time, it was determined that the simulations were unstable and that the vacuum simulations were not completely minimized. Ultimately, all the data from the study were invalid due to the instabilities that propagated throughout the simulations.

To continue this work, the next step will be to determine how to adjust the run parameters, such as the temperature and pressure coupling time constants and the electrostatic cutoff distances, to produce stable simulations for a single window of λ before putting all windows into production. Running and monitoring longer NPT equilibration runs should allow the density to stabilize. Monitoring the initial run to ensure that joined trajectories restart from previous trajectories should also be an important aspect of stabilizing the constraint distance.

As highly parallel computer processors become available, there is interest in understanding how MD codes scale on these new machines. Stable results for the methods described in this appendix could be used to compare sequential performance with parallel simulations run on larger numbers of processors. The goal of the comparison would be to analyze the code to determine how algorithms can be adjusted to make efficient use of high-performance machines such as New York Blue, the IBM BlueGene/L machine installed in 2007 at Brookhaven National Laboratory.


References

[1] U. Edlund and A. C. Albertsson. Degradable Polymer Microspheres for Controlled Drug Delivery. Advances in Polymer Science, 157:67–112, 2002.
[2] N. K. Varde and D. W. Pack. Microspheres for Controlled Release Drug Delivery. Expert Opinion on Biological Therapy, 4(1):35–51, 2004.
[3] S. Freiberg and X. X. Zhu. Polymer Microspheres for Controlled Drug Release. International Journal of Pharmaceutics, 282(1-2):1–18, 2004.
[4] A. Gopferich. Mechanisms of Polymer Degradation and Erosion. Biomaterials, 17(2):103–114, 1996.
[5] F. Alexis. Factors Affecting the Degradation and Drug-Release Mechanism of Poly(lactic acid) and Poly[(lactic acid)-co-(glycolic acid)]. Polymer International, 54(1):36–46, 2005.
[6] D. Klose, F. Siepmann, K. Elkharraz, S. Krenzlin, and J. Siepmann. How Porosity and Size Affect the Drug Release Mechanisms from PLGA-Based Microparticles. International Journal of Pharmaceutics, 314(2):198–206, 2006.
[7] N. Faisant, J. Siepmann, and J. P. Benoit. PLGA-Based Microparticles: Elucidation of Mechanisms and a New, Simple Mathematical Model Quantifying Drug Release. European Journal of Pharmaceutical Sciences, 15(4):355–366, 2002.
[8] N. Faisant, J. Akiki, F. Siepmann, J. P. Benoit, and J. Siepmann. Effects of the Type of Release Medium on Drug Release from PLGA-Based Microparticles: Experiment and Theory. International Journal of Pharmaceutics, 314(2):189–197, 2006.
[9] J. C. Kang and S. P. Schwendeman. Determination of Diffusion Coefficient of a Small Hydrophobic Probe in Poly(lactide-co-glycolide) Microparticles by Laser Scanning Confocal Microscopy. Macromolecules, 36(4):1324–1330, 2003.
[10] R. Langer and N. Peppas. Chemical and Physical Structure of Polymers as Carriers for Controlled Release of Bioactive Agents - A Review. Journal of Macromolecular Science-Reviews in Macromolecular Chemistry and Physics, C23(1):61–126, 1983.


[11] R. A. Kenley, M. O. Lee, T. R. Mahoney, and L. M. Sanders. Poly(lactide-co-glycolide) Decomposition Kinetics In Vivo and In Vitro. Macromolecules, 20(10):2398–2403, 1987.
[12] C. Berkland, E. Pollauf, C. Raman, R. Silverman, K. Kim, and D. W. Pack. Macromolecule Release from Monodisperse PLG Microspheres: Control of Release Rates and Investigation of Release Mechanism. Journal of Pharmaceutical Sciences, 96(5):1176–1191, 2007.
[13] K. Fu, D. W. Pack, A. M. Klibanov, and R. Langer. Visual Evidence of Acidic Environment within Degrading Poly(lactic-co-glycolic acid) (PLGA) Microspheres. Pharmaceutical Research, 17(1):100–106, 2000.
[14] L. Li and S. P. Schwendeman. Mapping Neutral Microclimate pH in PLGA Microspheres. Journal of Controlled Release, 101(1-3):163–173, 2005.
[15] B. S. Zolnik and D. J. Burgess. Effect of Acidic pH on PLGA Microsphere Degradation and Release. Journal of Controlled Release, 122(3):338–344, 2007.
[16] J. Siepmann, N. Faisant, J. Akiki, J. Richard, and J. P. Benoit. Effect of the Size of Biodegradable Microparticles on Drug Release: Experiment and Theory. Journal of Controlled Release, 96(1):123–134, 2004.
[17] C. Berkland, K. Kim, and D. W. Pack. PLG Microsphere Size Controls Drug Release Rate through Several Competing Factors. Pharmaceutical Research, 20(7):1055–1062, 2003.
[18] S. M. Li. Hydrolytic Degradation Characteristics of Aliphatic Polyesters Derived from Lactic and Glycolic Acids. Journal of Biomedical Materials Research, 48(3):342–353, 1999.
[19] I. Grizzi, H. Garreau, S. Li, and M. Vert. Hydrolytic Degradation of Devices Based on Poly(DL-lactic acid) Size-Dependence. Biomaterials, 16(4):305–311, 1995.
[20] J. Siepmann, K. Elkharraz, F. Siepmann, and D. Klose. How Autocatalysis Accelerates Drug Release from PLGA-Based Microparticles: A Quantitative Treatment. Biomacromolecules, 6(4):2312–2319, 2005.
[21] D. Klose, F. Siepmann, K. Elkharraz, and J. Siepmann. PLGA-Based Drug Delivery Systems: Importance of the Type of Drug and Device Geometry. International Journal of Pharmaceutics, 354(1-2):95–103, 2008.
[22] J. Siepmann and A. Gopferich. Mathematical Modeling of Bioerodible, Polymeric Drug Delivery Systems. Advanced Drug Delivery Reviews, 48(2-3):229–247, 2001.
[23] S. D. Allison. Analysis of Initial Burst in PLGA Microparticles. Expert Opinion on Drug Delivery, 5(6):615–628, 2008.
[24] A. C. R. Grayson, M. J. Cima, and R. Langer. Size and Temperature Effects on Poly(lactic-co-glycolic acid) Degradation and Microreservoir Device Performance. Biomaterials, 26(14):2137–2145, 2005.

[25] F. von Burkersroda, L. Schedl, and A. Gopferich. Why Degradable Polymers Undergo Surface Erosion or Bulk Erosion. Biomaterials, 23(21):4221–4231, 2002.
[26] A. Shenderova, A. G. Ding, and S. P. Schwendeman. Potentiometric Method for Determination of Microclimate pH in Poly(lactic-co-glycolic acid) Films. Macromolecules, 37(26):10052–10058, 2004.
[27] K. C. Wong-Moon, X. Sun, X. C. Nguyen, B. P. Quan, K. Shen, and P. A. Burke. NMR Spectroscopic Evaluation of the Internal Environment of PLGA Microspheres. Molecular Pharmaceutics, 5(4):654–664, 2008.
[28] R. P. Batycky, J. Hanes, R. Langer, and D. A. Edwards. A Theoretical Model of Erosion and Macromolecular Drug Release from Biodegrading Microspheres. Journal of Pharmaceutical Sciences, 86(12):1464–1477, 1997.
[29] D. Y. Arifin, L. Y. Lee, and C. H. Wang. Mathematical Modeling and Simulation of Drug Release from Microspheres: Implications to Drug Delivery Systems. Advanced Drug Delivery Reviews, 58(12-13):1274–1325, 2006.
[30] C. Raman, C. Berkland, K. Kim, and D. W. Pack. Modeling Small-Molecule Release from PLG Microspheres: Effects of Polymer Degradation and Nonuniform Drug Distribution. Journal of Controlled Release, 103(1):149–158, 2005.
[31] S. Takada, T. Kurokawa, K. Miyazaki, S. Iwasa, and Y. Ogawa. Sustained Release of a Water-Soluble GP IIb/IIIa Antagonist from Copoly(-lactic/glycolic)acid Microspheres. International Journal of Pharmaceutics, 146(2):147–157, 1997.
[32] M. Takenaga, Y. Yamaguchi, Y. Ogawa, A. Kitagawa, S. Kawai, Y. Mizushima, and R. Igarashi. Administration of Optimum Sustained-Insulin Release PLGA Microcapsules to Spontaneous Diabetes-Prone BB/Wor//Tky Rats. Drug Delivery, 13(2):149–157, 2006.
[33] B. S. Zolnik and D. J. Burgess. Evaluation of In Vivo-In Vitro Release of Dexamethasone from PLGA Microspheres. Journal of Controlled Release, 127(2):137–145, 2008.
[34] R. Tavakoli and P. Davami. 2D Parallel and Stable Group Explicit Finite Difference Method for Solution of Diffusion Equation. Applied Mathematics and Computation, 188(2):1184–1192, 2007.
[35] J. Q. Gao and G. X. He. An Unconditionally Stable Parallel Difference Scheme for Parabolic Equations. Applied Mathematics and Computation, 135(2-3):391–398, 2003.
[36] B. F. Towler and R. Y. K. Yang. Numerical Solution of Non-linear Parabolic PDEs by Asymmetric Finite Difference Formulae. The Chemical Engineering Journal, 12(2):81–87, 1976.

[37] J. Siepmann, N. Faisant, and J. P. Benoit. A New Mathematical Model Quantifying Drug Release from Bioerodible Microparticles Using Monte Carlo Simulations. Pharmaceutical Research, 19(12):1885–1893, 2002.
[38] A. G. Thombre and K. J. Himmelstein. A Simultaneous Transport-Reaction Model for Controlled Drug Delivery from Catalyzed Bioerodible Polymer Matrices. AIChE Journal, 31(5):759–766, 1985.
[39] F. Mollica, M. Biondi, S. Muzzi, F. Ungaro, F. Quaglia, M. I. La Rotonda, and P. A. Netti. Mathematical Modelling of the Evolution of Protein Distribution within Single PLGA Microspheres: Prediction of Local Concentration Profiles and Release Kinetics. Journal of Materials Science-Materials in Medicine, 19(4):1587–1593, 2008.
[40] S. N. Rothstein, W. J. Federspiel, and S. R. Little. A Simple Model Framework for the Prediction of Controlled Release from Bulk Eroding Polymer Matrices. Journal of Materials Chemistry, 18(16):1873–1880, 2008.
[41] Y. Wang, J. Pan, X. Han, C. Sinka, and L. Ding. A Phenomenological Model for the Degradation of Biodegradable Polymers. Biomaterials, 29(23):3393–3401, 2008.
[42] P. Arosio, V. Busini, G. Perale, D. Moscatelli, and M. Masi. A New Model of Resorbable Device Degradation and Drug Release - Part I: Zero Order Model. Polymer International, 57(7):912–920, 2008.
[43] J. T. He, C. L. Zhong, and J. G. Mi. Modeling of Drug Release from Bioerodible Polymer Matrices. Drug Delivery, 12(5):251–259, 2005.
[44] H. Antheunis, J. C. van der Meer, M. de Geus, W. Kingma, and C. E. Koning. Improved Mathematical Model for the Hydrolytic Degradation of Aliphatic Polyesters. Macromolecules, 42(7):2462–2471, 2009.
[45] M. Zilberman and A. Malka. Drug Controlled Release from Structured Bioresorbable Films Used in Medical Devices–A Mathematical Model. Journal of Biomedical Materials Research Part B: Applied Biomaterials, 89(1):155–164, 2009.
[46] A. G. Ding, A. Shenderova, and S. P. Schwendeman. Prediction of Microclimate pH in Poly(lactic-co-glycolic acid) Films. Journal of the American Chemical Society, 128(16):5384–5390, 2006.
[47] V. Lemaire, J. Belair, and P. Hildgen. Structural Modeling of Drug Release from Biodegradable Porous Matrices Based on a Combined Diffusion/Erosion Process. International Journal of Pharmaceutics, 258(1-2):95–107, 2003.
[48] A. Gopferich and R. Langer. Modeling Monomer Release from Bioerodible Polymers. Journal of Controlled Release, 33(1):55–69, 1995.
[49] J. E. J. Staggs. Discrete Bond-Weighted Random Scission of Linear Polymers. Polymer, 47(3):897–906, 2006.

[50] H. Nishida, M. Yamashita, M. Nagashima, N. Hattori, T. Endo, and Y. Tokiwa. Theoretical Prediction of Molecular Weight on Autocatalytic Random Hydrolysis of Aliphatic Polyesters. Macromolecules, 33(17):6595–6601, 2000.
[51] G. L. Siparsky, K. J. Voorhees, and F. D. Miao. Hydrolysis of Polylactic Acid (PLA) and Polycaprolactone (PCL) in Aqueous Acetonitrile Solutions: Autocatalysis. Journal of Environmental Polymer Degradation, 6(1):31–41, 1998.
[52] E. Loste, R. J. Park, J. Warren, and F. C. Meldrum. Precipitation of Calcium Carbonate in Confinement. Advanced Functional Materials, 14(12):1211–1220, 2004.
[53] N. Spanos, D. Y. Misirlis, D. G. Kanellopoulou, and P. G. Koutsoukos. Seeded Growth of Hydroxyapatite in Simulated Body Fluid. Journal of Materials Science, 41(6):1805–1812, 2006.
[54] R. Palma, M. E. Himmel, and J. W. Brady. Calculation of the Potential of Mean Force for the Binding of Glucose to Benzene in Aqueous Solution. Journal of Physical Chemistry B, 104(30):7228–7234, 2000.
[55] M. Spezio, D. B. Wilson, and P. A. Karplus. Crystal Structure of the Catalytic Domain of a Thermophilic Endocellulase. Biochemistry, 32(38):9906–9916, 1993.
[56] D. A. McQuarrie. Statistical Mechanics. University Science Books, Sausalito, CA, 2nd edition, 2000.
[57] T. Schlick. Molecular Modeling and Simulation: An Interdisciplinary Guide. Springer-Verlag, New York, 2002.
[58] D. Frenkel and B. Smit. Understanding Molecular Simulation: From Algorithms to Applications. Academic Press, New York, 1st edition, 1996.
[59] C. Oostenbrink, A. Villa, A. E. Mark, and W. F. van Gunsteren. A Biomolecular Force Field Based on the Free Enthalpy of Hydration and Solvation: The GROMOS Force-Field Parameter Sets 53A5 and 53A6. Journal of Computational Chemistry, 25(13):1656–1676, 2004.
[60] A. R. Leach. Molecular Modelling: Principles and Applications. Longman, Essex, England, 1st edition, 1996.

