
Computing Distance Functions and Minimal Paths

In the last lecture, we gave one possible way of implicitly representing a simple closed curve: the signed distance representation. At first sight it looks computationally expensive to compute, but now we derive a fast method for computing the distance function, and indeed more general distance functions. The clever algorithm that we present is from [1]. Given a closed, compact, and smooth surface S ⊂ Ω ⊂ R^n (n ≥ 2) (in the simplest case the surface will be a curve c ⊂ R^2), we consider the problem of computing, for each x ∈ Ω, the weighted distance to the surface:

d_S(x) = \inf_{\gamma \in \Gamma_x} \int_0^1 \phi(\gamma(t)) |\gamma'(t)| \, dt = \inf_{\gamma \in \Gamma_x} L_\phi(\gamma)    (1)

where φ : Ω → R_+ is called the metric (or local cost) and

\Gamma_x = \{ \gamma : [0,1] \to \Omega : \gamma(0) = x, \ \gamma(1) \in S \}.    (2)

Notice that ds_\gamma = |\gamma'(t)| \, dt (the arclength element of the path γ), and thus

L_\phi(\gamma) = \int_0^1 \phi(\gamma(t)) |\gamma'(t)| \, dt    (3)

is a weighted length of γ (the weight φ being spatially dependent). Thus, we see that d_S(x) is the weighted length of the smallest-weighted-length path starting from x and ending at any point on S. Notice that if φ = 1, then the integral is just the length of γ and

d_S(x) = \inf_{\gamma \in \Gamma_x} L(\gamma) = \inf_{y \in S} |x - y|    (4)

since straight lines are the shortest-length paths, and the straight-line distance from x to a point y ∈ S is just |x − y|. So in the case φ = 1, d_S reduces to the distance function that we defined in the previous lecture. An example use of the general problem with non-constant φ is object detection: one finds the path that corresponds to the boundary of an object. A simple way of doing this is to choose

\phi(x) = \frac{1}{1 + |\nabla I(x)|^2}    (5)

where I : Ω → R is the image. Note that the boundary of an object often has large image gradients, and thus the minimal path would align with the object boundary; the minimal path algorithm has been used for detecting roads in aerial images and vessels in medical images [1]. We shall now derive an algorithm whereby we can compute d_S(x) for each x ∈ Ω quickly, and thereby also compute the minimal path (with respect to the length defined by L_\phi) from each x ∈ Ω to S, all in one shot!
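As a concrete illustration, the edge-based metric (5) can be computed directly from image gradients. The following is a minimal sketch in Python/NumPy on a synthetic image (function and variable names are illustrative, not from [1]):

```python
import numpy as np

def edge_metric(I):
    """phi(x) = 1 / (1 + |grad I(x)|^2): small on edges, ~1 in flat regions."""
    Iy, Ix = np.gradient(I.astype(float))   # central differences
    return 1.0 / (1.0 + Ix**2 + Iy**2)

# Synthetic image: a bright square on a dark background.
I = np.zeros((64, 64))
I[20:44, 20:44] = 1.0
phi = edge_metric(I)
# phi == 1 in flat regions; phi < 1 along the square's boundary.
```

Since φ is small exactly where |∇I| is large, paths running along object boundaries accumulate little weighted length, which is why the minimizer of L_φ traces the boundary.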

1.1 Euler-Lagrange Equations

By now we know that in order to find possible minima of L_\phi so as to compute d_S, we locate critical paths, i.e., paths such that \nabla L_\phi = 0, which are precisely the paths that satisfy the Euler-Lagrange equations. Thus, we compute the Euler-Lagrange equations. To do so, we compute the directional derivative, but first we determine the permissible class of perturbations of a path γ ∈ Γ_x. A permissible perturbation h : [0,1] → R^n is such that γ + εh ∈ Γ_x for small ε > 0. This implies that

\gamma(0) + \varepsilon h(0) = x \ \Rightarrow \ h(0) = 0    (6)

\gamma(1) + \varepsilon h(1) \in S \ \Rightarrow \ h(1) \cdot N_S(\gamma(1)) = 0    (7)

where N_S(γ(1)) is the normal to S at the point γ(1). The last relation simply says that the perturbation h(1) can move γ(1) only in a direction tangent to the surface S (otherwise, if we were to move in the normal direction, then γ(1) + εh(1) ∉ S). Thus,

V_{\Gamma_x} = \{ h : [0,1] \to \mathbb{R}^n : h(0) = 0, \ h(1) \cdot N_S(\gamma(1)) = 0 \}    (8)

are the permissible perturbations of Γ_x. Now,

dL_\phi(\gamma) \cdot h = \frac{d}{d\varepsilon} L_\phi(\gamma + \varepsilon h) \Big|_{\varepsilon = 0}    (9)

= \int_0^1 \frac{d}{d\varepsilon} \Big[ \phi(\gamma(t) + \varepsilon h(t)) \, |\gamma'(t) + \varepsilon h'(t)| \Big]_{\varepsilon = 0} dt    (10)

= \int_0^1 \Big[ \nabla\phi(\gamma(t)) \cdot h(t) \, |\gamma'(t)| + \phi(\gamma(t)) \frac{\gamma'(t)}{|\gamma'(t)|} \cdot h'(t) \Big] \, dt    (11)

= \int_0^1 \Big[ \nabla\phi(\gamma(t)) - \frac{1}{|\gamma'(t)|} \frac{d}{dt} \Big( \phi(\gamma(t)) \frac{\gamma'(t)}{|\gamma'(t)|} \Big) \Big] \cdot h(t) \, |\gamma'(t)| \, dt    (12)

\quad + \Big[ \phi(\gamma(t)) \frac{\gamma'(t)}{|\gamma'(t)|} \cdot h(t) \Big]_{t=0}^{t=1}    (13)

where in the last expression we have integrated by parts. We first note that

ds = |\gamma'(t)| \, dt, \quad \frac{d}{ds} = \frac{1}{|\gamma'(t)|} \frac{d}{dt}, \quad \gamma_s(t) = \frac{\gamma'(t)}{|\gamma'(t)|}    (14)

where s denotes the arclength parameter of γ. Also, note that the boundary term vanishes (at t = 0 it vanishes since h(0) = 0). We argue that for a minimal path γ,

\gamma_s(1) \cdot h(1) = \frac{\gamma'(1)}{|\gamma'(1)|} \cdot h(1) = 0;    (15)

indeed, γ_s(1) should be normal to the surface S; otherwise, if there were a component of γ_s(1) tangent to the surface, then, writing y = γ(1), we could end the path at y + εγ_s(1) ∈ S for small ε, and note that there is some 0 < t_0 < 1 where γ(t_0) = y + εγ_s(1). We could then define a new path γ̃ as

\tilde{\gamma}(t) = \gamma(t_0 t), \quad t \in [0,1]    (16)

and this new path would have smaller length (with respect to L_\phi) than γ, contradicting that γ is minimal. Therefore, γ_s(1) must be normal to S, and by our permissible perturbations, h(1) is tangent to S. Hence, γ_s(1) · h(1) = 0. Now,

dL_\phi(\gamma) \cdot h = \int_\gamma \Big[ \nabla\phi(\gamma(s)) - \frac{d}{ds} \big( \phi(\gamma(s)) \gamma_s(s) \big) \Big] \cdot h(s) \, ds    (17)

where we have made a change of variable to the arclength variable s. We therefore see that the Euler-Lagrange equations are

\nabla L_\phi(\gamma) = \nabla\phi(\gamma(s)) - \frac{d}{ds} \big( \phi(\gamma(s)) \gamma_s(s) \big) = 0.    (18)

1.2 Eikonal Equation and Relation to the E-L Equations of L_\phi

Now, in previous lectures, we would guess an initial path, perform gradient descent on L_\phi, and converge to a local minimum. However, there is a much smarter method to obtain the global minimum without any initial guess of the path. To see this, suppose that we solve the PDE

|\nabla U(x)| = \phi(x), \ x \in \Omega; \qquad U(x) = 0, \ x \in S;    (19)

this equation is known as the eikonal equation, and we shall see fast methods to compute its solution shortly, but for now assume that we have solved the PDE and have the solution U. Define a path by the differential equation

\gamma'(t) = \nabla U(\gamma(t)), \qquad \gamma(0) = x.    (20)

Then

\gamma_s(t) = \frac{\gamma'(t)}{|\gamma'(t)|} = \frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} = \frac{\nabla U(\gamma(t))}{\phi(\gamma(t))} \ \Rightarrow \ \phi(\gamma(t)) \gamma_s(t) = \nabla U(\gamma(t)).    (21)

Also,

\nabla\phi(\gamma(t)) = \nabla \big( |\nabla U(x)| \big) \Big|_{x = \gamma(t)} = H_U(\gamma(t)) \frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|}    (22)

where H_U is the Hessian of U. Now,

\nabla\phi(\gamma(s)) - \frac{d}{ds} \big( \phi(\gamma(s)) \gamma_s(s) \big) = H_U(\gamma(t)) \frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} - \frac{d}{ds} \big( \nabla U(\gamma(t)) \big) = H_U(\gamma(t)) \frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} - H_U(\gamma(t)) \gamma_s(t) = 0.    (23)

Thus, we see that the γ that solves the differential equation (20) solves the Euler-Lagrange equations for L_\phi. We now compute the weighted length of the path γ that solves (20), but we first note that

\frac{d}{ds} U(\gamma(t)) = \nabla U(\gamma(t)) \cdot \gamma_s(t) = \nabla U(\gamma(t)) \cdot \frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} = |\nabla U(\gamma(t))|    (24)

and thus

L_\phi(\gamma) = \int_0^1 \phi(\gamma(t)) |\gamma'(t)| \, dt = \int_\gamma |\nabla U(\gamma(t))| \, ds = \int_\gamma \frac{d}{ds} U(\gamma(t)) \, ds = U(\gamma(1)) - U(\gamma(0)).    (25)

If we choose x ∈ S, then γ(0) = x ∈ S and

L_\phi(\gamma) = U(\gamma(1)),    (26)

which says that U(y) is the length of the path that solves the E-L equation, starts at some point in S, and ends at y.
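Given a numerical solution U of (19), the minimal path from a point x to S can be extracted by integrating the gradient ODE (20) in reverse: since U = 0 on S and U increases away from S, one marches in the direction −∇U starting from x. Below is a hedged sketch, assuming φ = 1 and S a single point, so that U is the exact Euclidean distance and the minimal path is a straight line (all names are illustrative):

```python
import numpy as np

def backtrack(U, x, step=0.5, tol=1.0, max_iter=10000):
    """Trace the minimal path from x toward S by descending U (cf. (20))."""
    Uy, Ux = np.gradient(U)                       # grid gradient of U
    p = np.array(x, float)
    path = [p]
    for _ in range(max_iter):
        i, j = int(round(p[0])), int(round(p[1]))  # nearest grid point
        g = np.array([Uy[i, j], Ux[i, j]])
        norm = np.linalg.norm(g)
        if norm == 0.0 or U[i, j] < tol:          # reached a neighborhood of S
            break
        p = p - step * g / norm                   # step opposite grad U
        path.append(p)
    return np.array(path)

# phi = 1, S = the single point (30, 40): U is the Euclidean distance to S.
ii, jj = np.mgrid[0:100, 0:100]
U = np.hypot(ii - 30.0, jj - 40.0)
path = backtrack(U, (90.0, 10.0))
# The traced path ends near the source point (30, 40).
```

In practice U would come from a fast eikonal solver rather than a closed form, and a higher-order ODE integrator with interpolated gradients gives smoother paths; the nearest-grid-point gradient here is only a simple first sketch.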

1.3 Global Minimum of L_\phi

We show now that, in fact, γ defined by (20) is the global minimum path from x ending at some point in S. To see this, we define the following curve evolution (surface evolution in higher dimensions; the argument works there too, but we show it for curves to keep the notation simple):

\partial_t c(t,p) = \frac{1}{\phi(c(t,p))} N(t,p), \ t > 0; \qquad c(0, \cdot) = S;    (27)

that is, the initial curve is the set S (which, for the sake of simplicity of notation, we assume is a curve), and then we deform the curve in the outward normal direction at a speed proportional to 1/φ. Note that φ > 0, and so the curve is moving outward from S. We note that this is the same form of equation that we have seen in the last lectures on Region Competition. We now show that c(t, ·) corresponds to the t level set of U, that is, c(t, ·) = {x ∈ R^n : U(x) = t}, and that U(x) is in fact the length of the global minimal path from x to S. To show this, we apply an inductive argument. That is, suppose that c(t, ·) is the t level set of U and that U(c(t,p)) is the length of the global minimum path from c(t,p) to S; we then show that c(t + Δt, ·) for Δt > 0 small is the t + Δt level set of U, and that U(c(t + Δt, p)) is the global minimum path length from c(t + Δt, p) to S. Now, for Δt > 0 small, consider the global minimum path from c(t + Δt, p) to the curve c(t, ·) = {x ∈ R^n : U(x) = t}. Note that the minimal paths emanating from c(t, ·) to outside c(t, ·) will be normal to c(t, ·) (this uses the same reasoning as in the previous section, where we concluded that h(1) · N_S = 0). Also, if a point x is in a (small) neighborhood outside of c(t, ·), then the minimal path from x to c(t, ·) will be a straight line that is normal to c(t, ·) and touches x. This is because minimal paths are locally approximated by straight lines (φ is nearly constant in a neighborhood if it is continuous).
Thus, since c(t + Δt, p) is in a neighborhood outside c(t, ·), the minimal path from c(t + Δt, p) to c(t, ·) is simply the line segment from c(t,p) to c(t + Δt, p), and then

c(t + \Delta t, p) - c(t, p) = \Delta t \frac{1}{\phi(c(t,p))} N(t,p), \qquad |c(t + \Delta t, p) - c(t, p)| = \Delta t \frac{1}{\phi(c(t,p))},    (28)

the latter being the Euclidean length from c(t,p) to c(t + Δt, p); thus the minimal path length from c(t, ·) to c(t + Δt, p) is

L_\phi = \phi(c(t,p)) \, |c(t + \Delta t, p) - c(t, p)| = \Delta t.    (29)

Therefore, the global minimal path from c(t + Δt, p) to S will be the minimal path from c(t + Δt, p) to c(t, ·) concatenated with the global minimal path from c(t,p) to S. But by the inductive hypothesis, we know that the global minimum path length from c(t,p) to S is U(c(t,p)), and so

U(c(t + \Delta t, p)) = U\Big( c(t,p) + \Delta t \frac{1}{\phi(c(t,p))} N(t,p) \Big) = U(c(t,p)) + \Delta t = t + \Delta t,    (30)

that is, c(t + Δt, ·) is the t + Δt level set of U.

The preceding argument is also known as the Principle of Dynamic Programming. To summarize, we have just established that U (x) is the global minimal path length from x to S, and that the level set {U = t} is c(t, ·). Moreover, the global minimal path is computed using (20).

1.4 Fast Marching Algorithm

The equations (27) are the basis for a fast algorithm for computing U known as Fast Marching [2] (see also [3]). Indeed, the idea of the algorithm is simply to evolve the curve c (propagating the level sets of U outward) and simultaneously record the arrival times t at each of the points of c(t, ·) using the eikonal equation (19). Note that information (i.e., the initial value U = 0 on S) propagates outward (since φ > 0), and thus we must use an upwind difference scheme to discretize (19), as we did in the previous lecture for the region-based term of region competition. The Fast Marching algorithm then works by updating the labels of points, where the labels are FAR (points that have not been touched by the front c(t, ·)), ALIVE (points that have already been passed by the front), and TRIAL (points that the front is currently visiting). The TRIAL points are where the algorithm solves for the value of U using only the information of U at the currently ALIVE or TRIAL points, so as to obey the upwind scheme. The algorithm works on a discrete grid, and in two dimensions (the generalization to higher dimensions is straightforward) it is as follows. Initialize

U_{ij} = \begin{cases} +\infty & ij \notin S \\ 0 & ij \in S \end{cases}, \qquad l_{ij} = \begin{cases} \text{FAR} & ij \notin S \\ \text{TRIAL} & ij \in S \end{cases}    (31)

where l_{ij} denotes the label of the pixel ij. Then loop the following until all grid points have been marked ALIVE:

1. i_m, j_m = \arg\min_{ij \in \text{TRIAL}} U_{ij}

2. l_{i_m j_m} = ALIVE

3. For each neighbor kl of i_m j_m :
   - If l_{kl} = FAR, then set l_{kl} = TRIAL.
   - If l_{kl} ≠ ALIVE, then solve

     \big( \max(u - U_{k-1,l}, \, u - U_{k+1,l}, \, 0) \big)^2 + \big( \max(u - U_{k,l-1}, \, u - U_{k,l+1}, \, 0) \big)^2 = \phi_{kl}^2    (32)

     for u, and set U_{kl} = \min(u, U_{kl}).

The minimum in step 1 can be computed quickly by using a heap structure to keep the TRIAL points ordered. The complexity of this algorithm is then O(N log N), where N is the number of grid points (the log N comes from maintaining the heap).
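A compact sketch of the algorithm above in Python, using the standard-library heap for the TRIAL points, might look as follows; it assumes unit grid spacing and the squared form of the upwind discretization, and all names are illustrative:

```python
import heapq
import numpy as np

def upwind_solve(U, i, j, p):
    """Solve (max(u-a,0))^2 + (max(u-b,0))^2 = p^2 at grid point (i, j)."""
    n, m = U.shape
    a = min(U[i-1, j] if i > 0 else np.inf, U[i+1, j] if i < n - 1 else np.inf)
    b = min(U[i, j-1] if j > 0 else np.inf, U[i, j+1] if j < m - 1 else np.inf)
    if abs(a - b) >= p:
        return min(a, b) + p                 # one-sided (causal) update
    return 0.5 * (a + b + np.sqrt(2.0 * p * p - (a - b) ** 2))

def fast_marching(phi, sources):
    """Fast Marching on a 2D grid with unit spacing; sources is the set S."""
    n, m = phi.shape
    U = np.full((n, m), np.inf)
    alive = np.zeros((n, m), dtype=bool)
    heap = []
    for (i, j) in sources:                   # U = 0 and label TRIAL on S
        U[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))
    while heap:
        u, i, j = heapq.heappop(heap)
        if alive[i, j]:
            continue                         # stale heap entry (lazy deletion)
        alive[i, j] = True                   # smallest TRIAL becomes ALIVE
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            k, l = i + di, j + dj
            if 0 <= k < n and 0 <= l < m and not alive[k, l]:
                v = upwind_solve(U, k, l, phi[k, l])
                if v < U[k, l]:
                    U[k, l] = v              # FAR/TRIAL neighbor is updated
                    heapq.heappush(heap, (v, k, l))
    return U

phi = np.ones((50, 50))
U = fast_marching(phi, [(25, 25)])
# With phi = 1, U approximates the Euclidean distance to the source.
```

The heap holds stale duplicates rather than supporting a decrease-key operation; the `alive` check on pop discards them, which keeps the implementation short while preserving the O(N log N) bound.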

1.5 Fast Sweeping Algorithm

Another algorithm for computing the solution of the eikonal equation is Fast Sweeping [5] (see also [4]), which does not require the bookkeeping of a heap and labels as in the Fast Marching Method, and indeed converges after a few iterations, each costing O(N). The algorithm initializes

U_{ij} = \begin{cases} +\infty & ij \notin S \\ 0 & ij \in S \end{cases}    (33)

and then iterates the following sweeps until convergence:

1. for j = 1, …, N_y, for i = 1, …, N_x : solve (32) and set U_{ij} = \min(u, U_{ij})
2. for j = 1, …, N_y, for i = N_x, …, 1 : solve (32) and set U_{ij} = \min(u, U_{ij})
3. for j = N_y, …, 1, for i = 1, …, N_x : solve (32) and set U_{ij} = \min(u, U_{ij})
4. for j = N_y, …, 1, for i = N_x, …, 1 : solve (32) and set U_{ij} = \min(u, U_{ij})

The algorithm converges rapidly; in fact, the scheme typically converges within 2-3 iterations.
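A sketch of the sweeping iteration in the same style (unit grid spacing, squared form of the upwind update, illustrative names):

```python
import numpy as np

def sweep_update(U, phi, i, j):
    """Upwind update at (i, j); same discretization as in Fast Marching."""
    n, m = U.shape
    a = min(U[i-1, j] if i > 0 else np.inf, U[i+1, j] if i < n - 1 else np.inf)
    b = min(U[i, j-1] if j > 0 else np.inf, U[i, j+1] if j < m - 1 else np.inf)
    if np.isinf(a) and np.isinf(b):
        return U[i, j]                       # no upwind information yet
    if abs(a - b) >= phi[i, j]:
        u = min(a, b) + phi[i, j]            # one-sided update
    else:
        u = 0.5 * (a + b + np.sqrt(2.0 * phi[i, j] ** 2 - (a - b) ** 2))
    return min(u, U[i, j])                   # Gauss-Seidel: never increase U

def fast_sweeping(phi, sources, n_rounds=3):
    """Alternate the four sweep orderings a few times (each sweep is O(N))."""
    n, m = phi.shape
    U = np.full((n, m), np.inf)
    for (i, j) in sources:
        U[i, j] = 0.0
    fwd_i, bwd_i = list(range(n)), list(range(n - 1, -1, -1))
    fwd_j, bwd_j = list(range(m)), list(range(m - 1, -1, -1))
    for _ in range(n_rounds):
        for rows, cols in [(fwd_i, fwd_j), (fwd_i, bwd_j),
                           (bwd_i, fwd_j), (bwd_i, bwd_j)]:
            for i in rows:
                for j in cols:
                    U[i, j] = sweep_update(U, phi, i, j)
    return U

phi = np.ones((30, 30))
U = fast_sweeping(phi, [(0, 0)])
# With phi = 1, U approximates the distance to the corner source (0, 0).
```

Each of the four orderings resolves characteristics pointing into one quadrant, which is why a small fixed number of rounds suffices when φ is well behaved.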

References

[1] L.D. Cohen and R. Kimmel. Global minimum for active contour models: A minimal path approach. International Journal of Computer Vision, 24(1):57-78, 1997.

[2] J.A. Sethian. A fast marching level set method for monotonically advancing fronts. Proceedings of the National Academy of Sciences of the United States of America, 93(4):1591, 1996.

[3] J.N. Tsitsiklis. Efficient algorithms for globally optimal trajectories. IEEE Transactions on Automatic Control, 40(9):1528-1538, 1995.

[4] A.J. Yezzi Jr. and J.L. Prince. An Eulerian PDE approach for computing tissue thickness. IEEE Transactions on Medical Imaging, 22(10):1332-1339, 2003.

[5] H. Zhao. A fast sweeping method for eikonal equations. Mathematics of Computation, 74(250):603-628, 2005.
