Skin Research and Technology 2005; 11: 123–131 Printed in Denmark. All rights reserved

Copyright © Blackwell Munksgaard 2005


Skin lesions segmentation and quantification from 3D body's models

G. Guillard and J. M. Lagarde
Centre Jean-Louis Alibert, Institut de Recherche Pierre Fabre, Toulouse, France

Background/aims: Characterising large areas of the body has always been problematic. The aim of this article is to test a method to evaluate the developed surface areas of dermatological lesions from a 3D textured model of the body's envelope. Method: We applied the active contour method to isolate the lesions and then, by means of the 3D model obtained, we calculated the area. This was tested on standards of known area. Results: For the standards, the relative deviation between the calculated and theoretical surfaces was under 3%.

Conclusion: The results obtained indicate the feasibility of the method for studying the efficiency of dermatological treatment.

Key words: skin – active contours – 3D modelling – developed surface

Accepted for publication 13 July 2004

Introduction
In dermatology, imagery is a rapidly expanding field. Epiluminescence (1), ultrasound (2), confocal microscopy (3), optical coherence tomography (4) and even MRI (5) are increasingly used in the exploration of the skin and of dermatological pathologies. These sometimes quantitative techniques of exploration aim to characterise surfaces or volumes of skin of limited dimensions. In contrast, for large areas of the body, no reliable system is available. To approach this question, the dermatologist only has his eyes, his memory and, at best, a camera, with all the attendant problems of development, storage of the shots, and standardisation of the lighting and of the position of the subject. With the human eye alone, repeatability is always poor: the risks of inter- and intra-operator variation are high. Digital photography only partially resolves these problems and is of no help in dealing with body shape. These problems had to be overcome before lesions could be routinely quantified, so, in a preliminary step, we developed a system of scanning and 3D reconstruction of the body's envelope including densitometric information; we then added software to map the limits of the lesions.

To obtain a model of the outer envelope of the human body, point and texture data must be acquired. Most systems use profilometry: the WB4 (Whole-Body scanner) from Cyberware Co. (Monterey, CA, USA) (6), the VITUS system from Vitronic (Wiesbaden, Germany) (7, 8) and the NKV1100s from Voxelan Co. (Kanagawa, Japan). This last device covers the two sides of the body separately; to obtain a full model, markers are laid out on the body to enable the two half-scans to be fused. Our system is also based on profilometry but additionally includes a calibration system (9) that overcomes this problem by establishing a direct correspondence between the spatial co-ordinates and the camera co-ordinates. Other principles have also been applied, such as fringe projection and stereovision with motif projection.

The current resolution of the body scanner allows neither the characterisation of large lesions nor the mapping of small ones. To do so, we used the active contour method, currently very popular in image processing, whether for the detection of contours, the analysis of movement or segmentation. The method, introduced by Kass et al. (10), was later improved, using different approaches, by Caselles et al. (11) and Cohen et al. (12). Although it is used in medicine, for instance in MRI (13) and ultrasound imaging (14), it has not, to our knowledge, been applied to dermatology. The version we used is that of Xu and Prince (15), known as GVF (gradient vector flow).

Material, Principle and Method

Description
The body scanner measures 1.5 m × 2 m × 2.6 m and comprises three separate elements (Fig. 1):
- a fixed part with the base and a mast;
- a vertically mobile, 'U'-shaped part carrying the sensors (the subject stands upright between the arms of the 'U');
- the drive unit, which raises and lowers the 'U' by means of a worm screw.

Each of the sensors (profilometers), located at the four corners of the 'U', is composed of three elements:
- a laser diode producing a sheet laser (after calibration, the four sheets fuse to form a single plane that draws an uninterrupted profile around the body of the subject);
- a camera that first observes the profile to generate the 3D information and then records an image of the texture, which is later mapped onto the 3D model;
- a halogen spotlight to ensure repeatable lighting.

Fig. 1. The robot


The whole robot set-up is controlled by a PC. Several of these set-ups were built to aid in the design and to standardise data acquisition. They all maintain the subject in a standard position.

Principle of data acquisition
In this set-up, the 3D data are generated by profilometry. A camera set above or below the sheet laser observes the profile, and the principle of triangulation is then used to find the position of each point of the profile. To scan an area, the profilometer is slid up or down (Fig. 2).

Treatment of the data
The first type of data obtained is a set of points with their spatial co-ordinates (Fig. 3), generated during the profilometry step. From these points we build a lattice that will later be used to determine the area. To obtain this lattice, we chose the 'marching cubes' method (16–18); the lattice it produces is a succession of adjacent triangles (Fig. 4).
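To make this step concrete, here is a minimal sketch (not the authors' implementation) that rasterises a point cloud into a scalar volume and extracts a triangle lattice with the marching-cubes routine of scikit-image; the voxel size, the Gaussian smoothing and the helper names are assumptions made for the example.

```python
# Minimal sketch, assuming a (N, 3) point cloud from the profilometry step.
# It rasterises the points into a scalar volume, smooths it, and extracts a
# triangle lattice with marching cubes (here the scikit-image implementation).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def points_to_volume(points, voxel=5.0, pad=3):
    """Crude rasterisation of a point cloud into a scalar occupancy volume."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel).astype(int) + pad
    vol = np.zeros(idx.max(axis=0) + pad + 1, dtype=float)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return gaussian_filter(vol, sigma=1.5), mins, voxel

def lattice_from_points(points):
    vol, mins, voxel = points_to_volume(points)
    # Iso-surface extraction: vertices, triangular faces and vertex normals.
    verts, faces, normals, _ = measure.marching_cubes(vol, level=0.1)
    return verts * voxel + mins, faces, normals   # back to scanner co-ordinates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta, z = rng.uniform(0, 2 * np.pi, 20000), rng.uniform(0, 500, 20000)
    torso_like = np.c_[150 * np.cos(theta), 150 * np.sin(theta), z]  # fake cylinder "torso"
    verts, faces, _ = lattice_from_points(torso_like)
    print(f"lattice: {len(verts)} vertices, {len(faces)} triangles")
```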

Fig. 2. Principle of the profilometry (diagram labels: laser, laser plane, camera at angle α, profile, profilometer)

Fig. 3. Torso of a man: points


Fig. 4. Torso of a man: lattice

Fig. 5. Torso of a man: texture

Fig. 6. Texture grafted onto the lattice

Fig. 7. Torso of a man: complete model

The second type of data is the textures. They are contained in the four images from the four cameras (Fig. 5), taken either before or after the profilometry measurements. The next step is to map the textures onto the lattice. For each triangle, knowing its normal, we know which camera sees it best; in the texture image from this camera, we recover the corresponding triangle (Fig. 6). In this figure, the triangle illustrated is drawn much larger than it actually is. In this way, we finally obtain the full 3D model (Fig. 7).
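The camera-selection rule just described can be sketched as follows; this is an illustration rather than the authors' code, it assumes outward-oriented triangles, and the camera positions given at the end are placeholders.

```python
# Sketch of the per-triangle camera choice, assuming outward-oriented faces
# and one known 3D position per profilometer camera.
import numpy as np

def face_normals(verts, faces):
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = np.cross(b - a, c - a)
    return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)

def best_camera_per_face(verts, faces, camera_positions):
    """Return, for each triangle, the index of the camera that sees it best."""
    normals = face_normals(verts, faces)                            # (F, 3)
    centroids = verts[faces].mean(axis=1)                           # (F, 3)
    to_cam = camera_positions[None, :, :] - centroids[:, None, :]   # (F, K, 3)
    to_cam /= np.linalg.norm(to_cam, axis=2, keepdims=True)
    # A large positive dot product means the camera faces the triangle frontally.
    scores = np.einsum('fkd,fd->fk', to_cam, normals)
    return scores.argmax(axis=1)

# Example: four cameras at the corners of the 'U' (placeholder co-ordinates, in mm).
cameras = np.array([[ 600,  600, 0], [ 600, -600, 0],
                    [-600,  600, 0], [-600, -600, 0]], dtype=float)
```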

Principle
In this section, we present the active contour method and how we used its output to calculate the area of the detected lesions.

Active contours
Active contours are frequently used in image processing, especially to detect the boundaries of objects. They are contours that are first roughly plotted around an object (e.g. a lesion) in the image. Their position is then fine-tuned under the action of two forces:
- an internal force maintaining the regularity of the line and preventing the contour from collapsing onto itself;
- an external force pushing the curve towards a particular element of the image (e.g. a boundary) (Fig. 8).


Fig. 8. Active contour detection of a spot: initial contour position and its forces for the detected spot (left), final contour position of the spot (right)

For a differentiable function f and a variable x, we write f_x for ∂f/∂x.

The classic model
This model first appeared in Kass et al. (10). A contour is given by

$$C : \Omega \to \mathbb{R}^{2}, \quad v \mapsto (x(v), y(v)).$$

The set of all contours is denoted $\mathcal{A}$ and the energy is defined by

$$E : \mathcal{A} \to \mathbb{R}, \quad C \mapsto E(C) = \int_{\Omega} \left( \alpha \left\| C'(v) \right\|^{2} + \beta \left\| C''(v) \right\|^{2} + P(C(v)) \right) \mathrm{d}v,$$

where P : ℝ² → ℝ is a potential function. The first two terms, α‖C′(v)‖² + β‖C″(v)‖², represent the internal force of the model, its rigidity. The last term, P(C(v)), is the external force, the one that pushes the model towards the required contour. For P(C(v)) we take P(C(v)) = λg(‖∇I(C)‖), with g : ℝ → ℝ strictly decreasing, where I is the image and λ ∈ ℝ. For instance:

$$g(x) = -x^{2} \quad \text{or} \quad g(x) = \frac{1}{1 + x^{2}}.$$

Once this energy has been defined, we look for the contour that minimises it, i.e. the solution of the Euler–Lagrange equation

$$\alpha C''(v) - \beta C''''(v) - \nabla P(C(v)) = 0.$$

To solve this equation, the active contour is made dynamic by treating C as a function of time as well as of v. The contour then evolves according to

$$\frac{\partial C}{\partial t}(v, t) = \alpha C''(v, t) - \beta C''''(v, t) - \nabla P(C(v, t)).$$

When the solution C(v, t) stabilises, the term ∂C/∂t vanishes and the Euler–Lagrange equation is satisfied.

This model has several disadvantages:
- inability to handle changes of topology;
- dependence on the chosen parametrisation;
- the need for an accurate initialisation of the contour, for two reasons related to the characteristics of ∇I(C): although the vectors obtained do point towards the boundaries, and are perpendicular to them, they only have a large amplitude in the immediate vicinity of the borders, and in homogeneous regions they are zero. Thus the external force, which moves the active contour, is only strong close to the border.

Figure 9, which reports the successive iterations of the active contour, illustrates the behaviour encountered with this model.

The GVF active contour
This method, described in Xu and Prince (15), is the one we chose to use. In this approach, the classic external force −∇P(C(v)) is replaced by a field of vectors V(x, y), known as the GVF field. The evolution equation then becomes

$$\frac{\partial C}{\partial t}(v, t) = \alpha C''(v, t) - \beta C''''(v, t) + V(C(v, t)).$$

The solution of this equation is called the GVF active contour.

Fig. 9. Successive iterations of the standard model on an image featuring a 'U'
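As a hedged illustration of how such evolution equations are typically discretised (the standard semi-implicit scheme, not necessarily the authors' exact implementation): the contour is sampled at n points, the internal terms are gathered into a pentadiagonal matrix A, and each iteration solves (A + γI)x_new = γx + F_ext. For the classic model the external force (fx, fy) is −∇P sampled at the contour points; for the GVF model it is the field V.

```python
# One iteration of a discretised snake, assuming a closed contour (x, y) of n
# points and an external force field (fx, fy) given on the image pixel grid.
import numpy as np

def snake_step(x, y, fx, fy, alpha=0.1, beta=0.1, gamma=1.0):
    n = len(x)
    # Circulant matrix encoding -alpha*C'' + beta*C'''' by finite differences.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -alpha - 4 * beta
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    inv = np.linalg.inv(A + gamma * np.eye(n))
    # Nearest-pixel sampling of the external force at the current contour points.
    xi = np.clip(np.round(x).astype(int), 0, fx.shape[1] - 1)
    yi = np.clip(np.round(y).astype(int), 0, fx.shape[0] - 1)
    return inv @ (gamma * x + fx[yi, xi]), inv @ (gamma * y + fy[yi, xi])
```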


The GVF field is defined as the field V(x, y) = (V_H(x, y), V_V(x, y)), split into its horizontal and vertical components, that minimises the energy

$$\varepsilon = \iint \mu \left( \left( \frac{\partial V_H}{\partial x} \right)^{2} + \left( \frac{\partial V_H}{\partial y} \right)^{2} + \left( \frac{\partial V_V}{\partial x} \right)^{2} + \left( \frac{\partial V_V}{\partial y} \right)^{2} \right) + \left| \nabla f \right|^{2} \left| V - \nabla f \right|^{2} \, \mathrm{d}x \, \mathrm{d}y,$$

where f is the edge (contour) map of the image. This field cannot, in general, be expressed as the negative gradient of a potential, so the solution does not minimise the energy E of the classic model. This drawback is, however, outweighed by the qualities of the active contour it generates. Note that when |∇f| is small, the energy is dominated by the sum of the squares of the partial derivatives of the vector field, which yields a slowly varying field. When |∇f| is large, the second term dominates and is minimised by V = ∇f. Thus, the field is essentially equal to the gradient of the edge map where that gradient is sharp, and varies slowly in homogeneous regions. The parameter μ is a regularisation parameter that balances the influence of the two terms. The GVF field is obtained by solving the following Euler equations:

$$\mu \nabla^{2} V_H(x, y) - \big( V_H(x, y) - f_x(x, y) \big) \big( f_x^{2}(x, y) + f_y^{2}(x, y) \big) = 0,$$
$$\mu \nabla^{2} V_V(x, y) - \big( V_V(x, y) - f_y(x, y) \big) \big( f_x^{2}(x, y) + f_y^{2}(x, y) \big) = 0,$$

where ∇² is the Laplacian operator. Treating V as a function of time, the system is solved by finding the steady state of

$$\mu \nabla^{2} V_H - (V_H - f_x)(f_x^{2} + f_y^{2}) = \frac{\partial V_H}{\partial t}, \qquad \mu \nabla^{2} V_V - (V_V - f_y)(f_x^{2} + f_y^{2}) = \frac{\partial V_V}{\partial t}.$$
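The steady state of this system can be reached by straightforward explicit iteration, as in the sketch below; the values of μ, the time step and the number of iterations are illustrative, not those used in the paper.

```python
# Sketch of the GVF field computation by iterating the diffusion equations
# above; the edge map f is normalised to [0, 1] so the explicit scheme is stable.
import numpy as np

def gvf(edge_map, mu=0.2, n_iter=200, dt=0.5):
    f = edge_map.astype(float)
    f = (f - f.min()) / (np.ptp(f) + 1e-12)
    fy, fx = np.gradient(f)                    # f_y along rows, f_x along columns
    vh, vv = fx.copy(), fy.copy()              # initialise V with the gradient of f
    mag2 = fx ** 2 + fy ** 2                   # data-attachment weight (f_x^2 + f_y^2)

    def laplacian(u):
        return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

    for _ in range(n_iter):
        vh += dt * (mu * laplacian(vh) - (vh - fx) * mag2)
        vv += dt * (mu * laplacian(vv) - (vv - fy) * mag2)
    return vh, vv                              # (V_H, V_V)
```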

Fig. 10. Mask after gradient vector flow detection of the contour

Fig. 11. First image for validation of gradient vector flow

Application to the body scanner and to surface characterisation
We apply this active contour model to each of the four texture images. After initialising the contour in each image, the lesion is isolated by the GVF and the result is mapped into the 3D model.

To characterise the surface, we start from the contour isolated by the GVF and derive an image mask: the outer part of the lesion is set to 255, the inner part to 0 (Fig. 10). Then, once the texture mapping is known, we calculate two surface areas. The first, S1, is the sum of the areas of the triangles of the lattice that lie entirely within the contour. The second, S2, is the sum of the areas of the triangles crossed by the contour. The surface area S of the spot is then estimated, for a given α ∈ [0, 1], by S = αS1 + (1 − α)S2. The best possible value of α is determined below.
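A sketch of this area estimate is given below (not the authors' code). It assumes that each mesh vertex comes with the pixel co-ordinates at which it projects into the mask of Fig. 10, and it counts a triangle as 'crossed' when only some of its vertices fall inside the lesion.

```python
# Sketch of S = alpha*S1 + (1 - alpha)*S2, assuming uv gives, for every vertex,
# its (x, y) pixel position in the lesion mask (0 inside the lesion, 255 outside).
import numpy as np

def triangle_areas(verts, faces):
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def lesion_area(verts, faces, uv, mask, alpha=0.63):
    areas = triangle_areas(verts, faces)
    h, w = mask.shape
    px = np.clip(np.round(uv).astype(int), 0, [w - 1, h - 1])
    inside = mask[px[:, 1], px[:, 0]] == 0        # vertex falls inside the lesion
    n_in = inside[faces].sum(axis=1)              # number of inside vertices (0..3)
    s1 = areas[n_in == 3].sum()                   # triangles entirely within the contour
    s2 = areas[(n_in > 0) & (n_in < 3)].sum()     # triangles crossed by the contour
    return alpha * s1 + (1.0 - alpha) * s2
```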

Method

Validation of GVF
To validate the GVF technique, it was used on various types of image, from the simplest to the type it was actually chosen to deal with, i.e. images produced by the body scanner. The first image is a 'U' shape on a grey background with Gaussian noise (Fig. 11). The second is an actual dermatological spot (Fig. 12). The third is a texture image from the body scanner on which an imaginary lesion has been drawn (Fig. 13).
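For reference, a synthetic test image of the first kind can be produced as in the sketch below; the geometry and noise level are arbitrary choices, not those of Fig. 11.

```python
# Sketch of a synthetic validation image: a dark 'U' shape on a grey background
# with additive Gaussian noise.
import numpy as np

def u_test_image(size=128, noise_sigma=10.0, seed=0):
    img = np.full((size, size), 128.0)            # grey background
    img[24:104, 24:44] = 40.0                     # left arm of the 'U'
    img[24:104, 84:104] = 40.0                    # right arm
    img[84:104, 24:104] = 40.0                    # base joining the two arms
    rng = np.random.default_rng(seed)
    img += rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(img, 0, 255).astype(np.uint8)
```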



Fig. 12. Second image for validation of gradient vector flow

Fig. 14. Fourth image for validation of gradient vector flow

Fig. 13. Third image for validation of gradient vector flow

The fourth is an image from the body scanner of an imaginary lesion drawn on the skin (Fig. 14). The lesion is visible to only three of the four cameras.

Validation of surface characterisation
Once the lesion has been isolated in each of the four body scanner images, it must be checked that the area has been calculated correctly. To do so, we used standard cylinders on which we placed shapes of known surface area: two cylinders of different diameters (12 and 18 cm) and two different shapes (circle and square), each shape having three possible surface areas (Fig. 15). Note that the shapes were measured with a ruler and cut out with scissors, implying a certain error in the measurement of the 2D area.

Fig. 15. Two examples of standards for the validation of area computation

Results and Discussion

Validation of GVF
Figures 16–19 illustrate the application of the GVF technique on Figs 11–14.


Fig. 16. First image for the validation of gradient vector flow, result

Fig. 19. Fourth image for the validation of gradient vector flow, result

Fig. 17. Second image for the validation of gradient vector flow, result

Fig. 18. Third image for the validation of gradient vector flow, result

In Fig. 16 the active contour correctly works its way into the concavity between the two arms of the 'U', as expected. Concerning Fig. 17, the contour has detected the lesion efficiently; this result is interesting in that it is obtained on a real dermatological image. The third result (Fig. 18) is again quite convincing and, in addition, promising, since it is obtained on an image from the body scanner.

Finally, the fourth result (Fig. 19) poses a problem. Although in the centre of the body the contour has been correctly detected, the same is not true for the parts of the body seen against the black background. The contour is strongly attracted by the sharp gradient between the black, corresponding to the environment of the robot, and the very light grey, corresponding to the normal skin of the subject. However, the problem must be seen in the context of the body scanner. Each dubious area also appears in the image from another camera. In that other image, the area no longer presents a problem, since it is far from the line separating the body from the background and is therefore correctly detected by the GVF. It is this other image that will be chosen for mapping the texture; indeed, the fact that the area of interest is far from the edge of the body means that this camera has a better view of it. This is illustrated in Fig. 20, which shows two types of points: the points of the active contour and the projected vertices of each facet on the image plane of the camera that sees the facet best. The 'dubious' area is therefore dealt with by the camera giving an image plane in which the contour is detected correctly.

Validation of surface characterisation
For different values of α, we calculate S = αS1 + (1 − α)S2 for the various standards. The relative deviation between the theoretical and calculated surface areas is then evaluated as a percentage.
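This calibration can be sketched as a simple sweep over α; the inputs (the real areas of the twelve standards and the corresponding S1, S2 measured on their 3D models) are placeholders.

```python
# Sketch of the choice of alpha: minimise the mean relative deviation (%)
# between S = alpha*S1 + (1 - alpha)*S2 and the known areas of the standards.
import numpy as np

def best_alpha(real_areas, s1, s2, step=0.01):
    real = np.asarray(real_areas, float)
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    alphas = np.arange(0.0, 1.0 + step, step)
    devs = np.array([np.mean(np.abs(a * s1 + (1 - a) * s2 - real) / real) * 100
                     for a in alphas])
    i = int(devs.argmin())
    return alphas[i], devs[i]
```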



Fig. 22. Representation of the mean value of the relative deviation of the calculated area from the real area (detail, alpha from 0.60 to 0.67)

Fig. 20. The choice of camera cancels the gradient vector flow fault

TABLE 1. Real area, calculated area and relative deviation between them (α = 0.63)

Shape    Cylinder diameter (cm)    Real surface area (cm²)    Calculated surface area (cm²)    Relative deviation (%)
Square   12                        200                        200.27                           0.13
Square   18                        200                        200.98                           0.49
Square   12                        300                        306.62                           2.21
Square   18                        300                        299.14                           0.29
Square   12                        450                        452.60                           0.58
Square   18                        450                        453.15                           0.70
Circle   12                        113.1                      112.94                           0.14
Circle   18                        113.1                      112.40                           0.62
Circle   12                        201.1                      197.47                           1.80
Circle   18                        201.1                      200.09                           0.50
Circle   12                        314.2                      314.44                           0.08
Circle   18                        314.2                      314.63                           0.14

Fig. 21. Representation of the mean value of the relative deviation of the calculated area from the real area (alpha from 0 to 1)

Finally, we calculate the mean of the relative deviations. This mean is plotted against α in Figs 21 and 22. From Fig. 21 it can be seen that the optimal values of α lie between 0.60 and 0.70, and from Fig. 22 between 0.63 and 0.64. The mean relative deviations for these two values of α are 0.64% and 0.65%, respectively; we therefore applied α = 0.63. For this value of α, Table 1 reports, for each standard, the theoretical area, the calculated area and the relative deviation between the two. The results are very promising since the error is under 3%. It should, however, be noted that two results stand out from the others.


Fig. 23. Correlation between real and calculated surface areas (linear regression: y = 1.012x − 2.4893, R² = 0.9996)

While the greatest error is 0.70% for 10 of the results, the errors for the last two are 1.80% and 2.21%. It is possible that the error comes from the standard shapes that were drawn and measured with a ruler and cut out by hand. If we consider the linear regression curve (Fig. 23), the slope is very close to 1 (1.012) with a coefficient R2 of 0.9996. These results are very satisfactory.
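For completeness, the regression quoted for Fig. 23 can be re-derived directly from the values of Table 1, as in the short sketch below.

```python
# Least-squares fit of the calculated areas against the real areas of Table 1.
import numpy as np

real = np.array([200, 200, 300, 300, 450, 450,
                 113.1, 113.1, 201.1, 201.1, 314.2, 314.2])
calc = np.array([200.27, 200.98, 306.62, 299.14, 452.60, 453.15,
                 112.94, 112.40, 197.47, 200.09, 314.44, 314.63])

slope, intercept = np.polyfit(real, calc, 1)
r2 = np.corrcoef(real, calc)[0, 1] ** 2
print(f"y = {slope:.3f}x {intercept:+.4f},  R^2 = {r2:.4f}")
# Expected to come out close to the values quoted above (slope ~1.012, R^2 ~0.9996).
```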


Conclusion
The work presented should enable us to set up a larger-scale validation phase. This will be especially necessary on real epidermal lesions. It should then be possible to follow a patient suffering from a dermatological disease in conditions of repeatability far superior to those currently available. Although the robot presented here does not allow accurate mapping of small lesions, we are developing a new version with a higher resolution, both in 3D and for texture.

References
1. Dummer W, Blaheta H-J, Bastian BG, Schenk T, Bröcker EB, Remy W. Preoperative characterization of pigmented skin lesions by epiluminescence microscopy and high-frequency ultrasound. Arch Dermatol 1995; 131: 279–285.
2. Lucassen GW, van der Sluys WLN, van Herk JJ, Nuÿs AM, Wierenga PE, Barel AO, Lambrecht R. The effectiveness of massage treatment on cellulite as monitored by ultrasound imaging. Skin Res Technol 1997; 3: 154–160.
3. Rajadhyaksha M, Grossman M, Esterowitz D, Webb RH, Anderson RR. In vivo confocal scanning laser microscopy of human skin: melanin provides strong contrast. J Invest Dermatol 1995; 104: 946–952.
4. Gelikonov V, Sergeev A, Gelikonov G, et al. Characterization of human skin using optical coherence tomography. Proc SPIE 2927: 27–34.
5. Richard S, Querleux B, Bittoun J, Jolivet O, Idy-Peretti I, de Lacharrière O, Lévêque JL. Characterization of the skin in vivo by high resolution magnetic resonance imaging: water behaviour and age-related effects. Soc Invest Dermatol 1993: 705–709.
6. Addleman S. Whole-body 3D scanner and scan data report. Proc SPIE 1997; 3023: 2–5.
7. Stein N. Virtuell oder Realität: schnelle dreidimensionale Ganzkörpervermessung. Fernseh- Kino-Tech 1996; 50: 236–240.
8. Stein N, Minge B. Viro 3D – fast three-dimensional full body scanning for human and other living objects. Proc SPIE 1998; 3313: 60–64.
9. Fouchet X. Modélisation corporelle pour le suivi de lésions dermatologiques. PhD thesis, ENSEEIHT, LAAS-CNRS de Toulouse, 1999.
10. Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vis 1988; 1: 321–331.
11. Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int J Comput Vis 1997; 22: 61–79.
12. Cohen LD. On active contour models and balloons. CVGIP: Image Understanding 1991; 52: 211–218.
13. Guidry DL. Active contours: with application to motion artifact cancellation and segmentation in MRI. PhD thesis, University of Louisville, 1999.
14. Bossart P-L. Détection de contours réguliers dans des images bruitées et texturées: association des contours actifs et d'une approche multiéchelle. PhD thesis, Institut National Polytechnique de Grenoble, 1994.
15. Xu C, Prince JL. Snakes, shapes, and gradient vector flow. IEEE Trans Image Process 1998; 7: 359–369.
16. Lorensen WE, Cline HE. Marching cubes: a high resolution 3D surface construction algorithm. Comput Graphics 1987; 21: 163–169.
17. Algorri MA. Génération et simplification de maillages pour la reconstruction de surfaces à partir de points non structurés. PhD thesis, École Nationale Supérieure des Télécommunications, Paris, 1995.
18. Hoppe H. Surface reconstruction from unorganized points. PhD thesis, University of Washington, 1994.

Address:
Jean-Michel Lagarde
Cerper, Institut de Recherche Pierre Fabre
Hotel Dieu Saint-Jacques
2 rue de Viguerie
31025 Toulouse
France
Tel: 33 5 62 48 85 00
Fax: 33 5 62 48 85 99
e-mail: [email protected]

