3-D Pre-Operative Planning System for Robotic Surgery

Hugues Fontenelle 2001

“3-D Pre-Operative Planning System for Robotic Surgery” by Hugues Fontenelle

“Travail de Fin d’Etudes” (“End of Studies Work” – Thesis) Submitted to “Unité Electronique et Informatique” (“Department of Electronics and Computer Science”)

In partial fulfilment of the requirements for the degree of

“Ingénieur Industriel en Electricité option Informatique” (B.Sc. 4-year university-level)

At

ECAM, Brussels, Belgium

This thesis was completed under an ERASMUS exchange program agreement with

Høgskolen i Oslo (HiO), Oslo, Norway, and

the Interventional Centre, National Hospital of Norway.

June 7, 2001

Thesis supervisors:

Lars Aurdal, Ph.D., Interventional Centre, National Hospital of Norway

Ole Jacob Elle, M.Sc., Interventional Centre, National Hospital of Norway

Veslemøy Tyssø, M.Sc., Høgskolen i Oslo, Norway

Francis Gueuning, Ph.D., ECAM, Brussels, Belgium


Contents

1. Introduction
2. Previous work
   2.1. About image guidance…
   2.2. About robotic surgery…
3. Image Processing and Visualisation
   3.1. Planning
   3.2. Segmentation and thresholding
   3.3. Representation of solids in 3D
      3.3.1. Cubic model
      3.3.2. Organ, complex solids
   3.4. Determining if a point is reachable, using the fv model
      3.4.1. Algorithm of free linear path
         3.4.1.1. Implementation of this algorithm
         3.4.1.2. Cubic model
      3.4.2. Algorithm of proximity
   3.5. Determining if a volume is reachable by means of Raster Ray Tracing
      3.5.1. Algorithm
         3.5.1.1. DDA
         3.5.1.2. Shadowing
      3.5.2. Interpretation of RRT images
      3.5.3. Slicing
      3.5.4. Advantages
4. Robotics and Simulation
   4.1. Robots in surgery
      4.1.1. Introduction
         4.1.1.1. Teleoperated robots for minimal invasive surgery
         4.1.1.2. Autonomous robot systems
         4.1.1.3. Navigated interactive robots
         4.1.1.4. Micro machines
      4.1.2. Robotic surgery for Minimally Invasive Surgery
   4.2. The ZEUS robot system
      4.2.1. Description
      4.2.2. Forward kinematics
      4.2.3. Inverse kinematics
         4.2.3.1. Inverse position
         4.2.3.2. Inverse orientation
         4.2.3.3. Iterative algorithm
      4.2.4. Collision detection
   4.3. Simulation
5. Planning
   5.1. Positioning
      5.1.1. The coordinate systems
         5.1.1.1. Voxel coordinate system
         5.1.1.2. Virtual coordinate system
         5.1.1.3. Room coordinate system
         5.1.1.4. Robot coordinate system
      5.1.2. Registration
         5.1.2.1. Fiducials
         5.1.2.2. Tracking
   5.2. Integration of robotics with visualisation
   5.3. Surgeries
      5.3.1. CABG
      5.3.2. Liver
6. Discussion
   6.1. The goal
   6.2. The images
   6.3. The robots
   6.4. The future of the project
7. Conclusion
8. Bibliography
Table of figures


1. Introduction

The medical field, and particularly surgery, is constantly improving, thanks to both experience and research. Efforts are made to reduce the invasiveness of surgery and to shorten interventions, hospital stays, recovery, and so on, resulting in a better quality of life for the patient and in cost savings. Laparoscopy is a surgical technique that eliminates the need for large incisions. Instead, small ports, called trocars, are inserted through small incisions, and specifically designed tools pass through them. These long, thin tools have a clamp or a cutting edge at one end and a handle at the other. The surgeon operates with two or more of these laparoscopic tools, aided by a camera, called the endoscope, which enters the patient through another trocar. The limitations of this method are the need for an assistant to hold the endoscope, and the precision still required in the delicate manipulation of these tools. A robotic arm can hold the endoscope, and the surgeon can control it by voice. The laparoscopic instruments can also be held by robotic arms. The surgeon then conducts the operation remotely from a console, with a joystick-like device to guide the tools. A 3-D visualisation system can give him the necessary depth of view. The main advantages are increased precision (the surgeon can now operate on areas smaller than one millimetre), filtering of hand tremor, and the possibility for a trained surgeon to conduct the operation from a remote site. New problems arise at the same time: if the surgeon is not colliding with himself, the robotic arms are! The initial configuration, and the placement of the entry trocars, is therefore crucial.

The purpose of this work is to help find optimal port-access placements for robotic surgery. Those ports should allow the laparoscopic tool to reach and move around the target without damaging vessels or hitting bone. Moreover, the external robotic arms should not collide. The modelling of organ shapes, robotic movements, computations and visualisations will be handled by MatLab®, the Language of Technical Computing, from MathWorks™, Inc. This "matrix laboratory" allows fast computation on matrices and vectors, and is a language of choice for research, development and analysis. A second reason for using MatLab® is that previous work, such as the modelling of the ZEUS robotic system, was already done in this language.


Chapter 2 (Previous Work) briefly introduces earlier work in this field that led to the present one. Chapter 3 (Image Processing and Visualisation) introduces the modelling and visualisation techniques used to produce 3D models of human anatomy; a tool for helping with port placement is presented. Chapter 4 (Robotics and Simulation) describes the mathematical model of the ZEUS robot and its application to a dynamic simulation. Chapter 5 (Planning) integrates the two main parts of this work: the model visualisation together with the robotic simulation. A discussion is given in chapter 6, followed by the conclusion (chapter 7).


2. Previous work

2.1. About image guidance…

Image guidance is recognised as an increasingly important part of surgery, for planning and simulation [Jolesz 1997]. Intra-operatively, image-guided procedures are used to insert needles and perform biopsies without damaging other tissues. Some software has been designed for 3D planning and/or guidance, like the 3D-Slicer®, developed at MIT, which allows automatic registration, semi-automatic segmentation, 3D surface model generation, 3D visualisation and quantitative analysis of various medical scans [Gering, Nabavi et al. 1999], [Gering 1999]. ANALYZE®, a planning software package developed at the Mayo Clinic using Virtual Reality technology, offers support both for pre-operative surgical and treatment planning and for post-operative evaluation [Robb, Camp et al. ].

2.2. About robotic surgery…

The ARTEMIS system, as a prototype, was the first six-DOF master-slave manipulator for endoscopic surgery reported in the literature [Schurr, Buess et al. 2000], in 1996. Later came the commercially available ZEUS™ Robotic Surgical System (ComputerMotion®) and the DaVinci™ Surgical System (IntuitiveSurgical™), in 1999. In [Chiu, Dey et al. 2000], a virtual cardiac surgical planning platform (VCSP) for endoscopic and robotic coronary artery bypass grafting (CABG) was developed. MR images of the internal mammary arteries (IMAs), the left coronary artery and the heart, and CT images of the ribs and the chest, were segmented, imported into the platform and visualised in 3D. A virtual endoscope simulated the view during an operation. Simulated tools helped to determine the optimum port placement configuration (for optimal access to the vessels) in an interactive manner. The VCSP image-guided surgical system thus allowed the surgeon to visualise the patient's thorax in 3D and to try port placements pre-operatively in order to gain optimal access. A crucial step towards totally endoscopic robotic CABG was made by [Kiaii, Boyd et al. 2000]: safe internal thoracic artery (ITA) harvesting was done remotely using the ZEUS™ robotic system and a curved blade. In all 19 patients the ITA was prepared successfully for continuing CABG. [Kappert, Cichin et al. 2000] also concluded that this approach is satisfactory, having performed 27 harvestings.

Total CABG, which includes the anastomosis, was performed on pigs by [Ducko, Edward R. Stephenson et al. 1999], and on human cadaver torsos by [Tabaie, Reinbolt et al. 1999]. Robotic coronary artery bypasses were successfully performed on human patients by [Reichenspurner 1999] and [Aybek, Dogan et al. 2000], on a beating heart, without cardiopulmonary bypass. These surgical reports conclude that it is, or will be, feasible to conduct robotic endoscopic CABG in the case of one (or at most two) anastomoses. However, all also conclude that proper port placement is a crucial issue, and that further work has to be done to ensure full access to the target areas. [Mohr, Falk et al. 1999] performed five successful remote CABGs with a DaVinci (IntuitiveSurgical, Inc.) robotic system. However, in two of the five cases the operation had to be converted to a standard MIDCAB procedure because of conflicts between the arms. This is one of the first papers identifying the problem occurring with robotic systems, although without naming it… collision. Last but not least, [Austad, Elle et al. 2001, pending] explains how the Interventional Centre at the National Hospital of Norway has performed 12 robot-assisted CABG procedures on pigs, on the beating heart, using the ZEUS system. Robotic tool movements during the surgery were acquired using the FlashPoint system (IR cameras) and visualised. A mathematical model of the robotic arms, as well as a collision detection routine, was developed for incorporation in a planning system. The paper demonstrates the importance of the robots' initial settings and the need for such a planning system.


3. Image Processing and Visualisation

3.1. Planning

Computed Tomography (CT), Magnetic Resonance (MR), ultrasound and X-rays produce grey-scale images. For diagnostic imaging, the radiologist has to analyse them slice by slice, mentally reconstructing the patient's anatomy. The differences between reality and these cross-sectional grey-scale images require a lot of training and experience from the physician. By concatenating all slices into one 3D rendering, the work of the radiologist can be facilitated. Three-dimensional reconstruction can especially benefit the planning of surgical interventions, for purposes of exact localisation and targeting [Jolesz 1997]. Conventional surgery relies on:

- visualisation (the surgeon's eyes provide him direct feedback)
- mechanical access (the surgeon's hands hold the instruments directly)

Recent trends emphasise minimally invasive surgery, where the surgeon accesses the site through natural openings or small incisions, using thin, tube-like instruments. This technique is called key-hole surgery: laparoscopy for the abdominal region, thoracoscopy for the thoracic region. The surgeon now suffers from the lack of visualisation and from the awkwardness and lack of tactile feedback of these laparoscopic instruments. One way to compensate for this loss is to provide more visualisation, before the surgery for planning, and intra-operatively [SPL 1996]. Virtual Endoscopy also provides the point of view of the endoscope without invasion; this 3D reconstruction benefits planning and training for the surgeon [Freudenstein, Bartz et al. 2000]. [Robb 1996] presents Virtual Endoscopy using the Visible Human® data sets, a complete, anatomically detailed, three-dimensional CT and MR representation of both the normal male and female human bodies [NLM ].

3.2. Segmentation and thresholding

Segmentation is the process of reducing images to information [Russ 1999]. The grey-scale images from CT or MR are separated into background and foreground, highlighting features, to produce black-and-white images. This operation is called thresholding. One way to achieve thresholding is brightness classification: all pixels falling within a certain range of brightness are features, or foreground, and set to 1; the others are background, and set to 0. Note that choosing 1 for the features is a convention, and can be inverted. One commercial software package, Photoshop®, has a built-in tool designed to perform such brightness thresholding: the user clicks on one pixel of the image with the MagicWand® to separate the feature from the background, within a certain range. Another way to separate features is morphologic segmentation [Fishman, Kuszyk et al. 1996], which relies on the user to determine whether part of an image is background or not; this is thus manual work. Here, a CT data set of the abdominal region has been processed. Because the liver has a brightness level nearly equal to that of its surroundings, it was necessary to correct all slices manually. Figure 1 and Figure 2 present one slice of the raw data set, then the segmented binary image of the liver.

Figure 1 Raw image of abdominal region - CT scan

Figure 2 Segmented image of the liver
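For a well-contrasted feature, such brightness thresholding is a one-line operation in MatLab. A minimal sketch follows; the variable names and the brightness window are hypothetical, not taken from the actual data set:

% 'slice' is one grey-scale CT image; keep pixels inside a brightness window
lo = 90; hi = 140;                    % arbitrary window limits
bw = (slice >= lo) & (slice <= hi);   % logical image: 1 = feature, 0 = background
imagesc(bw), colormap(gray), axis image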


3.3. Representation of solids in 3D

3-D objects can be represented as surfaces of connected triangles, using the Marching Cubes algorithm, one of the highest-resolution isosurface construction algorithms [Nagae, Agui et al. 2001]. The MatLab® function 'isosurface' computes the boundary of a three-dimensional binary image, typically segmented from a grey-scale image. Note that MatLab can handle the segmentation itself, thresholding the image with an 'isovalue': all voxels whose brightness is lower than or equal to this isovalue belong to the foreground. The resulting surfaces are described by two matrices: a 'vertices' matrix containing the coordinates of all the points, and a 'faces' matrix referencing, for each triangle, the three points that make it up. Thus the vertices matrix is an N-by-3 matrix and the faces matrix an M-by-3 matrix.
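As a short sketch (assuming a binary volume V whose foreground voxels are set to 1):

fv = isosurface(V, 0.5);   % fv.vertices is N-by-3, fv.faces is M-by-3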

3.3.1. Cubic model

As an illustration, let us define a cube by its isosurface representation. Figure 3 shows the coordinate system and where the eight points are.

Figure 3 Faces and vertices representation of a cube


000 100 101100 vertices = 001 101 111 011 A cube is made of six faces, but since we can only specify triangle, we’ll have twelve faces : 123 134 556778 126 faces= 146357   427387 276 148 185

These vertices and faces can be patched, then illuminated, and surface-rendered with a Gouraud or Phong algorithm. Some transparency features can also be added for convenience.
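A minimal rendering sketch, using the vertices and faces matrices defined above (the colour and transparency values are arbitrary):

p = patch('Vertices', vertices, 'Faces', faces);
set(p, 'FaceColor', [0.2 0.6 0.2], 'EdgeColor', 'none', 'FaceAlpha', 0.5)
daspect([1 1 1]), view(3)
camlight                    % light source for the illumination
lighting gouraud            % Gouraud surface rendering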

3.3.2. Organ, complex solids

The process of modelling natural objects (bones, liver…) from medical image data is reverse engineering [Zollikofer and Leon 1995]. The images are segmented, producing one binary data set for each feature. These black-and-white three-dimensional images are isosurfaced, using the MatLab function, to model the different organs. A different colour is attributed to each of the segments: green for the liver, yellow for vessels, red for the tumour and grey for bones is an arbitrary choice, allowing good visualisation in Figure 4.


Figure 4 Liver, tumour, vessels, ribs

3.4. Determining if a point is reachable, using the fv model

This section addresses the simple problem of knowing whether a straight line between a selected entry point and a target is free of obstacles. The obstacles can be arteries, veins and bones that the surgeon wants to avoid. The path between the origin and the target is a straight line, since the robotic tools, the laparoscopic instruments and biopsy needles are all straight. Let us return to our simple cubic model. We can translate it and define both the centre of the cube and the target point as the origin of the coordinate system. The entry point is somewhere in space, external to the cube. So the question is: is there a linear path from the entry to the target free of obstacles? The answer is obviously no, since the cube is completely closed. So let us open the upper face, and find an algorithm that decides whether the path is free.

3.4.1. Algorithm of free linear path

First, let us intersect the line (entry–target) with the plane defined by one face (a triangle). The intersection point can be inside the triangle or outside it. If it is outside, there is no collision for this first face. If the intersection point is inside, we have to decide whether it is located on the line between the entry and the target, or beyond the target. If beyond, there is no collision, and we can process the next faces, one by one, until we encounter a collision. A schematic representation of this algorithm is:

D = distance(entry, target)
for each face i
    I = intersection(face_i, line(entry, target))
    if I is inside face_i
        D' = distance(entry, I)
        if D' <= D
            return "path not free"
        end
    end
end
return "path free"

3.4.1.1. Implementation of this algorithm

The equation of the plane extending a triangle is given by its normal vector, which is the cross product of two edge vectors of the triangle:

P: a_p·x + b_p·y + c_p·z = d_p,  with normal n = [a_p, b_p, c_p] = (s2 − s1) × (s3 − s1)

where s1, s2 and s3 are the corners of the triangle. d_p, the height of the plane, is easily found by substituting one known point (a corner of the triangle) for x, y, z. The parametric equation of a line is given by the equations of two planes, carefully chosen so that the line belongs to both. The cross product of the vector v = entry − target with one arbitrary vector gives a first plane:

a_l1·x + b_l1·y + c_l1·z = d_l1

and another arbitrary vector leads to a second plane:

a_l2·x + b_l2·y + c_l2·z = d_l2

The three equations have three unknowns, x, y and z, which are precisely the coordinates of the intersection point. If

M = [a_p   b_p   c_p ;
     a_l1  b_l1  c_l1;
     a_l2  b_l2  c_l2]

and i = [x; y; z] are the unknowns and D = [d_p; d_l1; d_l2], then M·i = D and i = M⁻¹·D.

Now it has to be determined whether this intersection is inside the triangle. Two edges of each triangle can be considered as vectors having the same origin. Moreover, those two vectors can be considered as the two unit vectors of a planar, non-orthogonal coordinate system, as illustrated in Figure 5.

Figure 5 Intersection inside the triangle

Let

u1 = [x1⁰; y1⁰; z1⁰]

where x1⁰ represents the x component of the first edge vector expressed in frame 0, and let u2 be defined likewise for the second edge. With u = [u1 u2], the coordinates of the intersection i can be given in this new system:

i¹ = u⁻¹ · i⁰

(a least-squares solve in practice, since u is 3-by-2 and i lies in the plane of the triangle). If both coordinates are positive, it means that i is in the first quadrant. If i is also in the first quadrant of a coordinate system defined by two other vectors of the triangle, then we can say that i is inside it.
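A minimal MatLab sketch of this test for a single face follows. The names are hypothetical, and for the inside test it uses the equivalent barycentric condition c1 ≥ 0, c2 ≥ 0, c1 + c2 ≤ 1 instead of the two quadrant tests described above:

function blocked = face_blocks(entry, target, s1, s2, s3)
% s1, s2, s3: corners of the face (3-by-1); entry, target: path end points
n = cross(s2 - s1, s3 - s1);          % normal of the face plane
v = target - entry;                   % direction of the path
if abs(n' * v) < eps                  % path parallel to the plane
    blocked = 0; return
end
t = (n' * (s1 - entry)) / (n' * v);   % line parameter of the intersection
if t < 0 | t > 1                      % before the entry or beyond the target
    blocked = 0; return
end
i0 = entry + t * v;                   % intersection point, frame 0
U = [s2 - s1, s3 - s1, n];            % the two edge vectors, completed into a basis
c = U \ (i0 - s1);                    % coordinates in the edge frame
blocked = c(1) >= 0 & c(2) >= 0 & c(1) + c(2) <= 1;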

3.4.1.2. Cubic model

If the cube is open at the top, an entry point located above will be free; one located below or to one side will result in a collision with one of the surfaces. But what is the exact part of space from which the target (the centre of the open cube) can be reached? We are interested in determining entry points on a patient's skin, thus on a surface; the body envelope could be modelled as a cylinder.

Computing this surface analytically is difficult, if not impossible, since we have a discrete representation of the volumes; computing it point by point is more appropriate. The canvas we draw here is a lattice of equally spaced points, making a cylinder when all are linked as small triangles. Each point of this canvas is attributed a fourth coordinate: the result of the linear reach algorithm (either 0 or 1). It is assumed as a hypothesis that if three points on the canvas, forming a triangle, are all collision free, then the face itself is collision free. With free faces coloured in blue and colliding faces coloured in red, the open-cube reachability problem gives the following results (here the cube is opened at the top and on the left, Figure 6):

Figure 6 Open cube with cylindrical map showing if the centre is straightly accessible

The algorithm presented goes through all points of the canvas and, for each, through all the model's vertices. With the liver model, which counts 18725 vertices for the bones and 6945 for the vessels, and a canvas of 3588 points, the processing time would have been about 100 hours on a Pentium III 650 MHz with 128 MB of RAM. No results are therefore available. The next algorithm simplifies the process, introducing some errors at the same time.

3.4.2. Algorithm of proximity

This algorithm computes the distances between the line entry–target and all the vertices, and returns the minimal one. Vertices that project further than the target are rejected. A certain path is then said to be free if this minimal distance is larger than a certain value, the pivot. This pivot shouldn't be too large, or the target would never be accessible; nor too small, or a face crossing the line could be missed. Figure 7 explains this.

Figure 7 Errors occur using the Proximity algorithm

One of the faces of the model is represented, and called ABC. A line is drawn between the entry and the target, which intersects the triangle in I. The projected distances Da, Db and Dc of the triangle's corners are measured, and could all be greater than the previously chosen pivot, although it is clear that the intersection is inside the triangle. This algorithm is therefore only an approximation, but it deserves consideration: the processing time is much lower, and it has been possible to produce some results, as in Figure 8.


Figure 8 Canvas: blue areas for free paths
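A minimal sketch of this proximity test (hypothetical names; P holds the model vertices as an N-by-3 matrix):

function dmin = proximity(entry, target, P)
% entry, target: 1-by-3 points; dmin: smallest distance from the segment
v = target - entry;
L = norm(v);  u = v / L;                 % unit direction of the path
w = P - ones(size(P,1),1) * entry;       % vectors from entry to each vertex
s = w * u';                              % projections onto the path
keep = s >= 0 & s <= L;                  % reject vertices projecting beyond
w = w(keep,:);  s = s(keep);             % the target (or behind the entry)
d = sqrt(sum(w.^2, 2) - s.^2);           % perpendicular distances to the line
dmin = min(d);                           % the path is "free" if dmin > pivot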

3.5. Determining if a volume is reachable by means of Raster Ray Tracing

The processes used for shadowing in computer graphics [Foley, Dam et al. 1990] were found to be useful here, since the target volume can be considered as a collection of discrete light sources. The main difference lies in that the shadows of the obstacles should be projected on the body surface, not on a simple plane.

3.5.1. Algorithm

The Raster Ray Tracing technique consists of tracing rays from the light source to the projection plane (in the discrete image). For each ray, each pixel in the plane receives a value depending on the crossed voxels, the illumination, and many other parameters. Here we work in the raw data set, setting a line through each point of the light source (the target) and each point of the obstacle, and actually tracing the ray in the segment between the obstacle's point and the edge of the data set. Figure 9 shows the shadow of 'b', with 'a' as a light source.

Figure 9 Shadows with DDA

3.5.1.1. DDA

The Digital Differential Analyser is one of the first algorithms designed to connect the pixels between two points on a screen in a linear manner. The following code only works in 2D and in the first quadrant:

p0 = [x0; y0]
p1 = [x1; y1]
dy = y1 - y0
dx = x1 - x0
m = dy/dx
y = y0
for x = x0 to x1
    light_pixel(round(y), x)
    y = y + m
end


One little problem is encountered, however: if the slope m of the line is greater than 1, the lighted pixels may not be connected. This can be solved by exchanging the x and y variables, thus incrementing y instead of x. Some improvement is also required to allow all four quadrants.

3.5.1.2. Shadowing

Here, however, we do not want a line between the two points, but a line from the second point to the border, straight away from the first point, as in Figure 9. We can decide to fill the line with 1 or, as in this case, to increment the previous value. Each point of the "light source" (the tumour in the next example) is processed against each point of the obstacle (the bones). Figure 10 shows a 2D result for the 33rd slice.

Figure 10 Shadowing a 2D slice with DDA

Moreover, we want this in 3D, so we have to take the z-axis into account. The same algorithm is used, with a slope mz = dz/dx or mz = dz/dy, depending on whether x or y is incremented by 1. This time, the problem of disconnected voxels in the z direction cannot be overcome. In practice this is not so important: in the z direction the slopes are rarely greater than 1, since there are far fewer slices than rows or columns in the data set. In the future, this problem should be solved using another algorithm.
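A sketch of casting one such shadow ray in a 3-D volume 'shadow' (hypothetical names; this is the x-incremented case with d(1) > 0, the other cases follow by exchanging axes):

d = b - a;                             % a: source voxel, b: obstacle voxel
my = d(2)/d(1);  mz = d(3)/d(1);       % slopes in y and z
y = b(2);  z = b(3);
for x = b(1)+1 : size(shadow,1)        % walk away from the source
    y = y + my;  z = z + mz;
    if y < 1 | y > size(shadow,2) | z < 1 | z > size(shadow,3)
        break                          % the ray left the data set
    end
    shadow(x, round(y), round(z)) = shadow(x, round(y), round(z)) + 1;
end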

3.5.2. Interpretation of RRT images

Any voxel with a value of 0 directly sees the target. Any other is blinded by an obstacle. There is also some depth information: a voxel with a value of 1 has only one obstacle voxel on the line between it and the target; greater voxel values mean bigger obstacles to cross. The resulting three-dimensional matrix is thus a grey-level one. It can be thresholded at 0 to tell which part of the volume can access the target, or at 1 if we can tolerate going through one voxel. This voxel could be a small vessel we do not want to take into account. However, current scanner technology is not sufficient to implement avoidance of vessels, since they are difficult to see; and even a single voxel may indicate a large vessel we cannot tolerate going through, especially in the liver.
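In MatLab, both interpretations are one-liners (a sketch; 'shadow' is the ray-traced volume):

reachable = (shadow == 0);    % voxels with a completely free straight path
tolerant  = (shadow <= 1);    % tolerate crossing one obstacle voxel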

3.5.3. Slicing

The ray-traced volume still doesn't provide information on a cylindrical surface representing the body envelope, nor on the body envelope itself. Projection can be achieved by specifying a surface within the discrete volume. The surface is, as usual in MatLab, made of vertices and faces. Each face is associated with its nearest voxel in the volume. If this voxel is in the shadow, the face is coloured in red; otherwise in blue. Figure 11 illustrates this.
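A sketch of this face colouring (hypothetical names; the surface is given as a 'canvas' struct with vertices and faces, in voxel coordinates):

nf = size(canvas.faces, 1);
col = zeros(nf, 3);                                      % one RGB colour per face
for k = 1:nf
    c = mean(canvas.vertices(canvas.faces(k,:), :), 1);  % face centroid
    v = round(c);                                        % nearest voxel indices
    if shadow(v(1), v(2), v(3)) == 0
        col(k,:) = [0 0 1];                              % blue: free path
    else
        col(k,:) = [1 0 0];                              % red: in the shadow
    end
end
patch('Vertices', canvas.vertices, 'Faces', canvas.faces, ...
      'FaceVertexCData', col, 'FaceColor', 'flat')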


Figure 11 Slicing: blue areas for free paths

3.5.4. Advantages

The results presented here are similar to the shadow generation techniques used for better visualisation [Lengen, Meyer et al. 1998], except that the purpose is different: giving a guideline for port-access placement. Also, the use of the 3D shadowed matrix is quite modular: there is no need to specify the projection surface before ray tracing the data set volume. This means that the surface can be a plane, a cylinder, or a complex surface like the body envelope, and that changing the surface won't require recomputing the volume; this can be done nearly instantly. However, the computational time for the tracing process is high, and depends linearly on the product of the number of voxels in the target and in the obstacles.


4. Robotics and Simulation

4.1. Robots in surgery

Robots in the medical industry can first be divided into robots developed to assist people with physical disabilities [Miller ], and robots for use in surgery, presented here.

4.1.1. Introduction

For more than a decade, assisting robots have been emerging in surgery. Besides industry, some research groups have developed their own robotic systems. Only six commercial suppliers exist throughout the world, each of them having sold only a few units, because the lack of standards slows down development and limits amortisation. However, several reasons in favour of robotic devices justify the trend [Lueth and Bier 1999]:

- At all times, robots show the same power and execute their task in a well-balanced way
- The behaviour of a robot is exactly known before and during execution, and can be protocolled for documentation, evaluation and education
- Robots shake or vibrate much less than a human being
- Robots can move much more precisely in space and time than a human being
- Robots can react much faster than a human being
- Robots can be controlled remotely
- Robots and control systems can coordinate complex processes or events, and are able to cooperate with other machines in the OR within milliseconds

Those advantages, if not attained yet, are aims. Robots currently on the market can be classified into four groups [Lueth and Bier 1999]:

- Tele-operated robots for minimally invasive surgery
- Autonomous robot systems
- Navigated interactive robots
- Micro machines

4.1.1.1. Teleoperated robots for minimal invasive surgery

These robots don't work on their own; rather, they are tele-operated by the surgeon. They can carry an endoscope, like AESOP (ComputerMotion), which is voice-controlled, or EndoAssist

(Armstrong), which is guided by the surgeon's head movements. A few are designed to control the instruments, like DaVinci (IntuitiveSurgical) or ZEUS (ComputerMotion); they are operated by a joystick-like device. A telerobotic platform for eye surgery was developed in a collaboration between NASA-JPL and MicroDexterity Systems, Inc. (MDS). This fully 6-DOF slave robot reproduces the movements, filtered and scaled, performed by the surgeon through the master system [Charles, Das et al. 1997].

4.1.1.2. Autonomous robot systems

Robots available on the market are RoboDoc (IntegratedSurgicalSystems) and CASPAR (OrtoMaquet), which perform the drilling and shaping of holes for total knee replacement and hip replacement. The robots help to achieve a good match between the prosthesis component and the bone. A software tool is also required here to plan the operation. Actually, the system isn't fully automated: the surgeon is asked to supervise and co-operate with the machine throughout the process [Harris, Jakopec et al. 1998].

4.1.1.3. Navigated interactive robots

Those robots are pre-programmed or taught to carry, guide and move instruments, essentially microscopes in neuro-surgery applications. MKM (CarlZeiss) and SurgiScope (DeeMed/Elekta) are used as microscope support systems, and Neuromate (immi/IntegratedSurgicalSystems) is a passive device for endoscope/catheter/biopsy guidance. An experimental interactive robot system for maxillo-facial surgery, OTTO, was developed by [Lueth 1998].

4.1.1.4. Micro machines

Micro machines, such as automated instrument tips, are under development: they will be able to perform dedicated special tasks such as automatic sewing or stitching of vessels, or automated biopsy. Advances in microchip and wireless technology may allow the development of swallowable cameras, micro robots, or implants navigated magnetically and remotely [Mack 2001].

4.1.2. Robotic surgery for Minimally Invasive Surgery

New trends in surgery focus on minimising invasiveness, with endoscopic techniques and laparoscopy, thus avoiding open surgery [Mack 2001]. The surgeon no longer sees or touches directly what he operates on: instead he holds thin tools inserted through tiny holes, and uses the camera as his navigational device. Several advantages encourage the use of this technique, along with 3D visualisation [Boyd, Desai et al. 2000]:

- Better quality of life (QoL) for the patient
- Smaller incisions
- Reduced recovery time
- Lowered costs, since the robot replaces the expensive human assistance for holding the endoscope [Schurr, Arezzo et al. 1999]
- Reduced operating time (although it is sometimes the contrary, because of the lack of training)

However, the hand tremor of the surgeon and the lack of precision are still a problem when operating on small parts. Thus arose the idea of having the endoscope and the tools held by a tele-operated robot. Robotic laparoscopy is applicable to different kinds of surgeries. The main advantages of the robotic system, compared to laparoscopic procedures, are:

- Motion scaling, which allows precise movements in tiny volumes
- Tremor filtering, which suppresses the hand shaking
- Intuitive motion (with laparoscopy, the surgeon has to move the instrument left for the tip to go right; here, direct motion is restored) [Damiano 2000]
- Visualisation controlled with robotic assistance

In addition: reduced fatigue, precise surgical movements, added stability, and surgeon comfort [Tabaie, Reinbolt et al. 1999].

4.2. The ZEUS robot system

The model of the ZEUS robot system has been described by Arve Austad [Austad 2001]. The present chapter builds on his work.

4.2.1. Description

The ZEUS robot system, commercialised by ComputerMotion, Inc., consists of three (or possibly more) separate arms. One, called AESOP and available as a stand-alone product, holds the endoscope; the camera movements are voice-controlled by the surgeon. The other arms, referred to here as ZEUS arms, hold the surgical instruments and are tele-manipulated by a joystick-like device. The robot arms have one prismatic joint and five revolute joints. The symbolic representation is shown in Figure 12.


Figure 12 The ZEUS arm - Symbolic representation

Link 1 is called the body or trunk, link 2 the upper arm and link 3 the forearm. The trunk is usually normal to the operating table, but it is possible to rotate it ±15° about the x0-axis. The forearm has a bend which can be fixated in 15° steps in the range [−45°, 45°]. Joints 4 and 5 are passive, and their coordinates (q4 and q5) depend on the incision point and on the coordinates of the other joints. Joint 6 rotates the instrument or endoscope. For the ZEUS arms, q6 is set by the user, while for AESOP q6 is set automatically so that x6 is parallel to the plane x0y0; in other words, the horizontal lines in the camera images are parallel to the OR table. The frames attached to the links are set according to the Denavit-Hartenberg convention [Spong and Vidyasagar 1989]:

- the axis xi+1 is perpendicular to the axis zi

- the axis xi+1 intersects the axis zi

Using this convention, the homogenous transformation Ai from frame i to frame i−1 is represented as a product of four basic transformations:

Ai = Rot(z, θi) · Trans(z, di) · Trans(x, ai) · Rot(x, αi)

     [ cθi  −sθi·cαi   sθi·sαi   ai·cθi ]
   = [ sθi   cθi·cαi  −cθi·sαi   ai·sθi ]
     [ 0     sαi        cαi        di   ]
     [ 0     0          0          1    ]

where cθi = cos(θi) and so on, and:

- ai is the distance between the axes zi−1 and zi, measured along the axis xi;
- αi is the angle between the axes zi−1 and zi, measured in a plane normal to xi. The positive sense for αi is determined from zi−1 to zi by the right-hand rule;
- di is the distance between the origin oi−1 and the intersection of the xi axis with zi−1, measured along the zi−1 axis;
- θi is the angle between the xi−1 and xi axes, measured in a plane normal to the zi−1 axis.

The link parameters are shown in Table 1, where * denotes active variables and # denotes passive variables.

Link   a    α           d                      θ
1      0    0           d1 *                   0
2      a2   0           0                      θ2 *
3      0    π/2 + α3    d3 = de·tan(α3)        π/2 + θ3 *
4      0    −π/2        d4 = de/cos(α3) + df   θ4 #
5      a5   π/2         0                      −π/2 + θ5 #
6      0    0           d6 = L                 θ6 #

Table 1 Link parameters for the ZEUS robot system
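A short MatLab sketch of the elementary transform, taking one row of Table 1 (this helper is an illustration, not part of the original implementation):

function A = dh(theta, d, a, alpha)
% homogenous transform A_i for one set of Denavit-Hartenberg parameters
ct = cos(theta); st = sin(theta);
ca = cos(alpha); sa = sin(alpha);
A = [ct, -st*ca,  st*sa, a*ct;
     st,  ct*ca, -ct*sa, a*st;
      0,     sa,     ca,    d;
      0,      0,      0,    1];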

Constant   Value                                       Description
a2         381 mm                                      Length of upper arm
α3         ∈ {−π/4, −π/6, −π/12, 0, π/12, π/6, π/4}    Bend of forearm (in radians)
de         82.5 mm                                     Length of elbow
df         300 mm                                      Length of forearm
a5         14 mm                                       Offset in collar
d6 = L     ~370 mm                                     Length of instruments / endoscope

Table 2 Constants of the ZEUS robot system

Table 2 presents the constants of the robotic system. We also use d3, the distance between o3 and the bend of the forearm:

d3 = de · tan(α3)

and d4, the distance between o3 and o4:

d4 = de / cos(α3) + df

Variable   Range
d1         381 mm
θ2         [−7π/8 ; 7π/8]
θ3         [−141° ; 141°] = [−2.46 ; 2.46]
θ4         [−π/4 ; 5π/4]
θ5         [−π/2 ; π/2]
θ6         [0 ; 2π] + 2kπ (k ∈ Z)

Table 3 Variables of the ZEUS robot system

The top view of the positioning arm offers another representation [Zeus 1999], as in Figure 13.

Figure 13 Top View of Positioning Arm


It should be noted that this 6-DOF robot only offers 3 DOFs to the user while inside the patient, because the incision point is fixed. It is usually referred to as a 4-DOF system, since the rotation of the instrument can be taken into account [Ducko, Edward R. Stephenson et al. 1999]. The movements shown in Figure 14 are: left-right, up-down, in-out, and rotation.

Figure 14 Four DOF's

4.2.2. Forward kinematics

The homogenous transforms Ai from frame i to frame i−1 can be given using these parameters. Since θi is the only variable angle, cos θi will be written ci for simplicity, and cos(θi + θj) will be written cij…

     [ 1  0  0  0  ]
A1 = [ 0  1  0  0  ]
     [ 0  0  1  d1 ]
     [ 0  0  0  1  ]

     [ c2  −s2  0  a2·c2 ]
A2 = [ s2   c2  0  a2·s2 ]
     [ 0    0   1  0     ]
     [ 0    0   0  1     ]

     [ −s3  c3·sα3  c3·cα3  0  ]
A3 = [ c3   s3·sα3  s3·cα3  0  ]
     [ 0    cα3     −sα3    d3 ]
     [ 0    0       0       1  ]

     [ c4  0   −s4  0  ]
A4 = [ s4  0   c4   0  ]
     [ 0   −1  0    d4 ]
     [ 0   0   0    1  ]

     [ s5   0  −c5  a5·s5  ]
A5 = [ −c5  0  −s5  −a5·c5 ]
     [ 0    1  0    0      ]
     [ 0    0  0    1      ]

     [ c6  −s6  0  0  ]
A6 = [ s6   c6  0  0  ]
     [ 0    0   1  d6 ]
     [ 0    0   0  1  ]

4.2.3. Inverse kinematics

4.2.3.1. Inverse position

The position of joint o4 can be determined from the matrix representation. The homogenous transformation from frame 4 to the base frame is

T04 = A1 · A2 · A3 · A4

From this set of equations, the inverse kinematics from the base to the coordinates of joint o4 could be found. We prefer here a geometric approach, which is simpler for this particular robot. Figure 15 shows the projection of the robot onto the x0y0-plane.

Figure 15 Projection of robot onto x0y0-plane

Using the law of cosines, we see that

cθ3 = ((o04x)² + (o04y)² − a2² − (cα3·d4)²) / (2·a2·cα3·d4)

where o04x and o04y are the coordinates of the joint o4 in frame 0. The solution θ3 is not unique, since the elbow can be "up" or "down", depending on the configuration. Noting that sθ3 = ±√(1 − cθ3²), we have:

θ3 = arctan(±√(1 − cθ3²) / cθ3)

Both solutions, "elbow-up" and "elbow-down", are covered by this approach, by choosing the positive or the negative sign, respectively. θ2 can also be found:

θ2 = arctan(o04y / o04x) − arctan(cα3·d4·s3 / (a2 + cα3·d4·c3))

Projection of the robot onto the x0z0-plane gives d1:

d1 = o04z + df·sα3

4.2.3.2. Inverse orientation

The position of the system depends on its orientation, and vice versa, so they cannot be found separately, and solving the problem analytically is hard or even impossible. Another approach is to solve the problem numerically. Note that since θ4 and θ5 are passive joints, their values depend on the incision point. We suppose that o4 and the incision point pi are known. In frame 3, the coordinates of pi are

p3i = T30 · p0i

where T30 is the transform matrix from the base frame to frame 3, p3i the incision point in frame 3 coordinates, and p0i the incision point in base coordinates. The points o3, o4, o5 and pi all lie in a common plane, so the angle θ4 can be found from the projection of p3i onto the x3y3-plane:

θ4 = arctan(p3iy / p3ix)

The solution is unique when θ5 ∈ [−π/2, π/2] and p3iy ≠ 0, p3ix ≠ 0. When p3ix = p3iy = 0, the incision point lies on the z3-axis and the system is singular. This situation should be avoided.

If θ4 ∈ [−π/2, π/2], the configuration is called "instrument up" (the y5-axis is pointing upwards); otherwise it is said to be "instrument down" (the y5-axis is pointing downwards). The angle of joint 5 can be determined geometrically, as shown in Figure 16.


Figure 16 Angle of joint 5

ψ = arccos(a5 / |pi − o4|),  ψ ∈ [0, π/2]

φ = arccos(((pi − o4)·(o4 − o3)) / (|pi − o4|·|o4 − o3|))

θ5 = ψ − φ

4.2.3.3. Iterative algorithm

Such an algorithm was designed by [Austad 2001]. It sometimes converges outside the workspace, or has no solution at all, if o5, the collar, is close to singularities (which is only of theoretical interest, since the singularities are avoided by the physical restrictions of the joints). Knowing the length of the tool, and having as inputs the incision point and the tip of the instrument, we calculate o5. Instead of running the iterative algorithm, we prefer to compute an approximation neglecting the collar, stating that o4 = o5. The inverse position and the inverse orientation are then calculated directly.
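A sketch of the geometric inverse position of section 4.2.3.1 (hypothetical names; o4 is the desired joint position in base coordinates, and the constants a2, d4, df and alpha3 are assumed defined as above):

ca = cos(alpha3);
c3 = (o4(1)^2 + o4(2)^2 - a2^2 - (ca*d4)^2) / (2 * a2 * ca * d4);
s3 = sgn * sqrt(1 - c3^2);     % sgn = +1 for elbow-up, -1 for elbow-down
th3 = atan2(s3, c3);
th2 = atan2(o4(2), o4(1)) - atan2(ca*d4*s3, a2 + ca*d4*c3);
d1 = o4(3) + df * sin(alpha3);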

4.2.4. Collision detection

The robotic system is modelled by line segments. For one particular robot, each of its segments is checked against each segment of the other robots, one by one. The process involved is quite simple: it computes the minimal distance between two segments.
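A sketch of that minimal distance between two non-degenerate segments p1→q1 and p2→q2 (the standard clamped closest-point construction, not necessarily the routine used in the original implementation):

function dmin = seg_seg_dist(p1, q1, p2, q2)
d1 = q1 - p1;  d2 = q2 - p2;  r = p1 - p2;   % 3-by-1 points
a = d1'*d1;  e = d2'*d2;  b = d1'*d2;
c = d1'*r;   f = d2'*r;
den = a*e - b^2;                             % zero for parallel segments
if den > eps
    s = min(max((b*f - c*e)/den, 0), 1);     % clamp to segment 1
else
    s = 0;
end
t = (b*s + f)/e;                             % closest parameter on segment 2
if t < 0
    t = 0;  s = min(max(-c/a, 0), 1);
elseif t > 1
    t = 1;  s = min(max((b - c)/a, 0), 1);
end
dmin = norm((p1 + s*d1) - (p2 + t*d2));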

4.3. Simulation

Knowing the tip position of the tool, the entry point and the base of each robot, all joints can be computed with the inverse kinematics process or, with a slight approximation, with the inverse position and inverse orientation calculations. A simplified visualisation is achieved by drawing line segments between the joints, and adding a model of the operating table.


Different tip positions can be chosen around the target, for each arm. Figure 17 shows a helicoidal test path.

Figure 17 Test path for tip of instruments
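Such a path is simple to generate; a sketch with hypothetical radius, pitch and number of positions:

t = linspace(0, 4*pi, 100)';        % two turns, 100 discrete tip positions
r = 10;  pitch = 2;                 % millimetres
path = ones(100,1)*target' + [r*cos(t), r*sin(t), pitch*t/(2*pi)];   % target: 3-by-1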

Joint positions are then computed and each arm is drawn, tip position by tip position, resulting in a movie of a hypothetical surgery scene. The collision detection routine is run during the movie, recording the minimum distance between each pair of arms. One image of the movie is shown in Figure 18 and Figure 19, with the minimum distances at this instant between the green and the yellow manipulator, between the green and the black (the endoscope), and between the yellow and the black.


Figure 18 Robotic simulation and collision detection (far)

Figure 19 Robotic simulation and collision detection (near)


5. Planning

The 3D models can be integrated into the robotic simulation to produce a scene representing the operating room. Robotic movements will be simulated and visualised, resulting in a pre-operative planning tool. Before such an integration can be achieved, we need to establish the relationships between the different elements.

5.1. Positioning

The following articles refer to this problem: [Lea, Watkins et al. 1995; Brandt, Radermacher et al. 1997; Harris, Jakopec et al. 1998; Gering 1999; Austad 2001; Austad, Elle et al. 2001, pending].

"In love, war, and surgery, positioning is the most important thing." Antoni Raimondi, a great Italian pediatric neurosurgeon.

Registration deals with the relative spatial relations between the robot, the operating table and room, the acquired images, and the patient's anatomy. Various coordinate systems must be established, and the relations between them defined.

5.1.1. The coordinate systems

5.1.1.1. Voxel coordinate system

This coordinate system locates voxels in the 3-D array of CT or MRI images. The data set is acquired slice by slice, generally from the head of the patient to her feet. The Z axis represents the number of the slice; the X and Y axes define the coordinates within each of those two-dimensional images. The exact way of addressing voxels depends on how the data are stored on disk, and on how the scan itself was run. Here, the Z axis runs from the bottom (feet) slice to the top (head) slice; the Y axis runs from right to left, and the X axis from up to down, when looking from the bottom. Figure 20 shows the coordinate system; the bones have been segmented and modelled for better understanding (although it is clear that this system refers to the raw data: the voxels).


Figure 20 Voxel coordinates system

The frames used here are made of 59 slices of 512x512 pixels; thus the ranges for the [X Y Z] axes are [1:512 1:512 1:59]. To improve computation and visualisation speed, the data set has been shrunk to 128x128x59, reducing the amount of data by a factor of 16.

5.1.1.2. Virtual coordinate system

This coordinate system scales the voxel coordinate system to 3-D space, allowing scaled representation and measurements. The spacing between pixels depends on the scanner and the resolution. The images acquired here have 0.605468 millimetres between pixels (X and Y axes), and 3.000000 millimetres between slices (Z axis). The virtual coordinate system is not used here; instead, voxels are converted directly to the room. The spacing between pixels in X and Y is quadrupled, since only one pixel out of four was kept to produce the 128x128x59 matrix.

5.1.1.3. Room coordinate system

The room, or table, coordinate system helps with positioning on the operating table. The origin of the system is in the middle of the table. The X axis runs along the length of the table, from the middle towards the feet; the Y axis runs along the width of the table, from the middle to the right (seen from above); the Z axis runs normal to the table, from the middle, upwards. Converting the voxel system to the room system can thus be done as follows:

 0 0 −1  X  X  Y = Z   0 −1 0 ⋅ Y    room −1 0 0   Z  voxel to reflect the new orientation. To take the scaling, as explained in the virtual coordinate system paragraph, the conversion matrix becomes:

[X]        [ 0            0           −3.000000 ] [X]
[Y]      = [ 0           −0.605468·4   0        ]·[Y]
[Z]room    [−0.605468·4   0            0        ] [Z]voxel

Moreover, the data set has to be translated onto the operating table. If we assume that the centre of the raw data set lies on the Z axis of the room, then:

[X]        [ 0            0           −3.000000  xpos ] [X]
[Y]      = [ 0           −0.605468·4   0         ypos ]·[Y]
[Z]        [−0.605468·4   0            0         zpos ] [Z]
[1]room    [ 0            0            0         1    ] [1]voxel

where xpos, ypos and zpos define the origin of the voxel coordinate system, stated in room coordinates. Here, with a data set of 128x128x59:

xpos = 30 · 3.000000 mm
ypos = 64.5 · 4 · 0.605468 mm
zpos = 128 · 4 · 0.605468 mm

Figure 21 shows the modelled data placed on the operating table.

Figure 21 Registration of model on the operating table
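A sketch of this conversion applied to an N-by-3 list of voxel coordinates (hypothetical names):

s = 4 * 0.605468;                           % in-plane scale after subsampling
T = [ 0  0 -3.000000  xpos;
      0 -s  0         ypos;
     -s  0  0         zpos;
      0  0  0         1   ];
Vh = [V_voxel, ones(size(V_voxel,1),1)]';   % homogenous coordinates, 4-by-N
Vr = T * Vh;                                % room coordinates
V_room = Vr(1:3,:)';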


We should note that the patient is never placed exactly in the middle of the table, nor exactly parallel to it. This means that a further translation and rotation have to be applied to reflect the exact position of the patient. Neglecting the rotation assumes that the patient lies on the operating table in the same position as during image acquisition. This is not always true, since patients are sometimes tilted to the left or right to improve accessibility during surgery.

5.1.1.4. Robot coordinate system

The robot coordinate system has its origin at the base of the robot, and the same orientation as the room coordinate system. All movements of arms and joints are related to this base. One such system exists for each robot.

5.1.2. Registration

Planning is done on images, on a virtual basis; moving to the surgical environment requires some adaptation. The relationship between the voxel and the room coordinate systems has been described previously, with a simplified assumption: the patient lying in exactly the same position as in the scanner! If some device can be targeted on the scans (measured in the images) and found again on the patient's body (measured in the room), then a conversion can be established.

5.1.2.1. Fiducials

Fiducials are reference features visible both in the image and in the anatomy. Anatomic fiducials can be the vertebrae, the ribs or the sternum, since those areas can easily be located on both the scans and the patient. In some cases, however, artificial fiducials are preferred, since there are no evident anatomic marks: iron screws are used in knee surgery (or titanium screws, to avoid interference) [Harris, Jakopec et al. 1998], and stereotactic frames in neurosurgery. Here, small capsules filled with vitamin E [Robb, Hanson et al. 1996], which appear on the images, can be used. Fiducials have to be well contrasted to be visible on the scans. Of course, artificial fiducials have to stay in place between image acquisition and surgery; but, to allow more freedom and convenience, the exact place of the pin can be marked with a special pen (which remains for about one week). Since a single point fiducial provides only three of the six constraints needed to define a coordinate system, three of them are required.

5.1.2.2. Tracking

Once fiducials can be acquired in the image, their positions need to be measured in room coordinates. This can be done with an optical tracking system like FlashPoint™ (Image Guided Technologies, Inc.). Three infrared cameras track three LEDs: one for the position, one for the direction (building a vector), and one for the orientation (building a plane). FlashPoint™ was already used to track the robot movements.

5.2. Integration of robotics with visualisation

Figure 22 and Figure 23 show one image of the movie, as in the robotic simulation alone, but this time with the anatomic model.

Figure 22 Robotic simulation along with model visualisation (far)


Figure 23 Robotic simulation along with model visualisation (near)

5.3. Surgeries

5.3.1. CABG

Coronary Artery Bypass Grafting (CABG) is a type of heart surgery performed to reroute blood around a clogged artery. The coronary arteries, responsible for supplying blood and oxygen to the cardiac muscle, become clogged by fat or cholesterol accumulating over time. The narrowing of these arteries is called atherosclerosis, and can cause heart attacks [AHA 2000]. The blocked area is bypassed by grafting either a vein (the saphenous vein from the leg [Gandsas ]) or an artery harvested from the chest wall (the internal thoracic artery, ITA [AHA 2000]). The suture between the two vessels (ITA or saphenous vein, and the coronary artery) is called an anastomosis. Conventional CABG is open-heart surgery: a median thoracotomy is performed through the chest. However, efforts are made to diminish the invasiveness of surgeries, with two aims: to decrease the trauma by minimising access, and to avoid the use of cardio-pulmonary bypass (CPB) [Falk, Diegeler et al. 2000].

Along with trauma considerations, the pain the patient endures is recorded. Methods for determining Quality of Life (QoL) are [Diegeler, Walther et al. 1999]: the verbal rating scale (VRS), a five-step approach that differentiates between no pain, mild pain, moderate pain, severe pain and unbearable pain; and the visual analog scale (VAS), which quantifies pain on a scale from 1 (no pain) to 10 (the worst pain the patient has ever experienced). MIDCAB is one of the minimally invasive methods: a lateral mini-thoracotomy is made in the fourth intercostal space, possibly with the off-pump technique (OPCAB), in which no cardio-pulmonary bypass (CPB) is used [Diegeler, Walther et al. 1999].

5.3.2. Liver

Several surgical procedures concern the liver:

- Cholecystectomy (removal of the gallbladder)
- Laparoscopic intervention on the choledochus
- Liver resection for benign and malignant tumours
- Roofing of liver cysts (removing cysts on the periphery)
- Tumour staging, where the tumour is inspected endoscopically to see whether it is possible to treat it or not. This minimally invasive procedure avoids percutaneous procedures that would fail because the tumour is unreachable, which occurs when the tumour is too close to major vessels.
- Local ablation of liver tumours by means of radio-frequency, laser, or cryo-ablation

Research is currently being done on hepatic minimally invasive surgery, demonstrating that laparoscopic procedures are safe and lead to the same advantages as minimally invasive surgery in general [Edwin, Mala et al. 2001].


6. Discussion

6.1. The goal

Pre-operative planning of the robot settings, in order to reach the targets and avoid collisions, is the main purpose of this thesis. The shadow-casting process, that is, the projection of the obstacles onto the body surface, mainly gives hints for trocar placement by restricting the possible areas. It is not intended to replace the laparoscopic surgeon, who is trained and knows where to place such ports. What he cannot guess, however, is how to place the robots: the instruments must keep the ability to reach the targets and work around them, and the external arms shouldn't collide with each other. Nevertheless, in some experimental cases, the surgeon has no idea of how to access a certain volume. For tumour resection in liver surgery, for example, the path to the tumour is sometimes crowded with vessels, depending on how deep the tumour is lodged. Hitting these vessels is dangerous because of the severe haemorrhage this would cause. In such a specific case, this tool would be more than a guideline.

6.2. The images

For tumour resection, as in the liver, CT scans are usually required anyway, precisely to locate the tumour. For CABG, however, even an MR scan is not always mandatory. A planning system that relies on such images would then be pointless, since performing a scan (especially a CT) purely for planning purposes would be refused by the ethical committees supervising medical practice. Moreover, such scans are expensive. A model of the patient's anatomy would be very valuable here. A few parameters, such as weight and height, would be requested in order to scale the model to the actual patient. Targets and incision points would be determined on this model and applied to the real case. Certainly a study would have to be conducted, comparing the model with, say, one hundred persons, to verify its accuracy and applicability.
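A hypothetical sketch of how such scaling might look; the reference height and weight of the generic model, and the girth-from-weight rule, are assumptions made purely for illustration, not validated values:

    # Hypothetical sketch of the scaling idea (assumed reference values).
    import numpy as np

    REF_HEIGHT = 1.75    # height of the generic model in metres (assumed)
    REF_WEIGHT = 75.0    # weight of the generic model in kg (assumed)

    def scale_model(vertices, height, weight):
        """Stretch model vertices (N x 3, metres): z follows body height,
        x and y follow a crude cross-section factor derived from weight."""
        sz = height / REF_HEIGHT
        sxy = np.sqrt((weight / height) / (REF_WEIGHT / REF_HEIGHT))
        return vertices * np.array([sxy, sxy, sz])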

Assuming that a 3-D data set is available, processing it is not an easy task. In some cases, features appear with nearly the same brightness as their surroundings. Thresholding on brightness is therefore difficult and manual work has to be done, increasing the cost and introducing some latency. Note that other segmentation methods can be useful here, like texture thresholding [Russ 1999].
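For illustration, a minimal Python sketch of both approaches (an illustrative example, not the code used in this work): plain brightness thresholding, and a simple texture measure (local standard deviation) that can separate regions whose mean brightness is nearly identical:

    # Brightness thresholding vs. a simple texture threshold.
    import numpy as np
    from scipy import ndimage

    def brightness_mask(image, low, high):
        # keep pixels whose grey level falls in [low, high]
        return (image >= low) & (image <= high)

    def texture_mask(image, window=5, min_std=10.0):
        # local standard deviation computed from two uniform filters
        img = image.astype(float)
        mean = ndimage.uniform_filter(img, window)
        mean_sq = ndimage.uniform_filter(img ** 2, window)
        std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
        return std >= min_std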

Differences may occur between the image acquisition and its use during surgery. Lung filling and the beating heart represent dynamic motion, drastically changing the way the organs are arranged in the thorax. The beating-heart problem can be overcome by selecting a larger target, encompassing all positions of the coronary artery. Images will soon be available in 4-D, including the time parameter, thanks to recent advances in CT and MR scanner technology [Freiherr 1999]. For lung filling, however, physicians will have to decide whether the planner is worthwhile without taking it into account. If not, a simulation tool would have to be developed, extending the patient anatomy model described above to validate the changes occurring during surgery.
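The "larger target" idea can be sketched very simply: assuming one segmented coronary-artery mask per cardiac phase, the planning target is their union (illustrative Python, not the thesis implementation):

    # Union of per-phase masks: the target covers every position the
    # vessel can occupy over the cardiac cycle.
    import numpy as np

    def enlarged_target(phase_masks):
        """phase_masks: list of boolean 3-D arrays, one per cardiac phase."""
        target = np.zeros_like(phase_masks[0], dtype=bool)
        for mask in phase_masks:
            target |= mask
        return target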

6.3. The robots

In this first step towards a complete pre-operative planner, the surgeon determines a target and incision points, and sets the robots' bases. He then checks whether this setting works: whether the instruments really reach the target, and whether the robots avoid collision. If a collision is suspected, he can move one of the robots and try again; the minimum distance between the arms will help him do so. A path is then traced for the tip of the instruments. A helicoidal movement is a first approximation of how the surgeon will reach the target, but another path can be set. The inverse kinematics and the collision detection are run for each position. Only discrete positions are tested; the only way to gain confidence that no collision will occur within a particular target volume is to increase the number of tip positions. Here too, the minimum distance recorded during the simulation¹ provides hints for changing the settings. For the user to know which part of the robot to move, it would be useful to highlight which part of the robot came closest to another during the simulation. The user might then decide, for example, to change the elbow position from "up" to "down" instead of moving the whole setting.
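The verification loop just described can be summarised in Python; ik() and min_link_distance() are assumed stand-ins for the system's inverse kinematics and distance computation, not real interfaces of this work:

    # Sketch of the verification loop over discrete tip positions.
    def verify_path(robots, tip_path, ik, min_link_distance):
        overall_min, closest_pair = float("inf"), None
        for tip in tip_path:                       # discrete positions only
            joints = [ik(robot, tip) for robot in robots]
            for i in range(len(robots)):
                for j in range(i + 1, len(robots)):
                    d, pair = min_link_distance(robots[i], joints[i],
                                                robots[j], joints[j])
                    if d < overall_min:            # remember the closest links
                        overall_min, closest_pair = d, pair
        return overall_min, closest_pair   # compare against a user threshold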

It is important to note that the present simulation does not check whether the variables of the ZEUS system, like the angle parameters, are within range. Some may therefore fall outside their physical limits. This improvement would have to be made before going further.

¹ Only little improvement is needed to provide this information, since the minimum distances are already computed, but for each tip position independently.
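A minimal sketch of the missing range check, assuming the joint variables and their limits are available as dictionaries (the names and limit values are hypothetical; the real limits belong to the ZEUS system):

    # Verify that every joint variable falls within its physical limits.
    def within_limits(joints, limits):
        """joints: {name: value}; limits: {name: (low, high)}."""
        violations = {n: v for n, v in joints.items()
                      if not (limits[n][0] <= v <= limits[n][1])}
        return len(violations) == 0, violations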


Up to this point, the presented planner is still passive and does not find anything by itself. It provides training for the robot settings, a simulation which acts as a verification tool (a go/no-go decision based on a criterion set by the user: the overall minimum distance), and visualisation. The next step towards automation would be for the planner to propose base placements. Each robot base can be fixed on a rail along the operating table, left or right. Some positions would lead to an awkward robot setting: the instruments would not reach the targets, or some variables would be out of range. Avoiding this and, if possible, finding an optimum for the robot setting could be a continuation of this project. Optimum here means that the variables are in the middle of their range, maximising the possible motion of the robot.
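The mid-range criterion can be written down directly; in the following sketch, solve_joints() is an assumed helper returning the joint variables for a candidate base position, and the limits table is the same hypothetical structure as above:

    # Score a setting by how far each variable sits from mid-range,
    # then pick the best candidate base position.
    def midrange_score(joints, limits):
        score = 0.0
        for name, value in joints.items():
            low, high = limits[name]
            mid, half = (low + high) / 2.0, (high - low) / 2.0
            score += ((value - mid) / half) ** 2  # 0 at mid-range, 1 at a limit
        return score                              # lower is better

    def best_base(candidate_bases, solve_joints, limits):
        return min(candidate_bases,
                   key=lambda base: midrange_score(solve_joints(base), limits))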

Determining and visualising the volume swept by the robotic arms could also be interesting. An analytic target volume would lead to a complex analytic swept volume [Martin and Stephenson 1990], and the arm would move only within it. If none of the three shapes (one per robot) overlaps another, the particular setting is totally collision-free when accessing the specified target volume. If some part of space belongs to two or more swept volumes, that area cannot be accessed by all the tools at the same time, but only by some of them.
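A voxel-based approximation of this test is straightforward to sketch (illustrative Python; in practice the analytic approach of [Martin and Stephenson 1990] would be preferred):

    # Union of each robot's occupancy over all simulated poses gives its
    # swept volume; voxels claimed by two or more robots mark regions that
    # cannot be reached by all tools simultaneously.
    import numpy as np

    def swept_overlap(occupancy_per_robot):
        """occupancy_per_robot: list (one per robot) of lists of boolean
        3-D occupancy grids, one grid per simulated pose."""
        swept = [np.any(np.stack(poses), axis=0)
                 for poses in occupancy_per_robot]
        count = np.sum(np.stack(swept).astype(np.uint8), axis=0)
        return count >= 2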

6.4. The future of the project

This project will continue to be developed as long as there is an interest in minimally invasive robotic surgery, where the problems nowadays still outweigh the advantages. The collision problem is addressed in this work, along with the search for adequate robot settings, although the solution is not yet perfect. The high cost of the robotic system also leads to some reluctance towards its use. Last but not least, the training cost is very high. Currently, a digital trainer is under development by SimSurgery™, Norway [Røtnes, Kaasa et al. 2001]. This trainer computes a real-time simulation of tissue mechanics, sutures, instruments and bleeding for the coronary artery bypass, from the point of view of the endoscope. A further step towards realism would be a camera watching the patient and the robotic system from the outside. The robotic system is currently modelled with thick lines; a realistic 3-D model of the robot system would have to be created and incorporated into the scene. Its behaviour would be matched with its kinematic model, so that the surgeon would directly see the whole system surrounding a virtual patient, whose body model is also under development by SimSurgery™.

To summarise, possible future improvements are to:
- develop a scalable anatomic model;
- improve automated segmentation techniques for the scans;
- model the differences between the pre-operative scan and the intra-operative anatomy (gas insufflation, lung deflation);
- model movements (the beating heart);
- add constraints to the robotic model (range of variables);
- add intelligence to the robotic model (automated determination of base placement);
- provide more information about collisions (where and in which situation they may occur);
- model an analytic volumetric target and the corresponding envelope swept by the robotic arms;
- create a realistic model for visualisation and integration into a digital trainer.


7. Conclusion

While the surgeon obviously has perfect control of his own arms, this is not so for the robots used in surgery. In particular, since the robot does not know where the different arms are relative to each other, collisions may occur. In many cases this means that the ongoing robotic procedure must be stopped; in the worst case, the patient could be hurt. To minimise the risk of such collisions, it is crucial to have the right initial configuration of the robots, including arm base positions, arm parameters, and port placements. The goal of this work was to propose robot settings that allow the target to be reached while ensuring that collisions between arms do not occur. In its current state, the tool offers a simulation and a visualisation of the consequences of a particular setting. In the future, this tool must be made more professional (a user interface, for instance, should be added) and optimal robot settings should be determined automatically. The present work and its discussion give clues for future development. The surgeons' needs were established, discarding requests that were medically or technically unfeasible. Understanding, targeting and specifying these needs for minimally invasive surgery was certainly the most time-consuming part of this work. Finally, the multi-disciplinary environment requires constant re-qualification, an exhausting but also very rewarding task.


8. Bibliography

AHA American Heart Association (2000) "Bypass Surgery, Coronary Artery", accessed March 13, 2001, http://www.americanheart.org/Heart_and_Stroke_A_Z_Guide/bypass.html

AHA American Heart Association (2000) "Minimally Invasive Heart Surgery", accessed March 13, 2001, http://www.americanheart.org/Heart_and_Stroke_A_Z_Guide/mininv.html

Austad, A. (2001). A model of the Zeus robot and the human torso. Oslo, Interventional Centre: 10.

Austad, A., O. J. Elle, et al. (2001, pending). Computer Aided Planning of Trocar Placement and Robot Settings in Robot Assisted Surgery. Proceedings of the 15th International Congress and Exhibition: Computer Assisted Radiology and Surgery.

Aybek, T., S. Dogan, et al. (2000). “Robotically Enhanced Totally Endoscopic Right Internal Thoracic Coronary Artery Bypass to the Right Coronary Artery.” The Heart Surgery Forum 3(4): 322-324.

Boyd, W. D., N. D. Desai, et al. (2000). “A Comparison of Robot-Assisted Versus Manually Constructed Endoscopic Coronary Anastomosis.” The Annals of Thoracic Surgery 70: 839-843.

Brandt, G., K. Radermacher, et al. (1997). “A compact robot for image guided orthopaedic surgery: Concept and preliminary results.” Lecture Notes in Computer Science 1205: 767-776.

Charles, S., H. Das, et al. (1997). Dexterity-enhanced Telerobotic Microsurgery. 8th International Conference on Advanced Robotics (ICAR `97); and NASA University Centers Conference, Monterey, CA; and Albuquerque, NM.

Chiu, A. M., D. Dey, et al. (2000). “3-D Image Guidance for Minimally Invasive Robotic Coronary Artery Bypass.” The Heart Surgery Forum 3(3): 224-231.

Damiano, R. J. J. (2000). “Editorial: Endoscopic coronary artery bypass grafting--The first steps on a long journey.” J Thorac Cardiovasc Surg 120: 806-807.

Diegeler, A., T. Walther, et al. (1999). “Comparison of MIDCAB Versus Conventional CABG Surgery Regarding Pain and Quality of Life.” The Heart Surgery Forum 2(4): 290296.

Ducko, C. T., J. Edward R. Stephenson, et al. (1999). “Robotically-Assisted Coronary Artery Bypass Surgery: Moving Toward A Completely Endoscopic Procedure.” The Heart Surgery Forum.

Edwin, B., T. Mala, et al. (2001). “Liver Tumours and Minimally Invasive Surgery: A Feasibility Study.” Journal of Laparoendoscopic & Advanced Surgical Techniques 11(3).

Falk, V., A. Diegeler, et al. (2000). “Endoscopic Coronary Artery Bypass Grafting on the Beating Heart Using a Computer Enhanced Telemanipulation System.” The Heart Surgery Forum 2(3): 199-205.

Fishman, E. K., B. S. Kuszyk, et al. (1996). “Surgical Planning for Liver Resection.” Computer 29(1): 64-72.

Foley, J. D., A. van Dam, et al. (1990). Computer Graphics: Principles and Practice (2nd Edition in C), Addison-Wesley.

Freiherr, G. Medical Device & Diagnostic Industry (1999) "Clinical Needs and Technology Drive Medical Imaging to 4-D", http://www.devicelink.com/mddi/archive/99/10/004.html

Freudenstein, D., D. Bartz, et al. (2000). “A New Virtual Planning System for Neuroendoscopic Interventions (Ein neues virtuelles Planungssystem für neuroendoskopische Eingriffe).” Klinische Neuroradiologie 10(4): 153-160.


Gandsas, A. Laparoscopy.com "Laparoscopy.com - Internet site for Laparoscopic Surgery", accessed March 13, 2001, http://www.laparoscopy.com/

Gering, D. T. (1999). A System for Surgical Planning and Guidance using Image Fusion and Interventional MR. Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology: 106.

Gering, D. T., A. Nabavi, et al. (1999). An Integrated Visualization System for Surgical Planning and Guidance using Image Fusion and Interventional Imaging. Medical Image Computing and Computer-Assisted Intervention (MICCAI), Cambridge, England.

Harris, S. J., M. Jakopec, et al. (1998). “Interactive pre-operative selection of cutting constraints, and interactive force controlled knee surgery by a surgical robot.” Lecture Notes in Computer Science 1496: 996-1006.

Jolesz, F. A. (1997). “Image-Guided Procedures and the Operating Room of the Future.” Radiology 204: 601-612.

Kappert, U., R. Cichin, et al. (2000). “Robotic-Enhanced Dresden Technique for Minimally Invasive Bilateral Internal Mammary Artery Grafting.” The Heart Surgery Forum 3(4): 319321.

Kiaii, B., W. D. Boyd, et al. (2000). “Robot-Assisted Computer Enhanced Closed-Chest Coronary Surgery: Preliminary experience Using a Harmonic Scalpel® and ZEUS™.” The Heart Surgery Forum 3(3): 194-197.

Lea, J. T., D. Watkins, et al. (1995). “Registration and Immobilization in Robot-Assisted Surgery.” Journal of Image Guided Surgery 1: 80-87.

Lengen, R. H. v., J. Meyer, et al. (1998). “Shadow generation for volumetric data sets.” Visual Computer 14: 83-94.

Lueth, T. C. and J. Bier (1999). “Robot Assisted Intervention in Surgery.” Neuronavigation: Neurosurgical and Computer Scientific Aspects.

Mack, M. J. (2001). “Minimally Invasive and Robotic Surgery.” JAMA 285: 568-572.

Martin, R. R. and P. C. Stephenson (1990). “Sweeping of three-dimensional objects.” Computer-Aided Design 22(4): 223-234.

Miller. “Assistive Robotics: An Overview.” Lecture Notes in Computer Science 1458: 126.

Mohr, F. W., V. Falk, et al. (1999). “Computer-Enhanced Coronary Artery Bypass Surgery.” The Journal of Thoracic and Cardiovascular Surgery 117(6): 1212-1215.

Nagae, T., T. Agui, et al. Imaging Science and Engineering Laboratory, Tokyo Institute of Technology (2001) "Isosurface construction from volume data", accessed March 29, 2001, http://www.shobi-u.ac.jp/~tnagae/pub/spala94/

NLM National Library of Medicine "The Visible Human Project®", accessed March 13, 2001, http://www.nlm.nih.gov/research/visible/visible_human.html

Reichenspurner, H. (1999). “Robotically Assisted Endoscopic Coronary Artery Bypass Procedures Without Cardiopulmonary Bypass.” J Thorac Cardiovasc Surg 118: 960-961.

Robb, R. A. (1996). Virtual (Computed) Endoscopy: Development and Evaluation Using The Visible Human Datasets. Presented at the Visible Human Project Conference, Bethesda, Maryland, Mayo Foundation/Clinic.

Robb, R. A., J. J. Camp, et al. Biomedical Imaging Resource, Mayo Foundation/Clinic, Rochester, MN "Computer Aided Surgery and Treatment Planning at the Mayo Clinic", accessed March 13, 2001, http://www.mayo.edu/bir/home.html

Robb, R. A., D. P. Hanson, et al. (1996). “Computer-Aided Surgery Planning and Rehearsal at Mayo Clinic.” Computer 29(1): 39-47.


Røtnes, J. S., J. Kaasa, et al. (2001). Digital trainer developed for robotic assisted cardiac surgery. The 9th Annual Medicine Meets Virtual Reality Conference (MMVR), Newport Beach, California.

Russ, J. C. (1999). The Image Processing Handbook (3rd Edition), CRC Press - Springer IEEE Press.

Schurr, M. O., A. Arezzo, et al. (1999). “Trocar and instrument positioning system TISKA: an assist device for endoscopic solo surgery.” Surgical Endoscopy 13: 528-531.

Schurr, M. O., G. Buess, et al. Springer-Verlag New York, Inc. (2000) "Robotics and telemanipulation technologies for endoscopic surgery: A review of the ARTEMIS project", accessed April 9, 2001, http://link.springer.de/link/service/journals/00464/contents/00/20067/paper/index.html

SPL Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital (1996) "About the SPL (Surgical Planning)", accessed March 13, 2001, http://splweb.bwh.harvard.edu:8000/pages/aboutspl/about.html

Spong, M. W. and M. Vidyasagar (1989). Robot Dynamics and Control.

Tabaie, H. A., J. A. Reinbolt, et al. (1999). “Endoscopic Coronary Artery Bypass Graft (ECABG) Procedure with Robotic Assistance.” The Heart Surgery Forum 2(4): 310-317.

Zeus (1999). ZEUS™: Directions for Use, Computer Motion, Inc.

Zollikofer, C. P. E. and M. P. d. Leon (1995). “Tools for Rapid Prototyping in Biosciences.” IEEE Computer Graphics and Applications 15(6): 48-55.


Table of figures

Figure 1: Raw image of abdominal region - CT scan (p. 10)
Figure 2: Segmented image of the liver (p. 10)
Figure 3: Faces and vertices representation of a cube (p. 11)
Figure 4: Liver, tumour, vessels, ribs (p. 13)
Figure 5: Intersection inside the triangle (p. 15)
Figure 6: Open cube with cylindrical map showing if the centre is straightly accessible (p. 16)
Figure 7: Errors occur using the Proximity algorithm (p. 17)
Figure 8: Canvas: blue areas for free paths (p. 18)
Figure 9: Shadows with DDA (p. 19)
Figure 10: Shadowing a 2D slice with DDA (p. 20)
Figure 11: Slicing: blue areas for free paths (p. 22)
Figure 12: The ZEUS arm - Symbolic representation (p. 26)
Figure 13: Top View of Positioning Arm (p. 28)
Figure 14: Four DOF's (p. 29)
Figure 15: Projection of robot onto x0y0-plane (p. 30)
Figure 16: Angle of joint 5 (p. 32)
Figure 17: Test path for tip of instruments (p. 33)
Figure 18: Robotic simulation and collision detection (far) (p. 34)
Figure 19: Robotic simulation and collision detection (near) (p. 34)
Figure 20: Voxel coordinates system (p. 36)
Figure 21: Registration of model on the operating table (p. 37)
Figure 22: Robotic simulation along with model visualisation (far) (p. 39)
Figure 23: Robotic simulation along with model visualisation (near) (p. 40)

