Proceedings of ICDL 2006 [CD-Rom], Bloomington, IN, June 3, 2006

Modeling Reaching and Manipulating in 2- and 3-D Workspaces: The Posture-Based Model

Jonathan Vaughan

David A. Rosenbaum

Ruud G. J. Meulenbroek

Psychology Department, Hamilton College Clinton, NY 13323 [email protected]

Psychology Department, Pennsylvania State University University Park, PA 16802 [email protected]

Nijmegen Institute for Cognition & Information, Radboud University 6500HE Nijmegen, The Netherlands [email protected]

Abstract – How to represent and plan movement in the face of the articulatory redundancy of the skeletal system is a central challenge for understanding how physical actions are organized. By emphasizing the role of the goal posture as a representational primitive of the skeletal system, the posture-based model of movement planning provides one means for translating a simple specification of task constraints, such as moving from a starting posture to touch a target, into a movement trajectory. It also provides a description of how such actions may be learned. The model has been shown to plan realistic movement trajectories for reaching, grasping, handwriting, circumventing obstacles, and using tools in 2-D workspaces, in all of which arm motion has been restricted to either a sagittal or horizontal plane. Recent work extends the model to the planning of movements in a 3-D workspace. The model's manipulation of a tool around obstacles closely resembles the trajectories generated by participants. This newer work enlarges the domain of prospective motor control (action planning and learning) that the model can address.

Index Terms – Motor control, motor learning, modeling, trajectory, obstacle avoidance, 3-D workspace, tool use

I. INTRODUCTION

There are a number of ways to understand how physical tasks are planned and executed. Kinesiologists have tended to concentrate on the physical mechanics and dynamics of actions. Neurophysiologists have tended to concentrate on the activity of nerve cells in the spinal cord and brain. Cognitive psychologists have tended to concentrate on the antecedents of action, whether from a bottom-up (stimulus-driven) or top-down (context-driven) perspective. Despite the wealth of research by kinesiologists and neurophysiologists on motor planning and control, cognitive psychologists have given this topic short shrift [1]. The authors of this paper (all cognitive psychologists) have attempted to rectify this situation, believing that the study of motor planning and control can profitably be analyzed from a cognitive psychological perspective and that cognitive psychology itself can benefit from more research on motor planning and control. In our work we have been inspired by the three levels of analysis that David Marr [2] distinguished as being fruitful for the scientific understanding of human perception and performance: the computational-theoretical level, the algorithmic-representational level, and the neural-implementation level. Our work on movement planning focuses on what is computed (what information is used in motor planning and what the primitives of that planning may be) and how that information is represented and manipulated in the service of action, and it relies on computer simulations of observed movements. Although we have not explicitly related components of our model to components of the brain, we believe there are plausible neurophysiological analogs of the computational procedures we have invoked. At the very least, none of the procedures we have developed seems to be neurally implausible.

Our posture-based model addresses several generic types of goal-directed actions, including reaching, grasping, moving around obstacles, handwriting, and the use of simple tools. It provides a description of movement kinematics (the temporal sequence of positions in space) but does not yet address movement dynamics (forces and torques). We have deferred the treatment of dynamics until we feel sure of the validity of our model's claims, which are easier to evaluate in the domain of kinematics than in the domain of dynamics.

One way our model departs from some others is that it follows the lead of Nobel Laureate Herbert Simon [3], who recognized that planning is not invariably an optimization process, but may often be described as satisficing – finding a solution that is adequate even if it is not perfect. As Simon noted, arriving at approximate solutions can save time and processing resources as compared to arriving at optimal solutions. Our model builds on this insight.

II. Goals of the model

The central motivation of our work is to address what Bernstein [4] identified as the "degrees of freedom" problem. Because virtually every physical task can be accomplished in an infinite number of ways, task requirements rarely constrain the actor to a single way of achieving the task. Consider turning on a light switch. There are numerous ways to flip the switch. One might use a finger, the palm, or even – if encumbered with packages – the elbow. There are also an infinite number of trajectories that could bring each of those end effectors to the switch. The selection of a goal posture and the selection of an effective trajectory are the computationally challenging problems we seek to solve via our model.

The research reported here was supported by NIH grant R15NS41887-01 (to JV), NSF grant SBR-94-96290 (to DAR), and NIH grant KO2-MH0097701A1 (to DAR).


III. Assumptions of the model

The model, which focuses on manual positioning movements, is described in detail elsewhere [5, 6]. It makes four main assumptions. The first assumption is that the initial step in planning is to identify a goal posture that is likely to accomplish the task. This assumption fits with a number of observations in the literature, for instance that memory for positions is better than memory for movements [7], and that defining postures is central to determining movement trajectories [8]. The second assumption is that in tasks where more than one posture must be adopted in the course of the movement, the second posture is typically planned before the first [9]. The third assumption is that movements (which are planned only after goal postures have been determined) follow a bell-shaped velocity profile at the level of joint rotation. The fourth and final assumption is that movements in 2- and 3-D workspaces are generated either by single-axis rotations [6, 10] or through the combination of simultaneous single-axis rotations.

IV. Applications of the model in 2-D workspaces

A. Posture Representation

To model tasks in a 2-D workspace, we have represented postures as stick figures consisting of rigid, straight limb segments, each of which is attached to the preceding (more proximal) segment by a joint with a fixed axis of rotation perpendicular to the workspace plane.

B. Planning Direct Movements

The model has been applied to several tasks in 2-D workspaces, described in this and succeeding sections. In each task, the model plans postures using a four-step algorithm [5, 6]. First, a candidate goal posture is selected from the posture store, which contains recently adopted goal postures and "bounce" postures (see below). Second, once a candidate goal posture is found in the posture store, it is used to generate variations on that posture that may better fit the requirements of the task (such as being more accurate). Third, a default movement is planned from the starting posture to the now selected goal posture by simultaneously interpolating all joint angles along a bell-shaped velocity profile, v = sin²(t), 0 ≤ t ≤ π. This velocity profile closely approximates a minimum-jerk movement in joint space. In the case of direct movements, no further shaping of the movement is needed. Fourth, the movement is executed, and a memory is formed of the goal posture that was just planned.
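To make the direct-movement step concrete, the sketch below interpolates a vector of planar joint angles from the start posture to the goal posture under the sin² velocity profile. It is a minimal illustration rather than the published implementation; the posture format (a NumPy array of joint angles) and the 21-step time resolution are assumptions borrowed from the simulations described later.

```python
import numpy as np

def direct_movement(theta_start, theta_goal, n_steps=21):
    """Plan a direct movement by rotating all joints simultaneously from the
    start posture to the goal posture with a bell-shaped velocity profile,
    v = sin^2(t) for 0 <= t <= pi."""
    theta_start = np.asarray(theta_start, dtype=float)
    theta_goal = np.asarray(theta_goal, dtype=float)
    t = np.linspace(0.0, np.pi, n_steps)
    # Normalized displacement: the integral of sin^2(t), scaled to run from 0 to 1.
    s = t / np.pi - np.sin(2.0 * t) / (2.0 * np.pi)
    return theta_start + s[:, None] * (theta_goal - theta_start)
```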

C. Reaching Around Obstacles

In cases where there is an obstacle between the starting posture and the planned goal posture, such that a direct movement would collide with it, it is necessary to modify the movement trajectory. To accomplish this, the model superimposes a second, reversible, movement on the direct movement. This to-and-fro movement can be described as a motion from a starting posture to a "bounce" posture and back. The model uses the same four-step algorithm for generating a bounce posture (the terminus of the reversible bounce movement) as for generating the goal posture: It identifies, from the store of recently adopted goal and bounce postures, a candidate bounce posture; it generates variations of the candidate bounce posture by testing bounce movements superimposed on the direct movement from the start to the goal; and then it superimposes on the direct movement a bounce movement that effectively circumvents the obstacle, using a velocity profile given by v = sin(t) × sin(2t), 0 ≤ t ≤ π. The combination of the direct movement and bounce movement results in a complex movement that can circumvent the obstacle (Fig. 1). Finally, it updates posture storage by adding any newly generated goal or bounce postures to it, and forgetting unused ones. (Details may be found in [6].)
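A sketch of this superposition, building on the direct_movement sketch above: the bounce component is modeled here as an excursion toward the bounce posture whose velocity follows sin(t) × sin(2t), so the integrated excursion rises and falls as sin³(t) and contributes no net displacement. Expressing the excursion relative to the start posture, and normalizing it so the bounce posture is reached at mid-movement, are illustrative assumptions rather than details taken from the published model.

```python
def movement_around_obstacle(theta_start, theta_goal, theta_bounce, n_steps=21):
    """Superimpose a reversible bounce movement on the direct movement.
    The bounce velocity is proportional to sin(t) * sin(2t), so the excursion
    varies as sin^3(t): zero at both ends (no net displacement) and, with the
    normalization assumed here, reaching the bounce posture at mid-movement."""
    direct = direct_movement(theta_start, theta_goal, n_steps)
    t = np.linspace(0.0, np.pi, n_steps)
    excursion = np.sin(t) ** 3  # normalized integral of sin(t) * sin(2t)
    bounce = excursion[:, None] * (np.asarray(theta_bounce, dtype=float)
                                   - np.asarray(theta_start, dtype=float))
    return direct + bounce
```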

D. Grasping

The model addresses grasping in a similar fashion. A grasping task is achieved by finding a posture that touches the object to be grasped with a precision or power grip. The object itself can constitute an obstacle (e.g., if the fingers were to open too close to it), so the direct planning algorithm identifies a goal posture that grasps the object, and then the bounce planning algorithm plans a trajectory that avoids collision with the target as well as any intervening obstacles (Fig. 2). Thus, in grasping as well as in simple reaching movements, the obstacle-avoiding algorithm serves to avoid collision with the to-be-grasped object [5].

E. Generating and Learning New Postures

As noted above, in planning movements the model refers to a history of stored postures, but there is no assurance that there will always be a suitable posture in that posture store for the task at hand. Therefore, as part of the planning process, the model may generate new postures based on stored postures by slightly modifying the angles between the limb segments in the stick-figure representation. Newly generated goal postures are retained in the posture store, while the oldest stored postures are lost (see [5, 6] for details). Because the joint angles are confined to a single axis of rotation in the 2-D workspace, each joint angle may be varied to be slightly larger or smaller than in the candidate posture. In addition to the applications just described, the model can address a number of variations in normal movement. For example, it can automatically compensate for changes in joint mobility, such as the consequences of injury. Similarly, a hand-held pointing tool can be added to the model simply by modifying the length of the most distal limb segment.
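The sketch below illustrates this generate-and-remember cycle for the 2-D case. The variation step size, the number of variants, and the capacity of the posture store are illustrative assumptions; the model's actual parameters are described in [5, 6].

```python
import numpy as np

def propose_postures(candidate, step=0.05, n_variants=20):
    """Generate variations of a candidate posture by nudging each joint angle
    slightly up or down about its single (planar) rotation axis."""
    candidate = np.asarray(candidate, dtype=float)
    return [candidate + np.random.uniform(-step, step, size=candidate.shape)
            for _ in range(n_variants)]

def remember_posture(posture_store, new_posture, capacity=50):
    """Retain a newly generated goal or bounce posture, forgetting the oldest
    stored posture once the store exceeds its (assumed) capacity."""
    posture_store.append(new_posture)
    if len(posture_store) > capacity:
        posture_store.pop(0)
```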


V. EXTENDING THE MODEL TO 3-D

While the model described so far has been successful in planning and executing movements in 2-dimensional workspaces, extending the model to the more realistic 3-dimensional space, whether inhabited by humans or robots, is nontrivial. Modeling in 3-D raises several challenges that are not encountered in the 2-D case. Some arise simply from the higher dimensionality of the workspace. For example, for most joints, there are more degrees of freedom of movement in 3-D workspaces than in 2-D. The shoulder, which in 2-D is limited to flexion or extension when restricted to horizontal movements, can also be abducted or adducted, as well as internally or externally rotated, when it can move in 3-D. An additional complication arises from the differences in the kinds of rotations permitted by joints in 3-D workspaces. In the 2-D case, all the joint axes of rotation are parallel. Thus, the order of joint rotations is unimportant because successive rotations commute. However, in the 3-D workspace, there are many possible axes of joint rotation. The axes of rotation of joints are usually not parallel, and joint rotation order does matter, because successive rotations in 3-D do not commute [12].

A. Posture Representation

To address the challenges related to joint rotations in 3-D, we note that a sequence of joint rotations about two or more axes can always be reduced to a single rotation about some third axis [10]. The 3-D model uses a quaternion-based representation [10, 13] for postures and movements. In the 3-D working environment, planning can be accomplished using the same principles as in the 2-D case, by substituting the quaternion-based posture representation for the angle-based representation of the 2-D model.
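As an illustration of this representational point (not the published code), the snippet below builds unit quaternions from axis-angle pairs and composes them. The product of two quaternions is itself a unit quaternion, i.e., a single rotation about some third axis, and reversing the order of multiplication generally gives a different result, reflecting the non-commutativity of 3-D rotations.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def quat_multiply(q, r):
    """Compose two rotations (apply r first, then q). In general,
    quat_multiply(q, r) != quat_multiply(r, q): 3-D rotations do not commute."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])
```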

Fig. 1. Observed (left) and modelled (right) 2-dimensional movement trajectories around an obstacle, for six seated participants reaching with a tool around an obstacle in a sagittal plane. Even though the cartoon figure may appear three-dimensional, all movements are confined to a plane. From [6].

B. Planning Direct Movements

The second challenge is to determine a trajectory for the direct movement from the start to the goal, which (in the 2-D case) is of necessity constrained to a particular axis of movement. In 3-D, movement from one static joint orientation to another can always be accomplished by a single-axis rotation along the geodesic between the two orientations. This is the minimum-length path from one orientation to the other. Thus, the model makes direct movements in 3-D by the simultaneous interpolation of all joints along the most direct path (for each joint) from the start to the goal posture, following the geodesic [10]. Additional computations may be required when obstacles are present, as described below.
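A minimal sketch of such geodesic interpolation, assuming each joint attitude is stored as a unit quaternion and reusing the sin² pacing from the 2-D model; the helper names and the 21-step resolution are illustrative rather than taken from the published implementation.

```python
import numpy as np

def slerp(q0, q1, s):
    """Spherical linear interpolation: the single-axis (geodesic) rotation
    carrying attitude q0 to attitude q1, evaluated at fraction s in [0, 1]."""
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                    # take the shorter of the two great-circle arcs
        q1, dot = -q1, -dot
    omega = np.arccos(np.clip(dot, -1.0, 1.0))
    if omega < 1e-8:                 # attitudes nearly identical
        return q0
    return (np.sin((1.0 - s) * omega) * q0 + np.sin(s * omega) * q1) / np.sin(omega)

def direct_movement_3d(q_start, q_goal, n_steps=21):
    """Rotate a joint from its start attitude to its goal attitude along the
    geodesic, paced by the bell-shaped velocity profile v = sin^2(t)."""
    t = np.linspace(0.0, np.pi, n_steps)
    s = t / np.pi - np.sin(2.0 * t) / (2.0 * np.pi)
    return [slerp(q_start, q_goal, si) for si in s]
```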

Fig. 2. Top view of a simulated reach to grasp an object, in which the reach must circumvent an intervening obstacle. From [5].


C. Reaching Around Obstacles

The third challenge in 3-D movement planning is circumventing obstacles. In the 3-D model, if an obstacle is expected to impede direct movement from the start to the goal, the model uses the same algorithm as for 2-dimensional movements to compute a trajectory that circumvents the obstacle [6]. It first identifies a bounce posture, to which a reversible movement from the start posture and back can be made. Then, the bounce movement is superimposed on the direct movement (Fig. 3). As in the 2-D case, because the bounce movement adds no net displacement to the direct movement, it changes the shape of the combined movement but does not affect the finally attained goal posture. Because of the noncommutativity of rotations in 3-D, however, in the simulations we have run so far the bounce movement has always been applied after the direct movement.

D. Generating and Learning New Postures

The final challenge of the 3-D case is generating new postures that have not been previously learned, so that they may be used in the present task and retrieved for use in future tasks. This is critical because for some novel tasks, no stored posture may suffice. Recall that in the 2-D case, the model tweaks candidate postures available in the posture store. The 3-D model uses a similar strategy, except that its tweaking is done by generating new postures that vary (in small steps) in many possible rotational directions from the candidate goal or bounce posture, not just along a single rotation axis.
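One way to picture this multi-directional tweaking, as a hedged sketch that reuses the quaternion helpers defined earlier (the step size and number of variants are assumptions): perturb each joint attitude of the candidate posture by a small rotation about a randomly chosen axis, so that variation is not confined to any single rotation axis.

```python
import numpy as np

def tweak_posture_3d(candidate_quats, step=0.05, n_variants=20):
    """Generate posture variants by composing each joint attitude with a small
    rotation about a random axis. Relies on quat_from_axis_angle and
    quat_multiply from the earlier sketch."""
    variants = []
    for _ in range(n_variants):
        variant = []
        for q in candidate_quats:
            axis = np.random.normal(size=3)           # random rotation axis
            angle = np.random.uniform(-step, step)    # small rotation (radians)
            variant.append(quat_multiply(quat_from_axis_angle(axis, angle), q))
        variants.append(variant)
    return variants
```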

Fig. 3. Algorithm for generating an obstacle-avoiding trajectory, given a starting posture, a target to be touched, and an intervening obstacle.

VI. OBSERVED MOVEMENTS IN 3-D

The model has been used to plan reaches corresponding to human reaches in 3-D space. To obtain data about humans reaching in 3-D, we asked eight seated subjects to bring the end of a hand-held pointer to each of a sequence of targets on two vertical panels, 24 × 48 in., hinged on the long side (Fig. 4). In some blocks of trials a horizontal or vertical obstacle was interposed between the targets. Postures were recorded with infra-red emitting diodes on the shoulder, elbow, wrist, and pointer tip, using an OPTOTRAK motion recording system (Northern Digital, Inc., Waterloo, Ontario, Canada). Simulated trajectories were synthesized by first computing a direct movement trajectory from the observed start posture to a computed goal posture. Joint attitudes were represented as an array of quaternions, to facilitate computation of arm trajectories as single-axis rotations in joint space. The direct movement from the observed start posture to the computed goal posture was implemented as simultaneous single-axis rotations of all joints, following a bell-shaped velocity function proportional to sin²(t) for 0 ≤ t ≤ π. The model next computed a reversible bounce movement and superimposed it on the direct movement with a symmetrical velocity profile, proportional to sin(t) × sin(2t). Direct and composite simulated trajectories were compared with observed trajectories by visual inspection, by comparing the simulated and observed axes of joint rotations, and by comparing the simulated and observed trajectories of reference points (joints and the distal tool tip).

Fig. 4. Touching target 4 with a hand-held tool while joint positions are recorded using IREDs on the limbs (not shown). A vertical obstacle stands between the left and right panels. Targets are numbered left to right: top row, 1-4; middle row, 5-8; bottom row, 9-12. A vertical obstacle is shown, but in some trials a horizontal obstacle was used (see Fig. 7).

VII. MODELING RESULTS IN 3-D

The simulated movements closely resembled the movements made by our human participants (Fig. 4), as seen in Figs. 5 through 8. The computed trajectories captured the characteristics of the observed trajectories in a number of respects. Both show a relatively symmetrical bell-shaped velocity profile, and in both cases the patterns of rotation at each of the joints contributing to the movement are similar. For instance, both observed and simulated movements show similar biphasic rotation velocities around the same axis of rotation.

One way of comparing the simulated trajectories to the observed trajectories is to ask if they would be distinguishable from those generated by a sample of human participants.



To address this question, we simulated five performances of the same two movements as they were made by five of our participants. The movements were from target 4 to target 1, around a vertical obstacle, and from target 1 to target 10, around a horizontal obstacle. We asked how closely each of the participants' performances resembled the others', and how closely the simulated performance resembled that of the participants. Both observed and simulated movements were normalized to a constant 1-sec duration, in 21 time steps. For each of three reference points on the arm (the elbow joint, the wrist joint, and the tip of the tool), the root mean squared (RMS) distance between one compared trajectory and the other at each time step was computed. Both the differences between participants and the differences between each participant and the simulation varied. However, the similarity of the model to the participants did not differ markedly from the similarity of the participants to each other. The RMS distances between participants were generally comparable in mean and variability to the distances between each participant and the simulation (Table I). Indeed, where the difference between participant-participant and participant-model fits approached significance, the differences favored the model: the model was more similar to the participants than the participants were, on average, to each other.
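The comparison metric can be sketched as follows, assuming each trajectory is stored as an array of reference-point positions (in cm) at the 21 normalized time steps; this is an illustration of the RMS distance described above, not the analysis script used to produce Table I.

```python
import numpy as np

def rms_trajectory_distance(traj_a, traj_b):
    """RMS Euclidean distance (e.g., in cm) between two time-normalized
    trajectories of one reference point, each of shape (n_steps, 3)."""
    traj_a = np.asarray(traj_a, dtype=float)
    traj_b = np.asarray(traj_b, dtype=float)
    step_distances = np.linalg.norm(traj_a - traj_b, axis=1)  # distance at each time step
    return np.sqrt(np.mean(step_distances ** 2))
```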

Fig. 5. Observed movement in the case of a vertical obstacle, from the OPTOTRAK perspective (left) and from the top (right). Note the foreshortening of the obstacle from the top perspective. In this trial, the participant moved the tool tip from target 4 to touch target 2.

Fig. 6. Simulated movement corresponding to the observed movement shown in Fig. 5.


Fig. 7. Observed movement in the case of a horizontal obstacle, from the OPTOTRAK perspective (left) and from the side (right). Note the foreshortening of the obstacle from the side perspective. In this trial, the participant moved the tool tip from target 2 to touch target 10.

TABLE I
Mean RMS distance (cm) between joint locations

Obstacle     Joint      Contrast*    Mean (95% c.i.)    t(13), sig.
Horizontal   Elbow      P vs. P      12.70 (2.60)       1.216, n.s.
                        P vs. M       9.94 (3.11)
             Wrist      P vs. P      18.30 (3.92)       2.990, p < .02
                        P vs. M      10.50 (1.77)
             Tool tip   P vs. P      14.55 (2.36)       -0.567, n.s.
                        P vs. M      15.77 (3.07)
Vertical     Elbow      P vs. P      11.69 (2.77)       0.908, n.s.
                        P vs. M       9.45 (3.47)
             Wrist      P vs. P      15.26 (3.49)       2.506, p < .05
                        P vs. M       8.98 (2.42)
             Tool tip   P vs. P      15.03 (3.29)       0.074, n.s.
                        P vs. M      14.83 (3.51)

* P vs. P: comparison of two participants; P vs. M: comparison of one participant with the model.

Fig. 8. Simulated movement for the horizontal obstacle. Compare with Fig. 7.

VIII. CONCLUSIONS

The posture-based model provides a computationally efficient method of determining complex trajectories in the multijointed hand and arm. To the extent that the synthesized trajectories capture the main characteristics of the observed trajectories, we feel that the extension of the posture-based model of motion planning to 3-D is worth considering further, as a potentially effective method of resolving the degrees of freedom problem for reaching, grasping, and avoiding obstacles. In accomplishing this, the model has the virtue of parsimony: A trajectory from a starting posture, around an obstacle, to touch a target can be fully specified using just two computed postures, the goal posture and the bounce posture.

The model's achievements and attractions notwithstanding, it is not without limitations. First, certain movements, such as those that go behind an obstacle, do not appear to be adequately modeled using only a single bounce movement. It remains to be seen whether the model can be generalized to address a broader range of movements such as these. Second, the model does not explicitly compute the dynamics of movement, nor does it take into account the impedance of the musculoskeletal system in specifying a trajectory. It will be important in the future to address this fundamental challenge. Now that the model has proved workable both in 2-D and in 3-D, we are heartened by the belief that the basic assumptions of the model will provide a firm footing for this next step in our efforts to build a cognitive model of motor planning.

ACKNOWLEDGEMENT

We are grateful for the contributions of Mary Klein Breteler and Steve Jax to the development of these ideas, and to Stephanie Godleski, Steve Jax, Aram Kudurshian, and Kimbery Lantz for the acquisition of the empirical data. Stan Gielen and Michael Turvey made valuable suggestions during the model's development.

REFERENCES

[1] D. Rosenbaum, "The Cinderella of psychology: The neglect of motor control in the science of mental life and behavior," American Psychologist, 60, 308-317, 2005.
[2] D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, San Francisco: W. H. Freeman, 1982.
[3] H. A. Simon, Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization, New York: Free Press, 1965.
[4] N. Bernstein, The Coordination and Regulation of Movements, London: Pergamon, 1967.
[5] D. A. Rosenbaum, R. G. J. Meulenbroek, J. Vaughan, & C. Jansen, "Posture-based motion planning: Applications to grasping," Psychological Review, 108, 709-734, 2001.
[6] J. Vaughan, D. A. Rosenbaum, & R. G. J. Meulenbroek, "Planning reaching and grasping movements: The problem of obstacle avoidance," Motor Control, 5, 116-135, 2001.
[7] M. M. Smyth, "Memory for movements," in M. M. Smyth & A. M. Wing (Eds.), The Psychology of Human Movement. London: Academic Press, 1984, pp. 83-117.
[8] M. S. Graziano, C. S. R. Taylor, & T. Moore, "Complex movements evoked by microstimulation of precentral cortex," Neuron, 34, 841-851, 2002.
[9] D. A. Rosenbaum, F. Marchak, H. J. Barnes, J. Vaughan, J. Slotta, & M. Jorgensen, "Constraints for action selection: Overhand versus underhand grips," in M. Jeannerod, Ed., Attention and Performance XIII: Motor Representation and Control. Hillsdale, NJ: Lawrence Erlbaum Associates, 1990, pp. 321-342.
[10] M. D. Klein Breteler & R. G. J. Meulenbroek, "Modeling 3D object manipulation: synchronous single-axis joint rotations?" Experimental Brain Research, 168, 395-409, 2006.
[11] J. Vaughan, D. A. Rosenbaum, & R. G. J. Meulenbroek, "Modeling reaches around obstacles in 3 dimensions," in preparation.
[12] C. C. A. M. Gielen, E. J. Vrijenhoek, & T. Flash, "Principles for the control of kinematically redundant limbs," in M. Fetter, H. Misslisch, & D. Tweed, Eds., Three-Dimensional Kinematics of Eye-, Head-, and Limb-Movement. Chur, Switzerland: Harwood Academic Publishers, 1997, pp. 285-297.
[13] S. L. Altman, Rotations, Quaternions, and Double Groups, Oxford: Clarendon Press, 1986.

