A Control-Theoretic Investigation of Dynamic Spatial Behavior
Bernard F. Mettler and Zhaodan Kong
Department of Aerospace Engineering and Mechanics, University of Minnesota
2008 Advances in Computational Motor Control Symposium, Society for Neuroscience Conference
Abstract
Humans are highly capable in spatial control tasks, whether these involve their own body or an operated vehicle. For example, they can maneuver a helicopter with agility through complex spatial environments: under ideal conditions (perfect knowledge and absence of disturbances) they can operate at the physical limits of the vehicle; in degraded conditions, they adapt their performance to ensure safe operation. From a control-theoretic standpoint, generating trajectories in real time under partial knowledge is computationally intractable. The brain's performance in these complex tasks raises several fundamental questions about the underlying neurological processes, including: How is spatial information represented? How is the behavior, i.e., the control action underlying the behavior, generated? How are the dynamics taken into account? How are the uncertainties inherent in the task (partial knowledge of the environment, disturbances) factored into the decisions? Current and previous work on motor control covers a range of behaviors, including arm, posture, and eye motion. Research on spatial behavior has mostly focused on spatial representations and navigation. Little research has been devoted to dynamical and higher-dimensional control problems such as those involved in fast motion through complex spatial environments. To study these dynamic spatial control skills we set up an experimental facility (Figure 1) that allows recording of human control behavior with a miniature helicopter in a variety of spatial control tasks. Helicopters are ideal tools to investigate these skills because they are highly maneuverable and can move freely in all three dimensions, yet their dynamics are not trivial and human pilots still surpass autonomous control algorithms.
In a first step we performed experiments on a goal-directed acquisition task (see Figure 2), where the human subject has to bring the helicopter to a goal set (defined by a terminal velocity with tolerances on magnitude and direction). To analyze the spatial control performance we developed a technique to extract the closed-loop (CLP) value function (VF) [1]. The VF characterizes the performance of the human operator (i.e., the vehicle dynamics driven by the human control policy) in terms of spatial distributions of the vehicle state and a pre-specified performance metric (here, time to reach the target). To characterize the brain processes involved in these dynamic spatial control skills, the next question is to identify the particular control realization that produces this closed-loop VF. The solution to this problem is not unique; different control realizations will most likely produce the same CLP VF. Our hypothesis is that the control process involves a combination of predictive control and an open-loop (OL) VF [2]. The OL VF can be viewed as an approximate solution that combines spatial and behavioral information (akin to the conjunctive representations described in [3]). This architecture is attractive because (i) it provides an approximation to an infinite-horizon optimal control policy; (ii) it requires less online computation; and (iii) it provides a family of control realizations that allows adaptation of the control performance to the level of a priori knowledge and of disturbances. In this model, the realizations are parameterized by the length of the prediction horizon. Hence the two bounding cases in the set of realizations are an infinite-horizon optimization (no a priori task information) and a zero-horizon optimal feedback policy (full knowledge of the task in the form of a VF).
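The architecture can be illustrated with a minimal sketch, not taken from the cited work: all grids, dynamics, and function names below are hypothetical simplifications. A time-to-go value function is computed offline by value iteration for a discretized 1D point mass, and a receding-horizon policy then uses it as a terminal cost. A horizon of zero recovers pure VF feedback (full a priori task knowledge), while longer horizons shift effort toward online prediction:

```python
import itertools

# Hypothetical toy model: a 1D point mass on integer position/velocity grids.
X_MAX, V_MAX = 5, 2
STATES = [(x, v) for x in range(-X_MAX, X_MAX + 1)
                 for v in range(-V_MAX, V_MAX + 1)]
ACTIONS = (-1, 0, 1)   # acceleration commands
GOAL = (0, 0)          # reach the origin at rest

def clip(val, lim):
    return max(-lim, min(lim, val))

def step(state, a):
    """Discrete-time double-integrator dynamics with saturation."""
    x, v = state
    v2 = clip(v + a, V_MAX)
    x2 = clip(x + v2, X_MAX)
    return (x2, v2)

def time_to_go_vf():
    """Minimum time-to-go via value iteration (one time unit per step)."""
    V = {s: float("inf") for s in STATES}
    V[GOAL] = 0.0
    changed = True
    while changed:
        changed = False
        for s in STATES:
            if s == GOAL:
                continue
            best = min(1.0 + V[step(s, a)] for a in ACTIONS)
            if best < V[s]:
                V[s] = best
                changed = True
    return V

def rh_action(state, V, horizon):
    """First action of the best action sequence of length `horizon`,
    scored by elapsed time plus the VF as terminal cost. horizon=0
    degenerates to one-step VF feedback (the zero-horizon bounding case)."""
    best_a, best_cost = None, float("inf")
    for seq in itertools.product(ACTIONS, repeat=max(1, horizon)):
        s, cost = state, 0
        for a in seq:
            s = step(s, a)
            cost += 1
            if s == GOAL:   # stop the clock once the goal set is reached
                break
        cost += V[s]        # terminal cost from the stored value function
        if cost < best_cost:
            best_cost, best_a = cost, seq[0]
    return best_a

def simulate(start, V, horizon, max_steps=50):
    """Closed-loop run; returns final state and number of steps taken."""
    s, t = start, 0
    while s != GOAL and t < max_steps:
        s = step(s, rh_action(s, V, horizon))
        t += 1
    return s, t
```

In this toy setting the VF is exact, so every horizon length produces the same time-optimal closed-loop behavior; with an approximate (OL) VF, as hypothesized for the human controller, the horizon length would trade stored task knowledge against online computation.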
We compare the human performance as captured by the OL VF with that obtained from the optimal control process (Figure 3) and investigate the range of realizations that reproduce the human performance.
References:
[1] B. Mettler and Z. Kong, "An External Fields Approach for the Analysis of Human Planning and Control Performance," Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, May 2008.
[2] B. Mettler and Z. Kong, "Receding Horizon Trajectory Optimization with a Finite-State Value Function Approximation," Proceedings of the American Control Conference, Seattle, 2008.
[3] F. Sargolini et al. “Conjunctive Representation of Position, Direction, and Velocity in Entorhinal Cortex”, Science, 2006.
Fig. 1 – Experimental testbed used to study human spatial control and cognitive functions.
Fig. 2 – (Top) Spatial interception task with a miniature RC helicopter. (Bottom left) Illustration of a simple spatial planning task for a vehicle idealized as a point mass. (Bottom right) Time-optimal trajectories for different starting locations. Notice the complex spatial distribution of the velocity and course angle.
Fig. 3 – (Top) Trajectories from the human pilot compared with computed time-optimal trajectories. (Bottom) The extracted value function (shown as a vector field; the contour lines represent time to go).