Vis Comput (2009) 25: 487–497 DOI 10.1007/s00371-009-0321-9

ORIGINAL ARTICLE

Real time falling animation with active and protective responses

Zhigeng Pan · Xi Cheng · Wenzhi Chen · Gengdai Liu · Bing Tang

Published online: 3 March 2009 © Springer-Verlag 2009

Abstract Combined with motion capture and dynamic simulation, characters in animation have realistic motion details and can respond to unexpected contact forces. This paper proposes a novel real-time character motion generation approach which introduces a parallel process and uses an approximate nearest neighbor optimization search method. In addition, we employ a support vector machine (SVM), trained on a set of samples, to predict a subset of our 'return-to' motion capture (mocap) database in order to reduce the search time. In the dynamic simulation process, we focus on designing a biomechanics based controller which detects the balance of the characters in locomotion and drives them to take several active and protective responses when they fall to the ground, in order to reduce the injuries to their bodies. Finally, we show the time costs of the synthesis and the visual results of our approach. The experimental results indicate that our motion generation approach is suitable for interactive games and other real-time applications.

Z. Pan · X. Cheng · G. Liu · B. Tang
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, 310058, China
Z. Pan e-mail: [email protected]
X. Cheng e-mail: [email protected]
G. Liu e-mail: [email protected]
B. Tang e-mail: [email protected]

W. Chen ()
Department of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
e-mail: [email protected]

Keywords Character animation · Physics based modeling · Reactive motion · Balance detection · Protective responses

1 Introduction

Motion capture techniques can produce very natural-looking character animation with rich motion detail and style. But the re-use of mocap data is limited by the available methods for editing and modifying the data in controllable ways. As designing an appropriate control scheme is difficult, only a limited number of methods consider reactive motion. Physics based dynamic simulation is commonly used to add responses to characters. Nevertheless, due to the complexity of dynamic controller design and the lack of rules for human motion, even seemingly simple tasks are prohibitively difficult to achieve robustly using physics alone. Generating realistic character animation which responds to contact forces is an active area in character animation [1, 2]. Techniques which combine motion capture and dynamic simulation can generate realistic and interactive character motion. Most previous work is not yet able to synthesize character animation in real time because of the long time needed to search a large motion capture database. Besides, the characters often lack the action characteristics of humans when being pushed, and fall to the ground like "rag-dolls" [3]. In most cases, a character needs to return to a mocap sequence after the dynamic simulation. The search routine to find an appropriate return-to mocap sequence often takes a lot of computing time. In this paper, we aim at reducing the search time by introducing an SVM to predict a subset of the mocap database; this subset includes several appropriate return-to mocap sequences. In addition, our system employs a parallel process to handle the display


task and the computing task in order to speed up the motion synthesis. Humans take actions to protect themselves when being pushed or hit in everyday life [4–6]. Few papers have yet considered active and protective behaviors in the dynamic simulation. In order to simulate characters in locomotion, the system needs a balance detection algorithm. In this paper, we estimate the balance of characters in locomotion by the displacement and the velocity of their center-of-mass (COM). Based on human biomechanics [7, 8], several actions to maintain balance, such as grasping a nearby object or taking a forward step, are integrated into our system. Thus our simulated characters can keep their balance to avoid a fall, or take protective action to reduce injuries when falling to the ground.

2 Related work

Previous work on generating reactive motions concentrated either on reproducing animation from motion data by kinematics, or on physically modeling the interacting characters using dynamics. Several researchers focused on techniques that synthesize new motions by processing and editing existing motions. Motion graphs were introduced to provide a solution by automatically discovering the ways in which the original data could be reassembled to produce natural looking motion [9]. Reitsma and Pollard [10] explored the responsiveness of motion graphs. They repeatedly sampled states and performed searches to create the final motion. Arikan et al. [2] presented a method for animating a character being pushed. When a push occurs, a user-trained oracle helps to find a response motion from the database that gives the best visual quality with a corresponding deformation parameter. Heck and Gleicher [11] proposed a method to augment motion graphs by using parameterized motions. Their method produced high quality and controllable motions. Unlike these methods, we dynamically generate reactive motions by physical simulation. Several character motion researchers proposed different approaches to create physically based animation. The work described by Faloutsos et al. [12] handled balancing, falling and other everyday actions. Their dynamic controller was designed for adults, and the characters in their system absorbed the shock of the impact with their hands. Fang et al. [13] presented a physically based optimization method by defining and exploring a restricted class of optimization problems. In their method, the physics constraints were included and the first derivatives of the constraints and the objective functions could be computed in linear time. Therefore, their optimization method can handle more complex characters. Yin et al. [14] developed a simple control strategy which can generate a large variety of gaits and styles


in real-time, including walking in all directions. Their work solved some of the problems in character animation, such as modeling multiple gaits, stylized motions, reaction to variable terrain, and reactions to external forces. Some researchers presented approaches which combine motion capture and dynamic simulation. Shin et al. [15] presented a method for touching up an edited motion to satisfy physical constraints. Shapiro et al. [16] introduced a hybrid method which allowed characters to be controlled to switch between mocap and physical simulation. They built on a supervisory control scheme to create a general framework for moving between the two motion representations. Komura et al. [17] introduced momentum-based inverse kinematics to create reactive motions. Zordan et al. [18] introduced a method to utilize simulation to generate physically plausible motion. They included a rewinding process in the system and then blended the motion into the final motion sequence. We use a parallel process to allow the simulated characters to produce interactive responses to multiple external perturbations in series, and employ biomechanical controllers to generate the protective behaviors, so that our characters become more lifelike. Several investigations revealed that after contact forces occur, active and protective responses are an effective means to reduce injury to the body [19], such as using the arms to break a fall or taking a squatting action. Kry et al. [20] introduced a technique of interaction capture to address the difficulties of dealing with contact during motion capture. Although their focus was on hands and grasping, the method could be applied to arms, interaction through the use of tools, and compliant articulated structures in general. Pai et al. [8] introduced a method based on dynamic stable regions. They considered the COM displacement and the COM velocity of the characters as a stabilization problem. Wu et al. [21] reported that humans take several protective actions when they fall to the ground, such as grasping a nearby object, swaying with their ankles or taking a step. Yin et al. [22] presented several techniques for generalizing a controller for physics based walking to significantly different tasks, such as climbing up a large step, or pushing a heavy object. Although their methods can generate satisfactory character motions, the calculation of their controller simulation is more complex than ours. Machine learning has become popular in computer graphics in recent years. Mount et al. [23] proposed a search method called approximate nearest neighbor (ANN) search, which can achieve better space–time performance than ordinary k-nearest neighbor (KNN) search because it uses a randomized strategy and relaxes the constraints on accuracy. Zordan et al. [24] employed a supervised learning routine to quickly classify the physical motion online, just following an impact, among the set of examples in the database. Their approach computed dynamic


Fig. 1 Motion generation diagram

responses to unanticipated interactions faster than real-time. Treuille et al. [25] employed a low dimensional reinforcement learning method, and outperformed greedy methods for navigational problems while retaining a low memory overhead. Based on their work, we employ an SVM to predict the subset of the mocap database which includes the appropriate return-to mocap sequences. By adopting the SVM, the search time is reduced significantly.
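At run time, the classifier's job is small: a feature vector describing the end-of-simulation pose goes in, a database-subset label comes out. The following toy sketch shows the shape of an RBF-kernel SVM decision function; the support vectors, coefficients, and labels here are made-up placeholders for illustration, not trained values from our system:

```python
import math

def rbf(x, y, gamma=0.5):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def svm_decision(x, support_vectors, alphas, labels, b, gamma=0.5):
    """Two-class SVM decision value: sum_i alpha_i * y_i * K(sv_i, x) + b."""
    s = sum(a * y * rbf(sv, x, gamma)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return s + b

# Made-up support set separating 'fall forward' (+1) from 'fall backward' (-1):
svs = [(1.0, 0.2), (-1.0, -0.3)]
score = svm_decision((0.9, 0.1), svs, alphas=[1.0, 1.0], labels=[+1, -1], b=0.0)
print(score > 0)  # True
```

A multi-class predictor over the six database subsets would combine several such two-class decisions (e.g. one-vs-one voting).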

3 Overview

Our system drives the character to move based on a mocap sequence until receiving an interactive input signal (at time A). The collision is detected at time B, and the system passively simulates the character until time C. At time C, our system detects the balance of the character, and then drives the character to keep balance and to take several protective actions when he falls to the ground (from time C to time D). Our SVM predicts the subset of the mocap database in advance, which includes the appropriate return-to mocap sequences. Next, the system uses an approximate nearest neighbor (ANN) optimization search routine to seek the most appropriate return-to mocap sequence from the previously selected subset of the mocap database. The transition point in the selected return-to motion sequence is determined at time E. Finally, the system blends the two motions from time D to time E and generates the transition motion. At this point, the whole interactive character animation is produced. The overview of our system is shown in Fig. 1.

3.1 Character model

To create believable responses and generate realistic motions, we need to set up a model of our character based on the motion data. We map the recorded data to our character, which is represented as an articulated figure of a series of body parts connected by joints. The articulated character we chose includes 15 joints. Each joint is specified as either a revolute joint or a spherical joint, with one or three degrees of freedom (DOF), respectively. There are three DOF


at the neck, back, waist, hips and shoulders, and one DOF at the knees, ankles, elbows and wrists. We use a state vector q = [T, R, θ1, . . . , θn] to represent the pose of our character, where T and R are the translation and rotation angles of the root joint, and θ1, . . . , θn are the rotation angles of the other joints in coordinates relative to their parent nodes. The size of each body part of our characters is determined by its skeleton from the motion capture data. We obtain the masses and the inertial parameters of the characters from a basic biomechanical model whose body is similar to ours. The same model is used for contact with the ground and between characters. In addition, we set the ground friction in the dynamic simulation to 1.0 (that is, a rough surface) in order to achieve satisfying simulation results.

3.2 Dynamic simulation

Our character motion is initialized by a mocap sequence. The bounding box of each body part is used to estimate the contact state between the characters and the environment. If a collision occurs, the dynamic simulation process is activated at once. The COM velocity of the character and the momentum of each joint after the collision are re-calculated, and the new values are input to the character dynamic model. During the dynamic simulation process, our system obtains the position and the rotation angle of each joint at any time. As Miall et al. [26] described in their work, after a collision a human cannot respond to the contact force within 150–250 ms, which is called the reaction time (in Fig. 1, the reaction time is from time B to time C). Therefore, we divide the dynamic simulation process into two phases: 1) the passive simulation; 2) the active response. In the passive simulation phase (within the reaction time), we simulate the characters using the "rag-doll" controller. Time C to time D is the active response phase.
In this phase, we use our active and protective controller based on human biomechanics.

3.3 Parallel process

In order to generate and display the interactive character motion in real time, we design two parallel processes. One process displays the character motion. The other process simulates the characters, searches for the return-to mocap sequence, generates the transition motion, and sends the results to the display process. The two parallel processes are activated immediately when the system detects a collision. The simulation process uses a small time step, while the display process uses a large time step based on the frame rate of the animation. By adopting the two parallel processes, the system does not need to rewind the result of the active simulation, so


the synthesis of the character motion is consecutive and in real time. Otherwise, as the search for the return-to motion sequence and the calculation of the transition motion take some time, the system would have to wait after the simulation phase completes.

3.4 Search optimization

Zordan et al. [18] reported that the comparison evaluation stage occupied 70% of the whole computing time. We employ the ANN search method proposed by Mount et al. [23]. The ANN search method calculates a Minkowski distance, denoted dist(p, q) for two data points p and q. We use a balanced box decomposition (BBD) tree to subdivide the whole space into several regions of O(d) complexity. The ANN search introduces a parameter ε to guarantee that any reported nearest neighbor is within a factor of (1 + ε) of the actual nearest neighbor distance. Given any query point q ∈ R^d, we can find the desired point p in the space satisfying (1):

dist(p, q) ≤ (1 + ε) dist(p∗, q),  (1)

where p∗ is the true nearest neighbor to q. That is to say, p is within an error ε of the true nearest neighbor. The relaxed specification gives a tangible guarantee on accuracy, while providing an O(log³ n) expected run time and an O(n log n) space requirement, where n is the number of frames in the mocap database. When comparing two poses, we first align the roots of the two frames, and then use the positions of the joints as parameter vectors of fixed dimension to evaluate the similarity of the two poses. Although the ANN search can reduce the comparison time, our system still needs a few seconds to find the return-to mocap sequence. Therefore, we need a method to narrow the search.

3.5 Support vector machine

The search time correlates with the size of the database, so searching a relatively small mocap database improves the comparison efficiency [1]. Nonetheless, if the database contains too few mocap sequences, a satisfactory return-to sequence may not be found. The SVM and the artificial neural network are two common classification methods. In our experiment, on a relatively small mocap database like ours, the SVM was much faster to train than the artificial neural network and achieved about 20% higher classification accuracy. Therefore, we employ an SVM to predict, based on the poses after the active simulation, the subset of our mocap database which needs to be searched later. Each subset of the mocap database includes several sequences in which the character uses a similar strategy to recover after falling to the ground. In our implementation,

leaning forward/backward back to upright, getting up from the back, getting up from the front, and getting up from the left/right divide the whole mocap database into 6 subsets. The feature attributes input into the SVM should be carefully chosen. Increasing the number of feature attributes requires a larger training set and a longer training time; on the other hand, with too few feature attributes the SVM is not able to differentiate certain database subsets. Based on human biomechanics, the state of the body and the information needed for balance should be included in the input vector. Our experiments show that selecting the displacement and the velocity of the COM, head, chest, root, and lower spine at the end of the active simulation (namely time D in Fig. 1) as the input vector is a good trade-off between training time and prediction accuracy. For example, adding the state of the arms increased the classification accuracy by 2% but required 2.5 times the training time; removing the head state from the input attributes decreased the accuracy by 10%, and removing the other attributes gave similar results. The SVM finds a partition in the space of training data, and builds a model containing the attribute information to predict the target class of testing data. In our SVM, we have L training samples, each consisting of a vector of feature attributes and a subset label. More formally, let TS = {(x1, y1), (x2, y2), . . . , (xL, yL)} be a set of training samples, where xi ∈ R^n are n-dimensional attribute vectors and yi are the class labels associated with each observation, with the index i varying from 1 to L. We create the SVM by solving the optimization problem (2)–(3):

min_{ω,b,ξ}  (1/2) ωᵀω + C Σ_{t=1}^{L} ξt,  (2)

s.t.  yt (ωᵀ φ(xt) + b) ≥ 1 − ξt,  ξt ≥ 0.  (3)

In (2), C > 0 is a weighting penalty for the error term ξ, ω is the weight vector, and b is the bias term. φ(xi)ᵀφ(xj) is defined to be the kernel function K(xi, xj). In our system, we use the RBF kernel, as (4) shows, where γ > 0 is a user-defined kernel parameter:

K(xi, xj) = exp(−γ ‖xi − xj‖²).  (4)

The training process runs only once, when the system is set up. The samples are obtained by applying unexpected forces from different directions, with different magnitudes, acting on different body parts of the characters in locomotion. Our controller simulates each of the sample cases to get the final pose of the characters, and we hand-select the desired subset of the mocap database. After training, when the active simulation process completes, our SVM can


predict one subset of the mocap database which needs to be searched later. By taking advantage of the SVM, we only need to search a relatively small database (about 1/6 of the whole).

4 Active and protective responses

4.1 Balance estimation

After the passive simulation, the character may fall, so we need to estimate the balance state of the character at that time. According to Newtonian mechanics, if an object is still and balanced, the vertical projection of its COM must be within the base of support (BOS) [8]. The 'base of support', or 'supporting area', is defined as the possible range of the center of pressure (COP), which is the origin of the ground reaction vector. The BOS loosely equals the area below and between the feet (in two-feet standing). However, as the COM of a character in locomotion has velocity, even if the COM is within the BOS, balance may be impossible when the COM velocity is directed outward. In order to find a balance estimation condition which is suitable for fast computing, we create a simplified inverted pendulum model [27], as Fig. 2 shows.

Fig. 2 The simplified inverted pendulum model

In Fig. 2, Dcom is the COM displacement, Dcop is the COP displacement, and Dcop_min and Dcop_max are the two boundaries of the support region. The character body is modeled as a single mass m balancing on top of a stick of length L. Based on Newtonian mechanics [7], (5) holds in the sagittal plane:

(Dcom − Dcop) mg ≈ mL D̈com,  (5)

where D̈com is the second-order derivative of Dcom. We let ω² = L/g, and tanh(t/ω) ≤ 1 holds for any t, where 'tanh' is the hyperbolic tangent function. By solving (5), we have (6):

Dcom + Vcom ω ∈ [Dcop_min, Dcop_max],  (6)

where Vcom is the COM velocity. Equation (6) can be expanded into a two-dimensional form: denoting the projection of the COM on the ground plane as d = (x, y) and its velocity as v = dd/dt, the stability condition states that d + vω should be within the BOS. In active simulation, our controller drives the characters to take several balance-keeping and protective actions when they fall to the ground.

4.2 Protective responses

Several human biomechanics researchers proposed that a human will grasp a nearby object to keep his balance if possible [19]. Nevertheless, in many situations, there is no nearby graspable fixture when a fall occurs. Therefore, the character has to take a stepping or a windmill-like arm action instead. The injury risk during a fall depends on the position and the velocity of the body segments at the moment of contact with the ground, as well as the location, the direction, and the magnitude of the forces applied to the body during the contact stage of a fall [6]. Sometimes, protective behaviors will lead a human back to a balanced and upright pose, while in other cases he will still fall to the ground. Whether the character falls to the ground depends on the reaction time, the strength of the character and the disturbance intensity [28]. When the characters are about to fall, our dynamic controller drives them to take several protective actions, such as using their arms to break a fall, or squatting. These actions are effective ways of decreasing the likelihood of injury by reducing the impact forces experienced by their bodies [5].

Strategy 1: Grasping a nearby object

A human will grasp a nearby object to keep balance; an armrest, for example, is an ideal object. By using a bounding box based collision detection method, our controller detects the nearby fixed objects which are within a certain distance (0.5 m in our experiment) from the upper limbs of the character. Then the controller decides which limb grasps the object, and rotates the upper body to make the character face the object. The controller drives each character joint using (7):

τ = ks (θ(t) − θcur) − kd θ̇cur,  (7)

where ks and kd are the joint stiffness and joint damping gains, and θcur and θ̇cur are the current joint angle and the current joint angular velocity in the dynamic simulation, respectively.
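The balance test (6) and the joint servo (7) can be sketched together in a few lines. In this sketch the pendulum length and BOS bounds are illustrative assumptions, while the PD gains are the values reported in Sect. 6:

```python
import math

def is_balanced(d, v, bos_min, bos_max, L=0.9, g=9.81):
    """Stability test of Eq. (6): the extrapolated COM position d + v*omega,
    with omega^2 = L/g, must lie inside the base of support (BOS).
    The pendulum length L and the BOS bounds are illustrative values."""
    omega = math.sqrt(L / g)
    x = [di + vi * omega for di, vi in zip(d, v)]
    return all(lo <= xi <= hi for xi, lo, hi in zip(x, bos_min, bos_max))

def pd_torque(theta_des, theta_cur, theta_dot, ks=400.0, kd=40.0):
    """Joint servo of Eq. (7); the gains are the values used in Sect. 6."""
    return ks * (theta_des - theta_cur) - kd * theta_dot

# A standing character with the COM over the support area is stable:
print(is_balanced((0.0, 0.0), (0.0, 0.0), (-0.1, -0.05), (0.1, 0.25)))  # True
# A large outward COM velocity drives the extrapolated COM out of the BOS:
print(is_balanced((0.05, 0.0), (1.5, 0.0), (-0.1, -0.05), (0.1, 0.25)))  # False
# At the target angle with zero angular velocity the corrective torque vanishes:
print(pd_torque(0.5, 0.5, 0.0))  # 0.0
```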
θ (t) is the desired joint angle to be tracked by the controller. In order to obtain the θ (t), the controller determines the final target pose of the character after the grasping. Then using the Jacobian based pseudoinverse, the controller applies


Fig. 3 Estimation of the minimum step length in the sagittal plane

Fig. 4 One squatting sample in a backward fall

inverse kinematics (IK) to get the angle of each joint in the final target pose. To resolve the IK redundancy, we employ the method proposed by Whitney [29]. By interpolating between the joint angle of the initial pose and the final target pose, the intermediate joint angles at any time can be calculated. After the character takes a grasping action successfully, the controller drives him to withdraw his two upper limbs from the object.
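The intermediate joint angles described above can be sketched as a simple per-joint interpolation between the initial pose and the IK-derived target pose (plain linear interpolation here, an illustrative simplification of the interpolation scheme):

```python
def theta_t(theta_init, theta_target, t, duration):
    """Interpolate each joint angle between the initial pose and the
    target pose; the phase t is clamped to [0, duration]."""
    s = max(0.0, min(1.0, t / duration))
    return [a + s * (b - a) for a, b in zip(theta_init, theta_target)]

# Halfway through a 0.4 s reach, each joint is halfway to its target:
print([round(x, 6) for x in theta_t([0.0, 1.0], [0.8, 0.2], t=0.2, duration=0.4)])  # [0.4, 0.6]
```

The resulting θ(t) is what the joint servo (7) tracks at each simulation step.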

Strategy 2: Protective stepping

Protective stepping is common and important in balance keeping. Humans can reduce the injury during a fall, or even avoid a fall, by stepping forward or backward. Fabio et al. [6] found that protective stepping increases the time interval between the hand impact and the pelvis impact. In our stepping controller, the final stepping pose is important for computing the desired joint angles. A stepping pose is mainly determined by the step length and direction. The step length helps the controller to account for the disturbance intensity. The stepping direction is consistent with the direction of Vcom after the collision. We create a 4-segment model in the sagittal plane to determine the minimum length of a forward step for keeping balance (backward stepping is similar), as Fig. 3 shows [21]. In Fig. 3, the left leg is the stance leg, while the right leg takes a forward step. The variable A is the horizontal distance from the ankle to the heel, Lfoot is the foot length, and Lstep is the minimum length of the step needed to absorb the extra forward energy. During the stepping process, w1 and w2 represent the work done by the two legs, see (8)–(9):

w1 = ∫ τ dθ over [cos⁻¹((Lfoot − A)/L), cos⁻¹((Dcom − Lfoot + A)/L)],  (8)

w2 = ∫ τ dθ over [π − cos⁻¹((Lfoot − A)/L), π − cos⁻¹(Lstep/(2L))].  (9)

Polynomial curve fitting is employed to obtain the relationship between the torque τ and the angular displacement θ of the ankle joint, as (10) shows [30]:

τ = −0.0024 θ³ − 0.091 θ² + 3.4 θ + 160.  (10)

After a successful stepping, the COM velocity of the character is reduced from Vcom to zero. As we solve for the minimum step length, the projection of the final COM position lies in the fore part of the right leg. The reduction in the kinetic energy ΔKE and the change in the potential energy ΔPE are calculated using (11)–(12):

ΔKE = −(1/2) m V²com / sin²(cos⁻¹((Dcom − Lfoot + A)/L)),  (11)

ΔPE = mgL [sin(π − cos⁻¹((Lfoot − A)/L)) − sin(cos⁻¹((Dcom − Lfoot + A)/L))].  (12)

According to the work–energy principle of Newtonian mechanics, the equation w1 + w2 + ΔKE + ΔPE = 0 holds. Given the values of Vcom and Dcom, the corresponding Lstep can be calculated. Based on Lstep, the final stepping pose is determined. Each joint angle can be calculated by an IK-based method. Then the θ(t) of each joint at any time can be obtained by interpolating between the initial joint angle and the final joint angle after stepping. Our controller uses (7) to drive each joint of the character.

Strategy 3: Squatting

Another method for reducing injury to the character is to absorb energy in the lower body during a fall [28], as occurs in squatting. By taking a squatting action, a character is able to substantially reduce the impact velocity during a fall caused by a sudden loss of balance. If the body contacts the ground with the trunk in an upright position, a reduction in the total loss of the potential energy of a fall can be achieved, and the effectiveness of the squatting action


Fig. 5 Breaking a fall, the first row shows a forward fall, while the second row shows a backward fall

may depend on the time during descent at which the response is initiated [5, 6]. The longer the response time is, the weaker the protective effect that can be achieved [28]. Figure 4 is a squatting sample during a backward fall generated by our system. Our controller drives each joint in the upper and lower limbs, moves them close to each other, and keeps the upper body in an upright pose to protect the character during a fall. In the design of the controller, a standard squatting pose is created in advance. Then, based on the direction and the magnitude of the contact force, our controller adjusts the standard squatting pose to generate the desired squatting pose. The calculation of θ(t) is similar to the method used in Strategy 1. Our controller also employs (7) to drive each joint to reach the desired angle of the squatting pose. When the characters fall forward, they can take advantage of the forward momentum and roll forward on the ground to reduce the injury to their bodies, as do gymnasts, athletes and martial arts experts. However, rolling forward requires special training, and most ordinary people cannot do it.

Strategy 4: Breaking a fall

Stretching out the arms to break a fall can also reduce the likelihood or the severity of injury by decreasing the impact velocity of the hip and avoiding impact near the hip [5]. Hsiao et al. [4] suggested that it is most effective if the hands touch the ground 50 ms before the hip impact. Therefore, the timing of the hand impact is important for the distribution of the impact force between the wrist and the shoulder. Figure 5 shows two samples of breaking a fall generated by our system. Our controller drives the two upper limbs of the character to break a fall. We create a standard pose of breaking a fall; see the last picture of the first row in Fig. 5.
There are two main factors that affect the final pose of breaking a fall: 1) the velocity and the direction of the fall; 2) the distance from the arms to the ground during the fall. If a fall is fast, the two upper limbs will be close to each other and give more protection. The direction of the fall determines the rotation angle of the whole body. If the character is already close to the ground when falling, his two upper limbs will be farther apart from each other.


Fig. 6 The motion transition process in our system

Our controller uses (7) to drive each joint of the two upper limbs to break a fall. In particular, when a character falls backward to the ground, besides taking a squatting action, he can choose to break the fall. In that case, the controller first rotates the upper body of the character to make him face the ground, and then uses the above control algorithm.
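The leg-work terms (8)–(9) of Strategy 2 are integrals of the ankle-torque fit (10), so they can be approximated numerically, e.g. with the trapezoidal rule. The integration bounds below are arbitrary illustrative angles, and the units of θ follow the fit in [30]:

```python
def tau(theta):
    """Ankle torque vs. angular displacement, the polynomial fit of Eq. (10)."""
    return -0.0024 * theta**3 - 0.091 * theta**2 + 3.4 * theta + 160.0

def work(theta_lo, theta_hi, n=1000):
    """Trapezoidal approximation of w = integral of tau dtheta, as in (8)-(9)."""
    h = (theta_hi - theta_lo) / n
    edges = 0.5 * (tau(theta_lo) + tau(theta_hi))
    inner = sum(tau(theta_lo + i * h) for i in range(1, n))
    return (edges + inner) * h

# Work over two adjacent intervals adds up, as an integral should:
w_total = work(0.0, 0.4)
w_split = work(0.0, 0.2) + work(0.2, 0.4)
print(abs(w_total - w_split) < 1e-6)  # True
```

With such a routine, the minimum step length Lstep can be found by searching for the root of w1 + w2 + ΔKE + ΔPE = 0 over candidate step lengths.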

5 Transition motion generation

After our SVM predicts the subset of the mocap database, our system matches the poses from the dynamic active simulation with the motion sequences in our database. In order to compute the distance for a pair of frames, a window of frames is extracted and compared. The distance metric D(fs, fm) is defined as the difference between the dynamic simulation and the mocap sequences. D(fs, fm) uses the joint positions and the orientations of the poses in a small transition window that starts at fs0 and fm0. The size of the window equals the duration of a typical transition. Within the window, the distance function between frame fsi and frame fmi is defined as the weighted sum of distances between the positions and the orientations of the corresponding body parts [31], as (13) shows:

D(fs, fm) = Σ_{i=0}^{ws} Σ_{j=0}^{J} [ wpj ‖pj(fsi) − T(fs0, fm0) pj(fmi)‖ + wθj ‖θj(fsi) − T(fs0, fm0) θj(fmi)‖ ].  (13)

In (13), ws is the size of the transition window, and J is the number of character joints. pj(fsi) and θj(fsi) are the global position and orientation of joint j at frame fsi, respectively. T(fs0, fm0) is a coordinate transformation: based on the root of frame fs0, the root of frame fm0 is rotated by θx, θy, and θz around the x-, y-, and z-axes, respectively, and then translated so that the directions and positions of the character in frame fs0 and frame fm0 are the same. The weights wpj and wθj scale the linear and the angular distance of each body part. When we select wpj and wθj, two factors should be considered. One factor is the mass of each body part: obviously, a heavier body part has a larger weight. The other

494

Z. Pan et al. Table 1 Timing comparisons. 1) Original exhaustive search. 2) The ANN optimization search only. 3) The ANN optimization search together with the SVM prediction Return-to Type

Fig. 7 Time cost comparison between the ordinary search and the ANN optimization search

factor is the location of the body part in the skeleton rootedhierarchy, if one body part is close to the root, it has larger weight. When D(fs , fm ) is within the tolerance, the returnto sequence in the mocap database is selected, then the simulated motion should be smoothly blended into the target return-to motion sequence. In order to blend into the target motion smoothly, first of all, we must align the two motion sequence in the smooth transition window, and then we take a transformation to adjust the pose of the sequences in the window, and interpolating linearly for the root position. The rotation angle of each joint is interpolated by the slerping quaternion. The motion transition process is shown in Fig. 6. Usually, transition operation will break the motion constraints. For example, the feet of the character may slide on the ground or the toe tips may impale the ground. So we need the constraint to transition at the same time. In the system, we employ the method proposed by Lee et al. [32]. By detection and marking constraints of the footprints, our system generates the new constraints based on the two constraints of the original sequences. 6 Implementation and results To validate our approach, we designed a series of experiments to test the time cost and the robustness of our motion generation approach in the system. Compared with the previous dynamic response method proposed by Zordan et al. [18], a remarkable speed up in the motion synthesis efficiency and more realistic visual quality are achieved. All of our experiments run on an Intel Core2 1.86 GHz CPU and all the timings are in seconds. In the main PD controller (see (7)), which drives the characters to take the protective behaviors, the gain value ks = 400 Nm/rad, kd = 40 Nms/rad are used for all the joints. Joint limits are also considered. Our simulation time step is 0.005 s while the display time step is 0.040 s. 
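These time steps imply eight simulation substeps per displayed frame. A simplified single-joint sketch of such a fixed-step loop (semi-implicit Euler and unit inertia are assumptions for illustration, not the paper's articulated simulator):

```python
SIM_DT = 0.005      # simulation time step from Sect. 6 (s)
DISPLAY_DT = 0.040  # display time step (s)

def pd_torque(theta, theta_dot, theta_target, ks=400.0, kd=40.0):
    """PD joint torque with the gains used in Sect. 6."""
    return ks * (theta_target - theta) - kd * theta_dot

def advance_one_display_frame(theta, theta_dot, theta_target, inertia=1.0):
    """Run the simulation substeps covered by one displayed frame
    (semi-implicit Euler on a single hypothetical joint)."""
    substeps = round(DISPLAY_DT / SIM_DT)   # 8 substeps per frame
    for _ in range(substeps):
        accel = pd_torque(theta, theta_dot, theta_target) / inertia
        theta_dot += accel * SIM_DT
        theta += theta_dot * SIM_DT
    return theta, theta_dot
```

With these gains and unit inertia the joint is critically damped, so it settles onto the target within a fraction of a second of simulated time.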
Our database contains 191 recovery mocap sequences, which are about 20 thousand frames in total. In the approximate nearest neighbor optimization search, d = 96 and ε = 0.2. Figure 7 shows the experimental result
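A rough sketch of such an (1+ε)-approximate nearest-neighbor query, using scipy's kd-tree as a stand-in for the ANN library of Mount and Arya [23] (the synthetic feature data and the flattening of frames to 96-dimensional vectors are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic stand-in for the mocap frame features: ~20k frames, d = 96.
rng = np.random.default_rng(0)
features = rng.standard_normal((20000, 96))
tree = cKDTree(features)

# eps = 0.2 as in Sect. 6: the returned neighbor is within a factor
# (1 + eps) of the true nearest distance, which lets the search prune
# far more of the tree than an exact query.
query = features[123]
dist, idx = tree.query(query, k=1, eps=0.2)
```

Querying a point that is itself in the database must return that point, since no approximate answer can beat distance zero.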

Return-to Type        Ordinary Search    Optimization Search    +SVM
Leaning to Upright    1.665              0.705                  0.091
Getting Up Back       1.741              0.727                  0.096
Getting Up Front      1.745              0.729                  0.096
Getting Up Left       1.719              0.722                  0.094

of the time cost comparison between the ordinary search and our ANN optimization search on a mocap database of the same size, for matching a single frame. We trained the SVM on 600 samples, using the same key attributes as the input vector as described in Sect. 3.5. Table 1 summarizes the time cost of our optimization search compared with the original exhaustive search for the four typical return-to motion types. As Table 1 shows, the computational efficiency improves significantly, which makes our motion generation method suitable for interactive real-time applications such as computer games. The accompanying video shows a variety of examples illustrating the motion generation algorithm in our system, including several sample reactive motions in which the characters take protective actions as they fall to the ground. Figure 8 shows one character responding to a contact force from another character and then taking protective behaviors: 1) stepping backward; 2) a windmill-like action with the arms; 3) rotating the body to face the ground; 4) breaking the fall. Finally, the motion transitions to a mocap sequence for getting up. Figure 9 shows one character responding to several unexpected forces and then taking the combined protective behaviors. After he falls to the ground, the return-to mocap sequence is blended naturally with the active simulation motion.
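A rough sketch of the prediction step, using scikit-learn's SVC as a stand-in for the paper's SVM (the synthetic 8-attribute feature vectors and the two-class labels are assumptions; the real classifier maps the key attributes of Sect. 3.5 to a return-to motion type that selects the database subset):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the 600 training samples of Sect. 6.
rng = np.random.default_rng(1)
X = rng.standard_normal((600, 8))          # key-attribute feature vectors
y = (X[:, 0] > 0).astype(int)              # toy labels: two return-to types

clf = SVC(kernel="rbf").fit(X, y)

# The predicted class picks the subset of the mocap database to search.
subset_id = int(clf.predict(X[:1])[0])
```

Restricting the ANN search to the predicted subset is what reduces the per-query cost from the "Optimization Search" column to the "+SVM" column of Table 1.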

7 Discussion and conclusion

In this paper, we have introduced a novel approach to generating interactive character animation with protective behaviors in real time. In the generated motions, the characters take active and protective actions when they fall to the ground. Our research follows the continuing trend toward physically simulated characters that act like real humans. We have employed an approximate nearest neighbor optimization search and a parallel process to speed up the motion synthesis. In addition, an SVM is used to predict a subset

Fig. 8 Animation filmstrips. One example of the active and protective response to respond to a boxing impact (view left to right)

of our mocap database that is searched after the active simulation. The characters in our system can respond to unexpected contact forces and transition to return-to mocap sequences in real time. In the active simulation, our controller focuses on driving the characters to keep their balance in locomotion and, based on human biomechanics, to reduce the injury to their bodies during a fall. However, we have not considered protective poses in anticipation of an impending collision, so our characters cannot expect a contact in advance. Besides, our model of protective stepping handles only single steps and cannot be applied to cases in which multiple steps are needed. In addition, our system assumes that all characters have the same response time and sufficient protective experience, so the time interval in the passive simulation is fixed. Fabio et al. [6] pointed out that humans have different response times and support strengths, and therefore take protective actions according to their own characteristics; we should let users adjust these variables before the dynamic response to provide variety. Also, our approach does not yet work on non-flat surfaces. We leave these extensions for future work. Even with these limitations, our work achieves several novel advances in generating interactive and realistic character motion, and we look forward to continued progress in the future.

Fig. 9 Animation filmstrips. One example of the active and protective responses to unexpected forces (view left to right)

Acknowledgements Thanks to Annette Paul for the suggestions on SVM, Tiancheng Li for help with the paper writing, Williams Qierxi for his generous help and valuable discussion, and Geoff Wyvill and all the anonymous reviewers for helping us improve the work and its presentation. Thanks also to our laboratory colleagues for their helpful discussions and remarks. This research work is co-sponsored by the project "Intelligent Interaction and Navigation in VE" (Grant No. 08dz0580208), and is also funded by Intel, the 863 project (Grant No. 2006AA01Z303), and the NSFC (Grant No. 60533080).

References

1. Mandel, M.: Versatile and interactive virtual humans: Hybrid use of data-driven and dynamics-based motion synthesis. PhD thesis, Carnegie Mellon University (2004)
2. Arikan, O., Forsyth, D.A., O'Brien, J.F.: Pushing people around. In: Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '05), Los Angeles, California, July 29–31, 2005, pp. 59–66. ACM, New York (2005)
3. Tang, B.: Study on generating reactive motions for human animation. PhD thesis, Zhejiang University (2006)
4. Hsiao, E.T., Robinovitch, S.N.: Biomechanical influences on balance recovery by stepping. J. Biomech. 32(9), 1099–1106 (1999)
5. Sabick, M.B., Hay, J.G., Banks, S.A.: Active responses decrease impact forces at the hip and shoulder in falls to the side. J. Biomech. 32(9), 993–998 (1999)
6. Fabio, F., Stephen, N.R.: Reducing hip fracture risk during sideways falls: Evidence in young adults of the protective effects of impact to the hands and stepping. J. Biomech. 40(9), 2612–2618 (2007)
7. Hof, A.L., Gazendam, M.G.J., Sinke, W.E.: The condition for dynamic stability. J. Biomech. 38(9), 1–8 (2005)
8. Pai, Y.C., Patton, J.: Center of mass velocity-position predictions for balance control. J. Biomech. 30(4), 347–354 (1997)
9. Kovar, L., Gleicher, M., Pighin, F.: Motion graphs. ACM Trans. Graph. 21(3), 473–482 (2002)
10. Reitsma, P.S.A., Pollard, N.S.: Evaluating motion graphs for character navigation. In: Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Grenoble, France, August 27–29, pp. 89–98 (2004)
11. Heck, R., Gleicher, M.: Parametric motion graphs. In: Proceedings of the 2007 Symposium on Interactive 3D Graphics and Games (I3D '07), Seattle, Washington, April 30–May 2, pp. 129–136. ACM, New York (2007)
12. Faloutsos, P., van de Panne, M., Terzopoulos, D.: Composable controllers for physics-based character animation. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '01), pp. 251–260. ACM, New York (2001)
13. Fang, A.C., Pollard, N.S.: Efficient synthesis of physically valid human motion. ACM Trans. Graph. 22(3), 417–426 (2003)
14. Yin, K.K., Loken, K., van de Panne, M.: SIMBICON: Simple biped locomotion control. ACM Trans. Graph. (SIGGRAPH 2007) 26(3), Article 105 (2007). DOI:10.1145/1239451.1239556
15. Shin, H.J., Kovar, L., Gleicher, M.: Physical touch-up of human motions. In: The 11th Pacific Conference on Computer Graphics and Applications, pp. 194–203 (2003)
16. Shapiro, A., Pighin, F.: Hybrid control for interactive character animation. In: The 11th Pacific Conference on Computer Graphics and Applications, pp. 455–461 (2003)
17. Komura, T., Ho, E.S.L., Lau, R.W.H.: Animating reactive motion using momentum-based inverse kinematics. Comput. Animat. Virtual Worlds 16(3), 213–223 (2005)
18. Zordan, V.B., Majkowska, A., Chiu, B., Fast, M.: Dynamic response for motion capture animation. ACM Trans. Graph. 24(3), 697–701 (2005)
19. Wu, M., Ji, L.H., Jin, D.W., Wang, R.C., Zhang, J.C.: Recovery strategy from perturbations of the upper body during standing using mechanical energy analysis. J. Tsinghua Univ. (Sci. & Tech.) 43(2), 152–155 (2003) (in Chinese)
20. Kry, P.G., Pai, D.K.: Interaction capture and synthesis. ACM Trans. Graph. 25(3), 872–880 (2006)
21. Wu, M., Ji, L.H., Jin, D.W., Pai, Y.C.: Minimal step length necessary for recovery of forward balance loss with a single step. J. Biomech. 40(9), 1559–1566 (2007)
22. Yin, K., Coros, S., Beaudoin, P., van de Panne, M.: Continuation methods for adapting simulated skills. ACM Trans. Graph. 27(3), 1–7 (2008)
23. Arya, S., Mount, D.M.: Approximate nearest neighbor queries in fixed dimensions. In: Proceedings of the Fourth Annual ACM–SIAM Symposium on Discrete Algorithms, pp. 271–280 (1993)
24. Zordan, V.B., Macchietto, A., Medina, J., Soriano, M., Wu, C.C.: Interactive dynamic response for games. In: Proceedings of the 2007 ACM SIGGRAPH Symposium on Video Games (Sandbox '07), San Diego, California, August 4–5, pp. 9–14. ACM, New York (2007)
25. Treuille, A., Lee, Y., Popović, Z.: Near-optimal character animation with continuous control. ACM Trans. Graph. (SIGGRAPH 2007) 26(3), Article 7 (2007). DOI:10.1145/1239451.1239458
26. Miall, R.C., Weir, D.J., Wolpert, D.M., Stein, J.F.: Is the cerebellum a Smith predictor? J. Motor Behav. 25(3), 203–216 (1993)
27. Komura, T., Leung, H., Kuffner, J.: Animating reactive motions for biped locomotion. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 32–40 (2004)
28. Stephen, N.R., Rebecca, B., Jessica, M.: Effect of the "squat protective response" on impact velocity during backward falls. J. Biomech. 37(9), 1329–1337 (2004)
29. Whitney, D.E.: Resolved motion rate control of manipulators and human prostheses. IEEE Trans. Man-Mach. Syst. 10(2), 47–53 (1969)
30. Delp, S.L., Loan, J.P., Hoy, M.G., Zajac, F.E., Topp, E.L., Rosen, J.M.: An interactive graphics-based model of the lower extremity to study orthopaedic surgical procedures. IEEE Trans. Biomed. Eng. 37(8), 757–767 (1990)
31. Tang, B., Pan, Z.G., Zheng, L., Zhang, M.M.: Interactive generation of falling motions. Comput. Animat. Virtual Worlds 17(3–4), 271–279 (2006)
32. Lee, J., Shin, S.Y.: A hierarchical approach to interactive motion editing for human-like figures. In: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99), pp. 39–48. ACM, New York (1999)

Zhigeng Pan is a professor in the State Key Lab of CAD&CG, Zhejiang University. He is also the acting director of DEARC (Digital Entertainment and Animation Research Center) at Zhejiang University. He received his Bachelor's and Master's degrees from the Computer Department, Nanjing University, in 1987 and 1990, respectively, and his Doctor's degree from the Computer Science Department, Zhejiang University, in 1993. He is the Editor-in-Chief of the international journal The International Journal of Virtual Reality and the Co-Editor-in-Chief of the international journal Transactions on Edutainment. His research interests include distributed VR, multimedia, multi-resolution modeling, real-time rendering, virtual reality, visualization, and image processing.

Xi Cheng is currently a doctoral candidate in Computer Science at Zhejiang University. He received his Bachelor's degree in Computer Science at Chu-Kechen Honors College, Zhejiang University, in 2005. He has been a system analyst of computer software in China since 2005. His research interests include character animation, physical simulation, and 3D avatar modeling.

Wenzhi Chen is an associate professor in the Computer Science Department of Zhejiang University. He received his Bachelor's degree, Master's degree, and Ph.D. from the Computer Department, Zhejiang University, in 1992, 1999, and 2005, respectively. His main research interests include virtual reality, virtual environments, visualization, image processing, computer architecture, and operating systems.

Gengdai Liu is currently a doctoral candidate in the State Key Lab of CAD&CG, Zhejiang University. He received his Bachelor's degree in Information Engineering and his Master's degree in Systems Engineering from Xi'an Jiaotong University in 2002 and 2005, respectively. His research interests include virtual reality and character animation.

Bing Tang received his Doctor's degree in Computer Science from Zhejiang University in 2006. He received his Bachelor's degree in Mechanics and his Master's degree in Computer Science from Southwest Jiaotong University in 2000 and 2003, respectively. His research interests are character animation, physical simulation, mobile graphics, and multi-projector-based immersive virtual environments.
