Vis Comput (2009) 25: 479–485 DOI 10.1007/s00371-009-0343-3

ORIGINAL ARTICLE

Fragment-based responsive character motion for interactive games

Xi Cheng · Gengdai Liu · Zhigeng Pan · Bing Tang

Published online: 3 March 2009 © Springer-Verlag 2009

Abstract Fragment-based character animation has become popular in recent years. By stringing appropriate motion capture fragments together, such a system drives characters to respond to the user's control signals and generates realistic character motion. In this paper, we propose a novel, straightforward and fast method to build the control policy table, which selects the next motion fragment to play based on the user's current input and the previous motion fragment. During synthesis of the control policy table, we cluster similar fragments together to create several fragment classes. Dynamic programming is employed to generate training samples from recorded user control signals. Finally, we use a supervised learning routine to create the tabular control policy. We demonstrate the efficacy of our method by comparing the motions generated by our controller to those of the optimal controller and earlier controllers. The results indicate that although a reinforcement learning algorithm known as value iteration also creates a tabular control policy, it is more complex and incurs a much higher space–time cost when synthesizing the table. Our approach is simple but efficient, and is practical for interactive character games.

Keywords Motion fragment · Character control · Motion graph · Responsive character animation

X. Cheng · G. Liu · Z. Pan (✉) · B. Tang
State Key Lab of CAD&CG, Zhejiang University, Hangzhou 310058, China
e-mail: [email protected]

1 Introduction

Demand for realistic and responsive character animation is increasing in interactive games and other entertainment applications, and a considerable number of important advances have been achieved in recent years [1–3]. Motion capture is an efficient method for creating realistic character motion. However, editing motion capture data online so that characters respond to unpredictable control signals is still a challenge. Ordinary game engines enforce this reaction by translating and rotating the root of the character to obey the control signals. However, those engines cannot generate pose-level character animation, and the discrepancies between the displayed animation and the character root motion manifest as visual artifacts such as footskate [4]. This paper proposes a fast and automatic method to generate more responsive character motion according to the user's control signals. The system creates a control policy table to determine the next motion fragment, and strings short motion fragments together to generate a long motion sequence. In order to reduce the synthesis cost of the control policy table, we first use optimal control to determine the next fragment class to play based on one sample control-signal trace. Then we use a classification method borrowed from machine learning, which allows the system to predict the appropriate choice of the next fragment class and fill in the indexes of the control policy table. Figure 1 shows a screenshot of our game application. We evaluate the quality of the character motion generated by our method against earlier methods, and analyze the space–time cost of synthesizing the control policy table in our experiments.


Fig. 1 A user controls the character with a joystick in our game application

2 Related works

Motion capture is an effective way of producing realistic character animation. By selecting, concatenating and interpolating existing motions, numerous new animations can be created. A motion graph is often used to find the possible transitions between motions [5]. The motion graph approach builds a graph data structure from motion capture data and allows characters to follow a path that meets the requirements by using a branch-and-bound algorithm. Gleicher et al. proposed a method that emphasizes the connectivity of the motion graph and improves the efficiency of motion selection and transition generation [6]. A hierarchical motion graph was presented by Arikan and Forsyth, who employed a randomized search algorithm for synthesizing new motions that meet temporal and positional constraints [7]. Safonova and Hodgins described a discrete, reduced-space representation of human motion [8]. Their representation can be viewed as a combination of motion graphs and interpolation techniques, because the generated motion is an interpolation of two time-scaled paths through a motion graph. However, the above algorithms were not able to make characters responsive in interactive situations, and some visual artifacts were introduced (for example, in a transition from a forward walk to a jump), so they were not well suited for interactive character applications such as games. Heck and Gleicher proposed a method that augments the motion graph representation with parameterized motions [9]. Their method can produce high-quality, controllable motions. Reitsma and Pollard explored responsive animation using motion graphs [10]. They repeatedly sampled states and performed searches, measuring a mean lag time of over three seconds between user input and the resulting action. Creating fragment-based character motion is similar to using a motion graph [11]. In fragment-based character motion, the system segments an original motion capture sequence into several short pieces and rearranges them to meet the specified requirements.


All the motion segments are short, and each segment can transition to the others. McCann and Pollard improved a fragment-based animation generation method proposed by Schödl et al. [12]. They created a control model of the user and built a tabular policy-based controller offline. Their system then determined the next motion fragment to play online based on the previous motion fragment and the user's current control signal [4]. Nevertheless, their method incurred a very high space–time cost in synthesis of the control policy table. In comparison, our approach synthesizes the control policy more efficiently. Reinforcement learning has become popular in computer graphics research in recent years. Treuille et al. employed a low-dimensional reinforcement learning method that outperformed greedy methods on navigational problems while retaining low memory overhead [13]. Although the representation is compact, their method can solve difficult problems such as obstacle avoidance. Arikan et al. showed that motion synthesis from a graph structure can be cast as dynamic programming, and they presented an interactive motion synthesis algorithm in which the user can control qualitative properties of the synthesized motion as well as details [14]. Many character animation researchers have worked on classifying human activities for various purposes. Chai and Hodgins proposed an approach for low-cost motion capture [15]. They classified human motion from a small number of control signals captured by low-resolution video cameras and a small set of retro-reflective markers. Zordan et al. employed a machine learning classifier called the support vector machine (SVM) to quickly classify, online, the physical motion immediately following an impact among the set of examples in the database [16]. Their approach can compute the dynamic response to unanticipated interactions faster than real time, for interactive games and other online applications. Similar to their work, our method also uses an SVM to create the control policy table. In addition, there is a continuing trend toward more physically simulated characters, and several groups of researchers have investigated physically based responsive character motion [17–19]. Zordan et al. introduced a method that predicts the trajectory and computes the dynamic response with a rewinding mechanism, and then blends the physically generated motion with a reactive motion from the database [20].

3 The control policy table

Our responsive character motion is generated by creating a stream of motion fragments from our motion capture fragment database. The control policy table is two-dimensional and stores the indexes of the next motion fragment class.


Fig. 2 Overview of the character motion generation process in our system

This fragment class contains the motion fragments best suited to connect to the previous fragment and match the current input signal (see Fig. 2). In this section, we introduce the table generation process in detail. Two factors must be considered when selecting a motion fragment to connect to the previous fragment: (1) the smoothness of the transition between the two connected fragments; and (2) how closely the next fragment matches the current control signal. We build the control policy table before the motion generation process, so the transition quality of any two fragments can be calculated in advance. As McCann et al. observed, if the system knew exactly how the user would change the control signal in the future, the best possible choice of the next fragment could be made [4]. In our system, we record examples of the user's control signals, called "control signal traces," which are used as samples in the training process of our machine learning routine.

3.1 Fragment clustering

If we always selected the single best motion fragment in each situation, the generated responsive motion would lose some of the variation present in the motion fragment database, so our control policy table stores classes of good next fragments instead of single fragments. After determining the best motion fragment class to play next, our system chooses one motion fragment at random from this class to provide variety. We use a clustering method to achieve this classification. Formally, all the motion fragments in the database are divided into n classes {Q_1, ..., Q_n}. By choosing an appropriate class number n, we can obtain a good trade-off between the computing time of the clustering and the similarity of the motion fragments within a class. A fragment f consists of the poses {p_1, ..., p_m}. First, we choose the initial means of all the fragment classes, denoted {μ_1, μ_2, ..., μ_n}. If Dis(f, μ_k) is minimal for a certain k, then we assign f ∈ Q_k, where Dis(f, μ_k) is the distance between fragment f and μ_k, computed by summing the Euclidean distances between the bone tips of corresponding poses. Next, we recalculate the means of all the classes and repeat the above process until the classification stabilizes; at that point, the fragments in the same class are similar to each other. Our fragment classes include walking forward, walking backward, running forward, sidestepping left/right, sidestepping up/down, jumping forward, jumping up/down, standing, turning left/right in place, etc., together with the different variants of each.
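As a concrete illustration of this clustering step, the following Python sketch implements the assign-and-update loop with the bone-tip pose distance described above. It is a minimal sketch, not the authors' implementation: it assumes all fragments have been resampled to a common pose count so they can be averaged, and the k-means-style loop structure is inferred from the description.

```python
import numpy as np

def fragment_distance(f, mu):
    """Dis(f, mu): sum of Euclidean distances between the bone tips of
    corresponding poses in fragment f and class mean mu.
    Both have shape (num_poses, num_bones, 3)."""
    return np.sum(np.linalg.norm(f - mu, axis=2))

def cluster_fragments(fragments, n_classes, n_iters=50, seed=0):
    """k-means-style clustering of motion fragments into n_classes classes."""
    rng = np.random.default_rng(seed)
    fragments = np.asarray(fragments)  # (num_fragments, num_poses, num_bones, 3)
    # Initialize class means {mu_1, ..., mu_n} with randomly chosen fragments.
    means = fragments[rng.choice(len(fragments), n_classes, replace=False)].copy()
    labels = np.zeros(len(fragments), dtype=int)
    for _ in range(n_iters):
        # Assignment: put each fragment f into the class Q_k whose mean mu_k
        # minimizes Dis(f, mu_k).
        for i, f in enumerate(fragments):
            labels[i] = min(range(n_classes),
                            key=lambda k: fragment_distance(f, means[k]))
        # Update: recompute each class mean as the pose-wise average.
        for k in range(n_classes):
            members = fragments[labels == k]
            if len(members):
                means[k] = members.mean(axis=0)
    return labels, means
```

In practice the real fragments are 100–500 ms long and of unequal length, so a time-alignment or resampling step (glossed over here) would be needed before the pose-wise averaging.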

3.2 Obtaining training samples

The control signals in our system are combinations of a velocity and direction flag, a turning flag, and a jumping flag (see Fig. 2). Although the control signals actually live in a continuous, high-dimensional space, we simply select a set of points in this control space; see Sect. 4 for details. One control signal trace is recorded. We constrain the root of a character to follow the control signals, but the pose of the character is not animated. In this situation, optimal control is possible, because the system knows all the control signals. Dynamic programming is used to calculate the optimal fragment to respond to the control signals. For each fragment class Q, we choose a fragment f in Q as the representative of Q, and a fragment f_next in Q_next as the representative of Q_next, where Q_next is the next fragment class to follow the current fragment class Q. Then we build a table T, where T_i^Q gives the value of choosing fragment class Q at step i along the control path. T_i^Q is calculated in (1):

$$ T_i^Q = \max_{Q_{\mathrm{next}}} \bigl[ \mathrm{Quality}(Q, Q_{\mathrm{next}}, c_i) + T_{i+1}^{Q_{\mathrm{next}}} \bigr] \tag{1} $$

where c_i gives the control signal at step i along the control path. The value at the last step of table T is set to zero. In (1), Quality(Q, Q_next, c_i) is a measure of the quality of selecting a given next fragment class, calculated in (2):


$$ \mathrm{Quality}(Q, Q_{\mathrm{next}}, c_i) = \bigl(1 + \mathrm{Dis}(p_m, p_{1\_\mathrm{next}})\bigr)^{-1} \times \bigl(1 + \mathrm{ControlDis}(\mathrm{Ev}(f_{\mathrm{next}}), c_i)\bigr)^{-1} \tag{2} $$

where Dis(p_m, p_{1_next}) is the distance between the last pose of fragment f in class Q and the first pose of fragment f_next in class Q_next, computed by summing the Euclidean distances between bone tips. ControlDis(Ev(f_next), c_i) is a weighted Euclidean distance between the control vectors Ev(f_next) and c_i. The control vectors include the velocity, turning and jumping flags of the character. Ev(f_next) is the evident control of f_next, estimated from the center-of-mass motion of f_next. In (2), (1 + Dis(p_m, p_{1_next}))^{-1} can be seen as the smoothness of the transition between the two fragments f and f_next, with zero being unrealistic and one being pleasing to the eye. The term (1 + ControlDis(Ev(f_next), c_i))^{-1} can be seen as how closely the evident control of fragment f_next matches the control signal c_i, with zero corresponding to performing the wrong action and one to performing the correct action. This factorization takes both fragment transition quality and control quality into account. The optimal fragment class path {Q_0, ..., Q_n} can be obtained by selecting the maximum T_0^Q and stepping forward through time; see (3)–(4):

$$ Q_0 = \arg\max_Q T_0^Q \tag{3} $$

$$ Q_{i+1} = \arg\max_Q \bigl[ \mathrm{Quality}(Q_i, Q, c_i) + T_{i+1}^Q \bigr] \tag{4} $$
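To make the dynamic program concrete, here is a short Python sketch of (1)–(4). It is a sketch under stated assumptions: `pose_dist` and `control_dist` stand in for the Dis and ControlDis terms of (2), evaluated on the class representatives; the paper does not define these helpers at this granularity.

```python
import numpy as np

def quality(Q, Q_next, c, pose_dist, control_dist):
    """Eq. (2): product of transition smoothness and control match, each
    mapped into (0, 1]. pose_dist and control_dist are assumed helpers
    giving Dis(p_m, p_1_next) and ControlDis(Ev(f_next), c)."""
    smooth = 1.0 / (1.0 + pose_dist(Q, Q_next))
    match = 1.0 / (1.0 + control_dist(Q_next, c))
    return smooth * match

def optimal_class_path(classes, controls, pose_dist, control_dist):
    """Eqs. (1), (3), (4): backward DP over one recorded control trace,
    then a forward pass to read off the optimal fragment-class path."""
    n_steps = len(controls)
    # T[Q][i] is the value of choosing class Q at step i; last step is zero.
    T = {Q: np.zeros(n_steps + 1) for Q in classes}
    # Backward pass, eq. (1).
    for i in range(n_steps - 1, -1, -1):
        for Q in classes:
            T[Q][i] = max(quality(Q, Qn, controls[i], pose_dist, control_dist)
                          + T[Qn][i + 1] for Qn in classes)
    # Forward pass, eqs. (3)-(4).
    path = [max(classes, key=lambda Q: T[Q][0])]
    for i in range(n_steps - 1):
        Q_prev = path[-1]
        path.append(max(classes,
                        key=lambda Q: quality(Q_prev, Q, controls[i],
                                              pose_dist, control_dist)
                                      + T[Q][i + 1]))
    return path
```

The backward pass costs O(n_steps × |classes|²) evaluations of (2), which is why the table is built offline on a recorded trace rather than at run time.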

Fig. 3 Stitching of the two motion fragments

3.3 Generating the control policy table

We now obtain the next fragment class Q_{i+1} to play given the previous fragment class Q_i and the control signal c_i at the current step. Before our system controls the character online in response to the user's input, the decision of which fragment class to choose must be calculated in advance. Our control policy table contains |fragment classes| × |control signals| entries, because each decision depends only on the current control signal and the previous fragment class. We employ a machine learning algorithm, the SVM, to quickly predict the next fragment class and build the tabular control policy. The SVM operates by finding a partition of the input data space, and is used to fit functions that maximize the margin between the samples in a training set. Our training samples come from the data obtained in Sect. 3.2. Each training sample includes an input vector and a label of the next fragment class. The input vector contains the previous fragment class and the current control signal. Formally, we let TS = {(x_1, y_1), ..., (x_L, y_L)} be the training samples, where {x_i} are the input vectors with x_i ∈ {(Q, c)}, and {y_i} are the sample labels of the next fragment class with y_i ∈ {Q}. The SVM is created by solving the optimization problem shown in (5):

$$ \min_{\omega, b, \xi}\ \frac{1}{2}\omega^T\omega + a\sum_{t=1}^{L}\xi_t \quad \text{subject to}\quad y_t\bigl(\omega^T\phi(x_t) + b\bigr) \ge 1 - \xi_t,\ \xi_t \ge 0 \tag{5} $$

where ω is the weight vector to be adjusted and b is the bias term. By varying ω and b, the x_i are mapped to a higher-dimensional space in which separating partitions are found with maximal margins. The variable a > 0 is a weighted penalty on the error terms ξ_t. The product φ(x_i)^T φ(x_j) is defined to be the kernel function K(x_i, x_j); in our system, we use the radial basis kernel shown in (6):

$$ K(x_i, x_j) = \exp\bigl(-\gamma\,\|x_i - x_j\|^2\bigr) \tag{6} $$

where γ > 0 is a user-defined kernel parameter. After training the SVM, we feed each vector (Q, c) into the SVM, and the control policy table is generated. Because we train the SVM on samples of the user's control signals and employ dynamic programming to obtain the optimal control results, this policy table is high-quality and cheap to synthesize in both space and time.
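The following sketch shows how such a tabular policy could be produced with an off-the-shelf SVM (scikit-learn's SVC, which solves the soft-margin problem of (5) with the RBF kernel of (6)). The feature encoding of (Q, c) and the parameter values are illustrative assumptions; the paper does not specify them.

```python
import numpy as np
from sklearn.svm import SVC

def build_policy_table(training_inputs, training_labels, controls, num_classes,
                       a=10.0, gamma=0.1):
    """Train the SVM of eqs. (5)-(6) and fill the tabular control policy.

    training_inputs: (L, d) array of vectors encoding (prev class id, control
    signal); training_labels: (L,) array of next-class ids, from Sect. 3.2;
    controls: list of sampled control-signal points. The penalty a maps to
    SVC's C parameter, and gamma is the RBF kernel parameter of eq. (6);
    both values here are illustrative, not the authors' settings.
    """
    svm = SVC(C=a, kernel="rbf", gamma=gamma)
    svm.fit(np.asarray(training_inputs), np.asarray(training_labels))

    # The policy table has |fragment classes| x |control signals| entries;
    # each entry is the predicted id of the next fragment class.
    policy = np.empty((num_classes, len(controls)), dtype=int)
    for Q in range(num_classes):
        queries = np.array([[Q, *c] for c in controls])
        policy[Q] = svm.predict(queries)
    return policy
```

Once the table is filled, the trained SVM is no longer needed at run time; online control reduces to indexing `policy[Q, c_id]`.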

However, some direct transitions between two motion fragments introduce visual artifacts. For example, jumping immediately after a walking motion may induce footskate. To solve this problem, we insert a stand-in-place motion fragment between two motion fragments when they are inappropriate to connect directly. In addition, we perform a simple stitch between the two connected motion fragments (see Fig. 3). For translation, linear interpolation is employed; for rotations, our system interpolates using quaternion slerp, again employing a simple ease-in-ease-out (EIEO) weighting in time across the stitch.
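A minimal sketch of the stitch itself: linear interpolation for the root translation and quaternion slerp for joint rotations, under an EIEO weight. The smoothstep easing curve is an assumption, since the paper does not specify the exact EIEO function.

```python
import numpy as np

def eieo(t):
    """Ease-in-ease-out weight on [0, 1]; smoothstep is one common choice
    (the exact curve used in the paper is not specified)."""
    return t * t * (3.0 - 2.0 * t)

def slerp(q0, q1, w):
    """Spherical linear interpolation between unit quaternions q0, q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:              # flip to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:           # nearly parallel: fall back to normalized lerp
        q = q0 + w * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - w) * theta) * q0 +
            np.sin(w * theta) * q1) / np.sin(theta)

def stitch_pose(pos_a, pos_b, quats_a, quats_b, t):
    """Blend root translation linearly and joint rotations by slerp,
    with EIEO weighting across the stitch region (t in [0, 1])."""
    w = eieo(t)
    pos = (1.0 - w) * pos_a + w * pos_b
    quats = np.array([slerp(qa, qb, w) for qa, qb in zip(quats_a, quats_b)])
    return pos, quats
```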


Table 1 Summary of the synthesis time and the process memory usage of the four controllers

Type      Online   Synthesis time   Memory usage
Greedy    Yes      8 seconds        <10 Mbytes
SVM       Yes      2.5 minutes      10–20 Mbytes
Iterate   Yes      15 minutes       >100 Mbytes
Optimal   No       2 minutes        <10 Mbytes

Fig. 4 Control signals. The two circles represent the different velocities (running or walking), the left part is the jumping signal, and the right part is the turning signal

4 Implementation and results

We put the control policy table to work in the context of a simple game. The user operates a joystick to control the velocity/direction, turning and jumping of a character. A one-minute gameplay trace is recorded. The control signals are defined on 102 points in the control space (see Fig. 4), which are all the possible combinations of the velocities on two circles, the turning flag in {−1, 0, 1}, and the jumping flag in {0, 1}. Our motion fragments are drawn from motion sequences in the CMU motion capture database, selected to include walking, running, jumping, sidestepping, turning, etc. The fragments are from 100 to 500 ms long, and each fragment includes one cycle of action. The SVM is trained on 600 data examples. However, motion fragments segmented directly from the motion capture sequences may miss some specific control directions. To fill these gaps, we synthesize new motion fragments by interpolation and simple transformations of the original fragments. Our fragment pool includes both the original and the synthesized motion fragments, 1784 fragments in total. Based on several experimental results, we choose 97 as the fragment class number and divide all the fragments into 97 fragment classes. The control policy table of this experiment therefore contains 97 × 102 entries. To evaluate our controller (denoted SVM-model), we compare it with: (1) a greedy controller (Greedy-model), which ignores future control signals and only evaluates the quality of transition and control at the current step; (2) the controller proposed by McCann et al. (Iterate-model), which generates a tabular control policy using value iteration with a control model [4]; and (3) the optimal controller (Optimal-model), which knows all future control signals. We ran all timing tests on an Intel Core 2 1.86 GHz processor with 1 GB of memory. Synthesis of the Greedy-model costs only a few seconds, while synthesis of the Iterate-model needs over 10 minutes, and our SVM-model needs about a quarter of that time.
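As an aside on the implementation, the sketch below enumerates one plausible 102-point control space as described above and performs the online policy lookup that drives the controllers compared below. The 17 velocity/direction samples (eight directions on each of the two circles plus a zero-velocity point, 17 × 3 × 2 = 102) are an assumption chosen only to match the stated count; the paper does not spell out the sampling.

```python
import numpy as np

# Hypothetical sampling: 8 directions on each velocity circle (walk, run)
# plus one zero-velocity point gives 17 velocity samples; combined with
# turning in {-1, 0, 1} and jumping in {0, 1}: 17 * 3 * 2 = 102 points.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
velocities = [(0.0, 0.0)]                      # standing
for speed in (1.0, 2.5):                       # illustrative walk/run speeds
    velocities += [(speed * np.cos(a), speed * np.sin(a)) for a in angles]

controls = [(vx, vy, turn, jump)
            for (vx, vy) in velocities
            for turn in (-1, 0, 1)
            for jump in (0, 1)]
assert len(controls) == 102

def next_fragment(policy, classes, prev_class, raw_control, rng):
    """Online step: snap the joystick signal to the nearest sampled control
    point, look up the next fragment class in the policy table, and pick a
    random fragment from that class to provide variety."""
    diffs = np.array(controls, dtype=float) - np.asarray(raw_control, float)
    c_id = int(np.argmin(np.linalg.norm(diffs, axis=1)))
    next_class = int(policy[prev_class, c_id])
    members = classes[next_class]              # fragments in this class
    return members[rng.integers(len(members))], next_class
```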

Fig. 5 Comparison of the trace quality as a percentage of optimal

Run-time costs for the above three controllers are negligible, because motion fragment selection is simply a matter of table lookup. Although the Optimal-model is unsuitable for online control, we use it as a point of comparison, since it generates the highest-quality motion. We compare the quality of the motions generated by each controller with that of the Optimal-model controller, representing the results as a percentage of the Optimal-model. The quality values are calculated using (2) from the previous section. Results for the space–time costs of all the controllers are given in Table 1. From this table, we see that the Greedy-model has the lowest cost in both time and space, because it never considers future control signals. Although the Optimal-model and our SVM-model have approximately the same cost, the former cannot run online. The Iterate-model is the most expensive in space–time cost, so it becomes infeasible when the fragment data set grows much larger. Figure 5 shows the result of running the controllers on several individual traces. We tested eight different control traces. From Fig. 5, we see that the quality of the Greedy-model is the lowest (about 0.40 of the Optimal-model). The SVM-model has motion quality similar to the Iterate-model (0.84 vs. 0.88 of the Optimal-model). The Optimal-model is the best because all the control signals are known. These results strongly indicate that a user control model is useful.


Fig. 6 Animation filmstrips. Three examples of the responsive character motion from our system

To test the adaptability of our SVM-model and the Iterate-model, we change the user controlling the character in order to obtain different example traces. Each one-minute control trace again includes 600 control signals. The experimental results show that the Iterate-model achieves from 0.85 to 0.91 of the Optimal-model's motion quality (standard deviation = 1.7), while the SVM-model achieves from 0.81 to 0.90 (standard deviation = 4.1). This indicates that our user model fits the training data more closely, while the Iterate-model controller captures more features of the control space. Several sample animations appear in the accompanying video; Fig. 6 shows three of them.

5 Discussion and conclusion

In this paper, we describe an approach for generating pose-level character control motion online. The generated motions are both responsive and realistic, so the approach is suitable for character games and other online applications. We assemble motion fragments together in a high-quality manner, striking a good compromise between the transition quality of the fragments and the control quality of the response to user input signals. The control policy table is created from the user's control traces, considering all possible transitions, and a machine learning method is employed to speed up the synthesis of the control policy table. The experimental results show that our approach scales with the size of the motion fragment database. As our character responds to control signals within the time of one fragment, the selection of the fragment length is a trade-off between the synthesis time of the control policy table and the reaction time of the character. Our tabular control policy stores the indexes of a fragment class instead of a single motion fragment, so the final animation provides more variety.

In addition, most unnatural transitions between motion fragments are removed by inserting an appropriate intermediate motion fragment. The experimental results indicate that our SVM-based controller can run on a larger fragment database than the value-iteration-based controller proposed by McCann et al. [4]. However, there are several exciting directions for future research. First, we need to improve the adaptability of our method and achieve more stable motion quality when the game user changes; a better model of user control behavior may need to be developed. Second, to speed up the synthesis of the control policy table, a GPU-based method could be introduced to share the expensive load with the CPU. In addition, although our motion transition method is fast, a more sophisticated one could be introduced to improve the final quality of the output motion. It will also be interesting to combine physics-based techniques to make characters take protective actions when they face environmental hazards. There is a continuing trend toward generating more responsive and realistic character animation in real time. With techniques like the one described here, fragment-based character animation will become more satisfying to audiences. We hope to improve the responsiveness of characters and the sense of immersion in interactive games and other entertainment applications.

Acknowledgements The authors would like to thank Annette Paul for her suggestions on SVMs, Tiancheng Li for help with the writing, Williams Qierxi for valuable discussions, and all the anonymous reviewers for their many helpful comments. The motion capture data used in this research project was obtained from the CMU mocap database (mocap.cs.cmu.edu). This research is co-sponsored by the project "On-line Shanghai Expo" (grant No. 08dz0580208). In addition, this research is supported by NSFC (grant No. 60533080), Project 863 (grant No. 2006AA01Z303) and the Intel-University Cooperation Research Project.


References

1. Magnenat-Thalmann, N., Thalmann, D.: Virtual humans: thirty years of research, what next? Vis. Comput. 21(12), 997–1015 (2005)
2. Kry, P.G., Pai, D.K.: Interaction capture and synthesis. ACM Trans. Graph. 25(3), 872–880 (2006)
3. da Silva, M., Abe, Y., Popović, J.: Simulation of human motion data using short-horizon model-predictive control. Comput. Graph. Forum 27(2), 371–380 (2008)
4. McCann, J., Pollard, N.: Responsive characters from motion fragments. ACM Trans. Graph. (SIGGRAPH 2007) 26(3), Article 6, 7 pages (2007). DOI:10.1145/1239451.1239457
5. Kovar, L., Gleicher, M., Pighin, F.: Motion graphs. ACM Trans. Graph. 21(3), 473–482 (2002)
6. Gleicher, M., Shin, H.J., Kovar, L., Jepsen, A.: Snap-together motion: assembling run-time animations. In: Proceedings of the 2003 Symposium on Interactive 3D Graphics, I3D '03, Monterey, California, April 27–30, pp. 181–188. ACM, New York (2003)
7. Arikan, O., Forsyth, D.A.: Interactive motion generation from examples. ACM Trans. Graph. 21(3), 483–490 (2002)
8. Safonova, A., Hodgins, J.: Construction and optimal search of interpolated motion graphs. ACM Trans. Graph. 26(3), Article 106, 11 pages (2007). DOI:10.1145/1239451.1239557
9. Heck, R., Gleicher, M.: Parametric motion graphs. In: Proceedings of the 2007 Symposium on Interactive 3D Graphics and Games, I3D '07, Seattle, Washington, April 30–May 2, pp. 129–136. ACM, New York (2007)
10. Reitsma, P.S.A., Pollard, N.S.: Evaluating motion graphs for character navigation. In: Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Grenoble, France, August 27–29, pp. 89–98 (2004)
11. Pullen, K., Bregler, C.: Motion capture assisted animation: texturing and synthesis. In: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '02, San Antonio, Texas, July 23–26, pp. 501–508. ACM, New York (2002)
12. Schödl, A., Essa, I.: Machine learning for video-based rendering. Technical Report GIT-GVU-00-11, Georgia Institute of Technology (2000)
13. Treuille, A., Lee, Y., Popović, Z.: Near-optimal character animation with continuous control. ACM Trans. Graph. (SIGGRAPH 2007) 26(3), Article 7, 7 pages (2007). DOI:10.1145/1239451.1239458
14. Arikan, O., Forsyth, D.A., O'Brien, J.F.: Motion synthesis from annotations. ACM Trans. Graph. 22(3), 402–408 (2003)
15. Chai, J., Hodgins, J.K.: Performance animation from low-dimensional control signals. ACM Trans. Graph. 24(3), 686–696 (2005)
16. Zordan, V.B., Macchietto, A., Medina, J., Soriano, M., Wu, C.C.: Interactive dynamic response for games. In: Proceedings of the 2007 ACM SIGGRAPH Symposium on Video Games, Sandbox '07, San Diego, California, August 4–5, pp. 9–14. ACM, New York (2007)
17. Komura, T., Leung, H., Kuffner, J.: Animating reactive motions for biped locomotion. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST '04, Hong Kong, November 10–12, pp. 32–40. ACM, New York (2004)
18. Tang, B., Pan, Z.G., Zheng, L., Zhang, M.M.: Interactive generation of falling motions. Comput. Animat. Virtual Worlds 17(3–4), 271–279 (2006)
19. Yin, K.K., Loken, K., van de Panne, M.: SIMBICON: simple biped locomotion control. ACM Trans. Graph. (SIGGRAPH 2007) 26(3), Article 105, 10 pages (2007). DOI:10.1145/1239451.1239556
20. Zordan, V.B., Majkowska, A., Chiu, B., Fast, M.: Dynamic response for motion capture animation. ACM Trans. Graph. 24(3), 697–701 (2005)

Xi Cheng is currently a doctoral candidate in the State Key Lab of CAD&CG, Zhejiang University. He received his Bachelor's degree from Chu Kochen Honors College, Zhejiang University, in 2005. He has been a certified computer software system analyst in China since 2005. His research interests include character animation, physical simulation, and 3D avatar face modeling.

Gengdai Liu is currently a doctoral candidate in the State Key Lab of CAD&CG, Zhejiang University. He received his Bachelor's degree in Information Engineering and his Master's degree in Systems Engineering from Xi'an Jiaotong University, in 2002 and 2005, respectively. His research interests include virtual reality and character animation.

Zhigeng Pan is a Professor in the State Key Lab of CAD&CG, Zhejiang University. He is also the acting director of DEARC (Digital Entertainment and Animation Research Center) at Zhejiang University. He received his Bachelor's and Master's degrees from the Computer Department, Nanjing University, in 1987 and 1990, respectively, and his Doctoral degree from the Computer Science Department, Zhejiang University, in 1993. He is the Editor-in-Chief of The International Journal of Virtual Reality and Co-Editor-in-Chief of Transactions on Edutainment. His research interests include distributed VR, multimedia, multi-resolution modeling, real-time rendering, virtual reality, visualization and image processing.

Bing Tang received his Doctoral degree in Computer Science from Zhejiang University in 2006. He received his Bachelor's degree in Mechanics and his Master's degree in Computer Science from Southwest Jiaotong University, in 2000 and 2003, respectively. His research interests are character animation, physical simulation, mobile graphics, and multi-projector-based immersive virtual environments.
