J. Vis. Commun. Image R. 15 (2004) 446–466 www.elsevier.com/locate/jvci

Content-based retrieval for human motion data

Chih-Yi Chiu a, Shih-Pin Chao a, Ming-Yang Wu a, Shi-Nine Yang a,*, Hsin-Chih Lin b

a Department of Computer Science, National Tsing Hua University, Hsinchu 300, Taiwan, ROC
b Department of Information Management, Chang Jung Christian University, Tainan County 711, Taiwan, ROC

Received 5 June 2003; accepted 5 April 2004
Available online 14 August 2004

Abstract

In this study, we propose a novel framework for constructing a content-based human motion retrieval system. Two major components, indexing and matching, are discussed and their corresponding algorithms are presented. In indexing, we introduce an affine-invariant posture feature and propose an index map structure based on the posture distribution of the raw data. To avoid the curse of dimensionality, the high-dimension posture feature of the entire skeleton is decomposed into the direct sum of low-dimension segment-posture features of skeletal segments. In matching, the start and end frames of a query example are first indexed into index maps to find candidate clips from the given motion collection. Then the similarity between the query example and each candidate clip is computed through dynamic time warping. Experimental examples are given to demonstrate the effectiveness and efficiency of the proposed algorithms.
© 2004 Elsevier Inc. All rights reserved.

☆ This study was supported partially by the MOE Program for Promoting Academic Excellence of Universities under Grant No. 89-E-FA04-1-4 and by the National Science Council, Taiwan, under Grant NSC91-2213-E-309-002.
* Corresponding author. Fax: +886-3-5723694. E-mail address: [email protected] (S.-N. Yang).

1047-3203/$ - see front matter © 2004 Elsevier Inc. All rights reserved. doi:10.1016/j.jvcir.2004.04.004


1. Introduction

The investigation of human motion has recently drawn great attention from researchers in computer vision and computer graphics. Fruitful results can be found in many applications such as visual surveillance (Haritaoglu et al., 2000; Oliver et al., 2000), diagnosis and therapy for rehabilitation (Köhle et al., 1997; Meyer et al., 1997), athletic training (Chua et al., 2003; Davis and Bobick, 1998), person identification (BenAbelkader et al., 2002; Bobick and Johnson, 2001), animation generation (Arikan et al., 2003; Multon et al., 1999), user interfaces (Freeman et al., 1999; Lee et al., 2002), and so on. As tools and systems for producing and disseminating motion data improve significantly, the amount of human motion data grows rapidly. Therefore, an efficient approach to search and retrieve human motion data is needed.

In this study, we consider the indexing and matching problems of content-based human motion data retrieval. Suppose that human motion data are composed of a consecutive sequence of frames, as shown in Fig. 1. In the sequence, each frame corresponds to a posture, which is structured information about the human skeletal segments in 3D space. These skeletal segments move simultaneously from one frame to the next, forming multiple 3D motion trajectories as the sequence proceeds. The lengths of motion sequences vary depending on the application; in this study, we confine ourselves to motion data of relatively long sequences. For example, a 20 min Chinese martial art or aerobic performance has 72,000 frames at 60 Hz, and each frame contains about 20 feature values associated with the posture. Therefore, it is not an easy task to process a query of the form "given a motion clip (sub-sequence) Q as a query example, find clips whose human motion trajectories are similar to Q's" over such a long motion sequence.

Fig. 1. Human motion retrieval in a long-length sequence.


Although some content-based video retrieval systems support query by segmented object motion (see Fig. 2), their frameworks are not well suited to the above-mentioned motion sequences because of the difference in representation. This issue will be discussed in the next section. In addition, each skeletal segment has kinematic constraints due to the physical nature of the human body. We exploit these constraints on skeletal segments to design more efficient indexing and matching algorithms for human motion retrieval.

Given a sequence of human motion frames as our collection, each frame is regarded as a posture representing a hierarchical skeletal structure with a given affine-invariant feature vector. For each skeletal segment (e.g., the torso, arm, or leg), we construct an index map according to its segment-posture distribution through self-organizing map (SOM) clustering. In matching, when a motion clip is specified as a query example, its start and end frames (postures) are indexed according to these index maps. Similar frames (postures) are first sought by the proposed candidate searching algorithm, and then a dynamic time warping algorithm is used to find similar clips among the chosen candidates. In our experiments, motion capture data of Tai Chi Chuan, a traditional Chinese martial art, are used as the testing collection to show the effectiveness of the proposed framework.

The main contribution of this study is that we propose effective and efficient indexing and matching algorithms for content-based human motion data retrieval. Motion data of the entire skeleton are decomposed as the direct sum of individual segments. Therefore, the indexing problems of the respective skeletal segments are solved in low dimensions, and their solutions are integrated to search for candidates from the long motion sequence for final matching. Consequently, the time complexity of our approach will not grow exponentially with respect to the number of dimensions of the index space. Besides, our experimental results show that the use of the index map can indeed improve the retrieval performance.

Fig. 2. Segmented object motion retrieval.


The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 presents an overview of our framework. Sections 4 and 5 detail the indexing and matching algorithms, respectively. Section 6 describes our experimental design and evaluates the retrieval results. Section 7 concludes our study and gives future directions.

2. Related work

In the following, we review the background related to our work, including content-based video retrieval and human motion analysis. Since we restrict our attention to indexing and matching in human motion retrieval, this survey omits computer vision issues such as object segmentation and tracking.

2.1. Content-based video retrieval

Video is becoming a key element in the next generation of multimedia computing and communication. The rapid growth of video data has raised significant interest in developing efficient video search tools based on visual features, including color, texture, shape, motion, etc. In the past decade, several content-based video retrieval (CBVR) frameworks (Ardizzone and Cascia, 1997; Deng and Manjunath, 1998; Jain et al., 1999; Lienhart et al., 2000; Naphade and Huang, 2001; Ponceleon et al., 1998; Shearer et al., 1996; Smoliar and Zhang, 1994) and review papers (Ahanger and Little, 1996; Antani et al., 2002; Brunelli et al., 1999; Idris and Panchanathan, 1997) have been published in numerous conferences and journals. These surveys point out that the first step in CBVR is to segment a video sequence into shots. These shots are the elemental units for indexing and matching. To reduce the manual effort in parsing, shot boundaries can be detected automatically by finding both abrupt transitions (e.g., cuts) and gradual transitions (e.g., fades, dissolves, and wipes). After shot detection, the above-mentioned visual features are extracted from a video shot as the shot representation.

Among these visual features, motion provides a noticeable visual cue for the spatiotemporal relations in a video sequence. To represent the motion feature, MPEG-7, the new standard for a multimedia content description interface, defines four motion descriptors: motion activity, camera motion, motion trajectory, and parametric motion (Jeannin and Divakaran, 2001). While many CBVR systems use motion activity to model the global motion pace of a video shot, a number of systems (Chang et al., 1998; Dimitrova and Golshani, 1995; Dağtaş et al., 2000; Nabil et al., 2001; Sahouria and Zakhor, 1999) utilize the motion trajectory to model more complex spatiotemporal relations. For example, Dimitrova and Golshani (1995) introduced three hierarchical analysis levels to recover motion information in digital video. At the low level, motion is traced by detecting MPEG macroblocks and motion vectors. Macroblock trajectories are then aggregated to extract rigid and nonrigid object motion at the intermediate level. At the high level, object motion is associated with semantic representations of


domain-dependent activities. Chang et al. (1998) built an object-based representation in their video retrieval system, VideoQ. The motion trajectory of an object is denoted as a list of translation vectors. Users can sketch an object together with a trajectory to formulate the query. The trajectories of two objects are matched based on either spatial or spatiotemporal relations. Sahouria and Zakhor (1999) developed a CBVR system to index surveillance video based on object trajectories. The object trajectory is represented by wavelet transforms, and its coarse-scale component is employed for indexing and matching. Dağtaş et al. (2000) proposed two motion retrieval models: the trajectory model for temporally absolute retrieval and the trail model for temporally invariant retrieval. They also described an efficient search technique, based on the fast Fourier transform, for the spatially absolute and temporally absolute retrieval case. Nabil et al. (2001) presented a graph structure to model and retrieve video scenes. Graph nodes denote moving objects in a scene, whereas graph edges denote spatiotemporal relationships among these objects. A set of operations for a moving object algebra is also defined to support various and flexible query forms.

The aforementioned motion retrieval frameworks mainly focus on a single object trajectory while giving less attention to multiple object trajectories. However, human motion, which contains multiple trajectories of skeletal segments, corresponds to a high-dimension spatiotemporal feature vector. Naïve table-lookup or tree-based techniques for high-dimension indexing are computationally expensive. To overcome the "curse of dimensionality," Li et al. (1998) and Sundaram and Chang (1999) proposed algorithms to decompose a high-dimension feature vector into several low-dimension ones. Thus the retrieval time for a given query can be reduced by searching in low-dimension spaces. Another crucial issue is that a consecutive human motion sequence may be very long. While most video retrieval systems segment a video sequence into small shots and perform matching at the shot level (see Fig. 2), a long motion sequence is generally not segmented because there is no abrupt transition (e.g., cuts, fades, etc.); it can be regarded as a very long shot. Therefore, in this study, we consider human motion retrieval as the process of finding sub-sequences that are similar to the query example in a given long motion sequence (see Fig. 1). In other words, the multiple trajectories and long length of a motion sequence are the two major issues that distinguish this study from conventional studies of motion in video.

2.2. Human motion analysis

Human motion analysis is widely discussed in computer vision and computer graphics. In computer vision, an analysis system involves two major tasks: motion tracking and motion recognition. First, the system tracks human motion trajectories in a sequence of frames. Then these trajectories are recognized by the system in order to understand what actions are performed by the human. There are three well-known techniques for the motion recognition task: template matching, hidden Markov models, and dynamic time warping (Moeslund and Granum, 2001; Wang et al., 2003).


The notion of template matching is simple: compare the input pattern with pre-stored patterns in the database. The computational cost of template matching is lower than that of the other two techniques; however, it is more sensitive to variance in movement duration. Hidden Markov models (HMMs) are nondeterministic state machines used to model time-varying data. Given an input, HMMs move from state to state according to transition probabilities. With their sound stochastic state-space framework, HMMs are widely applied in human motion recognition. Although this technique can overcome the above-mentioned disadvantage of template matching, selecting proper HMM parameters for training is still a difficult problem. On the other hand, dynamic time warping (DTW) is a template-based dynamic programming matching technique. Even if the lengths of two patterns are different, DTW can compensate for the length difference while preserving ordering. DTW has the advantages of robust performance and simple computation when matching patterns of discrepant lengths. In this study, we apply the DTW technique to match human motion trajectories.

In computer graphics, the notion of motion similarity has been used for synthesizing new motions. Li et al. (2002) regarded motion sequences as texture images and proposed a stochastic process to model motion texture. Existing similarity measures for texture can be employed to compare two motion sequences. Kovar et al. (2002) constructed motion graphs by computing similarities of joint positions and velocities among motion clips. Human motion is then synthesized by traversing the motion graphs. Arikan et al. (2003) presented an effective annotation mechanism to connect motion clips with vocabularies through interactive revisions of support vector machines (SVMs). The user can specify words from these vocabularies, and the proposed system then picks the corresponding clips to generate the desired motion. Although both computer vision and computer graphics researchers have proposed several different matching algorithms, few indexing algorithms have been discussed in the literature.

3. Framework overview

The organization of our framework is shown in Fig. 3. We divide the framework into two parts, namely, indexing and matching. The indexing part is done offline, and the matching part is performed interactively for each user query. The main features of each part are highlighted as follows:

Indexing. Given a sequence of human motion frames as our collection, we first extract a posture vector that is invariant to affine skeleton transformations to represent the entire skeleton in each motion frame. The posture is decomposed into several segment-postures of the major skeletal segments. Then, for each skeletal segment, an index map is constructed through self-organizing map clustering according to its segment-posture distribution. The index map, which preserves the topological property of the data, facilitates the search for similar postures and improves the retrieval performance.

Matching. Given a clip as a query example, the posture vector of the query example is first divided into several segment-posture vectors.


Fig. 3. Framework overview.

Then, by indexing these segment-postures in their respective index maps and integrating the index information to search for candidate clips, the time cost does not grow exponentially despite the multiple trajectories and long length of the motion sequence. The trajectory similarity between the query example and each candidate clip is computed through dynamic time warping, which is robust for matching two patterns of discrepant lengths. In the following two sections, we detail the indexing and matching algorithms of the proposed framework.

4. Indexing

Suppose that a collection of human motion frames is given. To achieve efficient and effective retrieval, a well-structured index mechanism should be developed. The first step is to select appropriate features to represent the spatiotemporal information. To retrieve similar human motion trajectories regardless of their absolute positions, orientations, and skeleton sizes, a feature representation that is invariant to skeleton translation, rotation, and scaling is proposed. Based on the proposed posture representation, we then cluster the motion frames in the given collection through self-organizing map clustering to construct the index structure. In this section, we detail both the posture representation and the construction of the index structure.

4.1. Posture representation

We define a simplified human skeleton model according to the H-Anim (Web3D working group on humanoid animation, 1999) and MPEG-4 body definition parameters (BDPs) (MPEG-4 overview, 2002) standards. The human skeleton in a frame


consists of nine skeletal segments and a root orientation, as shown in Fig. 4. The skeletal segments are the torso, the upper arms and legs, and the lower arms and legs, while the root orientation is a unit vector starting at the root joint and pointing in the torso's facing direction. Note that the information about the human skeleton can be obtained either by object segmentation and tracking from video or by using motion capture devices (Parent, 2002). In this study, we use the VICON 8 motion capture system to collect our motion data. In motion capture, the performer is typically instrumented in some way, for example, markers are attached to every skeletal segment, so that key feature points (e.g., joints and end effectors) can be easily located and tracked. The nine skeletal segment positions and the root orientation can then be computed from these feature point locations. For details, readers can refer to Parent (2002).

To achieve invariance to affine body transformations, the segment-posture of each skeletal segment is represented in local spherical coordinates relative to the root orientation. Fig. 5 illustrates the segment-posture representation of a skeletal segment.

Fig. 4. The human skeleton in a frame.

Fig. 5. Feature extraction of a skeletal segment vector $\vec{ov}$.


Let $\vec{ov}$ be a skeletal segment vector at joint $o$, and let $\vec{or}$ be the vector starting at $o$ and equivalent to the root orientation vector. Suppose $\pi$ is the plane passing through the joint $o$ and parallel to the floor (e.g., the XZ plane). Let the projections of $\vec{ov}$ and $\vec{or}$ on $\pi$ be $\vec{ov}'$ and $\vec{or}'$, respectively. Then $\theta$ and $\phi$, the spherical coordinates of $v$, are measured in radians counterclockwise from $\vec{or}'$ to $\vec{ov}'$ and from $\vec{ov}$ to $\vec{N}$, respectively, where $\vec{N}$ is the normal vector of $\pi$ at $o$. Consequently, the segment-posture of the skeletal segment is represented as a 2D vector $(\theta, \phi)$. Note that $\theta$ lies in the interval $[0, 2\pi)$ and wraps around, whereas $\phi$ lies in the interval $[0, \pi]$.

We use the following notation in later paragraphs. Let $X$ be the sequence of human motion frames in our collection. For the $i$th frame $F_i \in X$, suppose its skeletal segments are indexed by positive integers $j = 1, 2, \ldots, 9$; then $F_i$ is represented by $(B_i^{(1)}, B_i^{(2)}, \ldots, B_i^{(9)})$, where $B_i^{(j)} = (\theta_i^{(j)}, \phi_i^{(j)})$ is the segment-posture vector of the $j$th skeletal segment. After being extracted, these segment-posture vectors are used to construct the index structure for each skeletal segment. The index mechanism is presented in the next subsection.
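To make the feature concrete, the following Python sketch computes $(\theta, \phi)$ for one skeletal segment. It is a minimal illustration under our own conventions (NumPy arrays, Y as the vertical axis, a hypothetical function name), not code from the original system; the sign of the counterclockwise $\theta$ depends on the handedness of the coordinate system.

    import numpy as np

    def segment_posture(ov, root_orientation):
        """Compute the (theta, phi) segment-posture of one skeletal segment.

        ov               : 3D segment vector at joint o; the Y axis is vertical.
        root_orientation : 3D root orientation vector at the same joint.
        Returns (theta, phi), with theta in [0, 2*pi) and phi in [0, pi].
        """
        # theta: angle between the projections of ov and the root orientation
        # onto the floor plane (the XZ plane through o).
        theta = (np.arctan2(ov[2], ov[0])
                 - np.arctan2(root_orientation[2], root_orientation[0]))
        theta = theta % (2.0 * np.pi)                        # wrap into [0, 2*pi)

        # phi: angle between ov and the floor-plane normal N = (0, 1, 0).
        n = np.array([0.0, 1.0, 0.0])
        cos_phi = np.dot(ov, n) / np.linalg.norm(ov)
        phi = float(np.arccos(np.clip(cos_phi, -1.0, 1.0)))  # in [0, pi]
        return theta, phi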

4.2. Index map construction

For each skeletal segment, we construct an index map through self-organizing map (SOM) clustering (Duda et al., 2001). The index map represents the distribution of segment-postures in the given motion collection such that similar segment-postures are mapped into the same or neighboring clusters. By preserving the topological property of the data, such maps ease the search for similar segment-postures and improve the retrieval performance.

Since each frame of motion data corresponds to an entire skeletal posture, for convenience, we consider the collection of motion data as a set of segment-posture sequences. Suppose that the segment-posture sequence of the $j$th skeletal segment is denoted as the set $P^{(j)} = \{B_i^{(j)} \mid i = 1, 2, \ldots, n\}$, where $B_i^{(j)}$ is the $j$th segment-posture vector of frame $F_i$ and $n$ is the number of frames in our motion collection $X$. The index map of the $j$th skeletal segment is denoted as $M^{(j)} = \{C_k^{(j)} \mid k = 1, 2, \ldots, m\}$, where $C_k^{(j)}$ is the $k$th cluster center and $m$ is the number of cluster centers. Our goal is to train the index map $M^{(j)}$ according to $P^{(j)}$. Cluster centers in the index map are first arranged as fixed-grid points. Then a SOM training algorithm is applied to adjust these cluster centers. The construction algorithm is described as follows.

Algorithm 1. Index Map Construction.
Input: $P^{(j)}$, the segment-posture sequence of the $j$th skeletal segment.
Output: $M^{(j)}$, the index map of the $j$th skeletal segment.
Step 1. Arrange $C_k^{(j)}$, $k = 1, 2, \ldots, m$, evenly in the $\theta$–$\phi$ space.
Step 2. Permute $B_i^{(j)} \in P^{(j)}$, $i = 1, 2, \ldots, n$, at random to form an input sequence $S = \{B'_1, B'_2, \ldots, B'_{idx}, \ldots, B'_n\}$, where $B'_{idx} \in P^{(j)}$. Initialize the index number of the input sequence, $idx = 1$.
Step 3. For the input segment-posture $B'_{idx}$, compute the nearest cluster center $C_{k^*}^{(j)}$ according to the following function:

$k^* = \arg\min_k \|B'_{idx} - C_k^{(j)}\|$,

where $\|\cdot\|$ denotes the Euclidean distance.
Step 4. Find the $\delta$-neighborhood $NB_\delta(C_{k^*}^{(j)})$ as the set of cluster centers within the $\delta$-neighborhood of $C_{k^*}^{(j)}$, where $\delta$ is the neighborhood radius. Then update every cluster center $C_k^{(j)}$ in $NB_\delta(C_{k^*}^{(j)})$ according to:

$\Delta C_k^{(j)} = \eta (B'_{idx} - C_k^{(j)})$,    (1)

where $\eta$ is a small positive learning rate.
Step 5. Decrease the neighborhood radius $\delta$ and the learning rate $\eta$ for the next iteration. (In our case, $\delta$ and $\eta$ are decreased by constant amounts $\Delta\delta$ and $\Delta\eta$, respectively.) If $\eta$ is smaller than a given threshold value, then stop; otherwise set $idx = (idx \bmod n) + 1$ and go to Step 3.
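For concreteness, a compact Python sketch of Algorithm 1 follows. It is a minimal illustration under stated assumptions: the grid size and learning schedule are placeholder values rather than the paper's settings, and the wrap-around of $\theta$ is ignored in the distance computation for brevity.

    import numpy as np

    def build_index_map(P, grid=(24, 13), eta=0.5, d_eta=1e-4,
                        delta=3.0, d_delta=1e-3, eta_min=1e-3):
        """Train a SOM index map for one skeletal segment (Algorithm 1).

        P : (n, 2) array of segment-postures (theta, phi).
        Returns M, an (m, 2) array of trained cluster centers.
        """
        rng = np.random.default_rng(0)
        # Step 1: arrange cluster centers evenly in the theta-phi space.
        thetas = np.linspace(0.0, 2.0 * np.pi, grid[0], endpoint=False)
        phis = np.linspace(0.0, np.pi, grid[1])
        M = np.array([[t, p] for t in thetas for p in phis])
        # Grid coordinates of each center, used to measure the neighborhood.
        G = np.array([[i, j] for i in range(grid[0]) for j in range(grid[1])],
                     dtype=float)

        # Step 2: present the postures in random order, cycling through P.
        order = rng.permutation(len(P))
        idx = 0
        while eta > eta_min:                    # Step 5's stopping rule
            b = P[order[idx]]
            # Step 3: nearest cluster center (plain Euclidean distance).
            k_star = int(np.argmin(np.linalg.norm(M - b, axis=1)))
            # Step 4: update all centers in the delta-neighborhood, Eq. (1).
            nb = np.linalg.norm(G - G[k_star], axis=1) <= delta
            M[nb] += eta * (b - M[nb])
            # Step 5: shrink the radius and the learning rate, then advance.
            delta = max(delta - d_delta, 0.0)
            eta -= d_eta
            idx = (idx + 1) % len(P)
        return M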

Fig. 6. (A) Initial cluster centers; (B) the segment-posture distribution of the left lower arm; (C) the segment-posture distribution of the left lower leg; (D) the index map of the left lower arm; and (E) the index map of the left lower leg.


We give an example to illustrate the above algorithm in Fig. 6. In Step 1, 24 × 13 fixed-grid points are chosen as the initial cluster centers in the $(\theta, \phi)$ space (see Fig. 6A). In Step 2, since the distribution of the given motion data collection is not homogeneous (see Figs. 6B and C), we permute the input sequence of motion data at random to prevent bias in constructing the index map. When an input segment-posture is presented, we compute its Euclidean distance to each cluster center to find the nearest one. Then the nearest cluster center and its neighboring cluster centers are updated according to Eq. (1). To achieve better convergence, in Step 5, the learning rate and the size of the neighborhood set are decreased gradually at each iteration. Figs. 6B and C show the $(\theta, \phi)$ distributions of the raw data of the left lower arm and the left lower leg, respectively. Figs. 6D and E show their corresponding index maps after training. Note that the dashed lines indicate that the index map wraps around in the $\theta$ dimension.

After the index map $M^{(j)}$ is constructed, the $j$th segment-posture vector $B_i^{(j)} \in P^{(j)}$ is indexed by the nearest cluster center $C_{k^*}^{(j)}$ according to

$k^* = \arg\min_k \|B_i^{(j)} - C_k^{(j)}\|$.    (2)

A set of consecutive frames with similar segment-postures may be indexed to the same cluster center $C_{k^*}^{(j)}$. For convenience, we group such consecutive frames into a motion clip indexed by $C_{k^*}^{(j)}$. Therefore, the posture sequence $P^{(j)}$ can be segmented into motion clips, each quantized to a cluster center. To save computation time in matching, we represent motion trajectories at the clip level rather than at the frame level.
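As an illustration of Eq. (2) and the clip grouping, the short sketch below indexes every frame of one segment-posture sequence by its nearest cluster center and groups runs of identically indexed consecutive frames into clips; the helper name and return format are our own assumptions.

    import numpy as np
    from itertools import groupby

    def index_and_group(P, M):
        """Index segment-postures by Eq. (2) and group them into motion clips.

        P : (n, 2) segment-postures; M : (m, 2) trained cluster centers.
        Returns a list of (center_index, start_frame, end_frame) clips.
        """
        # Eq. (2): nearest cluster center for every frame at once.
        dists = np.linalg.norm(P[:, None, :] - M[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)

        # Group consecutive frames that share the same cluster index.
        clips, frame = [], 0
        for k, run in groupby(labels):
            length = len(list(run))
            clips.append((int(k), frame, frame + length - 1))
            frame += length
        return clips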

5. Matching

In this study, when a user specifies a clip (sub-sequence) as a query example Q, clips whose motion trajectories are similar to Q's are retrieved from the given motion collection. Since the number of frames in the collection is large and each frame contains many feature values (18 feature values in our case), searching for similar clips at an interactive rate is not a trivial task. To achieve this goal, we propose a novel matching mechanism. The query example is first indexed in the low-dimension index maps of its respective skeletal segments. By integrating the index results from these skeletal segments, candidate clips are then extracted from the given motion collection. Finally, we compute the similarity between the query example and each candidate clip through dynamic time warping. The matching mechanism is detailed in the following subsections.

5.1. Candidate clip searching

Based on the proposed index structure, an algorithm is developed to search for candidate clips in the given human motion collection for a query example. We illustrate the procedure in Fig. 7. Suppose that a clip is given as a query example. First, the start (end) frame of the query example is indexed into the index map of each skeletal segment to find the nearest cluster center and its neighboring cluster centers. If a frame in the given motion collection belongs to one of the found clusters, we mark the frame on its frame axis.


Fig. 7. Candidate clip searching.

That is, the segment-posture of a marked frame is similar to that of the start (end) frame of the query example. Next, we count the marked frequency of each frame in the given motion collection by totaling over the frame axes of all index maps. Frames whose marked frequencies are above a given threshold are denoted as start-candidates (end-candidates). Finally, candidate clips can be extracted according to the relative ordering of these start- and end-candidates.

Let the query example be denoted as $Q = \{f_i \mid i = start, \ldots, end\}$, where the $i$th frame $f_i$ is represented by $(b_i^{(1)}, b_i^{(2)}, \ldots, b_i^{(9)})$, $b_i^{(j)}$ is the segment-posture vector of the $j$th skeletal segment, and $start$ and $end$ are the indices of the first and last frames of Q, respectively. The searching algorithm is summarized as follows.

Algorithm 2. Candidate Clip Searching.
Input: The query example Q, the posture sequences $P^{(j)}$ of the given motion collection $X$, and the index maps $M^{(j)}$ of the nine skeletal segments, $j = 1, 2, \ldots, 9$.
Output: A set of candidate clips.


Step 1. Initialize a histogram array $H(1{:}n)$, where $H(i)$ stores the occurrence frequency of the $i$th frame and $n$ is the number of frames in $X$.
Step 2. Compute start-candidates for the $j$th skeletal segment, $j = 1, 2, \ldots, 9$.
Step 2.1. Compute the index of $b_{start}^{(j)}$ (the first frame of Q):

$k^*_{start} = \arg\min_k \|b_{start}^{(j)} - C_k^{(j)}\|$,

where $C_k^{(j)} \in M^{(j)}$. (/* Find the nearest cluster center in the index map $M^{(j)}$. */)
Step 2.2. Find the $\delta$-neighborhood $NB_\delta(C_{k^*_{start}}^{(j)})$ of $C_{k^*_{start}}^{(j)}$, where $\delta$ is the neighborhood radius. If the segment-posture of frame $F_i \in X$ indexes to a center in $NB_\delta(C_{k^*_{start}}^{(j)})$, set $H(i) = H(i) + 1$.
Step 3. If $H(i)$ is greater than a given threshold, mark the $i$th frame as a start-candidate in $X$. If several consecutive frames are marked as start-candidates, select only their median frame as the start-candidate.
Step 4. Find end-candidates (similarly to Steps 1–3).
Step 5. Find all candidate clips.
Step 5.1. Sort all frame indices of the start-candidates and end-candidates found in Steps 3 and 4.
Step 5.2. Sweep the sorted list and report clip $Y = (F_s, F_e)$ in $X$ as a candidate clip according to the following conditions, where $F_s$ and $F_e$ are a start-candidate and an end-candidate, respectively, and $s$ and $e$ are frame indices in $X$:
1. $s < e$.
2. No other start-candidate or end-candidate lies between $F_s$ and $F_e$.
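The sketch below illustrates the voting and sweeping steps of Algorithm 2 in Python. It is a simplified rendering under our own assumptions: the median rule of Step 3 is omitted, and the neighborhood of each cluster center is assumed to be precomputed as a set of cluster indices.

    import numpy as np

    def vote_candidates(frame_labels, query_labels, neighborhoods, threshold):
        """Steps 1-3: histogram voting over the nine index maps.

        frame_labels  : (9, n) cluster index of every collection frame, per segment.
        query_labels  : (9,) cluster index of the query's start (or end) frame.
        neighborhoods : neighborhoods[j][k] is the set of cluster indices in the
                        delta-neighborhood of center k on segment j's index map.
        Returns the indices of frames whose vote count exceeds the threshold.
        """
        n = frame_labels.shape[1]
        H = np.zeros(n, dtype=int)                          # Step 1
        for j in range(9):                                  # Step 2
            nb = list(neighborhoods[j][query_labels[j]])
            H += np.isin(frame_labels[j], nb).astype(int)
        return np.flatnonzero(H > threshold)                # Step 3

    def sweep_clips(starts, ends):
        """Step 5: pair start- and end-candidates with no candidate between."""
        events = sorted([(s, 0) for s in starts] + [(e, 1) for e in ends])
        clips, last_start = [], None
        for frame, is_end in events:
            if not is_end:
                last_start = frame                  # keep only the latest start
            elif last_start is not None:
                clips.append((last_start, frame))   # s < e, nothing in between
                last_start = None
        return clips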

In the searching algorithm, we treat the high-dimension index space as the direct sum of low-dimension ones to reduce the time complexity of the nearest neighbor search. Besides, the candidate clip searching algorithm avoids exhaustive search over the long motion sequence. These advantages effectively shorten the computation time in matching.

5.2. Dynamic time warping

In this study, we apply the dynamic time warping (DTW) technique (Parsons, 1986) to compute the similarity between the query example Q and a candidate clip Y. DTW is a well-known matching method in the fields of speech and motion recognition. Even if the lengths of two patterns are different, DTW can compensate for the length difference while preserving ordering.

Assume that the frames in Q and Y are indexed according to Eq. (2) and grouped into motion clips. We denote the motion clip lists of the $j$th skeletal segment as

$Q^{(j)} = q_1^{(j)}, q_2^{(j)}, \ldots, q_s^{(j)}, \ldots, q_l^{(j)}$,
$Y^{(j)} = y_1^{(j)}, y_2^{(j)}, \ldots, y_t^{(j)}, \ldots, y_m^{(j)}$,

where $q_s^{(j)}$ and $y_t^{(j)}$ are the $s$th and $t$th motion clips of $Q^{(j)}$ and $Y^{(j)}$, and $l$ and $m$ are the numbers of clips in $Q^{(j)}$ and $Y^{(j)}$, respectively.


To compute the difference between $Q^{(j)}$ and $Y^{(j)}$, that is, the $j$th segment-posture difference, we define the following equation:

$\mathrm{diff}(Q^{(j)}, Y^{(j)}) = \mathrm{cost}(l, m)$,

where $\mathrm{cost}(s, t)$ is a recursive function:

$\mathrm{cost}(1, 1) = \|q_1^{(j)} - y_1^{(j)}\|$,
$\mathrm{cost}(s, t) = D(s, t) + \min\{\mathrm{cost}(s, t-1),\ \mathrm{cost}(s-1, t-1),\ \mathrm{cost}(s-1, t)\}$,

where

$D(s, t) = \|q_s^{(j)} - y_t^{(j)}\|$

is the Euclidean distance in the $(\theta, \phi)$ space of the $j$th skeletal segment, $|s - t| \le R$, and $R$ is the maximum amount of warping. Finally, the similarity between Q and Y is calculated as the reciprocal of the weighted sum of all $\mathrm{diff}(Q^{(j)}, Y^{(j)})$:

$\mathrm{Sim}(Q, Y) = 1 \,/\, \sum_{j=1}^{9} w_j \, \mathrm{diff}(Q^{(j)}, Y^{(j)})$,

where $w_j$ is the weight of the $j$th skeletal segment. In this study, we assign the highest weight to the torso and the lowest weights to the lower arms and legs. This is because the movement of the torso also affects the upper and lower limbs, whereas the movement of the lower limbs does not affect the torso and the upper limbs. After computing the similarities between the query example and the candidate clips, we sort the similarities from high to low and retrieve the clips with the top $k$ similarities as the results.
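A minimal Python sketch of this matching stage follows; the recursion mirrors the cost function above with the warping constraint $|s - t| \le R$, while the function names and the weight vector are placeholders of our own.

    import numpy as np

    def dtw_diff(Q, Y, R=10):
        """diff(Q, Y) = cost(l, m) for one skeletal segment's clip lists.

        Q : (l, 2) and Y : (m, 2) arrays of clip centers in (theta, phi).
        R : maximum amount of warping, enforcing |s - t| <= R.
        """
        l, m = len(Q), len(Y)
        cost = np.full((l, m), np.inf)
        cost[0, 0] = np.linalg.norm(Q[0] - Y[0])
        for s in range(l):
            for t in range(m):
                if (s, t) == (0, 0) or abs(s - t) > R:
                    continue
                d = np.linalg.norm(Q[s] - Y[t])      # D(s, t)
                prev = min(cost[s, t - 1] if t > 0 else np.inf,
                           cost[s - 1, t - 1] if s > 0 and t > 0 else np.inf,
                           cost[s - 1, t] if s > 0 else np.inf)
                cost[s, t] = d + prev
        return cost[l - 1, m - 1]

    def similarity(Q_segments, Y_segments, weights):
        """Sim(Q, Y): reciprocal of the weighted sum of per-segment diffs."""
        total = sum(w * dtw_diff(q, y)
                    for w, q, y in zip(weights, Q_segments, Y_segments))
        return 1.0 / total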

6. Experimental results

We use motion capture data of Tai Chi Chuan, a traditional Chinese martial art, as our testing collection to show the effectiveness of the proposed framework. The collection contains 23,130 frames captured continuously over more than 10 min at a 30 Hz frame rate. The motion is performed by a professional martial art master. It is known that an entire performance of Tai Chi Chuan contains several similar or repeated motion clips. From the testing collection, a ground truth is established by manually choosing eight groups of clips; two clips are relevant if they belong to the same group. To verify the proposed framework, we implemented it in Matlab and tested it on an Intel Pentium 4 2.4 GHz computer with 512 MB memory. In the following subsections, several retrieval scenarios and the corresponding performance in terms of accuracy and computing time are discussed.

6.1. Retrieval scenarios

In this section, we demonstrate some retrieval results in Figs. 8–10 for various query examples. For the query example in Fig. 8A, Figs. 8B–H are the retrieved clips, displayed in descending order of similarity.


Fig. 8. (A) The query example; (B–H) the retrieved clips.

In these clips, a human skeleton is sketched every five frames. Red and green lines indicate the trajectories of the hands and feet, respectively. If a retrieved clip is relevant to the query example in our ground truth, we mark a cross in its top left corner. Note that even though the absolute body positions and orientations of these relevant clips differ from each other, the proposed framework retrieves them correctly. Figs. 9 and 10 show other query examples and their retrieval results.


Fig. 9. (A) The query example; (B–H) the retrieved clips.

6.2. Retrieval accuracy

The retrieval accuracy of the proposed framework is evaluated by the precision versus recall graph (PR graph) (Salton and McGill, 1983):

$\mathrm{precision} = \#\{\mathrm{relevant} \cap \mathrm{retrieved}\} \,/\, \#\mathrm{retrieved}$,
$\mathrm{recall} = \#\{\mathrm{relevant} \cap \mathrm{retrieved}\} \,/\, \#\mathrm{relevant}$,

where $\#\mathrm{retrieved}$ is the number of retrieved clips and $\#\mathrm{relevant}$ is the number of relevant clips. For comparison with the proposed indexing method, we also implement two other methods, namely, the fixed-grid and the k-d tree (Lu, 1999). In the proposed method, 312 cluster centers are trained through SOM clustering to construct the 2D index map of each skeletal segment.


Fig. 10. (A) The query example; (B–H) the retrieved clips.

In the fixed-grid method, however, 312 cluster centers are evenly distributed on the 2D index map of each skeletal segment without further training, as shown in Fig. 6A. In the k-d tree method, 312 cluster centers are trained through k-means clustering in the 18D space of the entire skeleton. A k-d tree structure is built on these cluster centers, and a nearest neighbor search algorithm is implemented for the k-d tree structure (Hjaltason and Samet, 1995).

Fig. 11 shows the PR graph for the three methods, where the X-axis denotes recall and the Y-axis denotes precision. For the PR graph, we choose the following window sizes as our samplings: 2, 4, 6, 8, and 10. The precision and recall are then calculated for each window size. For a given recall value, the precision of a higher curve is better than that of a lower curve (Salton and McGill, 1983). We observe that the retrieval accuracy of the proposed method is better than those of the fixed-grid and k-d tree methods. Further remarks based on this observation are given as follows:


Fig. 11. The PR graph for three indexing methods.

• For the fixed-grid method, the distribution of cluster centers does not reflect the posture distribution of the given motion collection. Some cluster centers are indexed by hundreds of posture vectors, while some are indexed by none. We consider that this imbalanced clustering impairs the retrieval accuracy.
• In the k-d tree method, we perform k-means clustering to arrange 312 cluster centers for the entire set of skeletal segments. In the proposed and fixed-grid methods, however, there are 312 cluster centers for each of the nine skeletal segments, and 2808 cluster centers in total for the whole body. It is known that using more cluster centers yields better retrieval accuracy. Therefore, the k-d tree method has worse accuracy than the other two methods.

6.3. Retrieval time

Another performance measure is the time cost of matching. The matching time consists of the searching time (for candidate clip searching) and the similarity computation time (for dynamic time warping). To compute the matching time, a group consisting of five relevant clips in our ground truth is first selected as the testing group. Then, given a query example from the testing group, we gradually enlarge the window size and perform retrieval. When all five relevant clips of the testing set are retrieved at a minimal window size, we record the matching time for that minimal window size. Table 1 lists the matching times of the three methods. We observe that the retrieval time of the proposed method is less than those of the fixed-grid and k-d tree methods. Based on Table 1, we give the following observations:


Table 1
The matching time cost for the three indexing methods

                                Proposed     Fixed-grid    k-d tree
Testing set size                5            5             5
Minimal window size             6            12            6
Searching time                  0.110 s      0.125 s       0.531 s
Similarity computation time     0.187 s      0.234 s       0.031 s
Total time                      0.297 s      0.359 s       0.562 s

• For the candidate clip searching, the k-d tree method spends much more time than the proposed and fixed-grid methods. This is because indexing (nearest neighbor search) in a high-dimension k-d tree is computationally expensive; the time complexity grows exponentially with the number of dimensions. The other two methods, however, index in low-dimension spaces, where the time complexity grows only linearly with the number of index spaces.
• For the similarity computation, the k-d tree method spends less time than the other two methods. This is because, in the k-d tree method, the DTW algorithm is processed once for the entire set of skeletal segments; no extra time is needed to integrate information from every skeletal segment.
• The matching time of the proposed method is slightly less than that of the fixed-grid method. In the fixed-grid method, since the imbalanced clustering impairs retrieval accuracy, retrieving all five relevant clips of the testing set requires a larger window size. Therefore, a longer retrieval time is needed to process the larger window size.

7. Conclusions and future directions

In this study, we propose a novel framework for constructing a content-based human motion retrieval system. The major components, namely, indexing and matching, are discussed and their corresponding algorithms are presented. In indexing, we introduce an affine-invariant posture representation and propose an SOM-based index map built according to the distribution of the raw data. In matching, the start and end frames of the query example are used to find candidate clips from the given motion collection. Then the similarity between the query example and each candidate clip is computed using the dynamic time warping algorithm. To avoid the curse of dimensionality, the high-dimension feature space of the entire skeleton is decomposed into the direct sum of low-dimension feature spaces of the skeletal segments. Moreover, several experimental tests show that the proposed indexing method performs better than the conventional fixed-grid and k-d tree methods in terms of both retrieval accuracy and computation time.

For future work, we will develop a friendlier user interface to assist users in specifying their queries. For example, we plan to develop a motion script language so that users can specify a motion example through motion scripts. Another interesting research direction is to extend our framework to motion synthesis by example (Arikan et al., 2003; Kovar et al., 2002; Lee et al., 2002; Li et al., 2002).


Acknowledgments The authors acknowledge Opto-Electronics & Systems Laboratories, Industrial Technology Research Institute and Martial Art Master Ling-Mei Lu for their assistance in capturing motion data of Tai Chi Chuan.

References

Ahanger, G., Little, T.D.C., 1996. A survey of technologies for parsing and indexing digital video. Journal of Visual Communication and Image Representation 7 (1), 28–43.
Antani, S., Kasturi, R., Jain, R., 2002. A survey on the use of pattern recognition methods for abstraction, indexing, and retrieval of images and video. Pattern Recognition 35 (4), 945–965.
Ardizzone, E., Cascia, M., 1997. Automatic video database indexing and retrieval. Multimedia Tools and Applications 4 (1), 29–56.
Arikan, O., Forsyth, D.A., O'Brien, J.F., 2003. Motion synthesis from annotations. ACM Transactions on Graphics 22 (3), 402–408.
BenAbelkader, C., Cutler, R., Davis, L., 2002. Person identification using automatic height and stride estimation. In: IEEE International Conference on Pattern Recognition, Quebec City, Canada, August 11–15.
Bobick, A.F., Johnson, A., 2001. Gait recognition using static activity-specific parameters. In: IEEE Computer Vision and Pattern Recognition, Kauai, Hawaii, December 8–14.
Brunelli, R., Mich, O., Modena, C.M., 1999. A survey on the automatic indexing of video data. Journal of Visual Communication and Image Representation 10 (2), 78–112.
Chang, S.F., Chen, W., Meng, H.J., Sundaram, H., Zhong, D., 1998. A fully automated content-based video search engine supporting spatiotemporal queries. IEEE Transactions on Circuits and Systems for Video Technology 8 (5), 602–615.
Chua, P.T., Crivella, R., Daly, B., Hu, N., Schaaf, R., Ventura, D., Camill, T., Hodgins, J., Pausch, R., 2003. Training for physical tasks in virtual environments: Tai Chi. In: IEEE International Conference on Virtual Reality, Los Angeles, CA, March 22–26.
Dağtaş, S., Al-Khatib, W., Ghafoor, A., Kashyap, R.L., 2000. Models for motion-based video indexing and retrieval. IEEE Transactions on Image Processing 9 (1), 88–101.
Davis, J.W., Bobick, A.F., 1998. Virtual PAT: a virtual personal aerobics trainer. In: Workshop on Perceptual User Interfaces, San Francisco, CA, November 5–6, pp. 13–18.
Deng, Y., Manjunath, B.S., 1998. NeTra-V: toward an object-based video representation. IEEE Transactions on Circuits and Systems for Video Technology 8 (5), 616–627.
Dimitrova, N., Golshani, F., 1995. Motion recovery for video content analysis. ACM Transactions on Information Systems 13 (4), 408–439.
Duda, R.O., Hart, P.E., Stork, D.G., 2001. Pattern Classification. John Wiley & Sons, New York.
Freeman, W.T., Beardsley, P.A., Kage, H., Tanaka, K.-I., Kyuma, K., Weissman, C.D., 1999. Computer vision for computer interaction. ACM SIGGRAPH Computer Graphics 33 (4), 65–68.
Haritaoglu, I., Harwood, D., Davis, L.S., 2000. W4: real-time surveillance of people and their activities. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (8), 809–830.
Hjaltason, G.R., Samet, H., 1995. Ranking in spatial databases. In: International Symposium on Spatial Databases, Portland, Maine, August 6–9, pp. 83–95.
Idris, F., Panchanathan, S., 1997. Review of image and video indexing techniques. Journal of Visual Communication and Image Representation 8 (2), 146–166.
Jain, A.K., Vailaya, A., Wei, X., 1999. Query by video clip. Multimedia Systems 7 (5), 369–384.
Jeannin, S., Divakaran, A., 2001. MPEG-7 visual motion descriptors. IEEE Transactions on Circuits and Systems for Video Technology 11 (6), 720–724.
Köhle, M., Merkl, D., Kastner, J., 1997. Clinical gait analysis by neural networks: issues and experiences. In: IEEE Symposium on Computer-Based Medical Systems, pp. 138–143.


Kovar, L., Gleicher, M., Pighin, F., 2002. Motion graphs. ACM Transactions on Graphics 21 (3), 473–482.
Lee, J., Chai, J., Hodgins, J.K., Reitsma, P.S.A., Pollard, N.S., 2002. Interactive control of avatars animated with human motion data. ACM Transactions on Graphics 21 (3), 491–500.
Li, C.S., Smith, J.R., Bergman, L.D., Castelli, V., 1998. Sequential processing for content-based retrieval of composite objects. In: SPIE Storage and Retrieval of Image and Video Databases, San Jose, CA, January 28–30, pp. 2–13.
Li, Y., Wang, T., Shum, H.Y., 2002. Motion texture: a two-level statistical model for character motion synthesis. ACM Transactions on Graphics 21 (3), 465–472.
Lienhart, R., Effelsberg, W., Jain, R., 2000. VisualGREP: a systematic method to compare and retrieve video sequences. Multimedia Tools and Applications 10 (1), 47–72.
Lu, G., 1999. Multimedia Database Management Systems. Artech House.
Meyer, D., Denzler, J., Niemann, H., 1997. Model based extraction of articulated objects in image sequences for gait analysis. In: IEEE International Conference on Image Processing, pp. 78–81.
Moeslund, T.B., Granum, E., 2001. A survey of computer vision-based human motion capture. Computer Vision and Image Understanding 81 (3), 231–268.
MPEG-4 overview, ISO/IEC JTC1/SC29/WG11 N4668, March 2002.
Multon, F., France, L., Cani-Gascuel, M.-P., Debunne, G., 1999. Computer animation of human walking: a survey. The Journal of Visualization and Computer Animation 10 (1), 39–54.
Nabil, M., Ngu, A.H.H., Shepherd, J., 2001. Modeling and retrieval of moving objects. Multimedia Tools and Applications 13 (1), 35–71.
Naphade, M.R., Huang, T.S., 2001. A probabilistic framework for semantic video indexing, filtering and retrieval. IEEE Transactions on Multimedia 3 (1), 141–151.
Oliver, N.M., Rosario, B., Pentland, A.P., 2000. A Bayesian computer vision system for modeling human interactions. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (8), 831–843.
Parent, R., 2002. Computer Animation: Algorithms and Techniques. Morgan Kaufmann.
Parsons, T.W., 1986. Voice and Speech Processing. McGraw-Hill, New York.
Ponceleon, D., Srinivasan, S., Amir, A., Petkovic, D., Diklic, D., 1998. Key to effective video retrieval: effective cataloging and browsing. In: ACM International Conference on Multimedia, Bristol, UK, September 12–16, pp. 99–107.
Sahouria, E., Zakhor, A., 1999. A trajectory based video indexing system for street surveillance. In: IEEE International Conference on Image Processing, Kobe, Japan, October 24–28.
Salton, G., McGill, M.J., 1983. Introduction to Modern Information Retrieval. McGraw-Hill, New York.
Shearer, K., Venkatesh, S., Kieronska, D., 1996. Spatial indexing for video databases. Journal of Visual Communication and Image Representation 7 (4), 325–335.
Smoliar, S.W., Zhang, H.J., 1994. Content-based video indexing and retrieval. IEEE Multimedia 1 (2), 62–72.
Sundaram, H., Chang, S.F., 1999. Efficient video sequence retrieval in large repositories. In: SPIE Storage and Retrieval of Image and Video Databases, San Jose, CA, January 26–29.
Wang, L., Hu, W., Tan, T., 2003. Recent developments in human motion analysis. Pattern Recognition 36 (3), 585–601.
Web3D working group on humanoid animation, specification for a standard humanoid, Version 1.1, August 1999.
