Particle Filter based Multi-Camera Integration for Face 3D-Pose Tracking

Yuantao Gu, Yilun Chen, Zhengwei Jiang, Yuanming Chen, Chao Liao, Kunpeng Liu, and Kun Tang

Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
[email protected]

Abstract. Face tracking has many visual applications, such as human-computer interfaces, video communication, and surveillance. Color-based particle trackers have proved robust and versatile at a modest computational cost. In this paper, a probabilistic method for integrating multi-camera information is introduced to track 3D-pose variations of the human face. The proposed method fuses information coming from several calibrated cameras via one color-based particle filter. The algorithm relies on the following novelties. First, the human head, rather than the face alone, is defined as the target of the algorithm. To distinguish the face region from the hair region, a dual-color-ball is used to model the human head in 3D space. Second, to enhance robustness to illumination variation, the Fisher criterion is applied to measure the separability of the face region and the hair region on the color histogram, so that the color distribution template is adapted only at appropriate times. Finally, the algorithm runs in a distributed framework, so the computation is spread evenly across all client processors. To demonstrate the performance of the proposed algorithm, several visual tracking scenarios are tested in an office environment with three to four calibrated cameras. Experiments show that accurate tracking results are achieved even in difficult scenarios, such as complete occlusion and the presence of distracting skin-colored objects. Furthermore, the additional information in the tracking results, including head posture and face orientation, can be used for further work such as face recognition and eye-gaze estimation, which is also demonstrated by carefully designed experiments.

1 Introduction

Tracking the human face in a wide workspace has become a ubiquitous elementary task in both online and offline vision-based applications, including surveillance, human-machine interfaces, smart environments, video compression, augmented reality, and many more. Visual tracking applications require motion estimation algorithms that remain robust in the presence of noise and track stably even in unstructured environments. Moreover, the information extracted from visual measurements must be available in real time.


Currently, a widely adopted approach to robust motion estimation from visual measurements is based on the particle filter [7, 9–12], which was independently proposed by several research groups [4–6] and has proved robust and versatile for tracking algorithms. Most of the available particle filter based trackers are driven by single-view information, where both the visual field and the view information are limited. It has been shown that good tracking performance can be ensured using a multi-camera system, which guarantees a broad view and information redundancy [3, 1, 14, 15]. Furthermore, as presented in this paper, complete 3D-pose information including face position and orientation can also be extracted from multiple vision sensors, which is valuable for subsequent processing, e.g., face recognition and gaze detection.

Among the available single-camera face trackers using particle filters, most have focused on tracking 2D objects or motions such as 2D contours [13], 2D affine transforms [23], and 2D ellipses [24, 7]. In some recent work [20, 22], stereoscopic information is extracted by using elaborate 3D face models within the particle filter. In [7, 8], the particle filter is based on a model of the color distribution, which is resistant to non-rigidity, rotation, and partial occlusion. Some other algorithms integrate the color cue and the shape cue, where multiple cues are gathered independently and then merged, to improve tracking performance [18, 19]. Compared to single-camera methods, multi-camera visual tracking systems can be extremely helpful because the visual field is expanded and multiple views can overcome occlusion [3, 1, 14, 15]. Tracking with a single camera easily produces ambiguity due to occlusion or depth; this ambiguity may be eliminated from another view.

In this paper, unlike most available trackers, the human head rather than the face is defined as the target of the algorithm. We aim at integrating multiple cues with a 3D head model, a dual-color-ball, which naturally represents the target (a human head composed of hair and face, two semi-sphere regions) and at the same time retains the model's simplicity, which keeps the computational cost of the particle filter under control. Because the shape cue is built into the model, only color information is used to weight the particles, i.e., the distances of the face and hair color distributions from their templates are integrated. The proposed model is used to set up the particle filter based tracking algorithm for a multi-camera system. To enhance robustness to the illumination variation across distributed cameras, the Fisher criterion is applied to measure the separability of the face region and the hair region on the color histogram, so that the color distribution template is adapted only at appropriate times. The multi-camera system makes a large amount of view information available in real time, and this information redundancy can be effectively exploited to improve the accuracy and robustness of the visual tracking task. Experimental tests are presented to show the computational feasibility and effectiveness of the proposed approach.

The paper is organized as follows: section 2 provides a brief review of particle filter basics. The proposed tracking algorithm is introduced in section 3 in four subsections. In 3.1 a detailed view of the proposed dual-color-ball head model is given, in 3.2 the definition of the particle weight is described, in 3.3 we explain the use of the Fisher criterion to adapt the target template, and in 3.4 we present the implementation based on a server/client structure. The experimental results are presented in section 4, and we conclude the paper and outline future work in section 5.

2 The Basic Particle Filter

For the sake of completeness, the basic particle filter is first briefly reviewed. In many cases, the stochastic (hidden) state of a given system $s_t$, where subscript $t$ denotes time, needs to be estimated using observations stochastically related to that state, denoted as $y_{1:t} = \{y_1, y_2, \cdots, y_t\}$. Consider a dynamic system given by the state equation and the observation equation:

$$s_t = f_t(s_{t-1}, v_{t-1}), \qquad (1)$$

$$y_t = h_t(s_t, w_t), \qquad (2)$$

where $v_t$ and $w_t$ are noise terms. Moreover, it should be noted that no linearity hypothesis on $f_t$ and $h_t$ is made.

From the Bayesian perspective, the tracking problem is to recursively calculate some degree of belief in the state $s_t$ at time $t$, taking different values, given the data $y_{1:t}$ up to time $t$. Thus, it is required to construct the pdf $p(s_t \mid y_{1:t})$. It is assumed that the initial pdf $p(s_0 \mid y_0) \equiv p(s_0)$, also known as the prior, is available. Then, in principle, the pdf $p(s_t \mid y_{1:t})$ may be obtained recursively in two stages: prediction and update.

Suppose that the required pdf $p(s_{t-1} \mid y_{1:t-1})$ at time $t-1$ is available. The prediction stage uses the system model to obtain the prior pdf of the state at time $t$ via the Chapman-Kolmogorov equation

$$p(s_t \mid y_{1:t-1}) = \int p(s_t \mid s_{t-1})\, p(s_{t-1} \mid y_{1:t-1})\, ds_{t-1}. \qquad (3)$$

The probabilistic model of the state evolution $p(s_t \mid s_{t-1})$ is defined by the state equation and the known statistics of $v_{t-1}$. At time $t$, a measurement $y_t$ becomes available, and this may be used to update the state via Bayes' rule

$$p(s_t \mid y_{1:t}) = \frac{p(y_t \mid s_t)\, p(s_t \mid y_{1:t-1})}{p(y_t \mid y_{1:t-1})}, \qquad (4)$$

where the normalizing constant is

$$p(y_t \mid y_{1:t-1}) = \int p(y_t \mid s_t)\, p(s_t \mid y_{1:t-1})\, ds_t. \qquad (5)$$

The recurrence relations, eq. (3) and eq. (4), form the basis for the optimal Bayesian solution. In the case of linear Gaussian state space models, the recursive construction of the posterior distribution can be handled analytically, yielding the Kalman filter. However, for most visual tracking problems the linear Gaussian hypothesis does not hold, in which case eq. (3) and eq. (4) are only a conceptual solution and are intractable in practice.

The particle filter is a technique for implementing a recursive Bayesian filter by Monte Carlo simulation. The key idea is to approximate the distribution via discrete random measures defined by random samples with associated weights, named particles. For instance, if the distribution of interest is $p(s)$ and its approximating random measure is $\chi = \{s^i, w^i\}_{i=1}^{N}$, then $\chi$ approximates the distribution $p(s)$ by

$$p(s) \approx \sum_{i=1}^{N} w^i\, \delta(s - s^i), \qquad (6)$$

where $s^i$ and $w^i$ are the samples and their weights, respectively, $N$ is the number of particles used in the approximation, and $\delta(\cdot)$ is the Dirac delta function. See Fig. 1.

Fig. 1. Approximation of a continuous pdf $p(s)$ by particles $\{s^i, w^i\}$.

Based on the discrete approximation of the pdf $p(s_t \mid y_{1:t})$, the complete particle filtering procedure is as follows.

1. Initialize: $s_0^i \sim p(s_0)$ and $w_0^i = 1/N$, for $i = 1, \cdots, N$.
2. For $t = 1, 2, \cdots$
   (a) Propagate: for $i = 1, \cdots, N$, sample $s_t^i$ from $q(s_t \mid s_{t-1}^i, y_t)$.
   (b) Calculate weights:
       i. Un-normalized weights: for $i = 1, \cdots, N$, $w_t^i = w_{t-1}^i \frac{p(y_t \mid s_t^i)\, p(s_t^i \mid s_{t-1}^i)}{q(s_t^i \mid s_{t-1}^i, y_t)}$.
       ii. Normalize: for $i = 1, \cdots, N$, $w_t^i = w_t^i / \sum_{j=1}^{N} w_t^j$.
   (c) Estimate: $E\{g(s_t)\} \approx \sum_{i=1}^{N} w_t^i\, g(s_t^i)$.
   (d) Resample:
       i. Calculate $\hat{N}_{eff}$.
       ii. If $\hat{N}_{eff} \le N_{threshold}$, for $i = 1, \cdots, N$, draw $s_t^i \sim \sum_{j=1}^{N} w_t^j\, \delta(s_t - s_t^j)$ and set $w_t^i = 1/N$.

Please refer to [9–12] for a fuller treatment of the particle filter algorithm.
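To make the recursion concrete, a minimal sketch of one such iteration in Python/NumPy follows. The function names `propagate` and `likelihood` are placeholders for the model-specific functions of eqs. (1) and (2); the proposal is taken to be the state-transition prior, a common simplification under which the weight update reduces to the likelihood.

```python
import numpy as np

def particle_filter_step(particles, weights, y_t, propagate, likelihood,
                         n_threshold=None):
    """One SIR iteration. particles: (N, d) samples s_t^i; weights: (N,)
    normalized w_t^i; y_t: current observation; propagate draws from
    p(s_t | s_{t-1}); likelihood returns p(y_t | s_t) per particle."""
    N = len(particles)
    # (a) Propagate: sample from the proposal q = p(s_t | s_{t-1}).
    particles = propagate(particles)
    # (b) Weight: with the prior as proposal, the ratio reduces to p(y_t | s_t).
    weights = weights * likelihood(particles, y_t)
    weights /= weights.sum()
    # (c) Estimate: posterior mean, E{s_t} = sum_i w^i s^i.
    estimate = weights @ particles
    # (d) Resample when the effective sample size drops too low.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_threshold is None:
        n_threshold = N / 2
    if n_eff <= n_threshold:
        idx = np.random.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)
    return particles, weights, estimate
```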


3 The Particle Filter Based Multi-Camera Tracker

The tracking algorithm is implemented within the particle filter framework, see Fig. 2. All potential 3D-poses (position and orientation) constitute the state space, so the task of tracking becomes the estimation of a probability distribution over that space. The tracking procedure at each iteration is as follows.

– The available particles, which denote selected possible face positions and orientations, are propagated according to the state equation, which describes the motion of the head.
– The newly captured images from the distributed, calibrated cameras are used to calculate the particle weights, which score the selected 3D-poses.
– The expectation of the selected 3D-poses is taken according to the associated weights and output as the tracking result.
– The particle set is resampled, i.e., the 3D-poses are reselected following the rule of importance sampling.

Notice that the raw vision data is used only in the weight-calculation step, which hints that the computation can be spread evenly across all distributed processors. This point will become clear later. The following subsections describe the tracking algorithm in detail.

[Figure 2 shows the iteration loop Propagate, Calculate Weight, Estimate, Resample, with the multi-camera vision data feeding the weight calculation and the estimate output as 3D-pose information.]

Fig. 2. The block diagram of the particle filter based multi-camera tracking algorithm.


3.1 State Model

Most of the available face tracking algorithms define the human face region as their target, using shape information, color distribution, or both [13, 7, 19]. However, common motions such as turning one's back to the camera or lowering one's head may cause the face to be completely lost from the camera view. Although particle filter based algorithms are robust to partial occlusion and instantaneous complete occlusion, such frequent face disappearance may dramatically degrade the tracker's performance. In contrast, the head region seldom leaves the camera view, which suggests using the head as the target for face tracking. In general, the surface of the human head is composed of two parts, the face region and the hair region. Consequently, a simple dual-color-ball model is proposed to represent the human head in 3D space: one semi-sphere represents the face region and the other semi-sphere represents the hair region, see Fig. 3. It is easy to see that the model matches most heads, regardless of race, gender, and age. Moreover, this model is much simpler to implement than the available ones [20, 22].

Fig. 3. Utilization of the dual-color-ball to model the head region of different characters with different orientations.

To present the proposed model, the state vector is defined as

$$s = \{x, y, z, \theta, \phi, \dot{x}, \dot{y}, \dot{z}\}, \qquad (7)$$

where $\{x, y, z\}$ denotes the position of the ball center and $\{\theta, \phi\}$ denotes the face orientation, i.e., the elevation angle and the azimuth angle, respectively. A first-order prediction model is adopted to update the state vector, taking into account the moving velocity of the head, $\{\dot{x}, \dot{y}, \dot{z}\}$. The particles are propagated by

$$s_t = T \cdot s_{t-1} + w_{t-1}, \qquad (8)$$


where $T$ and $w_t$ define the deterministic component and the stochastic component of the head state, respectively. Because the sizes of human heads are generally similar to one another, the radius of the dual-color-ball, $r$, is treated as a constant rather than a variable state parameter. The ball consists of two non-overlapping semi-spheres, representing the face profile and the hair profile, respectively. They are projected onto the calibrated $j$th camera view as $I_{j,f}$ and $I_{j,h}$, defined as the 2D face region and hair region, respectively, see Fig. 4. According to the state definition, each particle state represents a different posture of the head in 3D space, including its translation and rotation.
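For illustration, eqs. (7) and (8) might be implemented as below; the transition matrix $T$ encodes the first-order (constant-velocity) model, while the frame interval and noise scales are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# State layout per eq. (7): [x, y, z, theta, phi, vx, vy, vz].
DT = 1.0  # frame interval (illustrative)

# First-order model: positions advance by velocity; angles and velocity
# follow a random walk driven by the stochastic component w.
T = np.eye(8)
T[0, 5] = T[1, 6] = T[2, 7] = DT

# Illustrative noise standard deviations for each state component.
NOISE_STD = np.array([0.01, 0.01, 0.01,   # position (m)
                      0.05, 0.05,         # elevation, azimuth (rad)
                      0.02, 0.02, 0.02])  # velocity (m/frame)

def propagate(particles):
    """Apply eq. (8), s_t = T s_{t-1} + w_{t-1}, to an (N, 8) particle array."""
    noise = np.random.randn(*particles.shape) * NOISE_STD
    return particles @ T.T + noise
```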

Fig. 4. The projection from world coordinates to the multi-camera image planes.

3.2 Particle Weight

The criterion used to calculate the particle weight is essential to the performance of the particle filter based multi-camera tracker. In the literature, color-based weight criteria are widely used because they are robust to partial occlusion and invariant to rotation and scale. The algorithm proposed in [7] can track the human face accurately under general conditions. In that algorithm, the color space is discretized into $M$ bins, and the function $h(x, y)$ assigns the color at position $(x, y)$ to the corresponding bin. The color distribution $p(I) = \{p^{(u)}(I)\}_{u=1,2,\cdots,M}$, where the region $I$ is an ellipse representing the position and size of the human face, is given by

$$p^{(u)}(I) = f \sum_{(x,y) \in I} k(x, y)\, \delta[h(x, y) - u], \qquad (9)$$


where $u$ denotes the index of the color bin, the normalization factor $f$ ensures $\sum_{u=1}^{M} p^{(u)}(I) = 1$, and the kernel function $k(x, y)$ weights the contributions from different parts of the region. Furthermore, the distance between $p(I)$ and a distribution template, $q = \{q^{(u)}\}_{u=1,2,\cdots,M}$, is measured using the Bhattacharyya coefficient $\rho(p(I), q)$:

$$d(I) = \sqrt{1 - \rho(p(I), q)} = \sqrt{1 - \sum_{u=1}^{M} \sqrt{p^{(u)}(I)\, q^{(u)}}}. \qquad (10)$$

Finally, the above distance is mapped to the particle weight using

$$w = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{d^2}{2\sigma^2}}, \qquad (11)$$

where $\sigma$ is a constant parameter.

The above weight criterion is modified to match the proposed dual-color state in the multiple-vision-source scenario. Suppose there are $J$ cameras in the system, and first consider the evaluation of a selected 3D-pose from the $j$th camera. Compared to the available ellipse model, the hair region of the dual-color-ball model contains additional information that should be used to improve the evaluation accuracy. Consequently, $p(I_{j,f})$ and $p(I_{j,h})$, the color distributions of the face region and the hair region, respectively, are combined to measure the $j$th sensor information. The distance from the hypothesized state to its template is defined as

$$d_j = \frac{A(I_{j,f}) \cdot d_{j,f} + A(I_{j,h}) \cdot d_{j,h}}{A(I_{j,f}) + A(I_{j,h})}, \qquad (12)$$

where $A(I)$ denotes the area of region $I$, and $d_{j,f}$ and $d_{j,h}$ denote the distances of the face and hair color distributions from their templates, respectively, analogous to eq. (10):

$$d_{j,r} = \sqrt{1 - \rho(p(I_{j,r}), q_{j,r})}, \qquad (13)$$

where $r = h, f$ and $q_{j,r}$ denotes the respective template. The two distances are simply weighted and normalized according to the region areas. Notice that the templates differ from camera to camera, and, as will be seen, they are adapted according to a certain criterion.

The above definition extends to the multi-camera scenario. A boolean function is defined to indicate whether the image from the $j$th camera is valuable:

$$b_j = \begin{cases} 1 & A(I_{j,f}) + A(I_{j,h}) \ge A_{gate}, \\ 0 & \text{otherwise}, \end{cases} \qquad (14)$$

where $A_{gate}$ is a constant parameter. The distances from all cameras are then accumulated according to this indicator function to yield the measurement of the hypothesized state, which can be considered as the distance from the 3D-pose to a "multi-projected template":

$$d = \frac{\sum_{j=1}^{J} b_j \cdot d_j}{\sum_{j=1}^{J} b_j}. \qquad (15)$$

Finally, the distance is mapped to the particle weight by eq. (11).
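Putting eqs. (9) through (15) and (11) together, a sketch of the weighting chain could read as below; the bin-assignment function and region pixels are assumed to come from the projection step, the kernel of eq. (9) is taken as uniform for brevity, and the parameter values are illustrative.

```python
import numpy as np

def color_histogram(pixels, bin_index, M):
    """Eq. (9) with a uniform kernel: normalized M-bin color histogram
    over one region (bin_index maps each pixel to its color bin)."""
    hist = np.bincount(bin_index(pixels), minlength=M).astype(float)
    return hist / max(hist.sum(), 1.0)

def bhattacharyya_distance(p, q):
    """Eqs. (10) and (13): d = sqrt(1 - sum_u sqrt(p_u q_u))."""
    return np.sqrt(max(1.0 - np.sum(np.sqrt(p * q)), 0.0))

def particle_weight(regions, templates, A_gate=50, sigma=0.12):
    """Fuse per-camera distances into one weight: eqs. (12), (14), (15), (11).

    regions   : per camera j, dict with face/hair areas and histograms
    templates : per camera j, dict with template histograms q_{j,f}, q_{j,h}
    A_gate and sigma are illustrative constants.
    """
    num, den = 0.0, 0
    for reg, tpl in zip(regions, templates):
        a_f, a_h = reg["area_f"], reg["area_h"]
        if a_f + a_h < A_gate:          # eq. (14): view carries no information
            continue
        d_f = bhattacharyya_distance(reg["hist_f"], tpl["q_f"])
        d_h = bhattacharyya_distance(reg["hist_h"], tpl["q_h"])
        d_j = (a_f * d_f + a_h * d_h) / (a_f + a_h)   # eq. (12)
        num += d_j                      # eq. (15), accumulated over valid views
        den += 1
    if den == 0:
        return 0.0
    d = num / den
    # Eq. (11): map the fused distance to an (un-normalized) particle weight.
    return np.exp(-d ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
```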

3.3 Template Adaptation

In the literature, template adaptation has been shown to improve tracking accuracy and robustness. Most of the available adaptation schemes are based on a smoothing window,

$$q_t = \lambda \cdot q_{t-1} + (1 - \lambda) \cdot p_{t-1}, \qquad (16)$$

where $0 \ll \lambda < 1$ is a forgetting factor. However, this may lead to improper adaptation, especially when the tracker has locked onto an invalid region. Hence a measurement is needed to evaluate whether the current tracking result is good enough to update the template. Generally, the face color and the hair color are different, which both motivates the dual-color-ball model and suggests that the difference between the two color distributions can be used to judge the current tracking result. A modified Fisher criterion is used for this task:

$$F_j = \frac{|m_{j,f} - m_{j,h}|^2}{\sigma_{j,f}^2 + \sigma_{j,h}^2 + |m_{j,f} - m_{j,h}|^2}, \qquad (17)$$

where $m_{j,r}$ and $\sigma_{j,r}^2$ denote the mean value and the variance of the face color distribution ($r = f$) and the hair color distribution ($r = h$), respectively. $|m_{j,f} - m_{j,h}|^2$ and $\sigma_{j,f}^2 + \sigma_{j,h}^2$ are the between-class scatter and the within-class scatter, respectively. The Fisher criterion can be regarded as a measure of the separability of the two semi-spheres. It can be shown that when the position and face orientation of the state ball coincide with those of the real head, the hair color is entirely distributed on one semi-sphere while the skin color is entirely distributed on the other; at that point the Fisher criterion reaches its maximum. Notice that the between-class scatter is also added to the denominator, which normalizes $F_j$. Because the illumination in each camera's field of view differs, the templates are adapted according to their individual judgments. Consequently,

$$F_j \ge F_{gate} \qquad (18)$$

indicates that adaptation is encouraged, where $0 < F_{gate} < 1$ is a constant parameter.
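A sketch of the gated adaptation of eqs. (16) through (18) follows; computing the mean and variance over histogram bin indices is one plausible reading of eq. (17), and the gate and forgetting-factor values are illustrative.

```python
import numpy as np

def fisher_separability(p_f, p_h):
    """Eq. (17): normalized Fisher criterion between the face and hair
    color distributions, each an M-bin histogram over the same bins."""
    bins = np.arange(len(p_f))
    m_f, m_h = bins @ p_f, bins @ p_h             # distribution means
    v_f = p_f @ (bins - m_f) ** 2                 # distribution variances
    v_h = p_h @ (bins - m_h) ** 2
    between = (m_f - m_h) ** 2                    # between-class scatter
    return between / (v_f + v_h + between + 1e-12)  # lies in [0, 1)

def adapt_template(q, p, F, F_gate=0.6, lam=0.9):
    """Eqs. (16) and (18): update the template only when the current
    face/hair distributions are well separated (F >= F_gate)."""
    if F >= F_gate:
        return lam * q + (1.0 - lam) * p
    return q
```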


3.4 Distributed Implementation

Multiple cameras are distributed around the scene to obtain full-perspective surveillance. The system is composed of several clients and one server, see Fig. 5. The clients are each equipped with a camera and capture the raw image data, which is used to calculate, for each particle, the distance $d_j$ and the Fisher coefficient $F_j$ according to eq. (12) and eq. (17), respectively. $F_j$ is then used locally to adapt the templates, while the $d_j$ are fed back from all clients to the single server, where the particle weights are computed. The server then resamples and propagates the particles to generate the new particle set, which is distributed to all clients for the next iteration. The computation is thus spread evenly across all clients, which enhances the tracking performance.

Fig. 5. The implementation framework of the multi-camera tracking system.
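One server iteration under this split could be organized as below; `send_particles` and `recv_distances` are hypothetical transport calls (the paper does not specify a transport), and each client's template adaptation happens locally, so it does not appear here.

```python
import numpy as np

SIGMA = 0.12  # illustrative value for eq. (11)

def server_iteration(particles, clients, propagate):
    """One schematic server iteration of the distributed tracker."""
    # 1. Propagate the particle set and distribute it to every client.
    particles = propagate(particles)
    for c in clients:
        c.send_particles(particles)
    # 2. Each client returns per-particle distances d_j computed from its
    #    own raw images; the raw video never leaves the client.
    d_all = np.array([c.recv_distances() for c in clients])  # shape (J, N)
    # 3. Fuse across views (eq. (15); gating of eq. (14) handled client-side
    #    here) and map the fused distances to weights (eq. (11)).
    d = d_all.mean(axis=0)
    weights = np.exp(-d ** 2 / (2 * SIGMA ** 2))
    weights /= weights.sum()
    estimate = weights @ particles
    # 4. Resample to obtain the particle set for the next iteration.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate
```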

4 Experimental Results

The proposed algorithm is tested in two scenarios to demonstrate its tracking performance and its advanced features, i.e., the preparation of input images for face recognition and other algorithms that require the frontal view of faces.

In scenario 1, four calibrated cameras, numbered from 1 to 4, are located near the corners of an office room. Two actors, A and B, enter the scope of the cameras, and A is the predefined target. A sits in a chair, moving around in a small area, while B walks towards A, then turns and walks back. Please refer to Fig. 6 for a detailed description. The tracking results show that the head position of A is estimated accurately, even at the instant when complete occlusion occurs, a situation that would otherwise risk losing the face. Besides the position information, the face orientation of A is also obtained accurately; the use of the orientation is explained in the next experiment.


Fig. 6. The multi-camera locations, actor motion, and tracking results in Scenario 1. (The camera locations and actor motion are plotted in the top figure, where four calibrated cameras numbered from 1 to 4 are placed near the corners of an office room. The dashed lines denote the view scopes of the cameras. The solid line denotes the motion trace of A and the dotted line that of B. The tracking results, for frames 7, 45, 63, 79, and 113 from cameras 1 to 4, are shown in the bottom figure, where the white circle is the head profile and the white dotted mask is the face region.)


Please refer to Fig. 7 for scenario 2. In this scenario, three cameras are placed on the same side of the room to capture different views of actor A's face. During the experiment, A sits in the chair, shifts left and right, and turns his head around to present different head postures. The tracking system outputs the most frontal face image; such images can readily be used for further processing, e.g., face recognition and gaze detection.

During the experiments, calibration was found to be an essential prerequisite for the tracking system, since an unfaithful projection may lead a client to evaluate the color distribution of an invalid region, which yields distinct tracking errors. An algorithm based on uncalibrated multiple cameras would therefore be preferable, and self-calibration methods also deserve more attention.

5 Conclusion

In this paper, a multi-camera tracker that recovers face 3D-pose information is proposed based on the particle filter and implemented on a server/client framework with distributed computation. In the described system, the novel dual-color-ball head model and the modified Fisher criterion play essential roles in calculating the particle weights and adapting the target templates. Accurate tracking results are achieved and demonstrated by experiments in different scenarios. The proposed algorithm can be used in visual surveillance and as a preparation step for face recognition and gaze detection. However, the multiple cameras must be calibrated, which may limit the applicability of the proposed algorithm. Future work will focus on self-calibration and uncalibrated multi-camera tracking.

Acknowledgement. This project is partially supported by the National Science Foundation of China (NSFC 60402030) and the Development Research Foundation of the Department of Electronic Engineering, Tsinghua University. Dr. Yuantao Gu wishes to thank Miss Jie Wang, University of Toronto, for her valuable comments, which improved the quality of the manuscript.

Fig. 7. The multi-camera locations, actor motion, and tracking results in Scenario 2. (The camera locations and actor motion are plotted in the top figure, where three calibrated cameras numbered from 1 to 3 are placed on the same side of the room to capture different head postures. The dashed lines denote the view scopes of the cameras. The solid line denotes the motion trace of A. The tracking results, for frames 20, 40, 58, 71, and 99 from cameras 1 to 3, are shown in the bottom figure, where the white circle is the head profile and the white dotted mask is the face region. The output frontal face images are plotted in the right column.)

References

1. Kettnaker V., Zabih R.: Bayesian multi-camera surveillance. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition. (1999) 253–259
2. Hu W., Tan T., Wang L., Maybank S.: A survey on visual surveillance of object motion and behaviors. IEEE Trans. Systems, Man, and Cybernetics, Part C: Applications and Reviews 34 (2004) 334–352
3. Collins R.T., Lipton A.J., Fujiyoshi H., Kanade T.: Algorithms for cooperative multi-sensor surveillance. Proc. IEEE 89 (2001) 1456–1477
4. Isard M., Blake A.: CONDENSATION: conditional density propagation for visual tracking. International Journal on Computer Vision 29 (1998) 5–28
5. Gordon N., Salmond D.: Bayesian state estimation for tracking and guidance using the bootstrap filter. Journal of Guidance, Control and Dynamics 18 (1995) 1434–1443
6. Kitagawa G.: Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics 5 (1996) 1–25
7. Nummiaro K., Koller-Meier E., Van Gool L.: A color-based particle filter. In: First International Workshop on Generative-Model-Based Vision. 1 (2002) 53–60
8. Nummiaro K., Koller-Meier E., Van Gool L.: Object tracking with an adaptive color-based particle filter. In: Symposium for Pattern Recognition of the DAGM. (2002)
9. Doucet A., de Freitas N., Gordon N.: Sequential Monte Carlo Methods in Practice. Springer, New York (2001)
10. Doucet A.: On sequential simulation-based methods for Bayesian filtering. Technical report, University of Cambridge, Dept. of Engineering (1998)
11. Liu J.S., Chen R.: Sequential Monte Carlo methods for dynamic systems. J. Am. Statist. Ass. 93 (1998)
12. Pitt M.K., Shephard N.: Filtering via simulation: auxiliary particle filters. J. Am. Statist. Ass. (1999)
13. Li P., Zhang T., Pece A.E.C.: Visual contour tracking based on particle filters. Image and Vision Computing 21 (2003) 111–123
14. Utsumi A., Mori H., Ohya J., Yachida M.: Multiple-view-based tracking of multiple humans. In: Proc. Int. Conf. Pattern Recognition. (1998) 197–601
15. Dockstader S.L., Tekalp A.M.: Multiple camera tracking of interacting and occluded human motion. Proc. IEEE 89 (2001) 1441–1455
16. Azarbayejani A., Pentland A.: Real-time self-calibrating stereo person tracking using 3-D shape estimation from blob features. In: Proc. Int. Conf. Pattern Recognition, Austria. (1996) 627–632
17. Lippiello V., Siciliano B., Villani L.: Robust visual tracking using a fixed multi-camera system. In: Proc. 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan. (2003) 3333–3338
18. Birchfield S.: Elliptical head tracking using intensity gradients and color histograms. In: IEEE International Conference on Computer Vision and Pattern Recognition. (1998) 232–237
19. Shen C., van den Hengel A., Dick A.: Probabilistic multiple cue integration for particle filter based tracking. In: Proc. VIIth Digital Image Computing: Techniques and Applications. (2003) 399–408
20. Dornaika F., Davoine F., Dang M.: 3D head tracking with particle filters. In: 5th Intl. Workshop on Image Analysis for Multimedia Interactive Services, Lisbon, Portugal. (2004)
21. Dornaika F., Davoine F., Dang V.M.: Online appearance-based face and facial feature tracking. In: International Conference on Pattern Recognition, Cambridge, UK. (2004)
22. Lu L., Dai X., Hager G.: A particle filter without dynamics for robust 3D face tracking. In: Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. (2004)
23. Zhou S., Krueger V., Chellappa R.: Probabilistic recognition of human faces from video. Computer Vision and Image Understanding 91:1-2 (2003) 214–245
24. Nait-Charif H., McKenna S.J.: Head tracking and action recognition in a smart meeting room. In: IEEE International Workshop on Performance Evaluation of Tracking and Surveillance. (2003)
