Globally Optimal Target Tracking in Real Time using Max-Flow Network
Tae Eun Choe, Zeeshan Rasheed, Geoffrey Taylor, Niels Haering
ObjectVideo Inc., 11600 Sunrise Valley Dr., Reston, VA 20191, USA
{tchoe, zrasheed, gtaylor, nhaering}@objectvideo.com

Abstract

We propose a general framework for multiple target tracking across multiple cameras using max-flow networks. The framework integrates target detection, tracking, and classification from each camera and obtains the cross-camera trajectory of each target. The global data association problem is formulated as a maximum a posteriori (MAP) problem and represented by a flow network. Similarities of time, location, size, and appearance (classification and color histogram) of the target across cameras are provided as inputs to the network, and the target's optimal cross-camera trajectory is found using the max-flow algorithm. The implemented system is designed for real-time processing of high-resolution videos (10 MB per frame). The framework is validated on high-resolution camera networks with both overlapping and non-overlapping fields of view in urban scenes.

1. Introduction

Identification and re-identification of targets across multiple cameras are challenging because targets may have significant variations in shape and appearance across cameras. Targets may enter and exit a scene at different times and can exhibit highly non-linear motion. For real-time surveillance, all processing modules must run faster than the video feed, so computationally expensive appearance models cannot be applied.

Tracking of multiple targets in a single camera has been widely studied. Targets are often modeled using color, motion, size, or time, and a global data association problem is formulated as a maximum a posteriori probability (MAP) problem. Berclaz et al. [1] track multiple targets using dynamic programming to solve a global data association problem with multiple overlapping cameras. Wu and Nevatia [18] detect people using an edgelet-based Adaboost classifier and model targets using their histogram, size, and time; the trajectory of a target is likewise estimated using a MAP formulation. Huang et al. [6] solve a MAP problem employing a Hungarian algorithm and utilize scene context for improvement. Pirsiavash et al. [11] use a greedy algorithm for near-optimal global data association.

Zhang et al. [19] find an optimal MAP data association in a single camera by finding the max-flow of a network. Tracking across multiple cameras is an emerging area in video surveillance. Early work addressed multi-camera tracking with overlapping fields of view [2][3]. More recently, tracking across non-overlapping cameras has been addressed: Javed et al. [8] propose a method that maximizes a posterior over spatial and temporal information to connect targets; Song and Roy-Chowdhury [14] propose an optimization method that combines short-term feature correspondences and long-term feature dependencies across multiple cameras; Kuo et al. use Multiple Instance Learning (MIL) to learn an appearance model on-line across multiple cameras and a Hungarian algorithm to solve the global data association problem. However, extraction and matching of the features used in MIL are computationally expensive, and the Hungarian method is applied greedily and thus does not guarantee a globally optimal solution.

We propose a framework to track multiple targets across multiple cameras in real time. We also formulate the global data association as a MAP problem and find an optimal solution. We model the appearance, time, and location of a target to better identify targets across cameras. A target is detected and classified as a human or a specific type of vehicle. The confidence level of each sensor's target classification is utilized to model the posterior probability of classification. The time model of a target between a pair of distant cameras is defined, and targets' Enter/Exit areas in each scene are also modeled to utilize the spatio-temporal information of targets and cameras. With these models, the similarity of each pair of targets is computed and a network of target trajectories is formed. Finally, the optimal target tracking solution is obtained by max-flow on the network. Every module is optimized, and the framework is capable of processing high-resolution videos in real time.

The paper is organized as follows: target detection, tracking, and classification in a single camera are explained in Section 2; Section 3 explains how the global data association problem is formulated; Section 4 explains the method to find the optimal solution for data association across multiple cameras; Section 5 discusses similarity measures for cross-camera tracking; Section 6 explains how the system is implemented; experimental results are shown in Section 7; and conclusions and future work are discussed in Section 8.

2. Detection and Tracking in a Single Camera

2.1. Detection and Tracking

The first step is to detect and track targets, or objects of interest, in a single camera. For detection, we use a pixel-wise stochastic background model [12]. The algorithm compares color distributions for each pixel collected over both short and long time scales; pixels whose color differs notably from both distributions are selected as foreground. This method significantly improves detection in environments with radical illumination changes (e.g. a maritime scene or a scene with moving clouds). Neighboring foreground pixels are combined into a blob, which is defined as a single detection. Detections across multiple frames are then linked into a tracklet (a segment of a trajectory) in a Kalman filter framework using nearest-neighbor association that incorporates motion smoothness and appearance matching. Tracklets are generated conservatively by connecting only detections matched with high confidence. We do not formulate single-camera tracking as a MAP problem as Zhang et al. do in [19], since it is computationally expensive to consider all detections in high-resolution image sequences. Our detection and tracking routine is optimized for real-time operation; details of the processing time are given in Section 7.2.6.
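As a concrete illustration of the conservative tracklet formation described above, the following minimal Python sketch links detections with a constant-velocity Kalman filter and nearest-neighbor gating. It is not the deployed tracker: the gating threshold, the noise parameters, and the helper names (Tracklet, associate) are assumptions made for the example.

    import numpy as np

    class Tracklet:
        """Constant-velocity Kalman track over image coordinates (x, y)."""
        def __init__(self, cx, cy):
            self.x = np.array([cx, cy, 0.0, 0.0])                    # state: position and velocity
            self.P = np.eye(4) * 10.0                                # state covariance
            self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                               [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity motion model
            self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # observe position only
            self.Q = np.eye(4) * 0.1                                 # process noise
            self.R = np.eye(2) * 1.0                                 # measurement noise
            self.detections = [(cx, cy)]

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.H @ self.x                                   # predicted position

        def update(self, z):
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            self.detections.append(tuple(z))

    def associate(tracklets, detections, gate=30.0):
        """Greedy nearest-neighbor gating: only high-confidence matches extend a tracklet,
        and unmatched detections start new tracklets."""
        unmatched = list(detections)
        for t in tracklets:
            pred = t.predict()
            if not unmatched:
                continue
            dists = [np.linalg.norm(pred - np.asarray(z)) for z in unmatched]
            j = int(np.argmin(dists))
            if dists[j] < gate:
                t.update(np.asarray(unmatched.pop(j), dtype=float))
        tracklets.extend(Tracklet(*z) for z in unmatched)
        return tracklets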

2.2. Target Classification

Tracked targets are classified as vehicles or pedestrians based on shape and appearance in a linear discriminant analysis framework. Vehicles are further classified by class (passenger car, SUV, or pickup truck) using a 3-D model-based fitting method. Approaches exist to fit and classify vehicles [4][5][9][10]; however, few of them run in real time. The real-time vehicle fitting algorithm [12] uses simplified 3-D models learned from detailed CAD vehicle models of the vehicle classes of interest, covering more than 260 vehicle models sold in the United States. Each learned simple model contains 16 vertices and 28 facets that best approximate the mean shape of the CAD models in its class, as shown in Figure 1.

Figure 1: Simplified models of the three vehicle classes (passenger car, SUV, pickup truck).

To classify a target, the above vehicle models are fitted to the silhouette edges obtained by refining the results of foreground detection using morphological filtering and Gaussian smoothing. Model fitting is based on minimizing the Chamfer distance between the silhouette edges and the projected model edges [4]. To improve robustness, boundary edge pixels are classified into four groups based on edge orientation, and a corresponding Chamfer distance map is constructed from the pixels in each group. In each iteration of the fitting procedure, the vehicle model is projected onto the image in the current estimated pose, and the matching error is computed as the sum of the Chamfer distances along each projected edge, taken from the map corresponding to its orientation. The estimated pose is updated for the next iteration based on the derivative of the matching error. The initial vehicle location and heading are estimated by back-projecting the detected target onto the road according to a manually calibrated projection matrix. The simple calibration procedure involves manually aligning 3-D vehicle models to a few exemplar views and produces a projection matrix that encodes both the camera orientation and the ground plane location.
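The orientation-grouped Chamfer matching error described above might be computed as in the following sketch, which assumes scipy's Euclidean distance transform as the Chamfer map, four orientation bins, and hypothetical function names; it is an illustration, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_maps(edge_mask, edge_orientation, n_bins=4):
        """One Chamfer distance map per edge-orientation group (the paper uses four groups).
        edge_mask: boolean image of silhouette edge pixels; edge_orientation: per-pixel angle."""
        bins = ((edge_orientation % np.pi) // (np.pi / n_bins)).astype(int)
        maps = []
        for b in range(n_bins):
            group = edge_mask & (bins == b)
            # distance_transform_edt gives the distance to the nearest zero pixel,
            # so invert the mask: zeros mark the edge pixels of this orientation group.
            maps.append(distance_transform_edt(~group))
        return maps

    def matching_error(model_edge_pts, model_edge_orientation, maps, n_bins=4):
        """Sum of Chamfer distances of the projected model edge pixels, each looked up
        in the distance map matching its orientation group."""
        bins = ((model_edge_orientation % np.pi) // (np.pi / n_bins)).astype(int)
        err = 0.0
        for (u, v), b in zip(model_edge_pts.astype(int), bins):
            err += maps[b][v, u]
        return err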

Figure 2: Iterative vehicle fitting (initial pose; iterations 4, 8, and 20). After 8 iterations the model is roughly aligned; after 20 iterations the pose and shape are fully aligned.

Figure 2 illustrates several iterations of the vehicle model fitting procedure. To determine the class of a detected vehicle, the vehicle is fitted with all three models and the model with the lowest matching error is selected as the vehicle class. The size of the target is also estimated from the 3-D vehicle fitting results. The detection and classification response in a single frame is defined as

    xi = D(ci, ti, li, si, oi, ai),                                             (1)

where ci is the camera ID, ti is the time of the detection, li is the location in the image, oi is the classification type of the detected target (e.g. pedestrian, passenger car, SUV, pick-up truck, or others), si is the mensurated 3-D size of the target, which is a by-product of vehicle fitting, and ai is the appearance model of the detected target. We use either a color histogram or a color correlogram [7] as the appearance model; the color correlogram represents the correlation of color histograms and contains contextual information about the target.
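For illustration only, a detection response of Equation (1) could be carried in a record such as the following; the field names and types are assumptions, not part of the paper.

    from dataclasses import dataclass, field
    import numpy as np

    # Hypothetical container mirroring Equation (1), x_i = D(c_i, t_i, l_i, s_i, o_i, a_i).
    @dataclass
    class DetectionResponse:
        camera_id: int                  # c_i
        timestamp: float                # t_i, in seconds
        location: tuple                 # l_i, image (or geographic) coordinates
        size_3d: float                  # s_i, L2 norm of the fitted 3-D length/width/height
        class_label: str                # o_i: pedestrian, passenger car, SUV, pick-up truck, other
        appearance: np.ndarray = field(default_factory=lambda: np.zeros(24))  # a_i, e.g. a 24-bin HSV histogram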

The vehicle fitting algorithm described above has been quantitatively evaluated on three video sequences of public roads containing a total of 611 vehicles in varying orientations, including frontal, profile, and turning views. These traffic surveillance videos contain no pedestrians or targets of the 'others' class. Table 1 provides the confusion matrix for vehicle classification against manually observed ground truth. The results indicate that the proposed model-based fitting method classifies the vehicle type with a precision of 95%.

Table 1. Confusion matrix of vehicle classification (rows: ground truth, columns: detection).
                     Passenger Car    SUV    Pick-up Truck
  Passenger Car           414           9          5
  SUV                       1         103          2
  Pick-up Truck            11           4         62
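The reported figure can be reproduced directly from Table 1; the short check below computes the overall accuracy and the per-class precision and recall from the confusion matrix.

    import numpy as np

    # Confusion matrix from Table 1 (rows: ground truth, columns: detected class).
    confusion = np.array([[414,   9,   5],    # passenger car
                          [  1, 103,   2],    # SUV
                          [ 11,   4,  62]])   # pick-up truck

    overall = confusion.trace() / confusion.sum()              # (414 + 103 + 62) / 611 ≈ 0.948
    per_class_precision = confusion.diagonal() / confusion.sum(axis=0)
    per_class_recall = confusion.diagonal() / confusion.sum(axis=1)
    print(f"overall: {overall:.3f}")                           # ≈ 0.95, the reported figure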

3. Formation of Global Data Association

The trajectory of tracked targets across multiple cameras is constructed using a max-flow network framework. Zhang et al. [19] solved the global data association problem by formulating it as a min-cut/max-flow network. We are inspired by their work but improve it by considering missed detections and occlusion, and we extend the framework to tracking targets across multiple cameras. We also generate a network of tracklets rather than of raw detections; the set of tracklets is represented by X = {xi}, with each xi defined as in Equation (1).

The global data association problem is formulated as a MAP problem [1][6][19]. A trajectory across multiple cameras is represented by Tj = {x1, x2, ..., xh}, where h is the number of cameras and each xi in Tj is either a single tracklet or multiple tracklets from one camera. The optimal set of global trajectories T* is estimated as the maximum a posteriori of T given the observation set X:

    T* = argmax_T P(T | X)
       = argmax_T P(X | T) P(T)
       = argmax_T ∏i P(xi | T) P(T)                                             (2)

where P(xi | T) is the likelihood of observation xi, modeled with the following Bernoulli distribution:

    P(xi | T) = 1 − βi   if there exists Tj ∈ T with xi ∈ Tj
              = βi       otherwise                                              (3)

where βi is the miss-detection rate of the detector.

A prior distribution P(T) is calculated assuming conditional independence of trajectories among cameras, as shown in Equation (4). Each P(Tj) is composed of Pstart(x1,j), the probability that tracklet x1,j is the start of the track in camera 1, Pend(xh,j), the probability that tracklet xh,j is the end of the trajectory in camera h, and Psimilarity(xB,j, xA,j), the similarity measure connecting two tracklets between cameras A and B, which need not be adjacent cameras:

    P(T) = ∏j P(Tj) = ∏j P({x1,j, x2,j, ..., xh,j})
         = ∏j { Pstart(x1,j) Psimilarity(x2,j, x1,j) ⋯ Psimilarity(xB,j, xA,j) ⋯ Psimilarity(xh,j, xh−1,j) Pend(xh,j) }      (4)

Applying a logarithm to Equation (2) to turn the products into summations yields:

    T* = argmin_T Σj { −log P(xj | T) } + Σ_{Tk∈T} { −log P(Tk) }
       = argmin_T Σj [ −log Pstart(xj) − Σi log Psimilarity(xj, xi) − log Pend(xj) ] + Σ_{Tk∈T} { −log((1 − βi) / βi) }      (5)

4. Solution by Maximizing Flow Network

We represent the global data association of Equation (5) as a cost-flow network. Figure 3(a) depicts a sample network constructed from 7 tracklets observed by 3 different cameras. The source and sink are indicated by ovals labeled S and E, and tracklets by rectangles. Pstart(xj) and Pend(xj) are represented by edges from the source to a tracklet and from a tracklet to the sink, respectively, and Psimilarity(xj, xi) by edges between tracklets.

Figure 3. A network with 3 cameras and 7 tracklets. All possible links are shown in (a), the initial cost-flow network; after finding the max-flow, the final set of trajectories is shown in (b).

To begin, we build a complete graph over tracklets within a window of time and location for Psimilarity. Every edge is assigned a unit flow capacity, and edge weights are assigned from Psimilarity, Pstart, and Pend. The solution is then found by maximizing the flow of the network. Zhang et al. [19] represented data association as a Markov chain in which the current state is determined by the previous state; in contrast, we build a complete network that allows trajectories to skip some states, in order to handle missed detections and occlusions in some cameras. Figure 3(b) shows the result of the max-flow network.
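A minimal sketch of how the cost of Equation (5) can be mapped onto a cost-flow network and solved with networkx follows. It is an illustration rather than the authors' implementation: the in/out node splitting, the miss-detection rate β, the integer scaling of the −log-probability costs (networkx's network_simplex expects integral weights), the search over flow amounts to pick the cheapest solution, and the toy tracklet names are all assumptions.

    import math
    import networkx as nx

    def build_network(tracklets, p_start, p_end, p_similarity, beta=0.1, scale=1000):
        """Cost-flow network for Eq. (5): unit capacities, costs = -log probabilities.
        Each tracklet is split into an in/out node pair carrying the observation cost
        -log((1 - beta)/beta), so reliable detections make routing flow through them profitable."""
        c = lambda p: int(round(-math.log(max(p, 1e-12)) * scale))
        G = nx.DiGraph()
        for x in tracklets:
            G.add_edge(("in", x), ("out", x), capacity=1, weight=c((1 - beta) / beta))
            G.add_edge("S", ("in", x), capacity=1, weight=c(p_start[x]))
            G.add_edge(("out", x), "E", capacity=1, weight=c(p_end[x]))
        for (xa, xb), p in p_similarity.items():        # candidate cross-camera links
            G.add_edge(("out", xa), ("in", xb), capacity=1, weight=c(p))
        return G

    def solve(G, n_tracklets):
        """Try each number of trajectories f, solve the min-cost flow of amount f,
        and keep the cheapest solution (the estimate of Eq. 5)."""
        best = (0, {})                                   # f = 0 has zero cost
        for f in range(1, n_tracklets + 1):
            for node in G.nodes:
                G.nodes[node]["demand"] = 0
            G.nodes["S"]["demand"], G.nodes["E"]["demand"] = -f, f
            cost, flow = nx.network_simplex(G)
            if cost < best[0]:
                best = (cost, flow)
        return [(u[1], v[1]) for u, nbrs in best[1].items()
                if isinstance(u, tuple) and u[0] == "out"
                for v, val in nbrs.items()
                if isinstance(v, tuple) and v[0] == "in" and val > 0]

    # toy example: one tracklet per camera
    trk = ["a1", "o1", "e1"]
    p_start = {t: 0.5 for t in trk}
    p_end = {t: 0.5 for t in trk}
    p_sim = {("a1", "o1"): 0.9, ("o1", "e1"): 0.8, ("a1", "e1"): 0.1}
    print(solve(build_network(trk, p_start, p_end, p_sim), len(trk)))
    # -> [('a1', 'o1'), ('o1', 'e1')]  (order may vary): one chained cross-camera trajectory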

5. Similarity Measure

The similarity measure of targets between two cameras is a key element of multi-camera tracking. Considering both overlapping and non-overlapping camera scenarios, we selected time, location, size, classification, and appearance as the components of the similarity measure Psimilarity(xB, xA) between detected targets in two cameras A and B, assuming that the components are conditionally independent of each other:

    Psimilarity(xB,j, xA,i) = Ptime(tB,j, tA,i) · Plocation(lB,j, lA,i) · Psize(sB,j, sA,i) · Pclass(oB,j, oA,i) · Pappearance(aB,j, aA,i)

Ptime represents the temporal probability, Plocation the spatial probability, Psize the size probability, Pclass the classification probability, and Pappearance the appearance probability. Each component independently provides a distinctive measurement. Other similarity measures, such as velocity, acceleration, higher ontological activities, or feature-based appearance models, can easily be added to this framework.
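The product above can be assembled as in the following sketch, where the Gaussian terms use scipy and the per-camera-pair statistics are placeholders (only the Area 1-2 time parameters later reported in Table 3 are taken from the paper); the classification and appearance terms of Sections 5.4 and 5.5 are passed in precomputed.

    import numpy as np
    from scipy.stats import norm
    from types import SimpleNamespace

    def similarity(xa, xb, params, p_class=1.0, p_appearance=1.0):
        """Product of the component similarity terms for a tracklet pair (x_A, x_B).
        `params` holds per-camera-pair statistics learned from training data (names are
        illustrative)."""
        p_time = norm.pdf(xb.timestamp - xa.timestamp,
                          loc=params["time_mean"], scale=params["time_std"])
        p_loc = norm.pdf(np.linalg.norm(np.subtract(xb.location, xa.location)),
                         loc=params["loc_mean"], scale=params["loc_std"])
        p_size = norm.pdf(xb.size_3d - xa.size_3d,
                          loc=params["size_mean"], scale=params["size_std"])
        return p_time * p_loc * p_size * p_class * p_appearance

    # Example with the Area 1-2 time statistics from Table 3 (mean 446 s, std 30 s);
    # the location and size statistics here are made up for illustration.
    params = {"time_mean": 446.0, "time_std": 30.0,
              "loc_mean": 0.0, "loc_std": 5.0, "size_mean": 0.0, "size_std": 0.5}
    xa = SimpleNamespace(timestamp=0.0, location=(10.0, 2.0), size_3d=4.6)
    xb = SimpleNamespace(timestamp=440.0, location=(12.0, 2.5), size_3d=4.5)
    print(similarity(xa, xb, params))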

5.1. Time Similarity

The temporal probability is given by:

    Ptime(tB,j, tA,i) = N(tB,j − tA,i; mBA, σ²BA)

where N(·; mBA, σ²BA) is a normal distribution over the time interval between Camera B and Camera A, learned from training data. For overlapping cameras, the mean mBA is close to 0 and the variance is very small. For non-overlapping cameras, if the two cameras are close and there is no traffic signal between them, the variance tends to be small and the term contributes strongly to the similarity measure. However, when the two cameras are far apart or there are traffic signals between them, the variance becomes higher and the time measurement affects the similarity measure less, since the distribution is widely spread. The parameters (mean and variance) between all pairs of cameras are learned from training data, which is obtained from our GPS-emitting vehicles. Table 3 shows the learned parameters (mean and standard deviation) of the time measurement for the NOCrossCamera dataset.

5.2. Location Similarity

The spatial probability is defined as

    Plocation(lB,j, lA,i) = N(dist(g(lB,j), g(lA,i)); ml, σl²)

The spatial distance between two targets in two cameras is measured at the Enter/Exit areas, i.e., where a target enters or exits the scene. For a road with multiple lanes, each lane can be an Enter/Exit area. Each Enter/Exit area in one camera has a corresponding Enter/Exit area in the other camera. In an overlapping camera scene, the corresponding Enter/Exit areas overlap in the physical world; in a non-overlapping camera scene, the Enter/Exit areas are located mostly near the image boundary. For overlapping cameras, the function g transforms image coordinates to geographic coordinates (latitude/longitude or UTM). For non-overlapping cameras, g maps the image locations lB and lA to their corresponding Enter/Exit areas, and the distance between the two locations is computed by checking whether they belong to corresponding Enter/Exit areas. Figure 10 shows examples of Enter/Exit area pairs for the NOCrossCamera dataset.

5.3. Size Similarity

The size probability is given by:

    Psize(sB,j, sA,i) = N(sB,j − sA,i; ms, σs²)

The size of a target is a by-product of target classification: from the 3-D model fitting, the length, width, and height of the target are calculated, and the L2 norm of this 3-D size vector is used as the size measurement.

5.4. Classification Similarity

Target classification is performed by relying on the projection matrix of each camera together with 3-D models of humans and vehicles. With an accurate projection matrix, classification is highly accurate, with more than a 95% precision rate as shown in Table 1. In practice, however, an accurate projection matrix may not be available from the beginning. Rather than using a uniform metric for every camera, we use a variable metric based on the performance of each camera: we rely more on a camera with accurate classification and less on a camera with inaccurate classification results. For that, we utilize each camera's classification confusion matrix, which is computed from training data. Accordingly, we define the classification probability of two targets as

    Pclass(oB,j, oA,i) = Σ_{k∈C} P(oB,j, oA,i, ck) = Σ_{k∈C} P(oB,j, oA,i | ck) P(ck)

where oB,j and oA,i are the observed classes and ck is the ground-truth class. Assuming that each observation of classification is conditionally independent,

    Pclass(oB,j, oA,i) = Σ_{k∈C} P(oB,j | cB,k) P(cB,k) P(oA,i | cA,k) P(cA,k)

where P(oB,j | cB,k) and P(oA,i | cA,k) are taken from the confusion matrices, and P(cB,k) and P(cA,k) can also be easily estimated from the marginal probabilities of the confusion matrices.
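A sketch of this confusion-matrix-based term follows, implementing the summation above with each camera's own learned confusion matrix; the class ordering, the toy matrices, and the function names are assumptions.

    import numpy as np

    CLASSES = ["human", "sedan", "SUV", "pickup", "other"]   # illustrative ordering

    def class_conditionals(confusion):
        """P(observed | true) per row and the class prior P(true), from one camera's
        confusion matrix (rows: ground truth, columns: observed)."""
        p_obs_given_true = confusion / confusion.sum(axis=1, keepdims=True)
        p_true = confusion.sum(axis=1) / confusion.sum()
        return p_obs_given_true, p_true

    def class_similarity(o_b, o_a, conf_b, conf_a):
        """Classification similarity of Section 5.4: sum over ground-truth classes of
        P(o_B | c_k) P_B(c_k) P(o_A | c_k) P_A(c_k), each camera using its own matrix."""
        pb, prior_b = class_conditionals(conf_b)
        pa, prior_a = class_conditionals(conf_a)
        ib, ia = CLASSES.index(o_b), CLASSES.index(o_a)
        return float(sum(pb[k, ib] * prior_b[k] * pa[k, ia] * prior_a[k]
                         for k in range(len(CLASSES))))

    # Toy 5x5 count matrices; in the system, one matrix per camera is learned from training data.
    conf_cam1 = np.eye(5) * 90 + 2          # fairly reliable camera
    conf_cam2 = np.eye(5) * 40 + 12         # noisier camera, so it contributes weaker evidence
    print(class_similarity("sedan", "sedan", conf_cam1, conf_cam2))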

5.5. Appearance Similarity

The appearance model requires the most expensive computation for feature extraction and matching. For real-time processing, we use a histogram of HSV color as the appearance model Pappearance(aB,j, aA,i). To cope with color and illumination changes across cameras, brightness is equalized and hue is rectified in all images, assuming that the asphalt of the road is gray.
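The appearance term might be computed as below, using OpenCV to build the 24-bin HSV histogram described in Section 7.2.4; the histogram-intersection comparison is an assumption, since the paper does not state how the histograms are matched.

    import cv2
    import numpy as np

    def hsv_histogram(bgr_patch, bins=8):
        """Concatenated per-channel HSV histogram (3 x 8 = 24 bins), L1-normalized."""
        hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
        ranges = [(0, 180), (0, 256), (0, 256)]        # OpenCV hue runs 0..179
        hist = np.concatenate([cv2.calcHist([hsv], [ch], None, [bins], list(rng)).ravel()
                               for ch, rng in enumerate(ranges)])
        return hist / max(hist.sum(), 1e-9)

    def appearance_similarity(hist_a, hist_b):
        """Histogram intersection in [0, 1]; the comparison metric is an assumption here."""
        return float(np.minimum(hist_a, hist_b).sum())

    # usage with a random patch standing in for a cropped detection
    patch = np.random.randint(0, 256, (64, 32, 3), dtype=np.uint8)
    print(appearance_similarity(hsv_histogram(patch), hsv_histogram(patch)))   # 1.0 for identical patches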

6. Implementation

We implemented the multi-camera tracking system for both overlapping and non-overlapping field-of-view cameras. The system uses the max-flow framework to solve the MAP-based data association, and similarity measures can easily be added, deleted, or changed. Single-view tracking results from the processor attached to each camera are ingested as input, and map-based trajectories across cameras are computed as output. Figure 4 shows the architecture of the implemented multi-camera tracking system.

Figure 4. Dataflow of the multi-camera tracking system: single-camera trackers on per-camera processors produce tracklets that are sent to the cross-camera tracker and map-based processor, which output map-based global trajectories.

Figure 5 illustrates a simple multi-camera tracking example with a non-overlapping pair of cameras. To find a matching pair of tracklets across cameras, we consider multiple detection results in the Enter/Exit areas of each camera; temporally and spatially distant detections are excluded from the comparison to reduce unnecessary computation. Figure 5(a) shows the initial matching pairs between two tracklets. The similarity measures of the candidate pairs are computed and the best matching pair is selected as an edge of the network; the optimal connection is then determined by solving the MAP problem, as shown in Figure 5(b).

Figure 5. Illustration of multi-camera tracking with non-overlapping cameras. (a) Formulation of data association: tracklet nodes at Enter/Exit areas are considered for matching; Camera 1 has Tracklet A, and Camera 2 has Tracklets B and C at a coherent time and location. (b) After solving the MAP problem, Tracklet A in Camera 1 and Tracklet C in Camera 2 are connected, while Tracklet B remains an individual tracklet with no connection to Camera 1.
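The temporal and spatial gating described above might look like the following sketch; the window size and the field names are assumptions.

    def candidate_pairs(tracklets_a, tracklets_b, area_links, max_dt=600.0):
        """Yield only the tracklet pairs worth scoring: the exit area in camera A must be
        linked to the entry area in camera B, and the time gap must fall inside the search
        window (e.g. the learned mean plus a few standard deviations)."""
        for xa in tracklets_a:
            for xb in tracklets_b:
                if (xa.exit_area, xb.enter_area) not in area_links:
                    continue                      # spatially incompatible Enter/Exit areas
                dt = xb.timestamp - xa.timestamp
                if 0.0 <= dt <= max_dt:
                    yield xa, xb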

7. Experiment

7.1. Overlapping Multiple Cameras

We tested the system on the multi-camera videos of the NGSIM Peachtree dataset [20], which contains 7 videos taken from the top of a building (see Figure 6). We selected the first 5 minutes of video as the test data. These test videos are a comparatively easy case, since the camera FOVs overlap each other and all video frames are time-synchronized at the same frame rate. However, they still contain multiple targets in complex situations: targets frequently stop at the traffic signal, and vehicles and pedestrians are often occluded by buildings and trees. In Cameras 5, 6, and 7, vehicles are heavily occluded by trees. For camera calibration, correspondences between points on each image and lat-long coordinates in a geo-browser (e.g. Google Earth) are manually annotated to estimate the image-to-ground homography. The projected images are shown in Figure 7. For this dataset, time, location, and size are used as similarity measures.

Figure 6. Snapshots of 6 out of the 7 NGSIM videos (taken at Peachtree Dr, Atlanta, GA); the panels show Cameras 6 down to 1, with the order reversed to match the spatial arrangement of the cameras.

Figure 7. The NGSIM Peachtree videos projected onto the map, showing the FOVs of Cameras 1 through 7 and the camera position. The homographies for each camera are manually annotated; all 7 cameras are located on top of a single building.

For evaluation, 21 targets visible in all 7 cameras were randomly selected as ground truth. For evaluation of detection and tracking, the tracking metric of [18] is used; evaluation results across multiple cameras are shown in Table 2. In the table, GT indicates the number of ground-truth targets, and Detection Recall is the detection rate within a camera (precision could not be calculated since not all targets are annotated as ground truth). A target tracked for more than 80% of its ground-truth track is counted as Mostly Tracked, one tracked for less than 20% as Mostly Lost, and Partially Tracked covers everything in between. Frag is the number of fragments of a target in a single camera, and X-Frag is the average number of fragments per target across multiple cameras.

Table 2. Evaluation of cross-camera tracking with the NGSIM dataset.
  Video     GT   Detection Recall   Mostly Tracked   Partially Tracked   Mostly Lost   Frag
  1         21   94%                85.7%            14.3%               0             2
  2         21   95%                95.2%            4.8%                0             0
  3         21   97%                90.5%            9.5%                0             0
  4         21   97%                95.2%            4.8%                0             0
  5         21   85%                61.9%            38.1%               0             0
  6         21   90%                85.7%            14.3%               0             0
  7         19   89%                78.9%            21.1%               0             0
  Average   21   93%                84.7%            15.3%               0%            0.29
The average X-Frag over the dataset is 2.14.

7.2. Non-Overlapping Multiple Cameras

We also tested on the NOCrossCamera dataset, which has 4 cameras with mostly non-overlapping fields of view, 24-minute durations, 4000×640 pixel resolution, and 6 frames per second. This dataset is challenging because: i) the cameras are located as far as 750 meters apart, with traffic signals between them (see Figure 9); ii) classification results are very poor, since the projection matrix provided for each camera is not accurate; iii) color and illumination change radically across cameras; iv) the high-resolution video requires fast algorithms in every module; and v) more than 150 vehicles passed by each camera, but not all vehicles passed through all 4 cameras. Sample images for each camera are shown in Figure 8, and the map view is shown in Figure 9.

Figure 8. Example images of the NOCrossCamera dataset (Cameras 1 through 4).

Figure 9. Map view of the NOCrossCamera dataset; the distance between cameras is indicated in meters.

Location, time, classification type, and color histogram are used as similarity measures.

7.2.1 Location Measure
For the location measure, Enter/Exit areas are defined for each camera and their connections across cameras are specified, as shown in Figure 10.

Figure 10. Enter/Exit areas. Pairs 1-2, 3-4, and 4-5 are associated for multi-camera tracking.

7.2.2 Time Measure
For the time measure, the mean and variance between two corresponding Enter/Exit zones were learned using GPS-mounted vehicles. Table 3 shows the learned parameters for the associated Enter/Exit zone pairs. For Enter/Exit area pair 1-2, the standard deviation is 30, which forms a smooth normal distribution; this means the time measure does not contribute much to the similarity measure. In contrast, Enter/Exit area pairs 3-4 and 4-5 have low standard deviations, which form steep peaks near the mean.

Table 3. Learned parameters of the time measurement for associated Enter/Exit area pairs.
  Pair of Enter/Exit Areas   Mean Arrival Time   Standard Deviation
  Area 1-2                   446 s               30.0
  Area 3-4                   177 s               5.0
  Area 4-5                   55 s                3.9

7.2.3 Classification and Size Measure
In practice, the target classification method is applied to the NOCrossCamera dataset with poor projection matrices, so the classification results are unsatisfactory. However, we also wanted to test how robustly the system runs with poor classification results. In the current system, the classification type C has 5 classes: {human, passenger car (sedan), SUV, pick-up truck (pickup), other}. The size similarity measure is not used, since the observed vehicle class already implies size information.

7.2.4 Appearance Measure
For fast computation with reliable performance, a color histogram with 8 bins per color channel (3×8 = 24 bins) was empirically selected as the appearance model.

7.2.5 Overall Performance of Cross-Camera Tracking
For evaluation, a quality analyst annotated ground-truth target correspondences across cameras in the test dataset. Unlike the NGSIM dataset, the NOCrossCamera ground truth includes every target in the scene. After running the multi-camera tracker, the recall and precision of target identification and re-identification across cameras are as shown in Table 4. The overall F-measure is 79.7%. Between Cameras 1 and 2 the performance is unsatisfactory (41.1%), as expected, since the cameras are separated by more than 750 m with traffic signals between them, and more than half of the vehicles do not pass by both cameras. However, for camera pairs 2-3 and 3-4, the system achieves an F-measure of more than 85%. Examples of correct tracking across all 4 cameras are shown in Figure 11, and examples of incorrect tracking are shown in Figure 12.

Table 4. Recall and precision of each cross-camera tracking result.
               Camera 1-2   Camera 2-3   Camera 3-4   Overall
  Recall       0.536        0.829        0.944        0.830
  Precision    0.333        0.883        0.907        0.767
  F-Measure*   0.410        0.855        0.925        0.797
  *F-Measure = 2 × (Recall × Precision) / (Recall + Precision)

Figure 11. Examples of correct tracking; these targets are tracked across all 4 cameras (columns show the camera pairs 1-2, 2-3, and 3-4).

Figure 12. Examples of incorrect tracking, indicated by a red cross mark. The first two rows show tracking across the 4 cameras with one incorrect match between Cameras 1 and 2; the last row shows individual incorrect matches. The wrong matching pair at the center of the third row looks like a correct match, but the two vehicles are different (a Honda and a Mercedes); the error occurs because the Honda exits the camera network and the Mercedes enters it between Cameras 1 and 2 at a coherent time.

7.2.6 Computational Time
We processed the four videos (4000×640 pixel resolution, 6 frames per second, 24-minute duration) on desktop PCs with 2.8 GHz Intel Xeon CPUs. The average execution times per frame for detection, tracking, and classification of multiple targets in a single camera, and for cross-camera tracking, are shown in Table 5. Considering both the single-camera trackers and the multi-camera tracker, the total average processing time per frame is 167.1 ms, which means the system can process 6 frames per second on average.

Table 5. Average computation time of each process per frame with 4000×640 pixel resolution videos.
  Process                              Average Time
  Detection in a single camera         70.4 ms
  Tracking in a single camera          8.8 ms
  Classification in a single camera    38.5 ms
  Tracking across multiple cameras     49.4 ms
  Total                                167.1 ms
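As a quick arithmetic check of the reported numbers, the snippet below recomputes Table 4's F-measures from its recall and precision rows and the frame rate implied by Table 5's per-frame total.

    # Arithmetic check of the reported numbers.
    pairs = {"Camera 1-2": (0.536, 0.333), "Camera 2-3": (0.829, 0.883),
             "Camera 3-4": (0.944, 0.907), "Overall": (0.830, 0.767)}
    for name, (recall, precision) in pairs.items():
        print(name, round(2 * recall * precision / (recall + precision), 3))   # 0.411, 0.855, 0.925, 0.797
    total_ms = 70.4 + 8.8 + 38.5 + 49.4                    # Table 5 components sum to 167.1 ms
    print(round(1000 / total_ms, 1), "frames per second")  # ≈ 6 fps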

8. Conclusion

A framework for robust multiple-target tracking across multiple cameras has been discussed. The cross-camera data association problem is formulated as MAP estimation. For robust multi-camera tracking, the time, location, size, classification type, and appearance of targets are effectively applied as similarity measures. The system was extensively tested on both overlapping and non-overlapping camera networks, and the experimental results validate its robustness and effectiveness. The implemented method has been tested on land scenes; however, the framework can readily be applied to a maritime scene or a scene with moving aerial platforms. For future work, more efficient and effective appearance models are required to increase the accuracy of the system.

Acknowledgment

This material is based upon work supported in part by the Office of Naval Research under contract numbers N00014-10-C-0527 and N00014-11-C-0308.

References
[1] J. Berclaz, F. Fleuret, and P. Fua, "Robust people tracking with global trajectory optimization," Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[2] Q. Cai and J. Aggarwal, "Tracking human motion in structured environments using a distributed-camera system," IEEE Trans. on PAMI, 21:1241-1247, 1999.
[3] R. Collins, A. Lipton, H. Fujiyoshi, and T. Kanade, "Algorithms for cooperative multisensor surveillance," Proceedings of the IEEE, 89:1456-1477, 2001.
[4] Y. Guo, C. Rao, S. Samarasekera, J. Kim, R. Kumar, and H.S. Sawhney, "Matching vehicles under large pose transformations using approximate 3D models and piecewise MRF model," CVPR 2008.
[5] Y. Shan, H.S. Sawhney, and R. Kumar, "Vehicle Identification between Non-Overlapping Cameras without Direct Feature Matching," ICCV 2005.
[6] C. Huang, B. Wu, and R. Nevatia, "Robust Object Tracking by Hierarchical Association of Detection Responses," ECCV 2008.
[7] J. Huang, S.R. Kumar, M. Mitra, W. Zhu, and R. Zabih, "Image Indexing Using Color Correlograms," IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, June 1997, pp. 762-768.
[8] O. Javed, Z. Rasheed, K. Shafique, and M. Shah, "Tracking across multiple cameras with disjoint views," ICCV 2003.
[9] M.J. Leotta and J.L. Mundy, "Predicting high resolution image edges with a generic, adaptive, 3-D vehicle model," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2009.
[10] J. Lou and T. Tan, "3-D Model-Based Vehicle Tracking," IEEE Transactions on PAMI, 2005.
[11] H. Pirsiavash, D. Ramanan, and C. Fowlkes, "Globally-Optimal Greedy Algorithms for Tracking a Variable Number of Objects," CVPR 2011.
[12] Z. Rasheed, G. Taylor, L. Yu, M.W. Lee, T.E. Choe, F. Guo, A. Hakeem, K. Ramnath, M. Smith, A. Kanaujia, D. Eubanks, and N. Haering, "Rapidly Deployable Video Analysis Sensor Units for Wide Area Surveillance," First IEEE Workshop on Camera Networks (WCN2010), held in conjunction with CVPR 2010, 14 June 2010.
[13] Y. Shan, H.S. Sawhney, and R. Kumar, "Vehicle Identification between Non-Overlapping Cameras without Direct Feature Matching," ICCV 2005.
[14] B. Song and A.K. Roy-Chowdhury, "Stochastic adaptive tracking in a camera network," ICCV 2007.
[15] X. Song and R. Nevatia, "A Model-Based Vehicle Segmentation Method for Tracking," ICCV 2005.
[16] C. Stauffer and W.E.L. Grimson, "Adaptive Background Mixture Models for Real-time Tracking," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1999.
[17] K. Toyama, et al., "Wallflower: Principles and Practice of Background Maintenance," Proc. IEEE ICCV, 1999.
[18] B. Wu and R. Nevatia, "Detection and Tracking of Multiple, Partially Occluded Humans by Bayesian Combination of Edgelet based Part Detectors," International Journal of Computer Vision, 75(2):247-266, November 2007.
[19] L. Zhang, Y. Li, and R. Nevatia, "Global Data Association for Multi-Object Tracking Using Network Flows," Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[20] NGSIM (Next Generation Simulation): http://ops.fhwa.dot.gov/trafficanalysistools/ngsim.htm
