Global motion estimation–based method for nighttime video enhancement
Yunbo Rao, Weiyao Lin, and Leiting Chen

Optical Engineering 50(5), 057203 (May 2011)

Yunbo Rao, University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu 611731, China. E-mail: [email protected]

Weiyao Lin, Shanghai Jiao Tong University, Institute of Image Communications and Information Processing, Department of Electronic Engineering, Shanghai 200240, China

Abstract. In order to efficiently enhance dark nighttime videos, high-quality daytime information of the same scene is often introduced to help the enhancement. However, due to camera motion, the introduced daytime images may not capture exactly the same scene as the nighttime videos. Thus, the final fused moving objects may not produce reasonable results. In this paper, we make the following two contributions: 1. we propose a global motion estimation–based scheme to address the problem of scene differences between daytime and nighttime videos; 2. based on this, we further propose an improved framework for nighttime video enhancement which can efficiently recover the unreasonable enhancement results due to scene differences. Experimental results show the effectiveness of the proposed algorithm. © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.3579451]

Subject terms: video enhancement; global motion estimation; motion segmentation.

Leiting Chen, University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu 611731, China

Paper 110011RR received Jan. 5, 2011; revised manuscript received Mar. 24, 2011; accepted for publication Mar. 28, 2011; published online May 5, 2011.

1 Introduction

Video enhancement, which aims at improving the visual quality of videos, plays a key role in nighttime video surveillance so that the objects or activities of interest can be clearly monitored.1–3 The existing techniques can be classified into two categories: 1. enhance the nighttime video by itself, without introducing additional information4 (i.e., itself-based methods); 2. introduce external daytime or high-quality images of the same scene to help enhance the nighttime videos5–7 (i.e., daytime-based methods). Since the daytime-based methods can normally create more effective enhancement results, we focus on this category in this paper.

In these daytime-based methods, one of the key problems is to keep coherent scenes between the daytime and the nighttime videos. Currently, most existing daytime-based methods assume that the camera is fixed such that exactly the same scene can be obtained from the daytime videos.5–7 However, due to wind or other factors, the surveillance camera may often have tiny motions, which result in scene differences between daytime and nighttime videos. In these cases, the previous methods may lose static illumination and create unreasonable results. For example, cars on highways may be moved outside the highway lanes in the enhanced results. Therefore, handling this camera motion issue is an important problem for video enhancement algorithms.

In this paper, we propose a new global motion estimation–based (GME-based) algorithm which introduces global motion estimation for handling the scene difference problem. While traditional GME methods are used for addressing motions within a single video,8,9 our GME-based algorithm focuses on the motion patterns between different videos (i.e., between the daytime image with no moving objects and the nighttime video, which includes moving objects).


Furthermore, we also propose an improved image-fusion method that can effectively reduce the light turn-off problem. Combining the above two contributions, our proposed improved framework for nighttime video enhancement can efficiently recover the unreasonable enhancement results due to scene differences.

2 Proposed Method

Our proposed improved framework for nighttime video enhancement is shown in Fig. 1. The framework is composed of the following seven components: 1. motion segmentation; 2. fusion segmentation; 3. using GME to recover the scene difference problem; 4. acquisition of clean daytime background images; 5. intensity component extraction from the color background image; 6. illumination component calculation from the intensity component image; and 7. final enhancement from the GME-compensated input nighttime video and the daytime illumination background image. Note that steps 4 to 6 in the proposed algorithm follow our previous work.10 Therefore, in this paper, we focus on steps 1, 2, 3, and 7, which are described in detail in the following. Steps 1, 2, and 3 correspond to our first contribution, the GME-based algorithm, and step 7 corresponds to our second contribution, the improved image-fusion method.

2.1 Motion Segmentation

The motion segmentation step refers to segmenting the pixels associated with coherently moving objects or moving regions. In practice, motion segmentation in the image space is difficult, especially when dealing with low-contrast and noisy videos.




Fig. 1 A block diagram of our proposed framework.

In order to effectively extract moving objects from the dark background, we propose to introduce tone mapping functions for segmenting the nighttime video objects. In our motion segmentation step, we first apply the tone mapping function11 to "pre-enhance" the videos. Then, a cluster-based method is used to extract moving objects. The cluster-based method can be described in the following.

The cluster-based method first segments an image into multiple nonoverlapping regions by a weighted k-means clustering algorithm.12,13 Generally, a k-means clustering of an image $I$ is defined as a partitioning of the elements of $I$ into $k$ sets $C_1, C_2, \ldots, C_k$. Each cluster contains multiple vectors $p_u$, where each $p_u$ corresponds to a pixel $u$ in $I$ (i.e., $p_u = [x_u, y_u, i_u]$, where $x_u$, $y_u$, and $i_u$ represent the x-coordinate, y-coordinate, and intensity value of pixel $u$, respectively). The clustering is performed such that the following conditions are met:

$C_i \neq \emptyset, \quad i = 1, \ldots, k,$  (1)

$\bigcup_{i=1}^{k} C_i = I,$  (2)

$C_i \cap C_j = \emptyset, \quad i \neq j, \quad \forall i, j \in \{1, \ldots, k\},$  (3)

$S(p_u, C_i) > S(p_u, C_j), \quad \forall p_u \in C_i, \quad i \neq j, \quad i, j = 1, \ldots, k,$  (4)

where $S(p_u, C_i)$ represents the similarity between $p_u$ and cluster $C_i$, and $\bigcup_{i=1}^{k}$ represents the union operation among clusters. In this paper, the similarity metric $S(p_u, C_i)$ is measured by minimizing a weighted squared Euclidean distance measure.

In each iteration, a pixel $u$ of the image is assigned to the new cluster $j$ that minimizes the following criterion:

$e_{uj} = (p_u - f_j)^{T} w_j (p_u - f_j),$  (5)

where $f_j$ is a vector composed of the mean coordinates and the mean intensity of all pixels in cluster $j$, and $w_j$ is a 3×3 diagonal matrix that contains weighting factors. In practice, the total number of clusters $k$ can be determined by $k = \mathrm{num}(M_k) + 2$, where $M_k$ is the predetected motion mask in the frame12,13 and $\mathrm{num}(M_k)$ is the total number of connected pixel groups in the motion mask. This motion mask can be calculated by comparing pixels in the current frame and the previous frame to indicate pixels with large changes.13 After clustering, a binarization process is applied to label each pixel as foreground or background: clusters associated with the predetected motion mask are labeled as foreground and the other pixels are labeled as background. Furthermore, we also use morphological opening, closing, and connected-component analysis on the binary masks to remove small, random noise and to fill holes. Some experimental results are shown in Fig. 1 (step 1), and a compact sketch of this segmentation procedure is given below.
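The clustering step can be prototyped with standard tools. The following is a minimal sketch, not the authors' implementation, assuming NumPy and SciPy are available and that a rough binary motion mask from frame differencing has already been computed; the weighting values, the cluster-to-mask overlap threshold, and the helper name `segment_moving_objects` are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_moving_objects(frame_gray, motion_mask, weights=(1.0, 1.0, 4.0), n_iter=10):
    """Weighted k-means motion segmentation sketch (Sec. 2.1).

    frame_gray  : 2-D float array, tone-mapped ("pre-enhanced") intensity image.
    motion_mask : 2-D bool array, predetected motion mask from frame differencing.
    weights     : diagonal entries of the 3x3 weighting matrix w_j in Eq. (5)
                  (values here are illustrative, not taken from the paper).
    """
    h, w = frame_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Feature vector p_u = [x_u, y_u, i_u] for every pixel u.
    feats = np.stack([xs.ravel(), ys.ravel(), frame_gray.ravel()], axis=1).astype(np.float64)
    wdiag = np.asarray(weights, dtype=np.float64)

    # k = num(M_k) + 2: one cluster per connected motion region plus two extra clusters.
    _, num_regions = ndimage.label(motion_mask)
    k = num_regions + 2

    # Initialise cluster centres f_j at randomly chosen pixels.
    rng = np.random.default_rng(0)
    centres = feats[rng.choice(len(feats), size=k, replace=False)]

    for _ in range(n_iter):
        # e_uj = (p_u - f_j)^T w_j (p_u - f_j), evaluated for all pixels and clusters.
        diff = feats[:, None, :] - centres[None, :, :]
        dist = np.einsum('nkd,d,nkd->nk', diff, wdiag, diff)
        assign = dist.argmin(axis=1)
        for j in range(k):
            members = feats[assign == j]
            if len(members):
                centres[j] = members.mean(axis=0)

    # Label clusters that overlap the predetected motion mask as foreground.
    assign_img = assign.reshape(h, w)
    fg = np.zeros((h, w), dtype=bool)
    for j in range(k):
        cluster_pixels = assign_img == j
        if cluster_pixels.any() and motion_mask[cluster_pixels].mean() > 0.5:  # threshold is an assumption
            fg |= cluster_pixels
    # Morphological opening/closing to remove small noise and fill holes.
    fg = ndimage.binary_closing(ndimage.binary_opening(fg, np.ones((3, 3))), np.ones((3, 3)))
    return fg
```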




2.2 Fusion Segmentation

As mentioned, in this paper we propose to introduce GME for addressing the scene difference problem caused by a nonfixed camera between daytime and nighttime videos. However, since daytime and nighttime videos have different illumination properties as well as different moving objects, directly applying the GME method cannot produce satisfactory results. Therefore, in this paper, we propose a "fusion segmentation" step to fuse multiple motion-segmented frames (i.e., frames from step 1) to capture the motion path information of the moving objects (i.e., to create segmentation-fused images). After this step, the GME can be applied to these segmentation-fused images. In this way, although daytime and nighttime videos have different properties, their fused path information is similar and is thus suitable for estimating camera motion patterns. In this paper, the daytime or nighttime segmentation-fused images are obtained by fusing $v$ frames of the daytime or nighttime motion-segmented videos:

$f_N(x, y) = \bigcup_{i=1}^{v} M_{Ni}(x, y), \qquad f_D(x, y) = \bigcup_{i=1}^{v} M_{Di}(x, y),$  (6)

where $M_{Ni}(x, y)$ and $M_{Di}(x, y)$ represent the nighttime and daytime motion-segmented images from step 1, and $v$ is set to 50 in this paper. Note that $v$ can also take other values as long as the segmentation-fused image covers all motion paths. Examples of the resulting segmentation-fused images are shown in Fig. 1 (step 2).

2.3 Pixel-Based Global Motion Estimation

After obtaining the segmentation-fused images in step 2, we perform GME to estimate the camera motion patterns between the daytime and the nighttime videos. Normally, global motion patterns can be described in a parametric form such as the two-parameter translational motion model or the twelve-parameter quadratic transform model. In this paper, the eight-parameter model is used since it can effectively model the 3-D affine motions of objects. This model is defined as follows:

$x' = f_x(x, y) = \dfrac{ax + by + c}{px + qy + 1}, \qquad y' = f_y(x, y) = \dfrac{dx + ey + f}{px + qy + 1},$  (7)

where $(x, y)$ and $(x', y')$ are coordinates in the current frame (i.e., a frame in the nighttime video) and the reference frame (i.e., a frame in the daytime video), respectively. In this paper, we use Su et al.'s method8 to estimate these eight parameters. In this process, the motion vector (MV) fields between the segmentation-fused daytime and nighttime frames are first estimated, where each $\mathrm{MV}_i = [\mathrm{MV}_{xi}, \mathrm{MV}_{yi}]$ describes a translational motion for a block of pixels.8,9 Then the global motion parameter set is estimated by fitting the transform coefficients with a least-squares estimator that minimizes the sum of matching errors between the MV fields and the motion estimated by the global motion parameters. The motion estimated by the global motion parameters can be described as

$(x_i' - x_i,\; y_i' - y_i) = [f_x(x_i, y_i) - x_i,\; f_y(x_i, y_i) - y_i],$  (8)

where $i$ is the pixel index. Then, the matching errors $[e_{xi}, e_{yi}]$ between the MV fields and the motion estimated by the global motion parameters8 can be calculated by

$e_{xi} = \mathrm{MV}_{xi} - x_i' + x_i, \qquad e_{yi} = \mathrm{MV}_{yi} - y_i' + y_i.$  (9)

The squared matching error (SE) is calculated as

$E = \sum_i \left(e_{xi}^2 + e_{yi}^2\right) = \sum_i \left[(\mathrm{MV}_{xi} - x_i' + x_i)^2 + (\mathrm{MV}_{yi} - y_i' + y_i)^2\right].$  (10)

The final GME parameters are obtained by minimizing the cost function in Eq. (10). In practice, due to the local motion of moving objects as well as the inaccuracy of the MV fields, the error terms $e_{xi}$ and $e_{yi}$ in Eq. (10) can be assumed to be independent and identically distributed zero-mean Gaussian random variables in both the horizontal and the vertical directions.8 Thus, we minimize the truncated quadratic error function via the Newton–Raphson method. Figure 2 shows the matching errors produced by our GME step in the horizontal and vertical directions, respectively. A compact sketch of the fusion and parameter-fitting steps is given below.
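The fusion of Eq. (6) and the fitting of Eqs. (7)–(10) can be sketched as follows. This is an illustrative sketch rather than the authors' code: the block matching that supplies the MV field is taken as given, and, for brevity, only the affine special case ($p = q = 0$) of Eq. (7) is fitted by linear least squares, whereas the paper fits all eight parameters with a Newton–Raphson minimizer (Ref. 8). Function and parameter names are assumptions.

```python
import numpy as np

def fuse_masks(mask_frames):
    """Eq. (6): union of v motion-segmented binary masks -> segmentation-fused image."""
    fused = np.zeros_like(mask_frames[0], dtype=bool)
    for m in mask_frames:
        fused |= m.astype(bool)
    return fused

def estimate_global_motion(block_centers, motion_vectors):
    """Fit a global motion model from a coarse MV field (Sec. 2.3), sketch only.

    block_centers  : (N, 2) array of block-centre coordinates (x_i, y_i).
    motion_vectors : (N, 2) array of [MV_xi, MV_yi] between the fused day/night masks.
    """
    x, y = block_centers[:, 0], block_centers[:, 1]
    # Target positions implied by the MVs: x' = x + MV_x, y' = y + MV_y.
    xp = x + motion_vectors[:, 0]
    yp = y + motion_vectors[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    # Solve x' = a*x + b*y + c and y' = d*x + e*y + f in the least-squares sense,
    # i.e. minimise the squared matching error of Eq. (10) under the affine model.
    (a, b, c), *_ = np.linalg.lstsq(A, xp, rcond=None)
    (d, e, f), *_ = np.linalg.lstsq(A, yp, rcond=None)
    return np.array([[a, b, c], [d, e, f]])

def warp_points(params, pts):
    """Apply the estimated transform to nighttime coordinates to align them with the daytime frame."""
    x, y = pts[:, 0], pts[:, 1]
    xp = params[0, 0] * x + params[0, 1] * y + params[0, 2]
    yp = params[1, 0] * x + params[1, 1] * y + params[1, 2]
    return np.column_stack([xp, yp])
```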

Fig. 2 (a) Matching errors in horizontal direction. (b) Matching errors in vertical direction.





2.4 Nighttime Video Enhancement

After obtaining the GME-corrected video and the daytime background illumination image, we can perform fusion to obtain the final enhanced video. In this paper, we extend the denighting method in Ref. 6 to obtain the final enhanced nighttime videos. The algorithm in Ref. 6 utilizes the illumination ratio between the daytime background and the nighttime background for enhancing the nighttime videos. The illumination component of the final enhanced nighttime video is obtained by

$L_E(x, y) = \dfrac{L_{DB}(x, y)}{L_{NB}(x, y)} \, L_N(x, y),$  (11)

where $L_{DB}(x, y)$ and $L_{NB}(x, y)$ represent the illumination components of the daytime and nighttime background images, respectively, $L_{DB}(x, y)/L_{NB}(x, y)$ is the illumination ratio, and $L_N(x, y)$ is the nighttime illumination component. Since this method does not need foreground object segmentation, it can work effectively in cases where accurate segmentation is difficult. However, the results enhanced by this method may lose illumination that is present in the original nighttime videos, such as the highway lighting and the lighting inside a room. This is because in those regions the illumination ratio between the daytime and nighttime background images can be much smaller than 1 (since the lights are off in the daytime). This makes the fusion results favor the daytime illumination instead of the nighttime one, so the lights in the nighttime videos appear to be "turned off" in the enhanced videos. Therefore, in this paper, we improve this algorithm in the following way: if the illumination ratio is less than 1, we set the illumination ratio to 1. Results show that this simple but effective modification addresses the illumination loss problem; a minimal sketch of the modified fusion is given below.
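The following is a minimal sketch of the modified fusion of Eq. (11), assuming the illumination components have already been extracted (steps 4–6 in Fig. 1) and the nighttime frame has been GME-aligned; the `eps` guard and the optional `ratio_cap` (related to the over-enhancement discussion in Sec. 3) are assumptions added to keep the example self-contained.

```python
import numpy as np

def enhance_illumination(L_N, L_DB, L_NB, ratio_cap=None, eps=1e-6):
    """Improved fusion of Sec. 2.4: Eq. (11) with the illumination ratio clamped.

    L_N  : nighttime illumination component (GME-aligned frame).
    L_DB : daytime background illumination component.
    L_NB : nighttime background illumination component.
    ratio_cap : optional upper bound on L_DB/L_NB; the cap value itself is an
                assumption, not specified in the paper. None disables the cap.
    """
    ratio = L_DB / np.maximum(L_NB, eps)
    # Lights that are on at night but off in the daytime give a ratio < 1;
    # clamping to 1 keeps them from being "turned off" in the enhanced result.
    ratio = np.maximum(ratio, 1.0)
    if ratio_cap is not None:
        ratio = np.minimum(ratio, ratio_cap)
    return ratio * L_N
```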

Fig. 3 Frames from typical surveillance video. (a) The original unenhanced frame. (b) Enhanced nighttime video without using our proposed GME method for handling scene difference. (c) Enhanced nighttime video using our proposed algorithm including the GME step to deal with the camera motion problem. (d) Enhanced nighttime video by further reducing the over-enhancement problem in (c).


Fig. 4 (a) The original un-enhanced frame. (b) The enhanced result by using our GME-based procedure but without our improved final fusion method. (c) The enhanced result by using our improved final fusion method but without our GME-based procedure. (d) The enhanced result by using both our GME-based procedure and our improved final fusion procedure.

3 Experimental Results

We collected 12 videos from various datasets and performed experiments on them. The resolution of these videos is 320×240 and the frame rate is 25 frames per second.

Figure 3 shows some of the enhanced results. Note that there is a slight camera motion between the nighttime and daytime videos. Figure 3(a) is the original un-enhanced frame. Figure 3(b) is the result using our framework but without the GME step for handling the scene difference problem. We can see from Fig. 3(b) that the cars in the enhanced video are moving outside the highway, which is not a reasonable result. Figure 3(c) is the result of our proposed algorithm including the GME step. We can clearly see that by using our method to handle the camera motion problem, the enhanced result is not only subjectively pleasant but also reasonable in keeping the cars inside the lanes. However, it can be seen that some areas in Figs. 3(b) and 3(c) are over-enhanced [e.g., some car-light areas in Fig. 3(c) become uniformly white]. This is because the original fusion method6 does not consider the over-enhancement issue. Thus, the fused pixels may become overly bright when the illumination ratio [i.e., $L_{DB}(x, y)/L_{NB}(x, y)$ in Eq. (11)] and the corresponding nighttime illumination component [i.e., $L_N$ in Eq. (11)] are both large. In order to solve this problem, we can further extend our improved fusion method by using a tone mapping method14 on $L_N$ to pre-scale its range before fusion, or by setting an upper bound on $L_{DB}/L_{NB}$ to prevent it from becoming too large. Figure 3(d) is the result of applying a simple pre-scaling on $L_N$ before fusion. We can see that the over-enhancement problem is obviously reduced compared with Fig. 3(c). Further improvements on this over-enhancement problem can also be achieved by developing more sophisticated methods based on the discussion above; one possible pre-scaling is sketched below.
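The paper only states that a simple tone-mapping pre-scaling was applied to $L_N$ for Fig. 3(d) (Ref. 14); the sketch below uses a Reinhard-style $L/(1+L)$ curve as one plausible stand-in, purely for illustration and not as the authors' exact choice.

```python
import numpy as np

def prescale_night_illumination(L_N, eps=1e-6):
    """Compress the range of the nighttime illumination component before fusion.

    A global Reinhard-style curve is used here only as an illustrative stand-in
    for the tone-mapping pre-scaling discussed in Sec. 3.
    """
    L = L_N / max(float(L_N.max()), eps)        # normalise to [0, 1]
    compressed = L / (1.0 + L)                   # compress highlights
    # Rescale so the overall brightness level stays comparable to the input.
    return compressed * (float(L_N.max()) / max(float(compressed.max()), eps))
```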




Fig. 5 (a) The original un-enhanced nighttime frame. (b) The result after the GME step. (c) The enhanced result by using both our GME-based procedure and our improved final fusion procedure.

Figure 4 shows another experimental result. Figures 4(b) and 4(d) are enhanced results obtained using our GME-based procedure, while Fig. 4(c) is the result without it. We can see that by using our GME-based procedure, the lamp-shift problem in Fig. 4(c) is efficiently avoided in Figs. 4(b) and 4(d). Furthermore, the results in Figs. 4(c) and 4(d) use our improved final fusion method, while Fig. 4(b) uses the method in Ref. 6 for fusion [i.e., Eq. (11)]. We can see clearly that by using our improved final fusion method, the lamp-off problem in Fig. 4(b) is also effectively avoided. Thus, the result produced by our proposed method, which includes both the GME-based procedure and the improved final fusion step [i.e., Fig. 4(d)], is the best and handles the various problems of the previous methods.

Furthermore, Fig. 5 compares the motion differences between the original and the enhanced results. Figure 5(a) is the original nighttime frame, Fig. 5(b) is the result after the GME step (i.e., the result from step 3 in Fig. 1), and Fig. 5(c) is the final enhanced result by our proposed

algorithm. Comparing Figs. 5(a) and 5(b), we can clearly see that objects in Fig. 5(b) are "shifted" from their original places in Fig. 5(a) by the GME step. In this way, the object locations in the nighttime videos can be aligned with their corresponding daytime locations, and the camera motion problem can be eliminated in the final fusion step. Consequently, we can see from Fig. 5(c) that the object locations in the final enhanced video are not the same as the ones in the original nighttime video; rather, they are aligned with the daytime video.

Figure 6 shows another result in which the daytime and nighttime videos have very different weather conditions. Figure 6(a) is the daytime frame with "sunny" weather, Fig. 6(b) is the original nighttime frame with "rainy" weather, and Fig. 6(c) is the final enhanced result by our proposed algorithm. We can see clearly that our algorithm also works well under different weather conditions.

Finally, Table 1 shows the results of a user study8 in which the users were asked to view the original video and the enhanced video side by side (as shown in Figs. 3 and 4). After viewing the videos, each user gave a score to each video in the range of 1 to 10, where 1 indicates very poor quality, 10 indicates very good quality, and a score of 6 is considered acceptable. A total of eight users responded in our test; all of them used LCD displays to watch the videos. Table 1 shows the average scores of the eight users for the 12 video sequences. It can be seen that the enhanced videos outperform the original videos in all sequences. The average score of the original videos is 4.51, which is below the acceptable level, while the two compared methods (i.e., the two columns next to the original videos) improve the average score to above 6. However, compared to these methods, our proposed algorithm, which includes

Table 1 User study results.

| Sequence ID | Original video | Enhanced, GME-based procedure without improved final fusion | Enhanced, improved final fusion without GME-based procedure | Enhanced, GME-based procedure + improved final fusion |
|---|---|---|---|---|
| 1 | 5.89 | 6.45 | 6.21 | 7.05 |
| 2 | 5.67 | 6.20 | 7.01 | 7.78 |
| 3 | 3.96 | 6.12 | 6.34 | 7.10 |
| 4 | 2.61 | 5.81 | 5.92 | 6.40 |
| 5 | 3.11 | 6.01 | 6.20 | 6.31 |
| 6 | 4.50 | 6.12 | 6.21 | 7.00 |
| 7 | 3.80 | 5.91 | 5.93 | 6.71 |
| 8 | 3.12 | 5.82 | 5.21 | 6.40 |
| 9 | 5.62 | 6.31 | 6.12 | 7.12 |
| 10 | 6.10 | 6.48 | 6.80 | 7.90 |
| 11 | 4.81 | 5.61 | 5.78 | 6.90 |
| 12 | 5.02 | 6.21 | 6.10 | 7.13 |
| Average | 4.51 | 6.08 | 6.15 | 6.98 |





Fig. 6 (a) The daytime frame. (b) The original nighttime frame. (c) The enhanced result by using both our GME-based and our improved final fusion procedure.

both the GME-based procedure and the improved final fusion method, clearly achieves the highest score.

4 Conclusions

In this paper, we propose: a. a new GME-based algorithm for handling the camera motion problem in nighttime video enhancement; and b. an improved image fusion method for reducing the light turn-off effect. Based on these, we propose an improved framework for nighttime video enhancement which can efficiently recover the unreasonable enhancement results. Experimental results demonstrate that the proposed algorithm can not only address the object-shift problem due to camera motion, but also reduce the light turn-off effect in the final fused image, thus producing more satisfactory results than the existing methods.

Acknowledgments

This work is partly supported by the National High-Tech Program 863 of China (Grants No. 2007AA010407 and No. 2009GZ0017), the National Research Program of China (Grant No. 9140A06060208DZ0207), the National Science Foundation of China (Grant No. 61001146), and the China Scholarships Council.

References

1. W. Lin, M.-T. Sun, R. Poovendran, and Z. Zhang, "Group event detection with a varying number of group members for video surveillance," IEEE Trans. Circuits and Systems for Video Technology 20(8), 1057–1067 (2010).
2. W. Lin, M.-T. Sun, R. Poovendran, and Z. Zhang, "Activity recognition using a combination of category components and local models for video surveillance," IEEE Trans. Circuits and Systems for Video Technology 18, 1128–1139 (2008).
3. W. Lin, M.-T. Sun, R. Poovendran, and Z. Zhang, "Human activity recognition for video surveillance," IEEE International Symposium on Circuits and Systems (ISCAS), Seattle, WA (2008).
4. T. Arici, S. Dikbas, and Y. Altunbasak, "A histogram modification framework and its application for image contrast enhancement," IEEE Transactions on Image Processing 18(9), 1921–1935 (2009).


5. A. Ilie, R. Raskar, and J. Yu, "Gradient domain context enhancement for fixed cameras," International Journal of Pattern Recognition and Artificial Intelligence 19(4), 533–549 (2005).
6. A. Yamasaki, H. Takauji, S. Kaneko, T. Kanade, and H. Ohki, "Denighting: enhancement of nighttime images for a surveillance camera," in Proceedings of SPIE, the International Society for Optical Engineering, San Diego, CA (2008).
7. Y. Cai, K. Huang, T. Tan, and Y. Wang, "Context enhancement of nighttime surveillance by image fusion," in Proceedings of the 18th International Conference on Pattern Recognition, Hong Kong (IEEE, New York, 2006), pp. 980–983.
8. Y. Su, M.-T. Sun, and V. Hsu, "Global motion estimation from coarsely sampled motion vector field and the applications," IEEE Transactions on Circuits and Systems for Video Technology 15(2), 232–241 (2005).
9. W. Lin, M.-T. Sun, H. Li, and H. Hu, "A new shot change detection method using information from motion estimation," in Proceedings of Advances in Multimedia Information Processing (Springer-Verlag, Berlin, 2010), Vol. 6298/2011.
10. Y.-B. Rao, W. Lin, and L.-T. Chen, "Image-based fusion for video enhancement of nighttime surveillance," Optical Engineering Letters, issue 12 (2010).
11. E.-P. Bennett and L. McMillan, "Video enhancement using per-pixel virtual exposures," ACM Transactions on Graphics 24(3), 845–852 (2005).
12. S. Elhabian, K. El-Sayed, and S. Ahmed, "Moving object detection in spatial domain using background removal techniques – state-of-art," Recent Patents on Computer Science 1(1), 32–54 (2008).
13. J. B. Kim and H. J. Kim, "Efficient region-based motion segmentation for a video monitoring system," Pattern Recognition Letters 24, 113–128 (2003).
14. Z. Liu, C. Zhang, and Z. Zhang, "Learning-based perceptual image quality improvement for video conferencing," Proc. ICME, pp. 1035–1038 (2007).

Yunbo Rao received his BS and MS degrees from Sichuan Normal University and the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2003 and 2006, respectively, both in the School of Computer Science and Engineering (SCSE). He is currently working toward his PhD degree in computer science and engineering at UESTC and is a visiting scholar in electrical engineering at the University of Washington. His research interests include video enhancement, computer vision, and crowd animation.

Weiyao Lin received his BE and ME degrees from Shanghai Jiao Tong University, China, in 2003 and 2005, respectively, and his PhD degree from the University of Washington, Seattle, WA, in 2010, all in electrical engineering. Since 2010, he has been an assistant professor at the Institute of Image Communication and Information Processing, Department of Electronic Engineering, Shanghai Jiao Tong University. His research interests include video processing, machine learning, computer vision, and video coding and compression.

Leiting Chen received his MS and PhD degrees from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 1994 and 2007, respectively, both in the School of Computer Science and Engineering, where he is currently a professor. He has served as chair, program committee member, or organizing committee chair for many international conferences and workshops, and was an editor of the Journal of Computer Application. His current research interests include computer graphics and virtual reality.


