A New Shot Change Detection Method Using Information from Motion Estimation

Weiyao Lin1, Ming-Ting Sun2, Hongxiang Li3, Hai-Miao Hu4

1 Institute of Image Communication and Information Processing, Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2 Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA
3 Department of Electrical Engineering, North Dakota State University, Fargo, ND 58108, USA
4 School of Computer Science and Engineering, Beihang University, Beijing 100191, China

Abstract. In many video coding related applications, both shot change detection and motion estimation need to be performed. In this paper, a new shot change detection method is proposed for such applications, which uses the information from motion estimation. We first propose to classify the Macroblocks of each frame into different classes based on the information obtained in the motion estimation process. Each class captures some specific characteristics of the frame. Shot changes can then be detected based on the extracted class information. The proposed method has low computation complexity and can easily be implemented in most existing video coding systems without extra cost. Experimental results show that the proposed method can detect shot changes effectively. Some extended applications of our class information are also discussed in the paper.

Keywords: shot change detection, motion estimation, video coding

1. Introduction and Related Work

In many video coding related applications, both shot change detection and motion estimation (ME) need to be performed. For example, a good shot change detection algorithm can obviously improve the performance of rate control. Furthermore, in systems such as digital libraries or video on demand, many original videos need to be compressed before being stored in a database. In these cases, it is also desirable to label shot changes during the compression process so that the computation load of later steps such as video retrieval or video indexing can be greatly reduced. Various shot change detection algorithms for video coding have been proposed [1-8].

In this paper, we define a 'shot' as a segment of continuous video frames captured by one camera action (i.e. a continuous operation of one camera), and a 'shot change' as the transition period between neighboring shots [3].

The most widely used method is to detect shot changes based on the number of Intra MBs [2-3]. A large portion of Intra MBs implies a possible shot change in the current frame. This simple method works well in detecting abrupt shot changes. However, it does not work well in cases of gradual shot changes or rapid motion. Kim et al. [8] try to detect shot changes based on DC information. However, using DC information alone cannot detect various shot changes reliably. Akutsu et al. [4] and Shu et al. [5] detect shot changes based on motion smoothness information. Although these methods are more robust to the selection of coding parameters, they cannot differentiate complex motion from actual shot changes well. Furthermore, other features such as the AC components [7], the variance of the residue or pixel intensity information [5], or local luminance histograms [1] have also been proposed for shot change detection. However, these methods require heavy computation and are not suitable for the applications discussed in this paper.

Since shot changes and motion estimation are both related to frame correlation, intuitively, shot changes could be detected using information obtained in the motion estimation process without extra computation. In this paper, a new shot change detection algorithm is proposed for video coding related applications as mentioned above, which uses the motion estimation information for detection. The Macroblocks (MBs) of each frame are first classified into different classes in the motion estimation process. Each class captures some specific characteristics of the frame. Possible shot changes at the current frame can then be detected based on the extracted class information. Since the proposed algorithm uses information readily available in the ME process, its computation overhead is low. It can easily be implemented in most existing video coding systems without extra cost.
Experimental results show the effectiveness of the proposed method. The rest of the paper is organized as follows: Section 2 describes our proposed MB classification method. Section 3 analyzes the characteristics of each MB class and discusses their application in the shot change detection. Based on the discussion of Section 3, Section 4 describes the proposed shot change detection algorithm in detail. The experimental results are given in Section 5. Section 6 discusses some extensions and other applications of our proposed MB class information. Section 7 concludes the paper.

2. The MB Classification Method

Since ME is the process of matching similar areas between frames, much information related to frame content correlation and motion discontinuity is already available in the ME process. Since this information is highly related to shot changes, if we can extract this readily-available information from the ME process, we will be able to detect shot changes without extra computation. Therefore, in this section, we propose to extract this information and use it to classify MBs. In the first step of the ME process, an initial matching cost is usually evaluated based on the prediction motion vector from spatially or temporally neighboring MBs. We propose to classify the MBs as described in Eqn. (1).

Class_cur_MB =
    1,  if init_COST < Th1
    2,  if init_COST ≥ Th1 and |PMV_cur_MB − MV_pre_final| > Th2
    3,  if init_COST ≥ Th1 and |PMV_cur_MB − MV_pre_final| ≤ Th2        (1)

where cur_MB is the current MB, MV_pre_final is the final Motion Vector (MV) of the co-located MB in the previous frame, PMV is the Predictive Motion Vector of the current MB [9], init_COST is the initial matching cost value calculated based on the motion information of spatially or temporally neighboring MBs, Th1 is a threshold, and Th2 is another threshold checking the closeness between PMV and MV_pre_final. Using Eqn. (1), MBs with small init_COST values are classified as Class 1. MBs are classified as Class 3 if their PMVs are close to the final MVs of their co-located MBs in the previous frame. Otherwise, MBs are classified into Class 2. The motivation for classifying MBs according to Eqn. (1) is as follows:
(i) MBs in Class 1 have two features: (a) their MVs can be predicted accurately (recall that init_COST is calculated based on the motion information of spatially or temporally neighboring MBs), which means that the motion patterns of these MBs are regular and smooth; (b) they have small matching cost values, which means that these MBs can find good matches in the previous frames. Therefore, the Class 1 information can be viewed as an indicator of the content correlation between frames.
(ii) Class 2 includes MBs whose motion cannot be accurately predicted from their neighboring information (PMV) or their previous motion information (MV_pre_final). This means that the motion patterns of these MBs are irregular and differ from those of the previous frames. Therefore, the Class 2 information can be viewed as an indicator of the motion unsmoothness between frames.
(iii) Class 3 includes MBs whose PMVs are close to the previous final MVs but whose matching cost values are relatively large. Therefore, Class 3 MBs cover areas with complex textures but motion patterns similar to those of the previous frames.
From the above observations, we can outline the ideas for applying our class information to shot change detection. Since shot changes (abrupt, gradual, fade-in, or fade-out) always happen between two different camera actions, the content correlation between frames at shot change positions will be relatively low. Therefore, we can use the Class 1 information as the primary feature to detect shot changes. Furthermore, since the motion pattern also changes at shot change positions, the Class 2 and Class 3 information can be used as additional features for shot change detection.

Our MB classification method can also be used for fast ME. In the experiments of this paper, the ME process is implemented as the fast ME method of our previous work [9], which is based on the same MB classification. The fast ME algorithm is implemented on top of the Simplified Hexagon Search (SHS) algorithm [10] with the proposed MB classification method, where Th1 in Eqn. (1) is set to 1000, Th2 is set to 1 in integer-pixel resolution, and init_COST is calculated as in Eqn. (2).

init_COST = min(COST_(0,0), COST_PMV)        (2)

In Eqn. (2), COST(0,0) is the COST of the (0,0) MV, and COSTPMV is the COST of the PMV [10]. COST = SAD + λ ⋅ R( MV ) , where SAD is the Sum of Absolute Difference for the block matching error, R(MV) is the number of bits to code the Motion Vector (MV), and λ is the Lagrange multiplier. It should be noted that this ME implementation [9] is just one example of using our MB classification method. Our MB classification method is general regardless of the ME algorithms used. It can easily be extended to other ME algorithms.
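The classification rule of Eqns. (1)-(2) can be sketched in a few lines of Python. This is an illustrative sketch, not the JM implementation: the function names, the tuple representation of MVs, and the use of the Chebyshev distance for |PMV − MV_pre_final| are our assumptions; a real encoder would read these quantities from its ME data structures.

```python
# Minimal sketch of the MB classification in Eqns. (1)-(2).
# All names (init_cost, classify_mb, pmv, mv_pre_final) are illustrative.

def init_cost(cost_zero_mv, cost_pmv):
    """Eqn. (2): initial cost is the cheaper of the (0,0) MV and the PMV."""
    return min(cost_zero_mv, cost_pmv)

def classify_mb(cost_zero_mv, cost_pmv, pmv, mv_pre_final, th1=1000, th2=1):
    """Eqn. (1): classify one MB. MVs are (x, y) tuples in integer pels.
    th1 = 1000 and th2 = 1 follow the settings stated in the text."""
    cost = init_cost(cost_zero_mv, cost_pmv)
    if cost < th1:
        return 1                      # good match, regular motion
    # Distance between PMV and the previous final MV; Chebyshev distance
    # is one plausible reading of |PMV - MV_pre_final| (an assumption here).
    dist = max(abs(pmv[0] - mv_pre_final[0]),
               abs(pmv[1] - mv_pre_final[1]))
    return 2 if dist > th2 else 3     # 2: irregular motion, 3: complex texture
```

For example, an MB with a low initial cost falls into Class 1 regardless of its MV, while a high-cost MB is split between Class 2 and Class 3 by the PMV closeness test.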

3. Insight of MB Class Information in Shot Change Detection

In order to show the insight of the MB class information for shot change detection, we show the distribution of MBs of each class in each frame. Fig. 1 shows example frames for two video sequences. The experimental setting is the same as that described in Section 5. In Fig. 1, blocks labeled grey in (a) and (d) are MBs belonging to Class 1. Blocks labeled black in (b) and (e) and blocks labeled white in (c) and (f) are MBs belonging to Class 2 and Class 3, respectively.

Fig. 1 The distributions of Class 1 (a, d), Class 2 (b, e), and Class 3 (c, f) MBs for Mobile_Cif and Bus_Cif.

Several observations can be drawn from Fig. 1. From Fig. 1 (a) and (d), we can see that most Class 1 MBs cover backgrounds or flat areas that can find good matches in the previous frames. From Fig. 1 (b) and (e), we can see that our method can effectively detect irregular areas and classify them into Class 2 (for example, the edge between the calendar and the background as well as the rotating ball at the bottom in (b), and the running bus as well as the logo at the bottom right in (e)). From Fig. 1 (c) and (f), we can see that most complex-texture areas are classified as Class 3, such as the complex background and calendar in (c) as well as the flower area in (f).

Based on the above discussion, we propose a Class-Based Shot Change Detection algorithm, described in detail in the next section. Furthermore, it should be noted that our proposed class information is not limited to shot change detection; it can also be used in other applications such as motion discontinuity detection or global motion estimation. These are discussed in detail in Section 6.

4. The Class-Based Shot Change Detection Algorithm

We investigated three approaches to detect shot changes: (a) using only the Class 1 information for detection, (b) using the information of all three classes for detection, and (c) combining the class information with the number of intra-coded MBs for detection. Due to the limited space, we only describe method (c) in this paper. It is described in Eqn. (3):

Fg_shot(t) = 1,  if  N_c1(t) ≤ T1 and N_Intra_MB(t) ≥ T4,
                 or  N_c1(t) ≤ T2 and N_Intra_MB(t) ≥ T4 and
                     |N_c2(t) − N_c2(t−1)| + |N_c3(t) − N_c3(t−1)| ≥ T3
             0,  else                                                        (3)
where t is the frame number and Fg_shot(t) is a flag indicating whether a shot change happens at the current frame t: Fg_shot(t) equals 1 if there is a shot change and 0 otherwise. N_Intra_MB(t) is the number of intra-coded MBs in frame t. N_c1(t), N_c2(t) and N_c3(t) are the total numbers of Class 1, Class 2 and Class 3 MBs in the current frame t, respectively. T1, T2, T3 and T4 are the thresholds for deciding the shot change. In this paper, T1-T4 are selected by Eqn. (4).

T1 = N_MB(t)/40,   T2 = N_MB(t)/30,   T3 = N_MB(t)/4,   T4 = T1        (4)

where N_MB(t) is the total number of MBs of all classes in the current frame. It should be noted that in Eqn. (3) the Class 1 information is the main feature for detecting shot changes (i.e., N_c1(t) ≤ T1 and N_c1(t) ≤ T2 in Eqn. (3)). The intuition behind using the Class 1 information as the major feature is that it is a good indicator of the content correlation between frames. The Class 2 and Class 3 information is used to help detect frames at the beginning of some gradual shot changes, where a large change in motion pattern has been detected but the number of Class 1 MBs has not yet decreased to a small value. The intra-coded MB information helps discard possible false-alarm shot changes due to MB mis-classification. Furthermore, since all the information used in Eqn. (3) is readily available in the ME process, the extra complexity introduced by the proposed algorithm is negligible compared to ME.
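The decision rule of Eqns. (3)-(4) reduces to a few comparisons per frame. The sketch below is a hedged illustration (function names and the absolute-value reading of the class-count change are our assumptions), using the per-frame class counts and intra-MB count as inputs:

```python
# Sketch of the class-based detection rule, Eqns. (3)-(4).
# n_c1/n_c2/n_c3: Class 1/2/3 MB counts in the current frame;
# n_intra: intra-coded MB count; n_mb: total MBs in the frame.

def thresholds(n_mb):
    """Eqn. (4): thresholds derived from the frame's total MB count."""
    t1 = n_mb / 40
    t2 = n_mb / 30
    t3 = n_mb / 4
    t4 = t1
    return t1, t2, t3, t4

def is_shot_change(n_c1, n_c2, n_c3, n_c2_prev, n_c3_prev, n_intra, n_mb):
    """Eqn. (3): return 1 if frame t is flagged as a shot change, else 0."""
    t1, t2, t3, t4 = thresholds(n_mb)
    cond_a = n_c1 <= t1 and n_intra >= t4
    class_change = abs(n_c2 - n_c2_prev) + abs(n_c3 - n_c3_prev)
    cond_b = n_c1 <= t2 and n_intra >= t4 and class_change >= t3
    return 1 if (cond_a or cond_b) else 0
```

For a CIF frame (396 MBs), T1 = T4 ≈ 9.9, so a frame with only 2 Class 1 MBs and 300 intra MBs triggers the first condition.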

5. Experimental Results

We perform experiments on the H.264/MPEG-4 AVC reference software JM10.2 [11]. For each sequence, the picture coding structure was IPPP.... In the experiments, only the 16x16 partition was used, with one reference frame for coding the P frames. The QP was set to 28, and the search range was ±32 pixels. In our experiments, the following four shot change detection algorithms are compared.

(1) Detect shot changes based on the number of Intra MBs [2,3] (Intra-based in Table 1). A shot change is detected if the number of Intra MBs in the current frame is larger than a threshold.

(2) Detect shot changes based on motion smoothness [4,5] (MV-Smooth-based in Table 1). The motion smoothness can be measured by the Square of Motion Change [5], as in Eqn. (5):

SMC(t) = Σ_{i∈cur_frame} [ (MV_x^i(t) − MV_x^i(t−1))^2 + (MV_y^i(t) − MV_y^i(t−1))^2 ]        (5)

where SMC(t) is the value of the Square of Motion Change at frame t, and MV_x^i(t) and MV_y^i(t) are the x and y components of the motion vector of Macroblock i in frame t, respectively. From Eqn. (5), we can see that SMC is simply the sum of squared motion vector differences between co-located MBs of neighboring frames. Based on Eqn. (5), a shot change is detected if SMC(t) is larger than a threshold at frame t.

(3) Detect shot changes based on the combined information of Intra MBs and motion smoothness [5] (Intra+MV-Smooth in Table 1). In this method, the Intra-MB information is included in the Square of Motion Change, as in Eqn. (6):

SMC_Intra_included(t) = Σ_{i∈cur_frame} MC(i)        (6)

where SMC_Intra_included(t) is the Square of Motion Change with the Intra-MB information included. MC(i) is defined in Eqn. (7):

MC(i) = (MV_x^i(t) − MV_x^i(t−1))^2 + (MV_y^i(t) − MV_y^i(t−1))^2    if i is inter-coded
        L                                                             if i is intra-coded        (7)

where i is the MB number and L is a large fixed number; in the experiments of this paper, we set L to 500. From Eqns. (6) and (7), we can see that the Intra+MV-Smooth method is similar to the MV-Smooth-based method except that when MB i is intra-coded, the large value L is used instead of the squared motion vector difference. It should be noted that when the number of intra MBs is low, the Intra+MV-Smooth method is close to the MV-Smooth-based method; when the number of intra MBs is high, it is close to the Intra-based method.

(4) The proposed Class-Based shot change detection algorithm, which uses the Class 1 information as the major feature for detection, as in Eqn. (3) (Proposed in Table 1).

It should be noted that we choose Methods (1)-(3) as the reference algorithms because they are suitable for shot change detection in video coding. Other sophisticated methods [2,7-8] require heavy computation and are not suitable for the applications discussed in this paper.
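The two motion-smoothness reference features, Eqns. (5)-(7), can be sketched as follows. The dict-based MV representation and the function names are our assumptions for illustration; L = 500 as stated above.

```python
# Sketch of the reference features, Eqns. (5)-(7). mvs_t / mvs_t1 map
# MB index -> (x, y) motion vector for frames t and t-1; intra_t is the
# set of intra-coded MB indices in frame t.

def smc(mvs_t, mvs_t1):
    """Eqn. (5): sum of squared MV differences over co-located MBs."""
    return sum((mvs_t[i][0] - mvs_t1[i][0]) ** 2 +
               (mvs_t[i][1] - mvs_t1[i][1]) ** 2
               for i in mvs_t)

def smc_intra_included(mvs_t, mvs_t1, intra_t, L=500):
    """Eqns. (6)-(7): as smc(), but each intra-coded MB contributes
    the fixed penalty L instead of its squared MV difference."""
    total = 0
    for i in mvs_t:
        if i in intra_t:
            total += L
        else:
            total += ((mvs_t[i][0] - mvs_t1[i][0]) ** 2 +
                      (mvs_t[i][1] - mvs_t1[i][1]) ** 2)
    return total
```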

Fig. 2 Feature curves of a gradual shot change sequence.

Fig. 2 compares the curves of features that are used in the above algorithms. Since all the algorithms perform well in detecting abrupt shot changes, we only show the curves of a gradual shot change in Fig. 2. Furthermore, based on our experiments, the MV-Smooth-based method and the Intra+MV-Smooth method have poor performance in detecting gradual shot changes. Due to the limited space, we only compare the curves of the number of Intra MBs and our proposed number of Class 1 MBs in Fig. 2. Fig. 2-(a) is the ground-truth for the shot change sequence where a frame is labeled as a shot-change frame when it contains contents of both the previous shot and the following shot. Fig. 2-(b) shows the curve of the number of Intra MBs in each frame, and Fig. 2-(c) shows the curve of the number of Class 1 MBs in each frame. It should be noted that we reverse the y-axis of Fig. 2-(c) so that the curve has the same concave shape as the other figures. Fig. 2 shows the effectiveness of using our class information for shot change detection. From Fig. 2 (c), we can see that the number of Class 1 MBs immediately decreases to 0 when a shot change happens and then quickly increases to a large

number right after the shot change period. Therefore, our proposed shot change detection algorithm can effectively detect gradual shot changes based on the Class 1 information. Compared to our class information, the method based on the Intra MB number is much less effective in detecting gradual shot changes. We can see from Fig. 2 (b) that the Intra MB number has similar values for frames inside and outside the shot change period, which makes it very difficult to detect gradual shot changes. Furthermore, our experiments show that SMC(t) is the least effective. This implies that motion smoothness information alone cannot work well in detecting shot changes. The effectiveness of SMC(t) is further reduced when the sub-sequences before and after the shot change both have similar patterns or low motion; in these cases, the motion unsmoothness is not so obvious at the shot change.

Various experiments were also conducted on different shot change datasets. Due to the limited space, we only show the results of one set of experiments in this paper. Table 1 compares the Miss rate and the False Alarm rate [12] of the four algorithms in detecting the shot changes in the dataset that we created. The dataset has a total of 25 sequences, which include 2 abrupt shot change sequences and 23 gradual shot change sequences of different types (gradual transfer, fade-in and fade-out) and with different lengths of the shot-changing period (10 frames, 20 frames and 30 frames). An example sequence is shown in Fig. 3. The Miss rate is defined as N_miss^k / N_+^k, where N_miss^k is the total number of mis-detected shot change frames in sequence k and N_+^k is the total number of shot change frames in sequence k. The False Alarm rate is defined as N_FA^k / N_-^k, where N_FA^k is the total number of false-alarmed frames in sequence k and N_-^k is the total number of non-shot-change frames in sequence k.
We calculate the Miss rate and the False Alarm rate for each sequence and average the rates. In the rightmost column of Table 1, the Total Error Frame Rate (TEFR) [12] is also compared. The TEFR is defined as N_t_miss / N_t_f, where N_t_miss is the total number of mis-detected frames over all sequences and N_t_f is the total number of frames in the dataset. The TEFR reflects the overall performance of the algorithms over all sequences. In the experiments of Table 1, the thresholds for detecting shot changes in Method 1 (Intra-based), Method 2 (MV-Smooth-based) and Method 3 (Intra+MV-Smooth) are set to 200, 2000 and 105000, respectively. These thresholds are selected based on experimental statistics.
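The evaluation metrics above can be sketched directly from their definitions. This is an illustrative sketch; in particular, we read "mis-detected frames" in the TEFR as any frame whose detected label differs from the ground truth, which is our assumption.

```python
# Sketch of the per-sequence evaluation metrics described above.
# detected / ground_truth are frame-level 0/1 label lists.

def miss_rate(detected, ground_truth):
    """N_miss^k / N_+^k: missed shot-change frames over true ones."""
    pos = [t for t, g in enumerate(ground_truth) if g == 1]
    missed = sum(1 for t in pos if detected[t] == 0)
    return missed / len(pos)

def false_alarm_rate(detected, ground_truth):
    """N_FA^k / N_-^k: false alarms over non-shot-change frames."""
    neg = [t for t, g in enumerate(ground_truth) if g == 0]
    fa = sum(1 for t in neg if detected[t] == 1)
    return fa / len(neg)

def tefr(detected_all, ground_truth_all):
    """Total Error Frame Rate: erroneous frames over all frames,
    pooled across sequences (inputs are lists of per-sequence lists).
    'Erroneous' here means detected label != ground truth (assumption)."""
    errors = frames = 0
    for det, gt in zip(detected_all, ground_truth_all):
        errors += sum(1 for d, g in zip(det, gt) if d != g)
        frames += len(gt)
    return errors / frames
```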

Fig. 3 An example shot change sequence (from Bus_Cif, through the shot change, to Football_Cif).

From Table 1, we can see that the performance of our proposed algorithm is clearly better than the other methods. Furthermore, several other observations can be drawn from Table 1. Basically, our Class 1 information, the Intra MB information [2,3] and the residue information [7] can all be viewed as features measuring the content correlation between frames. However, from Table 1, we can see that the performance of our proposed method is clearly better than the Intra-based method. This is because the Class 1 information includes both the residue information and the motion information: only those MBs with both regular motion patterns (i.e., MV close to the PMV or the (0,0) MV) and low matching cost values are classified as Class 1. We believe that these MBs reflect the nature of the content correlation between frames more efficiently. In our experiments, we found that there is a large portion of MBs in the gradual-shot-change frames for which neither intra nor inter prediction performs well. The inter/intra mode selections for these MBs are quite random, which affects the performance of the Intra-based method. Compared to the Intra-based method, our algorithm works well by simply classifying these MBs outside Class 1 and discarding them from the shot change detection process.

Table 1 Performance comparison of different algorithms in detecting various shot changes in our dataset

Method            | Miss (%) | False Alarm (%) | TEFR (%)
Intra-based       | 15.16    | 0.52            | 7.89
MV-Smooth-based   | 36.31    | 15.46           | 19.43
Intra+MV-Smooth   | 18.47    | 0.52            | 9.87
Proposed          | 2.97     | 0.62            | 2.31

6. Discussion and Extension

As mentioned, our proposed class information is not limited to shot change detection; it can also be used in other applications. In this section, we discuss some extended applications of our class information.

A. Motion Discontinuity Detection

We define a motion discontinuity as the boundary between two Smooth Camera Motions (SCMs). For example, in Fig. 4, the first several frames are captured when the camera has no or little motion; they form the first SCM (SCM1). The next several frames form another SCM (SCM2) because they are captured by a single camera motion of a rapid rightward pan. A motion discontinuity can then be defined between these two SCMs.

Fig. 4 An example of motion discontinuity between SCM1 and SCM2.

Basically, motion discontinuity can be viewed as motion unsmoothness or a change of motion patterns. The detection of motion discontinuities can be very useful in video content analysis or for improving video coding performance. Since our class information, especially the Class 2 information, can efficiently reflect irregular motion patterns, it can easily be used for motion discontinuity detection. Fig. 5 compares the curves of the features used by the algorithms of Section 5 for the Stefan_Sif sequence. Fig. 5 (a) shows the ground-truth segmentation into Smooth Camera Motions. In Fig. 5 (a), the segments valued 0 represent SCMs with low or no camera motion and the segments valued 1 represent SCMs with high or active camera motion. For example, the segment between frames 177 and 199 represents an SCM with a rapid rightward pan of the camera, and the segment between frames 286 and 300 represents an SCM with a quick zoom-in of the camera. The frames between SCMs are the Motion Discontinuity (MD) frames that we want to detect. The ground-truth MD frames are labeled by the vertical dashed lines in Fig. 5 (b)-(e). It should be noted that most MDs in Fig. 5 span several frames instead of only one. Fig. 5 (b)-(e) show the curves of the number of Intra MBs, SMC(t), SMC_Intra_included(t), and the number of Class 2 MBs, respectively.

Fig. 5 Feature curves for the MD detection in Stefan_Sif.

From Fig. 5, we can see that our proposed Class 2 information is more effective in detecting motion discontinuities. For example, the Class 2 information has a much stronger and quicker response at the first four motion discontinuities. Furthermore, the Class 2 information always has its largest response where the motion pattern changes, while the other features are more sensitive to the "motion strength" than to the "motion unsmoothness" (e.g., the features in (b)-(d) have their largest values around frame 250, where there is a smooth but very rapid camera motion). This demonstrates that our Class 2 information is a better measure of motion unsmoothness.

B. Global Motion Estimation

Global motion estimation is another useful application area for our class information. Since a video frame may contain various objects with different motion patterns and directions, object segmentation is needed to filter out these moving objects before estimating the global motion parameters of the background. Since our class information efficiently describes the motion patterns of different MBs, it is very useful for filtering out the irregular-motion areas. For example, we can simply filter out the Class 2 or Class 2 + Class 3 MBs and perform global motion estimation based on the remaining MBs. Fig. 6 shows the result of a global-motion-compensated frame obtained by using our class information for object segmentation and the LS-6 method [6] for global motion estimation. From Fig. 6, we can see that global motion estimation incorporating our class information can efficiently locate and compensate the background areas. Compared with other segmentation methods, using our class information for segmentation has no extra cost since it uses information already available from the motion estimation.
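The filtering step described above can be sketched as follows. Note this is only an illustration of the class-based filtering: the paper uses the six-parameter LS-6 model [6] for the actual global motion fit, whereas this sketch substitutes a simple translational fit (the mean MV of the kept MBs), which is our own simplification.

```python
# Hedged sketch of class-based filtering before global motion estimation.
# mvs[i] is the (x, y) MV of MB i; classes[i] is its Class (1/2/3).
# A real implementation would fit the LS-6 affine model to the kept MBs;
# here we fit only a translational global motion for brevity.

def global_motion_translation(mvs, classes, filter_out=(2, 3)):
    """Drop MBs whose class is in filter_out, then average the rest."""
    kept = [mv for mv, c in zip(mvs, classes) if c not in filter_out]
    if not kept:
        return (0.0, 0.0)             # no reliable background MBs
    gx = sum(mv[0] for mv in kept) / len(kept)
    gy = sum(mv[1] for mv in kept) / len(kept)
    return (gx, gy)
```

Filtering out only Class 2 (keeping Class 1 and 3) is the milder option mentioned in the text; passing `filter_out=(2,)` realizes it.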

Fig. 6 An example of using our class information for object segmentation and global motion estimation: (a) original frame, (b) segmentation result, (c) global-motion-compensated frame.

C. Discussions

The shot change detection in this paper is implemented in an after-ME manner (i.e. the shot change detection can only be performed after the ME process), which is not suitable for real-time coding situations. However, it should be noted that the idea of our class information is general, and it can easily be extended to real-time applications with little additional complexity. For example, we can simply add a parallel module to perform ME for future frames (frames after the frame currently being coded). Since the reconstructed reference frame is not available for future frames, we can use the original frames as the reference. In this way, the ME information is always available before coding the current frame. The added complexity is small, since the actual ME process can be greatly simplified by slightly refining the future-frame ME results.

7. Conclusion

In this paper, a new shot change detection algorithm is proposed. We first propose to classify MBs into different classes based on information available from the ME process and then use the information of these classes to detect shot changes. Our algorithm has low extra complexity. Experimental results demonstrate the effectiveness of our algorithm. Some extended applications of our class information are also discussed in the paper.

Acknowledgements This work is supported in part by the following grants: Chinese national 973 grants (2010CB731401 and 2010CB731406), National Science Foundation of China grants (60632040, 60902073, 60928003, 60702044 and 60973067).

References

1. D. Swanberg, C. F. Shu, and R. Jain, "Knowledge guided parsing in video database," Proc. Storage and Retrieval for Image and Video Database, 1993.
2. K. Zhang and J. Kittler, "Using scene-change detection and multiple-thread background memory for efficient video coding," Electronics Letters, vol. 35, no. 4, pp. 290-291, 1999.
3. M. Eom and Y. Choe, "Scene Change Detection on H.264/AVC Compressed Video Using Intra Mode Distribution Histogram Based on Intra Prediction Mode," Proc. Applications of Electrical Engineering, pp. 140-144, Turkey, 2007.
4. A. Akutsu, Y. Tonomura, H. Hashimoto, and Y. Ohba, "Video indexing using motion vectors," Proc. Visual Communication and Image Processing, 1992.
5. S. Shu and L. P. Chau, "A new scene change feature for video transcoding," IEEE Symp. Circuits and Systems, 2005.
6. S. Soldatov, K. Strelnikov, and D. Vatolin, "Low complexity global motion estimation from block motion vectors," Spring Conf. Computer Graphics, 2006.
7. F. Arman, A. Hsu, and M. Y. Chiu, "Image processing on encoded video sequences," Multimedia Syst., vol. 1, pp. 211-219, 1994.
8. J.-R. Kim, S. Suh, and S. Sull, "Fast scene change detection for personal video recorder," IEEE Trans. Consumer Electronics, vol. 49, pp. 683-688, 2003.
9. W. Lin, K. Panusopone, D. Baylon, and M.-T. Sun, "A new class-based early termination method for fast motion estimation in video coding," IEEE Symp. Circuits and Systems, Taipei, 2009.
10. X. Yi, J. Zhang, N. Ling, and W. Shang, "Improved and simplified fast motion estimation for JM," JVT-P021, Poland, 2005.
11. JM 10.2, http://iphome.hhi.de/suehring/tml/download/old_jm/.
12. W. Lin, M.-T. Sun, R. Poovendran, and Z. Zhang, "Activity recognition using a combination of category components and local models for video surveillance," IEEE Trans. Circuits and Systems for Video Technology, vol. 18, pp. 1128-1139, 2008.
