VLSI ORIENTED FAST MULTIPLE REFERENCE FRAME MOTION ESTIMATION ALGORITHM FOR H.264/AVC

Zhenyu Liu, Lingfeng Li, Yang Song, Takeshi Ikenaga, Satoshi Goto

The Graduate School of IPS, Waseda University, N355, 2-7, Hibikino, Wakamatsu, Kitakyushu, 808-0135, Japan

This work was supported by funds from MEXT via the Kitakyushu innovative cluster project and CREST, JST.

ABSTRACT

In the H.264/AVC standard, motion estimation can be performed over multiple reference frames (MRF) to improve coding performance. For a VLSI real-time encoder, the heavy computation of fractional motion estimation (FME) forces integer motion estimation (IME) and FME to be scheduled in two separate macroblock (MB) pipeline stages, which makes many fast MRF algorithms ineffective at reducing computation. In this paper, two algorithms are provided to reduce the computation of both FME and IME. First, by analyzing a block's Hadamard transform coefficients, the all-zero case after quantization can be accurately detected; FME processing on the remaining reference frames is eliminated for blocks detected as all-zero. Second, because a fast-moving object blurs its edges in the image, the benefit of MRF against aliasing is weakened. The first reference frame is therefore sufficient for fast-motion MBs, and MRF is applied only to slow-motion MBs with a small search range, which also greatly reduces the computation of IME. Experimental results show that 61.4%-76.7% of the computation can be saved with coding quality similar to the reference software. Moreover, the proposed fast algorithms can be combined with fast block matching algorithms to further improve the performance.

1. INTRODUCTION

The superior performance of the latest international video coding standard, H.264/AVC, mainly comes from new techniques including 1/4-pixel accurate variable block size motion estimation (VBSME) with multiple reference frames (MRF), intra prediction (IP), context-based adaptive variable length entropy coding (EC) and in-loop deblocking (DB). The huge computational complexity of the prediction algorithms makes the traditional 2-stage MB pipeline architecture inefficient for a hardwired H.264 encoder. For example, in H.264, FME is 100 times more complex than in previous standards [1]. It has become the system bottleneck because of the 1/4-pixel accuracy, VBS, MRF and precise distortion evaluation. Consequently, IME and FME must be arranged in two separate stages to achieve high hardware utilization and throughput. One optimized 4-stage MB pipelining hardwired encoder is provided in [1], as shown in Figure 1. In order to analyze the impact of MB pipelining on fast MRF algorithms, a brief overview of the dataflow is given here. In the first MB stage, the IME engine processes all reference frames. The integer motion vectors (MVs) of the 41 blocks of an MB on all reference frames are obtained and dispatched to the second, FME, stage.


[Fig. 1. Block diagram of the 4-stage hardwired H.264 encoder: 1st stage, VBS MRF IME engine, producing 41×RFnum IMVs; 2nd stage, VBS MRF FME engine, producing the best mode and MVs with luma MC; 3rd stage, IP engine and chroma MC; 4th stage, EC and DB engines. RFnum: the number of reference frames; IMV: integer-accurate motion vector.]
Through 1/4-pixel accurate fine ME and precise RD-cost evaluation, the FME engine finds the best candidates and the corresponding reference frames and decides the best inter-prediction mode. The post inter/intra mode decision, IP and chroma MC are implemented in the third stage. EC and DB are processed in parallel in the fourth stage. According to the analysis in [2], 89.2% of the computational power is consumed by the ME part, and MRF is the main source of this huge complexity. In order to reduce the computation, many studies on fast MRF ME have been proposed. One excellent work is provided in [2], which gives four criteria for early termination of the motion search on MRFs. These algorithms efficiently remove 30%-80% of the redundant computation in software. However, the MB pipeline of a hardware architecture degrades the performance of these methods: all of the criteria must be applied in the second, FME, stage, which means that the computation load of IME, the most computation-intensive part, cannot be saved. Another promising scheme is to reduce the search areas on MRFs by exploiting the strong correlation of MVs in consecutive pictures [3][4]. The first drawback of this approach is the hardware overhead of MV composition. For example, in reference [3], 4×4 block-based MVs of each frame must be kept; for a 720×480 frame with a 128×128 search range and 5 reference frames, a total of 1.65 Mb of memory is required. For the accuracy of MV composition, multiplication, which increases the hardware cost, is also required. Moreover, this algorithm only simplifies the computation of IME and does not benefit the FME engine. Based on the 4-stage pipeline hardware architecture, two algorithms are proposed in this paper. The first is Hadamard transform coefficient based all-zero block detection, which detects all-zero blocks before the real DCT transform and quantization more precisely than previous algorithms. The second is an MV based reference frame elimination and search area adjustment algorithm. In detail, for a fast-motion MB just one reference frame is adopted; for a slow-motion MB, multiple reference frames are processed, but the search area on the other frames is greatly reduced. This approach reduces the computation of both the IME and FME engines. The rest of this paper is organized as follows. In Section 2, the Hadamard transform coefficient based all-zero block detection algorithm is proposed. The motion vector based reference frame elimination and search area adjustment method is presented in Section 3. Section 4 shows experimental results that demonstrate our algorithms. Conclusions are drawn in Section 5.
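To make the MB-pipeline constraint concrete, the following minimal Python sketch (an illustration of our own, not the encoder design of [1]) prints which MB occupies which of the four stages at each pipeline step. It shows that while MB t is in the FME stage, MB t+1 is already being processed by the IME stage, so an early-termination decision taken inside FME arrives too late to reduce the IME workload.

# A minimal sketch (our own) of 4-stage MB pipelining: one new MB enters per step.
stages = ["IME (all reference frames)",
          "FME (1/4-pel refinement, mode decision)",
          "IP + chroma MC",
          "EC + DB"]
num_mbs = 6                                  # toy number of macroblocks

for t in range(num_mbs + len(stages) - 1):   # pipeline steps
    active = []
    for s, name in enumerate(stages):
        mb = t - s                           # MB index currently held by stage s
        if 0 <= mb < num_mbs:
            active.append(f"MB{mb} in {name}")
    print(f"step {t}: " + "; ".join(active))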


2. HADAMARD TRANSFORM COEFFICIENTS BASED ALL-ZERO BLOCK DETECTION ALGORITHM

In the H.264 standard, after the prediction procedure the residual blocks are DCT transformed, quantized and entropy coded. The separable 4×4 2-D DCT in H.264 is written as (1),

Y = C X C^T,   (1)

where the superscript T denotes transposition, X is the 4×4 residual matrix and C is the transform matrix shown in (2),

C = [C0; C1; C2; C3] = [ 1  1  1  1 ;
                         2  1 -1 -2 ;
                         1 -1 -1  1 ;
                         1 -2  2 -1 ].   (2)

During the quantization step, the thresholds in the 4×4 matrix TH_DCT are given by (3),

th_DCT[QP][i][j] = (2^{qp_bits} - qp_const) / quant_coef[qp_rem][i][j],   i, j ∈ [0, 3],   (3)

where QP is the quantization parameter, qp_rem = QP % 6, qp_bits = QP/6 + 15, qp_const = (1 << qp_bits)/6, and quant_coef is the scaling matrix given in [5]. For one block, during its MRF ME, if we find that its residues on the current reference frame are small enough to make it all-zero after transformation (DCT) and quantization (Q), its search processing can be terminated. Based on the SAD or SATD value, reference [2] provides a smart method for early estimation of all-zero blocks. However, SAD and SATD cannot emulate the frequency characteristics, so they only provide coarse all-zero estimations. In the JM, SATD is applied in the fractional-pixel motion estimation and the mode decision, because SATD accounts for the amount of prediction error as well as the cost of the transformed representation; hence SATD acts as a more accurate RD-cost criterion for the prediction error than SAD. This scheme has already been implemented in the hardware design of [1]. For the SATD calculation, the 4×4 Hadamard transform is applied to each 4×4 residual block as shown in (4). Based on these Hadamard coefficients, we can obtain more accurate all-zero estimations:

Z = H X H^T,   (4)

where H is the Hadamard transform matrix shown in (5),

H = [H0; H1; H2; H3] = [ 1  1  1  1 ;
                         1  1 -1 -1 ;
                         1 -1 -1  1 ;
                         1 -1  1 -1 ].   (5)

Comparing H and C, we find that the basis functions H0 and H2 of H are the same as C0 and C2 of C, and that H1 and H3 have patterns similar to C1 and C3. In fact, the Hadamard transform is a simplified form of the DCT: its transformed signal emulates the frequency characteristics of the true DCT-transformed block in the subsequent DCT/Q stage at very low computational cost. Since the Hadamard coefficients have already been derived during FME, a threshold matrix TH_H can be set for the early detection of all-zero blocks. The entries of TH_H all have the same value, as given in (6); if every Hadamard coefficient is less than the threshold, it is assumed that the corresponding DCT coefficients will become zero after the quantization stage:

th_H[QP][i][j] = th_DCT[QP][0][0].   (6)

This setting relies on two observations. First, for (i, j) ∈ {(0,0), (0,2), (2,0), (2,2)}, z(i,j) = y(i,j) and th_H[QP][i][j] = th_DCT[QP][i][j], so for these entries we obtain the real values after the DCT/Q stage. Second, for the other entries, the ratio of the DCT coefficient standard deviation to the Hadamard coefficient standard deviation is similar to the ratio of their thresholds.

Proof: According to [2], the standard deviation matrices of the DCT and Hadamard coefficients can be expressed as (7) and (8) respectively, where σ_f denotes the standard deviation of the residues. The ratio matrix of σ_DCT to σ_H is given in (9). If the qp_const term in (3) is eliminated to simplify the analysis, the ratio of TH_DCT to TH_H, denoted R_TH, has 6 cases according to qp_rem. For example, for qp_rem = 0, R_TH is shown in (10) and the ratio between R_TH and R_σ is shown in (11); the other cases follow by analogy.

σ_DCT = σ_f [ 9.47  9.23  4.12  5.19 ;
              9.23  8.99  4.01  5.06 ;
              4.12  4.01  1.79  2.26 ;
              5.19  5.06  2.26  2.85 ],   (7)

σ_H = σ_f [ 9.47  5.61  4.12  3.65 ;
            5.61  3.33  2.44  2.23 ;
            4.12  2.44  1.79  1.59 ;
            3.65  2.23  1.59  1.41 ],   (8)

R_σ = σ_DCT ⊘ σ_H = [ 1.00  1.65  1.00  1.42 ;
                      1.65  2.70  1.64  2.26 ;
                      1.00  1.64  1.00  1.42 ;
                      1.42  2.26  1.42  2.02 ],   (9)

R_TH = TH_DCT ⊘ TH_H = [ 1.00  1.63  1.00  1.63 ;
                         1.63  2.50  1.63  2.50 ;
                         1.00  1.63  1.00  1.63 ;
                         1.63  2.50  1.63  2.50 ],   (10)

R_TH ⊘ R_σ = [ 1.00  0.98  1.00  1.15 ;
               0.98  0.93  0.99  1.10 ;
               1.00  0.99  1.00  1.15 ;
               1.15  1.10  1.15  1.24 ],   (11)

where ⊘ denotes element-wise (scalar) division, i.e., each entry of the first matrix is divided by the entry in the same position of the second matrix.
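As a quick numerical illustration of (10) and (11), the short Python sketch below (our own check, not part of the paper) rebuilds R_TH for qp_rem = 0 directly from the H.264 quantization scaling factors of [5]; because the numerator of (3) is common to all entries, the threshold ratio reduces to quant_coef[qp_rem][0][0] / quant_coef[qp_rem][i][j], and dividing by the R_σ values of (9) gives entries close to 1.

import numpy as np

# Our own numeric check of Eq. (10)-(11) for qp_rem = 0, using the H.264
# multiplication factors from [5]: 13107 / 5243 / 8066 depending on position.
MF0 = np.full((4, 4), 8066)
MF0[np.ix_([0, 2], [0, 2])] = 13107          # positions (0,0),(0,2),(2,0),(2,2)
MF0[np.ix_([1, 3], [1, 3])] = 5243           # positions (1,1),(1,3),(3,1),(3,3)

R_TH = MF0[0, 0] / MF0                       # TH_DCT / TH_H: the common numerator cancels
R_sigma = np.array([[1.00, 1.65, 1.00, 1.42],   # Eq. (9), values taken from the paper
                    [1.65, 2.70, 1.64, 2.26],
                    [1.00, 1.64, 1.00, 1.42],
                    [1.42, 2.26, 1.42, 2.02]])

print(np.round(R_TH, 2))                     # reproduces Eq. (10)
print(np.round(R_TH / R_sigma, 2))           # entries near 1, cf. Eq. (11)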

According to the above analysis, two early-termination criteria are provided to alleviate the computation load in the FME stage:

1. if ((mode == P16×16) && (P16×16 block == all-zero block) && (MV == SKIP MV)), terminate early;

2. if (current block == all-zero block), terminate early.

By analyzing its Hadamard coefficients, we can perform early all-zero block detection for each block. If the current block mode is P16×16 and it is decided to be an all-zero block at its SKIP-MV search position, the MB is set to SKIP mode and all other searches are eliminated. For other blocks, once they are estimated to be all-zero, their searches on the subsequent reference frames are terminated. The effect of this algorithm depends on the QP value; for QP = 32, 21%-46% of the FME calculation can be saved.

Since the Hadamard transform module has already been built into the FME engine [1], according to the 2-D Hadamard architecture only 4 comparators need to be added at the output of each 2-D 4×4 Hadamard module for the all-zero block detection. This additional hardware overhead is trivial compared with the FME engine.
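A compact Python sketch of the detection rule in (3)-(6) follows. It is our own illustration (function and table names are hypothetical, not from the paper), using the standard H.264 multiplication-factor table of [5] and the inter-block rounding offset (1 << qp_bits)/6 stated above.

import numpy as np

# Hypothetical sketch of Hadamard-threshold all-zero detection (Section 2).
# H.264 forward-quantization multiplication factors MF[qp_rem][i][j] from [5]:
MF_A = [13107, 11916, 10082, 9362, 8192, 7282]   # (0,0),(0,2),(2,0),(2,2)
MF_B = [5243, 4660, 4194, 3647, 3355, 2893]      # (1,1),(1,3),(3,1),(3,3)
MF_C = [8066, 7490, 6554, 5825, 5243, 4559]      # remaining positions

def quant_coef(qp_rem):
    m = np.full((4, 4), MF_C[qp_rem])
    m[np.ix_([0, 2], [0, 2])] = MF_A[qp_rem]
    m[np.ix_([1, 3], [1, 3])] = MF_B[qp_rem]
    return m

def th_dct(qp):
    # Eq. (3): th_DCT[QP][i][j] = (2^qp_bits - qp_const) / quant_coef[qp_rem][i][j]
    qp_bits = qp // 6 + 15
    qp_const = (1 << qp_bits) // 6               # inter-block rounding offset
    return ((1 << qp_bits) - qp_const) / quant_coef(qp % 6)

H = np.array([[1, 1, 1, 1],
              [1, 1, -1, -1],
              [1, -1, -1, 1],
              [1, -1, 1, -1]])                   # Eq. (5)

def is_all_zero_block(residual_4x4, qp):
    # Eq. (4) and (6): every |z(i,j)| must stay below th_H = th_DCT[QP][0][0].
    z = H @ np.asarray(residual_4x4) @ H.T
    return bool(np.all(np.abs(z) < th_dct(qp)[0, 0]))

# Example: a small inter residual that is judged all-zero at QP = 32.
res = [[1, -1, 0, 2], [0, 1, -1, 0], [2, 0, 1, -1], [0, -1, 0, 1]]
print(is_all_zero_block(res, qp=32))             # True -> skip the remaining frames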


3. MOTION VECTOR BASED REFERENCE FRAME ELIMINATION AND SEARCH AREA ADJUSTMENT

Mathematical analysis shows that aliasing is the main component that deteriorates prediction efficiency; the sub-pel interpolation and MRF techniques adopted by H.264 mainly aim to compensate for aliasing. In this section, first, the prediction error signal caused by aliasing is introduced. Second, the effect of image motion on aliasing is analytically described. Finally, our MV based reference frame elimination and search area adjustment algorithm is proposed.

3.1. Impact of Aliasing on the Prediction Error Signal

In order to simplify the mathematical description, the analysis is restricted to a one-dimensional spatial signal. l_t(x) and l_{t-1}(x) denote the spatially continuous signals at time instances t and t-1. l_t(x) is a displaced version of l_{t-1}(x) with displacement d_x, which can be expressed as l_t(x) = l_{t-1}(x - d_x). Their frequency-domain signals are denoted L_t(ω) and L_{t-1}(ω). These continuous image signals are sampled by the sensor array before digital processing and are denoted s_t(x_n) and s_{t-1}(x_n). Aliasing does not exist if the Nyquist-Shannon sampling precondition, i.e., L_{t-1}(ω) = 0 for |ω| ≥ ω_s/2, where ω_s is the sampling frequency, is satisfied. However, because no time-limited signal can be band limited and the low-pass filter of the sampling system is not ideal, the precondition of the Nyquist-Shannon sampling theorem cannot be fulfilled. According to [6], with the normalized sampling frequency ω_s = 2π, the magnitude of the prediction error signal caused by aliasing can be described as (12),

|E_t(ω)| = 2 · |A_{t-1}(ω)| · |sin(d_x · π)|,   (12)

where A_{t-1}(ω) = L_{t-1}(ω + 2π) + L_{t-1}(ω - 2π). According to (12), two important conclusions can be drawn:

1. Because of the term |A_{t-1}(ω)|, aliasing is caused by the high-frequency content of L_{t-1}(ω), where |ω| ≥ π.

2. According to the term |sin(d_x · π)|, the impact of aliasing vanishes at full-pixel displacements and is maximal at half-pixel displacements.

Conclusion 1 states that images rich in high-frequency content are prone to the aliasing problem. Conclusion 2 explains the necessity of MRF during prediction: if the displacement d_{x,t-1} between the current image s_t(x_n) and the previous image s_{t-1}(x_n) is sub-pel, while an earlier image s_{t-k}(x_n) has a full-pel displacement d_{x,t-k}, then s_{t-k}(x_n) is the preferred reference because the aliasing problem no longer exists. Now we can explain why the 'Mobile' sequence is so sensitive to MRF. This video sequence contains many textures, and sharp edges in the spatial domain generate rich high-frequency content after the Fourier transform. For example, the flickering at the edges of the calendar is caused by input aliasing. Although the 2-D Wiener-filter interpolation of H.264/AVC can alleviate the aliasing error, its effect cannot compare with a reference image at full-pel displacement. In fact, through our experiments, aliasing is the main reason for MRF.

3.2. Effect of Image Motion on the Multiple Reference Frame Algorithm

Motion between the object and the sensor blurs the sampled image. According to [7], if an image f(x, y) undergoes planar motion and x_0(t) and y_0(t) represent the motion in the x- and y-directions, the image obtained at position (x, y) can be expressed as the integration over the exposure period T, as in (13),

g(x, y) = ∫_0^T f[x - x_0(t), y - y_0(t)] dt.   (13)

In the frequency domain, this procedure can be expressed as (14),

G(ω, ψ) = F(ω, ψ) H(ω, ψ),   where   H(ω, ψ) = ∫_0^T e^{-j2π[ω x_0(t) + ψ y_0(t)]} dt.   (14)

For uniform linear motion, x_0(t) = at/T and y_0(t) = bt/T. Using (14), H(ω, ψ) may be expressed as

H(ω, ψ) = T / (π(ωa + ψb)) · sin[π(ωa + ψb)] · e^{-jπ(ωa + ψb)}.   (15)

From this analysis, we can see that the effect of motion is the same as a low-pass filter whose pass band shrinks as the motion speed, a and b, increases. In other words, the edges of the sampled image are blurred by the motion. According to Conclusion 1 in Section 3.1 and (15), it can be deduced that the effect of aliasing is alleviated by image motion.
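The low-pass interpretation of (15) can be checked numerically. The short Python sketch below (our own illustration, not part of the paper) evaluates |H(ω, ψ)| = T·|sinc(ωa + ψb)| at a few normalized frequencies for increasing horizontal motion a (with b = 0), showing how the magnitude response collapses as the motion becomes faster.

import numpy as np

# Our own numeric check of Eq. (15): |H| = T * |sin(pi*u)/(pi*u)| with u = w*a + psi*b.
T = 1.0
freqs = np.linspace(0.05, 0.5, 4)                # a few normalized spatial frequencies w
for a in (1.0, 5.0, 15.0):                       # horizontal motion distance, b = 0
    mag = T * np.abs(np.sinc(freqs * a))         # np.sinc(x) = sin(pi*x)/(pi*x)
    print(f"a = {a:4.1f}:", np.round(mag, 3))    # magnitude drops faster for larger a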

[Fig. 2. Blurring effect on the RD curves: PSNR (dB) versus bit-rate (kbps) for Mobile CIF 30Hz, [-24.75, +24.75]; curves for the original sequence with 5 and 1 reference frames and for the blurred sequence with 5 and 1 reference frames.]

In order to verify this observation, we synthesized the motion effect on the 'Mobile CIF' video sequence with a = 15 and b = 0. Figure 2 compares the rate-distortion curves of the original and blurred sequences. At medium and high bitrates, the peak signal-to-noise ratio (PSNR) difference between five reference frames and one reference frame is 1.2-1.3 dB for the original sequence; for the blurred sequence, this difference is reduced to 0.5-0.6 dB.

3.3. Motion Vector Based Reference Frame Elimination and Search Area Adjustment Algorithm

Based on the analysis in Sections 3.1 and 3.2, the MV based reference frame elimination and search area adjustment algorithm is provided as follows (a short code sketch of the decision rule is given at the end of this subsection):

1. On the first reference frame, VBSME is processed over the full search range, S_FW × S_FH.

2. If |MV_x| + |MV_y| of the P16×16 partition exceeds the threshold TH_MV, the current MB is a fast-motion one, so the searches on the subsequent reference frames are eliminated. Otherwise, the current MB is a slow-motion one and aliasing exists, so MRF is still required; however, VBSME on the other frames is processed over the reduced range S_SW × S_SH to save IME computation.

Since small blocks contain less texture and are prone to be trapped in local optima, the motion-feature judgment depends only on the MV of the P16×16 block, which also keeps its computation simple. This criterion is applied in the first (IME) stage of Figure 1. If the current MB is decided to be of fast-motion type, both the first-stage IME engine and the second-stage FME engine process just one reference frame, so the computation loads of IME and FME are both saved. For slow-motion MBs, the algorithm contributes only to the reduction of IME computation.
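The following minimal Python sketch (our own, with hypothetical names; parameter values taken from Table 1) expresses the decision rule above: full-range VBSME on the first reference frame, then either stop after one frame (fast motion) or continue over the remaining frames with the reduced window.

TH_MV = 12            # |MVx| + |MVy| threshold (Table 1)
FULL_RANGE = 16       # S_FW = S_FH = +/-16
SMALL_RANGE = 2       # S_SW = S_SH = +/-2

def plan_reference_frames(mv_p16x16, num_ref_frames=5):
    """Return (reference frame index, search range) pairs for one MB."""
    plan = [(0, FULL_RANGE)]                           # step 1: full search on frame 0
    mvx, mvy = mv_p16x16
    if abs(mvx) + abs(mvy) > TH_MV:                    # step 2: fast-motion MB ->
        return plan                                    # one reference frame is enough
    for ref in range(1, num_ref_frames):               # slow-motion MB: keep MRF,
        plan.append((ref, SMALL_RANGE))                # but with the reduced range
    return plan

print(plan_reference_frames((20, 3)))   # fast motion:  [(0, 16)]
print(plan_reference_frames((2, 1)))    # slow motion:  [(0, 16), (1, 2), (2, 2), (3, 2), (4, 2)]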


4. EXPERIMENTAL RESULTS

Six standard sequences in QCIF format at 30 Hz, Foreman (255 frames), Carphone (255 frames), Mobile (249 frames), Coastguard (255 frames), Football (249 frames) and Tempete (249 frames), are tested to compare bit-rate, PSNR and coding speed. Among these sequences, Foreman, Carphone and Mobile are the most sensitive to the number of reference frames. The other simulation conditions are listed in Table 1.

Table 1. Simulation conditions
QP:            16, 20, 24, 28, 32
Search range:  JM8.1a: ±16; Ours: TH_MV = 12, S_FW = S_FH = ±16, S_SW = S_SH = ±2
Others:        no B slices, CAVLC, 5 reference frames, RDO off, Hadamard transform

The RD-curve comparisons of the six test sequences are shown in Figure 3. Since our algorithms provide almost the same coding efficiency, it is hard to distinguish our curves from the reference ones. The experimental coding speed-up ratios are listed in Table 2. The coding speed-up ratio is defined as the ratio of the full-search algorithm's coding time to ours in the case of 5 reference frames. Table 2 demonstrates that 61.4% to 76.7% of the computation can be saved by our schemes. The ratio generally increases with QP, which mainly comes from the stronger effect of the all-zero block detection algorithm at larger QP. It should be noticed that our algorithms are orthogonal to fast block matching algorithms: if fast block matching algorithms are applied to the first reference frame search, even more computation can be saved.

[Fig. 3. RD curve comparisons: PSNR (dB) versus bit-rate (kbps) for Foreman, Carphone and Mobile (top) and Coastguard, Football and Tempete (bottom), JM versus ours.]

Table 2. Coding speed-up ratio
QP          16    20    24    28    32
Foreman     2.68  2.75  2.96  3.33  3.94
Carphone    2.77  2.89  3.37  3.58  4.11
Mobile      2.63  2.59  2.68  2.82  3.07
Coastguard  3.47  3.42  3.43  3.70  3.93
Football    3.43  3.42  3.54  3.58  3.80
Tempete     3.11  3.06  3.01  3.11  3.23
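To relate the speed-up ratios in Table 2 to the quoted savings: with T_full and T_ours denoting the coding times, the saved fraction is 1 - T_ours/T_full = 1 - 1/(speed-up ratio). For example, the smallest ratio in Table 2, 2.59 (Mobile at QP = 20), corresponds to a saving of 1 - 1/2.59 ≈ 61.4%, the lower end of the reported range.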

5. CONCLUSIONS

Fully considering the limitations of MB-pipeline hardware architectures, we propose two VLSI-friendly fast algorithms for MRF ME in H.264/AVC. The Hadamard transform coefficient based all-zero block detection algorithm efficiently alleviates the computation load of the FME part. The effect of motion on aliasing in the prediction error is theoretically and experimentally investigated, and the MV based reference frame elimination and search area adjustment algorithm is proposed to save operations in both the IME and FME parts. Experimental results show that 61.4%-76.7% of the computation can be saved with almost the same coding quality as the reference software. Moreover, the proposed schemes can be combined with other fast block ME algorithms to further improve the performance.

6. REFERENCES

[1] T. C. Chen et al., "Analysis and architecture design of an HDTV720p 30 frames/s H.264/AVC encoder," IEEE Trans. on Circuits and Systems for Video Technology, vol. 16, no. 6, pp. 673-688, June 2006.

[2] Y. W. Huang et al., "Analysis and complexity reduction of multiple reference frames motion estimation in H.264/AVC," IEEE Trans. on Circuits and Systems for Video Technology, vol. 16, no. 4, pp. 507-522, April 2006.

[3] Y. P. Su and M. T. Sun, "Fast multiple reference frame motion estimation for H.264/AVC," IEEE Trans. on Circuits and Systems for Video Technology, vol. 16, no. 3, pp. 447-452, March 2006.

[4] M. J. Chen et al., "Efficient multi-frame motion estimation algorithms for MPEG-4 AVC/JVT/H.264," in Proceedings of the 2004 International Symposium on Circuits and Systems, May 2004, vol. 3, pp. 737-740.

[5] H. S. Malvar et al., "Low-complexity transform and quantization in H.264/AVC," IEEE Trans. on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 598-603, July 2003.

[6] T. Wedi and H. G. Musmann, "Motion- and aliasing-compensated prediction for hybrid video coding," IEEE Trans. on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 577-586, July 2003.

[7] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, Prentice Hall, 2002.

