A scalable architecture of a real-time MP@HL MPEG-2 video encoder for multi-resolution video

Kazuhito SUGURI, Takeshi YOSHITOME, Mitsuo IKEDA, Toshio KONDO and Takeshi OGURA

NTT Human Interface Laboratories, 1-1 Hikarinooka, Yokosuka City, Kanagawa 239-0847, Japan

ABSTRACT

We have proposed a new system architecture for an MPEG-2 video encoder designed for high-resolution video. The system architecture uses the spatially parallel encoding approach and is scalable with respect to the target video resolution to be encoded. Three new techniques have been introduced into the system. The first is a general video interface that supports multiple video formats. The second is a bitstream generation control scheme suited to the spatially parallel encoding approach. The third is a simple data-sharing mechanism for all encoding units. With these techniques, the system achieves both scalability and high encoding efficiency. Video encoding systems based on this architecture will enable high-quality video encoding to be used in commercial and personal visual applications at reasonable system cost.

Keywords: HDTV, video encoding, MPEG-2, scalable system, architecture

1. INTRODUCTION

High-resolution video data will be one of the important contents of the visual applications and services of the next generation. For example, (1) the field of digital broadcasting is now entering the HDTV era and some broadcast stations are planning to begin broadcasting HDTV programs within a few years, (2) video archive service providers will need to be able to meet the demand for high-resolution video films, and (3) applications for PCs, whose capabilities are increasing year by year, will soon offer better resolution than that of conventional TV with an NTSC or PAL format. To make these services and applications available at reasonable cost, limited resources such as bandwidth and storage must be used effectively, and the compression of high-resolution video data is one essential technology in this regard. The MPEG-2 video coding standard [1] is the most important of the standards because it supports not only the video resolution of conventional TVs for digital broadcasting and DVD storage media but also higher-resolution video such as HDTV. Recently, video encoding systems based on Main Profile at Main Level (MP@ML) of the MPEG-2 standard for conventional TV resolution have become available at reasonable cost thanks to single-chip video encoder LSIs [2-5]. This means that systems based on Main Profile at High Level (MP@HL) of the MPEG-2 standard will soon be required for higher-resolution video services and applications. One of the most important factors in the design of a video encoding system is the computational power it requires, which varies with the video resolution. For example, a system designed for a video resolution of 1920 pixels x 1080 lines x 30 fps requires six times the computational power of one designed for resolutions up to 720 x 480 x 30.
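The sixfold figure can be checked directly from the pixel rates; a minimal Python sketch:

```python
def pixel_rate(width, height, fps):
    """Pixels that must be processed per second for a given video format."""
    return width * height * fps

hdtv = pixel_rate(1920, 1080, 30)  # the MP@HL-class example from the text
sdtv = pixel_rate(720, 480, 30)    # the MP@ML-class example from the text

# The HDTV format requires six times the computational power of the SDTV one.
print(hdtv / sdtv)  # → 6.0
```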
To keep costs down, it is desirable that the system for higher resolution video be composed of sub-systems designed for lower resolution video. This is especially important for systems designed for PC extension boards because the video resolution should be able to be easily expanded according to the upgrades of the PC environments in which the applications run. So, in the design of the MP©HL MPEG-2 video encoding system, it is important to keep the system adaptable to the resolution of the video to be encoded. Some video encoding systems designed for HDTV broadcasting service have already been reportd.68 However, those systems are too large and expensive for the typical consumer. Furthermore, the system architecture of these systems cannot be extended to meet the different requirements for video resolution. E-mail: {suguritomeikekondogura}©nttvdt.hiI.ntt.co.jp Part of the IS&T/SPIE Conference on Visual Communications and Image Processinci '99 • San Jose, California • January 1999 SPIE Vol. 3653 • 0277-786X/981$1O.OO


In this paper, we propose an MPEG-2 video encoding system architecture that is adaptable to the target video resolution to be encoded. Some control techniques newly introduced to a unit-encoder of the system guarantee the scalability of the system architecture. Section 2 describes the approach we chose and the problems to be solved. Section 3 discusses our system architecture, which solves the problems described in section 2. Section 4 shows the experimental encoding system we have developed using the proposed system architecture.

2. PROBLEMS TO BE SOLVED

There are two approaches to realizing a high-resolution video encoding system: a spatially parallel encoding approach and a functionally parallel encoding approach. Because single-chip implementation of an HDTV video encoder cannot be done cost-effectively with current technology, these parallel encoding approaches are essential. The spatially parallel encoding approach uses several video encoding units (VEUs) that operate simultaneously.

Figure 1 shows how the original video is encoded using this approach. The original video frames are first separated horizontally and/or vertically before being input into each VEU. Each VEU encodes a sequence of the separated video frames and generates a partial-bitstream for them. A bitstream for the original video frames is then reconstructed from all the partial-bitstreams from the VEUs.
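The separation step is simple bookkeeping; a toy Python sketch (the function name and the list-of-rows frame model are illustrative assumptions, not the hardware implementation):

```python
def separate_frame(frame, num_veus):
    """Split a frame (a list of rows) into horizontal stripes, one per VEU.

    Toy model: assumes the frame height divides evenly among the VEUs.
    """
    height = len(frame)
    stripe = height // num_veus
    return [frame[i * stripe:(i + 1) * stripe] for i in range(num_veus)]

# A toy 8-line "frame" split across 4 VEUs: each VEU sees 2 lines.
frame = [[0] * 16 for _ in range(8)]
stripes = separate_frame(frame, 4)
print([len(s) for s in stripes])  # → [2, 2, 2, 2]
```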


Figure 1. Spatially parallel encoding.

The functionally parallel encoding approach uses high-performance function-specific modules (HFMs). This approach has been adopted in some MP@ML MPEG-2 video encoding systems [9, 10]. Figure 2 shows how the original video is encoded using this approach. The original video frames are input to the first HFM, which performs motion estimation. The processed data from the first HFM are then processed by the next HFM, which performs DCT. Finally, the last HFM outputs a bitstream for the original video frames.

[Figure 2: the original video frames pass through function-specific modules for ME/MC, DCT/Q, and VLC, coordinated by a system controller, to produce the bitstream for the original video.]
Figure 2. Functionally parallel encoding.

The functionally parallel encoding approach has a serious problem in that the performance of each HFM will need to be improved for higher-resolution video. This improvement will have to be made by increasing the operation


frequency or the amount of hardware for each HFM, which may require re-designing the HFMs and changing the interfaces between them. In contrast, the spatially parallel encoding approach can accommodate higher-resolution systems simply by increasing the number of VEUs. Furthermore, the cost of the system can be reduced by using a single-chip MP@ML MPEG-2 video encoding LSI for the VEU. We have therefore chosen the spatially parallel encoding approach. Before it can be used in practice, however, two major problems must be solved.

problem 1: The bitstream for the original video has to be reconstructed from the partial-bitstreams generated by the VEUs. Furthermore, some of the headers in the partial-bitstreams have to be removed or modified. This reconstruction process hampers making the system scalable because the component design for this process depends on the number of VEUs in the system.

problem 2: It is difficult for the VEUs to share data with each other. If no data can be shared, the quality of the decoded video frames is degraded because 1) the reference images for motion estimation and compensation are limited to the separated video frames, and 2) the bit allocation for each separated video frame cannot be optimized over the entire original video frame.

3. SYSTEM ARCHITECTURE

To overcome the problems caused by frame separation, we propose a video encoding system architecture that has the following features:

• parallel encoding with horizontally separated video frames,
• partial-bitstream generation with header generation control,
• partial-bitstream output control for automatic bitstream reconstruction,
• data transfer between VEUs.

The first three features address the first problem, and the last addresses the second problem described in Sec. 2. All of the features listed above have been achieved by using a multiple-chip-enhanced MP@ML MPEG-2 video encoding LSI, the SuperENC [5], which we have already developed, as part of the VEU. The special functionalities implemented in the SuperENC for the multiple-chip system are a generalized video interface (GVIF), partial-bitstream generation control (PBGC), partial-bitstream output control processes (BOCPs), and a multi-chip data transfer interface (MDTI). Despite these additional functions, the SuperENC can be fabricated at about the same cost as other single-chip MP@ML MPEG-2 video encoder LSIs. The overall system architecture is illustrated in Fig. 3, and a block diagram of the SuperENC is shown in Fig. 4. The shaded area in Fig. 4 indicates the blocks performing the special functions for the multiple-chip system. This system architecture is easily adaptable to the target video resolution without degrading video encoding quality, and it remains scalable when the system is extended to support higher resolutions.

3.1. Frame separation for several video formats

In the parallel encoding approach, each VEU must select an appropriate area of the original video frame, the separated video frame, to encode. In addition, the video interface module of the VEU should be flexible enough to support multiple video formats. The GVIF of the SuperENC is implemented for these purposes.

The GVIF extracts a rectangular area of the original video frame. The rectangular area is specified by operation parameters that indicate its size and its location in the original video frame. These parameters are set by the system controller so that each SuperENC encodes the appropriate part of the original video frames.
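A sketch of how such parameters might be derived for a horizontal separation (the tuple layout is a hypothetical stand-in for the actual operation parameters, which the text does not detail):

```python
def gvif_rectangle(frame_w, frame_h, num_veus, index):
    """(x, y, width, height) of the area the index-th SuperENC's GVIF
    extracts for a horizontal separation into equal stripes."""
    stripe_h = frame_h // num_veus
    return (0, index * stripe_h, frame_w, stripe_h)

# Four SuperENCs sharing a 1920x1088 frame; 272-line stripes stay
# macroblock-aligned (272 is a multiple of 16).
for i in range(4):
    print(gvif_rectangle(1920, 1088, 4, i))
```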

By using the GVIF, any kind of frame separation can be done, as shown in Fig. 5. In the proposed system, horizontal separation [Fig. 5 (a)] and vertical separation [Fig. 5 (b)] are indispensable for making the system scalable with the target video resolution. Horizontal separation is preferable from the standpoint of encoding efficiency, because vertical separation increases the number of slice headers in the generated bitstream. Details about the benefits of these frame separation strategies are discussed in Sec. 3.3 and Sec. 3.4.
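The slice-header cost of vertical separation can be illustrated with a small calculation, assuming 16-line macroblock rows and the MPEG-2 minimum of one slice per macroblock row per separated frame:

```python
def min_slice_headers(frame_h, n, vertical):
    """Minimum slice headers per original frame when it is split into n
    separated frames (one slice per macroblock row per separated frame)."""
    mb_rows = frame_h // 16
    # A vertical split makes every macroblock row cross all n stripes,
    # so each row needs n slices instead of one.
    return mb_rows * n if vertical else mb_rows

print(min_slice_headers(1088, 3, vertical=False))  # → 68
print(min_slice_headers(1088, 3, vertical=True))   # → 204
```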


[Figure 3: bitstream output; u.p.: upper-port, l.p.: lower-port, f.i.: flag-input, f.o.: flag-output]

Figure 3. The system architecture for the MP@HL video encoder.

The GVIF also supports multiple video formats. The acceptable video signal formats are listed in Table 1. Some HDTV video signal formats specified by SMPTE standards adopt similar timing reference sequences using EAV and SAV codes to identify a frame. An important difference between these formats is the number of pixels between EAV and SAV, which corresponds to the size of the rectangular area. Because this information can be changed through the operation parameters of the GVIF, only changes to these parameters are needed to support these standardized video formats. On the other hand, non-standardized video formats can be easily handled by using video signals with separate synchronization signals, VSYNC and HSYNC, to identify a frame. The number of lines and the number of pixels from the edges of VSYNC and HSYNC can be changed through the operation parameters of the GVIF. This feature of the GVIF simplifies the design of the preprocessor because little care need be taken about the output timing of the processed video data, VSYNC, and HSYNC. Thus, a preprocessor that converts a non-standard video format into one suitable for the GVIF can be easily implemented.
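The EAV/SAV mechanism can be sketched as a byte-stream scan. This simplified Python model assumes BT.656-style 8-bit timing reference codes (FF 00 00 XY, with the H bit of XY distinguishing EAV from SAV) and ignores the 10-bit words and protection bits of the real HDTV interfaces:

```python
def find_timing_refs(data):
    """Scan a byte stream for timing reference codes (FF 00 00 XY) and
    classify each as EAV or SAV from the H bit (0x10) of the XY word."""
    refs = []
    for i in range(len(data) - 3):
        if data[i] == 0xFF and data[i + 1] == 0x00 and data[i + 2] == 0x00:
            xy = data[i + 3]
            refs.append((i, "EAV" if xy & 0x10 else "SAV"))
    return refs

# A toy line: an EAV code, some blanking bytes, then an SAV code.
line = bytes([0xFF, 0x00, 0x00, 0x9D]) + bytes(4) + bytes([0xFF, 0x00, 0x00, 0x80])
print(find_timing_refs(line))  # → [(0, 'EAV'), (8, 'SAV')]
```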

Table 1. Video signal formats acceptable for the GVIF.

         Y/C signals   sync. signals   sync. type
type 1   composite     composite       EAV/SAV
type 2   separate      composite       EAV/SAV
type 3   composite     separate        VSYNC, HSYNC
type 4   separate      separate        VSYNC, HSYNC

3.2. Partial-bitstream generation

In the parallel encoding approach, the bitstreams generated by each VEU have to be reconstructed in order to generate a bitstream for the original video. During this reconstruction process, none of the partial-bitstreams may contain any unnecessary layers. For example, a partial-bitstream containing the picture layer and the layers above it is required only from the VEU that encodes the upper-left corner of the original video frame; the other VEUs need to generate only the slice layer and the layers below it. Furthermore, some of the header information contained in the partial-bitstreams, the slice number for example, must be changed according to the location of the


[Figure 4 legend] VIF: video interface module; MDT: multiple chip data transfer module; ME/MC: motion estimation and compensation module; DCT/Q: DCT and quantization module; VLC: variable length coding module; BSIF: bitstream interface module; GVIF: generalized video interface; PBGC: partial bitstream generation control; BOCP: bitstream output control process; MDTI: multiple chip data transfer interface

Figure 4. The block diagram of the SuperENC.

slice in the original video frame. If this requirement is not satisfied, a decoding and modification process for the partial-bitstreams has to be implemented in the system, as shown in Fig. 6 (a). The PBGC is introduced into the SuperENC to control the generation of the headers in the partial-bitstream. The PBGC is realized by the combined operation of the system controller and the on-chip RISC module. The system controller indicates the location of the separated video frame each SuperENC will encode. Based on this information, the on-chip RISC module decides whether or not to generate each layer of headers, and modifies the slice number for each slice in the separated video frame. In addition, for header fields that are generated using information from other SuperENCs, such as vbv_delay, the collection and re-distribution of the information can be done by the system controller. The PBGC makes it possible for the system to generate a bitstream of the original video simply by concatenating the partial-bitstreams, as shown in Fig. 6 (b). In contrast with the conventional system shown in Fig. 6 (a), the only functionality required to make a bitstream for the original video is a switch that selects the proper partial-bitstream at the proper time.
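The PBGC decision can be sketched as follows, a hypothetical model of the behaviour described above, assuming horizontal separation and 0-based slice numbering:

```python
def pbgc_headers(veu_index, stripe_mb_rows):
    """Which layers the veu_index-th SuperENC should generate, and how its
    slice numbers must be offset, for a horizontal separation.

    Returns (emit_upper_layers, slice_offset): only the VEU encoding the
    top of the frame emits the picture layer and the layers above it;
    every VEU renumbers its slices by its vertical position.
    """
    emit_upper_layers = (veu_index == 0)
    slice_offset = veu_index * stripe_mb_rows
    return emit_upper_layers, slice_offset

# Four VEUs, 17 macroblock rows each (a 1088-line frame):
for v in range(4):
    print(v, pbgc_headers(v, 17))
```

With slice numbers offset this way, the concatenated partial-bitstreams form a single consistent bitstream without any external decode-and-rewrite stage.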

3.3. Bitstream output procedure

Problems still remain in the implementation of the switch. Two processes, called the BOCPs, are needed to realize it. The first is bitstream-field separation, which splits every partial-bitstream into bitstream-fields that should be located contiguously in the final bitstream. Because conventional video encoding LSIs output the bitstream continuously, a sub-set of the decoder would be required outside the LSI to detect the points at which the partial-bitstream should be cut. The second is bitstream-field reordering, which concatenates these bitstream-fields in the proper order. This process hampers making the system scalable because the order depends on how many VEUs are operating in the system and how the original video frames are separated for each VEU. Two mechanisms have to be implemented in the VEU so that the BOCPs can be realized in a cost-effective manner and so that the scalability of the system is not affected. The first is a bitstream-field management mechanism used for


Figure 5. Separation of the original video frame: (a) horizontal separation, (b) vertical separation, (c) complex separation.

Figure 6. Bitstream reconstruction: (a) conventional system, (b) proposed system.

the bitstream-field separation process. In the SuperENC, this mechanism is implemented using period-flags attached to every byte of the partial-bitstream to indicate whether the byte marks the period (end) of a continuous bitstream-field. The period-flags are attached as the partial-bitstream is being generated. The second is an output-permit-flag transfer mechanism used for the bitstream-field reordering process. The output-permit-flag indicates whether a SuperENC is permitted to output its partial-bitstream. If the output-permit-flag indicates that the partial-bitstream should be output, the SuperENC checks the period-flag of each byte of the bitstream-fields. If the period-flag indicates the period of the continuous bitstream-field and all


of the bitstream-fields have been output, the SuperENC transfers the value of the output-permit-flag to the next SuperENC, which should output the next bitstream-fields. To avoid timing conflicts in the bitstream output, the values of the output-permit-flags of all the SuperENCs in the system are controlled so that only one SuperENC is permitted to output at a time. The operation of the BOCPs using the SuperENCs is summarized as follows:

step 0: In the initial state of the system, only the SuperENC encoding the upper-left corner of the original video frame has its output-permit-flag activated.

step 1: The SuperENC whose output-permit-flag is activated outputs continuous bitstream-fields.

step 2: When the period of a continuous bitstream-field is detected by checking the period-flag, the SuperENC deactivates its output-permit-flag and sends a signal to activate that of the next SuperENC, which holds the next continuous bitstream-field.

step 3: Steps 1 and 2 are repeated while the system is operating.

An important key to making the system scalable is keeping step 2 simple. Because scalability is achieved by changing the number of SuperENCs used, step 2 should be done in the same manner even if the number of SuperENCs changes. Step 2 can be simply realized by using a ring-connected control line, called the BOCP-ring, as shown in Fig. 3. This simple architecture can be adopted because, for each SuperENC, the next SuperENC in step 2 is always the same one, given that the original video frame is separated simply horizontally or vertically. This is one of the significant benefits of our frame separation strategy. For example, for the vertical separation shown in Fig. 5 (b), the continuous bitstream-field is one slice-layer of the partial-bitstream. Because the bitstream-fields should be output in raster-scan order over the original video frame, the next slice-layer after a slice in the left-most separated video frame is always located in the middle separated video frame. The same holds for the horizontal separation shown in Fig. 5 (a), with the separated video frame taking the place of the slice. However, for a complex separation, as shown in Fig. 5 (c), the continuous bitstream-fields following those of the upper-right separated video frame may exist in the upper-left or lower-left ones. Since there is more than one candidate for the next SuperENC, the system would need a more complex mechanism to determine the next one. There are two major benefits of the BOCP-ring architecture.
The first is that the scalability of the system is easily achieved by changing the number of SuperENCs; the operation of a SuperENC for the BOCPs does not depend on the number of SuperENCs in the BOCP-ring. The second is that there is no need to prepare an external module to reconstruct a bitstream for the original video, because the reconstruction is performed by the combined operation of the SuperENCs in the BOCP-ring. One thing should be noted: vertical separation will degrade encoding quality. This is caused by the implementation of the bitstream-field management mechanism in the SuperENC. Because the period of a continuous bitstream-field is detected in units of one byte, the bitstream-field has to be byte-aligned. A continuous bitstream-field must therefore be at least one complete slice-layer, because the slice-layer is the lowest layer that is byte-aligned in the MPEG-2 standard. This means that vertical separation needs N times as many slice headers as horizontal separation, where N is the number of separated video frames per original video frame.
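Steps 0 to 3 above can be simulated in a few lines; this behavioural Python sketch circulates the output-permit-flag around a toy BOCP-ring (the field contents are illustrative):

```python
def bocp_ring_output(partial_bitstreams):
    """Simulate the BOCP-ring: each SuperENC holds its queue of continuous
    bitstream-fields; the output-permit-flag travels around the ring, and
    the holder outputs its next field before passing the flag on."""
    queues = [list(fields) for fields in partial_bitstreams]
    out, holder = [], 0                        # step 0: first chip holds the flag
    while any(queues):
        if queues[holder]:
            out.append(queues[holder].pop(0))  # step 1: output one field
        holder = (holder + 1) % len(queues)    # step 2: pass the flag on
    return "".join(out)                        # step 3: repeat until drained

# Three SuperENCs, two frames each: fields interleave in raster order.
print(bocp_ring_output([["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]))
# → A1B1C1A2B2C2
```

Note that the loop body is identical whatever the ring size, which is exactly why changing the number of SuperENCs does not change the per-chip operation.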

3.4. Data sharing

For video applications using compression techniques, the quality of the decoded images is very important, as is the cost of the system. For high-definition video applications especially, high-quality compression is essential. In the parallel encoding approach, some data have to be shared between VEUs in order to avoid degrading the encoding quality. Two kinds of data must be shared. The first is the reference images used in the motion compensation process. Because MPEG-2 relies on motion compensation to obtain high encoding efficiency, the encoding quality will be degraded if the reference images cannot be accessed. The second is encoding control information, which is used to dynamically optimize bit allocation during encoding. For this optimization, the number of bits allocated to each separated video frame should be controlled so that more bits are used for separated video frames with higher complexity.


The MDTI is introduced to share these data among the SuperENCs in the system. The SuperENC has two data-bus interfaces for the MDTI: an upper-port and a lower-port. These ports allow all the SuperENCs in the system to be connected in a ring, called the MDTI-ring, as shown in Fig. 3. Only one MDTI-ring is needed to share data among the SuperENCs, because data transferred from the previous SuperENC can be relayed to the next one. Furthermore, the amount of data to be transferred between the SuperENCs can be reduced by separating the original video frame simply horizontally or vertically. Because the search range for motion compensation does not cover an entire frame, the useful reference images exist only in the neighboring separated video frames. For example, for the horizontal separation shown in Fig. 5 (a), only data transfers from the top and bottom separated video frames are needed to process the middle separated video frame. However, for the complex separation shown in Fig. 5 (c), encoding the upper-right separated video frame requires reference images from all the surrounding separated video frames. In general, encoding based on a complex separation requires more than four times as many data transfers as encoding based on horizontal or vertical separation. The data transfer over the MDTI-ring is based on a token-ring control scheme. A master-SuperENC controls the data transfer over the MDTI-ring. The master-SuperENC can obtain the current status of the data transfer by monitoring the data transfer control signals that are relayed around the MDTI-ring step by step. The data transfer process using the MDTI-ring is summarized as follows:

step 0: In the initial state of the data transfer, all SuperENCs in the MDTI-ring wait for data input from neighboring SuperENCs.

step 1: The master-SuperENC starts to send data via one port of the data-bus interface, the upper-port for example. Then, as each subsequent SuperENC receives data via its lower-port, it sends data via its upper-port.

step 2: The master-SuperENC stops sending data via the upper-port, and then compares the amount of data received from the lower-port with the amount of data sent via the upper-port. If the amounts are the same, the master-SuperENC judges that all SuperENCs in the MDTI-ring have finished the data transfer started at step 1.

step 3: Repeat steps 1 and 2 once for data transfer in the opposite direction, i.e., starting from the lower-port.

step 4: Repeat steps 1 to 3 while encoding.

Because the number of SuperENCs affects only the size of the MDTI-ring, as it does the size of the BOCP-ring, the data transfers do not affect the scalability of the system. Furthermore, because the number of delay cycles in the data transfer between SuperENCs does not matter as long as the protocol is observed, any process, such as a filter, can be inserted if needed. The data transfer can be done simultaneously with the encoding process. Furthermore, there is no need to load data twice from the frame-memory of the SuperENC for the transfer, because the data to be transferred are already prepared in the local storage of the SuperENC for the encoding process. Thus, the only overhead caused by the data sharing process is the cycles needed to store the data transferred from other SuperENCs into the frame-memory. This overhead can be kept to less than 10% of that required for the encoding process.
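What the MDTI-ring achieves for motion compensation can be modelled as boundary-row exchange between neighbouring stripes; a Python sketch (the two-row search range in the example is an assumed parameter, not a figure from the text):

```python
def share_boundary_rows(stripes, search_rows):
    """Model the result of one MDTI-ring pass in each direction: every
    SuperENC ends up with its own stripe plus the boundary rows of its
    neighbours, so motion search can cross stripe edges.

    stripes: list of stripes (lists of rows), one per SuperENC, in order.
    """
    n = len(stripes)
    extended = []
    for i in range(n):
        above = stripes[i - 1][-search_rows:] if i > 0 else []
        below = stripes[i + 1][:search_rows] if i < n - 1 else []
        extended.append(above + stripes[i] + below)
    return extended

# Four 4-row stripes with a 2-row search range: interior stripes grow to 8.
stripes = [[(i, r) for r in range(4)] for i in range(4)]
print([len(s) for s in share_boundary_rows(stripes, 2)])  # → [6, 8, 8, 6]
```

Only neighbouring stripes exchange data, which is why a simple horizontal or vertical separation keeps the transfer volume low compared with a complex separation.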

4. EXPERIMENTAL SYSTEM

We have developed an experimental video encoding system based on the system architecture described above. A photograph of the board is shown in Fig. 7, and the specifications of the experimental system are summarized in Table 2. The system is constructed using 12 SuperENCs. All components of the system, including an audio encoder and a multiplexer, are integrated on a single board. The system is capable of real-time MP@HL MPEG-2 video encoding. By changing the number of SuperENCs activated in the system, it can encode video of any resolution: for example, two or more SuperENCs can encode 720 pixels x 480 lines x 60 fps, seven or more can encode 1920 x 1088 x 30, and 12 SuperENCs can encode the maximum resolution of 2048 x 2048 x 30. Each SuperENC can transfer 10 blocks of image data and 10 bytes of arbitrary data between two neighboring SuperENCs during every macroblock operation period by using the MDTI. The image data transfer rate of 10 blocks per macroblock operation period guarantees that all image data encoded in neighboring SuperENCs can be used as reference images for motion compensation. The data transfer rate of 10 bytes of arbitrary data per macroblock


operation period enables us to optimize bit allocation during the video encoding by sharing any information, such as allocated bit counts and complexity, over all the SuperENCs in the system. Thus, the proposed system architecture allows us to create high-quality video encoding systems that have scalability in terms of the target video resolution.
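The relation between resolution and the number of active SuperENCs can be estimated from pixel rates. The per-chip throughput below is an assumed MP@ML-class rate, which reproduces the 720x480x60 and 1920x1088x30 figures quoted above; the 2048x2048x30 case on 12 chips suggests the real chips sustain somewhat more than this assumption:

```python
import math

def superencs_needed(width, height, fps, unit_rate=720 * 480 * 30):
    """Estimated number of SuperENCs for a target format, assuming each
    unit sustains roughly an MP@ML pixel rate (an assumption, not a
    figure from the text)."""
    return math.ceil(width * height * fps / unit_rate)

print(superencs_needed(720, 480, 60))    # → 2
print(superencs_needed(1920, 1088, 30))  # → 7
```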

Figure 7. The experimental board.

Table 2. Specifications of the experimental system.

Standard        video: {MP|422P}@{ML|HL} MPEG-2; audio: ISO/IEC 11172-3; multiplex: ISO/IEC 13818-1
Video formats   NTSC, PAL, 720P, 1080I and others
Bitstreams      ES/PES
Bit rate        up to 80 Mbps
Other           video resolution up to 2048 pixels x 2048 lines x 30 fps; MV search range ±211.5 pixels x ±113.5 lines (max.)

5. CONCLUSIONS

In this paper, we have proposed a new system architecture for an MPEG-2 video encoder designed for high-resolution video. The system architecture uses the spatially parallel encoding approach and is scalable with respect to the target video resolution to be encoded. Three new techniques have been introduced into the system. The first is the general video interface that supports multiple video formats. The second is the bitstream generation control scheme suited to the spatially parallel encoding approach. The third is the simple data-sharing mechanism for all encoding units. By using these techniques, we have achieved both system scalability and high encoding efficiency. The capability of the proposed architecture has been confirmed with an experimental system that uses a newly designed MP@ML MPEG-2 video encoding LSI, the SuperENC, for the encoding units. Video encoding systems based on this architecture will enable high-quality video to be used in commercial and personal visual applications at reasonable system cost.

ACKNOWLEDGMENTS

The authors thank Makoto Endo and Ken Nakamura for technical suggestions and discussions. The authors thank Tsunehachi Ishitani, Toshio Tuchiya, Hiroshi Sakai and Yasushi Tokioka of NTT Electronics Co. for their contributions to the implementation of the experimental system. And the authors thank Dr. Ryota Kasai, Dr. Susumu Ichinose and Dr. Hiroshi Watanabe of NTT R&D Center for their encouragement and support.

REFERENCES

1. "Information technology - Generic coding of moving pictures and associated audio: ISO/IEC 13818-2 international standard (video)," November 1994. ISO/IEC 13818-2.

2. M. Mizuno, Y. Ooi, N. Hayashi, J. Goto, M. Hozumi, K. Furuta, A. Shibayama, Y. Nakazawa, O. Ohnishi, S.-Y. Zhu, Y. Yokoyama, Y. Katayama, H. Takano, N. Miki, Y. Senda, I. Tamitani, and M. Yamashina, "A 1.5-W single-chip MPEG-2 MP@ML video encoder with low power motion estimation and clocking," IEEE Journal of Solid-State Circuits 32, pp. 1807-1816, November 1997.

3. E. Miyagoshi, T. Araki, T. Sayama, A. Ohtani, T. Minemaru, K. Okamoto, H. Kodama, T. Morishige, A. Watabe, K. Aoki, T. Mitsumori, H. Imanishi, T. Jinbo, Y. Tanaka, M. Taniyama, T. Shingou, T. Fukumoto, H. Morimoto, and K. Aono, "A 100-mm² 0.95-W single-chip MPEG-2 MP@ML video encoder with a 128-GOPS motion estimator and a multi-tasking RISC-type controller," in Proceedings of the International Solid-State Circuits Conference, pp. 30-31, 1998.

4. E. Ogura, M. Takashima, D. Hiranaka, T. Ishikawa, Y. Yanagita, S. Suzuki, T. Fukuda, and T. Ishii, "A 1.2-W single-chip MPEG-2 MP@ML video encoder LSI including wide search range motion estimation and 81-MOPS controller," in Proceedings of the International Solid-State Circuits Conference, pp. 32-33, 1998.

5. T. Minami, T. Kondo, K. Nitta, K. Suguri, M. Ikeda, T. Yoshitome, H. Watanabe, H. Iwasaki, K. Ochiai, J. Naganuma, M. Endo, E. Yamagishi, T. Takahishi, K. Tadaishi, Y. Tashiro, N. Kobayashi, T. Okubo, T. Ogura, and R. Kasai, "A single-chip MPEG-2 MP@ML video encoder with multi-chip configuration for a single-board MP@HL encoder," in HOT Chips X, pp. 123-131, IEEE, August 1998.

6. J. Jeong, Y. Park, J. M. Jo, M. Yim, H. Moon, S. Kim, P. Yu, H. Kang, and W. Ahn, "Development of an experimental full-digital HDTV system: algorithm and implementation," IEEE Transactions on Consumer Electronics 40, pp. 234-242, August 1994.

7. T. Nakai, Y. Hatano, T. Kasezawa, H. Ito, and M. Nishida, "Development of HDTV digital transmission system through satellite," IEEE Transactions on Consumer Electronics 41, pp. 604-614, August 1995.

8. J. N. Mailhot and H. Derovanessian, "The Grand Alliance HDTV video encoder," IEEE Transactions on Consumer Electronics 41, pp. 1014-1019, November 1995.

9. T. Matsumura, H. Segawa, S. Kumaki, Y. Matsuura, A. Hanami, H. Yamaoka, R. Streitenberger, S. Nakagawa, K. Ishihara, T. Kasezawa, Y. Ajioka, A. Maeda, and M. Yoshimoto, "A chip set architecture for programmable real-time MPEG-2 video encoder," in Proceedings of the IEEE Custom Integrated Circuits Conference, pp. 393-396, 1995.

10. T. Kondo, K. Suguri, M. Ikeda, T. Abe, H. Matsuda, T. Okubo, K. Ogura, Y. Tashiro, N. Ono, T. Minami, R. Kusaba, T. Ikenaga, N. Shibata, R. Kasai, K. Otsu, F. Nakagawa, and Y. Sato, "Two-chip MPEG-2 video encoder," IEEE Micro 16, pp. 51—58, April 1996.


Jun 23, 2000 - Then, only the set of pixels within a small window centered around i ... takes about 2 seconds (on a Pentium-II 300MHz PC) to align and stitch by ... Picard, “Virtual Bellows: Constructing High-Quality Images for Video,” Proc.

Cheap Mpeg-4 Avc⁄H.264 Wifi Hdmi Video Encoder Hdmi ...
Cheap Mpeg-4 Avc⁄H.264 Wifi Hdmi Video Encoder Hdmi ... 64 Iptv Encoder Free Shipping & Wholesale Price.pdf. Cheap Mpeg-4 Avc⁄H.264 Wifi Hdmi Video ...

Low Complexity Encoder for Generalized Quasi-Cyclic ...
orbits, as described in (1), where Oi := O(si) for some si ∈ S. We can condider σ as the simultaneous shift (3) of {ci}. Thus, we have shown that the class of GQC codes is equivalent to the class of linear codes with nontrivial Aut(C) ⊃ 〈σ〉

Wireless communication system having linear encoder
Apr 8, 2013 - digital audio broadcasting,” in Proc. of the IEEE Region 10 Conf,. 1999, 1:569-572 ... Proc. of 36th Asilomar Conf. on Signals, Systems, and Computers,. Nov. 2002 ...... local area network, a cellular phone, a laptop or handheld.

Wireless communication system having linear encoder
Apr 8, 2013 - Antipolis, France. [Online]. 1999, Available: http://www.etsi.org/ umts, 25 pages. ... Boutros et al., “Good lattice constellations for both Rayleigh fading ..... and Computer Engineering, Edmonton, Alta, Canada, May 9-12,. 1999.

Universal Sentence Encoder - Research at Google
We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse transfer tasks. Two variants of the encoding mod- el

DoubleClick for Publishers Video .ca
Hosting and transcoding. Viewers are ... Best of all, DFP Video brings content-level insights to the surface that give unrivalled power to ... Viewer ad experience can be at odds with business priorities, and managing this balance can be ...

Semantic Video Event Search for Surveillance Video
2.2. Event Detection in Distributed System. Conventional video analytic systems typically use one processor per sensor. The video from the calibrated sensor.

Opt-Encoder-62R-nsrmtr.pdf
Terminal Strength: 15 lbs cable pull-out force. min. Operating Speed: 100 RPM max. Environmental Ratings. Operating Temperature Range: -40°C to. 85°C.

Universal Sentence Encoder - Research at Google
transfer task training data, and task perfor- mance. Comparisons are made with base- lines that use word level ... Limited amounts of training data are available for many NLP tasks. This presents a challenge for ..... sentence length, while the DAN m

DoubleClick for Publishers Video .ca
Benefits at a glance. • Increase revenue ... module provides you with a sophisticated technology platform to capture more value from video. ... Viewer ad experience can be at odds with business priorities, and managing this balance can be ...

Multiresolution Hierarchical Shape Models in 3D ...
cess described above, the extension to the 3D domain follows as presented be- low. Suppose x0 ..... The registration of 3D MR data used affine and non-rigid ...