Proto-Object Based Rate Control for JPEG2000: An Approach to Content-Based Scalability

what each group of pixels represents. An abundance of research has produced algorithms for automatic image and video segmentation [9]–[11] and for structuring multimedia content [12], [13], but these techniques cannot overcome problems such as partial occlusion, nonrigid motion, and over-segmentation, among many others. Many strategies reported so far are not efficient enough to represent generic scene content, and most practical applications are limited to the context of video-telephone applications with one person in front of a static background, i.e., typical head-and-shoulder scenes [4], [8], [14]. There is still no suitable language to efficiently represent generic images at the object level [15]. An alternative to object-based coding is the lower-level process of region-based coding [16], [17], in which the focus is not on objects as semantic units but rather on image regions defined by their statistical properties. Statistically homogeneous regions can be singled out by pixel-level segmentation techniques with the aim of encoding them efficiently, or the user can identify a region of interest (ROI) [18] to be encoded with a higher priority, or SPRITE or panorama coding [15], [19] techniques can be used to encode the background, as envisaged in several applications and standards [1]. These techniques can be seen as a midlevel computer vision approach that divides an image into regions of coherent texture. Note that this is an extension of the block-based approach in that regions are now arbitrarily shaped; the shape of a region needs to be coded using shape coding algorithms. However, these techniques require very sophisticated motion analysis and prediction strategies to construct and transmit the background separately from the foreground, and they cannot be seen as a tool that is easily applied to generic scene content.
In this paper, we attempt to bridge the semantic gap between object-based and region-based approaches by adopting the notion of a proto-object (PO). A PO is defined as a volatile unit of visual information that can be bound into a coherent and stable object when accessed by focused attention [20]. As the name suggests, a PO adds semantic meaning to the region-based description of an image; its semantic granularity is coarser than an object-level encoding but finer than a region-level encoding. We first segment an input image into biologically plausible PO regions and background (BG) regions (a BG region is an image region not belonging to any PO region) using a computational visual attention model [21], [22], and then separately encode these regions within the JPEG2000 coding system. In order to produce an embedded code-stream that is compatible with the JPEG2000 standard, we propose a three-stage rate control system that performs region-based rate allocation and efficiently reduces computational complexity and memory usage, while maintaining image quality comparable to the conventional post-compression rate distortion (PCRD) optimum algorithm for JPEG2000. The proposed approach adds to the JPEG2000 coding system the functionality of selectively encoding, decoding, and manipulating individual PO regions in an image, as well as content-based scalability, with trivial modifications beyond the scope of the standard.

Jianru Xue, Member, IEEE, Ce Li, and Nanning Zheng, Fellow, IEEE

Abstract—The JPEG2000 system provides scalability with respect to quality, resolution, and color component in the transfer of images. However, scalability with respect to semantic content is still lacking. We propose a biologically plausible, salient-region-based bit allocation mechanism within the JPEG2000 codec for the purpose of augmenting scalability with respect to semantic content. First, an input image is segmented into several salient proto-objects (regions that possibly contain a semantically meaningful physical object) and background regions (regions that contain no object of interest) by modeling visual focus of attention on salient proto-objects. Then, a novel rate control scheme distributes a target bit rate to each individual region according to its saliency, and constructs quality layers of proto-objects for the purpose of more precise truncation, comparable to the original quality layers in the standard. Empirical results show that the suggested approach adds to the JPEG2000 system scalability with respect to content, as well as the functionality of selectively encoding, decoding, and manipulating each individual proto-object in the image, with only trivial modifications to the JPEG2000 standard. Furthermore, the proposed rate control approach efficiently reduces computational complexity and memory usage, while maintaining image quality at a level comparable to the conventional post-compression rate distortion (PCRD) optimum truncation algorithm for JPEG2000.

Index Terms—Bit allocation, embedded bit stream, JPEG2000, rate control, salient region.

I. INTRODUCTION

Constraints inherent in a modern visual data transmission system, such as heterogeneous networks, varying connection quality, or the need to operate on a variety of devices with a wide range of capabilities, motivate an intense worldwide research effort to develop an image/video codec able to provide content-based interactivity and scalability. However, even state-of-the-art coding techniques, for example, JPEG2000 for still image coding [1] and H.264/AVC for video compression [2], [3], still code images/video on a pixel basis, which does not facilitate the encoding of semantic meaning and thus lacks such ability. To address this problem, content-based coding techniques [4]–[8] attempt to identify semantically meaningful object regions in an image and code them separately and adaptively. The assumption is that an image is naturally composed of object regions. Ideally, once semantic object regions have been identified and described, we can add information to what is being coded so that it is more meaningful to both humans and computers. There are two main lines of research in this area: object-based coding and region-based coding. Theoretically, object-based coding of generic scene content requires segmentation techniques to classify pixels with markup tags specifying

Manuscript received August 20, 2009; revised March 17, 2010, August 09, 2010; accepted September 08, 2010. Date of publication September 20, 2010; date of current version March 18, 2011. This work was supported in part by the NSFC projects 60875008 and 60635050, and 973 Program Project 2010CB327902. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Hsueh-Ming Hang. The authors are with the Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China (e-mail: jrxue@mail.; [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at Digital Object Identifier 10.1109/TIP.2010.2077643

II. PROPOSED SYSTEM ARCHITECTURE

In our augmented JPEG2000 system, shown in Fig. 1, to support content-based scalability we first segment an input image into PO regions and BG regions, and then reconsider both the construction of an operational RD curve in the coding pipeline and the implementation of an efficient rate control scheme in terms of PO regions. Two major modifications are thus made to the standard JPEG2000 system: 1) using PO region segmentation instead of tile partition, and 2) defining the quality

1057-7149/$26.00 © 2011 IEEE



Fig. 1. Framework of the proposed approach. Boxes in blue are augmented into the standard JPEG2000 coding system, and boxes in black are original components of the JPEG2000 system.

Fig. 2. Hierarchical partition in Tier-1 of the content scalable JPEG2000.

layer in terms of PO regions. These are reflected in the partition system and coding pipeline of the JPEG2000 system.

A. Hierarchical Partition System

The modified hierarchical partition system is shown in Fig. 2. First, we use the segmentation of PO regions and BG regions to replace the tile partition in the JPEG2000 system. This segmentation not only provides an approach to augment JPEG2000 with scalability with respect to content, but also forms the basis for a more efficient and accurate rate control scheme compared with PCRD. The PO region segmentation is based on the work of Walther et al. [22], which extends the Itti et al. [21] implementation of the saliency-map-based model of bottom-up attention with a process that infers the extent of a PO at the attended location from the maps that are used to compute the saliency map. A PO region is obtained by extracting an image region around the focus of attention that corresponds to the approximate extent of a PO at that location. After the first PO region is determined, we obtain an updated saliency map by setting the saliency values of the locations belonging to that PO region to zero. A second PO region is obtained by performing the same PO-region-finding procedure on the updated saliency map. This procedure is repeated until the maximum saliency value of the updated saliency map falls below a predefined threshold. Finally, we treat pixels unassigned to any PO region in the saliency map as belonging to BG regions. Each color component of a segmented region (including both PO regions and BG regions), i.e., a region-component, is then decomposed into high-frequency and low-frequency subbands using a shape-adaptive wavelet transformation [23]. After the wavelet decomposition, each subband of a region-component is approximately divided into rectangular blocks called precincts [18]. This differs from JPEG2000 in that we insert the precinct partition before the code-block partition.
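The iterative extraction loop described above can be sketched as follows. This is a minimal illustration rather than the actual Walther–Itti implementation: the function name, the normalization, and the fixed rectangular window standing in for the inferred PO extent are all assumptions.

```python
import numpy as np

def extract_proto_objects(saliency, stop_thresh=0.05, win=32):
    """Iteratively extract proto-object (PO) regions from a saliency map.

    Minimal sketch: the actual system infers the PO extent from the
    attention model's feature maps; here a fixed window around each
    attended peak stands in for that step.
    """
    sal = saliency.astype(float).copy()
    sal /= sal.max() + 1e-12                 # normalize to [0, 1]
    labels = np.zeros(sal.shape, dtype=int)  # 0 marks BG pixels
    region = 0
    while sal.max() >= stop_thresh:          # stop below the threshold
        region += 1
        y, x = np.unravel_index(np.argmax(sal), sal.shape)  # focus of attention
        y0, y1 = max(0, y - win), min(sal.shape[0], y + win)
        x0, x1 = max(0, x - win), min(sal.shape[1], x + win)
        labels[y0:y1, x0:x1] = region        # mark the PO region
        sal[y0:y1, x0:x1] = 0.0              # inhibition of return
    return labels                            # 0 = BG, 1..N = PO regions
```

Pixels never claimed by a window remain labeled 0, i.e., they form the BG regions, mirroring the procedure in the text.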
Each precinct is then divided into smaller rectangular blocks called code-blocks, which define the smallest accessible spatial patch of the image in the wavelet domain, because their data is enclosed in the ultimate container of the code-stream.

Theoretically, the code-block provides a source of spatial accessibility in JPEG2000. However, direct access to code-blocks is hampered by the fact that each code-block's code-stream may be distributed across many quality layers, and the information describing their contributions is itself coded to exploit redundancy between neighboring code-blocks within the same subband. We use precincts to overcome these obstacles, since a precinct provides a spatial structuring element: each image resolution LLd (see Fig. 2) of each region-component has its own precinct partition and can collect code-blocks into spatial and resolution groupings. It follows that each segmented region or region-component has its own DWTs, its own set of code-block code-streams, and its own quality layers. Parameters controlling the number of DWT levels, quantization step sizes, the DWT kernels, and reversibility may all be adjusted on a region-component basis by including appropriate headers in the code-stream. Thus, individual regions or region-components may be readily extracted and rewritten as valid JPEG2000 code-streams in their own right.

B. Redefinition of Quality Layer

We redefine quality layers for a PO region based on two observations. First, the practical use of quality layers as defined in JPEG2000 may need to address two points: 1) the lack of quality scalability of code-streams containing a single or few quality layers, and 2) the rate-distortion optimality of region-of-interest transmission. Second, we need to organize an embedded code-stream to enable scalability of quality, resolution, and color component with respect to PO regions. The formulation of our quality layer involves modifications of both Tier-1 and Tier-2. In Tier-1, a region, after being extracted from an image, is treated as an image tile. Each region is processed independently by the DWT and EBCOT.
We adopt a slightly modified Maxshift method [1] to identify these segmented regions, which does not require any shape coding or any shape information to be explicitly transmitted to the decoder. In the traditional Maxshift of JPEG2000, all regions of interest (ROI) use a single scaling value, which lacks the ability to identify each individual region. To overcome this limitation,



Fig. 3. Organization of code-stream in Tier-2 of the content scalable JPEG2000.

the modified Maxshift proportionally assigns different scaling values to different regions according to their saliency. Only these scaling values need to be signaled to the decoder. We also exploit the restart mechanism of Tier-1. The modified Tier-1 produces an embedded code-stream with a large collection of potential truncation points (one at the end of each coding pass) that can be used by the RD optimization techniques for each code-block. Thus, the code-stream of a code-block can be truncated by the rate control algorithm to meet the target bit rate. In Tier-2, we organize quality layers in terms of precincts. In the progression order shown in Fig. 3, the quality layer of a PO region is formed by quality layers of precincts belonging to the PO region. Codewords of code-blocks in a precinct are enclosed in a single packet. This organization conceptually defines a packet as a quality increment of one spatial location at one resolution level, and allows quality layers within the code-stream to be represented as collections of packets from different precincts with an equal number of quality layers. The collection of quality layers comprises the final code-stream.

III. THREE-STAGE RATE CONTROL

In this section, we present a three-stage rate control approach to implement the PO-based bit allocation mechanism. Within the proposed coding system, rate control reduces to choosing coding parameters so as to meet a target bit rate.

A. Problem Formulation

In JPEG2000, scalability by quality is provided by the optimized truncation of the code-streams produced for each code-block, and by the definition of quality layers. Quality layers are formed by collections of code-stream segments optimally selected using RD optimization techniques. Usually, a post-compression RD (PCRD) optimization is applied after all the quantized wavelet coefficients have been entropy encoded.
Specifically, let $n_j$ denote the $j$th potential truncation point of the code-stream produced for code-block $B_i$, with $0 \le j < T_i$, $T_i$ denoting the number of coding passes of $B_i$, and let $R_i^{n_j}$ and $D_i^{n_j}$ denote, respectively, the bit rate and distortion of $n_j$, with $R_i^{n_{j-1}} < R_i^{n_j}$ and $D_i^{n_j} < D_i^{n_{j-1}}$. PCRD first computes the RD slope $S_i^{n_j} = \Delta D_i^{n_j} / \Delta R_i^{n_j}$, with $\Delta D_i^{n_j} = D_i^{n_{j-1}} - D_i^{n_j}$ and $\Delta R_i^{n_j} = R_i^{n_j} - R_i^{n_{j-1}}$, to identify those truncation points with strictly decreasing RD slope, i.e., those truncation points lying on the convex hull. Then, by computing the total distortion of the image and the total bit rate of the code-stream as $D = \sum_i D_i^{n_i}$ and $R = \sum_i R_i^{n_i}$, respectively, and

considering only the truncation points lying on the convex hull, PCRD approaches the RD optimization problem as follows:

$$D(\lambda) + \lambda R(\lambda) = \sum_i \left( D_i^{n_i(\lambda)} + \lambda R_i^{n_i(\lambda)} \right)$$
where $\{n_j\}$ stands for the set of truncation points, and the value of $\lambda$ that minimizes this expression, yielding $R(\lambda)$, represents the optimal solution. By utilizing the actual RD functions of all the compressed data, the PCRD techniques attain the minimum image distortion for a given bit rate. However, since PCRD requires encoding all of the data and storing all code-stream segments even though a large portion of the data need not be output, most of the computation and memory usage in this process can be considered redundant. More specifically, the rate control scheme searches for the optimum truncation points for every quality layer based on RD slopes using the same scheme as is used for one layer, which requires a great amount of computation. To address these problems, we present in the following subsections a three-stage rate control scheme that reduces computational complexity and memory usage while maintaining high image quality and a bit rate precision comparable to PCRD.

B. Pre-Coding Stage: Rate Allocation for PO Regions

Let the target bit rate for the input image be $R$, and the $i$th PO region's saliency be $w_i$, where $i = 0, \ldots, N-1$ and $N$ is the number of regions after PO region segmentation. The saliency of a region is defined as the maximum saliency of the region in the saliency map. Then the bit rate assigned to the $i$th region can be precalculated as

$$R_i = R \times \frac{w_i}{\sum_{i=0}^{N-1} w_i}$$
before we start the Tier-1 coding.

C. Coding Stage: Constructing the Operational RD Curve

The operational RD curve is constructed in two steps: 1) Tier-1 outputs code-stream segments with a set of truncation points for coding passes; the code-stream segment is the smallest unit for constructing the operational RD curve. 2) Quality layers of PO regions are developed in Tier-2, and these form the final operational RD curve for the further purpose of rate control.

1) Step 1: In its original formulation, PCRD forces a full encoding of the image even when few coding passes are included in the final code-stream. When encoding an image at low bit rates, this causes



Tier-1 to consume more computational resources than is strictly necessary. To reduce the computational burden of Tier-1 when applying PCRD, we build RD curves in terms of coding passes. This allows further truncation of the code-stream segments within packets and, therefore, the optimal decoding of PO regions or of the complete image area. However, to know the RD of coding passes in JPEG2000, we need to identify coding passes within the code-stream, and this further requires the decoding of the image, because a packet contains one or more coding passes without any guarantee that the length of each pass is kept. To solve this problem, we make use of the restart variation mechanism in Tier-1. The restart variation mechanism is designed to restart the MQ coder at the beginning of each coding pass to facilitate the parallel implementation of coding passes. This mechanism enables the MQ coder to produce one codeword for each coding pass, and forces an explicit encoding of the lengths of coding passes in the headers of each packet. This allows the recovery of coding pass lengths through a simple decoding of packet headers, an operation with low computational complexity. Furthermore, it scarcely penalizes the bit rate of the final code-stream. The completion of each coding pass (SPP, MRP, or CP) results in a code-stream segment and a potential truncation point associated with the corresponding bit rate and distortion. This forms the basis of the RD optimization for rate control schemes. Thus, for each code-block, Tier-1 outputs a code-stream that consists of several code-stream segments connected by a set of truncation points. This approach implies a modification slightly beyond the scope of the standard, since the encoder must store the lengths of coding passes in an external file independently of the code-stream of code-blocks.
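The pre-coding allocation of Section III-B and the per-code-block optimization over these truncation points can be sketched together as follows. This is a simplified illustration with hypothetical function names; in practice, the Lagrange multiplier λ is adjusted (e.g., by bisection) until the summed rate of all code-blocks meets the region's allocated rate.

```python
def allocate_region_rates(target_rate, saliencies):
    """Pre-coding stage: split the target bit rate R among regions in
    proportion to their saliencies w_i (PO and BG regions alike)."""
    total = sum(saliencies)
    return [target_rate * w / total for w in saliencies]

def feasible_truncation_points(rd, d0):
    """Keep the truncation points with strictly decreasing RD slope,
    i.e., those lying on the lower convex hull of the RD curve.

    rd: (rate, distortion) pairs at the end of successive coding passes
        of one code-block (rates increasing, distortions decreasing).
    d0: distortion of the empty code-stream (no pass included).
    """
    hull = [(0.0, d0)]                      # virtual starting point
    for r, d in rd:
        while len(hull) > 1:
            r1, d1 = hull[-1]
            r2, d2 = hull[-2]
            s_prev = (d2 - d1) / (r1 - r2)  # slope into the last kept point
            s_new = (d1 - d) / (r - r1)     # slope from it to the new point
            if s_new >= s_prev:             # slope did not strictly decrease:
                hull.pop()                  # drop the last kept point
            else:
                break
        hull.append((r, d))
    return hull[1:]

def pick_pass(hull, d0, lam):
    """For a multiplier lam, pick the truncation point minimizing
    D + lam * R; (0, d0) models discarding the code-block entirely."""
    return min([(0.0, d0)] + hull, key=lambda p: p[1] + lam * p[0])
```

Because each code-block picks its own minimizer independently for a shared λ, the global optimum at the matching total rate decomposes into these per-block choices, which is what makes the PCRD formulation tractable.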
However, note that the approach constructs fully compliant JPEG2000 code-streams and fits perfectly into the framework of interactive image transmission. It should also be noted that the creation of the external file of lengths, or the introduction of the restart coding variation to already encoded code-streams, is needed only once, without requiring the full decoding of the image.

2) Step 2: Once the final code-stream is constructed in the JPEG2000 coding system, the number and RD of quality layers become fixed and cannot be modified. The identification of quality layers within the code-stream enables the definition of progression orders primarily with respect to quality, which provides optimal RD representations of the image only when the code-stream is decoded at quality layer boundaries. For example, let a code-stream that is progressive primarily with respect to quality contain quality layers allocated at bit rates $R^0, R^1, \ldots, R^{N-1}$, with $R^l < R^{l+1}$. When the code-stream needs to be truncated at a bit rate $\hat{R}$ with $R^l < \hat{R} < R^{l+1}$, the decoded image may have non-optimal, or even poor, coding performance. This fact suggests two problems with the RD optimality of JPEG2000 code-streams: 1) the lack of quality scalability of code-streams containing a single or few quality layers, and 2) the lack of transmission of ROIs with RD optimality. The first problem can be avoided using an adequate strategy for the allocation of quality layers [24] in code-streams containing few or a single quality layer: code-streams constructed with inadequate allocation strategies may degrade the quality of the decoded image by more than 10 dB. The second problem concerns the RD optimality achieved when decoding ROIs from a code-stream. When the encoder selects and truncates the code-stream segments to form quality layers, the complete spatial area of the image is considered, and therefore so is the overall bit rate of the quality layer.
This decreases the precision of the rate control scheme. To overcome these limitations of quality layers in JPEG2000, we define quality layers for PO regions by using the quality layers of precincts. Each PO region has several precincts, and the quality layers of a PO region

are formed using quality layers of precincts belonging to the PO. More specifically, before logically partitioning a region into code-blocks, we introduce the concept of precincts. Although precincts are further partitioned into code-blocks, they define the smallest accessible spatial locations of the image because their data is enclosed in the ultimate container of the code-stream, the packet. A packet encapsulates some code-stream segments of code-blocks belonging to one precinct, can be decoded independently, and is the smallest access unit in a code-stream. This is important in supporting the quality progression of a region. For example, to randomly access a specific region, we need only find the packets of the quality layers of precincts belonging to the region, and then decode these packets. As shown in Fig. 3, quality layers of a PO region provide a basis for more accurate rate estimation than the quality layers of JPEG2000. Each precinct has several packets, referred to as $P_p^l$, $l = 0, \ldots, N-1$, with $N$ denoting the number of quality layers. A quality layer of a PO region is formed using packets in the same quality layer but belonging to different precincts. Let a code-stream for a PO region that is progressive primarily with respect to quality contain quality layers allocated at bit rates $R_p^0, R_p^1, \ldots, R_p^{N-1}$, with $R_p^l < R_p^{l+1}$. Let $\Delta R_p^l$ denote the bit rate increment of the PO's code-stream segments in layer $l$. Then $\Delta R_p^l \in [0, \Delta R^l]$, where $\Delta R^l$ denotes the bit rate increment of the original quality layer $l$. This means that rate control using the quality layers of a PO region provides higher precision than rate control using the quality layers of JPEG2000.
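Random access to a region, as described above, reduces to collecting the packets of the region's precincts layer by layer. A toy sketch follows; the packet identifiers and the layer-major ordering are illustrative assumptions, not the code-stream syntax itself.

```python
def packets_for_region(region_precincts, num_layers):
    """Collect the packets needed to decode one PO region up to a given
    number of quality layers. A packet is modeled as a (precinct, layer)
    pair: each pair addresses the smallest independently decodable unit
    of the code-stream. Layer-major order mimics quality progression."""
    return [(p, l) for l in range(num_layers) for p in region_precincts]
```

Decoding only these packets reconstructs the region progressively in quality while ignoring the rest of the image, which is the content-based random access the redefined quality layers provide.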
D. Post-Coding Stage: Fast RD Slope Estimation for Optimum Truncation

By utilizing the actual RD functions of all the compressed data, the optimal truncation techniques attain the minimum image distortion for a given bit rate. Our rate control scheme is based on the estimation of the RD slopes of the coding passes. Using these estimations, the selection of coding passes to yield a target bit rate can be performed without information related to the encoding process, or distortion measures based on the original image. To speed up the RD slope estimation of the coding passes, we adopt the concept of coding level from [25]. A coding level is defined as the coding passes of all code-blocks of the image at the same height, given by $c = m \cdot 3 + cp$, where $m$ stands for the bit plane and $cp$ stands for the coding pass type, with $cp \in \{2 \text{ for SPP}, 1 \text{ for MRP}, 0 \text{ for CP}\}$. Coding passes are scanned from the highest coding level of the image to the lowest level until the target bit rate is achieved. In each coding level, coding passes are selected from the lowest resolution level to the highest one, and in each resolution level, subbands are scanned in the order [HL, LH, HH]. The hypothesis behind the coding level is that, within a subband, code-blocks with different numbers of magnitude bit planes will have different estimations, whereas code-blocks with the same number of magnitude bit planes will have the same estimation. By identifying the number of magnitude bit planes in code-blocks, obtainable through the decoding of packet headers, the RD slopes of coding passes can be estimated fairly closely. This assumption has been verified in [26]. The RD slope of coding passes is then computed as





$$S_{m,cp} = m \cdot 3 + cp + E_{cp}, \qquad E_{cp} = \begin{cases} E_{SPP} & \text{for SPP coding passes} \\ E_{MRP} & \text{for MRP coding passes} \\ E_{CP} & \text{for CP coding passes} \end{cases}$$

where the coding level $c = m \cdot 3 + cp$ compels the selection of coding passes from the highest to the lowest coding level of the image. Within each coding level, coding passes are selected using the value of $E_{cp}$. In order to ensure that coding passes of MRP are always concatenated with the consecutive coding pass of type CP, we set $E_{MRP} = 0$, except for the MRP of the second
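A sketch of this estimation-driven scan order follows. The coding-level formula matches the definition above; the numeric offsets for SPP and CP passes are illustrative placeholders, not the values reported in [26].

```python
# Hypothetical per-pass-type offsets; E_MRP = 0 as in the text, the other
# two values are illustrative placeholders rather than the ones in [26].
E = {"SPP": 0.6, "MRP": 0.0, "CP": 0.3}
CP_TYPE = {"SPP": 2, "MRP": 1, "CP": 0}

def coding_level(m, cp):
    """Coding level c = m * 3 + cp for bit plane m and pass type cp."""
    return m * 3 + CP_TYPE[cp]

def scan_order(passes):
    """Sort (bit-plane, pass-type) pairs by the estimated RD slope
    S = c + E_cp, from the highest slope to the lowest. Since every
    offset is < 1, E_cp never reorders passes across coding levels."""
    return sorted(passes, key=lambda p: coding_level(p[0], p[1]) + E[p[1]],
                  reverse=True)
```

Because the coding levels are integers and the offsets stay below 1, the scan always proceeds level by level, with the offsets only breaking ties inside a level, as the text requires.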





highest bit plane, where $E_{MRP} = 0.99$. For coding passes of type SPP and CP, $E_{SPP}$ and $E_{CP}$ attempt to approximate the balloon effect [26] within each subband. We adopt the values of these three parameters from the experimental results reported in [26]. The RD slopes of coding passes in a code-block decrease monotonically with the bit rate, and different values of the RD slope denote different image quality. After all the code-blocks are encoded, the quality layers for PO regions are divided simultaneously, and the optimal truncation points are searched efficiently according to a given target bit rate.

E. Computational Complexity Analysis

PCRD for standard JPEG2000 requires Tier-1 encoding of all of the quantized coefficients and storage of the whole encoded bit-stream in the memory buffer, even though a large portion of it will not be included in the code-stream after the optimal truncation. Its computational complexity can be up to 60% of the total CPU execution time [27]. Therefore, a significant portion of computational power and working memory is wasted on computing and storing unused data. Also, PCRD is a noncausal, or offline, process because the entire image/tile needs to be completely encoded before the code-stream can be determined and output. Hence, a long transmission delay is possible. Although PCRD can obtain the optimum truncation points for every code-block at a given bit rate based on the RD slopes of coding passes, it costs a great amount of computation to search for the optimal truncation points among the coding passes in the whole image tile for each layer. Our three-stage rate control approach for the content-scalable JPEG2000 not only ensures image quality, but also uses less working memory and reduces computational complexity. First, because of the saliency-based PO region bit allocation, the target number of bytes for every image tile can be acquired as soon as the original image is input.
The proposed method therefore uses few resources to improve the compression performance. Second, with a given target bit rate, our approach can truncate the code-stream in parallel with the code-block coding. Most needless data are truncated before being encoded, which speeds up Tier-1 coding and reduces working memory. During encoding, the RD slope of the current coding pass is calculated by means of fast estimation techniques to decide whether the code-block should be truncated at the current coding pass, according to its RD slope and the remaining RD information. If the current coding pass can be truncated and its RD slope is less than or equal to the remaining rate-distortion information, the current code-block is truncated at the current coding pass; Tier-1 coding is then reset to encode the next code-block or to finish code-block coding. At the same time, the quality layers are divided according to the RD slopes. Finally, the post-coding stage calculates the RD slopes by means of estimation in order to save the memory otherwise spent on storing RD slopes. Moreover, this estimation avoids searching for the optimum truncation points based on RD slopes by incorporating several quality layers into one layer. This not only attains approximately optimal quality of the reconstructed images, but also saves a large amount of memory for RD slopes. Fig. 4 presents the time consumption of the proposed system compared with Kakadu in coding the test images. Empirical results show that the modifications to the Kakadu software package do not cause an increase in computational complexity.

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS

To demonstrate the effectiveness of the proposed system, we implemented it by modifying the Kakadu software package. The Kakadu software package is a complete implementation of the JPEG2000 standard, and is used as a benchmark to evaluate the performance of the

Fig. 4. Time consumption of the proposed system and the Kakadu software in encoding images.

Fig. 5. Evaluation of the coding performance achieved with the proposed system compared to the JPEG2000 system. Average results for the images of both the JPEG2000 test corpus and the Kodak test corpus (grayscale images, 512 × 512).


proposed system. In the rest of this section, the proposed system is referred to as Ours, and the original Kakadu software is referred to as JPEG2000. In the following experiments, we compare our approach with the Kakadu software along three dimensions: 1) performance as an image codec; 2) performance of the proposed rate control scheme; and 3) content-based scalability. Experiments have been carried out using an image corpus belonging to the JPEG2000 test images, which contains 10 pictures, and an image corpus belonging to the Kodak Lossless True Color Image Suite, which contains 24 pictures.

A. Performance as an Image Codec

We assess the coding performance of the proposed system compared with JPEG2000. We set the number of PO regions to 2 or 3 via a predefined threshold of the PO segmentation procedure. For the purpose of comparison, we defined these PO regions as ROIs for the JPEG2000 codec. However, the JPEG2000 codec does not support multiple ROIs, and can only code them as a single ROI consisting of several subregions. These ROIs cannot be decoded individually, whereas the proposed system can selectively code, decode, and manipulate each individual PO region. The compatibility of the proposed system with the JPEG2000 standard has been verified by using a standard JPEG2000 decoder to decode the code-streams generated by the proposed system. Results are given as the PSNR obtained with our system and the PSNR obtained with JPEG2000 when encoding at the same bit rate. The PSNR is calculated in two ways: 1) using the average PSNR of the R, G, and B components of the images, or 2) using the PSNR of the grayscale images. Similar results are obtained. Fig. 5 reports the PSNR comparison averaged over all test images, as well as the standard deviation. It shows that, on average over all bit rates and all images, our system is better than JPEG2000.
We found that: 1) at a given bit rate, the proposed system generates less distortion than does Kakadu, with a PSNR at least about 1 dB higher, and 2) at a given distortion, the proposed system generates a smaller



Fig. 6. Three reconstructed images of the test image Dog at bit rate = 0.24 bpp. The labeled regions are PO regions for our system, and the ROI for JPEG2000. Left) the proposed system with PSNR = 23.7 dB; Middle) the JPEG2000 with PSNR = 22.25 dB; and Right) the JPEG2000 with PSNR = 22 dB.
bit rate than does Kakadu, by at least 0.2 bpp or so. These results also hold for other coding parameters. Regarding the visual comparison, Fig. 6 shows three reconstructed Dog images encoded at 0.24 bpp, one by the proposed system, and two by JPEG2000 with different ROIs. The test image Dog is segmented into two PO regions. For the two reconstructed images of JPEG2000, one is obtained by setting the two PO regions as a whole ROI, and the other is obtained by choosing the most salient PO as the ROI. Obviously, the quality of the reconstructed image of the proposed system exceeds that of JPEG2000, especially in areas that do not belong to the ROI. This is attributed to the saliency-based bit allocation mechanism, which is able to distribute bit rate among PO regions and BG regions more reasonably than JPEG2000 does. The noticeable artifacts appearing in the reconstructed image from the JPEG2000 codec are due to the sharp change in the amount of bit rate allocated between the ROI and BG regions. The experimental results in Fig. 6 show that the proposed system can alleviate the artifacts caused by this change in allocation between neighboring regions.
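The idea of saliency-weighted bit allocation can be sketched as follows. This is an illustrative simplification, not the paper's actual mechanism: `allocate_bits`, `bg_floor`, and the proportional rule are hypothetical choices of ours, meant only to show how a graded, saliency-weighted split avoids a hard ROI/background cutoff:

```python
# Hypothetical sketch of saliency-weighted bit allocation; not the
# paper's algorithm. Each region is a dict with a nonnegative 'saliency'
# and a positive 'pixels' count.

def allocate_bits(regions, total_bits, bg_floor=0.05):
    """Split total_bits among regions. Every region, including the
    background, keeps a small area-proportional floor; the remainder
    is distributed in proportion to saliency mass (saliency x area),
    which softens the sharp ROI/background quality jump."""
    area = sum(r["pixels"] for r in regions)
    # Floor budget: a small, area-proportional share for every region.
    floors = [bg_floor * total_bits * r["pixels"] / area for r in regions]
    remaining = total_bits - sum(floors)
    # Weight the rest by saliency mass.
    mass = [r["saliency"] * r["pixels"] for r in regions]
    total_mass = sum(mass) or 1.0  # guard against all-zero saliency
    return [f + remaining * m / total_mass for f, m in zip(floors, mass)]
```

Because no region's budget ever drops to zero, the allocated rate changes gradually across region boundaries instead of falling off a cliff outside the ROI, which is the behavior the paragraph above attributes to the proposed mechanism.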

B. Rate Control Performance Comparison

We also compare the performance of the proposed rate control scheme with that of the PCRD algorithm in the Kakadu software. One of the PO regions is chosen as the ROI for the Kakadu software. Some experimental results are presented in Table I, which shows that the proposed rate control scheme provides more precise truncations than does PCRD. In the experiments, we also found that the precision of the truncations is slightly affected by the content type of the test image. For example, the truncation points in the Balloon image are closer to the ideal points than those in the Airplane image. This may be explained by the computational attention model, since the saliency of a PO depends on its contrast compared to


the remainder of the image. This limitation could be mitigated by selecting a more suitable proto-object segmentation algorithm.

C. Content-Based Scalability

The advantages of the proposed system are twofold. On the one hand, it provides better characteristics in terms of operational RD curves, and a better rate control scheme, than JPEG2000, without causing an increase in computational complexity. On the other hand, it also defines a framework for adaptive image delivery in terms of PO regions, and augments the content-based scalability of the JPEG2000 standard, as shown in Fig. 7. In particular, the content-based scalability not only enables content-based search of images, but also improves visualization and interactivity in image delivery. The proposed system allows the use of multiple arbitrarily shaped regions within an image, with weights determined by the saliency that describes the degree of importance of each region, including the background, so that these PO regions may be represented by different quality layers. We also compare the PO regions' PSNR of the proposed system with that of JPEG2000. A region's PSNR is calculated by taking only the pixels within the region into consideration. The experiment is done in two steps. First, we choose the most salient PO region as the ROI for the JPEG2000 system, and then calculate its PSNR for all bit rates and all images. The average result is presented in Fig. 8, where the results of ours and JPEG2000 are referred to as Ours-region and JPEG2000-region, respectively. It shows that the proposed system cannot improve the PSNR of a single specified salient object compared with JPEG2000. This is because the best PSNR for one region can only be obtained by global optimization across the whole image. However, when the region's PSNR is averaged over all PO regions, and compared with the PSNR averaged over all corresponding regions in the reconstructed image of JPEG2000, the empirical results presented in Fig.
8 show that the proposed system can indeed increase the PSNR of salient objects, where the results of ours and JPEG2000 are referred to as Ours-Region average and JPEG2000-region average.

Fig. 7. Content-based scalability. From the top row to the bottom row, it presents the two original images Balloons and face with the PO regions' boundaries labeled, followed by the reconstructed images of the first PO region, the second PO region, and the third PO region. Specifically, the face image at the bottom is obtained when the two PO regions are treated as a single PO region. All these reconstructed images are coded at 0.2 bpp by the proposed system.

Fig. 8. Comparison of the regions' PSNR between the proposed system and JPEG2000. Each curve presents the average result over all images of both the JPEG2000 test corpus and the Kodak test corpus (gray-scaled images).

V. CONCLUSION

We presented an approach to content-based scalability within the JPEG2000 standard, based upon PO region segmentation. PO segmentation enables the decomposition of an image into meaningful regions. The proposed rate control scheme adds scalability with respect to content, as well as the additional functionality of selectively encoding, decoding, and manipulating each individual PO region in the image, with only slight modifications to the JPEG2000 standard. Empirical results show that the rate control method is able to efficiently reduce the computational complexity and memory usage, as well as maintain an image quality comparable to the conventional post-compression rate distortion (PCRD) optimum truncation algorithm of JPEG2000. We believe that the suggested approach bridges several concepts in content-based image coding, such as region-based coding and object-based coding. However, there is much room for further improvement of the proposed approach. First, the PO region segmentation approach we adopted is purely bottom-up and stimulus driven, and has no prior notion of what constitutes an object. There is no guarantee that the segmented PO regions are really objects. More accurate boundaries for these PO regions would enable better content-based scalability as well as better rate control performance. Second, although our saliency-based rate allocation mechanism greatly reduces the artifacts caused by the sharp change in the allocated amount between neighboring regions, this does not mean that the problem is solved completely. The great challenge to be addressed in the future is how to find a useful quantitative metric relating image quality to rate allocation.

REFERENCES

[1] D. Taubman, M. Marcellin, and M. Rabbani, "JPEG2000: Image compression fundamentals, standards and practice," J. Electron. Imag., vol. 11, pp. 286–287, 2002.
[2] S. Wenger, "H.264/AVC over IP," IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 7, pp. 645–656, Jul. 2003.
[3] H. Schwarz, D. Marpe, and T. Wiegand, "Overview of the scalable video coding extension of the H.264/AVC standard," IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, pp. 1103–1120, Sep. 2007.
[4] P. Gerken, "Object-based analysis-synthesis coding of image sequences at very low bit rates," IEEE Trans. Circuits Syst. Video Technol., vol. 4, no. 3, pp. 228–235, Mar. 1994.
[5] T. Meier and K. Ngan, "Video segmentation for content-based coding," IEEE Trans. Circuits Syst. Video Technol., vol. 9, no. 8, pp. 1190–1203, Aug. 1999.
[6] T. Chen, C. Swain, and B. Haskell, "Coding of subregions for content-based scalable video," IEEE Trans. Circuits Syst. Video Technol., vol. 7, no. 1, pp. 256–260, 1997.
[7] W. Li, "Overview of fine granularity scalability in MPEG-4 video standard," IEEE Trans. Circuits Syst. Video Technol., vol. 11, no. 3, pp. 301–317, Mar. 2001.
[8] A. Shamim and J. Robinson, "Object-based video coding by global-to-local motion segmentation," IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 12, pp. 1106–1116, Dec. 2002.
[9] Y. Deng and B. Manjunath, "Unsupervised segmentation of color-texture regions in images and video," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 8, pp. 800–810, Aug. 2001.
[10] E. Borenstein, E. Sharon, and S. Ullman, "Combining top-down and bottom-up segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2004, p. 46.
[11] A. Levin and Y. Weiss, "Learning to combine bottom-up and top-down segmentation," in Proc. Eur. Conf. Comput. Vis., 2006, vol. 3954, pp. 581–594.
[12] T. Athanasiadis, V. Tzouvaras, K. Petridis, F. Precioso, Y. Avrithis, and Y. Kompatsiaris, "Using a multimedia ontology infrastructure for semantic annotation of multimedia content," in Proc. 5th Int. Workshop Knowledge Markup Semantic Annotation (SemAnnot'05), Galway, Ireland, Nov. 2005, pp. 59–68.
[13] J. Ryu, Y. Sohn, and M. Kim, "MPEG-7 metadata authoring tool," in Proc. 10th ACM Int. Conf. Multimedia, 2002, pp. 267–270.
[14] H. Musmann, M. Hotter, and J. Ostermann, "Object-oriented analysis-synthesis coding of moving images," Digital Image Process., vol. 1, pp. 117–138, 1992.
[15] T. Sikora, "Trends and perspectives in image and video coding," Proc. IEEE, vol. 93, no. 1, pp. 6–17, Jan. 2005.
[16] P. Salembier and F. Marques, "Region-based representations of image and video: Segmentation tools for multimedia services," IEEE Trans. Circuits Syst. Video Technol., vol. 9, no. 8, pp. 1147–1169, Aug. 1999.
[17] M. Penedo, W. Pearlman, P. Tahoces, M. Souto, and J. Vidal, "Region-based wavelet coding methods for digital mammography," IEEE Trans. Med. Imag., vol. 22, no. 10, pp. 1288–1296, Oct. 2003.
[18] C. Christopoulos, J. Askelof, and M. Larsson, "Efficient methods for encoding regions of interest in the upcoming JPEG2000 still image coding standard," IEEE Signal Process. Lett., vol. 7, no. 9, pp. 247–249, Sep. 2000.
[19] N. Grammalidis, D. Beletsiotis, and M. Strintzis, "Sprite generation and coding in multiview image sequences," IEEE Trans. Circuits Syst. Video Technol., vol. 10, no. 2, pp. 302–311, Feb. 2000.
[20] J. Serences and S. Yantis, "Selective visual attention and perceptual coherence," Trends in Cognitive Sciences, vol. 10, no. 1, pp. 38–45, 2006.
[21] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1254–1259, Nov. 1998.
[22] D. Walther and C. Koch, "Modeling attention to salient proto-objects," Neural Netw., vol. 19, no. 9, pp. 1395–1407, 2006.
[23] S. Li and W. Li, "Shape-adaptive discrete wavelet transforms for arbitrarily shaped visual object coding," IEEE Trans. Circuits Syst. Video Technol., vol. 10, no. 5, pp. 725–743, May 2000.
[24] X. Wu, S. Dumitrescu, and N. Zhang, "On multirate optimality of JPEG2000 code stream," IEEE Trans. Image Process., vol. 14, no. 12, pp. 2012–2023, Dec. 2005.
[25] F. Auli-Llinas, J. Serra-Sagrista, J. Monteagudo-Pereira, and J. Bartrina-R, "Efficient rate control for JPEG2000 coder and decoder," in Proc. DCC, 2006, pp. 282–291.
[26] F. Auli-Llinas and J. Serra-Sagrista, "JPEG2000 quality scalability without quality layers," IEEE Trans. Circuits Syst. Video Technol., vol. 18, no. 7, pp. 923–936, Jul. 2008.
[27] T. Chang, L. Chen, C. Lian, H. Chen, and L. Chen, "Computation reduction technique for lossy JPEG2000 encoding through EBCOT tier-2 feedback processing," in Proc. IEEE Int. Conf. Image Process., 2002, vol. 3, pp. 85–88.
