Adaptive Artistic Stylization of Images

Ameya Deshpande
IIT Gandhinagar
[email protected]

Shanmuganathan Raman
IIT Gandhinagar
[email protected]

ABSTRACT In this work, we present a novel non-photorealistic rendering method which produces good quality stylization results for color images. The procedure is driven by a saliency measure in the foreground and the background regions. We start by generating a saliency map and applying simple thresholding based segmentation to get a rough estimate of the foreground-background mask. We improve this mask by using a scribble-based method where the scribbles for the foreground and background regions are automatically generated from the previous rough estimate. Following the mask generation, we proceed with an iterative abstraction process which involves edge-preserving blurring and edge detection. The number of iterations of the abstraction process to be performed in the foreground and background regions is decided by tracking the changes in the saliency measure in the two regions. Performing an unequal number of iterations helps to increase the average saliency measure in the more salient region (foreground) while decreasing it in the non-salient region (background). Implementation results of our method show the merits of this approach over other competing methods.

CCS Concepts •Computing methodologies → Non-photorealistic rendering; Image-based rendering; Image processing;

Keywords Non-photorealistic rendering; Saliency; Image Abstraction; Guided Filter

1. INTRODUCTION

The human visual system (HVS) is selectively sensitive to certain features of the incident light. Expressive rendering deals with emphasizing the features of light to which the HVS is sensitive. The most important aspect is abstraction, which is the process of removing less important details while retaining the important ones. Edges, which include silhouettes and contours, play an important role in the abstraction process [16]. The HVS draws most of its information from the more sensitive regions of an image, called salient regions. Hence, while performing abstraction on images, it is essential to retain more information in these salient regions by performing less abstraction, and to retain less information in other regions by performing more abstraction. Our image abstraction framework is based on this idea.

Consider a scene with distinct foreground (FG) and background (BG) regions. The first step in our approach is finding the saliency in different regions, followed by simple thresholding based segmentation to get a rough estimate of the Foreground-Background (FG-BG) mask. Among the many saliency methods proposed in the literature, the method proposed in [20] provides a good distinction between FG and BG saliency values and is thus helpful in thresholding based segmentation. The poor segmentation achieved by simple thresholding on the saliency map is improved by using the one-cut algorithm [29]. This algorithm is based on the grabcut algorithm, which requires scribbles in the FG-BG regions as constraints [2]. To avoid user interaction for providing scribbles, we use simple morphological operations to generate FG-BG scribbles automatically from the previous rough estimate of the FG-BG mask. These steps provide an improved FG-BG mask, which is used to perform non-uniform abstraction over the image and retain more details (by performing less abstraction) in the FG region (which is more salient in most cases) than in the BG region. Like many other abstraction approaches, our image abstraction method is iterative in nature, and iterations are performed using the guided filter, which performs edge-preserving blurring [14].

The number of iterations to be performed is decided automatically by how the average saliency values change in the FG-BG regions, rather than based on subjective quality alone. While performing iterations, we keep track of the changes in the saliency measures in both the FG and BG regions. Finally, to stylize the abstraction result, we add the edges back to the abstracted image using the contour detection algorithm proposed in [8]. We have tested our method on MSRA10K benchmark dataset images, and for most of the images good stylization results are obtained within a few iterations [5]. The proposed algorithm is well suited for stylizing images containing distinct FG-BG regions or salient object(s). Our key contributions are:

ICVGIP, December 18-22, 2016, Guwahati, India
© 2016 ACM. ISBN 978-1-4503-4753-2/16/12...$15.00

DOI: http://dx.doi.org/10.1145/3009977.3009985

1. Automatic generation of the FG-BG scribbles from the saliency map, which are required for segmentation.

2. Non-uniform abstraction over the FG-BG regions using the guided filter as an edge-preserving blurring filter.

3. Saliency based stopping criteria for the iterative abstraction process.

Figure 1: Flow diagram of the proposed method: Original Image (RGB) → [Thresholding on saliency map] → Rough FG-BG Mask → [One-cut algorithm + Morphology] → Improved FG-BG Mask → [Guided Filter + Contour Detection] → Abstract Image.

The organization of the paper is as follows. In the related work section, we provide a survey of relevant prior work. In the proposed framework section, we present our algorithm for automatic segmentation, abstraction, and stylization, followed by the details of implementation in the implementation section. Finally, we discuss the results achieved and a comparative study, which includes visual observations along with a subjective analysis against other stylization methods, in the results and discussion section. This is followed by the conclusion section.

2. RELATED WORK

Many non-photorealistic effects come under the category of image stylization. The main aim of any stylization algorithm is to give an artistic effect to a photograph. The most recent work in this domain is by Deep Dream Generator, which uses artificial intelligence for stylization [6]. It can also transfer relevant effects from one image to another. Another state-of-the-art work is published by Prisma Labs, which also uses artificial intelligence to modify predefined effects to best match the query images [25]. In ([31], [27]), the authors worked on generating pen-and-ink illustrations from photographs. The authors in ([28], [10]) observed the interaction of pencil and paper to generate pencil drawings. Another popular effect that comes under the category of stylization is painting illustration, and many authors have contributed with different approaches ([19], [15], [13]).

Abstraction is one of the most studied effects in image stylization. Smoothing out regions to hide details and emphasizing prominent edges are the general steps followed in all abstraction algorithms. DeCarlo and Santella built a perceptual model from eye tracker data to determine the important information content in an image and perform the abstraction accordingly [7]. They used the Canny edge detector, which is one of the most famous edge detection algorithms [4]. Though the Canny edge detector is one of the best edge detectors, it often fails in non-photorealistic rendering applications such as abstraction, since the edges it produces are very thin and often discontinuous. The DoG filter has proven to be more effective than the Canny edge detector in stylization applications. The wide use of the DoG filter in these applications is due to the fact that it is easy to implement, fast, and the edge thickness with DoG can be easily controlled, which is useful in non-photorealistic rendering applications [12]. Some researchers have modified the basic DoG implementation to enhance the effects [32].

Some of the earlier works related to image stylization used the bilateral filter as the edge-preserving filter, along with a suitable edge detection algorithm [30]. Winnemöller et al. used the bilateral filter followed by color quantization and a novel implementation of the DoG filter to enhance edges in abstracted images [33]. Kyprianidis and Döllner applied the bilateral filter and the DoG filter in specific directions (according to the gradient), which helps to detect salient edges and improve the results of image abstraction [18]. Oh et al. used the bilateral filter for modeling and photo editing applications [22]. Orzan et al. used an edge based approach and gradient domain image processing techniques to manipulate photographs in a non-realistic manner [23]. Kang et al. describe the flow of salient features in an image based on shape/color filtering guided by a vector field [17]. An inherent problem with the bilateral filter is gradient reversal. Since in many approaches the edge responses are ultimately multiplied with the abstraction output (without edges), gradient reversal is not visible in areas where the gradient changes overlap with edge responses, but it does show up where such overlaps are missing. Besides the bilateral filter, abstraction based on the Kuwahara filter, morphological filters, and partial differential equation based methods has also been explored ([24], [26]). Gerstner et al. performed abstraction on images by pixelating the original image [11]. Another approach is the adaptive image abstraction algorithm, where over-segmentation is performed followed by saliency driven adaptive smoothing [21]. We provide an alternate solution to this image stylization problem, deriving the necessity of such a solution from these earlier works.

3. PROPOSED FRAMEWORK

Figure 1 shows the flow diagram of our work. The figure shows the key steps in the stylization process and how they are achieved. Each block in figure 1 is explained in detail in the successive subsections. Section 3.1 explains how we obtain the rough FG-BG mask. Section 3.2 explains how we automatically generate scribbles for the FG-BG regions to get an improved FG-BG mask. Following this, section 3.3 gives an in-depth idea of how we perform abstraction with the guided filter and how the number of iterations is automatically decided based on changes in the saliency measure. Further, in section 4, we provide the required implementation details, followed by results and supporting discussion in section 5. Each section is supported with the necessary intermediate results to illustrate the significance of the different steps involved in the stylization process.

3.1 Rough FG-BG mask: Saliency based thresholding

Figure 2: Getting rough FG-BG Mask: (a) Original image, (b) Saliency map, and (c) Thresholding on the saliency map.

The first step in the process is to get the saliency map. Different saliency methods were explored to test which method best suits the application ([1], [3], [34], [5]). In [5], the authors proposed their own saliency approach and compared their method with many existing state-of-the-art saliency algorithms. By comparing the results of the different saliency methods, we decided to use the method proposed in [20]. The saliency algorithm helps to get a rough estimate of the FG-BG mask via simple thresholding, as shown in the equation:

    Sth(i, j) = 1,  if S(i, j) ≥ th
    Sth(i, j) = 0,  if S(i, j) < th

where S(i, j) is the saliency value at pixel (i, j) in the saliency map and Sth represents the thresholded map. The threshold th is chosen as the average saliency value over the complete image. Figure 2 shows these steps for a sample image from the MSRA10K benchmark dataset [5].

Figure 4: Image abstraction flow diagram: Original Image (RGB) → RGB2Lab → Guided Filter (on L) → Add Colors (a, b) → Lab2RGB → Add Edges (from Contour Detection on the original image).
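As a concrete sketch, the thresholding of Section 3.1 can be written in a few lines. This is an illustrative NumPy version of the paper's rule (the function name and the toy map are ours):

```python
import numpy as np

def rough_fg_mask(saliency):
    """Section 3.1: threshold the saliency map at its global mean th."""
    th = saliency.mean()
    return (saliency >= th).astype(np.uint8)

# Toy saliency map: a bright 2x2 "object" on a dark background.
S = np.zeros((6, 6))
S[2:4, 2:4] = 1.0
mask = rough_fg_mask(S)  # 1 inside the bright block, 0 elsewhere
```

Because th is the global mean, a compact salient object on a large dull background is reliably separated, which is all the rough mask needs to achieve.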

3.2 Improved FG-BG Mask with one-cut algorithm

The one-cut algorithm proposed by Tang et al. requires user interaction to provide FG scribbles and BG scribbles as input, along with the original image, to produce the FG-BG mask ([29], [2]). These scribbles are generated automatically from the thresholded saliency map (figure 3a). The automatic scribble generation works because the saliency technique proposed in [20] produces higher saliency values in most of the FG region and some surrounding region. In order to omit the surrounding portion of the FG, the thresholded saliency map is eroded by some amount to generate the FG scribbles (figure 3b). To get the BG scribbles, we dilate the thresholded saliency map and take the logical negation over the complete matrix (figure 3c). For all the images tested with our algorithm, we used a disk of size 25 as the structuring element for erosion and dilation. The output of this implementation is shown in figure 3d. To further improve this result, we reject small mask elements and fill the holes to get a continuous mask (figure 3e).
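The automatic scribble generation of Section 3.2 (erode for FG, dilate-and-negate for BG) can be sketched as follows, assuming SciPy's morphology routines are available; the function names are ours:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def disk(radius):
    """Disk-shaped structuring element (the paper uses size 25)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def scribbles(rough_mask, radius=25):
    """Section 3.2: erode the thresholded saliency map for FG scribbles;
    dilate it and take the logical negation for BG scribbles."""
    m = rough_mask.astype(bool)
    fg = binary_erosion(m, structure=disk(radius))
    bg = ~binary_dilation(m, structure=disk(radius))
    return fg, bg
```

By construction the FG scribbles lie strictly inside the rough mask and the BG scribbles lie strictly outside its dilation, so the two scribble sets can never overlap, which is exactly what the one-cut constraints need.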

3.3 Abstraction with guided filter

Figure 4 shows the different steps carried out to perform abstraction. The red dashed line in figure 4 denotes a single iteration of the proposed abstraction process.

3.3.1 Initial iterations

The abstraction process starts by converting the RGB image to the Lab space. We use the guided filter on the L channel [14]. The guided filter requires an input image, whose properties are changed according to another input called the guide image. When the input image and the guide image are the same, the guided filter performs edge-preserving blurring on the image [14]. The colors are added back to the output of the guided filter. In the abstraction process, edges play an important role; the position of the edges and their nature (continuous/broken and thick/thin) affect the quality of the final output. To detect edges, we use the contour detection algorithm proposed in ([8], [9], [35]). The algorithm gives continuous edges at depth changes and occlusions, and avoids the complete gradient dependency of popular edge detectors such as DoG, Canny, etc. We initially perform three iterations over the complete image. Further iterations are performed according to the saliency changes.
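The per-iteration flow of Figure 4 (filter only the L channel, carry the color channels through) can be sketched as below. Here `smooth_L` stands in for the guided filter, and the RGB↔Lab conversions are elided; the function name is ours:

```python
import numpy as np

def abstraction_iteration(lab, smooth_L):
    """One iteration of Section 3.3.1: apply an edge-preserving filter to
    the L channel only; the a/b color channels pass through unchanged and
    are added back afterwards ("Add Colors" in Figure 4)."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    return np.dstack([smooth_L(L), a, b])
```

Filtering only lightness is what keeps the colors of the abstracted image faithful while the spatial detail is progressively smoothed away.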

Figure 3: Getting improved FG-BG mask: (a) Thresholded saliency map, (b) FG scribbles, (c) BG scribbles, (d) Output of one-cut, and (e) Improved FG-BG mask.

3.3.2 Iterations based on saliency changes

The initial iterations make uniform abstraction over the entire image. After each successive iteration performed after the initial iterations, we calculate the average saliency measure in the FG and BG regions using the mask found previously (figure 3e). Since the FG is of more interest in most images, we perform fewer iterations over the FG and more over the BG region. By tracking the changes in the average saliency values in the FG and BG regions, we stop the iterations:

• over the FG region when the average saliency in the FG region attains a local maximum.

• over the BG region when the average saliency in the BG region attains a local minimum and we have already reached the local maximum in the FG region.

The main advantage of non-uniform abstraction is that more details are retained in the more salient region, i.e., the FG region, as compared to the BG region. The abstraction results are shown in figure 5 and the changes in the saliency map are shown in figure 6. Figure 7 shows how the average saliency values in the FG-BG regions change with iterations, after the first three iterations on both regions. The red squares show the stopping values.
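The stopping rule above can be sketched as a check on the per-iteration average-saliency histories. Reading "attains a local maximum" as "the most recent value falls below its predecessor" is our interpretation, not the paper's exact test:

```python
def stop_flags(fg_hist, bg_hist):
    """Section 3.3.2 stopping rule on average-saliency histories.
    FG stops once its average saliency passes a local maximum; BG stops
    once its average saliency passes a local minimum AND FG has stopped."""
    def passed_local_max(h):
        return len(h) >= 2 and h[-1] < h[-2]

    def passed_local_min(h):
        return len(h) >= 2 and h[-1] > h[-2]

    stop_fg = passed_local_max(fg_hist)
    stop_bg = stop_fg and passed_local_min(bg_hist)
    return stop_fg, stop_bg
```

Note the asymmetry: the BG never stops before the FG, which guarantees the BG receives at least as many abstraction iterations as the FG.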

Figure 5: Abstraction result: (a) Original image, (b) After minimum iterations, and (c) Final abstraction.

4. IMPLEMENTATION

The proposed algorithm is implemented and tested in MATLAB version 2015a running on Windows 10 with an Intel Core i3 processor clocked at 2.49 GHz. Here, we present a rough idea of the timing of the proposed algorithm on a color image of size 400×351. The most time consuming portion of the algorithm is calculating the saliency of the image, which we have to do initially to obtain the FG-BG mask as well as in each iteration of the abstraction. It takes about 5 sec to calculate the saliency measure. Detecting edges using the contour detection algorithm with an already trained model takes about 3 sec, and this has to be done only once, on the original image. Abstraction with the guided filter takes very little time compared to the other portions: for the image under consideration, all 11 iterations (3 initial iterations plus 7 saliency based iterations) of the guided filter took about 2 sec. For a typical image of the mentioned size, the whole process takes about 41 sec, including all the steps and the storage of intermediate and final results on the hard drive. The computation time of the algorithm is higher than the implementations of the algorithms proposed in [17], [33], but it is comparable with the algorithm proposed in [23]. Though the algorithm is slower than some of the state-of-the-art abstraction methods, it is fully automatic and requires very few parameters to be varied in order to obtain good abstraction results for different images. The different parameter settings are discussed below.

Figure 6: Abstraction result: (a) Original saliency map, (b) Saliency map after minimum iterations, and (c) Final saliency map.

4.1 Guided filter parameters

The guided filter implementation requires an input image, a guide image (which is the same as the input image in our case), a neighborhood size r, and a regularization parameter ε [14]:

    IGF = aI + b,  where  a = σ² / (σ² + ε)  and  b = (1 − a)µ

In the above expressions, IGF represents the filtered image for the input image I, and the filtering is controlled by the parameters a and b. σ² represents the variance and µ the mean over the chosen neighborhood of size r. From these expressions:

• The higher the value of ε, the lower the value of a, and the lower the contribution to IGF from the original pixel values in I.

• The lower the value of a, the higher the contribution of b, i.e., the mean of the pixel values in the neighborhood of size r, and hence the stronger the blurring.

In our implementation, for all the images, we used r = 3 and ε = 5².
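Under these expressions, a self-guided filter can be sketched with box statistics. This is a simplified illustration assuming SciPy is available; following He et al. [14], the per-pixel coefficients a and b are averaged over the window before combining:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter_self(I, r=3, eps=25.0):
    """Self-guided filter (guide == input): IGF = a*I + b with
    a = var/(var + eps) and b = (1 - a)*mean over an r-neighborhood.
    Defaults follow Section 4.1 (r = 3, eps = 5**2)."""
    size = 2 * r + 1
    mean = uniform_filter(I, size=size)
    var = uniform_filter(I * I, size=size) - mean * mean
    a = var / (var + eps)
    b = (1.0 - a) * mean
    # Average the coefficients over the same window (He et al. [14]).
    return uniform_filter(a, size=size) * I + uniform_filter(b, size=size)
```

In flat regions σ² ≪ ε, so a ≈ 0 and the output collapses to the local mean (blurring); across strong edges σ² ≫ ε, so a ≈ 1 and the pixel passes through, which is the edge-preserving behavior the iterations rely on.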

4.2 Contour detection parameters

The contour detection algorithm provides its final output in the range 0-1 [8]. Here, we need to set a threshold value to retain only the required amount of edges. A threshold value between 0.7 and 0.85 provides good results in our implementation. This is the only parameter which needs to be changed depending on the image under consideration.

Figure 7: Variations in average saliency values: (a) Foreground region and (b) Background region.
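The edge thresholding of Section 4.2, together with one plausible way of compositing the edges back onto the abstracted image (the paper does not pin down the exact blend, so darkening edge pixels is our illustrative choice), can be sketched as:

```python
import numpy as np

def add_edges(abstracted_L, soft_edges, th=0.8):
    """Binarize the 0-1 contour response at th (0.7-0.85 works well per
    Section 4.2) and draw the edges in black on the abstracted L channel.
    The darkening blend is an illustrative choice, not the paper's."""
    out = abstracted_L.copy()
    out[soft_edges >= th] = 0.0
    return out
```

Raising th suppresses weak, cluttered responses and keeps only the strong object contours, which is why it is the one per-image knob.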

5. RESULTS AND DISCUSSION

In this section, we show the comparison results of our method with the other state-of-the-art methods proposed in [33], [17] and [23]. Figure 8 and figure 9 show these comparisons using 12 images. The methods proposed in [33] and [17] use a color quantization which makes

the abstraction for some images (such as the 1st and 3rd images of figure 8 and the last image of figure 9) slightly better than the output of our method, but the outputs are still comparable. Color quantization improves the overall look of the above mentioned images, but it creates unnecessary artifacts (especially in the BG region), which can be observed in the remaining abstraction results. Moreover, the edges are better in the results of our method compared to the other three methods, which can be prominently observed in the 5th and 6th images of figure 8 and the 2nd and 4th images of figure 9. The edge detection method used in our stylization framework enhances the quality of the abstraction, and the edges in the FG region help to improve the saliency measure. In very rare cases, the proposed framework produces small undesirable edges, as can be seen in the 2nd image of figure 8.

Along with this, we carried out a subjective analysis to compare our method with the above mentioned three methods. In the subjective analysis, each user is asked to rate all four abstraction results from 1 to 10 (10 being the highest), rating each abstraction result individually. Since a user can give the same rating to two or more results, the user is also asked to rank each stylization result from rank 1 to rank 4 (rank 1 being the best) for each image. For every new user, the images appear in random order, and for each image, the order in which the abstraction results of the different methods appear is also random. This avoids any bias due to a fixed ordering of the result images. For all the users, conditions such as the monitor (display) and lighting are kept exactly the same so that each result appears the same to all users. The study was carried out with 17 users, and each user was asked to analyze 14 images from the mentioned dataset. This gives a total of 14 × 17 = 238 cases under consideration for rating and ranking. The results of the subjective analysis are shown in table 1.

Table 1: Results of subjective analysis.

Method          | No. of times rank 1 given | Average user rating | Standard deviation
Method in [33]  | 44                        | 6.43                | 2.001
Method in [17]  | 50                        | 6.48                | 1.751
Method in [23]  | 66                        | 6.58                | 1.783
Our method      | 78                        | 6.70                | 1.763

From table 1, it can be seen that in both cases, rating and ranking, the method proposed in this paper performs better than the other three methods. The proposed method receives rank 1 a total of 78 times, which is significantly more often than the other methods. Its average rating of 6.7 is also higher than that of the other methods. The standard deviation column of table 1 indicates the consistency of the user ratings: the lower the standard deviation, the more consistent the ratings for a particular method. In this consistency measure, the proposed method stands in second position. In all three measures, the proposed method either performs best or is very close to the best. The subjective analysis results shown in table 1, as well as the comparative results shown in figures 8 and 9, show that even though the proposed method does not outperform the other state-of-the-art methods by a very large margin, it produces visually appealing stylized images.

6. LIMITATIONS

Here, we discuss a few limitations which were realized during the implementation of the proposed approach. While deciding the number of iterations of the abstraction process, we rely on the local maximum and the local minimum of the average saliency values, which might not be the global optima. Still, the algorithm produces good quality stylization results for most images. Another limitation arises when an input image does not have a distinct FG and BG; unequal abstraction may then produce noticeable artifacts in the final stylization result. However, according to the literature on image stylization, it is not very common to perform abstraction on such images.

7. CONCLUSION

We have presented a novel approach for image abstraction based on a saliency measure. By combining saliency based segmentation with the one-cut segmentation algorithm and automatic scribble generation for the foreground and the background using simple morphological operations, we have achieved good FG-BG segmentation. This mask is used to calculate the saliency measure in the FG and BG regions separately in each iteration of the guided filter. According to the changes in the average saliency values in these regions, the number of iterations of the guided filter in the FG and BG regions is decided. Moreover, we always perform at least as many iterations in the BG region as in the FG region. This helps to keep more details in the more salient FG region and to perform more abstraction in the less salient BG region. The edges obtained in our implementation are also more prominent, which emphasizes the overall stylization result. Good stylization results are usually obtained within just a few iterations. We would like to improve the method by devising a good optimization strategy which provides the best saliency measure at which to stop the iterations in the foreground and background regions. We would also like to evaluate the results obtained through this strategy with a larger number of subjects. Finally, we would like to speed up the proposed approach using fast saliency detection algorithms such as the one proposed in [34].

Figure 8: Comparison with different methods: First column-original image, Second column-method proposed in [33], Third column-method proposed in [17], Fourth column-method proposed in [23], and Fifth column-our method.

Figure 9: More comparison results with different methods: First column-original image, Second column-method proposed in [33], Third column-method proposed in [17], Fourth column-method proposed in [23], and Fifth column-our method.

8. REFERENCES

[1] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk. Frequency-tuned salient region detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1597-1604. IEEE, 2009.
[2] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pages 359-374. Springer, 2001.
[3] N. D. Bruce and J. K. Tsotsos. Saliency, attention, and visual search: An information theoretic approach. Volume 9, pages 5-5. The Association for Research in Vision and Ophthalmology, 2009.
[4] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):679-698, 1986.
[5] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. Torr, and S.-M. Hu. Global contrast based salient region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):569-582, 2015.
[6] DeepDreamGeneratorTeam. Deep Dream Generator, 2015. Available at http://deepdreamgenerator.com/.
[7] D. DeCarlo and A. Santella. Stylization and abstraction of photographs. In ACM Transactions on Graphics (TOG), volume 21, pages 769-776. ACM, 2002.
[8] P. Dollár and C. L. Zitnick. Structured forests for fast edge detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 1841-1848, 2013.
[9] P. Dollár and C. L. Zitnick. Fast edge detection using structured forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(8):1558-1570, 2015.
[10] F. Durand, V. Ostromoukhov, M. Miller, F. Duranleau, and J. Dorsey. Decoupling strokes and high-level attributes for interactive traditional drawing. In Rendering Techniques 2001, pages 71-82. Springer, 2001.
[11] T. Gerstner, D. DeCarlo, M. Alexa, A. Finkelstein, Y. Gingold, and A. Nealen. Pixelated image abstraction. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, pages 29-36. Eurographics Association, 2012.
[12] B. Gooch, E. Reinhard, and A. Gooch. Human facial illustrations: Creation and psychophysical evaluation. ACM Transactions on Graphics (TOG), 23(1):27-44, 2004.
[13] J. Hays and I. Essa. Image and video based painterly animation. In Proceedings of the 3rd International Symposium on Non-Photorealistic Animation and Rendering, pages 113-120. ACM, 2004.
[14] K. He, J. Sun, and X. Tang. Guided image filtering. In European Conference on Computer Vision, pages 1-14. Springer, 2010.
[15] A. Hertzmann. Painterly rendering with curved brush strokes of multiple sizes. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pages 453-460. ACM, 1998.
[16] J. F. Hughes, A. van Dam, M. McGuire, D. F. Sklar, J. D. Foley, S. K. Feiner, and K. Akeley. Computer Graphics: Principles and Practice (3rd ed.). Addison-Wesley Professional, Boston, MA, USA, July 2013.
[17] H. Kang, S. Lee, and C. K. Chui. Flow-based image abstraction. IEEE Transactions on Visualization and Computer Graphics, 15(1):62-76, 2009.
[18] J. E. Kyprianidis and J. Döllner. Image abstraction by structure adaptive filtering. In TPCG, pages 51-58, 2008.
[19] P. Litwinowicz. Processing images and video for an impressionist effect. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pages 407-414. ACM Press/Addison-Wesley Publishing Co., 1997.
[20] R. Margolin, A. Tal, and L. Zelnik-Manor. What makes a patch distinct? In CVPR, 2013.
[21] R. Nagar and S. Raman. Saliency guided adaptive image abstraction. In The 5th National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), pages 16-19, 2015.
[22] B. M. Oh, M. Chen, J. Dorsey, and F. Durand. Image-based modeling and photo editing. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 433-442. ACM, 2001.
[23] A. Orzan, A. Bousseau, P. Barla, and J. Thollot. Structure-preserving manipulation of photographs. In Proceedings of the 5th International Symposium on Non-Photorealistic Animation and Rendering, pages 103-110. ACM, 2007.
[24] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629-639, 1990.
[25] PrismaInc. Prisma. Available at https://play.google.com/store/apps/details?id=com.neuralprisma&hl=en.
[26] P. Rosin and J. Collomosse. Image and Video-Based Artistic Stylisation, volume 42. Springer Science & Business Media, 2012.
[27] M. P. Salisbury, M. T. Wong, J. F. Hughes, and D. H. Salesin. Orientable textures for image-based pen-and-ink illustration. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pages 401-406. ACM Press/Addison-Wesley Publishing Co., 1997.
[28] M. C. Sousa and J. W. Buchanan. Observational models of graphite pencil materials. In Computer Graphics Forum, volume 19, pages 27-49, 2000.
[29] M. Tang, L. Gorelick, O. Veksler, and Y. Boykov. GrabCut in one cut. In Proceedings of the IEEE International Conference on Computer Vision, pages 1769-1776, 2013.
[30] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In Sixth International Conference on Computer Vision, pages 839-846. IEEE, 1998.
[31] G. Winkenbach and D. H. Salesin. Computer-generated pen-and-ink illustration. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, pages 91-100. ACM, 1994.
[32] H. Winnemöller. XDoG: advanced image stylization with extended difference-of-Gaussians. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering, pages 147-156. ACM, 2011.
[33] H. Winnemöller, S. C. Olsen, and B. Gooch. Real-time video abstraction. In ACM Transactions on Graphics (TOG), volume 25, pages 1221-1226. ACM, 2006.
[34] J. Zhang, S. Sclaroff, Z. Lin, X. Shen, B. Price, and R. Mech. Minimum barrier salient object detection at 80 fps. In Proceedings of the IEEE International Conference on Computer Vision, pages 1404-1412, 2015.
[35] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In European Conference on Computer Vision, pages 391-405. Springer, 2014.


On Effective Presentation of Graph Patterns: A ... - ACM Digital Library
Oct 30, 2008 - to mine frequent patterns over graph data, with the large spectrum covering many variants of the problem. However, the real bottleneck for ...

The multidimensional role of social media in ... - ACM Digital Library
informed consent to informed choice in medical decisions. Social media is playing a vital role in this transformation. I'm alive and healthy because of great doctors. Diagnosed with advanced kidney cancer, I received care from a great oncologist, a g

Performance Modeling of Network Coding in ... - ACM Digital Library
without the priority scheme. Our analytical results provide insights into how network coding based epidemic routing with priority can reduce the data transmission ...

The Character, Value, and Management of ... - ACM Digital Library
the move. Instead we found workers kept large, highly valued paper archives. ..... suggest two general problems in processing data lead to the accumulation.

Evolutionary Learning of Syntax Patterns for ... - ACM Digital Library
Jul 15, 2015 - ABSTRACT. There is an increasing interest in the development of tech- niques for automatic relation extraction from unstructured text. The biomedical domain, in particular, is a sector that may greatly benefit from those techniques due

Remnance of Form: Interactive Narratives ... - ACM Digital Library
what's not. Through several playful vignettes, the shadow interacts with viewers' presence, body posture, and their manipulation of the light source creating the.

On the Automatic Construction of Regular ... - ACM Digital Library
different application domains. Writing ... oped a tool based on Genetic Programming capable of con- ... We developed a web application containing a suite of ex-.

Home, habits, and energy: examining domestic ... - ACM Digital Library
2 HCI Institute. Carnegie Mellon University. Pittsburgh, PA 15232 USA. {jjpierce,paulos}@cs.cmu.edu. 3 SAMA Group. Yahoo!, Inc. Sunnyvale, CA 94089 USA.

Optimizing two-dimensional search results ... - ACM Digital Library
Feb 9, 2011 - Classic search engine results are presented as an ordered list of documents ... algorithms for optimizing user utility in matrix presenta- tions.

Towards a Relation Extraction Framework for ... - ACM Digital Library
to the security domain are needed. As labeled text data is scarce and expensive, we follow developments in semi- supervised Natural Language Processing and ...

Distance Estimation in Virtual Environments ... - ACM Digital Library
Jul 29, 2006 - and a target avatar in the virtual world by using a joystick to adjust ... ∗email: {jingjing.meng,john.j.rieser.2,bobby.bodenheimer}@vanderbilt.

A Framework for Technology Design for ... - ACM Digital Library
Internet in such markets. Today, Internet software can ... desired contexts? Connectivity. While the Internet is on the rise in the Global South, it is still slow, unreliable, and often. (https://developers.google.com/ billions/). By having the devel