A USER-FRIENDLY INTERACTIVE IMAGE INPAINTING FRAMEWORK USING LAPLACIAN COORDINATES

Wallace Casaca¹,², Danilo Motta¹, Gabriel Taubin², Luis Gustavo Nonato¹

¹ ICMC, University of São Paulo (USP), São Carlos, Brazil
² School of Engineering, Brown University, Providence, United States

ABSTRACT

Image inpainting is a challenging topic in computer vision that seeks to recover the natural aspect of an image where data has been partially damaged or occluded by undesired objects. A common drawback not addressed by most inpainting methodologies is that the user must manually provide the inpainting mask as input to the method. Selecting the inpainting mask is tedious and time consuming, and it often requires artistic skills to precisely determine the mask. In this work we design a new tool that allows users to easily select the desired mask. The proposed framework combines the high adherence to image contours of the Laplacian Coordinates segmentation approach with the efficiency of a recent inpainting technique that unifies anisotropic diffusion, an inner product-based filling order mechanism and exemplar-based completion. The user can interact with the object that he/she intends to edit by stroking small parts of the object so as to proceed with the segmentation and inpainting task. Our comparisons show that the proposed framework performs well in terms of applicability and effectiveness when compared against other existing techniques in the literature. Moreover, the proposed framework outperforms representative image inpainting methodologies in many aspects, such as quality of reconstruction and high adherence to the contours of the selected regions.

Index Terms— image inpainting, interactive segmentation, optimization techniques.

1. INTRODUCTION

Image inpainting is a modern research topic that has received great attention in recent years. It focuses on restoration and disocclusion processes for damaged digital images and on artistic editing purposes. Methods devoted to image inpainting can be arranged in several groups, as suggested by the surveys [1, 2, 3]. In short, existing approaches differ in terms of pixel propagation, sensitivity when synthesizing textures, and the filling criterion [4, 5, 6, 7, 8, 9].

Although techniques for performing image inpainting vary in many fundamental aspects, a common drawback not covered by most inpainting systems is that they require the user to manually "carve" the targets to be edited. Selecting those targets is a meticulous process that often demands artistic skills from users to precisely separate targets from the image background. In fact, there are few methods that address the problem of selecting the regions to be repaired in an automatic or semi-supervised way. The techniques proposed in [10, 11, 12] automatically detect defects that are easy to identify visually but difficult to segment by hand. However, those algorithms can only handle a limited class of defects, thus constraining their application to specific problems. A method that covers a broader range of cases was proposed in [13], which introduces user knowledge into the inpainting pipeline. The authors provide an interface that allows users to guide the restoration by drawing straight lines on target regions. Meaningful improvements on [13] have been successfully developed, such as [14, 15, 16]. Despite their flexibility in dealing with a larger number of image classes, those algorithms still require the user to entirely mark the area to be repaired. Manually creating the inpainting mask while still conveying geometric information has also been used in other computer vision applications. The algorithm proposed in [17, 18] relies on nearest neighbor correspondences among parts of the image, and has recently been introduced in the Adobe Photoshop engine as an interactive tool for image retouching. Since a considerable amount of user intervention is required to select the targets and drive the whole inpainting process, the time needed to generate a satisfactory result is a hurdle, especially when the object to be filled lies in a non-homogeneous image background. Aiming at overcoming the issues mentioned above while providing a friendly and intuitive interface to select the regions to be recovered, we propose a novel framework that generates pleasing outcomes with a reduced amount of user interaction.

978-1-4799-8339-1/15/$31.00 ©2015 IEEE

Contributions. The main contributions of this work can be summarized as follows: • A novel interactive inpainting framework that combines the accuracy of the Laplacian Coordinates segmentation approach [19] with the fast data matching scheme proposed in [3] to restore and repair images.


ICIP 2015

• Our approach allows for recursively inpainting the image by reintroducing new optimization constraints to achieve more refined results.

where x = (x₁, x₂, ..., xₙ) is the saliency map, which assigns a scalar value xᵢ to each pixel pᵢ ∈ I of the image. The rationale behind the Laplacian Coordinates approach is that the non-pairwise terms in Equation (1) enforce the fidelity of brushed pixels pᵢ, i ∈ B ∪ O, to the scalars 0 (background) and 1 (object), respectively, while the last term imposes spatial smoothness within image segments and allows sharp jumps across image boundaries. Energy (1) is efficiently minimized by solving a sparse system of linear equations [19, 20]. The weights wᵢⱼ are calculated from an image gradient-based function such as the ones in [19, 21]. Next, the inpainting mask Ω = (Ωᵢ) is obtained by trivially assigning object and background labels as follows:

    Ωᵢ = 1, if xᵢ ≥ 1/2;  Ωᵢ = 0, otherwise.   (2)
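The mask-selection machinery above amounts to one sparse linear solve followed by a threshold. The sketch below illustrates the idea in Python with a simplified pairwise energy: a 4-connected lattice instead of the paper's 8-connected one, Gaussian gradient weights, and quadratic fidelity penalties pinning seeds to 0 and 1. It is not the exact Laplacian Coordinates operator of [19]; all function and parameter names are ours.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def seeded_mask(img, obj_seeds, bg_seeds, beta=1e3, fidelity=1e3):
    """Minimize fidelity terms on seeds plus a gradient-weighted
    smoothness term, then threshold the saliency map at 1/2."""
    h, w = img.shape
    n = h * w
    idx = lambda r, c: r * w + c
    L = lil_matrix((n, n))                      # graph Laplacian
    for r in range(h):
        for c in range(w):
            i = idx(r, c)
            for dr, dc in ((0, 1), (1, 0)):     # 4-neighborhood
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = idx(rr, cc)
                    wij = np.exp(-beta * (img[r, c] - img[rr, cc]) ** 2)
                    L[i, i] += wij; L[j, j] += wij
                    L[i, j] -= wij; L[j, i] -= wij
    b = np.zeros(n)
    S = lil_matrix((n, n))                      # seed fidelity terms
    for (r, c) in obj_seeds:
        S[idx(r, c), idx(r, c)] = fidelity
        b[idx(r, c)] = fidelity                 # pull toward 1 (object)
    for (r, c) in bg_seeds:
        S[idx(r, c), idx(r, c)] = fidelity      # pull toward 0 (background)
    x = spsolve((L + S).tocsr(), b)             # saliency map, cf. Eq. (1)
    return (x >= 0.5).reshape(h, w)             # inpainting mask, cf. Eq. (2)
```

With strong edge weights, the label propagated from a seed stops at image boundaries, which is why a few strokes per region suffice.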

• The proposed tool is easy to implement, computationally low-cost, and requires just a small amount of user intervention to reach a good result.

2. PIPELINE OVERVIEW

As illustrated in Figure 1, our framework comprises three main steps, namely, mask selection, image inpainting and user interaction. The mask selection step makes use of the interactive image segmentation algorithm [19] to initially select the objects that will be taken as input to the inpainting stage. In this stage, a label is assigned to each unfilled pixel from the specified objects according to its level of priority in the filling front. The labeled pixels are ordered and used to create the region from which pixels will be sampled to fill the inpainting area. This region is dynamically defined so as to accelerate the progress of the image completion. Sampled pixels are then copied to the unfilled objects so as to keep the visual coherence of image structures. Finally, the user can update existing regions as well as add new ones by stroking the brushed image in the final step of our framework. This step is performed by introducing new constraints into the Laplacian Coordinates equations in order to properly generate new inpainting partitions. Details about each stage are presented below.
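The three-stage loop above can be sketched as a small driver. The code below is a hypothetical orchestration, not the paper's implementation: the stage functions are placeholders injected by the caller, and all names are ours.

```python
def inpaint_interactively(image, get_strokes, segment, inpaint, accept):
    """Mask selection -> image inpainting -> user interaction loop.

    get_strokes(img) -> (obj_seeds, bg_seeds)   user brushes strokes
    segment(img, obj, bg) -> mask               e.g. Laplacian Coordinates
    inpaint(img, mask) -> result                e.g. exemplar-based filling
    accept(result) -> bool                      user is satisfied?
    """
    obj, bg = get_strokes(image)
    while True:
        mask = segment(image, obj, bg)
        result = inpaint(image, mask)
        if accept(result):
            return result
        # re-mark badly inpainted regions: new seeds become new constraints
        extra_obj, extra_bg = get_strokes(result)
        obj, bg = obj + extra_obj, bg + extra_bg
```

The key design point mirrored here is that user feedback only appends constraints; the segmentation is re-solved rather than rebuilt from scratch.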

4. IMAGE INPAINTING STEP

We now briefly describe the algorithm developed in [3] so that it can be adapted to handle the user-specified objects of the image.

4.1. Reference Image Computation

Our inpainting step starts by computing a reference image u (cartoon image) from I using the anisotropic diffusion equation proposed in [22]. Image u is a non-textured function holding the global geometric structures of I (see Fig. 2(b)). In mathematical terms, u is obtained by numerically solving the following nonlinear diffusion equation:

    ∂I⁽ᵗ⁾/∂t = g |∇I⁽ᵗ⁾| div( ∇I⁽ᵗ⁾ / |∇I⁽ᵗ⁾| ) − (1 − g)(I⁽ᵗ⁾ − I),   (3)
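A diffusion of this kind can be prototyped with a simple explicit scheme. The sketch below is in the spirit of Eq. (3) and [22], but simplified: the edge-stopping function g is evaluated on the raw gradient rather than the Gaussian-smoothed one, and the parameters (dt, lam, k) are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def cartoon_image(I, iters=200, dt=0.1, lam=0.05, k=0.1):
    """Explicit-scheme sketch of edge-preserving smoothing:
    curvature-driven diffusion where g is large (flat areas),
    fidelity pull (1 - g)(u - I) where g is small (edges)."""
    u = I.astype(float).copy()
    for _ in range(iters):
        uy, ux = np.gradient(u)
        mag = np.hypot(ux, uy) + 1e-8
        g = 1.0 / (1.0 + (mag / k) ** 2)        # edge-stopping function
        # div(grad u / |grad u|): curvature of the level lines
        div = (np.gradient(ux / mag, axis=1) +
               np.gradient(uy / mag, axis=0))
        u += dt * (g * mag * div - lam * (1 - g) * (u - I))
    return u
```

The result is a non-textured approximation of I that keeps the dominant contours, which is exactly the role the cartoon image u plays in the filling-order and patch-matching steps.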

3. MASK SELECTION STEP

In our approach, entire objects can be easily marked so as to avoid the meticulous selection of boundary pixels employed by traditional image processing tools. The user selects a target region by brushing on the object of interest, and the marked pixels are used as seeds for the partition process. The image background must also be roughly marked to properly set the optimization constraints for the Laplacian Coordinates. The second image in Fig. 1 illustrates the described scheme.

where I⁽ᵗ⁾ is the scaled version of I, g = g(|∇G_σ ∗ I⁽ᵗ⁾|) is an edge detection function and G_σ represents the Gaussian kernel.

4.2. Filling Order Assignment

The use of the cartoon image u allows us to embed the image geometry into the mechanism that computes the filling order of the unrepaired pixels. This mechanism relies on the following filling priority measure δ:

3.1. Laplacian Coordinates Energy

As a basic tool to compute the Laplacian Coordinates energy, we define a weighted graph G = (V, E, W_E), where V is the set of nodes corresponding to the pixels of the image, E is the edge set built from pairs of pixels locally connected in an 8-connected lattice, and W_E determines the set of weights of the edges. The Laplacian Coordinates energy E is computed as follows:

    E(x) = Σ_{i∈B} ‖xᵢ‖₂² + Σ_{i∈O} ‖xᵢ − 1‖₂² + (1/2) Σ_{(i,j)∈E} ‖wᵢⱼ(xᵢ − xⱼ)‖₂²,   (1)

The filling priority measure δ introduced in Section 4.2 is given by

    δ(pᵢ) = R(pᵢ) C(pᵢ),   (4)

    R(pᵢ) = |⟨∇(Δu_{pᵢ}), d_{pᵢ}⟩|,  d_{pᵢ} = ∇⊥u_{pᵢ} / |∇⊥u_{pᵢ}|,   (5)

where R and C represent the relevance and biased confidence terms as detailed in [3]. R captures the direction of the image structures arriving at the boundary ∂Ω, while C accounts for the coherence during the completion process (see Figs. 2(c)-(e) for an illustration). From Equation (4), a label is then assigned to each pixel pᵢ in the filling front.
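The priority measure δ of Eqs. (4)-(5) can be prototyped directly on a grid. The sketch below is a toy version with names of our own: it uses a 4-neighborhood to detect the filling front and takes the confidence map as a precomputed input, whereas [3] updates it during completion.

```python
import numpy as np

def filling_priorities(u, mask, conf):
    """Return the front pixel maximizing delta(p) = R(p) * C(p),
    where R(p) = |<grad(Lap u), d>| and d is the isophote direction."""
    lap = np.zeros_like(u)                    # Laplacian of u, 5-point stencil
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2] - 4 * u[1:-1, 1:-1])
    gy, gx = np.gradient(lap)                 # grad(Lap u)
    uy, ux = np.gradient(u)
    norm = np.hypot(ux, uy) + 1e-8
    dx, dy = -uy / norm, ux / norm            # isophote direction (perp grad)
    R = np.abs(gx * dy + gy * dx) * 0 + np.abs(gx * dx + gy * dy)  # |<grad(Lap u), d>|
    # front: unfilled pixels (mask) with at least one filled 4-neighbor
    filled = ~mask
    nb = np.zeros_like(mask, dtype=bool)
    nb[1:, :] |= filled[:-1, :]; nb[:-1, :] |= filled[1:, :]
    nb[:, 1:] |= filled[:, :-1]; nb[:, :-1] |= filled[:, 1:]
    front = mask & nb
    delta = np.where(front, R * conf, -np.inf)
    return np.unravel_index(np.argmax(delta), delta.shape)   # next pixel to fill
```

In a completion loop this function is called once per step: fill a patch around the returned pixel, update the mask, and repeat until no front pixel remains.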


Fig. 1. Pipeline of our inpainting framework: the input image is brushed by the user (inpainting mask selection), producing the target mask and target image; the image inpainting stage then yields the inpainted image, which can be further refined through user interaction.

Fig. 2. Illustration of the priority filling order mechanism: (a) input image, (b) cartoon image u, (c) relevance term R(p), (d) inpainting towards image structures, (e) biased confidence term C(p), (f) inpainted image.

Fig. 3. Illustrative sketch of the image completion process.

    d(p, q) = ‖p − q‖_ΔU / sqrt(‖p‖²_ΔU + ‖q‖²_ΔU),  ‖p‖_ΔU := sqrt(pᵀ ΔU p),   (6)

with ΔU being a diagonal matrix defined by the Laplacian of u, ΔU_ii = Δu_{pᵢ}, pᵢ ∈ H(p) ∩ Λ_Ω^p, and p = (I_{p₁}, I_{p₂}, ..., I_{pₖ}) being a column vector containing the intensities of the given pixels in H(p) (similarly for the q case). The valid pixels from H(q̂) are then placed in the corresponding pixels of H(p) in order to recursively fill the whole inpainting mask Ω.

5. USER INTERVENTION

One of the main contributions of our approach is to exploit the flexibility provided by the Laplacian Coordinates approach to interactively modify the inpainting result and even add new targets to be inpainted. Laplacian Coordinates enables an interactive tool that allows for repartitioning the data by inserting new seeded pixels. In fact, if the result is not satisfactory, the user can re-mark badly inpainted pixels and their unlabeled neighboring pixels, turning them into new constraints for the Laplacian Coordinates linear system so as to repartition the image and thereby improve the resulting inpainting. Another important trait of our framework is that managing

4.3. Mask Completion

In this stage we allocate the most suitable patch of pixels from the dynamic region Λ_Ω^p to the neighborhood of p ∈ ∂Ω. Our algorithm makes use of a metric that relies on the cartoon image u to compare the fixed patch H(p) with all candidate patches H(q) ⊂ Λ_Ω^p (see Fig. 3). More precisely, the optimal patch H(q̂) is the one that minimizes the distance between H(p) and H(q) w.r.t. the measure given in Equation (6).
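The cartoon-weighted patch metric of Eq. (6) can be prototyped in a few lines. The sketch below is a toy version with our own names; note one deliberate deviation: the paper defines ΔU from Δu directly, while we take |Δu| on the diagonal so the weighted norm stays well defined for negative Laplacian values.

```python
import numpy as np

def patch_distance(P, Q, lap_u):
    """d(p,q) = ||p-q||_W / sqrt(||p||_W^2 + ||q||_W^2), with the
    diagonal weight W = |Laplacian of u| restricted to the patch."""
    w = np.abs(lap_u).ravel()                 # diagonal of the weight matrix
    p, q = P.ravel(), Q.ravel()
    wn = lambda v: np.sqrt(np.sum(w * v * v)) # weighted norm ||v||_W
    denom = np.sqrt(wn(p) ** 2 + wn(q) ** 2) + 1e-12
    return wn(p - q) / denom

def best_patch(target, candidates, lap_u):
    """Index of the candidate patch minimizing the Eq. (6)-style metric."""
    return int(np.argmin([patch_distance(target, c, lap_u)
                          for c in candidates]))
```

Because the weights come from the cartoon image, patches are matched mainly on structural content and the normalization keeps the metric in [0, 1], so scores are comparable across patches of different contrast.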


multiple regions is also allowed, since Laplacian Coordinates makes it easy to obtain multiple partitions at the same time. Moreover, the user can recursively steer the resulting partition towards a higher quality result. Figure 4 illustrates the selection of multiple targets, where the user provides color scribbles on the desired objects (in green) and on the image background (in red).

Fig. 5. Comparison between our framework and Adobe Photoshop taking into account the user interface: (a) targeted by our method, (b) selection from (a), (c) inpainted by our method, (d) targeted by Photoshop, (e) inpainted by Photoshop.

Fig. 4. Taking multiple targets to perform inpainting: (a) marked image, (b) inpainting result.

6. RESULTS AND COMPARISONS

In order to confirm the effectiveness of our methodology, we perform comparisons against the modern content-aware fill mechanism provided by Adobe Photoshop, the planar structure-based inpainting algorithm [23], and the classical image inpainting techniques [4] and [24]. Figure 5 illustrates the capability of our method to perform inpainting from a reduced amount of user involvement. Our approach (Fig. 5(a)) is much simpler than the "patch" mechanism provided by Adobe Photoshop (Fig. 5(c)); that is, the user does not need to fully surround the whole object to reach a satisfactory result. Figure 6 compares our technique against Adobe Photoshop, the exemplar-based method [4] and the optimization-based approach [24]. In contrast to Figs. 6(d)-(f), which create some artifacts in their outcomes, our method reaches a more refined result, as depicted in Fig. 6(c). Finally, Figure 7 shows a photograph with some gaps intentionally made on complex regions of the image. Our approach outperforms [4] and [23] both in terms of properly repairing the image and in the wPSNR measurement.

Fig. 6. Inpainting produced by our approach, Adobe Photoshop, exemplar-based [4] and optimization-based [24]: (a) targeted by our framework, (b) targeted by Adobe Photoshop, (c) inpainted by our framework, (d) inpainted by Adobe Photoshop, (e) inpainted by [4], (f) inpainted by [24].

7. CONCLUSION

In this work we address the fundamental problem of image inpainting as a practical application that unifies stroke-based object selection and image restoration. Laplacian Coordinates, anisotropic diffusion and the cartoon-based filling order mechanism were combined so as to provide a robust interface that allows for user involvement while managing the restoration process. In fact, the proposed framework turns out to be effective in practical situations and quite flexible in properly repairing images, rendering it a very attractive interactive inpainting tool.

Fig. 7. Comparison between our framework, exemplar-based [4] and planar structure-based [23]: (a) targeted by our method, (b) our result (wPSNR: 36.49), (c) result from [4] (wPSNR: 33.88), (d) result from [23] (wPSNR: 33.76).

Acknowledgments The authors would like to thank the reviewers for their constructive comments. This research has been funded by CNPq and FAPESP (grants #2014/16857-0 and #2011/22749-8).


8. REFERENCES

[1] Muthukumar Subramanyam, Krishnan Nallaperumal, P. Pasupathi, and S. Deepa, "Analysis of image inpainting techniques with exemplar, Poisson, successive elimination and 8 pixel neighborhood methods," Int. Journal of Comp. Applications, vol. 9, no. 11, pp. 15–18, 2010.

[2] Marcelo Bertalmio, Vicent Caselles, Simon Masnou, and Guillermo Sapiro, "Inpainting," Encyclopedia of Computer Vision, 2011.

[3] Wallace Casaca, Marcos Proenca de Almeida, Maurílio Boaventura, and Luis Gustavo Nonato, "Combining anisotropic diffusion, transport equation and texture synthesis for inpainting textured images," Pattern Recognition Letters, vol. 36, pp. 36–45, 2014.

[4] Antonio Criminisi, Patrick Pérez, and Kentaro Toyama, "Region filling and object removal by exemplar-based image inpainting," IEEE Transactions on Image Processing, vol. 13, pp. 1200–1212, 2004.

[5] Aurélie Bugeau, Marcelo Bertalmío, Vicent Caselles, and Guillermo Sapiro, "A comprehensive framework for image inpainting," IEEE Transactions on Image Processing, vol. 19, pp. 2634–2645, 2010.

[6] Frédéric Cao, Yann Gousseau, Simon Masnou, and Patrick Pérez, "Geometrically guided exemplar-based inpainting," SIAM Journal on Imaging Sciences, vol. 4, pp. 1143–1179, 2011.

[7] Shutao Li and Ming Zhao, "Image inpainting with salient structure completion and texture propagation," Pattern Recognition Letters, vol. 32, pp. 1256–1266, 2011.

[8] Yunqiang Liu and Vicent Caselles, "Exemplar-based image inpainting using multiscale graph cuts," IEEE Transactions on Image Processing, vol. 22, pp. 1699–1711, 2013.

[9] Maxime Daisy, Pierre Buyssens, David Tschumperlé, and Olivier Lézoray, "A smarter exemplar-based inpainting algorithm using local and global heuristics for more geometric coherence," in IEEE International Conference on Image Processing, 2014, pp. 4622–4626.

[10] Toru Tamaki, Hiroshi Suzuki, and Masanobu Yamamoto, "String-like occluding region extraction for background restoration," in IEEE International Conference on Pattern Recognition, 2006, pp. 615–618.

[11] Toshiyuki Amano, "Correlation based image defect detection," in IEEE International Conference on Pattern Recognition, 2006, pp. 163–166.

[12] Milind G. Padalkar, Mukesh A. Zaveri, and Manjunath V. Joshi, "SVD based automatic detection of target regions for image inpainting," in 13th Asian Conference on Computer Vision (LNCS), 2013, pp. 61–71.

[13] Jian Sun, Lu Yuan, Jiaya Jia, and Heung-Yeung Shum, "Image completion with structure propagation," ACM Trans. Graph., vol. 24, no. 3, pp. 861–868, 2005.

[14] Yan Zhang, Zhengxing Sun, Mofei Song, and Feiqian Zhang, "Interactive image completion with direction empirical mode," in Conference on Technologies and Applications of Artificial Intelligence, 2011, pp. 13–18.

[15] Darko Pavic, Volker Schonefeld, and Leif Kobbelt, "Interactive image completion with perspective correction," The Visual Computer, vol. 22, no. 9, pp. 671–681, 2006.

[16] Teryl Arnold and Bryan S. Morse, "Interactive image repair with assisted structure and texture completion," in IEEE Workshop on Applications of Computer Vision, 2007, pp. 11–17.

[17] Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B. Goldman, "PatchMatch: A randomized correspondence algorithm for structural image editing," ACM Trans. Graph., vol. 28, no. 3, pp. 24:1–24:11, 2009.

[18] Connelly Barnes, Eli Shechtman, Dan B. Goldman, and Adam Finkelstein, "The generalized PatchMatch correspondence algorithm," in Proceedings of the 11th European Conference on Computer Vision (ECCV), Springer-Verlag, 2010, pp. 29–43.

[19] Wallace Casaca, Luis Gustavo Nonato, and Gabriel Taubin, "Laplacian coordinates for seeded image segmentation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 384–391.

[20] Wallace Casaca, Graph Laplacian for Spectral Clustering and Seeded Image Segmentation, PhD Thesis, University of São Paulo, São Carlos, Brazil, 2014.

[21] Wallace Casaca, Afonso Paiva, Erick Gomez-Nieto, Paulo Joia, and Luis Gustavo Nonato, "Spectral image segmentation using image decomposition and inner product-based metric," Journal of Mathematical Imaging and Vision, vol. 45, no. 3, pp. 227–238, 2013.

[22] Célia A. Z. Barcelos, Maurílio Boaventura, and Evanivaldo C. Silva Jr., "A well balanced flow equation for noise removal and edge detection," IEEE Transactions on Image Processing, vol. 12, pp. 751–763, 2003.

[23] Jia-Bin Huang, Sing Bing Kang, Narendra Ahuja, and Johannes Kopf, "Image completion using planar structure guidance," ACM Trans. Graph., vol. 33, no. 4, pp. 129:1–129:10, 2014.

[24] Yonatan Wexler, Eli Shechtman, and Michal Irani, "Space-time completion of video," IEEE Trans. PAMI, vol. 29, pp. 463–476, 2007.
