Preserving coherent illumination in style transfer functions for volume rendering

I. Herrera, C. Buchart and D. Borro
CEIT and Tecnun (University of Navarra)
[email protected], [email protected], [email protected]

Abstract

Volume rendering is widely used in many fields, and several rendering algorithms have been developed for it, such as shear-warp, ray casting or splatting. Independently of the rendering method, transfer functions are usually used to map densities and other properties of the volume to colors. Style transfer functions improve on this idea by using sphere maps extracted from artwork instead of plain colors. In this paper we propose an interactive designer that allows the user to create styles in an easy way and shade them with just a color or a texture. In addition, it guarantees a coherent illumination, making it possible to use style transfer functions to achieve realistic rendering.

Keywords: Illumination, Transfer Functions, Volumetric Rendering

1

Introduction

Volumetric visualization is a powerful technique, widely used in fields such as medicine, physics and chemistry. Volume visualization can be seen as the 3D extension of plane images. Several volume rendering methods exist; many of them are discussed in detail in [5]. Independently of the rendering method used, however, additional information is usually required for a more complete visualization, since the volume data generally contains only densities. The most common technique is the transfer function (TF), which takes one or more of the values available at each voxel and assigns optical properties depending on them. The design of transfer functions is a complicated process, because voxel values and optical properties are linked non-intuitively, so it often becomes a trial-and-error task. In addition, many details have to be taken into account when defining TFs; for example, in medical visualization, the user may want to highlight some features of the volume, such as vessels, or fade away other parts, such as the skin.

As an improvement over transfer functions, Bruckner et al. introduced the concept of style transfer functions [1] to simplify the creation of illustrative transfer functions. By assigning styles captured from artwork instead of plain colors, style transfer functions offer a user-friendly way of defining illustrative transfer functions. In addition, each style carries an implicit light model that is transferred to the volume along with its colors. However, if the lighting models of the different styles used in a rendered volume are not coherent with each other, the result will not have a coherent illumination; this may be acceptable in some kinds of illustrative rendering, but not in realistic rendering. In this paper we propose a design framework that allows the user to create styles with different effects, colors and textures while guaranteeing a coherent illumination. In this way, in addition to illustrative rendering, our work allows the use of style transfer functions for realistic rendering without resorting to a third-party program to create the styles.

2

Related work

TFs can be one-dimensional, assigning optical properties based on a single value per voxel, or multi-dimensional, using several values per voxel. For example, a 1-D TF maps colors and opacities from the voxel density alone, while a 2-D TF can additionally use the gradient magnitude to differentiate material interfaces. 1-D TFs are easy to manipulate and to understand. Multi-dimensional TFs achieve better visual results, but since the design effort grows sharply with each added dimension, 2-D TFs are used far more commonly than higher-dimensional ones.
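As an illustration, a minimal 1-D transfer function can be sketched as a piecewise-linear lookup from density to an RGBA tuple. The function and control points below are illustrative, not taken from any particular system:

```python
def apply_tf(density, control_points):
    """Evaluate a 1-D transfer function given as sorted
    (density, (r, g, b, a)) control points, with linear interpolation."""
    pts = sorted(control_points)
    if density <= pts[0][0]:
        return pts[0][1]
    if density >= pts[-1][0]:
        return pts[-1][1]
    for (d0, c0), (d1, c1) in zip(pts, pts[1:]):
        if d0 <= density <= d1:
            t = (density - d0) / (d1 - d0)
            return tuple(a + t * (b - a) for a, b in zip(c0, c1))

# Example: low densities stay transparent, high densities become
# opaque white (e.g. bone in a CT scan).
tf = [(0.0, (0.0, 0.0, 0.0, 0.0)), (1.0, (1.0, 1.0, 1.0, 1.0))]
```

A 2-D TF would extend the lookup key with, for instance, the gradient magnitude, at the cost of a much harder design process.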

2.1

Transfer Function Design

Because of the aforementioned difficulty, there is an ongoing effort to create interfaces that allow an intuitive creation of multi-dimensional transfer functions [8][9]. User-aided design offers great flexibility, as the user can modify the created transfer function, generate it from scratch, or guide the algorithm in its creation. However, the need for user intervention raises the level of knowledge required to use the application. Moreover, transfer function definition is not trivial, and it may therefore turn into a trial-and-error process. To overcome this, Wu et al. [9] created a user-guided interface that lets the user design a transfer function by modifying volume-rendered images; in this way they exploited the above-mentioned strengths of user-aided design while presenting a very flexible interface. Other approaches instead generate transfer functions without user intervention. Such automatic generation usually focuses on the features [6] or contours [10] of the volume. These methods face a major challenge regarding flexibility, as the transfer functions needed in different fields are very diverse, and even within the same task the transfer function may differ completely for each volume. By detecting the desired visualization and the appropriate transfer function, these methods make volume visualization available to a wider population, since no additional skill is required to use them.

2.2

Style Transfer Functions

As mentioned before, the work of Bruckner et al. introduced the concept of style transfer functions [1], where the transfer function maps a sphere map (instead of just a color) to the voxel density and computes the final color from the style and the voxel's normal (approximated by the gradient at the voxel). Rautek et al. continued this work by adding a semantic layer to the style transfer function definition, simplifying its design [7]. Furthermore, Herghelegiu et al. proposed using style transfer functions to achieve illustrative rendering through differently lit sphere maps [4]. All the work done until now has been based on predefined styles, adding different layers to simplify their use and modification. In this paper, we propose a style editor that allows the user to create styles without having to rely on predefined ones (which are usually edited in third-party applications such as photo editors). In this way, the user can easily manipulate the lighting model, which the editor keeps coherent, or change the optical properties of the material by changing the base color or applying a texture. The editor also allows applying different effects to the style, such as contour highlighting [1] or central transparency [2].

3

Illumination

Volume rendering is, by its nature, computationally expensive, as it needs to traverse all the data in order to render it. The most used algorithm, and the one used in our examples, is ray casting [3]. This method casts one ray through the volume for each pixel of the final picture, and the final color of each pixel is the light accumulated along the ray. To achieve a more realistic and useful rendering, light absorption, emission and occlusion need to be approximated. Equation (1) describes the accumulation function along the ray, where the number of steps L has to be at least twice the number of voxels the ray traverses, α_l and c_l are the opacity and color at each sample point, and I(x) is the pixel's final color:

    I(x) = Σ_{l=0}^{L} c_l · α_l · Π_{j=0}^{l-1} (1 − α_j)        (1)

As can be seen, this function does not include the illumination of each sample point, so it needs to be added. Adding illumination implies evaluating the chosen illumination equation at every sample point, usually using the gradient as an approximation of the normal. This evidently adds considerable work for the GPU, but the results without illumination would hardly look realistic.
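The accumulation of Equation (1) can be sketched as a front-to-back compositing loop over the samples of one ray. Scalar colors are used here for brevity; an RGB implementation is identical per channel:

```python
def composite_ray(colors, alphas):
    """Accumulate I(x) = sum_l c_l * a_l * prod_{j<l} (1 - a_j)
    front to back, keeping the running transmittance."""
    intensity = 0.0
    transmittance = 1.0  # prod_{j<l} (1 - alpha_j) so far
    for c, a in zip(colors, alphas):
        intensity += c * a * transmittance
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # early ray termination
            break
    return intensity
```

Maintaining the running transmittance makes the product in Equation (1) incremental and allows the loop to stop early once the ray is effectively opaque.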

3.1

Implicit Illumination

A compromise between quality and computational cost can be reached using style transfer functions, as they remove the need for an explicit illumination calculation. Style transfer functions use sphere maps that contain an implicit illumination model. Once the style for a voxel has been determined, the gradient is used to approximate its normal in eye-space coordinates, and the normal is then used to pick the voxel's color from the map (Figure 1).

Figure 1: Lit sphere shading [1]

In this way, the illumination of the sphere is transferred to the corresponding part of the volume, each part with its own implicit illumination, and so the need for illumination calculations is removed from the ray casting. On the other hand, since every style has its own illumination model and multiple styles are usually used in one volume, there is a risk of combining styles with incoherent illumination models. This may be acceptable, or even desirable, in some kinds of illustrative rendering, but in most cases, and especially in realistic rendering, it is critically important to have a coherent illumination model: each style, even with its own independent model, must be coherent with all the other styles. In the following, we explain how we propose to solve this.
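The lit-sphere lookup described above can be sketched as follows: the eye-space normal's x and y components are remapped from [−1, 1] to [0, 1] and used as texture coordinates into the sphere map. Function and variable names are illustrative, and nearest-neighbour sampling is used for brevity:

```python
import math

def lit_sphere_lookup(normal_eye, sphere_map):
    """Pick a style color from a sphere map using the eye-space normal
    (approximated from the gradient during ray casting)."""
    n = math.sqrt(sum(c * c for c in normal_eye))
    nx, ny = normal_eye[0] / n, normal_eye[1] / n
    u = nx * 0.5 + 0.5  # [-1, 1] -> [0, 1]
    v = ny * 0.5 + 0.5
    h, w = len(sphere_map), len(sphere_map[0])
    return sphere_map[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
```

A normal facing the viewer samples the centre of the map, while grazing normals sample its rim, which is what makes rim-based effects such as contour highlighting possible.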

3.2

Achieving Coherent Illumination

As we have said before, coherent illumination is a very desirable feature in volume rendering, but direct illumination causes an undesirable computational overhead, and it is hard to maintain coherence between multiple styles. To do so, the user would need external programs, and additional skills, to edit the pictures used in the styles while keeping them all under the same implicit lighting model.

We propose an interactive designer that gives the user total control over the styles and at the same time ensures that the styles maintain a coherent illumination, independently of the changes made to them. In this way, the time and skill previously required are dramatically reduced. Additionally, as the styles are not static pictures but information rendered on the fly, the illumination of the styles can be changed interactively, making it possible to interact with the light as easily and intuitively as with direct illumination (Figure 2).

Figure 2: Sphere maps with different illumination, created with our designer

4

Styles editor

The editor we propose enables the interactive use of style transfer functions through an easily usable style design and modification interface. The interface (Figure 3) shows the existing styles in the library, as well as the specific information of the selected style. The one property that all styles share, the direction of the light, can be modified independently of the selected style (using a click-and-drag operation on the selected sphere). Although the light's direction is modified on the selected style, the change is reflected equally in all styles (shown in the left panel). In this way, the user can preview the result without having to select the other styles.

Figure 3: Style designer

Through the editor, the user can choose a texture for the selected style, so a real photograph of the material to be rendered can be used; for example, in medical visualization, the user could import a photograph of a piece of bone into the style used to render the skull. Alternatively, the user can select a plain color, yielding an easily controllable illustrative style that, even though not realistic, is illuminated in the same direction as the other styles. The user can then mix the two types of rendering without losing the realistic illumination, as can be seen in Figure 4, where the left style shades the bone in a realistic way and the right style highlights the teeth.

Figure 4: Realistic and illustrative styles

Albeit uncommon, the user might not want an illustrative style to be shaded, in order to highlight it more strongly. The editor makes this possible through direct control of the way the light interacts with the styles. As can be seen in Figure 3, the editor also gives the user control over each style's ambient, diffuse and specular values. By modifying these values the user can create shaded or unshaded styles, as well as any combination in between. This offers great flexibility when creating styles.
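As a sketch of what such a style amounts to internally, a lit sphere map can be generated procedurally from the ambient, diffuse and specular values with a Blinn-Phong model. This is an illustrative re-implementation under stated assumptions (view direction fixed at (0, 0, 1), grayscale output), not the editor's actual code:

```python
import math

def make_sphere_map(size, light_dir, ambient, diffuse, specular, shininess=32):
    """Render a grayscale lit sphere map with a simple Blinn-Phong model."""
    lx, ly, lz = light_dir
    n = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / n, ly / n, lz / n
    img = [[0.0] * size for _ in range(size)]
    for j in range(size):
        for i in range(size):
            # Map the pixel to [-1, 1]^2; pixels outside the disc stay empty.
            x = 2.0 * i / (size - 1) - 1.0
            y = 2.0 * j / (size - 1) - 1.0
            r2 = x * x + y * y
            if r2 > 1.0:
                continue
            z = math.sqrt(1.0 - r2)  # eye-space normal of the sphere surface
            ndotl = max(0.0, x * lx + y * ly + z * lz)
            # Blinn-Phong half vector, assuming view direction (0, 0, 1).
            hx, hy, hz = lx, ly, lz + 1.0
            hn = math.sqrt(hx * hx + hy * hy + hz * hz)
            ndoth = max(0.0, (x * hx + y * hy + z * hz) / hn)
            img[j][i] = min(1.0, ambient + diffuse * ndotl
                                 + specular * ndoth ** shininess)
    return img
```

Setting diffuse and specular to zero yields a flat, unshaded style; raising them produces a shaded one, mirroring the controls described above.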

Figure 5: Contour highlighted skull

In addition to the usual properties, the editor allows the inclusion of illustrative effects. One of these effects is contour highlighting, introduced by Bruckner et al. [1]: based on the projection of the normal on the sphere map, a different color can be assigned to the edge of the style in order to highlight contours in the rendered volume (Figure 5). The color of the edge is independent of the illumination properties, ensuring that even if the style is shaded, the contours are rendered with the selected color. Another effect, introduced by Buchart et al. [2], is illustrative central transparency. This effect creates a smooth void in the center of the style, so that the material is rendered transparently in the zones that face the viewer. The user can then see through outer materials in order to get a better view of the key materials, without completely eliminating the materials in between (Figure 6). This effect can be combined with edge highlighting to create purely contour styles, showing only the contour of a material (Figure 7).

Figure 6: Skull and teeth with and without central transparency

5

Implementation

The aforementioned illustrative effects have been implemented using dynamic shaders. With dynamic shaders, the need for many if statements inside the shaders is removed, reducing the burden on the GPU. To create the contour highlighting effect, the following code is inserted into the selected style's fragment shader:

    dist = distance(fragment)
    if range.min < dist < range.max then
        fragment.color = contourColor
        fragment.opacity = 1.0
    end if

When the style includes central transparency, the fragment shader uses the smoothstep function. This function creates a smooth transition from total transparency to total opacity, taking as input the distance of the pixel to the center of the sphere, with the minimum and maximum of the transparency range as references. The interpolation range is a band around the radius the user selected (a percentage of the sphere's total radius), the band width itself being a percentage of the sphere's actual size. This is the inserted code:

    min = sphere.radius * (transpRad - band)
    max = sphere.radius * (transpRad + band)
    dist = distance(fragment)
    fragment.opacity = smoothstep(min, max, dist)

The base shaders are the usual algorithms for per-pixel illumination (e.g. http://www.lighthouse3d.com/opengl/glsl/index.php?dirlightpix) and texture mapping, so they are omitted from this explanation.
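The smoothstep logic above can be sketched in plain code as follows: a GLSL-style smoothstep plus the opacity computation. Names mirror the pseudocode, but this is only an illustrative sketch, not the shader itself:

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1,
    cubic Hermite interpolation in between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def central_transparency_opacity(dist, radius, transp_rad, band):
    """Fragment opacity for central transparency: fully transparent near
    the sphere centre, fading to opaque across the interpolation band."""
    lo = radius * (transp_rad - band)
    hi = radius * (transp_rad + band)
    return smoothstep(lo, hi, dist)
```

The Hermite polynomial gives the transition a zero slope at both edges, which is why the void in the centre of the style fades out smoothly rather than with a hard ring.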

Figure 7: Only skin contour rendered

6

Conclusions

In this paper, an intuitive and interactive way to create sphere maps for style transfer functions has been presented. This application removes the need for external applications to create styles, allowing an easier use of style transfer functions and a more flexible combination of rendering styles. It allows the use of textures or colors for the styles, and the user can specify individual lighting properties while the interface maintains coherent lighting. In addition, we have implemented two effects for the styles: contour highlighting and illustrative central transparency. Lastly, the graphical interface has been designed with a user-friendly layout, making it accessible to non-expert users and applications.

7

Future Work

Although the presented interface allows the creation of realistic styles based on textures, sometimes the textures at hand may not produce the desired result, as they contain non-uniform lighting. This can lead to lighting artifacts if the lighting baked into the texture is especially strong. In the future we plan to add an option that applies a filter to the texture in order to make its lighting as uniform as possible. Due to the flexibility of style transfer functions, we also plan to implement more effects; for example, we plan to expand the contour highlighting options by allowing the user to use color gradients instead of plain colors.

Acknowledgments

We are grateful to the "Beca Universidad de Navarra" grant of the University of Navarra for funding this project.

References

[1] S. Bruckner and M. E. Gröller. Style transfer functions for illustrative volume rendering. Computer Graphics Forum, 26(3):715-724, 2007.
[2] C. Buchart, G. San Vicente, A. Amundarain, and D. Borro. Hybrid visualization for maxillofacial surgery planning and simulation. In Information Visualization (IV 2009), pages 266-273, 2009.
[3] J. Danskin and P. Hanrahan. Fast algorithms for volume ray tracing. In Workshop on Volume Visualization, pages 91-98, 1992.
[4] P. Herghelegiu and V. Manta. Volume illumination based on lit sphere maps. Technical report, "Gheorghe Asachi" Technical University of Iasi, 2008.
[5] A. Kaufman and K. Mueller. Overview of volume rendering. In The Visualization Handbook. Elsevier Academic Press, 2005.
[6] R. Maciejewski, I. Woo, W. Chen, and D. S. Ebert. Structuring feature space: A non-parametric method for volumetric transfer function generation. IEEE Transactions on Visualization and Computer Graphics, 15(6):1473-1480, 2009.
[7] P. Rautek, S. Bruckner, and M. E. Gröller. Semantic layers for illustrative volume rendering. IEEE Transactions on Visualization and Computer Graphics, 13(6):1336-1343, 2007.
[8] C. Rezk Salama, M. Keller, and P. Kohlmann. High-level user interfaces for transfer function design with semantics. IEEE Transactions on Visualization and Computer Graphics, 12(5):1021-1028, 2006.
[9] Y. Wu and H. Qu. Interactive transfer function design based on editing direct volume rendered images. IEEE Transactions on Visualization and Computer Graphics, 13(5):1027-1040, 2007.
[10] J. Zhou and M. Takatsuka. Automatic transfer function generation using contour tree controlled residue flow model and color harmonics. IEEE Transactions on Visualization and Computer Graphics, 15(6):1481-1488, 2009.
