Rendering Fur in “Life of Pi”

Ivan Neulander (Google Inc.)

We present several technical advancements developed at Rhythm & Hues for the efficient rendering of photorealistic fur in Ang Lee’s Oscar-winning feature film Life of Pi. We drew heavily on this work to stereoscopically render the tiger Richard Parker and several other animals, all with sufficient realism and aesthetic control to capture the director’s ambitious vision. We summarize existing work on which our implementation builds, and describe in detail some recent improvements in the areas of hair shading, performance optimizations, and post-rendering tools for motion blur and stereo image synthesis.

Toshi Kato, Kevin Beason (Rhythm & Hues Studios)

A large proportion of render time was spent testing occlusion along hair-based gather rays, with layers of semitransparent fur being the most common and expensive occluder. We employed two techniques to accelerate this. The first was a modified form of the volumetric occlusion approximation described in [Neulander 2010], which reliably identified rays that were likely to be blocked by nearby skin, allowing the renderer to avoid tracing them; the remaining non-skin-bound rays were attenuated using an accurate raytraced estimate of their occlusion. The second optimization leveraged our renderer’s dual representation of fur as both scanline triangles and raytraced hair primitives, which allowed us to use screen-door transparency to accelerate the ray tracing: for ray-occluding fur, semitransparent tube primitives were made opaque and their thicknesses were correspondingly reduced so as to preserve coverage. These operations were applied at the control vertices of each strand, allowing for precise lengthwise variation in opacity and thickness. By making these strands thin and opaque, we eliminated the need to trace multiple levels of transparency rays through them, and we also reduced the number of ray intersections by shrinking the ray targets. This produced a dramatic, artifact-free speed increase for secondary gather rays, while preserving the desired look of true transparency for primary rays.
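The coverage-preserving width reduction above can be sketched as follows. This is a minimal illustration, not production code; the linear width-times-opacity rule is an assumption about how coverage was preserved, chosen because an opaque tube of width a·w blocks the same expected fraction of uniformly distributed rays as a tube of width w and opacity a.

```python
def opaque_equivalent_width(width, opacity):
    """Shrink a tube's width so that an opaque tube blocks the same
    expected fraction of rays as the original semitransparent one.

    A ray crossing a tube of width w and opacity a is blocked with
    probability a, so an opaque tube of width a*w yields the same
    expected coverage for uniformly distributed rays.
    """
    if not 0.0 < opacity <= 1.0:
        raise ValueError("opacity must be in (0, 1]")
    return width * opacity

def convert_strand(control_vertices):
    """Apply the conversion per control vertex: (width, opacity) pairs
    become (reduced width, fully opaque)."""
    return [(opaque_equivalent_width(w, a), 1.0) for (w, a) in control_vertices]
```

Because the conversion is applied per control vertex, lengthwise opacity ramps (e.g. fading tips) translate directly into lengthwise tapering of the opaque proxy.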

Hair Shading

Life of Pi was our first feature film to rely almost exclusively on area lights. We used multiple importance sampling (MIS) to combine light-based and BSDF-based sample rays, deploying our adaptive importance sampler [Neulander 2011] to weed out occluded or otherwise unimportant light paths during heavy ray gathering. We combined an HDRI-mapped infinite sphere, which captured distant lighting, with smaller rectangular and finite-sphere lights that modeled nearby sources of illumination. Image-based importance sampling, based on precomputed two-dimensional CDF tables, was critical to rendering low-noise, high-contrast lighting from the HDRIs, which were generally sampled at full resolution in order to preserve crisp shadow detail. The BSDF used for our hair strands was based on the cone-shell model described in [Neulander 2010], which strikes a good balance of forward and backward scattering and has a relatively simple derivation. For Life of Pi we implemented two enhancements: 1) a secondary specular lobe that could be shifted along a hair strand and whose shape and color could vary independently of the primary specular lobe, mimicking the TRT component of Marschner’s model; and 2) a more aggressive BSDF-based importance sampler, using a precomputed CDF table to approximate the Wigner semicircle distribution (the optimal choice for this BSDF). When used with MIS for specular reflections, this produced less noise than the flatter piecewise-linear importance PDF originally proposed.
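The tabulated inverse-CDF sampler for the Wigner semicircle distribution can be sketched as below. The table size, linear interpolation scheme, and unit radius are illustrative assumptions, not the production values; the closed-form CDF used to build the table follows from integrating the semicircle density 2/(πR²)·√(R²−x²) on [−R, R].

```python
import bisect
import math

def build_semicircle_cdf(radius=1.0, n=1024):
    """Tabulate the CDF of the Wigner semicircle distribution,
    pdf(x) = 2/(pi R^2) * sqrt(R^2 - x^2) on [-R, R].

    Closed form: F(x) = 1/2 + x*sqrt(R^2 - x^2)/(pi R^2) + asin(x/R)/pi.
    """
    xs = [-radius + 2.0 * radius * i / (n - 1) for i in range(n)]
    r2 = radius * radius
    cdf = [0.5
           + x * math.sqrt(max(r2 - x * x, 0.0)) / (math.pi * r2)
           + math.asin(max(-1.0, min(1.0, x / radius))) / math.pi
           for x in xs]
    return xs, cdf

def sample_semicircle(u, xs, cdf):
    """Map a uniform sample u in [0, 1) to x by inverting the tabulated
    CDF, linearly interpolating between adjacent table entries."""
    i = bisect.bisect_left(cdf, u)
    if i <= 0:
        return xs[0]
    if i >= len(xs):
        return xs[-1]
    t = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
    return xs[i - 1] + t * (xs[i] - xs[i - 1])
```

Since the table is precomputed once per BSDF configuration, each sample costs only a binary search plus one interpolation, which keeps the sampler cheap inside heavy ray gathering.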

Renderer Optimizations

We made extensive use of the hair reflection cache described in [Neulander 2010], which sparsely stores primary hair shading samples along each strand in a temporally coherent yet refinable fashion. We extended this cache to store individual contributions from various light paths, and optimized its memory layout using clustered allocations to reduce fragmentation. We also implemented a read-copy-update mechanism to minimize thread locking while accessing the cache. The use of this cache cut our render times in half while simultaneously reducing noise relative to uncached hair shading.
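The sparse-along-the-strand storage with interpolated lookup can be sketched as follows. The keying scheme, the sorted-list layout, and the linear interpolation are assumptions for illustration only; the production cache additionally handled clustered allocation, per-light-path contributions, and lock-free (RCU) access, all omitted here.

```python
class StrandShadingCache:
    """Illustrative sparse shading cache: each strand stores a few
    (t, value) samples at parametric positions t in [0, 1], and lookups
    interpolate between the nearest cached samples."""

    def __init__(self):
        self._samples = {}  # strand_id -> sorted list of (t, value)

    def store(self, strand_id, t, value):
        samples = self._samples.setdefault(strand_id, [])
        samples.append((t, value))
        samples.sort(key=lambda s: s[0])

    def lookup(self, strand_id, t):
        """Return interpolated shading at parameter t, clamping beyond the
        outermost samples; None signals a cache miss (shade and store)."""
        samples = self._samples.get(strand_id)
        if not samples:
            return None
        if t <= samples[0][0]:
            return samples[0][1]
        if t >= samples[-1][0]:
            return samples[-1][1]
        for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0)
                return v0 * (1 - w) + v1 * w
```

A cache miss triggers a full shading evaluation whose result is stored back, so the cache refines itself where strands are actually sampled.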

Postprocessing

Due to the immense geometric complexity of our fur, computing motion blur in the renderer by explicitly sampling visibility over time would have greatly increased our render times. Instead, we made extensive use of our pixmotor tool [Neulander 2007] to synthesize motion blur by postprocessing static images using motion vectors, which our renderer outputs efficiently. For Life of Pi, we redesigned pixmotor’s high-resolution work buffer to reduce its memory footprint, a change necessitated by the high number of color layers stored in each rendered image (containing individual contributions of various lights and various scatter events per light). Our solution was to store normalized device coordinates in the work buffer, rather than explicit colors as before. While this added a level of indirection in accessing pixel colors, it ultimately sped up pixmotor by significantly reducing memory bandwidth, especially with multithreading.

To improve efficiency on stereoscopic projects such as Life of Pi, we extended pixmotor to synthesize right-eye images from rendered left-eye images, a postprocess we labeled pixstereo. Primarily horizontal motion vectors were computed accurately from the parallax between the left and right cameras, and these were used in a specialized single-iteration mode of pixmotor to generate a pixel-shifted image as seen from the right-eye camera. We applied more aggressive hole-filling algorithms here than with pixmotor, and used an even higher-resolution work buffer (often 6x) to preserve quality. The above memory optimization was crucial, since storing all color channels at 6x resolution would have exceeded our RAM budget. For some rendered elements, the pixstereo images were of sufficient quality for final shots, but most were suitable only for preview. However, with runtimes under a minute, the right-eye images effectively came for free, allowing many iterations of animation and lighting to be viewed stereoscopically without the added cost of right-eye rendering.
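The parallax-driven pixel shift at the heart of pixstereo can be sketched as below. This assumes parallel (shifted-sensor) stereo cameras, with focal length expressed in pixels and depth available per pixel; those inputs, the function names, and the single-scanline forward warp are illustrative assumptions. The production tool also performs hole filling and occlusion-aware compositing in a high-resolution work buffer, omitted here.

```python
def disparity_px(depth, focal_px, baseline):
    """Horizontal parallax (in pixels) of a point at the given depth,
    for parallel stereo cameras separated by `baseline`."""
    return focal_px * baseline / depth

def shift_row(row, depths, focal_px, baseline):
    """Forward-warp one left-eye scanline to the right eye: each pixel
    shifts left by its disparity. Unfilled pixels remain None (holes to
    be filled by a later pass)."""
    out = [None] * len(row)
    for x, (color, z) in enumerate(zip(row, depths)):
        xr = x - int(round(disparity_px(z, focal_px, baseline)))
        if 0 <= xr < len(out):
            out[xr] = color  # a fuller version would let nearer pixels win
    return out
```

Because disparity falls off as 1/depth, distant fur barely moves while foreground strands shift by several pixels, which is why the high-resolution work buffer matters for preserving strand-level detail.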

References

Neulander, I. 2007. Pixmotor: A Pixel Motion Integrator. In ACM SIGGRAPH 2007 Sketches, SIGGRAPH ’07.
Neulander, I. 2010. Fast Furry Ray Gathering. In ACM SIGGRAPH 2010 Talks, SIGGRAPH ’10.
Neulander, I. 2011. Adaptive Importance Sampling for Multi-Ray Gathering. In ACM SIGGRAPH 2011 Talks, SIGGRAPH ’11.
