Description

In this dissertation, we focus on physically-based rendering, which synthesizes realistic images from 3D models and scenes. State-of-the-art rendering still struggles with two fundamental challenges: realism and speed. Rendered results often look artificial and overly perfect, and the rendering process is slow for both offline and interactive applications. Moreover, better realism and faster speed are inherently in tension, because computational complexity increases substantially when rendering higher-fidelity, detailed results. We address both ends of the realism-speed spectrum: we introduce detailed rendering and appearance modeling to accurately represent and reproduce the rich visual world from the micron level up to overall appearance, and we combine sparse ray sampling with fast high-dimensional filtering to achieve real-time performance.

To make rendering more realistic, our first claim is that we need details. However, rendering a complex surface with many details is far from easy. Traditionally, surface microstructure is approximated with a smooth normal distribution, but this ignores details such as glinty effects that are easily observed in the real world. While modeling the actual surface microstructure is possible, the resulting rendering problem is prohibitively expensive with Monte Carlo point sampling: the energy is concentrated in tiny highlights that occupy a minuscule fraction of the pixel. We instead compute the accurate solution that Monte Carlo would eventually converge to, using a completely different deterministic approach (Chapter 3). Our method considers the highly complicated distribution of normals on the surface patch seen through a single pixel. We show how to evaluate this distribution efficiently with closed-form solutions, assuming the surface patch is made up of either 2D planar triangles [147] or 4D Gaussian elements [145], and we extend the method to accurately handle wave optics [148]. Our results show complicated, temporally varying glints from materials such as bumpy plastics, brushed and scratched metals, metallic paint, and ocean waves.

Rendering details poses many challenges, but in the above we assumed that we already know how a surface reflects light. Many natural materials in the real world, however, interact with light in ways that are not precisely understood. To render these materials realistically, we need accurate appearance/reflectance models, derived from their microstructure, that define their optical behavior. We demonstrate this by introducing a reflectance model for animal fur in Chapter 4. Rendering photo-realistic animal fur is a long-standing problem in computer graphics. Considerable effort has gone into modeling the geometric complexity of human hair, but the appearance/reflectance of fur fibers is not well understood. Based on anatomical literature and measurements, we develop a double cylinder model for the reflectance of a single fur fiber: an outer cylinder represents the biological observation of a cortex covered by multiple cuticle layers, and an inner cylinder represents the scattering interior structure known as the medulla, which is often absent from human hair fibers. We validate our physical model with measurements on real fur fibers, and introduce the first database in computer graphics of reflectance profiles for nine fur samples.
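The central quantity in Chapter 3 is the distribution of normals over a pixel footprint. As a rough illustration of the Gaussian-element variant, the sketch below accumulates footprint-weighted contributions from a set of position-normal Gaussian elements. The isotropic, separable covariances, the dropped normalization constants, and all names are simplifications assumed for illustration; they are not the closed-form 4D solution of [145].

```cpp
// Minimal sketch of evaluating a per-pixel normal distribution from
// Gaussian surface elements. Assumptions (not from the dissertation):
// isotropic, separable position/normal covariances and unnormalized
// Gaussians; the actual method integrates full 4D Gaussians in closed form.
#include <cmath>
#include <vector>

struct GaussianElement {
    float u, v;      // mean position in texture space
    float nx, ny;    // mean (projected) normal at that position
    float sigmaP;    // positional standard deviation
    float sigmaN;    // normal-space standard deviation
};

// Unnormalized isotropic 2D Gaussian.
static float gauss2(float dx, float dy, float sigma) {
    return std::exp(-(dx * dx + dy * dy) / (2.0f * sigma * sigma));
}

// Approximate per-pixel NDF value: how much of the pixel footprint
// (a Gaussian at (cu, cv) with std sigmaF) carries a normal near the
// query direction (qx, qy).
float evalPixelNDF(const std::vector<GaussianElement>& elems,
                   float cu, float cv, float sigmaF,
                   float qx, float qy) {
    float sum = 0.0f;
    for (const GaussianElement& e : elems) {
        // The product of two positional Gaussians integrates to a Gaussian
        // in the difference of their means, with summed variances.
        float sigmaUV = std::sqrt(sigmaF * sigmaF + e.sigmaP * e.sigmaP);
        float footprint = gauss2(e.u - cu, e.v - cv, sigmaUV);
        // Weight by how close the element's normal is to the query normal.
        float normalMatch = gauss2(e.nx - qx, e.ny - qy, e.sigmaN);
        sum += footprint * normalMatch;
    }
    return sum;
}
```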
For efficient rendering, we develop a method to precompute 2D medulla scattering profiles and to analytically approximate our reflectance model with factored lobes [144]. We then develop a number of optimizations that improve efficiency and generality without compromising accuracy [141]. We also present the first global illumination model for fur, based on dipole diffusion for subsurface scattering: it approximates light bouncing between individual fur fibers by modeling the complex light and fur interactions as subsurface scattering, and uses a simple neural network to convert fur fiber properties into scattering parameters [142].

However, even without such details, current rendering suffers from low performance with state-of-the-art Monte Carlo ray tracing. Physically correct, noise-free images can require hundreds or thousands of ray samples per pixel and take a long time to compute. Recent approaches have exploited sparse sampling and filtering; the filtering is either fast (axis-aligned) but requires more input samples, or needs fewer input samples but is very slow (sheared). We present a new approach for fast sheared filtering on the GPU in Chapter 5 [143]. Our algorithm factors the 4D sheared filter into four 1D filters. We derive complexity bounds for our method, showing that the per-pixel complexity is reduced from O(n^{2}l^{2}) to O(nl), where n is the linear filter width (the filter size is O(n^{2})) and l is the (usually very small) number of samples in each dimension of the light or lens per pixel (so the samples per pixel are l^{2}). This reduces the overhead of sheared filtering dramatically. We demonstrate rendering of depth of field, soft shadows, and diffuse global illumination at interactive speeds.
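The precomputed medulla scattering profiles of [144] are tabulated so they can be fetched cheaply at render time. The sketch below shows only the generic mechanics of a 2D table lookup with bilinear interpolation; the parameterization by two normalized angles, the table layout, and the names are illustrative assumptions, not the actual data format used in the dissertation.

```cpp
// Minimal sketch of a precomputed 2D scattering-profile lookup with
// bilinear interpolation. The parameterization (two angles mapped to
// [0,1]) and the table contents are assumptions for illustration only.
#include <algorithm>
#include <vector>

struct Profile2D {
    int resU = 0, resV = 0;
    std::vector<float> data;  // row-major, resU * resV precomputed values

    // Bilinearly interpolate the profile at normalized coordinates (u, v).
    float lookup(float u, float v) const {
        u = std::clamp(u, 0.0f, 1.0f) * (resU - 1);
        v = std::clamp(v, 0.0f, 1.0f) * (resV - 1);
        int u0 = static_cast<int>(u), v0 = static_cast<int>(v);
        int u1 = std::min(u0 + 1, resU - 1);
        int v1 = std::min(v0 + 1, resV - 1);
        float fu = u - u0, fv = v - v0;
        auto at = [&](int x, int y) { return data[y * resU + x]; };
        float top = at(u0, v0) * (1 - fu) + at(u1, v0) * fu;
        float bot = at(u0, v1) * (1 - fu) + at(u1, v1) * fu;
        return top * (1 - fv) + bot * fv;
    }
};
```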
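The global illumination model in [142] converts fur fiber properties into the parameters of a dipole subsurface-scattering model with a small neural network. The sketch below is only a generic forward pass of a tiny fully connected network; the layer sizes, the choice of inputs (e.g. medulla size, cuticle roughness, absorption), and the outputs (reduced scattering and absorption coefficients) are assumptions for illustration, not the trained network from the dissertation.

```cpp
// Minimal sketch of a small fully connected network mapping fur fiber
// properties to dipole scattering parameters. Layer sizes, inputs, and
// outputs are illustrative assumptions, not the trained network from [142].
#include <algorithm>
#include <vector>

struct DenseLayer {
    int in = 0, out = 0;
    std::vector<float> W;  // out * in weights, row-major
    std::vector<float> b;  // out biases

    std::vector<float> forward(const std::vector<float>& x, bool relu) const {
        std::vector<float> y(out, 0.0f);
        for (int o = 0; o < out; ++o) {
            float s = b[o];
            for (int i = 0; i < in; ++i) s += W[o * in + i] * x[i];
            y[o] = relu ? std::max(s, 0.0f) : s;
        }
        return y;
    }
};

// fiber = { medulla size, cuticle roughness, absorption, ... } (assumed).
// Returns { sigma_s_prime, sigma_a } for a dipole diffusion model (assumed).
std::vector<float> fiberToScattering(const DenseLayer& hidden,
                                     const DenseLayer& output,
                                     const std::vector<float>& fiber) {
    std::vector<float> h = hidden.forward(fiber, /*relu=*/true);
    return output.forward(h, /*relu=*/false);
}
```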
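The key to the speedup in Chapter 5 is separability: rather than evaluating a dense higher-dimensional filter directly, it is factored into a sequence of 1D passes, each with cost linear in the filter width. The sketch below demonstrates only the familiar 2D analogue, a separable Gaussian blur applied as a horizontal pass followed by a vertical pass, reducing per-pixel work from O(n^{2}) to O(n); the actual 4D sheared filter of [143] applies the same idea as four 1D passes, with details not reproduced here.

```cpp
// Minimal sketch of separable filtering: a 2D Gaussian blur applied as two
// 1D passes, so per-pixel cost drops from O(n^2) to O(n). This illustrates
// only the factorization idea; the 4D sheared filter of Chapter 5 is not
// reproduced here.
#include <algorithm>
#include <cmath>
#include <vector>

using Image = std::vector<std::vector<float>>;  // [y][x]

static std::vector<float> gaussianKernel(int radius, float sigma) {
    std::vector<float> k(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        k[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        sum += k[i + radius];
    }
    for (float& w : k) w /= sum;  // normalize so weights sum to 1
    return k;
}

// One 1D pass along x (horizontal == true) or along y; O(n) work per pixel.
static Image pass1D(const Image& src, const std::vector<float>& k, bool horizontal) {
    int h = static_cast<int>(src.size()), w = static_cast<int>(src[0].size());
    int radius = static_cast<int>(k.size()) / 2;
    Image dst(h, std::vector<float>(w, 0.0f));
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            for (int i = -radius; i <= radius; ++i) {
                int sx = horizontal ? std::clamp(x + i, 0, w - 1) : x;
                int sy = horizontal ? y : std::clamp(y + i, 0, h - 1);
                dst[y][x] += k[i + radius] * src[sy][sx];
            }
        }
    }
    return dst;
}

// Two 1D passes replace one O(n^2)-per-pixel 2D convolution.
Image separableBlur(const Image& img, int radius, float sigma) {
    std::vector<float> k = gaussianKernel(radius, sigma);
    return pass1D(pass1D(img, k, /*horizontal=*/true), k, /*horizontal=*/false);
}
```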
