IN TRADITIONAL COMPUTER GRAPHICS, 3D objects are created using high-level
surface representations such as polygonal meshes, NURBS (nonuniform
rational B-spline) patches, or subdivision surfaces. Using this modeling
paradigm, visual properties of surfaces, such as color, roughness, and
reflectance, are described by means of a shading algorithm, which might
be as simple as the Lambertian diffuse reflection model or as complex as
a fully featured, shift-variant, anisotropic BRDF (bidirectional reflectance distribution function). Because light transport
is evaluated only at points on the surface, these methods usually lack the
ability to account for light interaction that takes place in the atmosphere
or in the interior of an object.
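To make the surface-shading idea concrete, here is a minimal sketch (not from the text) of the Lambertian diffuse model mentioned above: reflected intensity is proportional to the cosine of the angle between the surface normal N and the light direction L, clamped at zero. The function name and parameters are illustrative, not from any particular graphics API.

```python
import math

def lambertian(normal, light_dir, albedo, light_intensity=1.0):
    """Lambertian diffuse shading: intensity = albedo * I * max(0, N . L).
    `normal` and `light_dir` are assumed to be unit-length 3-tuples."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * light_intensity * max(0.0, n_dot_l)

# Surface facing straight up, light from directly above: full contribution.
print(lambertian((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), albedo=0.8))
# Light 60 degrees off the normal: the cosine term halves the result.
tilted = (math.sin(math.pi / 3), math.cos(math.pi / 3), 0.0)
print(lambertian((0.0, 1.0, 0.0), tilted, albedo=0.8))
```

Note that the model is evaluated only at the surface point itself, which is precisely why such shading cannot capture scattering inside the object's volume.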
Compared with surface rendering, volume rendering describes a wide
range of techniques for generating images from 3D scalar data. These
techniques were originally motivated by scientific visualization, where volume
data is acquired by measurement or generated by numerical simulation.
Typical examples are medical data of the interior of the human body
obtained by computerized tomography (CT) or magnetic resonance imaging
(MRI). Other examples are data from computational fluid dynamics
(CFD), geological and seismic data, and abstract mathematical data such
as the 3D probability distribution of a random variable, implicit surfaces,
or any other 3D scalar function.
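As a small illustration (a sketch, not taken from the text), an implicit surface such as a sphere can be described by a 3D scalar function whose zero level set is the surface, and the same function can be sampled onto a regular grid, the form in which measured volume data such as CT or MRI scans is typically stored. The grid resolution and field definition below are arbitrary choices for demonstration.

```python
def sphere_field(x, y, z, radius=1.0):
    """Scalar field f(x, y, z) = x^2 + y^2 + z^2 - r^2.
    f < 0 inside the sphere, f = 0 on the surface, f > 0 outside."""
    return x * x + y * y + z * z - radius * radius

# Sample the function on a small regular grid over [-1, 1]^3,
# mimicking how volume data is stored as a 3D array of scalars.
n = 8
grid = [[[sphere_field(-1 + 2 * i / (n - 1),
                       -1 + 2 * j / (n - 1),
                       -1 + 2 * k / (n - 1))
          for k in range(n)] for j in range(n)] for i in range(n)]
```

A volume renderer then generates images directly from such a grid, rather than from an explicit surface mesh.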
It did not take long for volume-rendering techniques to find their way
into visual arts. Artists were impressed by the expressiveness and beauty of
the resulting images. With the evolution of efficient rendering techniques,
volume data is also becoming increasingly important for applications in
computer games. Volumetric models are ideal for describing fuzzy objects,
such as fluids, gases, and natural phenomena like clouds, fog, and fire.