This subtree contains volume rendering implementations.
For other documentation, see toolbox/documents/volumeRendering.
volren-6 is the sixth incarnation of a hardware-accelerated volume renderer based on the texture-mapping capabilities of Silicon Graphics' Onyx Reality Engine architecture. The renderer lets you control the mapping from scalar data value to color and opacity, and it can render voxel and geometric data together. It loads data as a collection of 2D image files in any of several formats.
Volume Rendering Primer (VRP) provides an example of hardware-accelerated
volume rendering on the Indigo² High Impact, Maximum Impact, Reality Engine,
and Infinite Reality workstations from Silicon Graphics.
VRP was written for the express purpose of providing an easy-to-understand
example of hardware-accelerated volume rendering. It covers the fundamental
approach to visualizing volumetric data with the available 3D texturing
extensions of OpenGL, the use of lookup tables for volume feature
enhancement, and the embedding of geometry within a volume using the
depth buffer.
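The fundamental approach mentioned above loads the volume once as a 3D texture (via glTexImage3D, or the EXT_texture3D extension on older IRIX systems) and then draws a stack of textured polygons sliced through it, letting the hardware do the resampling. The helper below is an illustrative sketch, not VRP's actual code: it computes the `r` texture coordinate of each of `n` axis-aligned slices, sampling the centers of `n` equal slabs of the unit texture cube.

```c
#include <stddef.h>

/* Compute the r texture coordinate for each of n axis-aligned slicing
 * polygons through the unit texture cube. Each slice samples the center
 * of one of n equal slabs, so the whole volume is covered without
 * sampling outside [0, 1]. The caller draws one textured quad per
 * coordinate, back to front, with blending enabled. */
static void slice_coords(int n, float coords[])
{
    for (int i = 0; i < n; ++i)
        coords[i] = ((float)i + 0.5f) / (float)n;
}
```

With `n = 4` this yields slices at 0.125, 0.375, 0.625, and 0.875; raising `n` trades rendering speed for less visible banding between slices.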
All code specific to volume rendering is written in OpenGL. All code specific
to embedded geometry is written in OpenInventor. The source to VRP is well
documented.
To locate tumors, doctors analyze a stack of parallel scanned images
of a patient's brain, which often requires re-slicing this volume
along another axis.
Data size imposes the most drastic constraint in medical imaging
applications: 256x256x124 or 512x512x64 16-bit datasets are fairly
common, and they often exceed texture memory capacity. The dataset
must then be divided (real-time tiling) into separate tiles, which
are processed individually; the results are assembled into the
final view.
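The tiling step described above can be sketched as follows. This is an assumed scheme, not the demo's actual tiler: it splits one axis of the volume into tiles that each fit in texture memory, with adjacent tiles sharing one voxel of overlap so trilinear interpolation stays seamless across tile boundaries.

```c
#include <stddef.h>

/* Split an axis of `dim` voxels into tiles of at most `max_tile`
 * voxels each, as needed when a dataset exceeds texture memory.
 * Adjacent tiles overlap by one voxel so that hardware trilinear
 * interpolation produces no seam at tile boundaries. Writes each
 * tile's starting voxel offset into `offsets` and returns the tile
 * count; the last tile may be smaller than `max_tile` and should be
 * clamped to the volume extent when loaded. */
static size_t tile_axis(size_t dim, size_t max_tile, size_t offsets[])
{
    size_t step = max_tile - 1;   /* usable voxels per tile (1 shared) */
    size_t count = 0;
    size_t start;

    for (start = 0; start + 1 < dim; start += step)
        offsets[count++] = start;
    return count;
}
```

For the 256x256x124 case quoted above, a 64-voxel tile limit along the 124-voxel axis yields two tiles starting at offsets 0 and 63; each tile is rendered separately and the partial images are composited into the final view.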
This demonstration prototype was developed in collaboration with
G.E. Medical Systems in France.
- For those not familiar with this form of rendering, excerpts from the beginning of Todd Kulick's Building an OpenGL Volume Renderer (volren-6/doc/how-to/article.html) follow:
The ability to produce volume-rendered images interactively opens the door to a host of new application capabilities. Volumetric data is commonplace today. Radiologists use magnetic resonance images (MRI) and computed tomography (CT) data in clinical diagnoses. Geophysicists map and study three-dimensional voxel Earth models. Environmentalists examine pollution clouds in the air and plumes underground. Chemists and biologists visualize potential fields around molecules and meteorologists study weather patterns. With so many disciplines actively engaging in the study and examination of three-dimensional data, today's software developers need to understand techniques used to visualize this data. You can use three-dimensional texture mapping, an extension of two-dimensional texture mapping, as the basis for building fast, flexible volume renderers.
Volume rendering is a powerful rendering technique for three-dimensional data volumes that does not rely on intermediate geometric representation. The elements of these volumes, the three-dimensional analog to pixels, are called voxels. The power of volume-rendered images is derived from the direct treatment of these voxels. Contrasting volume rendering with isosurface methods reveals that the latter methods are computationally expensive and show only a small portion of the data. On the other hand, volume rendering lets you display more data, revealing fine detail and global trends in the same image. Consequently, volume rendering enables more direct understanding of visualized data with fewer visual artifacts.
All volume-rendering techniques accomplish the same basic tasks: coloring the voxels; computing voxel-to-pixel projections; and combining the colored, projected voxels. Lookup tables and lighting color each voxel based on its visual properties and data value. You determine a pixel's color by combining all the colored voxels that project onto it. This combining takes many forms, often including summing and blending calculations. This variability in coloring and combining allows volume-rendered images to emphasize, among other things, a particular data value, the internal data gradient, or both at once.
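The coloring and combining steps Kulick describes can be sketched in a few lines of C. The transfer-function table and the back-to-front "over" compositing below are a minimal illustration of the technique, not code from volren-6:

```c
#include <stddef.h>

/* RGBA entry produced by a transfer-function lookup table that maps
 * each 8-bit data value to a color and an opacity. */
typedef struct { float r, g, b, a; } Rgba;

/* Combine the voxels projecting onto one pixel, back to front, with
 * the standard "over" operator:
 *   C_out = C_voxel * A_voxel + C_in * (1 - A_voxel)
 * `voxels` holds the data values along the ray in back-to-front order. */
static Rgba composite_ray(const unsigned char *voxels, size_t n,
                          const Rgba lut[256])
{
    Rgba out = {0.0f, 0.0f, 0.0f, 0.0f};
    size_t i;

    for (i = 0; i < n; ++i) {
        Rgba c = lut[voxels[i]];
        out.r = c.r * c.a + out.r * (1.0f - c.a);
        out.g = c.g * c.a + out.g * (1.0f - c.a);
        out.b = c.b * c.a + out.b * (1.0f - c.a);
        out.a = c.a       + out.a * (1.0f - c.a);
    }
    return out;
}
```

Editing the lookup table is what lets a volume renderer emphasize a particular data value: making one range of values opaque and the rest nearly transparent highlights that feature without touching the data itself. In the hardware path, the same combining is performed by the blending stage (e.g. glBlendFunc with GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) as the textured slices are drawn.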