Section 4 IMPACT Graphics Subsystem
After being projected to the screen by the Geometry Subsystem, all vertices retain 8 bits of fractional positioning information instead of being coerced to integer pixel coordinates. This gives an accurate description of each primitive's position in subpixel space. Without this feature, primitives would be rendered at slightly incorrect positions and would jitter as they move, seriously degrading the anti-aliasing features described below.
4.4.2 Blending
IMPACT graphics supports both source and destination alpha for complete compositing capabilities.
The pixel data in the framebuffer is replaced with a weighted average of itself and the pixel data being drawn. The user selects the function that controls the source and destination factors used in the blend.
One common blend operation uses the alpha component of the pixel being drawn as the source factor and one minus alpha as the destination factor. The greater the alpha value, the more weight is given to the incoming data in the blend. This method is used to draw anti-aliased lines and to generate transparencies. It can be used any time subpixel coverage is demanded.
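In the Open GL Graphics Library API this common case corresponds to selecting the source factor GL_SRC_ALPHA and the destination factor GL_ONE_MINUS_SRC_ALPHA. The per-component arithmetic can be sketched as follows (a simplified model on normalized floating-point values; the function name is illustrative, not part of the IMPACT interface):

```python
def blend_src_alpha(src, dst):
    """Blend an incoming RGBA pixel over a framebuffer RGBA pixel.

    Source factor = source alpha, destination factor = 1 - source alpha,
    i.e. the glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) case.
    All components are normalized floats in [0, 1]; real hardware works
    on fixed-point framebuffer values.
    """
    a = src[3]
    return tuple(a * s + (1.0 - a) * d for s, d in zip(src, dst))
```

With alpha = 1 the incoming pixel replaces the framebuffer pixel entirely; with alpha = 0.5 the two contribute equally, which is the behavior used for anti-aliased line coverage and transparency.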
IMPACT also supports a number of extended blending modes. One of these sets the blend equation so that each component of the resultant color is the minimum or maximum of the corresponding components of the source and destination colors. This is particularly useful in medical volume rendering for calculating the maximum intensity projection (MIP). Another blending equation implemented on IMPACT allows any bitwise logical operation to be applied to the source and destination color components. Two more blend equations allow subtraction instead of addition when combining the source and destination components. The "subtract" equation subtracts the product of the destination factor and destination color from the product of the source factor and source color; "reverse subtract" does the opposite, subtracting the source product from the destination product. Blend subtraction is useful for finding differences between two images. Finally, a constant color or constant alpha can be chosen as the source or destination factor, allowing for efficient fades and blends between two images.
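The extended blend equations above reduce to simple per-component arithmetic. The following sketch models them on normalized floating-point components (the names and dispatch structure are illustrative, not the hardware interface):

```python
def blend(equation, s, d, sf=1.0, df=1.0):
    """One color component under the extended blend equations.

    s and d are the source and destination components; sf and df are the
    user-selected source and destination blend factors.
    """
    if equation == "add":                # the standard blend equation
        return sf * s + df * d
    if equation == "subtract":           # src*sf - dst*df
        return sf * s - df * d
    if equation == "reverse_subtract":   # dst*df - src*sf
        return df * d - sf * s
    if equation == "min":                # per-component minimum
        return min(s, d)
    if equation == "max":                # e.g. maximum intensity projection
        return max(s, d)
    raise ValueError(equation)
```

For a MIP volume rendering, each slice is drawn with the "max" equation so the brightest sample along the view ray survives in the framebuffer.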
4.4.3 Point Anti-Aliasing
To render an anti-aliased point, a 2 x 2 grid of pixels is used to approximate the area covered by a filtered point. The four pixels are given blend weights proportional to the distance from their pixel centers to the actual point location in sub-pixel space.
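The weight computation can be sketched as a bilinear falloff over the 2 x 2 footprint (a simplified model of the hardware filter; the function name is illustrative):

```python
def point_coverage_weights(fx, fy):
    """Blend weights for the 2 x 2 pixel footprint of an anti-aliased point.

    fx, fy give the point's fractional position within the upper-left
    pixel (0 <= fx, fy < 1).  Each weight falls off linearly with the
    distance from that pixel's center to the point, so the four weights
    always sum to 1.
    """
    return {
        (0, 0): (1 - fx) * (1 - fy),
        (1, 0): fx * (1 - fy),
        (0, 1): (1 - fx) * fy,
        (1, 1): fx * fy,
    }
```

A point exactly on a pixel center gives that pixel full weight; a point midway between centers spreads its energy across the grid, which is what eliminates the popping of unfiltered points.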
4.4.4 Line Anti-Aliasing
Lines are anti-aliased by drawing a 2-pixel-wide line with higher weights for pixels closer to the line in the minor axis, and lower values toward the outer pixels. Effectively, we are approximating the location of a line by a wide line that is filtered in the minor axis. Similar to points, RGB lines are blended into the framebuffer by the weights. For color indexed anti-aliased lines, instead of generating a weight for blending, the hardware substitutes the lower 4 bits of the color index value. The new color then indexes into a ramp in the color look-up tables.
Slope Correction
IMPACT graphics automatically adjusts pixel intensity so that a line appears uniform at all angles.
Endpoint Filtering
So far, the weights of pixels that make up anti-aliased lines have been adjusted only in the minor axis. The endpoints of the lines must also be adjusted in the major axis to avoid popping from one pixel to the next. To correct this, the hardware uses the subpixel information in the major axis to adjust the intensity of the endpoint color. This way the apparent endpoint moves gradually from one pixel to the next.
4.4.5 Accumulation Buffer
IMPACT graphics offers a 64-bit software accumulation buffer to combine, or accumulate, a set of scenes. The Open GL Graphics Library API also allows for a weighted blend of each scene into the accumulated image, with the weight of each scene defined by the user. These weights can be used with other features of the graphics subsystem (e.g., the projection matrix) to define User-Programmed Filter Functions.
Progressive Refinement
As each frame is accumulated into the accumulation buffer, a more accurately sampled image is produced. The user can choose to render fewer frames to support real-time constraints, or to render many frames to obtain a high-quality image.
Multi-Pass Spatial Anti-Aliasing
Multi-Pass Spatial Anti-Aliasing is done by rendering the same objects over several frames while moving them slightly in screen space. By jittering the subpixel offsets (via the projection matrix) and accumulating the scenes together, an anti-aliased image is produced. Furthermore, the user can choose a filter function that defines the weight of each pass.
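The accumulation loop can be sketched as follows, with images modeled as flat lists of floats and `render` standing in for a full redraw of the scene at a given subpixel jitter (all names are illustrative; in OpenGL terms each pass is a glAccum(GL_ACCUM, w) call):

```python
def accumulate(render, jitters, weights):
    """Accumulate one scene rendered at several subpixel offsets.

    render(jitter) returns an image (a flat list of floats) drawn with
    the given (dx, dy) subpixel offset.  weights define the filter
    function chosen by the user and should sum to 1.
    """
    acc = None
    for jitter, w in zip(jitters, weights):
        frame = render(jitter)
        if acc is None:
            acc = [w * p for p in frame]          # first pass initializes
        else:
            acc = [a + w * p for a, p in zip(acc, frame)]  # later passes add
    return acc
```

Because the weights sum to 1, a scene that is identical in every pass comes back unchanged, while edges that cover pixels only partially converge to their true filtered coverage.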
Optical Effects
By modifying the projection matrix as images are accumulated, viewing the scene from various points across the aperture of a lens, the sense of depth of field is created. Objects that are further from the focal plane of the lens are blurred while closer objects are made sharper.
Convolutions
An image can be quickly filtered using the accumulation buffer. Since the user has control of the weighted accumulation of each image, and the image can be moved about on screen in multiples of pixel coordinates, the accumulation buffer can be used to convolve the image using many filtering techniques.
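As a sketch of the idea, the following one-dimensional model redraws an image at whole-pixel shifts and sums the passes with kernel weights, which is exactly a convolution (names are illustrative; real use would be two-dimensional):

```python
def convolve_via_accumulation(image, kernel):
    """1-D model of convolution with the accumulation buffer.

    The image is redrawn shifted by whole pixels, each pass weighted by
    one kernel tap, and the passes are summed.  kernel maps integer
    offsets to weights; pixels shifted in from outside are treated as 0.
    """
    n = len(image)
    acc = [0.0] * n
    for offset, weight in kernel.items():
        for i in range(n):
            j = i + offset
            if 0 <= j < n:
                acc[i] += weight * image[j]
    return acc
```

A three-tap box-like kernel, for example, spreads a single bright pixel into its neighbors exactly as a blur filter would.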
Convolution is also supported directly in the GE11 ASIC via a pixel operation for image processing.
Orthogonality
The accumulation buffer provides a solution for the problem of spatial aliasing, motion-blur, depth of field, and penumbra. Another feature of the accumulation buffer is that all these techniques can be used together in any combination to render a high-quality image.
4.4.6 Lighting Features
The IMPACT architecture supports a wide range of lighting capabilities to enable the realistic rendering of geometric primitives. Lighting effects are computed on a per-vertex basis (Phong lighting) and are thus supported in the Geometry Engines.
IMPACT Graphics supports all of the following Open GL Graphics Library API lighting capabilities in hardware.
Light Sources
Up to eight light sources may be used simultaneously. The user can specify the color and position of each light source.
Surface Properties
The Open GL Graphics Library API allows the user to configure a number of surface properties to achieve a high degree of realism. Specifically, the user can define the emissivity of a surface and its ambient, diffuse, or specular reflectivity, as well as its transparency coefficients. A shininess coefficient is provided to specify how reflective an object is. The Command Processor and Geometry Engines were specifically designed so that surface properties can be modified on a per-vertex basis very quickly. This feature is particularly useful for scientific visualization. For example, an aeronautical engineer can change the diffuse reflectance at every vertex to show the stress contour across an airplane wing.
Two-Sided Lighting
The user can specify different surface properties for the front and back sides of geometric primitives to display objects whose inside and outside colors differ. This obviates the need to specify and render two separate primitives.
4.4.7 Local Light and Viewer Positioning
Traditionally, hardware-supported lighting models assume that the viewer and light sources are positioned infinitely far from the object being illuminated. Although the positioning of the viewer and/or light sources at a finite distance from the object can enhance the realism of the scene, these models are often avoided because of costly inverse square root operations. The IMPACT Geometry Engines include special VLSI support for computing inverse square roots, thus speeding local lighting calculations enormously.
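The reason local lights require inverse square roots is that the vertex-to-light direction must be renormalized at every vertex; with an infinite light that direction is a constant. A sketch of the per-vertex computation (illustrative only):

```python
import math

def local_light_direction(vertex, light_pos):
    """Unit vector from a vertex toward a local (finite-distance) light.

    The 1/sqrt(...) normalization is the operation the IMPACT Geometry
    Engines accelerate with dedicated VLSI support.
    """
    dx, dy, dz = (l - v for l, v in zip(light_pos, vertex))
    inv_len = 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx * inv_len, dy * inv_len, dz * inv_len)
```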
4.4.8 Atmospheric Effects
IMPACT graphics supports fast per-pixel fog calculations in hardware. The fog factor is computed with 8 bits of precision during span iteration and is then blended with the calculated color for each pixel.
The Open GL Graphics Library API simulates those fog and haze effects required for visual simulation applications by blending the object color with a user-specified fog color. The user enjoys control over the fog density through the Open GL Graphics Library interface. This functionality can also be used for depth cuing. With IMPACT, all fog functions (linear, exponential and exponential squared) can be used at the same level of performance.
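Each fog function computes a blend factor f from the eye-space distance, and the fogged color is then f times the object color plus (1 - f) times the user-specified fog color. A sketch of the factor computation in the style of the OpenGL fog modes (the function name and default parameters are illustrative):

```python
import math

def fog_factor(mode, z, density=1.0, start=0.0, end=1.0):
    """Fog blend factor f in [0, 1] for eye distance z.

    "linear" ramps from 1 at start to 0 at end (also usable for depth
    cuing); "exp" and "exp2" attenuate exponentially with distance.
    """
    if mode == "linear":
        f = (end - z) / (end - start)
    elif mode == "exp":
        f = math.exp(-density * z)
    elif mode == "exp2":
        f = math.exp(-(density * z) ** 2)
    else:
        raise ValueError(mode)
    return min(1.0, max(0.0, f))
```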
4.4.9 Texture Mapping
Motivation
IMPACT supports high-speed hardware texture mapping that is capable of generating images of the same high quality as those produced on Silicon Graphics' high-end visualization products. Texture mapping has traditionally been used in fields such as visual simulation and computer animation to enhance scene realism. More recently, its use has spread to many other markets (see Section 1.2, "Markets").
Quality
It is essential that errors be minimized during the texture-mapping process. Perspective correction of texture coordinates is performed during the scan-conversion process to prevent textures from "swimming" as an object moves in perspective. IMPACT supports perspective correction of one- and two-dimensional textures. Perspective correction and mipmapping of three-dimensional (volume) textures were too expensive to implement in hardware and were judged to be of limited utility in the applicable markets.
Texture aliasing is minimized by filtering the texture for each pixel textured. Without filtering, textures on surfaces appear to sparkle as the surfaces move. Filtering is accomplished by interpolating the mipmaps stored in TRAM; filtered representations of a texture are precomputed at several levels of resolution. All core OpenGL 1.0 texture filtering modes are supported, including trilinear mipmapping. Quadlinear filters for four-dimensional textures (used only for pixel-texture color conversions) are possible by using multiple passes and alpha blending. IMPACT supports textures with 1, 2, 3, or 4 components (luminance, luminance-alpha, RGB, or RGBA), each component having 4, 8, or 12 bits of depth. The texture mezzanine option is necessary to support some 8- and 12-bit texel modes that are not available in the standard configuration, namely three- and four-component 8-bit texels and two-, three-, and four-component 12-bit texels.
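Trilinear mipmapping blends two linear filters taken from adjacent precomputed resolution levels. The following one-dimensional sketch shows the structure of the computation (a simplified software model, not the hardware data path):

```python
def trilinear_sample(mip_levels, u, lod):
    """Trilinear mipmapping on a 1-D texture.

    mip_levels[k] is the texture at level k (level 0 finest).  The
    sample linearly filters within the two nearest levels, then blends
    between them by the fractional level of detail (lod).
    """
    def linear(texels, u):
        # Linear filtering between the two nearest texels.
        x = u * (len(texels) - 1)
        i = int(x)
        i1 = min(i + 1, len(texels) - 1)
        f = x - i
        return (1 - f) * texels[i] + f * texels[i1]

    k = int(lod)
    k1 = min(k + 1, len(mip_levels) - 1)
    f = lod - k
    return (1 - f) * linear(mip_levels[k], u) + f * linear(mip_levels[k1], u)
```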
IMPACT has the ability to define three 256-entry 8-bit texture color tables that can be applied to luminance or RGB textures after texture interpolation and filtering. These are of particular interest to markets like medical imaging where interactive contrast adjustment is necessary in viewing textured images and volumes.
Flexibility
A variety of texture types and environments are provided to support the diverse applications of textures. Textures can be defined to repeat across a surface or to clamp outside of a texture's unit range. Textures can be in monochrome or color, with alpha or without. Texture alpha can be used to make a polygon's opacity vary at each pixel. For instance, when an RGBA image of a tree is mapped onto a quadrilateral, objects behind the polygon can appear through the polygon wherever the opacity of the tree map is low, thereby creating the illusion of an actual tree.
Textures can be combined with their surfaces in a variety of ways. A monochrome texture can be used to blend between the surface color and a constant color to create effects such as grass on dirt or realistic asphalt. By adding alpha, a texture can be used to create translucent clouds. Textures can also be used to modulate a surface's color or be applied as a decal onto a surface.
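These texture environments reduce to simple per-component combinations of the fragment color and the filtered texel. A simplified RGB-only sketch (the mode names echo the OpenGL texture environments; the function itself is illustrative):

```python
def texture_environment(mode, fragment, texel, constant=(0.0, 0.0, 0.0)):
    """Combine an RGB fragment color with an RGBA texel."""
    result = []
    ta = texel[3]
    for i in range(3):
        c, t = fragment[i], texel[i]
        if mode == "modulate":   # surface color scaled by the texel
            result.append(c * t)
        elif mode == "decal":    # texel pasted over the surface by its alpha
            result.append((1 - ta) * c + ta * t)
        elif mode == "blend":    # texel blends surface toward a constant color
            result.append((1 - t) * c + t * constant[i])
        else:
            raise ValueError(mode)
    return tuple(result)
```

The "blend" mode with a monochrome texture is the grass-on-dirt case described above: the texel value selects between the surface color and the constant color at each pixel.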
The IMPACT architecture can automatically generate texture coordinates, based on user-specified behavior. This feature can be used to texture map contours onto an object without requiring the user to compute or store texture coordinates for the object.
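Automatic generation of this kind can be modeled as evaluating a user-specified plane equation at each vertex, in the style of OpenGL's object-linear texture generation; feeding the resulting coordinate into a striped one-dimensional texture paints contour lines onto the model. A sketch (the function name is illustrative):

```python
def object_linear_texgen(plane, vertex):
    """Generate a texture coordinate from a user plane (a, b, c, d)
    evaluated at the vertex (x, y, z): a*x + b*y + c*z + d."""
    a, b, c, d = plane
    x, y, z = vertex
    return a * x + b * y + c * z + d
```

With the plane (0, 0, 1, 0), for example, the generated coordinate is simply the vertex's height, yielding elevation contours without the application storing any texture coordinates.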
4.4.10 Stencil Planes
The 8 independent stencil bitplanes implemented in the Raster Subsystem depth buffer provide an effective mechanism for affecting the results of pixel algorithms. In many ways, the stencil can be thought of as an independent, high-priority Z-buffer. The stencil value can be tested during each pixel write, and the result of the test determines both the resulting stencil value and whether the pixel algorithm will produce any other result.
One application of the stencil is Z-Buffered image copy. With one pass, the stencil planes record the result of depth comparisons between source and destination areas of the framebuffer; with a second pass, the image is copied from source to destination, with only the pixels that passed the depth comparison being updated. As an example, this method can be employed with a library of small 3D images, such as spheres and rods, to quickly construct molecular models in the framebuffer.
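The two passes can be modeled per pixel as follows, with images as parallel lists of color and depth values (a software sketch of the technique, not the Image Engine implementation):

```python
def z_buffered_copy(src_color, src_z, dst_color, dst_z):
    """Two-pass Z-buffered image copy using a stencil plane.

    Pass 1 records the depth-test result in the stencil; pass 2 copies
    color and depth only where the stencil passed.
    """
    # Pass 1: depth comparison between source and destination recorded
    # in the stencil plane (source wins when it is nearer).
    stencil = [1 if sz < dz else 0 for sz, dz in zip(src_z, dst_z)]
    # Pass 2: stencil-masked copy of color and depth.
    out_color = [s if st else d for st, s, d in zip(stencil, src_color, dst_color)]
    out_z = [sz if st else dz for st, sz, dz in zip(stencil, src_z, dst_z)]
    return out_color, out_z
```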
A second application is the ability to draw hollow polygons--useful for visualizing the structure of solid models. By drawing the outline of each facet into the stencil, and subsequently performing Z-Buffered drawings of the whole facet while using the stencil as a mask, the true joining edges of an object's surface can be displayed alone, highlighted, or with the background color filled to expose a hidden-line representation.
Most significantly, the stencil mechanism allows Constructive Solid Geometry pixel algorithms to be implemented in a parallelized environment. The flexible testing and updating constructs designed into the Image Engines allow the construction of unions and intersections of primitive shapes, all with the attributes of texture mapping, transparency, and anti-aliasing.
4.4.11 Arbitrary Clipping Planes
The Geometry Subsystem supports the definition of six planes in 3D space. Geometric primitives can be clipped against these planes in addition to the normal six planes that describe the current viewing volume, providing an ideal mechanism for viewing the cross-section of model components or volumetric data.
Alternatively, the distance between a primitive and any plane can be calculated. This distance can be used as a texture-mapping coordinate, which then can be used to produce a contour map applicable to any 3D model for improved visualization.
4.4.12 Pixel Read, Write, and Copy
IMPACT graphics offers a host of features that greatly enhance pixel read, write, and copy operations. At the core of these features is a 64-bit DMA channel that provides very high-speed pixel transfers between the host and the framebuffer. In addition to the standard 32-bit pixel, various packed pixel formats are also supported, conserving system memory and bus bandwidth during drawing.
Those interested in large data sets will discover that pan and zoom are supported by the hardware at interactive rates.
For pixel reads or writes, the screen-relative direction of the read or fill (right-to-left or left-to-right, bottom-to-top or top-to-bottom) is user selectable.
IMPACT also has dedicated hardware to support transfer and assembly of non-contiguous images (also called image tiling). This is particularly useful when roaming through a large image (typically done in GIS and prepress applications), as less data is required to be passed through the GIO bus, and portions of the image can be brought off disk on an as-needed basis.
4.4.13 Imaging Operations
* 12/12/12 RGB color single buffer
* 8/8/8/8 RGBA color single buffer
* 5/5/5/1 RGBA color double buffer
* 4/4/4/4 RGBA color double buffer
* 12-bit color index double buffer
* 24-bit depth buffer and 8-bit stencil buffer (also called ZST)
Some color buffers must, of course, be displayable. Those left over can be used as pbuffers.
1 Available in patch 1105 for IRIX 5.3 and IRIX 6.2.
2 These visuals can only be accessed by using "setmon 1280x1024_xx_32db".
Full screen stereo (Full) visuals allow users to create high quality stereoscopic applications and display them with the aid of stereoscopic glasses. Stereo in a window (Window) allows the user the extra flexibility of displaying the application in the normal desktop environment. The IMPACT architecture supports stereo across the line, although the available resolutions and color depths depend on the amount of framebuffer available.
4.4.15 Detail Texture
The problem with ordinary texture mapping is that at close range, the structure of the texture map becomes very apparent as the texels are magnified. However, a texture map with sufficient detail to withstand very close visual scrutiny would frequently require more texture memory than could possibly be provided. Detail texturing was developed to cope with this in certain situations. It allows the user to define two texture maps: the normal base texture and a "detail texture". When the viewer moves in past a certain level of detail (LOD), the detail texture is used to modify the base texture. This can create the effect of revealing the underlying structure of a texture. For instance, a detail texture applied to a texture map of cloth could reveal the stitch pattern of the fabric, rather than just its colors, when zooming in. IMPACT supports additive detail texturing in hardware.
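A simplified model of additive detail texturing: past 1:1 magnification, a high-frequency detail term is faded in on top of the magnified base texel. The linear ramp used here is an assumption for illustration; the hardware's actual LOD transition function is not specified above.

```python
def detail_texture_sample(base, detail, magnification):
    """Additive detail texturing (simplified sketch).

    base and detail are single filtered texel values; magnification >= 1
    means the base texels are being enlarged on screen.  The detail
    weight ramps (by assumption) from 0 at 1x magnification to 1 at 2x
    and above, then the detail term is added to the base.
    """
    w = min(1.0, max(0.0, magnification - 1.0))
    return base + w * detail
```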
4.4.18 Framebuffer Configuration (FBConfig)
4.4.20 Other OpenGL Extensions
This page last modified: February 01, 1996.