
Section 4 IMPACT Graphics Subsystem

4.4 IMPACT Graphics Features


4.4.1 MicroPixel Sub-Pixel Positioning

After being projected to the screen by the Geometry Subsystem, all vertices retain 8 bits of fractional positioning information instead of being coerced to integer pixel coordinates. This subpixel precision describes each primitive's screen position accurately. Without this feature, primitives would be positioned incorrectly and would jitter as they move, causing serious problems for the anti-aliasing features described below.

4.4.2 Blending

IMPACT graphics supports both source and destination alpha for complete compositing capabilities.

The pixel data in the framebuffer is replaced with a weighted average of itself and the pixel data being drawn. The user selects the functions that control the source and destination factors used in the blend.

One common blend operation uses the alpha component of the incoming pixel as the source factor and one minus alpha as the destination factor. The greater the alpha value, the more weight is given to the incoming data in the blend. This method is used to draw anti-aliased lines and to generate transparencies; it applies any time subpixel coverage must be taken into account.
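
A minimal sketch of this common setup, using core OpenGL calls:

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* source weighted by alpha,
                                                           destination by one minus alpha */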

IMPACT also supports a number of extended blending modes. One of these sets the blend equation so that each component of the resulting color is the minimum or maximum of the corresponding components of the source and destination colors. This is particularly useful in medical volume rendering for calculating the maximum intensity projection (MIP). Another blending equation implemented on IMPACT allows any bitwise logical operation to be applied to the source and destination color components. Two more blend equations allow subtraction instead of addition in blending the source and destination components. The "subtract" equation subtracts the product of the destination factor and destination color from the product of the source factor and source color; "reverse subtract" does the opposite, subtracting the source product from the destination product. Blend subtraction is useful for finding differences between two images. Finally, a constant color or constant alpha can be chosen as the source or destination factor, allowing for efficient fades and blends between two images.
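
These modes map onto the OpenGL blending extensions. A sketch, assuming the EXT_blend_minmax, EXT_blend_subtract, EXT_blend_logic_op, and EXT_blend_color extensions are exposed (each glBlendEquationEXT call below selects an alternative mode):

    glEnable(GL_BLEND);
    glBlendEquationEXT(GL_MAX_EXT);                    /* per-component max: MIP rendering */
    glBlendEquationEXT(GL_FUNC_SUBTRACT_EXT);          /* src*sf - dst*df */
    glBlendEquationEXT(GL_FUNC_REVERSE_SUBTRACT_EXT);  /* dst*df - src*sf */
    glBlendEquationEXT(GL_LOGIC_OP);                   /* bitwise op selected by glLogicOp() */
    glLogicOp(GL_XOR);

    /* Constant-alpha factors for a fade between two images: */
    glBlendColorEXT(0.0f, 0.0f, 0.0f, 0.25f);
    glBlendFunc(GL_CONSTANT_ALPHA_EXT, GL_ONE_MINUS_CONSTANT_ALPHA_EXT);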

4.4.3 Point Anti-Aliasing

To render an anti-aliased point, a 2 x 2 grid of pixels is used to approximate the area covered by a filtered point. The four pixels are given blend weights based on the distance from their pixel centers to the actual point location in subpixel space, with closer pixels weighted more heavily.

4.4.4 Line Anti-Aliasing

Lines are anti-aliased by drawing a 2-pixel-wide line with higher weights for pixels closer to the line in the minor axis and lower weights toward the outer pixels. In effect, the line is approximated by a wide line that is filtered in the minor axis. As with points, RGB lines are blended into the framebuffer using these weights. For color-indexed anti-aliased lines, instead of generating a weight for blending, the hardware substitutes the coverage weight into the lower 4 bits of the color index value. The resulting index then selects from a ramp in the color look-up tables.
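
In OpenGL terms, anti-aliased points and lines are enabled with the smooth-primitive modes in combination with the blend function described earlier; a minimal sketch:

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_POINT_SMOOTH);                 /* anti-aliased points */
    glEnable(GL_LINE_SMOOTH);                  /* anti-aliased lines */
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);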

Slope Correction

IMPACT graphics automatically adjusts pixel intensity so that a line appears uniform at all angles.

Endpoint Filtering

So far, the weights of pixels that make up anti-aliased lines have been adjusted only in the minor axis. The endpoints of the lines must also be adjusted in the major axis to avoid popping from one pixel to the next. To correct this, the hardware uses the subpixel information in the major axis to adjust the intensity of the endpoint color. This way the apparent endpoint moves gradually from one pixel to the next.

4.4.5 Accumulation Buffer

IMPACT graphics offers a 64-bit software accumulation buffer to combine, or accumulate, a set of scenes. The OpenGL Graphics Library API also allows a weighted blend of each scene into the accumulated image, with the weight of each scene defined by the user. These weights can be used with other features of the graphics subsystem (such as the projection matrix) to define user-programmed filter functions.

Progressive Refinement

As each frame is accumulated into the accumulation buffer, a more accurately sampled image is produced. The user can choose to render fewer frames to support real-time constraints, or to render many frames to obtain a high-quality image.

Multi-Pass Spatial Anti-Aliasing

Multi-Pass Spatial Anti-Aliasing is done by rendering the same objects for several frames while moving them spatially. By jittering the subpixel offsets (via the projection matrix) and accumulating the scenes together, an anti-aliased image is rendered. Furthermore, the user can choose a desired filter function to define the weights for each pass.
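
A sketch of the multi-pass approach with the OpenGL accumulation buffer; set_jittered_projection() and draw_scene() are hypothetical helpers, and the equal weights here implement a simple box filter:

    #define NPASSES 8
    static const GLfloat jitter[NPASSES][2] = {   /* subpixel offsets in x, y */
        {0.03f, 0.62f}, {0.44f, 0.12f}, {0.81f, 0.38f}, {0.19f, 0.91f},
        {0.56f, 0.25f}, {0.94f, 0.69f}, {0.31f, 0.06f}, {0.69f, 0.81f}
    };
    int pass;

    glClear(GL_ACCUM_BUFFER_BIT);
    for (pass = 0; pass < NPASSES; pass++) {
        set_jittered_projection(jitter[pass][0], jitter[pass][1]);
        draw_scene();
        glAccum(GL_ACCUM, 1.0f / NPASSES);    /* weight this pass into the sum */
    }
    glAccum(GL_RETURN, 1.0f);                 /* write the result to the framebuffer */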

Optical Effects

By modifying the projection matrix as images are accumulated, viewing the scene from various points across the aperture of a lens, a sense of depth of field is created. Objects far from the focal plane of the lens are blurred, while objects near the focal plane remain sharp.

Convolutions

An image can be quickly filtered using the accumulation buffer. Since the user controls the weighted accumulation of each image, and the image can be shifted on screen by whole-pixel offsets, the accumulation buffer can be used to convolve the image with many filtering techniques.

Convolution is also supported directly by the GE11 ASIC as a pixel operation for image processing (see "Convolutions" under Imaging Operations below).

Orthogonality

The accumulation buffer provides a solution to the problems of spatial aliasing, motion blur, depth of field, and penumbrae. Moreover, all of these techniques can be used together in any combination to render a single high-quality image.

4.4.6 Lighting Features

The IMPACT architecture supports a wide range of lighting capabilities to enable the realistic rendering of geometric primitives. Lighting effects are computed on a per-vertex basis (the Phong illumination model) and are thus implemented in the Geometry Engines.

IMPACT graphics supports all of the following OpenGL Graphics Library API lighting capabilities in hardware.

Light Sources

Up to eight light sources may be used simultaneously. The user can specify the color and position of each light source.

Surface Properties

The OpenGL Graphics Library API allows the user to configure a number of surface properties to achieve a high degree of realism. Specifically, the user can define the emissivity of a surface; its ambient, diffuse, and specular reflectivity; and its transparency coefficients. A shininess coefficient specifies how reflective an object is. The Command Processor and Geometry Engines were specifically designed so that surface properties can be modified on a per-vertex basis very quickly. This feature is particularly useful for scientific visualization. For example, an aeronautical engineer can change the diffuse reflectance at every vertex to show the stress contour across an airplane wing.

Two-Sided Lighting

The user can specify different surface properties for the front and back sides of geometric primitives to display objects whose inside and outside colors differ. This obviates the need to specify and render two separate primitives.
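
A sketch of this lighting setup through the core OpenGL lighting calls; the specific colors and positions are illustrative only:

    static const GLfloat light_pos[] = { 1.0f, 1.0f, 1.0f, 0.0f };  /* w = 0: infinite light */
    static const GLfloat white[]     = { 1.0f, 1.0f, 1.0f, 1.0f };
    static const GLfloat mat_diff[]  = { 0.8f, 0.2f, 0.2f, 1.0f };

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);                         /* up to GL_LIGHT7 may be enabled */
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, white);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diff);
    glMaterialfv(GL_FRONT, GL_SPECULAR, white);
    glMaterialf(GL_FRONT, GL_SHININESS, 50.0f);
    glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);  /* light back faces separately */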

4.4.7 Local Light and Viewer Positioning

Traditionally, hardware-supported lighting models assume that the viewer and light sources are positioned infinitely far from the object being illuminated. Although positioning the viewer and/or light sources at a finite distance from the object can enhance the realism of the scene, these models are often avoided because of costly inverse square root operations. The IMPACT Geometry Engines include special VLSI support for computing inverse square roots, thus speeding local lighting calculations enormously.
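
Enabling the local models in OpenGL is a matter of giving a light a positional coordinate and selecting the local-viewer model; a brief sketch:

    static const GLfloat local_pos[] = { 0.0f, 2.0f, 5.0f, 1.0f };  /* w = 1: local light */
    glLightfv(GL_LIGHT0, GL_POSITION, local_pos);
    glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);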

4.4.8 Atmospheric Effects

IMPACT graphics supports fast per-pixel fog calculations in hardware. Fogging is done with 8 bits of precision: a fog factor is computed for each pixel during span iteration and used to blend the fog color with the pixel's calculated color.

The OpenGL Graphics Library API simulates the fog and haze effects required for visual simulation applications by blending the object color with a user-specified fog color. The user controls the fog density through the OpenGL interface. This functionality can also be used for depth cueing. With IMPACT, all fog functions (linear, exponential, and exponential squared) run at the same level of performance.
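
A minimal fog setup in core OpenGL; the color and density values are illustrative:

    static const GLfloat fog_color[] = { 0.7f, 0.7f, 0.8f, 1.0f };
    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_EXP2);    /* GL_LINEAR and GL_EXP perform identically on IMPACT */
    glFogf(GL_FOG_DENSITY, 0.05f);
    glFogfv(GL_FOG_COLOR, fog_color);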

4.4.9 Texture Mapping

Motivation

IMPACT supports high-speed hardware texture mapping that is capable of generating images of the same high quality as those produced on Silicon Graphics' high-end visualization products. Texture mapping has traditionally been used in fields such as visual simulation and computer animation to enhance scene realism. More recently, its use has spread to many other markets (see Section 1.2, "Markets").

Quality

It is essential that errors be minimized during the texture-mapping process. Perspective correction of texture coordinates is performed during scan conversion to prevent textures from "swimming" as an object moves in perspective. IMPACT supports perspective correction of one- and two-dimensional textures. Perspective correction and mipmapping of three-dimensional (volume) textures were too expensive to implement in hardware and were judged to be of limited utility in the applicable markets.

Texture aliasing is minimized by filtering the texture for each pixel textured; without filtering, textures on surfaces appear to sparkle as surfaces move. Filtering is accomplished by interpolating among mipmaps stored in the TRAM: filtered representations of a texture are precomputed at different levels of resolution. All core OpenGL 1.0 texture filtering modes are supported, including trilinear mipmapping. Quadrilinear filtering of four-dimensional textures (used only for pixel-texture color conversions) is possible using multiple passes and alpha blending. IMPACT supports textures with 1, 2, 3, or 4 components (luminance, luminance-alpha, RGB, or RGBA), each component being 4, 8, or 12 bits deep. The texture mezzanine option is necessary to support some 8- and 12-bit texel modes that are not supported in the standard configuration, namely three- and four-component 8-bit texels and two-, three-, and four-component 12-bit texels.
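
A sketch of a trilinear mipmapped texture setup using core OpenGL 1.0 and GLU; `texels` stands in for an assumed 256x256 RGBA image in host memory:

    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, 256, 256,
                      GL_RGBA, GL_UNSIGNED_BYTE, texels);   /* build the mipmap chain */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);               /* trilinear mipmapping */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_TEXTURE_2D);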

IMPACT has the ability to define three 256-entry, 8-bit texture color tables that can be applied to luminance or RGB textures after texture interpolation and filtering. These are of particular interest to markets like medical imaging, where interactive contrast adjustment is necessary when viewing textured images and volumes.

Flexibility

A variety of texture types and environments are provided to support the diverse applications of textures. Textures can be defined to repeat across a surface or to clamp outside of a texture's unit range. Textures can be in monochrome or color, with alpha or without. Texture alpha can be used to make a polygon's opacity vary at each pixel. For instance, when an RGBA image of a tree is mapped onto a quadrilateral, objects behind the polygon can appear through the polygon wherever the opacity of the tree map is low, thereby creating the illusion of an actual tree.

Textures can be combined with their surfaces in a variety of ways. A monochrome texture can be used to blend between the surface color and a constant color to create effects such as grass on dirt or realistic asphalt. By adding alpha, a texture can be used to create translucent clouds. Textures can also be used to modulate a surface's color or be applied as a decal onto a surface.
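
The wrap and environment behaviors described above correspond to core OpenGL texture parameters; a brief sketch:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);  /* or GL_CLAMP */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);   /* or GL_DECAL, GL_BLEND */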

The IMPACT architecture can automatically generate texture coordinates, based on user-specified behavior. This feature can be used to texture map contours onto an object without requiring the user to compute or store texture coordinates for the object.
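
Automatic coordinate generation is exposed through the core OpenGL texgen mechanism; a sketch that generates contour bands from the eye-space distance to the plane y = 0 (the plane equation here is illustrative):

    static const GLfloat contour_plane[] = { 0.0f, 1.0f, 0.0f, 0.0f };
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(GL_S, GL_EYE_PLANE, contour_plane);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_1D);    /* a 1D ramp texture supplies the contour colors */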

4.4.10 Stencil Planes

The 8 independent stencil bitplanes implemented in the Raster Subsystem depth buffer provide an effective mechanism for affecting the results of pixel algorithms. In many ways, the stencil can be thought of as an independent, high-priority Z-buffer. The stencil value can be tested during each pixel write, and the result of the test determines both the resulting stencil value and whether the pixel algorithm will produce any other result.
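
In OpenGL, this test-and-update behavior is controlled with the core stencil calls; a two-pass sketch, where draw_mask() and draw_scene() are hypothetical drawing routines:

    glEnable(GL_STENCIL_TEST);

    /* Pass 1: set stencil to 1 wherever the mask primitive's pixels land */
    glStencilFunc(GL_ALWAYS, 1, 0xff);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    draw_mask();

    /* Pass 2: draw only where the stencil was set */
    glStencilFunc(GL_EQUAL, 1, 0xff);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    draw_scene();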

One application of the stencil is Z-Buffered image copy. With one pass, the stencil planes record the result of depth comparisons between source and destination areas of the framebuffer; with a second pass, the image is copied from source to destination, with only the pixels that passed the depth comparison being updated. As an example, this method can be employed with a library of small 3D images, such as spheres and rods, to quickly construct molecular models in the framebuffer.

A second application is the ability to draw hollow polygons--useful for visualizing the structure of solid models. By drawing the outline of each facet into the stencil, and subsequently performing Z-Buffered drawings of the whole facet while using the stencil as a mask, the true joining edges of an object's surface can be displayed alone, highlighted, or with the background color filled to expose a hidden-line representation.

Most significantly, the stencil mechanism allows Constructive Solid Geometry pixel algorithms to be implemented in a parallelized environment. The flexible testing and updating constructs designed into the Image Engines allow the construction of unions and intersections of primitive shapes, all with the attributes of texture mapping, transparency, and anti-aliasing.

4.4.11 Arbitrary Clipping Planes

The Geometry Subsystem supports the definition of six planes in 3D space. Geometric primitives can be clipped against these planes in addition to the normal six planes that describe the current viewing volume, providing an ideal mechanism for viewing the cross-section of model components or volumetric data.
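
These planes are specified through the core OpenGL clip-plane interface; a sketch that keeps only the half-space z <= 2 in object coordinates (points satisfying ax + by + cz + dw >= 0 are retained):

    static const GLdouble plane_eqn[] = { 0.0, 0.0, -1.0, 2.0 };
    glClipPlane(GL_CLIP_PLANE0, plane_eqn);     /* up to GL_CLIP_PLANE5 */
    glEnable(GL_CLIP_PLANE0);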

Alternatively, the distance between a primitive and any plane can be calculated. This distance can be used as a texture-mapping coordinate, which then can be used to produce a contour map applicable to any 3D model for improved visualization.

4.4.12 Pixel Read, Write, and Copy

IMPACT graphics offers a host of features that greatly enhance pixel read, write, and copy operations. At the core of these features is a 64-bit DMA channel that provides ultra-high-speed pixel transfers between the host and the framebuffer. In addition to the standard 32-bit pixel, various packed pixel formats are also supported, conserving system memory and bus bandwidth during drawing.

Those interested in large data sets will discover that pan and zoom are supported by the hardware at interactive rates.

For pixel reads or writes, the screen-relative direction of the read or fill (right-to-left or left-to-right, bottom-to-top or top-to-bottom) is user selectable.
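
Pan and zoom map onto the core OpenGL pixel-operation calls; a sketch where `image`, `w`, and `h` describe an assumed host-memory image:

    glRasterPos2i(100, 100);                   /* pan: position the image on screen */
    glPixelZoom(4.0f, 4.0f);                   /* zoom: replicate each pixel 4 x 4 */
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, image);
    glPixelZoom(1.0f, -1.0f);                  /* negative factors reverse the fill direction */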

IMPACT also has dedicated hardware to support transfer and assembly of non-contiguous images (also called image tiling). This is particularly useful when roaming through a large image (typically done in GIS and prepress applications), as less data is required to be passed through the GIO bus, and portions of the image can be brought off disk on an as-needed basis.
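
Extracting one tile from a larger image in host memory is expressed with the core OpenGL pixel-store parameters; image_width, tile_x, tile_y, tile_w, and tile_h are illustrative names:

    glPixelStorei(GL_UNPACK_ROW_LENGTH, image_width);   /* stride of the full image */
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, tile_x);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, tile_y);
    glDrawPixels(tile_w, tile_h, GL_RGB, GL_UNSIGNED_BYTE, image);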

4.4.13 Imaging Operations

Color Tables

Color tables provide a one-dimensional lookup table per component, whether that component is intensity, luminance, red, green, blue, or alpha. This is a more powerful mechanism than the pre-existing pixel map facility in core OpenGL 1.0. While pixel map was restricted to mapping from a color index to RGBA or from RGBA to RGBA (both of which require four separate color lookup tables), the color table mechanism minimizes the amount of work to be done by allowing color lookups on a subset of those components. For instance, you can map only luminance to luminance with color tables, requiring only one lookup per pixel; the same operation with pixel map would require four times the work (and time). Color tables have applications in numerous markets including prepress, medical imaging, oil/gas visualization, and GIS. On IMPACT, color tables are accelerated and implemented in GE11 microcode. To ensure that they run as fast as possible, keep the number and size of your color tables to an acceptable minimum.
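
A sketch of a luminance-to-luminance lookup, assuming the SGI_color_table extension entry points:

    GLubyte ramp[256];
    int i;
    for (i = 0; i < 256; i++)
        ramp[i] = (GLubyte)(255 - i);          /* invert contrast */
    glColorTableSGI(GL_COLOR_TABLE_SGI, GL_LUMINANCE, 256,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, ramp);
    glEnable(GL_COLOR_TABLE_SGI);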

Convolutions

Convolutions are a generalized mechanism upon which many different imaging operations, such as blurring, sharpening, and edge detection, can be implemented. A convolution is a spatial operation that characterizes the pixel intensities in the neighborhood of the pixel being processed. By taking information about a pixel's neighbors, it is possible to measure the spatial frequency activity in the area and make decisions based on its frequency content. This is achieved by making the output pixel's brightness dependent on a group of pixels surrounding the center pixel in the input image. For every pixel in the input image, the value of the corresponding output pixel is determined by a weighted average of that pixel and its surrounding neighbors: the center of a square kernel (also known as a convolution mask or filter), which contains the weights or convolution coefficients, is placed on the pixel being processed, the underlying pixels are multiplied by the corresponding coefficients, and the results are summed to obtain the value of the output pixel. For example, a typical convolution with a 3 x 3 kernel requires nine multiplications and eight additions per pixel in the output image.

On IMPACT, a kernel is defined per component (or channel). The kernel, which can be thought of as a matrix, can be 3x3, 5x5, or 7x7, where each element is a floating-point weight. The convolution operation is implemented in GE11 microcode. Increasing the kernel size makes the filter more flexible by taking more neighboring pixels into account, but requires more processing and reduces performance. Greater performance can be achieved on IMPACT by using separable convolutions. A convolution kernel is separable if it can be supplied as separate horizontal and vertical components; this allows the convolution to be performed in separate horizontal (row) and vertical (column) passes and greatly reduces the number of required multiplications and additions.

Finally, a post-convolution scale/bias operation is provided to optionally alter the values of all output pixels by multiplying them by a common scale factor and adding a common bias.
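
A sketch of a 3 x 3 blur, assuming the EXT_convolution extension entry points; the same filter is shown in its separable form, which is the faster path:

    static const GLfloat blur[9] = {
        1.0f/16, 2.0f/16, 1.0f/16,
        2.0f/16, 4.0f/16, 2.0f/16,
        1.0f/16, 2.0f/16, 1.0f/16
    };
    static const GLfloat row[3] = { 0.25f, 0.5f, 0.25f };
    static const GLfloat col[3] = { 0.25f, 0.5f, 0.25f };

    /* Full 2D kernel */
    glConvolutionFilter2DEXT(GL_CONVOLUTION_2D_EXT, GL_LUMINANCE, 3, 3,
                             GL_LUMINANCE, GL_FLOAT, blur);
    glEnable(GL_CONVOLUTION_2D_EXT);

    /* Separable equivalent: enable instead of GL_CONVOLUTION_2D_EXT */
    glSeparableFilter2DEXT(GL_SEPARABLE_2D_EXT, GL_LUMINANCE, 3, 3,
                           GL_LUMINANCE, GL_FLOAT, row, col);
    glEnable(GL_SEPARABLE_2D_EXT);

    /* Post-convolution scale (bias is set analogously) */
    glPixelTransferf(GL_POST_CONVOLUTION_RED_SCALE_EXT, 2.0f);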

Color Matrix

Color Matrix provides the functionality for the RGBA components to be treated as a 4x1 vector and multiplied by a user-specified 4x4 matrix. This allows each component to be duplicated, eliminated, swapped with another component, or linearly combined with other components. For example, linear color conversions such as RGB to YIQ can be accomplished easily by using an appropriate matrix.

On IMPACT, the matrix multiplication is implemented in GE11 microcode. To disable the color matrix for increased performance, it is critical to use glLoadIdentity on the GL_COLOR matrix, rather than passing an identity matrix to the generic glLoadMatrix, so that the microcode can recognize the matrix as a no-op and skip it.

Finally, a post-matrix scale/bias operation is provided per component, optionally altering all output pixels by multiplying each component by a scale factor and adding a bias.
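
A sketch of a channel swap through the color matrix, using the GL_COLOR matrix mode referenced above (the swap matrix is symmetric, so row- versus column-major layout does not matter here):

    static const GLfloat swap_rb[16] = {    /* exchange the red and blue channels */
        0, 0, 1, 0,
        0, 1, 0, 0,
        1, 0, 0, 0,
        0, 0, 0, 1
    };
    glMatrixMode(GL_COLOR);
    glLoadMatrixf(swap_rb);
    glMatrixMode(GL_MODELVIEW);

    /* ... pixel transfers here pass through the color matrix ... */

    glMatrixMode(GL_COLOR);
    glLoadIdentity();          /* disables the color matrix path, per the note above */
    glMatrixMode(GL_MODELVIEW);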

Histogram

The Histogram operation counts color component values, producing a one-dimensional array per component that contains the number of occurrences of each component value in the input image. For best performance, the internal format of the histogram should match the external format of the image data in order to minimize unnecessary expansion and compression of the data.

On IMPACT, the histogram operation is implemented in GE11 microcode. Histogram is one of the last operations performed in the pipeline, so if the image data is no longer needed, it can be discarded once the histogram is calculated so that no drawing or texture loading takes place.

Minmax

The Minmax operation scans the selected color components and provides the minimum and maximum values of each color component in the input image. For best performance, the internal format of the Minmax should match the external format of the image data in order to minimize unnecessary expansion and compression of the data.

On IMPACT, the Minmax operation is implemented in GE11 microcode. Minmax is one of the last operations performed in the pipeline, so if the image data is no longer needed, it can be discarded once the minima and maxima are calculated so that no drawing or texture loading takes place.
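
A sketch covering both operations, assuming the EXT_histogram extension entry points; width, height, and image are assumed host-side variables, and sink = GL_TRUE discards the pixels after collection, as described above:

    GLuint counts[256 * 4];    /* 256 bins for each of R, G, B, A */
    GLfloat minmax[8];         /* min and max for each of R, G, B, A */

    glHistogramEXT(GL_HISTOGRAM_EXT, 256, GL_RGBA, GL_TRUE);
    glEnable(GL_HISTOGRAM_EXT);
    glMinmaxEXT(GL_MINMAX_EXT, GL_RGBA, GL_TRUE);
    glEnable(GL_MINMAX_EXT);

    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, image);  /* feed the pipeline */

    glGetHistogramEXT(GL_HISTOGRAM_EXT, GL_TRUE, GL_RGBA, GL_UNSIGNED_INT, counts);
    glGetMinmaxEXT(GL_MINMAX_EXT, GL_TRUE, GL_RGBA, GL_FLOAT, minmax);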

4.4.14 Standard Visuals

The IMPACT framebuffer is based upon two types of fundamental building blocks: 36-bit buffers and 9-bit buffers. All supported visuals are some construction or combination of these. The desired screen resolution determines how many of each type of buffer may be used to construct a given visual. A great deal of flexibility is also given in determining the purpose of each buffer. For instance, each 36-bit buffer can take on any of the following uses:

* 12/12/12 RGB color single buffer

* 8/8/8/8 RGBA color single buffer

* 5/5/5/1 RGBA color double buffer

* 4/4/4/4 RGBA color double buffer

* 12-bit color index double buffer

* 24 bit depth buffer and 8 bit stencil buffer (also called ZST)

The 9-bit buffers are far less flexible and may only be used for an 8-bit color index overlay. If one adds the bits shown in each of the buffer configurations above, one will notice that they do not always total 36, and that the 9-bit buffer has one bit left over. These extra bits are not wasted: they are used to identify regions of the screen that require special interpretation upon pixel scan-out. An example would be two double-buffered windows whose displayed buffers can change independently. Additionally, some of the extra bits are used for very fast (tagged) clears. The one case that uses all the bits for color (12/12/12 RGB) cannot take advantage of some of these features and will take somewhat longer to clear. Furthermore, a visual cannot be composed entirely of 12/12/12 RGB buffers (e.g., 1280x1024 12/12/12 RGB double buffered on High IMPACT), as no control bits would be left over to facilitate pixel interpretation and window clipping.

Some color buffers must, of course, be displayable. Those left over can be used as pbuffers.

Now that the building blocks have been explained, the next question is how many are available on each IMPACT model at a given screen resolution.


The combinatorics of the buffer configurations in the table above can result in a very large number of available visuals. Those visuals currently supported on High and Maximum IMPACT are shown in the tables below. The same visuals available at 1600x1200 resolution are available at HDTV resolution. Additionally, all visuals are available with or without a 64-bit software accumulation buffer.

Use /usr/gfx/setmon -x [video format option] and restart the X server to change the video output format (screen resolution and vertical refresh). Check the files in the directory /usr/gfx/ucode/MGRAS/vof for additional video output formats. Use /usr/sbin/findvis to determine which visuals are available on your IMPACT model with your current video format. The 1280x492 full-screen stereo format is usually set by the application, but may be set using "setmon 1280x492_120s" without restarting the X server.

Table 4: Available Visuals on IMPACT


1 Available in patch 1105 for IRIX 5.3 and IRIX 6.2.

2 These visuals can only be accessed by using "setmon 1280x1024_xx_32db".

Full-screen stereo (Full) visuals allow users to create high-quality stereoscopic applications and display them with the aid of stereoscopic glasses. Stereo in a window (Window) gives the user the extra flexibility of displaying the application in the normal desktop environment. The IMPACT architecture supports stereo across the product line, although the available resolutions and color depths depend on the amount of framebuffer available.

4.4.15 Detail Texture

The problem with ordinary texture mapping is that at close range, the structure of the texture map becomes very apparent as the texels are magnified. However, providing a texture map with sufficient detail to withstand very close visual scrutiny would frequently require more texture memory than could possibly be provided. To cope with this, the concept of detail texturing was developed. Detail texturing allows the user to define two texture maps: the normal texture and a "detail texture". When the user moves in past a certain level of detail (LOD), the detail texture is used to modify the base texture. This can create the effect of revealing the underlying structure of a texture. For instance, one could use a detail texture on a texture map of cloth to reveal the stitch pattern of the fabric, rather than just the colors, when zooming in. IMPACT supports additive detail texturing in hardware.
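
A sketch of how this might be set up, assuming the SGIS_detail_texture extension tokens (the parameter semantics shown here are an assumption based on that extension, not a statement of the IMPACT interface):

    /* Define the detail image alongside the base texture */
    glTexImage2D(GL_DETAIL_TEXTURE_2D_SGIS, 0, GL_RGB, 256, 256, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, detail_texels);   /* hypothetical image data */
    /* Select a detail-capable magnification filter and additive combination */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_DETAIL_SGIS);
    glTexParameteri(GL_TEXTURE_2D, GL_DETAIL_TEXTURE_MODE_SGIS, GL_ADD);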

4.4.16 Pixel Textures

Pixel texturing allows the OpenGL pixel path to be diverted so that each pixel is used as a lookup into a four-dimensional texture map. This multi-dimensional lookup is performed for each pixel in the image: each RGBA pixel is mapped to an equivalent texture coordinate STRQ, which indexes into a four-dimensional texture in texture memory. Non-linear color space conversions can be performed through pixel texturing, and by using IMPACT's fast and sophisticated texture interpolation, the conversion is highly accurate and can be accomplished at interactive rates. This allows users to change RGB-to-CMYK and CMYK-to-RGB conversion parameters without having to wait for the results.

4.4.17 Pixel Buffers

Pixel buffers, or pbuffers, are a concept that has been requested for some time in OpenGL. They are offscreen rendering areas analogous to a GLXPixmap. However, whereas pixmaps are static and cannot use hardware acceleration, pbuffers can be volatile or nonvolatile and exist in framebuffer memory, thereby benefiting fully from hardware acceleration. More explanation is necessary to understand how pbuffers can be used with IMPACT; because pbuffers will first become available in IRIX 6.2, that information will be added in the near future.

4.4.18 Framebuffer Configuration (FBConfig)

This extension to OpenGL allows RGBA contexts to be bound to color index framebuffers, allowing the use of the powerful OpenGL RGBA rendering semantics with a color index visual. When converting to color index, the red channel is used for doing the lookup. Any other channels are ignored.

4.4.19 Video Texturing

Several OpenGL extensions and the IMPACT architecture make the application of textures generated by video sources possible. IMPACT Compression or IMPACT Video can be given a straight path to texture memory. The TRAM's ability to simultaneously load and render textures allows D1 video to be applied to geometry in real time. This is crucial for the film and video industry where page turns, complex wipes and other special effects can be implemented using texture mapping. The IMPACT Video Option card can be used for realtime color conversion or mipmap generation for the highest quality video texturing available on a general purpose workstation.

4.4.20 Other OpenGL Extensions

The OpenGL extensions for IMPACT are too numerous to fully explain here. Read the glIntro(1) man page for a complete list of what is currently available and for a pointer to the individual man pages.

