Texture mapping methods
As explained earlier, texture mapping is a data-intensive operation
which consists of warping a bitmap onto a 3D object or polygon to add
detail, thus enhancing realism. The original bitmap used as the texture to be
mapped is also called the source texture. There are several ways to map textures
onto a 3D object with perspective correction:
· Point sampling: The most common way to map a texture to a given
polygon is through a method called point sampling. This method allows the
graphics engine to approximate the color value of a given pixel on the resulting
texture map by replicating the value of the closest existing pixel on the source
texture. Point sampling provides very good results when used in conjunction with
tile-based MIP mapping (see below), and maintains high performance levels at a
low cost.
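As a rough Python sketch (not any particular chip's hardware path), point
sampling amounts to truncating the texture coordinate to the nearest
existing texel:

```python
def point_sample(texture, u, v):
    """Point sampling: return the source texel closest to the
    continuous texture coordinate (u, v), each in [0.0, 1.0]."""
    height, width = len(texture), len(texture[0])
    # Map (u, v) into texel space and truncate to the containing texel,
    # clamping so u == 1.0 or v == 1.0 still addresses the last texel.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 checkerboard of grey levels used as the source texture.
tex = [[0, 255],
       [255, 0]]
```

No blending is performed, which is why the method is cheap: each output
pixel costs exactly one texture read.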
· Filtering: In some cases source textures will need a
considerable amount of warping, which might lead to pixel blockiness.
Since this blockiness is most visible in scenes with little motion, some
vendors might decide to use a technique called bilinear filtering, which can be
employed to "blur" the textured pixels, making them appear smoother.
Bilinear filtering of textures works much as it does in digital video scaling:
four source texel values are read, and their color values are then blended
together with weightings based on proximity. The resulting value is used for the
texel to be drawn. While this technique is useful, the resulting quality cannot
be compared to that achieved by using high-resolution source textures. These
larger source textures, however, will use larger portions of available
off-screen memory, and can therefore only be used effectively with a graphics
accelerator which supports some type of palettized textures, such as the Matrox
Mystique. Graphics accelerators without support for palettized textures will
have to scale down the textures to store them, and apply filtering to map them
onto polygons, resulting in poor rendering quality.
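The four-texel blend described above can be sketched in Python as follows
(a software illustration only; real accelerators do this in dedicated
circuitry):

```python
import math

def bilinear_sample(texture, u, v):
    """Blend the four texels nearest (u, v), weighted by proximity."""
    height, width = len(texture), len(texture[0])
    # Sample position in texel space, offset so texel centres sit at
    # integer coordinates.
    x, y = u * width - 0.5, v * height - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0  # proximity weights

    def texel(tx, ty):
        # Clamp addressing at the texture edges.
        return texture[max(0, min(ty, height - 1))][max(0, min(tx, width - 1))]

    top = texel(x0, y0) * (1 - fx) + texel(x0 + 1, y0) * fx
    bottom = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bottom * fy
```

A sample taken exactly between four texels returns their average, which is
what produces the "blurred", smoother appearance.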
· MIP mapping: Another way to improve the quality of the 3D
texture-mapped scene is to use a method called MIP mapping. The more
alterations made to a texture to "fit" an object, the less it will resemble the
source texture. One way to avoid this severe deviation from the original texture
is to create three copies, or MIP levels, of the same source texture, in
different sizes. MIP mapping can be implemented in three ways:
a) Tile-based MIP-mapping: Depending on the size of the polygon, the
application will determine which MIP-level is the closest in size, and provide
this MIP-level to the graphics accelerator to be used as the source texture for
that polygon. Tile-based MIP-mapping, as supported by the MGA-1064SG, does not
require extra circuitry, since it is programmed in software by the game
developer. It results in better overall quality, while its negative effect on
performance and cost is minimal.
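The per-polygon selection step can be illustrated with a small Python
sketch (a simplification: here each MIP level is reduced to its width in
texels, and the choice minimizes the size difference):

```python
def choose_mip_level(mip_widths, polygon_width):
    """Tile-based selection: once per polygon, pick the MIP level
    whose texture size is closest to the polygon's on-screen size."""
    return min(range(len(mip_widths)),
               key=lambda i: abs(mip_widths[i] - polygon_width))

# Three MIP levels of a 256-texel-wide source texture.
mips = [256, 128, 64]
```

Because the decision is made once per polygon in software, the accelerator
itself needs no extra logic, which is why this variant adds essentially no
hardware cost.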
b) Per-pixel MIP mapping: The
graphics accelerator calculates, on a per-pixel basis, which MIP level provides
the best source. In this way, the graphics accelerator can use different MIP
levels on the same tile, accommodating a change of size in the polygon being
drawn. When performed in hardware, it will either result in a significant hit on
performance or, if implemented to be effective in speed, in a dramatic increase
in cost.
c) Tri-linear MIP mapping: The
graphics accelerator reads source pixels from two different MIP levels,
performs bilinear interpolation within each MIP level, and then interpolates
between the two filtered values to calculate the resulting pixel. This
requires a lot of bandwidth, because two source texture maps need to be read
simultaneously. When performed in hardware, it will either result in a
significant hit on performance or, if implemented to be effective in speed, in
a dramatic increase in cost.
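A simplified Python sketch of the tri-linear idea (reduced to one
dimension, so a plain linear filter stands in for the bilinear step real
hardware performs in 2-D):

```python
import math

def linear_sample(row, u):
    """Linear filter within one MIP level (1-D stand-in for the
    bilinear step that real hardware performs in 2-D)."""
    x = u * len(row) - 0.5
    x0 = max(0, min(math.floor(x), len(row) - 2))
    f = min(max(x - x0, 0.0), 1.0)
    return row[x0] * (1 - f) + row[x0 + 1] * f

def trilinear_sample(mip_chain, u, lod):
    """Filter within the two MIP levels bracketing `lod`, then blend
    the two results by the fractional part of the LOD."""
    level = max(0, min(math.floor(lod), len(mip_chain) - 2))
    f = lod - level
    a = linear_sample(mip_chain[level], u)
    b = linear_sample(mip_chain[level + 1], u)
    return a * (1 - f) + b * f

# A two-level MIP chain: the full-size row and a half-size copy.
mips = [[0, 100, 200, 300], [0, 300]]
```

Each output pixel touches both MIP levels, which is exactly the doubled
bandwidth requirement the text describes.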
Using video as texture maps
Some 3D chips, such as the MGA-1064SG, also
incorporate video playback capabilities, such as scaling and color-space
conversion. This allows them to store a video clip, as it is being
decompressed, into the frame buffer, where the 3D graphics processor can then
use it as it would a source texture, to apply it onto a 3D polygon. This cannot
be performed by a graphics processor that does not incorporate color-space
conversion.
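The color-space conversion in question takes decoded video (YCbCr) to the
RGB form a texture unit expects. A minimal sketch, assuming the common
ITU-R BT.601 coefficients (the actual chip's conversion matrix is not
specified here):

```python
def ycbcr_to_rgb(y, cb, cr):
    """ITU-R BT.601 YCbCr -> RGB: the kind of colour-space conversion
    a video-capable chip performs so a decoded frame can be applied
    to a polygon as an RGB source texture."""
    clamp = lambda v: max(0, min(255, round(v)))
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return clamp(r), clamp(g), clamp(b)
```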
Fogging
In order to maintain high levels of performance, developers created "tricks"
to reduce the amount of rendering needed for a given scene. One of these tricks
is called fogging, and it is mostly used in landscape scenes, such as flight
simulators. Fogging allows the developer to "hide" the background of a
scene behind a layer of "fog" by mixing the textures' color
values with a monochrome color such as white. Some graphics chips support
fogging in hardware, which allows the developer to use this trick. A similar
visual effect can also be achieved through depth-cued lighting tricks, a feature
which is supported in hardware by the MGA-1064SG.
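The mixing step described above is a simple weighted blend toward the fog
color, sketched here in Python (fog_factor would normally be derived from
the pixel's depth):

```python
def apply_fog(color, fog_color, fog_factor):
    """Mix a pixel's colour with a monochrome fog colour.
    fog_factor runs from 0.0 (near, no fog) to 1.0 (far, fully fogged)."""
    return tuple(round(c * (1 - fog_factor) + f * fog_factor)
                 for c, f in zip(color, fog_color))

WHITE = (255, 255, 255)
```

Distant geometry gets a fog_factor near 1.0 and converges to the fog
color, so the developer never has to render what lies behind the fog.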
Blending
Blending is a visual effect which allows the developer to "mix"
two textures together and apply them to the same object. Different levels of
blending can be implemented to create visual effects. The simplest method,
which is supported by the MGA-1064SG, is called screen-door transparency or
"stippling": it simulates "seeing through" an object by writing only some of
the pixels making up that image. For example, the developer might decide that
an object should be 50% transparent. The graphics accelerator would therefore
draw the background image, and then write only every second pixel of the
"transparent" object. This approach is easy to implement in hardware and
delivers reasonable quality at a low cost.
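The 50% screen-door case can be sketched in a few lines of Python, with a
checkerboard deciding which pixels of the "transparent" object are
written:

```python
def draw_stippled(background, sprite):
    """Screen-door (stipple) transparency at 50%: draw the background,
    then write only every second pixel of the sprite, in a
    checkerboard pattern."""
    out = [row[:] for row in background]
    for y, row in enumerate(sprite):
        for x, pixel in enumerate(row):
            if (x + y) % 2 == 0:  # checkerboard mask: half the pixels
                out[y][x] = pixel
    return out
```

No arithmetic is performed on color values at all, which is why this is so
cheap to implement in hardware.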
By contrast, true alpha blending is a data-intensive operation,
which involves reading the values of two source textures and performing the
perspective calculations on both textures simultaneously. This effect is very
taxing on performance, especially with a low-bandwidth frame buffer, and costly
to implement in an effective way. The resulting effect does not warrant the
loss in performance and therefore is not essential for 3D games.
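For comparison with the screen-door approach, the per-channel arithmetic
that makes true alpha blending data-intensive looks like this (a sketch of
the standard blend equation, applied here to a single pixel):

```python
def alpha_blend(src, dst, alpha):
    """True alpha blending: out = alpha * src + (1 - alpha) * dst,
    computed per colour channel for every pixel drawn."""
    return tuple(round(alpha * s + (1 - alpha) * d)
                 for s, d in zip(src, dst))
```

Unlike stippling, every blended pixel requires reading both sources and
multiplying each channel, which is what makes the technique so demanding
on frame-buffer bandwidth.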
Anti-aliasing and other rendering effects
High-end 3D application users rely on techniques to improve the quality of
the graphics, such as anti-aliasing, Phong shading and Ray-Tracing. These
techniques, however, if performed in hardware, are extremely taxing on
performance, and require a large amount of dedicated circuitry which raises the
cost of the graphics accelerator beyond the $1,000 range. Given the
price/performance requirement of the gamer, anti-aliasing is neither necessary
nor realistic in a 3D game accelerator.
Standards vs proprietary 3D techniques
The 3D market has been slow in growing because until recently it
lacked a standardized interface, or Application Programming Interface (API),
to allow all software and hardware to work together seamlessly. To address this
lack of standardization, several software vendors have designed different APIs
over the last two years. Under Windows 95, Microsoft's Direct3D has become
the predominant standard API in the industry, adopted by the majority of
developers.
However, graphics chip manufacturers such as Matrox also have designed their
own proprietary API to let software developers write directly to their hardware,
therefore allowing them to take full advantage of any feature available. While
some developers might choose to write different versions of their applications
to take advantage of different graphics accelerators, most of them will develop
a Direct3D version of the same application, in order to have access to the
entire installed base of Direct3D compliant hardware. For this reason, Matrox
has designed its MGA-1064SG processor for the Matrox Mystique to be as close to
the Direct3D specifications as possible, allowing the standard Direct3D version
of the applications to be fully optimized when running with the Matrox hardware.
Some graphics engines use proprietary architectures, such as rational
quadratic patches (Nvidia) and infinite plane support (NEC), claiming to deliver
better quality and performance. While these techniques are interesting in
theory, they entail complicated re-compiling of the applications, which is time
consuming for the developers. In fact, Microsoft has officially stated that it
does not support non-traditional polygon setting in Direct3D. This means that
unless games are written directly to these controllers, they will not take
advantage of the special features built into them. It can be assumed that not a
lot of developers will choose to do so. By contrast, Matrox's 3D architecture is
based on the existing traditional software architectures, using triangle
polygons, which ensures full compatibility and ease of porting for developers.
Summary
In the end, a good 3D graphics accelerator aimed at satisfying
the demanding consumer market must offer a wealth of functionality at the right
price. Much of this paper has argued that a high frame rate is
essential to a 3D game player. The appropriate mix of features, including
resolution, color depth and perspective-correct texture mapping, should also be
implemented as long as frame rate is not affected negatively. A compelling 3D
engine for the consumer market would be one in which 3D games run at or
above 20 fps at 640 x 400 resolution in 16-bit color. Also to be noted are the
other application areas that need to be accelerated in order for a 3D graphics
accelerator to be a viable consumer solution. Aside from excellent acceleration
for 3D games, high performance for Windows, video and DOS, as well as a
multimedia upgrade path, at a price below $300, will all be part of the
decision-making process of the home buyer.