Path: sparky!uunet!dtix!darwin.sura.net!mips!odin!fido!babar.asd.sgi.com!mtj
From: mtj@babar.asd.sgi.com (Michael Jones)
Newsgroups: comp.sys.sgi
Subject: Re: RealityEngine texturing question
Keywords: texture VGX RealityEngine
Message-ID: <nlnv390@fido.asd.sgi.com>
Date: 23 Jul 92 18:24:05 GMT
References: <1992Jul23.035752.3604@thunder.mcrcim.mcgill.edu>
Sender: news@fido.asd.sgi.com (Usenet News Admin)
Organization: Silicon Graphics, Inc.
Lines: 58

In article <1992Jul23.035752.3604@thunder.mcrcim.mcgill.edu>, panisset@thunder.mcrcim.mcgill.edu (Jean-Francois Panisset) writes:

|> I saw the glossies for RealityEngine, and I would sure want one...

Me too.

|> My question is the following: on VGX/VGXT, the texture coordinates are
|> specified at the vertices of a polygon using the t2?() commands. The
|> intermediate s,t texture coordinates are then computed for the pixels
|> inside the polygon using linear interpolation between the "real" values
|> at the vertices. Since this does not yield the exact texture coordinates
|> (you need to do a per-pixel division to get those), you end up having to
|> subdivide your polygons (the name of the call to do this automatically
|> escapes me at this point), which is not always a satisfying solution.

VGX performs linear interpolation of texture coordinates across polygons,
so the half-way point of a texture will be half-way along an edge in
screen-space, which is incorrect when edges are viewed in perspective.
Polygon subdivision achieves a piece-wise linear approximation, and is
activated via the scrsubdivide() GL entry point.
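
To make the error concrete, here is a rough C sketch (mine, not SGI code)
of what subdivision works around: when an edge's endpoints have different
w, the linearly interpolated coordinate drifts from the perspective-correct
one, and splitting the edge shrinks -- but does not remove -- the drift.
The vertex values and the single midpoint split are made up for
illustration.

    #include <stdio.h>

    /* perspective-correct s at screen-space fraction f along an edge */
    static float correct_s(float s0, float w0, float s1, float w1, float f)
    {
        float num = s0 / w0 + f * (s1 / w1 - s0 / w0);
        float den = 1.0f / w0 + f * (1.0f / w1 - 1.0f / w0);
        return num / den;
    }

    int main(void)
    {
        float s0 = 0.0f, w0 = 1.0f;   /* near vertex */
        float s1 = 1.0f, w1 = 4.0f;   /* far vertex  */
        float f  = 0.25f;             /* a quarter of the way, on screen */

        /* one linear segment, as on VGX without subdivision */
        float linear = s0 + f * (s1 - s0);

        /* one split at the screen midpoint; the new vertex gets the
           exact coordinate, then each half is interpolated linearly */
        float s_mid = correct_s(s0, w0, s1, w1, 0.5f);
        float split = s0 + (f / 0.5f) * (s_mid - s0);

        float truth = correct_s(s0, w0, s1, w1, f);

        printf("unsubdivided %.3f  subdivided %.3f  correct %.3f\n",
               linear, split, truth);
        return 0;
    }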

VGXT performs the desired per-pixel perspective correction, so that the
half-way point of the texture is half-way along the edge in world-space
rather than screen-space. This is the proper thing to do, and is what
earns it the "T for Texture" suffix in its name. The process is
computationally complex, which is why texturing has been considered
expensive in terms of performance.
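
Conceptually (a sketch of the idea only, not the VGXT hardware pipeline),
the quantities s/w, t/w, and 1/w interpolate linearly in screen space, so
a perspective-correct rasterizer can walk a span linearly and recover the
true s,t with one divide per pixel:

    /* walk one scanline span from x0 to x1; the *_w0/*_w1 arguments are
       s/w, t/w and 1/w at the two span endpoints */
    void textured_span(int x0, int x1,
                       float s_w0, float t_w0, float inv_w0,
                       float s_w1, float t_w1, float inv_w1)
    {
        int x;
        for (x = x0; x <= x1; x++) {
            float f     = (x1 == x0) ? 0.0f : (float)(x - x0) / (x1 - x0);
            float s_w   = s_w0   + f * (s_w1   - s_w0);
            float t_w   = t_w0   + f * (t_w1   - t_w0);
            float inv_w = inv_w0 + f * (inv_w1 - inv_w0);
            float s = s_w / inv_w;    /* the per-pixel division */
            float t = t_w / inv_w;
            /* ... look up and filter the texel at (s, t) here ... */
            (void)s; (void)t;
        }
    }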

|> So the question is: does RealityEngine do a per-pixel division to get
|> the exact s,t coordinates, or does it still rely on polygon subdivision
|> to approximate the real answer?

RealityEngine continues to support per-pixel perspective correction, and
provides support for three-dimensional textures as well (3 coordinates).
The RealityEngine provides "deluxe" texture support including tri-linear
interpolation, perspective correction, and so on at full rated speed. It
does not approximate the real answer -- it computes the real answer at 80,
160, or 320 million shaded, textured, anti-aliased pixels per second.
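
For reference, "tri-linear" here means taking a bilinear sample from each
of the two MIP levels that bracket the desired level of detail and blending
the two by the fractional LOD. A minimal sketch of that idea follows; the
bilinear_sample() helper is assumed, it is not a GL call:

    #include <math.h>

    /* assumed helper: bilinear sample of MIP level 'level' at (s, t) */
    extern float bilinear_sample(int level, float s, float t);

    float trilinear_sample(float s, float t, float lod)
    {
        int   fine   = (int)floorf(lod);   /* finer (larger) level    */
        int   coarse = fine + 1;           /* coarser (smaller) level */
        float frac   = lod - (float)fine;

        /* clamping to the ends of the pyramid is omitted here */
        float a = bilinear_sample(fine,   s, t);
        float b = bilinear_sample(coarse, s, t);

        return a + frac * (b - a);         /* blend between the levels */
    }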

|> Also, does it do any form of directional filtering in the mip-map for
|> edge-on polygons, and if not, how does it determine the level of detail
|> from which to fetch the texture pixels: does it select it based on the
|> largest or smallest dimension of the pixel once projected on the polygon?

Non-isotropic filtering can be achieved at the user level by selecting one
of several MIP-map hierarchies derived from appropriate rectangular source
images. For example, rather than using a 128x128 road texture, one might
select a 64x256 source image that would, when viewed obliquely, produce
somewhat "square" texel-space footprints -- avoiding "premature blur".
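
A rough sketch of that user-level trick, with made-up names (pick_pyramid()
is not a GL entry point): build several MIP hierarchies from rectangular
source images and bind the one whose aspect ratio best matches the expected
texel-space footprint of a pixel:

    #include <math.h>

    struct pyramid { int width, height, texture_id; };

    /* ds, dt: approximate texel-space extents of one pixel's footprint;
       both are assumed to be positive */
    int pick_pyramid(const struct pyramid *p, int count, float ds, float dt)
    {
        float want     = dt / ds;        /* desired height/width ratio */
        float best_err = 1e30f;
        int   best     = 0, i;

        for (i = 0; i < count; i++) {
            float have = (float)p[i].height / (float)p[i].width;
            float err  = fabsf(logf(have / want));  /* symmetric ratio error */
            if (err < best_err) { best_err = err; best = i; }
        }
        return best;   /* the caller binds p[best].texture_id */
    }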

When presented with the choice between the largest and smallest dimension,
the resulting choice is between a pixel which is too blurred and one which
will exhibit point-sampling artifacts. The preferred choice is the blurred
one, which means that the map of smaller size (== larger filter radius)
will be used.
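
In footprint terms (my arithmetic, not a description of the hardware): with
ds and dt the texel-space extents of a pixel's footprint, deriving the level
of detail from the larger extent errs toward blur, while using the smaller
extent errs toward point-sampling aliasing:

    #include <math.h>

    /* preferred: LOD from the larger extent, giving a coarser (smaller)
       map and therefore blur rather than aliasing */
    float lod_blurred(float ds, float dt)
    {
        float d = (ds > dt) ? ds : dt;
        return log2f(d > 1.0f ? d : 1.0f);   /* clamp so LOD >= 0 */
    }

    /* alternative: LOD from the smaller extent; sharper, but the texture
       is under-filtered along the long axis and will sparkle */
    float lod_aliased(float ds, float dt)
    {
        float d = (ds < dt) ? ds : dt;
        return log2f(d > 1.0f ? d : 1.0f);
    }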

-- Michael Jones  mtj@sgi.com  415.390.1455  M/S 7U-550
Silicon Graphics, Advanced Systems Division
2011 N. Shoreline Blvd., Mtn. View, CA 94039-7311