Newsgroups: comp.sys.sgi
Path: sparky!uunet!snorkelwacker.mit.edu!thunder.mcrcim.mcgill.edu!panisset
From: panisset@thunder.mcrcim.mcgill.edu (Jean-Francois Panisset)
Subject: RealityEngine texturing question
Message-ID: <1992Jul23.035752.3604@thunder.mcrcim.mcgill.edu>
Summary: How are the texture coordinates computed?
Keywords: texture VGX RealityEngine
Organization: McGill Research Centre for Intelligent Machines
Date: Thu, 23 Jul 92 03:57:52 GMT
Lines: 22


I saw the glossies for the RealityEngine, and I sure would want one... My
question is the following: on the VGX/VGXT, texture coordinates are specified
at the vertices of a polygon using the t2?() commands. The intermediate
s,t texture coordinates for the pixels inside the polygon are then computed
by linear interpolation between the "real" values at the vertices.
Since this does not yield the exact texture coordinates (you need a
per-pixel division to get those), you end up having to subdivide
your polygons (the name of the call that does this automatically escapes
me at this point), which is not always a satisfying solution. So the
question is: does the RealityEngine do a per-pixel division to get the
exact s,t coordinates, or does it still rely on polygon subdivision
to approximate the real answer? Also, does it do any form of directional
filtering in the mip-map for edge-on polygons? If not, how does it
determine the level of detail from which to fetch the texture pixels:
does it select it based on the largest or the smallest dimension of the
pixel footprint once projected onto the polygon?

Thanks in advance for all answers...

JF Panisset
--
Jean-Francois Panisset
INET: panisset@mcrcim.mcgill.ca
      panisset@larry.mcrcim.mcgill.edu
UUCP: ...!mcgill-vision!panisset