Path: sparky!uunet!gatech!prism!gg10
From: gg10@prism.gatech.EDU (GALLOWAY)
Newsgroups: comp.graphics
Subject: Z Buffer Precision and Homogeneous Coordinates
Message-ID: <78173@hydra.gatech.EDU>
Date: 12 Dec 92 18:16:42 GMT
Organization: Georgia Institute of Technology
Lines: 42

I have written a Z-buffer renderer and now a scanline Z-buffer one.
But after going back and reading several new books on the subject, I
find myself confused about the scaling of Z values as part of the
viewing transformation pipeline.  In the new book by Alan Watt and Mark
Watt, "Advanced Animation and Rendering Techniques, Theory and
Practice", they reference Newman and Sproull and say that:

                                 B
        Z(screen) = A + -----------
                           Z(eye)

where A and B are constants, and that this is done so that "in moving
from eye space to screen space, lines transform into lines and planes
transform into planes."  The depth value gets normalized from the range
[near, far] to [0, 1].

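To make the constants concrete, here is a small C sketch of how I read
that mapping (my own code, not from either book; it assumes eye-space Z
is positive going into the screen, with near mapping to 0 and far to 1):

    /* Solving A + B/near = 0 and A + B/far = 1 gives the constants below. */
    double screen_z(double z_eye, double z_near, double z_far)
    {
        double A = z_far / (z_far - z_near);
        double B = -(z_near * z_far) / (z_far - z_near);

        return A + B / z_eye;   /* 0.0 at z_eye == near, 1.0 at z_eye == far */
    }
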
If the viewing transformation matrix consists only of linear operations
(translation, rotation, and scaling), then this extra 1/Z step appears
to degrade the precision of the depth value.  In my current Z-buffer
renderer the Z buffer is an array of 32-bit floats.  I perform clipping
in the 3D view space (or eye space) before the perspective projection.

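For reference, the heart of my current renderer's depth test looks
roughly like this (the resolution and the names are just illustration,
not anything from the books):

    #include <float.h>

    #define XRES 640
    #define YRES 480

    static float zbuffer[XRES * YRES];

    void clear_zbuffer(void)
    {
        long i;

        for (i = 0; i < XRES * YRES; i++)
            zbuffer[i] = FLT_MAX;   /* start everything "infinitely" far away */
    }

    /* Return 1 and record the new depth if (x, y) is closer than what is
       already stored; smaller eye-space z means closer to the viewer. */
    int depth_test(int x, int y, float z_eye)
    {
        long i = (long)y * XRES + x;

        if (z_eye < zbuffer[i]) {
            zbuffer[i] = z_eye;
            return 1;
        }
        return 0;
    }
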
My questions are:

a) Why do software (non-hardware) Z buffers normalize the depth value?
   Why not just keep around a float and compare floats?

b) Are most Z buffers integerized as well?  How many bits?
   What is the advantage?

c) When is the correct time to clip?  And are homogeneous coordinates
   necessary if you are not currently rendering NURBs?  For most
   surfaces, except NURBs, W is always 1 anyway, right?  (See the
   sketch below for what I mean.)

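On c), here is my understanding of where W stops being 1: the
perspective matrix puts eye-space Z into the W slot, and the
A + B/Z(eye) mapping above falls out of the homogeneous divide.  The
sketch below is my own guess at the conventional setup (Z positive into
the screen, view plane at distance d), not a quote from any of the
books:

    typedef struct { double x, y, z, w; } Point4;

    /* Apply a perspective projection and then the homogeneous divide.
       Before the divide, w holds eye-space z, so it is 1 only for points
       that have not been projected yet. */
    Point4 project(Point4 p, double d, double z_near, double z_far)
    {
        double A = z_far / (z_far - z_near);
        double B = -(z_near * z_far) / (z_far - z_near);
        Point4 q;

        q.x = p.x * d;          /* view plane at distance d from the eye */
        q.y = p.y * d;
        q.z = p.z * A + B;
        q.w = p.z;              /* eye-space z ends up in w */

        /* Homogeneous divide: x and y become the usual d*x/z and d*y/z,
           and z comes out as A + B/z, i.e. Z(screen) from above. */
        q.x /= q.w;
        q.y /= q.w;
        q.z /= q.w;
        q.w = 1.0;

        return q;
    }

If I have that right, W is only guaranteed to be 1 before projection,
which is part of what confuses me about where clipping belongs.
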
Thanks,

Greg Galloway
gg10@prism.gatech.edu

P.S. The Watts' book is excellent and I highly recommend finding a copy.