- Path: sparky!uunet!charon.amdahl.com!pacbell.com!mips!darwin.sura.net!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!ames!olivea!mintaka.lcs.mit.edu!ai-lab!fibula!ringrose
- From: ringrose@fibula.ai.mit.edu (Robert Ringrose)
- Newsgroups: comp.sys.sgi
- Subject: Re: SGI shadows/help
- Keywords: n
- Message-ID: <26798@life.ai.mit.edu>
- Date: 20 Aug 92 00:01:44 GMT
- References: <1992Aug10.153515.14594@husc3.harvard.edu> <1992Aug19.015046.22997@odin.corp.sgi.com>
- Sender: news@ai.mit.edu
- Distribution: usa
- Organization: MIT Artificial Intelligence Lab
- Lines: 49
-
- In article <1992Aug10.153515.14594@husc3.harvard.edu>, basu@scws4.harvard.edu (Archan Basu) writes:
- > Hi!
- >
- > I'm working on a Vision application on the SGI Iris Indigo.
- > The current problem is to render a human face (from laser rangefinder data)
- > including cast shadows, say across the left eye due to occlusion by the
- > nose, of incident light from the right. The face is an 86x31 rectangle
- > represented as a polygonal mesh (~5000 triangles) or perhaps as a NURB
- > with ~2600 control points.
- >
- > The shadows, of course, are the stumbling block. One idea I had was
- > (assuming access to SGI's internal data representations, including
- > transforms, z-buffers, etc.): Place eye at light source. Perform hidden
- > surface elimination, and save zbuffer. Now transform to actual eye frame.
- > Determine, for each viewport pixel, whether its corresponding scene point
- > was visible to light source. Hence, shade bright or dark.
- >
- > Have others tried this? Are there other approaches? Where can I get
- > the info I need to access SGI memory, etc? I need your help BADLY. Please
- > respond with any relevant thoughts to basu@zeus.harvard.edu
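
The scheme sketched above is the shadow-map test: compare each scene point's depth from the light against the depth recorded in the light's z-buffer. A minimal sketch of the per-pixel decision (the names, toy map size, and bias constant are mine, not from the post):

```c
#include <assert.h>

#define ZW 8            /* width of the light's depth map (toy size)  */
#define ZH 8            /* height of the light's depth map            */
#define BIAS 0.01f      /* tolerance for z quantization (assumed)     */

/* Depth of the nearest surface seen from the light, per light-view pixel,
 * filled in by the hidden-surface pass rendered from the light source. */
static float light_depth[ZH][ZW];

/* A scene point already transformed into the light's screen space:
 * (x, y) index the depth map, z is its depth from the light.
 * Returns 1 if the light can see it (shade bright), 0 if shadowed. */
static int visible_to_light(int x, int y, float z)
{
    return z <= light_depth[y][x] + BIAS;
}
```

At render time every camera pixel's scene point gets pushed through the light's transform and fed to this test; the bias absorbs z-buffer quantization so surfaces don't shadow themselves.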
-
- Yes, we did precisely this in _On_The_Run_, at last year's siggraph.
-
- Take a picture "from the sun's point of view". Grab the transformation matrix
- that takes global coordinates to the sun's POV, and grab the z-buffer (look at
- lrectread to read the Z-buffer back). Beware: the Personal Iris has some bugs
- in reading back the modelview matrix. They're subtle and nasty.
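
Once you've grabbed that matrix, any world point can be pushed into the sun's screen space with a 4x4 multiply and a perspective divide. A sketch, assuming IRIS GL's row-vector convention (p' = p * M; function and variable names are mine):

```c
#include <assert.h>

typedef float Matrix[4][4];   /* same layout as the IRIS GL Matrix type */

/* Transform world point (x,y,z) by M and do the perspective divide,
 * as the hardware would on the way into the sun's z-buffer. */
static void project(Matrix M, float x, float y, float z,
                    float *sx, float *sy, float *sz)
{
    float v[4] = { x, y, z, 1.0f }, r[4];
    for (int c = 0; c < 4; c++)
        r[c] = v[0]*M[0][c] + v[1]*M[1][c] + v[2]*M[2][c] + v[3]*M[3][c];
    *sx = r[0] / r[3];    /* screen x in the sun's view      */
    *sy = r[1] / r[3];    /* screen y in the sun's view      */
    *sz = r[2] / r[3];    /* depth to compare to the z-buffer */
}
```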
-
- Take two pictures from the camera's point of view and grab the transformation
- matrix that takes global coordinates to the camera's POV. Render the two views
- once with the lights on and once with them off.
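
With both camera images in hand, the final picture is a per-pixel select: lights-on where the point was visible to the sun, lights-off where it wasn't. A toy sketch (image size and names are mine):

```c
#include <assert.h>

#define W 4
#define H 4

/* Choose, per pixel, between the lights-on and lights-off renderings
 * according to a shadow mask (1 = visible to the sun). */
static void composite(unsigned char lit_img[H][W],
                      unsigned char dark_img[H][W],
                      unsigned char mask[H][W],
                      unsigned char out[H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            out[y][x] = mask[y][x] ? lit_img[y][x] : dark_img[y][x];
}
```

Using the lights-off render (rather than painting shadowed pixels black) keeps ambient shading in the shadowed regions.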
-
- Invert the camera matrix (there's a routine in _Numerical_Recipes_in_C_),
- multiply the two, and you've got a matrix which will (pretty much) take points
- on the screen to points in the sun's Z-buffer.
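
A Gauss-Jordan inverse (standing in for the Numerical Recipes routine; this code is mine) plus the multiply, again assuming row-vector matrices:

```c
#include <assert.h>
#include <math.h>

typedef float Matrix[4][4];

/* out = a * b, row-vector convention. */
static void matmul(Matrix a, Matrix b, Matrix out)
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            out[i][j] = 0.0f;
            for (int k = 0; k < 4; k++)
                out[i][j] += a[i][k] * b[k][j];
        }
}

/* Gauss-Jordan inverse with partial pivoting; returns 0 if singular. */
static int invert(Matrix in, Matrix out)
{
    float a[4][8];                           /* augmented [in | I] */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            a[i][j]     = in[i][j];
            a[i][j + 4] = (i == j) ? 1.0f : 0.0f;
        }
    for (int col = 0; col < 4; col++) {
        int piv = col;                       /* pick the largest pivot */
        for (int r = col + 1; r < 4; r++)
            if (fabsf(a[r][col]) > fabsf(a[piv][col])) piv = r;
        if (fabsf(a[piv][col]) < 1e-6f) return 0;
        for (int j = 0; j < 8; j++) {        /* swap pivot row up */
            float t = a[col][j]; a[col][j] = a[piv][j]; a[piv][j] = t;
        }
        float d = a[col][col];               /* normalize pivot row */
        for (int j = 0; j < 8; j++) a[col][j] /= d;
        for (int r = 0; r < 4; r++)          /* eliminate the column */
            if (r != col) {
                float f = a[r][col];
                for (int j = 0; j < 8; j++) a[r][j] -= f * a[col][j];
            }
    }
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) out[i][j] = a[i][j + 4];
    return 1;
}
```

The screen-to-sun matrix is then matmul(inverse-of-camera, sun-matrix): camera screen back to world, world forward into the sun's view.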
-
- Oh, check to see if your Z-buffer is signed or not, and where the sign bit is.
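
If the depth values turn out to be signed, the raw words read back from the z-buffer have to be sign-extended before comparing. A sketch assuming a 24-bit depth planted in the low bits of a 32-bit word, with bit 23 as the sign bit (check your machine; this layout is an assumption):

```c
#include <assert.h>

/* Interpret a raw 32-bit z-buffer word as a signed 24-bit depth. */
static long z24_signed(unsigned long raw)
{
    long z = (long)(raw & 0x00FFFFFFUL);     /* keep the low 24 bits   */
    if (z & 0x00800000L)                     /* sign bit set?          */
        z -= 0x01000000L;                    /* sign-extend to 32 bits */
    return z;
}
```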
-
- - Robert Ringrose
- (ringrose@ai.mit.edu)
-
- "There's always one more bu6."
-