- Path: sparky!uunet!usc!hela.iti.org!cs.widener.edu!eff!sol.ctr.columbia.edu!caen!uwm.edu!ogicse!news.u.washington.edu!stein.u.washington.edu!hlab
- From: jpc@tauon.ph.unimelb.edu.au (John Costella)
- Newsgroups: sci.virtual-worlds
- Subject: Re: PAPER: Galilean Antialiasing for VR, Part 01/04
- Message-ID: <1992Nov9.215722.10521@u.washington.edu>
- Date: 7 Nov 92 23:49:59 GMT
- Article-I.D.: u.1992Nov9.215722.10521
- Sender: news@u.washington.edu (USENET News System)
- Organization: University of Washington
- Lines: 144
- Approved: cyberoid@milton.u.washington.edu
- Originator: hlab@stein.u.washington.edu
-
-
- G'day viewers,
-
- I'd like to thank Michael Deering for his thoughtful and prompt review of
- my posting. I agree with many of his points; disagree with some; but respect
- his opinion in each case.
-
- I am not too worried about defending every point he raises; better to keep
- quiet and be thought a fool than open your mouth and prove it <grin>. But
- I think that Michael's review deserves a few explanatory comments, rather
- than complete silence, on my part. Of course, I would rather have any
- interested sci.v-w readers actually read [parts of] the paper than my
- meagre defence of it!
-
- So with those caveats, and a stiff upper lip, here goes ...
-
-
- > SUMMARY
- > The (92 page) paper describes several ideas. Starting with our basic
- > assumptions about visual perception, alternate frame buffer display
- > technologies for 3D rendered images are derived, similar to motion estimation
- > algorithm based systems for image decompression (e.g. MPEG 2) (which the
- > author doesn't seem to be aware of). After postulating some hardware
-
- MPEG 2: Yes and no. As I point out, motion estimation isn't my idea: it
- goes back to Galileo, and of course it is used in many different
- ways, in different situations. What I aimed to do was to outline
- precisely how one would implement motion estimation in a VR
- situation, TODAY. I do not believe that motion estimation is used
- as widely as it could be, in the realm of virtual world
- applications. If my pitiful attempt makes people think about
- doing it *now* (as they can), then it will have served its purpose.
-
- > The central idea is that one can save money by not always re-rendering
- > from scratch 3D Z-buffered images every frame, but by augmenting the
- > frame buffer with local motion vectors, a smart frame buffer can in-between
- > several sub-frames before the whole rendering system delivers a complete
- > new frame. (A variation of this approach is indeed used by Pixar and
- > others for their motion blurring algorithms, with limitations.)
-
- I am *not* talking about motion blur. That is a further topic that I
- realise has been treated well elsewhere.
-
- I would not be too surprised if the whole paper is a repeat of older work;
- just about everything has been thought of by someone somewhere sometime.
- However, I have been unable to find such information; I have practically
- exhausted the limited amount of literature available in this country in
- this field. The fact is that the approach I suggest is *not* being
- employed widely, if at all, as of this time. And it is not difficult to do.
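To make the in-betweening idea concrete, here is a minimal sketch (my own illustrative construction, not code from the paper): a frame buffer augmented with per-pixel motion vectors and depths, from which cheap sub-frames are extrapolated between full renders. Pixels are forward-mapped along their motion vectors, with the stored depth resolving collisions (nearest wins), in the spirit of the Z-buffered images under discussion. All names and the toy resolution are assumptions.

```python
import numpy as np

def extrapolate_subframe(colour, depth, velocity, dt):
    """Extrapolate a sub-frame from the last full render.

    colour   : (H, W, 3) array, the last fully rendered frame
    depth    : (H, W) array, per-pixel depth (np.inf = background)
    velocity : (H, W, 2) array, per-pixel screen-space velocity (px/s)
    dt       : time elapsed since the last full render (s)
    """
    out_c = np.zeros_like(colour)
    out_z = np.full_like(depth, np.inf)
    H, W = depth.shape
    for y in range(H):
        for x in range(W):
            # Predict the pixel's new screen position from its motion vector.
            nx = int(round(x + velocity[y, x, 0] * dt))
            ny = int(round(y + velocity[y, x, 1] * dt))
            # Keep it only if on-screen and nearer than anything already there.
            if 0 <= nx < W and 0 <= ny < H and depth[y, x] < out_z[ny, nx]:
                out_z[ny, nx] = depth[y, x]
                out_c[ny, nx] = colour[y, x]
    return out_c
```

A real smart frame buffer would do this per scanline in hardware; the loop above is just the gist, and it also shows the edge problem the review raises (a moved pixel can leave a hole behind it).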
-
- > As a designer of digital circuits for rendering, I have a fairly good feel
- > for how much circuitry would be required to implement this sort of algorithm.
- > The problem is that it appears that it would cost *more* to support the smart
- > pixels than the current brute force rendering circuitry takes! (In the
- > author's defence the relative amounts of circuits needed are not obvious.)
-
- I think I might not have made the cost-benefit case clear; my apologies.
- It is definitely somewhat more costly (in $$$) to implement the ideas I
- propose than to not implement them at all! However, once you have done
- that, you can achieve (say) 50 or 100 Hz effective update rate, while
- only rendering images at 5 or 10 Hz (or lower, if you go to section 4
- types of methods).
-
- Thus, you pay a few $$$ and get relatively large performance increases
- for each $.
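A back-of-envelope version of that trade-off, using the example rates from the text (illustrative figures, not benchmarks):

```python
render_hz  = 10    # full scene renders per second
display_hz = 100   # effective update rate seen by the user
inbetweens = display_hz // render_hz - 1   # extrapolated sub-frames per render
speedup    = display_hz / render_hz        # apparent update-rate gain
print(inbetweens, speedup)   # 9 10.0
```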
-
- The only *speed* cost is in maintaining velocity and acceleration information
- in your model (for the control points); the (enhanced) scan-converter
- can rasterise this information across (e.g.) a polygon in parallel; that
- part doesn't slow things down one bit.
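The per-control-point overhead amounts to one kinematic evaluation, which the enhanced scan-converter would then interpolate across the polygon. A minimal sketch of that evaluation (assumed form; the paper's own notation may differ):

```python
def predict_control_point(p, v, a, t):
    """Second-order (Galilean) prediction of a control point's position:
    p + v*t + a*t**2 / 2, applied per coordinate."""
    return tuple(pi + vi * t + 0.5 * ai * t * t
                 for pi, vi, ai in zip(p, v, a))
```

For example, a point at the origin moving at 10 units/s in x while accelerating at -2 units/s^2 in y lands at (10, -1, 0) after one second.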
-
- In terms of *how many* dollars, I cannot agree. You could whack this
- stuff onto a PC for the price of an expensive video card (once you're
- past prototyping, of course). Boosting the rest of your system by a
- factor of ten, as you seem to suggest, is not quite so easy. (Do you
- have a 1 GHz 486? <grin>) Implementing it on more sophisticated systems
- would become increasingly pitifully cheap, relative to the funds sunk
- into the rest of the system.
-
- I also do not agree that the amount of circuitry needed is not obvious:
- I did not give 74LS numbers, if that is what you mean <grin>,
- but that is because you could put most of the thing (except the RAM) on
- a VLSI chip or two, given incentive to do so; and so in implementation
- terms there will be different strokes for different folks.
-
- I may have missed your point, however; my apologies if so.
-
- > Furthermore slightly similar algorithms have been proposed in the past:
- > in the days when batch raytracing took too long to generate enough images
- > for motion sequences, people tried to do forms of automatic in-betweening.
-
- Yes, the idea is 350 years old <grin>. But the fact is that VR systems STILL
- jerk around with motion at 5 or 10 Hz update rate ... the message hasn't
- gotten through: there is no need to. If I am guilty of screaming the
- obvious at the unconverted, then so be it.
-
- > we can come close''. Unfortunately the edges are just where the human visual
- > system is looking; even small glitches here turn out to be quite perceivable.
-
- Yes; but jerky, delayed vision is worse. I have a software demo if you'd like
- to see it working in real life; ... it does work! <he says, hysterically :>
- Seeing is believing in this case.
-
- > Also the amount and complexity of bookkeeping could be quite high. Finally,
-
- No. Once the hardware is built and debugged, all the software needs is the
- motional data for the control points of the model. This is a small overhead,
- not a big problem. You win by a large margin.
-
- > these sort of algorithms are not the sort of thing you have to build custom
- > hardware to test out, just modify a slow all-software rendering system,
- > batch compute a few dozen frames, and blit the results at 60Hz (or more)
- > on a conventional display. (This would take some effort, but much less
- > than typing a 90 page paper.)
-
- Agree; agree; agree; sort of; and no. <grin> You want a proof-of-concept
- software simulation; I have the very program you describe right here,
- ready for posting if there is interest. (OK, it's only 25 Hz, not 60,
- and pretty unoptimised to boot, but it's proof of concept, and it proves
- it, no wuckins.)
-
- Your parenthetical remarks are not quite correct. It took nine days to
- type / proof the paper, but two weeks to write the software <grin>.
- [Thinking time (and batteries) not included.]
-
- [ I lost track of the following argument ... *grin*]
- > pages about the tram and problem with the tourist not knowing about the
- > double jerk and thus the problem with the matron's lap and, um, where was I?
- > Oh yes, I was saying that these diversions can tend to make one lose
- > track of the point being made, though making the document funnier to read.
-
- Guilty as charged, Your Honour. You can get the boy out of mischief,
- but you can't get the mischief out of the boy. :)
-
-
- Thanks again to Michael for a thoughtful response; and to Mark and Bob
- for tolerating this additional traffic at a hectic time <sheepish grin>.
-
- John
-
- ----------------------------------------------------------------------------
- John P. Costella School of Physics, The University of Melbourne
- jpc@tauon.ph.unimelb.edu.au Tel: +61 3 543-7795, Fax: +61 3 347-4783
- ----------------------------------------------------------------------------
-