Path: sparky!uunet!zephyr.ens.tek.com!uw-beaver!news.u.washington.edu!milton.u.washington.edu!hlab
From: bar-zeev@tortoise.cis.ohio-state.edu (avi bar-zeev)
Newsgroups: sci.virtual-worlds
Subject: Re: TECH: Text interfaces
Message-ID: <1992Jul22.142256.5790@cis.ohio-state.edu>
Date: 22 Jul 92 14:22:56 GMT
References: <1992Jul22.005552.9901@u.washington.edu>
Sender: news@u.washington.edu (USENET News System)
Organization: The Ohio State University, Department of Computer and Information Science
Lines: 23
Approved: cyberoid@milton.u.washington.edu
Originator: hlab@milton.u.washington.edu

I like the idea of a virtual monitor WITHIN the virtual world. You
can either use texture mapping to render the text/graphics in 3-D or
modify scalable font routines to draw in 3-D instead of 2-D. I can
imagine a virtual monitor resting in space, maybe even with controls
that mimic real ones (brightness, on/off, etc.). The user can, of
course, rotate the monitor and pull it as close as he/she wants without
a "mom" jumping out and warning us about going blind.

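To make the texture-mapping idea concrete, here's a minimal sketch in
C against an OpenGL-style API. The function name and its position/size
parameters are my own invention, and it assumes you've already uploaded
the monitor's frame buffer as a texture (e.g. with glTexImage2D) and
re-upload it whenever the text changes:

    #include <GL/gl.h>

    /* Draw a "virtual monitor": a quad floating in the world whose face
     * is texture-mapped with the monitor's current frame buffer, already
     * uploaded as texture `screen_tex`.  cx,cy,cz is the quad's center;
     * w,h its size in world units.  Because the monitor is an ordinary
     * object, the user can grab it, rotate it, and pull it as close as
     * they like. */
    void draw_virtual_monitor(unsigned int screen_tex,
                              float cx, float cy, float cz,
                              float w, float h)
    {
        float hw = w * 0.5f, hh = h * 0.5f;

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, screen_tex);

        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(cx - hw, cy - hh, cz);
        glTexCoord2f(1.0f, 0.0f); glVertex3f(cx + hw, cy - hh, cz);
        glTexCoord2f(1.0f, 1.0f); glVertex3f(cx + hw, cy + hh, cz);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(cx - hw, cy + hh, cz);
        glEnd();

        glDisable(GL_TEXTURE_2D);
    }
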
As for input, when we have fast update rates, tactile feedback, and
two gloves per person, a virtual keyboard with realistic action (in any
style - manual, electric, computer, etc.) might be possible, although
it might not be practical.

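The geometry half of that keyboard is the easy part; it's the feel and
the tracking speed that are hard. A rough sketch, with made-up key and
fingertip types, that fires a keypress on the frame a tracked fingertip
enters a key's volume:

    #include <stdbool.h>
    #include <stdio.h>

    /* One key of the virtual keyboard: an axis-aligned box in world
     * space. */
    typedef struct {
        float min[3], max[3];  /* box corners */
        char  ch;              /* character the key produces */
        bool  down;            /* is a fingertip currently inside? */
    } vkey;

    static bool point_in_box(const float p[3], const vkey *k)
    {
        return p[0] >= k->min[0] && p[0] <= k->max[0] &&
               p[1] >= k->min[1] && p[1] <= k->max[1] &&
               p[2] >= k->min[2] && p[2] <= k->max[2];
    }

    /* Call once per frame with the tracked fingertip position.  A key
     * fires on entry, not while the finger stays inside, so holding a
     * finger in place doesn't auto-repeat. */
    void update_keyboard(vkey *keys, int nkeys, const float fingertip[3])
    {
        for (int i = 0; i < nkeys; i++) {
            bool inside = point_in_box(fingertip, &keys[i]);
            if (inside && !keys[i].down)
                printf("key press: %c\n", keys[i].ch);  /* or queue it */
            keys[i].down = inside;
        }
    }
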
There may be a better way, though. I don't remember whether it was this
group or comp.human-factors that was discussing the use of sign
language in VR, and I don't know the current state of gesture
recognition.

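At least the static-posture end of sign language (fingerspelling) seems
within reach: with a DataGlove-style device giving one flex reading per
finger, recognition can be as simple as nearest-template matching. A
sketch, with the template format and threshold invented for
illustration:

    #include <float.h>

    #define NFINGERS 5

    /* A stored posture: expected flex value per finger (0 = straight,
     * 1 = fully bent) plus the symbol it means. */
    typedef struct {
        float flex[NFINGERS];
        char  symbol;
    } posture;

    /* Return the symbol of the stored posture closest (in squared
     * distance) to the current glove reading, or '?' if nothing is
     * within `threshold`.  A real system would add hysteresis and
     * require the posture to be held for a few frames. */
    char recognize_posture(const float reading[NFINGERS],
                           const posture *templates, int ntemplates,
                           float threshold)
    {
        float best = FLT_MAX;
        char  sym  = '?';

        for (int i = 0; i < ntemplates; i++) {
            float d = 0.0f;
            for (int f = 0; f < NFINGERS; f++) {
                float diff = reading[f] - templates[i].flex[f];
                d += diff * diff;
            }
            if (d < best) {
                best = d;
                sym  = templates[i].symbol;
            }
        }
        return (best <= threshold) ? sym : '?';
    }
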
Then again, there's always speech recognition...

Avi