- Newsgroups: sci.virtual-worlds
- Path: sparky!uunet!cs.utexas.edu!zaphod.mps.ohio-state.edu!saimiri.primate.wisc.edu!usenet.coe.montana.edu!news.u.washington.edu!stein.u.washington.edu!hlab
- From: piggy@hilbert.cc.utas.edu.au (La Monte Yarroll)
- Subject: Re: TECH: Neural Interfacing
- Message-ID: <1992Dec18.164014.7328@u.washington.edu>
- Originator: hlab@stein.u.washington.edu
- Sender: news@u.washington.edu (USENET News System)
- Organization: University of Tasmania, Australia.
- References: <1992Dec12.041658.16932@u. <1992Dec13.030227.13564@u.washington.edu>
- Date: Tue, 15 Dec 1992 23:21:06 GMT
- Approved: cyberoid@milton.u.washington.edu
- Lines: 25
-
-
- dstampe@psych.toronto.edu (Dave Stampe) writes:
-
- > Potentially, you could train yourself by biofeedback to associate certain
- > commands with whatever gave an EEG signal detectable by the computer.
- > But I don't think that's what we're looking for.
-
- Would you not be willing to train yourself to use a new input system
- if it meant a higher bandwidth feeding into the computer, with higher
- reaction speeds to boot?
-
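- The biofeedback loop Dave describes — the computer firing a command
- whenever a chosen EEG feature crosses a threshold the user learns to
- drive — can be sketched as follows. This is a hypothetical illustration,
- not anyone's actual system: the alpha-band feature, the 10 Hz target,
- and the threshold value are all assumptions.

```python
# Hypothetical sketch of threshold-based EEG biofeedback command input.
# One EEG feature (here, power near 10 Hz, the "alpha" band) is
# estimated per window; crossing a threshold triggers a command.
# All frequencies, rates, and thresholds are illustrative.
import math

def band_power(samples, rate, freq):
    """Goertzel power estimate at a single frequency (e.g. 10 Hz)."""
    w = 2.0 * math.pi * freq / rate
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X(f)|^2, normalized by window length.
    return (s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2) / len(samples)

def classify(samples, rate=128, freq=10.0, threshold=0.05):
    """Map one EEG window to a command: strong alpha -> 'select'."""
    return "select" if band_power(samples, rate, freq) > threshold else "idle"

# Simulated one-second windows: an eyes-closed-style alpha burst
# versus low-amplitude slow drift that should stay below threshold.
rate = 128
alpha = [0.8 * math.sin(2 * math.pi * 10 * t / rate) for t in range(rate)]
drift = [0.01 * math.sin(2 * math.pi * 3 * t / rate) for t in range(rate)]
```

- The "training yourself" part is the user learning to produce the
- feature on demand; the computer's side really is this simple.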
- True, such a device lies a bit outside the current trend in VR. Most
- VR work aims at getting the computer to simulate believable
- reactions to "normal" actions. But how do you "believably" simulate,
- e.g., 4 extra limbs? Or a menu as an extra appendage?
-
- I think I know how to do these things, but I've not gotten very far
- yet. (For those engaged in garage research, I recommend against
- having a baby, changing jobs, and moving to Australia all in the same
- month. :-)
- --
- La Monte H. Yarroll Home: piggy@baqaqi.chi.il.us
- Work: piggy@hilbert.maths.utas.edu.au
- AKA: piggy@gargoyle.uchicago.edu
- Once upon a time: postmaster@clout.chi.il.us
-