Path: sparky!uunet!spool.mu.edu!uwm.edu!ogicse!news.u.washington.edu!stein.u.washington.edu!hlab
From: jpc@tauon.ph.unimelb.edu.au (John Costella)
Newsgroups: sci.virtual-worlds
Subject: Re: TECH: Neural Interfacing
Message-ID: <1992Dec14.014514.6568@u.washington.edu>
Date: 13 Dec 92 19:35:59 GMT
Article-I.D.: u.1992Dec14.014514.6568
Sender: news@u.washington.edu (USENET News System)
Organization: University of Washington
Lines: 58
Approved: cyberoid@milton.u.washington.edu
Originator: hlab@stein.u.washington.edu

> From: chadwell@utkvx3.utk.edu (Chadwell, Leonard)
>
> In article <1992Dec12.041658.16932@u.washington.edu>, dstampe@psych.toronto.e
> (Dave Stampe) writes...
>
> >There seems to be a lot of misinformation about neural connectivity
> >going around, so let me add mine (B-{))
> >
> >First, let's survey the techniques used so far: the up/down computers with
> >pattern recognition and scalp electrodes, multi-electrode EEG, direct
> >cortical contacts, nerve interface chips, and high-resolution NMR.
> >
> [stuff deleted]
>
> Sorry, Dave, but L. Pinneo at SRI DID use EEG systems to do cortical wave
> pattern matching to give qualified thought pattern detection. Running on a P
> in 1974, using a skull cap with electrodes (no shaving or such required),
> the on-screen cursor could be issued 7 commands by thought: UP, DOWN, LEFT,
> RIGHT, SLOW, FAST, STOP. The main limitations on the system were the RAM
> (around 32K) and the speed of the processor. The commands were merely
> thought, and the system could accurately recognize the commands for 60% of
> the people tested on it, and with modifications to the pattern-matching
> code could be made to match the remaining people as well. With the progress
> made in raw computing power and memory capacity, more commands could be
> recognized with greater accuracy.

If this could be brought up to speed today, for a reasonable price,
then it could be useful for just what is mentioned here: navigating.
No more pointing, pressing, or whatever to move around; just think of
the direction you want to move. You'd have to add FORWARD and BACKWARD
to the command set for VR, but I'm sure that's not too hard. All other
aspects of the VR hardware would stay the same; the command-to-motion
mapping itself would be trivial, something like the fragment below.
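
Just to make that concrete, here is a toy C fragment of how the seven
commands above, plus my hypothetical FORWARD and BACKWARD, might drive
a viewpoint. Every name and number in it is mine, invented for
illustration; none of it comes from the SRI system.

    #include <stdio.h>

    /* The seven Pinneo-style commands, plus the two I suggest adding
       for VR navigation.  CMD_NONE means "no confident match". */
    enum command { CMD_NONE, CMD_UP, CMD_DOWN, CMD_LEFT, CMD_RIGHT,
                   CMD_FORWARD, CMD_BACKWARD, CMD_SLOW, CMD_FAST, CMD_STOP };

    struct viewpoint { double x, y, z;   /* position in the virtual world */
                       double speed; };  /* current step size per command */

    static void apply_command(struct viewpoint *vp, enum command cmd)
    {
        switch (cmd) {
        case CMD_UP:       vp->y += vp->speed; break;
        case CMD_DOWN:     vp->y -= vp->speed; break;
        case CMD_LEFT:     vp->x -= vp->speed; break;
        case CMD_RIGHT:    vp->x += vp->speed; break;
        case CMD_FORWARD:  vp->z -= vp->speed; break; /* into the screen */
        case CMD_BACKWARD: vp->z += vp->speed; break;
        case CMD_SLOW:     vp->speed *= 0.5;   break;
        case CMD_FAST:     vp->speed *= 2.0;   break;
        case CMD_STOP:     vp->speed  = 0.0;   break;
        default:           break;       /* ignore uncertain input */
        }
    }

    int main(void)
    {
        struct viewpoint vp = { 0.0, 0.0, 0.0, 1.0 };
        apply_command(&vp, CMD_FAST);     /* pretend these came from EEG */
        apply_command(&vp, CMD_FORWARD);
        printf("now at (%.1f, %.1f, %.1f), speed %.1f\n",
               vp.x, vp.y, vp.z, vp.speed);
        return 0;
    }

The hard part, of course, is the recognition upstream, not this mapping.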

You'd have to deal with all the mental activity generated just by being
in the virtual world in the first place; maybe that would drown the
``movement'' information in noise. But at least it might be feasible,
especially with the meaty processing power around today. A crude guard
against such false triggers is sketched below.
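
Purely by way of illustration (the feature counts and thresholds below
are plucked from the air, and I have no idea how the SRI code actually
decided), one could demand that a candidate pattern both clear an
absolute match threshold and beat the runner-up by a decent margin:

    #include <stdio.h>
    #include <math.h>

    #define NCMDS 9  /* command templates: the 7 above + FORWARD, BACKWARD */
    #define NFEAT 8  /* features extracted from one EEG epoch */

    /* Normalised correlation: 1.0 = perfect match, 0.0 = none. */
    static double match_score(const double *x, const double *t, int n)
    {
        double xt = 0.0, xx = 0.0, tt = 0.0;
        int i;
        for (i = 0; i < n; i++) {
            xt += x[i] * t[i];
            xx += x[i] * x[i];
            tt += t[i] * t[i];
        }
        return (xx == 0.0 || tt == 0.0) ? 0.0 : xt / sqrt(xx * tt);
    }

    /* Best-matching template, or -1 if nothing clears the absolute
       threshold or the runner-up is too close to call. */
    static int classify(const double *x, double tmpl[NCMDS][NFEAT],
                        double threshold, double margin)
    {
        double best = -1.0, second = -1.0;
        int i, best_i = -1;
        for (i = 0; i < NCMDS; i++) {
            double s = match_score(x, tmpl[i], NFEAT);
            if (s > best) { second = best; best = s; best_i = i; }
            else if (s > second) second = s;
        }
        if (best < threshold || best - second < margin)
            return -1;    /* call it background activity, do nothing */
        return best_i;
    }

    int main(void)
    {
        double tmpl[NCMDS][NFEAT] = { { 1, 0, 0, 0, 0, 0, 0, 0 } };
        double epoch[NFEAT] = { 0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 };
        printf("matched template: %d\n", classify(epoch, tmpl, 0.8, 0.1));
        return 0;
    }

Rejecting doubtful matches costs you a little responsiveness, but it
beats sailing off across the world every time you think about lunch.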

But, IMHO, from the information in the posts so far it's pretty clear
that this type of application (acting as a physical transducer *from*
the participant, at a low information rate in bits per second) is about
all that non-intrusive `neural interfaces' will do in the next 10 years.
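
To put a rough figure on that rate, take a nine-command set (the seven
above plus FORWARD and BACKWARD) and guess at one recognized command
every two seconds; both numbers are mine, plucked from the air:

    log2(9)  =  about 3.2 bits per command
    3.2 bits / 2 seconds  =  about 1.6 bits per second

So a couple of bits per second at the very best.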

I don't think that you'll be able to plug an 80687 maths co-processor
into your brain real soon. :) I'm not even sure that the brain could
handle too many more input devices ... aren't all its slots full? :)
(People with a loss of one or more senses excluded, of course.) Your
best bet is to use the existing inputs (e.g. the eyes) and interface
with them as efficiently as possible, `impedance matched' as it were.
Things like the retinal scanner seem to be about the optimal way to do
that, if it turns out to be cost-effective and light. (Any `ear-drum
scanners' out there yet? :)

John

----------------------------------------------------------------------------
John P. Costella              School of Physics, The University of Melbourne
jpc@tauon.ph.unimelb.edu.au   Tel: +61 3 543-7795, Fax: +61 3 347-4783
----------------------------------------------------------------------------