- Path: sparky!uunet!zaphod.mps.ohio-state.edu!darwin.sura.net!sgiblab!sgigate!rutgers!cs.utexas.edu!uwm.edu!csd4.csd.uwm.edu!info-high-audio-request
- From: deanaj@elec.canterbury.ac.nz (A. J. Dean)
- Newsgroups: rec.audio.high-end
- Subject: Re: Why digitize: some bunk
- Message-ID: <1e89hvINNjst@uwm.edu>
- Date: 15 Nov 92 13:13:32 GMT
- References: <1dtoflINN32a@uwm.edu>
- Organization: Electrical Engineering, University of Canterbury, New Zealand
- Lines: 75
- Approved: tjk@csd4.csd.uwm.edu
- NNTP-Posting-Host: 129.89.7.4
- Content-Transfer-Encoding: 7BIT
- Originator: tjk@csd4.csd.uwm.edu
-
- Bill Alford (bill@rsphy1.anu.edu.au) wrote:
- : I've been pondering why it is that I've yet to hear a digital system image
- : anywhere near as well as a modest LP system, and the conclusion I've come
- : to is that the basic assumptions in the current CD standard must be wrong.
- : I was reading in a magazine about Pioneer's Legata CD player, and it was
- : stated, from memory, that the processes involved in human sound location
- : (like when we use time differences between the two ears) were equivalent
- : to 90 kHz! Can anyone supply more information on this, as it ties in very
- : nicely with Prof Johnson's HDCD process (as mentioned in November's
- : Stereophile and the New York Times article for October 25, 1992).
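-
- (An aside on that 90 kHz figure before I reply properly: the ballpark
- usually quoted for the smallest interaural time difference we can detect is
- around 10 microseconds, and one over that is about 100 kHz, which I suspect
- is where numbers like 90 kHz come from. Being sensitive to a 10 us shift
- between the ears is not automatically the same thing as needing 100 kHz of
- bandwidth on the disc, mind you. A one-liner, just to show the arithmetic:)
-
-     # rough ballpark: ~10 microsecond interaural time resolution
-     itd = 10e-6                                          # seconds
-     print("equivalent rate: %.0f kHz" % (1 / itd / 1e3)) # ~100 kHz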
-
- I'm starting to wonder myself, even more than before. I'm not disputing the
- mathematical correctness of the Fourier transform, just the models we (as in
- engineers) build on top of it. After all, the ear is essentially a parallel
- array of sensors, each (well, some of them) capable of very accurate
- resolution of events in time. For instance, to represent a tone, nerves can
- fire in 'volley', and even the simple process of ORing their outputs
- together is enough to recover the original waveform that no single nerve
- could carry on its own. This happens in some creatures without cochleae
- (i.e. the mechanical frequency-differentiating bit), and apparently it
- happens in humans too, though we rely on it less. Anyway, this parallel
- arrangement, coupled with a processor, could in principle sense
- higher-bandwidth signals than our frequency-sensing cochleae are usually
- credited with (it's getting late, excuse the wording :-) Nyquist tells us
- that a parallel array of, say, 30000 independent channels, each firing
- somewhere in the region of 30 to 400 Hz, adds up to an aggregate rate of
- around 12 million samples a second - enough, in the ideal case, to detect
- and discriminate frequencies well into the megahertz range! Obviously this
- would require each nerve to fire with sub-microsecond timing accuracy, and
- neurons and stuff to process it all coherently, so it is a 'bit'
- optimistic, and my rough theory is, well, rough.
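-
- To make that concrete, here is a quick numerical sketch (Python, and a
- deliberately idealised toy model: perfectly staggered, perfectly timed
- channels, nothing like real nerves). It just shows the Nyquist arithmetic:
- lots of slow samplers, interleaved in time, act like one fast sampler.
-
-     import numpy as np
-
-     # N slow "nerves", each firing (sampling) at only r Hz, but with evenly
-     # staggered timing.  Interleaved they act like one sampler at N*r Hz,
-     # so the recoverable bandwidth is N*r/2 in the ideal case.
-     N, r   = 100, 400      # 100 channels at 400 Hz -> 40 kHz aggregate
-     f_tone = 15000         # a 15 kHz tone, far beyond any single channel
-     T      = 0.1           # observe 100 ms
-
-     samples = []
-     for k in range(N):
-         tk = (np.arange(int(T * r)) + k / N) / r   # channel k's firing times
-         samples.append(np.column_stack([tk, np.sin(2 * np.pi * f_tone * tk)]))
-
-     # Merge every firing into one time-ordered record and look at its spectrum.
-     rec = np.vstack(samples)
-     rec = rec[np.argsort(rec[:, 0])]
-     spec = np.abs(np.fft.rfft(rec[:, 1]))
-     freqs = np.fft.rfftfreq(len(rec), d=1.0 / (N * r))
-     print("strongest component: %.0f Hz" % freqs[spec.argmax()])   # ~15000
-
- The catch, as above, is the timing: each channel's firings have to be placed
- (and read out) to within about one aggregate sample period, which is 25
- microseconds in this toy and well under a microsecond for the 30000 x 400 Hz
- case.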
-
- Supporting this extended-bandwidth view is the fact that we humans have one
- of the poorest upper frequency limits (in terms of what we say we can hear)
- among the mammals (i.e. the animals with highly developed ears), despite
- very similar ear structure. All the bones and fluids and bits have a usable
- frequency response well beyond 20 kHz, yet we 'can't' hear those sounds.
- For some reason we also have one of the best low-frequency responses (and
- not entirely because of size).
-
- [Wading into deep water here...] Also supporting this extended-bandwidth
- view is the 'fact' that our cochlea is there to give us better pitch
- discrimination and better sensitivity, but that does not rule out some sort
- of _additional_ time-domain or transient sensitivity - after all, the waves
- still travel along the cochlea. Perhaps a different, additional form of
- sensing kicks in for complex, time-varying waveforms at higher volumes.
- That would mean (a) the sensitivity imparted by the cochlea is no longer
- all that important in determining what gets sensed where, and (b) a complex
- waveform is more likely to be picked up as a 'moving pattern' than as a
- single tone. The volley firing produced by, say, a 40 kHz tone travelling
- along the cochlea _may_ just look like random firing of the sensors, while
- a 40 kHz component of a complex pattern (i.e. one that also contains lower
- frequencies) may eventually be noticed once it has travelled far enough.
- Meaning: hearing is no longer limited to 20 kHz... A bit far-fetched maybe
- (my references were a bit old), but perhaps something similar happens in a
- diluted form, letting us hear even the tiniest amount of this 'out of band'
- information under certain circumstances.
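-
- Here is a second toy calculation along those lines (again Python, again my
- own crude model rather than anything physiological). Take a 2 kHz tone, add
- a small component just above 40 kHz, and look at when the combined waveform
- crosses zero on the way up (standing in for a firing threshold). The
- ultrasonic part nudges every crossing by a microsecond or so, and not by
- the same amount each cycle, so slow, threshold-triggered 'nerves' are at
- least handed a time-domain fingerprint of it, whether or not anything
- downstream ever makes use of it.
-
-     import numpy as np
-
-     fs = 10000000                       # 10 MHz simulation rate (model only)
-     t = np.arange(0, 0.005, 1.0 / fs)   # 5 ms of signal
-
-     def crossings(x):
-         # upward zero-crossing times in seconds, linearly interpolated
-         i = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
-         return (i - x[i] / (x[i + 1] - x[i])) / fs
-
-     tone = np.sin(2 * np.pi * 2000 * t)                  # 2 kHz alone
-     both = tone + 0.02 * np.sin(2 * np.pi * 40300 * t)   # plus a little 40.3 kHz
-
-     shifts_us = (crossings(both) - crossings(tone)) * 1e6
-     print("crossing shifts in microseconds:", np.round(shifts_us, 2))
-     # each crossing moves by up to ~1.6 us, varying from cycle to cycle
-
- For scale, a microsecond or two is not far off the interaural timing
- differences we demonstrably respond to.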
-
- While I'm at it, I might as well postulate that we hear a lot more than we
- consciously realise. After all, we are primarily visual creatures. Our
- brains are wired that way, compared with creatures like bats and dolphins,
- or cats, whose brains are wired 'a bit of both'. Apparently positioning
- cues from our ears go to areas of the brain that are closely coupled with
- the movement of our eyes, while in dolphins and the like they go to
- different places (perhaps linked with the visual imagery itself). The
- medial superior olive, or something like that. Experiments have shown that
- the position of a sound can be pinpointed very accurately from eye position
- while the listener remains pretty unsure of it. To say we are fully
- conscious of everything we sense is a bit of an arrogant view of how we
- work, I think.
-
- Whoops. This is a bit long, sorry for going on (and on...). But perhaps the
- related physics and numbers are best kept in this group. Along with software
- engineering, high-end audio must be one of the worst-researched and
- least-understood things I have come across (not intending to insult anyone
- having a good bash at it...). Whatever happens, numbers and theory are never
- going to change how we hear (just maybe what we hear!).
-
- Antony
-