Path: sparky!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: sci.philosophy.tech
Subject: Re: Dualism
Message-ID: <7889@skye.ed.ac.uk>
Date: 10 Nov 92 16:45:52 GMT
References: <1992Oct23.142733.2301@oracorp.com> <7827@skye.ed.ac.uk> <OZ.92Nov3171704@ursa.sis.yorku.ca>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 106

In article <OZ.92Nov3171704@ursa.sis.yorku.ca> oz@ursa.sis.yorku.ca (Ozan Yigit) writes:
>Jeff Dalton [please see the previous articles in this thread to get
>a better idea of what Jeff is trying to say]:
>
> Then why didn't people take notice when Minsky came into it?
>
>When? I have checked through my archives, and I can't find a posting
>by Minsky regarding Putnam rocks in comp.ai.philosophy.

Minsky's posting was on the table lookup machine. My point was
(roughly) that people didn't respond to McCullough and Minsky
on TLMs the way they responded to Putnam on rocks.

> ... Another factor, which I think is significant,
> though not sufficient, is that it was important to some people
> that Putnam be wrong and not so important that you (and Minsky)
> be wrong.
>
>Here is a list of people who have posted something on the subject "A
>rock implements every FSA", in case it helps you to be somewhat more
>specific than the usual "some people" generalizations.

I take it that you'd prefer it if I named names. But why should
I make it more of a personal dispute than it already is?

Anyway, since you obviously have access to all the postings, why
don't you tell us how many of these people agreed with Putnam?
Now, how many posted disagreements with Daryl's claims about the
complexity of, and mappings involving, the data in the TLM?

Also, I'd still be interested in how many attacked the TT in
the "TT is a scientific criterion" thread (or any other recent
TT thread).

In any case, I will save you the trouble of finding the Minsky
article. Here it is, with a few illustrative sketches of my own after it:

----------------------------------------------------------------------
From: minsky@media.mit.edu (Marvin Minsky)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan18.195906.15800@news.media.mit.edu>
Date: 18 Jan 92 19:59:06 GMT
References: <1992Jan18.144220.11862@oracorp.com>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 53
Cc: minsky

In article <1992Jan18.144220.11862@oracorp.com> daryl@oracorp.com writes:
>David Chalmers writes:
>
(Discussion of lookup table, etc. omitted)

>> I personally think that the case against 1 and 2 is made compellingly
>> by the example of the giant lookup table -- a ridiculous example,
>> impossible in practice but not in principle, but enough to make the
>> case. I think it's likely that any reasonable-in-practice
>> mechanism that has the right behaviour will have mentality, however.
>
>I agree that the giant lookup table is ridiculous as a way to
>implement AI, but I don't understand why it is so obvious that such an
>implementation would lack mentality. Your answer might be that it
>would lack the internal states that real minds have, but I don't even
>grant that: in the case of the lookup table, the internal state would
>be coded as a location in the lookup table. It is certainly true that
>this interpretation of internal state would not obey the same
>transition rules as our own internal states, but what makes the one
>"conscious processing" and the other not?
>
>Daryl McCullough
>ORA Corp.
>Ithaca, NY

Umm, I agree with the conclusion, that the anti-consciousness thesis gets
no support. But I don't see any reason to admit "it is certainly true
that this ... would not obey the same transition rules as our own
internal states." To be sure, it might not. However, a reasonable
guess might be that the state-transition table for the internal
location states must be -- what's the mathematical word for this -- a
structure of which the simulated brain's transition semi-group is a
homomorphic image. Of course this isn't rigorous, because each human will
have lots of inaccessible states -- that is, ones which never affect
behavior -- hence the super-table could be simplified in those
respects.

My point is that some skeptics could miss Daryl's point because they
do not realize that an adequate table-machine of this kind must be so
large that, as he says, its internal state-transition mechanism must
indeed be of the same order of graph-complexity as the wiring of the
brain! After all, the table itself has as many entries as the brain
has states. It would be rash indeed for a skeptic to feel confident
that a machine of this magnitude -- it has perhaps 2**10**10 nodes,
which is quite a few googols -- could "obviously" not be conscious,
whatever that might (or might not) mean. To insist on that would
simply clarify the weakness of (Searle's?) thesis, which, so far as
I can see, says something like:
    Let's assume that no machine can be conscious (or understand
    anything, or have intentionality).
    Therefore the Chinese room machine cannot be conscious, etc.

A fine bit of logic, for sure, but a faulty bit of reasoning.
----------------------------------------------------------------------
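
Now the promised sketches. First, to make Daryl's point concrete --
that the TLM's internal state is nothing over and above a location in
the table -- here is a toy Python sketch (mine, not Daryl's; the
table contents are invented):

    # Toy "giant lookup table" machine.  The whole program is the
    # table; the machine's internal state is just the current row.
    # table[state][input] == (reply, next_state)
    table = {
        0: {"hello": ("hi there", 1), "bye": ("goodbye", 0)},
        1: {"how are you?": ("fine, thanks", 0), "bye": ("goodbye", 0)},
    }

    def run(inputs, state=0):
        for word in inputs:
            reply, state = table[state][word]  # state = table location
            print(reply)

    run(["hello", "how are you?", "bye"])

The real TLM differs only in scale: every possible conversation
history gets its own row, which is what makes it impossible in
practice but not in principle.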
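
Second, Minsky's homomorphism remark can be put the same way. On my
reading (a sketch under my own formalization, not his), the claim is
that there is a map h from table locations onto brain states that
commutes with the transitions: h(delta_T(s, a)) = delta_B(h(s), a)
for every state s and input a.

    # Does h carry the table machine's transition structure onto the
    # brain machine's?  delta_T and delta_B are transition tables.
    def is_homomorphism(h, delta_T, delta_B, states, inputs):
        return all(h[delta_T[s][a]] == delta_B[h[s]][a]
                   for s in states for a in inputs)

    # Tiny example: four table locations collapse onto two brain states.
    delta_T = {0: {"x": 1}, 1: {"x": 2}, 2: {"x": 3}, 3: {"x": 0}}
    delta_B = {"A": {"x": "B"}, "B": {"x": "A"}}
    h = {0: "A", 1: "B", 2: "A", 3: "B"}
    print(is_homomorphism(h, delta_T, delta_B, delta_T, ["x"]))  # True

His caveat about inaccessible states then just says the table need not
preserve distinctions that never show up in behavior, so it could be
simplified accordingly.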
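
Finally, Minsky's "quite a few googols" is, if anything, a large
understatement; the arithmetic (mine, and easy to check) goes like this:

    # How big is 2**10**10?  Count its decimal digits.
    import math
    digits = 10**10 * math.log10(2)       # log10(2**(10**10))
    print("about %.2e digits" % digits)   # ~3.01e9 decimal digits
    # A googol (10**100) has only 101 digits; 2**10**10 is more like
    # a googol raised to the thirty-millionth power.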

-- jd