Newsgroups: comp.ai.philosophy
Path: sparky!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <1992Nov21.211537.130472@Cookie.secapl.com>
Date: Sat, 21 Nov 1992 21:15:37 GMT
References: <1992Nov17.152753.13786@oracorp.com>
Organization: Security APL, Inc.
Lines: 21

In article <1992Nov17.152753.13786@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>frank@Cookie.secapl.com (Frank Adams) writes:
>>When you try to understand what G means, by interpreting the "diagonalizing"
>>operator, you get an infinite regress (in fact, a simple self-reference, but
>>it is not difficult to construct examples using diagonalization where the
>>references are not self-referential, but do produce an infinite regress).
>
>There is no infinite regress! To understand what G means, you only
>need to know (1) what diagonalization means, and (2) what it means for
>David Chalmers to believe something. [...]
>
>The only way that G becomes paradoxical is if we assume that David
>Chalmers is so smart that he believes something if and only if it is
>true. In that case, G reduces to the Liar Paradox, which I admit does
>cause problems.

IMO, it is sufficient to make G paradoxical that the meaning of G is
*relevant* to whether David Chalmers believes it.  (Actually, "paradoxical"
is not quite the right word: "This sentence is true." is not precisely
paradoxical, but on most accounts, including mine, it shares the
unacceptability of "This statement is false.")

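To make the disagreement concrete, here is the usual formal rendering (a
sketch in standard notation, not something spelled out in the exchange
above).  Write Bel(x) for "David Chalmers believes the sentence coded by x";
the diagonal lemma supplies a G equivalent to the claim that Bel does not
hold of G, and Daryl's idealization is the schema that Chalmers believes a
sentence if and only if it is true:

\[
\begin{aligned}
&\text{Diagonal lemma:}     &\quad& G \;\leftrightarrow\; \neg\,\mathrm{Bel}(\ulcorner G \urcorner)\\
&\text{Idealization:}       && \mathrm{Bel}(\ulcorner \varphi \urcorner) \;\leftrightarrow\; \varphi
                                \quad \text{for every sentence } \varphi\\
&\text{Instantiated at } G\text{:} && \mathrm{Bel}(\ulcorner G \urcorner) \;\leftrightarrow\; G\\
&\text{Substituting:}       && G \;\leftrightarrow\; \neg G \qquad \text{(the Liar)}
\end{aligned}
\]

The first line alone is not contradictory; it only fixes what G asserts
about Bel.  The collapse to the Liar requires the idealization (the question
above is whether something weaker already causes the same kind of trouble).
By contrast, "This sentence is true." comes out as a sentence equivalent to
the claim that it is itself true, which is consistent either way; on the
usual diagnosis its defect is ungroundedness rather than contradiction.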