Xref: sparky comp.ai.neural-nets:3192 comp.ai:3094 comp.compression:3013 comp.theory:1763
Newsgroups: comp.ai.neural-nets,comp.ai,comp.compression,comp.theory
Path: sparky!uunet!s5!sethb
From: sethb@fid.morgan.com (Seth Breidbart)
Subject: Re: measuring complexity (was Re: Kolmogorov Complexity)
Message-ID: <1992Aug14.170429.13981@fid.morgan.com>
Organization: Morgan Stanley & Co., New York, NY
References: <1992Aug11.124142@sees.bangor.ac.uk> <arms.713549466@spedden> <1992Aug12.173546.3298@access.usask.ca>
Date: Fri, 14 Aug 1992 17:04:29 GMT
Lines: 19

In article <1992Aug12.173546.3298@access.usask.ca>
choy@skorpio.usask.ca (I am a terminator.) writes:
>In article <arms.713549466@spedden>, arms@cs.UAlberta.CA (Bill Armstrong) writes:
>|> mather@sees.bangor.ac.uk (Paul Mather) writes:

>|> The trouble is that all you need to define any function is a
>|> (complete) table of input-output pairs. If you allow this table as
>|> (part of) the "instruction set", all of computability theory becomes
>|> uninteresting. So yes, normally algorithms have to be finite.
>
>Why would computability theory become uninteresting?

A function is (can be viewed as) the set of its input-output pairs.
If you allow all such sets as part of the instruction set of a
machine, then every function is computable. And if every function is
computable, then the theory of which functions are not computable
becomes rather small.

Seth		sethb@fid.morgan.com
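To make the table-as-definition point concrete, here is a hypothetical
Python sketch (not from the thread): a function on a finite domain can
be "computed" with no algorithm at all, just by storing its complete
table of input-output pairs.

```python
# A complete input-output table for two-bit XOR. For a finite
# domain, this table IS the function: no algorithm is needed.
xor_table = {
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): 0,
}

def xor(a, b):
    # "Compute" the function by pure table lookup.
    return xor_table[(a, b)]

print(xor(0, 1))  # 1
print(xor(1, 1))  # 0
```

For a function on an infinite domain (say, all natural numbers), the
complete table is infinite, so no finite program can embed it; that is
why admitting arbitrary tables as "instructions" would make every
function trivially computable and collapse the theory.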