- Xref: sparky comp.ai.neural-nets:3202 can.ai:38 comp.ai:3107 comp.compression:3023 comp.theory:1768
- Newsgroups: comp.ai.neural-nets,can.ai,comp.ai,comp.compression,comp.theory
- Path: sparky!uunet!utcsri!torn!newshost.uwo.ca!gaul.csd.uwo.ca!koops
- From: koops@gaul.csd.uwo.ca (Luke Koops)
- Subject: Kolmogorov (Summary of responses)
- Organization: Computer Science Dept., Univ. of Western Ontario, London, Canada
- Date: Sun, 16 Aug 1992 06:32:13 GMT
- Message-ID: <1992Aug16.063213.15122@julian.uwo.ca>
- Keywords: kolmogorov, complexity, turing machine
- Sender: news@julian.uwo.ca (USENET News System)
- Nntp-Posting-Host: obelix.gaul.csd.uwo.ca
- Lines: 160
-
- In article <1992Aug10.104156.25017@julian.uwo.ca> I wrote:
- > I have been poring over the article [1], which seems to be an
- >excellent paper. In it the Kolmogorov complexity of a finite string
- >is defined (pg. 353) as the length of the shortest program for a
- >prefix Turing machine that generates the string (or infinite if no
- >such program exists). Why is the shortest Turing machine program
- >considered the measure of complexity, and not the smallest boolean
- >expression, the smallest decision tree, the smallest neural network
- >description, or the shortest program in 6502 assembly language?
- >
- > I would prefer an answer which does not _just_ appeal
- >dogmatically to the fact that the Turing machine is a general
- >definition of computation, but has some explanatory content. I would
- >also be interested in references that explain K-complexity (or its
- >applications) as simply as possible.
- >
- >-----
- >
- >1. Ming Li, Paul M. B. Vitanyi, Inductive Reasoning and Kolmogorov
- > Complexity, Journal of Computer and System Sciences. 44 (1992),
- > 343-384.
- >
- >-----
- >
- >--
- > Some newsreaders use proportional fonts!
- >
- > ||||<>////\\
- >- <>
- >\------
- >
- >So your signature looks like this ^^^^^ koops@csd.uwo.ca
-
- The e-mail responses to this post follow:
-
-
-
- From shallit@graceland.uwaterloo.ca Mon Aug 10 07:23 EDT 1992
- Organization: University of Waterloo
-
- You might try sending mail to Ming Li here at Waterloo: mli@math.uwaterloo.ca.
- He and Vitanyi are working on a book on the subject -- it is close to
- finished, as far as I know.
-
- The short answer to your question is that it doesn't matter what model
- is used to define Kolmogorov complexity -- within reason, of course.
- The fundamental theorem says that the complexity within all your models
- (with the possible exception of neural nets) is the same, up to an
- additive constant. This is well explained in the book of Li & Vitanyi.
-
- Jeff Shallit
-
- [ed. I wrote to Ming Li and am awaiting a reply.]
-
-
-
- From congrunt!jbs@uu3.psi.com Mon Aug 10 19:31 EDT 1992
-
- Why is the smallest turing machine considered the measure of
- complexity, and not the smallest boolean expression, or the smallest
- decision tree, or the smallest neural network description, or the
- shortest program in 6502 Assembly language?
-
- It is a completely arbitrary choice.
-
- Jeffrey Siegal
-
-
- From congrunt!jbs@uu3.psi.com Mon Aug 10 20:13 EDT 1992
- In-Reply-To: koops@obelix.gaul.csd.uwo.ca's message of 10 Aug 92 16:37:25 GMT
- Subject: Re: Kolmogorov Complexity
-
- >I still have the same question, but perhaps it can be framed a bit
- >differently (assuming that my understanding is correct). How come
- >turing machine length gives an unbiased estimate of the complexity
- >of a string, while other methods (like those listed above) do not?
-
- It doesn't. All the estimates are biased. But the point is that you
- are only interested in the answer in general terms, like O(n) or
- O(n log n). All methods will give you the same answer.
-
- Jeffrey Siegal
-
-
-
- From wayne@mathematik.uni-Bremen.de Tue Aug 11 13:11 EDT 1992
- Organization: 250 Million Pet Rocks
-
- In article <1992Aug10.104156.25017@julian.uwo.ca> you write:
- > {see top of post}
-
- It seems intuitive to me that the "Universality of Computation
- Machines" means not only that you can always _translate_ a program
- for one machine to run on another, but that the translation process
- (which, aside from neural network descriptions, which I don't know about,
- involves "linear" string substitutions whenever you get down to atomic
- operations) "must" preserve complexity asymptotically. A loose analogy
- would be with vector spaces, where all norms are equivalent and continuous
- mappings preserve the integrity of the norms, so a sequence which converges
- in the taxicab norm must converge in the sphere norm.
-
- At least that's how I would like to believe things. Obviously,
- by the fuzziness of what I just said, I'm not a card-carrying complexity
- theorist, but I have been thinking somewhat about the complexity of
- algebraic computation, and that's what the picture seems like it "should"
- be to me.
-
- Anyway it doesn't appeal to simple dogma; I would like to know
- if it turns out to be right.
-
- --
- Wayne Tvedt wayne@mathematik.uni-bremen.de
- ...fold globally, expand locally
-
-
- From koops@gaul.csd.uwo.ca Tue Aug 11 14:14 EDT 1992
- To: wayne@mathematik.uni-Bremen.de
-
- In article <1992Aug10.104156.25017@julian.uwo.ca> you write:
- > {see previous message}
-
- I don't know if it _is_ right, but it sounds right. I found this principle
- in a book called "Handbook of Theoretical Computer Science, Volume A":
-
- INVARIANCE THESIS: "Reasonable" machines can simulate each other within a
- polynomially bounded overhead in time and a *constant-factor overhead in
- space*.
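[ed. For Kolmogorov complexity specifically, the corresponding statement can be put formally; the notation below follows the usual K_U style of Li & Vitanyi's paper, though the exact phrasing is mine.]

```latex
% Invariance theorem: for any two universal prefix machines U and V
% there is a constant c_{U,V}, independent of the string x, with
\[
  \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V}
  \quad \text{for all strings } x .
\]
% Intuitively, c_{U,V} is the length of a program for U that
% simulates V (and vice versa).
```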
-
-
-
- From lima@ral.rpi.edu Fri Aug 14 16:08 EDT 1992
-
- According to my notes and the book 'Theory of Computation' by Derick
- Wood, an algorithm must consist of a finite set of instructions, but it
- doesn't necessarily have to halt. Wood divides algorithms into those that
- halt and partial algorithms.
- To be more precise, one could use the term PROCEDURES for both algorithms
- (which must halt) and partial algorithms (which need not terminate on all
- inputs).
-
- According to Li's paper, the constant bias of the complexity of one
- Turing machine relative to another may be interpreted as an initial part
- of the input tape consisting of a code 0^n, where the number of zeros
- identifies the machine Tn, followed by a separator 1 and then the original
- program p. The universal TM U simulates Tn on input p after it reads the 1.
- Thus the idea, expressed by John Tromp, that it doesn't matter whether you
- have a TM, a C interpreter, or a Pascal interpreter (for example, written
- in C), seems to me a good one.
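[ed. The 0^n 1 p encoding Lima describes can be sketched in a few lines of Python. The "machines" here are arbitrary stand-in functions of my own, not real Turing machines; only the dispatching on the self-delimiting prefix is the point.]

```python
# Toy sketch of the encoding above: a "universal machine" U reads
# 0^n 1 p from its tape, uses n to pick machine T_n, then runs T_n on
# the remaining program p.  The machine table is a placeholder.
machines = {
    1: lambda p: p.upper(),    # T_1: a stand-in "machine"
    2: lambda p: p[::-1],      # T_2: another stand-in
}

def universal(tape):
    n = 0
    while tape[n] == "0":      # count leading zeros -> machine index n
        n += 1
    program = tape[n + 1:]     # everything after the separating 1
    return machines[n](program)

# Simulating T_2 on program "abc" costs only the fixed-length prefix
# 0^2 1 -- the additive constant in the invariance theorem.
print(universal("001abc"))     # 'cba', i.e. machines[2]("abc")
```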
-
- Pedro Lima
- lima@ral.rpi.edu
-
-
-
- Thanks for contributing.
-
- -Luke Koops (koops@csd.uwo.ca)
- --
- Some newsreaders use proportional fonts!
-
- ||||<>////\\
- - <>
-