- Xref: sparky comp.ai.neural-nets:3197 comp.ai:3099 comp.compression:3019 comp.theory:1765
- Newsgroups: comp.ai.neural-nets,comp.ai,comp.compression,comp.theory
- Path: sparky!uunet!utcsri!torn!newshost.uwo.ca!asterix.gaul.csd.uwo.ca!koops
- From: koops@asterix.gaul.csd.uwo.ca (Luke Koops)
- Subject: Re: measuring complexity (was Re: Kolmogorov Complexity)
- Reply-To: koops@asterix.gaul.csd.uwo.ca (Luke Koops)
- Organization: University of Western Ontario
- Date: Fri, 14 Aug 1992 21:20:05 GMT
- Message-ID: <1992Aug14.212005.3438@julian.uwo.ca>
- References: <1992Aug10.104156.25017@julian.uwo.ca> <arms.713457373@spedden> <1992Aug10.163725.26986@julian.uwo.ca> <1992Aug10.115352.1@acad3.alaska.edu>
- Sender: news@julian.uwo.ca (USENET News System)
- Nntp-Posting-Host: asterix.gaul.csd.uwo.ca
- Lines: 53
-
- In article <1992Aug10.115352.1@acad3.alaska.edu>, fxsdb@acad3.alaska.edu (Scott D. Berry) writes:
- |> In article <1992Aug10.163725.26986@julian.uwo.ca>, koops@obelix.gaul.csd.uwo.ca (Luke Koops) writes:
- |> > In article <arms.713457373@spedden>, arms@cs.UAlberta.CA (Bill Armstrong) writes:
- |> >> koops@asterix.gaul.csd.uwo.ca (Luke Koops) writes:
- |> > .
- |> > .
- |> >> >Why is the smallest turing machine considered the measure of
- |> >> >complexity, and not the smallest boolean expression, or the smallest
- |> >> >decision tree, or the smallest neural network description, or the
- |> >> >shortest program in 6502 Assembly language?
- |> > .
- |> > .
- |> >> I think the answer to your question [...] is that other models that you
- |> >> could use, like a random access machine, will *only* differ in their results
- |> >> by a constant factor.
- |> >
- |> > I suspect that, even though the models _only_ differ by a constant
- |> > factor, in some cases that factor could be very large
- |> The reason you ignore even a very large constant factor is that complexity, on
- |> the theoretical level, takes very large sets into consideration. As the data
- |> set gets infinitely large, the constants matter less (for large n, even
- |> 10000000000n is eventually smaller than n^2).
- |>
- |> The reason the other methods don't give an "unbiased estimate of the
- |> complexity" is that a Turing machine is a finite-state machine, while an
- |> assembly language program, for example, may have an infinite number of states.
- |> A program that adds one to a number and jumps backwards will certainly be
- |> small, but has "infinite complexity", and does not describe an algorithm, as it
- |> doesn't terminate.
- |> -Scott (fxsdb@acad3.alaska.edu)
-
- I have learned a bit more about Kolmogorov complexity since I brought
- up the topic. K-complexity usually applies to finite strings. This is because infinite
- strings usually have infinite complexity (i.e. they are uncomputable and would require
- an infinitely long Turing machine to generate them). In the set of all infinite strings,
- only an infinitesimally small subset have finite complexity (e.g. periodic
- strings, or the binary representation of pi). Periodic strings can obviously be described
- by finite strings (the string being repeated). Transcendental numbers that
- can be generated by finite-length Turing machines are a different story, but, by
- definition, they have finite K-complexity.
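- The point about periodic strings can be sketched concretely. Below is a minimal
- illustration (using Python rather than a Turing machine as the description
- language; `generate` is a hypothetical helper, and the comparison only gives an
- upper bound on K-complexity, since the true value is uncomputable):

```python
def generate(period: str, n: int) -> str:
    """Repeat `period` until the output is exactly n characters long."""
    reps = -(-n // len(period))      # ceiling division: enough repetitions
    return (period * reps)[:n]

# A million-character periodic string...
s = generate("01", 1_000_000)

# ...takes ~1,000,000 symbols to write out literally, but the call
# 'generate("01", 1_000_000)' is only a few dozen characters. That short
# description is an upper bound on the string's complexity relative to
# this particular description language.
literal_length = len(s)
description_length = len('generate("01", 1_000_000)')
```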
-
- Also, I believe that K-complexity is not unbiased, but is biased towards
- Turing machine representations of concepts. It just provides a really good estimate
- of complexity. This is merely my opinion; I would appreciate it if someone could
- confirm it.
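- The standard answer to this worry is the invariance theorem: description
- lengths under any two universal reference machines differ by at most an
- additive constant (the length of an interpreter for one written on the other),
- independent of the string being described. A toy sketch of that additive gap
- (the two "machines" here are illustrative stand-ins, not real formalisms):

```python
def describe_machine_A(s: str) -> str:
    """Pretend machine A accepts the string verbatim as its program."""
    return s

def describe_machine_B(s: str) -> str:
    """Pretend machine B needs a fixed header before the same payload."""
    return "RUN:" + s

# The gap between the two description lengths is the same fixed
# constant (here, 4 characters) for every input string -- it does not
# grow with the string, which is why the choice of reference machine
# is ignored in the asymptotic theory.
for s in ["0101", "0" * 100]:
    gap = len(describe_machine_B(s)) - len(describe_machine_A(s))
    assert gap == 4
```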
- --
- Some newsreaders use proportional fonts!
-
- ||||<>////\\
- - <>
- \------
-
- which could make your signature look like this ^^^^^ koops@csd.uwo.ca
-