Path: sparky!uunet!portal!cup.portal.com!ts
From: ts@cup.portal.com (Tim W Smith)
Newsgroups: comp.compression
Subject: Re: PERPETUAL MOTION
Message-ID: <62692@cup.portal.com>
Date: Thu, 23 Jul 92 21:42:17 PDT
Organization: The Portal System (TM)
References: <35610@sdcc12.ucsd.edu> <62020@cup.portal.com> <7486@public.BTR.COM>
Lines: 59

> I have not seen any counterexample to the primary refutation: that if
> you consider all files of size less than or equal to N bits, you have a
> certain number of different possible files, and if you compress them
> ALL using a single method, either all will end up exactly the same
> size, or at least one of them will get larger, or two different files
> will compress to the same result (meaning the method isn't reversible),
> or the method will simply refuse to work on at least one file.

Most impossible compressors claim to work only on files larger than a
certain size. Let that size be M bits. Let F(X) be the total number of
files of X bits or less, and let f(X) be the total number of files of
exactly X bits.

Assume we have some ordering of all files. With each file, associate a
number: the position of the file in this ordering relative to the first
file of the same size.

Now consider the following compressor. For files of length M+1, the
first F(M) of them are mapped to the files of length M or less. The
rest of the files of length M+1 are mapped to the files whose number is
F(M) less than theirs.

This leaves us F(M) files of length M+1 that do not have files mapped
onto them.

We can now do a similar thing for files of length M+2. We map the
first F(M) onto the files of length M+1 that have not been mapped to,
and then map the rest to files whose number is F(M) less.

This leaves us F(M) files of length M+2 that have not been mapped
onto.

Repeat forever. At each step, there are F(M) files that compress, and
applying this to a file of size N at most f(N)/F(M) times (rounded up)
will compress it.

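To make the construction concrete, here is a sketch in Python. It assumes files are bit strings ordered by increasing size and then by value, with the empty file first, so that f(X) = 2**X and F(M) = 2**(M+1) - 1 (counting the empty file); the function names are mine, not from the post:

```python
def nth_shortest(i):
    """The i-th file among all files, ordered by size then by value
    (index 0 is the empty file)."""
    n = 0
    while 2 ** (n + 1) - 1 <= i:
        n += 1
    return format(i - (2 ** n - 1), "b").zfill(n) if n else ""

def cascade_compress(bits, M):
    """One pass of the compressor described above, for bit-string files.
    F(M) = 2**(M+1) - 1 files have length M or less (counting the empty file)."""
    n = len(bits)
    if n <= M:
        # the compressor refuses to work on short files
        raise ValueError("only files longer than M bits are accepted")
    F_M = 2 ** (M + 1) - 1
    num = int(bits, 2)          # position among files of the same length
    if num >= F_M:
        # map to the file of the same length whose number is F(M) less
        return format(num - F_M, "b").zfill(n)
    if n == M + 1:
        # the first F(M) files of length M+1 go to the files of length M or less
        return nth_shortest(num)
    # the first F(M) files of length n go to the F(M) files of length n-1
    # that were not mapped onto (the top F(M) numbers of that length)
    return format(2 ** (n - 1) - F_M + num, "b").zfill(n - 1)
```

A quick check with M = 2 (so F(M) = 7) shows one pass is injective on the eight files of length 3: they map to eight distinct files, none longer than the original, and seven of them strictly shorter.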
I don't recall all the WEB claims, but I think that this satisfies them
in spirit.

Actually, the above is needlessly complex. Arrange all possible files
by increasing size. Call the position of a given file on this list
the index of that file. "Compress" a file by mapping it to the file
whose index is one less. No file gets bigger, some files get smaller,
and repeated application to a given file will make it smaller.

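This simpler scheme can be sketched the same way, again assuming bit-string files ordered by increasing size and then by value, with the empty file at index 0 (the helper names are mine):

```python
def index_of(bits):
    """Position of a file in the list of all files, ordered by
    increasing size and then by value; the empty file has index 0."""
    n = len(bits)
    return (2 ** n - 1) + (int(bits, 2) if bits else 0)

def file_at(i):
    """Inverse of index_of."""
    n = 0
    while 2 ** (n + 1) - 1 <= i:
        n += 1
    return format(i - (2 ** n - 1), "b").zfill(n) if n else ""

def compress(bits):
    """Map a file to the file whose index is one less; decompression
    is index plus one. The empty file is left alone."""
    i = index_of(bits)
    return file_at(i - 1) if i > 0 else bits

# No file gets bigger; the first file of each length gets shorter:
# "00" (index 3) -> "1" (index 2), while "10" (index 5) -> "01" (index 4).
```

Decompression is just `file_at(index_of(bits) + 1)`, so the mapping is trivially reversible.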
The above is the kind of thing I was trying to capture with the
"subtract one" compressor. I screwed up on the leading zeros,
though.

Note that the above is completely useless as a real compressor.
The fraction of files actually compressed by a single pass goes
to zero as the file size goes up. If you apply it repeatedly, you
will use as much space to remember how many times you applied it
as you save by using it. And the compression ratio goes down
as the file size goes up, approaching a limit of 1.

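For the index-one-less version this is easy to check by brute force (same bit-string ordering and my same hypothetical helpers as above): an n-bit file gets shorter only when the file one index earlier is shorter, which happens for exactly one file per length, so the fraction compressed per pass is 1/2**n.

```python
def index_of(bits):
    """Position in the list of all files ordered by size then value."""
    n = len(bits)
    return (2 ** n - 1) + (int(bits, 2) if bits else 0)

def file_at(i):
    """Inverse of index_of."""
    n = 0
    while 2 ** (n + 1) - 1 <= i:
        n += 1
    return format(i - (2 ** n - 1), "b").zfill(n) if n else ""

# Count how many n-bit files shrink in one pass of "index minus one".
for n in range(1, 11):
    shrunk = sum(len(file_at(index_of(format(k, "b").zfill(n)) - 1)) < n
                 for k in range(2 ** n))
    print(f"{n} bits: {shrunk} of {2 ** n} files get shorter in one pass")
```

Only the all-zeros file of each length shrinks, so the count printed is always 1.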
However, I believe that it *DOES* provide a counterexample to the
usual counting argument.

--Tim Smith
