- Path: sparky!uunet!decwrl!btr!rem
- From: rem@btr.BTR.COM (Robert E. Maas rem@btr.com)
- Newsgroups: comp.compression
- Subject: Re: Compressing English text to 1.75bits or better (80%)
- Keywords: FAQ
- Message-ID: <7990@public.BTR.COM>
- Date: 13 Sep 92 19:59:18 GMT
- References: <1992Sep12.103552.24873@rhrk.uni-kl.de> <1992Sep12.154713.14396@uwm.edu>
- Organization: BTR Public Access UNIX, MtnView CA. Contact: Customer Service cs@BTR.COM
- Lines: 17
-
- markh@csd4.csd.uwm.edu (Mark) said:
- <<I have in mind a compression algorithm. It has a list of every
- single text that was produced anywhere, is being produced, or ever will
- be produced, and indexes each with a unique 64 bit number. This
- algorithm will compress every English text to 64 bits.>>
-
- Your proposed algorithm would be useless for normal volumes of data,
- because the header needed to transmit that corpus of English would be
- far larger than the material you wanted to compress. Remember, when
- decompressing, all the software and data tables must be present, in
- addition to the actual compressed files, and the total of all that must
- be significantly smaller than the original files or the compression
- wasn't worthwhile. In your method all those tables are orders of
- magnitude LARGER than the original text most users would want to
- compress.
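- 
- To make that bookkeeping concrete, here is a rough back-of-the-envelope
- sketch in Python. Every figure in it is an assumed, illustrative number
- rather than a measurement, and the 10**15-byte table is, if anything,
- wildly optimistic for "every English text ever produced":
- 
-     # Cost accounting for the proposed "index every text" scheme.
-     # All sizes are assumed, illustrative figures in bytes.
-     ORIGINAL_FILE_BYTES = 100_000   # a typical file a user might compress
-     COMPRESSED_BYTES    = 8         # the 64-bit index itself
-     CORPUS_TABLE_BYTES  = 10**15    # table of every English text (optimistic)
-     DECOMPRESSOR_BYTES  = 50_000    # the program that consults the table
- 
-     # Everything the receiver must hold before it can decompress anything.
-     total_needed = COMPRESSED_BYTES + CORPUS_TABLE_BYTES + DECOMPRESSOR_BYTES
- 
-     print("original file size:  ", ORIGINAL_FILE_BYTES)
-     print("needed to decompress:", total_needed)
-     print("worthwhile?          ", total_needed < ORIGINAL_FILE_BYTES)  # False
- 
- Even if the table were charged only once against a lifetime of files,
- under these assumed figures a user would have to compress on the order
- of 10**10 such files before the scheme broke even.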
-
- May I suggest such silly methods as yours not be posted in the future?
-