- Path: sparky!uunet!zaphod.mps.ohio-state.edu!sdd.hp.com!uakari.primate.wisc.edu!ames!agate!bho
- From: bho@ocf.berkeley.edu (Bing Ho)
- Newsgroups: comp.compression
- Subject: Re: DELETE PROTECTION AND FILE FIX UNDER ON-THE-FLY COMPRESSION
- Message-ID: <16he2oINNks4@agate.berkeley.edu>
- Date: 14 Aug 92 23:02:48 GMT
- References: <1992Aug11.172604.29703@ncsu.edu> <169b62INNemt@agate.berkeley.edu> <2984@accucx.cc.ruu.nl>
- Organization: U.C. Berkeley Open Computing Facility
- Lines: 28
- NNTP-Posting-Host: tornado.berkeley.edu
-
- In article <2984@accucx.cc.ruu.nl> nevries@accucx.cc.ruu.nl
- (Nico E de Vries) writes:
-
- >For larger .EXE, .SYS, and .COM files, diet etc. do better. Anyone
- >using Clipper will be very glad about that :-).
-
- That's very interesting to know. Which of the compressors (pklite,
- diet, or lzexe) does best for compression?
-
- >This is not the main reason. The main reason archivers etc. do better
- >than Stacker is the speed vs. ratio limit. Stacker is faster than most
- >archivers and therefore has a worse compression ratio. Also notice
- >there are compressors even slower than ARJ with more compression. The
- >limit seems to be the Ashford compressor, which takes hours for a file
- >at about 30% more compression than ARJ (see the comp.compression FAQ
- >for more info).
-
- That's true for perfectly implemented algorithms. Until
- implementations reach that point, though, there are many ways to
- optimize existing schemes for speed. Speed and compression ratio are
- always inversely related, but how well the code is tuned still
- accounts for much of the speed difference. The question I would like
- answered is whether Stacker's scheme is inherently worse than the
- prevalent zip, arj, and lzh schemes.
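-
- To make that concrete, here is a toy LZ77-style matcher in C. It is
- only a sketch: it is not Stacker's actual algorithm, nor PKZIP's or
- ARJ's, and the window size, match lengths, and search limit below are
- made-up numbers for illustration. The one knob is how many earlier
- positions the matcher is willing to examine per byte; an on-the-fly
- driver has to keep that small, while an archiver can search much
- harder (and then add entropy coding on top, which this sketch skips).
-
- /* Toy LZ77 sketch.  WINDOW, MIN_MATCH, MAX_MATCH and the max_tries
-    argument are illustration values, not anybody's real format. */
- #include <stdio.h>
-
- #define WINDOW    4096
- #define MIN_MATCH 3
- #define MAX_MATCH 18
-
- /* Longest match for in[pos..] among at most max_tries earlier
-    positions.  More tries = slower search but better matches. */
- static int find_match(const unsigned char *in, long pos, long len,
-                       int max_tries, long *match_pos)
- {
-     long start = pos > WINDOW ? pos - WINDOW : 0;
-     long best = 0, i;
-     int tries = 0;
-
-     for (i = pos - 1; i >= start && tries < max_tries; i--, tries++) {
-         long l = 0;
-         while (l < MAX_MATCH && pos + l < len && in[i + l] == in[pos + l])
-             l++;
-         if (l > best) { best = l; *match_pos = i; }
-     }
-     return best >= MIN_MATCH ? (int)best : 0;
- }
-
- /* Rough compressed size: 1 byte per literal, 2 bytes per
-    (offset,length) token, ignoring the flag bits a real coder needs.
-    The offset itself (match_pos) is not even emitted here. */
- long lz_estimate(const unsigned char *in, long len, int max_tries)
- {
-     long pos = 0, out = 0, mpos = 0;
-     while (pos < len) {
-         int mlen = find_match(in, pos, len, max_tries, &mpos);
-         if (mlen) { out += 2; pos += mlen; }  /* copy token   */
-         else      { out += 1; pos += 1;    }  /* literal byte */
-     }
-     return out;
- }
-
- int main(void)
- {
-     static unsigned char buf[1 << 16];   /* first 64K of stdin */
-     long n = (long)fread(buf, 1, sizeof buf, stdin);
-
-     printf("lazy search     (8 tries/byte):    ~%ld bytes\n",
-            lz_estimate(buf, n, 8));
-     printf("thorough search (1024 tries/byte): ~%ld bytes\n",
-            lz_estimate(buf, n, 1024));
-     return 0;
- }
-
- Feed it a file on stdin and the thorough search should come out
- noticeably smaller on most data, even though the token format is
- identical, which is the sense in which speed is partly an
- implementation knob rather than a property of the scheme itself.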
-
- >Nico E. de Vries
-
-
- Bing Ho
- bho@ocf.berkeley.edu
-