- Path: sparky!uunet!dtix!darwin.sura.net!zaphod.mps.ohio-state.edu!cs.utexas.edu!milano!cactus.org!ritter
- From: ritter@cactus.org (Terry Ritter)
- Newsgroups: sci.crypt
- Subject: Re: more mcrypt discussion
- Summary: More Mass Encryption Comments
- Keywords: mcrypt
- Message-ID: <1992Aug13.181102.11451@cactus.org>
- Date: 13 Aug 92 18:11:02 GMT
- References: <1992Aug12.185029.13607@mintaka.lcs.mit.edu>
- Organization: Capital Area Central Texas UNIX Society, Austin, Tx
- Lines: 124
-
-
- Continuing the discussion of mcrypt with knight@hal.gnu.ai.mit.edu:
-
-
- > The increase is the excess blocking. I encrypt in exactly
- >1024 byte (1K) blocks. . . .
- > . . . due to computer file clusters, this block doesn't
- >take up any additional space from the drive.
-
- Well, we must be clear about whether the cipher is working on actual
- disk sectors or files. Only files have been mentioned.
-
- I think it is important to end up (after enciphering and deciphering)
- with an exact duplicate of the original file, in both data and length.
- Any change from variable-length (file) processing to block-oriented
- processing must add something to the original data in order to
- recover the original length.  And if the original length is 1023 mod
- 1024, the single free byte in the last block cannot hold a pad count
- for a 1K block (a count up to 1024 needs at least two bytes), so a
- whole additional block is forced.  Thus, any block-oriented cipher
- which works on variable-length files must expand some files.  Small
- blocks imply a small expansion which occurs frequently; large blocks
- imply a larger expansion which occurs less often.
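-
- A concrete sketch may help (this is my own illustration, not the
- mcrypt code; the two-byte pad count is an assumption):
-
-     /* Pad a buffer out to a multiple of BLK, recording the pad
-        count in the last two bytes, so deciphering can recover the
-        exact original length.  The buffer must have room for the
-        padded result. */
-     #include <string.h>
-
-     #define BLK 1024
-
-     unsigned long pad(unsigned char *buf, unsigned long len)
-     {
-         unsigned long padded = ((len + 2 + BLK - 1) / BLK) * BLK;
-         unsigned long count  = padded - len;        /* 2 .. BLK+1 */
-
-         memset(buf + len, 0, count);
-         buf[padded - 2] = (unsigned char)(count >> 8);
-         buf[padded - 1] = (unsigned char)(count & 0xFF);
-         return padded;
-     }
-
-     unsigned long unpad(const unsigned char *buf, unsigned long padded)
-     {
-         return padded - (((unsigned long)buf[padded - 2] << 8)
-                         | (unsigned long)buf[padded - 1]);
-     }
-
-     int main(void)
-     {
-         static unsigned char buf[3 * BLK];
-         unsigned long n = pad(buf, 1023UL);     /* 1023 -> 2048 */
-         return (unpad(buf, n) == 1023UL) ? 0 : 1;
-     }
-
- Note the worst case exercised in main(): a 1023-byte file comes back
- from pad() as 2048 bytes, a whole extra block, which is exactly the
- expansion argued above.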
-
- If the cipher does not reproduce a file with exactly the original
- length in bytes, then we can expect that a CRC (or any other data
- validity check) on the recovered file will give a different result
- than on the original file. This is not good.
-
-
- >> * Because of a lack of any statement otherwise, apparently the
- >> scheme always enciphers in the same way with the same User Key.
- >> If so, this is a serious weakness because it allows anagramming
- >> between different messages (and this is the classical method of
- >> breaking transposition ciphers). In general, no cipher scheme
- >> should "ever" re-encipher the same data in the same way even
- >> with the same User Key. (One way around this is to first encipher
- >> a "really random" Message Key, and use the plaintext Message Key
- >> as the internal key for ciphering the actual data.)
-
- >You'll have to explain this to me, because I'm wondering if my
- >implementation doesn't completely by-pass this possibility.
-
- Well, I guess it all depends on what you want to do with it. Your
- mention of NSA implied to me that you were thinking you had a cipher
- which would give experts some work. If you just want to obscure the
- data, that's another thing.
-
- If a cipher is supposed to be strong, then it is a very bad idea
- to encipher multiple blocks in the same way, or multiple messages
- in the same way. It is certainly true that bit-transposition makes
- things harder for cryptanalysis. But cryptanalysis of transposition
- has always had to deal with the possibility that the correct symbol
- for any particular position could come from anywhere in the ciphertext.
- Despite this, transposition ciphers have been broken many times.
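-
- To make the Message Key construction from the quoted paragraph
- concrete, here is a sketch (entirely my own illustration; the
- xor_crypt() toy stands in for whatever real cipher would be used):
-
-     /* A fresh random Message Key per message, transmitted
-        enciphered under the User Key; the data itself is enciphered
-        under the plaintext Message Key.  xor_crypt() is a toy
-        combiner, not a cipher proposal. */
-     #include <stdlib.h>
-     #include <string.h>
-     #include <time.h>
-
-     #define MKLEN 16
-
-     static void xor_crypt(unsigned char *buf, long len,
-                           const unsigned char *key, long klen)
-     {
-         long i;
-         for (i = 0; i < len; i++)
-             buf[i] ^= key[i % klen];
-     }
-
-     int main(void)
-     {
-         unsigned char userkey[MKLEN + 1] = "0123456789abcdef";
-         unsigned char mkey[MKLEN], sentkey[MKLEN];
-         unsigned char msg[] = "identical plaintext, differing ciphertext";
-         long i;
-
-         srand((unsigned) time(NULL));
-         for (i = 0; i < MKLEN; i++)            /* must be *really*  */
-             mkey[i] = (unsigned char) rand();  /* random for real   */
-
-         memcpy(sentkey, mkey, MKLEN);
-         xor_crypt(sentkey, MKLEN, userkey, MKLEN);  /* key under key */
-         xor_crypt(msg, (long)(sizeof msg - 1),
-                   mkey, MKLEN);                /* data under mkey   */
-
-         /* transmit sentkey, then the enciphered msg */
-         return 0;
-     }
-
- Because mkey changes on every message, the same plaintext under the
- same User Key never enciphers the same way twice, which denies the
- analyst matched material to anagram across messages.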
-
- One reason that transposition ciphers have been broken is that their
- transposition permutations often have to be of a special form which
- could be constructed from the User Key, and that may be a factor here
- as well. The simple fact that there are a large number of permutations
- for a given block size does not mean that all of them--or even any large
- fraction--could be selected by User Keys. The key to cryptanalysis
- is to strip away huge numbers of possibilities, instead of just
- trying each possibility one at a time. We have this situation when
- only certain types of permutation can be constructed.
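-
- Some arithmetic makes the gap concrete (my figures, treating a 1K
- block as 8192 bit positions):
-
-     /* Size of the full bit-permutation space for an 8192-bit
-        block, versus what a User Key could ever select.  lgamma()
-        gives log(8192!) without overflow. */
-     #include <stdio.h>
-     #include <math.h>
-
-     int main(void)
-     {
-         double perm_bits = lgamma(8192.0 + 1.0) / log(2.0);
-
-         printf("log2(8192!)  = %.0f bits\n", perm_bits);
-         printf("1024-bit key = at most 2^1024 permutations\n");
-         printf("fraction reachable: about 2^-%.0f\n",
-                perm_bits - 1024.0);
-         return 0;
-     }
-
- This comes to roughly 2^94700 permutations, of which even an
- extravagant 1024-bit User Key selects a vanishing subset; the real
- question is whether that subset has a structure an opponent can
- strip away wholesale.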
-
- A big part of the strength argument with this cipher is whether the
- Simple Substitution step prevents cryptanalysis. If we consider only
- analysis based on two unknown ciphertext blocks, I'm sure it helps.
- But, in real applications, the opponent will eventually have some
- amount of plaintext, the corresponding ciphertext, and lots of
- unknown ciphertext to play with.
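-
- Ignoring the transposition for a moment, known plaintext strips a
- byte-wide Simple Substitution almost immediately (my illustration;
- the byte-wide table is an assumption):
-
-     /* Each aligned known-plaintext/ciphertext byte pair reveals
-        one entry of the substitution table; modest amounts of
-        known plaintext expose most of the table. */
-     #include <stdio.h>
-
-     int main(void)
-     {
-         unsigned char table[256];
-         int  seen[256] = {0};
-         unsigned char plain[]  = "some known plaintext bytes";
-         unsigned char cipher[] = "matching ciphertext  bytes";
-         long i, n = 0;
-
-         for (i = 0; plain[i] != '\0'; i++) {
-             table[plain[i]] = cipher[i];   /* one pair, one entry */
-             seen[plain[i]]  = 1;
-         }
-         for (i = 0; i < 256; i++)
-             n += seen[i];
-         printf("%ld of 256 table entries recovered\n", n);
-         return 0;
-     }
-
- Once the table is exposed, the opponent faces only the transposition,
- and the classical attacks apply.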
-
- Transposition ciphers have historically been cracked by anagramming,
- and this only works when multiple messages (blocks) are enciphered
- with the same permutation. That same weakness occurs in the
- proposed cipher. In the absence of some plausible argument that
- this cipher is so unique as to not be susceptible to the old
- weakness, one should conclude that the old weakness could be
- exploited.
-
- While the Simple Substitution step will complicate things, it is
- hard to imagine that it makes cryptanalysis actually impossible;
- Simple Substitution is just too weak to provide much of a screen.
- On the other hand, even a fairly simple Vernam system (e.g. a
- 32-bit statistical RNG with XOR combining)--instead of Simple
- Substitution--could make the argument much more interesting.
- But it would be better to have *both* a randomizer step *and*
- a different permutation for each and every block *and* use
- sizable cryptographic RNG's to drive the whole thing.
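-
- For reference, the "fairly simple Vernam system" mentioned above
- might look like this (my sketch; the LCG constants are common
- textbook values, chosen only as an example):
-
-     /* Vernam-style combining: a 32-bit statistical RNG drives an
-        XOR keystream.  This is the *weak* randomizer discussed in
-        the text; it only illustrates the structure. */
-     #include <stdio.h>
-
-     static unsigned long state;
-
-     static unsigned long next32(void)
-     {
-         state = (state * 1664525UL + 1013904223UL) & 0xFFFFFFFFUL;
-         return state;
-     }
-
-     static void vernam(unsigned char *buf, long len, unsigned long seed)
-     {
-         long i;
-         state = seed;                   /* seed from the block key */
-         for (i = 0; i < len; i++)
-             buf[i] ^= (unsigned char)(next32() >> 24);
-     }
-
-     int main(void)
-     {
-         unsigned char block[16] = "data to combine";
-
-         vernam(block, 16, 0x12345678UL);   /* encipher           */
-         vernam(block, 16, 0x12345678UL);   /* decipher: XOR is   */
-         printf("%s\n", block);             /* self-inverse       */
-         return 0;
-     }
-
- Run per-block with a different seed each time, even this weak stage
- breaks up the plaintext statistics that anagramming feeds on, though
- a 32-bit seed is obviously no obstacle to a determined search.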
-
-
- >Every attempt made to crack mcrypt will result in a file that will
- >be produced as a result of the key.
-
- This is hard to believe.  Presumably, an attempt at cracking
- the cipher could produce any of the 2^8192 different possible
- plaintext blocks (a 1K block is 8192 bits).  But only those
- plaintext blocks which could actually be transformed into the
- known ciphertext need be considered.  It is true that there are
- far more than 2^8192 permutations, and so enough potential
- internal keys.  But the issue is how all of those internal keys
- could possibly be selected, and, if they cannot, the extent to
- which this affects both the size and the distribution of the
- permutation space.
-
- As I mentioned in my paper, a randomizer stage is an alternative
- to exact bit-balancing, and a bit-transposition cipher with
- balanced data and sizable blocks can be a strong cipher.  But
- we can expect that a bit-transposition stage which is designed to
- reach any possible overall permutation (using, say, the Shuffle
- algorithm of Durstenfeld), must be relatively slow, probably
- slower than DES. Each 1K block implies 8192 bit exchanges under
- bit Shuffle, and if we really want to reach all possible
- permutations, it is hard to see how any algorithm could be much
- faster.
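-
- The bit-level Shuffle under discussion is short to write down, which
- also makes its cost visible (my sketch; rand() stands in for the
- cryptographic RNG that would actually have to drive it):
-
-     /* Durstenfeld's Shuffle over the 8192 bit positions of a 1K
-        block: one swap per position, 8191 in all.  rand() is a
-        stand-in; a real cipher needs a keyed cryptographic RNG. */
-     #include <stdlib.h>
-
-     #define BITS 8192
-
-     static int getbit(const unsigned char *b, long i)
-     {
-         return (b[i >> 3] >> (i & 7)) & 1;
-     }
-
-     static void putbit(unsigned char *b, long i, int v)
-     {
-         if (v) b[i >> 3] |=  (unsigned char)(1 << (i & 7));
-         else   b[i >> 3] &= (unsigned char)~(1 << (i & 7));
-     }
-
-     static void shuffle_bits(unsigned char *block)
-     {
-         long i, j;
-         int ti, tj;
-
-         for (i = BITS - 1; i > 0; i--) {
-             j  = rand() % (i + 1);   /* 0 <= j <= i (and biased;  */
-             ti = getbit(block, i);   /* a real RNG must do this   */
-             tj = getbit(block, j);   /* uniformly)                */
-             putbit(block, i, tj);
-             putbit(block, j, ti);
-         }
-     }
-
-     int main(void)
-     {
-         static unsigned char block[BITS / 8];
-
-         block[0] = 0xFF;             /* eight set bits to scatter */
-         srand(1);
-         shuffle_bits(block);
-         return 0;
-     }
-
- That is one RNG call and four bit accesses for each of roughly 8K
- positions per 1K block, which is where the unfavorable speed
- comparison with DES comes from.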
-
- Stream ciphers can be faster, but strong stream ciphers will
- always need more processing than a little statistical RNG and XOR.
- Thus, any strong cipher is going to be slower than its weaker cousins
- until processing is no longer an issue. (For example, a multiprocessor
- machine based on streams could have a cipher processor, which
- might increase the delay in the pipe, yet not reduce the data
- rate.)
-
- ---
- Terry Ritter ritter@cactus.org
-