- Path: sparky!uunet!wupost!waikato.ac.nz!aukuni.ac.nz!cs18.cs.aukuni.ac.nz!pgut1
- Newsgroups: sci.crypt
- Subject: Re: SHS/SHA source code + random thoughts
- Message-ID: <1992Sep11.090308.26993@cs.aukuni.ac.nz>
- From: pgut1@cs.aukuni.ac.nz (Peter Gutmann)
- Date: Fri, 11 Sep 1992 09:03:08 GMT
- Sender: pgut1@cs.aukuni.ac.nz (Peter Claus Gutmann)
- Organization: Computer Science Dept. University of Auckland
- Lines: 62
-
- In article <6169@transfer.stratus.com> green alien space slime made
- Carl Ellison (cme@ellisun.sw.stratus.com) write:
-
- >How do we know A Good Thing when we see it?
- >
- >We know that hash functions are supposed to be 1-way.
- >
- >Does anyone have a function for evaluating how much more an algorithm
- >approaches 1-way if you add this additional round or some other feature?
- >[If not, we're in danger of throwing in complications because they look
- >complex to us -- not because we know they do any good.]
-
- I agree with this - it seems intuitive that using a different mix of input
- data in each round instead of reusing the same data would make the function
- stronger, but on the other hand it does smack of "add enough mess and it'll
- look random" design principles - I admit I can't really prove my claim that
- it's A Good Thing.
-
- Then in article <1992Sep4.060230.28313@cs.aukuni.ac.nz> tall men in dark
- coats forced Kevin S. McCurley <mccurley@cs.sandia.gov> to write:
-
- >The only direct comparison I see to Peter's figures is for the Decstation
- >5000, where my results are about a factor of 2.8 faster. This comparison
- >may not be fair, since I don't know how Peter could find a better compiler
- >for his, or maybe he is reading stuff from a file.
-
- I used gcc without any -O, as I was generally more interested in the speed
- ratio to MD5 than the absolute speed (I also wanted to distribute some SHA
- code to the masses rather than spend a lot of work on a highly-tuned super-fast
- implementation - so call me lazy :-). In addition, the machine I was using
- has (supposedly) recently been cannibalized/hacked for parts for other
- machines, so its performance may be unusually low.
-
- >The only comparison I made to MD5 is using the code distributed by
- >anonymous FTP from rsa.com, which ran about 15% faster than my SHA code on a
- >Sparcstation 2.
-
- That ratio probably isn't too far off my measured one of 1/3 faster (since my
- SHA code isn't optimised that much). For a real speed comparison it would be
- interesting to get something like Dhrystone ratings for the various machines
- and compilers, and use these as a baseline for comparisons, for example for
- the peecees there's a big difference between gcc (optimised 32-bit code), MSC
- V7 (optimised 16-bit? code), and Turbo C (pessimized 16-bit code). I'll
- leave this as an exercise for the reader - as I mentioned above, I was mainly
- interested in a speed comparison + getting the code to the public.
-
- Finally, Richard Schroeppel <rcs@cs.arizona.edu> has suggested an optimization
- to the f1 and f3 functions:
-
- #define f3(x,y,z) ( ( x & y ) | ( x & z ) | ( y & z ) ) /* Rounds 40-59 */
-
- This could be ( ( x & y ) | ( z & ( x | y ) ) ), with one fewer boolean op.
-
- #define f1(x,y,z) ( ( x & y ) | ( ~x & z ) ) /* Rounds 0-19 */
-
- This could be z ^ ( x & ( y ^ z ) ), losing the ~ from the original
- definition. (And assuming that the machine *has* a ^ instruction.)
-
- This results in a speed increase for SHA on all the platforms I tested it
- on. This optimization can also be applied to the G function in MD5.
-
- Peter.
-