Newsgroups: sci.crypt
Path: sparky!uunet!newsgate.watson.ibm.com!yktnews!admin!wo0z!lwloen
From: lwloen@rchland.vnet.ibm.com (Larry Loen)
Subject: Re: What's the overall ratio?
Sender: news@rchland.ibm.com
Message-ID: <1993Jan05.194654.10849@rchland.ibm.com>
Date: Tue, 05 Jan 1993 19:46:54 GMT
Reply-To: lwloen@rchland.vnet.ibm.com
Disclaimer: This posting represents the poster's views, not necessarily those of IBM
References: <cfG9faf0Bwwb4F5T9z@transarc.com> <1993Jan05.160811.29681@rchland.ibm.com> <1icec7INNo3h@uwm.edu> <9301051753.AA21758@TIS.COM>
Nntp-Posting-Host: wo0z.rchland.ibm.com
Organization: IBM Rochester
Lines: 54

In article <9301051753.AA21758@TIS.COM>, mjr@TIS.COM (Marcus J. Ranum) writes:
|>
|> This is probably a naive question, but..
|>
|> When designing cryptosystems I'd imagine it's desirable to use
|> algorithms that don't parallelize well. Is it feasible to somehow add
|> a step to a process like exponentiation that won't decompose well?
|> Lenstra's experiment with using loads of machines to factor large
|> numbers shows us that massively parallel attacks are just a matter of
|> networking. ;) Is there anything like factorization that also has the
|> property of not parallelizing well, or at all?
|>
|> mjr.

Interesting idea. But I wonder if it matters as much as it seems. All
that seems necessary is to have a certain amount of sequentiality built
into the attack process. Something as simple as a multiply followed by
a compare probably does not "parallelize" particularly well, especially
if the whole result is needed for some later step in the process.

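To make that data dependence concrete, here is a minimal C sketch (the
loop body and constants are invented for illustration, not taken from
any real cipher). Each multiply consumes the entire 64-bit result of
the one before it, so owning more multipliers does not let you start
step i+1 before step i has finished:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical sequential "mix": each iteration's multiply needs
     * the complete result of the previous iteration, so the chain is
     * inherently serial no matter how much hardware you own. */
    static uint64_t mix(uint64_t x, int rounds)
    {
        int i;
        for (i = 0; i < rounds; i++) {
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;
            if (x & 1)                  /* compare on the full product */
                x ^= 0x9E3779B97F4A7C15ULL;
        }
        return x;
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)mix(1, 16));
        return 0;
    }
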
One always has the choice of implementing the whole algorithm (whether
it is prime factoring or DES key exhaustion or whatever) on a single
chip and building as many of those chips as one wishes, accepting the
delay of some number of sequential steps, N. N will almost certainly
be proportional to the time of a single encryption, which must by
definition be affordable. One overcomes the fact that the N steps must
be performed (say) 2^54 times for the attack to succeed by building
lots of chips.

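The arithmetic is worth a couple of lines. A back-of-the-envelope C
sketch (the per-trial time and chip counts are assumptions, not
measurements): if each chip needs N sequential steps per trial key,
wall-clock time scales as (trials / chips) times the time per trial:

    #include <stdio.h>

    int main(void)
    {
        double trials = 18014398509481984.0;  /* 2^54 trial keys */
        double per_trial = 1e-6;     /* assumed: 1 us per N-step trial */
        double chips;

        for (chips = 1.0; chips <= 1e6; chips *= 1000.0)
            printf("%8.0f chips: %10.1f years\n", chips,
                   trials * per_trial / chips / (365.25 * 24 * 3600.0));
        return 0;
    }

At one microsecond per trial, one chip needs roughly 570 years, a
thousand chips about seven months, and a million chips about five
hours; the serial N inside each trial never changed, only the number
of copies.
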
At some point, it becomes cheaper to buy more of a particular chip with
N steps than to build an intrinsically faster chip with, say, N-5 steps
by combining various adjacent steps into a single joint step. Offhand,
I'd say that DES is hard to parallelize past the point of doing a
single "round" per step. Perhaps it can now be done, but I can easily
imagine the hardware to do, say, two DES rounds at once being nearly
the square of the amount of hardware to do one round. If so, it is
cheaper net (and just as fast in the long run) to simply buy more of
the chips that take time unit N.

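Taking the "nearly the square" guess at face value as a cost model
(every figure below is an assumption for illustration), the trade
falls out of a few lines of C: a fused two-round chip doubles the
per-chip trial rate, but if its gate count, and hence its price,
roughly squares, the plain chip wins on throughput per dollar:

    #include <stdio.h>

    int main(void)
    {
        double budget = 1e6;    /* total spend, arbitrary units */

        /* Plain chip: 50 gate-units of logic, cost 50, 1 trial/sec.
         * Fused chip: two rounds per step, so 2 trials/sec, but its
         * logic is assumed to square: 50^2 = 2500 gate-units. */
        double plain = (budget / 50.0) * 1.0;
        double fused = (budget / 2500.0) * 2.0;

        printf("plain chips: %.0f trials/sec for the budget\n", plain);
        printf("fused chips: %.0f trials/sec for the budget\n", fused);
        return 0;
    }

Under those made-up numbers the plain chips deliver twenty-five times
the aggregate search rate, which is the sense in which the slower chip
is the better buy.
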
Likewise, for factoring, one could imagine a similar combinatorial
explosion if one tried to do certain multiplication and division steps
in a single hardware "step". The per-step cost N may be a little
higher or lower than that of RSA encryption per se (I don't know,
offhand), but it must surely be the number of times the factoring
process is run, not the time unit N of a given attempt, that makes the
problem intractable with known methods.

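The same point as a two-line computation (figures invented for
illustration): shaving steps off N changes the total work by only a
small constant factor, while the run count sets the order of magnitude:

    #include <stdio.h>

    int main(void)
    {
        double runs = 18014398509481984.0;         /* 2^54 attempts */
        printf("N = 16: %.3e total steps\n", runs * 16.0);
        printf("N = 11: %.3e total steps\n", runs * 11.0);
        return 0;                        /* only ~1.45x apart */
    }
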
I have trouble imagining an encryption process that does not yield an
affordable unit "N" that can be attacked in this manner. That is, if
the base algorithm is affordable to run, its running time (or something
proportional to a usable equivalent) can serve as the minimum time
unit, and hence as a useful departure point for parallelism.

--
Larry W. Loen        |  My Opinions are decidedly my own, so please
                     |  do not attribute them to my employer
