- Path: sparky!uunet!spool.mu.edu!agate!ames!pacbell.com!charon.amdahl.com!amdahl!amdcad!angelo!roberts
- From: roberts@angelo.amd.com (Dave Roberts)
- Newsgroups: comp.arch
- Subject: Logic Minimization Algorithms
- Message-ID: <1992Dec15.024904.22554@amd.com>
- Date: 15 Dec 92 02:49:04 GMT
- Sender: usenet@amd.com (NetNews)
- Organization: Advanced Micro Devices, Santa Clara, CA
- Lines: 69
- Nntp-Posting-Host: angelo
-
-
- I couldn't think of a more appropriate newsgroup to post this in, so
- here goes:
-
- What sort of algorithms are typically used to minimize logic equations
- specified in high level hardware description languages (PAL software,
- Verilog, etc.)?
-
- I've read and understand Quine-McCluskey, and have doodled around
- with implementing my own versions. It seems to work well except for
- the final step: choosing a set of prime implicants from the set just
- produced (by reductions of the form x * y + x * ~y = x, for those of
- you who don't remember; prime implicants are the product terms that
- can't be reduced any further by applying that identity).
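- For concreteness, here's a rough Python sketch of the reduction loop
- I mean (the representation and names are my own, not from any
- particular tool; a term is a (value, dashmask) pair, where a set bit
- in dashmask marks a variable eliminated by the identity above):

```python
# Rough sketch of the Quine-McCluskey reduction loop. A term is a
# (value, dashmask) pair over the input variables: a set bit in
# dashmask marks a position eliminated by x*y + x*~y = x.

def combine(a, b):
    """Merge two terms if their dashes match and they differ in one bit."""
    av, am = a
    bv, bm = b
    if am != bm:
        return None
    diff = av ^ bv
    if diff and (diff & (diff - 1)) == 0:   # exactly one differing bit
        return (av & ~diff, am | diff)
    return None

def prime_implicants(minterms):
    """Repeatedly combine terms; anything that never combines is prime."""
    terms = {(m, 0) for m in minterms}
    primes = set()
    while terms:
        used, nxt = set(), set()
        for a in terms:
            for b in terms:
                merged = combine(a, b)
                if merged:
                    used.update((a, b))
                    nxt.add(merged)
        primes |= terms - used
        terms = nxt
    return primes
```

- E.g. the 2-variable function with minterms {0,1,2,3} collapses to the
- single term (0, 3), i.e. the constant 1.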
-
- The book I read handles this as follows: it enumerates all the
- minterms used by the original function and finds a set of prime
- implicants that covers all of them.
-
- A few problems that I ran into while fiddling around:
-
- (1) Often the input function isn't specified as a sum of minterms.
- It's specified as a sum of product terms which may or may not be
- minterms; some of the minterms have usually been combined or reduced
- already. Generating all the minterms could be infeasible for
- functions with a large number of variables. For instance, if you
- think about it, a function with 32 variables could easily generate
- over 1 million minterms, as 2^32 is 4G. Representing this with any
- sort of data structure could easily exhaust all your memory, physical
- or virtual.
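- To put a number on the blowup: a product term that pins only k of n
- variables expands into 2^(n-k) minterms, which a toy expander (my own
- throwaway code, not from any minimizer) makes obvious:

```python
# Toy expander showing the blowup: a product term that pins k of n
# variables expands into 2^(n-k) minterms. A term is a dict mapping
# variable index -> required bit value; unmentioned variables are free.

from itertools import product

def expand(term, nvars):
    """List the minterm numbers covered by one product term."""
    free = [i for i in range(nvars) if i not in term]
    base = sum(bit << i for i, bit in term.items())
    minterms = []
    for bits in product((0, 1), repeat=len(free)):
        m = base
        for i, bit in zip(free, bits):
            m |= bit << i
        minterms.append(m)
    return sorted(minterms)
```

- So a single term pinning one of 32 variables already stands for 2^31
- minterms; enumerating them all is hopeless.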
-
- (2) If the minterms can't be enumerated, it doesn't seem like the
- prime implicants that have been generated can be tested for coverage
- against the minterms, and the minterms are needed to give a shot at
- an optimal solution. Testing for coverage against the input product
- terms (which, remember, may be prereduced terms, not minterms) could
- lead to a very non-optimal selection of prime implicants. For
- instance, in the worst case, the routine might be handed a set of
- prime implicants to start with. The reduction process would then
- produce exactly the terms it was given (no reduction is possible,
- since they are already prime implicants), and testing them against
- the input set would conclude that every one of them is necessary to
- cover the input pterms. Clearly, this may not be the case, and a
- suboptimal solution would result.
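- The selection step I'm describing, as I understand it from the book
- (essential implicants first, then pick among the rest), might look
- roughly like this; the greedy fallback is just my guess at what a
- real tool might do, not something I've seen specified:

```python
# Possible shape of the covering step: take the essential prime
# implicants first (a minterm covered by exactly one implicant forces
# that implicant in), then fall back to a greedy choice. covers(p, m)
# says whether implicant p covers minterm m; with the (value, dashmask)
# representation that's just (m & ~dashmask) == value.

def select_cover(primes, minterms, covers):
    """Pick a (not necessarily optimal) subset of primes covering minterms.
    Assumes the primes together do cover every minterm."""
    remaining = set(minterms)
    chosen = []
    for m in sorted(minterms):
        who = [p for p in primes if covers(p, m)]
        if len(who) == 1 and who[0] not in chosen:
            chosen.append(who[0])           # essential implicant
    for p in chosen:
        remaining -= {m for m in remaining if covers(p, m)}
    while remaining:                        # greedy: most new minterms wins
        best = max(primes, key=lambda p: sum(covers(p, m) for m in remaining))
        chosen.append(best)
        remaining -= {m for m in remaining if covers(best, m)}
    return chosen
```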
-
- (3) Even if the minterms can be enumerated, it seems like finding
- the optimal set of prime implicants is on the order of n!, where n
- is the number of prime implicants left after the reduction.
- Depending on the number of variables and the function, this could be
- a lot. How is this done in a reasonable time, or is it? Are optimal
- solutions sought? Or are heuristics applied, with suboptimal
- solutions as the result?
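- Just to spell out the blowup in (3): the obvious exact approach
- tries subsets of the n prime implicants in increasing size, and
- there are 2^n subsets in the worst case (toy brute-force code, mine):

```python
# Why an exact minimum cover explodes: done naively, it tries subsets
# of the n prime implicants in increasing size, and there are 2^n
# subsets in the worst case. covers(p, m) tests whether implicant p
# covers minterm m.

from itertools import combinations

def min_cover(primes, minterms, covers):
    """Smallest subset of primes covering all minterms, by brute force."""
    for k in range(1, len(primes) + 1):
        for subset in combinations(primes, k):
            if all(any(covers(p, m) for p in subset) for m in minterms):
                return list(subset)
    return None
```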
-
- So I guess my question is, how are these problems overcome? Do
- people use algorithms other than Quine-McCluskey that don't suffer
- from these problems? If so, what are they?
-
- Does someone have references of articles that describe this stuff?
- Articles themselves?
-
- What about publicly available source code that implements the various
- algorithms?
-
- Thanks for any help people can give to satisfy my curiosity.
-
- Dave Roberts
- roberts@brahms.amd.com
-