- Newsgroups: news.answers,sci.answers,sci.math.num-analysis
- Path: senator-bedfellow.mit.edu!enterpoop.mit.edu!gatech!howland.reston.ans.net!math.ohio-state.edu!cs.utexas.edu!uunet!timbuk.cray.com!walter.cray.com!jwg
- From: jwg@cray.com (John W. Gregory)
- Subject: Linear Programming FAQ
- Message-ID: <linear-programming-faq-1-738951133@cray.com>
- Followup-To: sci.math.num-analysis
- Summary: A List of Frequently Asked Questions about Linear Programming
- Originator: jwg@ceres
- Keywords: FAQ, LP, Linear Programming
- Lines: 837
- Nntp-Posting-Host: ceres.cray.com
- Reply-To: jwg@cray.com (John W. Gregory)
- Organization: Cray Research, Inc., Eagan MN USA
- Date: 1 Jun 93 11:12:27 CDT
- Approved: news-answers-request@MIT.Edu
- Expires: 08/05/93
- Xref: senator-bedfellow.mit.edu news.answers:8965 sci.answers:230 sci.math.num-analysis:8427
-
- Posted-By: auto-faq 2.4
- Archive-name: linear-programming-faq
- Last-modified: June 1, 1993
-
-
- Linear Programming - Frequently Asked Questions List
- (linear-programming-faq)
- Most recent update: June 1, 1993
-
- --------------------------------------------------------------------------
-
- 0. "What's in this FAQ?"
-
- A: Table of Contents
- 1. "What is Linear Programming?"
- 2. "Where is there a good code, preferably public domain, to solve
- Linear Programming problems?"
- 3. "Oh, and we also want to solve it as an integer program. I think
- there will be only a few thousand variables or so."
- 4. "I've written my own optimization code. Where are some test models?"
- 5. "What is MPS format?"
- 6. "What software is there for nonlinear optimization?"
- 7. "Just a quick question..."
- 8. "What references are there in this field?"
- 9. "Who maintains this FAQ list?"
-
- --------------------------------------------------------------------------
-
- 1. "What is Linear Programming?"
-
- A: A Linear Program (LP) is a problem that can be put into the form
-
- minimize cx
- subject to Ax=b
- x>=0
-
- where x is a vector to be solved for, A is a matrix of known coefficients,
- and c and b are vectors of known coefficients. All these entities must
- have consistent dimensions, of course, and you can add "transpose" symbols
- to taste. The matrix A is generally not square, hence you don't solve an
- LP by just inverting A. Usually A has more columns than rows, so Ax=b
- is underdetermined, leaving great latitude in the choice of x with which
- to minimize cx.
-
- Other formulations can be used in this framework. For instance, if you
- want to maximize instead of minimize, multiply the c vector by -1. If
- you have constraints that are inequalities rather than equations, you
- can introduce one new variable (a "slack") for each inequality and treat
- the augmented row of the matrix as an equation. LP codes will usually
- take care of such "bookkeeping" for you.
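-
- To make that concrete, here is a minimal sketch of the bookkeeping, written
- in Python with NumPy (an assumption on my part; any matrix language would
- do), for a model stated as "maximize cx subject to Ax <= b, x >= 0":
-
-     import numpy as np
-
-     def to_standard_form(c, A, b):
-         # Returns (c', A', b') for:  minimize c'x'  subject to  A'x' = b',
-         # x' >= 0, where x' is the original x followed by one slack per row.
-         m, n = A.shape
-         A_std = np.hstack([A, np.eye(m)])                # append slack columns
-         c_std = np.concatenate([-np.asarray(c, float),   # maximize -> minimize
-                                 np.zeros(m)])            # slacks cost nothing
-         return c_std, A_std, np.asarray(b, float)
-
-     # Example:  maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0
-     c_std, A_std, b_std = to_standard_form([3, 2],
-                                            np.array([[1., 1.], [1., 3.]]),
-                                            [4, 6])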
-
- LP problems are usually solved by a technique known as the Simplex Method,
- developed in the 1940's and after. Briefly stated, this method works by
- taking a sequence of square submatrices of A and solving for x, in such a
- way that successive solutions always improve, until a point is reached
- where improvement is no longer possible. A family of LP algorithms known
- as Interior Point methods, developed starting in the 1980's, can be
- faster for many (but so far not all) problems. Such methods are
- characterized by constructing a sequence of trial solutions that go
- through the interior of the solution space, in contrast to the Simplex
- Method which stays on the boundary and examines only the corners (vertices).
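-
- To make the algebra behind that description a little more concrete, here is
- a small illustrative sketch (Python with NumPy assumed; production codes use
- sparse factorizations, not dense solves) of the step the Simplex Method
- repeats: pick m columns of A as a "basis", solve the resulting square
- system, and hold every other variable at zero.
-
-     import numpy as np
-
-     def basic_solution(A, b, basis):
-         # A is m-by-n from the standard form Ax = b, x >= 0; basis lists m
-         # column indices.  The result is a vertex candidate; it is feasible
-         # only if every entry is nonnegative.
-         m, n = A.shape
-         x = np.zeros(n)
-         x[basis] = np.linalg.solve(A[:, basis], b)
-         return x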
-
- LP has a variety of uses, in such areas as petroleum, finance, transportation,
- forestry, and military planning.
-
- The word "Programming" is used here in the sense of "planning"; any
- relationship to computer programming is incidental to the choice of
- name. Hence the phrase "LP program" to refer to a piece of
- software is not a redundancy, although I tend to use the term "code"
- instead of "program" to avoid the possible ambiguity.
-
- --------------------------------------------------------------------------
-
- 2. "Where is there a good code, preferably public domain, to solve
- Linear Programming problems?"
-
- A: It depends on the size and difficulty of your models. LP technology
- and computer technology have both made such great leaps that models that
- were previously considered "large" are now routinely solved. Nowadays,
- with good commercial software (i.e., code that you pay for), models with
- a few thousand constraints and several thousand variables can be tackled
- with a PC. Workstations can often handle models with variables in the
- tens of thousands, or even greater, and mainframes can go larger still.
- Public domain codes can be relied on to solve models of smaller dimension.
- The choice of the code can make more difference than the choice of computer
- hardware. It's hard to be specific about model sizes and speeds a priori,
- due to the wide variation in things like model structure and the effort
- of factorizing the basis matrices.
-
- There is a recently released public domain code, written in C, called
- "lp_solve" that is available on Usenet in the "comp.sources.reviewed"
- newsgroup. Its author (Michel Berkelaar, email at michel@es.ele.tue.nl )
- claims to have solved models with up to 30,000 variables and 50,000
- constraints. My own experience with this code is not quite so uniformly
- optimistic (new users of LP are sometimes shocked to learn that just
- because a given code has solved a model of a certain dimension, it may not
- be able to solve all models of the same size). Still, for someone who
- isn't sure just what kind of LP code is needed, it represents a very
- reasonable first try, and the price is certainly right. The code is
- archived at anonymous ftp site "ftp.uu.net", in directory
- "/usenet/comp.sources.reviewed/volume02/lp_solve".
- This directory consists of three files, part00.Z, part01.Z and part02.Z.
- You should download them in binary mode, and use the `uncompress` utility
- to expand them to normal ASCII format. The file called part00 contains
- reviewers' comments, and the other two files can be unpacked by removing
- the first 9 lines and executing the files as shell scripts (for example,
- `sh part01`). Then follow the instructions in the README and INSTALL
- files.
-
- For DOS/PC users, Prof. Timo Salmi at the University of Vaasa in Finland
- offers a Linear Programming and Linear Goal Programming binary called
- "tslin". You should be able to access it by ftp at garbo.uwasa.fi in
- directory /pc/ts (the current file name is tslin33b.zip, apparently
- using ZIP compression), or else I suggest contacting Prof. Salmi at
- ts@uwasa.fi .
-
- The consensus is that the LP code published in Numerical Recipes is not at
- all strong, and should be avoided for heavy-duty use. If your requirement
- is for a solver that can handle 100-variable models, it might be okay.
-
- There is an ACM TOMS routine for LP, #552, available from the netlib server,
- in directory /netlib/toms. See the section on test models for detail on
- how to use this server.
-
- If you have access to one of the commercial math libraries, such as IMSL or
- NAG, you may be able to use an LP routine from there.
-
- There are books that contain source code for the simplex method.
- John Chinneck has brought to my attention:
- - Best and Ritter, Linear Programming: active set analysis and computer
- programs, Prentice-Hall, 1985.
- - Bunday and Garside, Linear Programming in Pascal, Edward Arnold
- Publishers, 1987.
- - Bunday, Linear Programming in Basic (presumably the same publisher).
-
- If your models prove to be too difficult for free software to handle,
- then you can consider acquiring a commercial LP code. There are dozens
- of such codes on the market. I have my own opinions, but for reasons of
- space, generality and fairness, I will not attempt even to list the codes
- I know of here. Instead I refer you to the annual survey of LP software
- published in "OR/MS Today", a joint publication of ORSA (Operations
- Research Society of America) and TIMS (The Institute of Management
- Science). I think it's likely that you can find a copy of the June, 1992
- issue, either through a library, or by contacting a member of these two
- organizations (most universities probably have several members among the
- faculty and student body). The survey lists almost fifty actively marketed
- products. This publication also carries advertisements for many of these
- products, which may give you additional information to help make a decision.
-
- There are many considerations in selecting an LP code. Speed is important,
- but LP is complex enough that different codes go faster on different models;
- you won't find a "Consumer Reports" article 8v) to say with certainty which
- code is THE fastest. I usually suggest getting benchmark results for your
- particular type of model if speed is paramount to you. Benchmarking may
- also help determine whether a given code has sufficient numerical stability
- for your kind of models.
-
- Other questions you should answer: Can you use a stand-alone code, or do
- you need a code that can be used as a callable library, or do you require
- source code? Do you want the flexibility of a code that runs on many
- platforms and/or operating systems, or do you want code that's tuned to
- your particular hardware architecture (in which case your hardware vendor
- may have suggestions)? Is the choice of algorithm (Simplex, Interior
- Point) important to you? Do you need an interface to a spreadsheet
- code? Is the purchase price an overriding concern? Is the software
- offered at an academic discount (assuming you are at a university)? How
- much hotline support do you think you'll need?
-
- It may not always be true that "you get what you pay for," but it is rare
- that you get more than you pay for. 8v) There is usually a large
- difference in LP codes, in performance (speed, numerical stability,
- adaptability to computer architectures) and features, as you climb the
- price scale. If a code seems overpriced to you, you may not yet
- understand all of its features.
-
- --------------------------------------------------------------------------
-
- 3. "Oh, and we also want to solve it as an integer program. I think
- there will be only a few thousand variables or so."
-
- A: Hmmmm. You want
- - Nontrivial model size
- - Integer solutions
- - Public domain code
- Pick one or maybe two of the above. You can't have all three. 8v)
-
- Integer LP models are ones where the answers must not take fractional
- values. It may not be obvious that this is a VERY much harder problem
- than ordinary LP, but it is nonetheless true. The buzzword is "NP-
- Completeness", the definition of which is beyond the scope of this
- document but means in essence that, in the worst case, the amount of
- time to solve a family of related problems goes up exponentially as the
- size of the problem grows, for all known algorithms that solve such
- problems to a proven answer.
-
- Integer models may be ones where only some of the variables are to be
- integer and others may be real-valued (termed "Mixed Integer LP" or
- MILP, or "Mixed Integer Programming" or MIP); or they may be ones where
- all the variables must be integral (termed "Integer LP" or ILP). The
- class of ILP is often further subdivided into problems where the only
- legal values are {0,1} ("Binary" or "Zero-One" ILP), and general integer
- problems. For the sake of generality, the Integer LP problem will be
- referred to here as MIP, since the other classes can be viewed as special
- cases of MIP.
-
- For a given amount of computing time you are willing to allow, you should
- be prepared to solve far smaller MIP models than you could solve as plain
- LP's (unless you and your model happen to be very lucky). There exist
- models that are considered challenging, with mere hundreds of variables.
- Conversely, some models with tens of thousands of variables solve
- readily. It all depends, and the best explanations of "why" always
- seem to happen after the fact. 8v)
-
- A major exception to this gloomy outlook is that there are certain models
- whose LP solution always turns out to be integer. Best known of these
- are the so-called Transportation Problem, Assignment Problem, and
- Network-Flow Problem. The theory of unimodular matrices is fundamental
- here. It turns out that these particular problems are best solved by
- specialized routines that take major shortcuts in the Simplex Method,
- and as a result are relatively quick running even compared to ordinary
- LP. See the section on references for a book by Kennington and Helgason,
- which contains some source code for Netflo. Netflo is available by
- anonymous ftp at dimacs.rutgers.edu, in directory
- /pub/netflow/mincost/solver-1
- but I don't know the copyright situation (I thought you had to buy the
- book to get the code). Some commercial LP solvers also include a
- network solver.
-
- People are sometimes surprised to learn that MIP problems are solved
- using floating point arithmetic. Although various algorithms for MIP
- have been studied, most if not all available general purpose large-scale
- LP codes use a method called "Branch and Bound" to try to find an optimal
- solution. Briefly stated, B&B solves MIP by solving a sequence of related
- LP models. Good codes for MIP distinguish themselves more by solving
- shorter sequences of LP's, than by solving the individual LP's faster.
- Even more so than with regular LP, a costly commercial code may prove
- its value to you if your MIP model is difficult.
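-
- For the curious, here is a rough sketch of the Branch and Bound loop just
- described, for a pure 0-1 minimization problem. It is written in Python,
- and the routine solve_relaxation is hypothetical: it stands for whatever LP
- solver you have, called with a dictionary of variables fixed at 0 or 1, and
- returning (objective, x) for that LP relaxation, or None if the relaxation
- is infeasible.
-
-     import math
-
-     def branch_and_bound(n_vars, solve_relaxation):
-         best_obj, best_x = math.inf, None
-         nodes = [dict()]                     # each node fixes some vars to 0/1
-         while nodes:
-             fixed = nodes.pop()
-             result = solve_relaxation(fixed)
-             if result is None:
-                 continue                     # prune: this subproblem is infeasible
-             obj, x = result
-             if obj >= best_obj:
-                 continue                     # prune: bound no better than incumbent
-             frac = [j for j in range(n_vars)
-                     if abs(x[j] - round(x[j])) > 1e-6]
-             if not frac:
-                 best_obj, best_x = obj, x    # all-integer: new incumbent
-             else:
-                 j = frac[0]                  # branch on a fractional variable
-                 nodes.append({**fixed, j: 0})
-                 nodes.append({**fixed, j: 1})
-         return best_obj, best_x
-
- Good codes differ from this sketch mainly in how cleverly they choose the
- branching variable and the next node to examine, and in the extra work they
- do to tighten the bounds.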
-
- As a point of interest, the Simplex Method currently retains an advantage
- over the newer Interior Point methods for solving these sequences of LP's.
-
- The public domain code "lp_solve", mentioned earlier, accepts MIP models,
- as do a large proportion of the commercial LP codes in the OR/MS Today
- survey. I have seen mention made of algorithm 333 in the Collected
- Algorithms from CACM, though I'd be surprised if it was robust enough
- to solve large models.
-
- Mustafa Akgul (AKGUL@TRBILUN.BITNET) at Bilkent University maintains an
- archive via anonymous ftp (firat.bcc.bilkent.edu.tr or 139.179.10.13).
- In addition to a copy of lp_solve (though I would recommend using the
- official source listed in the previous section), there is mip386.zip,
- which is a zip-compressed code for PC's. He also has a couple of
- network codes and various other codes he has picked up. All this is
- in directory pub/IEOR/Opt and its further subdirectories LP, PC,
- and Network.
-
- The better commercial MIP codes have numerous parameters and options to
- give the user control over the solution strategy. Most have the capability
- of stopping before an optimum is proved, printing the best answer obtained
- so far. For many MIP models, stopping early is a practical necessity.
- Fortunately, a solution that has been proved by the algorithm to be within,
- say, 1% of optimality often turns out to be the true optimum, and the bulk
- of the computation time is spent proving the optimality. For many modeling
- situations, a near-optimal solution is acceptable anyway.
-
- Once one accepts that large MIP models are not typically solved to a
- proved optimal solution, that opens up a broad area of approximate
- methods, probabilistic methods and heuristics, as well as modifications
- to B&B. Claims have been made for Genetic Algorithms and Simulated
- Annealing, though (IMHO) these successes have been problem dependent
- and difficult to generalize. Two references for GA are a book by
- David Goldberg, "Genetic Algorithms in Search, Optimization, and Machine
- Learning" (Addison-Wesley, 1989), and an article by Michalewicz et al. in
- volume 3(4), 1991 of the ORSA Journal on Computing. An extensive survey
- of GA software is maintained by Nici Schraudolph (nici@cs.ucsd.edu).
-
- Whatever the solution method you choose, when trying to solve a difficult
- MIP model, it is usually crucial to understand the workings of the physical
- system (or whatever) you are modeling, and try to find some insight that
- will assist your chosen algorithm to work better. A related observation
- is that the way you formulate your model can be as important as the actual
- choice of solver. You should consider getting some assistance if this
- is your first time trying to solve a large (>100 variable) problem.
-
- --------------------------------------------------------------------------
-
- 4. "I've written my own optimization code. Where are some test models?"
-
- A: In light of the comments above, I hope your aims are fairly modest,
- for there are already a lot of good codes out there. I hope your LP
- code makes use of sparse matrix techniques, rather than using a tableau
- form of the Simplex method, because the latter usually ends up being
- numerically unstable and very slow.
-
- If you want to try out your code on some real-world LP models, there is
- a very nice collection of small-to-medium-size ones on netlib. If you
- have ftp access, you can try "ftp research.att.com", using "netlib"
- as the Name, and your email address as the Password. Do a "cd lp/data"
- and look around. There should be a "readme" file, which you would
- want to look at first. Alternatively, you can reach an e-mail
- server via "netlib@ornl.gov", to which you can send a message saying
- "send index from lp/data"; follow the instructions you receive.
-
- The Netlib LP files (after you uncompress them) are in a format called
- MPS, which is described in another section of this document.
-
- There is a collection of MIP models, housed at Rice University. Send
- an email message containing "send catalog" to softlib@rice.edu , to get
- started. Or you may be able to do anonymous ftp to softlib.cs.rice.edu,
- then "cd /pub/miplib".
-
- There is a collection of network-flow codes and models at DIMACS (Rutgers
- University). Use anonymous FTP at dimacs.rutgers.edu. Start looking in
- /pub/netflow. Another network generator is called NETGEN and is available
- on netlib (lp/generators).
-
- John Beasley has posted information on his OR-Lib, which contains various
- optimization test problems. Send e-mail to umtsk99@vaxa.cc.imperial.ac.uk
- to get started. Or have a look in the Journal of the Operational Research
- Society, Volume 41, Number 11, Pages 1069-72.
-
- There are various test sets for NLP (Non-Linear Programming). Among
- those I've seen mentioned are
- - ACM TOMS (Transactions on Mathematical Software), V13 No3 P272
- - publications (listed in another section of this list) by Schittkowski;
- Hock & Schittkowski; Floudas & Pardalos; Torn; Hughes & Grawiog.
- Many of the other references also contain various problems that you
- could use to test a code.
-
- The modeling language GAMS comes with about 100 test models, which
- you might be able to test your code with.
-
- --------------------------------------------------------------------------
-
- 5. "What is MPS format?"
-
- A: MPS format was named after an early IBM LP product and has emerged
- as a de facto standard ASCII medium among most of the various commercial
- LP codes. You will need to write your own reader routine for this, but
- it's not too hard. The main things to know about MPS format are that it
- is column oriented (as opposed to entering the model as equations), and
- everything (variables, rows, etc.) gets a name. MPS format is described
- in more detail in Murtagh's book, referenced in another section. Here
- is a little sample model, explained in more detail below:
-
- NAME METALS
- ROWS
- N VALUE
- E YIELD
- L FE
- L MN
- L CU
- L MG
- G AL
- L SI
- COLUMNS
- BIN1 VALUE 0.03 YIELD 1.
- BIN1 FE 0.15 CU .03
- BIN1 MN 0.02 MG .02
- BIN1 AL 0.7 SI .02
- BIN2 VALUE 0.08 YIELD 1.
- BIN2 FE .04 CU .05
- BIN2 MN .04 MG .03
- BIN2 AL .75 SI .06
- BIN3 VALUE 0.17 YIELD 1.
- BIN3 FE .02 CU .08
- BIN3 MN .01 AL .8
- BIN3 SI .08
- BIN4 VALUE 0.12 YIELD 1.
- BIN4 FE .04 CU .02
- BIN4 MN .02 AL .75
- BIN4 SI 0.12
- BIN5 VALUE 0.15 YIELD 1.
- BIN5 FE .02 CU .06
- BIN5 MN .02 MG .01
- BIN5 SI .02 AL .8
- ALUM VALUE 0.21 YIELD 1.
- ALUM FE .01 CU .01
- ALUM AL .97 SI .01
- SILCON VALUE 0.38 YIELD 1.
- SILCON FE .03 SI .97
- RHS
- ALOY1 YIELD 2000. FE 60.
- ALOY1 CU 100. MN 40.
- ALOY1 MG 30. AL 1500.
- ALOY1 SI 300.
- BOUNDS
- UP PROD1 BIN1 200.00
- UP PROD1 BIN2 750.00
- LO PROD1 BIN3 400.00
- UP PROD1 BIN3 800.00
- LO PROD1 BIN4 100.00
- UP PROD1 BIN4 700.00
- UP PROD1 BIN5 1500.00
- ENDATA
-
-
- MPS is an old format, so it is set up as though you were using
- punch cards, and is not free format. Fields start in columns 2,
- 5, 15, 25, 40 and 50. Sections of an MPS file are marked by
- so-called header cards, which are distinguished by their starting
- in column 1. Although it is typical to use upper-case throughout
- the file (like I said, MPS has long historical roots), many
- MPS-readers will accept mixed-case for anything except the
- header cards, and some allow mixed-case anywhere.
-
- The NAME card can be anything you want. The ROWS section defines
- the names of all the constraints; entries in column 2 or 3 are E
- for equality rows, L for less-than ( <= ) rows, G for greater-than
- ( >= ) rows, and N for non-constraining rows (the first of which
- would be interpreted as the objective function).
-
- The largest part of the file is in the COLUMNS section, which is
- the place where the entries of the A-matrix are put. All entries
- for a given column must be placed consecutively, although within
- a column the order of the entries (rows) is irrelevant. Rows not
- mentioned for a column are implied to have a coefficient of zero.
-
- The RHS section allows one or more right-hand-side vectors to be
- defined; most people don't bother having more than one. In the
- above example, the RHS vector is named ALOY1 and has non-zero
- values in all 7 of the constraint rows of the problem. Rows not
- mentioned in an RHS vector would be assumed to have a right-hand-side
- of zero.
-
- The optional BOUNDS section lets you put lower and upper bounds on
- individual variables (no * wild cards, unfortunately), instead of
- having to define extra rows in the matrix. All the bounds that have
- a given name in column 5 are taken together as a set. Variables
- not mentioned in a BOUNDS set are taken to be non-negative. There
- is another optional section called RANGES that I won't go into
- here. The final card must be the ENDATA, and yes, it is spelled
- funny.
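-
- As a starting point for writing your own reader, here is a bare-bones
- sketch in Python. It splits data cards on whitespace, which is enough for
- well-behaved files like the sample above, but it ignores the fixed card
- columns, the RANGES section, multiple RHS vectors, and bound types (such
- as FR or MI) that carry no value; treat it as a guide, not a finished
- parser.
-
-     def read_mps(filename):
-         rows, cols, rhs, bounds = {}, {}, {}, {}
-         section = None
-         with open(filename) as fh:
-             for line in fh:
-                 if not line.strip():
-                     continue
-                 if line[0] not in ' \t':      # header cards start in column 1
-                     section = line.split()[0]
-                     continue
-                 f = line.split()
-                 if section == 'ROWS':
-                     rows[f[1]] = f[0]         # row name -> type (N, E, L, G)
-                 elif section == 'COLUMNS':
-                     col = cols.setdefault(f[0], {})
-                     for row, val in zip(f[1::2], f[2::2]):
-                         col[row] = float(val) # row name -> coefficient
-                 elif section == 'RHS':
-                     for row, val in zip(f[1::2], f[2::2]):
-                         rhs[row] = float(val) # assumes a single RHS vector
-                 elif section == 'BOUNDS':
-                     # bound type (UP, LO, ...) -> value, keyed by variable name
-                     bounds.setdefault(f[2], {})[f[0]] = float(f[3])
-         return rows, cols, rhs, bounds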
-
- For comparison, here is the same model written out in an equation-
- oriented format.
-
- Minimize
- VALUE: 0.03 BIN1 + 0.08 BIN2 + 0.17 BIN3 + 0.12 BIN4 + 0.15 BIN5
- + 0.21 ALUM + 0.38 SILCON
- Subject To
- YIELD: BIN1 + BIN2 + BIN3 + BIN4 + BIN5 + ALUM + SILCON = 2000
- FE: 0.15 BIN1 + 0.04 BIN2 + 0.02 BIN3 + 0.04 BIN4 + 0.02 BIN5
- + 0.01 ALUM + 0.03 SILCON <= 60
- MN: 0.02 BIN1 + 0.04 BIN2 + 0.01 BIN3 + 0.02 BIN4 + 0.02 BIN5
- <= 40
- CU: 0.03 BIN1 + 0.05 BIN2 + 0.08 BIN3 + 0.02 BIN4 + 0.06 BIN5
- + 0.01 ALUM <= 100
- MG: 0.02 BIN1 + 0.03 BIN2 + 0.01 BIN5 <= 30
- AL: 0.7 BIN1 + 0.75 BIN2 + 0.8 BIN3 + 0.75 BIN4 + 0.8 BIN5
- + 0.97 ALUM >= 1500
- SI: 0.02 BIN1 + 0.06 BIN2 + 0.08 BIN3 + 0.12 BIN4 + 0.02 BIN5
- + 0.01 ALUM + 0.97 SILCON <= 300
- and
- 0 <= BIN1 <= 200
- 0 <= BIN2 <= 750
- 400 <= BIN3 <= 800
- 100 <= BIN4 <= 700
- 0 <= BIN5 <= 1500
-
- --------------------------------------------------------------------------
-
- 6. "What software is there for nonlinear optimization?"
-
- A: I don't claim very much expertise in this area, but the question is
- frequent enough to be worth addressing. If I get enough feedback, this
- section might grow large enough to need to be split off as a separate
- FAQ list.
-
- It's unrealistic to expect to find one general NLP code that's going to
- work for every kind of nonlinear model. Instead, you should try to find
- a code that fits the problem you are solving. Nonlinear solution techniques
- can be divided into various categories, such as unconstrained, linearly
- constrained, convexly constrained, or general. If your problem doesn't
- fit in any category except "general", or if you insist on a proven optimal
- solution (except when there is no chance of encountering multiple local minima),
- you should be prepared to have to use a method that boils down to exhaustive
- search, i.e., you have an intractable problem. See the comments in the MIP
- section on Simulated Annealing and Genetic Algorithms.
-
- Several of the commercial LP codes referenced above have specialized
- routines, particularly for Quadratic Programming (QP, linear constraints
- with a quadratic objective function). Many nonlinear problems can be
- solved (or at least confronted) by application of a sequence of LP or
- QP approximations.
-
- There is an ACM TOMS routine for QP, #587, available from the netlib server,
- in directory /netlib/toms. See the section on test models for detail on
- how to use this server.
-
- There is a directory, /netlib/opt, on the netlib server containing a
- collection of optimization routines. The last time I checked, I saw
- - "praxis" (unconstrained optimization, without requiring derivatives)
- - "tn" (Newton method for unconstrained or simple-bound optimization)
- - "ve08" (optimization of unconstrained separable function).
- - "simann" (unconstrained optimization using Simulated Annealing)
- - "vfsr" (constrained optimization using Simulated Annealing)
- - "subplex" (unconstrained optimization of general multivariate functions)
- Again, see the section on test models for detail on how to use this server.
-
- Here is a summary of codes mentioned in newsgroups in the past year, not
- sorted into categories.
-
- - MINOS - Stanford University, Office of Technology Licensing, 415-723-0651.
- Mailing address: 350 Cambridge Ave., Suite 250, Palo Alto, CA 94306 (USA).
- This code is often used by researchers as a "benchmark" for others to
- compare with.
- - NPSOL - Stanford University, Office of Technology Licensing.
- - LSSOL - Stanford University, Office of Technology Licensing.
- This code does convex (positive semi-definite) QP. It is the QP solver
- used in current versions of NPSOL.
- - NAG Library routine E04UCF (essentially the same as NPSOL).
- - IMSL routines UMINF and UMIDH.
- - Harwell Library routine VF04.
- - Hooke and Jeeves algorithm - reference?
- - MINPACK-1 and MINPACK-2 - Contact Steve Wright, wright@mcs.anl.gov
- - GENOCOP - Genetic algorithm, Zbigniew Michalewicz, zbyszek@mosaic.uncc.edu
- said to be available by ftp at unccsun.uncc.edu (152.15.10.88).
- - DFPMIN - Numerical Recipes (Davidon-Fletcher-Powell)
- - Amoeba - Numerical Recipes
- - Brent's Method - Numerical Recipes
- - FSQP - Contact Andre Tits, andre@src.umd.edu. Said to be free of charge
- to academic users. Solves general nonlinear constrained min-max problems.
- - QLD - Contact: Klaus.Schittkowski@uni-bayreuth.de. Solves Quadratic
- Programming problems.
- - CONMIN - Vanderplaats and Associates, Goleta CA
- - NOVA - DOT Products, Houston TX
- - GRG2 - Leon Lasdon, University of Texas, Austin TX
- - GINO - LINDO Systems, Chicago, IL
-
- --------------------------------------------------------------------------
-
- 7. "Just a quick question..."
-
- Q: What is a matrix generator?
- A: This is a code that creates input for an LP (or MIP, or NLP) solver,
- starting from a more natural form than MPS format. There are no free ones.
- Matrix generators can be roughly broken into two classes, column
- oriented ones, and equation oriented ones. The former class is
- older, and includes such commercial products as OMNI (Haverley Systems)
- and DATAFORM (Ketron). Big names in the latter class are GAMS
- (Scientific Press), LINGO (LINDO Systems), and AMPL (information
- is in netlib/opt on the Netlib server). These products have links
- to various solvers (commercial and otherwise).
-
- Q: How do I diagnose an infeasible LP model?
- A: A model is infeasible if the constraints are inconsistent, i.e., if
- no feasible solution can be constructed. It's often difficult to track
- down a cause. The cure may even be ambiguous: is it that some demand
- was set too high, or a supply set too low? A useful technique is Goal
- Programming, one variant of which is to include two explicit slack
- variables (positive and negative), with huge cost coefficients, in each
- constraint. The revised model is guaranteed to have a solution, and you
- can look at which rows have slacks that are included in the "optimal"
- solution. By the way, I recommend a Goal Programming philosophy even if
- you aren't having trouble with feasibility; "come on, you could probably
- violate this constraint for a price." Another approach is Fourier-
- Motzkin Elimination (article by Dantzig and Eaves in the Journal of
- Combinatorial Theory (A) 14, 288-297, 1973). Lastly, a software system
- called ANALYZE was developed by Harvey Greenberg to provide computer-
- assisted analysis, including rule-based intelligence. For further
- information about this code, and a bibliography of more than 400
- references on the subject of model analysis, contact Harvey at
- HGREENBERG@cudnvr.denver.colorado.edu.
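-
- Here is a small sketch of the explicit-slack device for a model already in
- the equality form "minimize cx subject to Ax = b, x >= 0" (Python with
- NumPy assumed; the same idea carries over to inequality rows):
-
-     import numpy as np
-
-     def elasticize(c, A, b, BIG=1.0e6):
-         # Each row  a_i x = b_i  becomes  a_i x + p_i - q_i = b_i  with
-         # p_i, q_i >= 0, and the new slacks get the huge cost BIG.  The
-         # revised model always has a feasible solution; rows whose p_i or
-         # q_i is nonzero at the "optimum" are the ones that could not be
-         # satisfied as stated.
-         m, n = A.shape
-         A_el = np.hstack([A, np.eye(m), -np.eye(m)])
-         c_el = np.concatenate([np.asarray(c, float), BIG * np.ones(2 * m)])
-         return c_el, A_el, np.asarray(b, float)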
-
- Q: I want to know specifically which constraints contradict each other.
- A: This may not be a well-posed problem. If by this you mean you want to
- find the minimal set of constraints that should be removed to restore
- feasibility, this can be modeled as an Integer LP (which means, it's
- potentially a harder problem than the underlying LP itself). Start with
- a Goal Programming approach as outlined above, and introduce some 0-1
- variables to turn the slacks off or on. Then minimize on the sum of
- these 0-1 variables. An article covering aspects of this question is
- by Chinneck and Dranieks in the Spring 1991 ORSA Journal on Computing.
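-
- Written out, one such formulation looks like the following, where p and q
- are the positive and negative slacks from the previous answer, M is a
- suitably large constant (a modeling choice), and the y's are the new 0-1
- variables; a row whose y is forced to 1 is one that must be relaxed to
- restore feasibility.
-
-     minimize    y1 + y2 + ... + ym
-     subject to  Ax + p - q = b
-                 p_i <= M*y_i          (i = 1,...,m)
-                 q_i <= M*y_i          (i = 1,...,m)
-                 x, p, q >= 0,  y_i in {0,1}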
-
- Q: I just want to know whether or not there *exists* a feasible solution.
- A: From the standpoint of computational complexity, finding out if a
- model has a feasible solution is essentially as hard as finding the
- optimal LP solution, within a factor of 2 on average, in terms of
- effort in the Simplex Method. There are no shortcuts in general,
- unless you know something useful about your model's structure (e.g.,
- if you are solving some form of a transportation problem, you may
- be able to assure feasibility by checking that the total supply at
- the sources is at least as large as the total demand at the destinations).
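-
- For that last special case, the check is just arithmetic; in Python terms
- (with supplies and demands given as lists of amounts, and the usual
- convention that supply need not be used up):
-
-     def transportation_feasible(supplies, demands):
-         return sum(supplies) >= sum(demands)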
-
- Q: I have an LP, except it's got several objective functions.
- A: This is indeed a difficult class of model. Fundamental to it is that
- there may no longer be one unique solution. Approaches that have worked
- are Goal Programming (treat the objectives as constraints with costed
- slacks), Pareto preference analysis, and forming a composite objective
- from the real ones. There is a section on this whole topic in Reference
- [1]. My general advice is to attempt to cast your model in terms of
- physical realities, or dollars and cents, wherever possible; sometimes
- the multiple objectives disappear! 8v)
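-
- If you do go the composite-objective route, the mechanics are trivial; a
- sketch in plain Python, with each objective given as a list of coefficients
- and the weights being a modeling decision of yours:
-
-     def composite_objective(objectives, weights):
-         # Weighted sum of several objective vectors, to be handed to an
-         # ordinary single-objective LP solver.
-         n = len(objectives[0])
-         return [sum(w * obj[j] for w, obj in zip(weights, objectives))
-                 for j in range(n)]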
-
- Q: I have an LP that has large almost-independent matrix blocks that are
- linked by a few constraints. Can I take advantage of this?
- A: In theory, yes. See section 6.2 in Reference [1] for a discussion of
- Dantzig-Wolfe decomposition. However, I am unaware of any commercial
- codes that will help you do this, so you'll have to create your own
- framework and then call your chosen LP solver to solve the subproblems.
- The folklore is that generally such schemes take a long time to converge
- so that they're slower than just solving the model as a whole. My
- advice, unless your model is so huge that a good solver can't fit it
- in memory, is to not bother decomposing it. It's probably more cost
- effective to upgrade your solver, if the algorithm is limiting you,
- than to invest your time; but I suppose that's an underlying theme in
- a lot of my advice. 8v)
-
- Q: I need to find all integer points in a polytope.
- A: There is no known way of doing this efficiently (i.e., with an algorithm
- that grows only polynomially with the problem size). A related question
- is how to find all the vertices of an LP, with the same pessimistic
- answer. The book by Schrijver is said to discuss this. For small
- models, it may be practical to find your answer by complete enumeration.
-
- Q: Are there any parallel LP codes?
- A: IBM has announced a parallel Branch and Bound capability in their
- package OSL, for use on clusters of workstations. "This is real, live
- commercial software, not a freebie. Contact forrest@watson.ibm.com".
- Jeffrey Horn (horn@cs.wisc.edu) recently compiled a bibliography of
- papers relating to research on parallel B&B. There is an annotated
- bibliography of parallel methods in Operations Research in general,
- in Vol 1 (1), 1989 of the ORSA Journal on Computing, although by now
- it might be a little out of date.
- I'm not aware of any implementations (beyond the "toy" level) of sparse
- simplex or interior-point solvers on parallel machines.
-
- Q: I am trying to solve a Traveling Salesman Problem ...
- A: TSP is a famously hard problem that has attracted many of the best minds
- in the field. Look at the bibliography in the Integer Programming section
- of Reference [1], particularly the ones with the names Groetschel and/or
- Padberg in them. I don't believe there are any commercial products to
- solve this problem. For that matter, I'm not aware of any public domain
- codes either.
-
- Q: I heard about a Russian algorithm for Traveling Salesman Problems.
- A: You are speaking of Khachian's method for LP, published in 1979. The
- connection to TSP is false, brought about by an erroneous New York Times
- article back then. It was the first LP algorithm to have a polynomial
- bound on the amount of work. Though important theoretically, it has had
- no impact in practice, because the polynomial bound is huge. (It was
- Karmarkar's method [1984] that was the first *practical* polynomial
- algorithm for LP.) Khachian's method works by surrounding the solution
- space with a sequence of shrinking ellipsoids. There continues to be
- some research done on related methods, for NLP.
-
- Q: I need to do post-optimal analysis.
- A: Many commercial LP codes have features to do this. Also called Ranging
- or Sensitivity Analysis, it gives you information about how much the
- coefficients in the problem could change without affecting the nature of
- the solution. Most LP textbooks, such as Reference [1], describe this.
- Unfortunately, all this theory applies only to LP.
-
- For a MIP model with both integer and continuous variables, you could
- get a limited amount of information by fixing the integer variables at
- their optimal values, resolving the model as an LP, and doing standard
- post-optimal analyses on the remaining continuous variables. Another
- MIP approach would be to choose the coefficients of your model that are
- of the most interest, and generate "scenarios" using values within a
- stated range created by a random number generator. Perhaps five or ten
- scenarios would be sufficient; you would solve each of them, and by some
- means compare, contrast, or average the answers that are obtained.
- Noting patterns in the solutions, for instance, may give you an idea of
- what solutions might be most stable. A third approach would be to
- consider a goal-programming formulation; perhaps your desire to see
- post-optimal analysis is an indication that some important aspect is
- missing from your model.
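-
- A sketch of the scenario suggestion, in Python; here "ranges" maps each
- coefficient of interest to a (low, high) pair, and solve_model is a
- hypothetical routine of yours that rebuilds and solves the model for a
- given set of coefficient values:
-
-     import random
-
-     def run_scenarios(ranges, solve_model, n_scenarios=10, seed=0):
-         rng = random.Random(seed)
-         results = []
-         for _ in range(n_scenarios):
-             scenario = {name: rng.uniform(lo, hi)
-                         for name, (lo, hi) in ranges.items()}
-             results.append((scenario, solve_model(scenario)))
-         return results   # compare, contrast, or average these afterwards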
-
- Q: Some versions of the simplex algorithm require as input a vertex. Do
- all systems require a vertex?
- A: No. You just have to give an LP code the constraints and the objective
- function, and it will construct the vertices for you. Most codes go
- through a so-called two phase method, wherein the code first looks for
- a feasible solution, and then works on getting an optimal solution. The
- first phase can begin anywhere, such as with all the variables at zero.
- So, no, you don't have to give a code a starting point. On the other
- hand, it is not uncommon to do so, because it can speed up the solution
- time tremendously. Commercial codes usually allow you to do this (they
- call it a "basis", though that's a loose usage of a specific linear
- algebra concept); free codes generally don't. You'd normally want to
- bother with a starting basis only when solving families of related and
- difficult LP's (i.e., in some sort of production mode).
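-
- For the record, here is a sketch (Python with NumPy assumed) of how the
- first phase can be set up for data already in the form "minimize cx subject
- to Ax = b, x >= 0" with b >= 0 (multiply any offending rows by -1 first):
- add one artificial variable per row, minimize the sum of the artificials,
- and start from the obvious vertex where the artificials carry all of b. If
- the phase-one optimum is zero, the artificials can be dropped and phase two
- starts from the feasible point just found.
-
-     import numpy as np
-
-     def phase_one(A, b):
-         m, n = A.shape
-         A1 = np.hstack([A, np.eye(m)])                  # artificial columns
-         c1 = np.concatenate([np.zeros(n), np.ones(m)])  # sum of artificials
-         x0 = np.concatenate([np.zeros(n), np.asarray(b, float)])
-         return c1, A1, np.asarray(b, float), x0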
-
- --------------------------------------------------------------------------
-
- 8. "What references are there in this field?"
-
- A: Too many to count. Here are a few that I like or have been
- recommended on the net. I have *not* reviewed them all.
-
- General reference [1]
- - Nemhauser, Rinnooy Kan, & Todd, eds, Optimization, North-Holland, 1989.
- (Very broad-reaching, with large bibliography. Good reference; it's the
- place I tend to look first. Expensive, and tough sledding for beginners.)
-
- LP
- - Chvatal, Linear Programming, Freeman, 1983. (I find it hard to whole-
- heartedly recommend any one LP textbook, but this is one I'd probably use
- in teaching an undergraduate course.)
- - Hughes & Grawiog, Linear Programming: An Emphasis on Decision Making,
- Addison-Wesley, 1973.
- - Luenberger, Introduction to Linear and Nonlinear Programming, Addison
- Wesley, 1984. (Updated version of an old standby.)
- - Murtagh, B., Advanced Linear Programming, McGraw-Hill, 1981. (Good one
- after you've read an introductory text.)
- - Murty, K., Linear and Combinatorial Programming.
- - Schrijver, Theory of Linear and Integer Programming, Wiley.
- - Williams, H.P., Model Building in Mathematical Programming, Wiley 1985.
- (Little on algorithms, but excellent for learning what makes a good model.)
-
- Interior Point LP
- - Marsten, et al., "Interior point methods for linear programming",
- Interfaces, pp 105-116, July-August 1990. (Introductory article, written
- by authors of a good commercial code.)
- - Marsten, et al., article to appear in ORSA Journal on Computing, 1993.
- (The latest results; a tech report version may be available sooner.)
- - Wright, M., "Interior methods for constrained optimization", Acta
- Numerica, Cambridge University Press, 1992. (Survey article.)
- There is also a bibliography (with over 1000 entries!?!) obtainable by
- mailing to "netlib@ornl.gov" and saying "send intbib.bib from bib".
-
- Nonlinear Programming (can someone help classify these more usefully?)
- - Bazaraa & Shetty, Nonlinear Programming, Theory & Applications.
- - Coleman & Li, Large Scale Numerical Optimization, SIAM Books.
- - Dennis & Schnabel, Numerical Methods for Unconstrained Optimization
- and Nonlinear Equations, Prentice Hall, 1983.
- - Fiacco & McCormick, Sequential Unconstrained Minimization Techniques,
- SIAM Books. (An old standby, given new life by the interior point
- LP methods.)
- - Fletcher, R., Practical Methods of Optimization, Wiley 1987. (Good
- reference for Quadratic Programming, among other things.)
- - Floudas & Pardalos, A collection of test problems for constrained
- optimization algorithms, Springer-Verlag, 1990.
- - Gill, Murray & Wright, Practical Optimization, Academic Press, 1981.
- (An instant NLP classic when it was published.)
- - Hock & Schittkowski, Test Examples for Nonlinear Programming Codes,
- Springer-Verlag, 1981.
- - Kahaner, Moler & Nash, Numerical Methods and Software, Prentice-Hall.
- - More, "Numerical Solution of Bound Constrained Problems", in Computational
- Techniques & Applications, CTAC-87, Noye & Fletcher, eds, North-Holland,
- 29-37, 1988.
- - More & Toraldo, Algorithms for Bound Constrained Quadratic Programming
- Problems, Numerische Mathematik 55, 377-400, 1989.
- - Nocedal, J., summary of algorithms for unconstrained optimization
- in "Acta Numerica 1992".
- - Schittkowski, Nonlinear Programming Codes, Springer-Verlag, 1980.
- - Torn & Zilinskas, Global Optimization, Springer-Verlag, 1989.
-
- Simulated Annealing
- - Harland & Salamon, "Simulated annealing: A review of the thermodynamic
- approach" Nuclear Physics B (Proc. Suppl.) 5A (1988) pp. 109-115.
- - Ruppeiner et al, "Ensemble approach to simulated annealing"
- J. Phys. I vol. 1 (1991) 455-470.
- - Hoffman, et al, "Optimal ensemble size for parallel implementations
- of simulated annealing" Appl. Math. Lett. vol. 3, no. 3 (1990) 53-56.
- - Kirkpatrick, Gelatt & Vecchi, Optimization by Simulated Annealing,
- Science, 220 (4598) 671-680, 1983.
-
- Simulated Re-Annealing
- - Ingber & Rosen, "Genetic algorithms and very fast simulated annealing:
- a comparison" Mathematical and Computer Modelling, 16(11) 1992, 87-100.
- - Ingber "Very fast simulated re-annealing" Mathematical and Computer
- Modelling, 12(8) 1989, 967-973
-
- Genetic Algorithms
- - De Jong, "Genetic algorithms are NOT function optimizers" in Foundations
- of Genetic Algorithms: Proceedings 24-29 July 1992, D. Whitley (ed.)
- Morgan Kaufmann
- - Ackley, "An empirical study of bit vector function optimization" in Genetic
- Algorithms and Simulated Annealing, L. Davis (ed.) Morgan Kaufmann (1987)
-
- Other publications
- - Davis, L. (ed.), Genetic Algorithms and Simulated Annealing, Morgan Kaufmann,
- 1989.
- - Forsythe, Malcolm & Moler, Computer Methods for Mathematical Computations,
- Prentice-Hall.
- - Greenberg, H.J., A Computer-Assisted Analysis System for Mathematical
- Programming Models and Solutions: A User's Guide for ANALYZE, Kluwer
- Academic Publishers, 1993.
- - Hansen, Global Optimization Using Interval Analysis, Marcel Dekker, 1991(?).
- (I'd be interested if anyone has any opinions on this one.)
- - Kennington & Helgason, Algorithms for Network Programming, Wiley, 1980.
- (A special case of LP.)
- - Press, Flannery, Teukolsky & Vetterling , Numerical Recipes, Cambridge,
- 1986. (Comment: use with care.)
-
-
- --------------------------------------------------------------------------
-
- 9. "Who maintains this FAQ list?"
-
- A: John W. Gregory
- Applications Department
- Cray Research, Inc.
- Eagan, MN 55121 USA
- jwg@cray.com
- 612-683-3673
-
- I suppose I should say something here to the effect that "the material
- in this document does not reflect any official position taken by Cray
- Research, Inc." Also, "use at your own risk", "no endorsement of products
- mentioned", etc., etc. I probably should have scattered more "IMHO"s
- around in the text, but that acronym seems weaselly and once you start it's
- hard to know where to stop. I should have put in a few more smilies here
- and there too, to assist the humor impaired - be on your toes. 8v)
-
- I've tried to keep my own biases (primarily, toward the high end of
- computing) from dominating what I write here, and other viewpoints that
- I've missed are welcome. Suggestions, corrections, topics you'd like to
- see covered, and additional material (particularly on NLP) are solicited.
- Comments to the effect "who died and made *you* optimal?" will likely
- be ignored. 8v)
-
- I regret that when I started this list I didn't keep careful track of all
- the contributors whose suggestions I incorporated into the FAQ. In several
- instances, the information herein comes from postings on the net, which I
- assumed to be fair use and in the public domain. It being too late now to
- make individual acknowledgements, I offer a blanket thanks to all who have
- posted on these topics to the net.
-
- This FAQ list is also being posted to news.answers and sci.answers, and is
- archived in the periodic posting archive on rtfm.mit.edu [18.172.1.27],
- in the anonymous ftp directory /pub/usenet/sci.answers. The name under
- which FAQs are archived appears in the Archive-name line at the top of the
- article. This FAQ is archived as linear-programming-faq.Z (compressed).
- You will find many other FAQs, covering a staggering variety of topics, in
- this hierarchy. There's a mail server on that machine. You can send an
- e-mail message to mail-server@pit-manager.mit.edu containing:
- send usenet/sci.answers/linear-programming-faq
- as the body of the message.
-
- Copies of this FAQ list may be made freely, as long as it is distributed
- at no charge and with this disclaimer included.
-
- --------------------------------------------------------------------------
- END lp_faq
-