Having used ILPs in research for around 12 months, I get the sense that they can be extremely fickle: their performance can vary wildly and unexpectedly depending on the input data.

Some models I have worked with solved one test case in milliseconds, while trivially different test cases took more than 24 hours. The causes tend to be high levels of symmetry, the tightness of constraints, or other factors that are not obvious to me from the outset.

Hence it seems to me that deploying big ILPs (e.g. 100k+ variables/constraints) to production may be extremely risky: a module which works fine one day may, the next day, take unreasonably long for no apparent reason.

Is this assessment reasonable, or is the performance of ILPs relatively stable when modeled correctly?

asked 05 Apr '18, 00:04

brendanhill

I agree about the high degree of variability in solution times in general, though some problem types may exhibit fairly consistently short (or long) solution times. I think that a key question, though, is what the user wants/needs from the deployed model. In my experience, if the model is reasonable (not necessarily the tightest formulation possible, but not sloppy), a good quality solver is being used (with maybe a bit of tuning), and adequate hardware is provided, the following usually holds: a "good quality" solution is found "reasonably quickly"; what turns out to be an optimal solution is found eventually; and proof of optimality may or may not come within my lifetime.

Proof of optimality is the holy grail of academics, but I don't think industrial users are that hung up on it. So a couple of key questions are "how good is good enough?" and "is the possibly/probably suboptimal solution we get in [insert user patience value] time better than what the user would have without this model?"

I used to cast optimality in the following terms for my students: if you're willing to pay the dollar value of the optimality gap out of your pocket, feel free to terminate the solution and call it close enough to optimal. From a company's perspective, a somewhat larger optimality gap might be perfectly acceptable.
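The "pay the gap out of your own pocket" rule above is easy to mechanize. A minimal sketch in Python (the function names are my own, and I'm assuming a minimization problem where the solver reports an incumbent objective value and a best lower bound):

```python
def optimality_gap(incumbent, best_bound):
    # Absolute and relative gap for a minimization problem,
    # where best_bound <= incumbent.
    abs_gap = incumbent - best_bound
    rel_gap = abs_gap / abs(incumbent) if incumbent != 0 else float("inf")
    return abs_gap, rel_gap

def close_enough(incumbent, best_bound, max_dollars):
    # Stop once the money potentially left on the table is
    # within what you'd be willing to pay out of pocket.
    abs_gap, _ = optimality_gap(incumbent, best_bound)
    return abs_gap <= max_dollars

# Example: incumbent costs $100,500, best bound is $100,000,
# so at most $500 of improvement remains (a ~0.5% relative gap).
abs_gap, rel_gap = optimality_gap(100_500.0, 100_000.0)
```

In practice you rarely code this yourself: most good solvers accept a relative and/or absolute gap tolerance as a parameter and terminate automatically once the gap falls below it, which is exactly this test applied inside the branch-and-bound loop.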

Also, keep in mind that you're trying to find the "optimal" solution to a model that approximates the actual business problem, which frequently includes turning a blind eye to inherent randomness/uncertainty in the business problem. So trying to close that last 0.1% gap is basically (to borrow a phrase from my grad student days) milking a duck. It's not a useful exercise, and neither you nor the duck will enjoy it.


answered 05 Apr '18, 15:32

Paul Rubin ♦♦



OR-Exchange! Your site for questions, answers, and announcements about operations research.