
The following problem makes lp_solve run into memory problems on my laptop with 2 GB RAM:

  • 8000 real variables: \(x_1, \dots, x_{8000}\)

  • upper and lower bounds: \(-4000 \le x_i \le 6500\); these bounds are independent of \(i\)

  • sum restrictions: \(\sum_{i=1}^{j} x_i \le 20000\) for \(j = 1, \dots, 8000\); the right-hand side is independent of \(j\)

Is this problem really so hard to solve? Are there any transformations of the constraints that would make it feasible for lp_solve?

asked 03 May '11, 11:08 by karstenw (accept rate: 0%)

edited 03 May '11, 11:12

Makes me wonder. What if we're trying to optimize a problem involving all Facebook members? Can we continue to presume that even a single solution fits into RAM? Ironically, as RAM sizes increase (to GBs), the databases seem to increase even more (to TBs).

(03 May '11, 11:50) Geoffrey De ... ♦

Indeed, it has been ever thus.

RAM sizes now are TBs. We have a machine here with two of 'em (wasn't cheap, though...). Surely databases are well on their way to PBs.

(03 May '11, 13:45) Matthew Salt... ♦

Might be a little counterintuitive, but try adding a variable \(s_j\) for each partial sum, define the partial sums as \(s_j = s_{j-1} + x_j\), and give all the \(s_j\) an upper bound of 20,000 (and lower bound of \(-\infty\)). That will make the matrix considerably more sparse. I tried this with CPLEX 12.2 (Java API) and a variety of objective functions, and in all cases got an optimal solution in under 3 seconds on my dual core laptop. I didn't monitor RAM use, but the laptop only has about 2 GB free after the OS is loaded, so I couldn't have used very much.
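For concreteness, here is a minimal sketch of that reformulation in Python with PuLP and its default CBC solver (the original experiment used the CPLEX 12.2 Java API; PuLP, the model name, and the variable names below are illustrative choices, not something used in the thread):

import pulp

N = 8000
prob = pulp.LpProblem("partial_sums", pulp.LpMaximize)

# Original variables with their bounds.
x = [pulp.LpVariable(f"x_{i}", lowBound=-4000, upBound=6500) for i in range(1, N + 1)]

# Auxiliary partial-sum variables: upper bound 20000, no lower bound.
s = [pulp.LpVariable(f"s_{j}", upBound=20000) for j in range(1, N + 1)]

# Recursive definition: each row now has at most 3 nonzeros instead of up to 8000.
prob += s[0] == x[0]
for j in range(1, N):
    prob += s[j] == s[j - 1] + x[j]

# Any objective will do; the original test tried several.
prob += pulp.lpSum(x)

prob.solve()  # defaults to CBC; any LP solver PuLP supports can be swapped in
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))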


answered 03 May '11, 17:52 by Paul Rubin ♦♦ (accept rate: 19%)

edited 01 Mar '14, 11:47 by fbahr ♦


This is great! After the transformation, lp_solve solves the problem in 20 seconds without any trouble.

(04 May '11, 03:49) karstenw

Credit where credit is due: my answer was "inspired" by this blog post by Erwin Kalvelagen.

(04 May '11, 10:55) Paul Rubin ♦♦

This is a standard modeling technique, where you trade an increased number of rows and columns for sparsity.

(07 May '11, 17:03) Greg Glockner

@Greg, agreed, and the performance gain/loss might depend on which algorithm is used.

(07 May '11, 17:10) Bo Jensen ♦

@Greg: It's "standard", but I'm not sure how well known it is. Has this made it into LP/NLP textbooks? AFAIK none of them mentioned it back when I was a student, but then again CPUs were so slow back then (and basis factorization algorithms so much less advanced) that I think most users would balk at any extra constraints.

(07 May '11, 17:22) Paul Rubin ♦♦

@Paul, Sure, everything about exploiting sparsity is "non-standard" for the average user. Similar ideas can be used in presolve and are described in some papers and books. Your answer was exactly what the OP needed.

(07 May '11, 17:32) Bo Jensen ♦

@Bo: Slightly off topic, I suppose, but I have a conjecture that blogs and forums will become unofficial (and not always accurate) "supplements" to textbooks. As in, learn the theory from the textbooks; learn tactics and "tricks of the trade" from blogs/forums (with apologies to H. P. Williams and his wonderful book).

(07 May '11, 17:53) Paul Rubin ♦♦

@Paul: Sounds like an opportunity for a better textbook. Care to volunteer to write it? ;-)

(07 May '11, 20:16) Greg Glockner

@Greg: I'm both too busy and (much) too lazy to write a book (an unfortunate combination, if you think about it). I also don't know enough. Maybe it should be a WikiBook, so that anyone with a clue could collaborate (plus, if it were free, someone might actually read it).

(08 May '11, 09:40) Paul Rubin ♦♦

I entered the following equivalent model into MPL:

INDEX 
   i := 1..8000;
   j := i;
VARIABLES
   x[i];
MODEL
   MAX z = SUM(i: x);
SUBJECT TO
   con[j]: SUM(i<=j: x) <= 20000;
BOUNDS
   -4000 <= x <= 6500;
END

I just added the objective function z = SUM(i: x), so each solver would try to get the same result.
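For readers without MPL (a commenter below asks for an .lp file for exactly this reason), here is a rough Python/PuLP rendering of the same dense model; it is only an illustration, not part of the original answer, and building all 32 million terms this way is slow but faithful to the MPL formulation:

import pulp

N = 8000
prob = pulp.LpProblem("dense_model", pulp.LpMaximize)
x = [pulp.LpVariable(f"x_{i}", lowBound=-4000, upBound=6500) for i in range(1, N + 1)]

prob += pulp.lpSum(x)  # MAX z = SUM(i: x)

for j in range(1, N + 1):
    prob += pulp.lpSum(x[:j]) <= 20000  # con[j]: SUM(i<=j: x) <= 20000

prob.writeLP("dense_model.lp")  # export for any solver that reads LP files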

Here are the results (solution time and approximate memory use) from different solvers, running on an i7 at 3.4 GHz under 64-bit Windows 7 with 16 GB of memory:

*With Presolve automatically on*
LPSolve: 142 seconds (2 GB)
CoinMP: 3 seconds (3 GB)
CPLEX: >30 minutes (2 GB)
Gurobi: 6 seconds (5 GB)

*With Presolve turned off*
CPLEX: 1 second (1 GB)
Gurobi: 9 seconds (5 GB)

As you can see, LPSolve takes a lot more time, but it does solve the problem in the end, using a similar amount of memory (2 GB) to some of the other solvers. CPLEX got into big trouble with its Presolve (I gave up after 30 minutes), but it was the fastest (1 second) when Presolve was turned off. Gurobi did relatively well whether Presolve was on or off, but it used the most memory (5 GB).

The problem you are having with memory is clearly related to the density of the constraint matrix, which has a total of 32 million nonzeros and 50% density. To solve this problem, you will either need to switch to a solver that uses less memory (like CPLEX without Presolve) or get a larger machine with a 64-bit OS.
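For reference, the nonzero count follows directly from the triangular structure of the constraint matrix: row \(j\) contains \(j\) nonzeros, so

\[ \sum_{j=1}^{8000} j = \frac{8000 \cdot 8001}{2} \approx 3.2 \times 10^{7} \text{ nonzeros}, \qquad \text{density} \approx \frac{3.2 \times 10^{7}}{8000 \cdot 8000} \approx 50\%. \]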


answered 03 May '11, 12:09 by BjarniMax (accept rate: 13%)

Or at least more RAM on the laptop. It looks like only Gurobi can't fit in the 4 GB available to most OSs in 32-bit mode.

(03 May '11, 13:43) Matthew Salt... ♦

Thank you for the comparison of different solvers; it makes me think...

(04 May '11, 03:51) karstenw

Hi BjarniMax, which CPLEX version are you using? And could you generate the .lp file of this model? I don't have MPL installed.

(05 May '11, 15:23) John Cui

Hi John Cui,

I was using CPLEX 12.2.0.0. The same problem happens both when running within MPL and in CPLEX standalone mode.

As I do not think it's possible to upload files directly here on OR-Exchange, I have uploaded the LP file generated from MPL/CPLEX to my FilesAnywhere cloud file-sharing account:

https://www.filesanywhere.com/fs/v.aspx?v=8a6b6b8f5e656fb4b36b

Password: OR-Exchange

By the way, if you do not have MPL, you can get a free full-size development copy by filling out the application form linked at the bottom of this page:

http://www.maximalsoftware.com/freedev

(08 May '11, 10:51) BjarniMax

good, thanks!

(10 May '11, 11:35) John Cui

If you just want to solve the problem and use the solution, then NEOS might do the trick for you. At the very least, it's excellent for trying out solvers without any installation, etc.


answered 03 May '11, 11:41 by Bo Jensen ♦ (accept rate: 14%)

Thank you! I will use this to check my results.

(04 May '11, 03:52) karstenw

At double precision, that's 256 MB of coefficients (plus rim vectors and variables, probably around another 0.5 MB) and more if your rows and columns have names. I can believe that that's a tight fit on your laptop, with the additional storage for basis factors, updates, and other auxiliary data, plus code, plus resident memory for the OS.
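Spelling out the arithmetic behind that estimate (the roughly 32 million nonzeros come from the triangular constraint matrix):

\[ 3.2 \times 10^{7} \text{ nonzeros} \times 8 \text{ bytes} \approx 2.56 \times 10^{8} \text{ bytes} \approx 256\ \text{MB}, \]

before counting index arrays, basis factors, and the solver's other working storage.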


answered 03 May '11, 11:19 by Matthew Saltzman ♦ (accept rate: 17%)

Using a 64-bit OS and solver solved the memory issues I had (with large scheduling problems). I have also seen that turning off the presolve often fixes out-of-memory issues. Most of the commercial modeling languages have tools to reduce memory usage as well; they are worth trying.

-Tauhid
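As a concrete illustration of the presolve switch (not from the original answer; the file name below is hypothetical), here is how it looks with Gurobi's Python API:

import gurobipy as gp

# Read a previously exported model and disable presolve before optimizing.
m = gp.read("large_model.lp")  # hypothetical LP file
m.setParam("Presolve", 0)      # 0 = off, -1 = automatic (default)
m.optimize()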


answered 04 May '11, 11:58 by Tauhid Rahman (accept rate: 0%)

edited 04 May '11, 12:01
