The following problem makes lp_solve run into memory problems on my laptop with 2 GB of RAM:
Is this problem really so hard to solve? Are there any transformations of the constraints that would make it feasible for lp_solve? asked 03 May '11, 11:08 karstenw 
Might be a little counterintuitive, but try adding a variable \(s_j\) for each partial sum, define the partial sums as \(s_j = s_{j-1} + x_j\), and give all the \(s_j\) an upper bound of 20,000 (and a lower bound of \(-\infty\)). That will make the matrix considerably more sparse. I tried this with CPLEX 12.2 (Java API) and a variety of objective functions, and in all cases got an optimal solution in under 3 seconds on my dual-core laptop. I didn't monitor RAM use, but the laptop only has about 2 GB free after the OS is loaded, so I couldn't have used very much. answered 03 May '11, 17:52 Paul Rubin ♦♦
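For concreteness, here is a rough sketch (in Python; it assumes the model consists of n nested cumulative-sum constraints \(x_1 + \dots + x_j \le 20{,}000\), which may differ from the poster's actual model) of why the reformulation sparsifies the matrix:

```python
# Illustrative only: count constraint-matrix nonzeros for n nested
# cumulative-sum constraints, before and after introducing partial-sum
# variables s_j = s_{j-1} + x_j.

def dense_nonzeros(n):
    # Row j of the original formulation (x_1 + ... + x_j <= 20000)
    # has j nonzero coefficients, so the total is triangular.
    return n * (n + 1) // 2

def sparse_nonzeros(n):
    # Reformulated: s_1 - x_1 = 0 has 2 nonzeros; each subsequent row
    # s_j - s_{j-1} - x_j = 0 has 3. The 20,000 cap becomes a simple
    # variable bound on s_j, which adds no matrix entries at all.
    return 2 + 3 * (n - 1)

for n in (100, 1000, 8000):
    print(n, dense_nonzeros(n), sparse_nonzeros(n))
```

So the nonzero count drops from quadratic to linear in n, which is what makes the reformulated problem easy for the solver despite the extra rows and columns.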
This is great! After the transformation, lp_solve solves the problem in 20 seconds without any trouble.
(04 May '11, 03:49)
karstenw
Credit where credit is due: my answer was "inspired" by this blog post by Erwin Kalvelagen.
(04 May '11, 10:55)
Paul Rubin ♦♦
This is a standard modeling technique, where you're trading an increased number of rows and columns for sparsity.
(07 May '11, 17:03)
Greg Glockner
@Greg, agree and the performance gain/loss might depend on which algorithm is used.
(07 May '11, 17:10)
Bo Jensen ♦
@Greg: It's "standard", but I'm not sure how well known it is. Has this made it into LP/NLP textbooks? AFAIK none of them mentioned it back when I was a student, but then again CPUs were so slow back then (and basis factorization algorithms so much less advanced) that I think most users would balk at any extra constraints.
(07 May '11, 17:22)
Paul Rubin ♦♦
@Paul, sure, everything about exploiting sparsity is "nonstandard" for the average user. Similar ideas are used in presolve and are described in some papers and books. Your answer was exactly what the OP needed.
(07 May '11, 17:32)
Bo Jensen ♦
@Bo: Slightly off topic, I suppose, but I have a conjecture that blogs and forums will become unofficial (and not always accurate) "supplements" to text books. As in, learn the theory from the text books; learn tactics and "tricks of the trade" from blogs/forums (with apologies to H. P. Williams and his wonderful book).
(07 May '11, 17:53)
Paul Rubin ♦♦
@paul: Sounds like an opportunity for a better textbook. Care to volunteer to write it? ;)
(07 May '11, 20:16)
Greg Glockner
@Greg: I'm both too busy and (much) too lazy to write a book (an unfortunate combination, if you think about it). I also don't know enough. Maybe it should be a WikiBook, so that anyone with a clue could collaborate (plus, if it were free, someone might actually read it).
(08 May '11, 09:40)
Paul Rubin ♦♦

I entered the following equivalent model into MPL:
I just added the objective function z = SUM(i: x), so each solver would try to reach the same result. Here are the results from different solvers, running on an i7, 3.4 GHz, 64-bit Windows 7 with 16 GB of memory:
As you can see, LPSolve spends a lot more time, but it solves the problem in the end, using a similar amount of memory (2 GB) to some of the other solvers. CPLEX got into big trouble with its Presolve (I gave up after 30 minutes), but it was the fastest (1 second) when Presolve was turned off. Gurobi did relatively well whether Presolve was turned on or off, but it used the most memory (5 GB). The memory problem you are having is clearly related to the density of the constraint matrix, which totals 32 million nonzeros at 50% density. To solve this problem, you will either need to switch to a solver that uses less memory (like CPLEX without Presolve) or get a larger machine with a 64-bit OS. answered 03 May '11, 12:09 BjarniMax Or at least more RAM on the laptop. Looks like only Gurobi can't fit in the 4 GB available to most OS's in 32-bit mode.
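The quoted figures (32 million nonzeros, 50% density) are consistent with roughly 8,000 nested cumulative-sum rows over 8,000 variables. A quick back-of-envelope check (the value of n here is an inference, not taken from the post):

```python
# Back-of-envelope check of the quoted matrix statistics, assuming about
# n = 8000 nested cumulative-sum rows over n variables (n is a guess).
n = 8000
nonzeros = n * (n + 1) // 2          # lower-triangular nonzero pattern
density = nonzeros / (n * n)         # just over half of all entries
print(f"{nonzeros:,} nonzeros, {density:.1%} density")
```

This prints about 32 million nonzeros at 50% density, matching the reported numbers.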
(03 May '11, 13:43)
Matthew Salt... ♦
Thank you for the comparison of different solvers, it makes me think.
(04 May '11, 03:51)
karstenw
Hi BjarniMax, which CPLEX version are you using? And could you generate the .lp file of this model? I don't have MPL installed.
(05 May '11, 15:23)
John Cui
Hi John Cui, I was using CPLEX 12.2.0.0. The same problem happens both when running within MPL and in CPLEX standalone mode. As I do not think it's possible to upload any files directly here on OR-Exchange, I have uploaded the LP file generated from MPL/CPLEX to my FilesAnywhere cloud file-sharing account: https://www.filesanywhere.com/fs/v.aspx?v=8a6b6b8f5e656fb4b36b Password: ORExchange By the way, if you do not have MPL, you can get a free full-size development copy by filling out the application form linked at the bottom of this page:
(08 May '11, 10:51)
BjarniMax
good, thanks!
(10 May '11, 11:35)
John Cui

If you just want to solve the problem and use the solution, then NEOS might do the trick for you. At the very least, it's excellent for trying out solvers without any installation. answered 03 May '11, 11:41 Bo Jensen ♦ Thank you! I will use this to check my results.
(04 May '11, 03:52)
karstenw

At double precision, that's 256 MB of coefficients (plus rim vectors and variables, probably around another 0.5 MB), and more if your rows and columns have names. I can believe that that's a tight fit on your laptop, given the additional storage for basis factors, updates, and other auxiliary data, plus code, plus resident memory for the OS. answered 03 May '11, 11:19 Matthew Salt... ♦
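The 256 MB figure follows directly from the nonzero count. A minimal check, assuming the 32 million nonzeros mentioned above are stored as 8-byte doubles:

```python
# Memory needed for the raw coefficient values alone (decimal MB),
# assuming ~32 million nonzeros stored as 8-byte doubles. This ignores
# row/column index arrays, which a sparse format would also need.
nonzeros = 32_000_000
bytes_per_double = 8
total_mb = nonzeros * bytes_per_double / 1e6
print(f"{total_mb:.0f} MB")  # 256 MB
```

Index arrays and factorization workspace come on top of this, which is why the practical footprint on a 2 GB machine is much tighter than 256 MB suggests.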
Using a 64-bit OS and solver solved the memory issues I had (with large scheduling problems). I have also seen that turning off the presolve often fixes out-of-memory issues. Most of the commercial modeling languages have tools to reduce memory usage as well; they are worth trying. Tauhid answered 04 May '11, 11:58 Tauhid Rahman
Makes me wonder: what if we're trying to optimize a problem involving all Facebook members? Can we continue to presume that even a single solution fits into RAM? Ironically, as RAM sizes increase (to GBs), databases seem to increase even more (to TBs).
Indeed, it has been ever thus.
RAM sizes now are in TBs. We have a machine here with two of them (it wasn't cheap, though...). Surely databases are well on their way to PBs.