Can someone help me make heads or tails of this baropt run log? When I solve the same problem using dualopt, there is no issue and the problem solves within a minute. However, the baropt run goes out of memory after the nested dissection ordering. I want to try the barrier algorithm because a similar problem is taking far too long with simplex, but it gives the same out-of-memory error with baropt. Am I missing something here? Any settings that could help?
asked 21 Jun '11, 15:46 Anurag
CPLEX log says:
which is quite a lot, and the reason your machine ran out of memory. Looking at the running time, I am sure the machine has been swapping for a long time. The best way to check the structure of A and gain a better understanding is to load a smaller example into MATLAB (or Octave) and use "spy" to play around with it. You could experiment with different ordering methods, but I am not familiar with doing that in CPLEX. Alternatively, you could try switching off the dual formulation, since I can see CPLEX presolve sets up the dual problem. You could also contact CPLEX on their support forum and ask for help; they can probably come up with suggestions to reduce memory.
answered 21 Jun '11, 15:59 Bo Jensen ♦
I tried the other two ordering methods provided. They were much faster than nested dissection. However, once the ordering was done, the same error appeared. It seems the trouble is elsewhere. Will try out some more stuff and post the results here. Thanks for your suggestions!
(22 Jun '11, 11:56)
Anurag
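Bo's "spy" suggestion can be approximated without MATLAB. Here is a minimal sketch in Python with SciPy; the matrix below is a made-up stand-in for a smaller instance of the model, with one artificially dense row planted in it (in MATLAB/Octave the visual equivalent is simply `spy(A)`):

```python
import numpy as np
from scipy import sparse

# Build a small stand-in constraint matrix: mostly sparse,
# plus one deliberately dense row (row 0).
A = sparse.random(200, 300, density=0.01, random_state=0, format="csr")
A = A.tolil()
A[0, :] = 1.0          # a fully dense row: the typical barrier troublemaker
A = A.tocsr()

# Nonzero counts per row and per column -- the same information
# CPLEX's histogram display gives you.
row_nnz = np.diff(A.indptr)            # CSR stores per-row offsets
col_nnz = np.diff(A.tocsc().indptr)    # CSC stores per-column offsets

# Flag rows that are far denser than the typical row.
dense_rows = np.where(row_nnz > 10 * max(1, int(np.median(row_nnz))))[0]
print("densest row has", row_nnz.max(), "nonzeros")
print("suspiciously dense rows:", dense_rows)
```

A handful of such rows (dense columns after CPLEX forms the dual) is exactly the pattern that blows up the Cholesky factor.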

I don't use barrier (or solve large problems), so the following are just guesses. You might try playing with CPLEX's BarColNz parameter, to see if special treatment of dense columns in the presolved model helps. The BarOrder parameter does what Bo suggested (alters the ordering). You can also turn MemoryEmphasis on; it might slow the solution down, but it might avoid the out-of-memory error.
answered 21 Jun '11, 17:56 Paul Rubin ♦♦
Agree, dense columns (or rows) are probably the root cause. Initially the A matrix has 9874640 nonzeros, which is a lot compared to its dimensions. Such a dense problem may not be ideal for simplex either, since the search direction also becomes dense and degenerate pivots are more likely.
(21 Jun '11, 18:05)
Bo Jensen ♦
Bo, your assessment here might be right. Beyond a certain problem size, even simplex does not reach the optimum after 10 hours. Paul, I'll try playing with BarColNz and MemoryEmphasis and let you know. Thanks!
(22 Jun '11, 11:59)
Anurag
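For reference, the parameters Paul mentions can be set from the interactive optimizer roughly as follows. This is a sketch: exact parameter paths and the best values depend on the CPLEX version, and the column-nonzero threshold of 100 is only an illustrative guess, not a recommendation:

```
CPLEX> set barrier colnonzeros 100
CPLEX> set barrier ordering 1
CPLEX> set emphasis memory y
CPLEX> baropt
```

Ordering value 1 selects an alternative to nested dissection; check `help set barrier` in your installation for the meaning of each value.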

These are all good suggestions. Otherwise, send us your model and we'd be happy to take a look at it. Christian. FYI, we do routinely solve models with more factor nonzeros and FP ops than this one, but for those we use multithreaded machines with a decent amount of memory.
answered 22 Jun '11, 10:55 Christian Bliek
Christian, let me try out these suggestions. I will post the results on this thread once I'm done. Unfortunately I don't think I can send the problem over; I'll let you know if I can. Thanks!
(22 Jun '11, 12:06)
Anurag

Since CPLEX formed the dual, most likely dense rows are the problem; hence CPLEX may overlook dense columns in the dual problem. Detecting the dense columns is not so easy, unfortunately. If you are willing to make the problem publicly available, I can take a look at it with MOSEK and see what comes up. Erling (working for MOSEK).
answered 22 Jun '11, 01:57 Erling
Erling, thanks for your offer. I cannot make the problem public though (I'll check with my boss). Let me try out some of the suggestions here first.
(22 Jun '11, 12:01)
Anurag

Interactive CPLEX offers some functionality to do this, both on the original and presolved models (both the primal and the dual). Use the commands 'display histogram c' and 'display histogram r' to obtain nonzero counts for columns and rows. As stated above, dense rows in the primal translate to dense columns in the dual. To see these histograms for the (unpresolved) dual model, read in your model and write out a .dua file; CPLEX will generate the dual model for you. Read it back in and display the histograms. Similarly, to see them for the presolved model, write out a .pre file. With the presolve dual indicator turned on, the .pre file will contain the dual of the presolved model. Hopefully this information will help you determine whether the relatively dense Cholesky factor involves a few dense rows in the primal (and hence dense columns in the dual), in which case changing the barcolnz parameter will improve performance, or whether, despite its large ratio of rows to columns, solving the primal rather than the dual is more promising because the primal model has few, if any, dense columns. If so, turn off CPLEX's presolve dual parameter.
answered 28 Jun '11, 13:45 Ed Klotz 
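Ed's workflow can be summarized as one interactive session. This is a sketch assuming a model file named `model.lp`; command spellings follow the answer above and may vary slightly between CPLEX versions (`help display` lists the exact syntax for yours):

```
CPLEX> read model.lp
CPLEX> display histogram c
CPLEX> display histogram r
CPLEX> write model.dua
CPLEX> read model.dua
CPLEX> display histogram c
CPLEX> set preprocessing dual 1
CPLEX> write model.pre
```

The first two displays show the primal nonzero counts; after reading the .dua file back in, dense primal rows show up as dense columns. With the preprocessing dual indicator on, the .pre file holds the dual of the presolved model.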
Sorry for posting this so late. I tried some (roughly half) of the suggestions mentioned above, but couldn't get the optimizer to get rid of the error. However, while trying to solve the instances I did notice that both primal and dual simplex were having severe numerical stability issues. For example, variables bounded in the range [0,1] end up taking extremely large values, which keep increasing if I let the solver run longer. I did not try Bo's suggestion of running a degeneracy test, though. Towards the end of my project I ended up modeling my problem as a convex quadratic program, for which I faced similar problems. However, I was able to code up a gradient descent algorithm that worked pretty well, so I never got around to trying the other suggestions. answered 06 Dec '11, 23:59 Anurag
Thanks for the status report. For the numerical stability issue, you could make CPLEX print the condition number; if it is large, numerical trouble is likely. But I agree: if basic variables increasingly take on huge values, that in itself is a sign of numerical instability. Note that you can have a basis matrix with "nice" numbers that nonetheless has a large condition number.
(07 Dec '11, 00:09)
Bo Jensen ♦
Thanks, appreciate your comments.
(07 Dec '11, 00:27)
Samik R.
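The gradient descent approach Anurag describes can be sketched for a box-constrained convex QP, minimize ½xᵀQx + cᵀx subject to 0 ≤ x ≤ 1. This is a toy illustration with made-up Q and c, not Anurag's actual model; the projection step (clipping) is what handles the bounds:

```python
import numpy as np

def projected_gradient(Q, c, lo=0.0, hi=1.0, step=None, iters=2000):
    """Minimize 0.5*x'Qx + c'x over the box [lo, hi]^n by
    projected gradient descent. Q must be symmetric PSD."""
    n = len(c)
    if step is None:
        # step 1/L with L = largest eigenvalue of Q guarantees descent
        step = 1.0 / np.linalg.eigvalsh(Q)[-1]
    x = np.zeros(n)
    for _ in range(iters):
        grad = Q @ x + c
        x = np.clip(x - step * grad, lo, hi)   # gradient step, then project
    return x

# Toy instance: Q symmetric positive definite, c chosen so the
# unconstrained optimum lies outside the box and bounds go active.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
Q = M @ M.T + np.eye(5)
c = -10.0 * np.ones(5)
x = projected_gradient(Q, c)
print(np.round(x, 3))
```

For a strongly convex Q this converges linearly, which matches the "worked pretty well" experience; for badly conditioned Q a barrier or active-set QP solver would still be preferable.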

I will throw in a new reply here, aimed at playing with some odd possibilities. First of all, it would be nice to know whether the problem has a handful of dense rows or the density is widely spread across A. In the first case you might actually be lucky and get around it; in the second case I think an interior point method that stores the factorization on disk (out of core) might be the best option. Regarding degeneracy in the simplex method:

1) I assume you have tried both primal and dual simplex on both the primal and dual problem formulations, right? If not, try it. If you have dense rows, they become dense columns in the dual. In simplex, if such a column enters the basis, the algorithm slows down and often makes little progress.

2) Run a degeneracy test on all the options above: perturb all bound and cost data with an extremely large random factor (say 20% of the largest value), i.e., generate a new random number for each element. If the perturbed problem still takes a very long time to solve, then using simplex is going to be hard.

3) If you have dense rows, you could use a heuristic to get a good starting point for the simplex algorithm, but it may end up resembling the sifting approach. So you should also try forcing the primal formulation and solving it with the sifting algorithm in CPLEX.

I have a few more suggestions, but it would be nice to know more about the nonzero pattern of the A matrix. answered 22 Jun '11, 12:49 Bo Jensen ♦
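Bo's perturbation test can be sketched in a few lines of Python. This is illustrative only: `costs` stands in for the model's actual cost (or bound) data, and the 20% factor follows the suggestion above:

```python
import numpy as np

def perturb(values, factor=0.20, rng=None):
    """Add an independent random perturbation to each element,
    sized relative to the largest absolute value in the data."""
    rng = rng or np.random.default_rng()
    scale = factor * np.max(np.abs(values))
    return values + rng.uniform(-scale, scale, size=len(values))

rng = np.random.default_rng(42)
costs = np.array([1.0, 5.0, -3.0, 0.0, 2.5])
perturbed = perturb(costs, rng=rng)
# Re-solve the model with the perturbed data; if it is still slow,
# degeneracy is probably not what is hurting the simplex method.
```

The point of the large factor is to break ties aggressively: a perturbed model that remains slow points away from degeneracy and toward structural causes such as dense columns.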
@Anurag: Do you have any results from the suggestions you received in this thread?
Samik, I have posted some of the things I ended up doing. I tried some of the suggestions, which didn't work. I ended up modeling my problem differently and solving it with gradient descent, but due to lack of time I couldn't try out all the ideas posted here.