I am working with a large-scale MIP which CPLEX can solve to optimality, but it takes a lot of computation time (~8 hours for 41 trucks starting from 346 depots delivering to 197 customers), and I would like to discuss how to improve efficiency here. I want to know:

  1. Would reformulating the big-M constraints help speed up the solution? (I have a couple of big-M constraints.)
  2. Before the solve command (in GAMS), I wrote code instructing CPLEX to use parallel processing (threads = -1) and selecting a MIP strategy, node selection, etc. But when examining the log file, I see that CPLEX hasn't followed any of these instructions -- it is not running in parallel mode and uses the same dynamic MIP branching strategy (there were no GAMS syntax errors, by the way). What could be wrong here?
  3. Will instructing CPLEX to efficiently swap the memory associated with the branch-and-bound tree to disk help speed up the solution process? (The system has 200 GB of physical memory.)

Please share your experiences on what kind of parameter tuning helped accelerate the solution process. Any paper references on reformulating constraints to obtain quicker solutions would also be helpful.
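For reference, this is the usual GAMS pattern for passing such options to CPLEX. My understanding is that the option file only takes effect if the model's `optfile` attribute is set before the solve; the model and variable names below are illustrative, not my actual ones:

```gams
* Write a CPLEX option file from inside the GAMS model file.
$onecho > cplex.opt
threads -1
mipemphasis 1
nodesel 1
$offecho

* Without this assignment, CPLEX silently ignores cplex.opt.
mymodel.optfile = 1;

solve mymodel using mip minimizing cost;
```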

asked 12 Aug '14, 16:22



edited 13 Aug '14, 11:30

Big-M formulations quite often lead to very weak linear relaxations, especially if you use one single M, like M = 1,000,000, without considering better (smaller) values of M. If you have more than one of these big-M constraints, you might want to index the big-Ms so that they are tailored to the specific constraint in question. I came across this paper the other day: http://www.optimization-online.org/DB_HTML/2014/08/4483.html. I haven't read it, but it might be useful.
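As a sketch of what indexed big-Ms can look like in GAMS -- the `cap` and `dem` parameters here are hypothetical stand-ins for whatever bounds your data actually provides:

```gams
* One global M makes the LP relaxation weak:
* linkM(d,c).. flow(d,c) =l= 1e6 * assign(d,c);

* Indexed big-M, tightened per constraint from problem data,
* e.g. depot capacity and customer demand:
parameter bigM(d,c);
bigM(d,c) = min(cap(d), dem(c));

linkM(d,c).. flow(d,c) =l= bigM(d,c) * assign(d,c);
```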

(12 Aug '14, 16:50) Sune

The link accidentally included the period ending the sentence. Trim that off and the link works.

(12 Aug '14, 17:02) Paul Rubin ♦♦

Yeah, got it now.

(12 Aug '14, 23:26) Pavan

Do you mean "8 hours for 197" NODES, or does your problem really only have 197 variables? If not, how many nodes does CPLEX process, and how many constraints does your problem have?

(13 Aug '14, 10:54) Philipp Chri...

This thread is going nowhere without a log dump :-)

(13 Aug '14, 10:57) Bo Jensen ♦

@Philipp Christophel: After pre-solve, 18840 rows, 1082140 columns, and 2635268 nonzeros.

(13 Aug '14, 11:28) Pavan
showing 5 of 7 show 2 more comments

Sounds like you are solving a reasonably sized vehicle routing problem. You should be able to get better runtimes than what you are seeing right now; your problem does not look extremely big or dense. It could be that your numerics are bad, maybe due to the big-Ms or due to small values you have somewhere in your matrix.

To your questions:

  1. Big-Ms might be the problem, but you might have to choose a totally different formulation, such as one based on time-space networks. Then your big-Ms might go away just like that.
  2. Sorry, I can't help you debug GAMS.
  3. Improving memory swapping will usually not give you much of a speed improvement. It is the linear algebra in the solver that takes time, not the memory. A better formulation or different settings for algorithmic options are usually much more successful.
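For completeness, the kind of algorithmic option settings meant in point 3 would go in `cplex.opt`. A sketch with illustrative values only -- which settings actually pay off is problem-dependent and best found by experimentation or CPLEX's automatic tuning:

```gams
* cplex.opt -- illustrative algorithmic settings
* emphasize proving optimality once good solutions exist
mipemphasis 2
* more aggressive probing and cut generation
probe 2
cuts 2
* run the RINS improvement heuristic every 50 nodes
rinsheur 50
```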

This paper might also be helpful in finding a good formulation.


answered 13 Aug '14, 13:25


Philipp Chri...

edited 13 Aug '14, 13:29




OR-Exchange! Your site for questions, answers, and announcements about operations research.