I don't have much experience using solvers, so it is not clear to me how to interpret my results. The questions I am going to ask are very general.

At the moment I am trying Gurobi (vanilla setup, no changes to the solver parameters) on a generic problem with 6400 rows and 3850 columns. After the presolve phase, the problem has 5530 rows and 3653 columns: 23 continuous variables and 3680 integer variables (2380 of which are binary). After 10 minutes on an i5 computer with 4 GB of RAM, the gap between the best objective and the best bound is 0.06%.

I know these numbers depend heavily on the problem formulation, on the machine I am using, and on many other factors, but in your experience, what does your gut feeling suggest? Is an order of magnitude of tens of minutes "normal" for a problem of this size, or should I take it as a hint that there may be problems with the formulation? Moreover, from a scientific-literature perspective, what is an accepted value for the solver time limit: minutes or hours? Again, I know it depends, but I would really appreciate your informed opinion. I have heard of problems with millions of variables, so I wonder how long those would take to solve on a "normal" computer :-) Any hints and directions for making progress in this area are very welcome.
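For concreteness, the 0.06% figure is the relative MIP gap, which solvers typically compute as |best objective − best bound| / |best objective|. A minimal sketch of that definition (the exact convention when the incumbent objective is zero varies by solver; the numbers below are made up for illustration):

```python
def mip_gap(best_objective: float, best_bound: float) -> float:
    """Relative MIP gap as most solvers report it:
    |best_objective - best_bound| / |best_objective|."""
    if best_objective == 0.0:
        return float("inf")  # convention varies by solver
    return abs(best_objective - best_bound) / abs(best_objective)

# Example: a minimization run with incumbent 1000.6 and bound 1000.0
print(f"{mip_gap(1000.6, 1000.0):.4%}")  # ~0.06%
```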
asked
Libra

I personally prefer to interpret some of my benchmarking graphs to make decisions like the ones you mention, because I find that I make better decisions based on graph shapes than on a single threshold value or data point. For example, take a best-solution-vs-time graph (just one of many graph types): I expect to see an upside-down L shape, like here. If it's more of a / shape, then we probably haven't scratched the surface yet. OTOH, if it's flat-lining near the end, like here, then using more time probably won't matter much. Other graph types, for example those that look at the score loss of specific constraint types, or those that record the solution mutation rate per new best solution found, let me find other opportunities or support my decision making.
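The shape test above can even be mechanized. A rough sketch (the window and tolerance values are arbitrary assumptions, and minimization is assumed): if the incumbent barely improved over the tail of the run, the curve is the upside-down L and more time likely won't help; if it's still improving steadily, it's the / shape.

```python
def is_flatlining(times, best_scores, window=0.2, tol=5e-3):
    """Heuristic: has the best-score curve gone flat over the last
    `window` fraction of the run?  `best_scores` holds the incumbent
    objective recorded at each time point (minimization assumed)."""
    cutoff = times[-1] - window * (times[-1] - times[0])
    recent = [s for t, s in zip(times, best_scores) if t >= cutoff]
    first, last = recent[0], recent[-1]
    improvement = abs(first - last) / max(abs(first), 1e-12)
    return improvement < tol

# An upside-down L: big early gains, then flat -> more time won't help much
times  = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
scores = [100, 60, 40, 35, 33, 32.5, 32.2, 32.1, 32.05, 32.02, 32.01]
print(is_flatlining(times, scores))   # True

# A / shape: still improving steadily -> give it more time
scores2 = [100, 90, 80, 70, 60, 50, 40, 30, 20, 10, 5]
print(is_flatlining(times, scores2))  # False
```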
answered
Geoffrey De ... ♦

"Some days you get the bear, some days the bear gets you." Maybe you got the bear this time? :-)

The question of whether the model is correct is a good one, and not easy to answer. Since you are obtaining a solution (apparently a near-optimal one), you should be able to verify that it is feasible and satisfies your constraints by writing an external test program (or building a rather large spreadsheet). That's part of the battle. Verifying that it is in fact close to optimal is more problematic: if you have an unnecessary or invalid constraint (or more than one of them), you could be pruning substantially better solutions because your model "thinks" they are infeasible.

You might want to have a look at the concepts of verification and validation of simulation models. If this is a real-world problem, they may apply here as well. Run your assumptions past someone expert in the problem domain (but not necessarily able to spell "optimization") to see if you are missing constraints, have unnecessary or incorrect ones, have the wrong objective function, ... Then have someone at arm's length from the project but literate in optimization (or at least basic math) look at your constraint formulations to see if they match their verbal descriptions.
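The external feasibility check is straightforward to sketch: re-evaluate each linear constraint against the candidate solution, independently of the solver. The function name, data layout, and example numbers below are all hypothetical; a real model would read A, b, and the senses from wherever the formulation lives.

```python
def check_feasibility(A, b, x, senses, tol=1e-6):
    """Verify a candidate solution x against each linear constraint,
    independently of the solver.  `senses` holds '<=', '>=', or '=='
    per row.  Returns the indices of violated constraints."""
    violated = []
    for i, (row, rhs, sense) in enumerate(zip(A, b, senses)):
        lhs = sum(a * xi for a, xi in zip(row, x))
        ok = (sense == "<=" and lhs <= rhs + tol) or \
             (sense == ">=" and lhs >= rhs - tol) or \
             (sense == "==" and abs(lhs - rhs) <= tol)
        if not ok:
            violated.append(i)
    return violated

# Two constraints: x0 + x1 <= 4 and x0 - x1 >= 0, candidate x = (3, 2)
print(check_feasibility([[1, 1], [1, -1]], [4, 0], [3, 2], ["<=", ">="]))
# -> [0]  (the first constraint is violated: 3 + 2 > 4)
```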
answered
Paul Rubin ♦♦

You might take a look at the MIPLIB 2010 benchmarking setup: