I have a large and difficult IP with 0/1 variables. One of the hard cases I'm interested in took about 100,000 seconds on my workstation. It did produce a satisfactory answer, but I'd like to run the model on a number of similar problems, so of course I'd like to speed it up. Today I tried an experiment: a heuristic analysis of the combinatorics of the problem told me it was extremely unlikely (or even impossible) that the optimum solution would set certain variables to 1, so I added constraints fixing those variables to 0. Much to my surprise, CPLEX now finds an optimal solution which is *better* than the unconstrained one! The only explanation I can come up with is that numerical roundoff in the original run may have led to fathoming nodes that shouldn't have been fathomed. Does this sound reasonable? Any other suggestions?
asked by VictorSMiller

Depending on the magnitude of the objective function value, your first run (without the heuristic) may have found a solution that was within the optimality tolerance, at which point the search was terminated. With the variables fixed, CPLEX's internal heuristics may then have found a better solution in the second run. I know it means a long wait, but you can check this by setting the relative optimality gap (EpGap in the C++ API) to 0.0. To be safe, also set the absolute optimality gap (EpAGap) to zero (or to 0.9 if you know the objective function value is integer).
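To see why a nonzero EpGap can hide a better solution: the solver declares the incumbent "optimal" as soon as the best bound is provably close enough, not when it equals the true optimum. A minimal Python sketch of that stopping test (the function name and defaults here are illustrative, not CPLEX internals, though 1e-4 and 1e-6 match CPLEX's documented defaults for EpGap and EpAGap):

```python
def within_gap(incumbent, best_bound, ep_gap=1e-4, ep_agap=1e-6):
    """Return True if the search may stop: the incumbent is provably
    within the relative gap ep_gap or absolute gap ep_agap of optimal."""
    abs_gap = abs(best_bound - incumbent)
    if abs_gap <= ep_agap:
        return True
    # Relative gap, guarded against a zero incumbent as in CPLEX's
    # documented formula |bound - incumbent| / (1e-10 + |incumbent|)
    return abs_gap / (1e-10 + abs(incumbent)) <= ep_gap

# With the default relative gap of 1e-4, an incumbent of 1,000,000 is
# accepted as "optimal" once the bound is within roughly 100 of it:
print(within_gap(1_000_000, 999_901))              # True  -> solver stops
print(within_gap(1_000_000, 999_901, ep_gap=0.0))  # False -> keeps searching
```

So on a large objective, a solution up to EpGap times the objective away from the true optimum can terminate the search, which is consistent with the second run finding something better.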
answered by Sune

Was the "unconstrained" solution proved optimal, or was it just the incumbent when the run hit some limit? If it was proved optimal, was it within tolerance of the second solution (per Sune's answer)?