Hello O.R. gurus, I have a MIP that I want to solve faster with CPLEX; a near-optimal solution is good enough. This is the technique I applied: 1) Solve the LP relaxation of the entire problem and get the reduced costs of all columns. 2) Keep only the columns with negative or zero reduced cost, and ignore the rest (by fixing their lower and upper bounds to zero). 3) Solve the MIP over the remaining columns, which is much smaller than the original problem. I borrowed this technique from column generation, the difference being that for the initial LP I take the entire problem. It has reduced the solving time of my MIP, and the optimality gap is also acceptable. Can anyone suggest when this technique can fail?
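The three steps can be sketched in a few lines of Python. In practice the reduced costs come from CPLEX after solving the LP relaxation, and step 3 would go back to CPLEX with the dropped variables' bounds set to zero; here the reduced costs are hard-coded stand-in values and a brute-force enumeration over a toy model stands in for the MIP solve.

```python
from itertools import product

# Toy illustration of the column-filtering heuristic (not the actual model).
costs         = [3.0, 5.0, 4.0, 7.0]   # objective coefficients (minimise)
reduced_costs = [-1.2, 0.0, 2.5, 0.7]  # assumed output of the LP solve

# Step 2: keep columns with non-positive reduced cost, drop the rest.
kept = [j for j, rc in enumerate(reduced_costs) if rc <= 0.0]

# Step 3: solve the reduced MIP. A real model would fix the dropped
# variables' bounds to zero in the solver; this brute force over binary
# values of the kept columns is for illustration only.
def solve_reduced(costs, kept, is_feasible):
    best_val, best_x = float("inf"), None
    for combo in product([0, 1], repeat=len(kept)):
        x = [0] * len(costs)
        for j, v in zip(kept, combo):
            x[j] = v
        if is_feasible(x):
            val = sum(c * xi for c, xi in zip(costs, x))
            if val < best_val:
                best_val, best_x = val, x
    return best_val, best_x

# Hypothetical covering constraint: x0 + x1 >= 1.
val, x = solve_reduced(costs, kept, lambda x: x[0] + x[1] >= 1)
print(kept, val, x)  # [0, 1] 3.0 [1, 0, 0, 0]
```

Note that if the feasibility check had required a dropped column (say x2 + x3 >= 1), the reduced problem would have no solution at all, which is exactly the failure mode the answers below discuss.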
asked
SUJAY |

Two obvious things might go wrong. First, as @optimizer says, you may have made the "reduced problem" too restrictive, so that it no longer has a feasible solution. Second, you might find a solution that is far from the optimal IP solution. To mitigate these issues, you could add a "local branching" constraint stating that only a limited number of the unpromising variables may be used, and then solve this larger, but still reduced, problem. If you need to solve your problem to optimality, you already have some of the building blocks for a cut-and-solve algorithm (see Climer and Zhang, "Cut-and-solve: An iterative search strategy for combinatorial optimization problems"): they solve a series of these reduced problems in order to find an optimal solution.
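The local-branching idea amounts to replacing the hard fixing with a single cardinality constraint: instead of forcing every unpromising binary variable (positive reduced cost) to zero, allow at most k of them to take the value 1, i.e. sum over j in U of x_j <= k. A minimal sketch, with made-up data and a plain feasibility check standing in for the solver-side constraint:

```python
# Sketch of the local-branching relaxation of the column-fixing step.
# All data below is illustrative, not from the poster's model.
reduced_costs = [-1.2, 0.0, 2.5, 0.7, 1.1]
unpromising = [j for j, rc in enumerate(reduced_costs) if rc > 0.0]  # U

k = 1  # budget: at most k unpromising variables may be used

def satisfies_local_branching(x, unpromising, k):
    """Check  sum_{j in U} x_j <= k  on a candidate 0/1 solution."""
    return sum(x[j] for j in unpromising) <= k

# Uses one unpromising column (index 2) -> within budget.
print(satisfies_local_branching([1, 0, 1, 0, 0], unpromising, k))  # True
# Uses two unpromising columns (2 and 4) -> violates the constraint.
print(satisfies_local_branching([0, 1, 1, 0, 1], unpromising, k))  # False
```

With k = 0 this recovers the original hard fixing; increasing k trades solve time for a better chance of reaching (or bounding) the true optimum.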
answered
Sune |

I once solved a very large routing problem by doing something similar, but built a larger subset of my full problem for the final MILP model like this. Repeat a few times: (1) solve the relaxed problem; (2) identify all the (should-be-integer) variables that take an active part in the solution; (3) add constraints to the model that exclude those variables from the solution, and go back to (1). Then build the MILP model using only the variables selected by this loop. The loop picks the variables the model "wants" to use on the first pass, then forces it to choose a different set of variables (corresponding to a possibly less favourable solution); those variables are added to the excluded set before re-solving. Repeating this gives the final MILP a bigger set of variables to choose from. The resulting model was much faster to solve than the original MILP, and with 5 to 10 gathering iterations its solution was identical to the full MILP solution, whenever we were prepared to wait long enough to solve the full model for comparison.
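The gathering loop can be sketched as follows. The `solve_relaxation` below is a stand-in for a real LP solve under the current exclusion constraints; for illustration it simply returns the cheapest two not-yet-excluded columns, all data being made up.

```python
def solve_relaxation(excluded):
    # Stand-in for solving the LP relaxation with exclusion constraints:
    # returns the set of variables active in the relaxed solution.
    # Here: the two cheapest columns not yet excluded (illustrative only).
    costs = {0: 3.0, 1: 5.0, 2: 4.0, 3: 7.0, 4: 6.0}
    allowed = sorted((c, j) for j, c in costs.items() if j not in excluded)
    return {j for _, j in allowed[:2]}

gathered, excluded = set(), set()
for _ in range(2):                        # "repeat a few times"
    active = solve_relaxation(excluded)   # (1) solve the relaxed problem
    gathered |= active                    # (2) variables active in solution
    excluded |= active                    # (3) exclude them, re-solve
print(sorted(gathered))  # [0, 1, 2, 4]
# The final MILP would then be built over `gathered` only.
```

Each pass forces the relaxation onto a less favourable set of columns, so the union `gathered` grows toward the variable set the full MILP would actually use.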
answered
timchippingt... |

Is the smaller MIP always feasible?

"And the optimality gap is also acceptable." If the gap is between the integer optimum of the reduced IP and the original LP, that's good. If the gap is between the integer solution and LP relaxation of the restricted model, you could still be far from the true optimum.