I was reading a presentation about indicator constraints in mixed-integer programs. The authors mention that it is risky to use the big-M method because:
I am wondering whether there is any risk in using big-M in binary linear programs. Specifically, I have a constraint of the form: if \(y=1\), then \(f(x)\leq a\), where \(x\) is a binary vector and \(f(\cdot)\) is a linear function. If I write the constraint as \(f(x)\leq a+M(1-y)\), then I would get an equivalent constraint, no? What are the issues with this, in theory and in practice? If there are any known issues, could you please provide some references?

asked 26 Apr '18, 09:23 zbir
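To see why the reformulation is logically equivalent (when M is large enough), here is a small self-contained sketch. The coefficients of \(f\), the value of \(a\), and the choice of \(M\) are illustrative assumptions, not from any particular model; since \(x\) is binary we can simply enumerate all assignments.

```python
# Hypothetical example: check that f(x) <= a + M*(1 - y) enforces
# "if y = 1 then f(x) <= a" by enumerating all binary assignments.
# f(x) = 3*x1 + 5*x2 and a = 4 are made-up illustrative values.
from itertools import product

a = 4.0
coeffs = [3.0, 5.0]
# A valid M must satisfy M >= max_x f(x) - a; for binary x that maximum
# is the sum of the positive coefficients, so the tightest valid M is 4.
M = sum(c for c in coeffs if c > 0) - a

for x1, x2, y in product([0, 1], repeat=3):
    f = coeffs[0] * x1 + coeffs[1] * x2
    bigM_ok = f <= a + M * (1 - y)          # big-M version
    logical_ok = (y == 0) or (f <= a)       # original implication
    assert bigM_ok == logical_ok, (x1, x2, y)
print("big-M constraint matches the logical implication on all 8 points")
```

With exact arithmetic and a sufficiently large M the two formulations agree on every binary point; the practical caveats discussed in the answer below arise from how large M has to be and from finite-precision solvers.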
The issues with using big-M are the same regardless of whether the integer variables are binary or general integer, and I'm not aware of any reference specific to the binary case. I don't know that I would characterize any of the issues as theoretical in nature: if you have an unbounded amount of time and computer memory, and if your computer does infinite-precision arithmetic, you will eventually get the correct solution to a (bounded, feasible) problem. The issues are all practical, because you do not have unbounded time and memory, and you most certainly do not have infinite precision.

Weak relaxations and numerical difficulties are both possible but not guaranteed. More precisely, if you choose M too small, you risk getting an incorrect solution. If you choose M too large, you pretty much guarantee numerical problems (and quite likely weak relaxations). If you choose M "just right" (big enough, but just barely), you still risk weak relaxations and/or numerical issues, but you may also wind up with a perfectly fine model. The best choice of M is problem-dependent, and sometimes you can exploit your knowledge of the problem to come up with a tight value for M.

If big-M formulations for logical constraints like yours (where a binary variable basically turns a constraint on or off) prove problematic, there is also a version of Benders decomposition that might help. Here's a reference: Codato, G. & Fischetti, M. (2006). Combinatorial Benders' Cuts for Mixed-Integer Linear Programming. Operations Research 54, 756–766. I would probably try the big-M approach first, since Benders is an "outer approximation" method, but I've had good luck with combinatorial Benders on some problems.

Addendum: At Mark L. Stone's suggestion, I'm adding something about the role of integrality tolerance. Integrality tolerance refers to how much the value of an integer variable can differ from the nearest integer and still be considered integer-feasible by the solver.
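The "M too small" failure mode can be made concrete with a few lines of enumeration. The function, coefficients, and the undersized M below are illustrative assumptions: with \(y=0\) the logical constraint imposes nothing, yet an M that is too small makes the big-M inequality cut off legitimately feasible points.

```python
# Hypothetical illustration: choosing M too small cuts off feasible points.
# With f(x) = 3*x1 + 5*x2 and a = 4, the tightest valid M is max f - a = 4.
# Deliberately pick M = 2 (too small) and look at the y = 0 case, where
# the constraint is supposed to be switched off entirely.
from itertools import product

a, M_small = 4.0, 2.0
coeffs = [3.0, 5.0]

cut_off = []
for x1, x2 in product([0, 1], repeat=2):
    f = coeffs[0] * x1 + coeffs[1] * x2
    # Big-M test with y = 0: should be satisfied by every binary x,
    # but f <= a + M_small = 6 excludes points with f > 6.
    if not (f <= a + M_small * (1 - 0)):
        cut_off.append((x1, x2))

print(cut_off)
```

Here x = (1, 1) with f = 8 is wrongly declared infeasible, which is exactly how an undersized M produces an incorrect "optimal" solution.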
It's a necessary compromise, since rounding error (from limited-precision arithmetic) makes an exact integer solution pretty unlikely. Consider a constraint of the form \(e \le M x\), where \(e\) is some expression and \(x\) is a binary variable. Assume that \(\epsilon > 0\) is the integrality tolerance and that, say, \(x=\epsilon/2\) in the solution. The solver figures that's close enough to \(x = 0\). The intent of the constraint was to force \(e\le 0\) when \(x = 0\), but what you actually get is \(e \le M\epsilon / 2\) ... and now you're in trouble. The right-hand side is "something really big times something really small", which could end up being pretty much anything. The default value of the integrality tolerance in CPLEX ("EpInt") is 1e-5; pick a value for \(M\) in the tens of thousands or higher, and you could end up with \(x = 0\) but \(e > 1\).

answered 26 Apr '18, 16:37 Paul Rubin ♦♦

Nice. Perhaps you can expand your answer with a discussion of the relationship between the integrality tolerance in the solver (which most solvers allow to be changed from its default), the value of M, and when/how erroneous results (due to big-M constraints not working as intended) can arise from the combination of the integrality tolerance and the value of M.
(27 Apr '18, 09:51)
Mark L Stone
Done, and thanks for the suggestion ... even if it did mean I had to wrestle with the MathJax implementation here.
(27 Apr '18, 10:33)
Paul Rubin ♦♦
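The interaction described in the addendum is easy to reproduce numerically. The sketch below assumes the CPLEX default integrality tolerance of 1e-5 and a few arbitrary magnitudes of M; it just evaluates the bound \(M\epsilon/2\) that the constraint \(e \le Mx\) actually imposes when the solver accepts \(x = \epsilon/2\) as zero.

```python
# Numeric sketch of the addendum: with integrality tolerance eps = 1e-5
# (the CPLEX default EpInt), a solver may accept x = eps/2 as an
# integer-feasible "0". The constraint e <= M*x was meant to force
# e <= 0, but it actually allows e <= M*eps/2.
eps = 1e-5          # integrality tolerance
x = eps / 2         # treated by the solver as x = 0

for M in (1e3, 1e5, 1e7):
    slack = M * x   # the bound actually imposed on e
    print(f"M = {M:.0e}: e <= {slack:g}")
```

For M = 1e5 the nominally "off" constraint still permits e up to 0.5, and for M = 1e7 up to 50, matching the warning that M in the tens of thousands or higher can leave \(e > 1\) while \(x\) reads as zero.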
