I have two questions about the Reformulation-Linearization Technique (RLT): 1) Does applying RLT to an MILP model (a mix of binary and continuous variables) lead to a better solution (in terms of objective function value and CPU time)? Since RLT does not reduce the number of binary variables, and it adds extra continuous variables and constraints to the original MILP model, it seems pointless to apply RLT to MILP models. Yet I have seen some applications of RLT to MILP models. 2) I have seen some research on applying RLT to quadratic models that contain terms like X1*X2. I am wondering why, instead of applying RLT, the researchers do not linearize the terms X1*X2 by defining a continuous variable w = X1*X2 and solving the resulting MILP model directly? Thank you
asked 10 Mar '13, 01:08 hesam eivazy fbahr ♦
The question has been closed for the following reason "Other" by fbahr 09 Feb '14, 14:34
The RLT is a sequential convexification technique that can be applied repeatedly. Successive applications yield tighter and tighter continuous relaxations, with the final application producing the convex hull. So the RLT definitely results in better objective function values for the relaxations used when solving MILPs. It is true that the resulting problems are much larger than the original, and that is a tradeoff. But there have been a number of successful applications where the improved bounds outweighed the additional cost of solving the relaxations.

One could reduce the size of the relaxation by projecting back into the original space, but then one would need to derive the projections. One might also use RLT constraints in a cutting-plane method instead of generating the full relaxation, but it has been a while since I looked at that and I don't know how much work has been done there. Cutting planes for the lift-and-project convexification technique of Balas et al. have been studied in several contexts.

For quadratic problems, the linearization you describe is a relaxation: you replace \(x_1 x_2\) with \(w_{12}\), but the new problem is only equivalent to the original if you enforce \(w_{12} = x_1 x_2\) as a constraint, so the problem is still quadratic. The linearization drops those quadratic constraints and replaces them with weaker, linear ones. Once you are in the realm of relaxations, you again want them to be as strong as possible. So again, the RLT produces a sequence of successively stronger relaxations that dominate the simple linearization.

answered 10 Mar '13, 23:44 Matthew Salt... ♦

@Matthew Saltzman: Thank you very much for your answer.
(11 Mar '13, 10:02)
hesam eivazy
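To make the RLT machinery in the answer above concrete, here is a minimal sketch (my own illustration, not from the thread; the function name and data layout are assumptions) of how level-1 RLT generates new linear constraints from a single 0-1 knapsack-style constraint: multiply \(\sum_j a_j x_j \le b\) by a bound factor \(x_k\) and by \((1 - x_k)\), use the binary identity \(x_k^2 = x_k\), and substitute a new continuous variable \(w_{jk}\) for each remaining product \(x_j x_k\).

```python
# Sketch: level-1 RLT applied to one 0-1 constraint sum_j a[j]*x[j] <= b.
# Multiplying by x_k and (1 - x_k), substituting x_k^2 = x_k and
# w_jk = x_j * x_k, yields two linear constraints in (x, w).

def rlt_level1(a, b, k):
    """Return the two linearized constraints obtained by multiplying
    sum_j a[j]*x[j] <= b by x_k and by (1 - x_k).

    Each constraint is (coeff_x, coeff_w, rhs), meaning
    sum coeff_x[j]*x_j + sum coeff_w[{j,k}]*w_jk <= rhs,
    with coeff_w keyed by frozenset({j, k})."""
    n = len(a)

    # Multiply by x_k:  sum_j a_j x_j x_k <= b x_k.
    # The j == k term a_k x_k^2 collapses to a_k x_k.
    cx1, cw1 = {}, {}
    for j in range(n):
        if j == k:
            cx1[k] = cx1.get(k, 0) + a[j]
        else:
            cw1[frozenset((j, k))] = a[j]
    cx1[k] = cx1.get(k, 0) - b          # move b x_k to the left side
    c1 = (cx1, cw1, 0)

    # Multiply by (1 - x_k):  sum_j a_j x_j - sum_j a_j x_j x_k <= b - b x_k.
    cx2, cw2 = {}, {}
    for j in range(n):
        cx2[j] = cx2.get(j, 0) + a[j]
        if j == k:
            cx2[k] -= a[j]              # a_k x_k^2 = a_k x_k cancels
        else:
            cw2[frozenset((j, k))] = -a[j]
    cx2[k] = cx2.get(k, 0) + b
    c2 = (cx2, cw2, b)
    return c1, c2
```

For example, multiplying \(2x_0 + 3x_1 \le 4\) by \(x_0\) gives \(-2x_0 + 3w_{01} \le 0\), and multiplying by \((1 - x_0)\) gives \(4x_0 + 3x_1 - 3w_{01} \le 4\); both are valid at every feasible binary point with \(w_{01} = x_0 x_1\), but together they cut off fractional points that the original relaxation allows.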

I can't speak to the RLT directly (it has been many years since I last encountered it), but as for (2), \(w = X1 \cdot X2\) cannot be linearized exactly if both \(X1\) and \(X2\) are continuous decision variables that participate in the optimization. \(X1 \cdot X2\) is what is known as a bilinear expression, and despite their simplicity, bilinear terms are among the nastiest nonconvex expressions in the business. There are ways to approximate them, but the approximations are definitely nonlinear and do not have exact linearizations. Global solvers are typically needed to obtain global solutions.
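For the continuous case this comment describes, the standard way to *relax* (not linearize) a bilinear term over a box is the McCormick envelope: four linear inequalities that bracket \(x \cdot y\) whenever \(x^L \le x \le x^U\) and \(y^L \le y \le y^U\). The sketch below (function name and sampling are my own illustration) evaluates those four inequalities at a point:

```python
# Sketch: McCormick envelope of the bilinear term w = x*y over a box.
# These are valid linear bounds, not an exact linearization; the gap
# between them is exactly why global solvers are needed.

def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Return (lower, upper) McCormick bounds on x*y at the point (x, y),
    given box bounds xl <= x <= xu and yl <= y <= yu."""
    # Underestimators: from (x - xl)(y - yl) >= 0 and (x - xu)(y - yu) >= 0.
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    # Overestimators: from (xu - x)(y - yl) >= 0 and (x - xl)(yu - y) >= 0.
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper
```

At the corners of the box the bounds are tight (equal to \(x \cdot y\)); in the interior there is a gap, which is the relaxation error a global solver must close by branching on the bounds.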
Thanks for your comment. I should have mentioned that X1 and X2 are binary variables, not continuous.
Ah, then it's essentially an AND operator, i.e., X1 AND X2. It has efficient MIP formulations that do not entail additional binary variables and that have tight convex-hull relaxations. My recollection of the RLT is that it does something much more general than that, but I'll leave it to someone more familiar with RLT to answer.
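The efficient formulation alluded to here is presumably the standard AND linearization: \(w \le X1\), \(w \le X2\), \(w \ge X1 + X2 - 1\), \(0 \le w \le 1\). A small check (my own sketch, function name assumed) confirms that at binary points these linear constraints force \(w = X1 \cdot X2\) without any extra binary variable:

```python
# Sketch: the standard linearization of w = X1 AND X2 for binary X1, X2.
# At integer points the three inequalities pin w to exactly X1*X2.

def and_constraints_hold(x1, x2, w):
    """Check whether (x1, x2, w) satisfies the AND linearization."""
    return (w <= x1) and (w <= x2) and (w >= x1 + x2 - 1) and (0 <= w <= 1)
```

Enumerating the four binary combinations shows that \(w = X1 \cdot X2\) is the only integer value of \(w\) that satisfies all the constraints, which is why no additional binary variable is needed.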