I have a two-stage stochastic program with relatively complete recourse, binary first-stage variables, and continuous second-stage variables. I've implemented the L-shaped method to solve large instances, but its performance is poor even on very small instances that I can solve exactly (hundreds of iterations without convergence). For example, on one small instance, after 25 iterations the lower bound is about 3% below the optimum and the upper bound about 4% above it, with only tiny improvements after that. This is typical of the instances I'm trying to solve.

There is a lot of work on enhancements to Benders decomposition, but I'm trying to understand what might actually be effective, rather than blindly implementing some of the more recent methods and praying, or trying something else entirely. I've found that people often cite low-density cuts as a reason for poor convergence, but I'm not sure I understand the density of an optimality cut correctly: can I define the density of a cut as the percentage of master-problem variables with a non-zero coefficient in the cut? All of the cuts I'm generating have non-zero coefficients for 75-80% of the master-problem variables.

Any suggestions on how to "diagnose" performance and proceed?
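For what it's worth, that definition of density is the usual one: the fraction of master-problem variables appearing with a (numerically) non-zero coefficient in the cut. A minimal sketch, assuming each cut is stored as a plain coefficient vector over the master variables (the function name and tolerance are my own choices, not from any particular library):

```python
def cut_density(coeffs, tol=1e-9):
    """Fraction of master-problem variables whose cut coefficient
    exceeds `tol` in magnitude (tolerance avoids counting numerical noise
    left over from the dual solution as 'non-zero')."""
    nonzero = sum(1 for c in coeffs if abs(c) > tol)
    return nonzero / len(coeffs)

# A cut touching 3 of 4 master variables has density 0.75;
# a stray 1e-12 coefficient is not counted.
print(cut_density([2.5, 0.0, -1.0, 4.0]))  # -> 0.75
print(cut_density([1e-12, 1.0]))           # -> 0.5
```

By that measure, cuts hitting 75-80% of the master variables would normally be called dense, not sparse, so low density per se may not be your bottleneck.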
asked by orDude

You might want to take a look at: Magnanti, T. L. & Wong, R. T., "Accelerating Benders Decomposition: Algorithmic Enhancement and Model Selection Criteria," Operations Research, vol. 29, pp. 464-484, 1981. They provide some guidance on selecting optimality (bound-support) cuts when the subproblem has multiple optimal solutions. I don't know whether that occurs with your model, but if so, this might help.
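The idea behind that selection, in a toy sketch with made-up data: when the subproblem has alternate optimal dual solutions, every one of them yields a valid cut and all of those cuts agree at the current master solution, so Magnanti-Wong picks the one that is tightest at a "core point" in the relative interior of the master feasible region (the function names here are hypothetical, and a real implementation would obtain the candidate via an auxiliary LP rather than by enumeration):

```python
# Each candidate cut is theta >= a . x + b, stored as (a, b).

def cut_value(cut, x):
    a, b = cut
    return sum(ai * xi for ai, xi in zip(a, x)) + b

def magnanti_wong_select(cuts, x0):
    """Among equally valid cuts (same value at the current x_hat),
    return the one that is deepest at the core point x0."""
    return max(cuts, key=lambda cut: cut_value(cut, x0))

# Two alternate cuts that coincide at x_hat = (1, 0):
cuts = [((-1.0, 0.0), 5.0),   # theta >= -x1 + 5
        (( 0.0, -2.0), 4.0)]  # theta >= -2*x2 + 4
x_hat = (1.0, 0.0)
assert cut_value(cuts[0], x_hat) == cut_value(cuts[1], x_hat) == 4.0

x0 = (0.5, 0.5)  # core point in the interior
print(magnanti_wong_select(cuts, x0))  # -> ((-1.0, 0.0), 5.0)
```

Both cuts cut off the same amount at x_hat, but the first one is stronger away from it, which is exactly the kind of distinction the plain L-shaped method never makes.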
answered by Paul Rubin ♦♦

See the paper: T. Santoso, S. Ahmed, M. Goetschalckx, and A. Shapiro, "A Stochastic Programming Approach for Supply Chain Network Design under Uncertainty," European Journal of Operational Research, vol. 167, pp. 96-115, 2005. There we discuss implementations of a number of acceleration schemes for two-stage stochastic programs with binary first-stage variables, including regularization via a trust region, knapsack inequalities, an upper-bound heuristic, Magnanti-Wong cut strengthening, and multi-cut. (Some of these are specific to supply chain network design; others are quite general.)
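To make the multi-cut distinction concrete with a small sketch (data and function name are illustrative): the single-cut L-shaped method averages the per-scenario cuts theta_s >= a_s . x + b_s with the scenario probabilities into one cut on the expected recourse value, while the multi-cut variant keeps one cut per scenario, which enlarges the master problem but typically transfers more information per iteration:

```python
def aggregate_cut(scenario_cuts, probs):
    """Probability-weighted average of per-scenario cuts (a_s, b_s),
    yielding the single expected-value cut used by the classic
    (single-cut) L-shaped method."""
    n = len(scenario_cuts[0][0])
    a = [sum(p * cut[0][j] for p, cut in zip(probs, scenario_cuts))
         for j in range(n)]
    b = sum(p * cut[1] for p, cut in zip(probs, scenario_cuts))
    return a, b

# Two equally likely scenarios, each contributing one cut:
scenario_cuts = [((2.0, 0.0), 1.0), ((0.0, 4.0), 3.0)]
probs = [0.5, 0.5]
print(aggregate_cut(scenario_cuts, probs))  # -> ([1.0, 2.0], 2.0)
```

With only 25 iterations stalling a few percent from the optimum, switching from this aggregated cut to the multi-cut form is often one of the cheapest experiments to run first.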
answered by Shabbir Ahmed