Questions tagged with <span class="tag">decomposition</span> (en)
http://www.or-exchange.com/tags/decomposition/?type=rss
Wed, 20 Jan 2016 10:08:37 -0500

Is decomposition worth the effort? (especially for MILPs)
http://www.or-exchange.com/questions/13247/is-decomposition-worth-the-effort-especially-for-milps
<p>Hi,</p>
<p>A book I often return to, and use as a source of knowledge when building a new model, is "Model Building in Mathematical Programming" by Williams. In a section on decomposition, Williams writes:</p>
<p>"Decomposition algorithms are also of considerable computational interest, as
they offer a possibility of avoiding the large amount of time needed to solve large
models should those models be structured. Unfortunately, decomposition has only
met with limited computational success. The most widely used algorithm is the
Dantzig–Wolfe algorithm that applies to block angular structured models. There
are many circumstances, however, in which it is more efficient to solve the total
model rather than to use decomposition. The main advice to someone building
a structured model should not be to attempt decomposition on other than an
experimental basis"</p>
<p>As someone working with a large MILP who was considering decomposition, I became very hesitant about continuing down that path as a way to reduce solution time after reading the last sentence above. I have some experience with Dantzig–Wolfe decomposition (and branch and price) for solving large MIPs, but only with partial success: it required a large implementation effort, with only a limited reduction in solution time.</p>
<p>Though I know there are quite a few articles out there showing positive results with decomposition, I wondered whether the community at large agrees with Williams's statement that "The main advice to someone building a structured model should not be to attempt decomposition on other than an experimental basis", and/or whether you know of real-world "success stories" of decomposition applied to MILPs.</p>
ErlendT, Wed, 20 Jan 2016 10:08:37 -0500
http://www.or-exchange.com/questions/13247/is-decomposition-worth-the-effort-especially-for-milps
Tags: dantzig-wolfe, decomposition, best-practices

Unhandled exception error c++ when using pointer of Objects IloModel and IloObjective::setLinearCoefs?
http://www.or-exchange.com/questions/12859/unhandled-exception-error-c-when-using-pointer-of-objects-ilomodel-and-iloobjectivesetlinearcoefs
<p>I am coding a decomposition algorithm with scenario subproblems. I use pointers to the subproblem optimization models so that I do not have to recreate them from scratch each time; as the algorithm proceeds, the objective-function coefficients of each subproblem must be modified. The code is shown partially below: the subproblem model pointers (for 2 scenarios) are generated, then the objective-function coefficients of one scenario subproblem are changed using IloObjective::setLinearCoefs, and the modified subproblem must be solved again:</p>
<pre><code>int main(int argc, char **argv)
{
    int ScenarioNum = 2; // number of scenarios in the small example
    IloEnv env;
    try {
        // X, Y, RHS_sub1 and RHS_sub2 are declared elsewhere (code shown partially).
        // Generate the subproblem model pointers:
        IloModel* MaxProblemPtr = new (env) IloModel[ScenarioNum];
        IloObjective* MaxObjPtr = new (env) IloObjective[ScenarioNum];
        for (int s = 0; s < ScenarioNum; s++) {
            IloModel MaxProblem(env);
            *(MaxProblemPtr + s) = MaxProblem;
            IloObjective MaxObj = IloAdd(MaxProblem, IloMaximize(env));
            *(MaxObjPtr + s) = MaxObj;
            IloRangeArray ConstMax;
            if (s == 0) {
                ConstMax = IloAdd(MaxProblem,
                                  IloRangeArray(env, -IloInfinity, RHS_sub1));
            } else {
                ConstMax = IloAdd(MaxProblem,
                                  IloRangeArray(env, -IloInfinity, RHS_sub2));
            }
            // Column-wise variable creation: a coefficient in the objective
            // plus a coefficient in the corresponding constraint row.
            for (int j = 0; j < 2; j++) {
                X.add(IloNumVar(MaxObj(1) + ConstMax[0](1)));
                Y.add(IloNumVar(MaxObj(2) + ConstMax[1](1)));
            }
            IloCplex maxcplex(MaxProblem);
            if (maxcplex.solve()) {
                double currentObj = maxcplex.getObjValue();
                cout << "**max objective function for s" << s + 1 << " : "
                     << currentObj << endl;
            }
        }
        // CHANGING THE OBJECTIVE FUNCTION OF THE FIRST SCENARIO:
        IloNumArray XCoeff(env);
        IloObjective tempMaxObj = *MaxObjPtr;
        tempMaxObj.setLinearCoefs(X, XCoeff); // <-- unhandled exception here
        IloModel Maxproblem = *MaxProblemPtr;
        IloCplex maxcplex(Maxproblem);
    }
    catch (IloException& ex) {
        cerr << "Error: " << ex << endl;
    }
    catch (...) {
        cerr << "Error" << endl;
    }
    env.end();
    return 0;
}
</code></pre>
<p>But I get the unhandled-exception error at the "tempMaxObj.setLinearCoefs" line. I have no idea how to fix this, since the model pointer seems to be working correctly. I would appreciate any help. Thanks.</p>
ORSE, Thu, 08 Oct 2015 15:26:49 -0400
http://www.or-exchange.com/questions/12859/unhandled-exception-error-c-when-using-pointer-of-objects-ilomodel-and-iloobjectivesetlinearcoefs
Tags: pointer, concert-technology, decomposition, cplex, c++

Dealing with degeneracy in large-scale LP problems: references?
http://www.or-exchange.com/questions/12666/dealing-with-degeneracy-in-large-scale-lp-problems-references
<p>I am dealing with the exact optimization of a large-scale linear model. The model has a block-diagonal structure and a flat objective function. It simulates a system of units, connected to each other by a sparse network, over a long-term horizon.</p>
<p>I first applied column generation. Now I am going to apply Benders decomposition; I am still in the prototyping phase.
In both cases I see that the subproblems obtained by decomposition still have an exploitable block-diagonal structure, possibly preserved as new rows and columns are added. However, when I solve the subproblems, or their continuous relaxations, after the first iteration I experience really slow solution times and, in some cases, numerical stability issues, probably due to (quasi-)degeneracy.</p>
<p>I can exclude other numerical problems, as I verified that the model has a good condition number, at least in its original formulation, and large constants are easily fixed by standard presolving.</p>
<p>I wonder, then, whether tackling the degeneracy upfront could be the best way to handle the problem, as the solution of the LPs, even for a few iterations, is going to be the main bottleneck in both schemes.
I have already tried playing with CPLEX's barrier and simplex parameters, with little success.</p>
<p>So I've been thinking about employing ad-hoc simplex and IPM methods that should fit my model, e.g.</p>
<blockquote>
<p>Vector Space Decomposition for Linear Programs, JB Gauthier et al</p>
<p>BlockIP: an interior-point solver for convex separable structured optimization problems, J Castro et al</p>
</blockquote>
<p>The latter has a C++ implementation already available, AFAIK.
However, licence issues notwithstanding, both would require significant, possibly excessive, implementation effort on my part to integrate with existing code bases in CPLEX/Concert, SCIP/CPLEX and GAMS.</p>
<p>Could you suggest other approaches to tackle degeneracy that are easier to implement? At least with CPLEX/Concert?</p>
<p>TIA</p>
<p><strong> UPDATE </strong></p>
<p>I re-ran the tests for both the CG and the BD schemes with perturbation, either CPLEX's own or a direct perturbation. It seems to have only small effects on most instances for CG. For Benders, both the continuous relaxation of the master and the slave can be quite sensitive to perturbation, even to the point of dual or primal singularity. So I would conclude that the constraint matrix is quite sparse and the problem has huge degeneracy issues that are hard to overcome.
In the absence of other tips, I would switch to a more numerically robust heuristic approach, i.e. something that does not rely on, or alter, the numerical stability of the initial model.</p>
Andrea T, Mon, 03 Aug 2015 11:13:15 -0400
http://www.or-exchange.com/questions/12666/dealing-with-degeneracy-in-large-scale-lp-problems-references
Tags: linear-programming, decomposition, degeneracy

combinatorial benders cuts in l-shaped
http://www.or-exchange.com/questions/12510/combinatorial-benders-cuts-in-l-shaped
<p>I am working on a two-stage stochastic program where the master is a MIP model and the slaves have only continuous variables.
For each scenario, I pass three variable sets (k, a, b: first-stage decisions, all binary) from master to slave. Then the stochastic parameters realize. The slave problem has M constraints linking the decision variables a, b and the continuous s, c, x variables (slave variables). The first-stage variable k does not appear in any M constraint of the slave problem, but it constrains the s, c variables.</p>
<p>I have two questions:
1. With these conditions, can I use combinatorial Benders cuts of the form sum_{i: a*_i = 0} a_i + sum_{i: a*_i = 1} (1 - a_i) + sum_{j: b*_j = 0} b_j + sum_{j: b*_j = 1} (1 - b_j) >= 1, where a*, b* denote the current master solution, for this problem?
2. I haven't seen any papers using combinatorial Benders cuts in stochastic programming to generate feasibility cuts. Are there any other conditions? Since we generate feasibility cuts for each scenario subproblem, I believe we can use these cuts as feasibility cuts in the L-shaped algorithm.</p>
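For concreteness, the cut form in question 1 is a standard no-good cut on the binary first-stage variables, built from the incumbent master solution. A minimal sketch of how its coefficients and right-hand side could be assembled (variable names and the dict representation are purely illustrative, not tied to any solver API):

```python
def combinatorial_benders_cut(incumbent):
    """Build a no-good cut that excludes exactly the given 0/1 incumbent.

    incumbent: dict mapping variable name -> 0/1 value (e.g. the a, b values
    passed from master to slave). Returns (coeffs, rhs) for the cut
    sum(coeffs[v] * v) >= rhs: variables at 0 enter with +1, variables at 1
    with -1, and rhs = 1 - (number of ones), which is the inequality above
    with the constant terms moved to the right-hand side.
    """
    coeffs = {v: (1 if val == 0 else -1) for v, val in incumbent.items()}
    rhs = 1 - sum(incumbent.values())
    return coeffs, rhs


# The incumbent itself violates the cut; any solution that flips at least
# one variable satisfies it.
coeffs, rhs = combinatorial_benders_cut({"a1": 1, "a2": 0, "b1": 1})
```

With the incumbent (a1, a2, b1) = (1, 0, 1) this gives the cut -a1 + a2 - b1 >= -1, which the incumbent violates (left-hand side -2) while every other 0/1 point satisfies it.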
<p>Thanks for the help in advance.</p>
paret, Sat, 13 Jun 2015 08:43:08 -0400
http://www.or-exchange.com/questions/12510/combinatorial-benders-cuts-in-l-shaped
Tags: benders-decomposition, stochastic-optimization, cut, decomposition

benders decomposition: Magnanti-Wong point
http://www.or-exchange.com/questions/12338/benders-decomposition-magnanti-wong-point
<p>Hi, </p>
<p>I studied "Practical enhancements to the Magnanti-Wong method", in which the author develops a very cool modification of the Magnanti-Wong Benders cut. However, Theorem 7 of the paper, which explains how to generate a Magnanti-Wong point, is not clear to me. I will state the theorem here for convenience:</p>
<p>Theorem 7. If cone(b − Y) ⊇ Y − y0, U ≠ ∅, and there is an x0 ≥ 0 such that Ax0 + y0 = b, then y0 is a Magnanti–Wong point.</p>
<p>Could someone give an example of how this theorem can be applied? Thank you!<br>
</p>
H_OR, Sun, 24 May 2015 10:45:42 -0400
http://www.or-exchange.com/questions/12338/benders-decomposition-magnanti-wong-point
Tags: decomposition, benders, magnanti-wong

Apache Mahout - Machine Learning on Hadoop
http://www.or-exchange.com/questions/8557/apache-mahout-machine-learning-on-hadoop
<p>Is anyone working on Apache Mahout? (Basically, it is a machine learning framework.) Is it possible to link Mahout with Python to create machine-learning MapReduce jobs?</p>
<p>Also, are there any techniques that can be used to compute the inverse of a matrix in MapReduce?</p>
Pavan, Tue, 24 Sep 2013 07:16:39 -0400
http://www.or-exchange.com/questions/8557/apache-mahout-machine-learning-on-hadoop
Tags: linear-algebra, big-data, decomposition, cloud-computing, machine-learning

Diagnosing Performance of L-Shaped Method
http://www.or-exchange.com/questions/5712/diagnosing-performance-of-l-shaped-method
<p>I have a two-stage stochastic program with relatively complete recourse, binary first stage and continuous second stage. I have implemented the L-shaped method to try to solve large instances, but performance is poor even on very small instances that I can solve exactly (hundreds of iterations without convergence). For example, on one small instance, after 25 iterations the lower bound is about 3% below the optimum and the upper bound about 4% above it, with only tiny improvements after that. This is typical of the instances I am trying to solve.</p>
<p>There is a lot of work on enhancements to Benders decomposition, but I am trying to understand what might actually be effective, rather than blindly implementing some of the more recent methods and praying, before I try something else entirely. I have found that people often cite low-density cuts as a reason for poor convergence. I am not sure I understand the density of an optimality cut correctly: can I define the density of a cut as the percentage of master-problem variables with a non-zero component in the cut? All of the cuts I am generating have non-zero components for 75-80% of the master-problem variables.</p>
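Under that definition, density is just the fraction of master-problem variables whose coefficient in the cut is numerically non-zero. A minimal, solver-independent sketch (the tolerance value is an assumption; adjust to your solver's feasibility tolerance):

```python
def cut_density(coefficients, tol=1e-9):
    """Density of an optimality cut: fraction of master-problem variables
    whose coefficient in the cut is non-zero (beyond a small tolerance)."""
    nonzero = sum(1 for c in coefficients if abs(c) > tol)
    return nonzero / len(coefficients)


# A cut touching 3 of 4 master variables has density 0.75; the tiny 1e-12
# coefficient is treated as numerically zero.
print(cut_density([2.0, -0.5, 1e-12, 4.0]))
```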
<p>Any suggestions on how to "diagnose" performance and proceed?</p>
orDude, Fri, 15 Jun 2012 15:17:35 -0400
http://www.or-exchange.com/questions/5712/diagnosing-performance-of-l-shaped-method
Tags: benders-decomposition, integer-programming, decomposition

Dantzig-Wolfe decomposition: which master problems are not set covering?
http://www.or-exchange.com/questions/4462/dantzig-wolfe-decomposition-which-master-problems-are-not-set-covering
<p>Many (mixed) integer programming formulations turn into a set covering (or set partitioning, set packing) type of model (possibly with additional side constraints) when a Dantzig-Wolfe decomposition is applied.</p>
<p>Examples are bin packing, generalized assignment, vehicle routing, crew scheduling, coloring, p-median, etc.</p>
<p>I wonder whether you have examples where this is different. More precisely, my question is: can you please name (and possibly describe) a problem for which a "natural" Dantzig-Wolfe decomposition does not result in a set covering/partitioning/packing type of model?</p>
<blockquote>
<p><strong>edit:</strong> I am not looking for artificial decompositions, but "useful" ones which may help solve the problem. The background to my question is that we are working on "automatic DW decomposition", and I wonder what kinds of problems are out there.</p>
</blockquote>
<p>An example is cutting stock, which is very similar to bin packing: the master problem is of "covering type", but it is not a set covering problem (as column vectors and right-hand sides may be non-binary). If in doubt, feel free to post all the models to which you have ever applied a Dantzig-Wolfe decomposition.</p>
<p>Thanks!</p>
Marco Luebbecke, Wed, 11 Jan 2012 10:41:15 -0500
http://www.or-exchange.com/questions/4462/dantzig-wolfe-decomposition-which-master-problems-are-not-set-covering
Tags: dantzig-wolfe, mip, set-cover, decomposition, mathematical-modeling

When decomposing problems via lagrangean relaxation, do I need to solve each subproblem separately?
http://www.or-exchange.com/questions/4148/when-decomposing-problems-via-lagrangean-relaxation-do-i-need-to-solve-each-subproblem-separately
<p>I have a MIP problem decomposed into two subproblems by Lagrangian relaxation. One of them is a MIP, the other completely continuous.</p>
<p>I'm going to solve the main problem with a branch-and-bound algorithm coded in C, using the CPLEX solver on the subproblems to compute a dual bound; the question is solver-independent, though.</p>
<p>I have two ways to implement the dual-bound computation: (I) creating and solving one model object for each subproblem, or (II) creating just one model object for both and solving it at once.</p>
<p>In this project (II) comes naturally; in another project I preferred (I). Can this choice, in general, have any impact on the efficiency of the solver, apart from the chance of solving the two subproblems in parallel?</p>
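For intuition on why (I) and (II) give the same bound when the two subproblems share no variables or constraints after relaxation: a separable objective minimized over a product of feasible sets splits into independent minimizations. A toy illustration with made-up objectives over small finite feasible sets (nothing here is CPLEX-specific):

```python
from itertools import product

X = [0, 1, 2, 3]                # toy feasible set of subproblem 1
Y = [0, 1, 2]                   # toy feasible set of subproblem 2
f = lambda x: (x - 2) ** 2 + 1  # subproblem 1 objective (min is f(2) = 1)
g = lambda y: 3 * y + 5         # subproblem 2 objective (min is g(0) = 5)

# Option (I): solve each subproblem separately and add the optima.
separate = min(f(x) for x in X) + min(g(y) for y in Y)

# Option (II): one combined model over the joint feasible region.
joint = min(f(x) + g(y) for x, y in product(X, Y))

assert separate == joint  # same dual bound either way (both are 6 here)
```

Whether a solver works the combined model as efficiently as two small ones is a separate, implementation-dependent question, which is what the post asks about.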
<p>TIA</p>
<p>Andrea T.</p>
Andrea T, Mon, 05 Dec 2011 14:17:32 -0500
http://www.or-exchange.com/questions/4148/when-decomposing-problems-via-lagrangean-relaxation-do-i-need-to-solve-each-subproblem-separately
Tags: mip, branch-and-bound, decomposition, lagrangian-relaxation