Questions Tagged With interior-point-methods
http://www.or-exchange.com/tags/interior-point-methods/?type=rss
Tue, 03 Jun 2014 03:22:41 -0400

When should I use interior point methods?
http://www.or-exchange.com/questions/9720/when-should-i-use-interior-point-methods
<p>This question is fairly simple, but the answer is not obvious to me. I am wondering when, if ever, I should use an interior point method to solve a linear programming problem. Are there guidelines for when interior point methods are appropriate - for example, special problem structures where such a method would (probably) work better than dual simplex? Or is it purely a question of experimentation? It seems that CPLEX, at least, has a preference for dual simplex: I have never seen it report using the barrier algorithm when CPLEX chooses the algorithm automatically. References to the literature are very welcome.
Thanks.</p>
Sune, Tue, 03 Jun 2014 03:22:41 -0400
Tags: linear-programming, interior-point-methods

Matrix in Mehrotra format
http://www.or-exchange.com/questions/7421/matrix-in-mehrotra-format
<p>I'm working with the PCx code and can't figure out how its Mehrotra-style matrix format works. The code describing the data structure is:</p>
<pre><code>typedef struct {
int NumRows, NumCols, Nonzeros;
int *pBeginRow;
int *pEndRow;
int *Row;
double *Value;
} sparseMatrix;
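/* Sketch of how I read this layout (an assumption to check against the
 * PCx sources, not something stated above): the matrix is stored
 * column-wise with Fortran-style 1-based indices. For column j
 * (1-based), entries k = pBeginRow[j-1]-1 .. pEndRow[j-1]-1 of Row and
 * Value hold the 1-based row indices and values of that column's
 * nonzeros. Under that assumption, entry (i, j) can be looked up as: */
static double get_entry(int i, int j,            /* 1-based row/column */
                        const int *pBeginRow, const int *pEndRow,
                        const int *Row, const double *Value)
{
    for (int k = pBeginRow[j-1] - 1; k <= pEndRow[j-1] - 1; k++)
        if (Row[k] == i)
            return Value[k];
    return 0.0;                                  /* structural zero */
}
/* For example, the 3x2 matrix [[1,0],[2,3],[0,4]] would be stored as
 * pBeginRow = {1,3}, pEndRow = {2,4}, Row = {1,2,2,3},
 * Value = {1,2,3,4}. */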
</code></pre>
raniere, Tue, 12 Feb 2013 07:00:17 -0500
Tags: linear-programming, optimization-software, pcx, interior-point-methods, matrix

Potential Reduction and Primal Path following method
http://www.or-exchange.com/questions/5218/potential-reduction-and-primal-path-following-method
<p>In both the potential reduction and primal path-following interior point methods for linear programming, a barrier function is constructed that contains the term \(-\sum_j \log x_j\), where the \(x_j\) are the variables. This keeps the variables away from 0 and thus keeps the current solution feasible.</p>
<p>However, in potential reduction, the direction in which to move the current solution is found by solving a linear program containing the single inequality constraint \(\|\mathbf{X}^{-1}\mathbf{d}\| \leq \eta\), where \(\eta < 1\), \(\mathbf{X}\) is a diagonal matrix with \(\mathbf{x}\) on its diagonal, and \(\mathbf{d}\) is the direction. This constraint ensures that the new solution \(\mathbf{x}+\mathbf{d}\) remains \(> 0\) and thus feasible.</p>
<p>However, in the primal path-following method, one can just solve the Lagrangian dual of the barrier problem, and the new solution is guaranteed to be feasible - i.e. the property \(\|\mathbf{X}^{-1}\mathbf{d}\| \leq \eta\) is guaranteed to be satisfied.</p>
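<p>(To fix notation - this is the standard form I have in mind, which may differ slightly between textbooks: the potential function is \(\phi(\mathbf{x}) = q \log(\mathbf{s}'\mathbf{x}) - \sum_j \log x_j\) for some parameter \(q\), while the primal path-following method minimizes the barrier function \(B_\mu(\mathbf{x}) = \mathbf{c}'\mathbf{x} - \mu \sum_j \log x_j\) for a barrier parameter \(\mu > 0\).)</p>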
<p>Is there some good intuition as to why primal path-following can get away with just solving the Lagrangian dual (and thus avoid dealing with inequalities) while potential reduction cannot? The best explanation I can think of is that the potential function in the potential reduction algorithm contains the term \(\log \mathbf{s}'\mathbf{x}\), where \(\mathbf{s}'\mathbf{x}\) is the duality gap. This term strongly pulls \(\mathbf{x}+\mathbf{d}\) toward a potentially infeasible region. The barrier function for primal path-following contains only the term \(\mathbf{c}'\mathbf{x}\), which does not exert as strong a pull (it will not, for instance, go to \(-\infty\) when \(\mathbf{c}'\mathbf{x}\) goes to 0, unlike \(\log \mathbf{s}'\mathbf{x}\), which will). However, I don't know how to formalize this.</p>
Sid, Thu, 12 Apr 2012 20:01:17 -0400
Tags: linear-programming, interior-point-methods

Why does ipopt need so many iterations when starting from the optimal solution?
http://www.or-exchange.com/questions/1268/why-ipopt-needs-so-many-iterations-when-starting-from-the-optimal-solution
<p>I'm doing some benchmarking of ipopt against conopt (from AMPL), and strangely, when I call solve two consecutive times (the second solve starting from the optimal solution), ipopt needs 235 iterations to rediscover this best solution while conopt needs only 4.</p>
<p>Do you have an idea how to avoid all these iterations when ipopt starts at, or very close to, the optimal solution?</p>
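<p>From the Ipopt options documentation it looks like a warm start must be requested explicitly - by default Ipopt resets the barrier parameter and pushes the starting point away from the bounds. Something along these lines (untested; option names taken from the Ipopt manual) might be what I'm missing:</p>
<pre><code>option ipopt_options "warm_start_init_point=yes mu_init=1e-9";
solve;
</code></pre>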
<p>Thank you in advance,<br>
Pierre</p>
<p>A fragment of the output when solving again:</p>
<pre><code>iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 -1.6889364e+03 7.06e+01 1.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 -1.8076240e+03 7.06e+01 1.06e+02 -1.0 1.12e+05 - 1.12e-01 1.06e-03f 1
2 -1.7868621e+03 5.70e+01 5.00e+01 -1.0 6.42e+02 - 1.13e-01 1.92e-01h 1
3 -1.7773846e+03 5.15e+01 1.43e+02 -1.0 6.05e+02 - 2.62e-01 9.75e-02h 1
4 -1.7346472e+03 2.86e+01 2.94e+01 -1.0 6.40e+02 - 3.57e-01 4.44e-01h 1
5 -1.7138111e+03 1.80e+01 2.96e+01 -1.0 6.71e+02 - 4.02e-01 3.72e-01h 1
6 -1.6979992e+03 1.06e+01 9.60e+01 -1.0 7.65e+02 - 7.76e-01 4.09e-01h 1
7 -1.6771948e+03 2.75e+00 6.48e+01 -1.0 1.31e+03 - 2.00e-01 7.42e-01h 1
8 -1.6672749e+03 4.10e-01 1.06e+02 -1.0 9.65e+02 - 7.23e-01 9.72e-01h 1
9 -1.6663897e+03 3.55e-01 7.49e+03 -1.0 1.85e+03 - 4.22e-01 1.49e-01f 1
...
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
230 -1.6889365e+03 4.00e-09 1.38e-07 -8.6 6.52e-01 -9.6 1.00e+00 1.00e+00H 1
231 -1.6889365e+03 4.00e-09 5.57e+01 -8.6 3.22e+00 - 1.00e+00 1.56e-02h 7
232 -1.6889365e+03 4.00e-09 5.65e+01 -8.6 4.01e+00 - 1.00e+00 1.19e-07h 24
233 -1.6889365e+03 4.00e-09 2.07e-07 -8.6 3.11e+00 - 1.00e+00 1.00e+00H 1
234 -1.6889365e+03 4.00e-09 5.63e+01 -8.6 5.33e+00 - 1.00e+00 3.91e-03h 9
235 -1.6889365e+03 4.00e-09 5.33e-08 -8.6 9.49e-01 - 1.00e+00 1.00e+00s 22
</code></pre>
pierre schaus, Wed, 09 Mar 2011 15:43:42 -0500
Tags: nonlinear-optimization, interior-point-methods, ipopt