This question is fairly simple but, for me, not quite obvious: when, if ever, should I use an interior point method to solve a linear programming problem? Do guidelines exist for when interior point methods are appropriate, for example special structures where such a method would (probably) work better than dual simplex? Or is it purely a question of experimentation? It seems that at least CPLEX prefers dual simplex, as I have never seen it report using the barrier algorithm when CPLEX chooses the algorithm automatically. References to the literature are very welcome. Thanks. asked 03 Jun '14, 03:22 Sune
Nowadays, in my experience, the interior point method (IPM) is on average the fastest pure LP algorithm for larger problems (for small problems it does not matter). One reason is that the linear algebra in IPMs can be threaded quite efficiently (compared to simplex). Another reason is that most computers now have enough memory available to run IPMs fast. Of course, for a specific problem of significant size it is always possible that the dual simplex, the primal simplex (less often), or a network simplex (only for very "networky" problems) is the fastest. On network problems especially, IPMs seem to be worse than a simplex algorithm.

Apart from runtime, another factor is the specific solution an LP solver provides for degenerate problems (and essentially all problems are degenerate). Unless it is run with a crossover step, an IPM typically does not produce a basic solution. Basic solutions are extreme solutions: they correspond to vertices of the primal and dual polyhedra. Thus they have many variables at their bounds, which most of the time means these variables are 0 and the solution has few nonzero values. IPM solutions typically lie in the middle of the optimal face and thus tend to have variables strictly between their bounds, typically resulting in few 0 values. Depending on what you want to do with the solution of the LP, it might be better to have a basic solution or it might be better to have an IPM solution. It all depends on the application.

Another huge difference between IPMs and simplex algorithms is warm starting. A basic solution can be used to warm-start a simplex algorithm to solve related problems very quickly. That is why mixed integer programming (MIP) solvers need a very good simplex implementation, and it is also why, if an IPM is used for the MIP root node, it more or less has to be run with crossover to get a basis for warm starts. Because of warm starting, it basically (no pun intended) never makes sense to use an IPM to solve the nodes in the branch-and-bound tree of a MIP solver.
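To see the method choice in practice, here is a minimal sketch using SciPy's `linprog` (assumed available, SciPy >= 1.6, which wraps the HiGHS solvers), solving the same small LP once with dual simplex (`"highs-ds"`) and once with the interior point method (`"highs-ipm"`):

```python
from scipy.optimize import linprog

# minimize -x - 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
c = [-1, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]

res_ds = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")    # dual simplex
res_ipm = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")  # interior point

# Both methods reach the same optimal objective value (-5 at x=3, y=1).
# On degenerate problems the returned points may differ: simplex returns
# a vertex, while a pure IPM without crossover returns a point in the
# relative interior of the optimal face.
print(res_ds.fun, res_ipm.fun)
```

Note that HiGHS runs crossover after its IPM by default, so even the `"highs-ipm"` result may come back as a basic solution.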
I don't have references; this all seems like folklore to me, but you might find things in the usual textbooks, although only newer books will mention the "threadability" of IPMs. answered 03 Jun '14, 09:28 Philipp Chri... 2
My colleague pointed out this paper that might be useful: Pivot versus Interior Point Methods: Pros and Cons, T. Illes and T. Terlaky http://www.cas.mcmaster.ca/~oplab/publication/report/20015.pdf
(03 Jun '14, 15:15)
Philipp Chri...

Not much, but Bixby often refers to comparisons between the different algorithms, like here. answered 03 Jun '14, 13:17 Marco Luebbecke ♦ 3
BTW: Some of these numbers regarding the importance of each element of MIP (presolve, cuts, etc.) have later been questioned due to a too-small and biased test set (not deliberately so, of course).
(03 Jun '14, 13:52)
Bo Jensen ♦
Where was this disproven? References, please.
(03 Jun '14, 15:53)
Greg Glockner
3
@Greg I was thinking of the numbers on the importance of cuts, presolve, etc. displayed on page 19, which I assume are from the paper by Bixby from years back, right (I might be wrong)? Recently Achterberg and Wunderling had a paper and a talk which put the importance of cuts at 3.3x (as I recall). They also had some comments during the talk about the test set of the previously mentioned paper.
(03 Jun '14, 16:03)
Bo Jensen ♦
3
The paper Bo Jensen refers to is: Mixed Integer Programming: Analyzing 12 Years of Progress, Tobias Achterberg and Roland Wunderling http://rd.springer.com/chapter/10.1007/978-3-642-38189-8_18 I would not say that the results were disproven, but the paper points out that the previous testing methodology at CPLEX for MIP was flawed. But none of this has to do with IPM vs. simplex or the original question.
(03 Jun '14, 16:36)
Philipp Chri...
Thanks Philipp, I agree it has nothing to do with simplex vs. IPM. My comment was meant to highlight the issue about testing; that was not the best wording, so I have changed the comment a bit.
(03 Jun '14, 16:44)
Bo Jensen ♦
The Achterberg/Wunderling chapter is not accessible, but at least there are slides.
(03 Jun '14, 16:49)
Marco Luebbecke ♦

So to recap: if the linear program is huge, IPMs are probably faster than (dual) simplex (except for highly structured problems like network flow problems, which have specialized simplex algorithms). However, if I need to re-solve the LP for some reason (e.g. ILPs), (dual) simplex is probably faster than IPMs, since I can "warm start" from the optimal basis of the last solved problem (an IPM iteration is \(\mathcal{O}(n^3)\) and a simplex iteration is \(\mathcal{O}(n^2)\), implying that if \(n\) is large, many pivots are cheaper than one IPM iteration). To generate cuts, an extreme point is needed, so a crossover step is required if the root node of an ILP is solved using an IPM. This crossover (often) "eats" the time saved by using an IPM instead of (dual) simplex.
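The iteration-cost argument above can be made concrete with back-of-envelope arithmetic (an illustration of the asymptotic claim, not a benchmark; real implementations exploit sparsity and can deviate a lot from these dense-case counts):

```python
# Dense-case operation counts: one IPM iteration costs on the order of n^3
# operations, one simplex pivot on the order of n^2, so roughly n pivots
# fit into the budget of a single IPM iteration.
n = 10_000                       # hypothetical number of variables
ipm_iter_cost = n ** 3           # ~O(n^3) per interior point iteration
simplex_pivot_cost = n ** 2      # ~O(n^2) per simplex pivot
pivots_per_ipm_iter = ipm_iter_cost // simplex_pivot_cost
print(pivots_per_ipm_iter)       # prints 10000: ~n pivots per IPM iteration
```

Since a warm-started simplex run on a slightly modified LP often needs far fewer than \(n\) pivots, this is the intuition behind preferring simplex for re-solves.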
In case of degeneracy, correct sensitivity analysis can be performed using the "maximally complementary solution given by IPMs".
By the way, the quote in the last paragraph is from http://www.cas.mcmaster.ca/~oplab/publication/report/20015.pdf provided by @Philipp Christophel.
Sounds about right, but don't forget that IPMs can make good use of the threads in modern computers, while pivoting algorithms struggle to get significant speedup from threading, although every once in a while somebody claims something different. I would not get hung up on the O-notation comparison; in a modern implementation it can happen that a lot less, or a lot more, than on the order of \(n^2\) operations are done in a simplex iteration.
"although every once in a while somebody claims something different"  and they all sort of fail :)