There have been a couple of threads related to sensitivity analysis (SA) lately, such as Paul's blog post: http://orinanobworld.blogspot.com/2010/09/dualvaluesforprimalvariablebounds.html Without making a long story of the theory: in short, the standard textbook SA is misleading if the problem is degenerate (which, in practice, almost all problems are). You can instead do more advanced and computationally heavy SA based on the optimal partition, which does not assume nondegeneracy; it is explained here: http://www.mosek.com/fileadmin/products/5_0/tools/doc/html/tools/node016.html Another source is: C. Roos, T. Terlaky and J.-Ph. Vial. Theory and Algorithms for Linear Optimization: An Interior Point Approach. John Wiley and Sons, New York, 1997. Does anybody use this theory, or do you just stick with the old standard SA and avoid the pitfalls in other ways? asked 12 Sep '10, 07:34 Bo Jensen ♦ fbahr ♦ 
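To see what "misleading under degeneracy" looks like in practice, here is a minimal sketch (my own toy example, not from any of the linked sources), using SciPy's `linprog` with the HiGHS backend: at a degenerate vertex the shadow price is one-sided, so whatever single dual value the solver reports can match the objective's rate of change in at most one direction.

```python
from scipy.optimize import linprog

# Degenerate toy LP (min form): min -x1 - x2
# s.t. x1 <= 1, x2 <= 1, x1 + x2 <= 2.
# The optimum (1, 1) has three active constraints in two variables,
# so the vertex is degenerate and the dual solution is not unique.
c = [-1.0, -1.0]
A = [[1, 0], [0, 1], [1, 1]]

def solve(b3):
    return linprog(c, A_ub=A, b_ub=[1.0, 1.0, b3], method="highs")

base = solve(2.0)
reported = base.ineqlin.marginals[2]  # solver's shadow price for row 3

# Finite-difference check of the objective in each direction:
eps = 1e-3
right = (solve(2.0 + eps).fun - base.fun) / eps  # ~0: relaxing row 3 buys nothing
left = (base.fun - solve(2.0 - eps).fun) / eps   # ~-1: tightening row 3 costs
print(reported, right, left)
```

Any value in [-1, 0] is a legitimate dual for the third row here, so whichever single number a standard SA table reports, it is wrong in at least one direction.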
When I taught LP (which was quite a while ago), I started with conventional sensitivity analysis and pointed out the connection between degeneracy and "one-sided" dual prices. I also pointed out that the ranging tables would tell you when a dual price was one-sided and in which direction it worked (the corresponding RHS would be at either the upper or lower limit of its range), and that the analogous issue arises with reduced costs.

Back in those days, we were actually allowed to challenge the students, so I'd extend this to simultaneous changes in multiple right-hand sides or multiple bounds, which led to somewhat conservative estimates of how far the duals could be trusted. (I think this is a neglected aspect, as far as textbooks go. It's easy to formulate examples where you might be swapping one resource for another, not necessarily 1-for-1, or increasing a quota on one output while decreasing a quota on a complementary output.)

I didn't attempt to compute an interval of dual prices for one constraint, nor a range of linearity, as in the MOSEK link (thanks for that, by the way!). What I did do was point out that if you were interested in going beyond the (possibly conservative) limit of change where the dual price remained accurate, obtained from the sensitivity table, you could just nudge the RHS slightly beyond that limit and reoptimize, then repeat ad nauseam.

For simultaneous changes to multiple constraints, I recommended that students introduce a new variable (call it t) whose constraint coefficients represented the changes to the RHS, and add the constraint t = 0. Solving that produced the same optimal solution, but also a shadow price for the extra constraint (the marginal objective change per unit of perturbation) and a range for the RHS of the new constraint (how far t could be pushed before the shadow price changed).

Ultimately, though, I decided that sensitivity analysis might be a thing of the past. 
Other than possibly in disgustingly large models, it's probably easier (and reasonably cheap) to solve multiple scenarios to do what-if analysis. Dual information is still necessary for various decomposition methods, but there the issue is the marginal rate of change, not how far it holds. Recognizing when a shadow price is one-sided (and in which direction it works) might be useful in that context, but I'm intellectually lazy: I just pass the shadow prices I get to the subproblem and trust that, if they generate a new column or new constraint that doesn't help much, I'll get the correct shadow price in a later iteration. (Sorry for the Dickensian response.) answered 12 Sep '10, 15:16 Paul Rubin ♦♦

Agree, what-if analysis over multiple scenarios is becoming more important than SA, due to better algorithms and computing power.
(12 Sep '10, 15:30)
Bo Jensen ♦
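The scenario-sweep alternative discussed above is cheap to script. A minimal sketch (same hypothetical toy model as before, using SciPy's `linprog`): re-solve over a grid of RHS values and look at the slopes between scenarios, which reveal both the shadow price and where its range of linearity ends, without trusting a single SA table.

```python
from scipy.optimize import linprog

# Toy model: min -3*x1 - 5*x2
# s.t. x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= b3, with b3 swept over scenarios.
c = [-3.0, -5.0]
A = [[1, 0], [0, 2], [3, 2]]

objs = {}
for b3 in [14, 16, 18, 20, 22, 24, 26]:
    res = linprog(c, A_ub=A, b_ub=[4.0, 12.0, float(b3)], method="highs")
    objs[b3] = res.fun

# Slopes between consecutive scenarios: where the slope changes is
# where the shadow price of the third constraint stops being valid.
keys = list(objs)
slopes = [(objs[b] - objs[a]) / (b - a) for a, b in zip(keys[:-1], keys[1:])]
print(slopes)
```

For this model the slope stays at -1 up to b3 = 24 and then drops to 0, i.e. the dual of the third row is 1 (in max terms) on [12, 24] and 0 beyond, which the sweep shows directly.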
