The concept of reduced cost appears in virtually every work on linear programming. I have a feeling, however, that the concept itself goes well beyond the boundaries of linear optimisation. For instance, there is an obvious similarity between the reduced cost of a variable in linear programming and the cost of a move in local search (the simplex algorithm itself is a form of local search, after all). I was wondering whether there exists a general theory of reduced costs that transcends the boundaries of a specific optimisation technique. Maybe this is a trivial subject, but I haven't encountered any work that studies reduced costs under a unifying theory. Can anyone provide pointers to the literature, if available? asked 22 Sep '16, 20:23 Tommaso Urli
(1/2) Just a comment. The name "reduced cost" comes from the fact that you subtract "something" from the cost when evaluating the "promise" of a variable (or, in local search, a move) for improvement. This particular computation is inherently tied to the linear equation systems that describe the mechanics of the simplex method. The reduced cost gives a per-unit improvement estimate (it is not clear a priori whether that improvement is realizable).
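To make the "subtract something" concrete, here is a minimal sketch of that computation with NumPy. The LP data and basis are hypothetical toy values; for a basis B the reduced cost of column j is c_j − c_B·B⁻¹A_j, computed via the simplex multipliers y solving Bᵀy = c_B:

```python
import numpy as np

# Hypothetical toy LP in standard form:  min c.x  s.t.  Ax = b, x >= 0
#   min -3*x1 - 2*x2   s.t.   x1 + x2 + s1 = 4,   x1 + s2 = 2
A = np.array([[1., 1., 1., 0.],
              [1., 0., 0., 1.]])
b = np.array([4., 2.])
c = np.array([-3., -2., 0., 0.])

def reduced_costs(A, c, basis):
    """Reduced costs c_j - c_B . B^{-1} A_j for every column j."""
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])   # simplex multipliers (dual values)
    return c - A.T @ y                   # the "reduced" costs

basis = [0, 2]                           # x1 and s1 basic (a feasible basis here)
rc = reduced_costs(A, c, basis)
print(rc)            # -> [ 0. -2.  0.  3.]; negative entry = improving variable
print(np.argmin(rc)) # -> 1; Dantzig's rule: enter the most negative reduced cost
```

Basic variables always come out with reduced cost zero, and the −2 on x2 is exactly the per-unit improvement estimate the comment describes.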
(2/2) (split in two, was too long) In a local search move, there would probably never be a partial realization of a move, only "move or don't move". Whenever you compute the improvement promised by each move and decide to move greedily, you are essentially applying Dantzig's pivot rule in local search. I don't know whether anything more general than this needs to be considered.
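The analogy can be sketched directly: a best-improvement local search that evaluates the "cost" of every move and takes the most improving one mirrors Dantzig's most-negative-reduced-cost rule. The objective and neighbourhood below are made-up toy examples:

```python
def f(x):
    """Toy objective to minimise over the integer lattice."""
    return (x[0] - 3) ** 2 + (x[1] + 1) ** 2

def neighbors(x):
    """Moves: change one coordinate by +/-1 (the 'pivot' candidates)."""
    for i in range(len(x)):
        for d in (-1, 1):
            y = list(x)
            y[i] += d
            yield tuple(y)

def greedy_local_search(x):
    while True:
        # Cost of a move = f(y) - f(x); pick the most improving neighbour,
        # the local-search analogue of Dantzig's pivot rule.
        best = min(neighbors(x), key=f)
        if f(best) >= f(x):      # no improving move: local optimum
            return x
        x = best                 # "move or don't move" -- no partial steps

print(greedy_local_search((0, 0)))  # -> (3, -1), the minimiser of f
```

Unlike a simplex pivot, the move is all-or-nothing: there is no step length, which is exactly the "no partial realization" point above.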