I am using column generation to solve a linear programming problem with many variables. The problem is simply minimizing \(\sum_i c_i x_i\) subject to some constraints, where the variables satisfy \(x_i \leq 1\). I solve the pricing subproblem using the dual variables \(\Pi\) from the master problem, minimizing the reduced cost \(c(y) - \Pi y\), and add \(y\) as a new column, then solve the master problem again. I repeat this process until the objective of the subproblem is \(\geq 0\). But I have noticed that sometimes the solution to the master problem, after adding the generated column, does not improve the objective function. Is this possible, or am I doing something wrong? Thank you for your answers.
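For reference, here is a minimal sketch of the loop described above, on a toy cutting-stock LP rather than the asker's actual model. The data (roll width, piece sizes, demands), the brute-force pricing, and the use of `scipy.optimize.linprog` (HiGHS) for the restricted master are all illustrative assumptions, not details from the question.

```python
# Toy column-generation loop (cutting stock), illustrating the scheme in
# the question: solve the restricted master, price out a new column with
# the duals, stop when the minimum reduced cost is >= 0.
from itertools import product
import numpy as np
from scipy.optimize import linprog

W = 10                     # roll width (illustrative data)
sizes = [3, 4, 5]          # piece sizes
demand = np.array([4.0, 3.0, 2.0])

# Initial columns: one single-piece pattern per piece type
columns = [np.array([(W // s) if i == j else 0 for i in range(len(sizes))])
           for j, s in enumerate(sizes)]

for _ in range(20):
    A = np.column_stack(columns)
    # Restricted master: min sum_j x_j  s.t.  A x >= demand, x >= 0
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=(0, None), method="highs")
    pi = -res.ineqlin.marginals          # duals of the >= constraints
    # Pricing: maximize pi . a over feasible patterns (brute force here)
    best_val, best_a = -1.0, None
    for a in product(*(range(W // s + 1) for s in sizes)):
        if sum(ai * s for ai, s in zip(a, sizes)) <= W:
            v = float(np.dot(pi, a))
            if v > best_val:
                best_val, best_a = v, np.array(a)
    reduced_cost = 1.0 - best_val        # each pattern column costs 1
    if reduced_cost >= -1e-9:            # no improving column: stop
        break
    columns.append(best_a)

print(round(res.fun, 4))                 # final master LP objective
```

The stopping test uses a small negative tolerance rather than an exact `>= 0` check, since the duals come back as floating point.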
You can end up doing a degenerate pivot, with the new column entering the basis at value 0. The dual values will change, but the primal solution (and hence the objective value) stays the same. This is not uncommon.

answered 25 Nov '13, 08:03 Mike Trick ♦♦

For a toy example, I initialize my columns with the optimal ones (I know the corresponding \(x_i\) are nonzero in the solution). Even then, column generation produces new columns that do not improve the objective. In the end I end up enumerating many possible columns.
(25 Nov '13, 08:10)
nimcap
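One cheap way to see whether Mike's explanation applies is to track the master objective across iterations and flag iterations where it did not move. This tiny helper is just an illustration; the sample values are made up, not from the question.

```python
# Flag degenerate column-generation iterations: a column was added but
# the master objective did not change (the column entered at value 0).
def is_degenerate_step(prev_obj, new_obj, tol=1e-9):
    """True when re-solving the master left the objective unchanged."""
    return abs(prev_obj - new_obj) <= tol

# Hypothetical objective values from two consecutive master solves:
print(is_degenerate_step(3.8333, 3.8333))  # no progress: degenerate
print(is_degenerate_step(3.8333, 3.5))     # objective improved
```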
Try adding small random perturbations to the right-hand sides of the constraints before solving. If that reduces the number of columns generated, and eliminates (or at least reduces) the columns that fail to improve the solution, you have (as Mike said) a degeneracy problem.
(25 Nov '13, 22:14)
Paul Rubin ♦♦
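A sketch of that perturbation, assuming the constraints are covering constraints with a right-hand-side vector `demand` (a hypothetical name; any RHS vector works the same way):

```python
# Sketch: tiny random RHS perturbation to break degeneracy, per the
# suggestion above. `demand` is an illustrative requirements vector.
import numpy as np

rng = np.random.default_rng(0)
demand = np.array([4.0, 3.0, 2.0])
eps = 1e-4
perturbed = demand + eps * rng.random(demand.size)  # tiny positive bumps
# Run column generation against `perturbed` instead of `demand`; once
# the column pool stabilizes, re-solve the master once with the original
# right-hand sides to recover a solution to the true problem.
print(perturbed)
```

The perturbation should be small enough not to change which columns are attractive by much, but large enough to separate the many alternative optimal bases that cause degenerate pivots.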
Are these degenerate columns supposed to be generated even after I have arrived at an optimum? I was not expecting the pricing subproblem to find any columns with negative reduced cost at that point.
(13 Jul '15, 13:11)
nimcap
