Dear Friends, I have an optimization problem in which some of the equality constraints contain a parameter with uncertainty (f(x) = h, where h is an uncertain parameter). My model is nonlinear. How can I solve this model? Could you also point me to a reference? asked 28 Dec '12, 07:39 Reza Salehiz... 
Usually, the term "robust optimization" involves constructing a model that minimizes the potential damage from non-ideal values of the uncertain parameters. That might mean maximizing over your decision variables (assuming you want to maximize) the minimum value of the objective over the parameter \(h\), or it might involve including a term in the objective to be minimized representing the variance or the downside risk associated with \(h\). If you have a two-stage or multistage problem, where you make a decision and then have a chance to respond to the outcome, you can construct a scenario-based model (as @Gilead suggests). Once you have the model, you can apply the appropriate methods. answered 28 Dec '12, 12:35 Matthew Salt... ♦ 
Here's my crude way of dealing with an uncertain parameter in a nonconvex optimization problem. It may not be the best way (and I'd be very interested in better approaches if there are any; please drop me a comment if you know of one!). Suppose I have a problem that looks like this: $$ \begin{align} & \min \phi(x)\\ \text{s.t. } & f(x,h) = 0\\ & g(x,h) \leq 0\\ & h \in [a,b] \end{align} $$ I discretize the uncertain parameter by breaking it up into \(n+1\) knots: $$ \begin{align} \Delta h &= (b-a)/n\\ h_i &= a + i\cdot \Delta h, \quad i=0,\ldots,n \end{align} $$ If there is more than one uncertain parameter, they are all discretized, and the resulting index set is simply the Cartesian product of the individual parameters' index sets. I then write the following multi-scenario problem (where the \(h_i\) are constants): $$ \begin{align} & \min \phi(x)\\ \text{s.t. } & f_i(x,h_i) = 0, \quad i=0,\ldots,n\\ & g_i(x,h_i) \leq 0, \quad i=0,\ldots,n \end{align} $$ Depending on the value of \(n\), the problem can explode into a huge one. But the Jacobian is sparse and has a special block structure that can be exploited. Although I use this approach, it is somewhat unsatisfactory: it is difficult to know how fine the discretization needs to be to get a correct answer, and how to balance that against the problem size. answered 28 Dec '12, 16:59 Gilead ♦ 1
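This multi-scenario formulation can be sketched in a few lines with SciPy. The toy objective, constraint, and interval below are illustrative assumptions of mine (not from the thread); an equality scenario \(f_i(x, h_i) = 0\) would be added the same way with `"type": "eq"`.

```python
# Sketch of the scenario-discretization approach on a toy problem:
#   minimize x^2   s.t.   g(x, h) = h - x <= 0   for all h in [a, b].
# A single x must satisfy the constraint at every knot h_i, so the
# robust optimum sits at the worst case, x = b.
import numpy as np
from scipy.optimize import minimize

a, b, n = 0.0, 1.0, 10                        # uncertainty interval, knot count
knots = a + np.arange(n + 1) * (b - a) / n    # h_i = a + i * dh

phi = lambda x: x[0] ** 2                     # objective (independent of h)

# Replicate g(x, h_i) <= 0 once per knot; SciPy expects fun(x) >= 0 form,
# so h - x <= 0 becomes x - h >= 0.  (h=h pins each knot in its closure.)
cons = [{"type": "ineq", "fun": (lambda x, h=h: x[0] - h)} for h in knots]

res = minimize(phi, [0.5], constraints=cons, method="SLSQP")
print(res.x)   # approximately [1.0]: feasible for every scenario on [a, b]
```

The replicated constraints all share the single decision vector, which is exactly the linking structure described in the comments below: the solver either finds one \(x\) feasible for every knot, or reports infeasibility.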
One approach might be to discretize coarsely at first, then make the intervals smaller near the optimum and re-solve. Warm-starting by extending the current solution to the new grid might help speed up the process. That roughly follows the idea of multigrid methods for PDEs.
(28 Dec '12, 18:42)
Matthew Salt... ♦
Thank you. But the resulting problem may be infeasible when all scenarios are considered simultaneously.
(28 Dec '12, 20:30)
Reza Salehiz...
1
@Reza: consider the replication of the constraints \(f\) and \(g\): each set of constraints is independent apart from the linking variables \(x\). Essentially, we are searching for a single \(x\) that is robust to all realizations of \(h\). The reasoning is: if the problem is indeed infeasible, then we can deduce that there does not exist a solution that is robust (discounting the discretization error) to the given uncertainty interval of \(h\). That is to say, you may be able to find values of \(x\) that are feasible for certain values of \(h\), but not for all values of \(h\).
(28 Dec '12, 21:02)
Gilead ♦
But I should hasten to add that this is only one approach. You may want a solution that is robust to some worst-case scenario (resulting in a max-min problem), or to some probability distribution of the uncertain parameter.
(28 Dec '12, 21:14)
Gilead ♦
@Matthew: Thanks, that's a good idea. I need to think about this a bit more, because the exact value of \(h\) (which we assume to be exogenous to the problem) doesn't actually appear in the objective: we are searching for a single \(x\) that attains the optimum and is feasible for all values (or knots) of \(h\) (this gives a necessarily conservative \(x\)). Another application of your idea might be to make the intervals smaller for values of \(h\) with the highest probability. For a normally distributed \(h\), we might put more knots around the mean, for instance.
(28 Dec '12, 21:52)
Gilead ♦
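The idea of clustering knots where a normally distributed \(h\) has most of its probability mass can be sketched by spacing the knots at equal quantiles rather than equal intervals. The mean, standard deviation, and quantile range below are illustrative assumptions.

```python
# Non-uniform knots for a normally distributed uncertain parameter h:
# place knots at equally spaced quantiles, so they cluster near the mean.
import numpy as np
from scipy.stats import norm

mu, sigma, n = 0.5, 0.1, 10
# The inverse CDF (ppf) maps a uniform grid of probabilities to h-values
# whose spacing shrinks where the density is highest.  The 5%-95% range
# truncates the tails; widen it for a more conservative grid.
probs = np.linspace(0.05, 0.95, n + 1)
knots = norm.ppf(probs, loc=mu, scale=sigma)
print(np.round(np.diff(knots), 3))   # spacing is smallest around mu = 0.5
```

These knots would then replace the uniform \(h_i\) grid in the multi-scenario problem above.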
1
@Matthew: I re-read your answer, and it's becoming clearer to me that perhaps what I need to be doing is solving the min-max problem to find the worst-case \(h\) through this discretization-and-refinement method (assuming a priori that the problem is feasible for all realizations of \(h\)). I'll think about this a bit more. Thanks!
(28 Dec '12, 22:08)
Gilead ♦
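One common way to pose the discretized min-max problem mentioned above is the epigraph reformulation: \(\min_{x,t}\, t\) subject to \(\phi(x, h_i) \le t\) for every knot. A toy sketch, with an \(h\)-dependent cost of my own choosing rather than anything from the thread:

```python
# Epigraph sketch of the discretized min-max problem:
#   min_x max_i phi(x, h_i)   ->   min_{x,t} t  s.t.  phi(x, h_i) <= t.
import numpy as np
from scipy.optimize import minimize

knots = np.linspace(0.0, 1.0, 11)        # discretized h in [0, 1]
phi = lambda x, h: (x - h) ** 2          # toy h-dependent cost

# Decision vector z = [x, t]; one inequality t - phi(x, h_i) >= 0 per knot.
cons = [{"type": "ineq", "fun": (lambda z, h=h: z[1] - phi(z[0], h))}
        for h in knots]

res = minimize(lambda z: z[1], [0.0, 1.0], constraints=cons, method="SLSQP")
print(np.round(res.x, 3))   # x near 0.5, worst-case cost t near 0.25
```

The active constraint at the solution identifies the worst-case knot; refining the grid around that knot is one way to apply the refinement idea from earlier in this thread.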

If your model is convex (and then some; there are additional requirements), look into conic programming. Otherwise, look into multi-scenario optimization.