Let the objective be to maximize the sum of f_i(x_i) (edit: subject to linear constraints), where all f_i are strictly increasing (possibly nonlinear) functions. Maximizing a convex function is hard in general, since a local maximum might not be a global one. However, for the special case described above, since the f_i are strictly increasing, from what I understand the local maximum should be the global one. If so, are there any special techniques that can optimize this objective (or perhaps a piecewise approximation of it) subject to linear inequality constraints quickly? Thanks.
asked 18 Sep '11, 19:00
Opt
Updated: if I understand correctly, your problem looks like this:
$$ \begin{align} \max_{x_i} \quad & J = \sum_{i} f_i(x_i) \\ \text{s.t.} \quad & A x \leq b \\ & x = [ x_1, x_2, \ldots, x_n ]^{T} \end{align} $$
where \(f_i\) are convex and strictly increasing. This is a nonconvex problem, because you are maximizing a linear combination of convex functions (equivalently, you are minimizing a linear combination of concave functions). I may be wrong, but I'm not sure the fact that the \(f_i\) are strictly increasing actually helps, because you will still have to determine the active set in the polytope defined by the constraint set. That said, interior-point (i.e. barrier) algorithms are likely to be polynomial-time with respect to the constraints (since they don't need to enumerate active sets) even in your nonconvex case. (They have been proven to be polynomial-time in the convex case.) You might want to try those. The best interior-point solver for nonconvex problems that I know of is IPOPT, which is free. However, being a local method, a global solution cannot be guaranteed. If you are interested in global optimality for the minimization of a concave objective over a convex set, here are a few papers I found by way of Google:
answered 18 Sep '11, 19:26
Gilead ♦

The problem you defined is virtually unconstrained (except for bounds on x_i). I'm asking whether there is a weakly/strongly polynomial-time algorithm when there are linear inequality constraints (forming a polytope) along with the objective function.
(18 Sep '11, 19:35)
Opt
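A minimal sketch of the kind of local solve suggested in the answer, assuming illustrative \(f_i\) (an exponential and a quadratic term, both convex and strictly increasing on \(x \geq 0\), invented here for the example) and using scipy's SLSQP as a stand-in for a local NLP solver such as IPOPT:

```python
# Local solve of max sum_i f_i(x_i) s.t. A x <= b, x >= 0.
# The f_i below are illustrative choices, not from the question;
# scipy's SLSQP stands in for a local NLP solver like IPOPT.
import numpy as np
from scipy.optimize import minimize

def neg_obj(x):
    # Minimize the negative of exp(x1) + x2^2 + 2*x2 (both terms
    # convex and strictly increasing for x >= 0).
    return -(np.exp(x[0]) + x[1] ** 2 + 2.0 * x[1])

A = np.array([[1.0, 1.0]])  # single linear constraint x1 + x2 <= 1
b = np.array([1.0])

res = minimize(
    neg_obj,
    x0=[0.2, 0.3],
    method="SLSQP",
    bounds=[(0.0, None), (0.0, None)],
    constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}],
)
print(res.x, -res.fun)  # a KKT point; global optimality is NOT guaranteed
```

As the comments below illustrate, a different starting point can land the solver on a different (possibly suboptimal) stationary point.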
Thanks for your answer. Since interior-point methods don't need to enumerate the active sets, and since the function has no non-global maxima, it seems to me that they would give a global optimum. Is this correct, or is there something I'm missing?
(18 Sep '11, 20:35)
Opt
1
That would be the logic, yes. Unfortunately, it's not that straightforward. Consider a simple problem where the objective is a sum of strictly increasing convex functions and the constraints define a convex region: \(\max \{\, x^2 + y^2 : x \geq 0,\ y \geq 0,\ x + y \leq 1 \,\}\). Starting from an initial guess of \((x,y) = (0,0)\), I get (0,0) from CONOPT (a GRG-based solver) and (0.5, 0.5) from IPOPT. Both are (suboptimal) KKT points. The (non-unique) global optimum lies at either (0,1) or (1,0). This goes to show that when you have a nonconvex problem, global optimality cannot be guaranteed.
(19 Sep '11, 00:44)
Gilead ♦
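The two suboptimal KKT points in this counterexample can be verified by hand; the sketch below checks the stationarity conditions directly, with no solver involved:

```python
# Check by hand that (0,0) and (0.5,0.5) are KKT points of
# max x^2 + y^2  s.t.  x >= 0, y >= 0, x + y <= 1,
# yet both are suboptimal compared with the vertex (1,0).
import numpy as np

def f(p):
    x, y = p
    return x ** 2 + y ** 2

def grad(p):
    x, y = p
    return np.array([2.0 * x, 2.0 * y])

# (0,0): the gradient vanishes, so it is a stationary (KKT) point.
assert np.allclose(grad((0.0, 0.0)), 0.0)

# (0.5,0.5): x + y <= 1 is active; KKT stationarity for the max problem
# requires grad f = lambda * (1, 1) with lambda >= 0. Here lambda = 1.
lam = 1.0
assert np.allclose(grad((0.5, 0.5)), lam * np.array([1.0, 1.0]))

# Both KKT points are beaten by the vertex (1,0).
print(f((0.0, 0.0)), f((0.5, 0.5)), f((1.0, 0.0)))  # 0.0 0.5 1.0
```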
I tried the same in Matlab (except with the constraint x + y = 1), and it too gave me 0.5 as the objective function value. I'm not sure what's happening: if, for instance, I start at (0.5, 0.5), it reports that there are no feasible directions that increase the objective, which doesn't make sense, since any change in x would (because of the constraint) cause a change in y and increase the objective.
(19 Sep '11, 02:08)
Opt
The above, I guess, can be explained by the fact that the reduced gradient along the constraint x + y = 1 is 0 at (0.5, 0.5). Not sure about the case with x + y <= 1 instead of x + y = 1, though.
(19 Sep '11, 02:21)
Opt
If you really need global optimality, try Couenne (free). It automatically constructs LP-based convex envelopes and solves your problem to global optimality. If you use an LP subsolver like CPLEX or Clp, you can pick either the barrier or simplex method. Important remark: though barrier methods are polynomial-time, they aren't necessarily always faster than simplex. Polynomial time refers to worst-case complexity, not average performance. For many modest-sized problems, simplex usually outperforms the barrier method in my experience, especially when using highly-tuned codes like CPLEX.
(19 Sep '11, 09:27)
Gilead ♦
Thanks for the pointer to Couenne. However, from what I understand, isn't it exponential-time, since it uses a tree search? The structure of this problem is something I really think one might be able to exploit...
(19 Sep '11, 10:30)
Opt
Yes, it has worst-case exponential complexity. We know that finding the global optima of nonconvex (including concave) programming problems is NP-hard in general. For instance, it has been proven that something as simple as minimizing a concave quadratic function over a hypercube is NP-hard. Local methods cannot guarantee global solutions: even the simple example I gave above (which belongs to the problem class you're interested in) has multiple stationary points. So you can have either (1) polynomial time or (2) a global solution, but it's unlikely you can have both for concave programming problems.
(19 Sep '11, 12:13)
Gilead ♦
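One way the structure can be exploited directly, at least on small instances: a convex objective attains its maximum over a compact polytope at a vertex, so exhaustive vertex enumeration yields the global optimum (at exponential cost in the worst case, consistent with the NP-hardness noted above). A sketch, using the counterexample from the earlier comments:

```python
# Globally maximize a convex f over a compact polytope {x : A x <= b}
# by enumerating candidate vertices (intersections of n constraint
# hyperplanes). Exponential in general, but exact for small instances.
from itertools import combinations
import numpy as np

def max_convex_over_polytope(f, A, b):
    """Vertex enumeration for max of a convex f over {x : A x <= b}."""
    n = A.shape[1]
    best_x, best_val = None, -np.inf
    for rows in combinations(range(A.shape[0]), n):
        Asub, bsub = A[list(rows)], b[list(rows)]
        try:
            x = np.linalg.solve(Asub, bsub)  # candidate vertex
        except np.linalg.LinAlgError:
            continue                          # degenerate row subset
        if np.all(A @ x <= b + 1e-9) and f(x) > best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

# The counterexample from the comments: max x^2 + y^2 on the simplex.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])  # x,y >= 0, x+y <= 1
b = np.array([0.0, 0.0, 1.0])
x, val = max_convex_over_polytope(lambda p: p[0] ** 2 + p[1] ** 2, A, b)
print(x, val)  # a vertex with value 1.0, i.e. (1,0) or (0,1)
```

This recovers the global optimum that the local solvers above missed, though of course it does not scale beyond small dimension/constraint counts.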
That said, I think it's worth emphasizing that exponential complexity refers to the worst case, so I wouldn't be unduly worried about it. You may be able to find an algorithm that exploits your problem structure and delivers good real-world performance. For instance, though Klee & Minty showed that the simplex algorithm is exponential-time in the worst case, on average it is extremely efficient and performs far better than its complexity bound would suggest. In contrast, Khachiyan's ellipsoid algorithm is provably polynomial-time, but performs close to its complexity bound and has poor real-world performance.
(19 Sep '11, 12:38)
Gilead ♦
