Answers to: Decide whether a convex, non-differentiable function attains its minimum at zerohttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero<p>Hi, what would be the best way (theoretically and practically) to decide whether a convex function that is non-differentiable only at 0 attains its global minimum at zero? One possibility would be a subgradient method, but that still requires the subgradients. Is there a way to easily test whether 0 is the global minimizer? For example, is it easy to decide whether 0 is in the subdifferential at this point, or whether no descent direction exists there? Is it sufficient, for example, to evaluate all points (0+e_i) and (0-e_i) for all unit vectors e_i and show that the objective function values are non-negative?</p>enWed, 11 Mar 2015 10:42:03 -0400Comment by Long on Long's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11645<p>Hi Paul, thank you. 
By the way, how can I write math code here?</p>LongWed, 11 Mar 2015 10:42:03 -0400http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11645Comment by Paul Rubin on Long's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11643<p>You should post this as a separate question, so as not to confuse future readers with intermingled answers to two different questions.</p>Paul RubinWed, 11 Mar 2015 10:30:08 -0400http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11643Answer by Longhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11641<p>A related question: Assume I know the optimal solution is NOT zero; is there a way (preferably analytic, or failing that, with as little effort as possible) to compute a point with objective function value \(< 0\)? My conjecture is that there is a vertex \(e\) satisfying \(f(e) < 0\), so that I can just enumerate all vertices. But so far I could neither prove this nor find a counterexample. 
</p>LongWed, 11 Mar 2015 09:43:14 -0400http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11641Comment by Long on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11187<p>What I thought is that maybe it can be proven that the optimal solution is either 0 or "far away" from zero, so that imposing your suggested constraint is equivalent whenever the optimal solution is not zero; taking the minimum of \(f(0)\) and the optimal value of the problem including your constraint would then give the optimal solution.</p>LongWed, 28 Jan 2015 13:43:23 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11187Comment by Paul Rubin on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11184<p>True. The added constraint I suggested also lets one try to solve the modified problem "by hand", using the KKT conditions, without bumping into the differentiability problem at the origin.</p>Paul RubinWed, 28 Jan 2015 10:14:20 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11184Comment by optimizer on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11181<p>Conic solvers are usually of the interior point type. 
Interior point solvers will also have no issues with the original formulation because they will never consider \(x=0\).</p>optimizerWed, 28 Jan 2015 08:23:13 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11181Comment by Paul Rubin on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11174<p>If you add the constraint that the \(x\) variables sum to at least \(\epsilon > 0\), the differentiability issue goes away, because 0 is not in the modified feasible region. You just have to live with the possibility that the true optimum has variables summing to a positive value less than \(\epsilon\).</p>Paul RubinMon, 26 Jan 2015 17:57:34 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11174Comment by Long on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11173<p>You are both right, I assume the variables \(x\) to be non-negative.
<a href="/users/222/paul">@Paul</a>: what do you mean by keeping the original objective? The original one is not differentiable at zero; I think I might have misunderstood something.</p>
<p>The reason I'm still looking for a cheap (maybe analytic) alternative is that this kind of problem might have to be solved several times as a subproblem. From an implementation point of view, I want to keep my overall code as simple as possible, without using an additional solver if possible.</p>
<p>Doesn't it also depend on the solver whether non-differentiable points are an issue?</p>LongMon, 26 Jan 2015 17:25:19 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11173Comment by Paul Rubin on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11172<p>Agreed, provided the lack of differentiability at zero does not bother the solver.</p>Paul RubinMon, 26 Jan 2015 16:50:42 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11172Comment by optimizer on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11171<p>If you allow for a solve step, you could just as well optimize the original problem (second-order cone optimization problems can be solved almost as efficiently as linear optimization problems).</p>optimizerMon, 26 Jan 2015 16:20:03 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11171Comment by Paul Rubin on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11170<p>If so, a conceptually simpler solution than hacking the objective function is to keep the original objective and add the constraint \(\sum x_i \ge \epsilon\). 
Solve that (convex differentiable objective over a polytope) and compare the objective value to the value at 0.</p>Paul RubinMon, 26 Jan 2015 15:07:24 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11170Comment by optimizer on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11169<p>He probably means \(x \geq 0\) in combination with \(a^T x \leq b\). When the elements of \(a\) are nonnegative, it has the interpretation of a knapsack constraint.</p>optimizerMon, 26 Jan 2015 14:50:53 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11169Comment by Paul Rubin on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11167<p><a href="/users/25192/long">@Long</a>: You mentioned "knapsack constraint" in a comment to my first response. Do you by any chance constrain \(x \ge 0\)?</p>Paul RubinMon, 26 Jan 2015 10:57:36 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11167Comment by Paul Rubin on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11166<p><sigh>Two senior moments in one response.</sigh> You're right about the gradient of \(f\), which means I should have said \[g(z) = \frac{z'z}{2\epsilon}+\frac{\epsilon}{2}\] (barring more senior moments, of course). The other senior moment, which I realized after I posted, was that I shifted from \(z\) back to \(x\). 
Everything after "Let" should be in terms of \(z\), not \(x\).</p>Paul RubinMon, 26 Jan 2015 10:46:34 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11166Answer by optimizerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11165<p>Writing the problem in the standard conic form from my comment, the primal constraints are:
$$
\begin{pmatrix} P^T & 0 & 0 & -I & 0 \\ -c^T & 1 & 0 & 0 & -1 \\ a^T & 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ t \\ s \\ v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ b \end{pmatrix} \\
x \in \mathbb{R}^n, \quad t \in \mathbb{R}, \quad s \in \mathbb{R}_+, \quad ||v_1|| \leq v_2
$$
The KKT conditions are:</p>
<ul>
<li>
<p>primal feasibility (\(x=0,\ s = b - a^T x = b,\ t = ||P^T x|| + c^T x = 0\); \(s\) is nonnegative because we assume \(b \ge 0\))</p>
</li>
<li>
<p>dual feasibility:
$$
(1) \quad P y_1 + y_3 a - y_2 c = 0 \\
(2) \quad y_2 = 1 \\
(3) \quad y_3 \geq 0 \\
(4) \quad ||y_1|| \leq y_2
$$</p>
</li>
<li>
<p>complementary slackness (\(y_3 s = 0\)), which implies that when \(b \neq 0\) (so \(s = b \neq 0\)), (3) holds with equality (\(y_3 = 0\)).</p>
</li>
</ul>
<p>It now seems to make sense to distinguish between \(b=0\) and \(b \neq 0\). When \(b=0\), obviously \(x=0\) is optimal. Otherwise, we need to find a \(y\) that satisfies (1)-(4) to show that \(x=0\) is optimal; i.e., solve:
$$
P y_1 + y_3 a = c \\
y_3 \geq 0 \\
||y_1|| \leq 1
$$
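Short of an analytic answer, the feasibility of this reduced system can at least be probed numerically: for fixed \(y_3\), the best \(y_1\) is a least-squares solve, and one can minimize \(||y_1(y_3)||\) over \(y_3 \ge 0\) and check whether the minimum is at most 1. A rough sketch; the toy data `P`, `a`, `c` are my own illustration, not from the thread:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dual_certificate_exists(P, a, c, tol=1e-8):
    """Probe feasibility of the reduced system above:
       P y1 + y3 a = c,  y3 >= 0,  ||y1|| <= 1   (with y2 = 1 substituted).
    For fixed y3 the best y1 is a least-squares solve; minimize ||y1(y3)||
    over y3 >= 0 and compare the minimum with 1."""
    def norm_y1(y3):
        y1, *_ = np.linalg.lstsq(P, c - y3 * a, rcond=None)
        resid = np.linalg.norm(P @ y1 + y3 * a - c)
        # Large penalty when c - y3*a does not lie in range(P).
        return np.linalg.norm(y1) + (1e6 if resid > tol else 0.0)

    res = minimize_scalar(norm_y1, bounds=(0.0, 1e3), method="bounded")
    return bool(res.fun <= 1.0 + tol)

# Toy instance (my own data): Q = I so P = I, a = (1, 1), c = (-0.5, -0.5).
P = np.eye(2)
a = np.array([1.0, 1.0])
c = np.array([-0.5, -0.5])
print(dual_certificate_exists(P, a, c))   # True: y3 = 0 gives ||y1|| ~ 0.707
```

If this returns True, a dual certificate for the optimality of \(x=0\) exists for that instance; if the scan finds none, that suggests 0 is not optimal (for this convex problem the conditions are also necessary).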
Note that the columns in P can be assumed to be linearly independent by construction.
I don't see an easy way to conclude whether this system is feasible. Maybe someone else can comment on it.</p>optimizerMon, 26 Jan 2015 09:59:44 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11165Comment by optimizer on optimizer's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11164<p>In this case the primal solution x=0 is given (for the formulation in standard form, you also have values for s, t and v), so just a dual variable y needs to be found. The equations are nonlinear, but since this problem is so small, you may be able to solve the KKT conditions analytically (or show that no such y exists, which shows that 0 is not optimal). In another answer I have worked out these conditions.</p>optimizerMon, 26 Jan 2015 09:08:54 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11164Comment by Long on optimizer's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11159<p>In general, the KKT system is a system of nonlinear equations, so it can also only be solved numerically, for example by some Newton-type method, right?</p>LongMon, 26 Jan 2015 06:01:06 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11159Comment by Long on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11158<p>Thank you Paul for your very useful comment! 
There is a small mistake, since the gradient of \(f\) is \(x'/\epsilon\), but nothing that cannot be fixed.</p>LongMon, 26 Jan 2015 05:59:26 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11158Answer by Paul Rubinhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11145<p>Check the following math carefully, since I last studied calculus at Newton's knee. The linear part of your objective function is harmless, so I'm going to ignore it for simplicity. Consider the function \(f(z)=\parallel z\parallel \), which is within a change of variables (\(z=Hx\) where \(Q = H'H\)) of your function. Pick a small \(\epsilon > 0\). Let \[g(x)=\frac{x'x}{4\epsilon}+\frac{3\epsilon}{4},
\]which is convex and smooth. Now define \[h(x)=\begin{cases}f(x) & \parallel x\parallel\ge\epsilon\\g(x) & \parallel x\parallel<\epsilon.\end{cases}\]At any \(x\) with \(\parallel x\parallel = \epsilon\), we have \[g(x)=\frac{\epsilon^{2}}{4\epsilon}+\frac{3\epsilon}{4}=\epsilon=f(x)\]and \[\nabla g(x) = \frac{1}{2\epsilon}x' = \nabla f(x) \] (I think). So you have a continuous first derivative, which should let you optimize \(h\) pretty easily once you confirm that \(h\) is in fact convex. (I'm pretty sure about that but too lazy to check it.)</p>Paul RubinFri, 23 Jan 2015 11:46:59 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11145Comment by Paul Rubin on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11144<p>Not enough space in a comment; I'll have to add another answer.</p>Paul RubinFri, 23 Jan 2015 11:35:54 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11144Comment by optimizer on optimizer's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11140<p>No, the question becomes much easier. Let us first formulate the problem into standard form: \(\min\{t : v = [Px; t-c'x], a'x-b+s=0, x \in R^n, t \in R, s \ge 0, v \in K\}\). You can now easily write down <a href="https://inst.eecs.berkeley.edu/~ee227a/fa10/login/l_dual_conic.html">the KKT conditions</a> for x=0, and try to find a y that satisfies the conditions. 
Since you only have linear constraints, the Slater condition is satisfied, so the KKT conditions are necessary and sufficient for optimality.</p>optimizerFri, 23 Jan 2015 09:29:31 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11140Comment by Long on optimizer's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11139<p>I understand that it can be solved by some SOCP solver. What I'm wondering is whether it can be decided more cleverly. Actually, I don't want to find the optimal solution, but only to decide whether 0 is the optimal solution; or is this question as hard to answer as finding an optimal solution?</p>LongFri, 23 Jan 2015 08:54:56 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11139Answer by optimizerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11138<p>If you decompose \(Q\) into \(PP^T\), the objective can be written as \(||P^T x||+c^Tx\). This means that you have a second-order cone optimization problem. These are easy to optimize, and most conic solvers will give you a primal-dual pair that allows you to verify optimality. You could try to find a dual solution yourself.</p>optimizerFri, 23 Jan 2015 08:15:40 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11138Comment by Long on Paul Rubin's answerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11136<p>Thank you for your answers. </p>
<p>Actually, I have a linear constraint.</p>
<p>To be more precise, the problem I'm interested in is finding out whether 0 is the optimizer of the following continuous optimization problem:</p>
<p>\[\min \sqrt{x'Qx} + c'x \quad \text{s.t.} \quad a'x \le b\]</p>
<p>We have that Q is positive semidefinite, \(c \le 0\), \(a \ge 0\) and \(b \ge 0\), so that we have a convex function and a knapsack constraint. </p>
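Paul's \(\epsilon\)-constraint idea from the comments above can be tried directly on this problem class with a general NLP solver: add \(\sum_i x_i \ge \epsilon\) (so the feasible region excludes the non-differentiable point 0), minimize, and compare with \(f(0)=0\). A hedged sketch using SciPy's SLSQP; the data `Q`, `c`, `a`, `b` are made-up illustrative values, not from the thread:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data only (the thread gives no numbers): Q PSD, c <= 0, a >= 0, b >= 0.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-0.1, -0.1])
a = np.array([1.0, 1.0])
b = 1.0
eps = 1e-3   # Paul's epsilon: pushes the feasible region away from 0

def f(x):
    # Objective sqrt(x'Qx) + c'x; smooth everywhere except at x = 0.
    return np.sqrt(x @ Q @ x) + c @ x

constraints = [
    {"type": "ineq", "fun": lambda x: np.sum(x) - eps},  # sum_i x_i >= eps
    {"type": "ineq", "fun": lambda x: b - a @ x},        # a'x <= b (knapsack)
]
res = minimize(f, x0=np.array([0.5, 0.5]), method="SLSQP",
               bounds=[(0.0, None)] * 2, constraints=constraints)

# If the restricted minimum is >= f(0) = 0, then 0 remains the candidate
# optimum (up to the epsilon caveat Paul mentions).
print(res.fun, res.fun >= 0.0)
```

For these values the restricted minimum is a small positive number (roughly \(0.9\epsilon\)), so the comparison keeps 0 as the candidate optimum.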
<p><a href="/users/27/paul-rubin">@Paul</a>, can you give me a hint on how to compute such a function \(g\)?</p>
<p>Maybe a reference?</p>LongThu, 22 Jan 2015 17:18:05 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero#11136Answer by Paul Rubinhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11134<p>I'm not positive, but I think you should be able to approximate the original function \(f\) with a convex function \(g\) that is differentiable everywhere and satisfies \(\sup |f(x)-g(x)| \le \epsilon\) for an arbitrary (small) \(\epsilon > 0\). If \(f(0) \le \min_x g(x)\), then 0 is at least \(\epsilon\)-optimal for \(f\) (which is as much of a guarantee as you are going to get from any numerical method, due to rounding and convergence tolerance).</p>Paul RubinThu, 22 Jan 2015 15:37:28 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11134Answer by optimizerhttp://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11122<p>Your question is confusing, since you want to verify if 0 is the optimum but also mention a <a href="http://en.wikipedia.org/wiki/Subgradient_method">subgradient method</a> as a possible technique. The latter is an iterative optimization technique that can find the global minimum of a convex function. Using that technique and comparing the solution with 0 is an inefficient way of verifying optimality.</p>
<p>It is not possible to efficiently determine the subgradient numerically. Evaluating the function at the (plus and minus) unit vectors is insufficient: take, e.g., \(f(x,y) = (x-y)^2 + 0.5(x+y)\). Then \(f(0,0) = 0\) and the four function values at the unit vectors are 1.5, 0.5, 1.5 and 0.5, yet 0 is not optimal, since \(f(-1,-1) = -1\).</p>
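The counterexample above is easy to verify numerically; a minimal pure-Python check (function and evaluation points taken directly from the text):

```python
# Counterexample from the text: f(x, y) = (x - y)^2 + 0.5*(x + y).
# All four unit-vector evaluations are positive, yet f is negative at
# (-1, -1), so testing only +/- unit vectors does not certify that 0
# is the global minimizer.

def f(x, y):
    return (x - y) ** 2 + 0.5 * (x + y)

unit_values = [f(1, 0), f(-1, 0), f(0, 1), f(0, -1)]
print(unit_values)    # [1.5, 0.5, 1.5, 0.5] -- all positive
print(f(0, 0))        # 0.0
print(f(-1, -1))      # -1.0 -> 0 is nevertheless not the global minimizer
```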
<p>Can't you determine the subgradient analytically? What is the function?</p>
<p>Otherwise, the only thing I can think of is to check whether the derivative is 0 at some point other than 0, because that point would be a local optimum. You can then compare the function value at that point with the function value at 0. </p>
<p>On a side note, for <em>constrained</em> optimization, there are KKT conditions for convex functions that use the subdifferential instead of the derivative. See, e.g., Theorem 3.1 and Corollary 3.5 in <a href="http://www.staff.science.uu.nl/~balde101/cao12/cursus10_1cc.pdf">this document</a>. Another good reference is the book by Rockafellar on Convex Analysis. One of the criteria for optimality is that 0 is in the subdifferential of the Lagrangean at 0, which shows that these conditions are not of any help for <em>un</em>constrained optimization.</p>optimizerTue, 20 Jan 2015 10:55:05 -0500http://www.or-exchange.com/questions/11116/decide-whether-a-convex-non-differentiable-function-attaints-its-minimum-at-zero/11122
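For the unconstrained version of the objective discussed in this thread, \(f(x) = ||P^T x|| + c^T x\), the test \(0 \in \partial f(0)\) can in fact be carried out in closed form: \(\partial f(0) = \{Pu + c : ||u|| \le 1\}\), so 0 is a global minimizer iff \(Pu = -c\) has a solution with \(||u|| \le 1\), which the least-norm solution decides. A small sketch (illustrative data of my own):

```python
import numpy as np

def zero_minimizes(P, c, tol=1e-9):
    """Unconstrained f(x) = ||P^T x|| + c^T x:  0 is a global minimizer
    iff 0 lies in the subdifferential at 0, i.e. iff some u with
    ||u|| <= 1 solves P u = -c.  The least-norm solution decides this."""
    u = np.linalg.pinv(P) @ (-c)                    # least-norm candidate
    consistent = np.linalg.norm(P @ u + c) <= tol   # -c must lie in range(P)
    return bool(consistent and np.linalg.norm(u) <= 1.0 + tol)

# Illustrative data of my own:
print(zero_minimizes(np.eye(2), np.array([-0.5, -0.5])))  # True  (norm ~ 0.707)
print(zero_minimizes(np.eye(2), np.array([-2.0, 0.0])))   # False (norm 2 > 1)
```

This only covers the unconstrained case; with the knapsack constraint active one is back to the conic KKT system worked out earlier in the thread.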