I had some ideas for optimization programs, but I'm not sure if I'm off in wonderland. :)

The first one is:

Minimise: z = √(x² + y²)
Subject to: x² + y² > h²
x, y, z are integers > 0

Here h is the length of the hypotenuse of a known Pythagorean triad; this minimization program finds the Pythagorean triad with the next highest value. The idea is that running this program over and over would give a series of Pythagorean triads.

The next program is:

Minimise: a, b
Subject to: √2 = a/b
a, b are integers > 0

The idea here is to show somehow that this minimisation program violates some convex optimisation feasibility condition. This would prove that √2 is not a rational number. I call this method of proof "proof by infeasibility." XD

The next program combines the first two in an obvious way:

Minimise: z = (x^n + y^n)^(1/n)
Subject to: x^n + y^n > 0
x, y, z are integers > 0

If it's possible to show that this minimisation program violates some convex optimisation feasibility condition for values of n ≥ 3, i.e. that the program is infeasible, then this would hopefully prove Fermat's last theorem.

And the last program is an attempt to frame the Riemann hypothesis as an optimization program. If this method of doing things isn't a crackpot way of doing things, then maybe it would be possible to show that the nontrivial zeros of the Riemann zeta function are only feasible on the line s = ½ + ei.

Minimise: s = a + ci
Subject to: s > b + di
ξ(s) = 0

Here s = b + di is a known nontrivial zero of the Riemann zeta function, and s = a + ci is the next nontrivial zero. We don't need to actually find the next solution; we just need to show that for any nontrivial zero which lies on the line s = ½ + ei, the feasible region for this program lies strictly on the same line. This would be like a proof by induction. Unfortunately, complex numbers are unordered, so I'm not sure if optimizing complex functions is possible...
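As a concrete sanity check on program 1, here is a brute-force Python sketch. The search strategy (plain enumeration of z up to an arbitrary bound) is my own assumption; the program above doesn't specify how the minimization would actually be carried out:

```python
# Brute-force sketch of program 1: given a known hypotenuse h, find
# the Pythagorean triad (x, y, z) with the smallest hypotenuse z > h.
# The search bound is an arbitrary assumption, not part of the program.
import math

def next_triple(h, bound=1000):
    for z in range(h + 1, bound):
        for x in range(1, z):
            y2 = z * z - x * x
            y = math.isqrt(y2)
            # accept only exact squares, with x <= y to avoid duplicates
            if y > 0 and y * y == y2 and x <= y:
                return (x, y, z)
    return None

print(next_triple(5))   # -> (6, 8, 10), the triad after (3, 4, 5)
```

Note that repeated runs step through hypotenuses (5, 10, 13, 15, ...) rather than "primitive" triads only, since nothing in the program excludes multiples of smaller triads.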
But I'm not sure about most of this stuff. :)

asked 29 May '15, 23:05 Chickenmales
Without commenting on each of your proposed problems, let me make the observation that to prove things based on showing a problem is infeasible, there needs to be a rigorous (global) proof of infeasibility. That means, among other things, that roundoff errors and tolerances must be rigorously dealt with, which is not normally done in garden-variety math programming, and not even in BARON, which, despite being called a global optimizer, is not rigorous and is subject to the vagaries of roundoff errors and constraint tolerances. So, for instance, if you can prove infeasibility using rigorous outward-rounded interval or radial arithmetic, then you are golden. You may find this not necessarily easy to do for actual problems of interest. The closer a call is to infeasibility, the larger the number of (outward-rounded) digits you may need in your calculations. Given that there are no hardware implementations of extended-precision interval or radial arithmetic, such calculations may be quite slow.

More seriously, if you don't start with a compact set for consideration, such as a box, I don't see how you are going to get anywhere with such an approach. And proving infeasibility in a box does not correspond to your problem statement. So that pretty much puts the kibosh on all of your proposed problems as written.

Yes, you will have to do some work to have a well-defined objective function for number 4, with unambiguous function-value ordering. Doing things for every possible integer value: good luck with that. And aside from all the other difficulties, are you going to separately prove Fermat's theorem, so to speak, for each value of n from 3 to infinity? Anyhow, I have figured out how to do it, but this answer box is not ample enough to contain my code.

answered 29 May '15, 23:53 Mark L Stone

So I was off in wonderland. :) I see the problem with the "proof by infeasibility" idea: the problem is essentially unbounded.
I thought that because problems 2 and 3 were minimization problems and had a lower bound, they would be okay, but because they're linear integer programming problems they need an upper bound as well, i.e. the feasible region needs to be a box. Is that a correct interpretation of what you said? With problem number 4, would it be possible to order the function values so that a + bi has a value of (a − ½) + b? Or am I off in wonderland again?
(30 May '15, 05:31)
Chickenmales
I am stating that for a rigorous outward-rounded interval or radial arithmetic approach to work, I don't know how to do it without bounding the domain of the optimization variables. I'm not saying anything more than that. Regarding #2, I don't see how you are going to do that even with arbitrarily high-precision interval or radial arithmetic; you would need infinite-precision arithmetic. I suppose you could write down an optimization problem which you solve to prove sqrt(2) is irrational, but that might just amount, in essence, to a classical proof. As for #4, I leave that to you to figure out.
(30 May '15, 11:27)
Mark L Stone