Dear all, I have a probability matrix A, and I want to solve the following system of equations: Ax = b. Specifically, the matrix A is defined as follows.
The matrix A is square and upper triangular, so a unique solution x can be found by backward substitution. However, as the size of A grows, the entries A_ij approach 0, and after the 270th row I observe 0's on the diagonal, so the matrix becomes numerically singular. Meanwhile, the vector b has values in the range [0, 10 million]. As a remedy, I thought I could divide each row i of the matrix A, and of the vector b, by Binom(x=i, size=i, prob), so that the updated matrix would have 1's on the diagonal. This made the structure of A much nicer; however, the updated b vector then has a range of [0, 10e+310], which quickly exceeds the double upper bound (~1.8e+308) and throws an error when it comes to computing the x's. With or without any scaling of A and b, I am aware that when I minimize norm(Ax - b, 2) I won't get exactly 0 error because of precision issues. Still, when I do so (as this is equivalent to solving Ax = b), the current error is around 1e+168, which makes me seriously question my approach. And what bothers me the most is that an optimal solution to this system surely exists. So far I have used the following tools to solve this problem:
I would appreciate any recommendations or tools. Thank you in advance. EDIT: I guess my problem boils down to: how do we compute x from Ax = b when the range of the x variables exceeds double storage limits? asked 11 Nov '12, 16:53 Roark
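To make the overflow issue concrete: one way to sidestep the 1.8e+308 double ceiling is exact rational arithmetic. Below is a minimal sketch of backward substitution using Python's stdlib `fractions.Fraction` on a hypothetical 3x3 upper-triangular system with tiny diagonal entries (not the actual matrix from the question); the solution components exceed 1e+300 but the arithmetic never overflows.

```python
from fractions import Fraction

# Hypothetical 3x3 upper-triangular system with near-zero diagonal entries.
A = [[Fraction(1, 10**100), Fraction(1, 2), Fraction(1, 3)],
     [Fraction(0),          Fraction(1, 10**100), Fraction(1, 4)],
     [Fraction(0),          Fraction(0),          Fraction(1, 10**100)]]
b = [Fraction(1), Fraction(2), Fraction(3)]

n = len(b)
x = [Fraction(0)] * n
for i in range(n - 1, -1, -1):                       # backward substitution
    s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
    x[i] = s / A[i][i]                               # exact; no 1e+308 overflow

# In rational arithmetic the residual Ax - b is exactly zero.
assert all(sum(A[i][j] * x[j] for j in range(n)) == b[i] for i in range(n))
```

The price is that numerators and denominators can grow large, so each step gets slower, but for a few hundred rows this is usually tractable.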
You could try using exact approaches (Matlab Symbolic Math Toolbox, QSopt_ex, Maple, Mathematica, something self-made using GMP, ...). answered 11 Nov '12, 18:26 T O Well, what I'm doing is an exact approach, actually. I looked online, but I guess the tools you mention are limited to the double max limit as well.
(12 Nov '12, 10:55)
Roark
I don't think so. The maximal value of a double is something like 1.79769e+308. In Maple for example, you can express much larger numbers. If you could provide the Matrix and the RHS in an appropriate format, I could do some tests with Maple or my own exact LU decomposition using GMP.
(12 Nov '12, 11:19)
T O
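(To illustrate the point in the comment above: arbitrary-precision arithmetic is not bound by the double limit. A one-line sanity check in Python, which has unbounded integers and exact rationals in the stdlib; the same holds for Maple's software floats and GMP.)

```python
from fractions import Fraction

big = Fraction(10) ** 310      # well beyond the ~1.8e+308 double maximum
x = big / 3                    # exact rational division, no overflow
assert x * 3 == big            # round-trips exactly
```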
I sent you an email.
(13 Nov '12, 03:15)
T O

From the numbers you have, it does not look like double precision will work for any larger version of your problem. Using quad precision might get you a little further, but probably not much, so the exact approach might be your only choice. My personal rule of thumb is that if the ratio between the largest and the smallest coefficient in a row, or between the largest and smallest coefficients of the right-hand side, is larger than 1e9, you have to expect numerical trouble. This doesn't mean it can't work, just don't be surprised if it doesn't. If you want to use a general-purpose LP solver like Gurobi, you should be aware that these only guarantee a certain precision in their solution, the feasibility tolerance, usually in the area of 1e-6. This can be tightened a bit, but results might be strange. Also, to increase your chance of getting something useful out of a general-purpose solver, you should disable presolve. It is a common feature of LP solvers to drop matrix coefficients below a certain threshold (something like 1e-9), so be aware of that. answered 12 Nov '12, 09:37 Philipp Chri... Thank you very much. Well, in my case the difference can be up to 1e100, and that causes very big trouble. My intention was to minimize the L2 norm to the extent possible, so I don't expect much from LP solvers. In any case it seems like whatever solution I get will be bad :(
(12 Nov '12, 10:59)
Roark
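The rule of thumb in the answer above is easy to automate. A minimal sketch (generic Python, not tied to any particular solver) that flags rows whose coefficient dynamic range exceeds the suggested 1e9 threshold:

```python
def row_dynamic_range(row):
    """Ratio of largest to smallest nonzero |coefficient| in a row."""
    mags = [abs(a) for a in row if a != 0]
    return max(mags) / min(mags)

# Example row spanning ~12 orders of magnitude: well past the 1e9
# rule of thumb, so numerical trouble should be expected.
row = [1e-6, 2.5, 1e6]
assert row_dynamic_range(row) > 1e9
```

Running this check on every row of A (and on b) before handing the system to a solver gives an early warning that scaling or exact arithmetic is needed.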
