Hi all,

I have written a routine in Python using OpenOpt and its solvers to solve dense / sparse problems with specifiable accuracy. Constraints (box-bound, linear, quadratic) could also be connected quite easily. This problem is encountered very often in many areas, e.g. machine learning and sparse approximation; see for example: http://scikit-learn.org/stable/modules/linear_model.html#elastic-net

First of all, the large-scale solver gsubg is recommended; some hand-tuning of its parameters can also speed it up considerably. You may also be interested in the other OpenOpt NSP solvers, ralg and amsg2p (although they are medium-scale). You can see the source of the routine and its demo result here.

You shouldn't expect gsubg to always solve your problem and report the obtained result with the specified accuracy; for some very difficult problems, e.g. extremely ill-conditioned ones, it may
- fail to solve a QP subproblem (the default QP solver is cvxopt; you may involve another one, e.g. the commercial or free-for-educational cplex)
- exit with another stop criterion, e.g. maxIter has been reached or maxShoots has been exceeded (the latter usually means you have reached the solution, but that cannot be guaranteed)
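For reference, the underlying problem here is an elastic-net-type nonsmooth program, f(x) = ||Ax - b||^2 + l1*||x||_1 + l2*||x||_2^2 (per the scikit-learn link above; sklearn's exact scaling of the terms differs). Below is a minimal pure-Python subgradient-descent sketch of that objective — an illustration of the problem class only, not OpenOpt/gsubg code; the matrix, penalty weights, and step-size rule are made up:

```python
# Subgradient descent on an elastic-net-type objective
#   f(x) = ||A x - b||^2 + l1 * ||x||_1 + l2 * ||x||_2^2
# Tiny, dense, pure Python -- gsubg/OpenOpt handle the large-scale
# case with proper stop criteria (accuracy, maxIter, etc.).

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def objective(A, b, x, l1, l2):
    r = [ri - bi for ri, bi in zip(matvec(A, x), b)]
    return (sum(ri * ri for ri in r)
            + l1 * sum(abs(xi) for xi in x)
            + l2 * sum(xi * xi for xi in x))

def subgradient(A, b, x, l1, l2):
    r = [ri - bi for ri, bi in zip(matvec(A, x), b)]
    g = []
    for j in range(len(x)):
        # smooth part: 2 A^T r + 2 l2 x; l1 part: any element of sign(x_j)
        gj = 2 * sum(A[i][j] * r[i] for i in range(len(r)))
        gj += 2 * l2 * x[j]
        gj += l1 * (1 if x[j] > 0 else -1 if x[j] < 0 else 0)
        g.append(gj)
    return g

def solve(A, b, l1=0.1, l2=0.1, iters=2000):
    x = [0.0] * len(A[0])
    best_x, best_f = x[:], objective(A, b, x, l1, l2)
    for k in range(1, iters + 1):
        g = subgradient(A, b, x, l1, l2)
        step = 0.1 / k ** 0.5          # diminishing step size (made up)
        x = [xj - step * gj for xj, gj in zip(x, g)]
        f = objective(A, b, x, l1, l2)
        if f < best_f:                  # keep the best iterate seen so far
            best_f, best_x = f, x[:]
    return best_x, best_f

A = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
b = [1.0, 2.0, 2.0]
x, f = solve(A, b)
```

The ||x||_1 term is what makes the objective nonsmooth at zero, hence the need for NSP solvers like gsubg rather than plain gradient-based ones.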
BTW, you can start/pause/stop the solver via the basic OpenOpt GUI.

I created the routine first of all to demonstrate gsubg's abilities; I haven't decided yet whether or not to commit it to OpenOpt, with or without a special class for this problem. In either case you can very easily create problems like this one in FuncDesigner (without having to write a routine for derivatives) and solve them with gsubg or another NSP solver; however, IIRC FuncDesigner's dot() doesn't work with sparse matrices yet, but that could be done.

Regards, Dmitrey