To assess the uncertainty of a process under consideration, I am thinking of running a Monte Carlo simulation on an LP optimization problem. The idea is to assign a distribution to an input parameter and then find the distribution of an output. Are there any references where this idea is explained or developed further? Are there any other faster, less computation-intensive methods for analyzing the uncertainty? Thanks. asked 29 Aug '12, 17:53 Ram 
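A minimal sketch of this setup, assuming a hypothetical two-variable LP solved with SciPy's `linprog`, where the capacity parameter `b` is the uncertain input (all numbers here are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical LP: maximize 3x + 2y subject to x + y <= b, x >= 0, y >= 0,
# where the capacity b is the uncertain input parameter.
rng = np.random.default_rng(seed=0)
objectives = []
for b in rng.normal(loc=10.0, scale=1.0, size=1000):
    # linprog minimizes, so the objective coefficients are negated
    res = linprog(c=[-3.0, -2.0], A_ub=[[1.0, 1.0]], b_ub=[b],
                  bounds=[(0, None), (0, None)])
    if res.success:
        objectives.append(-res.fun)

objectives = np.array(objectives)
print(f"mean objective: {objectives.mean():.2f}, std: {objectives.std():.2f}")
```

Any summary of `objectives` (histogram, quantiles, standard deviation) then characterizes the output distribution.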
I'm not sure it's pertinent to what you want, but you might take a look at robust optimization. answered 29 Aug '12, 19:55 Paul Rubin ♦♦ Thank you so much. It was really useful.
(30 Aug '12, 00:50)
Ram

You can also look at simulation optimization/stochastic optimization, where you can consider a broader set of uncertainties with possibly non-normal distributions. Even if the deterministic version of the problem is an LP, the stochastic version is nonlinear, and the optimization is typically solved using metaheuristics (with the uncertainties analyzed using Monte Carlo simulation). For a quick demonstration of stochastic optimization with MS Excel, look at OptQuest, the tabu search implementation used for solving these problems in Oracle Crystal Ball. Disclaimer: I work for Oracle Crystal Ball. answered 31 Aug '12, 13:36 Samik R. @Samik R.: Thanks for the information. OCB seems like a really interesting product. Do you develop algorithms specific to a problem?
(31 Aug '12, 18:23)
Ram
@Ram: First, we (Oracle Crystal Ball) outsource the solving to OptQuest (also mentioned by @Steve Mack below). Second, the implementation is generic, not specific to any type of problem, and does not take into account specific problem structure. Having said that, the solver is very powerful and is used by most leading simulation vendors, except Frontline (they have their own engine).
(01 Sep '12, 00:21)
Samik R.
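To illustrate the simulation-optimization idea above without a commercial package, here is a crude sketch: a random search (standing in for a metaheuristic such as tabu search) in which each candidate decision is scored by a Monte Carlo estimate of expected profit. The newsvendor-style problem, the lognormal (non-normal) demand, and all prices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulated_profit(order_qty, n_sims=500):
    """Monte Carlo estimate of expected profit under lognormal demand
    (a hypothetical newsvendor-style example)."""
    demand = rng.lognormal(mean=3.0, sigma=0.5, size=n_sims)
    sales = np.minimum(order_qty, demand)
    # hypothetical unit price 5, unit cost 2
    profit = 5.0 * sales - 2.0 * order_qty
    return profit.mean()

# Crude random search standing in for a real metaheuristic
best_qty, best_profit = None, -np.inf
for qty in rng.uniform(0.0, 60.0, size=200):
    p = simulated_profit(qty)
    if p > best_profit:
        best_qty, best_profit = qty, p

print(f"best order quantity ~ {best_qty:.1f}, expected profit ~ {best_profit:.1f}")
```

A real solver would search far more intelligently and would typically use common random numbers across candidates to reduce comparison noise; this only shows the structure (simulate inside, optimize outside).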

The general structure of your approach seems valid to me; I've done similar experiments in much the same way in the past, although, like you, I've not seen any reference on the subject.
You might also consider building a discrete-event simulation based on the logic of your problem. For example, a discrete-event simulation is more appropriate for analyzing an inventory system or a production planning system. Finally, if you consult some resources on Monte Carlo and/or discrete-event simulation covering how to generate random numbers and variates (if you're doing everything from scratch) and how to design your experiments, you should be good to go. PS. The methods proposed by @Paul and @Samik focus on situations where you want to model a problem with uncertain or probabilistic parameters. They are both worth investigating, as they are proactive against uncertainty and risk, unlike your own approach. answered 31 Aug '12, 16:29 Ehsan ♦ Thank you so much for the detailed information. I am trying to compare all these approaches side by side. I will try to upload my comparison in a couple of days.
(31 Aug '12, 16:47)
Ram
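On the point about generating random variates from scratch, inverse-transform sampling is the textbook starting place: draw U ~ Uniform(0, 1) and return F⁻¹(U). A small sketch for an exponential distribution (the rate parameter here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_exponential(lam, size):
    """Inverse-transform sampling for Exp(lam): F^-1(u) = -ln(1 - u) / lam."""
    u = rng.uniform(size=size)
    return -np.log(1.0 - u) / lam

samples = sample_exponential(lam=2.0, size=100_000)
print(f"sample mean: {samples.mean():.3f} (theory: {1 / 2.0})")
```

The same recipe works for any distribution with an invertible CDF; for others (e.g. the normal), library generators or acceptance-rejection methods are the usual route.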

You can buy Monte Carlo optimization packages off the shelf; see the systems by Frontline and Palisade. Both have the OptQuest solver embedded in them, and you can download trial versions of each to test them. The packages aren't cheap, but building an Opt/Sim system yourself makes little sense: there is a lot of I/O overhead and data management required for each iteration, so the level of effort as well as the expertise needed would be substantial. That could be a painful reinvention of the wheel for sure. answered 31 Aug '12, 17:45 Steve Mack I was not aware of these interesting tools. Thanks. Basically, even though I am making a model for the entire process, I will be testing only a few input parameters.
(31 Aug '12, 18:17)
Ram

Following @Bo Jensen's link on ORX's Google+ page, I found: Importance Sampling in Stochastic Programming: A Markov Chain Monte Carlo Approach.
answered 02 Sep '12, 03:51 yeesian 
You could also take a look at space-filling experimental designs. Instead of sampling a (multivariate) distribution of input parameters, you use design points from a unit hypercube that are presumably maximally different and scale these values to the desired intervals of your inputs. Of course, this only gives you uniform ranges. Take a look at Space-Filling Designs; these folks have design points readily available for a large number of input dimensions. If your LP is reasonably cheap and you can easily compute 1000 different solutions, an alternative would be the use of low-discrepancy sequences, such as Sobol, Faure, or Halton sequences. In contrast to pseudorandom numbers, these sequences tend to distribute the design points nicely over the unit hypercube, but only for larger sample sizes. answered 06 Sep '12, 17:58 loehndorf 
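As a sketch of the low-discrepancy approach, using SciPy's quasi-Monte Carlo module (`scipy.stats.qmc`, SciPy >= 1.7); the dimension count and parameter ranges below are hypothetical:

```python
from scipy.stats import qmc

# Sobol low-discrepancy sequence over a 3-dimensional unit hypercube,
# then scaled to hypothetical ranges for three uncertain LP inputs.
sampler = qmc.Sobol(d=3, scramble=True, seed=7)
unit_points = sampler.random_base2(m=10)  # 2**10 = 1024 points

lower = [5.0, 0.1, 100.0]   # hypothetical lower bounds per parameter
upper = [15.0, 0.9, 500.0]  # hypothetical upper bounds
design = qmc.scale(unit_points, lower, upper)

print(design.shape)  # each row is one input scenario to solve the LP with
```

Sobol sequences balance best at power-of-two sample sizes, which is why `random_base2` is used here rather than an arbitrary count.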
@Ram: Are you trying to run a Monte Carlo simulation model for the solution of a deterministic LP model to understand how it would perform under uncertainty? Or are you trying to model a problem with uncertain or probabilistic parameters?
@Ehsan: I am trying the first scenario, i.e., deterministic optimization followed by Monte Carlo simulation for uncertainty analysis. Thanks.