Could someone please give me a list (in ascending order of difficulty) of books that can get me from a novice (basic mathematical programming) to an experienced level?

I am mostly interested in modelling and solving stochastic programming (stochastic control) problems, specifically multi-stage SP ones. However, I have realized that I need to start from the very basics.

My main difficulties so far have been finding a proper explanation of how to model uncertainty, how mathematical problems can be expressed in DP form, and how one generates the dynamics of the problem (mainly the distributions; I guess this is called system identification, correct me if I am wrong).

Also any recommendation on examples of problems and solutions using some open-source solver would be highly appreciated.

Thanks a lot in advance.

asked 09 Feb '16, 10:20

lstavr

edited 09 Feb '16, 10:21

I found that the book "Stochastic Inventory Theory" by Evan Porteus does a nice job discussing stochastic dynamic programming (in the context of inventory control problems) and it is not extremely sophisticated mathematically.

(09 Feb '16, 13:07) Andreas

I started in SP by first reading Kall & Wallace; then, for a more advanced treatment, I went for Shapiro, Dentcheva, Ruszczynski. In the approximation-theory camp there is "Neuro-Dynamic Programming" by Bertsekas & Tsitsiklis, which is quite rigorous too. You're asking about multi-stage problems; I know only of Dentcheva's work in this context, and I think the book I mentioned above has some treatment of it.

The whole field of SP started with the work of Dantzig who first introduced stochastic linear programs in the '50s; there has been a schism in the past decades between control engineering, mathematical programming/operations research and computational statistics, so the theme of "noise in control/optimization problems" has been tackled multiple times and described with multiple vocabularies (which is fine, since it's such a ramified concept, but it also makes it very hard to follow).

There's a very enlightening book ("Approximate Dynamic Programming" by Powell) that tries to tie this theoretical mess together, and it is not even too heavy on the formalism.

Each of these camps ("robust control", "reinforcement learning", "(partially observable) Markov decision processes", "multi-armed bandits", "stochastic programming") focuses on overlapping problems but differs in its starting assumptions. For example, MDPs are usually defined over discrete state and action spaces, so there is a well-defined notion of a state transition matrix; robust control is defined in continuous time/state/control/noise spaces that are discretized as convenient; whereas in the OR sense of stochastic programming we usually have finite-support distributions ("scenarios") together with real-vector-valued, simply bounded stage decision variables, and the aim is usually to bound the complexity of the scenario tree.
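To make the scenario-based OR view (and the open-source-solver part of your question) concrete, here is a minimal sketch of a two-stage stochastic program: a toy newsvendor where the here-and-now decision x is the order quantity and y_s is the amount sold in demand scenario s. With finite-support scenarios, the problem collapses to its deterministic equivalent, an ordinary LP, solved here with SciPy's open-source linprog. All costs, prices, demands, and probabilities below are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy newsvendor (illustrative numbers): buy x units at unit cost c now;
# in each demand scenario s, sell y_s units at price r, with y_s <= min(x, d_s).
c, r = 1.0, 3.0
demands = np.array([50.0, 100.0])   # finite-support distribution ("scenarios")
probs = np.array([0.5, 0.5])        # scenario probabilities

# Deterministic equivalent over variables z = [x, y_1, y_2]:
# minimize  c*x - sum_s p_s * r * y_s   (i.e. maximize expected profit).
obj = np.concatenate(([c], -probs * r))

# Linking constraints y_s <= x and demand caps y_s <= d_s; all vars >= 0.
A_ub = np.array([[-1.0, 1.0, 0.0],
                 [-1.0, 0.0, 1.0],
                 [ 0.0, 1.0, 0.0],
                 [ 0.0, 0.0, 1.0]])
b_ub = np.array([0.0, 0.0, demands[0], demands[1]])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
x_opt = res.x[0]                 # here-and-now order quantity
expected_profit = -res.fun       # optimal expected profit
print(x_opt, expected_profit)
```

The same pattern scales to multi-stage problems, except that the deterministic equivalent grows with the scenario tree, which is exactly why bounding the tree's complexity becomes the central concern.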

I hope I haven't confused you further; feel free to message me if you have any questions.


answered 07 Mar '16, 10:09

ocramz

edited 08 Mar '16, 05:51


OR-Exchange! Your site for questions, answers, and announcements about operations research.