I am not sure whether the following is a legitimate question for this board. I am looking for examples of partially observed Markov decision processes (preferably infinite-horizon, discrete-time, discrete-state) where the actions impact the transitions of the underlying Markov chain. I know for sure that machine replacement is one such example; however, I have not found other useful examples. Quite often, people study POMDPs for active sensing or target tracking via sensor networks, where different actions only impact the observations, not the transitions. I would appreciate it if somebody could point me to references on POMDPs where the actions impact the transition probabilities of the underlying Markov chain.
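To make the machine-replacement example concrete, here is a minimal sketch (toy numbers of my own choosing, not taken from any particular reference) of a two-state POMDP where the chosen action selects the transition matrix, and the belief over the hidden state is updated with the standard Bayes filter:

```python
# Toy machine-replacement POMDP: hidden states {0: good, 1: bad},
# actions {"continue", "replace"}, observations {0: ok, 1: faulty}.
# The key point: the transition matrix depends on the action.

# P[a][s][s'] : action-dependent transition probabilities
P = {
    "continue": [[0.9, 0.1],   # a good machine may deteriorate
                 [0.0, 1.0]],  # a bad machine stays bad
    "replace":  [[1.0, 0.0],   # replacement restores the good state
                 [1.0, 0.0]],
}

# O[s'][z] : probability of observation z given the next state s'
O = [[0.8, 0.2],   # good machine usually looks ok
     [0.3, 0.7]]   # bad machine usually looks faulty

def belief_update(b, action, obs):
    """Bayes filter: predict with the action-dependent P, correct with O."""
    predicted = [sum(b[s] * P[action][s][s2] for s in range(2))
                 for s2 in range(2)]
    unnorm = [predicted[s2] * O[s2][obs] for s2 in range(2)]
    total = sum(unnorm)
    return [x / total for x in unnorm]

b = [0.5, 0.5]
b = belief_update(b, "continue", 1)  # continue, then observe "faulty"
```

Because P depends on the action, the same observation leads to different posterior beliefs depending on what was done, which is exactly the action-dependent-transition structure asked about.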
Mykel Kochenderfer, an aero/astro professor at Stanford, has done work in which he uses POMDPs for aircraft collision avoidance. In this case the control actions involve changing where an aircraft goes, so they impact the state transition probabilities. ACAS X, a collision-avoidance system that looks likely to be deployed operationally, is based on this work (although I am not sure whether that case uses an MDP or a POMDP formulation). Here is a paper that discusses aircraft collision avoidance as a POMDP. There are a few other papers that might be relevant on his website. answered 22 May '14, 15:46 mbloem thanks for the references
(23 May '14, 16:27)
anon123

The partially observed linear quadratic Gaussian (LQG) model, which is used quite often in control theory. answered 24 Feb '16, 00:40 adityam
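The LQG answer can be illustrated with a minimal scalar sketch (the coefficients below are my own toy values): the hidden state evolves as x_{t+1} = a·x_t + b·u_t + w_t and is observed through y_t = x_t + v_t, so the control u_t directly shifts the transition distribution of the underlying chain, while the Kalman filter maintains the (Gaussian) belief state:

```python
# Scalar partially observed LQG: the control u enters the state
# dynamics, so the action changes the transition distribution.
a, b = 1.0, 0.5       # state dynamics and control gain (toy values)
q, r = 0.1, 0.2       # process and observation noise variances

def kalman_step(mean, var, u, y):
    """One predict/update cycle of the Kalman filter (the belief state)."""
    # predict: the control u shifts the predicted mean of the next state
    mean_p = a * mean + b * u
    var_p = a * a * var + q
    # update with the noisy observation y
    k = var_p / (var_p + r)
    mean_n = mean_p + k * (y - mean_p)
    var_n = (1.0 - k) * var_p
    return mean_n, var_n

mean, var = kalman_step(0.0, 1.0, u=1.0, y=0.6)
```

For the LQG model the belief is fully summarized by this (mean, variance) pair, which is why the partially observed LQG problem is tractable: certainty equivalence lets the optimal controller act on the Kalman estimate.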
It is interesting. I always thought there must be an example of a POMDP where the actions affect the transition probabilities, but I cannot find one either.
If there is an MDP with action-dependent transitions and an unobservable state, it might be a situation where you cannot get any useful information from the observations, I guess.