Modified Policy Iteration Algorithms for Discounted Markov Decision Problems
Author
Martin L. Puterman & Moon Chirl Shin
Suggested Citation
Martin L. Puterman & Moon Chirl Shin, 1978. "Modified Policy Iteration Algorithms for Discounted Markov Decision Problems," Management Science, INFORMS, vol. 24(11), pages 1127-1137.
DOI: 10.1287/mnsc.24.11.1127
Download full text from publisher
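The paper concerns a modified policy iteration scheme for discounted Markov decision problems: each policy-improvement step is followed by a fixed number of successive-approximation sweeps that only partially evaluate the new policy, interpolating between value iteration and policy iteration. The sketch below is a minimal illustration of that general idea, not the paper's exact algorithm; the function name modified_policy_iteration, the array layout (transition probabilities P[a][s][s'], rewards r[a][s]), the discount factor gamma, the order parameter m, and the tolerance tol are assumptions chosen for this example.

# Minimal sketch of a modified policy iteration scheme for a discounted,
# tabular MDP. Assumed inputs (not taken from the paper): P with shape
# (n_actions, n_states, n_states), r with shape (n_actions, n_states),
# and a discount factor 0 < gamma < 1.
import numpy as np

def modified_policy_iteration(P, r, gamma, m=5, tol=1e-8, max_iters=10_000):
    n_states = P.shape[1]
    v = np.zeros(n_states)
    for _ in range(max_iters):
        # Improvement step: one Bellman backup over all actions.
        q = r + gamma * (P @ v)          # shape (n_actions, n_states)
        v_new = q.max(axis=0)
        policy = q.argmax(axis=0)
        # Stop when the Bellman residual is small.
        if np.max(np.abs(v_new - v)) < tol:
            return policy, v_new
        # Partial evaluation step: m successive-approximation sweeps
        # with the greedy policy held fixed.
        v = v_new
        P_pi = P[policy, np.arange(n_states), :]   # rows of P under the greedy policy
        r_pi = r[policy, np.arange(n_states)]
        for _ in range(m):
            v = r_pi + gamma * (P_pi @ v)
    return policy, v

With m = 0 the evaluation sweeps disappear and the iteration reduces to value iteration; letting m grow large approaches full policy iteration, which is the trade-off the paper analyses for the discounted case.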
Citations
Citations are extracted by the CitEc Project.
Cited by:
- Burkhard Heer & Alfred Maußner, 2011. "Value Function Iteration as a Solution Method for the Ramsey Model," Journal of Economics and Statistics (Jahrbuecher fuer Nationaloekonomie und Statistik), De Gruyter, vol. 231(4), pages 494-515, August.
- Burkhard Heer & Alfred Maussner, 2008. "Value Function Iteration as a Solution Method for the Ramsey Model," CESifo Working Paper Series 2278, CESifo.
- Mauro Gaggero & Giorgio Gnecco & Marcello Sanguineti, 2014. "Approximate dynamic programming for stochastic N-stage optimization with application to optimal consumption under uncertainty," Computational Optimization and Applications, Springer, vol. 58(1), pages 31-85, May.
- Mauro Gaggero & Giorgio Gnecco & Marcello Sanguineti, 2013. "Dynamic Programming and Value-Function Approximation in Sequential Decision Problems: Error Analysis and Numerical Results," Journal of Optimization Theory and Applications, Springer, vol. 156(2), pages 380-416, February.
- Mercedes Esteban-Bravo & Jose M. Vidal-Sanz & Gökhan Yildirim, 2014. "Valuing Customer Portfolios with Endogenous Mass and Direct Marketing Interventions Using a Stochastic Dynamic Programming Decomposition," Marketing Science, INFORMS, vol. 33(5), pages 621-640, September.
- Vidal-Sanz, Jose M. & Yildirim, Gökhan, 2012. "Valuing customer portfolios with endogenous mass-and-direct-marketing interventions using a stochastic dynamic programming decomposition," DEE - Working Papers. Business Economics. WB wb121304, Universidad Carlos III de Madrid. Departamento de Economía de la Empresa.
- David L. Kaufman & Andrew J. Schaefer, 2013. "Robust Modified Policy Iteration," INFORMS Journal on Computing, INFORMS, vol. 25(3), pages 396-410, August.
- Gabriel Zayas‐Cabán & Emmett J. Lodree & David L. Kaufman, 2020. "Optimal Control of Parallel Queues for Managing Volunteer Convergence," Production and Operations Management, Production and Operations Management Society, vol. 29(10), pages 2268-2288, October.
- Phelan, Thomas & Eslami, Keyvan, 2022. "Applications of Markov chain approximation methods to optimal control problems in economics," Journal of Economic Dynamics and Control, Elsevier, vol. 143(C).
- Keyvan Eslami & Tom Phelan, 2023. "The Art of Temporal Approximation: An Investigation into Numerical Solutions to Discrete and Continuous-Time Problems in Economics," Working Papers 23-10, Federal Reserve Bank of Cleveland.
- Herzberg, Meir & Yechiali, Uri, 1996. "A K-step look-ahead analysis of value iteration algorithms for Markov decision processes," European Journal of Operational Research, Elsevier, vol. 88(3), pages 622-636, February.
- Pelin Canbolat & Uriel Rothblum, 2013. "(Approximate) iterated successive approximations algorithm for sequential decision processes," Annals of Operations Research, Springer, vol. 208(1), pages 309-320, September.
- Gabriel Zayas-Cabán & Mark E. Lewis, 2020. "Admission control in a two-class loss system with periodically varying parameters and abandonments," Queueing Systems: Theory and Applications, Springer, vol. 94(1), pages 175-210, February.
- Oleksandr Shlakhter & Chi-Guhn Lee, 2013. "Accelerated modified policy iteration algorithms for Markov decision processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 78(1), pages 61-76, August.
- Detlef Seese & Christof Weinhardt & Frank Schlottmann (ed.), 2008. "Handbook on Information Technology in Finance," International Handbooks on Information Systems, Springer, number 978-3-540-49487-4, November.
- Keyvan Eslami & Tom Phelan, 2021. "Applications of Markov Chain Approximation Methods to Optimal Control Problems in Economics," Working Papers 21-04R, Federal Reserve Bank of Cleveland, revised 17 May 2022.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormnsc:v:24:y:1978:i:11:p:1127-1137. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher. General contact details of provider: https://edirc.repec.org/data/inforea.html.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.