ESAIM: COCV, Volume 26 (2020), Article Number 25, 47 pages
DOI: https://doi.org/10.1051/cocv/2019020
Published online: 03 March 2020
  1. A. Almudevar, A dynamic programming algorithm for the optimal control of piecewise deterministic Markov processes. SIAM J. Control Optim. 40 (2001) 525–539.
  2. S. Altay, K. Colaneri and Z. Eksi, Portfolio optimization for a large investor controlling market sentiment under partial information. SIAM J. Financ. Math. 10 (2019) 512–546.
  3. S. Asmussen, Applied Probability and Queues. Vol. 51 of Applications of Mathematics (Stochastic Modelling and Applied Probability), 2nd edn. Springer-Verlag, New York (2003).
  4. A. Bain and D. Crisan, Fundamentals of Stochastic Filtering. Springer, New York (2009).
  5. E. Bandini, Constrained BSDEs driven by a non quasi-left-continuous random measure and optimal control of PDMPs on bounded domains. Preprint arXiv:1712.05205 (2017).
  6. E. Bandini, Optimal control of piecewise deterministic Markov processes: a BSDE representation of the value function. ESAIM: COCV 24 (2018) 311–354.
  7. E. Bandini and M. Fuhrman, Constrained BSDEs representation of the value function in optimal control of pure jump Markov processes. Stoch. Process. Appl. 127 (2017) 1441–1474.
  8. E. Bandini, A. Cosso, M. Fuhrman and H. Pham, Randomized filtering and Bellman equation in Wasserstein space for partial observation control problem. Stoch. Process. Appl. 129 (2019) 674–711.
  9. E. Bandini, F. Confortola and A. Cosso, BSDE representation and randomized dynamic programming principle for stochastic control problems of infinite-dimensional jump-diffusions. Preprint arXiv:1810.01728 (2018).
  10. E. Bandini, A. Cosso, M. Fuhrman and H. Pham, Backward SDEs for optimal control of partially observed path-dependent stochastic systems: a control randomization approach. Ann. Appl. Probab. 28 (2018) 1634–1678.
  11. G. Barles, Solutions de viscosité des équations de Hamilton-Jacobi. Vol. 17 of Mathématiques & Applications. Springer-Verlag, Paris (1994).
  12. A. Bensoussan, M. Çakanyıldırım and S.P. Sethi, On the optimal control of partially observed inventory systems. C. R. Math. Acad. Sci. Paris 341 (2005) 419–426.
  13. A. Bensoussan, J. Frehse and P. Yam, Mean Field Games and Mean Field Type Control Theory. Springer Briefs in Mathematics. Springer, New York (2013).
  14. D.P. Bertsekas and S.E. Shreve, Stochastic Optimal Control: The Discrete Time Case. Vol. 139 of Mathematics in Science and Engineering. Academic Press, Inc., New York, London (1978).
  15. V.I. Bogachev, Measure Theory, Vol. I, II. Springer-Verlag, Berlin (2007).
  16. P. Brémaud, Point Processes and Queues. Springer Series in Statistics. Springer-Verlag, New York (1981).
  17. A.E. Bryson, Jr. and D.E. Johansen, Linear filtering for time-varying systems using measurements containing colored noise. IEEE Trans. Automat. Contr. AC-10 (1965) 4–10.
  18. E. Buckwar and M.G. Riedler, An exact stochastic hybrid model of excitable membranes including spatio-temporal evolution. J. Math. Biol. 63 (2011) 1051–1093.
  19. A. Calvia, Optimal control of continuous-time Markov chains with noise-free observation. SIAM J. Control Optim. 56 (2018) 2000–2035.
  20. C. Ceci and A. Gerardi, Filtering of a Markov jump process with counting observations. Appl. Math. Optim. 42 (2000) 1–18.
  21. C. Ceci and A. Gerardi, Nonlinear filtering equation of a jump process with counting observations. Acta Appl. Math. 66 (2001) 139–154.
  22. C. Ceci and A. Gerardi, Controlled partially observed jump processes: dynamics dependent on the observed history, in Vol. 47 of Proceedings of the Third World Congress of Nonlinear Analysts, Part 4 (Catania, 2000) (2001) 2449–2460.
  23. C. Ceci, A. Gerardi and P. Tardelli, Existence of optimal controls for partially observed jump processes. Acta Appl. Math. 74 (2002) 155–175.
  24. K. Colaneri, Z. Eksi, R. Frey and M. Szölgyenyi, Optimal liquidation under partial information with price impact. Preprint arXiv:1606.05079v4 (2019).
  25. F. Confortola and M. Fuhrman, Filtering of continuous-time Markov chains with noise-free observation and applications. Stochastics 85 (2013) 216–251.
  26. O.L.V. Costa and F. Dufour, Continuous Average Control of Piecewise Deterministic Markov Processes. Springer Briefs in Mathematics. Springer, New York (2013).
  27. O.L.V. Costa, F. Dufour and A.B. Piunovskiy, Constrained and unconstrained optimal discounted control of piecewise deterministic Markov processes. SIAM J. Control Optim. 54 (2016) 1444–1474.
  28. M.G. Crandall, H. Ishii and P.-L. Lions, User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. (N.S.) 27 (1992) 1–67.
  29. D. Crisan, M. Kouritzin and J. Xiong, Nonlinear filtering with signal dependent observation noise. Electron. J. Probab. 14 (2009) 1863–1883.
  30. M.H.A. Davis, Control of piecewise-deterministic processes via discrete-time dynamic programming, in Stochastic Differential Systems (Bad Honnef, 1985). Vol. 78 of Lecture Notes in Control and Information Sciences. Springer, Berlin (1986) 140–150.
  31. M.H.A. Davis and M. Farid, Piecewise-deterministic processes and viscosity solutions, in Stochastic Analysis, Control, Optimization and Applications. Systems & Control: Foundations & Applications. Birkhäuser Boston, Boston, MA (1999) 249–268.
  32. M.H.A. Davis, Markov Models and Optimization. Vol. 49 of Monographs on Statistics and Applied Probability. Chapman and Hall, London (1993).
  33. M.A.H. Dempster, Optimal control of piecewise deterministic Markov processes, in Applied Stochastic Analysis (London, 1989). Vol. 5 of Stochastics Monographs. Gordon and Breach, New York (1991) 303–325.
  34. R.J. Elliott, L. Aggoun and J.B. Moore, Hidden Markov Models: Estimation and Control. Vol. 29 of Applications of Mathematics (New York). Springer-Verlag, New York (1995).
  35. G. Fabbri, F. Gozzi and A. Swiech, Stochastic Optimal Control in Infinite Dimension: Dynamic Programming and HJB Equations, with a contribution by M. Fuhrman and G. Tessitore. Vol. 82 of Probability Theory and Stochastic Modelling. Springer, Cham (2017).
  36. W.H. Fleming and H.M. Soner, Controlled Markov Processes and Viscosity Solutions. Vol. 25 of Stochastic Modelling and Applied Probability, 2nd edn. Springer, New York (2006).
  37. L. Forwick, M. Schäl and M. Schmitz, Piecewise deterministic Markov control processes with feedback controls and unbounded costs. Acta Appl. Math. 82 (2004) 239–267.
  38. M. Jacobsen, Point Process Theory and Applications: Marked Point and Piecewise Deterministic Processes. Probability and Its Applications. Birkhäuser Boston, Inc., Boston, MA (2006).
  39. J. Jacod, Multivariate point processes: predictable projection, Radon-Nikodým derivatives, representation of martingales. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 31 (1974) 235–253.
  40. M. Joannides and F. LeGland, Nonlinear filtering with continuous time perfect observations and noninformative quadratic variation, in Proceedings of the 36th IEEE Conference on Decision and Control (1997) 1645–1650.
  41. I. Kharroubi and H. Pham, Feynman-Kac representation for Hamilton-Jacobi-Bellman IPDE. Ann. Probab. 43 (2015) 1823–1865.
  42. H. Körezlioğlu and W.J. Runggaldier, Filtering for nonlinear systems driven by nonwhite noises: an approximation scheme. Stoch. Stoch. Rep. 44 (1993) 65–102.
  43. G. Last and A. Brandt, Marked Point Processes on the Real Line: The Dynamic Approach. Probability and Its Applications (New York). Springer-Verlag, New York (1995).
  44. R.H. Martin, Jr., Differential equations on closed subsets of a Banach space. Trans. Am. Math. Soc. 179 (1973) 399–414.
  45. J.R. Norris, Markov Chains. Vol. 2 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge (1998).
  46. V. Renault, M. Thieullen and E. Trélat, Optimal control of infinite-dimensional piecewise deterministic Markov processes and application to the control of neuronal dynamics via Optogenetics. Netw. Heterog. Media 12 (2017) 417–459.
  47. L.C.G. Rogers and D. Williams, Diffusions, Markov Processes, and Martingales, Vol. 1: Foundations. Wiley Series in Probability and Mathematical Statistics, 2nd edn. John Wiley & Sons, Ltd., Chichester (1994).
  48. Y. Takeuchi and H. Akashi, Least-squares state estimation of systems with state-dependent observation noise. Automatica J. IFAC 21 (1985) 303–313.
  49. D. Vermes, Optimal control of piecewise deterministic Markov process. Stochastics 14 (1985) 165–207.
  50. J.T. Winter, Optimal Control of Markovian Jump Processes with Different Information Structures. Ph.D. thesis, Universität Ulm (2008).
  51. J. Xiong, An Introduction to Stochastic Filtering Theory. Oxford University Press, New York (2008).
  52. A.A. Yushkevich, On reducing a jump controllable Markov model to a model with discrete time. Theory Probab. Appl. 25 (1980) 58–69.
