ESAIM: COCV
Volume 27, 2021
Article Number: 85
Number of pages: 19
DOI: https://doi.org/10.1051/cocv/2021081
Published online: 27 July 2021
A policy iteration method for mean field games
1 Dipartimento di Matematica e Fisica, Università degli Studi Roma Tre, Largo S. L. Murialdo 1, 00146 Roma, Italy.
2 SBAI, Sapienza Università di Roma, via A. Scarpa 14, 00161 Roma, Italy.
3 Dipartimento di Matematica, Università di Padova, via Trieste 63, 35121 Padova, Italy.
* Corresponding author: fabio.camilli@uniroma1.it
Received: 21 December 2020
Accepted: 9 July 2021
The policy iteration method is a classical algorithm for solving optimal control problems. In this paper, we introduce a policy iteration method for Mean Field Games systems and study the convergence of this procedure to a solution of the problem. We also introduce suitable discretizations to numerically solve both stationary and evolutive problems. We prove the convergence of the policy iteration method for the discrete problem and study the performance of the proposed algorithm on examples in dimensions one and two.
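To fix ideas on the classical algorithm the paper generalizes, the following is a minimal sketch of policy iteration on a toy discrete Markov decision process (not the Mean Field Games scheme of the paper; the 3-state, 2-action model and all numbers below are made up for illustration). It alternates the two steps that the MFG version also alternates: policy evaluation, which solves a *linear* system for the value of the current policy, and greedy policy improvement.

```python
import numpy as np

# Toy MDP (hypothetical data): P[a][s, t] = transition probability
# from state s to t under action a; R[s, a] = one-step reward.
n_states, n_actions = 3, 2
gamma = 0.9  # discount factor

P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]],  # action 1
])
R = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.5]])

policy = np.zeros(n_states, dtype=int)  # initial policy guess
for _ in range(100):
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi,
    # a linear system, unlike the nonlinear Bellman equation.
    P_pi = P[policy, np.arange(n_states), :]
    r_pi = R[np.arange(n_states), policy]
    v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    # Policy improvement: greedy update against the current value.
    Q = R + gamma * np.einsum('ast,t->sa', P, v)
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break  # fixed point: the policy is optimal
    policy = new_policy

print("optimal policy:", policy)
print("value function:", v)
```

In the MFG setting of the paper, the evaluation step becomes the solution of a linear Hamilton-Jacobi-Bellman equation coupled with a Fokker-Planck equation for the fixed control, and the improvement step updates the control pointwise.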
Mathematics Subject Classification: 49N70 / 35Q91 / 91A16 / 49M15
Key words: Mean Field Games / policy iteration / convergence / numerical methods
© The authors. Published by EDP Sciences, SMAI 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.