Discrete-Time Approximations of Controlled Diffusions with Infinite Horizon Discounted and Average Cost
2025 · Open Access
DOI: https://doi.org/10.48550/arxiv.2502.05596
We present discrete-time approximations of optimal control policies for infinite horizon discounted/ergodic control problems for controlled diffusions in $\mathbb{R}^d$. In particular, our objective is to show near optimality, in the true controlled diffusion model (as the sampling period approaches zero), of optimal policies designed from the approximating discrete-time controlled Markov chain model for the discounted/ergodic optimal control problems. To this end, we first construct suitable discrete-time controlled Markov chain models for which one can compute optimal policies and optimal values via several methods (such as value iteration, the convex analytic method, reinforcement learning, etc.). Then, using a weak convergence technique, we show that the optimal policy designed for the discrete-time Markov chain model is near-optimal for the controlled diffusion model as the discrete-time model approaches the continuous-time model. This provides a practical approach for finding near-optimal control policies for controlled diffusions. Our conditions complement existing results in the literature, which have been arrived at via either probabilistic or PDE-based methods.
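To make the construction concrete, here is a minimal sketch (not the paper's construction) of the general recipe the abstract describes: a scalar controlled diffusion $dX_t = b(X_t,U_t)\,dt + \sigma(X_t,U_t)\,dW_t$ is approximated by a discrete-time controlled Markov chain on a grid using an Euler-Maruyama transition kernel with sampling period $h$, and the resulting discounted problem is solved by value iteration. The drift, diffusion coefficient, running cost, grids, and truncation below are all hypothetical choices for illustration only.

```python
import numpy as np

# Sketch: discrete-time Markov chain approximation of a scalar controlled
# diffusion, solved by value iteration for the discounted cost.
# All model functions and discretization choices are illustrative assumptions.

h = 0.05                            # sampling period (the paper studies h -> 0)
beta = 1.0                          # continuous-time discount rate
gamma = np.exp(-beta * h)           # induced discrete-time discount factor

xs = np.linspace(-3.0, 3.0, 121)    # truncated, discretized state space
us = np.linspace(-1.0, 1.0, 21)     # compact action space

def b(x, u):      # hypothetical drift
    return -x + u

def sigma(x, u):  # hypothetical diffusion coefficient
    return 0.5 + 0.0 * x

def c(x, u):      # hypothetical running cost
    return x**2 + 0.1 * u**2

# One-step transition matrices P[a, i, j] ~ P(X_{k+1} = xs[j] | X_k = xs[i], U_k = us[a]):
# Euler-Maruyama mean/variance, with mass spread over grid points by a Gaussian kernel.
P = np.zeros((len(us), len(xs), len(xs)))
for a, u in enumerate(us):
    mean = xs + h * b(xs, u)                        # (nx,)
    std = np.sqrt(h) * sigma(xs, u)                 # (nx,)
    dens = np.exp(-0.5 * ((xs[None, :] - mean[:, None]) / std[:, None]) ** 2)
    P[a] = dens / dens.sum(axis=1, keepdims=True)   # normalize rows to probabilities

# Stage cost c(x, u) * h, consistent with integrating the running cost over one period.
C = np.array([[c(x, u) * h for u in us] for x in xs])   # shape (nx, nu)

# Value iteration for the discounted discrete-time problem.
V = np.zeros(len(xs))
for _ in range(2000):
    Q = C + gamma * np.einsum('aij,j->ia', P, V)    # Q[i, a] = c + gamma * E[V(next)]
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = us[Q.argmin(axis=1)]       # near-optimal stationary policy on the grid
print(f"V(0) = {V[len(xs) // 2]:.4f}, u*(0) = {policy[len(xs) // 2]:.3f}")
```

In the spirit of the paper's result, the stationary policy computed this way would then be applied to the original continuous-time diffusion, with near optimality expected as the sampling period h shrinks (subject to the paper's conditions); the grid truncation used here is an additional approximation not addressed in this sketch.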
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2502.05596
- PDF: https://arxiv.org/pdf/2502.05596
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4407385158