Mathematics > Optimization and Control
[Submitted on 26 Aug 2024 (v1), last revised 27 Aug 2024 (this version, v2)]
Title: A Derivative-Free Martingale Neural Network SOC-MartNet for the Hamilton-Jacobi-Bellman Equations in Stochastic Optimal Controls
Abstract: In this paper, we propose an efficient derivative-free version of the martingale neural network SOC-MartNet, proposed in Cai et al. [2], for solving high-dimensional Hamilton-Jacobi-Bellman (HJB) equations and stochastic optimal control problems (SOCPs) with controls on both drift and volatility. Solving the HJB equation consists of two steps: (1) finding the optimal control from the value function, and (2) deriving the value function from a linear PDE characterized by the optimal control. The linear PDE is recast into the weak form of a new martingale formulation, derived from the original SOC-MartNet, in which all temporal and spatial derivatives are replaced by a univariate, first-order random finite difference approximation, giving the derivative-free version of SOC-MartNet. The optimal feedback control is then identified by minimizing the mean of the value function, thereby avoiding pointwise minimization of the Hamiltonian. Finally, the optimal control and value function are approximated by neural networks trained via adversarial learning using the derivative-free formulation. This method eliminates the reliance on automatic differentiation for computing temporal and spatial derivatives, offering significant efficiency gains in solving high-dimensional HJB equations and SOCPs.
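To illustrate the kind of operator the abstract refers to, the sketch below shows one common form of a univariate, first-order randomized finite difference: a single forward difference along a random unit direction used as an unbiased (up to O(delta) bias) single-sample estimate of the drift term b·∇v. This is only an assumption about the general idea, not the paper's actual construction; the function names (value_fn, random_fd_drift_term) and the stand-in value function are hypothetical.

```python
# Minimal sketch (not the paper's exact operator): estimate b·∇v(t, x) with a
# single forward difference along a random unit direction, avoiding automatic
# differentiation of the value function.
import numpy as np

def value_fn(t, x):
    # Hypothetical stand-in for a neural-network value function v(t, x).
    return np.sin(t) * np.sum(x**2)

def random_fd_drift_term(v, t, x, b, delta=1e-3, rng=None):
    """Single-sample estimate of b·∇v(t, x).

    With xi uniform on the unit sphere in R^d, E[xi xi^T] = I/d, so
    E[ d * (∇v·xi) * (b·xi) ] = b·∇v; the directional derivative ∇v·xi is
    replaced by a first-order forward difference with step delta.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    xi = rng.standard_normal(d)
    xi /= np.linalg.norm(xi)                       # random unit direction
    dv = (v(t, x + delta * xi) - v(t, x)) / delta  # ≈ ∇v(t, x)·xi
    return d * dv * np.dot(b, xi)

# Usage: average many single-sample estimates to reduce the Monte Carlo variance.
x = np.ones(10)
b = 0.1 * np.ones(10)
est = np.mean([random_fd_drift_term(value_fn, 0.5, x, b) for _ in range(5000)])
print(est)  # ≈ b·∇v = 0.1 * sin(0.5) * 2 * sum(x) ≈ 0.959
```

In the paper's setting such single-direction differences would be evaluated inside the weak (martingale) formulation rather than averaged pointwise as above; the averaging here is only to make the toy example verifiable.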
Submission history
From: Wei Cai
[v1] Mon, 26 Aug 2024 16:27:26 UTC (2,056 KB)
[v2] Tue, 27 Aug 2024 03:21:16 UTC (2,056 KB)