Deep neural network approximations for Monte Carlo algorithms

by Ph. Grohs and A. Jentzen and D. Salimova

(Report number 2019-50)

Abstract
In the past few years deep artificial neural networks (DNNs) have been successfully employed in a large number of computational problems including, e.g., language processing, image recognition, fraud detection, and computational advertisement. Recently, it has also been proposed in the scientific literature to reformulate partial differential equations (PDEs) as stochastic learning problems and to employ DNNs together with stochastic gradient descent methods to approximate the solutions of such PDEs. There are also a few mathematical convergence results in the scientific literature which show that DNNs can approximate solutions of certain PDEs without the curse of dimensionality in the sense that the number of real parameters employed to describe the DNN grows at most polynomially both in the PDE dimension \(d \in \mathbb{N}\) and in the reciprocal of the prescribed approximation accuracy \(\varepsilon > 0\). One key argument in most of these results is, first, to employ a Monte Carlo approximation scheme which can approximate the solution of the PDE under consideration at a fixed space-time point without the curse of dimensionality and, thereafter, to prove that DNNs are flexible enough to mimic the behaviour of the employed approximation scheme. Having this in mind, one could aim for a general abstract result which shows under suitable assumptions that if a certain function can be approximated by any kind of (Monte Carlo) approximation scheme without the curse of dimensionality, then the function can also be approximated with DNNs without the curse of dimensionality. It is a key contribution of this article to take a first step in this direction. In particular, the main result of this paper, roughly speaking, shows that if a function can be approximated by means of some suitable discrete approximation scheme without the curse of dimensionality and if there exist DNNs which satisfy certain regularity properties and which approximate this discrete approximation scheme without the curse of dimensionality, then the function itself can also be approximated with DNNs without the curse of dimensionality. Moreover, for the number of real parameters used to describe such approximating DNNs we provide an explicit upper bound for the optimal exponent of the dimension \(d \in \mathbb{N}\) of the function under consideration as well as an explicit lower bound for the optimal exponent of the prescribed approximation accuracy \(\varepsilon > 0\). As an application of this result we derive that solutions of suitable Kolmogorov PDEs can be approximated with DNNs without the curse of dimensionality.
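
The following minimal Python sketch (illustrative only; it is not taken from the report, and the function name and test problem are our own choices) shows the kind of Monte Carlo scheme the abstract referss to: the Feynman-Kac representation of the d-dimensional heat equation turns the pointwise evaluation of \(u(t,x)\) into an expectation, which plain Monte Carlo sampling estimates without any spatial grid, so the cost for a fixed accuracy does not grow exponentially in \(d\).

import numpy as np

def monte_carlo_heat_solution(phi, x, t, num_samples=100_000, rng=None):
    """Estimate u(t, x) for the d-dimensional heat equation
    du/dt = (1/2) Laplacian(u), u(0, .) = phi, via the Feynman-Kac
    representation u(t, x) = E[phi(x + W_t)], where W is a standard
    d-dimensional Brownian motion."""
    rng = np.random.default_rng() if rng is None else rng
    # Sample W_t ~ N(0, t * I_d): num_samples independent d-dimensional draws.
    increments = np.sqrt(t) * rng.standard_normal((num_samples, x.shape[0]))
    # Plain Monte Carlo average; its root-mean-square error decays like
    # num_samples**(-1/2), with no spatial grid whose size grows with d.
    return float(np.mean(phi(x + increments)))

# Test problem: phi(y) = ||y||^2 gives the exact solution u(t, x) = ||x||^2 + d*t.
d, t = 100, 1.0
x = np.ones(d)
phi = lambda y: np.sum(y**2, axis=-1)
print(monte_carlo_heat_solution(phi, x, t))  # close to d + d*t = 200

The second step in the results described in the abstract is then to show that DNNs can emulate such an averaging scheme with only polynomially many parameters, which is what the main result of the paper makes precise in an abstract setting.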


BibTeX
@Techreport{GJS19_854,
  author = {Ph. Grohs and A. Jentzen and D. Salimova},
  title = {Deep neural network approximations for Monte Carlo algorithms},
  institution = {Seminar for Applied Mathematics, ETH Z{\"u}rich},
  number = {2019-50},
  address = {Switzerland},
  url = {https://www.sam.math.ethz.ch/sam_reports/reports_final/reports2019/2019-50.pdf},
  year = {2019}
}

Disclaimer
© Copyright for documents on this server remains with the authors. Copies of these documents made by electronic or mechanical means, including information storage and retrieval systems, may only be employed for personal use. The administrators respectfully request that authors inform them when any paper is published to avoid copyright infringement. Note that unauthorised copying of copyright material is illegal and may lead to prosecution. Neither the administrators nor the Seminar for Applied Mathematics (SAM) accept any liability in this respect. The most recent version of a SAM report may differ in formatting and style from the published journal version. Please reference the published version if possible (see SAM Publications).