Research reports – Seminar for Applied Mathematics | ETH Zurich

Research reports

Deep Operator Network Approximation Rates for Lipschitz Operators

by Ch. Schwab and A. Stein and J. Zech

(Report number 2023-30)

Abstract
We establish universality and expression rate bounds for a class of neural Deep Operator Networks (DON) emulating Lipschitz (or Hölder) continuous maps \(\mathcal G:\mathcal X\to\mathcal Y\) between (subsets of) separable Hilbert spaces \(\mathcal X\), \(\mathcal Y\). The DON architecture considered uses linear encoders \(\mathcal E\) and decoders \(\mathcal D\) via (biorthogonal) Riesz bases of \(\mathcal X\), \(\mathcal Y\), and an approximator network of an infinite-dimensional, parametric coordinate map that is Lipschitz continuous on the sequence space \(\ell^2(\mathbb N)\). Unlike previous works ([Herrmann, Schwab and Zech: Neural and Spectral operator surrogates: construction and expression rate bounds, SAM Report, 2022], [Marcati and Schwab: Exponential Convergence of Deep Operator Networks for Elliptic Partial Differential Equations, SAM Report, 2022]), which required, for example, \(\mathcal G\) to be holomorphic, the present expression rate results require mere Lipschitz (or Hölder) continuity of \(\mathcal G\). Key in the proof of the present expression rate bounds is the use of either super-expressive activations (e.g. [Yarotsky: Elementary superexpressive activations, Int. Conf. on ML, 2021], [Shen, Yang and Zhang: Neural network approximation: Three hidden layers are enough, Neural Networks, 2021], and the references there), which are inspired by the Kolmogorov superposition theorem, or of nonstandard NN architectures with standard (ReLU) activations as recently proposed in [Zhang, Shen and Yang: Neural Network Architecture Beyond Width and Depth, Adv. in Neural Inf. Proc. Sys., 2022]. We illustrate the abstract results by approximation rate bounds for emulation of a) solution operators for parametric elliptic variational inequalities, and b) Lipschitz maps of Hilbert-Schmidt operators.
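To fix ideas, the DON structure described in the abstract (linear encoder \(\mathcal E\), coordinate-map approximator network \(\mathcal A\), linear decoder \(\mathcal D\), so that \(\mathcal G \approx \mathcal D\circ\mathcal A\circ\mathcal E\)) can be sketched in a few lines of NumPy. This is our own minimal illustration, not code from the paper: the cosine bases, dimensions, and random (untrained) ReLU network below are assumptions chosen only to show the composition of the three maps.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 16   # retained input-basis coefficients (encoder dimension); our choice
N = 16   # output-basis coefficients (decoder dimension); our choice
K = 64   # grid points sampling the input function

x = np.linspace(0.0, 1.0, K)

# Encoder E: a linear map taking function samples to the first M coefficients
# in a (here: cosine) basis -- a stand-in for the Riesz-basis encoder.
basis_in = np.array([np.cos(np.pi * m * x) for m in range(M)])   # (M, K)

def encode(u_samples):
    return basis_in @ u_samples / K  # approximate L2 inner products

# Approximator A: a small ReLU network acting on the coefficient vector.
# Weights are random here, since only the architecture is illustrated.
W1 = rng.standard_normal((32, M)); b1 = rng.standard_normal(32)
W2 = rng.standard_normal((N, 32)); b2 = rng.standard_normal(N)

def approximate(c):
    return W2 @ np.maximum(W1 @ c + b1, 0.0) + b2

# Decoder D: a linear map synthesizing the output function from its
# coefficients in an output basis.
basis_out = np.array([np.cos(np.pi * n * x) for n in range(N)])  # (N, K)

def decode(c):
    return c @ basis_out

def don(u_samples):
    """Full surrogate: G(u) ~ (D o A o E)(u), sampled on the grid."""
    return decode(approximate(encode(u_samples)))

u = np.sin(2.0 * np.pi * x)  # an example input function
v = don(u)                   # surrogate output on the same grid
```

In the paper's setting the approximator emulates an infinite-dimensional parametric coordinate map on \(\ell^2(\mathbb N)\); the truncation to \(M\), \(N\) coefficients above is the finite-dimensional surrogate whose expression rates the report quantifies.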

Keywords: Neural Networks, Operator Learning, Curse of Dimensionality, Lipschitz Continuous Operators

BibTeX
@Techreport{SSZ23_1067,
  author = {Ch. Schwab and A. Stein and J. Zech},
  title = {Deep Operator Network Approximation Rates for Lipschitz Operators},
  institution = {Seminar for Applied Mathematics, ETH Z{\"u}rich},
  number = {2023-30},
  address = {Switzerland},
  url = {https://www.sam.math.ethz.ch/sam_reports/reports_final/reports2023/2023-30.pdf},
  year = {2023}
}

Disclaimer
© Copyright for documents on this server remains with the authors. Copies of these documents, made by electronic or mechanical means including information storage and retrieval systems, may only be employed for personal use. The administrators respectfully request that authors inform them when any paper is published, to avoid copyright infringement. Note that unauthorised copying of copyright material is illegal and may lead to prosecution. Neither the administrators nor the Seminar for Applied Mathematics (SAM) accept any liability in this respect. The most recent version of a SAM report may differ in formatting and style from the published journal version. Do reference the published version if possible (see SAM Publications).