ReLU Neural Network Galerkin BEM
by R. Aylwin and F. Henriquez and Ch. Schwab
(Report number 2022-01)
Abstract
We introduce Neural Network (NN for short) approximation architectures
for the numerical solution of Boundary Integral Equations (BIEs for short).
We exemplify the proposed NN approach for
the boundary reduction of the potential problem in two spatial dimensions.
We adopt an approach based on a Galerkin formulation,
set in polygonal domains with a finite number of straight sides.
Trial spaces used in the Galerkin discretization of the BIEs are built
by using NNs that, in turn, employ the so-called
Rectified Linear Unit (ReLU) as the underlying
activation function.
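For orientation, a standard boundary reduction of this kind (a textbook formulation, not quoted from the report) is Symm's weakly singular integral equation for the Laplace Dirichlet problem: given boundary data \(g\) on the boundary \(\Gamma\) of the polygon, find the density \(w \in H^{-1/2}(\Gamma)\) with
\[
  (Vw)(x) := -\frac{1}{2\pi}\int_{\Gamma}\log|x-y|\,w(y)\,\mathrm{d}s_y = g(x),
  \qquad x \in \Gamma .
\]
Its Galerkin discretization seeks \(w_N\) in a trial set \(V_N \subset H^{-1/2}(\Gamma)\) such that \(\langle V w_N, v\rangle = \langle g, v\rangle\) for all test functions \(v\); in the present setting \(V_N\) consists of the functions realizable by ReLU-NNs of a given architecture and is a nonlinear set rather than a linear subspace.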
The ReLU-NNs used to approximate the solutions
to the BIEs depend nonlinearly on the parameters
characterizing the NNs themselves.
Consequently, computing a numerical solution of a BIE
by means of ReLU-NNs boils down
to fine-tuning these parameters during network training.
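For a shallow network, this parametrization reads (a minimal sketch, assuming a single hidden layer and a scalar arclength parameter \(t\) on the boundary)
\[
  u_\theta(t) \;=\; \sum_{i=1}^{N} c_i\,\rho(a_i t + b_i),
  \qquad \rho(s) = \max\{0, s\},
  \qquad \theta = (a_i, b_i, c_i)_{i=1}^{N},
\]
which is linear in the outer weights \(c_i\) but nonlinear in the inner parameters \((a_i, b_i)\); a Galerkin solution is thus obtained by (approximately) minimizing a loss \(\theta \mapsto \mathcal{L}(\theta)\) rather than by solving a linear system.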
We argue that ReLU-NNs of fixed depth
and with a variable width allow us to recover well-known approximation
rate results for the standard Galerkin Boundary Element Method (BEM).
This observation hinges on well-known results concerning the regularity
of the solutions of the BIEs on Lipschitz, polygonal boundaries,
i.e., accounting for the effect of corner singularities, and
on the expressive power of ReLU-NNs
over different classes of functions.
We prove that shallow ReLU-NNs, i.e., networks
having a fixed, moderate depth
but with increasing width, can achieve
optimal order algebraic convergence rates.
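A standard identity behind such expressivity results (stated here for orientation; the report's precise statements are not reproduced) is that every continuous, piecewise linear function on a bounded interval with breakpoints \(x_1 < \dots < x_M\) admits the exact shallow-ReLU representation
\[
  v(x) \;=\; \alpha + \beta x + \sum_{i=1}^{M} c_i\,\max\{0,\, x - x_i\},
\]
where, on a bounded interval, the affine part \(\alpha + \beta x\) is itself realizable with two additional ReLU units. Shallow ReLU-NNs of width \(O(M)\) therefore contain the lowest-order boundary element spaces on arbitrary, in particular graded, meshes and can inherit their singularity-adapted algebraic convergence rates.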
We propose novel loss functions for NN training that are built from
computable, local, residual-based a posteriori error estimators
for the ReLU-NN approximation of BIEs.
We find that weighted residual estimators are
reliable without further assumptions
on the quasi-uniformity of the underlying mesh.
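Schematically, such a training loss can take the form (a sketch of a weighted-residual estimator in the spirit of Carstensen and Stephan for Symm's equation; the exact weights and norms used in the report may differ)
\[
  \mathcal{L}(\theta) \;=\; \sum_{T \in \mathcal{T}} \eta_T(\theta)^2,
  \qquad
  \eta_T(\theta)^2 \;=\; h_T \,\bigl\|\partial_s\bigl(g - V u_\theta\bigr)\bigr\|_{L^2(T)}^2,
\]
where \(T\) ranges over the elements of the boundary mesh \(\mathcal{T}\), \(h_T\) is the local mesh width, and \(\partial_s\) is the arclength derivative. Each \(\eta_T\) is computable from the NN output, and reliability means that the true error is bounded, up to a constant, by the total estimator.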
The proposed framework allows us to leverage
state-of-the-art computational deep learning technologies,
such as TensorFlow and TPUs, for the numerical solution of BIEs
using ReLU-NNs.
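As a minimal TensorFlow sketch of this workflow (all names are illustrative; the toy residual below stands in for the actual boundary-integral operator, which would be evaluated by quadrature):

import numpy as np
import tensorflow as tf

# Shallow ReLU network u_theta on the parametrized boundary t in [0, 1).
width = 64  # hidden-layer width N
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(width, activation="relu"),
    tf.keras.layers.Dense(1),  # linear read-out
])

# Quadrature nodes and a toy right-hand side (illustrative assumption).
t = tf.constant(np.linspace(0.0, 1.0, 256, endpoint=False)[:, None],
                dtype=tf.float32)
g = tf.sin(2.0 * np.pi * t)

opt = tf.keras.optimizers.Adam(learning_rate=1.0e-3)

@tf.function
def train_step():
    with tf.GradientTape() as tape:
        u = model(t)
        # Toy residual: the actual method would apply the boundary-integral
        # operator V to u_theta by quadrature and weight the residual
        # elementwise as in the a posteriori estimator sketched above.
        loss = tf.reduce_mean(tf.square(u - g))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(2000):
    loss = train_step()
print(f"final loss: {loss.numpy():.3e}")

Training then proceeds by gradient-based minimization of the loss over the network parameters; the same code can be run on TPUs under a tf.distribute strategy.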
Exploratory numerical experiments validate our theoretical
findings and indicate the viability of the proposed
ReLU-NN Galerkin BEM approach.
BibTeX
@Techreport{AHS22_989,
  author      = {R. Aylwin and F. Henriquez and Ch. Schwab},
  title       = {ReLU Neural Network Galerkin BEM},
  institution = {Seminar for Applied Mathematics, ETH Z{\"u}rich},
  number      = {2022-01},
  address     = {Switzerland},
  url         = {https://www.sam.math.ethz.ch/sam_reports/reports_final/reports2022/2022-01.pdf},
  year        = {2022}
}
Disclaimer
© Copyright for documents on this server remains with the authors.
Copies of these documents made by electronic or mechanical means, including
information storage and retrieval systems, may only be employed for
personal use. The administrators respectfully request that authors
inform them when any paper is published to avoid copyright infringement.
Note that unauthorised copying of copyright material is illegal and may
lead to prosecution. Neither the administrators nor the Seminar for
Applied Mathematics (SAM) accept any liability in this respect.
The most recent version of a SAM report may differ in formatting and style
from the published journal version. Please reference the published version
if possible (see SAM Publications).