Efficient approximation of high-dimensional functions with deep neural networks
by P. Cheridito and A. Jentzen and F. Rossmannek
(Report number 2019-64)
Abstract
In this paper, we develop an approximation theory for deep neural networks that is based on the concept of a catalog network.
Catalog networks are generalizations of standard neural networks in which the nonlinear activation functions can vary from layer to layer as long as they are chosen from a predefined catalog of continuous functions.
As such, catalog networks constitute a rich family of continuous functions.
We show that, under appropriate conditions on the catalog, catalog networks can be efficiently approximated by standard neural networks, and we provide precise estimates of the number of parameters needed to achieve a given approximation accuracy.
We apply the theory of catalog networks to demonstrate that neural networks can overcome the curse of dimensionality in various high-dimensional approximation problems.
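The idea of a catalog network as described above can be illustrated with a short sketch: a feedforward network in which each layer draws its activation function from a predefined catalog of continuous functions. The catalog entries, dimensions, and weights below are hypothetical illustrations, not the conditions or constructions used in the report.

```python
import numpy as np

# A hypothetical catalog of continuous activation functions. The report
# imposes specific conditions on the catalog; this dictionary is only
# meant to convey the structure.
CATALOG = {
    "relu": lambda x: np.maximum(x, 0.0),
    "tanh": np.tanh,
    "identity": lambda x: x,
}

def catalog_network(weights, biases, activations, x):
    """Evaluate a feedforward network whose activation may vary per layer.

    weights     -- list of (d_out, d_in) matrices
    biases      -- list of length-d_out vectors
    activations -- list of keys into CATALOG, one per layer
    """
    for W, b, name in zip(weights, biases, activations):
        x = CATALOG[name](W @ x + b)
    return x

# Tiny illustrative evaluation with random fixed weights.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((1, 4))]
bs = [np.zeros(4), np.zeros(1)]
y = catalog_network(Ws, bs, ["relu", "identity"], np.ones(3))
```

With all activations set to the same catalog entry, this reduces to a standard neural network, which is the sense in which catalog networks generalize them.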
BibTeX
@Techreport{CJR19_868,
  author      = {P. Cheridito and A. Jentzen and F. Rossmannek},
  title       = {Efficient approximation of high-dimensional functions with deep neural networks},
  institution = {Seminar for Applied Mathematics, ETH Z{\"u}rich},
  number      = {2019-64},
  address     = {Switzerland},
  url         = {https://www.sam.math.ethz.ch/sam_reports/reports_final/reports2019/2019-64.pdf},
  year        = {2019}
}
Disclaimer
© Copyright for documents on this server remains with the authors.
Copies of these documents made by electronic or mechanical means, including
information storage and retrieval systems, may only be employed for
personal use. The administrators respectfully request that authors
inform them when any paper is published, to avoid copyright infringement.
Note that unauthorised copying of copyright material is illegal and may
lead to prosecution. Neither the administrators nor the Seminar for
Applied Mathematics (SAM) accept any liability in this respect.
The most recent version of a SAM report may differ in formatting and style
from the published journal version. Please reference the published version if
possible (see SAM Publications).