ETH-FDS seminar series

More information about ETH Foundations of Data Science can be found here


Please subscribe here if you would like to be notified about these events via e-mail. You can also subscribe to the iCal/ics calendar.

Autumn Semester 2021


ETH-FDS seminar

Title Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes
Speaker, Affiliation Nathan Kallus, Cornell University, New York
Date, Time 23 September 2021, 16:15-17:15
Location HG F 3
Abstract Contextual bandit problems are the primary way to model the inherent tradeoff between exploration and exploitation in dynamic personalized decision making in healthcare, marketing, revenue management, and beyond. Naturally, the tradeoff (that is, the optimal rate of regret) depends on how complex the underlying learning problem is -- how much can observing reward in one context tell us about mean rewards in another -- but this obvious-seeming relationship is not supported by the current theory. To characterize it more precisely we study a nonparametric contextual bandit problem where the expected reward functions belong to a Hölder class with smoothness parameter β (roughly meaning they are β-times differentiable). We show how this interpolates between two extremes that were previously studied in isolation: non-differentiable bandits (β ≤ 1), where rate-optimal regret is achieved by running separate non-contextual bandits in different context regions, and parametric-response bandits (β = ∞), where rate-optimal regret can be achieved with minimal or no exploration due to infinite extrapolatability from one context to another. We develop a novel algorithm that carefully adjusts to any smoothness setting in between and we prove its regret is rate-optimal by establishing matching upper and lower bounds, recovering the existing results at the two extremes. In this sense, our work bridges the gap between the existing literature on parametric and nondifferentiable contextual bandit problems and between bandit algorithms that exclusively use global or local information, shedding light on the crucial interplay of complexity and regret in dynamic decision making. Paper: https://arxiv.org/abs/1909.02553
Documents: Video, N. Kallus, ETH-FDS talk on 23 September 2021
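The abstract's β ≤ 1 regime, where rate-optimal regret is achieved by running separate non-contextual bandits in different context regions, can be illustrated with a toy sketch: partition the context space into bins and run an independent UCB1 bandit inside each bin. Everything below (the reward functions, bin count, noise level) is a made-up example, not the algorithm from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins, n_arms, horizon = 5, 2, 20000

# Hypothetical smooth mean-reward functions for two arms over x in [0, 1].
def mean_reward(arm, x):
    return 0.5 + (0.3 * np.sin(3 * x) if arm == 0 else 0.3 * (x - 0.5))

counts = np.zeros((n_bins, n_arms))
sums = np.zeros((n_bins, n_arms))
regret = 0.0

for t in range(1, horizon + 1):
    x = rng.uniform()                      # context ~ Uniform[0, 1]
    b = min(int(x * n_bins), n_bins - 1)   # which context bin x falls in
    if counts[b].min() == 0:               # play each arm once per bin first
        a = int(counts[b].argmin())
    else:                                  # UCB1 index, local to this bin
        ucb = sums[b] / counts[b] + np.sqrt(2 * np.log(t) / counts[b])
        a = int(ucb.argmax())
    r = mean_reward(a, x) + 0.1 * rng.normal()
    counts[b, a] += 1
    sums[b, a] += r
    regret += max(mean_reward(0, x), mean_reward(1, x)) - mean_reward(a, x)

print(f"cumulative regret after {horizon} rounds: {regret:.1f}")
```

Because each bin ignores what the others learn, this strategy pays a binning bias wherever the reward functions cross inside a bin; exploiting smoothness across bins (the β > 1 regimes the talk interpolates through) is what reduces that cost.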

ETH-FDS seminar

Title Explicit loss asymptotics in the gradient descent training of neural networks
Speaker, Affiliation Dmitry Yarotsky, Skoltech Faculty, Russia
Date, Time 11 November 2021, 16:15-17:15
Location
Abstract We show that the learning trajectory of a wide neural network in a lazy training regime can be described by an explicit asymptotic formula at large training times. Specifically, the leading term in the asymptotic expansion of the loss behaves as a power law $L(t) \sim C t^{-\xi}$ with exponent $\xi$ expressed only through the data dimension, the smoothness of the activation function, and the class of functions being approximated. The constant C can also be found analytically. Our results are based on spectral analysis of the integral NTK operator. Importantly, the techniques we employ do not require a specific form of the data distribution, for example Gaussian, thus making our findings sufficiently universal. This is joint work with M. Velikanov.
Documents: Video, D. Yarotsky, ETH-FDS talk on 11 November 2021
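The power law $L(t) \sim C t^{-\xi}$ stated in the abstract can be recovered from an empirical loss curve by linear regression in log-log coordinates, since $\log L = \log C - \xi \log t$. A minimal sketch on a synthetic curve (the values of C and ξ below are made up, not quantities from the talk):

```python
import numpy as np

# Synthetic loss curve obeying an exact power law L(t) = C * t**(-xi).
C_true, xi_true = 2.0, 0.75
t = np.arange(1, 10001, dtype=float)
loss = C_true * t ** (-xi_true)

# log L = log C - xi * log t  ->  fit a line to (log t, log L).
slope, intercept = np.polyfit(np.log(t), np.log(loss), 1)
xi_hat, C_hat = -slope, np.exp(intercept)

print(f"estimated xi = {xi_hat:.3f}, estimated C = {C_hat:.3f}")
```

On real training curves the fit should only be applied to the large-t tail, since the power law describes the leading asymptotic term, not early training.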

ETH-FDS seminar

Title An Approachability Perspective to Fair Online Learning
Speaker, Affiliation Christophe Giraud, Paris Saclay University
Date, Time 2 December 2021, 16:15-17:15
Location HG F 3
Abstract Machine learning is ubiquitous in daily decisions, and producing fair and non-discriminatory predictions is a major societal concern. Various criteria of fairness have been proposed in the literature, and we will start with a short (biased!) tour of fairness concepts in machine learning. Many decision problems are of a sequential nature, and efforts are needed to better handle such settings. We consider a general setting of fair online learning with stochastic sensitive and non-sensitive contexts. We propose a unified approach for fair learning in this adversarial setting, by interpreting this problem as an approachability problem. This point of view offers a generic way to produce algorithms and theoretical results. Adapting Blackwell's approachability theory, we exhibit a general necessary and sufficient condition for some learning objectives to be compatible with some fairness constraints, and we characterize the optimal trade-off between the two when they are not compatible. Joint work with E. Chzhen and G. Stoltz.
Documents: Video, C. Giraud, ETH-FDS talk on 2 December 2021
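One of the fairness criteria the abstract alludes to is demographic parity: decisions should be accepted at (nearly) equal rates across groups defined by the sensitive context. A minimal sketch of measuring the parity gap of a decision rule on synthetic data (the data, threshold, and bias term are illustrative stand-ins, not the talk's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000

sensitive = rng.integers(0, 2, size=n)          # binary sensitive context
score = rng.uniform(size=n) + 0.1 * sensitive   # score slightly biased by group
decision = (score > 0.55).astype(int)           # thresholded binary decision

# Demographic parity gap: |P(decision = 1 | s = 1) - P(decision = 1 | s = 0)|
rates = [decision[sensitive == s].mean() for s in (0, 1)]
gap = abs(rates[1] - rates[0])

print(f"acceptance rates: {rates[0]:.3f} vs {rates[1]:.3f}, parity gap: {gap:.3f}")
```

In the sequential setting of the talk, such a constraint must hold for decisions made online, which is what motivates casting fair learning as a Blackwell approachability problem rather than a one-shot constraint check.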