Weekly Bulletin

The FIM publishes a newsletter, the FIM Weekly Bulletin, which lists a selection of the mathematics seminars and lectures taking place at ETH Zurich and at the University of Zurich. It is sent by e-mail every Tuesday during the semester and can also be accessed on this website at any time.

Subscribe to the Weekly Bulletin

 

Monday, 6 June
Time: 17:00–18:00
Speaker: Fatima-Ezzahra Jabiri (University College London)
Seminar: GAuS Seminar
Title: On the stability of trapped timelike geodesics in non-vacuum black hole spacetimes
Location: Online via Zoom

Abstract: The center of most galaxies can be described by a black hole with matter orbiting around it. In the context of relativistic kinetic theory, the Vlasov matter model is used to describe the center of galaxies, where the stars play the role of gas particles and collisions among them are neglected, so that gravity is the only interaction taken into account. In this setting, stars are assumed to move along future-directed timelike geodesics in a given spacetime. In this talk, we shall be interested in the final states of such self-gravitating systems. These can be described by stationary black hole solutions to the so-called Einstein-Vlasov system. More precisely, I will show a stability result for trapped timelike geodesics and discuss the ideas behind the construction of these final states.
Tuesday, 7 June
Time: 10:15–11:15
Speaker: Dr. Spencer Frei (University of California, Berkeley)
Seminar: DACO Seminar
Title: Benign Overfitting without Linearity
Location: HG G 19.2

Abstract: Deep learning has revealed a surprising statistical phenomenon: the possibility of benign overfitting. Experiments have revealed that trained neural networks are capable of simultaneously (1) overfitting to datasets that have substantial amounts of random label noise and (2) generalizing well to unseen data, a behavior that is inconsistent with the familiar bias-variance tradeoff in classical statistics. In this talk we investigate this phenomenon theoretically for two-layer neural networks trained by gradient descent on the cross-entropy loss. We assume the data comes from well-separated class-conditional distributions and allow for a constant fraction of the training labels to be corrupted by an adversary. We show that in this setting, neural networks indeed exhibit benign overfitting: despite the non-convex nature of the optimization problem, the empirical risk is driven to zero, overfitting the noisy labels; and as opposed to the classical intuition, the networks simultaneously generalize near-optimally. In contrast to previous works on benign overfitting that require linear or kernel-based predictors, our analysis holds in a setting where both the model and learning dynamics are fundamentally nonlinear. This talk is based on joint work with Niladri Chatterji and Peter Bartlett.
Time: 13:15–14:45
Speaker: Dr. Tuomas Tajakka (Stockholm University)
Seminar: Oberseminar: Algebraische Geometrie
Title: Projectivity of good moduli spaces of quiver representations
Location: Y27 H 12

Abstract: Quiver representations and their moduli spaces are central objects of study in algebraic geometry and representation theory. In 1994, King constructed these moduli spaces as projective varieties using GIT. The goal of this talk is to give a new construction avoiding GIT, instead utilizing the machinery of algebraic stacks and their good moduli spaces. Using results of Alper--Halpern-Leistner--Heinloth, we show that the moduli stack of semistable quiver representations admits a proper good moduli space, on which we exhibit an ample determinantal line bundle. We also obtain effective bounds for global generation of this bundle. Joint work with Pieter Belmans, Chiara Damiolini, Hans Franzen, Victoria Hoskins, and Svetlana Makarova.
Wednesday, 8 June
Time: 13:30–15:30
Speaker: Yasha Eliashberg (Stanford University, ETH-ITS)
Location: CLV B 4, ITS building, Clausiusstrasse
Thursday, 9 June
— no events scheduled —
Friday, 10 June
— no events scheduled —