Weekly Bulletin

The FIM publishes a newsletter, the FIM Weekly Bulletin, containing a selection of the mathematics seminars and lectures taking place at ETH Zurich and at the University of Zurich. It is sent by e-mail every Tuesday during the semester and can also be accessed on this website at any time.

Subscribe to the Weekly Bulletin

 

FIM Weekly Bulletin

Monday, 18 March
Time Speaker Title Location
16:15 - 17:15 Amie Wilkinson
The University of Chicago
Abstract
I will discuss a result with Bonatti and Crovisier from 2009 showing that the C^1 generic diffeomorphism f of a closed manifold has trivial centralizer; i.e. fg = gf implies that g is a power of f. I’ll discuss features of the C^1 topology that enable our proof (the analogous statement is open in general in the C^r topology, for r > 1). I’ll also discuss some features of the proof and some recent work, joint with Danijela Damjanovic and Disheng Xu, that attempts to tackle the non-generic case.
FIM Lecture
Dynamical asymmetry is C^1-typical
HG F 5
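Spelled out, the centralizer statement in the abstract above reads as follows (this is only a restatement of what is written there, for readers outside dynamics): for a closed manifold $M$,

\[
Z^1(f) = \{\, g \in \mathrm{Diff}^1(M) : fg = gf \,\},
\]

and a trivial centralizer means $Z^1(f) = \{ f^n : n \in \mathbb{Z} \}$; the result of Bonatti, Crovisier and Wilkinson says this holds for C^1-generic $f$.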
Tuesday, 19 March
Time Speaker Title Location
10:15 - 11:15 Prof. Dr. Matus Telgarsky
NYU, US
Abstract
The first half of this talk will describe the feature learning problem in deep learning optimization, its statistical consequences, and an approach to proving general theorems with a heavy reliance on normalization layers, which are common to all modern architectures but typically treated as an analytic nuisance. Theorems will cover two settings: concrete results for shallow networks, and abstract template theorems for general architectures. The shallow network results allow for globally maximal margins at the cost of large width and no further assumptions, while the general architecture theorems give convergence rates to KKT points for a new general class of architectures satisfying “partial lower-homogeneity”. The second half will be technical, demonstrating two core proof techniques. The first ingredient, essential to the shallow analysis, is a new mirror descent lemma, strengthening a beautiful idea discovered by Chizat and Bach. The second ingredient is the concept of “partial lower-homogeneity” and its consequences. Joint work with Danny Son; not currently on arXiv, but “coming soon”.
DACO Seminar
Feature learning, lower-homogeneity, and normalization layers
HG G 43
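The abstract above emphasizes normalization layers. As a generic illustration of what such a layer computes, here is a minimal NumPy sketch of standard layer normalization; it is not code from the speaker's work, and the names layer_norm, gamma, beta and eps are our illustrative choices for the learnable scale, shift and numerical stabilizer.

    import numpy as np

    def layer_norm(x, gamma, beta, eps=1e-5):
        """Normalize each row of x to zero mean and unit variance, then rescale.

        x: array of shape (batch, features); gamma, beta: arrays of shape (features,).
        """
        mean = x.mean(axis=-1, keepdims=True)
        var = x.var(axis=-1, keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + eps)   # normalized features
        return gamma * x_hat + beta               # learnable scale and shift

    # tiny usage example
    x = np.random.randn(4, 8)
    out = layer_norm(x, gamma=np.ones(8), beta=np.zeros(8))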
14:15 - 15:15 Christopher Criscitiello
EPFL, CH
Abstract
We consider the sensor network localization problem (also called metric multidimensional scaling): we observe some pairwise distances between n ground-truth points in R^d, and our goal is to recover this cloud of ground-truth points (up to translation and rotation). The corresponding optimization problem is nonconvex, and we show that it can have spurious local minima. However, inspired by numerical experiments, we argue that if one relaxes the problem by optimizing over clouds of n points in dimension k greater than d, then all second-order critical points of the problem are global minima. Specifically, we show this in two settings: (1) for arbitrary ground-truth points, when all pairwise distances are known and k = O(sqrt{n d}); and (2) for isotropic random ground-truth points, when most (but not necessarily all) pairwise distances are known and k = O(d log(n)). To the best of our knowledge, these are the first landscape results for this nonconvex version of sensor network localization.
DACO Seminar
The sensor network localization problem has benign landscape under mild rank relaxation
HG G 19.1
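To make the nonconvex problem in the abstract above concrete, here is a minimal NumPy sketch of the usual least-squares formulation over a cloud of n points in a relaxed dimension k; the function name localization_loss and the exact form of the objective are our illustrative assumptions and need not match the formulation used in the talk.

    import numpy as np

    def localization_loss(Y, observed):
        """Nonconvex objective for (relaxed) sensor network localization.

        Y: (n, k) candidate point cloud in dimension k (k >= d gives the relaxation).
        observed: dict mapping index pairs (i, j) to observed squared distances.
        """
        loss = 0.0
        for (i, j), dij_sq in observed.items():
            diff = Y[i] - Y[j]
            loss += (diff @ diff - dij_sq) ** 2   # squared residual on this pair
        return loss

    # toy usage: n = 5 ground-truth points in d = 2, relaxed to k = 3
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 2))
    observed = {(i, j): float(np.sum((X[i] - X[j]) ** 2))
                for i in range(5) for j in range(i + 1, 5)}
    Y0 = rng.standard_normal((5, 3))              # optimize over dimension k = 3
    print(localization_loss(Y0, observed))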
15:15 - 16:15 Prof. Dr. Melanie Rupflin
University of Oxford
Abstract
As the energy of any map $v$ from $S^2$ to $S^2$ is at least $4\pi\vert\deg(v)\vert$, with equality if and only if $v$ is a rational map, it is natural to ask whether maps with small energy defect $\delta_v = E(v) - 4\pi\vert\deg(v)\vert$ are necessarily close to a rational map. While such a rigidity statement turns out to be false for maps of general degree, we will prove that any map $v$ with small energy defect is essentially given by a collection of rational maps that describe the behaviour of $v$ at very different scales, and that the corresponding distance is controlled by a quantitative estimate of the form $\mathrm{dist}^2 \leq C\,\delta_v(1+\vert\log\delta_v\vert)$, which is indeed sharp.
Analysis Seminar
Sharp quantitative results for maps from $S^2$ to $S^2$ of general degree
HG G 43
16:00 - 17:00 Weiming Feng
ETH Institute for Theoretical Studies
Abstract
High-dimensional distributions have been extensively studied in different research areas. Examples include spin systems in physics and Markov random fields in machine learning. Sampling is a central computational task for high-dimensional distributions, which requires the algorithm to generate random samples from the input distribution in polynomial time. One of the most successful sampling algorithms is Markov chain Monte Carlo (MCMC). I will introduce some results on the theoretical analysis of MCMC algorithms. Recently, some new alternative sampling techniques have been proposed, for example resampling methods and sampling algorithms based on projection. The new techniques have some unique features compared to MCMC methods. I will also give some applications of sampling algorithms.

More information: https://eth-its.ethz.ch/activities/its-fellows--seminar/Weiming-Feng.html
ETH-ITS Fellows' Seminar
Sampling Algorithms for High-Dimensional Distributions
CLV B 4
Clausiusstrasse 47
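As a generic example of the MCMC algorithms mentioned in the abstract above (illustrative only, not one of the speaker's new techniques), here is a minimal Metropolis sampler for an Ising-type spin system on a cycle; beta denotes the inverse temperature.

    import numpy as np

    def metropolis_ising_cycle(n, beta, steps, rng=None):
        """Metropolis chain for the Ising model on a cycle of n spins."""
        rng = np.random.default_rng() if rng is None else rng
        spins = rng.choice([-1, 1], size=n)
        for _ in range(steps):
            i = rng.integers(n)
            # energy change from flipping spin i (neighbours on the cycle)
            delta_e = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
            if rng.random() < np.exp(-beta * delta_e):
                spins[i] = -spins[i]              # accept the flip
        return spins

    sample = metropolis_ising_cycle(n=100, beta=0.5, steps=10_000)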
16:30 - 17:30 Ana Marija Vego
ETHZ
Abstract
The Iwasawa algebra is a key object in the study of p-adic L-functions, which are a central topic in number theory. The Iwasawa algebra arises naturally in this context as a tool for understanding the behavior of certain arithmetic invariants, such as Selmer groups and class groups, in towers of number fields. It provides a framework for studying these invariants in a unified way over all the levels of the tower. This allows us to investigate the arithmetic properties of number fields and their associated objects, particularly in the context of p-adic L-functions and Galois representations. It has applications in various areas of number theory, including the study of special values of L-functions, the Birch and Swinnerton-Dyer conjecture, and the structure of class groups of number fields. In this talk we will introduce Iwasawa algebras and give some basic properties. We'll then explore how these algebras are used in constructing Euler systems and obtaining p-adic L-functions. If time allows, we'll also touch on the main conjecture of Iwasawa theory.
Zurich Graduate Colloquium
What is... an Iwasawa algebra?
KO2 F 150
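As background for the title of the talk above, the standard definition (a well-known fact, not taken from the abstract): for a profinite group $\Gamma \cong \mathbb{Z}_p$, such as the Galois group of the cyclotomic $\mathbb{Z}_p$-extension, the Iwasawa algebra is the completed group ring

\[
\Lambda = \mathbb{Z}_p[[\Gamma]] = \varprojlim_n \mathbb{Z}_p\bigl[\Gamma/\Gamma^{p^n}\bigr],
\]

and a choice of topological generator $\gamma$ of $\Gamma$ yields an isomorphism $\Lambda \cong \mathbb{Z}_p[[T]]$ sending $\gamma$ to $1+T$.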
Wednesday, 20 March
Time Speaker Title Location
13:30 - 14:30 Dr. Yi Pan
Institute of Science and Technology Austria
Abstract
Reducibility of quasi-periodic cocycles valued in symplectic groups is related to the spectrum of discrete Schrödinger operators on strips. We will talk about a global reducibility result: given a one-parameter family of such cocycles, for almost every parameter, either the maximal Lyapunov exponent is positive, or the cocycle is almost conjugate to some precise model. The techniques include Kotani theory, KAM theory and in particular the study of the hyperbolicity of the renormalization operator. This is a joint work with Artur Avila and Raphaël Krikorian.
Ergodic theory and dynamical systems seminar
Reducibility of quasi-periodic cocycles valued in symplectic groups
HG G 19.1
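For orientation (standard background, not part of the abstract above): a quasi-periodic cocycle valued in a symplectic group is a skew product

\[
(\alpha, A): \mathbb{T}^d \times \mathbb{R}^{2m} \to \mathbb{T}^d \times \mathbb{R}^{2m}, \qquad (x, v) \mapsto \bigl(x + \alpha,\, A(x)v\bigr),
\]

with $\alpha$ an irrational frequency vector and $A: \mathbb{T}^d \to \mathrm{Sp}(2m,\mathbb{R})$; the cocycle is reducible if there is a conjugacy $B$ such that $B(x+\alpha)^{-1}A(x)B(x)$ is constant in $x$.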
13:30 - 15:00 Dr. Chiara Meroni
ETH-ITS
Abstract
Convex hulls and their boundaries are complicated but relevant objects in convex geometry and optimization. However, there is an algebraic technique to study the convex hull of a real variety. The goal is to understand which varieties contribute to the boundary. I will explain this general method and then focus on the case of smooth surfaces in four-dimensional space, in particular Veronese, Del Pezzo, and Bordiga surfaces.
Algebraic Geometry and Moduli Seminar
The algebra of convex hulls
HG G 43
15:30 - 16:30 PD Dr. Menny Akka Ginosar
ETH Zurich, Switzerland
Abstract
In this talk I will present results from a recent joint work with Peter Feller, Alison Beth Miller and Andreas Wieser (https://arxiv.org/abs/2311.17746). I will first present a geometric approach to the classical Gauss composition of binary quadratic forms. The new method is based on a parameterisation of two-dimensional subspaces of the space of 2x2 matrices and provides an easy-to-remember way to compute the Gauss composition. This approach naturally leads to a robust construction of pairs of Seifert surfaces for the same knot that are non-isotopic in the 4-ball. It also provides a complete characterisation of the Seifert forms of such disjoint Seifert surfaces. These topological results will be discussed in the second part of the talk. No background in number theory or knot theory will be assumed.
Geometry Seminar
Seifert surfaces, planes in four space, and Gauss composition
HG G 43
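As background for the first part of the abstract above (standard facts about Gauss composition, not specific to the paper): an integral binary quadratic form is

\[
Q(x,y) = a x^2 + b x y + c y^2, \qquad \operatorname{disc}(Q) = b^2 - 4ac,
\]

and Gauss composition turns the set of $\mathrm{SL}_2(\mathbb{Z})$-equivalence classes of primitive forms of a fixed discriminant into a finite abelian group, the form class group.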
17:15 - 18:15 Dr. Jean-Jil Duchamps
Université de Franche-Comté / Besançon
Abstract
We study an epidemiological model where infections arise in a population according to a general, non-Markovian SIR-like model with a time-dependent contact rate. We make few assumptions, only requiring that the number of potential infections generated by an individual has finite expectation on bounded time intervals. This model can be viewed as a general Crump-Mode-Jagers model with interactions, and we study the local weak convergence of its infection graph, which yields (1) a functional law of large numbers for our SIR process, and (2) the identification of a "contact-tracing Markov process" that traces back the chain of infection leading to a typical individual. This is joint work with Félix Foutel-Rodier and Emmanuel Schertzer.
Seminar on Stochastic Processes
Local weak convergence for a general stochastic SIR model
HG G 43
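For context only: the talk above concerns a general non-Markovian SIR-like model with a time-dependent contact rate, which a short sketch cannot capture; the code below simulates only the classical Markovian SIR special case via Gillespie's algorithm, with illustrative parameter names beta (contact rate) and gamma (recovery rate).

    import numpy as np

    def gillespie_sir(n, i0, beta, gamma, rng=None):
        """Simulate the classical Markovian SIR model with Gillespie's algorithm.

        n: population size, i0: initial number of infected individuals,
        beta: contact rate, gamma: recovery rate.
        Returns a list of (time, S, I, R) states.
        """
        rng = np.random.default_rng() if rng is None else rng
        t, s, i, r = 0.0, n - i0, i0, 0
        path = [(t, s, i, r)]
        while i > 0:
            infection_rate = beta * s * i / n
            recovery_rate = gamma * i
            total = infection_rate + recovery_rate
            t += rng.exponential(1.0 / total)          # time to next event
            if rng.random() < infection_rate / total:  # next event is an infection
                s, i = s - 1, i + 1
            else:                                      # next event is a recovery
                i, r = i - 1, r + 1
            path.append((t, s, i, r))
        return path

    trajectory = gillespie_sir(n=1000, i0=5, beta=1.5, gamma=1.0)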
Thursday, 21 March
Time Speaker Title Location
10:15 - 12:00 Umut Çetin
London School of Economics
Abstract
Nachdiplomvorlesung
Mathematics of Market Microstructure
HG G 43
16:15 - 17:15 Bryon Aragam
The University of Chicago Booth School of Business
Abstract
One of the key paradigm shifts in statistical machine learning over the past decade has been the transition from handcrafted features to automated, data-driven representation learning. A crucial step in this pipeline is to identify latent representations from observational data along with their causal structure. In many applications, the causal variables are not directly observed, and must be learned from data, often using flexible, nonparametric models such as deep neural networks. These settings present new statistical and computational challenges that will be the focus of this talk. We will revisit the statistical foundations of nonparametric latent variable models as a lens into the problem of causal representation learning. We discuss our recent work on developing methods for identifying and learning causal representations from data with rigorous guarantees, and discuss how even basic statistical properties are surprisingly subtle. Along the way, we will explore the connections between causal graphical models, deep generative models, and nonparametric mixture models, and how these connections lead to a useful new theory for causal representation learning.
Research Seminar in Statistics / ETH-FDS seminar
Research Seminar on Statistics - FDS Seminar joint talk: Statistical aspects of nonparametric latent variable models and causal representation learning
HG D 1.2
Friday, 22 March
Time Speaker Title Location
10:15 - 12:00 Shahar Mendelson
The Australian National University
Abstract
FIM Minicourse
An introduction to Generic Chaining
HG G 43