DACO seminar

Please subscribe here if you would like to be notified about these events via e-mail. You can also subscribe to the iCal/ics calendar.

Spring Semester 2022

Title: Clustering a Mixture of Gaussians with Unknown Covariance
Speaker, Affiliation: Dr. Mateo Díaz, Caltech
Date, Time: 15 February 2022, 17:15-18:15
Location: Zoom
Abstract: We investigate a clustering problem with data from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We start by considering Gaussian mixtures with two equally sized components and derive a Max-Cut integer program based on maximum likelihood estimation. We prove that its solutions achieve the optimal misclassification rate when the number of samples grows linearly in the dimension, up to a logarithmic factor. However, solving the Max-Cut problem appears to be computationally intractable. To overcome this, we develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size. Although this sample complexity is worse than that of the Max-Cut problem, we conjecture that no polynomial-time method can perform better. Furthermore, we gather numerical and theoretical evidence that supports the existence of a statistical-computational gap. Finally, we generalize the Max-Cut program to a k-means program that handles multi-component mixtures with possibly unequal weights. It enjoys similar optimality guarantees for mixtures of distributions that satisfy a transportation-cost inequality, encompassing Gaussian and strongly log-concave distributions.
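
To make the setting concrete, here is a minimal numerical sketch (our illustration, not the speaker's Max-Cut or spectral algorithm): data are drawn from a two-component mixture with a shared, moderately ill-conditioned covariance, and clustered by a naive PCA baseline. All sizes and parameters are assumptions of the demo.

```python
# Toy illustration of the clustering task (not the speaker's algorithm):
# two equally sized Gaussian clusters sharing an unknown covariance,
# clustered by the sign of the top principal component of the data.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 50

mu = np.zeros(d)
mu[0] = 3.0                                   # separation direction (assumed)
labels = 2 * rng.integers(0, 2, size=n) - 1   # hidden labels in {-1, +1}

# Shared, moderately ill-conditioned covariance (an assumption of the demo).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma_half = (Q * np.sqrt(np.linspace(0.5, 5.0, d))) @ Q.T

X = labels[:, None] * mu + rng.standard_normal((n, d)) @ Sigma_half

# Naive spectral baseline: sign of the projection onto the top eigenvector
# of the sample covariance. Adequate when Sigma is well-conditioned; it can
# fail when Sigma is ill-conditioned, the regime the talk addresses.
Xc = X - X.mean(axis=0)
w = np.linalg.eigh(Xc.T @ Xc)[1][:, -1]
pred = np.sign(Xc @ w)

err = min(np.mean(pred != labels), np.mean(pred == labels))  # up to label flip
print(f"misclassification rate: {err:.3f}")
```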

Title: Statistical and computational limits for sparse graph alignment
Speaker, Affiliation: Luca Ganassali, INRIA Paris
Date, Time: 1 March 2022, 14:15-15:15
Location: HG G 19.1
Abstract: Graph alignment refers to recovering the underlying vertex correspondence between two random graphs with correlated edges. This problem can be viewed as an average-case and noisy version of the well-known graph isomorphism problem. For correlated Erdős-Rényi random graphs, we will give insights into the fundamental limits for the planted formulation of this problem, establishing statistical thresholds for partial recovery. From the computational point of view, we are interested in designing and analyzing efficient (polynomial-time) algorithms to recover the underlying alignment: in a sparse regime, we exhibit a local rephrasing of the planted alignment problem as the correlation detection problem in trees. Analyzing this related problem enables us to derive a message-passing algorithm for our initial task. Based on joint works with Laurent Massoulié and Marc Lelarge: https://arxiv.org/abs/2002.01258, https://arxiv.org/abs/2102.02685, https://arxiv.org/abs/2107.07623.
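
For concreteness, here is a minimal sketch of one common construction of correlated Erdős-Rényi graphs (the subsampling construction and the parameters below are our assumptions, not taken from the talk): each edge of a parent graph survives independently in two children, and one child is relabeled by a hidden permutation that an alignment algorithm must recover.

```python
# Correlated Erdos-Renyi pair via edge subsampling of a parent graph.
import numpy as np

rng = np.random.default_rng(1)
n, lam, s = 500, 4.0, 0.8            # size, mean degree, edge correlation

A0 = np.triu(rng.random((n, n)) < lam / n, 1)   # parent edges
A0 = A0 | A0.T

keep1 = np.triu(rng.random((n, n)) < s, 1)      # independent edge retention
keep2 = np.triu(rng.random((n, n)) < s, 1)
A1 = A0 & (keep1 | keep1.T)
A2 = A0 & (keep2 | keep2.T)

pi = rng.permutation(n)              # hidden vertex correspondence
A2 = A2[np.ix_(pi, pi)]              # relabel G2; recovering pi is the task

print("edges:", A1.sum() // 2, A2.sum() // 2)
```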

Title: What networks of oscillators spontaneously synchronize?
Speaker, Affiliation: Prof. Dr. Alex Townsend, Cornell University
Date, Time: 15 March 2022, 15:15-16:15
Location: HG E 22
Abstract: Consider a network of identical phase oscillators with sinusoidal coupling. How likely are the oscillators to spontaneously synchronize, starting from random initial phases? One expects that dense networks of oscillators have a strong tendency to pulse in unison. But how dense is dense enough? In this talk, we use techniques from numerical linear algebra, computational algebraic geometry, and dynamical systems to derive the densest known networks that do not synchronize and the sparsest ones that do. We will find that there is a critical network density above which spontaneous synchrony is guaranteed regardless of the network's topology, and prove that synchrony is omnipresent for random Erdős-Rényi networks just above the connectivity threshold. This is joint work with Martin Kassabov, Steven Strogatz, and Mike Stillman.
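
A small simulation in the spirit of the talk (illustrative only; the density and integration parameters are assumptions): identical Kuramoto oscillators with sinusoidal coupling on an Erdős-Rényi network, integrated from random initial phases, with synchrony measured by the order parameter r = |mean(exp(i*theta))|.

```python
# Homogeneous Kuramoto dynamics on a random graph, forward-Euler integration.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 0.7                       # oscillators, edge density (assumed)

A = np.triu(rng.random((n, n)) < p, 1)
A = (A | A.T).astype(float)

theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
dt = 0.05
for _ in range(4000):
    # d(theta_i)/dt = (1/n) * sum_j A_ij * sin(theta_j - theta_i)
    dtheta = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n
    theta += dt * dtheta

r = abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}  (r close to 1 means synchronized)")
```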

Title: Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
Speaker, Affiliation: Prof. Dr. Nicolas Macris, EPF Lausanne
Date, Time: 22 March 2022, 14:15-15:15
Location: HG E 33.1
Abstract: Recent evidence has shown the existence of a so-called double-descent and even triple-descent behavior for the generalization error of deep-learning models. This important phenomenon commonly appears in implemented neural network architectures, and also seems to emerge in epoch-wise curves during the training process. A recent line of research has highlighted that random matrix tools can be used to obtain precise analytical asymptotics of the generalization (and training) errors of the random feature model. In this contribution, we analyze the whole temporal behavior of the generalization and training errors under gradient flow for the random feature model. We show that in the asymptotic limit of large system size the full time-evolution path of both errors can be calculated analytically. This allows us to observe how the double and triple descents develop over time, if and when early stopping is an option, and also observe time-wise descent structures. Our techniques are based on Cauchy complex integral representations of the errors together with recent random matrix methods based on linear pencils. This is joint work with Antoine Bodin.
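
As a hands-on counterpart (an experiment, not the paper's analytical solution): train the second-layer weights of a random feature model by full-batch gradient descent and record train and test errors along the trajectory, where epoch-wise descent structures can be looked for. The linear teacher and all sizes below are assumptions of the demo.

```python
# Random feature regression trained by (discretized) gradient flow.
import numpy as np

rng = np.random.default_rng(3)
d, p, n, n_test = 100, 300, 200, 1000     # input dim, features, samples

W = rng.standard_normal((p, d)) / np.sqrt(d)   # fixed random first layer
beta = rng.standard_normal(d) / np.sqrt(d)     # linear teacher (assumed)

def features(X):
    return np.maximum(W @ X.T, 0).T / np.sqrt(p)   # ReLU random features

Xtr = rng.standard_normal((n, d)); ytr = Xtr @ beta
Xte = rng.standard_normal((n_test, d)); yte = Xte @ beta
Ftr, Fte = features(Xtr), features(Xte)

a = np.zeros(p)                           # trained second-layer weights
lr = 0.5
for t in range(1, 20001):
    grad = Ftr.T @ (Ftr @ a - ytr) / n    # gradient of the train MSE
    a -= lr * grad
    if t % 2000 == 0:
        tr = np.mean((Ftr @ a - ytr) ** 2)
        te = np.mean((Fte @ a - yte) ** 2)
        print(f"step {t:6d}  train {tr:.4f}  test {te:.4f}")
```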

Title: Statistical limits of dictionary learning: the spectral replica method
Speaker, Affiliation: Prof. Dr. Jean Barbier, International Center for Theoretical Physics (ICTP), Trieste
Date, Time: 29 March 2022, 14:15-15:15
Location: HG G 19.1
Abstract: I will discuss dictionary learning in the Bayes-optimal setting, in the challenging regime where the matrices to infer have a rank growing linearly with the system size. This is in contrast with most existing literature concerned with the low-rank (i.e., constant-rank) regime. The analysis of this model is possible thanks to a novel combination of the replica method from statistical mechanics together with random matrix theory, coined the spectral replica method. It allows us to conjecture variational formulas for the mutual information between hidden representations and the noisy data as well as for the overlaps quantifying the optimal reconstruction error. The proposed method reduces the number of degrees of freedom from O(N^2) matrix entries to O(N) eigenvalues (or singular values), and yields Coulomb gas representations of the mutual information which are reminiscent of matrix models in physics. The main ingredients are the use of Harish-Chandra-Itzykson-Zuber spherical integrals combined with a new replica symmetric decoupling ansatz at the level of the probability distributions of eigenvalues (or singular values) of certain overlap matrices.
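
For orientation, here is a toy instance of an extensive-rank observation model in this spirit (the model form Y = XX^T/sqrt(N) + noise and its scaling are our assumptions, not a statement of the talk's exact setup): the hidden matrix has M = alpha*N columns, so its rank grows linearly with the system size, and the spectrum of Y carries the O(N) degrees of freedom the spectral replica method works with.

```python
# Extensive-rank matrix inference toy model: observe a noisy version of
# X X^T / sqrt(N) where X is N x M with M proportional to N.
import numpy as np

rng = np.random.default_rng(4)
N, alpha, Delta = 400, 0.5, 0.1
M = int(alpha * N)                       # extensive rank: M = alpha * N

X = rng.standard_normal((N, M))          # hidden matrix (i.i.d. prior)
Z = rng.standard_normal((N, N))
Z = (Z + Z.T) / np.sqrt(2)               # symmetric Gaussian noise

Y = X @ X.T / np.sqrt(N) + np.sqrt(Delta) * Z

# The spectral replica method works at the level of spectra:
# O(N) eigenvalues rather than O(N^2) matrix entries.
eigs = np.linalg.eigvalsh(Y)
print("top eigenvalues of Y:", np.round(eigs[-3:], 2))
```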

Title: Effective dynamics and critical scaling for Stochastic Gradient Descent in high dimensions
Speaker, Affiliation: Prof. Dr. Gérard Ben Arous, Courant Institute, NYU
Date, Time: 10 May 2022, 14:15-15:15
Location: HG G 19.1
Abstract: SGD is a workhorse for optimization and thus for statistics and machine learning, and it is well understood in low dimensions. But understanding its behavior in very high dimensions is not yet a simple task. We study here the limiting “effective” dynamics of some summary statistics for SGD in high dimensions, and find interesting new regimes beyond the one given by the usual wisdom, namely the population gradient flow. We find that a new corrector term is needed and that the phase portrait of these dynamics is quite complex and substantially different from what would be predicted using the classical low-dimensional approach, including for simple tasks. This is joint work with Reza Gheissari (UC Berkeley) and Aukosh Jagannath (Waterloo).
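
A toy version of the phenomenon (our illustration, not the talk's general model): online SGD for high-dimensional linear regression, tracking the overlap summary statistic m = <theta, theta*>/d, which for large d follows limiting effective dynamics of the kind the talk characterizes.

```python
# Online SGD in high dimension with a tracked summary statistic.
import numpy as np

rng = np.random.default_rng(5)
d = 1000
theta_star = np.ones(d)                 # planted signal, |theta*|^2 = d
theta = np.zeros(d)
lr = 0.5                                # O(1) step size (assumed scaling)

for t in range(1, 20001):
    x = rng.standard_normal(d)          # one fresh sample per step
    y = x @ theta_star / np.sqrt(d) + 0.1 * rng.standard_normal()
    resid = y - x @ theta / np.sqrt(d)
    theta += lr * resid * x / np.sqrt(d)    # one online SGD step
    if t % 4000 == 0:
        m = theta @ theta_star / d          # overlap summary statistic
        print(f"step {t:6d}  overlap m = {m:.3f}")
```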

Title: Benign Overfitting without Linearity
Speaker, Affiliation: Dr. Spencer Frei, University of California, Berkeley
Date, Time: 7 June 2022, 10:15-11:15
Location: HG G 19.2
Abstract: Deep learning has revealed a surprising statistical phenomenon: the possibility of benign overfitting. Experiments have revealed that trained neural networks are capable of simultaneously (1) overfitting to datasets that have substantial amounts of random label noise and (2) generalizing well to unseen data, a behavior that is inconsistent with the familiar bias-variance tradeoff in classical statistics. In this talk we investigate this phenomenon theoretically for two-layer neural networks trained by gradient descent on the cross-entropy loss. We assume the data comes from well-separated class-conditional distributions and allow for a constant fraction of the training labels to be corrupted by an adversary. We show that in this setting, neural networks indeed exhibit benign overfitting: despite the non-convex nature of the optimization problem, the empirical risk is driven to zero, overfitting the noisy labels; and as opposed to the classical intuition, the networks simultaneously generalize near-optimally. In contrast to previous works on benign overfitting that require linear or kernel-based predictors, our analysis holds in a setting where both the model and learning dynamics are fundamentally nonlinear. This talk is based on joint work with Niladri Chatterji and Peter Bartlett.
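
A small stand-in experiment (not the paper's exact setting or proof regime): a two-layer ReLU network trained by gradient descent on the logistic (cross-entropy) loss, on well-separated class-conditional data with a fraction of flipped training labels. The training error on the noisy labels is driven toward zero while the test error on clean labels can remain small. All sizes and parameters are assumptions of the demo.

```python
# Two-layer ReLU net, fixed second layer, first layer trained by GD.
import numpy as np

rng = np.random.default_rng(6)
n, d, m = 200, 50, 100                  # samples, dimension, hidden width
flip = 0.1                              # fraction of corrupted labels

mu = np.zeros(d); mu[0] = 4.0           # well-separated class means (assumed)
ytr = 2 * rng.integers(0, 2, n) - 1
Xtr = ytr[:, None] * mu + rng.standard_normal((n, d))
noisy = rng.random(n) < flip
ytr_noisy = np.where(noisy, -ytr, ytr)  # proxy for adversarial label noise

W = rng.standard_normal((m, d)) / np.sqrt(d)                 # trained layer
a = np.where(np.arange(m) % 2 == 0, 1.0, -1.0) / np.sqrt(m)  # fixed signs

def forward(X, W):
    return np.maximum(X @ W.T, 0) @ a

lr = 1.0
for t in range(5000):
    z = forward(Xtr, W)
    g = -ytr_noisy / (1 + np.exp(ytr_noisy * z))   # logistic loss derivative
    H = (Xtr @ W.T > 0).astype(float)              # ReLU activation pattern
    gradW = ((g[:, None] * H) * a).T @ Xtr / n
    W -= lr * gradW

yte = 2 * rng.integers(0, 2, 1000) - 1
Xte = yte[:, None] * mu + rng.standard_normal((1000, d))
print("train err (noisy labels):", np.mean(np.sign(forward(Xtr, W)) != ytr_noisy))
print("test  err (clean labels):", np.mean(np.sign(forward(Xte, W)) != yte))
```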

Title: Admissibility is Bayes optimality with infinitesimals
Speaker, Affiliation: Prof. Dr. Dan Roy, University of Toronto, Toronto, Canada
Date, Time: * 4 July 2022, 10:15-11:45
Location: HG G 19.2
Abstract: We give an exact characterization of admissibility in statistical decision problems in terms of Bayes optimality in a so-called nonstandard extension of the original decision problem, as introduced by Duanmu and Roy. Unlike the consideration of improper priors or other generalized notions of Bayes optimality, the nonstandard extension is distinguished, in part, by having priors that can assign "infinitesimal" mass in a sense that can be made rigorous using results from nonstandard analysis. With these additional priors, we find that, informally speaking, a decision procedure δ0 is admissible in the original statistical decision problem if and only if, in the nonstandard extension of the problem, the nonstandard extension of δ0 is Bayes optimal among the (extensions of) standard decision procedures with respect to a nonstandard prior that assigns at least infinitesimal mass to every standard parameter value. We use the above theorem to give further characterizations of admissibility: one related to Blyth's method, one related to a condition due to Stein which characterizes admissibility under some regularity assumptions, and finally a characterization using finitely additive priors in decision problems meeting certain regularity requirements. Our results imply that Blyth's method is a sound and complete method for establishing admissibility. Buoyed by this result, we revisit the univariate two-sample common-mean problem, and show that the Graybill-Deal estimator is admissible among a certain class of unbiased decision procedures. Joint work with Haosui Duanmu (HIT) and David Schrittesser (Toronto).
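
A finite toy decision problem (our illustration, not from the talk) shows why degenerate priors are too coarse to characterize admissibility, motivating priors that put at least infinitesimal mass everywhere: an inadmissible rule can be Bayes under a prior that assigns zero mass to some parameter value, but it loses Bayes optimality as soon as that value receives any positive mass (a tiny epsilon below stands in for an infinitesimal).

```python
# Two-point parameter space; each rule is summarized by its risk vector
# (R(theta1), R(theta2)). A rule is admissible iff no other rule weakly
# dominates it; it is Bayes for a prior iff it minimizes <prior, risk>.
import numpy as np

rules = {"A": (1.0, 3.0), "B": (3.0, 1.0), "C": (1.0, 4.0)}
# C is dominated by A (same risk at theta1, worse at theta2): inadmissible.

def admissible(name):
    r = np.array(rules[name])
    return not any(
        (np.array(rules[o]) <= r).all() and (np.array(rules[o]) < r).any()
        for o in rules if o != name)

def bayes_rules(prior):
    vals = {k: float(np.dot(prior, v)) for k, v in rules.items()}
    best = min(vals.values())
    return {k for k, v in vals.items() if v == best}

print({k: admissible(k) for k in rules})        # A, B admissible; C not
print(bayes_rules(np.array([1.0, 0.0])))        # degenerate prior: A and C tie,
                                                # so Bayes-ness misses C's flaw
eps = 1e-6                                      # stand-in for an infinitesimal
print(bayes_rules(np.array([1.0 - eps, eps])))  # mass everywhere: only A
```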

Note: events marked with an asterisk (*) take place at a time and/or location different from the usual one.
