DACO Seminar


Please subscribe here if you would like to be notified about these events via e-mail. You can also subscribe to the iCal/ics calendar.

Autumn Semester 2025

DACO Seminar

Title: Universality-based concentration for matrices generated by a Markov chain
Speaker, Affiliation: Dr. Alexander Van Werde, University of Münster
Date, Time: 9 October 2025, 14:15-15:15
Location: HG G 19.1
Abstract: Many applications, ranging from reinforcement learning to the analysis of time series, involve high-dimensional random matrices that are generated by a stochastic process. Such settings can be challenging to analyze due to the dependence involved in the process. In this talk, I present a new universality principle for sums of matrices generated by a Markov chain that enables sharp concentration estimates when combined with recent advances in the Gaussian literature. A key challenge in the proof is that techniques based only on classical cumulants, which have been used by Brailovskaya and van Handel in a setting with independent summands, fail to produce efficient estimates in our dependent setting. We hence developed a new approach based on Boolean cumulants and a change-of-measure argument. Based on joint work with Jaron Sanders, available at arXiv:2307.11632.
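
The abstract describes its model only in words; the short numpy sketch below is purely illustrative, with assumed notation (a k-state chain with transition matrix P and one fixed matrix per state, none of which come from the paper), and shows the kind of object whose operator norm such concentration estimates control: a sum of matrices driven by a Markov chain.

import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed notation, not the paper's): a Markov chain (s_t) on k
# states with transition matrix P, and one fixed d x d matrix A[j] per state.
# The random matrix of interest is the sum X = A[s_1] + ... + A[s_T].
k, d, T = 3, 50, 2000
P = np.full((k, k), 0.15) + 0.55 * np.eye(k)          # "sticky" chain, strong dependence
A = rng.standard_normal((k, d, d)) / np.sqrt(d * T)   # normalised per-state matrices

s, X = 0, np.zeros((d, d))
for _ in range(T):
    X += A[s]                      # accumulate the matrix attached to the current state
    s = rng.choice(k, p=P[s])      # step the Markov chain

print("operator norm of the Markov-driven sum:", np.linalg.norm(X, 2))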

DACO Seminar

Title: Tensor Density Estimator by Convolution-Deconvolution
Speaker, Affiliation: Prof. Dr. Yuehaw Khoo, University of Chicago, USA
Date, Time: * 10 October 2025, 14:15-15:15
Location: HG G 19.1
Abstract: We propose a linear algebraic framework for performing density estimation. It consists of three simple steps: convolving the empirical distribution with certain smoothing kernels to remove the exponentially large variance; compressing the empirical distribution after convolution as a tensor train, with efficient tensor decomposition algorithms; and finally, applying a deconvolution step to recover the estimated density from this tensor-train representation. Numerical results demonstrate the high accuracy and efficiency of the proposed methods.
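
As a rough illustration of the three-step pipeline, the sketch below runs a heavily simplified two-dimensional cartoon in numpy: it smooths a 2D histogram by convolution with a Gaussian kernel, compresses the result with a truncated SVD (a two-dimensional stand-in for a tensor train), and then applies a regularised Fourier deconvolution. The grid size, bandwidth, rank, and regularisation are all illustrative assumptions, not the algorithm of the paper.

import numpy as np

rng = np.random.default_rng(1)

# 2D cartoon of the convolve / compress / deconvolve pipeline.  All choices
# below (grid, bandwidth h, rank r, ridge) are illustrative assumptions.
n, m, h, r = 5000, 64, 0.2, 5
x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)

# empirical distribution as a normalised histogram on an m x m grid
edges = np.linspace(-4.0, 4.0, m + 1)
hist, _, _ = np.histogram2d(x[:, 0], x[:, 1], bins=[edges, edges], density=True)

# step 1: convolve with a Gaussian smoothing kernel (via FFT)
grid = (edges[:-1] + edges[1:]) / 2
k1d = np.exp(-grid**2 / (2 * h**2))
K = np.outer(k1d, k1d)
K /= K.sum()
Kf = np.fft.fft2(np.fft.ifftshift(K))
smooth = np.real(np.fft.ifft2(np.fft.fft2(hist) * Kf))

# step 2: compress the smoothed table by a truncated SVD
# (the two-dimensional analogue of a tensor-train decomposition)
U, s, Vt = np.linalg.svd(smooth)
compressed = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

# step 3: deconvolve with a mildly regularised Fourier division
est = np.real(np.fft.ifft2(np.fft.fft2(compressed) * np.conj(Kf) / (np.abs(Kf) ** 2 + 1e-3)))

cell = (edges[1] - edges[0]) ** 2
print("total mass of the recovered density:", est.sum() * cell)   # should be close to 1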

DACO Seminar

Title: Data Attribution in High-Dimensions and without Strong Convexity
Speaker, Affiliation: Ittai Rubinstein, MIT, USA
Date, Time: * 17 October 2025, 13:15-14:00
Location: HG G 26.3
Abstract: Data attribution methods aim to quantify how training examples shape model predictions, supporting applications in interpretability, unlearning, and robustness. The dominant tools in practice are influence functions (IF) and Newton step (NS) approximations, yet their theoretical guarantees and practical accuracy have remained poorly understood. In this talk, I will present new analytic techniques that uncover the scaling laws of the approximation error of IF and NS. Our results improve on prior analyses both by establishing asymptotically sharper bounds and by avoiding dependence on the global strong convexity parameter, which is often prohibitively small in practice. These insights not only explain long-standing empirical observations, such as why and when NS is more accurate than IF, but also guide the design of new methods. As an application, I will present rescaled influence functions (RIF), a simple, drop-in replacement for IF that matches the efficiency of IF while achieving the accuracy of NS. I will discuss both theoretical advances and empirical results on real-world datasets. Together, these contributions provide a first principled understanding of data attribution methods and demonstrate how to turn this understanding into more reliable tools.
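
For readers unfamiliar with the two baselines, the sketch below compares the influence-function and Newton-step approximations to exact leave-one-out retraining on a small ridge-regularised logistic regression. This is a textbook-style illustration under assumed sizes and notation; it is not the rescaled influence functions (RIF) construction from the talk.

import numpy as np

rng = np.random.default_rng(2)

# Textbook-style comparison of influence-function (IF) and Newton-step (NS)
# approximations to leave-one-out retraining for ridge-regularised logistic
# regression.  Sizes and the regulariser are illustrative assumptions.
n, d, reg = 200, 5, 1.0
X = rng.standard_normal((n, d))
y = (rng.random(n) < 1 / (1 + np.exp(-X @ rng.standard_normal(d)))).astype(float)

def fit(Xs, ys, iters=50):
    """Minimise sum_i logistic_loss_i(theta) + (reg/2)*||theta||^2 by Newton's method."""
    th = np.zeros(d)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xs @ th))
        g = Xs.T @ (p - ys) + reg * th
        H = (Xs * (p * (1 - p))[:, None]).T @ Xs + reg * np.eye(d)
        th = th - np.linalg.solve(H, g)
    return th

theta = fit(X, y)
p = 1 / (1 + np.exp(-X @ theta))
H = (X * (p * (1 - p))[:, None]).T @ X + reg * np.eye(d)   # Hessian at the full-data optimum

i = 0                                            # training point whose removal we approximate
gi = X[i] * (p[i] - y[i])                        # gradient of the i-th loss at theta
Hi = np.outer(X[i], X[i]) * p[i] * (1 - p[i])    # its Hessian contribution

theta_if = theta + np.linalg.solve(H, gi)        # influence function: keeps the full Hessian
theta_ns = theta + np.linalg.solve(H - Hi, gi)   # one Newton step on the leave-one-out objective
theta_loo = fit(np.delete(X, i, axis=0), np.delete(y, i))   # exact retraining

print("IF error:", np.linalg.norm(theta_if - theta_loo))
print("NS error:", np.linalg.norm(theta_ns - theta_loo))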

DACO Seminar

Title: Lattice packing of spheres in high dimensions using a stochastically evolving ellipsoid
Speaker, Affiliation: Prof. Dr. Boaz Klartag, Weizmann Institute of Science, IL
Date, Time: 6 November 2025, 14:15-15:15
Location: HG G 19.1
Abstract: We prove that in any dimension n there exists an origin-symmetric ellipsoid \mathcal{E} \subset \mathbb{R}^n of volume c n^2 that contains no points of \mathbb{Z}^n other than the origin, where c > 0 is a universal constant. Equivalently, there exists a lattice sphere packing in \mathbb{R}^n whose density is at least c n^2 \cdot 2^{-n}. Previously known constructions of sphere packings in \mathbb{R}^n had densities of the order of magnitude of n \cdot 2^{-n}, up to logarithmic factors. Our proof utilizes a stochastically evolving ellipsoid that accumulates at least c n^2 lattice points on its boundary, while containing no lattice points in its interior except for the origin.
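
The two statements in the abstract are linked by a classical reduction; the LaTeX note below sketches it (this is the standard argument, not the paper's new contribution).

% If an origin-symmetric convex body K (here the ellipsoid \mathcal{E}) contains
% no lattice point other than the origin, then the translates of K/2 by \mathbb{Z}^n
% are pairwise disjoint: a common point of K/2 + u and K/2 + v with u \neq v would
% give u - v \in K/2 + K/2 = K, a nonzero lattice point of K.  Hence
\[
  \text{density}\bigl(\{\tfrac{1}{2}K + u : u \in \mathbb{Z}^n\}\bigr)
  \;=\; \operatorname{vol}\bigl(\tfrac{1}{2}K\bigr)
  \;=\; \frac{\operatorname{vol}(K)}{2^n},
\]
% since the fundamental cell of \mathbb{Z}^n has volume 1.  A linear map sending the
% ellipsoid to a Euclidean ball preserves this density, so \operatorname{vol}(\mathcal{E}) \ge c n^2
% yields a lattice sphere packing of density at least c n^2 \cdot 2^{-n}.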

DACO Seminar

Title: Eigenvalue Bounds for Random Matrices via Zerofreeness
Speaker, Affiliation: Amit Rajaraman, MIT, USA
Date, Time: * 12 November 2025, 16:15-17:15
Location: HG D 1.1
Abstract: We introduce a new technique to prove bounds for the spectral radius of a random matrix, based on using Jensen's formula to establish the zerofreeness of the associated characteristic polynomial in a region of the complex plane. Our techniques are entirely non-asymptotic, and we instantiate them in three settings: (i) The spectral radius of non-asymptotic Girko matrices; these are asymmetric matrices M ∈ ℂ^{n × n} whose entries are independent and satisfy 𝔼 Mᵢⱼ = 0 and 𝔼 |Mᵢⱼ|² ≤ 1/n. (ii) The spectral radius of non-asymptotic Wigner matrices; these are symmetric matrices M ∈ ℂ^{n × n} whose entries above the diagonal are independent and satisfy 𝔼 Mᵢⱼ = 0, 𝔼 |Mᵢⱼ|² ≤ 1/n, and 𝔼 |Mᵢⱼ|⁴ ≤ 1/n. (iii) The second eigenvalue of the adjacency matrix of a random d-regular graph on n vertices, as drawn from the configuration model. In all three settings, we obtain constant-probability eigenvalue bounds that are tight up to a constant. Applied to specific random matrix ensembles, we recover classic bounds for Wigner matrices, as well as results of Bordenave–Chafaï–García-Zelada, Bordenave–Lelarge–Massoulié, and Friedman, up to constants.
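
To give a sense of the mechanism, the LaTeX note below records the standard back-of-the-envelope reduction from zerofreeness to a spectral radius bound (notation assumed; the talk's actual estimates are sharper and probabilistic).

% Sketch only.  Let f(z) = \det(I - zM), so f(0) = 1 and the zeros of f are the
% reciprocals of the nonzero eigenvalues of M.  For 0 < s < R, Jensen's formula
% bounds the number N(s) of zeros of f in the closed disk of radius s:
\[
  N(s)\,\log\frac{R}{s}
  \;\le\; \frac{1}{2\pi}\int_0^{2\pi} \log\bigl|f\bigl(Re^{i\theta}\bigr)\bigr|\,d\theta .
\]
% If the right-hand side can be shown (with the desired probability) to be
% strictly smaller than \log(R/s), then N(s) = 0, so every eigenvalue \lambda
% of M satisfies |\lambda| < 1/s, which is a spectral radius bound.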

DACO Seminar

Title: Precise Asymptotics for Spectral Estimators: A Story of Phase Transitions, Random Matrices and Approximate Message Passing
Speaker, Affiliation: Prof. Dr. Marco Mondelli, ISTA
Date, Time: 20 November 2025, 14:15-15:15
Location: HG G 19.1
Abstract: Spectral methods are a simple yet effective approach to extract information from high-dimensional data, i.e., where sample size and signal dimension grow proportionally. As a prelude, we will consider the prototypical problem of inference from a generalized linear model with an i.i.d. Gaussian design. Here, the spectral estimator is the principal eigenvector of a data-dependent matrix. We will discuss the emergence of a (BBP-like) phase transition in the spectrum of this random matrix and how such a phase transition is related to signal recovery. The core of the talk will then deal with two models that capture the heterogeneous and structured nature of practical data. First, we will consider a multi-index model where the output depends on the inner product between the feature vector and a fixed number $p$ of signals, and the focus is on recovering the subspace spanned by the signals via spectral estimators. By using tools from random matrix theory, we will locate the top-$p$ eigenvalues of the spectral matrix and establish the overlaps between the corresponding eigenvectors (which give the spectral estimators) and a basis of the signal subspace. Second, we will consider a generalized linear model with a correlated design matrix. Here, the analysis of the spectral estimator relies on tools based on approximate message passing, and we will present a methodology which is broadly applicable to the study of spiked matrices. In all these settings, the precise asymptotic characterization we put forward enables the optimization of the data preprocessing, thus allowing us to identify the spectral estimator that requires the minimal sample size for signal recovery.
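
For the prototypical setting mentioned at the start of the abstract, the sketch below shows what a spectral estimator looks like in practice: form the data-dependent matrix D = (1/n) Σᵢ T(yᵢ) xᵢ xᵢᵀ for a preprocessing function T and take its principal eigenvector. The model (a phase-retrieval-type link), the preprocessing, and all sizes are illustrative assumptions, not the optimised choices discussed in the talk.

import numpy as np

rng = np.random.default_rng(3)

# Minimal sketch of a spectral estimator for a generalized linear model
# y_i = q(<x_i, beta>) + noise with i.i.d. Gaussian design.  The link q,
# the preprocessing T, and all sizes are illustrative assumptions.
n, d = 4000, 200
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)
X = rng.standard_normal((n, d))
y = (X @ beta) ** 2 + 0.1 * rng.standard_normal(n)   # phase-retrieval-like link

T = np.clip(y, 0, 5)                   # a simple bounded preprocessing
D = (X * T[:, None]).T @ X / n         # D = (1/n) sum_i T(y_i) x_i x_i^T

eigvals, eigvecs = np.linalg.eigh(D)
beta_hat = eigvecs[:, -1]              # principal eigenvector = spectral estimator

print("overlap |<beta_hat, beta>| =", abs(beta_hat @ beta))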

Note: an asterisk (*) on the date line marks events whose time and/or location differ from the usual time and/or location.
