Statistics research seminar

Autumn Semester 2024

Research Seminar in Statistics

Title: Causal Modeling with Stationary Diffusions
Speaker, Affiliation: Lars Lorch, Institute for Machine Learning, ETH Zürich
Date, Time: 10 October 2024, 15:15-16:15
Location: HG G 19.1
Abstract: In this talk, we develop a novel approach to causal modeling and inference. Rather than structural equations over a causal graph, we show how to learn stochastic differential equations (SDEs) whose stationary densities model a system's behavior under interventions. These stationary diffusion models do not require the formalism of causal graphs, let alone the common assumption of acyclicity, and often generalize to unseen interventions on their variables. Our inference method is based on a new theoretical result that expresses a stationarity condition on the diffusion's generator in a reproducing kernel Hilbert space. The resulting kernel deviation from stationarity (KDS) is an objective function of independent interest.
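
As a loose illustration of the central idea, that a diffusion's stationary density can describe a system both observationally and under interventions, here is a minimal sketch (not the method or intervention formalism from the talk): a two-variable linear SDE is simulated with Euler-Maruyama, once as-is and once with the first coordinate clamped, and the resulting stationary means are compared. The drift matrix, step size, and hard-clamping intervention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear SDE  dX = A X dt + dW  with a stable drift matrix A.
# Its stationary density plays the role of the observational distribution.
A = np.array([[-1.0, 0.0],
              [ 0.8, -1.0]])            # X0 drives X1
dt, n_steps, n_burn = 1e-2, 200_000, 50_000

def simulate(intervene=None):
    """Euler-Maruyama simulation; intervene=(i, v) clamps coordinate i to v."""
    x = np.zeros(2)
    out = []
    for t in range(n_steps):
        x = x + A @ x * dt + rng.normal(scale=np.sqrt(dt), size=2)
        if intervene is not None:
            i, v = intervene
            x[i] = v                     # hard intervention: hold X_i fixed
        if t >= n_burn:
            out.append(x.copy())
    return np.array(out)

obs = simulate()
do_x0 = simulate(intervene=(0, 2.0))     # do(X0 = 2)

print("stationary mean, observational:", obs.mean(axis=0))
print("stationary mean, do(X0 = 2):   ", do_x0.mean(axis=0))
# Under the clamp, X1's stationary mean moves towards 0.8 * 2 = 1.6,
# illustrating how stationary behaviour encodes the effect of an intervention.
```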

Research Seminar in Statistics

Title: Invited talk: Unbiased Kinetic Langevin Monte Carlo with Inexact Gradients
Speaker, Affiliation: Peter Whalley, ETH Zurich, Seminar for Statistics
Date, Time: 6 November 2024, 15:00-16:00
Location: HG G 19.1
Abstract: We present an unbiased method for Bayesian posterior means based on kinetic Langevin dynamics that combines advanced splitting methods with enhanced gradient approximations. Our approach avoids Metropolis correction by coupling Markov chains at different discretization levels in a multilevel Monte Carlo approach. Theoretical analysis demonstrates that our proposed estimator is unbiased, attains finite variance, and satisfies a central limit theorem. We prove similar results using both approximate and stochastic gradients and show that our method's computational cost scales independently of the size of the dataset. Our numerical experiments demonstrate that our unbiased algorithm outperforms the "gold-standard" randomized Hamiltonian Monte Carlo.
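
For orientation, the sketch below runs a plain kinetic (underdamped) Langevin chain with minibatch stochastic gradients on a toy Gaussian posterior and uses a trajectory average to estimate the posterior mean. It is explicitly not the unbiased multilevel estimator from the talk; such a chain carries exactly the discretization bias and gradient noise that the talk's construction addresses. The model, step size, friction, and batch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Bayesian model: y_i ~ N(theta, 1) with a flat prior, so the posterior is
# N(mean(y), 1/n).  Potential: U(theta) = 0.5 * sum((y_i - theta)^2).
y = rng.normal(loc=1.5, scale=1.0, size=2_000)
n = len(y)

def stochastic_grad(theta, batch_size=200):
    """Unbiased minibatch estimate of grad U(theta) = sum(theta - y_i)."""
    idx = rng.integers(n, size=batch_size)
    return n * np.mean(theta - y[idx])

# Kinetic Langevin dynamics
#   dtheta = v dt,  dv = -grad U dt - gamma v dt + sqrt(2 gamma) dW,
# discretized with a simple symplectic-Euler step.
gamma, h, n_iter, burn = 10.0, 1e-3, 50_000, 5_000
theta, v, samples = 0.0, 0.0, []
for t in range(n_iter):
    v += -stochastic_grad(theta) * h - gamma * v * h + np.sqrt(2 * gamma * h) * rng.normal()
    theta += v * h
    if t >= burn:
        samples.append(theta)

print("trajectory-average estimate:", np.mean(samples))
print("exact posterior mean:       ", y.mean())
# This chain has O(h) discretization bias, and the minibatch gradient adds further
# noise; the talk's multilevel coupling removes the bias without a Metropolis
# correction, for both approximate and stochastic gradients.
```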

Research Seminar in Statistics

Title: High dimensional inference for extreme value indices
Speaker, Affiliation: Chen Zhou, Erasmus University, Rotterdam
Date, Time: 14 November 2024, 15:15-16:15
Location: HG E 41
Abstract: When applying multivariate extreme value statistics to analyze tail risk in compound events defined by a multivariate random vector, one often assumes that all dimensions share the same extreme value index. While such an assumption can be tested using a Wald-type test, the performance of such a test deteriorates as the dimensionality increases. This paper introduces a novel test for the equality of extreme value indices in a high-dimensional setting. We derive the asymptotic behavior of the test statistic and conduct simulation studies to evaluate its finite sample performance. The proposed test significantly outperforms existing methods in high-dimensional settings. We apply this test to examine two datasets previously assumed to have identical extreme value indices across all dimensions. This is joint work with Liujun Chen (USTC).
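
To make the baseline concrete (this is the classical Wald-type approach the abstract says breaks down in high dimensions, not the new test from the talk), the sketch below computes a Hill estimator of the extreme value index for each margin and a Wald-type statistic for the hypothesis that all indices coincide, using the asymptotic variance gamma^2/k and, for simplicity, independent Pareto margins. The sample size, the number of top order statistics k, and the margins are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def hill(x, k):
    """Hill estimator of the extreme value index from the k largest observations."""
    xs = np.sort(x)[::-1]                     # descending order statistics
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

# Toy data: d heavy-tailed margins, all with true index gamma = 0.5 (Pareto alpha = 2).
n, d, k = 5_000, 10, 200
X = rng.pareto(2.0, size=(n, d)) + 1.0        # standard Pareto(2) margins

gammas = np.array([hill(X[:, j], k) for j in range(d)])

# Wald-type statistic for H0: all extreme value indices are equal, based on the
# asymptotic variance gamma^2 / k of the Hill estimator (independent margins assumed).
gamma_bar = gammas.mean()
wald = k * np.sum((gammas - gamma_bar) ** 2) / gamma_bar ** 2
p_value = stats.chi2.sf(wald, df=d - 1)

print("Hill estimates:", np.round(gammas, 3))
print(f"Wald statistic = {wald:.2f}, p-value = {p_value:.3f}")
# As d grows relative to k, tests of this type lose accuracy, which is the
# situation the high-dimensional test in the talk is designed for.
```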

Research Seminar in Statistics

Title: Pseudolikelihood, Score Matching, and Dynamics
Speaker, Affiliation: Frederic Koehler, University of Chicago
Date, Time: 27 November 2024, 15:15-16:15
Location: HG G 19.1
Abstract: In his 1975 paper "Statistical Analysis of Non-Lattice Data", Julian Besag proposed the pseudolikelihood method as an alternative to the standard method of maximum likelihood estimation. This method has been very influential and successful in applications like learning graphical models from data, and also inspired another related and important method called score matching. I will discuss some recent work which connects the statistical efficiency of these estimators to the computational efficiency of related sampling algorithms.
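
As background on the pseudolikelihood side (a textbook-style sketch, not Besag's original example or the talk's results), the snippet below estimates the coupling of a small chain Ising model by maximizing the pseudolikelihood: each spin's conditional distribution given its neighbours is a logistic model, so the objective is a sum of logistic losses, here minimized over a one-dimensional grid. The chain graph, Gibbs sampler settings, and single shared coupling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy chain Ising model on d spins: p(x) proportional to exp(J * sum_i x_i x_{i+1}),
# with spins x_i in {-1, +1}.
d, J, n_keep = 6, 0.4, 20_000

def gibbs_sample(n_iter=60_000):
    """Single-site Gibbs sampler; returns the last n_keep visited states."""
    x = rng.choice([-1, 1], size=d)
    out = []
    for t in range(n_iter):
        i = rng.integers(d)
        field = J * (x[i - 1] if i > 0 else 0) + J * (x[i + 1] if i < d - 1 else 0)
        x[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1
        if t >= n_iter - n_keep:
            out.append(x.copy())
    return np.array(out)

X = gibbs_sample()

def neg_pseudolikelihood(J_hat):
    """Negative log pseudolikelihood: sum over spins of -log p(x_i | neighbours),
    where each conditional p(x_i | neighbours) = sigmoid(2 * x_i * field) is logistic."""
    nll = 0.0
    for i in range(d):
        field = np.zeros(len(X))
        if i > 0:
            field += J_hat * X[:, i - 1]
        if i < d - 1:
            field += J_hat * X[:, i + 1]
        nll += np.logaddexp(0.0, -2.0 * X[:, i] * field).sum()
    return nll

# Maximum pseudolikelihood estimate of the coupling by a one-dimensional grid search.
grid = np.linspace(0.0, 1.0, 201)
J_mple = grid[np.argmin([neg_pseudolikelihood(j) for j in grid])]
print(f"true J = {J}, pseudolikelihood estimate = {J_mple:.3f}")
```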

Notes: You can subscribe to the iCal/ics calendar.
