Events

This week

Monday, 5 September
— no events scheduled —
Tuesday, 6 September
— no events scheduled —
Wednesday, 7 September
— no events scheduled —
Thursday, 8 September
Time: 16:15 - 17:15
Speaker: Aaditya Ramdas (Carnegie Mellon University)
Abstract
Conformal prediction is a popular, modern technique for providing valid predictive inference for arbitrary machine learning models. Its validity relies on the assumptions of exchangeability of the data and symmetry of the given model-fitting algorithm as a function of the data. However, exchangeability is often violated when predictive models are deployed in practice. For example, if the data distribution drifts over time, then the data points are no longer exchangeable; moreover, in such settings, we might want to use an algorithm that treats recent observations as more relevant, which would violate the assumption that data points are treated symmetrically. This paper proposes a new methodology to deal with both aspects: we use weighted quantiles to introduce robustness against distribution drift, and design a new technique to allow for algorithms that do not treat data points symmetrically. Our algorithms are provably robust, with substantially less loss of coverage when exchangeability is violated due to distribution drift or other challenging features of real data, while also achieving the same coverage guarantees as existing conformal prediction methods if the data points are in fact exchangeable. Finally, we demonstrate the practical utility of these new tools with simulations and real-data experiments. This is joint work with Rina Barber, Emmanuel Candes and Ryan Tibshirani. A preprint is at https://arxiv.org/abs/2202.13415.

Bio: Aaditya Ramdas (PhD, 2015) is an assistant professor at Carnegie Mellon University, in the Departments of Statistics and Machine Learning. He was a postdoc at UC Berkeley (2015–2018) and obtained his PhD at CMU (2010–2015), receiving the Umesh K. Gavaskar Memorial Thesis Award. His undergraduate degree was in Computer Science from IIT Bombay (2005–2009). Aaditya was an inaugural inductee of the COPSS Leadership Academy and a recipient of the 2021 Bernoulli New Researcher Award. His work is supported by an NSF CAREER Award, an Adobe Faculty Research Award (2020), an ARL Grant on Safe Reinforcement Learning, the Block Center Grant for election auditing, and a Google Research Scholar Award (2022) for structured uncertainty quantification, amongst others. Aaditya's main theoretical and methodological research interests include selective and simultaneous inference, game-theoretic statistics and safe anytime-valid inference, and distribution-free uncertainty quantification for black-box ML. His areas of applied interest include privacy, neuroscience, genetics and auditing (elections, real estate, financial), and his group's work has received multiple best paper awards.
Series: ETH-FDS seminar
Title: Conformal prediction beyond exchangeability (= quantifying uncertainty for black-box ML without distributional assumptions)
Location: OAS J 10, ETH AI Center, OAS, Binzmühlestrasse 13, 8050 Zürich
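The weighted-quantile construction described in the abstract can be made concrete with a short sketch. Everything below (the function names, the exponential decay weighting, and absolute residuals as nonconformity scores) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def weighted_quantile(scores, weights, q):
    """Weighted empirical quantile: the smallest score whose
    cumulative normalized weight reaches level q."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w / w.sum())
    idx = np.searchsorted(cdf, q)
    return s[min(idx, len(s) - 1)]

def drift_robust_interval(point_pred, cal_preds, cal_y, alpha=0.1, decay=0.99):
    """Conformal-style prediction interval with exponentially decaying
    weights on calibration points (arrays ordered oldest to newest).
    Upweighting recent points gives robustness to gradual drift."""
    scores = np.abs(cal_y - cal_preds)           # absolute residuals
    n = len(scores)
    weights = decay ** np.arange(n - 1, -1, -1)  # newest point gets weight 1
    qhat = weighted_quantile(scores, weights, 1 - alpha)
    return point_pred - qhat, point_pred + qhat
```

With `decay=1.0` this reduces to the usual unweighted split-conformal quantile, which matches the abstract's claim that the method retains standard guarantees when the data are in fact exchangeable.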
Friday, 9 September
Time: 14:15 - 15:15
Speaker: Bin Yu (UC Berkeley)
Abstract
Occam's razor is a general principle in science: pursue the simplest explanation or model when the empirical evidence supports the explanations or models under consideration equally well. To quantify simplicity, a complexity measure is necessary, and many such measures have been used in the literature, including uniform stability. Both complexity and stability are central to interpretable machine learning. In this talk, we first give an overview of interpretable machine learning and then delve into our recent work on decision trees, which are especially useful interpretable methods in high-stakes applications such as medicine and public policy. In particular, we show that decision trees are sub-optimal for additive regression models. To improve upon decision trees, we introduce a new method called Fast Interpretable Greedy-Tree Sums (FIGS) that fits additive trees while controlling the total number of splits. The state-of-the-art performance of FIGS will be illustrated through case studies for clinical decision rules.
Series: Research Seminar in Statistics
Title: Complexity, simplicity, and decision trees
Location: HG G 19.1
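To make the "sum of trees with a shared split budget" idea concrete, here is a toy caricature that greedily adds depth-one trees (stumps) fitted to the current residuals. This is an assumed simplification for illustration only, not the FIGS implementation (FIGS grows and jointly refits full trees under the shared budget):

```python
import numpy as np

def fit_stump(X, r):
    """Least-squares best single-split stump on residuals r.
    Returns (feature index, threshold, left mean, right mean)."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:  # candidate split points
            left = X[:, j] <= t
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = ((r - pred) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, r[left].mean(), r[~left].mean())
    return best[1:]

def greedy_tree_sum(X, y, n_splits=5):
    """Additive sum of stumps, one split at a time on the residuals.
    The single budget n_splits caps the model's total complexity."""
    stumps, r = [], y.astype(float)
    for _ in range(n_splits):
        j, t, lv, rv = fit_stump(X, r)
        stumps.append((j, t, lv, rv))
        r = r - np.where(X[:, j] <= t, lv, rv)
    return stumps

def predict(stumps, X):
    out = np.zeros(len(X))
    for j, t, lv, rv in stumps:
        out += np.where(X[:, j] <= t, lv, rv)
    return out
```

On data generated by an additive function of two features, two stumps (one per feature) can fit the signal exactly, whereas a single tree would need to split on every combination of the two features; this is the sub-optimality of single trees for additive models that the abstract refers to.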