Weekly Bulletin
The FIM publishes a newsletter, the FIM Weekly Bulletin, which presents a selection of the mathematics seminars and lectures taking place at ETH Zurich and the University of Zurich. It is sent by e-mail every Tuesday during the semester and can be accessed on this website at any time.
Subscribe to the Weekly Bulletin
| Monday, 25 August | | | |
|---|---|---|---|
| — no events scheduled — | | | |
| Tuesday, 26 August | | | |
|---|---|---|---|
| Time | Speaker | Title | Location |
| 15:00–16:00 | Hao Chen (University of California, Davis, USA) | Research Seminar in Statistics: Change-point detection for modern complex data | HG G 19.1 |

Abstract: Change-point analysis is thriving in this big data era, addressing problems that arise across many fields where massive data sequences are collected to study complex phenomena over time. It plays a crucial role in processing these data by segmenting long sequences into homogeneous parts for subsequent studies. Observations could be high-dimensional or not lie in Euclidean space, such as network data, which are challenging to characterize using parametric models. We utilize the inter-point information of the observations and propose a series of nonparametric methods to address the issue. In particular, we take into account a pattern caused by the curse of dimensionality so that the proposed methods can accommodate a broad range of alternatives. Additionally, we work out ways to analytically approximate the p-values of the test statistics, enabling rapid type I error control. The methods are applied to Neuropixels data in the analysis of thousands of neurons' activities.
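The abstract does not spell out the speaker's specific test statistics, but the general idea of distance-based nonparametric change-point detection can be illustrated with a minimal sketch. The example below scans candidate split points and picks the one maximizing a size-weighted energy statistic computed from pairwise Euclidean distances; all function names and the weighting choice are illustrative, not taken from the talk.

```python
import numpy as np

def energy_statistic(x, y):
    """Two-sample energy statistic from pairwise Euclidean distances:
    2*E|X-Y| - E|X-X'| - E|Y-Y'|, large when the samples differ."""
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1).mean()
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1).mean()
    dyy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1).mean()
    return 2 * dxy - dxx - dyy

def detect_changepoint(seq, min_seg=5):
    """Scan all admissible split points t and return the one that
    maximizes the segment-size-weighted energy statistic."""
    n = len(seq)
    best_t, best_stat = None, -np.inf
    for t in range(min_seg, n - min_seg):
        stat = (t * (n - t) / n) * energy_statistic(seq[:t], seq[t:])
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

# Toy example: a mean shift at index 50 in a 10-dimensional sequence.
rng = np.random.default_rng(0)
seq = np.vstack([rng.normal(0.0, 1.0, (50, 10)),
                 rng.normal(1.0, 1.0, (50, 10))])
t_hat, stat = detect_changepoint(seq)
```

In practice the significance of the maximum is assessed against its null distribution (e.g. by permutation, or by the kind of analytical p-value approximations the abstract mentions), rather than by the raw value alone.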
| Wednesday, 27 August | | | |
|---|---|---|---|
| — no events scheduled — | | | |
| Thursday, 28 August | | | |
|---|---|---|---|
| Time | Speaker | Title | Location |
| 15:15–16:15 | John Duchi (Stanford University) | ETH-FDS seminar: On labels in supervised learning problems | HG D 1.2 |

Abstract: When we teach statistics and machine learning, we typically imagine problems in which we wish to predict some target Y from data X, to build understanding of the relationship between these two variables, or to test some predicted effect of intervening between them. We fit models based on samples of these pairs, yet we rarely investigate precisely where our labeled data come from, referring instead to labels (Y) in supervised learning problems as "gold-standard" feedback or something similar. These labels, however, are constructed via sophisticated pipelines that aggregate expert (or non-expert) feedback and combine observations in elaborate ways, and we do not model these choices in our statistical learning pipelines. In this talk, I will discuss some work we have been doing to open up this bigger picture of statistics, providing some food for thought about how we might move beyond our standard statistical analyses.
| Friday, 29 August | | | |
|---|---|---|---|
| — no events scheduled — | | | |