DACO Seminar

Please subscribe here if you would like to be notified about these presentations via e-mail. You can also subscribe to the iCal/ics calendar.

Autumn Semester 2021

Title: Sequential Decision Making: How Much Adaptivity Is Needed Anyways?
Speaker, Affiliation: Prof. Dr. Amin Karbasi, Yale University
Date, Time: 16 November 2021, 14:15-15:15
Location: HG G 19.1
Abstract: Adaptive stochastic optimization under partial observability is one of the fundamental challenges in artificial intelligence and machine learning, with a wide range of applications including active learning, optimal experimental design, interactive recommendations, viral marketing, Wikipedia link prediction, and perception in robotics, to name a few. In such problems, one needs to adaptively make a sequence of decisions while taking into account the stochastic observations collected in previous rounds. For instance, in active learning, the goal is to learn a classifier by carefully requesting as few labels as possible from a set of unlabeled data points. Similarly, in experimental design, a practitioner may conduct a series of tests in order to reach a conclusion. Even though it is possible to determine all the selections ahead of time, before any observations take place (e.g., select all the data points at once or conduct all the medical tests simultaneously), a strategy known as a priori selection, it is more efficient to use a fully adaptive procedure that exploits the information obtained from past selections in order to make a new selection. In this talk, we introduce semi-adaptive policies, for a wide range of decision-making problems, that enjoy the power of fully sequential procedures while performing exponentially fewer adaptive rounds.
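
To make the contrast concrete, here is a minimal active-learning sketch in Python comparing a priori selection (a batch of queries committed before any labels are seen) with a fully adaptive policy (uncertainty sampling that refits after every label). The data, model, and budget are made up for illustration; this shows the general setting of the talk, not the semi-adaptive policies it introduces.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool of unlabeled points with a linear ground-truth labeling.
X = rng.normal(size=(500, 2))
y = (X @ np.array([1.0, -2.0]) > 0).astype(int)

def pool_accuracy(idx):
    # Fit on the queried labels, evaluate on the whole pool.
    return LogisticRegression().fit(X[idx], y[idx]).score(X, y)

budget = 20

# A priori selection: commit to all queries before seeing any label.
apriori = list(rng.choice(len(X), size=budget, replace=False))

# Fully adaptive selection: one seed point per class, then repeatedly
# refit and query the pool point the current classifier is least sure of.
adaptive = [int(np.flatnonzero(y == 0)[0]), int(np.flatnonzero(y == 1)[0])]
for _ in range(budget - 2):
    clf = LogisticRegression().fit(X[adaptive], y[adaptive])
    margin = np.abs(clf.decision_function(X))  # distance to decision boundary
    margin[adaptive] = np.inf                  # never re-query a labeled point
    adaptive.append(int(np.argmin(margin)))

print("a priori accuracy:", pool_accuracy(apriori))
print("adaptive accuracy:", pool_accuracy(adaptive))
```
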
Title: Distribution-free robust linear regression
Speaker, Affiliation: Prof. Dr. Jaouad Mourtada, ENSAE/CREST
Date, Time: * 23 November 2021, 10:15-11:15
Location: HG G 19.1
Abstract: We consider the problem of random-design linear regression, in a distribution-free setting where no assumption is made on the distribution of the predictive/input variables. After surveying existing approaches and indicating some improvements, we explain why they fall short in our setting. We then identify the minimal assumption on the target/output under which guarantees are possible, and describe a nonlinear prediction procedure achieving the optimal error bound with high probability.
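
As a point of reference, the following sketch sets up a random-design regression problem with a heavy-tailed design, a regime where classical least-squares analyses strain (the distribution-free setting of the talk assumes even less). It shows only the problem setting and the ordinary-least-squares baseline, not the speaker's nonlinear procedure; all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 200
theta = rng.normal(size=d)            # true linear predictor

def sample(m):
    # Heavy-tailed design (Student t, 3 degrees of freedom).  The
    # distribution-free setting places no moment assumptions at all;
    # even this mild heaviness already strains classical OLS bounds.
    X = rng.standard_t(df=3, size=(m, d))
    y = X @ theta + rng.normal(size=m)
    return X, y

X, y = sample(n)
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares

# Monte Carlo estimate of the excess prediction risk
# E[(x' theta_hat - x' theta)^2] over fresh draws of the design.
X_test, _ = sample(100_000)
excess_risk = np.mean((X_test @ (theta_hat - theta)) ** 2)
print("OLS excess risk estimate:", excess_risk)
```
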
Title: Analysis of gradient descent on wide two-layer ReLU neural networks
Speaker, Affiliation: Prof. Dr. Lénaïc Chizat, EPF Lausanne
Date, Time: 23 November 2021, 14:15-15:15
Location: HG G 19.1
Abstract: In this talk, we propose an analysis of gradient descent on wide two-layer ReLU neural networks that leads to sharp characterizations of the learned predictor. The main idea is to study the training dynamics as the width of the hidden layer goes to infinity, in which limit they are described by a Wasserstein gradient flow. While these dynamics evolve on a non-convex landscape, we show that for appropriate initializations their limit, when it exists, is a global minimizer. We also study the implicit regularization of this algorithm when the objective is the unregularized logistic loss, which leads to a max-margin classifier in a certain functional space. We finally discuss what these results tell us about generalization performance, and in particular how these models compare to kernel methods.
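
For orientation, here is a minimal sketch of the object under study: plain gradient descent on a wide two-layer ReLU network in the mean-field (1/m output) scaling, whose infinite-width limit is the Wasserstein gradient flow mentioned above. The toy data, width, and step size are arbitrary choices for illustration, not the talk's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d regression data.
n = 50
x = np.linspace(-2.0, 2.0, n)
y = np.sign(x) * np.minimum(np.abs(x), 1.0)   # clipped-ramp target

# Two-layer ReLU network in mean-field scaling:
#   f(x) = (1/m) * sum_j a_j * relu(w_j * x + b_j)
m = 2000
w, b, a = rng.normal(size=m), rng.normal(size=m), rng.normal(size=m)

def forward(x):
    pre = np.outer(x, w) + b                  # (n, m) pre-activations
    return pre.clip(min=0) @ a / m, pre

# Plain gradient descent on 0.5 * mean squared error.  The 1/m output
# scaling shrinks every gradient by 1/m, so time is sped up by a factor
# of m (folded into the step size), matching the mean-field limit.
eta = 0.05
for step in range(3000):
    f, pre = forward(x)
    act = pre.clip(min=0)
    res = (f - y) / n                         # d(loss)/d(f_i)
    g = (pre > 0) * res[:, None] * a          # shared factor for w, b grads
    a -= eta * (act.T @ res)
    w -= eta * (g * x[:, None]).sum(axis=0)
    b -= eta * g.sum(axis=0)

print("final training MSE:", np.mean((forward(x)[0] - y) ** 2))
```
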
Title: Landscape analysis of an improved power method for tensor decomposition
Speaker, Affiliation: Dr. Joao Pereira, UT Austin
Date, Time: * 30 November 2021, 17:15-18:15
Location: Zoom
Abstract: In this talk, I will discuss the optimization formulation for symmetric tensor decomposition that Joe Kileel and I recently introduced in the Subspace Power Method (SPM). Unlike popular alternative functionals for tensor decomposition, the SPM objective function has the desirable properties that its maximal value is known in advance and its global optima are exactly the rank-1 components of the tensor when the input is sufficiently low-rank. We derive quantitative bounds such that any second-order critical point with SPM objective value exceeding the bound must equal a tensor component in the noiseless case, and must approximate a tensor component if the tensor is only approximately low-rank. We obtain a near-global guarantee for an overcomplete random tensor model, and a global guarantee assuming deterministic frame conditions. This implies that SPM with suitable initialization is a provable, efficient, and robust algorithm for low-rank symmetric tensor decomposition. Time permitting, I will mention work in progress on a version of SPM for implicit tensor decomposition, which decomposes moment tensors without explicitly forming them and is much faster for this type of tensor.
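
For context, the sketch below runs the classical symmetric tensor power iteration on a synthetic low-rank tensor. This is only the textbook baseline that SPM improves upon: SPM instead performs its power iteration inside a subspace extracted from the tensor's flattening, which is not reproduced here, and the problem sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic low-rank symmetric order-3 tensor: T = sum_i v_i (x) v_i (x) v_i.
d, r = 8, 3
V = rng.normal(size=(r, d))
T = np.einsum("ia,ib,ic->abc", V, V, V)

# Classical symmetric tensor power iteration: x <- T(I, x, x) / ||T(I, x, x)||.
# With nearly orthogonal components this typically converges to (a scalar
# multiple of) one of the v_i.
x = rng.normal(size=d)
x /= np.linalg.norm(x)
for _ in range(100):
    x = np.einsum("abc,b,c->a", T, x, x)
    x /= np.linalg.norm(x)

# Alignment of the fixed point with the true components.
cosines = (V @ x) / np.linalg.norm(V, axis=1)
print("best |cosine| with a true component:", np.abs(cosines).max())
```
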
Title: Matrix Discrepancy from Quantum Communication
Speaker, Affiliation: Prof. Dr. Sam Hopkins, MIT
Date, Time: * 21 December 2021, 17:15-18:15
Location: Zoom
Abstract: TBA

Note: an asterisk (*) marks events whose time and/or location differ from the usual ones.
