DACO Seminar

Please subscribe here if you would like to be notified about these events by e-mail. You can also subscribe to the iCal/ics calendar.

Autumn Semester 2021

Title: Sequential Decision Making: How Much Adaptivity Is Needed Anyways?
Speaker, Affiliation: Prof. Dr. Amin Karbasi, Yale University
Date, Time: 16 November 2021, 14:15-15:15
Location: HG G 19.1
Abstract: Adaptive stochastic optimization under partial observability is one of the fundamental challenges in artificial intelligence and machine learning, with a wide range of applications including active learning, optimal experimental design, interactive recommendations, viral marketing, Wikipedia link prediction, and perception in robotics. In such problems, one must adaptively make a sequence of decisions while taking into account the stochastic observations collected in previous rounds. For instance, in active learning the goal is to learn a classifier by requesting as few labels as possible from a set of unlabeled data points; similarly, in experimental design a practitioner may conduct a series of tests in order to reach a conclusion. Even though it is possible to fix all selections before any observations take place (e.g., select all data points at once or run all medical tests simultaneously), a so-called a priori selection, it is more efficient to use a fully adaptive procedure that exploits the information obtained from past selections when making a new one. In this talk, we introduce semi-adaptive policies for a wide range of decision-making problems that enjoy the power of fully sequential procedures while performing exponentially fewer adaptive rounds.
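
To make the gap between a priori, semi-adaptive, and fully adaptive selection concrete, here is a minimal Python sketch (not from the talk) for the toy problem of locating a 1-D decision threshold by querying labels: fully adaptive binary search needs about log2(n) rounds, while querying a small batch of labels per round already cuts the round count sharply. The pool size, batch size, and helper names are illustrative assumptions.

```python
import numpy as np

# Toy pool-based active learning: locate a 1-D decision threshold.
# Points below an unknown theta have label 0, points at or above have label 1.
rng = np.random.default_rng(0)
theta = rng.uniform()
pool = np.sort(rng.uniform(size=1024))          # unlabeled pool, sorted
label = lambda x: int(x >= theta)               # one label query

def fully_adaptive(pool):
    """Binary search: one label query per round, ~log2(n) adaptive rounds."""
    lo, hi, rounds = 0, len(pool) - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        rounds += 1
        if label(pool[mid]):
            hi = mid                            # threshold is at or below mid
        else:
            lo = mid + 1
    return pool[lo], rounds

def semi_adaptive(pool, batch=8):
    """Query a batch of evenly spaced points per round: far fewer rounds."""
    lo, hi, rounds = 0, len(pool) - 1, 0
    while hi - lo > 1:
        idx = np.unique(np.linspace(lo, hi, batch, dtype=int))
        rounds += 1
        for i in idx:                           # one parallel batch of queries
            if label(pool[i]) == 0:
                lo = i                          # rightmost index labeled 0
            else:
                hi = i                          # leftmost index labeled 1
                break
    return pool[hi], rounds

print(fully_adaptive(pool))                     # ~10 adaptive rounds
print(semi_adaptive(pool))                      # ~4 adaptive rounds
```

The semi-adaptive policies of the talk pursue the same trade-off, adaptivity rounds versus per-round parallelism, in far more general decision-making settings.
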
* Title: Distribution-free robust linear regression
Speaker, Affiliation: Prof. Dr. Jaouad Mourtada, ENSAE/CREST
Date, Time: 23 November 2021, 10:15-11:15
Location: HG G 19.1
Abstract: We consider the problem of random-design linear regression in a distribution-free setting where no assumption is made on the distribution of the predictive/input variables. After surveying existing approaches and indicating some improvements, we explain why they fall short in our setting. We then identify the minimal assumption on the target/output under which guarantees are possible, and describe a nonlinear prediction procedure achieving the optimal error bound with high probability.
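
As an illustration of the random-design setting (a sketch under assumed toy parameters, not the talk's procedure), the snippet below fits ordinary least squares on heavy-tailed inputs and then applies a generic nonlinear post-processing step, clipping predictions to the observed response range. The data model, sample sizes, and the clipping rule are all assumptions for illustration.

```python
import numpy as np

# Random-design linear regression: (X, y) pairs are i.i.d. draws, and the
# error is measured on a fresh draw of X, so the input distribution matters.
rng = np.random.default_rng(1)
d, n = 5, 12
w_star = rng.normal(size=d)

def sample(m):
    # Heavy-tailed inputs (Student t, 2.5 degrees of freedom): a regime
    # where guarantees for plain least squares become delicate.
    X = rng.standard_t(df=2.5, size=(m, d))
    return X, X @ w_star + rng.normal(size=m)

X, y = sample(n)
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

Xt, yt = sample(100_000)                    # fresh design for the test risk
pred = Xt @ w_ols
risk_ols = np.mean((pred - yt) ** 2)

# A generic *nonlinear* post-processing step: clip predictions to the range
# of observed responses. This is not the talk's optimal procedure, only an
# illustration of why a good distribution-free predictor may be nonlinear.
pred_clip = np.clip(pred, y.min(), y.max())
risk_clip = np.mean((pred_clip - yt) ** 2)
print(f"OLS risk: {risk_ols:.1f}   clipped risk: {risk_clip:.1f}")
```
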
Title: Analysis of gradient descent on wide two-layer ReLU neural networks
Speaker, Affiliation: Prof. Dr. Lénaïc Chizat, EPF Lausanne
Date, Time: 23 November 2021, 14:15-15:15
Location: HG G 19.1
Abstract: In this talk, we propose an analysis of gradient descent on wide two-layer ReLU neural networks that leads to sharp characterizations of the learned predictor. The main idea is to study the training dynamics in the limit where the width of the hidden layer goes to infinity, in which case the dynamics is a Wasserstein gradient flow. Although this dynamics evolves on a non-convex landscape, we show that for appropriate initializations its limit, when it exists, is a global minimizer. We also study the implicit regularization of this algorithm when the objective is the unregularized logistic loss, which leads to a max-margin classifier in a certain functional space. We finally discuss what these results tell us about generalization performance, and in particular how these models compare to kernel methods.
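
A minimal NumPy sketch of the object under study, assuming the mean-field scaling f(x) = (1/m) * sum_j a_j * relu(<w_j, x>): a wide two-layer ReLU network trained by plain gradient descent on the unregularized logistic loss over separable toy data. The data, width, learning rate, and the choice to keep the output weights fixed are simplifications for illustration; on separable data the minimum margin should keep growing, consistent with the max-margin implicit bias described in the abstract.

```python
import numpy as np

# Mean-field two-layer ReLU network: f(x) = (1/m) * sum_j a_j * relu(<w_j, x>).
rng = np.random.default_rng(0)
m, d, n = 2048, 3, 40                        # width, input dim (incl. bias), samples

# Linearly separable toy data; a constant last feature plays the role of a bias.
X = np.c_[rng.normal(size=(n, 2)), np.ones(n)]
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

W = rng.normal(size=(m, d))                  # hidden weights, one row per neuron
a = rng.choice([-1.0, 1.0], size=m)          # output signs, held fixed for brevity

def forward(X):
    return np.maximum(X @ W.T, 0.0) @ a / m

lr = 100.0                                   # large step size absorbs the 1/m scaling
for step in range(3001):
    margins = y * forward(X)
    g = -1.0 / (1.0 + np.exp(margins)) / n   # d(mean logistic loss)/d(margin)
    act = (X @ W.T > 0.0).astype(float)      # relu'(<w_j, x_i>), shape (n, m)
    # dLoss/dW[j] = (a_j/m) * sum_i g_i * y_i * relu'(<w_j, x_i>) * x_i
    GW = ((g * y)[:, None] * act).T @ X * (a[:, None] / m)
    W -= lr * GW
    if step % 500 == 0:
        print(step, "min margin:", margins.min())
```
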
* Title: Landscape Analysis of an improved power method for tensor decomposition
Speaker, Affiliation: Dr. Joao Pereira, UT Austin
Date, Time: 30 November 2021, 17:15-18:15
Location: Zoom
Abstract: In this talk, I will discuss the optimization formulation for symmetric tensor decomposition recently introduced in the Subspace Power Method (SPM) by Joe Kileel and me. Unlike popular alternative functionals for tensor decomposition, the SPM objective function has the desirable properties that its maximal value is known in advance and that its global optima are exactly the rank-1 components of the tensor when the input is sufficiently low-rank. We derive quantitative bounds such that any second-order critical point with SPM objective value exceeding the bound must equal a tensor component in the noiseless case, and must approximate a tensor component if the tensor is only approximately low-rank. We obtain a near-global guarantee for an overcomplete random tensor model, and a global guarantee under deterministic frame conditions. This implies that SPM with suitable initialization is a provable, efficient, and robust algorithm for low-rank symmetric tensor decomposition. Time permitting, I will mention work in progress on a version of SPM for implicit tensor decomposition, which decomposes moment tensors without explicitly forming them and is much faster for tensors of this type.
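
For context, the classical symmetric tensor power iteration that SPM improves upon can be sketched in a few lines of NumPy. The SPM objective and its subspace step are not reproduced here, and the planted orthonormal components are an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 8, 3

# Planted model: symmetric third-order tensor T = sum_k v_k (x) v_k (x) v_k
# with orthonormal components v_k (the rows of V).
V = np.linalg.qr(rng.normal(size=(d, r)))[0].T
T = np.einsum("ki,kj,kl->ijl", V, V, V)

def power_iteration(T, iters=100):
    """Classical tensor power map x <- T(I, x, x), normalized each step.
    This is the baseline that SPM builds on, not SPM itself."""
    x = rng.normal(size=T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.einsum("ijl,j,l->i", T, x, x)  # (T(I,x,x))_i = sum_k v_ki <v_k, x>^2
        x /= np.linalg.norm(x)
    return x

x = power_iteration(T)
print(np.abs(V @ x).max())   # ~1.0: the iterate aligns with a planted component
```
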
* Title: Matrix Discrepancy from Quantum Communication
Speaker, Affiliation: Prof. Dr. Sam Hopkins, MIT
Date, Time: 21 December 2021, 17:15-18:15
Location: Zoom
Abstract: TBA

Note: events marked with an asterisk (*) take place at a time and/or location that differs from the usual one.
