DACO Seminar

Please subscribe here if you would like to be notified about these presentations via e-mail. Moreover, you can subscribe to the iCal/ics Calendar.

Spring Semester 2021

Title: Statistical Limits for the Matrix Tensor Product
Speaker, Affiliation: Prof. Dr. Galen Reeves, Duke University, Durham, USA
Date, Time: 16 April 2021, 17:00-18:00
Location: Zoom Meeting
Abstract: High-dimensional models involving the products of large random matrices include the spiked matrix models appearing in principal component analysis and the stochastic block model appearing in network analysis. In this talk I will present recent theoretical work that provides an asymptotically exact characterization of the fundamental limits of inference for a broad class of these models. The first part of the talk will introduce the “matrix tensor product” model and describe some implications of the theory for community detection in correlated networks. The second part will highlight some of the ideas in the analysis, which builds upon information theory and statistical physics. The material in this talk appears in the following paper: Information-Theoretic Limits for the Matrix Tensor Product, Galen Reeves, https://arxiv.org/abs/2005.11273
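
For a concrete picture, the following minimal Python sketch simulates a rank-one spiked Wigner matrix, one of the "products of large random matrices" models the abstract mentions. It is a standard illustration rather than code from the paper; the dimension and signal-to-noise ratio are arbitrary choices.

# Minimal sketch (not from the paper): a rank-one spiked Wigner matrix.
# The dimension n and signal-to-noise ratio snr are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, snr = 1000, 4.0

x = rng.choice([-1.0, 1.0], size=n)            # hidden spike (e.g. community labels)
W = rng.normal(size=(n, n))
W = (W + W.T) / np.sqrt(2)                     # symmetric Gaussian noise, unit-variance entries
Y = np.sqrt(snr / n) * np.outer(x, x) + W      # observed spiked matrix

# For snr > 1 the top eigenvector of Y correlates with the hidden spike
# (asymptotic overlap roughly sqrt(1 - 1/snr)).
eigvals, eigvecs = np.linalg.eigh(Y)
v = eigvecs[:, -1]
overlap = abs(v @ x) / np.sqrt(n)
print(f"top eigenvalue: {eigvals[-1]:.1f}, overlap with spike: {overlap:.2f}")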

Title: Statistical Frameworks for Mapping 3D Shape Variation onto Genotypic and Phenotypic Variation
Speaker, Affiliation: Dr. Lorin Crawford, Microsoft Research & Brown University
Date, Time: 23 April 2021, 17:00-18:00
Location: Zoom Meeting
Abstract: The recent curation of large-scale databases with 3D surface scans of shapes has motivated the development of tools that better detect global patterns in morphological variation. Studies which focus on identifying differences between shapes have been limited to simple pairwise comparisons and rely on pre-specified landmarks (that are often known). In this talk, we present SINATRA: a statistical pipeline for analyzing collections of shapes without requiring any correspondences. Our method takes in two classes of shapes and highlights the physical features that best describe the variation between them. The SINATRA pipeline implements four key steps. First, SINATRA summarizes the geometry of 3D shapes (represented as triangular meshes) by a collection of vectors (or curves) that encode changes in their topology. Second, a nonlinear Gaussian process model, with the topological summaries as input, classifies the shapes. Third, an effect size analog and corresponding association metric are computed for each topological feature used in the classification model. These quantities provide evidence that a given topological feature is associated with a particular class. Fourth, the pipeline iteratively maps the topological features back onto the original shapes (in rank order according to their association measures) via a reconstruction algorithm. This highlights the physical (spatial) locations that best explain the variation between the two groups. We use a rigorous simulation framework to assess our approach, which is itself a novel contribution to 3D image analysis. Lastly, as a case study, we use SINATRA to analyze mandibular molars from four different suborders of primates and demonstrate its ability to recover known morphometric variation across phylogenies.
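
As an illustration of the first step (topological summaries of a triangular mesh), here is a minimal Python sketch of an Euler characteristic curve along one projection direction. This is not the SINATRA implementation; the helper function and the toy tetrahedron are purely illustrative.

# Illustrative sketch (not the SINATRA code): an Euler characteristic curve
# of a triangular mesh, swept along one projection direction.
import numpy as np

def euler_characteristic_curve(vertices, faces, direction, thresholds):
    """chi(t) = #V - #E + #F of the sub-mesh whose vertices project at or below t."""
    heights = vertices @ direction                     # scalar "height" per vertex
    # edges: unique vertex pairs appearing in the triangle list
    edges = np.unique(np.sort(np.vstack([faces[:, [0, 1]],
                                         faces[:, [1, 2]],
                                         faces[:, [0, 2]]]), axis=1), axis=0)
    edge_h = heights[edges].max(axis=1)                # an edge enters once both endpoints have
    face_h = heights[faces].max(axis=1)                # a face enters once all three vertices have
    return np.array([(heights <= t).sum() - (edge_h <= t).sum() + (face_h <= t).sum()
                     for t in thresholds])

# Toy example: a tetrahedron (chi of the full closed surface is 2).
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
direction = np.array([0., 0., 1.])
print(euler_characteristic_curve(verts, faces, direction, np.linspace(-0.1, 1.1, 5)))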

Title: How well can we generalize in high dimension?
Speaker, Affiliation: Dr. Inbar Seroussi, Weizmann Institute of Science, Israel
Date, Time: 7 May 2021, 17:00-18:00
Location: Zoom Meeting
Abstract: Deep learning algorithms operate in regimes that defy classical learning theory. Neural network architectures often contain more parameters than training samples. Despite their huge complexity, the generalization error achieved on real data is small. In this talk, we aim to study the generalization properties of algorithms in high dimension. Interestingly, we show that algorithms in high dimension require a small bias for good generalization. We show that this is indeed the case for deep neural networks in the overparametrized regime. In addition, we provide lower bounds on the generalization error in various settings for any algorithm. We calculate such bounds using random matrix theory (RMT). We will review the connection between deep neural networks and RMT, as well as existing results. These bounds are particularly useful when the analytic evaluation of standard performance bounds is not possible due to the complexity and nonlinearity of the model. The bounds can serve as a benchmark for testing performance and optimizing the design of actual learning algorithms. (Joint work with Prof. Ofer Zeitouni)
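
As a toy illustration of the high-dimensional regime discussed in the abstract (not the bounds from the talk), the following Python sketch measures the test error of minimum-norm least squares as the number of parameters crosses the number of samples; all sizes and the noise level are arbitrary choices.

# Toy experiment (not the talk's bounds): test error of minimum-norm least
# squares on isotropic Gaussian data as the parameter count p crosses the
# sample size n. All sizes and the noise level are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n, noise = 200, 0.5

for p in [50, 100, 190, 210, 400, 1000]:
    errs = []
    for _ in range(20):
        w_star = rng.normal(size=p) / np.sqrt(p)            # ground-truth weights
        X = rng.normal(size=(n, p))
        y = X @ w_star + noise * rng.normal(size=n)
        w_hat = np.linalg.lstsq(X, y, rcond=None)[0]         # min-norm solution when p > n
        X_test = rng.normal(size=(1000, p))
        errs.append(np.mean((X_test @ (w_hat - w_star)) ** 2))
    print(f"p/n = {p / n:4.2f}   test MSE = {np.mean(errs):.3f}")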

Title: Deep (Convolution) Networks from First Principles
Speaker, Affiliation: Prof. Dr. Yi Ma, University of California, Berkeley, USA
Date, Time: 14 May 2021, 17:00-18:00
Location: Zoom Meeting
Abstract: In this talk, we offer an entirely “white box” interpretation of deep (convolution) networks from the perspective of data compression (and group invariance). In particular, we show how modern deep layered architectures, linear (convolution) operators and nonlinear activations, and even all parameters can be derived from the principle of maximizing rate reduction (with group invariance). All layers, operators, and parameters of the network are explicitly constructed via forward propagation, instead of learned via back propagation. All components of the so-obtained network, called ReduNet, have precise optimization, geometric, and statistical interpretations. There are also several nice surprises from this principled approach: it reveals a fundamental tradeoff between invariance and sparsity for class separability; it reveals a fundamental connection between deep networks and the Fourier transform for group invariance (the computational advantage in the spectral domain: why spiking neurons?); and it clarifies the mathematical role of forward propagation (optimization) and backward propagation (variation). In particular, the so-obtained ReduNet is amenable to fine-tuning via both forward and backward (stochastic) propagation, both for optimizing the same objective. This is joint work with students Yaodong Yu, Ryan Chan, and Haozhi Qi of Berkeley, Dr. Chong You, now at Google Research, and Professor John Wright of Columbia University.
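
The rate reduction principle referred to in the abstract can be sketched as follows, following the maximal coding rate reduction formulation associated with ReduNet. The dimensions, epsilon, and synthetic features below are illustrative assumptions, not the talk's construction.

# Sketch of a rate-reduction objective of the kind the abstract maximizes
# (maximal coding rate reduction); dimensions, eps and data are illustrative.
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n*eps^2) * Z Z^T) for features Z of shape (d, n)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = rate of all features minus the class-weighted rates per class."""
    n = Z.shape[1]
    R_classes = sum((np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
                    for c in np.unique(labels))
    return coding_rate(Z, eps) - R_classes

rng = np.random.default_rng(0)
d, n = 16, 200
labels = rng.integers(0, 2, size=n)
Z_random = rng.normal(size=(d, n))
Z_random /= np.linalg.norm(Z_random, axis=0)        # features on the unit sphere
# Classes pushed toward orthogonal subspaces should score a higher rate reduction.
Z_sep = Z_random.copy()
Z_sep[: d // 2, labels == 0] = 0.0
Z_sep[d // 2 :, labels == 1] = 0.0
Z_sep /= np.linalg.norm(Z_sep, axis=0)
print("random features  :", round(rate_reduction(Z_random, labels), 3))
print("separated classes:", round(rate_reduction(Z_sep, labels), 3))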

Title: Banach Space Representer Theorems for Neural Networks
Speaker, Affiliation: Prof. Dr. Robert D. Nowak, University of Wisconsin-Madison, USA
Date, Time: 28 May 2021, 17:00-18:00
Location: Zoom Meeting
Abstract: This talk presents a variational framework to understand the properties of functions learned by neural networks fit to data. The framework is based on total variation semi-norms defined in the Radon domain, which is naturally suited to the analysis of neural activation functions (ridge functions). Finding a function that fits a dataset while having a small semi-norm is posed as an infinite-dimensional variational optimization. We derive a representer theorem showing that finite-width neural networks are solutions to the variational problem. The representer theorem is reminiscent of the classical reproducing kernel Hilbert space representer theorem, but we show that neural networks are solutions in a non-Hilbertian Banach space. While the learning problems are posed in an infinite-dimensional function space, similar to kernel methods, they can be recast as finite-dimensional neural network training problems. These neural network training problems have regularizers which are related to the well-known weight decay and path-norm regularizers. Thus, the results provide new insight into the functional characteristics of overparameterized neural networks and also into the design of neural network regularizers. Our results also provide new theoretical support for a number of empirical findings in deep learning architectures, including the benefits of "skip connections", sparsity, and low-rank structures. This is joint work with Rahul Parhi.
Bio: Robert D. Nowak holds the Nosbusch Professorship in Engineering at the University of Wisconsin-Madison, where his research focuses on signal processing, machine learning, optimization, and statistics.
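
As a toy illustration of the finite-dimensional training problems mentioned in the abstract, the following Python sketch fits a one-hidden-layer ReLU network with weight decay on the input and output weights; the width, step size, regularization strength, and 1-D data are illustrative choices, not from the talk.

# Toy sketch (not the talk's algorithm): fitting a one-hidden-layer ReLU
# network with weight decay on the input/output weights. Width, step size,
# lam and the 1-D data are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.abs(x) + 0.05 * rng.normal(size=x.size)      # target with a kink at 0

m, lam, lr = 50, 1e-3, 0.02                         # width, weight decay, step size
w, b, v = rng.normal(size=m), rng.normal(size=m), 0.1 * rng.normal(size=m)

for step in range(10000):
    pre = np.outer(x, w) + b                        # (n, m) pre-activations
    h = np.maximum(pre, 0.0)                        # ReLU features
    r = h @ v - y                                   # residuals
    g = 2 * r / x.size                              # d(mean squared error)/d(output)
    gh = np.outer(g, v) * (pre > 0)                 # back-prop through the ReLU
    v -= lr * (h.T @ g + 2 * lam * v)               # gradient steps with weight decay
    w -= lr * ((x[:, None] * gh).sum(axis=0) + 2 * lam * w)
    b -= lr * gh.sum(axis=0)

print("train MSE:", round(np.mean((np.maximum(np.outer(x, w) + b, 0) @ v - y) ** 2), 4))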