Talks (titles and abstracts)

Helmut Bölcskei: Information-theoretic limits of approximation through deep neural networks

We develop the information-theoretic limits of approximation through deep neural networks by viewing deep networks as encoding the functions to be approximated through their topology and weights. Specifically, we establish a connection between the complexity of a function class and the complexity of deep networks approximating functions from this class to within a prescribed accuracy. Additionally, we prove that the information-theoretic optimum is achievable for a broad family of function classes.

Francesco Capponi: Trade duration and the square-root law of market impact

Market impact of trades has been empirically observed to be a concave function of trade size with an exponent close to 1/2, the so-called square-root law. We conduct an extensive empirical investigation of market impact using a large dataset of trades executed in US equity markets by institutional investors over several years. We demonstrate that price changes are explained by the square-root of duration, rather than the trade size. We posit that this square-root relation arises from the scaling of volatility as a function of duration. Conditional on trade duration, the dependence of price impact on trade size is shown to be quite small. We also find that the sign of a price change during a trade may quite often be the opposite of what traditional impact modeling has assumed: we find a high probability (~40%) that buy (resp. sell) trades may be accompanied by a negative (resp. positive) price change. Our results show that the sign of price changes is better explained by the contemporaneous aggregate order flow imbalance than by the direction of the individual trade. Joint work with Rama Cont and Amir Sani.
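A quick synthetic illustration (not the authors' dataset) of how a square-root scaling exponent can be recovered by ordinary least squares in log-log coordinates; all numbers below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate trade durations and price moves whose magnitude scales like
# sqrt(duration), perturbed by multiplicative noise, then recover the
# exponent by a linear fit in log-log coordinates.
durations = rng.uniform(1.0, 1000.0, size=5000)              # durations (s)
impact = np.sqrt(durations) * np.exp(0.1 * rng.standard_normal(5000))

slope, intercept = np.polyfit(np.log(durations), np.log(impact), 1)
print(f"estimated exponent: {slope:.3f}")                    # close to 0.5
```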

Patrick Cheridito: Deep optimal stopping

We introduce a deep learning method for optimal stopping problems which directly learns the optimal stopping rule from Monte Carlo samples. As such it is broadly applicable in situations where the underlying randomness can efficiently be simulated. We test the method on two benchmark problems: the pricing of a Bermudan max-call option and the problem of optimally stopping a fractional Brownian motion.
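For orientation, a minimal sketch of the classical regression-based cousin of such methods, Longstaff-Schwartz Monte Carlo for a Bermudan put; the approach of the talk replaces this polynomial regression with stopping decisions learned by neural networks. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T, n_steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths.
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

# Backward induction: regress the continuation value on polynomial features
# and exercise whenever the immediate payoff exceeds it.
cash = np.maximum(K - S[:, -1], 0.0)                  # payoff at maturity
for t in range(n_steps - 1, 0, -1):
    cash *= disc
    itm = K - S[:, t] > 0                             # regress in-the-money paths only
    if itm.sum() > 0:
        coeffs = np.polyfit(S[itm, t], cash[itm], 2)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = np.maximum(K - S[itm, t], 0.0)
        stop = exercise > continuation
        cash[np.where(itm)[0][stop]] = exercise[stop]
price = disc * cash.mean()
print(f"Bermudan put price ~ {price:.2f}")
```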

Rama Cont: Universal features of intraday price formation: lessons from deep learning

Using a large-scale Deep Learning approach applied to a high-frequency database containing billions of electronic market quotes and transactions for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model is shown to exhibit a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. Joint work with: Justin Sirignano (University of Illinois Urbana Champaign)

Rama Cont: Pathwise integration and change of variable formulas for continuous paths with arbitrary regularity

We construct a pathwise integration theory, associated with a change of variable formula, for smooth functionals of continuous paths with arbitrary regularity defined in terms of the notion of p-th variation along a sequence of time partitions. For paths with finite p-th variation along a sequence of time partitions, we derive a change of variable formula for p times continuously differentiable functions and show pointwise convergence of appropriately defined compensated Riemann sums. Results for functions are extended to regular path-dependent functionals using the concept of vertical derivative of a functional. We show that the pathwise integral satisfies an ''isometry'' formula in terms of p-th order variation and obtain a ''signal plus noise'' decomposition for regular functionals of paths with strictly increasing p-th variation. For less regular functions we obtain a pathwise Tanaka-type formula using an appropriately defined notion of ''p-th order local time''. These results extend to multidimensional paths and yield a natural higher-order extension of the concept of ''reduced rough path''. While our construction is canonical and does not involve the specification of any rough-path superstructure, the pathwise integral coincides with a rough-path integral for a certain rough path. Joint work with Nicolas Perkowski (Humboldt).
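The notion of p-th variation can be sanity-checked numerically; a minimal sketch for a simulated Brownian path, where the partition is simply the simulation grid:

```python
import numpy as np

# For a Brownian path on [0, 1], the p = 2 variation along refining
# partitions approaches t = 1, while higher-order variations vanish.
rng = np.random.default_rng(2)
n = 2**18
W = np.cumsum(rng.standard_normal(n) * np.sqrt(1.0 / n))   # Brownian path on [0, 1]
W = np.insert(W, 0, 0.0)

def pth_variation(path, p):
    """Sum of |increments|^p along the (finest) partition of the path."""
    return np.sum(np.abs(np.diff(path)) ** p)

qv = pth_variation(W, 2)
pv3 = pth_variation(W, 3)
print(qv)    # ~ 1 (quadratic variation of Brownian motion over [0, 1])
print(pv3)   # ~ 0 (third-order variation vanishes)
```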

Paul Embrechts: Quantile-based risk sharing

We address the problem of risk sharing among agents using a two-parameter class of quantile-based risk measures, the so-called Range-Value-at-Risk (RVaR), as their preferences. The family of RVaR includes the Value-at-Risk (VaR) and the Expected Shortfall (ES), the two popular and competing regulatory risk measures, as special cases. We first establish an inequality for RVaR-based risk aggregation, showing that RVaR satisfies a special form of subadditivity.  Then, the Pareto-optimal risk sharing problem is solved through explicit construction. To study risk sharing in a competitive market, an Arrow-Debreu equilibrium is established for some simple, yet natural settings. Further, we investigate the problem of model uncertainty in risk sharing, and show that, generally, a robust optimal allocation exists if and only if none of the underlying risk measures is a VaR. Practical implications of our main results for risk management and policy makers are discussed, and several novel advantages of ES over VaR from the perspective of a regulator are thereby revealed.

Tobias Fissler: Elicitability and identifiability of measures of systemic risk

We establish elicitability and identifiability results for systemic risk measures of the form \(R(Y) = \{k\in\mathbb{R}^n\,|\, \rho(\Lambda(Y+k))\le0\}\).

Here, \(\Lambda \colon \mathbb R^n \to \mathbb R\) is an increasing aggregation function, \(\rho\) is a real-valued risk measure, and the random vector \(Y\) represents a system of \(n\) financial firms.

That means the risk measure \(R(Y)\) takes an a priori perspective, being the set of all capital allocations \(k\in\mathbb{R}^n\) which make the aggregated system \(\Lambda(Y+k)\) acceptable under \(\rho\). 

The elicitability of a risk measure, or more generally, a statistical functional amounts to the existence of a strictly consistent scoring or loss function. That is a function in two arguments, a forecast and an observation, such that the expected score is minimised by the correctly specified functional value, thereby encouraging truthful forecasts. Prominent examples are the squared loss for the mean and the absolute loss for the median. Hence, the elicitability of a functional is crucial for meaningful forecast comparison and forecast ranking, but also opens the way to M-estimation and regression. An identification function is similar to a scoring function; however, the correctly specified forecast is the zero of the expected identification function rather than its minimiser, thus giving rise to Z-estimation and possibilities to assess the calibration of forecasts.

In this talk, we show the intimate link between the elicitability / identifiability of \(\rho\) and \(R\), making use of an integral construction. On the one hand, our results appear to be relevant and beneficial from an applied point of view. On the other hand, they turn out to be the first (non-trivial) results on set-valued functionals in the theory of elicitability, thereby establishing a novelty of theoretical interest on its own.

Luis Garcia: The multivariate Kyle model and cross-impact estimation

More than 30 years after its formulation, the Kyle model still stands as one of the pillars of theoretical economics, providing a mechanism for price formation based on a stylized agent-based model for markets with a single traded instrument. In this talk we consider its extension to the multivariate setting, in which the agents are allowed to trade multiple securities simultaneously. We illustrate the rich structure of its equilibrium strategies, providing insight into their interpretation and implications through simple examples. In particular, we elucidate its relation with more recent empirical results highlighting the propagation of information across assets through the order flow (cross-impact). Finally, we introduce a cross-impact estimator based on the equilibrium strategies and compare it to other possible cross-impact estimators.

Lukas Gonon: Deep Hedging

We consider the problem of optimally hedging a derivative in a scenario-based discrete-time market with transaction costs. Risk preferences are specified in terms of a convex risk measure. Such a framework has suffered from numerical intractability up until recently, but this has changed thanks to technological advances: using hedging strategies built from neural networks and machine learning optimization techniques, optimal hedging strategies can be approximated very well, as the numerical study and theoretical results presented in this talk demonstrate. This is joint work with Hans Bühler, Ben Wood and Josef Teichmann.

Antoine Jacquier: Pathwise moderate deviations for option pricing

We provide a unifying treatment of pathwise moderate deviations for models commonly used in financial applications, and for related integrated functionals. Suitable scaling allows us to transfer these results into small-time, large-time and tail asymptotics for diffusions, as well as for option prices and realised variances. In passing, we highlight some intuitive relationships between moderate deviations rate functions and their large deviations counterparts; these turn out to be useful for numerical purposes, as large deviations rate functions are often difficult to compute. Joint work with Kostas Spiliopoulos (Boston University).

Arnulf Jentzen: On deep learning, the curse of dimensionality, and stochastic approximation algorithms for PDEs

Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and man-made complex systems. In particular, PDEs are a fundamental tool in portfolio optimization problems and in the state-of-the-art pricing and hedging of financial derivatives. The PDEs appearing in such financial engineering applications are often high dimensional as the dimensionality of the PDE corresponds to the number of financial assets in the involved hedging portfolio. Such PDEs typically cannot be solved explicitly and developing efficient numerical algorithms for high dimensional PDEs is one of the most challenging tasks in applied mathematics. As is well known, the difficulty lies in the so-called "curse of dimensionality" in the sense that the computational effort of standard approximation algorithms grows exponentially in the dimension of the considered PDE and there is only a very limited number of cases where a practical PDE approximation algorithm with a computational effort which grows at most polynomially in the PDE dimension has been developed. In the case of linear parabolic PDEs the curse of dimensionality can be overcome by means of stochastic approximation algorithms and the Feynman-Kac formula. We first review some results for stochastic approximation algorithms for linear PDEs and, thereafter, we present a stochastic approximation algorithm for high dimensional nonlinear PDEs whose key ingredients are deep artificial neural networks, which are widely used in data science applications. Numerical simulations and first mathematical results sketch the efficiency and the accuracy of the proposed stochastic approximation algorithm in the cases of several high dimensional PDEs from finance and physics.
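The Feynman-Kac route for linear PDEs can be sketched in a few lines; a minimal illustration on the d-dimensional heat equation \(u_t + \tfrac12\Delta u = 0\) with terminal condition \(g(x)=|x|^2\), for which the exact solution \(u(t,x)=|x|^2+d(T-t)\) is known:

```python
import numpy as np

# Feynman-Kac: u(t, x) = E[g(x + W_{T-t})], evaluated by Monte Carlo at a
# cost polynomial in the dimension d. Parameters are illustrative.
rng = np.random.default_rng(3)
d, t, T = 100, 0.0, 1.0
x = np.ones(d)
n_samples = 50_000

W = np.sqrt(T - t) * rng.standard_normal((n_samples, d))
u_mc = np.mean(np.sum((x + W) ** 2, axis=1))       # Monte Carlo estimate
u_exact = np.sum(x**2) + d * (T - t)               # exact solution: 200
print(u_mc, u_exact)
```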

Alexander Kalinin: Support theorem for path-dependent SDEs

We extend the Stroock-Varadhan support theorem for diffusion processes to the case of multidimensional stochastic differential equations with path-dependent coefficients. The proof is based on the functional Ito calculus. Joint work with Rama Cont.

Martin Larsson: Short- and long-term relative arbitrage in stochastic portfolio theory

A basic result in Stochastic Portfolio Theory states that a mild nondegeneracy condition suffices to guarantee long-term relative arbitrage, that is, the possibility to outperform the market over sufficiently long time horizons. A longstanding open question has been whether short-term relative arbitrage is also implied. Fernholz, Karatzas & Ruf recently showed that it is not, without giving tight bounds on the critical time horizon. We connect existence of relative arbitrage to a certain geometric PDE describing mean curvature flow, and use properties of such flows to compute the critical time horizon.

Chong Liu: Optimal extension to rough paths of Sobolev type

The Lyons-Victoir extension theorem asserts that every \(\alpha\)-Hölder continuous \(\mathbb{R}^d\)-valued path \(x\) admits infinitely many rough path lifts above it. It is natural to consider a "canonical" one among these lifts, e.g. the unique optimizer of some functional defined on the set of all admissible lifts, and use it to define a "canonical" rough integral with respect to \(x\). Since the Hölder topology is not well suited to optimization problems, we consider Sobolev topologies in rough path theory, and show that in the level-2 case one can find such a canonical lift as the unique optimizer of a certain functional, using a modified version of the reconstruction theorem and classical convex analysis.

Adam Majewski: Oscillating between trend and value: insights from an agent-based model on market efficiency

In this talk we will investigate a heterogeneous financial agent-based model exhibiting a phenomenological bifurcation. The qualitative change of the mispricing distribution, from unimodal to bimodal, emerges when the destabilizing activity of chartists exceeds the trading activity of fundamentalists. In the bimodal regime the market tends to either undervalue or overvalue the asset. We estimate the model on spot prices of financial instruments since 1800 using Bayesian filtering techniques. The results obtained provide further evidence against the efficiency of financial markets.

Ryan McCrickerd: Some things I have learned about volatility over the past year

We cover some topics that have come up over the past year, all linked to models of stochastic volatility. This includes the hot-start Bergomi model, the rough counterpart to this, and the fast-reversion Heston model (the normal-inverse Gaussian jump process). We show that the latter offers a delightful parameterisation of the volatility surface, provides an "arbitrage-free by design" alternative to parameterisations such as eSSVI, and naturally takes the forward variance curve as input. We discuss applications of this parameterisation alongside networks in model calibration.

Maxime Morariu-Patrichi: State-dependent Hawkes processes and their application to limit order book modelling

Motivated by the modelling of limit order books, we introduce a class of hybrid marked point processes, which encompasses and extends continuous-time Markov chains and Hawkes processes. While this flexible class amalgamates such existing processes, it also contains novel processes with complex dynamics. These processes are defined implicitly via their intensity and are endowed with a state process that interacts with past-dependent events. The key example we entertain is an extension of a Hawkes process, a state-dependent Hawkes process interacting with its state process. We show the existence and uniqueness of hybrid marked point processes under general assumptions, extending the results of Massoulié (1998) on interacting point processes. We also discuss an application of state-dependent Hawkes processes to high-frequency financial data.
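For orientation, a minimal simulation of a plain Hawkes process (without the state process of the talk) via Ogata's thinning algorithm; all parameters are illustrative:

```python
import numpy as np

# Hawkes process with exponential kernel:
#   λ(t) = μ + Σ_{t_i < t} α exp(-β (t - t_i)).
# Between events the intensity decays, so λ evaluated at the current time is
# a valid upper bound for thinning.
def simulate_hawkes(mu, alpha, beta, horizon, seed=4):
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < horizon:
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)            # candidate point
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() <= lam_t / lam_bar and t < horizon:
            events.append(t)                           # accept with prob λ(t)/λ̄
    return events

events = simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, horizon=100.0)
print(len(events))   # expected count ≈ μ·horizon / (1 - α/β) ≈ 133
```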

Aitor Muguruza: Functional central limit theorems for rough volatility models    

We extend Donsker's approximation of Brownian motion to fractional Brownian motion with Hurst exponent \(H\in (0,1)\) and to Volterra-like processes. Some of the most relevant consequences of our "rough Donsker (rDonsker) Theorem" are convergence results for discrete approximations of a large class of rough models. This justifies the validity of simple and easy-to-implement Monte Carlo methods, for which we provide detailed numerical recipes. We test these against the current benchmark Hybrid scheme and find remarkable agreement (for a large range of values of \(H\)). This rDonsker Theorem further provides a weak convergence proof for the Hybrid scheme itself, and allows us to construct binomial trees for rough volatility models, the first available scheme (in the rough volatility context) for early exercise options such as American or Bermudan. Joint work with Antoine Jacquier and Blanka Horvath.
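A standard exact reference method for simulating fractional Brownian motion on a grid, against which Donsker-type or Hybrid-scheme approximations can be benchmarked, is Cholesky factorisation of the covariance (this is not the scheme of the talk, and is \(O(n^3)\)):

```python
import numpy as np

# Exact fBm simulation: factorise the covariance
#   Cov(B^H_u, B^H_s) = (u^{2H} + s^{2H} - |u - s|^{2H}) / 2
# and multiply the Cholesky factor by i.i.d. standard normals.
def fbm_cholesky(H, n, T=1.0, seed=5):
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (u**(2 * H) + s**(2 * H) - np.abs(u - s)**(2 * H))
    L = np.linalg.cholesky(cov)
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

t, path = fbm_cholesky(H=0.1, n=500)
print(path[-1])   # terminal value, with Var(B^H_T) = T^{2H} = 1
```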

Marvin Müller: Stochastic Stefan-type problems

Stochastic extensions of macroscopic two-phase systems with Stefan-type boundary interaction have recently arisen in the modelling of modern financial markets. While existence and uniqueness results can be established in many situations, these non-linear systems remain difficult to handle tractably. To gain a deeper understanding of the solutions, we discuss approximation results for such classes of stochastic moving boundary problems.

Eyal Neumann: Fractional Brownian motion with zero Hurst parameter

It has been recently established that the volatility of financial assets is rough. This means that the behavior of the log-volatility process is similar to that of a fractional Brownian motion with Hurst parameter around 0.1. Motivated by this finding, we wish to define a natural and relevant limit for the fractional Brownian motion when \(H\) goes to zero. We show that once properly normalized, the fractional Brownian motion converges to a Gaussian random distribution which is very close to a log-correlated random field. Joint work with Mathieu Rosenbaum.

Stefano Novello: A pathwise Föllmer-Protter-Shiryaev formula and extension to path-dependent functionals 

We prove a pathwise version of the generalized Ito formula of Föllmer-Protter and Shiryaev, for functions of multidimensional paths with finite quadratic variation. We then extend this formula to the case of path-dependent functionals, obtaining a functional change of variable formula for functionals which possess directional derivatives in a weak sense.

Mikko Pakkanen: Turbocharging Monte Carlo pricing for the rough Bergomi model

The rough Bergomi model, due to Bayer, Friz and Gatheral (2016), is one of the recent rough volatility models that are able to parsimoniously capture the term structure of at-the-money implied volatility skew observed in equity markets. The practical adoption of this model is, however, made difficult by its non-Markovian and non-affine structure, not amenable to standard analytical pricing methods. This motivates the quest for efficient Monte Carlo methods for the model. In this talk, I will outline a composition of variance reduction methods that will significantly reduce the computational cost of Monte Carlo pricing for the rough Bergomi model. In particular, full calibration to implied volatility surfaces is now within the realms of possibility. Joint work with Ryan McCrickerd.
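One of the simplest ingredients such variance-reduction compositions can build on is antithetic sampling; a minimal sketch on a plain Black-Scholes call (not the rough Bergomi model itself), pairing each Gaussian draw with its negation:

```python
import numpy as np

rng = np.random.default_rng(6)
S0, K, r, sigma, T, n = 100.0, 100.0, 0.0, 0.2, 1.0, 100_000

Z = rng.standard_normal(n)

def payoff(z):
    """Discounted call payoff under Black-Scholes terminal dynamics (r = 0)."""
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.maximum(ST - K, 0.0)

plain = payoff(Z)
anti = 0.5 * (payoff(Z) + payoff(-Z))            # antithetic pairs
print(plain.mean(), anti.mean())                  # both near the BS value ~7.97
print(plain.var(), anti.var())                    # antithetic variance is smaller
```

For monotone payoffs the pair (f(Z), f(-Z)) is negatively correlated, so the antithetic estimator has strictly smaller variance per sample.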

Athena Picarelli: State constrained optimal control problems via reachability approach

This work deals with a class of stochastic optimal control problems in the presence of state constraints. It is well known that for such problems the value function is, in general, discontinuous, and its characterisation by a Hamilton-Jacobi equation requires additional assumptions involving an interplay between the boundary of the set of constraints and the dynamics of the controlled system. Here, we give a characterization of the epigraph of the value function without assuming the usual controllability assumptions. To this end, the stochastic optimal control problem is first translated into a state-constrained stochastic target problem. Then a level-set approach is used to describe the backward reachable sets of the new target problem. It turns out that these backward reachable sets describe the value function. The main advantage of our approach is that it allows us to easily handle the state constraints by an exact penalisation. However, the target problem involves a new state variable and a new control variable that is unbounded.

Max Reppen: Discrete dividend payments in continuous time

We propose a model in which dividend payments occur at regular intervals in an otherwise continuous model. This contrasts traditional models where either the rate of (continuous) dividend payments is controlled or the dynamics are given by discrete time processes. The model enables us to find the loss caused by infrequent dividend payments. Moreover, between two dividend payments, the structure allows for other types of control; we consider the possibility of equity issuance at any point in time. We prove the convergence of an efficient numerical algorithm which we use to study the problem.

Eric Schaanning: Measuring price-mediated contagion and reverse stress testing

How can one quantify the notion of interconnectedness that common asset holdings create? Do regulatory stress scenarios adequately test for vulnerabilities that these overlapping portfolios can generate?
First, our paper introduces two novel measures of price-mediated contagion derived from the Perron eigenvector of the matrix quantifying the liquidity-weighted overlaps of institutional portfolios in a network. The Endogenous Risk Index (ERI) captures spillovers across portfolios in scenarios of deleveraging and has a natural micro-foundation that arises when accounting for institution-level losses occurring in a fire sale. The Indirect Contagion Index (ICI) allows one to quantify the degree of "interconnectedness" for systemically important financial institutions' portfolios by accounting for the losses that a distressed liquidation would inflict on other portfolios. Second, we develop a reverse stress testing methodology for price-mediated contagion and use it to analyse how close the European Banking Authority's official 2016 stress scenario was to a worst-case scenario in terms of price-mediated contagion. Our results suggest that while the official scenario is correlated with the worst-case scenario, the EBA test did not precisely target the vulnerabilities for contagion in the European banking system, as implied by the portfolio holdings of European banks at the time. Our findings are robust to introducing an optimal bank-deleveraging response that seeks to minimize liquidation losses.
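A minimal sketch of the Perron-eigenvector computation underlying such indices, on a hypothetical 3-institution overlap matrix (the entries are invented for illustration):

```python
import numpy as np

# Hypothetical symmetric matrix of liquidity-weighted portfolio overlaps.
overlaps = np.array([[1.0, 0.6, 0.1],
                     [0.6, 1.0, 0.3],
                     [0.1, 0.3, 1.0]])

# Power iteration converges to the Perron eigenvector (the eigenvector of
# the largest eigenvalue), which has strictly positive entries for an
# irreducible nonnegative matrix.
v = np.ones(3)
for _ in range(100):
    v = overlaps @ v
    v /= np.linalg.norm(v)

perron_value = v @ overlaps @ v          # Rayleigh quotient ~ largest eigenvalue
print(perron_value, v)
```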

Josef Teichmann: Generalized Feller processes and Markovian lifts

We consider stochastic (partial) differential equations from the point of view of the generalized Feller property which has been introduced in, e.g., Dörsek-Teichmann. As an application we provide existence, uniqueness and approximation results for a Markovian lift of affine rough volatility models of general jump diffusion type. We demonstrate in particular that in this Markovian light most of the arguments become transparent and almost classical. (joint work with Christa Cuchiero)

Chen Yang: Daily rebalancing of leveraged ETFs

A leveraged exchange-traded fund (LETF) is a financial instrument that aims to achieve target daily returns equal to a constant multiple (e.g. 2x or 3x) of the daily returns of an underlying index. To meet this target, an LETF must make large transactions to rebalance its portfolio at the end of each day, which is costly due to transaction costs, front-running and other reasons. We propose a model that takes these frictions into account and provide daily rebalancing strategies that do not require large transactions. This is joint work with Min Dai, Steven Kou and Halil Mete Soner.
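The effect of daily resetting can be seen with two days of arithmetic: because a 2x LETF compounds daily returns, its multi-day return is path-dependent and generally differs from twice the index return.

```python
index_returns = [0.10, -0.10]            # index: +10% then -10%

index_total = 1.0
letf_total = 1.0
for r in index_returns:
    index_total *= 1 + r                 # buy-and-hold index
    letf_total *= 1 + 2 * r              # 2x leverage, reset daily

print(index_total - 1)                   # -1%  (1.1 * 0.9 - 1)
print(letf_total - 1)                    # -4%  (1.2 * 0.8 - 1), not 2 x (-1%)
```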
