Arrow Research: Search

Author name cluster

Adrien Corenflos

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches; it is not a full author-identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

ICML 2024 · Conference Paper

Nesting Particle Filters for Experimental Design in Dynamical Systems

  • Sahel Iqbal
  • Adrien Corenflos
  • Simo Särkkä
  • Hany Abdulsamad

In this paper, we propose a novel approach to Bayesian experimental design for non-exchangeable data that formulates it as risk-sensitive policy optimization. We develop the Inside-Out SMC$^2$ algorithm, a nested sequential Monte Carlo technique to infer optimal designs, and embed it into a particle Markov chain Monte Carlo framework to perform gradient-based policy amortization. Our approach is distinct from other amortized experimental design techniques, as it does not rely on contrastive estimators. Numerical validation on a set of dynamical systems showcases the efficacy of our method in comparison to other state-of-the-art strategies.
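The inner workhorse of this approach is sequential Monte Carlo. As a point of reference, here is a minimal bootstrap particle filter on an illustrative linear-Gaussian state-space model; the model, its parameters, and all variable names below are assumptions made for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-Gaussian state-space model (an assumption, not the paper's):
#   x_t = 0.9 * x_{t-1} + N(0, 1),    y_t = x_t + N(0, 0.5^2)
T, N = 50, 500
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = 0.9 * xs[t - 1] + rng.normal()
ys = xs + 0.5 * rng.normal(size=T)

particles = rng.normal(size=N)  # samples from the (assumed) prior
log_lik = 0.0
filt_means = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(size=N)    # propagate through the dynamics
    logw = -0.5 * ((ys[t] - particles) / 0.5) ** 2      # Gaussian log-weights (up to a constant)
    m = logw.max()
    log_lik += m + np.log(np.mean(np.exp(logw - m)))    # incremental log-likelihood estimate
    w = np.exp(logw - m)
    w /= w.sum()
    filt_means[t] = np.dot(w, particles)                # filtered-mean estimate of E[x_t | y_{1:t}]
    particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling
```

Nested SMC samplers such as Inside-Out SMC$^2$ run particle systems like this one inside an outer particle system; the sketch only shows the basic building block.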

JMLR 2024 · Journal Article

Parallel-in-Time Probabilistic Numerical ODE Solvers

  • Nathanael Bosch
  • Adrien Corenflos
  • Fatemeh Yaghoobi
  • Filip Tronarp
  • Philipp Hennig
  • Simo Särkkä

Probabilistic numerical solvers for ordinary differential equations (ODEs) treat the numerical simulation of dynamical systems as problems of Bayesian state estimation. Aside from producing posterior distributions over ODE solutions and thereby quantifying the numerical approximation error of the method itself, one less-often noted advantage of this formalism is the algorithmic flexibility gained by formulating numerical simulation in the framework of Bayesian filtering and smoothing. In this paper, we leverage this flexibility and build on the time-parallel formulation of iterated extended Kalman smoothers to formulate a parallel-in-time probabilistic numerical ODE solver. Instead of simulating the dynamical system sequentially in time, as done by current probabilistic solvers, the proposed method processes all time steps in parallel and thereby reduces the computational complexity from linear to logarithmic in the number of time steps. We demonstrate the effectiveness of our approach on a variety of ODEs and compare it to a range of both classic and probabilistic numerical ODE solvers.
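The linear-to-logarithmic reduction rests on recasting a sequential recursion as an associative scan whose elements can be combined in parallel. A toy sketch on the scalar affine recursion x_t = a_t * x_{t-1} + b_t (an assumption chosen for illustration; the paper's scan combines Kalman filter/smoother elements, not scalars) shows the idea: each pass doubles the combination distance, so only about log2(T) vectorized passes are needed.

```python
import numpy as np

def scan_affine(a, b):
    """Inclusive parallel scan of x_t = a_t * x_{t-1} + b_t (with x_{-1} = 0).
    Composing (a1, b1) then (a2, b2) gives (a2 * a1, a2 * b1 + b2), which is
    associative, so a doubling scan needs only ceil(log2(T)) passes."""
    a, b = a.copy(), b.copy()
    T = len(a)
    shift = 1
    while shift < T:
        # Combine element t with element t - shift, vectorized over all t;
        # elements with t < shift are padded with the identity map (1, 0).
        a_prev = np.concatenate([np.ones(shift), a[:-shift]])
        b_prev = np.concatenate([np.zeros(shift), b[:-shift]])
        a, b = a * a_prev, a * b_prev + b
        shift *= 2
    return b  # b[t] now holds x_t

rng = np.random.default_rng(1)
a = rng.uniform(0.5, 1.0, size=64)
b = rng.normal(size=64)
x = scan_affine(a, b)
```

On parallel hardware each pass runs in constant depth, which is where the logarithmic span comes from; on a CPU the same code still runs but without the asymptotic benefit.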

JMLR 2022 · Journal Article

De-Sequentialized Monte Carlo: a parallel-in-time particle smoother

  • Adrien Corenflos
  • Nicolas Chopin
  • Simo Särkkä

Particle smoothers are SMC (Sequential Monte Carlo) algorithms designed to approximate the joint distribution of the states given observations from a state-space model. We propose dSMC (de-Sequentialized Monte Carlo), a new particle smoother that is able to process $T$ observations in $\mathcal{O}(\log_2 T)$ time on parallel architectures. This compares favorably with standard particle smoothers, the complexity of which is linear in $T$. We derive $\mathcal{L}_p$ convergence results for dSMC, with an explicit upper bound, polynomial in $T$. We then discuss how to reduce the variance of the smoothing estimates computed by dSMC by (i) designing good proposal distributions for sampling the particles at the initialization of the algorithm, as well as by (ii) using lazy resampling to increase the number of particles used in dSMC. Finally, we design a particle Gibbs sampler based on dSMC, which is able to perform parameter inference in a state-space model at a $\mathcal{O}(\log_2 T)$ cost on parallel hardware.

ICML 2021 · Conference Paper

Differentiable Particle Filtering via Entropy-Regularized Optimal Transport

  • Adrien Corenflos
  • James Thornton
  • George Deligiannidis
  • Arnaud Doucet

Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models. Resampling is a key ingredient of PF, necessary to obtain low-variance likelihood and state estimates. However, traditional resampling methods result in PF-based loss functions being non-differentiable with respect to model and PF parameters. In a variational inference context, resampling also yields high-variance gradient estimates of the PF-based evidence lower bound. By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results. We demonstrate this novel method on a variety of applications.
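The core idea can be sketched in NumPy: compute an entropy-regularized transport plan from the weighted particle measure to a uniform measure on the same locations, then use the (smooth) plan to form equally weighted particles as convex combinations of the old ones, so the map stays differentiable. The regularization strength, iteration count, and toy example below are assumptions for the sketch, not the paper's exact algorithm.

```python
import numpy as np

def sinkhorn_plan(w, x, eps=1.0, n_iter=500):
    """Entropy-regularized OT plan between the weighted particle measure
    (weights w at locations x) and the uniform measure at the same locations."""
    N = len(x)
    C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost matrix
    K = np.exp(-C / eps)                     # Gibbs kernel
    u = np.ones(N)
    a, b = w, np.full(N, 1.0 / N)            # source / target marginals
    for _ in range(n_iter):                  # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]       # plan P with P @ 1 = a, P.T @ 1 = b

rng = np.random.default_rng(0)
x = rng.normal(size=50)
logw = -0.5 * x ** 2
w = np.exp(logw - logw.max())
w /= w.sum()

P = sinkhorn_plan(w, x)
x_soft = 50 * (P.T @ x)  # differentiable "soft-resampled" particles, uniform weights
```

Because every operation above is smooth in `w` and `x`, gradients flow through the resampling step, unlike multinomial resampling's discrete index draws.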

JMLR 2021 · Journal Article

POT: Python Optimal Transport

  • Rémi Flamary
  • Nicolas Courty
  • Alexandre Gramfort
  • Mokhtar Z. Alaya
  • Aurélie Boisbunon
  • Stanislas Chambon
  • Laetitia Chapel
  • Adrien Corenflos

Optimal transport has recently been reintroduced to the machine learning community thanks in part to novel efficient optimization procedures allowing for medium to large scale applications. We propose a Python toolbox that implements several key optimal transport ideas for the machine learning community. The toolbox contains implementations of a number of founding works of OT for machine learning such as Sinkhorn algorithm and Wasserstein barycenters, but also provides generic solvers that can be used for conducting novel fundamental research. This toolbox, named POT for Python Optimal Transport, is open source with an MIT license.