Arrow Research search

Author name cluster

Frédéric Chazal

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
2 author rows

Possible papers (9)

JMLR Journal 2024 Journal Article

Topological Analysis for Detecting Anomalies in Dependent Sequences: Application to Time Series

  • Frédéric Chazal
  • Clément Levrard
  • Martin Royer

This paper introduces a new methodology, based on Topological Data Analysis, for detecting structural anomalies in dependent sequences of complex data. A motivating example is multivariate time series, for which our method can detect global changes in the dependence structure between channels. The proposed approach is lean enough to handle large-scale data sets, and extensive numerical experiments support the intuition that it is better suited to detecting global changes in correlation structure than existing methods. Some theoretical guarantees for quantization algorithms based on dependent sequences are also provided.
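To make the target concrete, a naive baseline (not the paper's TDA method; a minimal sketch assuming numpy) flags the windows where the channel-correlation matrix drifts between consecutive windows:

```python
import numpy as np

def correlation_change_scores(series, window):
    # Baseline, not the paper's topological method: score each pair of
    # consecutive non-overlapping windows by the Frobenius distance
    # between their channel-correlation matrices.
    T, _ = series.shape
    scores, prev = [], None
    for start in range(0, T - window + 1, window):
        C = np.corrcoef(series[start:start + window].T)
        if prev is not None:
            scores.append(np.linalg.norm(C - prev))
        prev = C
    return np.array(scores)
```

The paper's claim is precisely that its topological summaries pick up such global correlation changes more reliably than correlation baselines of this kind.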

ICML Conference 2021 Conference Paper

Optimizing persistent homology based functions

  • Mathieu Carrière
  • Frédéric Chazal
  • Marc Glisse
  • Yuichi Ike
  • Hariprasad Kannan
  • Yuhei Umeda

Solving optimization tasks based on functions and losses with a topological flavor is a very active and growing field of research in data science and Topological Data Analysis, with applications in non-convex optimization, statistics, and machine learning. However, the approaches proposed in the literature are usually anchored to a specific application and/or topological construction, and do not come with theoretical guarantees. To address this issue, we study the differentiability of a general map associated with the most common topological construction, that is, the persistence map. Building on real analytic geometry arguments, we propose a general framework that allows us to define and compute gradients for persistence-based functions in a very simple way. We also provide a simple, explicit, and sufficient condition for convergence of stochastic subgradient methods for such functions. This result encompasses all the constructions and applications of topological optimization in the literature. Finally, we provide associated code that is easy to use and to combine with other non-topological methods and constraints, as well as some experiments showcasing the versatility of our approach.
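The setting can be illustrated in its simplest instance (a sketch, assuming numpy; not the paper's general framework): in degree 0, the persistence of the Rips filtration of a point cloud is the multiset of minimum-spanning-tree edge lengths, so total degree-0 persistence and a subgradient with respect to the points can be computed directly:

```python
import numpy as np

def mst_edges(X):
    # Prim's algorithm on the complete Euclidean graph of the point cloud.
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool); in_tree[0] = True
    best = D[0].copy(); parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((parent[j], j))
        in_tree[j] = True
        upd = D[j] < best
        parent[upd] = j
        best = np.minimum(best, D[j])
    return edges

def total_persistence_and_grad(X):
    # In degree 0, the finite bars of the Rips filtration have lengths equal
    # to the MST edge lengths, so total persistence = total MST weight.
    # The MST combinatorics are locally constant, so differentiating the
    # edge lengths gives a subgradient with respect to the points.
    total, grad = 0.0, np.zeros_like(X)
    for i, j in mst_edges(X):
        d = X[i] - X[j]
        length = np.linalg.norm(d)
        if length > 1e-12:        # subgradient undefined at coincident points
            total += length
            g = d / length
            grad[i] += g
            grad[j] -= g
    return total, grad
```

Running plain subgradient descent with this oracle shrinks the total degree-0 persistence, which is the kind of topological optimization loop the paper studies in much greater generality.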

IJCAI Conference 2021 Conference Paper

Topological Uncertainty: Monitoring Trained Neural Networks through Persistence of Activation Graphs

  • Théo Lacombe
  • Yuichi Ike
  • Mathieu Carrière
  • Frédéric Chazal
  • Marc Glisse
  • Yuhei Umeda

Although neural networks can reach astonishing performance in a wide variety of contexts, properly training networks on complicated tasks requires expertise and can be computationally expensive. In industrial applications, data coming from an open-world setting may differ widely from the benchmark datasets on which a network was trained. Being able to monitor the presence of such variations without retraining the network is of crucial importance. In this paper, we develop a method to monitor trained neural networks based on the topological properties of their activation graphs. To each new observation we assign a Topological Uncertainty, a score that assesses the reliability of the predictions by examining the whole network instead of only its final layer, as practitioners typically do. Our approach works entirely at a post-training level: it requires no assumptions about the network architecture or optimization scheme, no data augmentation, and no auxiliary datasets, and it can be faithfully applied to a large range of network architectures and data types. We experimentally showcase the potential of Topological Uncertainty for trained-network selection, out-of-distribution detection, and shift detection, on both synthetic and real datasets of images and graphs.
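The activation-graph idea can be caricatured in a few lines (a heavily simplified sketch, assuming numpy; the edge weighting, the merge-value summary, and the L2 comparison below are illustrative stand-ins, not the paper's construction): for one dense layer with weight matrix W and input activation a, weight each bipartite edge by |W[i,j] * a[j]| and record the degree-0 merge values of the decreasing-weight filtration, i.e. the maximum-spanning-tree edge weights:

```python
import numpy as np

def merge_weights(W, a):
    # Bipartite "activation graph" for one dense layer: edge (j -> i) gets
    # weight |W[i, j] * a[j]|. Adding edges in decreasing weight order and
    # recording the weights at which connected components merge yields the
    # degree-0 merge values of the superlevel filtration (equivalently, the
    # maximum-spanning-tree edge weights).
    n_out, n_in = W.shape
    weights = np.abs(W * a[None, :])
    edges = sorted(((weights[i, j], n_in + i, j)
                    for i in range(n_out) for j in range(n_in)), reverse=True)
    parent = list(range(n_in + n_out))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    merges = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            merges.append(w)
    return np.sort(np.array(merges))

def topological_score(W, a, reference):
    # Crude stand-in for Topological Uncertainty: distance between the test
    # observation's merge vector and a reference vector from training data.
    return float(np.linalg.norm(merge_weights(W, a) - reference))
```

A score of zero means the new observation's activation graph summary matches the training reference; large scores flag inputs the network processes differently from its training data.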

JMLR Journal 2018 Journal Article

Robust Topological Inference: Distance To a Measure and Kernel Distance

  • Frédéric Chazal
  • Brittany Fasy
  • Fabrizio Lecci
  • Bertrand Michel
  • Alessandro Rinaldo
  • Larry Wasserman

Let $P$ be a distribution with support $S$. The salient features of $S$ can be quantified with persistent homology, which summarizes the topological features of the sublevel sets of the distance function (the distance from any point $x$ to $S$). Given a sample from $P$, we can infer the persistent homology using an empirical version of the distance function. However, the empirical distance function is highly non-robust to noise and outliers; even one outlier is deadly. The distance-to-a-measure (DTM), introduced by Chazal et al. (2011), and the kernel distance, introduced by Phillips et al. (2014), are smooth functions that provide useful topological information while remaining robust to noise and outliers. Chazal, Massart, and Michel (2014) derived concentration bounds for the DTM. Building on these results, we derive limiting distributions and confidence sets, and we propose a method for choosing tuning parameters.
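For a sample of size $n$ and mass parameter $m$, the empirical DTM averages the squared distances to the $k = \lceil mn \rceil$ nearest sample points: $\mathrm{DTM}_m(x) = \sqrt{\frac{1}{k}\sum_{i=1}^{k} \|x - X_{(i)}\|^2}$. A minimal sketch (assuming numpy), contrasting it with the non-robust empirical distance function:

```python
import numpy as np

def empirical_distance(X, query):
    # Distance function to the sample: zero at every sample point, so a
    # single outlier carves a spurious low-lying region into the sublevel sets.
    D = np.linalg.norm(query[:, None, :] - X[None, :, :], axis=-1)
    return D.min(axis=1)

def dtm(X, query, m=0.1):
    # Empirical distance-to-a-measure: root of the mean of the k smallest
    # squared distances to the sample, with k = ceil(m * n).
    k = max(1, int(np.ceil(m * len(X))))
    D2 = np.sum((query[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.sqrt(np.sort(D2, axis=1)[:, :k].mean(axis=1))
```

The averaging over $k$ neighbors is what buys robustness: one outlier cannot drag the DTM down, because the other $k-1$ nearest sample points still sit far away.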

JMLR Journal 2015 Journal Article

Convergence Rates for Persistence Diagram Estimation in Topological Data Analysis

  • Frédéric Chazal
  • Marc Glisse
  • Catherine Labruère
  • Bertrand Michel

Computational topology has recently seen an important development toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and that persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.

ICML Conference 2015 Conference Paper

Subsampling Methods for Persistent Homology

  • Frédéric Chazal
  • Brittany Terese Fasy
  • Fabrizio Lecci
  • Bertrand Michel
  • Alessandro Rinaldo
  • Larry A. Wasserman

Persistent homology is a multiscale method for analyzing the shape of sets and functions from point cloud data arising from an unknown distribution supported on those sets. When the size of the sample is large, direct computation of the persistent homology is prohibitive due to the combinatorial nature of the existing algorithms. We propose to compute the persistent homology of several subsamples of the data and then combine the resulting estimates. We study the risk of two estimators and we prove that the subsampling approach carries stable topological information while achieving a great reduction in computational complexity.
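The subsample-and-combine scheme itself is generic and can be sketched in a few lines (assuming numpy; the summary below is a stand-in statistic, mean nearest-neighbor distance, not an actual persistence computation, which would be plugged in via the `summary` argument):

```python
import numpy as np

def averaged_summary(X, subsample_size, n_subsamples, summary, seed=0):
    # Subsample-and-combine skeleton: evaluate `summary` on many small random
    # subsamples and average the results, instead of running one expensive
    # computation on all of X.
    rng = np.random.default_rng(seed)
    vals = [summary(X[rng.choice(len(X), size=subsample_size, replace=False)])
            for _ in range(n_subsamples)]
    return np.mean(vals, axis=0)

def mean_nn_dist(S):
    # Stand-in summary (not persistent homology): mean nearest-neighbor
    # distance within the subsample.
    D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    return D.min(axis=1).mean()
```

The computational point is the one the abstract makes: persistence algorithms scale combinatorially in the sample size, so many runs on subsamples of size $m \ll n$ can be far cheaper than one run on all $n$ points, while the averaged estimate remains topologically stable.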

ICML Conference 2014 Conference Paper

Convergence rates for persistence diagram estimation in Topological Data Analysis

  • Frédéric Chazal
  • Marc Glisse
  • Catherine Labruère
  • Bertrand Michel

Computational topology has recently seen an important development toward data analysis, giving birth to Topological Data Analysis. Persistent homology appears as a fundamental tool in this field. We show that the use of persistent homology can be naturally considered in general statistical frameworks. We establish convergence rates of persistence diagrams associated with data randomly sampled from any compact metric space to a well-defined limit diagram encoding the topological features of the support of the measure from which the data have been sampled. Our approach relies on a recent and deep stability result for persistence that allows us to relate our problem to support estimation problems (with respect to the Gromov-Hausdorff distance). Some numerical experiments are performed in various contexts to illustrate our results.
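The stability result alluded to here can be stated explicitly in one standard form (see e.g. Chazal, Cohen-Steiner, Guibas, Mémoli, and Oudot, 2009): for compact metric spaces $\mathbb{X}$ and $\mathbb{Y}$, the bottleneck distance between the persistence diagrams of their Rips filtrations is controlled by their Gromov-Hausdorff distance,

```latex
d_b\big(\mathrm{dgm}(\mathrm{Rips}(\mathbb{X})),\ \mathrm{dgm}(\mathrm{Rips}(\mathbb{Y}))\big)
\;\le\; 2\, d_{GH}(\mathbb{X}, \mathbb{Y}).
```

Consequently, any estimator of the support that converges in Gromov-Hausdorff distance yields, through this inequality, a convergence rate for the associated persistence diagrams in bottleneck distance, which is the reduction the abstract describes.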