Arrow Research search

Author name cluster

Emanuel Aldea

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

UAI 2025 · Conference Paper

Stochastic Embeddings: A Probabilistic and Geometric Analysis of Out-of-Distribution Behavior

  • Anthony Nguyen
  • Emanuel Aldea
  • Sylvie Le Hégarat-Mascle
  • Renaud Lustrat

Deep neural networks perform well in many applications but often fail when exposed to out-of-distribution (OoD) inputs. We identify a geometric phenomenon in the embedding space: in-distribution (ID) data show higher variance than OoD data under stochastic perturbations. Using high-dimensional geometry and statistics, we explain this behavior and demonstrate its application to improving OoD detection. Unlike traditional post-hoc methods, our approach integrates uncertainty-aware tools, such as Bayesian approximations, directly into the detection process. We then show how restricting embeddings to the unit hypersphere enhances the separation of ID and OoD samples. Our mathematically grounded method achieves competitive performance while remaining simple.
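The core signal described in the abstract can be sketched in a few lines: given several stochastic embeddings of the same input (e.g. from MC-dropout-style forward passes), ID samples spread out more than OoD samples, so the negated trace of the empirical covariance (after projecting to the unit hypersphere) serves as an OoD score. This is a minimal illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def ood_score(stochastic_embeddings):
    """OoD score from T stochastic embeddings of one input, shape (T, d).

    ID samples show HIGHER variance under stochastic perturbation, so a
    lower spread (negated covariance trace) flags a sample as OoD-like.
    """
    # Project onto the unit hypersphere; the abstract reports this
    # sharpens ID/OoD separation (L2 normalization assumed here).
    z = stochastic_embeddings / np.linalg.norm(
        stochastic_embeddings, axis=1, keepdims=True)
    return -np.trace(np.cov(z.T))  # higher score => more OoD-like

# Toy check: draws with large perturbation variance (ID-like) should
# score lower than tightly clustered draws (OoD-like).
rng = np.random.default_rng(0)
base = rng.normal(size=8)
id_draws = base + 0.5 * rng.normal(size=(32, 8))
ood_draws = base + 0.05 * rng.normal(size=(32, 8))
assert ood_score(ood_draws) > ood_score(id_draws)
```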

ICLR 2024 · Conference Paper

A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors

  • Olivier Laurent 0002
  • Emanuel Aldea
  • Gianni Franchi

The distribution of the weights of modern deep neural networks (DNNs) -- crucial for uncertainty quantification and robustness -- is an eminently complex object due to its extremely high dimensionality. This paper presents one of the first large-scale explorations of the posterior distribution of deep Bayesian Neural Networks (BNNs), expanding its study to real-world vision tasks and architectures. Specifically, we investigate the optimal approach for approximating the posterior, analyze the connection between posterior quality and uncertainty quantification, delve into the impact of modes on the posterior, and explore methods for visualizing the posterior. Moreover, we uncover weight-space symmetries as a critical aspect for understanding the posterior. To this end, we develop an in-depth assessment of the impact of both permutation and scaling symmetries that tend to obfuscate the Bayesian posterior. While the first type of transformation is known for duplicating modes, we explore the relationship between the latter and L2 regularization, challenging previous misconceptions. Finally, to help the community improve our understanding of the Bayesian posterior, we release the first large-scale checkpoint dataset, including thousands of real-world models, along with our code.
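The two weight-space symmetries the abstract highlights are easy to verify on a toy ReLU MLP: permuting the hidden units (with matching rows/columns of the two weight matrices) leaves the function unchanged, as does rescaling a hidden unit's incoming weights by a positive factor and its outgoing weights by the inverse. A minimal sketch, not the paper's code:

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Two-layer MLP with a ReLU hidden layer.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)
x = rng.normal(size=4)
y_orig = mlp(x, W1, b1, W2, b2)

# Permutation symmetry: reorder hidden units consistently.
perm = rng.permutation(16)
y_perm = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
assert np.allclose(y_orig, y_perm)

# Scaling symmetry: ReLU is positively homogeneous, so scaling a unit's
# incoming weights by s > 0 and its outgoing weights by 1/s cancels out.
scale = rng.uniform(0.5, 2.0, size=16)
y_scaled = mlp(x, W1 * scale[:, None], b1 * scale, W2 / scale[None, :], b2)
assert np.allclose(y_orig, y_scaled)
```

Both transformed networks compute exactly the same function, which is why these symmetries replicate modes in the Bayesian posterior.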

AAAI 2024 · Conference Paper

Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression

  • Xuanlong Yu
  • Gianni Franchi
  • Jindong Gu
  • Emanuel Aldea

Uncertainty quantification is critical for deploying deep neural networks (DNNs) in real-world applications. An Auxiliary Uncertainty Estimator (AuxUE) is one of the most effective means to estimate the uncertainty of the main task prediction without modifying the main task model. To be considered robust, an AuxUE must be capable of maintaining its performance and triggering higher uncertainties when encountering Out-of-Distribution (OOD) inputs, i.e., it must provide robust aleatoric and epistemic uncertainty. However, for vision regression tasks, current AuxUE designs mainly target aleatoric uncertainty estimation, and AuxUE robustness has not been explored. In this work, we propose a generalized AuxUE scheme for more robust uncertainty quantification on regression tasks. Concretely, to achieve a more robust aleatoric uncertainty estimation, different distribution assumptions are considered for heteroscedastic noise, and the Laplace distribution is chosen to approximate the prediction error. For epistemic uncertainty, we propose a novel solution named Discretization-Induced Dirichlet pOsterior (DIDO), which models the Dirichlet posterior on the discretized prediction error. Extensive experiments on age estimation, monocular depth estimation, and super-resolution tasks show that our proposed method provides robust uncertainty estimates in the face of noisy inputs and that it scales to both image-level and pixel-wise tasks.
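The Laplace noise model the abstract selects for aleatoric uncertainty has a simple per-sample training objective: the negative log-likelihood of the main-task residual under Laplace(0, b), where the scale b (here via its log, for positivity) would be the AuxUE's output. The function and names below are an illustrative sketch, not the paper's API:

```python
import numpy as np

def laplace_nll(residual, log_b):
    """Negative log-likelihood of a prediction error under Laplace(0, b).

    `residual` is the main task's prediction error; `log_b` stands in for
    the auxiliary estimator's predicted log-scale (illustrative name).
    NLL = log(2b) + |residual| / b.
    """
    b = np.exp(log_b)
    return np.log(2.0 * b) + np.abs(residual) / b

# The loss rewards matching the predicted scale to the error magnitude:
# a large predicted scale is cheaper for a large error...
assert laplace_nll(4.0, np.log(4.0)) < laplace_nll(4.0, np.log(0.5))
# ...but more expensive when the error is small.
assert laplace_nll(0.1, np.log(4.0)) > laplace_nll(0.1, np.log(0.5))
```

Minimizing this NLL over a dataset drives the predicted scale toward the local error magnitude, which is what makes it a heteroscedastic aleatoric uncertainty estimate.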

IROS 2014 · Conference Paper

SuperFAST: Model-based adaptive corner detection for scalable robotic vision

  • Gaspard Florentz
  • Emanuel Aldea

In this study, we propose a novel solution to regulate the number of interest points extracted from an image without significant additional computational cost. Our method acts at the very beginning of the detection process by using a corner occurrence model to predict the optimal threshold for a user-defined number of detections. Compared to existing approaches, which guarantee a reasonable number of corners by using a low threshold and then pruning the result, our approach is faster and more regular in terms of computation time, as it avoids scoring and sorting the detected corners. Using the FAST detector as a testbed, the strategy outlined in this article is evaluated in typical environments for robotics applications, and we report improved detection reliability under significant scene variations. Taking into account the underlying visual navigation algorithms, we show that by regularizing the data input our solution facilitates a stable processing load, lower inter-frame computation time, and robustness to scene variations.
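The threshold-prediction idea can be illustrated with a simplified stand-in for the paper's corner occurrence model: from a histogram of the previous frame's corner responses, the cumulative count of scores above each bin edge gives the threshold expected to yield roughly the target number of detections, with no per-corner scoring or sorting. This is an assumed simplification for illustration, not the paper's actual model:

```python
import numpy as np

def predict_threshold(prev_scores, target_count, n_bins=256):
    """Predict a detector threshold expected to yield ~target_count corners.

    Uses a histogram of the previous frame's corner responses; the reversed
    cumulative sum counts how many corners would survive each candidate
    threshold, so no sorting of individual detections is needed.
    """
    hist, edges = np.histogram(prev_scores, bins=n_bins)
    survivors = np.cumsum(hist[::-1])[::-1]  # corners with score >= edges[i]
    # First bin edge whose survivor count drops to <= target_count:
    idx = np.searchsorted(-survivors, -target_count)
    return edges[min(idx, n_bins - 1)]

# Toy check with an exponential score distribution (hypothetical data):
rng = np.random.default_rng(0)
scores = rng.exponential(scale=10.0, size=5000)
t = predict_threshold(scores, target_count=500)
# The predicted threshold lands near the target, up to bin granularity.
assert abs(int(np.sum(scores > t)) - 500) < 100
```

In a real pipeline the threshold predicted from frame k would be applied when detecting in frame k+1, keeping the per-frame corner budget and the processing load stable.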