Arrow Research Search

Author name cluster

Daniel Faissol

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers

Possible papers

AAAI Conference 2026 · Conference Paper

Machine Learning Models Assisting the Development of Antibody Therapeutics and Vaccines – an Emerging Trend

  • Felipe Leno Da Silva
  • Mikel Landajuela
  • Edwin A. Saada
  • Piyush Karande
  • Sudeep Sarma
  • Igor D'Angelo
  • Simone Conti
  • Daniel Faissol

The development of novel, effective medical treatments is one of the most important anticipated benefits of the AI revolution. This decade is witnessing the rise of AI models able to predict complex properties of protein-protein interactions, which hold great promise for assisting the development of antibody therapeutics and vaccines, including for diseases that have long eluded effective treatment. This paper introduces this area of research in language accessible to an AI researcher, exploring the biological problems that AI models can solve, as well as the general context needed to make solutions feasible in practical scenarios. We survey the main current trends and works in this research area and point towards currently unsolved challenges and trade-offs. We expect this paper to be helpful for AI researchers entering the field, as well as for researchers already working in one of its subtopics who wish to better understand the general context around it.

AAAI Conference 2025 · Conference Paper

DisCo-DSO: Coupling Discrete and Continuous Optimization for Efficient Generative Design in Hybrid Spaces

  • Jacob F. Pettit
  • Chak Shing Lee
  • Jiachen Yang
  • Alex Ho
  • Daniel Faissol
  • Brenden Petersen
  • Mikel Landajuela

We consider the challenge of black-box optimization within hybrid discrete-continuous and variable-length spaces, a problem that arises in various applications, such as decision tree learning and symbolic regression. We propose DisCo-DSO (Discrete-Continuous Deep Symbolic Optimization), a novel approach that uses a generative model to learn a joint distribution over discrete and continuous design variables to sample new hybrid designs. In contrast to standard decoupled approaches, in which the discrete and continuous variables are optimized separately, our joint optimization approach uses fewer objective function evaluations, is robust against non-differentiable objectives, and learns from prior samples to guide the search, leading to significant improvement in performance and sample efficiency. Our experiments on a diverse set of optimization tasks demonstrate that the advantages of DisCo-DSO become increasingly evident as problem complexity grows. In particular, we illustrate DisCo-DSO's superiority over the state-of-the-art methods for interpretable reinforcement learning with decision trees.
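To make the joint discrete-continuous sampling described above concrete, here is a minimal sketch of the core idea: an autoregressive sampler emits a discrete token at each step and, whenever that token carries a parameter, draws its continuous value in the same pass. The token set, the random stand-in for the policy network, and the Gaussian parameterization are illustrative assumptions, not DisCo-DSO's actual architecture.

```python
# Minimal sketch of DisCo-DSO-style joint sampling over a hybrid,
# variable-length design space. The random logits stand in for a learned
# autoregressive policy; token names and arities are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

TOKENS = ["add", "mul", "x", "const"]   # "const" carries a continuous value
ARITY = {"add": 2, "mul": 2, "x": 0, "const": 0}

def sample_design(max_len=16):
    """Autoregressively sample one hybrid (discrete, continuous) design."""
    design, open_slots = [], 1
    while open_slots > 0 and len(design) < max_len:
        logits = rng.normal(size=len(TOKENS))       # stand-in for policy output
        probs = np.exp(logits) / np.exp(logits).sum()
        tok = rng.choice(TOKENS, p=probs)           # discrete choice
        value = None
        if tok == "const":                          # continuous choice, drawn from a
            mu, sigma = rng.normal(), 1.0           # (stand-in) policy-emitted Gaussian
            value = rng.normal(mu, sigma)
        design.append((tok, value))
        open_slots += ARITY[tok] - 1                # prefix-expression bookkeeping
    return design

print(sample_design())
```

Because the discrete token and its continuous parameter come from one joint distribution, a single objective evaluation scores the whole design, which is the contrast the abstract draws with decoupled approaches that optimize the two variable types separately.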

AAMAS Conference 2023 · Conference Paper

Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement

  • Jiachen Yang
  • Ketan Mittal
  • Tarik Dzanic
  • Socratis Petrides
  • Brendan Keith
  • Brenden Petersen
  • Daniel Faissol
  • Robert Anderson

Adaptive mesh refinement (AMR) is necessary for efficient finite element simulations of complex physical phenomena, as it allocates a limited computational budget based on the need for higher or lower resolution, which varies over space and time. We present a novel formulation of AMR as a fully-cooperative Markov game, in which each element is an independent agent that makes refinement and de-refinement choices based on local information. We design a novel deep multi-agent reinforcement learning (MARL) algorithm called Value Decomposition Graph Network (VDGN), which solves the two core challenges that AMR poses for MARL: posthumous credit assignment due to agent creation and deletion, and unstructured observations due to the diversity of mesh geometries. For the first time, we show that MARL enables anticipatory refinement of regions that will encounter complex features at future times, thereby unlocking entirely new regions of the error-cost objective landscape that are inaccessible by traditional methods based on local error estimators. Comprehensive experiments show that VDGN policies significantly outperform error threshold-based policies in global error and cost metrics. We show that learned policies generalize to test problems with physical features, mesh geometries, and longer simulation times that were not seen in training. We also extend VDGN with multi-objective optimization capabilities to find the Pareto front of the tradeoff between cost and error.
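A toy sketch of the Markov-game framing described above: every mesh element acts as an independent agent choosing refine, coarsen, or no-op each round, with agents created and deleted as elements split. The random policy, the 1D element representation, and the simplified coarsening rule are stand-ins; the paper's learned Value Decomposition Graph Network is not shown here.

```python
# Toy sketch of AMR as a fully-cooperative Markov game: each element is an
# agent acting on local state. The random policy stands in for VDGN; agent
# creation/deletion on refinement is what makes credit assignment
# "posthumous" in the paper's terminology.
import random

random.seed(0)
ACTIONS = ("no-op", "refine", "coarsen")

def step(mesh):
    """One synchronous decision round over all element agents."""
    new_mesh = []
    for elem in mesh:                       # each element acts independently
        action = random.choice(ACTIONS)     # stand-in for a learned policy
        if action == "refine":
            half = elem["h"] / 2            # split: the parent agent is deleted,
            new_mesh += [{"h": half}, {"h": half}]  # two child agents are created
        elif action == "coarsen" and elem["h"] < 1.0:
            new_mesh.append({"h": elem["h"] * 2})   # simplified: real AMR merges siblings
        else:
            new_mesh.append(elem)
    return new_mesh

mesh = [{"h": 1.0} for _ in range(4)]
for t in range(3):
    mesh = step(mesh)
print(len(mesh), "elements after 3 rounds")
```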

NeurIPS Conference 2021 · Conference Paper

Symbolic Regression via Deep Reinforcement Learning Enhanced Genetic Programming Seeding

  • Terrell Mundhenk
  • Mikel Landajuela
  • Ruben Glatt
  • Claudio P Santiago
  • Daniel Faissol
  • Brenden K Petersen

Symbolic regression is the process of identifying mathematical expressions that fit observed output from a black-box process. It is a discrete optimization problem generally believed to be NP-hard. Prior approaches to solving the problem include neural-guided search (e.g., using reinforcement learning) and genetic programming. In this work, we introduce a hybrid neural-guided/genetic programming approach to symbolic regression and other combinatorial optimization problems. We propose a neural-guided component used to seed the starting population of a random restart genetic programming component, gradually learning better starting populations. On a number of common benchmark tasks to recover underlying expressions from a dataset, our method recovers 65% more expressions than a recently published top-performing model using the same experimental setup. We demonstrate that running many genetic programming generations without interdependence on the neural-guided component performs better for symbolic regression than alternative formulations where the two are more strongly coupled. Finally, we introduce a new set of 22 symbolic regression benchmark problems with increased difficulty over existing benchmarks. Source code is provided at www.github.com/brendenpetersen/deep-symbolic-optimization.
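The seeding loop in this abstract is easy to sketch: a learned sampler proposes the starting population for each genetic programming restart, GP evolves that population, and the best individuals are fed back to improve the sampler. The bit-string "expressions", the Bernoulli sampler standing in for the neural-guided component, and the mutation-only GP below are toy assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of neural-guided seeding for random-restart GP: the sampler
# seeds each restart, and the GP elite nudges the sampler's distribution.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]            # stand-in ground-truth "expression"
fitness = lambda ind: sum(a == b for a, b in zip(ind, TARGET))

probs = [0.5] * len(TARGET)                   # the "neural" sampler: per-bit Bernoulli

def sample_population(n=20):
    """Seed a restart's starting population from the learned distribution."""
    return [[int(random.random() < p) for p in probs] for _ in range(n)]

def gp_generation(pop):
    """Mutation-only GP stand-in: mutate, then keep the fittest."""
    children = [[b ^ (random.random() < 0.1) for b in ind] for ind in pop]
    return sorted(pop + children, key=fitness, reverse=True)[:len(pop)]

for restart in range(5):
    pop = sample_population()                 # neural-guided seed population
    for _ in range(10):                       # many GP generations run freely,
        pop = gp_generation(pop)              # decoupled from the sampler
    elite = pop[:5]                           # feed the elite back: move the
    for i in range(len(probs)):               # sampler toward what GP found
        probs[i] = 0.9 * probs[i] + 0.1 * (sum(e[i] for e in elite) / len(elite))
    print(f"restart {restart}: best fitness {fitness(pop[0])}/{len(TARGET)}")
```

Note how the GP generations run to completion before the sampler is updated, mirroring the abstract's finding that loose coupling between the two components outperforms formulations where they interact more tightly.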