Arrow Research search

Author name cluster

Marco Gori

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

33 papers
2 author rows

Possible papers

33

AAAI Conference 2026 Conference Paper

DeepProofLog: Efficient Proving in Deep Stochastic Logic Programs

  • Ying Jiao
  • Rodrigo Castellano Ontiveros
  • Luc De Raedt
  • Marco Gori
  • Francesco Giannini
  • Michelangelo Diligenti
  • Giuseppe Marra

Neurosymbolic (NeSy) AI combines neural architectures and symbolic reasoning to improve accuracy, interpretability, and generalization. While logic inference on top of subsymbolic modules has been shown to effectively guarantee these properties, this often comes at the cost of reduced scalability, which can severely limit the usability of NeSy models. This paper introduces DeepProofLog (DPrL), a novel NeSy system based on stochastic logic programs, which addresses the scalability limitations of previous methods. DPrL parameterizes all derivation steps with neural networks, allowing efficient neural guidance over the proving system. Additionally, we establish a formal mapping between the resolution process of our deep stochastic logic programs and Markov Decision Processes, enabling the application of dynamic programming and reinforcement learning techniques for efficient inference and learning. This theoretical connection improves scalability for complex proof spaces and large knowledge bases. Our experiments on standard NeSy benchmarks and knowledge graph reasoning tasks demonstrate that DPrL outperforms existing state-of-the-art NeSy systems, advancing scalability to larger and more complex settings than previously possible.
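The claimed mapping from resolution to Markov Decision Processes can be illustrated, very roughly, with value iteration on a toy proof graph (everything below — state names, reward scheme, discount — is an assumption for illustration, not the paper's formulation):

```python
# Toy sketch: once resolution is cast as an MDP over proof states, dynamic
# programming applies. Value iteration on a tiny deterministic proof graph,
# where terminal states (completed proofs) have value 1.
def proof_value_iteration(transitions, terminal, gamma=0.9, iters=100):
    """transitions: state -> list of successor proof states (resolution steps);
    terminal: set of states where the proof is complete."""
    v = {s: (1.0 if s in terminal else 0.0) for s in transitions}
    for _ in range(iters):
        for s, succs in transitions.items():
            if s not in terminal and succs:
                # greedy backup: value of the best next resolution step
                v[s] = gamma * max(v[n] for n in succs)
    return v
```

With a chain of proof states a → b → c and c terminal, the values decay geometrically with the discount, so states closer to a completed proof score higher — the signal a neural guidance policy could learn from.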

IJCAI Conference 2025 Conference Paper

Grounding Methods for Neural-Symbolic AI

  • Rodrigo Castellano Ontiveros
  • Francesco Giannini
  • Marco Gori
  • Giuseppe Marra
  • Michelangelo Diligenti

A large class of Neural-Symbolic (NeSy) methods employs a machine learner to process the input entities, while relying on a reasoner based on First-Order Logic to represent and process more complex relationships among the entities. A fundamental role for these methods is played by the process of logic grounding, which determines the relevant substitutions for the logic rules using a (sub)set of entities. Some NeSy methods use an exhaustive derivation of all possible substitutions, preserving the full expressive power of the logic knowledge, but leading to a combinatorial explosion in the number of ground formulas to consider and, therefore, strongly limiting their scalability. Other methods rely on heuristic-based selective derivations, which are generally more computationally efficient, but lack a justification and provide no guarantees of preserving the information provided to and returned by the reasoner. Taking inspiration from multi-hop symbolic reasoning, this paper proposes a parametrized family of grounding methods generalizing classic Backward Chaining. Different selections within this family yield commonly employed grounding methods as special cases and control the trade-off between expressiveness and scalability of the reasoner. The experimental results show that the selection of the grounding criterion is often as important as the NeSy method itself.
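The backward-chaining idea can be sketched for a single chain rule `head(X, Z) :- b1(X, Y), b2(Y, Z)`: starting from a query constant, only substitutions reachable through the facts are generated, rather than all pairs of constants (the toy fact format below is an assumption for illustration):

```python
# Hedged sketch of selective grounding via backward chaining for one chain
# rule head(X, Z) :- b1(X, Y), b2(Y, Z). Facts are (subject, object) pairs.
def ground_chain_rule(x, b1_facts, b2_facts):
    """Return substitutions (x, y, z) that prove head(x, z), expanding only
    bindings reachable from the query constant x instead of all pairs."""
    return [(x, y, z)
            for (x1, y) in b1_facts if x1 == x    # bind Y from b1(x, Y)
            for (y2, z) in b2_facts if y2 == y]   # bind Z from b2(Y, Z)
```

With parent facts {(ann, bob), (bob, cal), (dan, eve)} and the query constant `ann`, only the single grounding (ann, bob, cal) is produced, whereas exhaustive grounding would enumerate every constant pair.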

NeurIPS Conference 2024 Conference Paper

Nature-Inspired Local Propagation

  • Alessandro Betti
  • Marco Gori

The spectacular results achieved in machine learning, including the recent advances in generative AI, rely on large data collections. By contrast, intelligent processes in nature arise without the need for such collections, simply through on-line processing of environmental information. In particular, natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect spatiotemporal locality. This paper shows that such a feature arises from a pre-algorithmic view of learning that is inspired by related studies in Theoretical Physics. We show that the algorithmic interpretation of the derived “laws of learning”, which take the form of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity. This opens the door to machine learning studies based on fully on-line information processing, centered on replacing Backpropagation with the proposed spatiotemporally local algorithm.

AAAI Conference 2024 Conference Paper

Neural Time-Reversed Generalized Riccati Equation

  • Alessandro Betti
  • Michele Casoni
  • Marco Gori
  • Simone Marullo
  • Stefano Melacci
  • Matteo Tiezzi

Optimal control deals with optimization problems in which variables steer a dynamical system, and its outcome contributes to the objective function. Two classical approaches to solving these problems are Dynamic Programming and the Pontryagin Maximum Principle. In both approaches, Hamiltonian equations offer an interpretation of optimality through auxiliary variables known as costates. However, Hamiltonian equations are rarely used due to their reliance on forward-backward algorithms across the entire temporal domain. This paper introduces a novel neural-based approach to optimal control. Neural networks are employed not only for implementing state dynamics but also for estimating costate variables. The parameters of the latter network are determined at each time step using a newly introduced local policy referred to as the time-reversed generalized Riccati equation. This policy is inspired by a result discussed in the Linear Quadratic (LQ) problem, which we conjecture stabilizes state dynamics. We support this conjecture by discussing experimental results from a range of optimal control case studies.
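For context, the classic discrete-time Linear Quadratic problem that inspires the paper's policy admits a backward Riccati recursion; below is a scalar sketch of that classical recursion (system values are illustrative, and this is the textbook LQ case, not the paper's neural variant):

```python
# Classic scalar discrete-time LQR backward Riccati recursion: the structure
# the "time-reversed generalized Riccati equation" policy builds on.
# Dynamics x' = a*x + b*u, stage cost q*x^2 + r*u^2 (illustrative values).
def lqr_gains(a, b, q, r, horizon):
    """Backward pass: P_T = q; K = a*b*P / (r + b^2*P); P <- q + a^2*P - a*b*P*K.
    Returns feedback gains in forward time order (u_t = -K_t * x_t)."""
    p = q
    gains = []
    for _ in range(horizon):
        k = (a * b * p) / (r + b * b * p)   # optimal feedback gain
        p = q + a * a * p - a * b * p * k   # Riccati value update
        gains.append(k)
    gains.reverse()                          # computed backward, stored forward
    return gains
```

For a = b = q = r = 1, the recursion converges to the stationary gain (√5 − 1)/2 ≈ 0.618, giving the stable closed loop a − bK ≈ 0.382.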

AAAI Conference 2022 Conference Paper

Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks

  • Simone Marullo
  • Matteo Tiezzi
  • Marco Gori
  • Stefano Melacci

Amongst a variety of approaches aimed at making the learning procedure of neural networks more effective, the scientific community has developed strategies to order the examples according to their estimated complexity, to distil knowledge from larger networks, or to exploit the principles behind adversarial machine learning. A different idea has been recently proposed, named Friendly Training, which consists in altering the input data by adding an automatically estimated perturbation, with the goal of facilitating the learning process of a neural classifier. The transformation progressively fades out as training proceeds, until it completely vanishes. In this work we revisit and extend this idea, introducing a radically different and novel approach inspired by the effectiveness of neural generators in the context of Adversarial Machine Learning. We propose an auxiliary multi-layer network that is responsible for altering the input data to make them easier for the classifier to handle at the current stage of the training procedure. The auxiliary network is trained jointly with the neural classifier, thus intrinsically increasing the “depth” of the classifier, and it is expected to spot general regularities in the data alteration process. The effect of the auxiliary network is progressively reduced up to the end of training, when it is fully dropped and the classifier is deployed for applications. We refer to this approach as Neural Friendly Training. An extended experimental procedure involving several datasets and different neural architectures shows that Neural Friendly Training outperforms the originally proposed Friendly Training technique, improving the generalization of the classifier, especially in the case of noisy data.

AAAI Conference 2022 Conference Paper

Entropy-Based Logic Explanations of Neural Networks

  • Pietro Barbiero
  • Gabriele Ciravegna
  • Francesco Giannini
  • Pietro Lió
  • Marco Gori
  • Stefano Melacci

Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains. Concept-based neural networks have arisen as explainable-by-design methods as they leverage human-understandable symbols (i.e., concepts) to predict class memberships. However, most of these approaches focus on the identification of the most relevant concepts but do not provide concise, formal explanations of how such concepts are leveraged by the classifier to make predictions. In this paper, we propose a novel end-to-end differentiable approach enabling the extraction of logic explanations from neural networks using the formalism of First-Order Logic. The method relies on an entropy-based criterion which automatically identifies the most relevant concepts. We consider four different case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in terms of classification accuracy and matches black-box performance.
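One way such an entropy criterion could look (a hedged sketch with assumed variable names, not the authors' code): softmax the concept weight magnitudes into a distribution and measure its Shannon entropy, so that low entropy signals that few concepts drive the prediction:

```python
# Hypothetical sketch of an entropy-style concept relevance score.
# `weights` stands in for per-concept weights of a concept-based classifier.
import math

def concept_relevance(weights, temperature=1.0):
    """Softmax over |weights| gives a distribution alpha over concepts;
    the Shannon entropy of alpha measures how spread the focus is."""
    exps = [math.exp(abs(w) / temperature) for w in weights]
    z = sum(exps)
    alpha = [e / z for e in exps]
    entropy = -sum(a * math.log(a) for a in alpha if a > 0)
    return alpha, entropy
```

Uniform weights give the maximum entropy log(n); a single dominant weight drives the entropy toward zero, which is the regime where a short logic formula over few concepts can explain the classifier.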

AAAI Conference 2020 Conference Paper

A Constraint-Based Approach to Learning and Explanation

  • Gabriele Ciravegna
  • Francesco Giannini
  • Stefano Melacci
  • Marco Maggini
  • Marco Gori

In the last few years we have seen remarkable progress from the cultivation of the idea of expressing domain knowledge through the mathematical notion of constraint. However, the progress has mostly involved the process of providing solutions consistent with a given set of constraints, whereas learning “new” constraints, which express new knowledge, is still an open challenge. In this paper we propose a novel approach to learning constraints, based on information-theoretic principles. The basic idea consists in maximizing the transfer of information between task functions and a set of learnable constraints, implemented using neural networks subject to L1 regularization. This process leads to the unsupervised development of new constraints that are fulfilled in different subportions of the input domain. In addition, we define a simple procedure that can explain the behaviour of the newly devised constraints in terms of First-Order Logic formulas, thus extracting novel knowledge on the relationships between the original tasks. An experimental evaluation is provided to support the proposed approach, in which we also explore the regularization effects introduced by the proposed Information-Based Learning of Constraint (IBLC) algorithm.

ECAI Conference 2020 Conference Paper

A Lagrangian Approach to Information Propagation in Graph Neural Networks

  • Matteo Tiezzi
  • Giuseppe Marra
  • Stefano Melacci
  • Marco Maggini
  • Marco Gori

In many real world applications, data are characterized by a complex structure that can be naturally encoded as a graph. In recent years, the popularity of deep learning techniques has renewed interest in neural models able to process complex patterns. In particular, inspired by the Graph Neural Network (GNN) model, different architectures have been proposed to extend the original GNN scheme. GNNs exploit a set of state variables, each assigned to a graph node, and a diffusion mechanism of the states among neighbor nodes, to implement an iterative procedure to compute the fixed point of the (learnable) state transition function. In this paper, we propose a novel approach to the state computation and the learning algorithm for GNNs, based on a constraint optimisation task solved in the Lagrangian framework. The state convergence procedure is implicitly expressed by the constraint satisfaction mechanism and does not require a separate iterative phase for each epoch of the learning procedure. In fact, the computational structure is based on the search for saddle points of the Lagrangian in the adjoint space composed of weights, neural outputs (node states), and Lagrange multipliers. The proposed approach is compared experimentally with other popular models for processing graphs.
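The iterative fixed-point procedure that the Lagrangian formulation avoids can be sketched with a toy transition function (the tanh update, the weight value, and all names below are assumptions for illustration, not the paper's model):

```python
# Minimal sketch of the classic GNN fixed-point state iteration. With a small
# coupling weight the update is a contraction, so the states converge.
import math

def gnn_fixed_point(neighbors, labels, w=0.3, tol=1e-8, max_iter=1000):
    """Iterate x_v = tanh(l_v + w * sum of neighbor states) to a fixed point.
    neighbors[v]: list of neighbor indices; labels[v]: node label l_v."""
    x = [0.0] * len(labels)
    for _ in range(max_iter):
        new = [math.tanh(labels[v] + w * sum(x[u] for u in neighbors[v]))
               for v in range(len(labels))]
        if max(abs(a - b) for a, b in zip(new, x)) < tol:
            return new
        x = new
    return x
```

On a three-node path with equal labels at the endpoints, the two symmetric nodes converge to identical states, and all states stay in (-1, 1) as the tanh squashing guarantees.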

NeurIPS Conference 2020 Conference Paper

Focus of Attention Improves Information Transfer in Visual Features

  • Matteo Tiezzi
  • Stefano Melacci
  • Alessandro Betti
  • Marco Maggini
  • Marco Gori

Unsupervised learning from continuous visual streams is a challenging problem that cannot be naturally and efficiently managed in the classic batch-mode setting of computation. The information stream must be carefully processed according to an appropriate spatio-temporal distribution of the visual data, while most learning approaches commonly assume a uniform probability density. In this paper we focus on unsupervised learning for transferring visual information in a truly online setting, using a computational model inspired by the principle of least action in physics. The maximization of the mutual information is carried out by a temporal process which yields online estimation of the entropy terms. The model, which is based on second-order differential equations, maximizes the information transfer from the input to a discrete space of symbols related to the visual features of the input, whose computation is supported by hidden neurons. In order to better structure the input probability distribution, we use a human-like focus of attention model that, coherently with the information maximization model, is also based on second-order differential equations. We provide experimental results to support the theory by showing that the spatio-temporal filtering induced by the focus of attention allows the system to globally transfer more information from the input stream over the focused areas and, in some contexts, over the whole frames, with respect to the unfiltered case, which yields uniform probability distributions.

IJCAI Conference 2020 Conference Paper

Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective

  • Luís C. Lamb
  • Artur d’Avila Garcez
  • Marco Gori
  • Marcelo O. R. Prates
  • Pedro H. C. Avelar
  • Moshe Y. Vardi

Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with widespread application of GNNs in combinatorial optimization, constraint satisfaction, relational reasoning and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state-of-the-art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.

IJCAI Conference 2020 Conference Paper

Human-Driven FOL Explanations of Deep Learning

  • Gabriele Ciravegna
  • Francesco Giannini
  • Marco Gori
  • Marco Maggini
  • Stefano Melacci

Deep neural networks are usually considered black-boxes due to their complex internal architecture, which cannot straightforwardly provide human-understandable explanations of how they behave. Indeed, Deep Learning is still viewed with skepticism in those real-world domains in which incorrect predictions may produce critical effects. This is one of the reasons why in the last few years Explainable Artificial Intelligence (XAI) techniques have gained a lot of attention in the scientific community. In this paper, we focus on the case of multi-label classification, proposing a neural network that learns the relationships among the predictors associated with each class, yielding First-Order Logic (FOL)-based descriptions. Both the explanation-related network and the classification-related network are jointly learned, thus implicitly introducing a latent dependency between the development of the explanation mechanism and the development of the classifiers. Our model can integrate human-driven preferences that guide the learning-to-explain process, and it is presented in a unified framework. Different typologies of explanations are evaluated in distinct experiments, showing that the proposed approach discovers new knowledge and can improve the classifier performance.

ECAI Conference 2020 Conference Paper

Relational Neural Machines

  • Giuseppe Marra
  • Michelangelo Diligenti
  • Francesco Giannini
  • Marco Gori
  • Marco Maggini

Deep learning has been shown to achieve impressive results in several tasks where a large amount of training data is available. However, deep learning solely focuses on the accuracy of the predictions, neglecting the reasoning process leading to a decision, which is a major issue in life-critical applications. Probabilistic logic reasoning makes it possible to exploit both statistical regularities and specific domain expertise to perform reasoning under uncertainty, but its scalability and brittle integration with the layers processing the sensory data have greatly limited its applications. For these reasons, combining deep architectures and probabilistic logic reasoning is a fundamental goal towards the development of intelligent agents operating in complex environments. This paper presents Relational Neural Machines, a novel framework for jointly training the parameters of the learners and of a First-Order Logic based reasoner. A Relational Neural Machine can recover both classical learning from supervised data in the case of pure sub-symbolic learning and Markov Logic Networks in the case of pure symbolic reasoning, while also supporting joint training and inference in hybrid learning tasks. Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems. The experiments show promising results in different relational tasks.

IJCAI Conference 2019 Conference Paper

Motion Invariance in Visual Environments

  • Alessandro Betti
  • Marco Gori
  • Stefano Melacci

The puzzle of computer vision might find new challenging solutions when we realize that most successful methods are working at image level, which is remarkably more difficult than directly processing visual streams, just as happens in nature. In this paper, we claim that the processing of a stream of frames naturally leads to formulate the motion invariance principle, which enables the construction of a new theory of visual learning based on convolutional features. The theory addresses a number of intriguing questions that arise in natural vision, and offers a well-posed computational scheme for the discovery of convolutional filters over the retina. They are driven by the Euler-Lagrange differential equations derived from the principle of least cognitive action, which parallels the laws of mechanics. Unlike traditional convolutional networks, which need massive supervision, the proposed theory offers a truly new scenario in which feature learning takes place by unsupervised processing of video signals. An experimental report of the theory is presented where we show that features extracted under motion invariance yield an improvement that can be assessed by measuring information-based indexes.

AAAI Conference 2018 Conference Paper

Characterization of the Convex Łukasiewicz Fragment for Learning From Constraints

  • Francesco Giannini
  • Michelangelo Diligenti
  • Marco Gori
  • Marco Maggini

This paper provides a theoretical insight for the integration of logical constraints into a learning process. In particular, it is proved that a fragment of the Łukasiewicz logic yields a set of convex constraints. The fragment is expressive enough to include many formulas of interest, such as Horn clauses. Using the isomorphism of Łukasiewicz formulas and McNaughton functions, logical constraints are mapped to a set of linear constraints once the predicates are grounded on a given sample set. In this framework, it is shown how a collective classification scheme can be formulated as a quadratic programming problem, but the presented theory can be exploited in general to embed logical constraints into a learning process. The proposed approach is evaluated on a classification task to show how the use of the logical rules can be effective in improving the accuracy of a trained classifier.
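The standard Łukasiewicz connectives on truth values in [0, 1] are piecewise linear, which is what makes grounded clauses tractable as constraints; note for instance that 1 − min(1, 1 − a + b) = max(0, a − b) is convex in (a, b), the kind of structure the convex fragment exploits:

```python
# Standard Łukasiewicz connectives on [0, 1] (textbook definitions).
def l_and(a, b):
    """Strong conjunction (Łukasiewicz t-norm)."""
    return max(0.0, a + b - 1.0)

def l_or(a, b):
    """Strong disjunction (t-conorm)."""
    return min(1.0, a + b)

def l_imp(a, b):
    """Residual implication; 1 - l_imp(a, b) = max(0, a - b) is convex."""
    return min(1.0, 1.0 - a + b)
```

On crisp inputs {0, 1} these reduce to classical Boolean connectives, while on intermediate values each grounded formula becomes a piecewise-linear function of the predicate outputs.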

NeurIPS Conference 2017 Conference Paper

Variational Laws of Visual Attention for Dynamic Scenes

  • Dario Zanca
  • Marco Gori

Computational models of visual attention are at the crossroad of disciplines like cognitive science, computational neuroscience, and computer vision. This paper proposes a model of attentional scanpath that is based on the principle that there are foundational laws that drive the emergence of visual attention. We devise variational laws of the eye-movement that rely on a generalized view of the Least Action Principle in physics. The potential energy captures details as well as peripheral visual features, while the kinetic energy corresponds with the classic interpretation in analytic mechanics. In addition, the Lagrangian contains a brightness invariance term, which characterizes significantly the scanpath trajectories. We obtain differential equations of visual attention as the stationary point of the generalized action, and we propose an algorithm to estimate the model parameters. Finally, we report experimental results to validate the model in tasks of saliency detection.

ECAI Conference 2010 Conference Paper

Kernel-Based Hybrid Random Fields for Nonparametric Density Estimation

  • Antonino Freno
  • Edmondo Trentin
  • Marco Gori

Hybrid random fields are a recently proposed graphical model for pseudo-likelihood estimation in discrete domains. In this paper, we develop a continuous version of the model for nonparametric density estimation. To this aim, Nadaraya-Watson kernel estimators are used to model the local conditional densities within hybrid random fields. First, we introduce a heuristic algorithm for tuning the kernel bandwidths in the conditional density estimators. Second, we propose a novel method for initializing the structure learning algorithm originally employed for hybrid random fields, which was meant instead for discrete variables. In order to test the accuracy of the proposed technique, we use a number of synthetic pattern classification benchmarks, generated from random distributions featuring nonlinear correlations between the variables. As compared to state-of-the-art nonparametric and semiparametric learning techniques for probabilistic graphical models, kernel-based hybrid random fields regularly outperform each considered alternative in terms of recognition accuracy, while preserving the scalability properties (with respect to the number of variables) that originally motivated their introduction.
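A Nadaraya-Watson conditional density estimate has a standard closed form; here is a minimal sketch with a Gaussian kernel and fixed bandwidths (the paper instead tunes the bandwidths heuristically, and handles multivariate conditioning):

```python
# Nadaraya-Watson estimate of the conditional density p(y | x) from samples,
# using a Gaussian kernel and fixed bandwidths (illustrative values).
import math

def gauss(u, h):
    """Gaussian kernel with bandwidth h."""
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))

def nw_conditional_density(x, y, samples, hx=0.5, hy=0.5):
    """samples: list of (xi, yi) pairs. Estimate is
    sum_i K(x - xi) * K(y - yi) / sum_i K(x - xi)."""
    wx = [gauss(x - xi, hx) for xi, _ in samples]
    z = sum(wx)
    if z == 0.0:
        return 0.0
    return sum(w * gauss(y - yi, hy) for w, (_, yi) in zip(wx, samples)) / z
```

On samples drawn along the line y = x, the estimate at (0, 0) is much larger than at (0, 1), reflecting the dependence between the variables that a product-of-marginals model would miss.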

ECAI Conference 2010 Conference Paper

Multitask Kernel-based Learning with Logic Constraints

  • Michelangelo Diligenti
  • Marco Gori
  • Marco Maggini
  • Leonardo Rigutini

This paper presents a general framework to integrate prior knowledge, in the form of logic constraints among a set of task functions, into kernel machines. The logic propositions provide a partial representation of the environment in which the learner operates, which is exploited by the learning algorithm together with the information available in the supervised examples. In particular, we consider a multi-task learning scheme, where multiple unary predicates on the feature space are to be learned by kernel machines and a higher level abstract representation consists of logic clauses on these predicates, known to hold for any input. A general approach is presented to convert the logic clauses into a continuous implementation that processes the outputs computed by the kernel-based predicates. The learning task is formulated as a primal optimization problem of a loss function that combines a term measuring the fitting of the supervised examples, a regularization term, and a penalty term that enforces the constraints on both supervised and unsupervised examples. The proposed semi-supervised learning framework is particularly suited for learning in high dimensionality feature spaces, where the supervised training examples tend to be sparse and generalization difficult. Unlike the case of standard kernel machines, the cost function to optimize is not generally guaranteed to be convex. However, the experimental results show that it is still possible to find good solutions using a two-stage learning scheme, in which the supervised examples are first learned until convergence and the logic constraints are then enforced. Some promising experimental results on artificial multi-task learning tasks are reported, showing how the classification accuracy can be effectively improved by exploiting the a priori rules and the unsupervised examples.

IJCAI Conference 2007 Conference Paper

  • Marco Ernandes
  • Giovanni Angelini
  • Marco Gori
  • Leonardo Rigutini
  • Franco Scarselli

Term weighting systems are of crucial importance in Information Extraction and Information Retrieval applications. Common approaches to term weighting are based either on statistical or on natural language analysis. In this paper, we present a new algorithm that capitalizes on the advantages of both strategies by adopting a machine learning approach. In the proposed method, the weights are computed by a parametric function, called Context Function, that models the semantic influence exercised amongst the terms of the same context. The Context Function is learned from examples, allowing the use of statistical and linguistic information at the same time. The novel algorithm was successfully tested on crossword clues, which represent a case of Single-Word Question Answering.

IJCAI Conference 2007 Conference Paper

  • Augusto Pucci
  • Marco Gori

Recommender systems are an emerging technology that helps consumers find interesting products. A recommender system makes personalized product suggestions by extracting knowledge from previous user interactions. In this paper, we present ItemRank, a random-walk based scoring algorithm that can be used to rank products according to expected user preferences, in order to recommend top-rank items to potentially interested users. We tested our algorithm on a standard database, the MovieLens data set, which contains data collected from a popular movie recommender system and has been widely exploited as a benchmark for evaluating recently proposed approaches to recommender systems (e.g., Fouss et al., Sarwar et al.). We compared ItemRank with other state-of-the-art ranking techniques. Our experiments show that ItemRank outperforms the other algorithms we compared it to, while being less complex with respect to memory usage and computational cost.
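An ItemRank-style scoring pass can be sketched as a damped random walk biased by a user preference vector, in the spirit of PageRank (the graph encoding, damping value, and normalization details below are assumptions for illustration, not the paper's exact formulation):

```python
# Hedged sketch of a preference-biased random walk over an item graph.
# adj[i]: list of items linked from item i (assumed row-stochastic split);
# pref: per-user preference weights used as the teleport distribution.
def itemrank(adj, pref, alpha=0.85, iters=100):
    n = len(pref)
    total = sum(pref) or 1.0
    d = [p / total for p in pref]          # normalized teleport vector
    s = [1.0 / n] * n                      # uniform initial scores
    for _ in range(iters):
        new = [0.0] * n
        for i, nbrs in enumerate(adj):
            if nbrs:
                share = s[i] / len(nbrs)   # spread score over out-links
                for j in nbrs:
                    new[j] += share
        s = [alpha * new[j] + (1 - alpha) * d[j] for j in range(n)]
    return s
```

On a fully connected three-item graph where the user has interacted only with item 0, the teleport bias lifts item 0 above the two symmetric alternatives while the scores remain a probability distribution.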

ECAI Conference 2006 Conference Paper

Adaptive Context-Based Term (Re)Weighting: An Experiment on Single-Word Question Answering

  • Marco Ernandes
  • Giovanni Angelini
  • Marco Gori
  • Leonardo Rigutini
  • Franco Scarselli

Term weighting is a crucial task in many Information Retrieval applications. Common approaches are based either on statistical or on natural language analysis. In this paper, we present a new algorithm that capitalizes on the advantages of both strategies. In the proposed method, the weights are computed by a parametric function, called Context Function, that models the semantic influence exercised amongst the terms. The Context Function is learned from examples, so that its implementation is mostly automatic. The algorithm was successfully tested on a data set of crossword clues, which represent a case of Single-Word Question Answering.

ECAI Conference 2006 Conference Paper

Graph Neural Networks for Object Localization

  • Gabriele Monfardini
  • Vincenzo Di Massa
  • Franco Scarselli
  • Marco Gori

Graph Neural Networks (GNNs) are a recently proposed connectionist model that extends previous neural methods to structured domains. GNNs can be applied to datasets that contain very general types of graphs and, under mild hypotheses, they have been proven to be universal approximators on graphical domains. Whereas most of the common approaches to graph processing are based on a preliminary phase that maps each graph onto a simpler data type, like a vector or a sequence of reals, GNNs have the ability to directly process input graphs, thus embedding their connectivity into the processing scheme. In this paper, the main theoretical properties of GNNs are briefly reviewed and they are proposed as a tool for object localization. An experimentation has been carried out on the task of locating the face of a popular Walt Disney character in comic covers. In the dataset the character is shown in a number of different poses, often in cluttered backgrounds, and in a high variety of colors. The proposed learning framework provides a way to deal with complex data arising from the image segmentation process, without exploiting any prior knowledge of the dataset. The results are very encouraging, proving the viability of the method and the effectiveness of the structural representation of images.

IJCAI Conference 2005 Conference Paper

Learning Web Page Scores by Error Back-Propagation

  • Michelangelo Diligenti
  • Marco Gori
  • Marco

In this paper we present a novel algorithm to learn a score distribution over the nodes of a labeled graph (directed or undirected). Markov Chain theory is used to define the model of a random walker that converges to a score distribution which depends both on the graph connectivity and on the node labels. A supervised learning task is defined on the given graph by assigning a target score for some nodes, and a training algorithm based on error backpropagation through the graph is devised to learn the model parameters. The trained model can assign scores to the graph nodes, generalizing the criteria provided by the supervisor in the examples. The proposed algorithm has been applied to learn a ranking function for Web pages. The experimental results show the effectiveness of the proposed technique in reorganizing the rank according to the examples provided in the training set.

IJCAI Conference 2003 Conference Paper

A Learning Algorithm for Web Page Scoring Systems

  • Michelangelo Diligenti
  • Marco Gori
  • Marco Maggini

Hyperlink analysis is a successful approach to define algorithms which compute the relevance of a document on the basis of the citation graph. In this paper we propose a technique to learn the parameters of the page ranking model using a set of pages labeled as relevant or not relevant by a supervisor. In particular we describe a learning algorithm applied to a scheme similar to PageRank. The ranking algorithm is based on a probabilistic Web surfer model, and its parameters are optimized in order to increase the probability that the surfer visits a page labeled as relevant and to reduce it for pages labeled as not relevant. The experimental results show the effectiveness of the proposed technique in reorganizing the page ordering in the ranking list according to the examples provided in the learning set.

IJCAI Conference 1997 Conference Paper

On the Efficient Classification of Data Structures by Neural Networks

  • Paolo Frasconi
  • Marco Gori
  • Alessandro Sperduti

In the last few years it has been shown that recurrent neural networks are adequate for processing general data structures like trees and graphs, which opens the door to a number of new interesting applications previously unexplored. In this paper, we analyze the efficiency of learning the membership of DOAGs (Directed Ordered Acyclic Graphs) in terms of local minima of the error surface, relying on the principle that their absence is a guarantee of efficient learning. We give sufficient conditions under which the error surface is local-minima free. Specifically, we define a topological index associated with a collection of DOAGs that makes it possible to design the architecture so as to avoid local minima.