Arrow Research search

Author name cluster

Fosca Giannotti

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

12 papers
2 author rows

Possible papers (12)

NeurIPS 2025 Conference Paper

Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts

  • Andrea Pugnana
  • Riccardo Massidda
  • Francesco Giannini
  • Pietro Barbiero
  • Mateo Espinosa Zarlenga
  • Roberto Pellungrini
  • Gabriele Dominici
  • Fosca Giannotti

Concept Bottleneck Models (CBMs) are interpretable machine learning models that ground their predictions on human-understandable concepts, allowing for targeted interventions in their decision-making process. However, when intervened on, CBMs assume the availability of human experts who can identify the need to intervene and always provide correct interventions. Both assumptions are unrealistic and impractical given labor costs and human error-proneness. In contrast, Learning to Defer (L2D) extends supervised learning by allowing machine learning models to identify cases where a human is more likely to be correct than the model, thus leading to deferring systems with improved performance. In this work, we draw inspiration from L2D and propose Deferring CBMs (DCBMs), a novel framework that allows CBMs to learn when an intervention is needed. To this end, we model DCBMs as a composition of deferring systems and derive a consistent L2D loss to train them. Moreover, by relying on a CBM architecture, DCBMs can explain the reasons for deferring on the final task. Our results show that DCBMs achieve high predictive performance and interpretability by deferring only when needed.
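
A minimal sketch of the deferral idea in plain NumPy, assuming a per-concept estimate of expert accuracy: defer a concept to the human only when the expert's estimated accuracy beats the model's own confidence. This illustrates learning-to-defer gating generically, not the paper's consistent L2D loss; all names and numbers are hypothetical.

```python
import numpy as np

def defer_decision(concept_probs, expert_accuracy, margin=0.0):
    """Toy deferral rule in the spirit of learning-to-defer: hand a
    concept over to the expert only when the expert's estimated
    accuracy exceeds the model's confidence in that concept."""
    model_confidence = np.maximum(concept_probs, 1.0 - concept_probs)
    return expert_accuracy > model_confidence + margin

# Hypothetical example: three concepts, the middle one uncertain.
concept_probs = np.array([0.95, 0.55, 0.10])    # model's P(concept = 1)
expert_accuracy = np.array([0.80, 0.80, 0.80])  # estimated per-concept expert accuracy

print(defer_decision(concept_probs, expert_accuracy))
# -> [False  True False]: only the uncertain concept is deferred.
```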

ECAI 2025 Conference Paper

Group Explainability Through Local Approximation

  • Mattia Setzu
  • Riccardo Guidotti
  • Dino Pedreschi
  • Fosca Giannotti

Machine learning models are becoming increasingly complex and widely adopted. Interpretable machine learning allows us to not only make predictions but also understand the rationale behind automated decisions through explanations. Explanations are typically characterized by their scope: local explanations are generated by local surrogate models for specific instances, while global explanations aim to approximate the behavior of the entire black-box model. In this paper, we break this dichotomy of locality to explore an underexamined area that lies between these two extremes: meso-level explanations. The goal of meso-level explainability is to provide explanations using a set of meso-level interpretable models, which capture patterns at an intermediate level of abstraction. To this end, we propose GROUX, an explainable-by-design algorithm that generates meso-level explanations in the form of feature importance scores. Our approach includes a partitioning phase that identifies meso groups, followed by the training of interpretable models within each group. We evaluate GROUX on a collection of tabular datasets, reporting both the accuracy and complexity of the resulting meso models, and compare it against other meso-level explainability algorithms. Additionally, we analyze the algorithm’s sensitivity to its hyperparameters to better understand its behavior and robustness.
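
A minimal sketch of the meso-level recipe under stated assumptions: k-means stands in for GROUX's partitioning phase, and per-group logistic-regression coefficients stand in for its feature-importance scores. This is an illustration of the two-phase idea, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Partition the data into meso groups, then fit one small interpretable
# model per group and read its coefficients as feature-importance scores.
X, y = make_classification(n_samples=600, n_features=5, random_state=0)
# In a real meso-level pipeline the targets would be the black box's
# predictions; the true labels stand in here.

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for g in range(3):
    mask = groups == g
    if len(np.unique(y[mask])) < 2:  # skip degenerate one-class groups
        continue
    surrogate = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    print(f"group {g}: importances {np.round(surrogate.coef_[0], 2)}")
```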

AIJ 2025 Journal Article

Human-AI coevolution

  • Dino Pedreschi
  • Luca Pappalardo
  • Emanuele Ferragina
  • Ricardo Baeza-Yates
  • Albert-László Barabási
  • Frank Dignum
  • Virginia Dignum
  • Tina Eliassi-Rad

IJCAI 2025 Conference Paper

Human-AI Coevolution (Abstract Reprint)

  • Dino Pedreschi
  • Luca Pappalardo
  • Emanuele Ferragina
  • Ricardo Baeza-Yates
  • Albert-László Barabási
  • Frank Dignum
  • Virginia Dignum
  • Tina Eliassi-Rad

Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users' choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often “unintended” systemic outcomes. This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: (i) outline the pros and cons of existing methodologies and highlight shortcomings and potential ways for capturing feedback loop mechanisms; (ii) propose a reflection at the intersection between complexity science, AI and society; (iii) provide real-world examples for different human-AI ecosystems; and (iv) illustrate challenges to the creation of such a field of study, conceptualising them at increasing levels of abstraction, i.e., scientific, legal and socio-political.
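
A toy simulation of the feedback loop the abstract describes, with entirely hypothetical parameters: a "recommender" refit on the accumulated choice log steers users toward whatever is already popular, and a rich-get-richer dynamic emerges.

```python
import numpy as np

# Toy human-AI feedback loop: users pick among items partly at random and
# partly by following a recommender trained on the accumulated choice log.
rng = np.random.default_rng(0)
n_items, steps, follow_rate = 10, 2000, 0.7
counts = np.ones(n_items)  # smoothed choice log

for _ in range(steps):
    recommended = int(np.argmax(counts))      # "model" fit on past choices
    if rng.random() < follow_rate:
        choice = recommended                  # user follows the recommendation
    else:
        choice = int(rng.integers(n_items))   # user explores independently
    counts[choice] += 1                       # choice feeds the next training round

print(np.round(counts / counts.sum(), 2))     # one item ends up dominating
```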

IJCAI 2025 Conference Paper

Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems

  • Benedetta Muscato
  • Lucia Passaro
  • Gizem Gezici
  • Fosca Giannotti

In the realm of Natural Language Processing (NLP), common approaches for handling human disagreement consist of aggregating annotators' viewpoints to establish a single ground truth. However, prior studies show that disregarding individual opinions can lead to the side-effect of under-representing minority perspectives, especially in subjective tasks, where annotators may systematically disagree because of their preferences. Recognizing that labels reflect the diverse backgrounds, life experiences, and values of individuals, this study proposes a new multi-perspective approach using soft labels to encourage the development of the next generation of perspective-aware models—more inclusive and pluralistic. We conduct an extensive analysis across diverse subjective text classification tasks including hate speech, irony, abusive language, and stance detection, to highlight the importance of capturing human disagreements, often overlooked by traditional aggregation methods. Results show that the multi-perspective approach not only better approximates human label distributions, as measured by Jensen-Shannon Divergence (JSD), but also achieves superior classification performance (higher F1-scores), outperforming traditional approaches. However, our approach exhibits lower confidence in tasks like irony and stance detection, likely due to the inherent subjectivity present in the texts. Lastly, leveraging Explainable AI (XAI), we explore model uncertainty and uncover meaningful insights into model predictions. All implementation details are available at our github repo.
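
A small sketch of the two ingredients the abstract names: soft labels built from (hypothetical) annotator votes, and the Jensen-Shannon Divergence between the human and model label distributions, computed here via SciPy.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Turn raw annotator votes into a soft label that keeps the disagreement,
# then compare a model's predicted distribution to it with JSD.
votes = np.array([4, 1, 0])               # hypothetical: 4 "hate", 1 "not", 0 "unsure"
soft_label = votes / votes.sum()          # [0.8, 0.2, 0.0]

model_pred = np.array([0.7, 0.25, 0.05])  # model's predicted class distribution

# SciPy returns the JS *distance* (the square root of the divergence).
jsd = jensenshannon(soft_label, model_pred, base=2) ** 2
print(f"JSD = {jsd:.4f}")                 # 0 would mean a perfect match
```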

IS 2021 Journal Article

Learning Complex Couplings and Interactions

  • Can Wang
  • Fosca Giannotti
  • Longbing Cao

This special issue aims to encourage deep research into learning complex couplings and interactions, with a focus on the latest advancements in modeling them in big data, complex behaviors, and systems.

IS 2019 Journal Article

Factual and Counterfactual Explanations for Black Box Decision Making

  • Riccardo Guidotti
  • Anna Monreale
  • Fosca Giannotti
  • Dino Pedreschi
  • Salvatore Ruggieri
  • Franco Turini

The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a specific instance. The proposed method first learns an interpretable, local classifier on a synthetic neighborhood of the instance under investigation, generated by a genetic algorithm. Then, it derives from the interpretable classifier an explanation consisting of a decision rule, explaining the factual reasons for the decision, and a set of counterfactuals, suggesting the changes in the instance features that would lead to a different outcome. Experimental results show that the proposed method outperforms existing approaches in terms of the quality of the explanations and the accuracy in mimicking the black box.
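
A compressed sketch of the pipeline, with one substitution clearly flagged: plain Gaussian perturbation stands in for the paper's genetic-algorithm neighborhood generation, and a shallow decision tree serves as the interpretable local classifier. Everything below is illustrative, not the published method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Black box to explain (synthetic data for the sketch).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                           # instance under investigation
rng = np.random.default_rng(0)
neighborhood = x + rng.normal(scale=0.3, size=(1000, x.size))
labels = black_box.predict(neighborhood)           # black-box labels for neighbors

# Interpretable local surrogate; factual rules live in its decision paths.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighborhood, labels)
print(export_text(surrogate))

# Counterfactual: nearest neighbor whose black-box label differs from x's.
flipped = neighborhood[labels != black_box.predict(x.reshape(1, -1))[0]]
if len(flipped):
    cf = flipped[np.linalg.norm(flipped - x, axis=1).argmin()]
    print("counterfactual feature deltas:", np.round(cf - x, 2))
```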

AAAI 2019 Conference Paper

Meaningful Explanations of Black Box AI Decision Systems

  • Dino Pedreschi
  • Fosca Giannotti
  • Riccardo Guidotti
  • Anna Monreale
  • Salvatore Ruggieri
  • Franco Turini

Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only for lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines: (i) the language for expressing explanations in terms of logic rules, with statistical and causal interpretation; (ii) the inference of local explanations for revealing the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility. We argue that the local-first approach opens the door to a wide variety of alternative solutions along different dimensions: a variety of data sources (relational, text, images, etc.), a variety of learning problems (multi-label classification, regression, scoring, ranking), a variety of languages for expressing meaningful explanations, a variety of means to audit a black box.
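
A deliberately minimal sketch of the bottom-up generalization step (iii), assuming local explanations are already available as condition sets: rules with the same outcome are merged by intersecting their premises. The paper's algorithms additionally optimize for quality and comprehensibility; this only shows the direction of the aggregation.

```python
# Local rules as (condition set, outcome) pairs; conditions are hypothetical.
local_rules = [
    ({"age>40", "income<30k"}, "deny"),
    ({"age>40", "income<30k", "renter"}, "deny"),
    ({"income>80k"}, "grant"),
]

def generalize(rules):
    """Merge same-outcome rules by keeping only their shared premises."""
    merged = {}
    for conditions, outcome in rules:
        merged[outcome] = merged.get(outcome, conditions) & conditions
    return merged

print(generalize(local_rules))
# e.g. {'deny': {'age>40', 'income<30k'}, 'grant': {'income>80k'}}
```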

TIST 2019 Journal Article

PlayeRank

  • Luca Pappalardo
  • Paolo Cintia
  • Paolo Ferragina
  • Emanuele Massucco
  • Dino Pedreschi
  • Fosca Giannotti

The problem of evaluating the performance of soccer players is attracting the interest of many companies and the scientific community, thanks to the availability of massive data capturing all the events generated during a match (e.g., tackles, passes, shots, etc.). Unfortunately, there is no consolidated and widely accepted metric for measuring performance quality in all of its facets. In this article, we design and implement PlayeRank, a data-driven framework that offers a principled multi-dimensional and role-aware evaluation of the performance of soccer players. We build our framework by deploying a massive dataset of soccer-logs consisting of millions of match events pertaining to four seasons of 18 prominent soccer competitions. By comparing PlayeRank to known algorithms for performance evaluation in soccer, and by exploiting a dataset of players’ evaluations made by professional soccer scouts, we show that PlayeRank significantly outperforms the competitors. We also explore the ratings produced by PlayeRank and discover interesting patterns about the nature of excellent performances and what distinguishes the top players from the others. Finally, we explore some applications of PlayeRank (i.e., searching players and player versatility), showing its flexibility and efficiency, which make it worth using in the design of a scalable platform for soccer analytics.
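
A rough sketch of one ingredient of such a rating, with entirely synthetic data: learn feature weights by regressing match outcomes on aggregated event counts, then score a player's match by the weighted sum. In a role-aware setting the weights would be fit separately per role; PlayeRank's actual pipeline is considerably richer.

```python
import numpy as np

# Synthetic event counts per match (rows) and feature (columns), plus
# noisy match outcomes driven by hidden "true" feature weights.
rng = np.random.default_rng(0)
n_matches, n_features = 200, 6
events = rng.poisson(5, size=(n_matches, n_features)).astype(float)
true_w = np.array([0.4, 0.3, -0.2, 0.1, 0.0, -0.1])
outcomes = events @ true_w + rng.normal(0, 1, n_matches)

# Least-squares fit recovers per-feature weights for this (single) role.
role_weights, *_ = np.linalg.lstsq(events, outcomes, rcond=None)

# Score a player's match as the weighted sum of their event counts.
player_match_events = rng.poisson(5, size=n_features).astype(float)
print(f"performance score: {player_match_events @ role_weights:.2f}")
```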

JELIA 2002 Conference Paper

LDL-Mine: Integrating Data Mining with Intelligent Query Answering

  • Fosca Giannotti
  • Giuseppe Manco 0001

Current applications of data mining techniques highlight the need for flexible knowledge discovery systems, capable of supporting the user in specifying and refining mining objectives, combining multiple strategies, and defining the quality of the extracted knowledge. A key issue is the definition of a Knowledge Discovery Support Environment, i.e., a query system capable of obtaining, maintaining, representing and using high-level knowledge in a unified framework. This comprises representation and manipulation of domain knowledge, extraction and manipulation of new knowledge, and their combination.

CSL 1999 Conference Paper

On the Effective Semantics of Nondeterministic, Nonmonotonic, Temporal Logic Databases

  • Fosca Giannotti
  • Giuseppe Manco 0001
  • Mirco Nanni
  • Dino Pedreschi

We consider in this paper an extension of Datalog with mechanisms for temporal, nonmonotonic and nondeterministic reasoning, which we refer to as Datalog++. We study its semantics, and show how iterated fixpoint and stable model semantics can be combined for the purpose of clarifying the interpretation of Datalog++ programs, and supporting their efficient execution. On this basis, the design of appropriate optimization techniques for Datalog++ is also discussed.
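
For orientation, a naive bottom-up fixpoint for plain Datalog (transitive closure), illustrating only the iterated-fixpoint core the abstract builds on; Datalog++'s temporal, nonmonotonic and nondeterministic constructs are not modeled here, and the relations are hypothetical.

```python
# Facts of the extensional database.
facts = {("edge", "a", "b"), ("edge", "b", "c")}
# Rules: path(X, Y) :- edge(X, Y).    path(X, Z) :- path(X, Y), edge(Y, Z).

def step(db):
    """One application of the immediate-consequence operator."""
    new = set(db)
    for rel, x, y in db:
        if rel == "edge":
            new.add(("path", x, y))
    for rel1, x, y in db:
        for rel2, y2, z in db:
            if rel1 == "path" and rel2 == "edge" and y == y2:
                new.add(("path", x, z))
    return new

db = facts
while True:          # iterate until no new facts are derived
    nxt = step(db)
    if nxt == db:    # fixpoint reached
        break
    db = nxt

print(sorted(t for t in db if t[0] == "path"))
# -> [('path', 'a', 'b'), ('path', 'a', 'c'), ('path', 'b', 'c')]
```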