Arrow Research search

Author name cluster

Pascal Hitzler

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

20 papers
2 author rows

Possible papers

20

NAI Journal 2026 Journal Article

Toward a Neurosymbolic Understanding of Hidden Neuron Activations

  • Abhilekha Dalal
  • Rushrukh Rayan
  • Adrita Barua
  • Samatha Ereshi Akkamahadevi
  • Avishek Das
  • Cara Widmer
  • Eugene Y Vasserman
  • Md Kamruzzaman Sarker

With the widespread adoption of deep learning techniques, the need for explainability and trustworthiness is increasingly critical, especially in safety-sensitive applications and for improved debugging, given the black-box nature of these models. The explainable AI (XAI) literature offers various helpful techniques; however, many approaches use a secondary deep learning-based model to explain the primary model’s decisions or require domain expertise to interpret the explanations. A relatively new approach involves explaining models using high-level, human-understandable concepts. While these methods have proven effective, an intriguing area of exploration lies in using a white-box technique to explain the probing model. We present a novel, model-agnostic, post hoc XAI method that provides meaningful interpretations for hidden neuron activations. Our approach leverages a Wikipedia-derived concept hierarchy, encompassing approximately 2 million classes as background knowledge, and uses deductive reasoning-based concept induction to generate explanations. Our method demonstrates competitive performance across various evaluation metrics, including statistical evaluation, concept activation analysis, and benchmarking against contemporary methods. Additionally, a specialized study with large language models (LLMs) highlights how LLMs can serve as explainers in a manner similar to our method, showing comparable performance with some trade-offs. Furthermore, we have developed a tool called ConceptLens, enabling users to test custom images and obtain explanations for model decisions. Finally, we introduce an entirely reproducible, end-to-end system that simplifies the process of replicating our system and results.

NAI Journal 2025 Journal Article

Deep deductive reasoning is a hard deep learning problem

  • Pascal Hitzler
  • Rushrukh Rayan
  • Joseph Zalewski
  • Sanaz Saki Norouzi
  • Aaron Eberhart
  • Eugene Y Vasserman

Deep deductive reasoning refers to training, and then executing, deep learning systems to perform deductive reasoning in the sense of formal, mathematical logic. We discuss why this is an interesting and relevant problem to study, and explore how hard it is as a deep learning problem. In particular, we present some of the progress made on this topic in recent years, examine some of the theoretical limitations that can be assessed from the existing literature, and discuss negative results we have obtained regarding improving on the state of the art.

NeSy Conference 2025 Conference Paper

Description Logic Concept Learning using Large Language Models

  • Adrita Barua
  • Pascal Hitzler

Recent advances in Large Language Models (LLMs) have drawn interest in their capacity for logical reasoning, an area traditionally dominated by symbolic systems that rely on complete, manually curated knowledge bases represented in formal languages. This paper introduces a framework that leverages pretrained LLMs to generate Description Logic (DL) class expressions from instance-level examples and background knowledge, translated to natural language. The baseline is Concept Induction, a symbolic learning approach that is mostly based on formal logical reasoning over a DL theory. Drawing inspiration from the DL-Learner architecture, our approach replaces traditional symbolic methods with LLM-based models to generate DL class expressions from instance-level data. We evaluate our approach using three benchmark ontologies across two LLMs: gpt-4o and o3-mini. We use a symbolic reasoner, Pellet, to verify the LLM-generated results and incorporate the reasoner’s feedback into our pipeline to ensure logical consistency, thereby generating a hybrid neurosymbolic system. By introducing controlled variations to the background knowledge, we assess the models’ reliance on commonsense versus formal reasoning. Results show that o3-mini achieves near-perfect accuracy across settings, albeit with longer runtime. These findings demonstrate that LLMs have the potential to serve as scalable and flexible DL learners when coupled in a hybrid neurosymbolic setting, offering a promising alternative to symbolic approaches—particularly in contexts where high-quality ontologies are incomplete or unavailable.

NeSy Conference 2025 Conference Paper

Towards Explainable Depression Detection: A Neurosymbolic Approach to Uncover Social Media Signals with Generative AI

  • Mohammad Saeid Mahdavinejad
  • Peyman Adibi
  • Amirhassan Monajemi
  • Pascal Hitzler

Depression remains a pervasive mental health disorder that demands prompt diagnosis and intervention. Although social media data presents a promising avenue for early detection, traditional deep neural models are frequently critiqued for their lack of interpretability and susceptibility to bias. We introduce ProtoDep—a neurosymbolic framework that integrates clinically grounded categorizations (e.g., PHQ-9 symptoms) with large language model–assisted prototype learning. Unlike conventional black-box models, ProtoDep aligns individual tweets with symptom-level prototypes, offering interpretable explanations at three levels: (i) symptom-level insights that map user posts to recognized depressive patterns, (ii) case-based reasoning that compares users to representative prototype profiles, and (iii) transparent concept-level decisions, wherein classification at inference time is driven by the distances between the user profile and prototype user and symptom clusters, yielding clear, quantifiable explanations. By integrating symbolic mental health constructs with neural embeddings, ProtoDep achieves a mean F1-score of 94% across five benchmark datasets and establishes a foundation for interpretable depression screening pipelines with potential applicability in clinical settings.

NeSy Conference 2024 Conference Paper

Commonsense Ontology Micropatterns

  • Andrew Eells
  • Brandon Dave
  • Pascal Hitzler
  • Cogan Shimizu

The previously introduced Modular Ontology Modeling methodology (MOMo) attempts to mimic the human analogical process by using modular patterns to assemble more complex concepts. To support this, MOMo organizes ontology design patterns (ODPs) into design libraries, which are programmatically queryable. However, a major bottleneck to large-scale deployment of MOMo is the (to-date) limited availability of ready-to-use ODPs. At the same time, Large Language Models (LLMs) have quickly become a source of common knowledge and are, in some cases, replacing search engines for question answering. In this paper, we thus present a collection of 104 ODPs representing frequently occurring nouns, curated from the common-sense knowledge available in LLMs, organized into a fully annotated modular ontology design library ready for use with MOMo.

NeSy Conference 2024 Conference Paper

Concept Induction Using LLMs: A User Experiment for Assessment

  • Adrita Barua
  • Cara Leigh Widmer
  • Pascal Hitzler

Explainable Artificial Intelligence (XAI) poses a significant challenge in providing transparent and understandable insights into complex AI models. Traditional post-hoc algorithms, while useful, often struggle to deliver interpretable explanations. Concept-based models offer a promising avenue by incorporating explicit representations of concepts to enhance interpretability. However, existing research on automatic concept discovery methods is often limited by lower-level concepts, costly human annotation requirements, and a restricted domain of background knowledge. In this study, we explore the potential of a Large Language Model (LLM), specifically GPT-4, by leveraging its domain knowledge and common-sense capability to generate high-level concepts that are meaningful as explanations for humans, for a specific setting of image classification. We use minimal textual object information available in the data via prompting to facilitate this process. To evaluate the output, we compare the concepts generated by the LLM with two other methods: concepts generated by humans and the ECII heuristic concept induction system. Since there is no established metric to determine the human understandability of concepts, we conducted a human study to assess the effectiveness of the LLM-generated concepts. Our findings indicate that while human-generated explanations remain superior, concepts derived from GPT-4 are more comprehensible to humans compared to those generated by ECII.

NeSy Conference 2024 Conference Paper

Error-Margin Analysis for Hidden Neuron Activation Labels

  • Abhilekha Dalal
  • Rushrukh Rayan
  • Pascal Hitzler

Understanding how high-level concepts are represented within artificial neural networks is a fundamental challenge in the field of artificial intelligence. While the existing literature in explainable AI emphasizes the importance of labeling neurons with concepts to understand their functioning, it mostly focuses on identifying what stimulus activates a neuron in most cases; this corresponds to the notion of recall in information retrieval. We argue that this is only the first part of a two-part job; it is imperative to also investigate neuron responses to other stimuli, i.e., their precision. We call this the neuron label’s error margin.

NeSy Conference 2024 Conference Paper

On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis

  • Abhilekha Dalal
  • Rushrukh Rayan
  • Adrita Barua
  • Eugene Y. Vasserman
  • Md. Kamruzzaman Sarker
  • Pascal Hitzler

We introduce a novel model-agnostic post-hoc Explainable AI method that provides meaningful interpretations for hidden neuron activations in a Convolutional Neural Network. Our approach uses a Wikipedia-derived concept hierarchy with approximately 2 million classes as background knowledge, and deductive reasoning-based Concept Induction for explanation generation. Additionally, we explore and compare the capabilities of off-the-shelf pre-trained multimodal-based explainable methods. Our evaluation shows that our neurosymbolic method holds a competitive edge in both quantitative and qualitative aspects.

AAAI Conference 2019 Conference Paper

Efficient Concept Induction for Description Logics

  • Md Kamruzzaman Sarker
  • Pascal Hitzler

Concept Induction refers to the problem of creating complex Description Logic class descriptions (i.e., TBox axioms) from instance examples (i.e., ABox data). In this paper we look particularly at the case where both a set of positive and a set of negative instances are given, and complex class expressions are sought under which the positive but not the negative examples fall. Concept induction has found applications in ontology engineering, but existing algorithms have fundamental performance issues in some scenarios, mainly because a high number of invocations of an external Description Logic reasoner is usually required. In this paper we present a new algorithm for this problem which drastically reduces the number of reasoner invocations needed. While this comes at the expense of a more limited traversal of the search space, we show that our approach improves execution times by up to several orders of magnitude, while output correctness, measured as the degree of correct coverage of the input instances, remains reasonably high in many cases. Our approach thus should provide a strong alternative to existing systems, in particular in settings where other systems are prohibitively slow.

NeSy Conference 2017 Conference Paper

Explaining Trained Neural Networks with Semantic Web Technologies: First Steps

  • Md. Kamruzzaman Sarker
  • Ning Xie 0009
  • Derek Doran
  • Michael L. Raymer
  • Pascal Hitzler

The ever increasing prevalence of publicly available structured data on the World Wide Web enables new applications in a variety of domains. In this paper, we provide a conceptual approach that leverages such data in order to explain the input-output behavior of trained artificial neural networks. We apply existing Semantic Web technologies in order to provide an experimental proof of concept.

NeSy Conference 2017 Conference Paper

Propositional Rule Extraction from Neural Networks under Background Knowledge

  • Maryam Labaf
  • Pascal Hitzler
  • Anthony B. Evans

It is well-known that the input-output behaviour of a neural network can be recast in terms of a set of propositional rules, and under certain weak preconditions this is also always possible with positive (or definite) rules. Furthermore, in this case there is in fact a unique minimal (technically, reduced) set of such rules which perfectly captures the input-output mapping. In this paper, we investigate to what extent these results and corresponding rule extraction algorithms can be lifted to take additional background knowledge into account. It turns out that uniqueness of the solution can then no longer be guaranteed. However, the background knowledge often makes it possible to extract simpler, and thus more easily understandable, rulesets which still perfectly capture the input-output mapping.

ECAI Conference 2012 Conference Paper

Reasoning with Fuzzy-EL+ Ontologies Using MapReduce

  • Zhangquan Zhou
  • Guilin Qi
  • Chang Liu 0021
  • Pascal Hitzler
  • Raghava Mutharaju

Fuzzy extension of Description Logics (DLs) allows the formal representation and handling of fuzzy knowledge. In this paper, we consider fuzzy-EL+, which is a fuzzy extension of EL+. We first present revised completion rules for fuzzy-EL+ that can be handled by MapReduce programs. We then propose an algorithm for scalable reasoning with fuzzy-EL+ ontologies based on MapReduce.

ECAI Conference 2012 Conference Paper

Reconciling OWL and Non-monotonic Rules for the Semantic Web

  • Matthias Knorr 0001
  • Pascal Hitzler
  • Frederick Maier

We propose a description logic extending SROIQ (the description logic underlying OWL 2 DL) and at the same time encompassing some of the most prominent monotonic and nonmonotonic rule languages, in particular Datalog extended with the answer set semantics. Our proposal could be considered a substantial contribution towards fulfilling the quest for a unifying logic for the Semantic Web. As a case in point, two non-monotonic extensions of description logics considered to be of distinct expressiveness until now are covered in our proposal. In contrast to earlier such proposals, our language has the "look and feel" of a description logic and avoids hybrid or first-order syntaxes.

ECAI Conference 2008 Conference Paper

A Coherent Well-founded Model for Hybrid MKNF Knowledge Bases

  • Matthias Knorr 0001
  • José Júlio Alferes
  • Pascal Hitzler

With the advent of the Semantic Web, an important question is how best to combine open-world ontology languages, like OWL, with closed-world rules paradigms. One of the most mature proposals for this combination is known as Hybrid MKNF knowledge bases [11], which is based on an adaptation of the stable model semantics to knowledge bases consisting of ontology axioms and rules. In this paper, we propose a well-founded semantics for such knowledge bases which promises to provide better efficiency of reasoning, which is compatible both with the OWL-based semantics and the traditional well-founded semantics for logic programs, and which surpasses previous proposals for such a well-founded semantics by avoiding some issues related to inconsistency handling.

JELIA Conference 2008 Conference Paper

Cheap Boolean Role Constructors for Description Logics

  • Sebastian Rudolph
  • Markus Krötzsch
  • Pascal Hitzler

We investigate the possibility of incorporating Boolean role constructors on simple roles into some of today’s most popular description logics, focussing on cases where those extensions do not increase complexity of reasoning. We show that the expressive DLs \(\mathcal{SHOIQ}\) and \(\mathcal{SROIQ}\), serving as the logical underpinning of OWL and the forthcoming OWL 2, can accommodate arbitrary Boolean expressions. The prominent OWL-fragment \(\mathcal{SHIQ}\) can be safely extended by safe role expressions, and the tractable fragments \(\mathcal{EL}^{++}\) and DLP retain tractability if extended by conjunction on roles, where in the case of DLP the restriction on role simplicity can even be discarded.

ECAI Conference 2008 Conference Paper

Description Logic Rules

  • Markus Krötzsch
  • Sebastian Rudolph
  • Pascal Hitzler

We introduce description logic (DL) rules as a new rule-based formalism for knowledge representation in DLs. As a fragment of the Semantic Web Rule Language SWRL, DL rules allow for a tight integration with DL knowledge bases. In contrast to SWRL, however, the combination of DL rules with expressive description logics remains decidable, and we show that the DL \(\mathcal{SROIQ}\) - the basis for the ongoing standardisation of OWL 2 - can completely internalise DL rules. On the other hand, DL rules capture many expressive features of \(\mathcal{SROIQ}\) that are not available in simpler DLs yet. While reasoning in \(\mathcal{SROIQ}\) is highly intractable, it turns out that DL rules can be introduced to various lightweight DLs without increasing their worst-case complexity. In particular, DL rules enable us to significantly extend the tractable DLs \(\mathcal{EL}^{++}\) and DLP.

IJCAI Conference 2007 Conference Paper

  • Sebastian Bader
  • Pascal Hitzler
  • Steffen Hölldobler
  • Andreas Witzel

We present a fully connectionist system for the learning of first-order logic programs and the generation of corresponding models: Given a program and a set of training examples, we embed the associated semantic operator into a feed-forward network and train the network using the examples. This results in the learning of first-order knowledge while damaged or noisy data is handled gracefully.

IJCAI Conference 2003 Conference Paper

A Resolution Theorem for Algebraic Domains

  • Pascal Hitzler

W. C. Rounds and G.-Q. Zhang have recently proposed to study a form of resolution on algebraic domains [Rounds and Zhang, 2001]. This framework allows reasoning with knowledge which is hierarchically structured and forms a (suitable) domain, more precisely, a coherent algebraic cpo as studied in domain theory. In this paper, we give conditions under which a resolution theorem, in a form underlying resolution-based logic programming systems, can be obtained. The investigations bear potential for engineering new knowledge representation and reasoning systems on a firm domain-theoretic background.