Arrow Research search

Author name cluster

Seffi Cohen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers (4)

AAAI 2026 · Conference Paper

Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations

  • Yehonatan Elisha
  • Seffi Cohen
  • Oren Barkan
  • Noam Koenigstein

Saliency maps have become a cornerstone of visual explanation in deep learning, yet there remains no consensus on their intended purpose and their alignment with specific user queries. This fundamental ambiguity undermines both the evaluation and practical utility of explanation methods. In this paper, we introduce the Reference-Frame x Granularity (RFxG) taxonomy—a principled framework that addresses this ambiguity by conceptualizing saliency explanations along two essential axes: the reference-frame axis (distinguishing between pointwise "Why Husky?" and contrastive "Why Husky and not Shih-tzu?" explanations) and the granularity axis (ranging from fine-grained class-level to coarse-grained group-level interpretations, e.g., "Why Husky?" vs. "Why Dog?"). Through this lens, we identify critical limitations in existing evaluation metrics, which predominantly focus on pointwise faithfulness while neglecting contrastive reasoning and semantic granularity. To address these gaps, we propose four novel faithfulness metrics that systematically assess explanation quality across both RFxG dimensions. Our comprehensive evaluation framework spans ten state-of-the-art methods, four model architectures, and three datasets. By suggesting a shift from model-centric to user-intent-driven evaluation, our work provides both the conceptual foundation and practical tools necessary for developing explanations that are not only faithful to model behavior but also meaningfully aligned with human understanding.
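The reference-frame and granularity axes are easiest to see on a linear model, where the input gradient of a class logit is simply that class's weight row. A minimal sketch of the three query types (the class names and weights are invented for illustration; the paper's metrics apply to arbitrary networks):

```python
import numpy as np

# Toy linear classifier: logits = W @ x. For a linear model, the input
# gradient of a class logit is that class's weight row, which makes the
# pointwise/contrastive and class/group distinctions concrete.
W = np.array([[ 2.0, -1.0, 0.5],   # class 0: "Husky"
              [ 1.5,  0.5, 0.5],   # class 1: "Shih-tzu"
              [-2.0,  0.0, 1.0]])  # class 2: "Cat"

def pointwise_saliency(c):
    """'Why class c?': gradient of the class-c logit w.r.t. the input."""
    return W[c]

def contrastive_saliency(c, k):
    """'Why class c and not class k?': gradient of the logit margin."""
    return W[c] - W[k]

def group_saliency(classes):
    """Coarser granularity: 'Why this group?', averaged over its classes."""
    return W[list(classes)].mean(axis=0)

s_point = pointwise_saliency(0)       # "Why Husky?"
s_contr = contrastive_saliency(0, 1)  # "Why Husky and not Shih-tzu?"
s_group = group_saliency([0, 1])      # "Why Dog?" (Husky or Shih-tzu)
```

Note how the contrastive map down-weights features shared by both classes, which a pointwise map cannot do.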

JBHI 2025 · Journal Article

Temporal Integrative Machine Learning for Early Detection of Diabetic Retinopathy Using Fundus Imaging and Electronic Health Records

  • Shvat Messica
  • Seffi Cohen
  • Aviel Hadad
  • Michal Gordon
  • Or Katz
  • Dan Presil
  • Noa Dagan
  • Erez Tsumi

Diabetic Retinopathy (DR), a prevalent diabetes complication leading to blindness, often goes undetected until late stages because patients seek help only once symptoms manifest and specialist availability is limited. To address these challenges, we present a novel temporal integrative machine learning system that harnesses both fundus images and electronic health records (EHR) for early and enhanced DR detection. Our system uniquely processes EHR data by focusing on temporal trends and long-term patient histories, creating thousands of temporal features that capture their evolving dynamics over time. This dual-model system includes a temporal tabular model that relies solely on historical medical records and a deep learning multi-modal model that combines these records with fundus images. The models were trained and tested using real clinical data from 5,000 patients at Soroka Hospital in Israel, comprising 25,000 retinal images collected over 8 years and electronic health records spanning up to 20 years. Given the primarily unlabeled nature of the data, the training phase employed a pseudo-labeling technique. The models were evaluated and verified by a retina specialist, surpassing existing models with AUROC scores of 0.881 for the temporal-trend EHR model and 0.988 for the multi-modal imaging + EHR model. The integration of historical temporal medical data with imaging offers a more dynamic and comprehensive machine-learning system, enhancing DR detection and offering new insights into associated risk factors. This system not only aids physicians in obtaining a holistic view of a patient's health over time but also facilitates fast identification of individuals at high risk for DR.
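The pseudo-labeling step mentioned above follows a standard pattern: fit on the small labeled set, predict the unlabeled pool, and promote only high-confidence predictions to training labels. A toy sketch with a nearest-centroid classifier on synthetic 1-D data (the data, the margin-based confidence score, and the threshold are all hypothetical stand-ins, not the paper's system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mostly-unlabeled toy data: two 1-D clusters, four labeled points.
x_lab = np.array([0.0, 0.2, 4.0, 4.2])
y_lab = np.array([0, 0, 1, 1])
x_unl = np.concatenate([rng.normal(0.1, 0.3, 50), rng.normal(4.1, 0.3, 50)])

def fit_centroids(x, y):
    return np.array([x[y == c].mean() for c in (0, 1)])

def predict_with_conf(cent, x):
    d = np.abs(x[:, None] - cent[None, :])          # distance to each centroid
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])  # label, margin

# One pseudo-labeling round: predict the unlabeled pool with the current
# model, keep only confident predictions, and refit on the enlarged set.
cent = fit_centroids(x_lab, y_lab)
pred, conf = predict_with_conf(cent, x_unl)
keep = conf > 1.0                                   # hypothetical threshold
x_aug = np.concatenate([x_lab, x_unl[keep]])
y_aug = np.concatenate([y_lab, pred[keep]])
cent = fit_centroids(x_aug, y_aug)                  # labeled + pseudo-labeled
```

In practice this loop is repeated, often with a tightening confidence threshold, until no new confident pseudo-labels appear.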

ECAI 2024 · Conference Paper

FairUS - UpSampling Optimized Method for Boosting Fairness

  • Nurit Cohen-Inger
  • Guy Rozenblatt
  • Seffi Cohen
  • Lior Rokach
  • Bracha Shapira

The increasing application of machine learning (ML) in critical areas such as healthcare and finance highlights the importance of fairness in ML models, challenged by biases in training data that can lead to discrimination. We introduce ‘FairUS’, a novel pre-processing method for reducing bias in ML models utilizing the Conditional Tabular Generative Adversarial Network (CTGAN) to synthesize upsampled data. Unlike traditional approaches that focus solely on balancing subgroup sample sizes, FairUS strategically optimizes the quantity of synthesized data. This optimization aims to achieve an ideal balance between enhancing fairness and maintaining the overall performance of the model. Extensive evaluations of our method over several canonical datasets show that the proposed method enhances fairness by 2.7 times more than the related work and 4 times more than the baseline without mitigation, while preserving the performance of the ML model. Moreover, less than a third of the amount of synthetic data was needed on average. Uniquely, the proposed method enables decision-makers to choose the working point between improved fairness and model’s performance according to their preferences.
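The selectable working point between fairness and performance can be sketched as a user-weighted cost over candidate synthetic-sample quantities. The sweep results below are invented placeholders, not FairUS outputs; in practice each row would come from retraining and re-evaluating the model with that much synthetic data:

```python
# Hypothetical sweep: (n_synthetic, fairness_gap, accuracy) per candidate
# upsampling quantity; smaller fairness_gap is fairer.
candidates = [
    (0,    0.30, 0.90),   # no-mitigation baseline
    (500,  0.18, 0.89),
    (1000, 0.10, 0.87),
    (2000, 0.08, 0.84),
]

def working_point(candidates, lam):
    """Pick the synthetic-data quantity minimizing a weighted cost.
    lam in [0, 1] encodes the decision-maker's preference: higher lam
    favors fairness, lower lam favors predictive performance."""
    return min(candidates, key=lambda t: lam * t[1] + (1 - lam) * (1 - t[2]))

fairness_first = working_point(candidates, lam=0.9)  # picks heavy upsampling
accuracy_first = working_point(candidates, lam=0.1)  # picks light upsampling
```

Sweeping `lam` traces out the fairness/performance frontier from which the decision-maker chooses.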

AAAI 2024 · Conference Paper

TTTS: Tree Test Time Simulation for Enhancing Decision Tree Robustness against Adversarial Examples

  • Seffi Cohen
  • Ofir Arbili
  • Yisroel Mirsky
  • Lior Rokach

Decision trees are widely used for addressing learning tasks involving tabular data. Yet, they are susceptible to adversarial attacks. In this paper, we present Tree Test Time Simulation (TTTS), a novel inference-time methodology that incorporates Monte Carlo simulations into decision trees to enhance their robustness. TTTS introduces a probabilistic modification to the decision path, without altering the underlying tree structure. Our comprehensive empirical analysis of 50 datasets yields promising results. In the absence of attacks, TTTS improved model performance from an AUC of 0.714 to 0.773. Under the challenging conditions of white-box attacks, TTTS demonstrated its robustness by boosting performance from an AUC of 0.337 to 0.680. Even when subjected to black-box attacks, TTTS maintains high accuracy and enhances the model's performance from an AUC of 0.628 to 0.719. Compared to defenses such as Feature Squeezing, TTTS proves to be much more effective. We also found that TTTS exhibits similar robustness in decision forest settings across different attacks.
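The inference-time mechanism can be sketched directly: follow the tree's learned splits, but flip each branching decision with a small probability and average the leaf outputs over many simulated passes, leaving the tree itself untouched. A toy implementation on a hand-built tree (the tree, flip probability, and simulation count are illustrative choices, not the paper's tuned values):

```python
import random

# Tiny hand-built tree: internal nodes are (feature, threshold, left, right);
# leaves are class-1 probabilities.
TREE = (0, 0.5,
        (1, 0.3, 0.1, 0.4),   # left subtree
        (1, 0.7, 0.6, 0.9))   # right subtree

def predict_ttts(tree, x, flip_p=0.1, n_sim=1000, seed=0):
    """Monte Carlo inference: follow the learned split, but with probability
    flip_p take the opposite branch. The tree structure is never modified."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        node = tree
        while isinstance(node, tuple):
            feat, thresh, left, right = node
            go_left = x[feat] <= thresh
            if rng.random() < flip_p:      # probabilistic path perturbation
                go_left = not go_left
            node = left if go_left else right
        total += node                      # leaf probability for this pass
    return total / n_sim

p_det = predict_ttts(TREE, [0.2, 0.2], flip_p=0.0, n_sim=1)  # plain tree path
p_mc  = predict_ttts(TREE, [0.2, 0.2], flip_p=0.1)           # TTTS-style average
```

Averaging over perturbed paths smooths the tree's piecewise-constant decision surface near split thresholds, which is what blunts small adversarial input shifts.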