IEEE Intelligent Systems, 2025
Explicable Artificial Intelligence for Affective Computing
Abstract
Artificial intelligence (AI) is increasingly tasked with recognizing and responding to human emotions, making affective computing one of its most consequential frontiers. As AI spreads into finance, policymaking, and mental health, the opacity of deep learning models raises urgent challenges for trust, accountability, and ethics. This special issue addresses explicability not just as algorithmic transparency, but as a paradigm integrating cognitive science, the humanities, and ethical foresight with technical innovation. Guided by the “Seven Pillars for the Future of AI”— multidisciplinarity, task decomposition, parallel analogy, symbol grounding, similarity measure, intention awareness, and trustworthiness—it envisions affective AI as a partner in meaning-making rather than a mere inference engine. The six featured articles span topics from depression detection and sentiment analysis to hate speech moderation and interpretable driving behaviors, advancing affective AI that is accurate, interpretable, and aligned with human dignity.
Context
- Venue: IEEE Intelligent Systems
- Archive span: 2001-2026
- Indexed papers: 2921
- Paper id: 122273302541638015