Grzegorz J. Nalepa, Jagiellonian University in Kraków, Poland
José Palma, Universidad de Murcia, Spain
Elin A. Topp, Lund University, Sweden
Martin Atzmueller, Osnabrück University, Germany
Explainability of AI has been a widely discussed topic in the recent decade. It is clearly an area where the discussion of technology and engineering strongly overlaps with human-oriented and social topics. After all, explanations in AI systems are provided for humans, and as such should be useful for them on many levels. Thus, a complete perspective on the design, development, and evaluation of XAI methods with and for humans should be considered. It is clearly a strongly interdisciplinary field where practitioners from several fields beyond AI engineering meet and collaborate. The objective of this workshop is to provide ample space for discussion and exchange of novel ideas.
The list of topics includes, but is not limited to:
What matters most about the workshop is meeting people and inspiring vivid discussions. We encourage authors to submit either short papers introducing their original ideas (5 pages with references in LLNCS format) or full papers presenting research results (8-10 pages). See http://www.icinac.org/iwinac.org/iwinac2026/paper-submission.html for details.
(tentative, to be confirmed)