ET4 Does Explainability make Sense?
Explainability is currently discussed as a solution to socio-technical challenges such as intelligent software making incomprehensible decisions that nevertheless affect humans’ lives, or big data enabling fast learning while becoming too complex for us to fully comprehend and judge its achievements. From explainability, more insight into the functioning, decisions, and usefulness of algorithms is expected. Yet, if an explanation is successful, it results in understanding. And vice versa: if there is understanding, one can (mostly) make it explicit by formulating an explanation. In my presentation, I will elaborate on this bidirectionality, pointing out that for a successful explanation, it is essential to know what contents can be explained to whom in what situations. Explainability, I will argue, only makes sense when the interactive process of explaining is taken into account.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An approach to evaluating interpretability of machine learning. arXiv:1806.00069v2 [cs.AI]
Miller, T. (2018). Explanation in artificial intelligence: Insights from the social sciences. arXiv:1706.07269v3 [cs.AI]
University of Paderborn

Vita
Katharina J. Rohlfing received her Master’s degree in linguistics, philosophy, and media studies from Paderborn University, Germany, in 1997. As a member of the interdisciplinary Graduate Program Task-Oriented Communication, she received her Ph.D. degree in linguistics from Bielefeld University in 2002.
In 2006, with her interdisciplinary project on the Symbiosis of Language and Action, she became a Dilthey Fellow (VolkswagenStiftung) and Head of the Emergentist Semantics Group within the Center of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University. Currently, she is professor of psycholinguistics at Paderborn University, where she works on social learning and scaffolding, with a strong interdisciplinary interest in theories and modelling of learning through interaction.