SC9 Responsible Artificial Intelligence

Description

First day (2 sessions), Responsible Design of Intelligent Systems (module 1): understanding Responsible AI as part of complex socio-technical systems; methods for taking societal values and moral and ethical considerations into account, weighing the respective priorities of values held by different stakeholders in different multicultural contexts, explaining a system's reasoning, and guaranteeing transparency; concrete challenges and opportunities in different application areas
Societal challenges of AI: ethical, legal, political, and economic; including reflection on the practical and philosophical objections to ethical deliberation by machines
Introduction to Philosophical Ethics
Design for Values engineering method (values – norms – requirements)
Ethics in Practice: ethical issues (fairness, transparency, accountability, responsibility, privacy…) in healthcare, transportation, decision-making, military applications (or other concrete examples)
Evaluation and verification formalisms for RAI
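The values – norms – requirements decomposition used in the Design for Values method can be pictured as a small hierarchy: an abstract value is operationalized into norms, and each norm into concrete design requirements. A minimal illustrative sketch follows; the specific value, norms, and requirements shown are hypothetical examples, not course material.

```python
# Sketch of a Design for Values hierarchy (hypothetical example):
# an abstract value ("privacy") is operationalized into norms,
# and each norm into concrete, verifiable design requirements.

value_hierarchy = {
    "value": "privacy",
    "norms": [
        {
            "norm": "data minimization",
            "requirements": [
                "collect only the fields needed for the task",
                "delete raw logs after 30 days",
            ],
        },
        {
            "norm": "informed consent",
            "requirements": [
                "show a consent dialog before data collection",
                "allow users to withdraw consent at any time",
            ],
        },
    ],
}

def count_requirements(hierarchy):
    """Count the concrete requirements derived from a single value."""
    return sum(len(norm["requirements"]) for norm in hierarchy["norms"])

print(count_requirements(value_hierarchy))  # 4
```

The point of the structure is traceability: each low-level requirement can be justified by pointing back up the chain to the norm and value it serves.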

Second day (2 sessions), Ethical Machines (module 2): understanding, developing and evaluating ethical agency and reasoning abilities as part of the behaviour of artificial autonomous systems (such as artificial agents and robots); understanding and applying different computational options that ensure that ethical principles are observed 'by design'.
Ethics by Design: computational reasoning models for ethical deliberation
Explainable AI: mathematical principles and computational approaches

Objectives

The development and use of AI raises fundamental ethical issues for society, which are of vital importance to our future. There is already much debate concerning the impact of AI on labour, social interactions (including healthcare), privacy, fairness and security (including peace initiatives and warfare). The societal and ethical impact of AI encompasses many domains, for instance machine classification systems raise questions about privacy and bias, and autonomous vehicles raise questions about safety and responsibility.
These issues are of special relevance for the development of interactive applications, where humans interact with intelligent agents. In this case, agents need to be developed in ways that both support and facilitate interaction and ensure that human values and personal preferences are implemented. The ability to learn, i.e., to adapt to the preferences, values, abilities and behaviour of a user and to the social, legal and ethical context of the task at hand, largely influences the acceptance of interactive agent applications. In this course, we will focus on the grounding theories and technology for human-agent interaction and discuss the need for interdisciplinary approaches to solve problems related to human-agent interaction.
What does ethics mean with respect to artificial systems?
Can we program systems to be ethical?
What are the social, ethical and legal implications of autonomous reasoning?
What is our responsibility as researchers?

Literature

Special Issue on AI and Ethics: https://link.springer.com/journal/10676/20/1/page/1.
Dignum, V. "Responsible Autonomy". IJCAI 2017. https://www.ijcai.org/proceedings/2017/655
van de Poel, Ibo. "Design for Value Change". Ethics and Information Technology, 2018. ISSN: 1572-8439. https://link.springer.com/article/10.1007%2Fs10676-018-9461-9

Course location

Guenne

Course requirements

None

Instructor information

Instructor
Virginia Dignum

Affiliation

Umeå University

Vita

Virginia Dignum is Full Professor at Umeå University, where she chairs the Social and Ethical Artificial Intelligence group. She is also associated with the Faculty of Technology, Policy and Management at Delft University of Technology in the Netherlands. Her research focuses on value-sensitive design of intelligent systems and multi-agent organisations, in particular on the ethical and societal impact of AI. She is a Fellow of the European Artificial Intelligence Association (EURAI), a member of the European Commission High-Level Expert Group on Artificial Intelligence, and a member of the Executive Committee of the IEEE Initiative on Ethics of Autonomous Systems. She is a member of the scientific boards of the Delft Design for Values Institute, the AI4People European Global Forum on AI, the Responsible Robotics Foundation, the SIDNfonds, and ALLAI-NL, the Dutch AI Alliance. In 2006, she was awarded the prestigious Veni grant by the NWO (Dutch Organization for Scientific Research) for her work on agent-based organizational frameworks. She is a well-known speaker on the social and ethical impacts of Artificial Intelligence, and a member of the reviewing boards of all major journals and conferences in AI. She has also chaired many international conferences and workshops, including ECAI 2016, the European Conference on AI. She has published more than 180 peer-reviewed papers and edited several books, currently yielding an h-index of 30. From 2011 to 2017, she was vice-president of the Benelux AI Association (BNVKI).

Website

http://www.cs.umu.se/