Towards explainable, compliant and adaptive human-automation interaction
2021 (English). In: CEUR Workshop Proceedings, CEUR-WS, 2021. Conference paper, Published paper (Refereed)
Abstract [en]
AI-based systems use trained machine learning models to make important decisions in critical contexts. The EU guidelines for trustworthy AI emphasise respect for human autonomy, prevention of harm, fairness, and explicability. Many successful machine learning methods, however, deliver opaque models, where the reasons for decisions remain unclear to the end user. Hence, accountability and trust are difficult to ascertain. In this position paper, we focus on AI systems that are expected to interact with humans, and we propose our visionary architecture, called ECA-HAI (Explainable, Compliant and Adaptive Human-Automation Interaction)-RefArch. ECA-HAI-RefArch allows for building intelligent systems where humans and AIs form teams, able to learn from data but also to learn from each other by playing “serious games”, for continuous improvement of the overall system. Finally, conclusions are drawn.
Place, publisher, year, edition, pages
CEUR-WS, 2021.
Series
CEUR Workshop Proceedings, ISSN 1613-0073
Keywords [en]
Compliant AI, Explainable AI, Programme synthesis, Serious games, Intelligent systems, Machine learning, AI systems, Building intelligent systems, Continuous improvements, EU guidelines, Human-automation interactions, Machine learning methods, Machine learning models, Position papers, Man machine systems
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:mdh:diva-58812
Scopus ID: 2-s2.0-85109217296
OAI: oai:DiVA.org:mdh-58812
DiVA, id: diva2:1669556
Conference
3rd EXplainable AI in Law Workshop, XAILA 2020, 9 December 2020
Available from: 2022-06-14. Created: 2022-06-14. Last updated: 2024-12-20. Bibliographically approved.