AI Chatbot


Chat4XAI Explains Deep Reinforcement Learning Decisions

The use of artificial intelligence is often hampered by the lack of transparency of the underlying models. In response, Prof. Dr. Andreas Metzger, Jone Bartel, and Jan Laufer have developed an AI chatbot that aims to make such models' decisions explainable.

Deep reinforcement learning (deep RL) is a promising technique that enables service-oriented systems to adapt autonomously to open and dynamic environments. However, such adaptation decisions are made by deep neural networks that are essentially opaque to human understanding, resembling a black box. This poses problems for developers of service-oriented systems as well as for their providers and users.

Chat4XAI is designed to help overcome this difficulty. Like ChatGPT, this AI chatbot is based on a Large Language Model (LLM) and thus supports interaction in natural language. The objective is to enable developers and users of service-oriented systems to better understand a system's decisions through dialogue with Chat4XAI. This should not only support the development of such systems but also contribute to greater trust and legal certainty.

On 30 November 2023, Jone Bartel will present this work at the 21st International Conference on Service-Oriented Computing (ICSOC) in Rome.

Conference Paper

Metzger, A., Bartel, J., Laufer, J. (2023). An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-Oriented Systems. In: Monti, F., Rinderle-Ma, S., Ruiz Cortés, A., Zheng, Z., Mecella, M. (eds) Service-Oriented Computing. ICSOC 2023. Lecture Notes in Computer Science, vol 14419. Springer, Cham. https://doi.org/10.1007/978-3-031-48421-6_22

Abstract

Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. Deep RL was successfully applied to problems such as dynamic service composition, job scheduling, and service adaptation. While Deep RL offers many benefits, understanding the decision-making of Deep RL is challenging because the action-selection policy that underlies its decision-making essentially appears as a black box. Yet, understanding the decision-making of Deep RL is key to help service developers perform debugging, support service providers to comply with relevant legal frameworks, and facilitate service users to build trust. We introduce Chat4XAI to provide natural-language explanations of the decision-making of Deep RL. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance, and more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations using classical software-based dialogue systems, using an AI chatbot eliminates the need for eliciting and defining potential questions and answers up-front. We prototypically realize Chat4XAI using OpenAI’s ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar.
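
The abstract notes that Chat4XAI was prototypically realized using OpenAI's ChatGPT API together with dedicated prompt engineering. The following Python sketch illustrates the general idea only; the system prompt, the state and action fields, the model choice, and the explain_decision function are illustrative assumptions, not the authors' actual implementation.

    # Minimal sketch (not the paper's implementation): asking an LLM via
    # OpenAI's Chat Completions API to explain a Deep RL adaptation decision.
    # The prompt wording, state/action fields, and model name are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are an explainability assistant for a service-oriented system. "
        "Given the system state observed by a Deep RL agent and the "
        "adaptation action it selected, explain the decision in plain language."
    )

    def explain_decision(state: dict, action: str, question: str) -> str:
        """Ask the LLM why the RL agent chose `action` in `state`."""
        response = client.chat.completions.create(
            model="gpt-4",   # any chat-capable model would do
            temperature=0,   # low temperature for more stable explanations
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": (
                    f"Observed state: {state}\n"
                    f"Selected adaptation action: {action}\n"
                    f"Question: {question}"
                )},
            ],
        )
        return response.choices[0].message.content

    # Example dialogue turn for a hypothetical adaptive-service scenario:
    print(explain_decision(
        state={"avg_response_time_ms": 950, "active_instances": 2},
        action="scale out to 3 instances",
        question="Why was this adaptation chosen?",
    ))

Pinning the sampling temperature to zero is one plausible way to keep explanations reproducible across repeated queries, which relates to the stability of explanations that the abstract reports evaluating.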

Keywords

chatbot, explainable AI, reinforcement learning, service engineering, service adaptation

Contact

Software Systems Engineering (SSE)

apl. Prof. Dr.-Ing. Andreas Metzger
+49 201 18-34650

Software Systems Engineering (SSE)

Jone Bartel
+49 201 18-37042

Software Systems Engineering (SSE)

Jan Laufer
+49 201 18-37330