
Explanations for AI-based System Adaptations

Online reinforcement learning is a promising approach to realizing self-adaptive systems. However, the complex learned models on which adaptations are based are practically a black box. The SSE team investigated which solutions from Explainable AI (XAI) research may be suitable for making adaptations comprehensible.

Professor Metzger presented the results of this research at the 3rd IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS 2022). The paper was nominated for the Best Paper award; Andreas Metzger's slides are available at Slideshare.

F. Feit, A. Metzger, K. Pohl, "Explaining Online Reinforcement Learning Decisions of Self-Adaptive Systems", in 3rd Int'l Conference on Autonomic Computing and Self-Organizing Systems (ACSOS 2022), 2022


Abstract

Design time uncertainty poses an important challenge when developing a self-adaptive system. As an example, defining how the system should adapt when facing a new environment state requires understanding the precise effect of an adaptation, which may not be known at design time. Online reinforcement learning, i.e., employing reinforcement learning (RL) at runtime, is an emerging approach to realizing self-adaptive systems in the presence of design time uncertainty. By using online RL, the self-adaptive system can learn from actual operational data and leverage feedback only available at runtime. Recently, Deep RL is gaining interest. Deep RL represents learned knowledge as a neural network, whereby it can generalize over unseen inputs as well as handle continuous environment states and adaptation actions. A fundamental problem of Deep RL is that learned knowledge is not explicitly represented. For a human, it is practically impossible to relate the parametrization of the neural network to concrete RL decisions; Deep RL thus essentially appears as a black box. Yet, understanding the decisions made by Deep RL is key to (1) increasing trust and (2) facilitating debugging. Such debugging is especially relevant for self-adaptive systems, because the reward function, which quantifies the feedback to the RL algorithm, must be explicitly defined by developers, thus introducing a potential for human error. To explain Deep RL for self-adaptive systems, we enhance and combine two existing explainable RL techniques from the machine learning literature. The combined technique, XRL-DINE, overcomes the respective limitations of the individual techniques. We present a proof-of-concept implementation of XRL-DINE, as well as qualitative and quantitative results of applying XRL-DINE to a self-adaptive system exemplar.
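The abstract points out that the reward function, which quantifies runtime feedback to the RL algorithm, must be hand-written by developers and is thus a source of potential human error. A minimal sketch of what such a reward function might look like for a self-adaptive web service; all names, weights, and thresholds here are illustrative assumptions, not taken from the paper:

```python
def reward(avg_response_time_ms: float, servers_active: int,
           max_latency_ms: float = 200.0, cost_per_server: float = 0.1) -> float:
    """Quantify feedback to the RL algorithm after an adaptation action.

    Rewards meeting the latency goal while penalizing resource cost.
    A subtle sign or weighting mistake here silently misguides learning,
    which is why explaining concrete RL decisions aids debugging.
    """
    # +1 if the latency goal is met, -1 otherwise
    latency_score = 1.0 if avg_response_time_ms <= max_latency_ms else -1.0
    # Linear penalty for the number of servers kept active
    cost_penalty = cost_per_server * servers_active
    return latency_score - cost_penalty

# Meeting the goal with 3 servers yields a positive reward;
# missing it with 1 server yields a negative one.
print(reward(150.0, 3))
print(reward(400.0, 1))
```

Because the learned neural network internalizes trade-offs like this without ever exposing them explicitly, techniques such as XRL-DINE aim to surface which such factors actually drove an adaptation decision.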


Contact

Software Systems Engineering (SSE)

apl. Prof. Dr.-Ing. Andreas Metzger
+49 201 18-34650