Self-Adapting Machine Learning-based Systems via a Probabilistic Model Checking Framework
Author

Abstract
This paper focuses on the problem of optimizing the system utility of Machine Learning (ML)-based systems in the presence of ML mispredictions. This is achieved via the use of self-adaptive systems and through the execution of adaptation tactics, such as model retraining, that operate at the level of individual ML components. To address this problem, we propose a probabilistic modeling framework that reasons about the cost/benefit trade-offs associated with adapting ML components. The key idea of the proposed approach is to decouple the problems of estimating (i) the expected performance improvement after adaptation and (ii) the impact of ML adaptation on overall system utility. We apply the proposed framework to engineer a self-adaptive ML-based fraud-detection system, which we evaluate using a publicly available, real fraud-detection dataset. We initially consider a scenario in which information on the model's quality is immediately available. Next, we relax this assumption by integrating (and extending) state-of-the-art techniques for estimating the model's quality in the proposed framework. We show that, by predicting the system utility stemming from retraining an ML component, the probabilistic model checker can generate adaptation strategies that are significantly closer to the optimal ones than baselines such as periodic or reactive retraining.
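The cost/benefit reasoning the abstract describes can be illustrated with a minimal sketch. The Python snippet below is not taken from the paper; the `RetrainEstimate` fields, the `should_retrain` function, and all numbers are hypothetical stand-ins for the two quantities the framework decouples, namely the expected model-level improvement after adaptation and its impact on system-level utility, weighed against the cost of the retraining tactic.

```python
from dataclasses import dataclass


@dataclass
class RetrainEstimate:
    """Hypothetical inputs to the cost/benefit analysis of retraining."""
    p_improve: float            # estimated probability that retraining improves the model
    utility_if_improved: float  # expected system utility if retraining helps
    utility_if_not: float       # expected system utility if it does not
    utility_no_adapt: float     # expected system utility without adapting
    retrain_cost: float         # cost of executing the retrain tactic


def should_retrain(e: RetrainEstimate) -> bool:
    """Retrain only when the expected utility gain outweighs the tactic's cost.

    This separates (i) the estimated chance of a model-level improvement from
    (ii) that improvement's impact on overall system utility, mirroring the
    decoupling the abstract describes.
    """
    expected_utility = (
        e.p_improve * e.utility_if_improved
        + (1.0 - e.p_improve) * e.utility_if_not
    )
    return expected_utility - e.retrain_cost > e.utility_no_adapt


# Example: a likely improvement with a high system-level payoff justifies the cost.
# Expected utility = 0.7 * 100 + 0.3 * 60 = 88; 88 - 10 = 78 > 70, so retrain.
print(should_retrain(RetrainEstimate(0.7, 100.0, 60.0, 70.0, 10.0)))  # True
```

In the paper, a probabilistic model checker performs this kind of expected-utility comparison; the sketch above only makes the underlying decision rule concrete.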
Year of Publication
2024

Journal
ACM Transactions on Autonomous and Adaptive Systems

Date Published
March