“A Probabilistic Model Checking Approach to Self-Adapting Machine Learning Systems”, the newest paper within the scope of the AIDA project, was accepted and presented at the 3rd International Workshop on Automated and Verifiable Software System Development (ASYDE) 2021.
Written by Maria Casimiro, David Garlan, Javier Cámara, Luís Rodrigues and Paolo Romano, the paper introduces a formal framework for reasoning about whether to adapt the machine learning (ML) models of ML-based systems, considering the trade-off between the costs and benefits of adaptation. Such a framework can be particularly useful since ML-based systems typically operate in non-static environments, prone to unpredictable changes that can adversely impact the accuracy of the ML models, which are usually in the critical path of the system. ML mispredictions can thus affect other components in the system and ultimately impact overall system utility in non-trivial ways. The proposed framework can be employed to determine the gains achievable via ML adaptation and to find the boundary that renders adaptation worthwhile.
According to Maria Casimiro, “the paper provides insights into the potential of ML adaptation as a way to maintain system utility throughout a system’s execution”.
ASYDE 2021, co-located with the 19th International Conference on Software Engineering and Formal Methods (SEFM 2021), provides a forum for researchers to put forward and discuss automated software development methods and techniques, automated planning mechanisms, compositional verification theories and more.
ASYDE 2021 was a great opportunity to share ongoing research with experts in the field of software engineering and formal methods and to gather valuable feedback on how to further improve the work going forward.