Is Machine Learning Model Checking Privacy Preserving?
Bortolussi, Luca; Nenzi, Laura; Saveri, Gaia; Silvetti, Simone
2025-01-01
Abstract
Model checking, which formally verifies whether a system exhibits a certain behaviour or property, is typically tackled by algorithms that require knowledge of the system under analysis. To address this drawback, machine learning model checking has been proposed as a powerful approach that casts the model checking problem as an optimization problem in which a predictor is learnt in a continuous latent space capturing the semantics of formulae. More specifically, a kernel for Signal Temporal Logic (STL) is introduced, so that features of specifications are extracted automatically by leveraging the kernel trick. This makes it possible to verify a new formula without access to a (generative) model of the system, using only a given set of formulae and their satisfaction values, potentially leading to a privacy-preserving method for querying specifications of a system without granting access to it. This paper investigates the feasibility of this approach by quantifying the amount of information about the checked system that machine learning model checking leaks. The analysis is carried out for STL under different training regimes.
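To make the learning-based setting described above concrete, the sketch below shows how a kernel over formulae can drive a regression from specifications to satisfaction values, so that a new formula is "checked" using only previously evaluated formulae. This is a minimal illustration under stated assumptions, not the paper's STL kernel or implementation: the names (`TRAJS`, `rho_above`, `fit_predict`, ...) and the toy robustness semantics are hypothetical, and the kernel is approximated empirically as the mean product of robustness values over a bank of sampled trajectories.

```python
# Minimal sketch of kernel-based machine learning model checking
# (hypothetical names and toy semantics; not the authors' implementation).
# Formulae are represented by their robustness functions on trajectories;
# the kernel between two formulae is approximated as the empirical mean of
# the product of their robustness values over a bank of sampled trajectories.

import numpy as np

rng = np.random.default_rng(0)

# Bank of sample trajectories used only to embed formulae (assumption: 1-D signals).
TRAJS = rng.standard_normal((200, 50))  # 200 trajectories, 50 time steps each

def rho_above(threshold):
    """Robustness of 'eventually x > threshold' on a trajectory (toy semantics)."""
    return lambda x: np.max(x) - threshold

def rho_below(threshold):
    """Robustness of 'always x < threshold' on a trajectory (toy semantics)."""
    return lambda x: threshold - np.max(x)

def embed(formula):
    """Vector of robustness values of a formula over the trajectory bank."""
    return np.array([formula(x) for x in TRAJS])

def kernel(phi_a, phi_b):
    """Empirical kernel between formulae: mean product of robustness values."""
    return float(np.mean(embed(phi_a) * embed(phi_b)))

def fit_predict(train_formulae, train_sat, test_formulae, lam=1e-3):
    """Kernel ridge regression from formulae to satisfaction values."""
    K = np.array([[kernel(a, b) for b in train_formulae] for a in train_formulae])
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), np.asarray(train_sat, float))
    k_test = np.array([[kernel(t, b) for b in train_formulae] for t in test_formulae])
    return k_test @ alpha

# Training set: formulae with known satisfaction values for the (hidden) system.
train = [rho_above(0.0), rho_above(1.0), rho_below(2.0), rho_below(0.5)]
sat = [1.0, 0.7, 0.9, 0.2]  # e.g. estimated satisfaction probabilities

# A new specification is checked without access to the system itself.
print(fit_predict(train, sat, [rho_above(0.5)]))
```

In this setting the predictor never touches the system, only the training pairs of formulae and satisfaction values; the question the paper raises is how much information about the hidden system those pairs, and the trained predictor, nevertheless reveal.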
File | Size | Format
---|---|---
2024_Isola.pdf (closed access; type: publisher's version; license: publisher copyright) | 3.49 MB | Adobe PDF