Is Machine Learning Model Checking Privacy Preserving?

Bortolussi, Luca; Nenzi, Laura; Saveri, Gaia; Silvetti, Simone
2025-01-01

Abstract

Model checking, which formally verifies whether a system exhibits a certain behaviour or property, is typically tackled by means of algorithms that require knowledge of the system under analysis. To address this drawback, machine learning model checking has been proposed as a powerful approach that casts the model checking problem as an optimization problem, in which a predictor is learnt in a continuous latent space capturing the semantics of formulae. More specifically, a kernel for Signal Temporal Logic (STL) is introduced, so that features of specifications are automatically extracted by leveraging the kernel trick. This makes it possible to verify a new formula without accessing a (generative) model of the system, using only a given set of formulae and their satisfaction values, potentially yielding a privacy-preserving method for querying specifications of a system without granting access to it. This paper investigates the feasibility of this approach by quantifying the amount of information about the checked system that is leaked through machine learning model checking. The analysis is carried out for STL under different training regimes.
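
As a purely illustrative sketch (not the authors' implementation, and with made-up kernel values), the snippet below shows how such a kernel-based predictor of satisfaction values could be set up: a precomputed kernel matrix between training formulae stands in for the STL kernel, kernel ridge regression learns the predictor, and a new formula is then checked using only its kernel values against the training formulae, never a model of the system.

    # Illustrative sketch only: kernel ridge regression over a hypothetical,
    # precomputed STL kernel matrix. Kernel values and satisfaction labels are
    # placeholders; in practice they would come from an STL kernel and from
    # checking the training formulae against the system under analysis.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # K_train[i, j]: hypothetical kernel value between training formulae phi_i, phi_j
    K_train = np.array([[1.0, 0.6, 0.2],
                        [0.6, 1.0, 0.4],
                        [0.2, 0.4, 1.0]])
    # y_train[i]: satisfaction value of formula phi_i on the system
    y_train = np.array([0.9, 0.7, -0.3])

    # Fit the predictor in the latent space induced by the kernel: only pairwise
    # kernel evaluations are needed, never explicit feature vectors (kernel trick).
    predictor = KernelRidge(kernel="precomputed", alpha=1e-2)
    predictor.fit(K_train, y_train)

    # Checking a new formula psi requires only its kernel values against the
    # training formulae -- no access to a (generative) model of the system.
    K_test = np.array([[0.5, 0.8, 0.1]])   # hypothetical k(psi, phi_i) values
    print(predictor.predict(K_test))       # estimated satisfaction value of psi

The privacy question studied in the paper is whether these kernel evaluations and satisfaction values, taken together, nonetheless leak information about the underlying system.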
Use this identifier to cite or link to this item: https://hdl.handle.net/11368/3097561