Is Machine Learning Model Checking Privacy Preserving?

Bortolussi, Luca; Nenzi, Laura; Saveri, Gaia; Silvetti, Simone
2024-01-01

Abstract

Model checking, which formally verifies whether a system exhibits a certain behaviour or property, is typically tackled by algorithms that require knowledge of the system under analysis. To address this drawback, machine learning model checking has been proposed as a powerful approach that casts the model checking problem as an optimization problem, in which a predictor is learnt in a continuous latent space capturing the semantics of formulae. More specifically, a kernel for Signal Temporal Logic (STL) is introduced, so that features of specifications are extracted automatically by leveraging the kernel trick. This makes it possible to verify a new formula without access to a (generative) model of the system, using only a given set of formulae and their satisfaction values, potentially yielding a privacy-preserving method for querying specifications of a system without granting access to it. This paper investigates the feasibility of this approach by quantifying the amount of information about the checked system that is leaked by machine learning model checking. The analysis is carried out for STL under different training regimes.
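A minimal, self-contained sketch of the prediction step described above, assuming kernel ridge regression as the learnt predictor. Random vectors stand in for the STL robustness signatures that define the kernel (in the referenced approach, the kernel between two formulae is, roughly, an expected product of their robustness values over trajectories sampled from a fixed base measure); all names, sample sizes, and the regularization constant are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: row i of R would be the STL robustness of formula
# phi_i on n_trajectories trajectories sampled from a fixed base measure,
# and y[i] the satisfaction value of phi_i measured on the system.
n_formulae, n_trajectories = 50, 200
R = rng.standard_normal((n_formulae, n_trajectories))
y = rng.standard_normal(n_formulae)

# Monte Carlo estimate of the Gram matrix K[i, j] ~ E[rho(phi_i, x) * rho(phi_j, x)].
K = R @ R.T / n_trajectories

# Kernel ridge regression: alpha = (K + lam * I)^{-1} y. This is the kernel
# trick at work: only inner products between formulae are needed, never
# explicit feature vectors.
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(n_formulae), y)

# Checking a new formula phi_* needs only its robustness on the same base
# trajectories, not access to a model of the system itself.
r_star = rng.standard_normal(n_trajectories)
k_star = R @ r_star / n_trajectories
print(f"predicted satisfaction of the new formula: {k_star @ alpha:.3f}")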
Year: 2024
ISBN: 9783031751066; 9783031751073


Use this identifier to cite or link to this document: https://hdl.handle.net/11368/3097561