
On kernel functions for bi-fidelity Gaussian process regressions

Parussini L.; Bregant L.
2023-01-01

Abstract

This paper investigates the impact of kernel functions on the accuracy of bi-fidelity Gaussian process regression (GPR) for engineering applications. The potential of composite kernel learning (CKL) and model selection is also studied, aiming to ease the process of manual kernel selection. Using the autoregressive Gaussian process as the base model, this paper studies four kernel functions and their combinations: Gaussian, Matern-3/2, Matern-5/2, and Cubic. Experiments on four engineering test problems show that the best kernel is problem-dependent and sometimes counterintuitive, even when a large amount of low-fidelity data already aids the model. In this regard, using CKL or automatic kernel selection via cross-validation or maximum likelihood reduces the tendency to select a poor-performing kernel. In addition, the CKL technique can create a slightly more accurate model than the best-performing individual kernel. The main drawback of CKL is its significantly higher computational cost. The results also show that, given a sufficient number of samples, tuning the regression term is important for improving the accuracy and robustness of bi-fidelity GPR, while decreasing the importance of proper kernel selection.
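For context, the autoregressive bi-fidelity model referenced in the abstract (in the Kennedy-O'Hagan tradition) links a scarce high-fidelity dataset to a plentiful low-fidelity one through a scaling factor and a discrepancy GP. The sketch below is a minimal, hypothetical illustration of that two-step idea using scikit-learn with a Matern-5/2 kernel and Forrester-style toy functions; it is not the authors' implementation, and the sample locations, nugget (regression term) values, and the least-squares estimate of the scaling factor rho are illustrative assumptions only.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel, ConstantKernel

# Hypothetical 1-D low-/high-fidelity pair (Forrester-style toy problem, not from the paper)
def f_high(x):
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def f_low(x):
    return 0.5 * f_high(x) + 10.0 * (x - 0.5) - 5.0

X_lo = np.linspace(0.0, 1.0, 11).reshape(-1, 1)        # plentiful low-fidelity samples
X_hi = np.array([[0.0], [0.4], [0.6], [1.0]])          # scarce high-fidelity samples
y_lo, y_hi = f_low(X_lo).ravel(), f_high(X_hi).ravel()

# Step 1: GP on the low-fidelity data (Matern-5/2 kernel plus a small nugget / "regression term")
k_lo = ConstantKernel() * Matern(nu=2.5) + WhiteKernel(1e-6)
gp_lo = GaussianProcessRegressor(kernel=k_lo, normalize_y=True).fit(X_lo, y_lo)

# Step 2: scaling factor rho and a discrepancy GP fitted at the high-fidelity sites
mu_lo_at_hi = gp_lo.predict(X_hi)
rho = np.dot(mu_lo_at_hi, y_hi) / np.dot(mu_lo_at_hi, mu_lo_at_hi)   # least-squares estimate
k_d = ConstantKernel() * Matern(nu=2.5) + WhiteKernel(1e-6)
gp_d = GaussianProcessRegressor(kernel=k_d, normalize_y=True).fit(X_hi, y_hi - rho * mu_lo_at_hi)

# Bi-fidelity prediction: rho * low-fidelity mean + discrepancy mean
X_test = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
y_bf = rho * gp_lo.predict(X_test) + gp_d.predict(X_test)

Swapping the Matern kernel for an RBF (Gaussian) kernel, or adjusting the WhiteKernel noise level, is the kind of kernel and regression-term choice whose influence the paper quantifies.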
Files in this record:
2023-On kernel functions for bi-fidelity Gaussian process regressions.pdf (Adobe PDF, 2.87 MB)
Type: Published (editorial) version
License: Publisher copyright
Access: Closed access (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11368/3085438
Citations
  • PubMed Central: not available
  • Scopus: 11
  • Web of Science: 10