Explainable Automated Anomaly Recognition in Failure Analysis: is Deep Learning Doing it Correctly?

Leonardo Arrighi; Sylvio Barbon Junior; Felice Andrea Pellegrino
2023-01-01

Abstract

eXplainable AI (XAI) techniques can be employed to help identify points of concern in the objects analyzed when using image-based Deep Neural Networks (DNNs). An increasing number of works propose the use of DNNs to perform Failure Analysis (FA) in various industrial applications. These DNNs support practitioners by providing an initial screening that speeds up the manual FA process. In this work, we offer a proof-of-concept for using a DNN to recognize failures in pictures of Printed Circuit Boards (PCBs), using the Boolean information of (non) faultiness as ground truth. To understand whether the model correctly identifies faulty connectors within the PCBs, we make use of XAI tools based on Class Activation Mapping (CAM), observing that the outputs of these techniques do not seem to align well with the faulty connectors. We further analyze the faithfulness of these techniques with respect to the DNN, observing that they often do not seem to capture features that are relevant to the model's decision process. Finally, we mask out faulty connectors from the original images and notice that the DNN's predictions do not change significantly, suggesting that the model did not learn to base its predictions on features associated with actual failures. We conclude with a warning that FA using DNNs should be conducted using more complex techniques, such as object detection, and that XAI tools should not be taken as oracles: their correctness should be further analyzed.
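To make the workflow described above concrete, the following is a minimal sketch of the two experiments: computing a CAM-style saliency map and running the connector-masking check. It assumes a PyTorch ResNet-18 binary classifier and uses Grad-CAM as the CAM variant; the file name pcb.jpg, the choice of layer4, and the connector bounding box coordinates are illustrative placeholders, not details taken from the paper.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Assumption: a ResNet-18 fine-tuned as a binary faulty/non-faulty PCB classifier.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def grad_cam(model, x, target_class, layer):
    """Grad-CAM saliency for `target_class`, hooked at `layer`."""
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove()
    h2.remove()
    # Weight each feature map by its spatially averaged gradient, ReLU the sum.
    w = grads[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()

img = preprocess(Image.open("pcb.jpg").convert("RGB")).unsqueeze(0)
pred = model(img).softmax(dim=1)
cam = grad_cam(model, img, target_class=pred.argmax().item(), layer=model.layer4)

# Masking check: occlude the annotated faulty-connector region and compare
# the predicted faultiness probability before and after.
masked = img.clone()
masked[:, :, 60:120, 80:160] = 0  # hypothetical connector bounding box
pred_masked = model(masked).softmax(dim=1)
print("P(faulty) original:", pred[0, 1].item(), "masked:", pred_masked[0, 1].item())

In the paper's reported finding, the two probabilities remain close even when the faulty connector is masked out, which is the evidence that the model's decision does not rest on the actual failure.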
Files in this product:
File: 2023___Leonardo_Arrighi___Explainable_AI_PCBs__compressed.pdf
Embargo until: 21/10/2024
Description: Final version of the publication - compressed version with lower-quality images
Type: Final post-refereeing draft (post-print)
License: Publisher copyright
Size: 253.39 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11368/3061718
Citations
  • Scopus: 0