
Evolved and Transparent Pipelines for Biomedical Image Classification

Nadizar G.
2025-01-01

Abstract

This article presents an interpretable approach to binary image classification using Genetic Programming (GP), applied to the Patch-Camelyon (PCAM) dataset, which contains small tissue biopsy patches labeled as malignant or benign. While Deep Neural Networks (DNNs) achieve high performance in image classification, their opaque decision-making, proneness to overfitting, and dependence on large amounts of annotated data limit their utility in critical fields like digital pathology, where interpretability is essential. To address this, we employ GP, specifically the Multi-Modal Adaptive Graph Evolution (MAGE) framework, to evolve end-to-end image classification pipelines. We trained MAGE one hundred times, with key hyperparameters optimized for this task. Among all MAGE models trained, the best achieved 78% accuracy on the validation set and 76% on the test set. Among Convolutional Neural Networks (CNNs), our baseline, the best model obtained 84.5% accuracy on the validation set and 77.1% on the test set. Unlike CNNs, our GP approach enables program-level transparency, facilitating interpretability through example-based reasoning. By analyzing evolved programs with medical experts, we highlight the transparency of decision-making in MAGE pipelines, offering an interpretable alternative for medical image classification tasks where model interpretability is paramount.
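To make the idea of evolving a transparent classification program concrete, the following is a minimal toy sketch of the GP concept on synthetic 8×8 "patches". It is not the MAGE framework or its API: the feature set, the linear-rule representation, and the synthetic data generator are all illustrative assumptions, chosen only to show how an evolutionary loop can produce a small, human-readable decision rule.

```python
# Toy GP-style sketch (NOT the actual MAGE framework): evolves a thresholded
# linear rule over two simple patch features to classify synthetic "patches".
import random

random.seed(0)  # deterministic run for reproducibility

def features(patch):
    # Two interpretable features: mean intensity and a crude contrast measure.
    flat = [p for row in patch for p in row]
    return (sum(flat) / len(flat), max(flat) - min(flat))

def make_patch(label):
    # Hypothetical synthetic data: "malignant" (1) patches are darker and
    # higher-contrast than "benign" (0) ones.
    base = 0.3 if label else 0.7
    spread = 0.4 if label else 0.1
    return [[min(1.0, max(0.0, random.gauss(base, spread))) for _ in range(8)]
            for _ in range(8)]

data = [(make_patch(lbl), lbl) for lbl in [0, 1] * 50]

# An individual is (w_mean, w_contrast, threshold): a readable linear rule.
def predict(ind, patch):
    w1, w2, t = ind
    m, c = features(patch)
    return 1 if w1 * m + w2 * c > t else 0

def fitness(ind):
    # Training accuracy over the synthetic dataset.
    return sum(predict(ind, p) == y for p, y in data) / len(data)

def mutate(ind):
    # Gaussian perturbation of each gene.
    return tuple(g + random.gauss(0, 0.1) for g in ind)

# Simple (mu + lambda)-style evolutionary loop: keep the 10 best, refill
# the population with mutated copies of survivors.
pop = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=fitness)
print(f"best rule {tuple(round(g, 2) for g in best)} "
      f"training accuracy: {fitness(best):.2f}")
```

Unlike a CNN's weight tensors, the evolved individual here is a three-number rule a domain expert can read directly, which is the kind of program-level transparency the article argues for.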
2025
ISBN: 9783031899904; 9783031899911

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11368/3113819

Citations
  • Scopus: 0