Evaluating GANs in Medical Imaging

Sicilia R.;Cordelli E.;Ramella S.;Soda P.
2021-01-01

Abstract

Generative Adversarial Networks (GANs) have recently gained wide interest in computer vision and are used in many tasks, but their evaluation is still an open issue. This is especially true in medical imaging, where GAN application is in its infancy and where scores based on models trained on datasets far from the medical domain, e.g. the Inception score, can lead to misleading results. To overcome such limitations, we propose a framework to evaluate images generated by GANs in terms of fidelity and structural similarity with the real ones. On the one hand, we measure the distance between the probability densities of the real and generated samples by exploiting feature representations given by a Convolutional Neural Network (CNN) trained as a discriminator. On the other hand, we compute domain-independent metrics capturing high-level image quality. We also introduce a visual layer explaining the CNN. We extensively evaluate the proposed approach with four state-of-the-art GANs over a real-world medical dataset of CT lung images.
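
The record provides no code, so the following is only a minimal sketch of the two ingredients the abstract describes: a distance between the feature distributions of real and generated images, and a domain-independent structural-similarity score. It assumes feature vectors already extracted from a discriminator layer and paired grayscale CT slices; the function names and the Fréchet-style formulation are illustrative stand-ins, not the authors' actual implementation.

```python
# Illustrative sketch only: a Frechet-style distance between discriminator
# feature distributions and mean SSIM over paired images. Inputs (feature
# arrays, image pairs) are assumed; this is not the paper's implementation.
import numpy as np
from scipy import linalg
from skimage.metrics import structural_similarity as ssim


def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussian fits of two CNN feature sets.

    feats_real, feats_fake: arrays of shape (n_samples, n_features),
    e.g. activations taken from a discriminator layer (assumed inputs).
    """
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the covariance product; discard tiny imaginary parts.
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))


def mean_ssim(real_images, fake_images):
    """Average SSIM over paired grayscale images (e.g. CT slices)."""
    scores = [
        ssim(r, f, data_range=r.max() - r.min())
        for r, f in zip(real_images, fake_images)
    ]
    return float(np.mean(scores))
```

In this sketch a lower Fréchet-style distance indicates generated features closer to the real distribution, while a higher mean SSIM indicates stronger structural agreement; the paper's framework combines such fidelity and structural measures rather than relying on Inception-based scores.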
2021
978-3-030-88209-9
978-3-030-88210-5
GAN; Evaluation; CNN; Explainability

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12610/74329
Citations
  • PMC: not available
  • Scopus: 4
  • Web of Science: 6