RadioPathomics: Multimodal Learning in Non-Small Cell Lung Cancer for Adaptive Radiotherapy

Tortora, Matteo; Cordelli, Ermanno; Sicilia, Rosa; Ippolito, Edy; Perrone, Giuseppe; Ramella, Sara; Soda, Paolo
2023-01-01

Abstract

Current practice in cancer treatment collects multimodal data, such as radiology images, histopathology slides, genomics and clinical data. The importance of these data sources taken individually has fostered the recent rise of radiomics and pathomics, i.e., the extraction of quantitative features from radiology and histopathology images, respectively, to predict clinical outcomes or guide clinical decisions using artificial intelligence algorithms. Nevertheless, how to combine them into a single multimodal framework is still an open issue. In this work, we develop a multimodal late fusion approach that combines hand-crafted features computed from radiomics, pathomics and clinical data to predict radiotherapy treatment outcomes for non-small-cell lung cancer patients. Within this context, we investigate eight different late fusion rules and two patient-wise aggregation rules, leveraging the richness of information given by CT images, whole-slide scans and clinical data. The experiments in leave-one-patient-out cross-validation on an in-house cohort of 33 patients show that the proposed fusion-based multimodal paradigm, with an AUC equal to 90.9%, outperforms each unimodal approach, suggesting that data integration can advance precision medicine. The results also show that late fusion compares favourably against early fusion, another commonly used multimodal approach. As a further contribution, we explore the possibility of using a deep learning framework in place of hand-crafted features. In our scenario, characterised by different modalities and a limited amount of data, as may happen in other areas of cancer research, the results show that the latter are still a viable and effective option for extracting relevant information compared to the former.
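The abstract describes training one model per modality (radiomics, pathomics, clinical), fusing their outputs with a late fusion rule, aggregating sample-level scores patient-wise, and evaluating with leave-one-patient-out cross-validation. The snippet below is a minimal, hypothetical sketch of that pipeline shape only: the per-modality classifier (logistic regression), the four fusion rules shown, the mean patient-wise aggregation and the synthetic data are illustrative assumptions and do not reproduce the paper's actual features, models or eight fusion rules.

```python
# Minimal sketch (not the authors' code): late fusion of unimodal classifier
# probabilities, patient-wise aggregation, and leave-one-patient-out CV.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_patients = 120, 30
groups = rng.integers(0, n_patients, n_samples)   # patient id of each sample (e.g. CT slice / WSI tile)
y = (groups % 2).astype(int)                      # toy per-patient outcome label

# One hand-crafted feature matrix per modality (dimensions are arbitrary here)
modalities = {name: rng.normal(size=(n_samples, dim))
              for name, dim in [("radiomics", 20), ("pathomics", 30), ("clinical", 8)]}

# A few common late fusion rules applied to the stack of per-modality probabilities
fusion_rules = {
    "mean":    lambda p: p.mean(axis=0),
    "product": lambda p: p.prod(axis=0),
    "max":     lambda p: p.max(axis=0),
    "min":     lambda p: p.min(axis=0),
}

scores = {rule: [] for rule in fusion_rules}
labels = []
for train, test in LeaveOneGroupOut().split(modalities["clinical"], y, groups):
    # Train one unimodal classifier per modality and collect its probabilities
    # for the samples of the single held-out patient.
    probs = np.stack([
        LogisticRegression(max_iter=1000)
        .fit(X[train], y[train])
        .predict_proba(X[test])[:, 1]
        for X in modalities.values()
    ])                                            # shape: (n_modalities, n_test_samples)
    labels.append(y[test][0])
    for rule, fuse in fusion_rules.items():
        # Late fusion across modalities, then patient-wise aggregation (mean) over samples
        scores[rule].append(fuse(probs).mean())

for rule in fusion_rules:
    print(f"{rule:8s} AUC = {roc_auc_score(labels, scores[rule]):.3f}")
```

On synthetic noise the AUCs hover around chance; the point is only the structure, where early fusion would instead concatenate the three feature matrices before training a single classifier.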
Late fusion; machine learning; multimodal learning; non-small-cell lung cancer; pathomics; radiomics
Files in this record:
RadioPathomics_Multimodal_Learning_in_Non-Small_Cell_Lung_Cancer_for_Adaptive_Radiotherapy.pdf
Access: open access
Licence: Creative Commons
Size: 3.66 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12610/74963
Citations
  • PMC: ND
  • Scopus: 6
  • Web of Science (ISI): 2