Making AI trustworthy in multimodal and healthcare scenarios
Cordelli E.; Guarrasi V.; Iannello G.; Sicilia R.; Soda P.; Tronchin L.
2023-01-01
Abstract
The pervasiveness of artificial intelligence in our daily lives has raised the need to understand and trust the outputs of learning models, especially when they are involved in decision processes. As a result, eXplainable Artificial Intelligence has captured growing interest in the scientific community, providing insights into the behaviour of these systems and ensuring algorithmic fairness, transparency and trustworthiness. In this contribution we overview our work on the explainability of deep learning models applied to time series and multimodal data, and towards extracting meaningful medical concepts.

Files in this item:
File | Type | License | Size | Format | Access
---|---|---|---|---|---
20.500.12610-79227.pdf | Editorial Version (PDF) | Not specified | 554.76 kB | Adobe PDF | Open access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.