Deep Reinforcement Learning for Fractionated Radiotherapy in Non-Small Cell Lung Carcinoma

Matteo Tortora, Ermanno Cordelli, Rosa Sicilia, Paolo Matteucci, Giulio Iannello, Sara Ramella, Paolo Soda
2021-01-01

Abstract

Lung cancer is by far the leading cause of cancer death among both men and women. Radiation therapy is one of the main approaches to lung cancer treatment, and its planning is crucial for the therapy outcome. However, the current practice of delivering the dose uniformly does not account for patient-specific tumour features that may affect treatment success. Since radiation therapy is by its very nature a sequential procedure, Deep Reinforcement Learning (DRL) is a well-suited methodology to overcome this limitation. In this respect, in this work we present a DRL controller that optimizes the daily dose fraction delivered to the patient on the basis of CT scans collected over time during the therapy, offering a treatment personalized not only through volume adaptation, as currently intended, but also through daily fractionation. Furthermore, this contribution introduces a virtual radiotherapy environment based on a set of ordinary differential equations modelling tissue radiosensitivity by combining the effect of the radiotherapy treatment with cell growth. The model parameters are estimated from routinely collected CT scans using the Particle Swarm Optimization algorithm. This permits the DRL agent to learn the optimal behaviour through an iterative trial-and-error process with the environment. We performed several experiments considering three reward functions, modelling treatment strategies with different tissue aggressiveness, and two exploration strategies for the exploration-exploitation dilemma. The results show that our DRL approach can adapt to radiation therapy treatment, optimizing its behaviour according to the different reward functions and outperforming the current clinical practice.
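To illustrate the kind of environment the abstract describes, the sketch below implements a toy daily-fraction simulator: tumour volume grows by a simple exponential law and is reduced each day by a linear-quadratic (LQ) radiation-kill term. This is only a minimal illustration — the paper's actual ODE system, its PSO-fitted parameters, and its three reward functions are not given in the abstract, so the growth law, the LQ form, the parameter values, and the dose-penalized reward used here are all assumptions.

```python
import math

class ToyRadioEnv:
    """Toy daily-fraction radiotherapy environment (illustrative only).

    Assumptions, not the paper's model: exponential tumour growth and a
    linear-quadratic survival term exp(-(alpha*d + beta*d^2)) per fraction.
    """

    def __init__(self, v0=100.0, rho=0.02, alpha=0.08, beta=0.01, horizon=30):
        self.v0 = v0          # initial tumour volume (arbitrary units)
        self.rho = rho        # daily exponential growth rate
        self.alpha = alpha    # LQ linear radiosensitivity (1/Gy)
        self.beta = beta      # LQ quadratic radiosensitivity (1/Gy^2)
        self.horizon = horizon
        self.reset()

    def reset(self):
        self.v = self.v0
        self.day = 0
        return self.v

    def step(self, dose_gy):
        # Surviving cell fraction after one fraction of dose d (LQ model).
        survival = math.exp(-(self.alpha * dose_gy + self.beta * dose_gy ** 2))
        # One day of regrowth, then the radiation kill.
        self.v = self.v * math.exp(self.rho) * survival
        self.day += 1
        # Hypothetical reward: tumour burden penalized by dose,
        # the dose term standing in for healthy-tissue toxicity.
        reward = -self.v - 0.5 * dose_gy
        done = self.day >= self.horizon or self.v < 1e-3
        return self.v, reward, done

# Baseline rollout mimicking the conventional uniform schedule: 2 Gy/day.
env = ToyRadioEnv()
env.reset()
for _ in range(env.horizon):
    v, r, done = env.step(2.0)
    if done:
        break
```

A DRL agent in this setting would replace the fixed 2 Gy action with a learned, state-dependent dose, trading tumour shrinkage against the dose penalty encoded in the reward.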
D3QN; Deep reinforcement learning; NSCLC; Particle swarm optimization; Radiation therapy; Tumour treatment optimization


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12610/71865
Citations
  • PMC: 1
  • Scopus: 7
  • Web of Science: 7