
<h1 align="center">Compressed Radiation Treatment Planning (CompressRTP)</h1>

<h2 align="center">
  <a href="./images/RMR_NeurIPS_Paper.pdf">NeurIPS'2024</a> |
  <a href="https://arxiv.org/abs/2410.00756">ArXiv'2024</a> |
  <a href="https://iopscience.iop.org/article/10.1088/1361-6560/acbefe/meta">PMB'2023</a>
</h2>

# What is CompressRTP?
Radiotherapy is used to treat more than half of all cancer patients, either alone or in combination with other treatments like surgery, chemotherapy, or immunotherapy. It works by directing high-energy radiation beams at the patient's body to destroy cancer cells. Since every patient's anatomy is unique, radiotherapy must be personalized. This means customizing the radiation beams to effectively target the tumor while minimizing harm to nearby healthy tissue.

Personalizing radiotherapy involves solving large and complex optimization problems. These problems need to be solved quickly due to the limited time available in clinical settings. Currently, they are often solved using gross approximations, which can lead to less effective treatments. This might result in the tumor not receiving enough radiation or healthy tissues being exposed to excessive radiation. The CompressRTP project aims to solve these optimization problems both rapidly and accurately. This ongoing project currently includes tools introduced in our latest three publications [1, 2, 3].

# High-Level Overview
The optimization problems in radiotherapy are highly complex due to the "curse of dimensionality" since they involve many beams, beamlets (small segments of beams), and voxels (3D pixels representing volume). However, much of this data is redundant because it comes from discretizing a system that is inherently continuous. For example, radiation doses delivered from adjacent beamlets are highly correlated, and radiation doses delivered to neighboring voxels are very similar. This redundancy means that large-scale radiotherapy optimization problems are highly compressible, which is the foundation of **CompressRTP**.

Dimensionality reduction and compression have a rich history in statistics and engineering. Recently, these techniques have re-emerged as powerful tools for addressing increasingly high-dimensional problems in fields like big data and machine learning. Our goal is to **adapt and adopt** these versatile methods to **embed high-dimensional** radiotherapy optimization problems into **lower-dimensional spaces** so they can be solved efficiently. A general radiotherapy optimization problem can be formulated as:

$\text{Minimize} \quad f(Ax, x)$

Subject to $g(Ax, x) \leq 0, \quad x \geq 0$

where $x$ is the vector of beamlet intensities to be optimized and $A$ is the matrix that maps beamlet intensities $x$ to the doses $Ax$ delivered to voxels.
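As a minimal concrete instance of this abstract formulation (toy sizes and synthetic data; real objectives $f$ and constraints $g$ are more elaborate), take $f(Ax, x) = \lVert Ax - d \rVert^2$ for a prescribed dose $d$, with no constraint other than $x \geq 0$. This is a non-negative least-squares problem:

```python
import numpy as np
from scipy.optimize import nnls

# Toy instance of "Minimize f(Ax, x) subject to x >= 0":
# match a prescribed dose d as closely as possible with
# non-negative beamlet intensities x.
rng = np.random.default_rng(1)
A = rng.random((50, 10))   # toy stand-in for the dose matrix
d = A @ rng.random(10)     # a dose that is exactly achievable

x, residual = nnls(A, d)   # non-negative least squares
print(residual)            # ~0: the prescribed dose is met exactly
```

Real planning problems add clinical constraints $g$ (e.g., dose limits on organs at risk), but the $f(Ax, x)$ structure, with every evaluation dominated by the product $Ax$, is the same.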
1. **Matrix Compression to Address the Computational Intractability of $A$:**
   - **The Challenge:** The matrix $A$ is large and dense (approximately 100,000–500,000 rows and 5,000–20,000 columns) and is the main source of computational difficulty in solving radiotherapy optimization problems.
   - **Traditional Approach:** This matrix is often sparsified in practice by simply ignoring small elements (e.g., zeroing out elements less than 1% of the maximum value in $A$), which can potentially lead to sub-optimal treatment plans.
   - **CompressRTP Solutions:** We provide a compressed and accurate representation of matrix $A$ using two different techniques:
     - **(1.1) Sparse-Only Compression:** This technique sparsifies $A$ using advanced tools from probability and randomized linear algebra ([NeurIPS paper](./images/RMR_NeurIPS_Paper.pdf), [Sparse-Only Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/matrix_sparse_only.ipynb)).
     - **(1.2) Sparse-Plus-Low-Rank Compression:** This method decomposes $A$ into the sum of a sparse matrix and a low-rank matrix ([ArXiv paper](https://arxiv.org/abs/2410.00756), [Sparse-Plus-Low-Rank Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/matrix_sparse_plus_low_rank.ipynb)).
2. **Fluence Compression to Enforce Smoothness on $x$:**
   - **The Need for Smoothness:** The beamlet intensities $x$ need to be smooth for efficient and accurate delivery of radiation. Smoothness here refers to small variations in the intensity of neighboring beamlets in two dimensions.
   - **Traditional Approach:** Smoothness is often achieved implicitly by adding regularization terms to the objective function that discourage variations between neighboring beamlets.
   - **CompressRTP Solution:** We enforce smoothness explicitly by representing the beamlet intensities with low-frequency wavelets, resulting in built-in wavelet-induced smoothness. This integrates into the optimization problem as a set of linear constraints ([PMB paper](https://iopscience.iop.org/article/10.1088/1361-6560/acbefe/meta), [Wavelet Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/fluence_wavelets.ipynb)).


# 1) Matrix Compression to Address the Computational Intractability of $𝐴$
## 1.1) Sparse-Only Compression

In sparse-only compression, we replace the dense matrix $A$ with a sparse surrogate $S$ and solve:

$\text{Minimize} \quad f(Sx, x)$

Subject to $g(Sx, x) \leq 0, \quad x \geq 0$

($S \approx A$; $S$ is sparse, $A$ is dense)
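The computational payoff comes from storing $S$ in a sparse format, so that every matrix-vector product $Ax$ inside the optimizer becomes a cheap $Sx$. A minimal sketch of this plumbing on synthetic data (here $S$ is built by naive thresholding, the baseline that the RMR algorithm below improves on):

```python
import numpy as np
import scipy.sparse as sp

# Synthetic heavy-tailed stand-in for a dose matrix (not clinical data).
rng = np.random.default_rng(2)
A = rng.lognormal(sigma=2.0, size=(1000, 200))

# Naive surrogate: keep only entries >= 1% of the maximum, store as CSR.
threshold = 0.01 * A.max()
S = sp.csr_matrix(np.where(A >= threshold, A, 0.0))

x = rng.random(200)
dose_full = A @ x       # dense matvec: cost ~ rows * cols
dose_surrogate = S @ x  # sparse matvec: cost ~ number of non-zeros

print(f"S density: {S.nnz / A.size:.1%}")
```

In CSR format the cost of $Sx$ scales with the number of non-zeros rather than with the full matrix size, which is where the speedup comes from.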

In our [paper](./images/RMR_NeurIPS_Paper.pdf), we introduced **Randomized Minor Rectification (RMR)**, a simple yet effective matrix sparsification algorithm equipped with robust mathematical properties. The core principle of RMR is to **deterministically retain the large elements of a matrix while probabilistically handling the smaller ones**. Specifically, the RMR algorithm converts a dense matrix $A$ into a sparse matrix $S$ that typically contains only 2–4% non-zero elements, with negligible accuracy loss. This sparsification ensures that the optimal solution to the surrogate optimization problem (where $A$ is replaced by $S$) remains a near-optimal solution for the original problem. For a detailed mathematical analysis, refer to Theorems 3.6 and 3.9 in our [paper](./images/RMR_NeurIPS_Paper.pdf).
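A minimal sketch of the retain-large/randomize-small principle on synthetic data (this illustrates the idea only, not the exact RMR algorithm; for that, see Algorithm 1 in the paper). Keeping a small entry $a$ with probability $a/\delta$ and rescaling it to $\delta$ makes the surrogate unbiased, $\mathbb{E}[S] = A$:

```python
import numpy as np

def randomized_sparsify(A, delta, rng):
    """Keep entries >= delta deterministically; keep a smaller entry a
    with probability a / delta, rescaled to delta, so that E[S] = A.
    Entries are assumed non-negative, as in a dose matrix. This is a
    sketch of the principle, not the exact RMR algorithm."""
    S = np.where(A >= delta, A, 0.0)
    small = A < delta
    keep = rng.random(A.shape) < A / delta
    S[small & keep] = delta
    return S

rng = np.random.default_rng(3)
A = rng.lognormal(sigma=2.0, size=(1000, 200))  # synthetic, heavy-tailed
delta = 0.01 * A.max()

S_rand = randomized_sparsify(A, delta, rng)
S_naive = np.where(A >= delta, A, 0.0)          # naive: just zero small entries

x = rng.random(200)
dose = A @ x
err_rand = np.linalg.norm(dose - S_rand @ x) / np.linalg.norm(dose)
err_naive = np.linalg.norm(dose - S_naive @ x) / np.linalg.norm(dose)
print(err_rand, err_naive)  # unbiased randomization gives the smaller error
```

Naive zeroing systematically underestimates the dose (every dropped element biases $Sx$ downward), whereas the randomized rule trades that bias for zero-mean noise that largely cancels across the many small entries in each row.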
<p align="center">
  <img src="./images/RMR_vs_Others.png" width="90%" height="40%">
</p>
**Figure Explanation:** The figure above compares RMR sparsification with the conventional approach, in which $A$ is sparsified by simply zeroing out small elements, a technique commonly used in practice.

**Implementation in PortPy:**

If you are using PortPy for your radiotherapy research, you can apply RMR sparsification by simply adding the following lines of code. For more details, see the [Sparse-Only Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/matrix_sparse_only.ipynb).

```python
from compress_rtp.utils.get_sparse_only import get_sparse_only

# Replace the dense dose matrix A with its RMR-sparsified surrogate S
# (argument values are illustrative; see the Sparse-Only notebook for exact usage)
S = get_sparse_only(A=A, threshold_perc=10, compression='rmr')
```

## 1.2) Sparse-Plus-Low-Rank Compression

The matrix $A$ is dense mainly because of the many small scattering doses it contains, and this scattering component is **low-rank** and therefore **compressible**.

<p align="center">
  <img src="./images/SPlusL_singular_values.png" width="90%" height="40%">
</p>

**Figure Explanation:** The low-rank nature of matrix $A$ can be verified by observing the exponential decay of its singular values, as shown by the blue line in the **left figure**. If we decompose $A$ as $A = S + L$, where $S$ is a sparse matrix containing the large-magnitude elements (e.g., elements greater than 1% of the maximum value of $A$) and $L$ contains the smaller elements mainly representing scattering doses, then the singular values of the scattering matrix $L$ exhibit an even sharper exponential decay (depicted by the red line). This motivates the “sparse-plus-low-rank” compression $A \approx S + HW$, shown schematically in the **right figure**.


The matrix $S$ is sparse, $H$ is a “tall skinny” matrix with only a few columns, and $W$ is a “wide short” matrix with only a few rows. Therefore, $A \approx S + HW$ provides a compressed representation of the data. This allows us to solve the following surrogate problem instead of the original one:

$\text{Minimize} \quad f(Sx + Hy, x)$

Subject to $g(Sx + Hy, x) \leq 0, \quad y = Wx, \quad x \geq 0$
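The compression behind these equations can be sketched on synthetic data (simplifying assumptions: the “scatter” part is constructed to be exactly low-rank, and a plain truncated SVD stands in for the more careful factorization used in practice):

```python
import numpy as np

# Synthetic A = a few large "primary" doses + a low-rank "scatter" background.
rng = np.random.default_rng(4)
n_vox, n_blt, r = 500, 120, 5
scatter = 0.1 * rng.random((n_vox, r)) @ rng.random((r, n_blt))
A = scatter.copy()
mask = rng.random(A.shape) < 0.03          # sparse set of large primary doses
A[mask] += 50.0

# Split off the large entries into sparse S; compress the remainder by SVD.
S = np.where(A >= 0.01 * A.max(), A, 0.0)  # sparse part
L = A - S                                  # small-entry remainder
U, s, Vt = np.linalg.svd(L, full_matrices=False)
k = 10
H, W = U[:, :k] * s[:k], Vt[:k, :]         # L ~ H @ W (tall-skinny times wide-short)

rel_err = np.linalg.norm(A - (S + H @ W)) / np.linalg.norm(A)
print(rel_err)  # small: S + H @ W reproduces A accurately with far less storage
```

Storing $S$ (few non-zeros), $H$ ($n_\text{vox} \times k$), and $W$ ($k \times n_\text{blt}$) takes a small fraction of the memory of the dense $A$, and $Sx + H(Wx)$ replaces the dense matvec.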


Decomposing a matrix into the sum of a sparse matrix and a low-rank matrix has found numerous applications in fields such as computer vision, medical imaging, and statistics. Historically, this structure has been employed as a form of prior knowledge to recover objects of interest that manifest themselves in either the sparse or low-rank components. However, the application presented here represents a novel departure from conventional uses of sparse-plus-low-rank decomposition. Unlike traditional settings where specific components (sparse or low-rank) hold intrinsic importance, our primary goal is not to isolate or interpret these structures. Instead, we leverage them for computationally efficient matrix representation. In this case, the structure serves purely as a tool for optimizing computational efficiency while maintaining data integrity.

**Note:** Both sparse-only and sparse-plus-low-rank compression techniques serve the same purpose. We are currently investigating the strengths and weaknesses of each technique and their potential combination. Stay tuned for more results.

**Implementation in PortPy:**

In PortPy, you can apply sparse-plus-low-rank compression using the following lines of code. Unlike sparse-only compression with RMR, which requires no changes other than replacing $Ax$ with $Sx$ in your optimization formulation and code, this compression requires adding the linear constraint $y = Wx$ and replacing $Ax$ with $Sx + Hy$. These changes can be easily implemented using CVXPy (see the [Sparse-Plus-Low-Rank Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/matrix_sparse_plus_low_rank.ipynb) for details).

```python
from compress_rtp.utils.get_sparse_plus_low_rank import get_sparse_plus_low_rank

# Decompose A into a sparse matrix S plus a low-rank product H @ W
# (argument values are illustrative; see the Sparse-Plus-Low-Rank notebook)
S, H, W = get_sparse_plus_low_rank(A=A, threshold_perc=1, rank=5)
```

# 2) Fluence Compression to Enforce Smoothness on $x$

To enforce smoothness explicitly, we treat the intensity map of each beam as a **2D image** and represent it using a small number of low-frequency wavelet basis functions, which provides built-in wavelet-induced smoothness.
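The low-frequency idea can be sketched without any wavelet library. Below, a one-level Haar transform is assumed for illustration: keeping only the approximation (low-frequency) coefficients of a toy fluence map and zeroing the details reconstructs a map that is constant on 2×2 blocks, i.e., smoother, with 4× fewer degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(5)
fluence = rng.random((16, 16))   # toy noisy beam intensity map

# One-level Haar approximation coefficients are (scaled) 2x2 block averages.
approx = fluence.reshape(8, 2, 8, 2).mean(axis=(1, 3))

# Reconstruct with all high-frequency (detail) coefficients set to zero.
smooth = np.repeat(np.repeat(approx, 2, axis=0), 2, axis=1)

# Total variation: summed absolute differences between neighboring beamlets.
tv = lambda f: np.abs(np.diff(f, axis=0)).sum() + np.abs(np.diff(f, axis=1)).sum()
print(tv(smooth) < tv(fluence))  # True: neighbor-to-neighbor variation shrinks
```

Writing $x = Wy$, with the columns of $W$ holding such low-frequency basis functions and $y$ the retained coefficients, turns this into the linear constraints mentioned above.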

**Implementation in PortPy:**

In **PortPy**, you can incorporate wavelet smoothness by adding the following lines of code. For a detailed explanation, see the [Wavelet Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/fluence_wavelets.ipynb).

```python
from compress_rtp.utils.get_low_dim_basis import get_low_dim_basis

# Construct the low-frequency wavelet basis W used in the constraint x = W y
# (usage is illustrative; see the Wavelet notebook for the exact call)
wavelet_basis = get_low_dim_basis(inf_matrix=inf_matrix, compression='wavelet')
```
