Instituto de Engenharia de Sistemas e Computadores de Coimbra
Institute of Systems Engineering and Computers
INESC - Coimbra
Humberto Rocha
Brı́gida da Costa Ferreira
Joana Matos Dias
Maria do Carmo Lopes
Towards efficient transition from optimized to delivery
fluence maps in inverse planning of radiotherapy design
No. 7
2010
ISSN: 1645-2631
Instituto de Engenharia de Sistemas e Computadores de Coimbra
INESC - Coimbra
Rua Antero de Quental, 199; 3000-033 Coimbra; Portugal
www.inescc.pt
Towards efficient transition from optimized to delivery
fluence maps in inverse planning of radiotherapy design
H. Rocha∗   J. M. Dias∗,†   B. C. Ferreira§,‡   M. C. Lopes§
June 21, 2010
Abstract
The intensity modulated radiation therapy (IMRT) treatment planning problem is usually divided into three smaller problems that are solved sequentially: the geometry problem, the intensity problem, and the realization problem. Most of the published research addresses each of these problems separately, and many models and algorithms exist that solve each of them satisfactorily. However, the three problems also need to be linked properly. While the linkage between the geometry problem and the intensity problem is straightforward, the linkage between the intensity problem and the realization problem is far from simple, may lead to significant deterioration of plan quality, and is rarely, if ever, documented in publications. The goal of this research report is to explain how the transition from the intensity problem to the realization problem is typically done and, using a clinical example of a head and neck cancer case, to present numerical evidence of the resulting deterioration of plan quality. Research directions to improve the discretization of an optimal continuous fluence map are highlighted.
Key words. Radiotherapy, IMRT, Fluence map optimization, Delivery problem.
∗ INESC-Coimbra, Coimbra, Portugal.
† Faculdade de Economia, Universidade de Coimbra, Coimbra, Portugal.
‡ I3N, Departamento de Física, Universidade de Aveiro, Aveiro, Portugal.
§ Serviço de Física Médica, IPOC-FG, EPE, Coimbra, Portugal.
1 Introduction
The goal of radiation therapy is to deliver a dose of radiation to the cancerous region that sterilizes the tumor while minimizing the damage to the surrounding healthy organs and tissues. Radiation therapy exploits the fact that cancerous cells, being focused on fast reproduction, are less able than healthy cells to repair themselves when damaged by radiation. Therefore, the goal of the treatment is to deliver enough radiation to kill the cancerous cells, but not so much that the survival of the healthy cells is jeopardized. The continuous development of new treatment machines contributes to improved accuracy and better control over the radiation delivery.
There are two types of radiation treatment planning: forward planning and inverse planning. Forward planning is a trial-and-error approach in which the dose distribution is calculated for a fixed set of parameters (beams and fluences). This manual process is repeated until a suitable treatment is finally found for some set of parameters; it is time consuming, patience is a key factor, and there is no guarantee of producing high quality treatment plans. Inverse planning, as the name suggests, is the reverse of forward planning: for a prescribed treatment plan, a corresponding set of parameters (beams and fluences) is algorithmically computed in order to fulfil the prescribed doses and restrictions. Inverse treatment planning allows the modeling of highly
complex treatment planning problems and optimization has a fundamental role in the
success of this procedure. An important type of inverse treatment planning is intensity
modulated radiation therapy (IMRT) where the radiation beam is modulated by a multileaf
collimator (see Fig. 1(a)). Multileaf collimators (MLC) enable the transformation of the
beam into a grid of smaller beamlets of independent intensities (see Fig. 1(b)). Despite the
illustration of Fig. 1(b), beamlets do not exist physically. Their existence is generated by
the movement of the leaves of the MLC in Fig. 1(a) that block part of the beam during
portions of the delivery time. The MLC has movable leaves on both sides that can be
positioned at any beamlet grid boundary. MLC can operate in two distinct ways: dynamic
collimation or multiple static collimation. In the first case, the leaves move continuously
during irradiation. In the second case, the “step and shoot mode”, the leaves are set
to open a desired aperture during each segment of the delivery and radiation is on for a
specific fluence time or intensity. This procedure generates a discrete set (the set of chosen beam angles) of intensity maps like the one in Fig. 1(b). Here, one will consider multiple static collimation.

Figure 1: Illustration of a multileaf collimator (with 9 pairs of leaves) – 1(a) and illustration of a beamlet intensity map (9 × 9) – 1(b).
A common way to solve the inverse planning in IMRT optimization problems is to use a
beamlet-based approach. This approach leads to a large-scale programming problem with
thousands of variables and hundreds of thousands of constraints. The quality of the plan, which depends on the programming models and the corresponding solution methods, determines the clinical treatment effect. Due to the complexity of the whole optimization problem, the treatment planning is often divided into three smaller problems which can be
solved separately: geometry problem, intensity problem, and realization problem. The
geometry problem consists in finding the minimum number of beams and corresponding
directions that satisfy the treatment goals using optimization algorithms [6, 8, 13]. In clinical practice, most of the time, the number of beams is assumed to be defined a priori by the treatment planner, and the beam directions are still selected manually by the treatment planner, who relies mostly on experience. After deciding what beam angles should be
used, a patient will be treated using an optimal plan obtained by solving the intensity (or
fluence map) problem - the problem of determining the optimal beamlet weights for the
fixed beam angles. Many mathematical optimization models and algorithms have been proposed for the intensity problem, including linear models (e.g. [14, 15]), mixed integer linear
models (e.g. [10, 12]), nonlinear models (e.g. [18]), and multiobjective models (e.g. [16]).
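As an illustration of the beamlet-based formulation mentioned above, the sketch below solves a toy fluence map optimization as a nonnegative least-squares problem via projected gradient descent. The dose matrix, prescription, and problem sizes are invented toy values for illustration only; they are not taken from this report or from the cited models.

```python
import numpy as np

# Toy beamlet-based fluence map optimization (illustrative sketch):
# minimize ||D w - d||^2 subject to w >= 0, where D maps beamlet
# weights w to voxel doses and d holds the prescribed voxel doses.
rng = np.random.default_rng(0)
D = rng.random((40, 12))      # 40 voxels, 12 beamlets (toy sizes)
d = np.full(40, 60.0)         # uniform 60 Gy prescription (toy value)

# Projected gradient descent keeps every iterate feasible (w >= 0).
w = np.zeros(12)
step = 1.0 / np.linalg.norm(D.T @ D, 2)   # safe step: 1 / Lipschitz constant
for _ in range(5000):
    w = np.maximum(0.0, w - step * (D.T @ (D @ w - d)))

assert w.min() >= 0.0         # weights remain physically deliverable
```

Real clinical instances have thousands of beamlets and hundreds of thousands of voxel constraints, which is why the dedicated solvers cited in the text are used instead.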
After an acceptable set of intensity maps is produced, one must find a suitable way for
delivery (realization problem). Typically, beamlet intensities are discretized over a range of values (e.g., 0 to 10) and one of the many existing techniques ([1, 2, 14, 17]) is used to construct the apertures and intensities that approximately match the intensity maps previously determined. However, reproducing the optimized intensity maps efficiently, i.e., minimizing the radiation exposure time, is a challenging optimization problem. Moreover, one needs to find an efficient way for MLC devices to produce the exact same optimized intensity profiles. Due to leaf collision issues, perturbation of adjacent beamlet intensities by the leaves, tongue-and-groove constraints, etc., the intensity maps actually delivered may be substantially different from the optimized ones. These problems have been tackled (e.g., [9]) and are still an active field of research.
Most of the published research addresses each of the above problems separately, and many models and algorithms exist that solve each of them satisfactorily. However, the three problems also need to be linked properly. While the linkage between the geometry problem and the intensity problem is straightforward, the linkage between the intensity problem and the realization problem is far from simple, may lead to significant deterioration of plan quality, and is rarely, if ever, documented in publications. From the previous paragraph, and from most of the literature, one gets the idea that the deterioration of plan quality after obtaining optimal fluence maps is only, or mainly, caused by the difficulties of segmentation during the realization problem, related to the complexity of the fluence map or to physical restrictions of the MLC. That is not true. One of the main reasons for deterioration of plan quality during the realization problem, as will be detailed here, is the simplistic linkage between the intensity problem and the realization problem.
The goal of this research report is to explain how the transition from the intensity problem to the realization problem is typically done and, using a clinical example of a head and neck cancer case, to present numerical evidence of the resulting deterioration of plan quality. Research directions to improve the discretization of an optimal continuous fluence map will be highlighted.
2 Transition from intensity to realization problem
In order to better understand what is at stake when linking the intensity problem and the realization problem, let us detail what we have at the end of the intensity problem and what we need to do in the realization problem. The outcome of the intensity problem is a set of optimal fluence maps (one for each fixed beam) that can be represented by real matrices of m × n beamlet weights (the intensity assigned to each beamlet), i.e., there are m leaf pairs and for each leaf there are n + 1 possible positions. These matrices, solutions of the intensity problem, are not feasible with respect to the treatment parameters, so they must be transformed into feasible hardware settings a posteriori, with a degradation of plan quality. In order to convert an optimal fluence map into a set of MLC segments, in a process called segmentation, the real matrices must be transformed into integer matrices by discretizing each beamlet intensity (matrix entry) over a range of values. This discretization is one of the main causes of deterioration of plan quality. An illustration of the usual way this transition is done is presented next.
Let us assume that an optimal fluence map is represented by the real matrix:

$$
W_r = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 8.1366 & 13.105 & 13.4550 & 14.1250 & 13.8250 & 8.0398 & 3.5303 & 0 \\
0 & 5.9854 & 8.2366 & 12.9489 & 12.9689 & 8.0386 & 7.5884 & 3.2276 & 0 \\
0 & 3.5043 & 6.9877 & 7.5345 & 7.8395 & 7.5839 & 6.3485 & 2.5349 & 0 \\
0 & 3.5114 & 6.0657 & 7.3895 & 7.3589 & 5.9978 & 3.2313 & 1.7964 & 0 \\
0 & 1.1781 & 3.5217 & 5.9854 & 6.0874 & 3.3333 & 2.2225 & 1.5894 & 0 \\
0 & 1.1971 & 1.4574 & 3.5213 & 3.2323 & 3.1313 & 1.1943 & 0 & 0 \\
0 & 0 & 1.3771 & 1.3727 & 1.1799 & 1.1982 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} \tag{1}
$$
The ideal would be to deliver the intensities of the above real matrix in a continuous manner – see Fig. 2(a). However, that is not feasible for the existing MLCs. Typically, these beamlet intensities are discretized over a range of values – 0 to the number of levels. Let us consider 6 levels. A common procedure is to determine the level width by dividing the maximum intensity by the number of levels. For this example, the level width would be 2.3542. By dividing each beamlet intensity by 2.3542 and then rounding, we determine the level for each beamlet. This is equivalent to distributing the beamlet intensities according to Table 1.

Figure 2: Ideal continuous fluence – 2(a) and discrete fluence – 2(b).
Level   Level intensity   Beamlet intensity range
0       0.0000            [0.0000; 1.1771)
1       2.3542            [1.1771; 3.5313)
2       4.7083            [3.5313; 5.8854)
3       7.0625            [5.8854; 8.2396)
4       9.4167            [8.2396; 10.5938)
5       11.7708           [10.5938; 12.9479)
6       14.1250           [12.9479; 14.1250]

Table 1: Beamlet distribution to the corresponding intensity level.
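The discretization procedure just described can be sketched in a few lines. The values below reproduce the first nonzero row of Wr in Eq. (1); since that row contains the global maximum 14.1250, the level width comes out as 14.1250/6 ≈ 2.3542, as in the text.

```python
import numpy as np

def discretize(fluence, n_levels):
    """Round each beamlet weight to the nearest multiple of the level width."""
    width = fluence.max() / n_levels          # here 14.1250 / 6 = 2.3542
    levels = np.rint(fluence / width).astype(int)
    return levels, width * levels             # integer map (Wi) and leveled map (Wl)

# First nonzero row of the optimal fluence map Wr in Eq. (1):
row = np.array([0, 8.1366, 13.105, 13.4550, 14.1250, 13.8250, 8.0398, 3.5303, 0])
levels, leveled = discretize(row, 6)
# levels -> [0, 3, 6, 6, 6, 6, 3, 1, 0], matching the same row of Wi in Eq. (2)
```

Applying the same function to the full matrix Wr yields the matrices Wi and Wl of Eq. (2).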
This discretization leads to the fluence map of Fig. 2(b) and can be represented by the
following matrix of beamlet weights:

$$
W_l = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 7.0625 & 14.1250 & 14.1250 & 14.1250 & 14.1250 & 7.0625 & 2.3542 & 0 \\
0 & 7.0625 & 7.0625 & 14.1250 & 14.1250 & 7.0625 & 7.0625 & 2.3542 & 0 \\
0 & 2.3542 & 7.0625 & 7.0625 & 7.0625 & 7.0625 & 7.0625 & 2.3542 & 0 \\
0 & 2.3542 & 7.0625 & 7.0625 & 7.0625 & 7.0625 & 2.3542 & 2.3542 & 0 \\
0 & 2.3542 & 2.3542 & 7.0625 & 7.0625 & 2.3542 & 2.3542 & 2.3542 & 0 \\
0 & 2.3542 & 2.3542 & 2.3542 & 2.3542 & 2.3542 & 2.3542 & 0 & 0 \\
0 & 0 & 2.3542 & 2.3542 & 2.3542 & 2.3542 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
= \alpha \times W_i = 2.3542 \times \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 3 & 6 & 6 & 6 & 6 & 3 & 1 & 0 \\
0 & 3 & 3 & 6 & 6 & 3 & 3 & 1 & 0 \\
0 & 1 & 3 & 3 & 3 & 3 & 3 & 1 & 0 \\
0 & 1 & 3 & 3 & 3 & 3 & 1 & 1 & 0 \\
0 & 1 & 1 & 3 & 3 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}. \tag{2}
$$
Note that even though the number of levels is 6, beamlets were only assigned to 3 different levels (1, 3, and 6), leading to larger gaps between levels. Note also that this discretization, performed with no criterion, may assign very different intensities to the same level and similar intensities to different levels. E.g., 1.1781 and 3.5217 would be in the same level, with intensity 2.3542, while 3.5854, which is closer to 3.5217, would be in a different level, with intensity 4.7083.
Since the weights of the different beamlets are different, and MLC devices produce the same radiation intensity for all the exposed beamlets (when the beam is on), beamlet variation can be achieved by changing the beam aperture and superimposing the corresponding number of segments. The problem of choosing the leaf positions and corresponding apertures to consider for delivery is equivalent to finding the decomposition of $W_i$ into $K$ binary matrices $B^k$, called shape matrices, such that

$$
W_i = \sum_{k=1}^{K} \alpha_k B^k,
$$

where $\alpha_k$ is the intensity value (exposure time) for each aperture and the $B^k$ are 0–1 matrices in which a 0-entry means the radiation is blocked and a 1-entry means the radiation goes through.
More than one decomposition can be found for a given fluence matrix. For the integer matrix $W_i$ in (2), a possible decomposition is

$$
W_i = 1 \times \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
+ 2 \times \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
+ 3 \times \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix},
$$
which is equivalent to the superimposition of the apertures of Fig. 3.
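The decomposition above can be checked numerically. The sketch below rebuilds Wi from three aperture (shape) matrices with exposure times 1, 2, and 3, and also verifies that every row of every aperture exposes a single contiguous run of beamlets, as an MLC leaf pair requires; the `aperture` helper is a convenience introduced only for this illustration.

```python
import numpy as np

def aperture(runs, shape=(9, 9)):
    """Build a 0-1 shape matrix from (row, first_col, last_col + 1) open runs."""
    B = np.zeros(shape, dtype=int)
    for r, a, b in runs:
        B[r, a:b] = 1
    return B

# The apertures of Fig. 3 (one contiguous open run per leaf pair and row):
B1 = aperture([(1, 1, 8), (2, 1, 8), (3, 1, 8), (4, 1, 8),
               (5, 1, 8), (6, 1, 7), (7, 2, 6)])
B2 = aperture([(1, 1, 7), (2, 1, 7), (3, 2, 7), (4, 2, 6), (5, 3, 5)])
B3 = aperture([(1, 2, 6), (2, 3, 5)])

# Integer fluence map Wi from Eq. (2):
Wi = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
               [0, 3, 6, 6, 6, 6, 3, 1, 0],
               [0, 3, 3, 6, 6, 3, 3, 1, 0],
               [0, 1, 3, 3, 3, 3, 3, 1, 0],
               [0, 1, 3, 3, 3, 3, 1, 1, 0],
               [0, 1, 1, 3, 3, 1, 1, 1, 0],
               [0, 1, 1, 1, 1, 1, 1, 0, 0],
               [0, 0, 1, 1, 1, 1, 0, 0, 0],
               [0, 0, 0, 0, 0, 0, 0, 0, 0]])

# Exposure times 1, 2 and 3 reproduce the integer fluence map exactly:
assert (1 * B1 + 2 * B2 + 3 * B3 == Wi).all()

def deliverable(B):
    """Each row must expose one contiguous run (consecutive-ones property)."""
    for row in B:
        idx = np.flatnonzero(row)
        if idx.size and idx[-1] - idx[0] + 1 != idx.size:
            return False
    return True

assert all(deliverable(B) for B in (B1, B2, B3))
```

Note that this check ignores inter-leaf constraints (collisions, tongue-and-groove), which are discussed next.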
In the previous illustration of a segmentation procedure, many issues inherent to the MLC's physical limitations were ignored, including leaf collisions, perturbation of adjacent beamlet intensities by the leaves, tongue-and-groove constraints, etc. These problems have been tackled (e.g., [9]), but the intensity maps actually delivered may still differ from the theoretical ones. However, the plan quality degradation originated by these issues (possible changes of decimals in each beamlet intensity) is limited when compared to the degradation caused by rounding the optimal beamlet intensities (possible changes of units in each beamlet intensity). By simple inspection of the matrix Wr in Eq. (1) and the matrix Wl in Eq. (2), one sees that linking the intensity problem and the delivery problem by a simple rounding leads to large differences in the beamlet intensities. Increasing or reducing a beamlet intensity with no criterion should be avoided, since doing so jeopardizes all the optimization effort.

Figure 3: Sequence of apertures and intensities for a decomposition of Wi in (2).
3 Numerical results
A clinical example of a head and neck case is used to verify the deterioration caused by the rounding of the optimal fluence maps. In general, the head and neck region is a complex area to treat with radiotherapy due to the large number of sensitive organs in this region (e.g., eyes, mandible, larynx, oral cavity, etc.). For simplicity, in this study, the OARs used for treatment optimization were limited to the spinal cord, the brainstem, and the parotid glands. The tumor to be treated plus some safety margins is called the planning target volume (PTV). For the head and neck case under study, it was separated into two parts: PTV left and PTV right (see Figure 4). The prescribed doses for all the structures considered in the optimization are presented in Table 2.
In order to facilitate convenient access, visualization and analysis of patient treatment
planning data, the computational tools developed within Matlab [11] and CERR [7] (computational environment for radiotherapy research) were used as the main software platform
to embody our optimization research and provide the necessary dosimetry data to perform
optimization in IMRT.
Figure 4: Structures considered in the IMRT optimization visualized in CERR.
Structure       Mean Dose   Max Dose   Prescribed Dose   Priority
Spinal cord     –           45 Gy      –                 1
Brainstem       –           54 Gy      –                 1
Left parotid    26 Gy       –          –                 3
Right parotid   26 Gy       –          –                 3
PTV left        –           –          59.4 Gy           2
PTV right       –           –          50.4 Gy           2
Body            –           70 Gy      –                 –

Table 2: Prescribed doses for all the structures considered for IMRT optimization.
In order to perform IMRT optimization on this case, two models were used: a linear model and a nonlinear model. Our tests were performed on a 2.66 GHz Intel Core Duo PC with 3 GB RAM. We used CERR version 3.2.2 and Matlab 7.4.0 (R2007a). The dose was computed using CERR's pencil beam algorithm (QIB), with seven equispaced beams in a coplanar arrangement, at angles 0, 51, 103, 154, 206, 257 and 309 degrees, and with a 0 degree collimator angle. In order to solve the nonlinear model we used the Matlab routine quadprog, which for large-scale problems uses a reflective trust-region algorithm. To address the linear problem
Level   Level intensity   Beamlet intensity range
0       0.0000            [0.0000; 1.2857)
1       2.5714            [1.2857; 3.8571)
2       5.1429            [3.8571; 6.4286)
3       7.7143            [6.4286; 9.0000)
4       10.285            [9.0000; 11.571)
5       12.857            [11.571; 14.142)
6       15.428            [14.142; 16.714)
7       18.000            [16.714; 18.000]

Table 3: Beamlet distribution to the corresponding intensity level for 7 levels.
Level   Level intensity   Beamlet intensity range
0       0.0000            [0.0000; 1.8000)
1       3.6000            [1.8000; 5.4000)
2       7.2000            [5.4000; 9.0000)
3       10.800            [9.0000; 12.600)
4       14.400            [12.600; 16.200)
5       18.000            [16.200; 18.000]

Table 4: Beamlet distribution to the corresponding intensity level for 5 levels.
we used one of the most effective commercial tools for solving large-scale linear programs – Cplex [4]. We used a barrier algorithm (the baropt solver of Cplex 10.0) to tackle our linear problem.
In order to assess the degree of plan quality deterioration, the results obtained for the optimal fluence maps were compared with the fluence maps obtained after rounding the optimal intensities using 7 levels and 5 levels. Tables 3 and 4 present the beamlet intensity range for each intensity level. By decreasing the number of levels, the segmentation problem is simplified, resulting in a more efficient delivery. However, by decreasing the number of levels, the beamlet intensity range increases, potentially leading to a more pronounced deterioration of the results. In the best-case scenario, for both numbers of levels, there are no differences between the optimal intensities and the rounded intensities. However, in the worst-case scenario, the difference between the optimal and the rounded intensity for each beamlet is 1.2857 for 7 levels and 1.8 for 5 levels. Since the number of beamlets is 1907 for this head and neck case, the total rounding error can be as large as 1907 times 1.2857 for 7 levels, or 1907 times 1.8 for 5 levels.

Figure 5: Ideal theoretical fluence – 5(a) and delivered fluence – 5(b) for a beam using Konrad.
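The worst-case figures quoted above follow directly from the level width: nearest-level rounding is off by at most half a level. A quick check, assuming a maximum beamlet intensity of 18 (the top level intensity in Tables 3 and 4):

```python
max_intensity = 18.0   # top level intensity in Tables 3 and 4
n_beamlets = 1907      # number of beamlets in this head and neck case

for n_levels in (7, 5):
    width = max_intensity / n_levels        # level width
    per_beamlet = width / 2.0               # worst-case nearest-level rounding error
    total_bound = n_beamlets * per_beamlet  # crude bound on the total error
    print(n_levels, round(per_beamlet, 4), round(total_bound, 1))
# 7 levels -> 1.2857 per beamlet; 5 levels -> 1.8 per beamlet, as quoted above
```

This bound is crude (errors partly cancel across beamlets), but it makes clear why fewer levels potentiate larger deterioration.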
It is important to remark that this rounding procedure is the usual procedure of many treatment planning systems, such as Konrad. Fig. 5 shows the optimal fluence for the beam corresponding to 0 degrees using Konrad and the delivered fluence after rounding using 7 levels.
The quality of the results can be perceived considering a variety of metrics and can change from patient to patient. Typically, results are judged by their cumulative dose-volume histogram (DVH) and by analyzing isodose curves, i.e., the level curves of equal dose per slice. An ideal DVH for the tumor would present 100% volume for all dose values ranging from zero to the prescribed dose value and then drop immediately to zero, indicating that the whole target volume is treated exactly as prescribed. Ideally, the curves for the organs at risk would instead drop immediately to zero, meaning that no volume receives radiation. In Fig. 6, DVH curves for the PTVs and parotids are presented for the optimal fluences using the linear and the nonlinear models, and for the rounded optimal intensities using 5 and 7 levels. Another commonly used metric considers the dose that 95% of the volume of the PTV receives (D95). Typically, 95% of the prescribed dose is required.
Figure 6: DVH for ideal optimal fluence using NLP vs 7 and 5 levels for right PTV and PRT – 6(a); DVH for rounded fluence using NLP vs 7 and 5 levels for left PTV and PRT – 6(b); DVH for ideal optimal fluence using LP vs 7 and 5 levels for right PTV and PRT – 6(c); and DVH for rounded fluence using LP vs 7 and 5 levels for left PTV and PRT – 6(d).
D95 is represented in Fig. 6 with an asterisk. By observing Fig. 6, some of the conclusions that can be drawn include:
• Deterioration of the results can be observed in the transition from the optimal fluence maps to the rounded ones;
• That deterioration is aggravated when fewer levels are considered;
• That deterioration is worse for linear models than for nonlinear models.
The first two conclusions were expected and are a corollary of what was said previously. The last conclusion is probably the most interesting, and the explanation for it might be
the fact that nonlinear models produce smoother solutions than linear models. Note that no segmentation was done, and the deterioration of the results is exclusively caused by the rounding of the optimal fluence maps. How can this issue be addressed?
4 Direct approaches and research directions
Apparently, no one has directly tackled the problem of the degradation of plan quality caused by rounding the optimal fluence maps, since nothing can be found about it in the literature, and the usual way to make the transition from the intensity problem to the delivery problem is to round (or truncate) the optimal fluences. However, probably to address this problem, not directly but by circumventing it, and also to address the MLC physical limitations, time limitations, etc. that make it impossible to deliver a plan for a given optimal fluence map, at least within a clinical time frame, some approaches choose to optimize apertures and fluences directly, instead of solving the realization problem after the intensity problem. By solving both problems at once, one has the guarantee that solutions are always feasible for delivery, without any post-processing. The column generation approach [12, 14] belongs to this second group of approaches, since the generated columns can be restricted to correspond to feasible hardware settings. An advantage of the column generation approach over other IMRT optimization approaches is that the complexity of the treatment plan can be controlled through the number of columns. This approach allows us to investigate the non-trivial trade-off between plan quality and treatment complexity. Another approach that belongs to this second group is the direct aperture optimization (DAO) scheme ([17]).
DAO and other variations, namely direct machine parameter optimization (DMPO), are integrated in the latest generation of planning systems. DAO can be viewed and explained as an optimization procedure similar to the post-optimization of the segmentation of the rounded fluence map represented by the matrix Wl in Eq. (2). After the segmentation of Wl we obtain

$$
W_l = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 7.0625 & 14.1250 & 14.1250 & 14.1250 & 14.1250 & 7.0625 & 2.3542 & 0 \\
0 & 7.0625 & 7.0625 & 14.1250 & 14.1250 & 7.0625 & 7.0625 & 2.3542 & 0 \\
0 & 2.3542 & 7.0625 & 7.0625 & 7.0625 & 7.0625 & 7.0625 & 2.3542 & 0 \\
0 & 2.3542 & 7.0625 & 7.0625 & 7.0625 & 7.0625 & 2.3542 & 2.3542 & 0 \\
0 & 2.3542 & 2.3542 & 7.0625 & 7.0625 & 2.3542 & 2.3542 & 2.3542 & 0 \\
0 & 2.3542 & 2.3542 & 2.3542 & 2.3542 & 2.3542 & 2.3542 & 0 & 0 \\
0 & 0 & 2.3542 & 2.3542 & 2.3542 & 2.3542 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} =
$$

$$
2.3542 \times \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
+ 4.7084 \times \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
+ 7.0626 \times \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\iff W_l = 2.3542 \times B^1 + 4.7084 \times B^2 + 7.0626 \times B^3.
$$
Let us assume that instead of the previous decomposition of $W_l$, we want to optimize the $\alpha$'s in $W_l = \alpha_1 \times B^1 + \alpha_2 \times B^2 + \alpha_3 \times B^3$, where the $B^i$ are the binary matrices of the previous decomposition, representing the apertures of Fig. 3. This procedure corresponds to re-optimizing the time each aperture is on. DAO is similar to this post-optimization, except
that the apertures to be considered are chosen from a very large set of possible apertures. For this choice, the DAO method needs a heuristic algorithm with no polynomial computational complexity guarantee. With DAO there is no need for conversion, filtering, weight optimization, or other kinds of post-processing. Moreover, there is control over the number of segments, and hence over the delivery time of the plan. However, the choice of the segments to be considered in the optimization is questionable. The key factor of DAO is not the optimization of the aperture times but the selection of the segments, and that selection is a direct, trial-and-error, random attempt. Moreover, for hard cases, which inverse planning is suited for, it is questionable whether DAO can obtain good results. These doubts are shared by the developers of the latest generation of planning systems, since the former option remains available. Therefore, it continues to be of the utmost interest to improve the transition from the intensity problem to the realization problem.
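The aperture-weight re-optimization described above can be sketched as a small least-squares problem: keep the three shape matrices fixed and fit the exposure times α directly to the real-valued optimal map Wr of Eq. (1), instead of using the rounded exposure times (2.3542, 4.7084, 7.0626). This is an illustrative sketch of the idea only, not the procedure of any specific planning system; the `aperture` helper is a convenience introduced for this illustration.

```python
import numpy as np

def aperture(runs, shape=(9, 9)):
    """Build a 0-1 shape matrix from (row, first_col, last_col + 1) open runs."""
    B = np.zeros(shape, dtype=int)
    for r, a, b in runs:
        B[r, a:b] = 1
    return B

# The three apertures of Fig. 3:
B1 = aperture([(1, 1, 8), (2, 1, 8), (3, 1, 8), (4, 1, 8),
               (5, 1, 8), (6, 1, 7), (7, 2, 6)])
B2 = aperture([(1, 1, 7), (2, 1, 7), (3, 2, 7), (4, 2, 6), (5, 3, 5)])
B3 = aperture([(1, 2, 6), (2, 3, 5)])

# Optimal (real-valued) fluence map Wr from Eq. (1):
Wr = np.array([
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 8.1366, 13.105, 13.4550, 14.1250, 13.8250, 8.0398, 3.5303, 0],
    [0, 5.9854, 8.2366, 12.9489, 12.9689, 8.0386, 7.5884, 3.2276, 0],
    [0, 3.5043, 6.9877, 7.5345, 7.8395, 7.5839, 6.3485, 2.5349, 0],
    [0, 3.5114, 6.0657, 7.3895, 7.3589, 5.9978, 3.2313, 1.7964, 0],
    [0, 1.1781, 3.5217, 5.9854, 6.0874, 3.3333, 2.2225, 1.5894, 0],
    [0, 1.1971, 1.4574, 3.5213, 3.2323, 3.1313, 1.1943, 0, 0],
    [0, 0, 1.3771, 1.3727, 1.1799, 1.1982, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0]])

# Fit Wr ≈ a1*B1 + a2*B2 + a3*B3 in the least-squares sense:
A = np.stack([B.ravel() for B in (B1, B2, B3)], axis=1)
alpha, *_ = np.linalg.lstsq(A, Wr.ravel(), rcond=None)

# The fitted exposure times come out close to, but not equal to, the rounded
# ones, and by construction they can only reduce the residual:
rounded = np.array([2.3542, 4.7084, 7.0626])
assert (np.linalg.norm(A @ alpha - Wr.ravel())
        <= np.linalg.norm(A @ rounded - Wr.ravel()))
```

The fixed-aperture fit can only recover part of the rounding loss, which is exactly why the selection of the segments, not the optimization of the times, is the key factor discussed above.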
One of the approaches to facilitate the transition from the intensity problem to the realization problem, as well as to reduce the overall delivery time, is to reduce the complexity of the fluence maps. However, it has to be noted that there is a limit to the degree to which the complexity of the plan can be reduced before severely affecting plan quality, as demonstrated by Craft et al. [5] when they investigated the trade-off between treatment plan quality and the required number of segments. DAO is one of the attempts to reduce the complexity of the plan, but it is not the only one. Attempts to decrease the fluence map complexity in "traditional" IMRT have been reported, using smoothing techniques. The impact of map fluctuations on leaf sequencing has been illustrated in the literature: intuitively, if the fluence maps have fewer fluctuations, the subsequent map realization based on MLC leaf sequencing becomes easier. Moreover, complicated fluence maps can lead to a high number of segments, which is not suitable for treatment because of the delivery time limitation. Therefore, a procedure is needed to reduce the final number of segments while keeping the quality of the plan. A common strategy is to apply filters after obtaining the fluence maps, as an attempt to remove noise. Many map-smoothing optimization models have been proposed in recent years, intended to balance the dose goal and the map smoothness. The Konrad filter averages the fluence intensity of a beamlet with the ones immediately to its left and right. However, this
Number of    Function    Function value
iterations   value       after leveling
0            -64.5445    -64.5445
10           -85.3138    -85.2807
20           -85.5205    -85.4670
50           -85.6076    -85.5611
100          -85.6311    -85.5815
237          -85.6404    -85.5939

Table 5: Optimization history.
filter strategy has the same deteriorating effect as the rounding of the intensities, since both procedures are carried out without taking the physician's preferences into account, and the resulting dose distribution may not be desirable.
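The neighbor-averaging idea attributed above to the Konrad filter can be sketched as a 1-D moving average along each leaf-pair row. The edge handling (replicating the border beamlets) is an assumption of this illustration, not a documented detail of Konrad.

```python
import numpy as np

def smooth_rows(fluence):
    """Average each beamlet with its immediate left and right neighbors."""
    # Replicate the border beamlets so edge entries are also averaged over
    # three values (assumption of this sketch; actual edge handling may differ).
    padded = np.pad(fluence, ((0, 0), (1, 1)), mode="edge")
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

# One row of Wr: the peak at 14.1250 is pulled down towards its neighbors,
# illustrating how filtering blindly alters the optimized intensities.
row = np.array([[0.0, 8.1366, 13.105, 13.455, 14.125, 13.825, 8.0398, 3.5303, 0.0]])
smoothed = smooth_rows(row)
```

Note how the filter moves every beamlet away from its optimized value regardless of its dosimetric importance, which is precisely the criticism made in the text.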
The ideal approach would be to obtain, as the result of an optimization process, a smooth fluence map with each fluence intensity as close to an intensity level as possible, so that rounding errors are minimized. The second goal is more difficult to achieve, but models incorporating the number of levels could be developed in order to minimize the distance to an intensity level during the optimization. The first goal of obtaining, as the result of an optimization process, a smoother fluence map has been studied (see, e.g., [3]). Apparently, there is a conflict between solving the beamlet-weight problem to optimality and keeping the degradation in plan quality small in the realization problem. In practice, approximate solutions to the IMRT optimization problems suffice; there are numerous non-negligible uncertainties and sources of error in the treatment planning process that make the search for the optimal solutions unwarranted. In particular, a near-optimal solution that corresponds to a simple treatment plan that can be delivered efficiently and with high accuracy is often preferred to the complex optimal solution.
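The second goal above suggests measuring, and eventually penalizing during optimization, how far each beamlet weight lies from its nearest intensity level. A minimal sketch of such a measure follows; the function and the penalty form are illustrative assumptions of this report's suggestion, not a model taken from the cited literature.

```python
import numpy as np

def level_distance(fluence, n_levels):
    """Total distance of the beamlet weights to their nearest intensity level."""
    width = fluence.max() / n_levels
    return float(np.abs(fluence - width * np.rint(fluence / width)).sum())

# A map already sitting exactly on the levels incurs no rounding error:
on_levels = 2.3542 * np.array([[0.0, 1.0, 3.0], [6.0, 3.0, 0.0]])
assert level_distance(on_levels, 6) < 1e-9
```

Added as a penalty term to a fluence map optimization model, a measure like this would drive solutions towards maps whose discretization, and hence whose delivery, loses little plan quality.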
In order to illustrate the importance of not "over-optimizing" the beamlet-weight problem, the previous head and neck clinical example was used with the nonlinear model. We ran the nonlinear model for 10 iterations, 20 iterations, 50 iterations, 100 iterations, and to optimality (237 iterations). We note, in Table 5, that after 50 iterations the objective function value is very close to the optimal one. Even after 20 iterations, most of the decrease is
Figure 7: Ideal fluence for the initial beamlets – 7(a), ideal fluence after 10 iterations – 7(b), after 20 iterations – 7(c), after 50 iterations – 7(d), after 100 iterations – 7(e), and after 237 iterations – 7(f), for a beam using a quadratic model.
Figure 8: DVH for the ideal optimal fluence using NLP vs the fluence after 50 iterations
using NLP for the right PTV and PRT – 8(a); DVH for the ideal optimal fluence using NLP
vs the fluence after 50 iterations using NLP for the left PTV and PRT – 8(b); DVH for the
rounded optimal fluence using NLP vs the rounded fluence after 50 iterations using NLP
for the right PTV and PRT – 8(c); and DVH for the rounded optimal fluence using NLP vs
the rounded fluence after 50 iterations using NLP for the left PTV and PRT – 8(d).
achieved. Looking at Fig. 7, we can observe that the fluence maps after 20 or 50 iterations are
smoother than the optimal one, and consequently segmentation would be easier and delivery
faster. Fig. 7(a) presents the starting fluence map at iteration 0, where all beamlets
corresponding to the beam's-eye view of the PTV have the same initial value.
In Fig. 8 we can see that the slight differences in the objective function values presented in
Table 5 correspond to equally slight differences in the DVH curves. Increasing the number
of iterations appears to have more influence on OAR sparing than on PTV coverage. One can
also see that rounding the non-optimal fluence map leads to less deterioration of the solution,
as can be seen by comparing the DVH curves of the right PTV in Figs. 8(a) and 8(c).
Concluding Remarks
Some of the conclusions and ideas drawn from this study include:
• The latest generation of planning systems offers two distinct ways of performing
IMRT optimization: a recent one, where segmentation is integrated into the optimization
procedure, as with DMPO, and a former one, where fluence optimization is followed
by subsequent segmentation.
• Recently, some comparison studies between these two approaches have been published, showing a slight advantage for DMPO, mainly in treatment time. However, the
former approach relies on a simplistic linking between fluence optimization and
segmentation. It is worth exploring whether an improvement of this linkage would enable
this approach to compete with the latest generation approach.
• DMPO is closer to forward planning than to inverse planning. Its main advantage is the
control over the number of segments and, consequently, over the delivery
time. However, for hard cases, for which inverse planning is best suited, it is questionable
whether DMPO can obtain good results. These doubts seem to be shared by the developers
of the latest generation of planning systems, since the former option remains available.
• Several paths may be pursued in order to improve the former models with respect to the
linkage between the intensity problem and the realization problem:
– Re-optimization of the optimized fluence map, deciding which level each beamlet
should be assigned to using appropriate criteria, rather than simply rounding or truncating;
– Use of tailored strategies for specific cases. For example, in H&N cases parotid
sparing could be enhanced by rounding down all beamlets that deliver radiation
to the parotid glands and rounding up all remaining beamlets;
– Incorporation of the segmentation process into the optimization.
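The parotid-sparing heuristic in the second item could look like the following sketch — purely illustrative, assuming a hypothetical boolean mask `hits_parotid` marking the beamlets whose rays cross the parotid glands:

```python
import numpy as np

def biased_round(fluence, level_spacing, hits_parotid):
    """Round beamlet intensities to multiples of level_spacing,
    biased by anatomy: beamlets irradiating the parotids are
    rounded down (floor), all others are rounded up (ceil).
    `hits_parotid` is a boolean array with the shape of `fluence`."""
    down = np.floor(fluence / level_spacing) * level_spacing
    up = np.ceil(fluence / level_spacing) * level_spacing
    return np.where(hits_parotid, down, up)

fmap = np.array([3.7, 5.2, 1.1, 8.9])
mask = np.array([True, False, True, False])  # beamlets 0 and 2 cross a parotid
result = biased_round(fmap, 2.0, mask)       # -> [2., 6., 0., 10.]
```

The same pattern extends to other organs at risk by changing the mask; the trade-off is a slight systematic increase of dose outside the spared structure, which would need to be checked against the target coverage criteria.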
• Several paths may be pursued in order to facilitate segmentation and speed up the
delivery process:
– Even though the leaf-sequencing process is complex, one can in general say
that smooth profiles are easier to convert into step-and-shoot segments than
non-smooth profiles. The degradation of the treatment quality will therefore
be larger when converting non-smooth profiles than smooth ones. Solving the
beamlet-weight problem to optimality produces non-smooth profiles with little gain
in solution quality. Finding smooth profiles that produce a high-quality, but not necessarily optimal, dose distribution before conversion has proved
to be a reliable path;
– Introduction of smoothing terms and solution refinement into the optimization;
– Incorporation of the segmentation process into the optimization.
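The smoothing idea above can be made concrete as a quadratic penalty on differences between neighbouring beamlets, added to the fluence map optimization objective — a generic sketch, not the formulation used in this report; the weight `lam` and the function names are assumptions:

```python
import numpy as np

def smoothness_penalty(fluence):
    """Sum of squared differences between horizontally and vertically
    adjacent beamlets of a 2-D fluence map; penalizing it discourages
    the sharp, hard-to-segment profiles discussed above."""
    dh = np.diff(fluence, axis=1)  # horizontal neighbour differences
    dv = np.diff(fluence, axis=0)  # vertical neighbour differences
    return float((dh ** 2).sum() + (dv ** 2).sum())

def smoothed_objective(fluence, base_objective, lam=0.1):
    """Original FMO objective plus a weighted smoothing term."""
    return base_objective(fluence) + lam * smoothness_penalty(fluence)

flat = np.ones((4, 4))       # a perfectly flat map has zero penalty
assert smoothness_penalty(flat) == 0.0
```

Increasing `lam` trades plan quality for smoother, faster-to-deliver profiles, which is exactly the tension between the beamlet-weight optimum and the realization problem noted earlier.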
References
[1] M. Alber, G. Meedt, F. Nusslin, and R. Reemtsen. On the degeneracy of the IMRT
optimization problem. Med. Phys., Vol. 29, 2002, 2584–2589.
[2] G. Bednarz, D. Michalski, C. Houser, M.S. Huq, Y. Xiao, P. R. Anne, and J. M.
Galvin, The use of mixed-integer programming for inverse treatment planning with
pre-defined segments, Phys. Med. Biol., Vol. 47, 2002, 2235–2245.
[3] F. Carlsson and A. Forsgren, Iterative regularization in intensity-modulated radiation
therapy optimization, Med. Phys., Vol. 33, 2006, 225–234.
[4] CPLEX, ILOG CPLEX, http://www.ilog.com/products/cplex.
[5] D. Craft, P. Suss, and T. Bortfeld, The tradeoff between treatment plan quality and required number of monitor units in intensity-modulated radiotherapy, Int. J. Radiat.
Oncol. Biol. Phys., Vol. 67, 2007, 1596–1605.
[6] D. Craft, Local beam angle optimization with linear programming and gradient search,
Phys. Med. Biol., Vol. 52, 2007, 127–135.
[7] J. O. Deasy, A. I. Blanco, and V. H. Clark, CERR: A Computational Environment
for Radiotherapy Research, Med. Phys., Vol. 30, 2003, 979–985.
[8] M. Ehrgott, A. Holder, and J. Reese, Beam selection in radiotherapy design, Linear Algebra and its Applications, Vol. 428,
2008, 1272–1312.
[9] T. Kalinowski, A duality based algorithm for multileaf collimator field segmentation
with interleaf collision constraint, Discrete Appl. Math., Vol. 152, 2005, 52–88.
[10] E. K. Lee, T. Fox, and I. Crocker, Integer programming applied to intensity-modulated
radiation therapy treatment planning, Ann. Oper. Res., Vol. 119, 2003, 165–181.
[11] MATLAB, The MathWorks Inc., http://www.mathworks.com.
[12] F. Preciado-Walters, M. P. Langer, R. L. Rardin, and V. Thai, Column generation for
IMRT cancer therapy optimization with implementable segments, Ann. Oper. Res.,
Vol. 148, 2006, 65–79.
[13] H. Rocha, J. M. Dias, B. C. Ferreira, M. C. Lopes, Direct search applied to beam
angle optimization in radiotherapy design, Inescc Research Report 06/2010, ISSN:
1645–2631, www.inescc.pt/documentos/6 2010.PDF.
[14] H. E. Romeijn, R. K. Ahuja, J. F. Dempsey, and A. Kumar, A column generation
approach to radiation therapy treatment planning using aperture modulation, SIAM
J. Optim., Vol. 15, 2005, 838–862.
[15] H. E. Romeijn, R. K. Ahuja, J. F. Dempsey, A. Kumar, and J. Li, A novel linear
programming approach to fluence map optimization for intensity modulated radiation
therapy treatment planning, Phys. Med. Biol., Vol. 48, 2003, 3521–3542.
[16] H. E. Romeijn, J. F. Dempsey, and J. Li, A unifying framework for multi-criteria
fluence map optimization models, Phys. Med. Biol., Vol. 49, 2004, 1991–2013.
[17] D. M. Shepard, M. A. Earl, X. A. Li, S. Naqvi, and C. Yu, Direct aperture optimization: a turnkey solution for step-and-shoot IMRT, Med. Phys., Vol. 29, 2002,
1007–1018.
[18] S. Spirou and C.-S. Chui, A gradient inverse planning algorithm with dose-volume
constraints, Med. Phys., Vol. 25, 1998, 321–333.