Sequential drug decision problems in long-term medical conditions: a case study of primary hypertension. Eunju Kim, BA, MA, MSc



2.5 Computational complexity of sequential drug decision problems


SDDPs belong to the class of combinatorial optimisation problems, which seek a set, or a sequence, of discrete decision variables that achieves a stated objective. Where the search space SS consists of the possible drug sequences at discrete time points t=1,2,…,n, the optimal solution π*=(d1,d2,…,dm) is the sequence of drugs that maximises the value of the objective function. Such a combinatorial optimisation problem usually has a large decision space containing an enormous number of possible solutions. In SDDPs, the size of DS depends on the size of HS, the size of SS and δ, which represents the number of variations of drug sequence defined on T for a given policy option π within SS:
Equation 2.6.
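The combinatorial structure described above can be illustrated with a minimal sketch: candidate drug sequences are enumerated from the search space SS and scored by an objective function. The drug labels and the objective here are hypothetical placeholders, not the thesis model; in practice the objective would be the expected value of the sequence evaluated over the health-state space HS.

```python
from itertools import permutations

# Toy illustration (not the thesis model): m = 4 drug alternatives,
# each used at most once across the lines of therapy.
drugs = ["A", "B", "C", "D"]

# Placeholder objective: weights earlier treatment lines more heavily.
# A real SDDP objective would be the expected (discounted) net benefit
# of the sequence over the modelled health states.
def objective(seq):
    return sum((len(seq) - i) * (ord(d) - ord("A") + 1) for i, d in enumerate(seq))

# Exhaustive search over the search space SS of full-length sequences.
best = max(permutations(drugs), key=objective)
print(best, objective(best))
```

Exhaustive enumeration like this is only feasible for tiny m; the next paragraph shows how quickly SS grows.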

As described in the previous sub-section, the size of HS increases exponentially with an increased number of possible health states l and modelled time periods n (see Equation 2.1). The size of SS increases linearly with an increased number of drug alternatives m (see Equation 2.2). When the number of possible drug alternatives m is large, the size of DS can be reduced if the number of time periods n is reduced. One way to reduce the size of DS may be to assume a maximum allowed number of drug switches on clinical grounds. For example, if there are 10 possible drug alternatives for a specific health condition (i.e., A={1,2,…,10}, m=10), but clinically most patients would only consider a maximum of three drug switches (i.e., at most four drugs used), the size of the search space is Z(SS)=10×9×8×7=5,040. Without this clinical constraint, the size of the search space is Z(SS)=10!=3,628,800, which is more than 700 times bigger.
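The arithmetic in the example above can be checked directly. The figures below reproduce the constrained and unconstrained search-space sizes for m=10 drugs with at most three switches:

```python
import math

m = 10             # drug alternatives, A = {1, 2, ..., 10}
max_switches = 3   # clinical constraint: at most three switches, so four drugs used

# Ordered selections of 4 drugs out of 10: 10 * 9 * 8 * 7
constrained = math.perm(m, max_switches + 1)

# All orderings of the 10 drugs: 10!
unconstrained = math.factorial(m)

print(constrained, unconstrained, unconstrained // constrained)
# 5040 3628800 720
```

The constraint shrinks the search space by a factor of exactly 720, consistent with the "more than 700 times" figure in the text.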

Further computational complexity arises because of the dynamic and stochastic nature of SDDPs. The term ‘dynamic’ here describes the varying properties of SDDPs through time and the interaction between the time-dependent components of the model, such as patients’ risk factors, health states and treatment effectiveness[121]. Any change in state can be either pre-specified or occur randomly. In the former case, called a deterministic problem, the state at the next stage is completely determined by the state and policy decision at the current stage. In contrast, an SDDP is a stochastic problem, which allows patients’ health states to undergo modifications over time that are not fully known in advance. A transition probability function p(st+1|st,xt), where the Markovian assumption is applied, or p(st+1|s1,x1,…,st-1,xt-1,st,xt), where a non-Markovian assumption is applied, gives the probability that the system is in state st+1∈H at time t+1 if action xt∈A is chosen in state st at time t. The computational issue here is how to compute or store the huge number of transition probabilities for all possible actions when the number of health states in the decision space becomes very large. This problem gets worse where a semi-Markovian assumption is required in SDDPs. The dependency of treatment effectiveness on the timing of the drug used (e.g., used as first-line vs. second-line, or used for the first year vs. continuously after 5 years) and on the current and potentially previous health state(s) inevitably requires varying the transition probabilities, which causes the curse of dimensionality.
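The storage burden described above can be made concrete by counting transition-probability entries under progressively weaker assumptions. The numbers below are illustrative toy values, not taken from the thesis model:

```python
# Illustrative sketch (toy numbers): how many transition probabilities
# must be computed or stored as the Markovian assumption is relaxed.
l, m, n = 20, 10, 40   # health states, drug alternatives, time periods

# Time-homogeneous Markov: one l x l matrix per action.
markov = m * l * l

# Time-varying transitions (effectiveness depends on when the drug is
# used): a separate l x l matrix per action per time period.
time_varying = n * m * l * l

# Full history dependence p(s_{t+1} | s_1, x_1, ..., s_t, x_t): in the
# worst case, one entry per state-action history of length n — the
# curse of dimensionality.
history_dependent = (l * m) ** n

print(markov, time_varying)          # 4,000 and 160,000 entries
print(len(str(history_dependent)))   # a 93-digit number of histories
```

Even these modest toy values show the jump from thousands of entries to an astronomically large number once history dependence enters.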

Furthermore, computational time tends to increase exponentially as the size of the decision space increases, with excessive storage requirements to process all the elements of the transition probability matrices. In many cases, a probabilistic sensitivity analysis (PSA) will be required where the objective function strongly depends on the probabilistic structure of the SDDP model; however, the number of simulation runs available for evaluating the objective function will be constrained by the considerable computational time.
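A PSA of the kind mentioned above re-evaluates the objective function for many random draws of the uncertain parameters. The following is a minimal sketch with a hypothetical one-parameter model (the `net_benefit` function and the Beta distribution are illustrative assumptions, not the thesis model); the point is that the whole objective must be recomputed per draw, which is what makes PSA expensive for large SDDPs:

```python
import random

random.seed(0)

def net_benefit(p_response):
    # Placeholder objective: expected benefit given the sampled
    # probability that a patient responds to first-line therapy.
    return 10 * p_response + 2 * (1 - p_response)

# Sample the uncertain parameter many times and re-evaluate the
# objective for each draw.
draws = [random.betavariate(8, 2) for _ in range(5000)]
results = [net_benefit(p) for p in draws]

mean = sum(results) / len(results)
print(round(mean, 2))
```

In a real SDDP, each of the 5,000 evaluations would itself be a full model run over the health-state space, so the per-draw cost, not the number of draws, dominates.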

In computational complexity theory, such a dynamic and stochastic sequential decision problem is classified as a non-deterministic polynomial-time hard (NP-hard) decision problem, which is unlikely to admit a polynomial-time algorithm and hence may need exponential computation time to find the optimal solution in the worst case[122]. A polynomial-time algorithm is a fast algorithm that is guaranteed to solve the problem within a number of steps bounded by a polynomial function of the problem size n. NP-hard problems, by contrast, are believed to require time that grows as an exponential function of n; their execution times therefore grow much more rapidly as the problem size increases.
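The gap between polynomial and exponential growth can be seen by tabulating step counts for a cubic-time algorithm versus an exponential-time one over a few problem sizes:

```python
# Step counts for a polynomial-time algorithm (n^3) versus an
# exponential-time one (2^n), illustrating the worst-case gap.
for n in (10, 20, 30, 40):
    print(f"n={n:>2}  n^3={n**3:>8,}  2^n={2**n:>16,}")
```

At n=10 the two are comparable, but by n=40 the exponential count exceeds a trillion while the cubic count is still in the tens of thousands.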

For such NP-hard problems, classical mathematical programming is known to become inefficient and impractical. The key restrictions arise both in specifying the problem as a mathematical programme and in the solution procedure. Firstly, to use mathematical programming, the objective function and all the constraints of the given problem must be formulated so that they fit the optimisation method being used; however, sequential decision problems are likely to be too complicated to manipulate, or to express as explicit functions of the decision variables, because of their non-linear, non-additive and probabilistic elements. Secondly, classical mathematical programming is most useful for finding the optimum of continuous and differentiable functions, whereas most sequential decision problems involve objective functions that are not continuous and/or not differentiable. Lastly, mathematical programming formulations of combinatorial optimisation problems often involve large numbers of variables and constraints, because such problems have an enormous number of feasible solutions; this makes the direct enumeration approach infeasible.



As an alternative, approximate optimisation methods have been widely used in computer science, engineering, artificial intelligence, operational research, business management, bioinformatics, manufacturing, public transportation, the military, finance/investment and so on. Approximate optimisation methods, generally called heuristic methods in operational research, do not guarantee an optimal solution, but can be attractive because intelligent function approximation and search strategies provide ‘near-optimal’ solutions in a reasonable amount of time[62]. The reasons for exploring heuristic techniques are that 1) the development of the concept of computational complexity has provided a rational basis for using heuristics rather than pursuing optimality, and 2) there has been a significant increase in the power and efficiency of modern heuristic approaches[53]. Hence this thesis focuses on approximate optimisation methods in the following chapters.
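A minimal example of such a heuristic is local search (hill climbing) over drug sequences: start from a random ordering and repeatedly swap pairs of drugs while the objective improves, rather than enumerating all m! orderings. The objective here is a toy stand-in for the expected value of a sequence, and the drug labels are hypothetical; on this simple objective hill climbing happens to reach the global optimum, although in general it only guarantees a local one:

```python
import random

random.seed(1)
drugs = list(range(8))   # 8 hypothetical drug alternatives

def value(seq):
    # Toy objective standing in for the expected value of a sequence:
    # rewards placing higher-valued drugs in earlier treatment lines.
    return sum((len(seq) - i) * d for i, d in enumerate(seq))

def hill_climb(seq):
    """Pairwise-swap local search: accept any swap that improves value."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            for j in range(i + 1, len(seq)):
                cand = seq[:]
                cand[i], cand[j] = cand[j], cand[i]
                if value(cand) > value(seq):
                    seq, improved = cand, True
    return seq

start = drugs[:]
random.shuffle(start)
solution = hill_climb(start)
print(solution, value(solution))
```

Each improving pass examines only O(m²) neighbours instead of the m! full orderings, which is the trade-off the paragraph above describes: a near-optimal answer in a reasonable amount of time, without an optimality guarantee.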

