Sequential drug decision problems in long-term medical conditions: a case study of primary hypertension
Eunju Kim, BA, MA, MSc




2.2.2 Markov models


Markov models are partially cyclic directed graphs, which describe the transition of a homogeneous cohort of patients from the current state to the next state over time according to specified transition probabilities[71]. In a cohort simulation based on a Markov state-transition model, a proportion of the patients in each state transfers to another state every cycle, depending only on the current health state. The model runs until all patients in the cohort have entered the absorbing state or until the time horizon assumed in the model is reached. The results are presented in a chart showing what proportion, or cumulative proportion, of the cohort occupies each state at a given time. The costs and effectiveness incurred by patients across the different health states are summed every cycle, and the final outcome is the average cost and effectiveness per patient in the original cohort over the follow-up period. To provide probabilistic results, Monte Carlo simulation can be incorporated into the model.
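The cycle-by-cycle mechanics described above can be sketched in a few lines of code. The following is an illustrative example only, not the model from this thesis: the three states (Well, Sick, Dead), the transition probabilities, and the per-cycle costs and utility weights are all hypothetical numbers chosen for demonstration.

```python
import numpy as np

# Hypothetical 3-state cohort model: Well -> Sick -> Dead (absorbing).
# All probabilities, costs and utilities are illustrative placeholders.
P = np.array([
    [0.90, 0.08, 0.02],   # from Well
    [0.00, 0.85, 0.15],   # from Sick
    [0.00, 0.00, 1.00],   # Dead is the absorbing state
])
cost = np.array([100.0, 1000.0, 0.0])    # cost incurred per cycle in each state
utility = np.array([0.95, 0.60, 0.0])    # QALY weight per cycle in each state

cohort = np.array([1.0, 0.0, 0.0])       # the whole cohort starts in Well
total_cost = 0.0
total_qaly = 0.0
for cycle in range(50):                  # fixed time horizon of 50 cycles
    total_cost += cohort @ cost          # accumulate costs across states
    total_qaly += cohort @ utility       # accumulate effectiveness
    cohort = cohort @ P                  # redistribute the cohort for the next cycle

print(f"average cost per patient: {total_cost:.1f}, QALYs: {total_qaly:.2f}")
```

Because the cohort vector always sums to one, `total_cost` and `total_qaly` are directly the per-patient averages the text refers to; a probabilistic analysis would wrap this loop in a Monte Carlo layer that redraws `P`, `cost` and `utility` from their distributions on each run.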

Under the Markovian assumption, the probability of transition to a subsequent state depends solely on the current state of the cohort; no information about the past is used to determine the next transition. Some researchers have tried to relax this assumption by using time-dependent (or age- or time-in-state-dependent) transition matrices[72, 73]. The structure of these models is similar to that of the standard Markov model, but the transition probabilities can vary with time (or age, or time in state) or with the level of patients' risk factors. Such models have been referred to as semi-Markov models[65, 66, 74].
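A time-dependent transition matrix of the kind described above can be sketched as a function of the cycle number. This is a hedged illustration, not a model from the cited studies: the state names and the assumed growth of mortality with each cycle are hypothetical.

```python
import numpy as np

def transition_matrix(cycle: int) -> np.ndarray:
    """Cycle-dependent transition matrix for a hypothetical 3-state model
    (Well, Sick, Dead). Mortality rises with each cycle, which a standard
    time-homogeneous Markov model cannot express."""
    p_die_well = min(0.5, 0.02 * 1.03 ** cycle)   # assumed 3% growth per cycle
    p_die_sick = min(0.9, 0.15 * 1.03 ** cycle)
    return np.array([
        [1.0 - 0.08 - p_die_well, 0.08, p_die_well],  # from Well
        [0.0, 1.0 - p_die_sick, p_die_sick],          # from Sick
        [0.0, 0.0, 1.0],                              # Dead is absorbing
    ])

# The cohort loop is unchanged from the standard model; only the matrix
# passed in each cycle differs.
cohort = np.array([1.0, 0.0, 0.0])
for t in range(50):
    cohort = cohort @ transition_matrix(t)
```

Time-in-state dependence would require one further step: expanding each state into "tunnel" states indexed by how long the patient has occupied it, which is where the dimensionality problems noted below begin to bite.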

Compared to the successive decision tree, the Markov model has a simpler structure because the time dimension is not explicitly shown (see Figure 2.). If an SDDP broadly satisfies the Markovian assumption (i.e., previous drug use or health states have no, or only limited, impact on the drug decision and health-state transition at the current time), then a Markov model may be an appropriate basis for the evaluation model. In practice, the memoryless assumption is restrictive for most real SDDPs, because previous drug use or health states may have a substantial impact on subsequent drug decisions and health-state transitions. Either a suitable semi-Markov model or a more flexible modelling method, such as an IBM, therefore needs to be considered. Note that the ability of semi-Markov models to represent complicated systems, where more detailed population heterogeneity needs to be considered, may still be restricted by the curse of dimensionality as a result of the fundamental choice of a cohort model structure[64].

Figure 2. An example Markov model of SDDPs




