Cost effectiveness modelling

Cost-effectiveness models inform economic decisions by computing a ratio of the benefit, value or output achieved to the cost incurred to achieve it. In healthcare, cost-effectiveness analysis usually selects the quality-adjusted life year (QALY) as a generic measure of health benefit, although other health outcomes such as life years (LY), hospitalisations averted or length of stay (LoS) can be used.

Cost-effectiveness modelling involves calculating an incremental cost-effectiveness ratio (ICER) that compares added benefit to added cost. This ratio is then compared to a threshold value that varies between countries. The threshold often reflects the opportunity cost of generating one QALY in a given healthcare system. [1]
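As a minimal illustration, the incremental cost-effectiveness ratio is the difference in costs divided by the difference in QALYs between two strategies. The figures below are hypothetical:

```python
# Minimal ICER sketch; all figures are hypothetical.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per QALY gained, moving from the old to the new strategy."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical: the new therapy costs 12,000 more and yields 0.4 additional QALYs.
print(icer(cost_new=50_000, cost_old=38_000, qaly_new=6.4, qaly_old=6.0))  # 30000.0
```

The resulting 30,000 per QALY gained would then be compared against the decision-maker's local threshold.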

Organisations such as the National Institute for Health and Care Excellence (NICE) in the United Kingdom, Zorginstituut Nederland in the Netherlands, the nonprofit Institute for Clinical and Economic Review in the United States, PHARMAC in New Zealand, and other international agencies review cost-effectiveness models in healthcare. Their reviews have demonstrated that cost-effectiveness analysis can help inform appropriate pricing. [2,3,4,5]

Cost-effectiveness modelling requires rigorous selection of the health states included in the model structure. Usually this involves simulating the number of hypothetical patients in disease-specific states (no disease, acute disease, chronic disease) and generic states (alive, or an absorbing state such as death) of interest over time. Specialised microsimulation software packages exist to perform such analyses.

Cost-effectiveness model types:

Markov cohort model

Involves following a cohort of patients, or a single patient, over a lifetime and capturing the disease-related outcomes of interest. The cohort can be followed over shorter time horizons, provided these are long enough to capture all costs and benefits that follow the introduction of a healthcare intervention. At the end of the simulation all patients in the cohort must end up in an “absorbing” state.
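The cohort mechanics can be sketched with a small transition matrix. The sketch below uses three hypothetical states (Healthy, Sick, Dead) with illustrative probabilities, costs and utilities not drawn from any real evaluation:

```python
import numpy as np

# Hypothetical 3-state Markov cohort: Healthy -> Sick -> Dead (absorbing).
# All probabilities, costs and utilities are illustrative only.
P = np.array([
    [0.85, 0.10, 0.05],  # from Healthy
    [0.00, 0.70, 0.30],  # from Sick
    [0.00, 0.00, 1.00],  # Dead is absorbing
])
cost = np.array([500.0, 5000.0, 0.0])   # cost per cycle in each state
utility = np.array([0.90, 0.60, 0.0])   # QALY weight per cycle (1 cycle = 1 year)

state = np.array([1.0, 0.0, 0.0])       # the whole cohort starts Healthy
total_cost = 0.0
total_qalys = 0.0
for cycle in range(100):                # ~lifetime horizon
    total_cost += state @ cost          # accumulate expected cost this cycle
    total_qalys += state @ utility      # accumulate expected QALYs this cycle
    state = state @ P                   # advance the cohort one cycle

# By the end of the run, (almost) the entire cohort is in the absorbing state.
print(round(total_cost), round(total_qalys, 2), round(float(state[2]), 4))
```

Running the same loop with a second transition matrix (e.g. one reflecting a treatment that slows progression) yields the incremental costs and QALYs needed for an ICER.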

Steady state model (population-based)

In contrast to cohort modelling, a population-based approach does not apply all-cause mortality rates to a closed cohort; instead, it calculates the number of patients in each age group from a cross-sectional snapshot of the current demographic profile. [6]

Uncertainty analysis

Uncertainty analysis is an essential part of checking the robustness of model results.

Parametric Uncertainty:

One-way sensitivity analysis (OWSA) – evaluates the impact of each individual input parameter on the outcomes of interest. The results are displayed as a “tornado” diagram, presenting the most impactful parameters at the top and the least impactful at the bottom.
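The data behind a tornado diagram can be generated as below. The toy model, parameter names and ranges are hypothetical; each parameter is varied to its low and high value while all others stay at base case, and parameters are sorted by the width of the resulting ICER range:

```python
# Hypothetical OWSA producing data for a tornado diagram.
def model_icer(p):
    """Toy model: ICER = incremental cost / incremental QALYs."""
    inc_cost = p["drug_price"] - p["offset_savings"]
    inc_qalys = p["effect_size"] * p["duration_years"]
    return inc_cost / inc_qalys

base = {"drug_price": 20000, "offset_savings": 5000,
        "effect_size": 0.10, "duration_years": 5}
ranges = {"drug_price": (15000, 25000), "offset_savings": (2000, 8000),
          "effect_size": (0.05, 0.15), "duration_years": (3, 7)}

rows = []
for name, (lo, hi) in ranges.items():
    low = model_icer({**base, name: lo})    # vary one parameter, hold others at base
    high = model_icer({**base, name: hi})
    rows.append((name, low, high, abs(high - low)))

# Sort by spread: the widest bar sits at the top of the tornado.
for name, low, high, spread in sorted(rows, key=lambda r: -r[3]):
    print(f"{name:15s} ICER range {low:9.0f} .. {high:9.0f}")
```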

Multi-way sensitivity analysis – evaluates the impact of varying two or more parameters at the same time, for example the efficacy of a treatment strategy and the discount rate used to calculate the present value of future outcomes.
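A two-way analysis can be tabulated as a grid over both parameters at once. The incremental cost, QALY gains and discount rates below are illustrative:

```python
# Hypothetical two-way sensitivity analysis: treatment efficacy x discount rate.
def discounted_qalys(annual_gain, years, rate):
    """Present value of a constant annual QALY gain over a fixed horizon."""
    return sum(annual_gain / (1 + rate) ** t for t in range(1, years + 1))

inc_cost = 15000  # illustrative incremental cost
for efficacy in (0.05, 0.10, 0.15):        # annual QALY gain per patient
    for rate in (0.0, 0.035, 0.06):        # candidate discount rates
        icer = inc_cost / discounted_qalys(efficacy, 10, rate)
        print(f"efficacy={efficacy:.2f} rate={rate:.3f} ICER={icer:8.0f}")
```

Higher discount rates shrink the present value of future QALYs, so the ICER rises as the rate increases.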

Scenario analysis – evaluates the impact of a specific combination of input parameters (cost, effect, patient sub-group) that reflects one or several scenarios, and estimates the incremental cost-effectiveness ratio (ICER) under each scenario.

Probabilistic sensitivity analysis (PSA) – evaluates the impact of changing all input parameters at the same time. Each input parameter is assigned a statistical distribution (normal, log-normal, beta, gamma, triangular). PSA is run using Monte Carlo simulation: sampling from a distribution around each mean input value creates a set of probabilistic inputs and therefore a probabilistic result. A PSA run usually involves 1,000 or more iterations; plotting the resulting cost and QALY differences on the cost-effectiveness plane creates a “cloud of uncertainty” around the deterministic estimate of cost per QALY.
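The sampling loop at the core of a PSA can be sketched as follows. The distributions and their means are illustrative only: incremental costs are drawn from a gamma distribution and incremental QALYs from a beta distribution, following the common convention of matching each distribution to the parameter's support:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical PSA: sample incremental costs and QALYs 1,000 times.
N = 1000
results = []
for _ in range(N):
    inc_cost = random.gammavariate(alpha=16, beta=1000)   # mean ~16,000, cost >= 0
    inc_qalys = random.betavariate(alpha=40, beta=60)     # mean 0.4, bounded in (0, 1)
    results.append((inc_cost, inc_qalys))

# Probability of being cost-effective at a 30,000-per-QALY threshold,
# i.e. the share of iterations with positive net monetary benefit.
threshold = 30000
p_ce = sum(1 for c, q in results if threshold * q - c > 0) / N
mean_icer = sum(c for c, _ in results) / sum(q for _, q in results)
print(f"mean ICER ~ {mean_icer:.0f}, P(cost-effective at {threshold}) = {p_ce:.2f}")
```

Plotting each `(inc_qalys, inc_cost)` pair on the cost-effectiveness plane would produce the “cloud of uncertainty” described above.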

Structural Uncertainty:

Involves evaluating the impact of changes in model structure, for example the addition or deletion of health states, or a change of modelling methodology (static vs. dynamic, population-based vs. cohort-based).

Frequently Asked Questions

How is the time horizon selected?

The choice of time horizon is an important decision for economic modelling, and depends on the nature of the disease and intervention under consideration and the purpose of the analysis. Longer time horizons are applicable to chronic conditions associated with on-going medical management, rather than a cure. A shorter time horizon may be appropriate for some acute conditions, for which long-term consequences are less important. The time horizon for estimating clinical and cost effectiveness should be sufficiently long to reflect all important differences in costs or outcomes between the technologies being compared. A lifetime time horizon is required when alternative technologies lead to differences in survival or benefits that persist for the remainder of a person's life.

How are the health states selected?

The structure of the model should be as simple as possible while still corresponding to the problem at hand and to generally accepted knowledge about modelling the disease, and it should reflect the cause-and-effect relationships between variables. A lack of data does not always justify removing certain states or simplifying the model.

What types of costs should be included?

The selection of costs to include depends strongly on the perspective of the analysis. There is consensus that all direct healthcare costs should be included in the main analysis. It is also recommended to present costs borne by other sectors of society, e.g. indirect costs, in an additional analysis when relevant.

How is the model validation done?

The following verification and validation exercises should be explored:

  • Face validity: Do the model structure, its assumptions, input parameter values and distributions, and output values and conclusions make sense, and can they be explained at an intuitive level?
  • Internal validity (technical verification): Has the model been implemented correctly?
  • Cross-model validation: Does the model produce results similar to those of independently developed models aimed at estimating the same outcomes?
  • External validation: Do the model's outputs match actual outcomes from external sources that were not used to build the model?