Welcome and Introductions
Learning Objectives
Fit a parametric mortality model to life table data to:
- Accurately simulate death times in a discrete event model.
- Summarize background mortality for a discrete time Markov model in a few parameters.
Construct a cause-deleted life table to separate cause-specific mortality from background mortality and model each explicitly.
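As a preview of the first objective, below is a minimal Python sketch of fitting a Gompertz hazard to life-table death probabilities and then drawing death times for a discrete event simulation. The ages, probabilities, and the Gompertz parameterization are illustrative assumptions, not workshop materials.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative (made-up) life-table ages and annual death probabilities q_x
ages = np.array([40, 50, 60, 70, 80, 90])
qx   = np.array([0.002, 0.004, 0.009, 0.022, 0.058, 0.150])

def gompertz_qx(log_params, x):
    """Annual death probability implied by a Gompertz hazard h(t) = a*exp(b*t)."""
    a, b = np.exp(log_params)                  # log scale keeps a, b > 0
    cum_haz = (a / b) * (np.exp(b * (x + 1)) - np.exp(b * x))
    return 1.0 - np.exp(-cum_haz)

def loss(log_params):
    """Simple least-squares objective; a fuller fit would weight by exposures."""
    return np.sum((gompertz_qx(log_params, ages) - qx) ** 2)

fit = minimize(loss, x0=np.log([1e-4, 0.08]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"Gompertz fit: a = {a_hat:.2e}, b = {b_hat:.3f}")

# With (a, b) in hand, ages at death can be drawn by inverse CDF for a discrete
# event simulation, or summarized as cycle probabilities for a Markov model.
u = np.random.default_rng(1).uniform(size=5)
t_death = (1 / b_hat) * np.log(1 - (b_hat / a_hat) * np.log(1 - u))
print(np.round(t_death, 1))
```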
Learning Objectives
- Accurately embed a transition probability matrix so that it accounts for “compound” (i.e., >1) transitions in a cycle.
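One standard way to handle this, sketched below in Python with illustrative numbers, is to take the matrix logarithm of the observed cycle transition matrix to recover a continuous-time rate (generator) matrix and then re-exponentiate it for the desired cycle length; the three-state matrix is made up for the example.

```python
import numpy as np
from scipy.linalg import expm, logm

# Illustrative annual transition probability matrix (Healthy, Sick, Dead)
P_annual = np.array([
    [0.85, 0.10, 0.05],
    [0.00, 0.70, 0.30],
    [0.00, 0.00, 1.00],
])

# Element-wise shortcuts (e.g., dividing probabilities by 12) ignore compound
# transitions such as Healthy -> Sick -> Dead within a single year.
# The matrix log recovers a generator that embeds those paths.
R = np.real(logm(P_annual))      # rate matrix with expm(R) == P_annual

P_monthly = expm(R / 12)         # properly embedded monthly matrix
print(np.round(P_monthly, 4))

# Sanity check: twelve monthly cycles reproduce the annual matrix
print(np.allclose(np.linalg.matrix_power(P_monthly, 12), P_annual))

# Caveat: for some matrices logm yields small negative off-diagonal rates,
# which need to be repaired before use.
```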
Learning Objectives
Include non-Markovian elements to capture:
- Total counts of events (accumulated across all cycles) or transitions (within a single cycle) into or out of certain states.
- Tunnel states to capture transitory health and/or cost dynamics.
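To make the tunnel-state idea concrete, here is a small Python sketch (all probabilities and costs are made up) that expands a post-event state into two tunnel states so that early-cycle costs and risks can differ from the long run:

```python
import numpy as np

# States: Healthy, Post1 (1st cycle after event), Post2 (2nd cycle), PostLR, Dead
p_event       = 0.10   # Healthy -> acute event
p_die_tunnel  = 0.08   # elevated death probability while in the tunnel
p_die_longrun = 0.02
p_die_healthy = 0.01

P = np.array([
    [1 - p_event - p_die_healthy, p_event, 0.0, 0.0, p_die_healthy],  # Healthy
    [0.0, 0.0, 1 - p_die_tunnel, 0.0, p_die_tunnel],                  # Post1 -> Post2
    [0.0, 0.0, 0.0, 1 - p_die_tunnel, p_die_tunnel],                  # Post2 -> PostLR
    [0.0, 0.0, 0.0, 1 - p_die_longrun, p_die_longrun],                # PostLR stays
    [0.0, 0.0, 0.0, 0.0, 1.0],                                        # Dead (absorbing)
])
assert np.allclose(P.sum(axis=1), 1.0)

# Cycle-specific costs attach naturally to the tunnel states
cost = np.array([0.0, 5000.0, 2000.0, 500.0, 0.0])

# Run the cohort forward and report expected cost per cycle
state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
for cycle in range(5):
    print(cycle, round(state @ cost, 1))
    state = state @ P
```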
Learning Objectives
- Backwards-convert an existing Markov model defined on the probability scale to accommodate new evidence, strategies, additional health states, different cycle lengths, etc.
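A building block for this kind of back-conversion, sketched below under the usual constant-rate-within-cycle assumption, is moving between the probability and rate scales:

```python
import numpy as np

def prob_to_rate(p, t=1.0):
    """Convert a transition probability over time t to a constant rate."""
    return -np.log(1.0 - p) / t

def rate_to_prob(r, t=1.0):
    """Convert a constant rate back to a probability over time t."""
    return 1.0 - np.exp(-r * t)

# Example: an annual probability of 0.20 re-expressed for a monthly cycle
p_annual = 0.20
r = prob_to_rate(p_annual, t=1.0)
p_monthly = rate_to_prob(r, t=1.0 / 12)
print(round(p_monthly, 5))                  # ~0.01842, not 0.20 / 12
print(round(1 - (1 - p_monthly) ** 12, 5))  # recovers 0.20

# Note: with competing transitions this single-transition shortcut is not exact;
# the matrix-log embedding shown earlier is the general fix.
```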
Learning Objectives
Solve for PSA distribution parameters given sparse information from the literature (e.g., IQR of costs of $300-$750).
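As one illustration of that objective, the sketch below solves for gamma parameters whose 25th and 75th percentiles hit $300 and $750; the choice of a gamma marginal and the search bracket are assumptions for the example.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma

# Reported: 25th percentile = $300, 75th percentile = $750
q25, q75 = 300.0, 750.0

def iqr_gap(shape):
    """Pick the scale that matches the 25th percentile for a given shape,
    then return the error in the 75th percentile."""
    scale = q25 / gamma.ppf(0.25, shape)
    return gamma.ppf(0.75, shape, scale=scale) - q75

# Solve for the shape whose quartile ratio matches 750/300, then back out scale
shape_hat = brentq(iqr_gap, 0.1, 100.0)
scale_hat = q25 / gamma.ppf(0.25, shape_hat)
print(f"shape = {shape_hat:.3f}, scale = {scale_hat:.1f}")

# Check the fitted quartiles and draw PSA samples
print(gamma.ppf([0.25, 0.75], shape_hat, scale=scale_hat))
draws = gamma.rvs(shape_hat, scale=scale_hat, size=1000, random_state=42)
```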
Improve the efficiency of PSA analyses by sampling correlated PSA distributions using copulas.
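A common way to do this is a Gaussian copula: draw correlated standard normals, push them through the normal CDF to get correlated uniforms, then apply each parameter's own inverse CDF. The marginals and the correlation below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, gamma, beta

rng = np.random.default_rng(2024)
n = 10_000

# Assumed rank correlation between a cost and a utility parameter
corr = np.array([[1.0, -0.5],
                 [-0.5, 1.0]])

# 1. Correlated standard normals
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n)

# 2. Transform to correlated uniforms (the Gaussian copula)
u = norm.cdf(z)

# 3. Apply each parameter's own marginal via the inverse CDF
cost_draws    = gamma.ppf(u[:, 0], a=2.4, scale=215.0)   # e.g., from the IQR fit above
utility_draws = beta.ppf(u[:, 1], a=20.0, b=5.0)

print(np.corrcoef(cost_draws, utility_draws)[0, 1])  # negative, near -0.5
```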
This Workshop is Packed
- Our primary aim is to provide you with intuition for why we use these methods.
- We also aim to provide you with code.
- We’ll move fast; don’t worry if everything doesn’t immediately resonate!
What They Taught You is (Technically) Wrong
Many of the methods commonly taught for CEA are shortcuts, or are technically correct only in narrow cases rather than in general.
This doesn’t mean everything published is totally wrong, however.
Because we often make comparisons across strategies, a lot of errors will (approximately) cancel out.
The Big Picture
- Decision threshold methods (e.g., ICER, NMB, NHB) all involve comparing a model run for a target strategy against a reference run of the same model.
- For example, the model outputs \(f_{cost}\) and \(f_{qaly}\) are each evaluated at \(\theta_{target}\) and at \(\theta_{ref}\).
- These runs carry error from model misspecification, but much of that error can cancel when we difference across strategies. Let \(g\) denote the truth, so \(f(\theta) = g(\theta) + \epsilon_{\theta}\).
The Hopeful Big Picture
\[\text{ICER}_{\text{true}} = \frac{\left[f_{cost}(\theta_{target}) - \epsilon_{ct}\right] - \left[f_{cost}(\theta_{ref}) - \epsilon_{cr}\right]}{\left[f_{qaly}(\theta_{target}) - \epsilon_{qt}\right] - \left[f_{qaly}(\theta_{ref}) - \epsilon_{qr}\right]}\]
- If \(\epsilon_{ct} \approx \epsilon_{cr}\) and \(\epsilon_{qt} \approx \epsilon_{qr}\), the error terms cancel and the ICER computed directly from the model runs approaches this true ICER.
- The decision threshold is robust in this case even when model run results are biased!
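A tiny made-up numeric illustration of this point: adding the same bias to both arms leaves the ICER untouched, and the comparison only breaks down when the errors differ by arm.

```python
# Made-up "true" discounted totals for a reference and a target strategy
true_cost = {"ref": 10_000.0, "target": 14_000.0}
true_qaly = {"ref": 6.00, "target": 6.40}

# Suppose misspecification inflates costs and shaves QALYs similarly in both arms
bias_cost, bias_qaly = 2_000.0, -0.30

def icer(cost, qaly):
    return (cost["target"] - cost["ref"]) / (qaly["target"] - qaly["ref"])

model_cost = {k: v + bias_cost for k, v in true_cost.items()}
model_qaly = {k: v + bias_qaly for k, v in true_qaly.items()}

print(icer(true_cost, true_qaly))    # 10000.0 per QALY (truth)
print(icer(model_cost, model_qaly))  # identical: the shared errors cancel
# Cancellation fails when the bias differs by arm (e.g., an error in a transition
# that only the target strategy reaches) - that is when it becomes decision-relevant.
```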
The Hopeful Big Picture
- A similar theme will recur throughout the day.
- We'll aim to highlight when these issues may be decision-relevant (i.e., when errors may not cancel).