Learning Dynamic Treatment Regimes (DTR) via meta-learners. The package can learn from studies with multi-stage interventions as well as more than two treatment arms at each stage.
The package supports learning from multi-stage, multi-armed randomized experiments or observational studies, and then recommends a personalized sequence of treatments/interventions. Meta-learners are adopted in the training stages: the S- and T-learner come from the Q-learning framework, and the deC-learner (proposed by the author, not yet published) comes from the A-learning camp.
Please find the method details in the author's post.
To install the package:
devtools::install_github("junyzhou10/metaDTR")
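Then load the package:
library(metaDTR)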
The package currently supports the S-, T-, and deC-learner as meta-learners. Random forest (RF), XGBoost, BART (Chipman et al., 2010), and GAM (Hastie, 2017) are available as base learners. A code example can be found in the appendix of the author's post.
The X-learner will not be included because it does not extend straightforwardly to multi-armed cases and is not suitable for outcome types other than continuous. Details of the de-centralized learner (deC-learner) will be available after publication.
Also, only continuous outcomes are supported at this point. Incorporating binary endpoints, with the log odds ratio as the causal estimand, may be the next step of this work.
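For orientation, here is a minimal sketch of the intended workflow on toy data. Only the package itself is taken from this README: the function names learnDTR()/recommendDTR() (suggested by the continuous-treatment variants mentioned further down) and the argument names X, A, Y, X.new, baseLearner, and metaLearners are assumptions made for illustration, so please check the package help pages and the author's post for the actual interface.

```r
library(metaDTR)

## Toy single-stage, three-armed data with a continuous outcome
## (the only outcome type supported at this point).
set.seed(1)
n <- 300
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))     # baseline covariates
A <- sample(c("a", "b", "c"), n, replace = TRUE)  # observed treatment arms
Y <- rnorm(n) + (A == "b") * X$x1                 # continuous outcome

## Hypothetical call: learn the DTR with, say, a T-learner on top of BART.
## Argument names here are assumptions; see ?learnDTR for the real interface.
fit <- learnDTR(X = X, A = A, Y = Y,
                baseLearner  = "BART",
                metaLearners = "T")

## Hypothetical call: recommend personalized treatments for new subjects.
X.new <- data.frame(x1 = rnorm(5), x2 = rnorm(5))
rec   <- recommendDTR(fit, X.new = X.new)
```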
- Allow continuous treatment/action values. This is not wrapped into the main functions yet: please use learnDTR.cont() for learning from data with continuous treatments/actions and recommendDTR.cont() for recommendations on a new dataset (a sketch is given after this list).
- Add XGBoost as a base learner.
- Add random forest as a base learner. Note that RF is recommended only when the sample size is large enough or when computational efficiency is a priority; otherwise, BART is the more desirable base learner.
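Below is a hypothetical sketch for the continuous-treatment functions referenced in the first item above. The function names learnDTR.cont() and recommendDTR.cont() come from this README; the argument names are assumptions, so see the corresponding help pages for the actual signatures.

```r
## Continuing the toy data (X, Y, X.new) from the sketch above.
## Hypothetical call pattern for continuous treatments/actions;
## argument names are assumptions, see ?learnDTR.cont / ?recommendDTR.cont.
A.dose   <- runif(nrow(X), min = 0, max = 2)             # e.g., a continuous dose
fit.cont <- learnDTR.cont(X = X, A = A.dose, Y = Y)      # learn from continuous actions
rec.cont <- recommendDTR.cont(fit.cont, X.new = X.new)   # recommended doses for new data
```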