# metaDTR

Learning Dynamic Treatment Regimes (DTRs) via meta-learners. The package can learn from studies with multi-stage interventions as well as from more than two treatment arms at each intervention.

## Method

The package supports learning from multi-stage and multi-armed randomized experiments or observational studies, and then recommends a personalized sequence of treatments/interventions. Meta-learners are adopted in the training stages: the S- and T-learners come from the Q-learning framework, and the deC-learner (proposed by the author, not yet published) comes from the A-learning camp.
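For context, Q-learning estimates stage-wise value (Q-) functions and recommends treatments by backward induction; the S-learner fits a single model with the treatment as a covariate, while the T-learner fits one model per arm. A generic sketch of the recursion (the notation here is standard, not necessarily the package's own):

$$
Q_T(h_T, a_T) = \mathbb{E}\big[\,Y \mid H_T = h_T, A_T = a_T\,\big], \qquad
Q_t(h_t, a_t) = \mathbb{E}\Big[\max_{a} Q_{t+1}(H_{t+1}, a) \;\Big|\; H_t = h_t, A_t = a_t\Big],
$$

with the recommended rule $d_t(h_t) = \arg\max_a Q_t(h_t, a)$ at each stage $t$.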

Please find method details in the author's post.

## Usage

To install the package:

```r
devtools::install_github("junyzhou10/metaDTR")
```

The package currently supports the S-, T-, and deC-learners. So far, random forest, XGBoost, BART (Chipman 2010), and GAM (Hastie 2017) are available as base learners. A full code example is provided in the appendix of the author's post; a minimal sketch follows below.
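The sketch below is illustrative only, not the package's documented interface: the function names `learnDTR()` and `recommendDTR()` are inferred from the `.cont` variants mentioned in the updates below, and the argument names (`X`, `A`, `Y`, `baseLearner`, `X.new`) are assumptions. Please consult the author's post for the exact calls.

```r
library(metaDTR)

# Hypothetical inputs for a multi-stage study (argument names are assumptions):
#   X: list of stage-wise covariate matrices
#   A: list of stage-wise treatment assignments (possibly multi-armed)
#   Y: continuous outcome(s)
fit <- learnDTR(X = X, A = A, Y = Y, baseLearner = "BART")

# Recommend a personalized treatment sequence for new subjects:
rec <- recommendDTR(fit, X.new = X.new)
```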

## Limitations

So far, the supported base learners are BART, random forest (RF), XGBoost, and GAM. For meta-learners, the S-, T-, and deC-learners are supported. The X-learner will not be included because it is not straightforward in multi-armed cases and is not suitable for outcome types other than continuous. Details of the de-centralized learner (deC-learner) will be available after publication.

Also, only continuous outcomes are allowed at this point. Incorporating binary endpoints, with the log odds ratio as the causal estimand, can be the next step of this work.

### Update: 04/01/2023

- Allow continuous treatment/action values. These are not wrapped into the main functions yet; please use `learnDTR.cont()` for learning from data with continuous treatments and `recommendDTR.cont()` for recommendations given a new dataset (see the sketch below).
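A brief sketch of the continuous-treatment workflow; the function names come from the bullet above, while the argument names (`X`, `A`, `Y`, `X.new`) are again illustrative assumptions:

```r
# A here holds continuous doses rather than discrete arms (assumed interface):
fit.cont <- learnDTR.cont(X = X, A = A, Y = Y)
rec.cont <- recommendDTR.cont(fit.cont, X.new = X.new)
```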

### Update: 03/05/2023

- Add XGBoost as a base learner.

### Update: 01/23/2023

- Add random forest as a base learner. Note that RF is suggested only when the sample size is large enough or when pursuing computational efficiency; otherwise, BART is the more desirable base learner.