CIMTx: An R Package for Causal Inference with Multiple Treatments using Observational Data

Abstract:

CIMTx provides efficient and unified functions to implement modern methods for causal inference with multiple treatments using observational data, with a focus on binary outcomes. The methods include regression adjustment, inverse probability of treatment weighting, Bayesian additive regression trees, regression adjustment with multivariate spline of the generalized propensity score, vector matching and targeted maximum likelihood estimation. In addition, CIMTx illustrates ways in which users can simulate data adhering to the complex data structures of the multiple treatment setting. Furthermore, the CIMTx package offers a unique set of features to address the key causal assumptions: positivity and ignorability. For the positivity assumption, CIMTx demonstrates techniques to identify the common support region for retaining inferential units using inverse probability of treatment weighting, Bayesian additive regression trees and vector matching. To handle the ignorability assumption, CIMTx provides a flexible Monte Carlo sensitivity analysis approach to evaluate how causal conclusions would be altered in response to different magnitudes of departure from ignorable treatment assignment.


Published

Dec. 20, 2022

Received

Dec 6, 2021

Citation

Hu & Ji, 2022

Volume

14/3

Pages

213 - 230


1 Introduction

Modern comparative effectiveness research (CER) questions often require comparing the effectiveness of multiple treatments on a binary outcome. To answer these CER questions, specialized causal inference methods are needed. Methods appropriate for drawing causal inferences about multiple treatments include regression adjustment (RA), inverse probability of treatment weighting (IPTW), Bayesian additive regression trees (BART), regression adjustment with multivariate spline of the generalized propensity score (RAMS), vector matching (VM) and targeted maximum likelihood estimation (TMLE). Drawing causal inferences using observational data, however, inevitably requires assumptions. A key causal identification assumption is the positivity or sufficient overlap assumption, which implies that there are no values of the pre-treatment covariates that could occur only among units receiving one of the treatments. Another key assumption requires appropriately conditioning on all pre-treatment variables that predict both treatment and outcome. These pre-treatment variables are known as confounders, and this requirement is referred to as the ignorability assumption (also known as no unmeasured confounding). An important strategy for handling the positivity assumption is to identify a common support region for retaining inferential units. The ignorability assumption can be violated in observational studies and, as a result, can lead to biased treatment effect estimates. One widely recognized way to address such concerns is sensitivity analysis.

The CIMTx package provides a suite of functions to easily implement the causal estimation methods, many of which were recently developed. In addition, CIMTx provides strategies to define a common support region to address the positivity assumption using IPTW, BART and VM, and implements a flexible Monte Carlo sensitivity analysis approach for unmeasured confounding to address the ignorability assumption. Finally, CIMTx offers detailed examples of how to simulate data adhering to the complex structures of the multiple treatment setting. The simulated data can then be used by an analyst to compare the performance of different causal estimation methods. Table 1 summarizes key functionalities of CIMTx in comparison to recent R packages designed for causal inference with multiple treatments using observational data. CIMTx provides a comprehensive set of functionalities: from simulating data to estimating the causal effects to addressing causal assumptions and elucidating their ramifications. To assist applied researchers and practitioners who work with observational data and wish to draw inferences about the effects of multiple treatments, this article provides a comprehensive illustration of the CIMTx package.

Table 1: Comparisons of R packages for causal inference with multiple treatments.

R package   Continuous  Binary   Sensitivity  Identification of  Design   Estimation procedure
            outcome     outcome  analysis     common support     factors
CIMTx                   ✓        ✓            ✓                  ✓        RA, IPTW-SL, IPTW-Multinomial, IPTW-GBM, VM, BART, RAMS, TMLE
PSweight    ✓           ✓                                                 OW, IPTW-SL, IPTW-Multinomial, IPTW-GBM
twang       ✓           ✓                                                 IPTW-GBM
WeightIt    ✓           ✓                                                 CBPS, IPTW-SL, IPTW-Multinomial, IPTW-GBM, EBCW, IPTW-TSBW
CBPS        ✓           ✓                                                 CBPS
optweight   ✓           ✓                                                 IPTW-TSBW

Note: RA = regression adjustment; IPTW = inverse probability of treatment weighting; SL = super learner; GBM = generalized boosted models; VM = vector matching; BART = Bayesian additive regression trees; RAMS = regression adjustment with multivariate spline of the GPS; TMLE = targeted maximum likelihood estimation; OW = overlap weighting; CBPS = covariate balancing propensity score; EBCW = empirical balancing calibration weighting; TSBW = targeted stable balancing weights.

2 Design factors for data simulation

CIMTx provides specific functions to simulate data possessing complex data characteristics of the multiple treatment setting. Seven design factors are considered: (1) sample size, (2) ratio of units across treatment groups, (3) whether the treatment assignment model and the outcome generating model are linear or nonlinear, (4) whether the covariates that best predict the treatment also predict the outcome well, (5) whether the response surfaces are parallel across treatment groups, (6) outcome prevalence, and (7) degree of covariate overlap.

Design factors (1)–(5)

For the data generating process of treatment assignment, consider a multinomial logistic regression model,

$$\ln\frac{P(W=1)}{P(W=T)} = \delta_1 + X\xi_1^{\mathrm{L}} + Q\xi_1^{\mathrm{NL}}, \quad \ldots, \quad \ln\frac{P(W=T-1)}{P(W=T)} = \delta_{T-1} + X\xi_{T-1}^{\mathrm{L}} + Q\xi_{T-1}^{\mathrm{NL}}, \tag{1}$$

where $Q$ denotes the nonlinear transformations and higher-order terms of the predictors $X$. $\xi_1^{\mathrm{L}}, \ldots, \xi_{T-1}^{\mathrm{L}}$ are vectors of coefficients for the untransformed versions of the predictors $X$, and $\xi_1^{\mathrm{NL}}, \ldots, \xi_{T-1}^{\mathrm{NL}}$ for the transformed versions of the predictors captured in $Q$. The intercepts $\delta_1, \ldots, \delta_{T-1}$ can be specified to create the corresponding ratio of units across the $T$ treatment groups. The $T$ sets of potential response surfaces can be generated as follows:

$$E[Y(1) \mid X] = \operatorname{logit}^{-1}\{\tau_1 + X\gamma_1^{\mathrm{L}} + Q\gamma_1^{\mathrm{NL}}\}, \quad \ldots, \quad E[Y(T) \mid X] = \operatorname{logit}^{-1}\{\tau_T + X\gamma_T^{\mathrm{L}} + Q\gamma_T^{\mathrm{NL}}\}, \tag{2}$$

where the coefficient setting $\gamma_1^{\mathrm{L}} = \cdots = \gamma_T^{\mathrm{L}}$, $\gamma_1^{\mathrm{NL}} = \cdots = \gamma_T^{\mathrm{NL}}$ and $\tau_1 \neq \cdots \neq \tau_T$ corresponds to parallel response surfaces, while assigning different values to $\gamma_w^{\mathrm{L}}$ and $\gamma_w^{\mathrm{NL}}$ across $w$ and setting $\tau_1 = \cdots = \tau_T = 0$ generates nonparallel response surfaces, which imply treatment effect heterogeneity. Note that the predictors $X$ and the transformed versions of the predictors $Q$ in the treatment assignment model (1) can be different than those in the outcome generating model (2), to create various degrees of alignment. The observed outcomes are related to the potential outcomes through $Y_i = \sum_{w \in \mathcal{W}} \mathbb{1}(W_i = w)\, Y_i(w)$. Covariates $X$ can be generated from user-specified data distributions.
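
For concreteness, with $T = 3$ treatments, model (1) implies the true GPS that underlie the overlap plots shown later (e.g., Figure 1):

$$P(W=1 \mid X) = \frac{e^{L_1}}{1 + e^{L_1} + e^{L_2}}, \quad P(W=2 \mid X) = \frac{e^{L_2}}{1 + e^{L_1} + e^{L_2}}, \quad P(W=3 \mid X) = \frac{1}{1 + e^{L_1} + e^{L_2}},$$

where $L_k = \delta_k + X\xi_k^{\mathrm{L}} + Q\xi_k^{\mathrm{NL}}$ for $k = 1, 2$.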

Outcome prevalence

Values for the parameters $\tau_1, \ldots, \tau_T$ in model (2) can be chosen to create various outcome prevalence rates. The outcomes are considered rare if the prevalence rate is below 5%.
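
As a quick back-of-the-envelope illustration (ours, not package output): with parallel response surfaces, the intercepts shift the outcome prevalence on the logit scale, so strongly negative values of tau yield rare outcomes. For example, at $X = 0$ and $Q = 0$:

plogis(c(-3.5, -1.5, 0))  # prevalence implied by three intercept choices
#> [1] 0.029 0.182 0.500   (rounded)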

Covariate overlap

With observational data, it is important to investigate how the sparsity of covariate overlap impacts the estimation of causal effects. We can modify the formulation of the treatment assignment model (1) to adjust the sparsity of overlap by including a multiplier parameter $\psi$ as follows:

$$\ln\frac{P(W=1)}{P(W=T)} = \delta_1 + X\psi\xi_1^{\mathrm{L}} + Q\psi\xi_1^{\mathrm{NL}}, \quad \ldots, \quad \ln\frac{P(W=T-1)}{P(W=T)} = \delta_{T-1} + X\psi\xi_{T-1}^{\mathrm{L}} + Q\psi\xi_{T-1}^{\mathrm{NL}}, \tag{3}$$

where larger values of $\psi$ correspond to sparser covariate overlap.

Implementation in CIMTx

We now demonstrate the data_sim() function in CIMTx, which simulates data in the multiple treatment setting using the seven design factors above. We first simulate a dataset with the following characteristics: (1) sample size = 500; (2) ratio of units = 1:1:1 across three treatment groups; (3) nonlinear treatment assignment and outcome generating models; (4) different predictors for the treatment assignment and outcome generating mechanisms; (5) parallel response surfaces; (6) outcome prevalence = (0.16, 0.51, 0.75) in the three treatment groups, with an overall rate around 0.5; and (7) moderate covariate overlap. Note that for design factor (6), we can adjust tau to generate rare outcome events.

The outputs of the simulated data object are: (1) data$covariates for X, (2) data$w for treatment indicators, (3) data$y for observed binary outcomes, (4) data$y_prev for the outcome prevalence rates, (5) data$ratio_of_units for the proportions of units in each treatment group, (6) data$overlap_fig for the visualization of covariate overlap via boxplots of the distributions of true generalized propensity score (GPS).

library(CIMTx)
set.seed(1)
data <- data_sim(
  sample_size = 500, n_trt = 3,
  x = c("rnorm(0, 0.5)",  # x1
        "rbeta(2, .4)",   # x2
        "runif(0, 0.5)",  # x3
        "rweibull(1, 2)", # x4
        "rbinom(1, .4)"),   # x5
  # linear terms in parallel response surfaces
  lp_y = rep(".2*x1 + .3*x2 - .1*x3 - .1*x4 - .2*x5", 3), 
  # nonlinear terms in parallel response surfaces
  nlp_y  = rep(".7*x1*x1  - .1*x2*x3", 3), 
  align = F,# different predictors used in treatment and outcome models
  # linear terms in treatment assignment model
  lp_w = c(".4*x1 + .1*x2  - .1*x4 + .1*x5",   # w = 1
           ".2*x1 + .2*x2  - .2*x4 - .3*x5"),  # w = 2
  # nonlinear terms in treatment assignment model
  nlp_w = c("-.5*x1*x4  - .1*x2*x5", # w = 1
            "-.3*x1*x4 + .2*x2*x5"), # w = 2
  tau = c(-1.5, 0, 1.5), delta = c(0.5, 0.5), psi = 1)

In this simulated dataset, the ratio of units (data$ratio_of_units) and outcome prevalences (data$y_prev) are:

#> w
#>    1    2    3 
#> 0.35 0.35 0.30 
#>         w y_prev
#> 1       1   0.16
#> 2       2   0.51
#> 3       3   0.75
#> 4 Overall   0.46
Figure 1: Moderate overlap with psi = 1. Each panel presents boxplots by treatment group of the true generalized propensity score for one of the treatments, P(Wi=w|X=x) for every unit in the sample. The left-hand panel presents treatment 1 (W=1), the middle panel presents treatment 2 (W=2), and the right-hand panel presents treatment 3 (W=3).

Figure 1 (data$overlap_fig) shows the distributions of the true GPS for each treatment group, suggesting moderate covariate overlap. We can change the structure of the simulated data by modifying the arguments of the data_sim() function. For example, setting delta = c(1.5, 0.5) yields unequal sample sizes across treatment groups, with a ratio of units of roughly .6:.2:.2. Assigning smaller values to psi increases overlap: psi = 0.1 corresponds to strong covariate overlap, as shown in Figure 2.
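
For instance, both variants can be generated by rerunning the earlier call with only delta and psi changed; a small wrapper (our own convenience function, not part of CIMTx) keeps the remaining arguments fixed:

# wrap the earlier data_sim() call so that only delta and psi vary
sim_variant <- function(delta, psi) {
  data_sim(sample_size = 500, n_trt = 3,
           x = c("rnorm(0, 0.5)", "rbeta(2, .4)", "runif(0, 0.5)",
                 "rweibull(1, 2)", "rbinom(1, .4)"),
           lp_y = rep(".2*x1 + .3*x2 - .1*x3 - .1*x4 - .2*x5", 3),
           nlp_y = rep(".7*x1*x1 - .1*x2*x3", 3),
           align = FALSE,
           lp_w = c(".4*x1 + .1*x2 - .1*x4 + .1*x5",
                    ".2*x1 + .2*x2 - .2*x4 - .3*x5"),
           nlp_w = c("-.5*x1*x4 - .1*x2*x5",
                     "-.3*x1*x4 + .2*x2*x5"),
           tau = c(-1.5, 0, 1.5), delta = delta, psi = psi)
}
data_unequal <- sim_variant(delta = c(1.5, 0.5), psi = 1)   # ratio of units about .6:.2:.2
data_strong  <- sim_variant(delta = c(0.5, 0.5), psi = 0.1) # strong overlap (Figure 2)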

Figure 2: Strong overlap with psi = 0.1. Each panel presents boxplots by treatment group of the true generalized propensity score for one of the treatments for every unit in the sample.

3 Methodology and implementation in CIMTx

Estimation of causal effects

Consider an observational study with $N$ individuals, indexed by $i = 1, \ldots, N$, drawn randomly from a target population. Each individual was exposed to one and only one treatment, indexed by $W$. The goal of this study is to estimate the causal effect of treatment $W$ on a binary outcome $Y$. There are a total of $T$ possible treatments, and $W_i = w$ if individual $i$ is observed under treatment $w$, where $w \in \mathcal{W} = \{1, 2, \ldots, T\}$. Pre-treatment measured confounders are indexed by $X_i$. Under the potential outcomes framework (Rubin, 1974), individual $i$ has $T$ potential outcomes $\{Y_i(1), \ldots, Y_i(T)\}$, one under each treatment in $\mathcal{W}$. For each individual, at most one of the potential outcomes is observed – the one corresponding to the treatment to which the individual is exposed. All other potential outcomes are missing, which is known as the fundamental problem of causal inference (Holland, 1986). In general, three standard causal identification assumptions need to be maintained in order to estimate causal effects from observational data:

  1. The stable unit treatment value assumption: there is no interference between units and there are no different versions of a treatment.

  2. Positivity: the GPS for treatment assignment, $e_w(X_i) = P(W_i = w \mid X_i)$, is bounded away from 0 and 1 for all $w \in \mathcal{W}$.

  3. Ignorability: conditional on the pre-treatment covariates $X_i$, treatment assignment is independent of the potential outcomes: $p(W_i \mid Y_i(1), \ldots, Y_i(T), X_i) = p(W_i \mid X_i)$.

The CIMTx package addresses assumption (A2) in the “Identification of a common support region” section and assumption (A3) in the “Sensitivity analysis for unmeasured confounding” section.

Causal effects can be estimated by summarizing functionals of individual-level potential outcomes. For dichotomous outcomes, causal estimands can be the risk difference (RD), odds ratio (OR) or relative risk (RR). For purposes of illustration, we define causal effects based on the RD. Let $s_1$ and $s_2$ be two subgroups of treatments such that $s_1, s_2 \subset \mathcal{W}$ and $s_1 \cap s_2 = \emptyset$, and define $|s_1|$ and $|s_2|$ as the cardinalities of $s_1$ and $s_2$. Two commonly used causal estimands are the average treatment effect (ATE), $\text{ATE}_{s_1,s_2}$, and the average treatment effect on the treated (ATT), for example among those receiving $s_1$, $\text{ATT}_{s_1|s_1,s_2}$. They are defined as

$$\text{ATE}_{s_1,s_2} = E\left[\frac{\sum_{w \in s_1} Y_i(w)}{|s_1|} - \frac{\sum_{w \in s_2} Y_i(w)}{|s_2|}\right], \qquad \text{ATT}_{s_1|s_1,s_2} = E\left[\frac{\sum_{w \in s_1} Y_i(w)}{|s_1|} - \frac{\sum_{w \in s_2} Y_i(w)}{|s_2|} \,\middle|\, W_i \in s_1\right]. \tag{4}$$
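
As a concrete example with $T = 3$ treatments, taking $s_1 = \{1\}$ and $s_2 = \{2, 3\}$ compares treatment 1 with the average of treatments 2 and 3:

$$\text{ATE}_{\{1\},\{2,3\}} = E\left[Y_i(1) - \tfrac{1}{2}\left\{Y_i(2) + Y_i(3)\right\}\right].$$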

We now introduce six methods implemented in CIMTx for estimating the causal effects of multiple treatments: RA, IPTW, BART, RAMS, VM and TMLE.

Regression adjustment

Regression adjustment (Rubin, 1973), also known as model-based imputation (Imbens and Rubin, 2015), uses a regression model to impute missing potential outcomes: what would have happened to a specific individual had this individual received a treatment to which he or she was not exposed. RA regresses the outcomes on treatment and confounders,

$$f(w, X_i) = E[Y_i \mid W_i = w, X_i] = \operatorname{logit}^{-1}\{\beta_0 + \beta_1 w + \beta_2 X_i\}, \tag{5}$$

where $\beta_0$ is the intercept, $\beta_1$ is the coefficient for treatment and $\beta_2$ is a vector of coefficients for the covariates $X_i$. From the fitted regression model (5), the missing potential outcomes for each individual are imputed using the observed data. The causal effects can be estimated by contrasting the imputed potential outcomes between treatment groups. CIMTx implements RA with the Bayesian logistic regression model via the bayesglm() function of the arm package. For the ATE effects, we first average the $L$ predictive posterior draws $\{f^{l}(w, X_i),\, l = 1, \ldots, L\}$ over the empirical distribution of $\{X_i\}_{i=1}^{N}$, and for the ATT effects using $s_1$ as the reference group, over the empirical distribution of $\{X_i\}_{i: W_i \in s_1}$. We then take the difference of the averaged values between two treatment groups $w \in s_1$ and $w \in s_2$. Inferences about the treatment effect can be obtained from the $L$ posterior average treatment effects. The 95% credible interval is calculated using the 2.5th and 97.5th percentiles of the posterior draws.
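
Conceptually, the RA procedure just described can be sketched by hand with arm; this is a minimal illustration of the workflow, not CIMTx's exact internals:

library(arm)  # for bayesglm() and sim()
df <- data.frame(y = data$y, w = factor(data$w), data$covariates)
ra_fit <- bayesglm(y ~ ., data = df, family = binomial())
sims <- coef(sim(ra_fit, n.sims = 100))   # L = 100 posterior coefficient draws
# impute potential outcomes by setting everyone's treatment to 1, then to 2
new1 <- transform(df, w = factor(1, levels = levels(df$w)))
new2 <- transform(df, w = factor(2, levels = levels(df$w)))
p1 <- plogis(model.matrix(~ . - y, data = new1) %*% t(sims))
p2 <- plogis(model.matrix(~ . - y, data = new2) %*% t(sims))
rd_draws <- colMeans(p1) - colMeans(p2)   # posterior draws of ATE(1,2) on the RD scale
c(EST = mean(rd_draws), quantile(rd_draws, c(.025, .975)))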

In our package CIMTx, we can specify method = "RA" and estimand = "ATE" in the ce_estimate() function to get the ATE effects via RA:

ra_ate_res <- ce_estimate(y = data$y, x = data$covariates, w = data$w, 
                          method = "RA", estimand = "ATE", ndpost = 100)

The estimates, standard errors and 95% confidence intervals for the causal estimands can be printed using the summary() generic function:

summary(ra_ate_res)
#> $ATE12
#>      EST   SE LOWER UPPER
#> RD -0.28 0.04 -0.35 -0.21
#> RR  0.39 0.07  0.27  0.54
#> OR  0.26 0.06  0.17  0.40

#> $ATE13
#>      EST   SE LOWER UPPER
#> RD -0.60 0.06 -0.69 -0.47
#> RR  0.23 0.05  0.15  0.34
#> OR  0.07 0.02  0.03  0.13

#> $ATE23
#>      EST   SE LOWER UPPER
#> RD -0.31 0.04 -0.39 -0.25
#> RR  0.60 0.04  0.52  0.67
#> OR  0.25 0.05  0.17  0.34

Specifying estimand = "ATT" and setting reference_trt will get us the ATT effects:

ra_att_res <- ce_estimate(y = data$y, x = data$covariates,w = data$w, method = "RA", 
                          estimand = "ATT", ndpost = 100, reference_trt = 1)
summary(ra_att_res)
#> $ATT12
#>      EST   SE LOWER UPPER
#> RD -0.28 0.05 -0.37 -0.18
#> RR  0.40 0.09  0.25  0.57
#> OR  0.27 0.08  0.16  0.44

#> $ATT13
#>      EST   SE LOWER UPPER
#> RD -0.59 0.06 -0.67 -0.46
#> RR  0.24 0.06  0.14  0.38
#> OR  0.07 0.03  0.03  0.13

Inverse probability of treatment weighting

The idea of IPTW was originally introduced by Horvitz and Thompson (1952) in survey research to adjust for imbalances in sampling pools. Weighting methods have been extended to estimate the causal effect of a binary treatment in observational studies, and more recently reformulated to accommodate multiple treatments. When interest is in estimating the pairwise ATE for treatment groups $s_1$ and $s_2$, a consistent estimator of $\text{ATE}_{s_1,s_2}$ is given by the weighted mean

$$\widehat{\text{ATE}}_{s_1,s_2} = \frac{\sum_{i=1}^{N} Y_i\, \mathbb{1}(W_i \in s_1)\, r(W_i, X_i)}{\sum_{i=1}^{N} \mathbb{1}(W_i \in s_1)\, r(W_i, X_i)} - \frac{\sum_{i=1}^{N} Y_i\, \mathbb{1}(W_i \in s_2)\, r(W_i, X_i)}{\sum_{i=1}^{N} \mathbb{1}(W_i \in s_2)\, r(W_i, X_i)},$$

where $r(w, X_i) = 1 / P(W_i = w \mid X_i)$ is the inverse probability weight and $\mathbb{1}(\cdot)$ is the indicator function. The CIMTx package provides three ways in which the weights can be estimated: (i) multinomial logistic regression, (ii) generalized boosted model (GBM), and (iii) super learner; a minimal hand-rolled sketch of estimator (i) follows the list below. A challenge with IPTW is that low GPS values can result in extreme weights, which may yield erratic causal estimates with large sample variances. This issue becomes increasingly likely as the number of treatments increases. Weight trimming or truncation can alleviate the problem of extreme weights. CIMTx provides an argument for users to choose the percentile at which the weights should be truncated. We briefly describe the three weight estimators.

  1. The multinomial logistic regression model for treatment assignment is

     $$P(W_i = w \mid X_i) = \frac{e^{\alpha_w' X_i}}{1 + e^{\alpha_1' X_i} + \cdots + e^{\alpha_{T-1}' X_i}},$$

     where $\alpha_w$ is a vector of coefficients for $X_i$ corresponding to treatment $w$; the coefficients can be estimated using an iterative procedure such as generalized iterative scaling or iteratively reweighted least squares.

  2. GBM uses machine learning to flexibly model the relationships between treatment assignment and covariates. It does this by growing a series of boosted classification trees to minimize an exponential loss function. This process is effective for fitting nonlinear treatment models characterized by curves and interactions. The procedure of estimating the GPS can be tuned to find the GPS model producing the best covariate balance between treatment groups.

  3. Super learner is an algorithm that creates an optimally weighted average of several machine learning models. The candidate models can be specified via the SL.library argument of the SuperLearner package. This approach has been proven to be asymptotically as accurate as the best possible prediction algorithm included in the library (Van der Laan et al., 2007).
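
To make weight estimator (i) concrete, the GPS and weights can be computed by hand; the following is a minimal sketch with nnet::multinom (our illustration; CIMTx's internal implementation may differ in details):

library(nnet)
fit_w <- multinom(w ~ ., data = data.frame(w = factor(data$w), data$covariates),
                  trace = FALSE)
gps <- predict(fit_w, type = "probs")           # N x T matrix of estimated P(W = t | X)
r <- 1 / gps[cbind(seq_along(data$w), data$w)]  # r(W_i, X_i) = 1 / GPS at the observed W_i
r_trim <- pmin(r, quantile(r, 0.95))            # truncate extreme weights at the 95th percentile
# weighted-mean estimate of ATE_{1,2} on the risk-difference scale
wm <- function(t) weighted.mean(data$y[data$w == t], r_trim[data$w == t])
wm(1) - wm(2)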

IPTW can be implemented in CIMTx by setting a specific method and estimand. For IPTW estimators, the variance can be estimated via a robust sandwich-type variance estimator or a bootstrap variance estimator; in practice, the bootstrap variance estimator is often recommended (Austin, 2016). The following code estimates the ATE using IPTW with weights estimated by multinomial logistic regression.

iptw_multi_res <- ce_estimate(y = data$y, x = data$covariates , w = data$w, 
                              method = "IPTW-Multinomial", estimand = "ATE")

We can estimate the ATE effects with weights estimated by super learner or GBM by changing the method argument to "IPTW-SL" or "IPTW-GBM", respectively. We can then estimate the causal effects and bootstrap confidence intervals by setting boot = TRUE.

iptw_sl_trim_ate_res <- ce_estimate(y = data$y, x = data$covariates , w = data$w, 
                                    method = "IPTW-SL", estimand = "ATE", 
                                    sl_library =  c("SL.glm", "SL.glmnet", "SL.rpart"),
                                    trim_perc = c(0.05,0.95), boot = TRUE, 
                                    nboots = 100, verbose_boot = F)
summary(iptw_sl_trim_ate_res)
#> $ATE12
#>      EST   SE LOWER UPPER
#> RD -0.34 0.05 -0.42 -0.24
#> RR  0.33 0.07  0.19  0.48
#> OR  0.20 0.06  0.10  0.33

#> $ATE13
#>      EST   SE LOWER UPPER
#> RD -0.59 0.05 -0.67 -0.46
#> RR  0.22 0.05  0.13  0.34
#> OR  0.07 0.02  0.04  0.13

#> $ATE23
#>      EST   SE LOWER UPPER
#> RD -0.25 0.05 -0.34 -0.15
#> RR  0.67 0.06  0.57  0.79
#> OR  0.34 0.09  0.21  0.54

Bayesian additive regression trees

BART is a likelihood-based machine learning model that has been adapted to causal inference settings in recent years. For a binary outcome, BART uses the probit regression

$$f(w, X_i) = E[Y_i \mid W_i = w, X_i] = \Phi\Big\{\textstyle\sum_{j=1}^{J} g_j(w, X_i; T_j, M_j)\Big\},$$

where $\Phi$ is the standard normal cumulative distribution function, $(T_j, M_j)$ indexes a single subtree model in which $T_j$ denotes the regression tree and $M_j$ is the set of parameter values associated with the terminal nodes of the $j$th regression tree, $g_j(w, X_i; T_j, M_j)$ represents the mean assigned to the node in the $j$th regression tree associated with covariate value $X_i$ and treatment level $w$, and the number of regression trees $J$ is considered fixed and known. BART uses regularizing priors for $(T_j, M_j)$ to keep the impact of each tree small. Although the prior distributions can be specified via the ce_estimate() function of CIMTx, the default priors tend to work well and require little modification in many situations. The details of the prior specification and the Bayesian backfitting algorithm for posterior sampling can be found in Chipman et al. (2010). Posterior inferences about the treatment effects can be drawn in a similar way as described in the Regression adjustment section.
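
For intuition, the imputation step can be sketched directly with the BART package (a rough illustration of the idea; ce_estimate() additionally handles contrasts, discarding and interval estimation):

library(BART)
xw_train <- cbind(data$covariates, w = data$w)
bart_fit <- pbart(x.train = xw_train, y.train = data$y, ndpost = 100)  # probit BART
# impute P(Y(1) = 1) for every unit by setting w = 1 in the test matrix
p1_draws <- predict(bart_fit, newdata = cbind(data$covariates, w = 1))$prob.test
dim(p1_draws)  # ndpost x N: posterior draws of the potential-outcome probabilities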

Setting method = "BART" and specifiying the estimand = "ATE" or estimand = "ATT" of the ce_estimate() function implements the BART method.

bart_res <- ce_estimate(y = data$y, x = data$covariates, w = data$w, method = "BART", 
                        estimand = "ATT", ndpost=100, reference_trt = 1)
summary(bart_res)
#> $ATT12
#>      EST   SE LOWER UPPER
#> RD -0.38 0.07 -0.51 -0.25
#> RR  0.47 0.08  0.31  0.61
#> OR  0.21 0.07  0.10  0.35

#> $ATT13
#>      EST   SE LOWER UPPER
#> RD -0.56 0.07 -0.69 -0.43
#> RR  0.38 0.07  0.24  0.50
#> OR  0.06 0.03  0.02  0.13

Regression adjustment with multivariate spline of GPS

For a binary outcome, the number of outcome events can be small. The estimation of causal effects is challenging with rare outcomes because the great majority of units contribute no information to explaining the variability attributable to the differential treatment regimens in the health outcomes. Franklin et al. (2017) found that regression adjustment on the propensity score using one nonlinear spline performed best with respect to bias and root-mean-squared error in estimating treatment effects. Hu and Gu (2021) proposed RAMS, which accommodates multiple treatments by using a nonlinear spline model for the outcome that is additive in the treatment and a multivariate spline function of the GPS:

$$f(W_i, X_i) = E[Y_i \mid W_i, X_i] = \operatorname{logit}^{-1}\{\beta_{W_i} + h(R(X_i), \phi)\},$$

where $h(\cdot)$ is a spline function of the GPS vector $R(X_i)$ indexed by $\phi$, and $\beta = [\beta_1, \ldots, \beta_T]$ are regression coefficients associated with the treatment $W_i$. The dimension of the spline function $h(\cdot)$ depends on the number of treatments $T$. Confidence intervals for the RAMS treatment effect estimates can be obtained using the nonparametric bootstrap.

In CIMTx, RAMS is implemented using the gam() function from the mgcv package, with a tensor product smoother te() of the GPS. Treatment effects can then be estimated by averaging and contrasting the predicted $\hat{f}(w, X_i)$ between treatment groups. RAMS can be called by setting method = "RAMS-Multinomial" and specifying estimand = "ATE" or estimand = "ATT".

rams_multi_res <- ce_estimate(y = data$y, x = data$covariates, w = data$w,
                              method = "RAMS-Multinomial", estimand = "ATE", 
                              boot = TRUE, nboots = 100, verbose_boot = F)
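
Under the hood, the outcome model resembles the following mgcv sketch (our illustration, reusing the gps matrix estimated in the IPTW sketch above):

library(mgcv)
# two GPS coordinates determine the third, so smooth the first two jointly with te()
df_rams <- data.frame(y = data$y, w = factor(data$w), g1 = gps[, 1], g2 = gps[, 2])
rams_fit <- gam(y ~ w + te(g1, g2), family = binomial, data = df_rams)
# impute potential outcomes under w = 1 and average over the sample
p1 <- predict(rams_fit, newdata = transform(df_rams, w = factor(1, levels = 1:3)),
              type = "response")
mean(p1)  # estimate of E[Y(1)]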

Vector matching

Lopez and Gutman (2017) proposed the VM algorithm, which matches individuals with similar vectors of the GPS. VM obtains matched sets using a combination of k-means clustering and one-to-one matching with replacement within each cluster stratum. Currently, VM is only designed to estimate the ATT effects. In CIMTx, VM is implemented via method = "VM". CIMTx does not provide confidence intervals for the treatment effect estimates because Lopez and Gutman (2017) did not provide an approach to estimate the sampling variance of the VM estimator.

To implement VM in CIMTx, we set the reference group (reference_trt = 1) and the number of clusters to form by k-means clustering (n_cluster = 3).

vm_res <- ce_estimate(y = data$y, x = data$covariates, w = data$w, method = "VM", 
                      estimand = "ATT", reference_trt = 1, n_cluster = 3)

The number of matched individuals is also stored in the output list:

vm_res$number_matched
#> 158

Targeted maximum likelihood estimation

TMLE is a doubly robust approach that combines outcome estimation, IPTW estimation, and a targeting step to optimize the parameter of interest with respect to bias and variance. Rose and Normand (2019) implemented TMLE to estimate the ATE effects of multiple treatments. CIMTx calls the R package tmle to implement TMLE for the ATE effects. As suggested by Rose and Normand (2019), the nonparametric bootstrap is used in CIMTx to obtain the confidence intervals of the treatment effect estimates.

Calling method = "TMLE" implements TMLE in CIMTx. We use nonparametric bootstrap to estimate the 95% confidence intervals by setting boot = TRUE and nboots = 100.

tmle_res_boot <- ce_estimate(y = data$y, x = data$covariates, w = data$w, nboots = 100, 
                             method = "TMLE", estimand = "ATE", boot = TRUE, 
                             sl_library = c("SL.glm", "SL.glmnet", "SL.rpart"))
summary(tmle_res_boot)
#> $ATE12
#> EST   SE LOWER UPPER
#> RD -0.36 0.04 -0.45 -0.29
#> RR  0.30 0.05  0.21  0.39
#> OR  0.17 0.04  0.11  0.24

#> $ATE13
#> EST   SE LOWER UPPER
#> RD -0.60 0.04 -0.67 -0.51
#> RR  0.20 0.03  0.15  0.28
#> OR  0.06 0.02  0.04  0.10

#> $ATE23
#> EST   SE LOWER UPPER
#> RD -0.24 0.05 -0.34 -0.14
#> RR  0.68 0.06  0.57  0.79
#> OR  0.34 0.09  0.21  0.55

Identification of a common support region

We now turn to the causal identification assumptions. If the positivity assumption (A2) is violated, problems can arise when extrapolating over areas of the covariate space where common support does not exist. It is important to define a common support region to which the causal conclusions can be generalized. In CIMTx, the identification of a common support region is offered for three methods: IPTW, VM and BART.

For IPTW, one strategy is weight truncation, by which extreme weights that fall outside a specified range limit of the weight distribution are set to the range limit. This functionality is offered in CIMTx via the trim_perc argument, which can take two values: one for the lower and one for the upper percentile of the weight distribution at which to trim. Figure 3 shows the distributions of the weights estimated by the three methods, before and after trimming at the 5th and 95th percentiles of the weight distribution.
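
The remaining weight objects passed to plot() below can be fit analogously to the earlier ce_estimate() calls; a sketch (trimmed versions simply add trim_perc = c(0.05, 0.95)):

iptw_sl_res <- ce_estimate(y = data$y, x = data$covariates, w = data$w,
                           method = "IPTW-SL", estimand = "ATE",
                           sl_library = c("SL.glm", "SL.glmnet", "SL.rpart"))
iptw_gbm_res <- ce_estimate(y = data$y, x = data$covariates, w = data$w,
                            method = "IPTW-GBM", estimand = "ATE")
iptw_multi_trim_res <- ce_estimate(y = data$y, x = data$covariates, w = data$w,
                                   method = "IPTW-Multinomial", estimand = "ATE",
                                   trim_perc = c(0.05, 0.95))
# iptw_sl_trim_res and iptw_gbm_trim_res follow the same pattern with trim_perc added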

plot(iptw_multi_res, iptw_sl_res, iptw_gbm_res, iptw_multi_trim_res, 
     iptw_sl_trim_res, iptw_gbm_trim_res)
Figure 3: Distributions of the inverse probability of treatment weights estimated by multinomial logistic regression, super learner and generalized boosted models. Panel (a) shows results before weight trimming. Panel (b) displays results after trimming the weights at 5% and 95% of the distribution. Super learner and the generalized boosted models produced less extreme weights compared to multinomial logistic regression.

For VM, Lopez and Gutman (2017) proposed a rectangular support region defined, for each treatment, by the maximum value of the smallest GPS and the minimum value of the largest GPS among the treatment groups. Individuals that fall outside the region are discarded from the causal analysis. This feature is automatically implemented with method = "VM" in CIMTx.
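
As an illustration of this rule (our sketch, reusing the estimated gps matrix from the IPTW sketch above), unit i is retained only if each of its GPS components lies between the largest group-specific minimum and the smallest group-specific maximum:

# bounds of the rectangular common support region, one pair per treatment
low  <- apply(gps, 2, function(p) max(tapply(p, data$w, min)))
high <- apply(gps, 2, function(p) min(tapply(p, data$w, max)))
inside <- sapply(seq_len(ncol(gps)),
                 function(t) gps[, t] >= low[t] & gps[, t] <= high[t])
keep <- rowSums(inside) == ncol(gps)  # retain units inside the region on every margin
table(keep)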

For BART, Hu et al. (2020a) supplied BART with a strategy to identify a common support region for retaining inferential units, which is to discard individuals with large variability in their predicted potential outcomes. Specifically, for the ATT effects, any individual $i$ with $W_i = w$ will be discarded if

$$s_i^{f_{w'}} > \max_j \big\{ s_j^{f_w} \big\}, \quad j: W_j = w, \;\; \forall\, w' \neq w \in \mathcal{W}, \tag{6}$$

where $s_j^{f_w}$ and $s_i^{f_{w'}}$ respectively denote the standard deviation of the posterior distribution of the potential outcomes under treatment $W = w$ and $W = w'$ for a given unit. For the ATE effects, the discarding rule in equation (6) is applied to each treatment group. Users can implement the discarding rule by setting the discard argument in CIMTx. Using $\text{ATT}_{1|1,2}$ as an example, 5 (bart_dis_res$n_discard) individuals in the reference group $w = 1$ were discarded from the simulated data; a sketch of the rule itself follows the call below.

bart_dis_res <- ce_estimate(y = data$y, x = data$covariates, w = data$w, 
                            method = "BART", estimand = "ATT", discard = TRUE, 
                            ndpost = 100, reference_trt = 1)
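
The rule in (6) is simple once posterior draws of the potential outcomes are available. Here is a sketch for the ATT with reference group w = 1, where sd_y1, sd_y2 and sd_y3 are hypothetical vectors holding each unit's posterior standard deviation of the predicted potential outcome under treatments 1, 2 and 3 (e.g., computed from probit BART draws as in the earlier sketch):

ref <- data$w == 1
cutoff <- max(sd_y1[ref])          # max_j s_j^{f_1} over the reference group
discard <- ref & (sd_y2 > cutoff | sd_y3 > cutoff)  # rule (6) with w' = 2, 3
sum(discard)                       # number of reference-group units dropped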

Sensitivity analysis for unmeasured confounding

The violation of the ignorability assumption (A3) can lead to biased treatment effect estimates. Sensitivity analysis is useful for gauging how much the causal conclusions would be altered in response to different magnitudes of departure from the ignorability assumption. CIMTx implements a new flexible sensitivity analysis approach developed by Hu et al. (2022b). This approach first defines a confounding function for any pair of treatments $(w, w')$ as

$$c(w, w', x) = E[Y(w) \mid W = w, X = x] - E[Y(w) \mid W = w', X = x]. \tag{7}$$

The confounding function, also viewed as the sensitivity parameter in a sensitivity analysis, directly represents the difference in the mean potential outcomes $Y(w)$ between those treated with $W = w$ and those treated with $W = w'$, who have the same level of $x$. If the ignorability assumption holds, the confounding function will be zero for all pairs $w \neq w' \in \mathcal{W}$. When treatment assignment is not ignorable, unmeasured confounding is present and causal effect estimates based on the measured $X$ alone will be biased. Hu et al. (2022b) derived the form of the resultant bias as

$$\text{Bias}(w, w') = p_{w'}\, c(w, w', x) - p_w\, c(w', w, x) + \sum_{l \in \mathcal{W} \setminus \{w, w'\}} p_l\, \{c(w, l, x) - c(w', l, x)\}, \tag{8}$$

where $p_w = P(W = w \mid X = x)$, $w \in \mathcal{W} = \{1, \ldots, T\}$. For example, with only $T = 2$ treatments the summation vanishes and the bias reduces to $p_2\, c(1, 2, x) - p_1\, c(2, 1, x)$.

Table 2 demonstrates plausible assumptions about the confounding functions and their interpretations. There are three ways in which we can specify the prior for the confounding functions: (i) a point mass prior; (ii) re-analysis over a range of point mass priors (tipping point); (iii) a full prior with uncertainty specified. Since the sensitivity analysis approach was developed within the Bayesian framework, strategy (iii) offers the advantage of incorporating both the statistical uncertainty due to sampling and the uncertainty about the values of the sensitivity parameters. In strategy (i), a fixed value is assumed for the sensitivity parameter. Strategy (ii) expands on strategy (i) and examines how the causal conclusion would change when a range of values is assumed for the sensitivity parameter. We will demonstrate all three prior specifications with the sa() function in the CIMTx package. Hu et al. (2022b) further discussed (a) strategies to specify the confounding functions that represent our prior beliefs about the degrees of unmeasured confounding via the remaining variability in the outcomes unexplained by measured $X$; and (b) ways in which the causal effects can be estimated adjusting for the presumed degree of unmeasured confounding.

Table 2: Interpretation of assumed priors on c(w,w′,x) and c(w′,w,x) for causal estimands based on the risk difference, assuming the outcome is an adverse event.

c(w,w′,x)   c(w′,w,x)   Interpretation and implications of the assumptions
>0          <0          Unhealthier individuals are treated with w.
<0          >0          Contrary to the above interpretation, unhealthier individuals are treated with w′.
<0          <0          The observed treatment allocation between w and w′ is beneficial relative to the alternative which reverses the treatment assignment for everyone.
>0          >0          Contrary to the above interpretation, the observed treatment allocation between w and w′ is undesirable relative to the alternative which reverses the treatment assignment for everyone.

The proposed sensitivity analysis algorithm proceeds with the following steps (Hu et al., 2022b):

  1. Fit a multinomial probit BART model $f^{\mathrm{MBART}}(W \mid X)$ to estimate the GPS, $p_l \equiv P(W = l \mid X = x)$, $l \in \mathcal{W}$, for each individual. Then, for each treatment $w \leftarrow 1$ to $T$:

     (a) Draw $M_1$ GPS values $\tilde{p}_l^{\,1}, \ldots, \tilde{p}_l^{\,M_1}$, $\forall\, l \neq w$, $l \in \mathcal{W}$, from the posterior predictive distribution of $f^{\mathrm{MBART}}(W \mid X)$ for each individual.

     (b) For each $m \leftarrow 1$ to $M_1$, draw $M_2$ values $\eta_l^{m1}, \ldots, \eta_l^{mM_2}$ from the prior distribution of each of the confounding functions $c(w, l, x)$, for each $l \neq w$, $l \in \mathcal{W}$.

  2. Compute the adjusted outcomes, $Y_i^{\mathrm{CF}} \equiv Y_i - \sum_{l \neq w,\, l \in \mathcal{W}} P(W_i = l \mid X_i = x)\, c(w, l, x)$, for each treatment $w$ and for each of the $M_1 M_2$ draws of $\{\tilde{p}_l^{\,1}, \eta_l^{11}, \ldots, \eta_l^{1M_2}, \ldots, \tilde{p}_l^{\,M_1}, \eta_l^{M_1 1}, \ldots, \eta_l^{M_1 M_2};\; l \neq w,\, l \in \mathcal{W}\}$.

  3. Fit a BART model to each of the $M_1 \times M_2$ sets of observed data with the adjusted outcomes $Y^{\mathrm{CF}}$.

  4. Estimate the combined adjusted causal effects and uncertainty intervals by pooling posterior samples across the model fits arising from the $M_1 \times M_2$ data sets.

We now demonstrate the Monte Carlo sensitivity analysis approach for unmeasured confounding (Hu et al., 2022b). We first simulate a small dataset in a simple causal inference setting with two binary confounders: X1 is measured and X2 is unmeasured.

set.seed(111)
data_SA <- data_sim(
  sample_size = 100, n_trt = 3,
  x = c("rbinom(1, .5)",  # x1: measured confounder
        "rbinom(1, .4)"), # x2: unmeasured confounder
  lp_y = rep(".2*x1+2.3*x2", 3),# parallel response surfaces
  nlp_y = NULL,
  align = F, # w model is not the same as the y model
  lp_w = c("0.2 * x1 + 2.4 * x2",  # w = 1
           "-0.3 * x1 - 2.8 * x2"),# w = 2
  nlp_w = NULL,
  tau = c(-2, 0, 2),  delta = c(0, 0), psi = 1)

Next we implement the sensitivity analysis algorithm step-by-step.

  1. Estimate the GPS for each individual. Specifically, we fit a multinomial probit BART model regressing treatment assignment on covariates, using the mbart2() function from the BART package. We set the number of posterior draws for the GPS (m1) to 50 and thin the chain by a gap of 10 draws.

    m1 <- 50; sample_gap <- 10
    w_model <- BART::mbart2(x.train = data_SA$covariates, y.train = data_SA$w,
                            ndpost = m1 * sample_gap) 
  2. Then we draw the GPS for each individual from the fitted multinomial probit BART model.

    gps <- array(w_model$prob.train[seq(1, m1 * sample_gap, sample_gap),], 
                 dim = c(m1,  # 1st dimension is M1
                         length(unique(data_SA$w)), # 2nd dimension is w
                         dim(data_SA$covariates)[1])) # 3rd dimension is sample size
    dim(gps)
    #> 50 3 100

    The output of the posterior GPS is a three-dimensional array. The first dimension is the number of posterior draws for the GPS (M1), the second dimension is the number of treatments T, and the third dimension is the total sample size.

  3. Specify the prior distributions and the number of draws (M2) for the confounding functions c(w,w,x). In this illustrative simulation example, we use the true values of the confounding functions within each stratum of x1. This represents the strategy (i) point mass prior.

    x1 <- data_SA$covariates[, 1, drop = F]
    x2 <- data_SA$covariates[, 2, drop = F] # x2 as the unmeasured confounder
    w <- data_SA$w
    x1w_data <- cbind(x1, w)
    Y1 <- data_SA$y_true[, 1]
    Y2 <- data_SA$y_true[, 2]
    Y3 <- data_SA$y_true[, 3]
    y <- data_SA$y
    # Calculate the true confounding functions within x1 = 1 stratum
    c_1_x1_1 <- mean(Y1[w == 1 & x1 == 1]) - mean(Y1[w == 2 & x1 == 1]) # c(1,2)
    c_2_x1_1 <- mean(Y2[w == 2 & x1 == 1]) - mean(Y2[w == 1 & x1 == 1]) # c(2,1)
    c_3_x1_1 <- mean(Y2[w == 2 & x1 == 1]) - mean(Y2[w == 3 & x1 == 1]) # c(2,3)
    c_4_x1_1 <- mean(Y1[w == 1 & x1 == 1]) - mean(Y1[w == 3 & x1 == 1]) # c(1,3)
    c_5_x1_1 <- mean(Y3[w == 3 & x1 == 1]) - mean(Y3[w == 1 & x1 == 1]) # c(3,1)
    c_6_x1_1 <- mean(Y3[w == 3 & x1 == 1]) - mean(Y3[w == 2 & x1 == 1]) # c(3,2)
    
    c_x1_1 <- cbind(c_1_x1_1, c_2_x1_1, c_3_x1_1, c_4_x1_1, c_5_x1_1,
                    c_6_x1_1)# True confounding functions among x1 = 1

    The true values of the confounding functions within the stratum x1=0 can be calculated in a similar way.

    c_1_x1_0 <- mean(Y1[w == 1 & x1 == 0]) - mean(Y1[w == 2 & x1 == 0])# c(1,2)
    c_2_x1_0 <- mean(Y2[w == 2 & x1 == 0]) - mean(Y2[w == 1 & x1 == 0])# c(2,1)
    c_3_x1_0 <- mean(Y2[w == 2 & x1 == 0]) - mean(Y2[w == 3 & x1 == 0])# c(2,3)
    c_4_x1_0 <- mean(Y1[w == 1 & x1 == 0]) - mean(Y1[w == 3 & x1 == 0])# c(1,3)
    c_5_x1_0 <- mean(Y3[w == 3 & x1 == 0]) - mean(Y3[w == 1 & x1 == 0])# c(3,1)
    c_6_x1_0 <- mean(Y3[w == 3 & x1 == 0]) - mean(Y3[w == 2 & x1 == 0])# c(3,2)
    c_x1_0 <- cbind(c_1_x1_0, c_2_x1_0, c_3_x1_0, c_4_x1_0, c_5_x1_0, c_6_x1_0) 

    The true values of the confounding functions within the stratum of x1 can be calculated using the helper function true_c_fun_cal() in our package.

    true_c_fun <- true_c_fun_cal(x = x1, w = w)
  4. Calculate the confounding function adjusted outcomes with the drawn values of GPS and confounding functions.

    i <- 1; j <- 1
    ycf <- ifelse(
      x1w_data[, "w"] == 1 & x1 == 1,
      # w = 1, x1 = 1
      y - (c_x1_1[i, 1] * gps[j, 2, ] + c_x1_1[i, 4] * gps[j, 3, ]),
      ifelse(
        x1w_data[, "w"] == 1 & x1 == 0,
        # w = 1, x1 = 0
        y - (c_x1_0[i, 1] * gps[j, 2, ] + c_x1_0[i, 4] * gps[j, 3, ]),
        ifelse(
          x1w_data[, "w"] == 2 & x1 == 1,
          # w = 2, x1 = 1
          y - (c_x1_1[i, 2] * gps[j, 1, ] + c_x1_1[i, 3] * gps[j, 3, ]),
          ifelse(
            x1w_data[, "w"] == 2 & x1 == 0,
            # w = 2, x1 = 0
            y - (c_x1_0[i, 2] * gps[j, 1, ] + c_x1_0[i, 3] * gps[j, 3, ]),
            ifelse(
              x1w_data[, "w"] == 3 & x1 == 1,
              # w = 3, x1 = 1
              y - (c_x1_1[i, 5] * gps[j, 1, ] + c_x1_1[i, 6] * gps[j, 2, ]),
              # w = 3, x1 = 0
              y - (c_x1_0[i, 5] * gps[j, 1, ] + c_x1_0[i, 6] * gps[j, 2, ])
            )
          )
        )
      )
    ) 
  5. Use the adjusted outcomes to estimate the causal effects.

    bart_mod_sa <- BART::wbart(x.train = x1w_data, y.train = ycf, ndpost = 1000)
    predict_1_ate_sa <- BART::pwbart(cbind(x1, w = 1), bart_mod_sa$treedraws)
    predict_2_ate_sa <- BART::pwbart(cbind(x1, w = 2), bart_mod_sa$treedraws)
    predict_3_ate_sa <- BART::pwbart(cbind(x1, w = 3), bart_mod_sa$treedraws)
    RD_ate_12_sa <- rowMeans(predict_1_ate_sa - predict_2_ate_sa)
    RD_ate_23_sa <- rowMeans(predict_2_ate_sa - predict_3_ate_sa)
    RD_ate_13_sa <- rowMeans(predict_1_ate_sa - predict_3_ate_sa)
    predict_1_att_sa <- BART::pwbart(cbind(x1[w == 1,], w = 1), bart_mod_sa$treedraws)
    predict_2_att_sa <- BART::pwbart(cbind(x1[w == 1,], w = 2), bart_mod_sa$treedraws)
    RD_att_12_sa <- rowMeans(predict_1_att_sa - predict_2_att_sa) # w=1 is the reference

Repeat steps 3 and 4 M1×M2 times to form M1×M2 datasets with adjusted outcomes. The uncertainty intervals are estimated by pooling the posteriors across the M1×M2 model fits.
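
The full Monte Carlo loop can be sketched as follows (our illustration; adjust_y() is a hypothetical helper standing in for the ycf computation of step 4, and m2 = 1 in this example):

rd_ate_12_all <- NULL
for (j in seq_len(m1)) {     # loop over the M1 GPS draws
  for (i in seq_len(1)) {    # loop over the M2 confounding-function draws
    ycf_ij <- adjust_y(y, x1w_data, c_x1_1, c_x1_0, gps, i = i, j = j)  # hypothetical
    fit_ij <- BART::wbart(x.train = x1w_data, y.train = ycf_ij, ndpost = 100)
    p1 <- BART::pwbart(cbind(x1, w = 1), fit_ij$treedraws)
    p2 <- BART::pwbart(cbind(x1, w = 2), fit_ij$treedraws)
    rd_ate_12_all <- c(rd_ate_12_all, rowMeans(p1 - p2))  # pool posterior draws
  }
}
quantile(rd_ate_12_all, c(.025, .5, .975))  # pooled 95% uncertainty interval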

The sa() function implements the sensitivity analysis approach while fitting the M1×M2 models using parallel computation.

sa_res <- sa(m1 = 50, m2 = 1, x = x1, y = data_SA$y, w = data_SA$w, ndpost = 100, 
             estimand = "ATE", prior_c_function =  true_c_fun, nCores = 3)
summary(sa_res)
#>            EST   SE LOWER UPPER
#> ATE_RD12 -0.44 0.10 -0.64 -0.23
#> ATE_RD13 -0.58 0.11 -0.80 -0.36
#> ATE_RD23 -0.14 0.12 -0.38  0.10

We compare the sensitivity analysis results to the naive estimator that ignores the unmeasured confounder X2, and to the results we would obtain had we had access to X2.

bart_with_x2_res <- ce_estimate(y = data_SA$y, x = cbind(x1, x2), w = data_SA$w, 
                                method = "BART", estimand = "ATE", ndpost = 100)
bart_without_x2_res <- ce_estimate(y = data_SA$y, x = x1, w = data_SA$w, 
                                   method = "BART", estimand = "ATE", ndpost = 100)

Figure 4 compares the estimates of ATE1,2, ATE2,3 and ATE1,3 from the three analyses. The sensitivity analysis estimators are similar to the results that could be achieved had the unmeasured confounder X2 been made available.

Figure 4: Estimates and 95% credible intervals for ATE1,2, ATE2,3 and ATE1,3 from the three analyses.

We can also conduct the sensitivity analysis for the ATT effects by setting estimand = "ATT".

sa_att_res <- sa(m1 = 50, m2 = 1, x = x1, y = data_SA$y, w = data_SA$w, ndpost = 100,
                 estimand = "ATT", prior_c_function =  true_c_fun, nCores = 1, 
                 reference_trt = 1)
summary(sa_att_res)
#>            EST   SE LOWER UPPER
#> ATT_RD12 -0.42 0.10 -0.63 -0.22
#> ATT_RD13 -0.57 0.11 -0.79 -0.35

Finally, we demonstrate the sa() function in a more complex data setting with 3 measured confounders and 2 unmeasured confounders.

set.seed(1)
data_SA_2 <- data_sim(
  sample_size = 100, n_trt = 3,
  x = c( "rnorm(0, 0.5)",  # x1
         "rbeta(2, .4)",   # x2
         "runif(0, 0.5)",  # x3
         "rweibull(1, 2)", # x4 as one of the unmeasured confounders
         "rbinom(1, .4)" ),  # x5 as one of the unmeasured confounders
  lp_y = rep(".2*x1 + .3*x2 - .1*x3 - 1.1*x4 - 1.2*x5", 3),
  nlp_y  = rep(".7*x1*x1  - .1*x2*x3", 3), # parallel response surfaces
  align = FALSE,
  lp_w =  c(".4*x1 + .1*x2 - 1.1*x4 + 1.1*x5",  # w = 1
            ".2*x1 + .2*x2 - 1.2*x4 - 1.3*x5"), # w = 2,
  nlp_w = c("-.5*x1*x4 - .1*x2*x5",  # w = 1
            "-.3*x1*x4 + .2*x2*x5"), # w = 2,
  tau = c(0.5,-0.5,0.5), delta = c(0.5,0.5), psi = 2)

We have demonstrated strategy (i), the point mass prior; we now show how strategy (ii), re-analysis over a range of point mass priors, and strategy (iii), a full prior with uncertainty specified, can be used. For strategy (ii), we can specify the grid for a given confounding function using the seq() function. In the following example, we set c(1,3) to a grid of 5 values from -0.6 to 0 in increments of 0.15, and c(3,1) to a grid of 5 values from 0 to 0.6 in increments of 0.15. This specification encodes our belief that unhealthier individuals (suppose the outcome is death) are treated with treatment option 3 (see Table 2), because those receiving w=1 would have a lower probability of death under either treatment. The other confounding functions are drawn from uniform distributions (strategy (iii)).

c_grid <- c("runif(-.6, 0)", "runif(0,.6)", "runif(-.6,0)", # c(1,2), c(2,1), c(2,3)
            "seq(-.6, 0,.15)", "seq(0,.6,.15)", "runif(0,.6)") # c(1,3), c(3,1), c(3,2)
SA_grid_res <- sa(y = data_SA_2$y, w = data_SA_2$w, x = data_SA_2$covariates[,-c(4,5)],
                  prior_c_function = c_grid, m1 = 1, nCores = 3, estimand = "ATE")

The sensitivity analysis results can be visualized via a contour plot. Figure 5 shows how the estimate of ATE1,3 changes under different values of the two confounding functions c(1,3,x) and c(3,1,x). Under the assumption that unhealthier patients are treated with w=3, as the effect of unmeasured confounding increases (moving up along the 45° line), the beneficial treatment effect associated with w=3 becomes more pronounced, as evidenced by larger estimates of ATE1,3.

plot(SA_grid_res, ATE = "1,3")
Figure 5: Contour plot of the confounding function adjusted ATE1,3. The blue lines report the adjusted causal effect estimates corresponding to pairs of values for c(1,3) and c(3,1) spaced on a grid (-0.6, 0) × (0, 0.6) in increments of 0.15, under the prior distributions c(1,2), c(2,3) ~ U(-0.6, 0) and c(2,1), c(3,2) ~ U(0, 0.6).

4 Discussion

We contribute a comprehensive R package CIMTx suitable for causal analysis of observational data with multiple treatments and a binary outcome. In this package, we introduce six methods for the estimation of causal effects, including both the classical approaches and machine learning based methods. Drawing causal inference from non-experimental data inevitably involves structural causal assumptions. CIMTx offers a unique set of features to address two key assumptions: positivity and ignorability, using appropriate estimation procedures. Additionally, the CIMTx package provides guidance to readers on how to simulate data possessing the data characteristics in the multiple treatment setting. Detailed step-by-step examples are provided to demonstrate all methods. The current version of the CIMTx package focuses on binary outcomes. For future research, developing methods and R packages for causal inferences with more complex outcomes such as censored survival outcomes could be a worthwhile contribution.

CRAN packages used

CIMTx, PSweight, twang, WeightIt, CBPS, optweight, arm, SuperLearner, mgcv, tmle, BART

CRAN Task Views implied by cited packages

Bayesian, CausalInference, Econometrics, Environmetrics, MachineLearning, MixedModels, Spatial, TeachingStatistics

Note

This article is converted from a Legacy LaTeX article using the texor package. The pdf version is the official version. To report a problem with the html, refer to CONTRIBUTE on the R Journal homepage.

    References

    P. C. Austin. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis. Statistics in Medicine, 35(30): 5642–5655, 2016.
    H. A. Chipman, E. I. George and R. E. McCulloch. BART: Bayesian additive regression trees. The Annals of Applied Statistics, 4(1): 266–298, 2010.
    S. R. Cole and M. A. Hernán. Constructing inverse probability weights for marginal structural models. American Journal of Epidemiology, 168(6): 656–664, 2008.
E. von Elm, D. G. Altman, M. Egger, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. Annals of Internal Medicine, 147(8): 573–577, 2007.
    P. Feng, X.-H. Zhou, Q.-M. Zou, M.-Y. Fan and X.-S. Li. Generalized propensity score for estimating the average treatment effect of multiple treatments. Statistics in Medicine, 31(7): 681–697, 2012.
    C. Fong, M. Ratkovic and K. Imai. CBPS: Covariate balancing propensity score. 2021. URL https://CRAN.R-project.org/package=CBPS. R package version 0.22.
    J. M. Franklin, W. Eddings, P. C. Austin, E. A. Stuart and S. Schneeweiss. Comparing the performance of propensity score methods in healthcare database studies with rare outcomes. Statistics in Medicine, 36(12): 1946–1963, 2017.
    N. Greifer. Optweight: Targeted stable balancing weights using optimization. 2019. URL https://CRAN.R-project.org/package=optweight. R package version 0.2.5.
    N. Greifer. WeightIt: Weighting for covariate balance in observational studies. 2020. URL https://CRAN.R-project.org/package=WeightIt. R package version 0.10.2.
    J. L. Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1): 217–240, 2011.
    P. W. Holland. Statistics and causal inference. Journal of the American Statistical Association, 81(396): 945–960, 1986.
    D. G. Horvitz and D. J. Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47(260): 663–685, 1952.
J. W. Hogan, M. J. Daniels and L. Hu. A Bayesian perspective on assessing sensitivity to assumptions about unobserved data. In Handbook of missing data methodology, Eds G. Molenberghs, G. Fitzmaurice, M. G. Kenward, A. Tsiatis and G. Verbeke, pages 405–434, 2014. Boca Raton, FL: CRC Press.
    L. Hu and C. Gu. Estimation of causal effects of multiple treatments in healthcare database studies with rare outcomes. Health Services and Outcomes Research Methodology, 21(3): 287–308, 2021.
    L. Hu, C. Gu, M. Lopez, J. Ji and J. Wisnivesky. Estimation of causal effects of multiple treatments in observational studies with a binary outcome. Statistical Methods in Medical Research, 29(11): 3218–3234, 2020a.
    L. Hu, J. Ji, R. D. Ennis and J. W. Hogan. A flexible approach for causal inference with multiple treatments and clustered survival outcomes. arXiv preprint, 2022a. arXiv:2202.08318.
    L. Hu, J. Ji and F. Li. Estimating heterogeneous survival treatment effect in observational data using machine learning. Statistics in Medicine, 40(21): 4691–4713, 2021a.
    L. Hu, J.-Y. Joyce Lin and J. Ji. Variable selection with missing data in both covariates and outcomes: Imputation and machine learning. Statistical Methods in Medical Research, 30(12): 2651–2671, 2021b.
    L. Hu, J.-Y. J. Lin, K. Sigel and M. Kale. Estimating heterogeneous survival treatment effects of lung cancer screening approaches: A causal machine learning analysis. Annals of Epidemiology, 62: 36–42, 2021c.
    L. Hu, B. Liu, J. Ji and Y. Li. Tree-based machine learning to identify and understand major determinants for stroke at the neighborhood level. Journal of the American Heart Association, 9(22): e016745, 2020b.
    L. Hu, J. Zou, C. Gu, J. Ji, M. Lopez and M. Kale. A flexible sensitivity analysis approach for unmeasured confounding with multiple treatments and a binary outcome with application to SEER-Medicare lung cancer data. The Annals of Applied Statistics, 16(2): 1014–1037, 2022b.
    G. W. Imbens. The role of the propensity score in estimating dose-response functions. Biometrika, 87(3): 706–710, 2000.
    G. W. Imbens and D. B. Rubin. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, 2015.
    J. D. Kang, J. L. Schafer, et al. Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical Science, 22(4): 523–539, 2007.
B. P. Kindo, H. Wang and E. A. Peña. Multinomial probit Bayesian additive regression trees. Stat, 5(1): 119–131, 2016.
J. Kruschke. Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press, 2014.
    B. K. Lee, J. Lessler and E. A. Stuart. Weight trimming and propensity score weighting. PLOS One, 6(3): e18174, 2011.
    A. Linden, S. D. Uysal, A. Ryan and J. L. Adams. Estimating causal effects for multivalued treatments: A comparison of approaches. Statistics in Medicine, 35(4): 534–552, 2016.
    R. J. Little. Missing-data adjustments in large surveys. Journal of Business & Economic Statistics, 6(3): 287–296, 1988.
    M. J. Lopez and R. Gutman. Estimation of causal effects with multiple treatments: A review and new ideas. Statistical Science, 32(3): 432–454, 2017.
    D. F. McCaffrey, B. A. Griffin, D. Almirall, M. E. Slaughter, R. Ramchand and L. F. Burgette. A tutorial on propensity score estimation for multiple treatments using generalized boosted models. Statistics in Medicine, 32(19): 3388–3414, 2013.
    G. Ridgeway, D. McCaffrey, A. Morral, B. A. Griffin, L. Burgette and M. Cefalu. Twang: Toolkit for weighting and analysis of nonequivalent groups. 2020. URL https://CRAN.R-project.org/package=twang. R package version 1.6.
    S. Rose and S.-L. Normand. Double robust estimation for multiple unordered treatments and clustered observations: Evaluating drug-eluting coronary artery stents. Biometrics, 75(1): 289–296, 2019.
D. B. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5): 688–701, 1974.
    D. B. Rubin. Randomization analysis of experimental data: The Fisher randomization test comment. Journal of the American Statistical Association, 75(371): 591–593, 1980.
D. B. Rubin. The use of matched sampling and regression adjustment to remove bias in observational studies. Biometrics, 29(1): 185–203, 1973.
    M. J. Van der Laan, E. C. Polley and A. E. Hubbard. Super learner. Statistical Applications in Genetics and Molecular Biology, 6(1): 2007.
    T. Zhou, G. Tong, F. Li, L. Thomas and F. Li. PSweight: Propensity score weighting for causal inference with observational studies and randomized trials. 2020. URL https://CRAN.R-project.org/package=PSweight. R package version 1.1.2.

    Reuse

    Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

    Citation

    For attribution, please cite this work as

    Hu & Ji, "CIMTx: An R Package for Causal Inference with Multiple Treatments using Observational Data", The R Journal, 2022

    BibTeX citation

    @article{RJ-2022-058,
      author = {Hu, Liangyuan and Ji, Jiayi},
      title = {CIMTx: An R Package for Causal Inference with Multiple Treatments using Observational Data},
      journal = {The R Journal},
      year = {2022},
      note = {https://rjournal.github.io/},
      volume = {14},
      issue = {3},
      issn = {2073-4859},
      pages = {213-230}
    }