jomo: A Flexible Package for Two-level Joint Modelling Multiple Imputation

Multiple imputation is a tool for parameter estimation and inference with partially observed data, which is used increasingly widely in medical and social research. When the data to be imputed are correlated or have a multilevel structure — repeated observations on patients, school children nested in classes within schools within educational districts — the imputation model needs to include this structure. Here we introduce our joint modelling package for multiple imputation of multilevel data, jomo, which uses a multivariate normal model fitted by Markov Chain Monte Carlo (MCMC). Compared to previous packages for multilevel imputation, e.g. pan, jomo adds the facility to (i) handle and impute categorical variables using a latent normal structure, (ii) impute level-2 variables, and (iii) allow for cluster-specific covariance matrices, including the option to give them an inverse-Wishart distribution at level 2. The package uses C routines to speed up the computations and has been extensively validated in simulation studies both by ourselves and others.


Introduction
Missing data are ubiquitous in clinical and social research. The most straightforward way to deal with missing data is to exclude all observations with any item missing from the analysis, i.e. a complete records analysis. However, this strategy is at best inefficient; furthermore, unless the probability of a complete record is independent of the outcome (dependent) variable given the covariates, it will lead to biased results. Rubin (1976) described different mechanisms causing missing data: Missing Completely At Random (MCAR; the probability of missingness is unrelated to both observed and unobserved values), Missing At Random (MAR; given the observed data, the occurrence of missing values is independent of the actual values) and Missing Not At Random (MNAR; given the observed data, the occurrence of missing values still depends on the actual values).
Multiple Imputation (MI) is a flexible and practical tool for dealing with missing data. It consists of imputing the missing data several times, creating multiple imputed data sets. The substantive model is then fitted to each of the imputed data sets, and the results are combined for inference using Rubin's rules (Rubin, 1987). An appropriately specified multiple imputation model gives valid, efficient inference if data are MAR (Carpenter and Kenward, 2013, Ch. 2). Key attractions of MI are that it separates the imputation of missing data from the analysis, thus allowing (a) use of the substantive analysis model that we intended to use with fully observed data and (b) inclusion of auxiliary variables in the imputation model, which provide information about the missing values, without having to include them in the substantive analysis model.
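For reference, a sketch of Rubin's rules for a scalar parameter θ, with point estimates and variances from m = 1, . . . , M imputed data sets (standard notation; not reproduced from this article's displays):

```latex
\hat{\theta}_{MI} = \frac{1}{M}\sum_{m=1}^{M}\hat{\theta}_m, \qquad
W = \frac{1}{M}\sum_{m=1}^{M} V_m, \qquad
B = \frac{1}{M-1}\sum_{m=1}^{M}\left(\hat{\theta}_m - \hat{\theta}_{MI}\right)^2,
```

with total variance T = W + (1 + 1/M) B used for standard errors and degrees-of-freedom corrections.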
There are several models and associated algorithms which can be used to impute missing data; Schafer (1997) proposed a joint multivariate normal model, fitted by MCMC. The main assumption of this method is that the partially observed data follow a joint multivariate normal distribution; given this, a Gibbs sampler uses the proper conditional distributions to update the parameters of the model and impute the missing data.
One of the advantages of the joint modelling approach is that it extends naturally to multilevel/hierarchical data structures. Such structures arise commonly, for example, when we have repeated observations (level 1) on individuals (level 2), or students (level 1) nested in schools (level 2).
A number of joint modelling multiple imputation packages have been written: norm (Novo and Schafer, 2013; Schafer and Olsen, 2000) assumes a multivariate normal model for imputation of single-level normal data, cat (Harding et al., 2012) uses a log-linear model to impute categorical data, and mix (Schafer, 2010) uses the general location model (Olkin and Tate, 1961) to impute a mix of continuous and categorical data (Schafer, 1997, Ch. 9). pan (Zhao and Schafer, 2013; Schafer and Yucel, 2002) uses a multilevel multivariate normal model for imputation of multilevel normal data.
As far as we are aware, jomo is the first R package to extend this to allow for a mix of multilevel (clustered) continuous and categorical data. It is derived from, and extends the functionality of, REALCOM (Carpenter et al., 2011), a standalone program written in MATLAB. In REALCOM, binary and categorical variables are handled through the latent normal variable approach presented in Carpenter and Kenward (2013, Ch. 5). The aim of the jomo package is to provide an efficient implementation of the REALCOM imputation model in R, while both (i) speeding up the computations through the use of C sub-routines and (ii) adding the flexibility to specify level-2-specific covariance matrices and level-2 random covariance matrices, the latter following the proposal of Yucel (2011). This paper is organized as follows: after briefly introducing joint modelling multiple imputation and the principal functions within jomo, we present a series of short tutorials, in which we explain how to impute (i) single-level continuous and categorical data, (ii) clustered homoscedastic data, (iii) clustered heteroscedastic data and (iv) level-2 variables. We then present functions to help check the convergence of the underlying MCMC algorithms and outline the suggested workflow for using the package. The next section introduces another R package, mitml, which provides both a different interface to jomo and a number of useful tools to manage and investigate the properties of imputed data sets. The penultimate section provides an overview of the simulation studies and applications in which the package has been used. We conclude with a short discussion.

Joint Modelling Multiple Imputation
To introduce the general ideas of joint modelling multiple imputation, we consider a data set made up of N observations on three continuous variables; we further assume that one of these variables, X, is fully observed, while the other two, Y_1 and Y_2, have missing values. The main idea behind joint modelling imputation is to define a joint multivariate model for all the variables in the data set, to be used for imputation. The simplest joint model is the multivariate normal model. This model uses X, which is completely observed, as a predictor, while Y_1 and Y_2, being partially observed, are included as outcomes. Another possibility is a tri-variate normal model with all variables, including X, treated as outcomes. Choosing between these two models has been a matter of debate in the literature, and is not the focus of this paper. We simply note here that the imputation model needs to be (at least approximately) congenial with the analysis model of substantive interest, in order for multiple imputation to lead to correct inference (Meng, 1994).
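A plausible form of model (1), with subscripts chosen to be consistent with the parameters β and Ω referred to below:

```latex
\begin{aligned}
Y_{1,i} &= \beta_{0,1} + \beta_{1,1} X_i + e_{1,i} \\
Y_{2,i} &= \beta_{0,2} + \beta_{1,2} X_i + e_{2,i} \\
(e_{1,i}, e_{2,i})^T &\sim N(0, \Omega)
\end{aligned}
\qquad (1)
```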
After having chosen the imputation model, Bayesian methods are used to fit it and impute the missing data. In particular, Gibbs sampling is used, dealing with missing data via a data augmentation algorithm (Tanner and Wong, 1987). This consists of repeatedly drawing new values for all the unknowns in the model, one at a time, from the relevant conditional distributions. For model (1), these unknowns are the fixed effects β, the covariance matrix Ω, and the missing data themselves.
The sampler has to be run until it reaches the stationary distribution. Then, the current draw of the missing values is combined with the observed data to make the first imputed data set. Further imputed data sets are obtained by running the sampler for a sufficient number of supplementary iterations to guarantee stochastic independence between consecutive imputations.
Before running the Gibbs sampler, it is necessary to choose starting values for the parameters; the more plausible these starting values are, the faster the sampler converges. Furthermore, being a Bayesian method, prior distributions are used. By default, jomo uses flat priors for all the parameters in the model, except for the covariance matrices, which are given inverse-Wishart priors with the minimum possible number of degrees of freedom in order to give the greatest possible weight to the observed data.
Extending the same methodology to more complicated situations is conceptually relatively straightforward. For example, binary and categorical variables can be included in the imputation model by means of latent normal variables. Under this model, if Y_1 is binary, a latent continuous variable Y*_1 is included in the model, such that Y*_{1,i} > 0 for individuals i for whom Y_{1,i} = 0 and Y*_{1,i} ≤ 0 for individuals for whom Y_{1,i} = 1 (Goldstein et al., 2009). Similarly, a categorical variable with K categories can be represented by K − 1 latent normal variables denoting the differences between categories. In contrast to models without categorical data, this strategy requires constraints on the variance-covariance matrix to guarantee identifiability of the model (for a wider discussion of model identifiability see Carpenter and Kenward (2013, Ch. 5)). To sample from this constrained covariance matrix, Metropolis-Hastings (MH) steps are introduced to augment the standard Gibbs sampler.
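A sketch of the latent normal model for a single binary Y_1 with covariate X (our notation; the variance of the latent residual is fixed at 1 as part of the identifiability constraints mentioned above):

```latex
Y^{*}_{1,i} = \beta_0 + \beta_1 X_i + e_i, \qquad e_i \sim N(0, 1),
```

with the observed category Y_{1,i} determined by the sign of Y*_{1,i}, as described above.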
If observations in the data set are nested in J clusters, model (1) can be readily expanded to include random intercepts. More general random effects structures can be modelled as well, as we will show in a later section.

Model type                   Missing data   Continuous      Binary/categorical   Mixed
Single-level                 Level 1        jomo1con        jomo1cat             jomo1mix
Two-level                    Level 1        jomo1rancon     jomo1rancat          jomo1ranmix
Two-level                    Both levels    jomo2com        jomo2com             jomo2com
Two-level, heteroscedastic   Level 1        jomo1ranconhr   jomo1rancathr        jomo1ranmixhr
Two-level, heteroscedastic   Both levels    jomo2hr         jomo2hr              jomo2hr

Table 1: Summary of the sub-functions called by the main "umbrella function" jomo, given the type of imputation model, the level at which missing data occur, and the type of the variables with missing data (rightmost three columns).
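A plausible form of this random-intercepts extension of model (1) (our notation, parallel to model (1), with j indexing clusters):

```latex
\begin{aligned}
Y_{1,i,j} &= \beta_{0,1} + \beta_{1,1} X_{i,j} + u_{1,j} + e_{1,i,j} \\
Y_{2,i,j} &= \beta_{0,2} + \beta_{1,2} X_{i,j} + u_{2,j} + e_{2,i,j} \\
(u_{1,j}, u_{2,j})^T &\sim N(0, \Omega_u), \qquad (e_{1,i,j}, e_{2,i,j})^T \sim N(0, \Omega_e)
\end{aligned}
```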
It is similarly possible to consider a joint imputation model for variables at level 2, e.g. patient-level variables in longitudinal studies (Carpenter and Kenward, 2013, p. 212), or to allow for level-1 heteroscedasticity, which can be particularly appealing in individual patient data (IPD) meta-analyses, among other applications.

Package structure
The jomo package can be used to impute missing data in continuous and categorical variables in single-level and multilevel data. The main interface to the jomo package is the jomo function, which automatically selects the correct imputation method for the data, depending on (a) the model specification (e.g., single-level vs. multilevel) and (b) the variables in the data set (e.g., continuous variables of type numeric vs. categorical variables as factor vs. a mixture of them). In addition, the sub-functions called by jomo can be called individually; a list of all sub-functions is given below and in Table 1.

1. jomo1con, jomo1cat and jomo1mix: these impute single-level continuous, categorical and mixed data sets, respectively. jomo1con is very similar to the imp.norm function of the norm package; however, jomo1cat and jomo1mix use the latent normal variables approach described in Carpenter and Kenward (2013, Ch. 5) to impute categorical variables and a mix of continuous and categorical variables, respectively.

jomo: tutorial with single-level data
We begin with single-level data sets. We first assume all variables with missing data are continuous, that the substantive model is a linear regression of one variable on some or all the others, and that each variable is approximately normally distributed conditional on the others (so that a joint multivariate normal model is an appropriate choice for imputation).
A key attraction of multiple imputation is that any variables in the data set that are not in the substantive model can still be included in the imputation model. If such auxiliary variables are good predictors of missing values, they will help recover missing information. If they are also predictors of the data being missing, they may correct bias. Recall that (simply speaking) the MAR assumption states that, given the observed data, the probability that a value is missing is conditionally independent of the actual value. Therefore, inclusion of judiciously chosen auxiliary variables (even ones that are not themselves complete) can improve the plausibility of the MAR assumption.
In joint modelling imputation (Carpenter and Kenward, 2013, Ch. 3), partially observed variables are dependent variables. However, as hinted above, with fully observed variables we can choose either to condition on them as predictors or to include them in the (multivariate) response. The software is equally comfortable with both options, and for single-level data the choice makes little difference in practice. However, it has a bigger impact for clustered data (Quartagno and Carpenter, 2016; Grund et al., 2016b), as we will see in the multilevel imputation section.
Once we have decided on the variables to include in the imputation model, and whether to condition on any fully observed variables as covariates, imputation using the jomo function is straightforward.
For illustration, we use an educational data set of students' test scores (JSPmiss), which is a subset of the Junior School Project (Mortimore et al., 1988). The fully observed data set is freely available with the MLwiN software (Rasbash et al., 2017) and the related R package R2MLwiN (Zhang et al., 2016); a partially observed version, in which data are MAR, is included with this package. This data set has eight variables: a school and an individual identifier, sex, fluency in English language (3 categories, fluent), a test score at Year 1 (ravens) and at Year 3 (english), and a binary behavioral score at Year 3 (behaviour). Additionally, a constant (cons) is provided to help the user, as we will see later in this tutorial. The data set has 1119 observations, of which 236 have missing data on the outcome english, 235 have missing data on fluent, and so on. For the purpose of this example, we ignore clustering by school and take as the substantive analysis model a linear regression of english on ravens and sex. To illustrate the use of jomo with continuous variables, let i = 1, . . . , 1119 index observations and consider a joint imputation model in which both responses in a bivariate normal model, english and ravens, are partially observed continuous variables, and the binary covariate, sex, is fully observed. This is identical to model (1) with X being sex.
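The two models just described can be sketched as follows (a reconstruction; subscripts chosen to match the posterior means reported after imputation):

```latex
\text{Substantive model:}\quad
\text{english}_i = \beta_0 + \beta_1\,\text{ravens}_i + \beta_2\,\text{sex}_i + \varepsilon_i,
\qquad \varepsilon_i \sim N(0, \sigma^2)
```

```latex
\text{Imputation model:}\quad
\begin{aligned}
Y_{\text{english},i} &= \beta_{0,e} + \beta_{1,e} X_{\text{sex},i} + e_{e,i} \\
Y_{\text{ravens},i}  &= \beta_{0,r} + \beta_{1,r} X_{\text{sex},i} + e_{r,i} \\
(e_{e,i}, e_{r,i})^T &\sim N(0, \Omega)
\end{aligned}
```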
To impute the missing values we proceed as follows. First, in order to guarantee that the correct function is used, we need to make sure that continuous variables are stored as numeric vectors in the data frame and binary/categorical variables as factors. We can easily check this as follows:

# Check that ravens is numeric:
class(JSPmiss$ravens)
# [1] "numeric"
# Were ravens not numeric, we could convert it as follows:
JSPmiss <- within(JSPmiss, ravens <- as.numeric(ravens))

# Define the data.frame with the outcomes
Y <- JSPmiss[, c("english", "ravens")]

# The imputation model requires an intercept variable (simply a column of 1s)
# to include in the covariate matrix X. The JSPmiss data set already contains
# such a variable (cons). Were it not present, we could define it as follows:
JSPmiss$cons <- 1

# Define the data.frame with the covariates:
X <- JSPmiss[, c("cons", "sex")]

# Set the seed (so we can replicate the results exactly if desired)
set.seed(1569)

# Run jomo and store the imputed data sets in a new data frame, imp
imp <- jomo(Y = Y, X = X, nburn = 1000, nbetween = 1000, nimp = 5)

Running this code with the above seed, output is printed on screen. The first line informs us that, as we did not pass any clustering indicator to the function, jomo is using single-level imputation. The second line tells us which of the sub-functions was used; as we have two numeric dependent variables, jomo1con has been chosen.
Then, the software prints a '.' for each 10 burn-in updates of the MCMC sampler, followed by a notification that it has created the first imputed data set. It then prints a '.' for each 10 further updates of the MCMC sampler, before imputations 2, 3, 4, and 5. The default values for the burn-in and between-imputation updates are both 1000, resulting in 100 dots printed in each case.
Finally, the software prints the estimated posterior means of the regression coefficients for english (β_{0,e}, β_{1,e}), ravens (β_{0,r}, β_{1,r}), and the elements of the covariance matrix. The same results can be obtained by running jomo1con in place of jomo with exactly the same arguments. In general, to replicate the results exactly, you will have to (i) have the data sorted in the same order and (ii) use the same seed. Sometimes we will wish to suppress the output generated by jomo; the option output = 0 controls this.
Running jomo creates the object imp, which is a data frame with the following names:

names(imp)
# [1] "english" "ravens" "cons" "sex" "id" "Imputation"

The names include the original variable names, inherited from JSPmiss, and a new variable Imputation, which indexes the original data (0) and the imputed data sets. We have five imputed data sets. Let's look at the top of the original data, and the top of the fifth imputation:
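A sketch of one way to inspect these (assuming the object imp created above; the Imputation column distinguishes original from imputed rows):

```r
# Original data (Imputation == 0):
head(imp[imp$Imputation == 0, ])
# Fifth imputed data set (Imputation == 5):
head(imp[imp$Imputation == 5, ])
```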

Starting values and prior distributions
To impute missing data, jomo fits a Bayesian model using MCMC. Therefore, we need to provide starting values and priors for each parameter in the model. The default starting values are a matrix of zeros for the fixed effects parameter β, and the identity matrix for the covariance matrix. In the majority of situations, changing the starting values will not change the results materially. Nevertheless, good starting values may considerably reduce the number of iterations needed for the algorithm to converge.
In order to represent the maximum uncertainty and to give the greatest weight to the data, jomo assumes flat improper priors for all the parameters in the model, except the covariance matrix. For this, an inverse-Wishart prior is used, with degrees of freedom set to the minimum possible, i.e. the dimension of the covariance matrix; this represents the greatest uncertainty (least information).
Changing the scale matrix of the inverse-Wishart prior may have some impact when we have a very small number of observations. In our example, with 1119 observations and only two outcomes, the impact is immaterial.
We now illustrate how to set the starting values for all the parameters and the scale matrix of the inverse-Wishart prior for the covariance matrix, as well as the number of burn-in iterations for the MCMC sampler, the number of iterations between imputations, and the number of imputations:

# Set starting values for fixed effect parameters beta
beta.start <- matrix(1, 2, 2)
# Set starting value for the covariance matrix
l1cov.start <- diag(2, 2)
# Set scale matrix of the inverse-Wishart prior for the covariance matrix:
l1cov.prior <- diag(2, 2)
# Set seed to get results below
set.seed(1569)
# Impute:
# [Note for new R users: the inputs set above have the same names as their
# corresponding options in the function. Hence, when we set <option> = <object
# name>, we have the same string on both sides of = ]
imp2 <- jomo(Y = Y, X = X, beta.start = beta.start, l1cov.start = l1cov.start,
             l1cov.prior = l1cov.prior, nburn = 1000, nbetween = 1000, nimp = 5)

We see no material change from the previous results. In simple problems, the default burn-in of nburn = 1000 is often enough for the sampler to converge. Further below, we show how to check visually whether the stationary distribution has been reached.

Analysing the imputed data
Recall our substantive linear regression model above. Once we have created our imputed data sets, we follow the usual multiple imputation procedure and fit our substantive model to each imputed data set, before summarising the results for final inference using Rubin's rules, which are implemented in mitools:

library(mitools) # load if not done so above
# Use the object imp which we created with the original run above. First
# convert the data frame of results imp to a list of imputations
imp.list <- imputationList(split(imp, imp$Imputation)[-1])
# Fit model to each of the 5 imputed data sets
fit.imp <- with(data = imp.list, lm(english ~ ravens + sex))
# Extract coefficients and variances
coefs <- MIextract(fit.imp, fun = coef)
vars <- MIextract(fit.imp, fun = function(x) diag(vcov(x)))
# Combine the results using Rubin's rules
summary(MIcombine(coefs, vars))

Alternative implementations of Rubin's rules include those in the BaBooN package (Meinfelder, 2011) and testEstimates from mitml, which we present more in depth in the penultimate section of this paper. Other packages with their own implementation of Rubin's rules include Amelia, mi, and lavaan.survey.

Categorical variables
Fully observed binary covariates can be included in the X matrix of the imputation model as type numeric, exactly as with sex in this example. To include fully observed categorical covariates with three or more categories, appropriate dummy variables have to be created. For this purpose, we might use the R package dummies (Brown, 2012) or the function contrasts in base R.
jomo also readily imputes a mix of binary, categorical and continuous variables. This is done using a latent normal model (see Goldstein et al., 2009; Carpenter and Kenward, 2013, Ch. 5). To illustrate this, we continue to use the data set JSPmiss, but now also impute the partially observed fluency level (3 categories). The underlying joint imputation model remains a multivariate normal model, but fluent is represented by two latent normal variables, f_{1,i} and f_{2,i}. As highlighted above, in order for this model to be estimable, we need to constrain the variance-covariance matrix of (f_{1,i}, f_{2,i})^T (i.e. the bottom-right 2 × 2 submatrix of Ω) to be

( 1    0.5 )
( 0.5  1   ).
These constraints are automatically implemented in the software.
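Model (5), referred to below, can be sketched as follows (a reconstruction; the subscript labels are chosen to match the posterior means reported after imputation):

```latex
\begin{aligned}
Y_{\text{english},i} &= \beta_{0,e} + \beta_{1,e} X_{\text{sex},i} + e_{e,i} \\
Y_{\text{ravens},i}  &= \beta_{0,r} + \beta_{1,r} X_{\text{sex},i} + e_{r,i} \\
Y^{*}_{f_1,i}        &= \beta_{0,f1} + \beta_{1,f1} X_{\text{sex},i} + e_{f1,i} \\
Y^{*}_{f_2,i}        &= \beta_{0,f2} + \beta_{1,f2} X_{\text{sex},i} + e_{f2,i} \\
(e_{e,i}, e_{r,i}, e_{f1,i}, e_{f2,i})^T &\sim N(0, \Omega)
\end{aligned}
```

with the bottom-right 2 × 2 submatrix of Ω constrained as described above.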
The code for imputation is essentially the same as before, but now we need to make sure that fluent, being a categorical variable, is included in the dependent-variable data frame as a factor:

# Convert "fluent" to factor
JSPmiss <- within(JSPmiss, fluent <- factor(fluent))
# Define the data frame with the dependent (outcome) variables for the imputation model:
Y <- JSPmiss[, c("english", "ravens", "fluent")]
# Define the data.frame with the (fully observed) covariates of the imputation model
X <- JSPmiss[, c("cons", "sex")]
# Fit the model and impute the missing data:
set.seed(1569)
imp <- jomo(Y = Y, X = X)

This will take a little longer than the previous examples. The output illustrates that jomo recognized that fluent was a factor variable and therefore used the sub-function for the imputation of mixed data types.
The matrix of posterior means of the fixed effect estimates links directly to (5): specifically, β̂_{0,e} = 38.26, β̂_{1,e} = 6.36, . . . , β̂_{0,f2} = −1.84, β̂_{1,f2} = 0.18. Likewise, the posterior means of the covariance matrix terms are labelled, and correspond directly to (5). Calling the relevant sub-function (jomo1mix) directly is again possible, but a little more complex, because continuous and categorical outcomes must be passed as separate arguments.
We can specify starting values explicitly if we wish. To do so, we need to specify n − 1 starting β values for each n-category variable and a conformable starting value for the covariance matrix. For example, in the present case, fluent has 3 categories, which are modelled with 2 latent normal variables. As a result, β is a 2 × 4 matrix of regression coefficients (two covariates, four dependent variables), and the covariance matrix is of size 4 × 4 (two continuous plus two latent dependent variables). So, continuing with Y and X defined as above:

# Starting value for beta: specify a 2 by 4 matrix of zeros
beta.start <- matrix(0, 2, 4)
# Starting value for the covariance matrix; the software disregards values that
# conflict with the identifiability constraints:
l1cov.start <- diag(2, 4)
set.seed(1569)
imp <- jomo(Y = Y, X = X, beta.start = beta.start, l1cov.start = l1cov.start)

As our starting values differ from the defaults, we get slightly different estimates of the posterior means.
While the software is designed for unordered categorical data, it can be used for ordinal data too. Our simulation results show that if variables are truly ordinal it gives good results with only a marginal loss in efficiency (Quartagno and Carpenter, 2019).

jomo: tutorial with multilevel data
When we wish to impute missing data with a multilevel substantive model, our imputation model should itself preserve the multilevel structure (Lüdtke et al., 2017;Andridge, 2011); one key benefit of jomo is that it allows us to do this.
Three different approaches to multilevel multiple imputation have been implemented in jomo, providing a flexible framework for the treatment of missing data in multilevel data sets. They allow:

1. imputation with a common level-1 covariance matrix across level-2 units (the default);
2. imputation with cluster-specific level-1 covariance matrices; and
3. imputation allowing the level-1 covariance matrix to be randomly distributed across level-2 units, following the proposal by Yucel (2011), as developed by Quartagno and Carpenter (2016).
Option (2) requires sufficient data within each level-2 unit to estimate the covariance matrix; option (3) is a practical choice when we suspect there is heterogeneity across level-2 units, but there is insufficient information within each level-2 unit for option (2).
We illustrate the software again with the JSPmiss data set, distributed with the package. Let j index schools and i students within schools. Our substantive model is:

english_{i,j} = β_0 + β_1 ravens_{i,j} + β_2 sex_{i,j} + β_3^T fluent_{i,j} + u_j + e_{i,j},   u_j ~ N(0, σ_u²),   e_{i,j} ~ N(0, σ_e²),

where fluent_{i,j} denotes the dummy variables for the categories of fluent. We now use multilevel imputation for the missing data. This is done by extending the joint imputation model described above to the multilevel setting. As before, the variables english, ravens and fluent are responses, and the fully observed variable sex is a covariate. The imputation model has random intercepts at level 2 and a common level-1 covariance matrix (approach 1 above).
Y_{english,i,j} = β_{0,e} + β_{1,e} X_{sex,i,j} + u_{e,j} + e_{e,i,j}
Y_{ravens,i,j} = β_{0,r} + β_{1,r} X_{sex,i,j} + u_{r,j} + e_{r,i,j}

where the vector of school-level random intercepts u_j follows N(0, Ω_u) and the vector of level-1 residuals follows N(0, Ω_e), with analogous equations for the two latent normal variables representing fluent. This is the same as model (5), apart from the addition of the covariance matrix Ω_u for the vector of random intercepts, u_j.
Apart from specifying the level-2 identifier, imputing the missing values proceeds as before:

# Define cluster/group indicator
clus <- JSPmiss$school
# Define the data.frame with outcomes of the imputation model
JSPmiss <- within(JSPmiss, fluent <- factor(fluent))
Y <- JSPmiss[, c("english", "ravens", "fluent")]
# Define the data frame with covariates of the imputation model
X <- JSPmiss[, c("cons", "sex")]
# Run jomo
set.seed(1569)
imp <- jomo(Y = Y, X = X, clus = clus)

Simply including clus was enough to tell jomo that we have a multilevel structure and to impute accordingly. The assumed model includes a random intercept for each outcome variable; we will see later in this section how to specify the design matrix for the random effects differently. The output follows the same format as that from fitting (5) above, with two additions:

1. we obtain the posterior means of the u_j, the random intercepts for each response, for each of the j = 1, . . . , 50 schools (for example, the output includes the posterior means for school 2);
2. we obtain the posterior mean of the level-2 covariance matrix, Ω_u.

Analysing the imputed data
To fit the substantive multilevel model to each imputed data set, we proceed as before:

library(lme4) # for lmer
imp.list <- imputationList(split(imp, imp$Imputation)[-1])
# Fit model to each of the 5 imputed data sets
fit.imp <- with(data = imp.list,
                lmer(english ~ ravens + sex + factor(fluent) + (1 | clus)))

In order to get multiple imputation inference for the random coefficients, we recommend using the mitml package, as described in the penultimate section below.

Design matrix for random effects
We now show how to specify additional random effects in the imputation model, beyond the random intercept that is included by default. This is done by specifying the design matrix of the random effects, Z. When Z is not specified, it defaults to a single column of ones, i.e. a random intercept. When Z is specified by the user, the intercept column must be included explicitly if a random intercept is desired.
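As a minimal sketch (the choice of ravens as the variable with a random slope is purely illustrative; Y, X and clus are as defined above):

```r
# Design matrix for the random effects: intercept (cons) plus a random slope
# for ravens
Z <- JSPmiss[, c("cons", "ravens")]
set.seed(1569)
imp <- jomo(Y = Y, X = X, Z = Z, clus = clus)
```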

Starting values and prior distributions
The starting values of the sampling algorithm can again be overridden by the user, potentially leading to better sampling behaviour and faster convergence. In comparison with the single-level case, we now have two additional sets of parameters: (i) the matrix of random effects, whose rows contain the random effects vectors for all level-2 units (u.start), and (ii) the level-2 covariance matrix (l2cov.start). Note that with small numbers of level-2 units, the impact of the scale matrix of the prior for the level-2 covariance matrix can be substantial. We proceed as follows:

beta.start <- matrix(1, 2, 4) # initialise all fixed effects to 1
u.start <- matrix(0.5, nlevels(JSPmiss$school), 4) # initialise all random effects to 0.5
l1cov.start <- diag(2, 4) # initialise diagonal level-1 covariance matrix with 2s
l2cov.start <- diag(2, 4) # initialise diagonal level-2 covariance matrix with 2s
l2cov.prior <- diag(2, 4) # set scale matrix of inverse-Wishart prior for the
                          # level-2 covariance matrix
set.seed(1569)
imp <- jomo(Y = Y, X = X, clus = clus, beta.start = beta.start, u.start = u.start,
            l1cov.start = l1cov.start, l2cov.start = l2cov.start,
            l2cov.prior = l2cov.prior, nburn = 2000, nbetween = 1000, nimp = 5)
# Output omitted; as these are not all the default values, the posterior means
# differ from the previous results by Monte Carlo error.

Cluster-specific covariance matrices
In some cases, it is implausible that all of the clusters share the same level-1 covariance matrix. For example, when aggregating individual patient data from different studies to perform a meta-analysis, it is often reasonable to assume that covariance matrices are different across studies.
Continuing to use (10) as an example, the only difference from before is that the level-1 covariance matrix is no longer modelled as constant but as different across level-2 units: instead of Ω_e, we have Ω_{e,j}, j = 1, . . . , 50. We fit this model by specifying the additional argument meth = "fixed":

# Define the data.frame with outcomes of the imputation model
Y <- JSPmiss[, c("english", "ravens", "fluent")]
# Define the data.frame with covariates of the imputation model
X <- JSPmiss[, c("cons", "sex")]
# Define cluster/group indicator
clus <- JSPmiss$school
# Fixed cluster-specific covariance matrices
imp2 <- jomo(Y = Y, X = X, clus = clus, meth = "fixed")
# Output omitted

Note that the output is now considerably longer, as the posterior mean of each of the level-1 covariance matrices is reported. Compared to the previous models, this model is more complex and requires the estimation of a larger number of parameters.

Random cluster-specific covariance matrices
There are several reasons why we may wish to go beyond the setting above and allow the covariance matrices to be random across level-2 units. For example, we may have reason to believe that different level-2 units have different covariance matrices, but the number of observations in some of these level-2 units may be insufficient to estimate level-2-specific covariance matrices reliably. In this case, sharing information across level-2 units is desirable. Another situation is when a variable is entirely missing in some clusters, so that it is necessary to share information with clusters where it was observed, through the specification of a hierarchical distribution for the covariance matrices.
Continuing to use (10) as an example, the cluster-specific covariance matrices are now assumed to follow a common distribution: instead of unrestricted Ω_{e,j}, we have Ω_{e,j} ∼ IW(a, S), j = 1, . . . , 50. Here, a and S are the degrees of freedom and the scale matrix of the inverse-Wishart distribution, respectively.
We can fit this model simply by specifying the option meth = "random":

imp3 <- jomo(Y = Y, X = X, clus = clus, meth = "random")

and then analyse the imputed data sets in the usual way. For full details of the algorithm used to fit the random covariance matrices, initially proposed by Yucel (2011), see the appendix of Quartagno and Carpenter (2016).
The sub-function called by jomo for this type of data is jomo1ranmixhr, which can be called directly but with a slightly more complex syntax.
With random covariance matrices, we have one further parameter, a, denoting the degrees of freedom of the inverse-Wishart distribution for the cluster-specific covariance matrices. The default starting value for this parameter, a.start, is the minimum possible, i.e., the dimension of the level-1 covariance matrix. This is also the default for the hyperparameter a.prior of the chi-square prior distribution for a.
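For illustration only, these defaults could be set explicitly when calling jomo; the call below is a hypothetical sketch that simply restates the defaults for a four-dimensional level-1 covariance matrix, using the a.start and a.prior arguments described above.

```r
# Hypothetical sketch: explicitly setting the starting value (a.start) and
# chi-square prior hyperparameter (a.prior) for the degrees of freedom a.
# Both default to the dimension of the level-1 covariance matrix (here, 4).
imp3b <- jomo(Y = Y, X = X, clus = clus, meth = "random",
              a.start = 4, a.prior = 4)
```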

Imputing level-2 variables
In many applications, we have variables describing aspects of the level-2 units, and these may also have missing values. For example, in longitudinal studies, time-independent variables related to individuals (the level-2 units), such as sex or the baseline variable ravens, may be affected by missing data. We can impute any missing level-2 values naturally with jomo (Carpenter and Kenward, 2013, Ch. 9). As described above, we can use either a single common or multiple cluster-specific level-1 covariance matrices.
To illustrate this, we use a new data set, ExamScores. The fully observed version of this data set is again available with MLwiN and R2MLwiN, and it represents a subset of a larger data set of examination results from six inner London Education Authorities. The partially observed version that we use here is available with jomo. As in the previous example, this data set contains data from pupils (level 1) clustered in schools (level 2). Some of the variables are related to students at level 1 (normexam, a normalised version of the exam score at age 16, and standlrt, the London Reading Test (LRT) score at age 11). The other variables describe features of the schools at level 2, for example avslrt (continuous), representing the average LRT score for pupils in a particular school.
Fitting this model is very similar to fitting previous models; we simply have to define the level-2 variables.
# Define data.frame with level-1 outcomes of imputation model
Y <- ExamScores[, c("normexam", "standlrt")]
# Define data.frame with level-2 outcomes of imputation model

Checking convergence of the sampler

Running the corresponding .MCMCchain function with the same arguments updates the sampler nburn times, but does not create any imputed data sets. Instead, the output of this function is a list containing three elements:
• finimp: the final state of the data set, which would be the first imputation if we ran the jomo function with nburn burn-in iterations;
• collectbeta: a three-dimensional array containing the fixed-effect parameter draws at each of the nburn iterations;
• collectomega: a three-dimensional array containing the level-1 covariance matrix draws at each of the nburn iterations.
When running the corresponding .MCMCchain functions for multilevel imputation, we also have:
• collectu: a three-dimensional array containing the random-effects draws at each of the nburn iterations;
• collectcovu: a three-dimensional array containing the level-2 covariance matrix draws at each of the nburn iterations.
We can then check the convergence of the sampler by looking at the trace plot for each parameter. For example, in Figure 1 (left panel), we can see the plot for β_e,0, which we obtain by running:

plot(imp$collectbeta[1, 1, 1:5000], type = "l",
     ylab = expression(beta["e,0"]), xlab = "Iteration number")

In this case, we can see that a burn-in of 100-500 iterations is reasonable; the sampler clearly converges very quickly.

Using jomo in practice
The .MCMCchain functions only register a single imputation, but the state of the sampler at this point is captured. This provides a mechanism for combining multiple runs of .MCMCchain and/or jomo in a flexible manner, for example, to obtain the full set of posterior draws for the model parameters with multiple runs of .MCMCchain, or to generate mildly informative prior distributions to be used with jomo. Specifically, at the end of the .MCMCchain run, the following objects capture the state of the MCMC sampler:
• start.imp, for the level-1 variables with missing values;
• (where present) l2.start.imp, for the level-2 variables with missing values, and
• finimp.latnorm: the final state of the imputed data set, using latent normals in place of categorical variables.
To do this, we run jomo.MCMCchain first, using the default prior. We retain the last draw of the covariance matrix (or the posterior mean of the latter part of the chain), use it to assign values to l1cov.prior or l2cov.prior, and then apply jomo as usual. Specifically:

# Define data frame as usual
JSPmiss <- within(JSPmiss, fluent <- factor(fluent))
Y <- JSPmiss[, c("english", "ravens", "fluent")]
X <- JSPmiss[, c("cons", "sex")]

# Run jomo.MCMCchain with default prior
imp1 <- jomo.MCMCchain(Y = Y, X = X)
# Collect posterior mean of covariance matrix
l1cov.guess <- apply(imp1$collectomega, c(1, 2), mean)
# Multiply by degrees of freedom, i.e. the dimension of the matrix (4),
# to get the scale matrix
l1cov.prior <- l1cov.guess * 4
# Run jomo
imp <- jomo(Y = Y, X = X, l1cov.prior = l1cov.prior)

However, if a proper prior guess for the values of all the parameters were available, this would be preferable, as it would avoid using the same data twice: once to fit the model and once to estimate the hyperparameters of the priors.
When using jomo, we recommend the following workflow:
1. Before running the imputation model (which may take some time), perform a "dry run" to check that the software is fitting the model we intended. We can do this using the .MCMCchain function with nburn = 2 and checking the output.
2. Re-run the same function for a larger number of iterations (e.g., 5000) and analyse the resulting trace and autocorrelation plots to choose a sensible number of burn-in and between-imputation iterations for the final imputation process.
3. Run the jomo function for the chosen numbers of iterations.
4. Fit the substantive model to each of the imputed data sets and apply Rubin's rules.
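In code, this workflow might look like the following sketch for a single-level model; the numbers of iterations are illustrative choices, not recommendations.

```r
# 1. Dry run: two updates only, to check the model specification
check <- jomo.MCMCchain(Y = Y, X = X, nburn = 2)

# 2. Longer run: inspect trace plots to choose burn-in and
#    between-imputation iterations
chain <- jomo.MCMCchain(Y = Y, X = X, nburn = 5000)
plot(chain$collectbeta[1, 1, ], type = "l")

# 3. Final run with the chosen numbers of iterations
imp <- jomo(Y = Y, X = X, nburn = 500, nbetween = 500, nimp = 5)

# 4. Fit the substantive model to each imputed data set and
#    pool the results with Rubin's rules
```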

mitml: an alternative interface to jomo
The mitml package (Grund et al., 2016c) provides an alternative interface to joint modelling multiple imputation with jomo. Originally created as an interface to the pan package, mitml also provides access to most of the features implemented in jomo and includes a number of additional tools for managing, visualising, and analysing multiply imputed data sets.

Specification of the imputation model
The main interface to jomo is provided by the function jomoImpute, which offers two convenient ways of specifying the imputation model. The first option uses a formula-based syntax similar to that of packages for multilevel modelling such as lme4, nlme (Pinheiro et al., 2017), and others. The following operators can be used to define such a formula:
~ : separates the dependent variables (left-hand side) and predictor variables (right-hand side) of the imputation model;
+ : adds dependent and predictor variables to the model;
* : adds interactions of two or more predictors to the model;
| : specifies the cluster indicator and adds cluster-specific random effects to the model (e.g., 1|school), and
I() : defines additional transformations of predictor variables to be included in the model.
For example, to fit the imputation model in (10) for the substantive model in (8) with JSPmiss, the model formula can be specified as:

fml <- english + ravens + fluent ~ sex + (1|school)

The imputation is then run with jomoImpute by specifying the incomplete data, the imputation model, the number of imputations (m), and the numbers of iterations for burn-in (n.burn) and between imputations (n.iter). Like jomo, jomoImpute requires that categorical variables are formatted as factors.
# Convert "fluent" to factor JSPmiss <-within(JSPmiss, fluent <-factor(fluent)) # Run imputation imp <-jomoImpute(data = JSPmiss, formula = fml, n.burn = 1000, n.iter = 1000, m = 5, seed = 1569) In addition, imputation models can be run independently in subsets of the data. For this purpose, jomoImpute includes an optional group argument, denoting the name of a variable in the data set. If specified, the imputation model is run separately for each level of group.
As an alternative to specifying a formula, the imputation model can be specified with the type argument of jomoImpute. The type argument is an integer vector denoting the role of each variable in the imputation model. The following values are supported:
1 : dependent variables (fully or partially observed);
2 : predictor variables (fully observed) with fixed effects;
3 : predictor variables (fully observed) with fixed and random effects;
-1 : (optional) grouping variable; if specified, imputations are run separately within each group;
-2 : cluster indicator, and
0 : variables excluded from the imputation model.
In applications with missing data at both level 1 and 2, formula and type are specified as a list of two formulas or type vectors, denoting the imputation model for variables at level 1 and 2, respectively.
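As a sketch of this syntax with the ExamScores data, where avslrt is a level-2 variable, the pair of formulas might be written as follows. The exact form of the level-2 formula here is an assumption about the mitml conventions, offered for illustration rather than taken from this example.

```r
# First element: imputation model for the level-1 variables;
# second element: imputation model for the level-2 variables.
fml2 <- list(normexam + standlrt ~ 1 + (1|school),
             avslrt ~ 1)
```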
# Fit the substantive model to each of the imputed data sets
fit.imp <- with(imp.list, lmer(english ~ ravens + sex + fluent + (1|school)))

Finally, mitml allows pooling the results obtained from the imputed data sets. Many different pooling methods are supported by mitml, including Rubin's rules with and without correction for smaller samples (testEstimates), which can be used to pool the estimates of individual parameters such as fixed effects and variance components; pooled Wald and likelihood-ratio tests (LRTs) for multiple parameters and model comparisons (testModels, anova); and tests of constraints on the model parameters via the "delta method" (testConstraints; see Casella and Berger, 2002).
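For example, pooled estimates from the models fitted above could be obtained as follows; the var.comp argument shown here is an assumption about the testEstimates interface, requesting that variance components be pooled alongside the fixed effects.

```r
# Pool parameter estimates across imputations using Rubin's rules
testEstimates(fit.imp, var.comp = TRUE)
```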
Note that, in order to use mitml directly with jomo, the imputed data must be converted to the mitml.list format. This conversion can be achieved with the function jomo2mitml.list.
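For instance, if imp is the set of imputations returned by jomo, the conversion might look like this minimal sketch:

```r
# Convert jomo output to a list of imputed data sets (mitml.list format)
imp.list <- jomo2mitml.list(imp)
```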

Simulations and applications
The jomo package has been extensively evaluated in simulation studies and has been used in various applications. In this section, we provide a brief overview of these studies. For continuous data, Quartagno and Carpenter (2016); Audigier et al. (2018); Grund et al. (2018c,b) showed that jomo provides accurate parameter estimates and inferences. Quartagno and Carpenter (2019); Audigier et al. (2018); Grund et al. (2018c,b) provided similar results for binary and categorical data, and Quartagno and Carpenter (2019) showed that the same procedures can be used for categorical and ordinal data. Similar findings were reported by Grund et al. (2018a,b,c) for missing data at level 2 and by Quartagno and Carpenter (2016) for applications with group-specific fixed or random covariance matrices at level 1. Further, jomo has been used to study the performance of MI for handling missing data in clustered randomised trials (Hossain et al., 2017a,b) and matched case-control studies (Seaman and Keogh, 2015). Finally, it has been used in applications, particularly, but not exclusively, for imputation of missing data in individual patient data meta-analyses (e.g., Bloos et al., 2017).

Conclusions and further developments
In this article, we have introduced a flexible new package for performing joint modelling multiple imputation for multilevel data. This package provides three important contributions: (i) it handles mixed data types, including continuous, categorical and binary data, in a flexible way; (ii) it allows for a common level-1 covariance matrix across level-2 units, cluster-specific level-1 covariance matrices, or random level-1 covariance matrices, and (iii) it gives valid imputation of missing values in level-2 variables. This makes jomo an effective choice for treating missing data in many applications, including single-level and multilevel data, cross-sectional and longitudinal data, and meta-analyses with individual participant data.
As with all statistical techniques, multiple imputation has to be used carefully. In particular:
• We should check that the stationary distribution has been reached before acting on our results. As described above, the package provides tools to facilitate this.
• With many (level-1) variables and relatively few observations, a careful choice of the prior for the level-1 covariance matrix is important. We recommend a weakly informative prior and, in particular, following the strategy described above, running the .MCMCchain functions to find a sensible choice for the scale matrices of the inverse-Wishart priors.
• Like most imputation software, ours assumes that data are MAR. If data are MNAR, the results of analyses under MAR may be biased.
In this paper we have presented functions for multilevel imputation, which make better and more efficient use of all the available data than ad-hoc strategies, such as including the cluster indicator as a fixed effect in the imputation model or imputing separately by cluster. The first approach has been discussed in various publications (Lüdtke et al., 2017; Audigier et al., 2018; Drechsler, 2015), which broadly concluded that it is unsatisfactory with small clusters and low intra-cluster correlation. Additionally, under this approach, dealing with random slopes is problematic and it is not possible to impute systematically missing variables. Similar considerations are likely to hold for the second strategy, imputing separately by cluster, with the additional complication that level-2 variables cannot be imputed with this method.
If we are imputing a variable which has a random slope in the substantive model (Grund et al., 2016a), then (i) as usual, this variable will be a response in the imputation model, and (ii) we should also allow its association with the outcome to be cluster-specific in the imputation model, by allowing the level-1 covariance matrix to be random across level-2 units. However, although this approach performs better than a simpler one using a common covariance matrix, it is not perfectly compatible with the substantive model (Enders et al., 2018), and functions for substantive-model-compatible imputation (Goldstein et al., 2014) should be preferred for imputing missing data in these settings, when possible. These have recently been added to jomo and will be presented in a second paper. When interactions or non-linear terms are present in the model of interest, ignoring them in the imputation model may lead to bias; instead, they should be included as covariates (Carpenter and Kenward, 2013, p. 130). When these terms involve partially observed variables, the solution again consists in using the substantive-model-compatible functions.
When using the functions for random cluster-specific covariance matrices, users should note that this specifies an inverse-Wishart distribution for the level-1 covariance matrices across the level-2 units. Our simulations (Quartagno and Carpenter, 2016) suggest that, when this assumption is not appropriate, there will be some (usually immaterial) loss of efficiency. In principle, jomo could be extended to incorporate other distributions.
All the illustrated functions make use of either Gibbs or Metropolis-Hastings sampling; however, other sampling algorithms, such as Hamiltonian Monte Carlo, may provide interesting alternatives in the future.
Future updates and additions to the package will be advertised on www.missingdata.org.uk, together with an up-to-date list of publications related to the package. We hope the package is useful to readers and welcome their feedback.