bqror: An R package for Bayesian Quantile Regression in Ordinal Models

This article describes an R package bqror that estimates Bayesian quantile regression for ordinal models introduced in Rahman (2016). The paper classifies ordinal models into two types and offers computationally efficient, yet simple, Markov chain Monte Carlo (MCMC) algorithms for estimating ordinal quantile regression. The generic ordinal model with 3 or more outcomes (labeled OR I model) is estimated by a combination of Gibbs sampling and the Metropolis-Hastings algorithm, whereas an ordinal model with exactly 3 outcomes (labeled OR II model) is estimated using Gibbs sampling only. In line with the Bayesian literature, we suggest using the marginal likelihood for comparing alternative quantile regression models and explain how to compute it. The models and their estimation procedures are illustrated via multiple simulation studies and implemented in two applications. The article also describes several other functions contained within the bqror package, which are necessary for estimation, inference, and assessing model fit.


Introduction
Quantile regression defines the conditional quantiles of a continuous dependent variable as a function of the covariates without assuming any error distribution (Koenker and Bassett 1978). The method is robust and offers several advantages over least squares regression, such as desirable equivariance properties, invariance to monotone transformations of the dependent variable, and robustness against outliers (Koenker 2005; Davino, Furno, and Vistocco 2014; Furno and Vistocco 2018). However, quantile regression with discrete outcomes is more complex because quantiles of discrete data cannot be obtained through a simple inverse operation of the cumulative distribution function (cdf). Besides, discrete outcome (binary and ordinal) modeling requires location and scale restrictions to uniquely identify the parameters (see Section 2 for details). Kordas (2006) estimated quantile regression with binary outcomes using simulated annealing, while Benoit and Van den Poel (2010) proposed Bayesian binary quantile regression where a working likelihood for the latent variable was constructed by assuming the error follows an asymmetric Laplace (AL) distribution (Yu and Moyeed 2001). The estimation procedure for the latter is available in the bayesQR package of the R software (Benoit and Van den Poel 2017). A couple of recent works on Bayesian quantile regression with binary longitudinal (panel) outcomes are Rahman and Vossmeyer (2019) and Bresson, Lacroix, and Rahman (2021). Extending the quantile framework to ordinal outcomes is more intricate due to the difficulty in satisfying the ordering of cut-points while sampling. Rahman (2016) introduced Bayesian quantile analysis of ordinal data and proposed two efficient MCMC algorithms. Since Rahman (2016), ordinal quantile regression has attracted some attention, which includes Alhamzawi (2016), Alhamzawi and Ali (2018), Ghasemzadeh, Ganjali, and Baghfalaki (2018), Rahman and Karnawat (2019), Ghasemzadeh, Ganjali, and Baghfalaki (2020), and Tian et al. (2021).
Ordinal outcomes frequently occur in a wide class of applications in economics, finance, marketing, and the social sciences. Here, ordinal regression (e.g., ordinal probit, ordinal logit) is an important tool for modeling, analysis, and inference. Given the prevalence of ordinal models in applications and the recent theoretical developments surrounding ordinal quantile regression, an estimation package is essential so that applied econometricians and statisticians can benefit from a more comprehensive data analysis. At present, no statistical software (such as R, MATLAB, Python, Stata, SPSS, and SAS) has a package for estimating quantile regression with ordinal outcomes. The current paper fills this gap in the literature and describes the implementation of the bqror package (version 1.6.0) for estimation and inference in Bayesian ordinal quantile regression.
The bqror package offers two MCMC algorithms. An ordinal model with 3 or more outcomes is estimated through a combination of Gibbs sampling (Casella and George 1992) and the Metropolis-Hastings (MH) algorithm (Chib and Greenberg 1995). The method is implemented in the function quantregOR1. For ordinal models with exactly 3 outcomes, the package presents a Gibbs sampling algorithm that is implemented in the function quantregOR2. We recommend using this procedure for an ordinal model with 3 outcomes, since it is simpler and faster. Both functions, quantregOR1 and quantregOR2, report the posterior mean, posterior standard deviation, 95% posterior credible interval, and inefficiency factor of the model parameters. To compare alternative quantile regression models, we recommend using the marginal likelihood over the deviance information criterion (DIC). This is because the "Bayesian approach" to comparing models is via the marginal likelihood (Chib 1995; Chib and Jeliazkov 2001). So, the bqror package also provides the marginal likelihood, with technical details for its computation described in this paper. Additionally, the package includes functions to calculate the covariate effects and example codes to produce trace plots of MCMC draws. Lastly, this paper utilizes the bqror package to demonstrate the estimation of quantile ordinal models on simulated data and real-life applications.

Quantile regression in ordinal models
Ordinal outcomes are common in a wide class of applications in economics, finance, marketing, the social sciences, statistics in medicine, and transportation. In a typical study, the observed outcomes are ordered and categorical; so, for the purpose of analysis, scores/numbers are assigned to each outcome. For example, in a study on public opinion about offshore drilling (Mukherjee and Rahman 2016), responses may be recorded as follows: 1 for 'strongly oppose', 2 for 'somewhat oppose', 3 for 'somewhat support', and 4 for 'strongly support'. The numbers have an ordinal meaning but no cardinal interpretation: we cannot interpret a score of 2 as twice the support of a score of 1, nor is the difference in support between 2 and 3 the same as that between 3 and 4. With ordinal outcomes, the primary modeling objective is to express the probability of outcomes as a function of the covariates. Ordinal models that have been extensively studied and employed in applications include the ordinal probit and ordinal logit models (Johnson and Albert 2000; Greene and Hensher 2010), but they only give information about the average probability of outcomes conditional on the covariates.
Quantile regression with ordinal outcomes can be estimated using the monotone equivariance property and provides information on the probability of outcomes at different quantiles. In the spirit of Albert and Chib (1993), the ordinal quantile regression model can be presented in terms of an underlying latent (or unobserved) variable z_i as follows:

    z_i = x'_i β_p + ϵ_i,    for i = 1, ..., n,    (1)

where x'_i is a 1 × k vector of covariates, β_p is a k × 1 vector of unknown parameters at the p-th quantile, ϵ_i follows an AL distribution, i.e., ϵ_i ∼ AL(0, 1, p), and n denotes the number of observations. Note that unlike classical (or frequentist) quantile regression, the error is assumed to follow an AL distribution in order to construct a (working) likelihood (Yu and Moyeed 2001). The latent variable z_i is related to the observed discrete response y_i through the following relationship:

    y_i = j    if    γ_{p,j−1} < z_i ≤ γ_{p,j},    for j = 1, ..., J,    (2)

where γ_p = (γ_{p,0} = −∞, γ_{p,1}, ..., γ_{p,J−1}, γ_{p,J} = ∞) is the cut-point vector and J denotes the number of outcomes or categories. Typically, the cut-point γ_{p,1} is fixed at 0 to anchor the location of the distribution, as required for parameter identification (Jeliazkov and Rahman 2012). Given the observed data y = (y_1, ..., y_n)', the joint density (or likelihood, when viewed as a function of the parameters) for the ordinal quantile model can be written as:

    f(y|Θ_p) = ∏_{i=1}^{n} ∏_{j=1}^{J} [ F_AL(γ_{p,j} − x'_i β_p) − F_AL(γ_{p,j−1} − x'_i β_p) ]^{I(y_i = j)},    (3)

where Θ_p = (β_p, γ_p), F_AL(·) denotes the cdf of an AL distribution, and I(y_i = j) is an indicator function, which equals 1 if y_i = j and 0 otherwise. Working directly with the AL likelihood (3) is not convenient for MCMC sampling. Therefore, following Kozumi and Kobayashi (2011), the latent formulation of the ordinal quantile model (1) is expressed in the normal-exponential mixture form as follows:

    z_i = x'_i β_p + θ w_i + τ √(w_i) u_i,    for i = 1, ..., n,    (4)

where u_i ∼ N(0, 1), w_i ∼ E(1), N and E denote the normal and exponential distributions, respectively, θ = (1 − 2p)/(p(1 − p)), and τ = √(2/(p(1 − p))). Based on this formulation, we can write the conditional distribution of the latent variable as z_i | β_p, w_i ∼ N(x'_i β_p + θ w_i, τ² w_i). This gives access to the properties of the normal distribution, which helps in constructing efficient MCMC algorithms.
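As a small illustrative sketch (in Python, not part of the bqror package), the normal-exponential mixture representation can be checked numerically: draws of θw + τ√w·u should reproduce an AL(0, 1, p) error, whose p-th quantile is 0 and whose mean is θ.

```python
import numpy as np

def al_mixture_draws(p, size, rng):
    """Draw AL(0, 1, p) errors via the normal-exponential mixture
    of Kozumi and Kobayashi (2011): eps = theta*w + tau*sqrt(w)*u."""
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = np.sqrt(2 / (p * (1 - p)))
    w = rng.exponential(scale=1.0, size=size)   # w ~ E(1)
    u = rng.standard_normal(size)               # u ~ N(0, 1)
    return theta * w + tau * np.sqrt(w) * u

rng = np.random.default_rng(0)
eps = al_mixture_draws(p=0.25, size=200_000, rng=rng)
# The p-th quantile of AL(0, 1, p) is 0, so the empirical
# 25th percentile of the draws should be close to 0.
print(round(float(np.quantile(eps, 0.25)), 2))
```

For p = 0.25 the mixture constants are θ = 8/3 and τ² = 32/3, so the sample mean should also sit near 8/3.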

OR I model
The term "OR I model" is ascribed to an ordinal model in which the number of outcomes (J) is equal to or greater than 3, the location restriction is imposed by setting γ_{p,1} = 0, and the scale restriction is achieved via constant variance (see Rahman (2016); for a given value of p, the variance of a standard AL distribution is constant). Note that in contrast to Rahman (2016), our definition of the OR I model includes an ordinal model with exactly 3 outcomes. The location and scale restrictions are necessary to uniquely identify the parameters (see Jeliazkov, Graves, and Kutzbach (2008) and Jeliazkov and Rahman (2012) for further details and a pictorial representation).
During the MCMC sampling of the OR I model, we need to preserve the ordering of the cut-points (γ_{p,0} = −∞ < γ_{p,1} < γ_{p,2} < ... < γ_{p,J−1} < γ_{p,J} = ∞). This is achieved by using a monotone transformation from a compact set to the real line. Many such transformations are available (e.g., log-ratios of category bin widths, arctan, arcsin), but the bqror package utilizes the logarithmic transformation (Albert and Chib 2001; Rahman 2016):

    δ_{p,j} = ln(γ_{p,j} − γ_{p,j−1}),    for 2 ≤ j ≤ J − 1.    (5)

The cut-points (γ_{p,1}, γ_{p,2}, ..., γ_{p,J−1}) can be obtained from a one-to-one mapping to (δ_{p,2}, ..., δ_{p,J−1}). With all the modeling ingredients in place, we employ Bayes' theorem and express the joint posterior distribution as proportional to the product of the likelihood and the prior distributions. As in Rahman (2016), we employ independent normal priors, β_p ∼ N(β_{p0}, B_{p0}) and δ_p ∼ N(δ_{p0}, D_{p0}), in the bqror package. The augmented joint posterior distribution for the OR I model can thus be written as:

    π(z, β_p, δ_p, w | y) ∝ f(y | z, β_p, δ_p, w) π(z, β_p, δ_p, w)
                          ∝ { ∏_{i=1}^{n} f(y_i | z_i, δ_p) } π(z | β_p, w) π(w) π(β_p) π(δ_p)
                          ∝ { ∏_{i=1}^{n} ∑_{j=1}^{J} I(γ_{p,j−1} < z_i ≤ γ_{p,j}) I(y_i = j) N(z_i | x'_i β_p + θ w_i, τ² w_i) E(w_i | 1) } π(β_p) π(δ_p),    (6)

where, in the likelihood function of the second line, we use the fact that the observed y_i is independent of (β_p, w) given (z_i, δ_p). This follows from equation (2), which shows that y_i given (z_i, δ_p) is determined with probability 1. In the third line, we specify the conditional distribution of the latent variable and the prior distributions on the parameters.
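The one-to-one mapping between the ordered cut-points and their unconstrained logarithmic transforms can be sketched in a few lines of Python (illustrative, not bqror code; it assumes γ_{p,1} = 0, so each subsequent cut-point adds a positive increment exp(δ_{p,j})):

```python
import numpy as np

def delta_to_gamma(delta):
    """Map unconstrained (delta_2, ..., delta_{J-1}) to ordered
    cut-points (gamma_1 = 0, gamma_2, ..., gamma_{J-1})."""
    increments = np.exp(np.asarray(delta, dtype=float))
    return np.concatenate(([0.0], np.cumsum(increments)))

def gamma_to_delta(gamma):
    """Inverse map: delta_j = log(gamma_j - gamma_{j-1})."""
    gamma = np.asarray(gamma, dtype=float)
    return np.log(np.diff(gamma))

gamma = delta_to_gamma([0.7, -1.2])   # hypothetical delta values
roundtrip = gamma_to_delta(gamma)
print(gamma, roundtrip)
```

By construction the resulting cut-points are strictly increasing for any real-valued δ, which is exactly why the random-walk MH step can operate on δ without ever violating the ordering.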
The conditional posterior distributions are derived from the augmented joint posterior distribution (6), and the parameters are sampled as per Algorithm 1. This algorithm is implemented in the bqror package. The parameter β_p is sampled from an updated multivariate normal distribution, and the latent weight w is sampled element-wise from a generalized inverse Gaussian (GIG) distribution. The cut-point vector δ_p is sampled marginally of (z, w) using a random-walk MH algorithm. Lastly, the latent variable z is sampled element-wise from a truncated normal (TN) distribution.
Algorithm 1: Sampling in OR I model.
• Sample β_p | z, w ∼ N(β̃_p, B̃_p), where B̃_p⁻¹ = B_{p0}⁻¹ + ∑_{i=1}^{n} x_i x'_i/(τ² w_i) and β̃_p = B̃_p ( B_{p0}⁻¹ β_{p0} + ∑_{i=1}^{n} x_i (z_i − θ w_i)/(τ² w_i) ).
• Sample w_i | β_p, z_i ∼ GIG(1/2, ã_i, b̃) element-wise for i = 1, ..., n, where ã_i² = (z_i − x'_i β_p)²/τ² and b̃² = θ²/τ² + 2.
• Sample δ_p | y, β_p marginally of w (latent weight) and z (latent data), by generating δ'_p using a random-walk chain, δ'_p = δ_p + u, where u ∼ N(0_{J−2}, ι² D̂), ι is a tuning parameter, and D̂ denotes the negative inverse Hessian obtained by maximizing the log-likelihood with respect to δ_p. Given the current value δ_p and the proposed draw δ'_p, return δ'_p with probability

    α_MH(δ_p, δ'_p) = min{ 1, [ f(y | β_p, δ'_p) π(δ'_p) ] / [ f(y | β_p, δ_p) π(δ_p) ] };

otherwise, repeat the old value δ_p. The variance of u may be tuned as needed for an appropriate step size and acceptance rate.
• Sample z_i | y, β_p, δ_p, w ∼ TN_{(γ_{p,j−1}, γ_{p,j})}(x'_i β_p + θ w_i, τ² w_i) element-wise for i = 1, ..., n, when y_i = j, where TN denotes a truncated normal distribution and γ_p is obtained via δ_p using equation (5).
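The random-walk MH step can be illustrated with a deliberately minimal Python sketch (a hypothetical scalar toy target, not the bqror implementation): a chain targeting N(0, 1), where the proposal is symmetric, so the acceptance probability reduces to a ratio of target densities.

```python
import numpy as np

def log_target(x):
    """Log-density of the toy target, N(0, 1), up to a constant."""
    return -0.5 * x * x

def random_walk_mh(n_draws, step, rng):
    """Random-walk Metropolis-Hastings: propose x' = x + u with
    u ~ N(0, step^2); accept with probability min(1, pi(x')/pi(x))."""
    x = 0.0
    draws = np.empty(n_draws)
    accepted = 0
    for g in range(n_draws):
        proposal = x + step * rng.standard_normal()
        log_alpha = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:
            x = proposal
            accepted += 1
        draws[g] = x
    return draws, accepted / n_draws

rng = np.random.default_rng(1)
draws, acc_rate = random_walk_mh(n_draws=50_000, step=2.4, rng=rng)
# With a well-tuned step, the chain recovers the target's mean and
# variance while accepting a moderate share of the proposed moves.
```

The tuning parameter ι in Algorithm 1 plays the same role as `step` here: larger steps lower the acceptance rate but reduce autocorrelation, which is why the acceptance rate is monitored and reported.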

OR II model
The term "OR II model" is used for an ordinal model with exactly 3 outcomes (i.e., J = 3), where both location and scale restrictions are imposed by fixing the cut-points. Since there are only 2 cut-points and both are fixed at some values, the scale of the distribution needs to be free. Therefore, a scale parameter σ_p is introduced and the quantile ordinal model is rewritten as follows:

    z_i = x'_i β_p + σ_p ϵ_i,    y_i = j if γ_{j−1} < z_i ≤ γ_j,    for i = 1, ..., n and j = 1, 2, 3,    (7)

where σ_p ϵ_i ∼ AL(0, σ_p, p), (γ_1, γ_2) are fixed, and recall that γ_0 = −∞ and γ_3 = ∞. In this formulation, the conditional mean of z_i depends on σ_p, which is problematic for Gibbs sampling. So, we define a new variable ν_i = σ_p w_i ∼ E(σ_p) and rewrite the model in terms of ν_i. In this representation, z_i | β_p, σ_p, ν_i ∼ N(x'_i β_p + θ ν_i, τ² σ_p ν_i), the conditional mean is free of σ_p, and the model is conducive to Gibbs sampling. The next step is to specify the prior distributions required for Bayesian inference. We follow Rahman (2016) and assume β_p ∼ N(β_{p0}, B_{p0}) and σ_p ∼ IG(n_0/2, d_0/2), where IG stands for an inverse-gamma distribution. These are the default prior distributions in the bqror package. Employing Bayes' theorem, the augmented joint posterior distribution can be expressed as:

    π(z, β_p, σ_p, ν | y) ∝ { ∏_{i=1}^{n} f(y_i | z_i) N(z_i | x'_i β_p + θ ν_i, τ² σ_p ν_i) E(ν_i | σ_p) } π(β_p) π(σ_p),    (8)

where the derivations in each step are analogous to those for the OR I model.
The augmented joint posterior distribution, given by equation (8), can be utilized to derive the conditional posterior distributions, and the parameters are sampled as presented in Algorithm 2. This involves sampling β_p from an updated multivariate normal distribution and sampling σ_p from an updated IG distribution. The latent weight ν is sampled element-wise from a GIG distribution and, similarly, the latent variable z is sampled element-wise from a truncated normal distribution.
Algorithm 2: Sampling in OR II model.
• Sample β_p | z, σ_p, ν ∼ N(β̃_p, B̃_p), an updated multivariate normal distribution.
• Sample σ_p | β_p, z, ν ∼ IG(ñ/2, d̃/2), an updated inverse-gamma distribution.
• Sample ν_i | β_p, σ_p, z_i from a GIG distribution, element-wise for i = 1, ..., n.
• Sample z_i | y, β_p, σ_p, ν ∼ TN_{(γ_{j−1}, γ_j)}(x'_i β_p + θ ν_i, τ² σ_p ν_i), element-wise for i = 1, ..., n, when y_i = j.
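One Gibbs ingredient here, the updated inverse-gamma draw for σ_p, can be sketched in Python (illustrative only; the values ñ and d̃ below are hypothetical placeholders, not the package's actual update formulas). An IG(a, b) variate is the reciprocal of a Gamma(a, rate = b) variate.

```python
import numpy as np

def draw_inverse_gamma(shape, scale, size, rng):
    """Draw from IG(shape, scale): if g ~ Gamma(shape, rate=scale),
    then 1/g ~ IG(shape, scale)."""
    return 1.0 / rng.gamma(shape, 1.0 / scale, size=size)

rng = np.random.default_rng(2)
n_tilde, d_tilde = 40.0, 60.0   # hypothetical updated hyperparameters
sigma_draws = draw_inverse_gamma(n_tilde / 2, d_tilde / 2, 100_000, rng)
# IG(a, b) has mean b / (a - 1), so the sample mean of these
# draws should be close to 30 / 19.
print(round(float(sigma_draws.mean()), 2))
```
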

Marginal likelihood
The article by Rahman (2016) employed the DIC (Spiegelhalter et al. 2002; Gelman et al. 2013) for model comparison. However, in the Bayesian framework, alternative models are typically compared using the marginal likelihood or the Bayes factor (Poirier 1995; Greenberg 2012). Therefore, we prefer using the marginal likelihood (or the Bayes factor) for comparing two or more regression models at a given quantile.
Consider a model M_s with parameter vector Θ_s. Let f(y|M_s, Θ_s) be its sampling density and π(Θ_s|M_s) be the prior distribution, where s = 1, ..., S. Then, the marginal likelihood for the model M_s is given by the expression m(y|M_s) = ∫ f(y|Θ_s, M_s) π(Θ_s|M_s) dΘ_s. The Bayes factor is the ratio of marginal likelihoods; so, for any two models M_q versus M_r, the Bayes factor, B_qr = m(y|M_q)/m(y|M_r), can be easily computed once we have the marginal likelihoods.
Chib (1995) and later Chib and Jeliazkov (2001) showed that a simple and stable estimate of the marginal likelihood can be obtained from the MCMC outputs. The approach is based on the recognition that the marginal likelihood can be written as the product of the likelihood function and the prior density over the posterior density. So, the marginal likelihood m(y|M_s) for model M_s is expressed as:

    m(y|M_s) = f(y|M_s, Θ_s) π(Θ_s|M_s) / π(Θ_s|y, M_s).    (9)

Chib (1995) refers to equation (9) as the basic marginal likelihood identity, since it holds for all values in the parameter space; it is typically computed at a high density point (e.g., mean, mode), denoted Θ*, to minimize estimation variability. The likelihood ordinate f(y|M_s, Θ*) is directly available from the model, and the prior density π(Θ*|M_s) is assumed by the researcher. The novel part is the computation of the posterior ordinate π(Θ*|y, M_s), which is estimated using the MCMC outputs. Since the marginal likelihood is often a large number, it is convenient to express it on the logarithmic scale. An estimate of the logarithm of the marginal likelihood is given by:

    ln m̂(y) = ln f(y|Θ*) + ln π(Θ*) − ln π̂(Θ*|y),    (10)

where we have dropped the conditioning on M_s for notational simplicity. The next two subsections explain the computation of the marginal likelihood for the OR I and OR II quantile regression models.
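The basic marginal likelihood identity can be verified exactly in a conjugate toy example (a Python sketch, not bqror code): for y_i ∼ N(μ, 1) with prior μ ∼ N(0, 1), the likelihood, prior, and posterior are all available in closed form, so log m(y) = log f(y|μ*) + log π(μ*) − log π(μ*|y) must give the same number at any evaluation point μ*.

```python
import math

def log_norm_pdf(x, mean, var):
    """Log density of N(mean, var) at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def log_marglik_via_identity(y, mu_star):
    """Chib's identity: log m(y) = log f(y|mu*) + log pi(mu*) - log pi(mu*|y),
    for the conjugate model y_i ~ N(mu, 1) with prior mu ~ N(0, 1)."""
    n = len(y)
    ybar = sum(y) / n
    log_lik = sum(log_norm_pdf(yi, mu_star, 1.0) for yi in y)
    log_prior = log_norm_pdf(mu_star, 0.0, 1.0)
    # Conjugate posterior: mu | y ~ N(n*ybar/(n+1), 1/(n+1)).
    log_post = log_norm_pdf(mu_star, n * ybar / (n + 1), 1.0 / (n + 1))
    return log_lik + log_prior - log_post

y = [0.3, -1.1, 0.8, 2.0, 0.5]   # hypothetical data
# The identity holds at *any* point, so two different choices of
# mu* must return the same log marginal likelihood.
m1 = log_marglik_via_identity(y, mu_star=0.0)
m2 = log_marglik_via_identity(y, mu_star=0.7)
print(round(m1, 6), round(m2, 6))
```

In the ordinal quantile models, the likelihood and prior ordinates are computed the same way; only the posterior ordinate requires the MCMC-based estimates described next.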

Marginal likelihood for OR I model
We know from Section 2.1 that the MCMC algorithm for estimating the OR I model is defined by the following conditional posterior densities: π(β_p|z, w), π(δ_p|β_p, y), π(w|β_p, z), and π(z|β_p, δ_p, w, y).
The conditional posteriors for β_p, w, and z have a known form, but that of δ_p is not tractable and is sampled using an MH algorithm. Consequently, we adopt the approach of Chib and Jeliazkov (2001) to calculate the marginal likelihood for the OR I model.
To simplify the computational process (specifically, to keep the computation over a reasonable dimension), we estimate the marginal likelihood marginally of the latent variables (w, z). Moreover, we decompose the posterior ordinate as:

    π(Θ*|y) = π(δ*_p|y) π(β*_p|δ*_p, y),

where Θ* = (β*_p, δ*_p) denotes a high density point. By placing the intractable posterior ordinate first, we avoid the MH step in the reduced run (the process of running an MCMC sampler with one or more parameters fixed at some value) of the MCMC sampler. We first estimate π(δ*_p|y) and then the reduced conditional posterior ordinate π(β*_p|δ*_p, y). To obtain an estimate of π(δ*_p|y), we need to express it in a computationally convenient form. The parameter δ_p is sampled using an MH step, which requires specifying a proposal density. Let q(δ_p, δ'_p|β_p, w, z, y) denote the proposal density for the transition from δ_p to δ'_p, and let

    α_MH(δ_p, δ'_p|β_p, w, z, y) = min{ 1, [ f(y|β_p, δ'_p) π(β_p) π(δ'_p) / ( f(y|β_p, δ_p) π(β_p) π(δ_p) ) ] × [ q(δ'_p, δ_p|β_p, w, z, y) / q(δ_p, δ'_p|β_p, w, z, y) ] }    (11)

denote the probability of making the move. In the context of the model, f(y|β_p, δ_p) is the likelihood given by equation (3), π(β_p) and π(δ_p) are normal prior distributions (i.e., β_p ∼ N(β_{p0}, B_{p0}) and δ_p ∼ N(δ_{p0}, D_{p0}), as specified in Section 2.1), and the proposal density q(δ_p, δ'_p|β_p, w, z, y) is normal, given by f_N(δ'_p|δ_p, ι²D̂) (see Algorithm 1 in Section 2.1). There are two points to be noted about the proposal density. First, the conditioning on (β_p, w, z, y) is only for generality and is not necessary, as illustrated by the use of a random-walk proposal density. Second, since our MCMC sampler utilizes a random-walk proposal density, the second ratio on the right hand side of equation (11) reduces to 1.
We closely follow the derivation in Chib and Jeliazkov (2001) and express the posterior ordinate π(δ*_p|y) as a ratio of two averages:

    π̂(δ*_p|y) = [ G⁻¹ ∑_{g=1}^{G} α_MH(δ^(g)_p, δ*_p | β^(g)_p, w^(g), z^(g), y) q(δ^(g)_p, δ*_p | β^(g)_p, w^(g), z^(g), y) ] / [ H⁻¹ ∑_{h=1}^{H} α_MH(δ*_p, δ^(h)_p | β^(h)_p, w^(h), z^(h), y) ],    (13)

where the numerator is computed from the G post burn-in draws of the complete MCMC run, and the denominator is computed from an additional reduced run of H iterations in which δ_p is fixed at δ*_p; in each iteration of the reduced run, (β_p, w, z) are sampled from their full conditionals and δ^(h)_p is drawn from the proposal density q(δ*_p, δ^(h)_p | β^(h)_p, w^(h), z^(h), y). The resulting quadruplet of draws {β^(h)_p, w^(h), z^(h), δ^(h)_p} is used to evaluate the denominator average. The computation of the posterior ordinate π(β*_p|δ*_p, y) is trivial. We have the sample of H draws {w^(h), z^(h)} from the reduced run, which are marginally of β_p from the distribution π(w, z|δ*_p, y). These draws are utilized to estimate the posterior ordinate as:

    π̂(β*_p|δ*_p, y) = H⁻¹ ∑_{h=1}^{H} π(β*_p | δ*_p, z^(h), w^(h), y),    (14)

where each term in the sum is the updated multivariate normal density of Algorithm 1 evaluated at β*_p. Substituting the two density estimates given by equations (13) and (14) into equation (10), we obtain an estimate of the logarithm of the marginal likelihood for the OR I model.

Marginal likelihood for OR II model

In the OR II model, all conditional posterior densities have a known form, so the marginal likelihood can be computed from Gibbs outputs alone (Chib 1995). Marginally of the latent variables (ν, z), the posterior ordinate is decomposed as π(Θ*|y) = π(β*_p|y) π(σ*_p|β*_p, y), where Θ* = (β*_p, σ*_p) is a high density point. The first term is estimated by averaging the updated multivariate normal full-conditional density of β_p, evaluated at β*_p, over the post burn-in draws from the complete run:

    π̂(β*_p|y) = G⁻¹ ∑_{g=1}^{G} π(β*_p | z^(g), ν^(g), σ^(g)_p).    (16)

The term π(σ*_p|β*_p, y) is a reduced conditional density ordinate and can be estimated with the help of a reduced run. So, we generate an additional sample (say, another G iterations) of {ν^(g), z^(g)} with β_p fixed at β*_p, and average the updated IG full-conditional density of σ_p at σ*_p over these draws:

    π̂(σ*_p|β*_p, y) = G⁻¹ ∑_{g=1}^{G} π(σ*_p | β*_p, z^(g), ν^(g)),    (17)

which is a simulation consistent estimate of π(σ*_p|β*_p, y). Substituting the two density estimates given by equations (16) and (17) into equation (10), we obtain an estimate of the logarithm of the marginal likelihood, where the likelihood function and prior densities are evaluated at Θ* = (β*_p, σ*_p). Here, the likelihood function has the expression:

    f(y|β_p, σ_p) = ∏_{i=1}^{n} ∏_{j=1}^{3} [ F_AL( (γ_j − x'_i β_p)/σ_p ) − F_AL( (γ_{j−1} − x'_i β_p)/σ_p ) ]^{I(y_i = j)},

where the cut-points γ are known and fixed for identification reasons, as explained in Section 2.2.
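The reduced-run idea of averaging a full-conditional density at a fixed point can be sketched in Python for a hypothetical conjugate model (not the bqror implementation): with a normal-inverse-gamma prior, the Rao-Blackwellized estimate of the ordinate π(μ*|y), obtained by averaging the normal full-conditional density of μ over draws of σ², can be checked against the exact Student-t marginal posterior density.

```python
import math
import numpy as np

def norm_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def student_t_pdf(x, df, loc, scale):
    """Density of a location-scale Student-t distribution."""
    z = (x - loc) / scale
    logc = (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi) - math.log(scale))
    return math.exp(logc - (df + 1) / 2 * math.log1p(z * z / df))

# Hypothetical conjugate model: y_i ~ N(mu, s2), with prior
# mu | s2 ~ N(m0, s2/k0) and s2 ~ IG(a0, b0).
rng = np.random.default_rng(3)
y = np.array([1.2, 0.4, 2.1, 1.7, 0.9, 1.5])
m0, k0, a0, b0 = 0.0, 1.0, 2.0, 2.0
n, ybar = len(y), float(y.mean())
kn = k0 + n
mn = (k0 * m0 + n * ybar) / kn
an = a0 + n / 2
bn = b0 + 0.5 * float(((y - ybar) ** 2).sum()) + k0 * n * (ybar - m0) ** 2 / (2 * kn)

# Rao-Blackwell estimate of the posterior ordinate pi(mu*|y): average
# the full-conditional normal density of mu at mu* over draws of s2
# from its (here, exactly known) marginal posterior IG(an, bn).
mu_star = mn
s2_draws = 1.0 / rng.gamma(an, 1.0 / bn, size=200_000)
ordinate_rb = float(np.mean([norm_pdf(mu_star, mn, s2 / kn) for s2 in s2_draws]))

# Analytical check: mu | y is Student-t with 2*an degrees of freedom,
# location mn, and scale sqrt(bn / (an * kn)).
ordinate_exact = student_t_pdf(mu_star, 2 * an, mn, math.sqrt(bn / (an * kn)))
print(round(ordinate_rb, 4), round(ordinate_exact, 4))
```

In the OR II marginal likelihood computation, the same averaging is done with the updated multivariate normal and IG full-conditional densities in place of these toy densities.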

Simulation studies
This section explains the data generating process for simulation studies, the functions offered in the bqror package, and usage of the functions for estimation and inference in ordinal quantile models.

OR I model: data, functions, and outputs
Data Generation: The data for the simulation study of the OR I model are generated from the latent regression z_i = x'_i β + ϵ_i, in which the covariates are drawn from a uniform distribution and the errors from an asymmetric Laplace distribution; here, U and AL denote a uniform distribution and an asymmetric Laplace distribution, respectively. The z values are continuous and are classified into 4 categories based on the cut-points (0, 2, 4) to generate the ordinal values of y, the outcome variable. We follow the above procedure to generate 3 data sets, each with 500 observations (i.e., n = 500). The 3 data sets correspond to the quantile p equaling 0.25, 0.50, and 0.75, and are stored as data25j4, data50j4, and data75j4, respectively. Note that the last two letters in the name of the data (i.e., j4) denote the number of unique outcomes in the y variable.
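This data generating process can be sketched in Python (illustrative only; the coefficient vector below is a hypothetical choice, not the values used to build the package's data sets):

```python
import numpy as np

def make_ordinal_data(n, p, beta, cutpoints, rng):
    """Simulate ordinal data: z = x'beta + eps with AL(0, 1, p) errors
    drawn via the normal-exponential mixture, then discretize z by the
    cut-points (-inf, c1, ..., inf)."""
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = np.sqrt(2 / (p * (1 - p)))
    x = np.column_stack([np.ones(n), rng.uniform(size=(n, len(beta) - 1))])
    w = rng.exponential(size=n)
    eps = theta * w + tau * np.sqrt(w) * rng.standard_normal(n)
    z = x @ np.asarray(beta) + eps
    y = np.digitize(z, cutpoints) + 1   # ordinal outcomes 1, ..., J
    return y, x

rng = np.random.default_rng(4)
# beta here is a hypothetical choice, not the paper's values.
y, x = make_ordinal_data(n=500, p=0.25, beta=[-2.0, 3.0, 4.0],
                         cutpoints=[0.0, 2.0, 4.0], rng=rng)
print(np.unique(y))   # categories observed among 1, 2, 3, 4
```
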
We now describe the major functions for Bayesian quantile estimation of the OR I model, demonstrate their usage, and note the inputs and outputs of each function.
quantregOR1: The quantregOR1 is the primary function for estimating Bayesian quantile regression in ordinal models with 3 or more outcomes (i.e., the OR I model) and implements Algorithm 1. In the code snippet below, we first read the data and then do the following: define the ordinal response variable (y) and covariate matrix (xMat), specify the number of covariates (k) and number of outcomes (J), and set the prior means and covariances for β_p and δ_p. We then call the quantregOR1 function and specify the inputs: ordinal outcome variable (y), covariate matrix including a column of ones (xMat), prior mean (b0) and prior covariance matrix (B0) for the regression coefficients, prior mean (d0) and prior covariance matrix (D0) for the transformed cut-points, burn-in size (burn), post burn-in size (mcmc), quantile (p), the tuning factor (tune) to adjust the MH acceptance rate, and the autocorrelation cutoff value (accutoff). The last input, verbose, when set to TRUE (FALSE), will (not) print the summary output.
The posterior means of (β_p, δ_p) are close to the true values used to generate the data, with small standard deviations. So, the quantregOR1 function is successful in recovering the true values of the parameters. The inefficiency factors are computed from the MCMC samples using the batch-means method (Greenberg 2012). They indicate the cost of working with correlated samples. For example, an inefficiency factor of 3 implies that it takes 3 correlated draws to get one independent draw. As such, a low inefficiency factor indicates better mixing and a more efficient MCMC algorithm. The inefficiency factor also bears a direct relationship with the effective sample size, where the latter can be obtained as the total number of (post burn-in) MCMC draws divided by the inefficiency factor (Chib 2012). The inefficiency factors for (β_p, δ_p) are stored in the object modelOR1 of class bqrorOR1 and can be obtained by calling modelOR1$ineffactor.
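The batch-means idea can be sketched in Python (illustrative, not the package's implementation): split the chain into batches and compare the variance of the batch means with the overall variance. For nearly independent draws the factor is close to 1, while a strongly autocorrelated chain yields a much larger value.

```python
import numpy as np

def inefficiency_factor(draws, batch_size=50):
    """Batch-means estimate: (batch_size * variance of batch means)
    divided by the variance of the draws. Values near 1 indicate
    approximately independent draws."""
    draws = np.asarray(draws, dtype=float)
    n_batches = len(draws) // batch_size
    trimmed = draws[: n_batches * batch_size]
    batch_means = trimmed.reshape(n_batches, batch_size).mean(axis=1)
    return batch_size * batch_means.var(ddof=1) / trimmed.var(ddof=1)

rng = np.random.default_rng(5)
iid = rng.standard_normal(10_000)        # independent draws
ar = np.empty(10_000)                    # AR(1) chain, strong autocorrelation
ar[0] = 0.0
for t in range(1, 10_000):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()

if_iid = inefficiency_factor(iid)
if_ar = inefficiency_factor(ar)
print(round(if_iid, 2), round(if_ar, 1))
```

The effective sample size relationship described above follows directly: dividing the 10,000 draws by each factor gives roughly 10,000 effective draws for the independent sample and far fewer for the autocorrelated chain.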
The third-to-last row displays the random-walk MH acceptance rate for δ_p, for which the preferred acceptance rate is around 30 percent. The last two rows present the model comparison measures: the logarithm of the marginal likelihood and the DIC. The logarithm of the marginal likelihood is computed using the MCMC outputs from the complete and reduced runs, as explained in Section 3, while the principle for computing the DIC is presented in Gelman et al. (2013). For any two competing models at the same quantile, the model with a higher (lower) marginal likelihood (DIC) provides a better model fit.
While the two model comparison measures are printed as part of the summary output, they can also be called individually. For example, the logarithm of the marginal likelihood can be obtained by calling modelOR1$logMargLike, whereas the DIC can be obtained by calling modelOR1$dicQuant$DIC. Two more quantities that are part of the object modelOR1$dicQuant are the effective number of parameters, denoted p_D, and the deviance computed at the posterior mean. They can be obtained by calling modelOR1$dicQuant$pd and modelOR1$dicQuant$dev, respectively. Besides, post estimation, one may also use the command modelOR1$summary or summary.bqrorOR1(modelOR1) to extract and print the summary output.

covEffectOR1:
The function covEffectOR1 computes the average covariate effect for the different outcomes of the OR I model at a specified quantile, marginally of the parameters and the remaining covariates. While a demonstration of this function is best understood in a real-life study and is presented in the application section, here we present the mechanics behind the computation of the average covariate effect. Suppose we want to compute the average covariate effect when the l-th covariate {x_{i,l}} is set to the values a and b, denoted {x^a_{i,l}} and {x^b_{i,l}}, respectively. We split the covariate and parameter vectors as follows: x^a_i = (x^a_{i,l}, x_{i,−l}), x^b_i = (x^b_{i,l}, x_{i,−l}), and β_p = (β_{p,l}, β_{p,−l}), where −l in the subscript denotes all covariates (parameters) except the l-th covariate (parameter). We are interested in the distribution of the difference {Pr(y_i = j|x^b_{i,l}) − Pr(y_i = j|x^a_{i,l})} for 1 ≤ j ≤ J, marginalized over {x_{i,−l}} and (β_p, δ_p), given the data y = (y_1, ..., y_n)'. We marginalize the covariates using their empirical distribution and the parameters based on the posterior distribution.
To obtain draws from the distribution of {Pr(y_i = j|x^b_{i,l}) − Pr(y_i = j|x^a_{i,l})}, we use the method of composition (see Chib and Jeliazkov (2006), Rahman and Vossmeyer (2019), and Bresson, Lacroix, and Rahman (2021) for additional details). In this process, we randomly select an individual, extract the corresponding sequence of covariate values, draw a value of (β_p, δ_p) from the posterior distribution, and evaluate {Pr(y_i = j|x^b_i, β_p, δ_p) − Pr(y_i = j|x^a_i, β_p, δ_p)}, where

    Pr(y_i = j|x^q_i, β_p, δ_p) = F_AL(γ_{p,j} − x^{q'}_i β_p) − F_AL(γ_{p,j−1} − x^{q'}_i β_p),

for q = b, a and 1 ≤ j ≤ J. This process is repeated for all remaining individuals and the other MCMC draws from the posterior distribution. Finally, the average covariate effect (ACE) for outcome j (for 1 ≤ j ≤ J) is calculated as the mean of the difference in pointwise probabilities:

    ACE_j = (1/(Mn)) ∑_{m=1}^{M} ∑_{i=1}^{n} [ Pr(y_i = j|x^b_i, β^(m)_p, δ^(m)_p) − Pr(y_i = j|x^a_i, β^(m)_p, δ^(m)_p) ],

where (β^(m)_p, δ^(m)_p) is an MCMC draw of (β_p, δ_p), and M is the number of post burn-in MCMC draws.
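The pointwise probabilities only require the AL cdf, so the core of this calculation can be sketched in a few lines of Python (illustrative, not the covEffectOR1 source; the covariate row, coefficients, and cut-points below are hypothetical):

```python
import numpy as np

def al_cdf(x, p):
    """cdf of the standard asymmetric Laplace distribution AL(0, 1, p)."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0, p * np.exp((1 - p) * x),
                    1 - (1 - p) * np.exp(-p * x))

def outcome_probs(xrow, beta, gammas, p):
    """Pr(y = j | x, beta, gamma) for j = 1, ..., J at quantile p,
    with gammas = (gamma_1, ..., gamma_{J-1}); gamma_0 and gamma_J
    are -inf and +inf, so the cdf there is 0 and 1."""
    mean = float(np.dot(xrow, beta))
    interior = al_cdf(np.asarray(gammas, dtype=float) - mean, p)
    cdf_vals = np.concatenate(([0.0], interior, [1.0]))
    return np.diff(cdf_vals)

beta = [-2.0, 3.0, 4.0]          # hypothetical posterior draw
gammas = [0.0, 2.0, 4.0]         # hypothetical cut-points
probs_a = outcome_probs([1.0, 0.5, 0.2], beta, gammas, p=0.5)
probs_b = outcome_probs([1.0, 0.9, 0.2], beta, gammas, p=0.5)  # covariate raised
delta = probs_b - probs_a        # pointwise covariate effect for one draw
print(probs_a, delta)
```

Averaging such `delta` vectors over individuals and over MCMC draws, as in the ACE formula above, yields the reported covariate effects; note that the effects across outcomes necessarily sum to zero.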

OR II model: data, function, and outputs
Data Generation: The data generating process for the OR II model closely resembles that of the OR I model. In particular, 500 observations are generated for each value of p from the regression model z_i = x'_i β + ϵ_i, where β = (−4, 6, 5), (x_2, x_3) ∼ U(0, 1), and ϵ_i ∼ AL(0, σ = 1, p) for i = 1, ..., n. The continuous values of z are classified based on the cut-points (0, 3) to generate the 3 ordinal values of y, the outcome variable. Once again, we choose p equal to 0.25, 0.50, and 0.75 to generate three samples from the model, which are referred to as data25j3, data50j3, and data75j3, respectively. The last two letters in the data names (i.e., j3) denote the number of unique outcomes in the y variable.

quantregOR2:
The function quantregOR2 implements Algorithm 2 and is the main function for estimating Bayesian quantile regression in the OR II model, i.e., an ordinal model with exactly 3 outcomes. In the code snippet below, we first read the data, define the required quantities, and then call quantregOR2 to estimate the quantile model. The function inputs are as follows: the ordinal outcome variable (y), covariate matrix including a column of ones (xMat), prior mean (b0) and prior covariance matrix (B0) for β_p, prior shape (n0) and scale (d0) parameters for σ_p, second cut-point (gammacp2), burn-in size (burn), post burn-in size (mcmc), quantile (p), autocorrelation cutoff value (accutoff), and the verbose option, which when set to TRUE (FALSE) will (not) print the outputs. We use relatively diffuse prior distributions on (β_p, σ_p) to allow the data to speak for itself. The outputs from the quantregOR2 function are the following: summary, postMeanbeta, postMeansigma, postStdbeta, postStdsigma, logMargLike, dicQuant, ineffactor, betadraws, and sigmadraws. A detailed description of each output is presented in the bqror package help file. Once again, we summarize the MCMC draws by reporting the posterior mean, posterior standard deviation, 95% posterior credible (or probability) interval, and the inefficiency factor of the quantile regression coefficients β_p and the scale parameter σ_p. These quantities are presented in the last five columns and labeled appropriately.

library('bqror')
The output also exhibits the logarithm of the marginal likelihood and the DIC for the OR II model. For two or more models at the same quantile, the model with a higher (lower) marginal likelihood (DIC) indicates a better model fit. The logarithm of the marginal likelihood is computed using the Gibbs output, as explained in Section 3, while the computation of the DIC follows Gelman et al. (2013).
The two model comparison measures can also be printed individually. For example, the logarithm of the marginal likelihood can be obtained by calling modelOR2$logMargLike and the DIC by calling modelOR2$dicQuant$DIC. The effective number of parameters p_D and the deviance computed at the posterior mean can be obtained by calling modelOR2$dicQuant$pd and modelOR2$dicQuant$dev, respectively. Lastly, one may use the command modelOR2$summary or summary.bqrorOR2(modelOR2) to extract and print the summary output.

covEffectOR2: The function covEffectOR2 computes the average covariate effect for the 3 outcomes of the OR II model at a specified quantile, marginally of the parameters and the remaining covariates. The principle underlying the computation is analogous to that of the OR I model and is explained below. An implementation of the function is presented in the tax policy application. Suppose we are interested in computing the average covariate effect for the l-th covariate {x_{i,l}} at two different values, a and b, and split the covariate and parameter vectors as x^a_i = (x^a_{i,l}, x_{i,−l}), x^b_i = (x^b_{i,l}, x_{i,−l}), and β_p = (β_{p,l}, β_{p,−l}). We are interested in the distribution of the difference {Pr(y_i = j|x^b_{i,l}) − Pr(y_i = j|x^a_{i,l})}, marginalized over the remaining covariates and the parameters. We again employ the method of composition, i.e., randomly select an individual, extract the corresponding sequence of covariate values, draw a value of (β_p, σ_p) from the posterior distribution, and lastly evaluate the pointwise probabilities {Pr(y_i = j|x^q_i, β_p, σ_p)} for q = b, a and 1 ≤ j ≤ J = 3. This process is repeated for the other individuals and the remaining Gibbs draws to compute the ACE for outcome j (= 1, 2, 3) as:

    ACE_j = (1/(Gn)) ∑_{g=1}^{G} ∑_{i=1}^{n} [ Pr(y_i = j|x^b_i, β^(g)_p, σ^(g)_p) − Pr(y_i = j|x^a_i, β^(g)_p, σ^(g)_p) ],

where (β^(g)_p, σ^(g)_p) is a Gibbs draw of (β_p, σ_p) and G is the number of post burn-in Gibbs draws.

Applications
In this section, we consider the educational attainment and tax policy applications from Rahman (2016) to demonstrate real data applications of the proposed bqror package. While the educational attainment study displays the implementation of ordinal quantile regression in the OR I model, the tax policy study highlights the use of ordinal quantile regression in the OR II model. Data for both applications are included as part of the bqror package.
Educational Attainment

Trace plots of the MCMC draws can be used to check whether the Markov chains have converged to the joint posterior distribution, as shown in Figure 2. The idea is that the trace plots should show variation around a central value if the chain has converged, rather than drift without settling down.
Next, we utilize the covEffectOR1 function to compute the average covariate effect of a $10,000 increase in family income on the four categories of educational attainment. In general, the calculation of the average covariate effect requires the creation of either one or two new covariate matrices, depending on whether the covariate is continuous or an indicator (binary) variable, respectively. Since income is a continuous variable, a modified covariate matrix is created by adding $10,000 to each observation of family income. This is xMod2 in the code below, and the increased income variable corresponds to x^b_{i,l} for i = 1, ..., n in Section 4.1. The second covariate matrix, xMod1 (in the code below), is simply the covariate or design matrix xMat; so, with reference to Section 4.1, x^a_{i,l} = x_{i,l} for i = 1, ..., n. When the covariate of interest is an indicator variable, xMod1 also requires modification, as illustrated in the tax policy application.
We now call the covEffectOR1 function and supply the inputs to get the results. The results show that, at the 50th quantile and for a $10,000 increase in family income, the probability of obtaining less than a high school education (a high school degree) decreases by 3.14 (1.29) percent, while the probability of achieving some college or an associate's degree (a college or graduate degree) increases by 1.93 (2.50) percent.

Tax Policy
Here, the objective is to analyze the factors that affect public opinion on the proposal to raise federal taxes for couples (individuals) earning more than $250,000 ($200,000) per year in the United States (US). The proposal was designed to extend the Bush tax cuts for the lower and middle income classes, but to restore higher rates for the richer class. Such a policy is considered pro-growth, since it aims to promote economic growth in the US by augmenting consumption among low-to-middle income families. After extensive debate, the proposed policy received a two-year extension and formed a part of the "Tax Relief, Unemployment Insurance Reauthorization, and Job Creation Act of 2010". The data for the study were taken from the 2010-2012 American National Election Studies (ANES) on the Evaluations of Government and Society Study 1 (EGSS 1) and contain 1,164 observations. The dependent variable in the model, an individual's opinion on the tax increase, has 3 categories: Oppose, Neither favor nor oppose, and Favor (see Figure 3). The covariates included in the model are the intercept and indicator variables for employment status, income above $75,000, bachelor's degree, post-bachelor's degree, computer ownership, cell phone ownership, and white race.
To estimate the quantile model on public opinion about the federal tax increase, we load the bqror package, prepare the data, and supply the inputs to the quantregOR2 function. Specifically, we define the outcome variable and the covariate matrix (with covariates ordered as in Rahman 2016), specify the prior distributions for (β_p, σ_p), and choose the second cut-off value, burn-in size, number of post burn-in MCMC iterations, and values for the quantile (p = 0.5 for this illustration) and autocorrelation cutoff.

    library(bqror)
    data("Policy_Opinion")
    data <- na.omit(Policy_Opinion)
    cols <- c("Intercept", "EmpCat", "IncomeCat", "Bachelors", "Post.Bachelors",
              "Computers", "CellPhone", "White")
    x <- data[cols]
    xMat <- x[, c(1, 2, 3, 4, 5, 6, 7, 8)]
    # yOrd (the ordinal outcome) and the prior inputs b0, B0, n0, d0 are
    # defined earlier in the session; their definitions are omitted here.
    FedTax <- quantregOR2(y = yOrd, x = xMat, b0, B0, n0, d0, gammacp2 = 3,
                          burn = 1125, mcmc = 4500, p = 0.5, accutoff = 0.5, TRUE)

    [1] Summary of MCMC draws :

The results (see Footnote 1) from the MCMC draws are summarized above, where we report the posterior mean, posterior standard deviation, 95% posterior credible interval, and the inefficiency factor of the parameters (β_p, σ_p). Additionally, the summary displays the logarithm of the marginal likelihood and the DIC.
Figure 4 presents the trace plots of the Gibbs draws, which can be generated by loading the bayesplot package and using the code below.

    mcmc <- 4500
    burn <- round(0.25 * mcmc)
    nsim <- mcmc + burn
    mcmcDraws <- cbind(t(FedTax$betadraws), t(FedTax$sigmadraws))
    color_scheme_set('darkgray')
    bayesplot_theme_set(theme_minimal())
    mcmc_trace(mcmcDraws[(burn + 1):nsim, ], facet_args = list(ncol = 3))

Finally, we utilize the covEffectOR2 function to demonstrate the calculation of the average covariate effect within the ORII framework. Below, we compute the average covariate effect of computer ownership (assume this is the l-th variable) on the 3 categories of public opinion about the tax policy. Here, the covariate is an indicator variable, since an individual either owns a computer (coded as 1) or not (coded as 0). So, we need to create two modified covariate matrices. In the code snippet below, the first matrix xMat1 and the second matrix xMat2 are created by replacing the column on computer ownership with a column of zeros and a column of ones, respectively. The result on the covariate effect shows that at the 50th quantile, ownership of a computer decreases the probability of the first category Oppose (Neither favor nor oppose) by 3.96 (3.31) percent, and increases the probability of the third category Favor by 7.26 percent.
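The two matrices for an indicator covariate can be sketched in base R as follows. The design matrix is a toy example with a hypothetical Computers column (in the application, the matrices are built from the ANES covariate matrix xMat):

```r
# Toy design matrix: an intercept and an indicator for computer ownership.
set.seed(4)
n <- 6
xMat <- cbind(Intercept = 1, Computers = rbinom(n, 1, 0.5))

# xMat1: computer-ownership column set to 0 for every observation.
xMat1 <- xMat
xMat1[, "Computers"] <- 0

# xMat2: computer-ownership column set to 1 for every observation.
xMat2 <- xMat
xMat2[, "Computers"] <- 1
```

Both matrices are then supplied to covEffectOR2, which averages the difference in category probabilities between the two configurations over observations and MCMC draws.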

Conclusion
A wide class of applications in economics, finance, marketing, and the social sciences have dependent variables that are ordinal in nature (i.e., they are discrete and ordered) and are characterized by an underlying continuous variable. Modeling and analysis of such variables have typically been confined to ordinal probit or ordinal logit models, which offer information only on the average probability of the outcome variable given the covariates. However, the method proposed by Rahman (2016) allows Bayesian quantile modeling of ordinal data and thus provides a tool for more comprehensive analysis and inference. Ordinal responses are prevalent in applications, and hence a software package that allows Bayesian quantile analysis of ordinal data will be of immense interest to applied researchers from different fields, including economics and statistics.
The current paper presents an implementation of the bqror package, the only package available for estimation and inference of Bayesian quantile regression in ordinal models. The package offers two MCMC algorithms for estimating ordinal quantile models. An ordinal quantile model with 3 or more outcomes is estimated by a combination of Gibbs sampling and the MH algorithm, while estimation of an ordinal quantile model with exactly 3 outcomes utilizes a simpler and computationally faster algorithm that relies solely on Gibbs sampling. For both forms of ordinal quantile models, the bqror package also provides functions to calculate the covariate effects (for continuous as well as binary regressors) and measures for model comparison: the marginal likelihood and the DIC. The paper explains how to compute the marginal likelihood from the MCMC outputs and recommends its use over the DIC for model comparison. Additionally, this paper demonstrates the usage of functions for estimation and analysis of Bayesian quantile regression with ordinal data through simulation studies and two applications related to educational attainment and tax policy. In the future, the current package will be extended to include ordinal quantile regression with longitudinal data and variable selection in ordinal quantile regression with cross-section and/or longitudinal data.

Figure 1: Bar chart showing the different categories of educational attainment. The number of responses (percentage) for each category is shown inside (at the top of) each bar.

Figure 2: Trace plots of the MCMC draws in the educational attainment study.

Figure 3: Bar chart for public opinion on the tax increase. The number of responses (percentage) for each category is shown inside (at the top of) each bar.

Figure 4: Trace plots of the MCMC draws in the tax policy study.