Semiparametric Generalized Linear Models with the gldrm Package

This paper introduces a new algorithm to estimate and perform inferences on a recently proposed and developed semiparametric generalized linear model (glm). Rather than selecting a particular parametric exponential family model, such as the Poisson distribution, this semiparametric glm assumes that the response is drawn from the more general exponential tilt family. The regression coefficients and unspecified reference distribution are estimated by maximizing a semiparametric likelihood. The new algorithm incorporates several computational stability and efficiency improvements over the algorithm originally proposed. In particular, the new algorithm performs well for either small or large support for the nonparametric response distribution. The algorithm is implemented in a new R package called gldrm.


Introduction
Rathouz and Gao (2009) introduced the generalized linear density ratio model (gldrm), which is a novel semiparametric formulation of the classical glm. Although Rathouz and Gao did not use the term gldrm, we refer to it as such because it is a natural extension of the density ratio model (see e.g. Lovric (2011)). Like a standard glm, the gldrm relates the conditional mean of the response to a linear function of the predictors through a known link function g. To be specific, let the random variable Y be a scalar response, and let X be a p × 1 covariate vector for a particular observation. The model for the mean is

E(Y|X = x) = g^{-1}(x^T β) .   (1)

Because Equation 1 holds for both gldrm and glm, the regression coefficients have the same interpretation for both models. The gldrm relaxes the standard glm assumption that the distribution of Y|X comes from a particular exponential family model. Instead, assume that Y|X comes from an exponential tilt model of the form

f(y|X = x) = f_0(y) exp{θy − b(θ)} ,   (2)

where θ is defined implicitly as the solution to

g^{-1}(x^T β) = ∫ y f_0(y) exp{θy − b(θ)} dλ(y) ,   (3)

and

b(θ) = log ∫ f_0(y) exp(θy) dλ(y) .   (4)

Here f_0 is an unspecified probability density with respect to a measure λ. We call f_0 the reference distribution. Measure λ is Lebesgue if Y|X is continuous, a counting measure if Y|X is discrete, or a mixture of the two if Y|X has a mixture distribution. Note that E(Y|X) = b'(θ) and Var(Y|X) = b''(θ), which are standard glm properties.
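To make the exponential tilt concrete, the following base-R sketch tilts a discrete reference density so that the tilted mean equals a target value, solving for θ numerically as the implicit solution described above. The support, reference distribution, and target mean are all illustrative choices, not quantities from the paper.

```r
# Illustrative sketch: exponentially tilt a discrete reference density f0 so
# that the tilted mean equals a target mu (all inputs here are made up).
s  <- 0:10                               # support points
f0 <- dbinom(s, size = 10, prob = 0.3)   # reference distribution
tilt <- function(theta) {                # f0(y) exp(theta * y - b(theta))
  w <- f0 * exp(theta * s)
  w / sum(w)
}
tilted_mean <- function(theta) sum(s * tilt(theta))
# theta is defined implicitly: solve tilted_mean(theta) = mu for mu = 5
theta_hat <- uniroot(function(th) tilted_mean(th) - 5,
                     interval = c(-10, 10), tol = 1e-12)$root
```

Pushing the target mean toward max(S) drives θ toward +∞, which is why the fitting algorithm described later caps |θ|.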
The regression coefficients β and reference distribution f_0 are estimated by maximizing a semiparametric likelihood function, which contains a nonparametric representation of f_0 that has point mass only at values of Y observed in the data. The quantity θ is not a parameter in this model, but rather a function of the free parameters β and f_0, as well as of X. This work was fully developed by Huang and Rathouz (2012), who first focused on the case where the covariate vector takes one of finitely many values, drawing on theoretical arguments advanced in the empirical likelihood literature (e.g. Owen et al. (2001)). Drawing on semiparametric profile likelihood methods, Huang (2014) went on to fully generalize the asymptotic arguments, proving consistency and asymptotic normality of the regression coefficient estimators, and deriving the asymptotic variance matrix using the profile likelihood. Huang also proved pointwise asymptotic normality of the reference cumulative distribution function estimator.
Despite these important theoretical advances, computation for this model has remained a practical challenge. The original algorithm in Rathouz and Gao (2009) is somewhat rudimentary and applies mostly to cases with finite support. It does not scale well, and stability and speed are challenges. This paper proposes a new algorithm to address these challenges and render the model more widely applicable.
In particular, the issue of optimizing the semiparametric likelihood over the f0 point masses is improved with application of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) technique (Broyden, 1970; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970). Because the number of parameters in the semiparametric likelihood associated with f0 is equal to the number of unique points in the response vector, the rank of the Hessian matrix can become unwieldy, especially for continuous response data. The BFGS method uses an easily computed rank-two approximate Hessian matrix whose inverse is much less computationally intensive to calculate.

The R Journal Vol. 10/1, July 2018, ISSN 2073-4859
The optimization techniques in this paper have been incorporated into the R package gldrm (Wurm and Rathouz, 2018), the first CRAN package for gldrm. This package estimates the gldrm model and provides coefficient standard errors. In addition, since the publication of Rathouz and Gao (2009), Huang (2014) developed likelihood ratio tests and likelihood ratio-based confidence intervals for regression coefficients and demonstrated their strong properties. The gldrm package provides utilities for likelihood ratio tests of nested models, as well as confidence intervals.
We present our optimization algorithm in Section 2. In Section 3, we present simulation experiments to check the finite sample performance of the gldrm estimators and the accuracy of estimated standard errors using this algorithm. In Section 4, we discuss methods for inference. In Section 5, we present a simulation that benchmarks the computational time and how it scales with the number of observations and the number of covariates. In Section 6, we present a summary of the functions in the gldrm package. In Section 7, we demonstrate the functionality of the gldrm package with an example in R. In Section 8, we make concluding remarks.

Optimization algorithm
Suppose we have an observed sample (x_1, y_1), . . ., (x_n, y_n) generated by the model specified in Equations 1-4. The x_i can be fixed or random, and the y_i are conditionally independent given the covariates. Let S = (s_1, . . ., s_K) be the observed support, i.e. a vector containing the unique values in the set {y_1, . . ., y_n}. If the response follows a continuous distribution, then K will equal n. Otherwise, K may be less than n due to ties. Let the vector f0 = (f_1, . . ., f_K) represent probability mass at the points (s_1, . . ., s_K). This is a nonparametric representation of f_0 because the true reference density may be continuous or have probability mass at values not contained in S.
The semiparametric log-likelihood is

ℓ(β, f0) = Σ_{i=1}^n { θ_i y_i − b(θ_i) + Σ_{k=1}^K I(y_i = s_k) log f_k } ,   (5)

where each θ_i is defined implicitly as the solution to

g^{-1}(x_i^T β) = Σ_{k=1}^K s_k f_k exp(θ_i s_k) / Σ_{k=1}^K f_k exp(θ_i s_k) .   (6)

There exists a θ_i that satisfies Equation 6 as long as g^{-1}(x_i^T β) ∈ (m, M) for all i, where m ≡ min(S) and M ≡ max(S).
We obtain parameter estimates from this likelihood as arg max_{(β, f0)} ℓ(β, f0), subject to the following constraints:

(C1) g^{-1}(x_i^T β) ∈ (m, M) for all i;
(C2) f_k > 0 for k = 1, . . ., K;
(C3) Σ_{k=1}^K f_k = 1;
(C4) Σ_{k=1}^K s_k f_k = µ_0 for some chosen µ_0.

Constraint (C4) is necessary for identifiability because, for any nonparametric density f0 = (f_1, . . ., f_K)^T, the "exponentially tilted" density f0^{(α)} ∝ (f_1 e^{α s_1}, . . ., f_K e^{α s_K}) has the same semiparametric log-likelihood for any α ∈ R, i.e. ℓ(β, f0) = ℓ(β, f0^{(α)}) for all β. We can set µ_0 to be any value within the range of observed response values, but choosing a value too extreme can lead to numerical instability in estimating f0. It usually works well to choose µ_0 to be the sample mean of the observed response values.

To perform the optimization, we take an iterative approach, alternating between optimizing over f0 and over β (Rathouz and Gao, 2009). Each optimization step marginally optimizes the log-likelihood over one set of parameters, holding the other fixed. Neither optimization step has a closed form solution, so iterative procedures must be used. To update f0, we propose using the BFGS technique. To update β, we use Fisher scoring, which is equivalent to iteratively re-weighted least squares (IRLS). We propose iterating BFGS until convergence for each f0 update, while only using a single IRLS iteration to update β in between f0 updates; this is tantamount to a profile likelihood approach to estimation of β.
Although the log-likelihood in Equation 5 is a function of (β, f0), we have expressed it in terms of (θ_1, . . ., θ_n); each θ_i is implicitly a function of (β, f0). The score function is also best expressed in terms of the θ_i's (see Equations 12 and 15). Consequently, our optimization algorithm requires the θ_i's to be computed each time β or f0 is updated. To do this, we use an iterative Newton-Raphson procedure after each iteration of the β update or the f0 update procedures.
In what follows, we detail updates of the θ_i's, of f0, and of β. We use the notation b(θ) to denote the function defined in Equation 4 with respect to the discrete probability distribution specified by f0, i.e. b(θ) = log Σ_{k=1}^K f_k exp(θ s_k).

θ update procedure

Each θ_i is defined implicitly as the solution to Equation 6, which can be written as

b'(θ_i) = µ_i ≡ g^{-1}(x_i^T β) .   (7)

To calculate θ_i, we use the Newton-Raphson procedure provided in Appendix C of Rathouz and Gao (2009). To satisfy Equation 7, we need to find θ such that g_l{b'(θ)} = g_l(µ_i), where g_l is a transformation that maps the interval (m, M) onto the real line. (The transformation g_l stabilizes the solution.) We use Newton-Raphson to find the root of t(θ) = g_l{b'(θ)} − g_l(µ_i). Let θ^{(r)} denote the approximate solution at the r-th Newton-Raphson iteration. The Newton-Raphson update is given by

θ^{(r+1)} = θ^{(r)} − t(θ^{(r)}) / t'(θ^{(r)}) ,

where, by the chain rule, t'(θ) = g_l'{b'(θ)} b''(θ). We typically initialize θ^{(0)} to be the value obtained from the previous θ update procedure. The first time θ is updated, we initialize θ^{(0)} to zero for every observation. We define convergence when |t(θ^{(r)})| < ε, where ε is a small threshold such as 10^{-10}.
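A minimal base-R sketch of this θ update follows. We assume for illustration that g_l is the logit-type map g_l(µ) = log{(µ − m)/(M − µ)}; the text only requires that g_l stabilize the iteration, so this particular form, along with the support, masses, and target mean below, is our assumption.

```r
# Sketch of the theta update via Newton-Raphson on t(theta).
# gl is an assumed logit-type stabilizing transform; inputs are illustrative.
s  <- c(-1, -0.5, 0, 0.5, 1)
f0 <- rep(0.2, 5)
m  <- min(s); M <- max(s)
gl <- function(mu) log((mu - m) / (M - mu))                         # assumed g_l
bp  <- function(th) { w <- f0 * exp(th * s); sum(s * w) / sum(w) }  # b'(theta)
bpp <- function(th) {                                               # b''(theta)
  w <- f0 * exp(th * s); p <- w / sum(w)
  sum(s^2 * p) - sum(s * p)^2                                       # tilted variance
}
theta_update <- function(mu_i, theta = 0, eps = 1e-10, maxit = 100) {
  for (r in seq_len(maxit)) {
    t_val <- gl(bp(theta)) - gl(mu_i)
    if (abs(t_val) < eps) break
    mu <- bp(theta)
    t_deriv <- (1 / (mu - m) + 1 / (M - mu)) * bpp(theta)  # gl'(b'(theta)) b''(theta)
    theta <- theta - t_val / t_deriv
  }
  theta
}
theta_hat <- theta_update(0.3)
```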
As µ_i → M from the left, θ_i → +∞. Likewise, as µ_i → m from the right, θ_i → −∞. To prevent numerical instability when µ_i is at or near these boundaries, we cap |θ_i| at a maximum value (500 by default). The appropriateness of this threshold depends on the scale of the response variable. Rather than adjust the threshold, we center and scale the response variable to the interval [-1, 1] (see the subsequent section on "response variable transformation").

f0 optimization procedure
Holding β fixed at its current estimate, we need to marginally optimize the log-likelihood ℓ(β, f0) over f0, subject to constraints (C2)-(C4). The linear constraints (C3) and (C4) could be enforced using constrained optimization techniques such as Lagrange multipliers, or by reducing the dimension of the parameter space by two. Huang and Rathouz (2012) used the former technique, while Rathouz and Gao (2009) used the latter. We propose a different method, based on the BFGS technique, that is more computationally efficient. At each iteration, we apply a BFGS update to f0 to improve the unconstrained log-likelihood and then transform f0 to satisfy constraints (C3) and (C4). Application of the constraints does not affect the log-likelihood of the estimate, as the constraints are only required for identifiability (i.e. uniqueness of the optimal f0). For any set of positive f0 values, there exists a unique set of values with equal log-likelihood that satisfies both constraints (C3) and (C4).
We define the transformation g0 = (g_1, . . ., g_K) = (log f_1, . . ., log f_K) and consider the log-likelihood as a function of g0 only, with β held fixed. Working on the log scale enforces constraint (C2) and also improves numerical stability. Specifically, numerical stability is improved by working with the score and Hessian as functions of g0 rather than f0.
BFGS is a quasi-Newton procedure, which makes iterative updates using an approximate Hessian matrix along with the exact score function. Let g0^{(t)} be the estimate at the t-th iteration. The updates take the form

g0^{(t+1)} = g0^{(t)} + H_t^{-1} S(g0^{(t)}; β) .   (11)

Here, S(g0; β) is the score as a function of g0 only, holding β fixed. It has k-th element

S_k(g0; β) = Σ_{i=1}^n [ I(y_i = s_k) − f_k exp{θ_i s_k − b(θ_i)} {1 + (s_k − µ_i)(y_i − µ_i)/b''(θ_i)} ]   (12)

for k = 1, . . ., K (derivation in Appendix A). Note that this score function ignores constraints (C3) and (C4). The matrix H_t is an approximation to the negative Hessian of the log-likelihood as a function of g0. Defining u_t = g0^{(t)} − g0^{(t−1)} and v_t = S(g0^{(t−1)}; β) − S(g0^{(t)}; β), we can write the BFGS estimate of the Hessian recursively as

H_t = H_{t−1} + (v_t v_t^T)/(v_t^T u_t) − (H_{t−1} u_t u_t^T H_{t−1})/(u_t^T H_{t−1} u_t) .

H_t is a rank-2 update to H_{t−1} that satisfies the secant condition: H_t u_t = v_t. Furthermore, H_t is guaranteed to be symmetric and positive definite, even though the true Hessian is not full rank. (The true Hessian is not full rank because, without imposing constraints (C3) and (C4), the log-likelihood does not have a unique optimum.) The BFGS update in Equation 11 requires the inverse of H_t, which can be calculated efficiently and directly, without calculating H_t. By the Sherman-Morrison-Woodbury formula,

H_t^{-1} = (I_K − (u_t v_t^T)/(v_t^T u_t)) H_{t−1}^{-1} (I_K − (v_t u_t^T)/(v_t^T u_t)) + (u_t u_t^T)/(v_t^T u_t) .

For an initial estimate, we propose H_0^{-1} = α I_K, where I_K is the K × K identity matrix. We perform a line search along the gradient to choose an appropriate value for α such that ℓ(β, g0^{(1)}) > ℓ(β, g0^{(0)}). As previously mentioned, constraints (C3) and (C4) are only required for identifiability. After each BFGS iteration, we impose these constraints on the f0 estimate, which does not affect the log-likelihood of the estimate. Specifically, we apply (C3) by scaling our estimate of f0 to sum to one. We then "exponentially tilt" the estimate to enforce constraint (C4). In other words, we compute θ such that

Σ_{j=1}^K s_j f_j e^{θ s_j} / Σ_{j=1}^K f_j e^{θ s_j} = µ_0 ,

and set our final estimate for the iteration to be f_k ← f_k e^{θ s_k} / Σ_{j=1}^K f_j e^{θ s_j} for all k.
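The inverse update can be written as a small helper function. This is a generic sketch of the Sherman-Morrison-Woodbury form of the BFGS inverse update, not code from the gldrm package; u is the parameter step and v the score decrease.

```r
# Generic sketch of the Sherman-Morrison-Woodbury form of the BFGS inverse
# update: Hinv is the previous inverse approximation, u the parameter step,
# v the score decrease (illustrative helper, not gldrm package code).
bfgs_inv_update <- function(Hinv, u, v) {
  rho <- 1 / sum(v * u)
  I_K <- diag(length(u))
  (I_K - rho * u %*% t(v)) %*% Hinv %*% (I_K - rho * v %*% t(u)) +
    rho * u %*% t(u)
}
```

The result satisfies the inverse secant condition H_t^{-1} v_t = u_t, consistent with H_t u_t = v_t stated above.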
We initialize g0^{(0)} to the log of the f0 estimate obtained from the previous f0 update procedure. For the first update, we suggest using the empirical response distribution as the initial estimate of f0. We define convergence using the relative change in the log-likelihood; our default convergence threshold is 10^{-10}. If the log-likelihood decreases after any iteration, we backtrack by half steps, setting g0^{(t+1)} ← (g0^{(t+1)} + g0^{(t)})/2 until the log-likelihood improves. In our experience, the log-likelihood improves after most iterations without taking half steps, but log-likelihood decreases can occur sporadically (both at early and late iterations).
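The constraint step described above (rescale for (C3), then exponentially tilt for (C4)) can be sketched in base R; the support, raw masses, and µ_0 below are illustrative.

```r
# Sketch of the post-BFGS constraint step: rescale to satisfy (C3), then
# exponentially tilt to satisfy (C4). Support, masses, and mu0 are made up.
s   <- 1:5
f   <- c(0.5, 1.2, 0.8, 0.3, 0.9)    # positive masses after a BFGS step
mu0 <- 3
f <- f / sum(f)                      # (C3): masses sum to one
theta <- uniroot(function(th) {
  sum(s * f * exp(th * s)) / sum(f * exp(th * s)) - mu0
}, interval = c(-50, 50), tol = 1e-12)$root
f <- f * exp(theta * s) / sum(f * exp(theta * s))   # (C4): tilted mean is mu0
```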

β optimization procedure
Holding f0 fixed at its current estimate, we could marginally optimize the log-likelihood over β using iteratively re-weighted least squares (IRLS). Rather than iterating until convergence, however, we propose using a single IRLS iteration to update β in between f0 updates. The IRLS algorithm is simply the Newton-Raphson algorithm with the Fisher information used in place of the negative Hessian matrix. This technique is commonly referred to as Fisher scoring. As we now show, the Fisher scoring update minimizes a weighted least squares expression, which is why we can refer to the algorithm as IRLS. The score function is given by

S(β; f0) = Σ_{i=1}^n x_i (y_i − µ_i) / {b''(θ_i) g'(µ_i)}   (15)

(derivation in Appendix C), and the Fisher information is

I(β) = X^T W X ,   (16)

where X is an n × p matrix with rows x_i^T, and W is an n × n diagonal matrix with entries w_i = 1/[b''(θ_i) {g'(µ_i)}^2]. Let β^{(0)} be the estimate obtained from the previous β update procedure. The IRLS step between β^{(0)} and the updated estimate, β^{(1)}, is

β^{(1)} = β^{(0)} + I(β^{(0)})^{-1} S(β^{(0)}; f0) = (X^T W X)^{-1} X^T W z ,

where z has entries z_i = x_i^T β^{(0)} + (y_i − µ_i) g'(µ_i). This is the solution to a weighted least squares expression, which can be computed efficiently using a QR decomposition.
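As a sketch of a single Fisher-scoring step, the base-R fragment below uses the identity link, so g'(µ) = 1. The b''(θ_i) values are stand-ins for quantities that would come from the current θ update; with b''(θ_i) = 1 the step reduces to ordinary least squares, which makes the weighted least squares structure easy to verify.

```r
# Sketch of one Fisher-scoring (IRLS) step for beta, holding f0 fixed.
# Identity link shown, so g'(mu) = 1; all names are illustrative.
set.seed(1)
X <- cbind(1, rnorm(20))            # n x p design matrix
beta0 <- c(0.5, 1)                  # current beta estimate
y <- as.vector(X %*% beta0 + rnorm(20))
mu    <- as.vector(X %*% beta0)     # current fitted means (identity link)
bpp_i <- rep(1, 20)                 # stand-in b''(theta_i) under current f0
W <- diag(1 / bpp_i)                # weights (d mu / d eta)^2 / b''(theta_i)
z <- mu + (y - mu) * 1              # working response: eta + (y - mu) g'(mu)
beta1 <- solve(t(X) %*% W %*% X, t(X) %*% W %*% z)
```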

Response variable transformation
Numerical stability issues can occur if the exp{θ_i s_k} terms become too large or small during optimization. To prevent these terms from exceeding R's default floating-point number range, we transform the response vector to the interval [-1, 1]. Specifically, the response values are transformed to

y*_i = {y_i − (m + M)/2} / {(M − m)/2} ,

where m and M are the minimum and maximum observed response values. It turns out the semiparametric log-likelihood function with the transformed response and correspondingly modified link function is equivalent to the original log-likelihood. The parameters β and f0 that optimize the modified log-likelihood also optimize the original log-likelihood (proof in Appendix D).
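The rescaling itself is a one-line affine map; a sketch (assuming the linear form above):

```r
# Sketch of the (assumed linear) rescaling of the response to [-1, 1].
y <- c(2, 5, 9, 14)                      # illustrative response values
m <- min(y); M <- max(y)
y_star <- (y - (m + M) / 2) / ((M - m) / 2)   # min maps to -1, max to 1
```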
As mentioned in the "θ update procedure" section, we choose to cap |θ| for each observation at 500 by default, thereby restricting the rescaled θ_i s_k terms to the interval [-500, 500]. Note that the optimization function in the gldrm R package returns the estimated θ_i values for each observation, and these values are on the original scale of the response variable.

Inference
The gldrm R package (Wurm and Rathouz, 2018) can perform inference on the regression coefficients using likelihood ratio tests for nested models. It can also calculate likelihood ratio, Wald, or score confidence intervals. All three confidence intervals should yield similar results for large samples, but the Wald form may be preferred for its computational simplicity. Likelihood ratio and score confidence intervals are more computationally expensive to calculate. Huang (2014) recommends likelihood ratio tests for small samples.
The Wald asymptotic variance estimate for the β estimator can be obtained from the inverse of the information matrix given in Equation 16. Recall this information matrix is calculated with the f0 parameters held fixed. Its inverse is a valid asymptotic variance matrix because the full information matrix is block diagonal; i.e., the β and f0 estimators are asymptotically independent. The gldrm optimization function returns standard error estimates for each coefficient, which are displayed by the print method along with p-values. Wald confidence intervals are straightforward to calculate from the standard errors and can be obtained from the gldrmCI function. For these single-coefficient hypothesis tests and confidence intervals, we approximate the null distribution by a t-distribution with n − p degrees of freedom, where n is the number of observations and p is the rank of the covariate matrix.
Likelihood ratio tests for nested models are based on the usual test statistic: 2 times the log-likelihood difference between the full and reduced models. Following Huang (2014), we approximate the null distribution by an F-distribution with q and n − p degrees of freedom, where q is the difference in the number of parameters between the full and reduced models. This test can be performed by the gldrmLRT function.
Likelihood ratio and score confidence intervals for a single coefficient can be obtained from the gldrmCI function. These confidence intervals are more computationally expensive than Wald confidence intervals because an iterative method is required to search for the interval boundaries. We use the following bisection method.
Suppose we want to obtain the lower boundary of a confidence level 1 − α interval for a single coefficient β*, and that the gldrm estimate of this coefficient is β̂*. For the lower boundary, we need to find β*_lo such that β*_lo < β̂* and the one-sided likelihood ratio test has a p-value equal to α/2 under the null hypothesis H_0: β* = β*_lo. As explained in the next paragraph, we can compute the p-value for any guess of β*_lo. We begin by finding a guess that is too low (p-value less than α/2) and a guess that is too high (p-value greater than α/2). We then obtain the p-value of the midpoint. If the p-value is less than α/2, then the midpoint becomes the new low guess, and if it is greater than α/2, then the midpoint becomes the new high guess. We iterate this process, each time halving the distance between the low guess and the high guess, until we have a guess of β*_lo that has a p-value arbitrarily close to α/2. The same procedure can be used to solve for β*_hi.

Let β_0 be any constant. To calculate the likelihood ratio or score test p-value of H_0: β* = β_0, we need to optimize the log-likelihood over f0 and all other β values, holding β* fixed at β_0. This can be done by multiplying β_0 with the column of the X matrix corresponding to β* and treating this vector as an offset to the linear predictor. In other words, this vector contains a fixed component of the linear predictor for each observation. The β* column is dropped from X, and the model is optimized with the offset term.
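The bisection search can be sketched as follows. Here pval is a hypothetical monotone stand-in for the one-sided likelihood ratio p-value; in practice it would be computed by refitting the model with an offset as described above.

```r
# Sketch of the bisection search for the lower confidence bound.
# pval is an illustrative monotone stand-in for the one-sided LR p-value.
pval <- function(b) pnorm(b, mean = 1, sd = 0.2)
alpha <- 0.05
lo <- 0                              # guess with p-value below alpha/2
hi <- 1                              # guess with p-value above alpha/2
while (hi - lo > 1e-8) {
  mid <- (lo + hi) / 2               # halve the bracket each iteration
  if (pval(mid) < alpha / 2) lo <- mid else hi <- mid
}
beta_lo <- (lo + hi) / 2             # lower bound with p-value ~ alpha/2
```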

Goodness of fit
To check the model fit, we calculate the randomized probability inverse transform (Smith, 1985). This is done by evaluating the fitted conditional cdf (i.e., the cumulative distribution function, conditional on the covariates) at each observed response value. If the model fit is good, the probability inverse transform values should roughly follow a uniform distribution on the interval (0, 1). Because the gldrm fitted cdf is discrete, we use randomization to make the distribution continuous, as follows.
Let F(y|X = x) denote the fitted cdf, conditional on a covariate vector x, evaluated at y, and let y_i^- = max[{y_j : y_j < y_i} ∪ {−∞}]. For each observation, we draw a random value from a uniform distribution on the interval (F(y_i^-|X = x_i), F(y_i|X = x_i)).

To illustrate a good gldrm fit, we generate 1,000 independent pairs (x, y) where x ~ Normal(mean = 0, sd = 1) and y|x ~ Normal(mean = x, sd = 1). The mean of y|x is linear in x, and the data generating mechanism is an exponential tilt family, so we expect the gldrm fit to be good. We fit the model and then do a visual check by plotting the histogram and uniform QQ plot of the randomized probability inverse transform values (Figure 1). To illustrate a poor gldrm fit, we generate 1,000 independent pairs (x, y) where x ~ Normal(mean = 0, sd = 1) and y|x ~ Normal(mean = x, sd = x^2). The mean of y|x is again linear in x, but this data generating mechanism is not an exponential tilt family. The diagnostic plots (Figure 2) confirm that the gldrm fit is not ideal. The probability inverse transform values are concentrated near the center of the interval (0, 1) rather than being uniformly distributed. The gldrm still provides a good estimate of the regression coefficients and the mean of y|x, but it is not the correct model for the cdf of y|x.
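A base-R sketch of the randomized probability inverse transform for a discrete fitted cdf; Fhat is a stand-in fitted conditional cdf with the covariate dependence suppressed, and the support and masses are illustrative.

```r
# Sketch of the randomized probability inverse transform for a discrete
# fitted cdf (Fhat is a stand-in; covariate dependence suppressed).
set.seed(2)
s <- 1:6
p <- rep(1 / 6, 6)                           # fitted masses on the support
Fhat <- function(y) sum(p[s <= y])           # fitted cdf
y_obs <- sample(s, 500, replace = TRUE)
pit <- sapply(y_obs, function(y) {
  F_lo <- Fhat(y) - p[match(y, s)]           # F(y^-): cdf just below y
  runif(1, min = F_lo, max = Fhat(y))        # uniform draw on (F(y^-), F(y))
})
```

If the fitted cdf matches the data generating distribution, the pit values are uniform on (0, 1), which is what the histogram and QQ plot diagnostics check.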

Simulation experiments
Under four different simulation scenarios, we check the finite sample performance of the regression coefficient estimators, β̂, as well as the reference distribution cdf estimator F̂0(·). For β̂, we also check the accuracy of the estimated standard errors.
For each simulation, the data generating mechanism is a gamma glm. The covariates are the same under each simulation setting, and their values are fixed, not random. The covariate vector is x = (x_0, x_1, x_2, x_3). Variable x_0 is an intercept column of ones. Variable x_1 is an indicator with exactly 20% ones in each sample (observations 5, 10, 15, 20, . . . have x_1 = 1, and the rest have x_1 = 0). Variable x_2 is a continuous variable that follows a uniform sequence between zero and one in each sample (the first observation has x_2 = 0, and the last observation has x_2 = 1). The coefficient values were (β_1, β_2, β_3) = (1, 1, −2) for all simulation scenarios. The intercepts were β_0 = 0 for simulations 1 & 2, and β_0 = 1 for simulations 3 & 4. These values were selected so the mean response values were centered close to one and, for the identity link scenarios, guaranteed to be positive.
The log link was used for simulations 1 & 2, and the identity link was used for simulations 3 & 4. For all four simulations, the response was drawn from a gamma distribution with Var(y|x) = φE(y|x)^2, where φ varies by simulation to achieve different R^2 values. R^2 is defined as the squared correlation between the response and the linear combination of predictors.
1. Simulation 1: log link and high R^2 (φ = 0.5, R^2 ≈ 0.13)
2. Simulation 2: log link and low R^2 (φ = 4, R^2 ≈ 0.019)
3. Simulation 3: identity link and high R^2 (φ = 0.25, R^2 ≈ 0.13)
4. Simulation 4: identity link and low R^2 (φ = 2, R^2 ≈ 0.018)

A gldrm was fit to each data set with the correct link function and with f0 constrained to have mean µ_0 = 1. If we had followed our usual practice of choosing µ_0 to be the sample mean of the observed response values, then the reference distribution would have a different mean for each simulation replicate. By choosing µ_0 = 1 for all replicates, the true f_0 always follows a gamma distribution with mean one and variance φ, where φ varies by simulation scenario. The value µ_0 = 1 was chosen because, by design, it fell within the range of observed response values for all 2,000 replicates of each simulation scenario. The cumulative reference distribution estimate, denoted as F̂0(·), was computed at percentiles 0.1, 0.25, 0.5, 0.75, and 0.9.
For each simulation scenario, we display four tables. The first table (Tables 1, 5, 9, and 13) contains the sample mean of each β estimator. For comparison, we also calculated the gamma glm coefficient estimates for each simulated data set and display their sample mean alongside in parentheses.
The second table (Tables 2, 6, 10, and 14) contains the root mean squared error (rmse) of each β estimator. The rmse is calculated as √{(1/2000) Σ_{i=1}^{2000} (β̂_i − β)^2}, where β̂_i is the estimate from the i-th simulation replicate, and β is the true parameter value. For comparison, we also show the relative efficiency compared to the gamma glm estimator. Relative efficiency is calculated as mse_glm / mse_gldrm, where mse is the mean squared error (not rmse).
The third table (Tables 3, 7, 11, and 15) contains Wald confidence interval coverage rates for each β estimator. Confidence intervals were calculated based on a t-distribution with n − 4 degrees of freedom.
To summarize the results, the gldrm estimators perform as expected. In all four simulation scenarios, the bias of β̂ and F̂0(·) goes to zero as the sample size increases. For the high R^2 scenarios, there is very little bias even at n = 25. Also, the relative efficiency of the β estimators compared to the gamma glm estimators goes to one as the sample size increases. This is expected because, as Rathouz and Gao (2009) demonstrated, the β and f_0 estimators are asymptotically orthogonal, so gldrm and glm have equal asymptotic efficiency when the glm model is correctly specified. For the high R^2 scenarios, the relative efficiency is close to one even at n = 25. For the low R^2 scenarios, the gldrm estimator is actually more efficient than the glm estimator for small sample sizes.
The standard error estimates for the gldrm β estimators are consistently too low for small sample sizes, as demonstrated by low Wald confidence interval coverage rates. The coverage rates improve with increasing sample size, demonstrating good asymptotic properties of the standard error estimators. Likelihood ratio confidence intervals can be calculated with the gldrm R package and may have better small sample performance, as demonstrated by Huang (2014).

Likelihood ratio and score confidence intervals
To support our claim that likelihood ratio confidence intervals may have better small sample performance, we compared the Wald, likelihood ratio, and score 95% confidence interval coverage rates under the settings of Simulation 1 with n = 25. The coverage rates are shown in Table 17. The Wald coverage rates are similar to those in Table 3, but not identical because this experiment used a new set of 2,000 replicates.
While the Wald confidence intervals tend to be too narrow, the score confidence intervals tend to be a bit too wide. The likelihood ratio intervals have coverage rates much closer to 0.95 than the corresponding Wald or score intervals. This is consistent with the findings of Huang (2014).

                     β_0     β_1     β_2     β_3
  Wald              0.918   0.823   0.909   0.859
  Likelihood ratio  0.969   0.939   0.958   0.944
  Score             0.976   0.966   0.967   0.967

Table 17: Coverage rates for 95% confidence intervals under the settings of Simulation 1.

Computational time and scaling
The following simulation explores the computation time of gldrm and how it scales with the number of observations n and the number of covariates p. For each value of n and p, we fit gldrm to 100 randomly generated data sets, fitting the model once to each data set.
Covariates were drawn independently from a standard normal distribution, and coefficients were drawn independently from a uniform distribution on the interval (-1, 1). Simulation 1 used an identity link function with the response drawn from a normal distribution with variance function V(µ) = 1. Simulation 2 used a log link function with the response drawn from an exponential distribution, i.e. a gamma distribution with variance function V(µ) = µ^2.
Iteration limits were set at 100 per θ update, 1,000 per f_0 update, and 100 for the outer loop (each iteration of which consists of a single-iteration β update, followed by an f_0 update). Convergence thresholds were set at 10^{-10} for the θ update, f_0 update, and outer loop update. This experiment was run using a 2.2 GHz AMD Opteron 6174 processor. Figures 3 and 4 show the average CPU seconds for Simulations 1 and 2, respectively.
We also repeated this experiment with the number of support points fixed at 25. To do this, we generated n response values from the model. The first 25 values were designated as the support, and each remaining response value was matched to the nearest support value and set to that value. This discrete data generating model is not actually a gldrm model, but we are only using it to evaluate computation time. Figures 5 and 6 show the average CPU seconds for Simulations 1 and 2, respectively, with the support size fixed at 25.
In summary, the computation time scales roughly linearly with n when the support is fixed at 25. When the support grows with n, the computation time grows faster than linearly. Computation time increases with p, but there is not a consistent pattern, especially with fixed support. (Figures 3-6 plot mean CPU time in seconds against n for p = 1, 2, 4, 8, and 16.)

R package: gldrm
The gldrm package (Wurm and Rathouz, 2018) was written entirely in the R programming language. Its primary functions are the following.
• gldrm is the main function for fitting gldrm models.
• gldrmLRT performs a likelihood ratio test between nested gldrm models. The test statistic is calculated as 2 × (llik − llik_0)/r, where r is the difference in the number of parameters between the full and null models. Under the null hypothesis, the test statistic follows an asymptotic F distribution with degrees of freedom r and n − p, where n is the number of observations and p is the number of parameters in the full model.
• gldrmCI calculates Wald or likelihood ratio confidence intervals for a single coefficient.
• predict.gldrmFit is a predict method, which is modeled after the predict.glm method in the stats package. Predictions can be obtained on the scale of the response or linear predictor, or the tilted nonparametric density for each observation can be returned.
• gldrmPIT returns a set of randomized probability inverse transform values and plots the histogram and uniform QQ plot.

R example: iris data
We demonstrate the gldrm package using the iris data set from the datasets package. This is a data set of n = 150 observations. We choose sepal length to be the response variable. This variable has 35 unique values, so the support size is K = 35. We first demonstrate how to fit a gldrm model with the optimization function, which is simply called gldrm. We then demonstrate how to perform inference on the regression coefficients. We check the goodness of fit using the randomized probability inverse transform. Finally, we show how to obtain predictions for a set of observations, including the predicted mean and the nonparametric estimate of the distribution function.

Fit gldrm model
The gldrm optimization function takes formula and data arguments, similar to R's glm function. Instead of passing both an error distribution and a link function through a family argument, gldrm only requires a link function. The link argument accepts the name of a link function (any function supported by make.link in the stats package is accepted). Alternatively, a custom link function can be passed as a list containing three items:

1. linkfun A vectorized link function.
2. linkinv The corresponding inverse link function, also vectorized.
3. mu.eta The derivative of the inverse link function, also vectorized.

This list structure is the same as that of link-glm class objects, which are constructed by the make.link function. The custom link option allows great flexibility, but it is imperative that the user correctly specify the inverse link and its derivative.
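Because gldrm trusts the user-supplied linkinv and mu.eta, a quick numerical consistency check before fitting can catch a misspecified custom link. Here is a sketch of such a check for a log link, written in Python for illustration (the package itself expects an R list of vectorized functions):

```python
import math

# Custom link triple, analogous to the list(linkfun, linkinv, mu.eta)
# structure described above (log link shown):
linkfun = math.log   # eta = g(mu)
linkinv = math.exp   # mu = g^{-1}(eta)
mu_eta = math.exp    # d mu / d eta

# Check 1: linkinv inverts linkfun over a grid of plausible means.
for mu in (0.1, 1.0, 7.5):
    assert abs(linkinv(linkfun(mu)) - mu) < 1e-10

# Check 2: mu.eta agrees with a central-difference derivative of linkinv.
h = 1e-6
for eta in (-1.0, 0.0, 2.0):
    numeric = (linkinv(eta + h) - linkinv(eta - h)) / (2 * h)
    assert abs(numeric - mu_eta(eta)) < 1e-6
print("custom link triple is consistent")
```

The same two checks translate directly to R with a small grid of mu and eta values.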
The R Journal Vol.

Inference
The gldrmLRT function performs a semiparametric likelihood ratio test between two nested models. We demonstrate this on the iris data by fitting a sub-model that excludes "Species", which is a categorical variable with three levels and two degrees of freedom. We also obtain Wald and likelihood ratio confidence intervals for the petal width coefficient.
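The Wald interval reported by gldrmCI has the usual estimate ± quantile × standard error form. A minimal Python sketch with hypothetical numbers (this uses a standard normal quantile; the package's exact reference distribution may differ):

```python
from statistics import NormalDist

def wald_ci(beta_hat, se, level=0.95):
    """Wald confidence interval: beta_hat +/- z_{1-alpha/2} * se."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return beta_hat - z * se, beta_hat + z * se

# Hypothetical coefficient estimate and standard error:
lo, hi = wald_ci(0.25, 0.08)
print(round(lo, 3), round(hi, 3))  # prints: 0.093 0.407
```

A likelihood ratio interval instead inverts the test from gldrmLRT, collecting the coefficient values not rejected at the chosen level.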

Goodness of fit
The gldrmPIT function produces goodness-of-fit plots using the probability inverse transform. In addition to plotting, this function also returns the inverse transform values as a vector.
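For a discrete fitted distribution, the randomized probability inverse transform draws a value uniformly between the fitted CDF just below the observed response and at the observed response; under a correctly specified model these values are approximately uniform on (0, 1). A minimal Python sketch of that mechanic, with hypothetical support points and fitted probabilities:

```python
import random

def randomized_pit(y_obs, support, probs, rng):
    """Randomized PIT for a discrete fitted distribution: draw u
    uniformly on (F(y_obs-), F(y_obs)], where F is the fitted CDF."""
    f_lower = sum(p for s, p in zip(support, probs) if s < y_obs)
    f_upper = sum(p for s, p in zip(support, probs) if s <= y_obs)
    return f_lower + rng.random() * (f_upper - f_lower)

support = [4.6, 5.0, 5.4, 6.1, 7.0]   # hypothetical support points
probs = [0.1, 0.3, 0.3, 0.2, 0.1]     # hypothetical fitted probabilities

u = randomized_pit(5.4, support, probs, random.Random(1))
assert 0.4 < u <= 0.7   # lies between F(5.4-) = 0.4 and F(5.4) = 0.7
```

Applying this to every observation and plotting a histogram and uniform QQ plot of the resulting values is exactly the diagnostic that gldrmPIT automates.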

Prediction
We obtain predictions for three selected observations in the data set: one from each Species. These observations were contained in the training data, but we could obtain predictions for out-of-sample observations in the same way. Using the predict method, we obtain the fitted mean response value for each observation, which is the estimate of E(Y|X = x).
We also use the predict method to obtain the nonparametric estimate of the conditional density f(y|X = x). This is obtained by setting the argument type = "fTilt", which returns an n × K matrix with (i, k)th entry f̂_k exp{θ̂_i s_k − b(θ̂_i)}. Each row contains the nonparametric conditional density estimate for a given observation and sums to one. We use this matrix to calculate conditional probabilities of the form P(y1 < Y ≤ y2 | X = x) for each observation. Note that all observed support values (sepal length values) fall between four and eight, so all probability mass falls within this interval.
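The interval probability P(y1 < Y ≤ y2 | X = x) is simply a row-wise sum of the fTilt matrix over the support points inside the interval. A minimal Python sketch with a hypothetical 2 × 4 matrix and support:

```python
def interval_prob(ftilt_row, support, y1, y2):
    """P(y1 < Y <= y2 | x) from one row of the n x K fTilt matrix."""
    return sum(p for s, p in zip(support, ftilt_row) if y1 < s <= y2)

support = [4.5, 5.1, 6.0, 7.2]        # hypothetical support points s_k
ftilt = [[0.20, 0.40, 0.30, 0.10],    # hypothetical density rows;
         [0.05, 0.15, 0.50, 0.30]]    # each row sums to one

for row in ftilt:
    assert abs(sum(row) - 1.0) < 1e-12   # rows are conditional densities

print([round(interval_prob(row, support, 5.0, 6.5), 2) for row in ftilt])
# prints: [0.7, 0.65]
```

In R, the same calculation is a call to rowSums on the matrix columns whose support values fall in (y1, y2].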

Discussion
We introduced a new optimization algorithm for gldrm, which scales better computationally with the number of unique observed response values. This is especially useful for estimation with a continuous response, where the number of parameters in f0 equals the sample size. In particular, the BFGS technique dramatically speeds up the f0 optimization step and makes gldrm much more computationally feasible for either discrete or continuous response data. The new algorithm is implemented in the gldrm package. Simulation results show that the algorithm and software obtain accurate estimates and standard errors. Computation time was shown to be feasible for support sizes well over one thousand.
Future research directions could include the incorporation of random effects into the gldrm framework. Optimization techniques for generalized linear mixed models, such as adaptive Gaussian quadrature, may be useful for model estimation (Pinheiro and Chao, 2006; Pinheiro and Bates, 1995).

Appendix Notation
In this appendix, we use ∂ to denote a partial derivative and d to denote a total derivative. We require this notation for chain rule derivatives, specifically to derive the score function with respect to β and f0. Each θi is implicitly a function of (β, f0) and hence should not be held fixed when differentiating the log-likelihood with respect to either β or f0. We also let ℓi denote the contribution of the ith observation to the log-likelihood.

A. Score function for f0
We derive Equation 12, which gives the score function with respect to f0, holding β fixed. Using the definition of ℓi in Equation 18 and applying the chain rule, the kth element of the score function is given by Equation 19 [equation omitted]. The first term on the RHS of Equation 19 is [equation omitted]. The second term on the RHS of Equation 19 is [equation omitted]. Recall that µi = g−1(xiT β), which does not vary with f0. Therefore [equation omitted], and the third term on the RHS of Equation 19 is [equation omitted]. Plugging the results of Equations 20, 21, and 23 into Equation 19, we obtain [equation omitted]. We obtain the result of Equation 12 by applying the Jacobian transformation {S(g0; β)}k = fk {S(f0; β)}k.
(Equation 33), where θ*i = θi ρ. Equation 6 can be rewritten as [equation omitted]. Hence, if we define a modified link function g*(µ) = g(ν + ρµ), with inverse g*−1(η) = (g−1(η) − ν)/ρ, we can write [equation omitted]. In other words, θ*i has the same implicit definition as θi when the modified link function and transformed response variable are used in place of the originals. This shows that, as a function of (β, f0), the log-likelihood with the transformed response and modified link function is equivalent to the original log-likelihood. Note that θ*i must be multiplied by 1/ρ if one would like to calculate θi on the original scale.

Figure 1: Scatterplot and probability inverse transform histogram and QQ plot for a good gldrm model fit.

Figure 2: Scatterplot and probability inverse transform histogram and QQ plot for a poor gldrm model fit.

Figure 5: Mean computation time for Simulation 1 with support size fixed at 25. Error bars represent ±2 standard errors over 100 replicates.

Figure 6: Mean computation time for Simulation 2 with support size fixed at 25. Error bars represent ±2 standard errors over 100 replicates.

Table 1: Sample mean of β̂. For comparison, the gamma glm sample mean is shown in parentheses.

Table 2: Root mean squared error (rmse) of β̂. In parentheses is the relative efficiency compared to the gamma glm estimator.

Table 3: Wald confidence interval coverage rate for β.

Table 4: Sample mean of F̂0 at selected true percentiles.

Table 5: Sample mean of β̂. For comparison, the gamma glm sample mean is shown in parentheses.

Table 6: Root mean squared error (rmse) of β̂. In parentheses is the relative efficiency compared to the gamma glm estimator.

Table 7: Wald confidence interval coverage rate for β.

Table 8: Sample mean of F̂0 at selected true percentiles.

Table 9: Sample mean of β̂. For comparison, the gamma glm sample mean is shown in parentheses.

Table 10: Root mean squared error (rmse) of β̂. In parentheses is the relative efficiency compared to the gamma glm estimator.

Table 11: Wald confidence interval coverage rate for β.

Table 12: Sample mean of F̂0 at selected true percentiles.

Table 13: Sample mean of β̂. For comparison, the gamma glm sample mean is shown in parentheses.

Table 14: Root mean squared error (rmse) of β̂. In parentheses is the relative efficiency compared to the gamma glm estimator.

Table 15: Wald confidence interval coverage rate for β.

Table 16: Sample mean of F̂0 at selected true percentiles.