The inverse Gaussian distribution (IGD) is a well known and often used probability distribution for which fully reliable numerical algorithms have not been available. We develop fast, reliable basic probability functions (dinvgauss, pinvgauss, qinvgauss and rinvgauss) for the IGD that work for all possible parameter values and achieve close to full machine accuracy. The most challenging task is to compute quantiles for given cumulative probabilities, and we develop a simple but elegant mathematical solution to this problem. We show that Newton's method for finding the quantiles of an IGD always converges monotonically when started from the mode of the distribution. Simple Taylor series expansions are used to improve accuracy on the log-scale. The IGD probability functions provide the same options and obey the same conventions as the probability functions in the stats package.
The inverse Gaussian distribution (IGD) (Tweedie 1957; Johnson and Kotz 1970) is widely used in a variety of application areas including reliability and survival analysis (Whitmore 1975; Chhikara and Folks 1977; Bardsley 1980; Chhikara 1989; Wang and Xu 2010; Balakrishna and Rahul 2014). It is more generally used for modeling non-negative positively skewed data because of its connections to exponential families and generalized linear models (Seshadri 1993; Blough et al. 1999; Smyth and Verbyla 1999; De Jong and Heller 2008).
Basic probability functions for the IGD have been implemented previously in James Lindsey’s R package rmutil (Lindsey 2010) and in the CRAN packages SuppDists (Wheeler 2009) and STAR (Pouzat 2012). We have found however that none of these IGD functions work for all parameter values or return results to full machine accuracy. Bob Wheeler remarks in the SuppDists documentation that the IGD “is an extremely difficult distribution to treat numerically”. The rmutil package was removed from CRAN in 1999 but is still available from Lindsey’s webpage. SuppDists was orphaned in 2013 but is still available from CRAN. The SuppDists code is mostly implemented in C while the other packages are pure R as far as the IGD functions are concerned.
The probability density of the IGD has a simple closed form expression and so is easy to compute. Care is still required though to handle infinite parameter values that correspond to valid limiting cases. The cumulative distribution function (cdf) is also available in closed form via an indirect relationship with the normal distribution (Shuster 1968; Chhikara and Folks 1974). Considerable care is nevertheless required to compute probabilities accurately on the log-scale, because the formula involves a sum of two normal probabilities on the un-logged scale. Random variates from IGDs can be generated using a combination of chisquare and binomial random variables (Michael et al. 1976). Most difficult is the inverse cdf or quantile function, which must be computed by some iterative numerical approximation.
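These closed forms can be checked against each other. The following sketch (illustrative helper names `dens` and `cdf`, not statmod functions, in the mean–dispersion parametrization used throughout this article) compares the normal-based cdf with numerical integration of the density:

```r
# Density of IG(mean, dispersion) and Shuster's normal-based cdf;
# illustrative helpers only, not the statmod implementations.
dens <- function(x, m = 1.5, phi = 0.7)
  exp(-(x - m)^2 / (2 * phi * m^2 * x)) / sqrt(2 * pi * phi * x^3)
cdf <- function(q, m = 1.5, phi = 0.7)
  pnorm((q / m - 1) / sqrt(phi * q)) +
    exp(2 / (phi * m)) * pnorm(-(q / m + 1) / sqrt(phi * q))

# The two agree away from the extreme tails (both about 0.7742):
integrate(dens, 0, 2)$value
cdf(2)
```

The direct sum of the two normal terms in `cdf` is exactly the computation that breaks down in the far tails, which motivates the log-scale version described later.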
Two strategies have been used to compute IGD quantiles. One is to solve for the quantile using a general-purpose equation solver such as the uniroot function in R. This is the approach taken by the qinvgauss functions in the rmutil and STAR packages. It can usually be relied on to converge satisfactorily but is computationally slow and provides only limited precision. The other approach is to use Newton's method to solve the equation after applying an initial approximation (Kallioras and Koutrouvelis 2014). This approach was taken by one of the current authors when developing inverse Gaussian code for S-PLUS (Smyth 1998). It is also the approach taken by the qinvGauss function in the SuppDists package. This approach is fast and accurate when it works but can fail unpredictably when the Newton iteration diverges. Newton's method cannot in general be guaranteed to converge, even when the initial approximation is close to the required value, and the parameter values for which divergence occurs are hard to predict.
We have resolved the above difficulties by developing a Newton iteration for the IGD quantiles that has guaranteed convergence. Instead of attempting to find a starting value that is close to the required solution, we instead use the convexity properties of the cdf function to approach the required quantiles in a predictable fashion. We show that Newton's method for finding the quantiles of an IGD always converges when started from the mode of the distribution. Furthermore the convergence is monotonic, so that backtracking is eliminated. Newton's method is eventually quadratically convergent, meaning that the number of correct decimal places tends to double with each iteration (Press et al. 1992). Although the starting value may be far from the required solution, the rapid convergence means the starting value is quickly left behind. Convergence tends to be rapid even when the required quantile is in the extreme tails of the distribution.
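The scheme can be sketched in a few lines of R. This is an illustrative sketch only (`qig_newton` is a hypothetical name; closed-form helpers stand in for the statmod functions and no special cases are handled):

```r
# Newton iteration for the p-quantile of IG(mean, dispersion),
# started from the mode of the distribution; sketch only.
qig_newton <- function(p, mean = 1, dispersion = 1, tol = 1e-12, maxit = 50) {
  cdf <- function(q) pnorm((q / mean - 1) / sqrt(dispersion * q)) +
    exp(2 / (dispersion * mean)) * pnorm(-(q / mean + 1) / sqrt(dispersion * q))
  pdf <- function(q) exp(-(q - mean)^2 / (2 * dispersion * mean^2 * q)) /
    sqrt(2 * pi * dispersion * q^3)
  b <- 1.5 * dispersion * mean
  q <- mean * (sqrt(1 + b^2) - b)        # start from the mode
  for (i in seq_len(maxit)) {
    step <- (cdf(q) - p) / pdf(q)
    q <- q - step
    if (abs(step) < tol * q) break
  }
  q
}
qig_newton(0.5)    # about 0.6758, the median of IG(1, 1)
```

Starting from the mode, the successive iterates move towards the required quantile from one side only, so no step ever needs to be undone.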
The above methods have been implemented in the dinvgauss, pinvgauss, qinvgauss and rinvgauss functions of the statmod package (Smyth 2016). The functions give close to machine accuracy for all possible parameter values. They obey similar conventions to the probability functions provided in the stats package. Tests show that the functions are faster, more accurate and more reliable than existing functions for the IGD. Every effort has been made to ensure that the functions return results for the widest possible range of parameter values.
The inverse Gaussian distribution, denoted IG(μ, φ), has probability density function f(x; μ, φ) = (2πφx³)^(−1/2) exp{−(x − μ)²/(2φμ²x)} for x > 0, with mean μ > 0 and dispersion φ > 0. Note that the mean of the distribution is μ and its variance is φμ³. The IGD is unimodal with mode at m = μ[(1 + (3φμ/2)²)^(1/2) − 3φμ/2].
Description | Parameter values | log-pdf | cdf
---|---|---|---
Left limit | | −∞ | 0
Left limit | | −∞ | 0
Left limit | | −∞ | 0
Right limit | | −∞ | 1
Right limit | | −∞ | 1
Right limit | | −∞ | 1
Spike | φ = ∞ | ∞ at q = 0 | 1
Spike | | ∞ at q = 0 | 1
Inverse chisquare | μ = ∞ | Eqn (5) | Uses pchisq
Invalid | | NA | NA
Care needs to be taken with special cases when evaluating the pdf (Table 1). Missing (NA) values in the quantile or parameter arguments give NA values for the pdf, except when the pdf is determined regardless of the missing argument, for example when the dispersion is infinite. Invalid parameter values similarly give NA values for the pdf, except when the quantile is outside the support of the distribution, in which case the pdf is zero regardless of the parameters.
Next we give some code examples. We start by loading the packages that we will compare. Note that statmod is loaded last and is therefore first in the search path.
> library(rmutil)
> library(SuppDists)
> library(STAR)
> library(statmod)
The statmod dinvgauss function checks for out-of-range or missing values:
> options(digits = 3)
> dinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = 1.5, dispersion = 0.7)
[1] 0.000 0.000 0.440 0.162 0.000 NA
Infinite mean corresponds to an inverse-chisquare case:
> dinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = Inf, dispersion = 0.7)
[1] 0.000 0.000 0.233 0.118 0.000 NA
Infinite dispersion corresponds to a spike at 0 regardless of the mean:
> dinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = NA, dispersion = Inf)
[1] 0 Inf 0 0 0 NA
Even when the parameters are missing, the pdf can be evaluated at extreme x values for which it does not depend on the parameters:
> dinvgauss(c(-1, 0, 1, Inf), mean = NA, dispersion = NA)
[1] 0 NA NA 0
All the existing functions rmutil::dinvgauss, SuppDists::dinvGauss and STAR::dinvgauss return errors for the above calls; they do not tolerate NA values, infinite parameter values, or out-of-range x values.
Let Φ denote the standard normal cdf. The cdf of the IGD can be written in closed form as F(q; μ, φ) = Φ(z₁) + e^(2/(φμ)) Φ(z₂), where z₁ = (q/μ − 1)/√(φq) and z₂ = −(q/μ + 1)/√(φq) (Shuster 1968). The exponential factor in the second term can overflow in floating point arithmetic even when the product of the two factors is well within range, and both terms can underflow to zero in the far left tail even when their sum is representable on the log-scale.
It is possible to derive an asymptotic expression for the right tail probability for large q; the derivation is given in the Appendix.
To avoid or minimize the numerical problems described above, we convert the terms in the cdf to the log-scale and remove a common factor before combining the two terms. Each normal probability is obtained as a log-probability from pnorm with lower.tail=TRUE and log.p=TRUE. Note also that log1p() is an R function that computes the logarithm of one plus its argument, avoiding subtractive cancellation for small arguments. The computation of the right tail probability is similar, but with the signs of the arguments adjusted accordingly. The statmod::pinvgauss function is able to compute correct cdf values even in the far tails of the distribution:
> options(digits = 4)
> pinvgauss(0.001, mean = 1.5, disp = 0.7)
[1] 3.368e-312
> pinvgauss(110, mean = 1.5, disp = 0.7, lower.tail = FALSE)
[1] 2.197e-18
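The log-scale combination described above can be sketched as follows (illustrative code with a hypothetical name `logcdf_ig`, not the statmod source): each normal term is obtained as a log-probability, the larger term is factored out, and log1p() combines the remainder.

```r
# Log of the IGD cdf computed without underflow; sketch only.
logcdf_ig <- function(q, mean = 1, dispersion = 1) {
  a <- pnorm((q / mean - 1) / sqrt(dispersion * q), log.p = TRUE)
  b <- 2 / (dispersion * mean) +
    pnorm(-(q / mean + 1) / sqrt(dispersion * q), log.p = TRUE)
  m <- pmax(a, b)
  m + log1p(exp(pmin(a, b) - m))   # = log(e^a + e^b), factoring out e^m
}
logcdf_ig(0.001, mean = 1.5, dispersion = 0.7)   # about -717.2
```

Exponentiating this result recovers the tiny left tail probability 3.368e-312 shown above, and the log-probability itself remains accurate far beyond the point where the unlogged probability underflows.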
None of the existing functions can distinguish such small left tail probabilities from zero:
> rmutil::pinvgauss(0.001, m = 1.5, s = 0.7)
[1] 0
> SuppDists::pinvGauss(0.001, nu = 1.5, lambda = 1/0.7)
[1] 0
> STAR::pinvgauss(0.001, mu = 1.5, sigma2 = 0.7)
[1] 0
rmutil::pinvgauss doesn't compute right tail probabilities. STAR::pinvgauss does, but can't distinguish right tail probabilities less than 1e-17 from zero:
> STAR::pinvgauss(110, mu = 1.5, sigma2 = 0.7, lower.tail = FALSE)
[1] 0
SuppDists::pinvGauss returns non-zero right tail probabilities, but these are too large by more than a factor of 10:
> SuppDists::pinvGauss(110, nu = 1.5, lambda = 1/0.7, lower.tail = FALSE)
[1] 2.935e-17
The use of log-scale computations means that statmod::pinvgauss can accurately compute log-probabilities that are too small to be represented on the unlogged scale:
> pinvgauss(0.0001, mean = 1.5, disp = 0.7, log.p = TRUE)
[1] -7146.914
None of the other packages can compute log-probabilities less than about −745, the logarithm of the smallest positive number representable in double precision.
pinvgauss handles special cases similarly to dinvgauss (Table 1). Again, none of the existing functions do this:
> pinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = 1.5, dispersion = 0.7)
[1] 0.0000 0.0000 0.5009 0.7742 1.0000 NA
Infinite mean corresponds to an inverse-chisquare case:
> pinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = Inf, dispersion = 0.7)
[1] 0.000 0.000 0.232 0.398 1.000 NA
Infinite dispersion corresponds to a spike at 0 regardless of the mean:
> pinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = NA, dispersion = Inf)
[1] 0 1 1 1 1 NA
Even when the parameters are missing, the cdf can be evaluated at extreme q values for which it does not depend on the parameters:
> pinvgauss(c(-1, 0, 1, Inf), mean = NA, dispersion = NA)
[1] 0 NA NA 1
We can test the accuracy of the cdf functions by comparing with the cdf of the chisquare distribution on one degree of freedom, using the identity in equation (6), which expresses a chisquare tail probability as the sum of a left and a right tail probability of the IGD. First we find the pair of IGD quantiles corresponding to a given chisquare value:
> options(digits = 4)
> mu <- 1.5
> phi <- 0.7
> q1 <- 0.1
> z <- (q1 - mu)^2 / (phi * mu^2 * q1)
> polycoef <- c(mu^2, -2 * mu - phi * mu^2 * z, 1)
> q <- Re(polyroot(polycoef))
> q
[1] 0.1 22.5
The chisquare cdf value corresponding to the left hand side of equation (6) is:
> options(digits = 18)
> pchisq(z, df = 1, lower.tail = FALSE)
[1] 0.00041923696954098788
Now we compute the right hand side of equation (6) using each of the IGD packages, starting with statmod:
> pinvgauss(q[1], mean = mu, disp = phi) +
+ pinvgauss(q[2], mean = mu, disp = phi, lower.tail = FALSE)
[1] 0.00041923696954098701
> rmutil::pinvgauss(q[1], m = mu, s = phi) +
+ 1 - rmutil::pinvgauss(q[2], m = mu, s = phi)
[1] 0.00041923696954104805
> SuppDists::pinvGauss(q[1], nu = mu, lambda = 1/phi) +
+ SuppDists::pinvGauss(q[2], nu = mu, lambda = 1/phi, lower.tail = FALSE)
[1] 0.00041923696954101699
> STAR::pinvgauss(q[1], mu = mu, sigma2 = phi) +
+ STAR::pinvgauss(q[2], mu = mu, sigma2 = phi, lower.tail = FALSE)
[1] 0.00041923696954100208
It can be seen that the statmod function is the only one to agree with pchisq to 15 significant figures, corresponding to a relative error of about 2 × 10⁻¹⁵.
More extreme tail values give even more striking results. We repeat the above process now with q1 = 0.01:
> q1 <- 0.01
> z <- (q1 - mu)^2 / (phi * mu^2 * q1)
> polycoef <- c(mu^2, -2 * mu - phi * mu^2 * z, 1)
> q <- Re(polyroot(polycoef))
The reference chisquare cdf value is:
> pchisq(z, df = 1, lower.tail = FALSE)
[1] 1.6427313604456241e-32
This can be compared to the corresponding values from the IGD packages:
> pinvgauss(q[1], mean = mu, disp = phi) +
+ pinvgauss(q[2], mean = mu, disp = phi, lower.tail = FALSE)
[1] 1.6427313604456183e-32
> rmutil::pinvgauss(q[1], m = mu, s = phi) +
+ 1 - rmutil::pinvgauss(q[2], m = mu, s = phi)
[1] 0
> SuppDists::pinvGauss(q[1], nu = mu, lambda = 1/phi) +
+ SuppDists::pinvGauss(q[2], nu = mu, lambda = 1/phi, lower.tail = FALSE)
[1] 8.2136568022278466e-33
> STAR::pinvgauss(q[1], mu = mu, sigma2 = phi) +
+ STAR::pinvgauss(q[2], mu = mu, sigma2 = phi, lower.tail = FALSE)
[1] 1.6319986233795599e-32
It can be seen from the above that rmutil and SuppDists do not agree with pchisq to any significant figures (the relative errors being 100% and 50% respectively), while STAR manages 3 significant figures. statmod on the other hand continues to agree with pchisq to 15 significant figures.
Now consider the problem of computing the quantile function, that is, of solving F(q) = p for q given p. Recall that the IGD is unimodal for all parameter values, with mode m as given earlier. Since the second derivative of the cdf is the derivative of the pdf, the cdf is convex to the left of the mode and concave to the right. Suppose that the required quantile lies to the right of the mode. Because F is concave there, each Newton step from a point below the quantile moves towards it without overshooting, so the iteration started from the mode increases monotonically to the required value. The case of a quantile to the left of the mode is symmetric, with the iteration decreasing monotonically. We evaluate the cdf terms in the Newton step using pinvgauss with log.p=TRUE, and combine them accurately using the log1p function.
We find that the statmod qinvgauss function gives 16 significant figures whereas the other packages give no more than 6–8 figures of accuracy. Precision can be demonstrated by passing a probability vector through qinvgauss and then pinvgauss. The two are inverse functions, so the final probabilities should in principle be equal to the original values. Error is measured by comparing the original and processed probability vectors:
> p <- c(0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.5,
+ 0.9, 0.99, 0.999, 0.9999, 0.99999, 0.999999)
>
> p1 <- pinvgauss(qinvgauss(p, mean = 1, disp = 1), mean = 1, disp = 1)
> p2 <- rmutil::pinvgauss(rmutil::qinvgauss(p, m = 1, s = 1), m = 1, s = 1)
> p3 <- SuppDists::pinvGauss(SuppDists::qinvGauss(p, nu = 1, la = 1), nu = 1, la = 1)
> p4 <- STAR::pinvgauss(STAR::qinvgauss(p, mu = 1, sigma2 = 1), mu = 1, sigma2 = 1)
>
> options(digits = 4)
> summary( abs(p-p1) )
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.00e+00 0.00e+00 0.00e+00 1.92e-17 2.20e-19 2.22e-16
> summary( abs(p-p2) )
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.00e+00 5.10e-09 8.39e-08 3.28e-07 5.92e-07 1.18e-06
> summary( abs(p-p3) )
Min. 1st Qu. Median Mean 3rd Qu. Max.
1.00e-12 6.00e-12 2.77e-10 1.77e-09 2.58e-09 1.03e-08
> summary( abs(p-p4) )
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.00e+00 0.00e+00 1.20e-08 8.95e-07 2.17e-07 6.65e-06
It can be seen that the error for statmod::qinvgauss is never greater than 2e-16.
Similar results are observed if relative error is assessed in terms of the quantiles rather than the probabilities:
> q <- qinvgauss(p, mean = 1, disp = 1)
> q1 <- qinvgauss(pinvgauss(q, mean = 1, disp = 1), mean = 1, disp = 1)
> q2 <- rmutil::qinvgauss(rmutil::pinvgauss(q, m = 1, s = 1), m = 1, s = 1)
> q3 <- SuppDists::qinvGauss(SuppDists::pinvGauss(q, nu = 1, la = 1), nu = 1, la = 1)
> q4 <- STAR::qinvgauss(STAR::pinvgauss(q, mu = 1, sigma2 = 1), mu = 1, sigma2 = 1)
> summary( abs(q1-q)/q )
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.00e+00 0.00e+00 0.00e+00 5.57e-17 0.00e+00 4.93e-16
> summary( abs(q2-q)/q )
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.00e+00 1.70e-06 3.30e-06 8.94e-05 8.80e-05 5.98e-04
> summary( abs(q3-q)/q )
Min. 1st Qu. Median Mean 3rd Qu. Max.
1.09e-08 3.94e-08 4.78e-08 4.67e-08 5.67e-08 8.93e-08
> summary( abs(q4-q)/q )
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.00e+00 3.00e-07 1.40e-06 9.20e-05 9.42e-05 5.46e-04
The relative error for statmod::qinvgauss is never worse than 5e-16.
Speed was determined by generating a vector p of one million random uniform probabilities and running each package's qinvgauss or qinvGauss function on p with mean and dispersion both equal to one.
> set.seed(20140526)
> u <- runif(1000)
> p <- runif(1e6)
> system.time(q1 <- qinvgauss(p, mean = 1, shape = 1))
user system elapsed
4.29 0.41 4.69
> system.time(q2 <- rmutil::qinvgauss(p, m = 1, s = 1))
user system elapsed
157.39 0.03 157.90
> system.time(q3 <- SuppDists::qinvGauss(p, nu = 1, lambda = 1))
user system elapsed
13.59 0.00 13.68
> system.time(q4 <- STAR::qinvgauss(p, mu = 1, sigma2 = 1))
user system elapsed
266.41 0.06 267.25
Timings shown here are for a Windows laptop with a 2.7GHz Intel i7 processor running 64-bit R-devel (built 31 January 2016). The statmod qinvgauss function is 40 times faster than the rmutil or STAR functions and about 3 times faster than SuppDists.
Reliability is perhaps even more crucial than precision or speed. SuppDists::qinvGauss fails for some parameter values because Newton's method does not converge from the starting values provided:
> options(digits = 4)
> SuppDists::qinvGauss(0.00013, nu=1, lambda=3)
Error in SuppDists::qinvGauss(0.00013, nu = 1, lambda = 3) :
Iteration limit exceeded in NewtonRoot()
By contrast, statmod::qinvgauss runs successfully for all parameter values because divergence of the algorithm is impossible:
> qinvgauss(0.00013, mean = 1, shape = 3)
[1] 0.1504
qinvgauss returns right tail values accurately, for example:
> qinvgauss(1e-20, mean = 1.5, disp = 0.7, lower.tail = FALSE)
[1] 126.3
The same probability can be supplied as a left tail probability on the log-scale, with the same result:
> qinvgauss(-1e-20, mean = 1.5, disp = 0.7, log.p = TRUE)
[1] 126.3
Note that qinvgauss returns the correct quantile in this case even though the left tail probability is not distinguishable from 1 in floating point arithmetic on the unlogged scale. By contrast, the rmutil and STAR functions do not compute right tail values, and the SuppDists function fails to converge for small right tail probabilities:
> SuppDists::qinvGauss(1e-20, nu = 1.5, lambda = 1/0.7, lower.tail = FALSE)
Error in SuppDists::qinvGauss(1e-20, nu = 1.5, lambda = 1/0.7, lower.tail = FALSE) :
Infinite value in NewtonRoot()
Similarly for log-probabilities, the rmutil and STAR functions do not accept log-probabilities and the SuppDists function gives an error:
> SuppDists::qinvGauss(-1e-20, nu = 1.5, lambda = 1/0.7, log.p=TRUE)
Error in SuppDists::qinvGauss(-1e-20, nu = 1.5, lambda = 1/0.7, log.p = TRUE) :
Infinite value in NewtonRoot()
All the statmod IGD functions allow variability to be specified either by way of a dispersion parameter or a shape parameter (the reciprocal of the dispersion):
> args(qinvgauss)
function (p, mean = 1, shape = NULL, dispersion = 1, lower.tail = TRUE,
log.p = FALSE, maxit = 200L, tol = 1e-14, trace = FALSE)
Boundary or invalid values of p are detected:
> options(digits = 4)
> qinvgauss(c(0, 0.5, 1, 2, NA))
[1] 0.0000 0.6758 Inf NA NA
as are invalid values for the mean:
> qinvgauss(0.5, mean = c(0, 1, 2))
[1] NA 0.6758 1.0285
The statmod functions dinvgauss, pinvgauss and qinvgauss all preserve the attributes of the first input argument, provided that none of the other arguments has longer length. For example, qinvgauss will return a matrix if p is a matrix:
> p <- matrix(runif(4), 2, 2)
> rownames(p) <- c("A", "B")
> colnames(p) <- c("X1", "X2")
> p
X1 X2
A 0.6001 0.3435
B 0.4919 0.4987
> qinvgauss(p)
X1 X2
A 0.8486 0.4759
B 0.6637 0.6739
Similarly the names of a vector are preserved on output:
> p <- c(0.1, 0.6, 0.7, 0.9)
> names(p) <- LETTERS[1:4]
> qinvgauss(p)
A B C D
0.2376 0.8483 1.0851 2.1430
The functions statmod::rinvgauss, SuppDists::rinvGauss and STAR::rinvgauss all use the same algorithm to compute random deviates from the IGD. The method is to generate chisquare random deviates corresponding to the quadratic equation satisfied by the IGD deviates, and then to choose between the two roots of the quadratic with the appropriate probability for each deviate (Michael et al. 1976).
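A sketch of this generator in the mean–dispersion parametrization (illustrative only, with a hypothetical name `rig_sketch` and no handling of special parameter values):

```r
# Michael, Schucany and Haas (1976) generator for IG(mean, dispersion);
# sketch only.
rig_sketch <- function(n, mean = 1, dispersion = 1) {
  y <- rchisq(n, df = 1)                     # squared standard normal deviates
  t <- dispersion * mean * y / 2
  x1 <- mean * (1 + t - sqrt(t^2 + 2 * t))   # smaller root of the quadratic
  # Choose the smaller root with probability mean / (mean + x1),
  # otherwise take the conjugate root mean^2 / x1
  ifelse(runif(n) < mean / (mean + x1), x1, mean^2 / x1)
}
mean(rig_sketch(1e5, mean = 1.5, dispersion = 0.7))   # close to 1.5
```

Only one chisquare and one uniform deviate are needed per IGD deviate, which is why this method is much faster than inverting the cdf.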
The rmutil::rinvgauss function generates random deviates by running qinvgauss on random uniform deviates. This is far slower and less accurate than the other functions.
Basic probability calculations for the IGD have been available in various forms for some time but the functions described here are the first to work for all parameter values and to return close to full machine accuracy.
The statmod functions achieve good accuracy by computing probabilities on the log-scale where possible. Care is given to handle special limiting cases, including some cases that have not been previously described. The statmod functions trap invalid parameter values, provide all the standard arguments for probability functions in R, and preserve argument attributes on output.
A new strategy has been described to invert the cdf using a monotonically convergent Newton iteration. It may seem surprising that we recommend starting the iteration from the same value regardless of the quantile required. Intuitively, a starting value that is closer to the required quantile might have been expected to be better. However, using an initial approximation runs the risk of divergence, and convergence of Newton's method from the mode is so rapid that the potential advantage of a closer initial approximation is minimized. The statmod qinvgauss function is 40 times faster than the quantile functions in the rmutil or STAR packages, despite returning 16 rather than 6 figures of accuracy. It is also 3 times faster than SuppDists, even though SuppDists::qinvGauss is written in C, uses the same basic Newton strategy and has a less stringent stopping criterion. The starting values for Newton's method used by SuppDists::qinvGauss are actually closer to the final values than those used by statmod::qinvgauss, but the latter are more carefully chosen to achieve smooth convergence without backtracking.
SuppDists::qinvGauss uses the log-normal approximation of Whitmore and Yalovsky (1978) to start the Newton iteration, and STAR::qinvgauss uses the same approximation to set up the interval limits for uniroot. Unfortunately the log-normal approximation has much heavier tails than the IGD, meaning that the starting values are more extreme than the required quantiles and are therefore outside the domain of monotonic convergence.
As well as the efficiency gained by avoiding backtracking, monotonic convergence has the advantage that any change in sign of the Newton step is a symptom that the limits of floating point accuracy have been reached. In the statmod qinvgauss function, the Newton iteration is stopped if this change of sign occurs before the convergence criterion is achieved.
The current statmod functions could be made faster by reimplementing in C, but the pure R versions have benefits in terms of understandability and easy maintenance, and they are only slightly slower than comparable functions such as qchisq and qt.
The strategy used here to compute the quantiles could be used for any continuous unimodal distribution, or for any continuous distribution that can be transformed to be unimodal.
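For example, the same mode-started iteration can be sketched for the gamma distribution with shape greater than one, using only stats functions (`qgamma_newton` is an illustrative name, not a statmod function):

```r
# Mode-started Newton iteration for gamma quantiles; sketch only,
# assuming shape > 1 so the mode is positive.
qgamma_newton <- function(p, shape, rate = 1, tol = 1e-12, maxit = 100) {
  x <- (shape - 1) / rate                # mode of the gamma distribution
  for (i in seq_len(maxit)) {
    step <- (pgamma(x, shape, rate) - p) / dgamma(x, shape, rate)
    x <- x - step
    if (abs(step) < tol * x) break
  }
  x
}
qgamma_newton(0.9, shape = 3)   # agrees with qgamma(0.9, 3)
```

The gamma density is log-concave for shape greater than one, so its cdf is likewise convex to the left of the mode and concave to the right, and the same monotonic convergence argument applies.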
> sessionInfo()
R Under development (unstable) (2016-01-31 r70055)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_Australia.1252 LC_CTYPE=English_Australia.1252
[3] LC_MONETARY=English_Australia.1252 LC_NUMERIC=C
[5] LC_TIME=English_Australia.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] statmod_1.4.24 STAR_0.3-7 codetools_0.2-14 gss_2.1-5
[5] R2HTML_2.3.1 mgcv_1.8-11 nlme_3.1-124 survival_2.38-3
[9] SuppDists_1.1-9.2 rmutil_1.0
loaded via a namespace (and not attached):
[1] Matrix_1.2-3 splines_3.3.0 grid_3.3.0 lattice_0.20-33
Here we derive an asymptotic expression for the right tail probability of the IGD for large quantiles.
For attribution, please cite this work as
Giner & Smyth, "statmod: Probability Calculations for the Inverse Gaussian Distribution", The R Journal, 2016
BibTeX citation
@article{RJ-2016-024, author = {Giner, Göknur and Smyth, Gordon K.}, title = {statmod: Probability Calculations for the Inverse Gaussian Distribution}, journal = {The R Journal}, year = {2016}, note = {https://rjournal.github.io/}, volume = {8}, issue = {1}, issn = {2073-4859}, pages = {339-351} }