investr: An R Package for Inverse Estimation

Abstract:

Inverse estimation is a classical and well-known problem in regression. In simple terms, it involves the use of an observed value of the response to make inference on the corresponding unknown value of the explanatory variable. To our knowledge, however, statistical software is somewhat lacking in capabilities for analyzing these types of problems. In this paper, we introduce investr (which stands for inverse estimation in R), a package for solving inverse estimation problems in both linear and nonlinear regression models.


Published: May 27, 2014

Received: Sep 27, 2013

Citation: Greenwell & Kabban, 2014

Volume: 6/1

Pages: 90-100


1 Introduction

Consider the regression model $Y_i = f(x_i; \boldsymbol{\beta}) + \epsilon_i$ ($i = 1, \ldots, n$), where $f$ is a known expectation function (called a calibration curve) that is monotonic over the range of interest and $\epsilon_i \overset{iid}{\sim} \mathcal{N}(0, \sigma^2)$. A common problem in regression is to predict a future response $Y_0$ from a known value of the explanatory variable $x_0$. Often, however, there is a need to do the reverse; that is, given an observed value of the response $Y = y_0$, estimate the unknown value of the explanatory variable $x_0$. This is known as the calibration problem, though we refer to it more generally as inverse estimation. In this paper, we consider only controlled calibration, where the values of the explanatory variables are fixed by design. A more thorough overview of the calibration problem, including Bayesian approaches and multivariate calibration, is given in Osborne (1991).

There are three main functions available in the investr package (Greenwell, 2014):

calibrate operates on objects of class "lm" and can only be used when the expectation function has the form $f(x_i; \boldsymbol{\beta}) = \beta_0 + \beta_1 x_i$ (i.e., the simple linear regression model), where closed-form solutions are available for calculating confidence intervals for $x_0$. For more complicated models (e.g., polynomial and nonlinear regression), closed-form expressions are usually not available and iterative techniques will be required. This is the purpose of the function invest, which calculates the point estimate and confidence intervals for $x_0$ by calling the function uniroot from the stats package. The function plotFit produces a scatterplot of the data with the fitted regression curve and the option to add confidence/prediction bands for the response (pointwise or adjusted). It can be used with single-predictor objects of class "lm" or "nls"; however, for objects of class "nls", the confidence/prediction bands are based on the linear approximation and can be misleading (Bates and Watts, 1988).

The development version of investr can be found on GitHub at https://github.com/w108bmg/investr. To report bugs or issues, contact the main author or submit them to https://github.com/w108bmg/investr/issues.
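Assuming the stable release is available on CRAN (see the GitHub page above for the latest installation instructions), the package can be installed and loaded in the usual way:

## Install the stable version from CRAN (if necessary) and load the package
install.packages("investr")
library(investr)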

2 Calibration for straight line regression

Consider the most common calibration model, $Y_i = \beta_0 + \beta_1 x_i + \epsilon_i$ ($i = 1, \ldots, n$), where the $x_i$ are fixed and $\epsilon_i \overset{iid}{\sim} \mathcal{N}(0, \sigma^2)$. Suppose we obtain a series of $m$ observations $y_{01}, \ldots, y_{0m}$ that are all associated with the same (unknown) $x_0$. It can be shown that the maximum likelihood (ML) estimator of $x_0$, also known as the classical estimator, is

$$\hat{x}_0 = \frac{\bar{y}_0 - \hat{\beta}_0}{\hat{\beta}_1}, \tag{1}$$

where $\hat{\beta}_0$ and $\hat{\beta}_1$ are the usual ML estimators of $\beta_0$ and $\beta_1$, respectively, and $\bar{y}_0 = \sum_{j=1}^m y_{0j}/m$. The classical estimator in general is computed as $\hat{x}_0 = f^{-1}(\bar{y}_0; \hat{\boldsymbol{\beta}})$; that is, by inverting the fitted calibration curve at $\bar{y}_0$ (Eisenhart, 1939). Since $\hat{x}_0$ is a ratio of jointly normal random variables, it is not surprising to learn that it does not have any finite moments (think of a standard Cauchy distribution). The sampling distribution of $\hat{x}_0$ is complicated but, fortunately, not required for setting an exact $100(1-\alpha)\%$ confidence interval on $x_0$. This is discussed in the next section.
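As a quick illustration of Equation (1), the following minimal sketch, which uses simulated data rather than any data from the package, fits a straight line and inverts it at the mean of two hypothetical readings:

## A minimal sketch of the classical estimator (1) on simulated data
set.seed(101)
sim <- data.frame(x = 1:30)
sim$y <- 1 + 2 * sim$x + rnorm(30)
fit <- lm(y ~ x, data = sim)
y0.bar <- mean(c(20.5, 21.1))  # mean of m = 2 hypothetical readings
(x0.hat <- unname((y0.bar - coef(fit)[1]) / coef(fit)[2]))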

Inversion interval

It can be shown that an exact $100(1-\alpha)\%$ confidence interval for $x_0$ is given by

$$\hat{x}_0 + \frac{(\hat{x}_0 - \bar{x})g \pm \dfrac{t\hat{\sigma}}{\hat{\beta}_1}\sqrt{(\hat{x}_0 - \bar{x})^2/S_{xx} + (1 - g)\left(\dfrac{1}{m} + \dfrac{1}{n}\right)}}{1 - g}, \tag{2}$$

where $S_{xx} = \sum_{i=1}^n (x_i - \bar{x})^2$, $g = t^2\hat{\sigma}^2/(\hat{\beta}_1^2 S_{xx})$, and $t = t_{\alpha/2,\,n+m-3}$ is the upper $\alpha/2$ percentile of a Student's $t$-distribution with $n+m-3$ degrees of freedom. The inversion interval (2) can also be obtained using a fiducial argument, as in Fieller (1954). For the special case $m = 1$, the inversion interval is equivalent to inverting a $100(1-\alpha)\%$ prediction interval for the response. In other words, if one draws a horizontal line through a scatterplot of the data at $y_0$, then the abscissas of its intersection with the usual (pointwise) prediction band for $f$ correspond to the endpoints of the inversion interval (2). This interval should only be used when the usual $F$ test of $H_0\colon \beta_1 = 0$ versus $H_1\colon \beta_1 \neq 0$ can be rejected at the specified $\alpha$ level; in other words, when the regression line is not "too" flat. If $H_0$ is not rejected, then such a confidence interval for $x_0$ may consist of either the entire real line or two semi-infinite intervals (see Figure 2). The plotFit function in the investr package can be used for drawing scatterplots with the fitted model and confidence/prediction bands. To calculate the inversion interval for the linear calibration problem, we use the calibrate function and specify the option interval = "inversion" (the default), as in the following example.

The data frame arsenic contains the true amounts of arsenic present in 32 water samples (Graybill and Iyer, 1994). Also recorded is the amount of arsenic measured by some field test, which is subject to error. A new water sample is obtained and subjected to the field test, producing a reading of 3.0 μg/ml. It is desired to infer the true amount of arsenic present in the sample. The following code fits a simple linear regression model to the arsenic data and then uses the calibrate function to compute the ML estimate and the corresponding 90% calibration interval based on Equation (2).

library(investr)
mod <- lm(measured ~ actual, data = arsenic)
(res <- calibrate(mod, y0 = 3, interval = "inversion", level = 0.9))
  
## Figure 1
plotFit(mod, interval = "prediction", level = 0.9, shade = TRUE, col.pred = "skyblue")
abline(h = 3, v = c(res$lower, res$estimate, res$upper), lty = 2)

Running the above block of code produces Figure 1 and the following output to the R console:

estimate    lower    upper 
  2.9314   2.6035   3.2587

where estimate is the ML estimate (1), and lower/upper correspond to the lower and upper bounds of the 90% inversion interval (2) for $x_0$. If instead the new water sample was subjected to the field test three times, thereby producing three response values corresponding to $x_0$, say 3.17, 3.09, and 3.16 μg/ml, we would simply supply calibrate with a vector of these values as in

calibrate(mod, y0 = c(3.17, 3.09, 3.16), interval = "inversion", level = 0.9)
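For the single-reading case ($y_0 = 3$, $m = 1$) reported above, the limits can also be reproduced, up to rounding, directly from Equation (2). A minimal sketch, assuming the arsenic fit mod from above:

## Reproducing the 90% inversion interval (2) by hand (y0 = 3, m = 1)
b <- coef(mod)
sig <- summary(mod)$sigma
x <- arsenic$actual
n <- length(x); m <- 1
x0.hat <- unname((3 - b[1]) / b[2])
Sxx <- sum((x - mean(x))^2)
tval <- qt(1 - 0.1/2, n + m - 3)
g <- unname(tval^2 * sig^2 / (b[2]^2 * Sxx))
x0.hat + ((x0.hat - mean(x)) * g + c(-1, 1) * (tval * sig / abs(unname(b[2]))) *
  sqrt((x0.hat - mean(x))^2 / Sxx + (1 - g) * (1/m + 1/n))) / (1 - g)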

If interval = "inversion" and the slope of the model is not significant at the specified $\alpha$ level, then finite confidence limits for $x_0$ will not be produced. For example, suppose badfit is an "lm" object for which the slope is not significant at the $\alpha = 0.1$ level. Then, as illustrated in Figure 2,

calibrate(badfit, y0 = 10, level = 0.9)

will either produce two semi-infinite intervals, e.g.,

Error: The calibration line is not well determined. The resulting
confidence region is the union of two semi-infinite intervals:
( -Inf , -282.0006 ) U ( 393.1267 , Inf)

or the entire real line, e.g.,

estimate    lower    upper 
-97.5987     -Inf      Inf 
Warning message:
The calibration line is not well determined.
Figure 1: Scatterplot of the arsenic data with fitted calibration line and pointwise prediction band. A horizontal reference line is drawn through the observed value $y_0 = 3$. The vertical lines identify the position of the point estimate and 90% confidence bounds for $x_0$.
Figure 2: Hypothetical $1-\alpha$ (pointwise) prediction bands. Left: the horizontal line at $y_0$ intersects the prediction band at two points, resulting in a finite interval. Middle: the horizontal line at $y_0$ does not intersect the prediction band at all, resulting in an infinite interval. Right: the horizontal line at $y_0$ intersects only one side of the prediction band, resulting in two semi-infinite intervals.

Wald interval

Another common approach to computing calibration intervals is to use the delta method (Dorfman, 1938). It is easy to show that an approximate standard error for the ML estimator (1), based on a first-order Taylor series approximation, is given by

$$\widehat{se}(\hat{x}_0) = \frac{\hat{\sigma}}{|\hat{\beta}_1|}\sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(\hat{x}_0 - \bar{x})^2}{S_{xx}}}. \tag{3}$$

Assuming large-sample normality for $\hat{x}_0$ leads to an approximate $100(1-\alpha)\%$ Wald confidence interval for $x_0$ of

$$\hat{x}_0 \pm t_{\alpha/2,\,n+m-3}\,\frac{\hat{\sigma}}{|\hat{\beta}_1|}\sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(\hat{x}_0 - \bar{x})^2}{S_{xx}}}. \tag{4}$$

This is equivalent to taking $g = 0$ in the inversion interval (2). Unlike the inversion interval, though, the Wald interval always exists and is symmetric about $\hat{x}_0$. This symmetry is attractive, but not always realistic, such as when the calibration curve $f$ is nonlinear and has horizontal asymptotes. To obtain a Wald-type interval, we specify interval = "Wald" in the call to calibrate. For instance,

calibrate(mod, y0 = 3, interval = "Wald", level = 0.9)

produces the output

estimate    lower    upper       se 
  2.9314   2.6040   3.2589   0.1929

The point estimate remains unchanged (as expected), but the calibration interval is slightly different. The major benefit of using the delta method to compute calibration intervals is that the resulting interval always exists and provides an asymptotic estimate of the standard error, which in this case is $\widehat{se} = 0.1929$. The bootstrap, as discussed in the section Bootstrap calibration intervals, also provides calibration intervals and an estimate of the standard error, but does not require large samples or specific distributional assumptions.
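The reported standard error can be checked against Equation (3) directly. A minimal sketch, again assuming the arsenic fit mod from above:

## Checking the Wald standard error (3) by hand (y0 = 3, m = 1)
b <- coef(mod)
sig <- summary(mod)$sigma
x <- arsenic$actual
n <- length(x); m <- 1
x0.hat <- unname((3 - b[1]) / b[2])
Sxx <- sum((x - mean(x))^2)
unname(sig / abs(b[2]) * sqrt(1/m + 1/n + (x0.hat - mean(x))^2 / Sxx))
## should agree with the reported value of 0.1929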

Confidence interval for a specified mean response

Instead of inferring the value of $x_0$ that corresponds to an observed value of the response $y_0$, suppose we want to infer the value of $x_0$ that corresponds to a specified value of the mean response, say $f(x_0; \boldsymbol{\beta}) = \mu_0$. The obvious ML estimate of $x_0$ is

$$\hat{x}_0 = \frac{\mu_0 - \hat{\beta}_0}{\hat{\beta}_1}, \tag{5}$$

which is the same as Equation (1) but with $\bar{y}_0$ replaced by $\mu_0$, a fixed population parameter. This is analogous to the difference between (i) predicting a future value of the response corresponding to a given value of the explanatory variable and (ii) estimating the mean response that corresponds to a particular value of the explanatory variable. Thus, the point estimates (1) and (5) are the same, but the former has greater variability inherited from the variance of $Y_0$. This is sometimes referred to as regulation (as opposed to calibration) and is described in more detail in Graybill and Iyer (1994). The confidence interval formulas for $x_0$ corresponding to a specified mean response (i.e., regulation) are the same as those given in Equations (2) and (4), but with $1/m$ replaced by 0 and $t_{\alpha/2,\,n+m-3}$ replaced by $t_{\alpha/2,\,n-2}$. For instance, the Wald interval for $x_0$ corresponding to $\mu_0$ is simply
$$\hat{x}_0 \pm t_{\alpha/2,\,n-2}\,\frac{\hat{\sigma}}{|\hat{\beta}_1|}\sqrt{\frac{1}{n} + \frac{(\hat{x}_0 - \bar{x})^2}{S_{xx}}},$$

where $\hat{x}_0$ is calculated as in Equation (5). To obtain calibration intervals corresponding to a specified mean response, use the option mean.response = TRUE.

To illustrate, consider the crystal growth data taken from Graybill and Iyer (1994). These data are from an experiment in which the weights in grams of 14 crystals were recorded after letting the crystals grow for different (predetermined) amounts of time in hours. The weight of each crystal is plotted against time in Figure 3. Suppose we want to estimate the growth time in hours that corresponds to an average weight of 8 grams; that is, we want to estimate $x_0$ corresponding to $\mu_0 = 8$. The following code chunk fits a simple linear regression model to the crystal growth data and then computes the ML estimate and a 95% calibration interval for $x_0$.

mod <- lm(weight ~ time, data = crystal)
(res <- calibrate(mod, y0 = 8, mean.response = TRUE))
  
## Figure 3
plotFit(mod, interval = "confidence", pch = 19, shade = TRUE, col.conf = "plum", 
        extend.range = TRUE)
abline(h = 8, v = c(res$lower, res$estimate, res$upper), lty = 2)

The output for this code chunk should be

estimate    lower    upper 
 15.8882  14.6590  17.1596 

Thus, in order to produce crystals with an average weight of 8 grams, they should be grown for an estimated 15.8882 hours. A 95% confidence interval for the growth time is (14.6590, 17.1596). Obviously, if mean.response = TRUE, then $y_0$ can only take a single value; otherwise, an error will be displayed, as in the following:

calibrate(mod, y0 = c(8, 9), mean.response = TRUE)

which displays the message

Error in calibrate.default(cbind(x, y), ...) : 
  Only one mean response value allowed.
Figure 3: Scatterplot of the crystal growth data with fitted calibration line and pointwise confidence band. A horizontal reference line is drawn at $\mu_0 = 8$. The vertical lines identify the position of the point estimate and 95% confidence bounds for the growth time $x_0$.

This type of calibration problem is similar to computing the median effective dose ($ED_{0.5}$), or more generally $ED_p$ where $0 < p < 1$, in binary response models. In R, these models are usually fit with the glm function from the stats package. The function dose.p from the MASS package (Venables and Ripley, 2002) can then be used to compute the ML estimate of $ED_p$ for specified $p$. An estimate of the asymptotic standard error based on the delta method is also given, which can be used to calculate a Wald-based confidence interval for $ED_p$. A future release of investr will likely make an inversion-type interval available for these models as well (i.e., by inverting a confidence interval for the mean response on the link scale). The package drc (Ritz and Streibig, 2005), for fitting dose-response curves, may also be of interest.
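For instance, here is a minimal sketch of dose.p applied to the well-known budworm example; the data values are as given in Venables and Ripley (2002):

## Estimating ED50 with MASS::dose.p (budworm data of Venables and Ripley, 2002)
library(MASS)
ldose <- rep(0:5, 2)  # log2 dose
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
SF <- cbind(numdead, numalive = 20 - numdead)
budworm.lg <- glm(SF ~ ldose, family = binomial)
dose.p(budworm.lg, cf = c(1, 2), p = 0.5)  # ML estimate of ED50 and its SE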

Simultaneous calibration intervals

The calibration intervals discussed so far are one-at-a-time intervals. If $k$ new values of the response are observed, where each corresponds to a different unknown, say $x_{01}, \ldots, x_{0k}$, then we need to adjust the critical value used in the inversion and Wald intervals accordingly. The simplest approaches are, of course, the Bonferroni and Scheffé procedures. These can be computed by specifying the adjust option, which can take any of the following arguments: "none" (the default), "bonferroni", or "scheffe". The value k also needs to be specified, as in the example below. See Miller (1981) for more details.
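For example, a Bonferroni adjustment for $k = 3$ future unknowns might look like the following (re-using the arsenic fit from the previous section):

## 90% simultaneous (Bonferroni-adjusted) calibration intervals for k = 3 unknowns
mod <- lm(measured ~ actual, data = arsenic)
calibrate(mod, y0 = 3, interval = "inversion", level = 0.9,
          adjust = "bonferroni", k = 3)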

3 Nonlinear and polynomial calibration

In application, many relationships are nonlinear (e.g., dose-response curves). The classical estimator, along with the inversion and Wald intervals, can easily be extended to such nonlinear calibration curves. However, classical inference in these models (such as prediction intervals) is based on large samples and linear approximations (Bates and Watts, 1988). Thus, for nonlinear calibration curves, the inversion interval provides only an approximate $100(1-\alpha)\%$ confidence interval for $x_0$, as does the Wald interval. Calibration in nonlinear models is discussed in further detail in Schwenke and Milliken (1991), Seber and Wild (2003), and Huet (2004). For calibration in polynomial models, see, for example, Brown (1993).

The invest function can be used for inverse estimation with any univariate regression model in R that inherits from class "lm" or "nls" (with the exception of weighted fits). For instance, consider the regression model $Y_i = f(x_i; \boldsymbol{\theta}) + \epsilon_i$ ($i = 1, \ldots, n$), where $f$ may or may not be linear in $\boldsymbol{\theta}$. If we wish to estimate $x_0$ given an observation $y_0$, then the point estimate $\hat{x}_0$ is given by solving $y_0 = f(x; \hat{\boldsymbol{\theta}})$ for $x$. The solution will be unique as long as $f$ is monotonic in the region of interest. The invest function computes $\hat{x}_0$ by calling the stats functions predict and uniroot to solve $y_0 - f(x; \hat{\boldsymbol{\theta}}) = 0$ numerically for $x$.

Approximate confidence intervals

Equation (2) gives a closed-form expression for the inversion interval for the case of simple linear regression. In more complicated cases, such an expression is not available and the interval must be found numerically. It can be shown that

$$T_{x_0} = \frac{Y_0 - f(x_0; \hat{\boldsymbol{\theta}})}{\left\{\hat{\sigma}^2 + \hat{S}_{x_0}^2\right\}^{1/2}} \sim t_{n-p}, \tag{6}$$

where $\hat{S}_{x_0}^2$ is the estimated variance of $f(x_0; \hat{\boldsymbol{\theta}})$. An approximate $100(1-\alpha)\%$ confidence interval for $x_0$ is then given by the set

$$\left\{x : t_{\alpha/2,\,n-p} < T_x < t_{1-\alpha/2,\,n-p}\right\}. \tag{7}$$

Essentially, invest finds the lower and upper limits of this interval by solving the equations $T_x - t_{\alpha/2,\,n-p} = 0$ and $T_x - t_{1-\alpha/2,\,n-p} = 0$ numerically for $x_0$. For the special case of the simple linear regression model, these limits coincide with Equation (2) and the coverage probability is exactly $1-\alpha$.
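The following self-contained sketch (a conceptual illustration, not investr's actual implementation) shows the idea for a simple linear fit with a single new observation, where the limits can be checked against Equation (2):

## A conceptual sketch of solving (7) numerically for an "lm" fit (m = 1)
set.seed(101)
dat <- data.frame(x = 1:20)
dat$y <- 2 + 3 * dat$x + rnorm(20)
fit <- lm(y ~ x, data = dat)
y0 <- 35
Tx <- function(x) {  # the pivot in Equation (6)
  pred <- predict(fit, newdata = data.frame(x = x), se.fit = TRUE)
  (y0 - pred$fit) / sqrt(summary(fit)$sigma^2 + pred$se.fit^2)
}
alpha <- 0.05
lower <- uniroot(function(x) Tx(x) - qt(1 - alpha/2, df.residual(fit)),
                 interval = c(0, 20))$root
upper <- uniroot(function(x) Tx(x) - qt(alpha/2, df.residual(fit)),
                 interval = c(0, 20))$root
c(lower = lower, upper = upper)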

The Wald interval for $x_0$ is also easily extended. It has the basic form $\hat{x}_0 \pm t_{\alpha/2,\,n-p}\,\widehat{se}(\hat{x}_0)$, where $p$ is the dimension of $\boldsymbol{\theta}$. The estimated standard error $\widehat{se}(\hat{x}_0)$ is based on a first-order Taylor series approximation. For the special case of the simple linear regression model, this approximation results in Equation (3).

Since the point estimate and confidence intervals for $x_0$ are obtained numerically using uniroot, invest has the additional options lower, upper, tol, and maxiter. See the reference manual for details.

To get an idea of how invest works, consider the data in Figure 4, which were generated from the model $Y = 5 + x - \sin(x) + 1.5Z$, where $Z$ is a standard normal random variable. Suppose we want to estimate $x_0$ given a new observation, say $y_0 = 22$. Fitting a model of the form $f(x; \boldsymbol{\theta}) = \theta_1 + \theta_2[x - \sin(x)]$ to the sample, we obtain $\hat{f}(x) = 5.490 + 0.941[x - \sin(x)]$, which is certainly invertible, but $\hat{f}^{-1}$ cannot be expressed in closed form using a finite number of terms; thus, $\hat{x}_0 = \hat{f}^{-1}(22)$ must be obtained numerically. The following chunk of code generates the sample data and calculates $\hat{x}_0$ and an approximate 95% Wald interval for $x_0$ using the invest function.

set.seed(101) # for reproducibility
x <- rep(seq(from = 0, to = 25, by = 2), each = 2)
d <- data.frame(x, y = 5 + x - sin(x) + rnorm(length(x), sd = 1.5))
mod <- lm(y ~ I(x - sin(x)), data = d)
res <- invest(mod, y0 = 22, interval = "Wald")

## Figure 4
plotFit(mod, interval = "prediction", shade = TRUE, col.pred = "seagreen1",
        extend.range = TRUE)
abline(h = 22, v = res$estimate, lty = 2)

The point estimate is $\hat{x}_0 = 16.7053$ with an estimated standard error of 0.8909. The code used by invest for obtaining the point estimate is essentially

uniroot(function(x) predict(mod, newdata = list("x" = x)) - 22, 
        lower = min(d$x), upper = max(d$x))$root

The code for obtaining the standard error used in the Wald interval is slightly more involved:

## Write x0.hat as function of parameters (theta1.hat, theta2.hat, Y0)
x0.fun <- function(params, object = mod) {
  object$coefficients <- params[1:2]
  uniroot(function(x) predict(object, list("x" = x)) - params[3], 
          lower = 0, upper = 25, tol = 1e-10)$root
}

## Variance-covariance matrix of (theta1.hat, theta2.hat, Y0)'
covmat <- diag(3)
covmat[1:2, 1:2] <- vcov(mod)
covmat[3, 3] <- summary(mod)$sigma^2

## Numerically evaluate gradient
params <- c(coef(mod), y0 = 22)
grad <- attr(numericDeriv(quote(x0.fun(params)), "params"), "gradient")

## Calculate standard error
(se <- sqrt(grad %*% covmat %*% t(grad)))
Figure 4: Scatterplot of the simulated data with fitted calibration line and pointwise prediction band. A horizontal reference line is drawn through the observed value $y_0 = 22$. The vertical line identifies the position of the point estimate $\hat{x}_0$.

The following example uses the nasturtium data from the drc package. These data were analyzed in Racine-Poon (1988) using an approximate Bayesian approach. Bioassays were performed on a type of garden cress called nasturtium. Six replicates of the response (plant weight in mg after the third week of growth) were measured at seven preselected concentrations of an agrochemical. The weights corresponding to three new soil samples, all sharing the same (unknown) concentration $x_0$, were observed to be 309, 296, and 419 mg. The block of code below fits the log-logistic model
$$f(x; \boldsymbol{\theta}) = \begin{cases} \theta_1, & x = 0, \\ \theta_1/\left[1 + \exp\left\{\theta_2 + \theta_3 \ln(x)\right\}\right], & x > 0, \end{cases}$$
and computes an approximate 95% inversion interval for $x_0$.

## Load package containing nasturtium data
library(drc)
  
## Fit log-logistic model
mod <- nls(weight ~ theta1/(1 + exp(theta2 + theta3 * log(conc))),
           start = list(theta1 = 1000, theta2 = -1, theta3 = 1), 
           data = nasturtium)
plotFit(mod, interval = "prediction") # figure not shown
  
## Compute approximate 95% inversion interval
invest(mod, y0 = c(309, 296, 419), interval = "inversion")

The interval obtained is (1.7722, 2.9694) with a point estimate of 2.2639. We can check the point estimate manually; some algebra gives
$$\hat{x}_0 = \exp\left\{\frac{\log\left(\hat{\theta}_1/\bar{y}_0 - 1\right) - \hat{\theta}_2}{\hat{\theta}_3}\right\} = 2.2639.$$
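In R, this hand calculation is simply (assuming mod is the nls fit from above):

## Manual check of the point estimate (should return roughly 2.2639)
theta <- coef(mod)
unname(exp((log(theta[1] / mean(c(309, 296, 419)) - 1) - theta[2]) / theta[3]))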

A Wald interval can also be obtained by running the following line of code:

invest(mod, y0 = c(309, 296, 419), interval = "Wald")

the output for which is

estimate    lower    upper       se 
  2.2639   1.6889   2.8388   0.2847

In certain cases (i.e., when $\hat{x}_0$ can be written in closed form), one can use the very useful deltaMethod function from the car package (Fox and Weisberg, 2011) to compute the approximate standard error used in the Wald interval. For the nasturtium example, the minimal code to obtain $\widehat{se}(\hat{x}_0)$ is

## Using the deltaMethod function in the car package
library(car)
covmat <- diag(4)
covmat[1:3, 1:3] <- vcov(mod)
covmat[4, 4] <- summary(mod)$sigma^2 / 3 # since length(y0) = 3
(se <- deltaMethod(c(coef(mod), y0.bar = mean(c(309, 296, 419))), 
                   g = "exp((log(theta1 / y0.bar - 1) - theta2) / theta3)", 
                   vcov. = covmat)$SE)

which produces an estimated standard error of 0.2847019, the same as that obtained automatically by invest.

As indicated by the bootstrap distribution obtained in the next section, the symmetric Wald interval obtained here seems unrealistic for this problem. In the next section, we show how to obtain a bias-corrected and accelerated (BCa) bootstrap confidence interval for $x_0$ using the boot package (Canty and Ripley, 2014).

4 Bootstrap calibration intervals

The bootstrap provides an alternative means for computing calibration intervals. This is useful in nonlinear calibration problems, where inference traditionally relies on large samples, approximate normality, and linear approximations. A practical guide to the bootstrap is provided by Davison and Hinkley (1997) and the companion R package boot, and bootstrap resampling for controlled calibration is discussed in Jones and Rocke (1999). Although a "bootstrap" option is likely to appear in a future release of investr, it is quite simple to set up using the recommended boot package. First, we discuss a naive approach to calculating bootstrap calibration intervals.

A simple, but ultimately wrong, approach to resampling in controlled calibration is demonstrated in the following example for the nasturtium data:

library(boot)

## Function to compute estimate of x0
x0.fun <- function(object, y) {
  theta <- unname(coef(object))
  exp((log(theta[1] / mean(y) - 1) - theta[2]) / theta[3]) 
}
  
## Bootstrap setup
y0 <- c(309, 296, 419)
res <- resid(mod) - mean(resid(mod)) # center the residuals
n <- length(res)
boot.data <- data.frame(nasturtium, res = res, fit = fitted(mod))
boot.fun <- function(data, i) {
  boot.mod <- nls(fit + res[i] ~ theta1 / (1 + exp(theta2 + theta3 * log(conc))),
                  start = list(theta1 = 1000, theta2 = -1, theta3 = 1), data = data)
                    
  ## Make sure the original estimate also gets returned
  if (all(i == 1:n)) x0.fun(mod, y0) else x0.fun(boot.mod, y0)
}
  
## Run bootstrap simulation (takes about 50s on a standard laptop)
set.seed(123) # for reproducibility
res <- boot(boot.data, boot.fun, R = 9999) # collect 9,999 bootstrap samples
boot.ci(res, type = "bca") # obtain BCa confidence interval for x0

This produces an approximate 95% BCa calibration interval of (2.04, 2.52), quite a bit shorter than the ones produced by the inversion and Wald methods in the previous section. Did the bootstrap really do that well? The answer here is no, but it is not the bootstrap's fault. Recall that $\bar{y}_0$ is an observed value of the random variable $\bar{Y}_0$, which has variance $\sigma^2/m$. This source of variability is ignored in the bootstrap Monte Carlo simulation above, which treats $\bar{y}_0$ as a fixed constant. Compare the interval just obtained with the output from

invest(mod, y0 = mean(c(309, 296, 419)), mean.response = TRUE)

We can get a hold of this extra source of variability by again sampling with replacement from the centered residuals. In particular, we resample the responses $y_{0j}$ ($j = 1, 2, \ldots, m$) by randomly selecting $m$ additional residuals $e_j^*$ from the centered residuals and calculating the bootstrap responses $y_{0j}^* = \bar{y}_0 + e_j^*$, $j = 1, 2, \ldots, m$. Let $\bar{y}_0^* = \sum_{j=1}^m y_{0j}^*/m$. The correct bootstrap replicate of $x_0$ is then given by $\hat{x}_0^* = f^{-1}(\bar{y}_0^*; \hat{\boldsymbol{\theta}}^*)$. To implement this, all we need to do is make the following changes to boot.fun (the changes are the two lines involving Y0):

boot.fun <- function(data, i) {
  boot.mod <- nls(fit + res[i] ~ theta1 / (1 + exp(theta2 + theta3 * log(conc))),
                  start = list(theta1 = 1000, theta2 = -1, theta3 = 1), data = data)
                    
  ## Simulate the correct variance
  Y0 <- y0 + sample(data$res, size = length(y0), replace = TRUE)
    
  ## Make sure the original estimate also gets returned
  if (all(i == 1:n)) x0.fun(mod, y0) else x0.fun(boot.mod, Y0)
}

Rerunning the previous example with the modified boot.fun function produces a more realistic 95% confidence interval for $x_0$ of (1.818, 2.950) and an estimated standard error for $\hat{x}_0$ of 0.2861 (similar to that obtained by the delta method earlier). Figure 5 shows a histogram and normal Q-Q plot of the 9,999 bootstrap replicates of $\hat{x}_0$. The bootstrap distribution of $\hat{x}_0$ is comparable to the approximate posterior density of $x_0$ given in Racine-Poon (1988). The bootstrap distribution is skewed to the right and clearly not normal, suggesting that the Wald interval may not be realistic for this problem.
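For reference, these summaries can be pulled straight from the returned "boot" object; a minimal sketch, assuming res holds the rerun results:

## Summaries of the bootstrap replicates (res from the rerun simulation)
sd(res$t)                   # bootstrap estimate of the standard error of x0.hat
boot.ci(res, type = "bca")  # BCa interval, as before
plot(res)                   # histogram and normal Q-Q plot (cf. Figure 5)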

Figure 5: The left panel shows a histogram of the bootstrap replicates with a vertical line indicating the position of the original estimate $\hat{x}_0$. The right panel shows a normal Q-Q plot of the bootstrap replicates.

5 Summary

We introduced the investr package for computing point estimates along with inversion and Wald-based confidence intervals for linear and nonlinear calibration problems with constant variance. We also showed how the boot package can be used for constructing approximate $100(1-\alpha)\%$ calibration intervals, which, for convenience, may be incorporated into a future release of investr. The authors are currently working on extending the package to handle the case of heteroscedastic errors, random coefficient models (e.g., objects of class "lme" from the recommended nlme package; Pinheiro et al., 2014), and even multivariate calibration problems (e.g., objects of class "mlm").

6 Acknowledgements

The authors would like to thank two anonymous reviewers and the Editor for their helpful comments and suggestions.

CRAN packages used

investr, MASS, drc, car, boot



Footnotes

  1. The views expressed in this paper are those of the authors, and do not reflect the official policy or position of the United States Air Force, Navy, Department of Defense, or the U.S. Government.

References

D. M. Bates and D. G. Watts. Nonlinear regression analysis and its applications. New York: John Wiley & Sons, 1988.
P. J. Brown. Measurement, regression, and calibration. Clarendon Press, 1993.
A. Canty and B. Ripley. boot: Bootstrap R (S-PLUS) functions. 2014. URL http://CRAN.R-project.org/package=boot. R package version 1.3-11.
A. C. Davison and D. V. Hinkley. Bootstrap methods and their applications. Cambridge: Cambridge University Press, 1997.
R. Dorfman. A note on the δ-method for finding variance formulae. The Biometric Bulletin, 1: 129–137, 1938.
N. R. Draper and H. Smith. Applied regression analysis. New York: John Wiley & Sons, 1998.
B. Efron. Bootstrap methods: Another look at the jackknife. Annals of Statistics, 7(1): 1–26, 1979.
C. Eisenhart. The interpretation of certain regression methods and their use in biological and industrial research. Annals of Mathematical Statistics, 10(2): 162–186, 1939.
E. C. Fieller. Some problems in interval estimation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 16: 175–185, 1954.
J. Fox and S. Weisberg. An R companion to applied regression. Second edition. Thousand Oaks, CA: Sage, 2011. URL http://socserv.socsci.mcmaster.ca/jfox/Books/Companion.
F. A. Graybill. Theory and application of the linear model. Pacific Grove, CA: Duxbury, 1976.
F. A. Graybill and H. K. Iyer. Regression analysis: Concepts and applications. Belmont, CA: Duxbury Press, 1994.
B. M. Greenwell. investr: Functions for inverse estimation with linear and nonlinear regression models in R. 2014. URL https://github.com/w108bmg/investr. R package version 1.0.2.
S. Huet. Statistical tools for nonlinear regression: A practical guide with S-PLUS and R examples. Springer, 2004.
G. Jones and D. M. Rocke. Bootstrapping in controlled calibration experiments. Technometrics, 41(3): 224–233, 1999.
R. G. Miller. Simultaneous statistical inference. Springer-Verlag, 1981.
C. Osborne. Calibration: A review. International Statistical Review, 59(3): 309–336, 1991.
J. Pinheiro, D. Bates, S. DebRoy, D. Sarkar and R Core Team. nlme: Linear and nonlinear mixed effects models. 2014. URL http://CRAN.R-project.org/package=nlme. R package version 3.1-117.
A. Racine-Poon. A Bayesian approach to nonlinear calibration problems. Journal of the American Statistical Association, 83(403): 650–656, 1988.
C. Ritz and J. C. Streibig. Bioassay analysis using R. Journal of Statistical Software, 12(5): 1–22, 2005. URL http://www.jstatsoft.org/v12/i05/.
J. R. Schwenke and G. A. Milliken. On the calibration problem extended to nonlinear models. Biometrics, 47(2): 563–574, 1991.
G. A. F. Seber and A. J. Lee. Linear regression analysis. John Wiley & Sons, 2003.
G. A. F. Seber and C. J. Wild. Nonlinear regression. John Wiley & Sons, 2003.
W. N. Venables and B. D. Ripley. Modern applied statistics with S. Fourth edition. New York: Springer, 2002. URL http://www.stats.ox.ac.uk/pub/MASS4.

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Greenwell & Kabban, "investr: An R Package for Inverse Estimation", The R Journal, 2014

BibTeX citation

@article{RJ-2014-009,
  author = {Greenwell, Brandon M. and Kabban, Christine M. Schubert},
  title = {investr: An R Package for Inverse Estimation},
  journal = {The R Journal},
  year = {2014},
  note = {https://rjournal.github.io/},
  volume = {6},
  issue = {1},
  issn = {2073-4859},
  pages = {90-100}
}