Welch’s two-sample \(t\)-test based on least squares (LS) estimators is generally used to test the equality of two normal means when the variances are not equal. However, this test loses its power when the underlying distribution is not normal. In this paper, two different tests are proposed to test the equality of two long-tailed symmetric (LTS) means under heterogeneous variances. Adaptive modified maximum likelihood (AMML) estimators are used in developing the proposed tests since they are highly efficient under the LTS distribution. An R package called RobustBF is given to show the implementation of these tests. Simulated Type I error rates and powers of the proposed tests are also given and compared with Welch’s \(t\)-test based on LS estimators via an extensive Monte Carlo simulation study.
Testing the equality of two population means is one of the most encountered problems in applied sciences. Student’s \(t\)-test, which is uniformly most powerful unbiased, is commonly used under the normality and homogeneity of variances assumptions. The well-known Behrens-Fisher (BF) problem arises when the assumption of homogeneity of variances is not met. This problem can be defined as testing the null hypothesis \[\begin{aligned} \label{1} {H_0}:{\mu _{1}} = {\mu_{2}} \end{aligned} \tag{1}\] when \({Y_{i1}, Y_{i2},..., Y_{in_i}}\) \(\left(i=1,2\right)\) are independent random samples from the \(N\left(\mu_i, \sigma_i^{2}\right)\) distribution. Fisher (1939) endorsed Behrens’ solution to the BF problem by using the fiducial theory. Many researchers have studied this problem. For example, Welch (1938) proposed a test statistic and provided an approximation for its degrees of freedom. It should be noted that the degrees of freedom provided by Welch (1938) can also be obtained by using the Satterthwaite approximation; see Satterthwaite (1946). This is why these degrees of freedom are also known as the Welch-Satterthwaite degrees of freedom in the literature. Wang (1971) calculated the Type I error rates of Welch’s two-sample \(t\)-test and the Aspin-Welch test for different sets of degrees of freedom and nominal significance levels and concluded that Welch’s \(t\)-test could be used in practice with little loss of accuracy. Davenport and Webster (1975) considered the test suggested by Fairfield Smith (1936) for the BF problem and compared its Type I error rates with those of Mehta and Srinivasan (1970). They concluded that this test is a very practical solution to the BF problem besides being stable in regard to size and having adequate power. Best and Rayner (1987) calculated the Wald, score, and likelihood ratio statistics and showed that the test based on the Wald statistic has the same asymptotic properties as Welch’s \(t\)-test. Kim and Cohen (1998) presented a review of basic concepts and applications concerning the BF problem under the fiducial, Bayesian, and frequentist approaches. Singh et al. (2002) developed a test based on the Jackknife estimator of the common population variance and compared the powers of the proposed test with those of Welch’s \(t\)-test and the Cochran and Cox (1957) test. According to the results of their study, the proposed test is more powerful than the Cochran-Cox test in all cases, while it is preferable to Welch’s \(t\)-test in some cases. Chang and Pal (2008) developed a computational approach test (CAT) for the BF problem and compared it with Welch’s \(t\)-test, the Cochran-Cox test, the generalized \(p\)-value test, and the Singh–Saxena–Srivastava test under the normal and \(t\)-models. They found that Welch’s \(t\)-test, the Cochran-Cox test, and the CAT are robust under the heavier-tailed \(t\)-models besides having similar size and power.
When the literature is examined, it can be seen that Welch’s \(t\)-test performs very well compared to other tests in the case of heteroscedasticity and unequal sample sizes under normality. However, the power of Welch’s \(t\)-test decreases very rapidly when the underlying distribution is long-tailed symmetric (LTS) since the least squares (LS) estimators are not robust to violations of normality. It is known that non-normal distributions are common in real-life problems. Yuen (1974) proposed a two-sample trimmed \(t\)-test and compared its performance with that of Welch’s \(t\)-test for both normal and long-tailed samples. Tiku and Singh (1981) proposed Welch-type statistics based on modified maximum likelihood (MML) estimators and showed that the proposed test is more powerful than Yuen’s (1974) trimmed \(t\)-test. In addition, Tiku and Singh (1981) investigated the analogous test based on the robust bisquare estimators BS82 and showed that this test statistic gives misleading Type I errors.
In this study, a robust version of Welch’s \(t\)-test for the BF problem is proposed when the underlying distribution is LTS. A second test using the fiducial model, which is a special case of a functional model given by Dawid and Stone (1982), is also proposed; see Fisher (1933, 1935) for more information about the fiducial approach. The reason for including a robust version of the fiducial-based test in this study is to assess its performance in the context of the BF problem and to make comprehensive comparisons with its rivals (i.e., the robust version of Welch’s \(t\)-test and the traditional Welch’s \(t\)-test). Both of the proposed tests are based on adaptive modified maximum likelihood (AMML) estimators; see Tiku and Sürücü (2009) and Dönmez (2010). To the best of our knowledge, this is the first study using AMML estimators for testing the equality of two LTS means under heterogeneous variances. These estimators are efficient and easy to compute for LTS samples; see Tiku and Sürücü (2009).
The R packages stats by the R Core Team and asht by Fay (2020) include Welch’s \(t\)-test based on LS estimators and the BF test under normality, respectively. WRS2 by Mair and Wilcox (2021) contains Yuen’s test based on trimmed sample means. Different from these earlier works, we provide an R package, RobustBF, which computes the values of the proposed test statistics and/or the corresponding \(p\)-values.
The rest of this study is organized as follows. Firstly, the AMML estimators are given. Secondly, the robust Welch test and the robust test based on the fiducial approach are developed. Thirdly, an extensive Monte Carlo simulation study is conducted to compare the performances of the proposed tests with the traditional Welch’s \(t\)-test based on LS estimators. The proposed tests are then applied to a real data set via the RobustBF package. The paper is finalized with some concluding remarks.
Assume that \({Y_{i1}, Y_{i2},..., Y_{in_i}}\) \(\left(i=1,2\right)\) are independent random samples from the \(LTS\left(p, \mu_i, \sigma_i\right)\) distribution with density \[\begin{aligned} f\left(y\right)= \frac{1}{\sqrt{k}\beta\left(1/2,p-1/2\right)\sigma}\left(1+\frac{\left(y-\mu\right)^2}{k\sigma^2}\right)^{-p}, \quad -\infty < y < \infty; -\infty < \mu < \infty; \sigma>0; p\geq 2, \label{2} \end{aligned} \tag{2}\] where \(\mu\) is the location parameter, \(\sigma\) is the scale parameter, \(p\) is the shape parameter, and \(k=2p-3\) (Tiku and Kumra 1985). It should be noted that \(E\left(y\right)=\mu\), \(V\left(y\right)=\sigma^2\), and \(t=\sqrt{\left(\nu/k\right)}\left(\left(y-\mu\right)/\sigma\right)\) has Student’s \(t\) distribution with \(\nu=2p-1\) degrees of freedom.
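This relation to Student’s \(t\) also gives a convenient way to generate LTS samples in R. The following minimal sketch (the function name rlts is ours, purely for illustration, and is not part of RobustBF) draws scaled \(t\) variates and checks the first two moments empirically:

# Sketch: generate LTS(p, mu, sigma) samples via y = mu + sigma*sqrt(k/nu)*t,
# where t ~ t_nu, k = 2p - 3 and nu = 2p - 1 (see the text above)
rlts <- function(n, p, mu = 0, sigma = 1) {
  k  <- 2 * p - 3
  nu <- 2 * p - 1
  mu + sigma * sqrt(k / nu) * rt(n, df = nu)
}

set.seed(1)
y <- rlts(1e5, p = 3.5, mu = 2, sigma = 1.5)
c(mean(y), var(y))   # approximately mu = 2 and sigma^2 = 2.25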
The log-likelihood \((\ln L)\) function is given by \[\begin{aligned} \ln L= -N\ln\left(\sqrt{k}\beta\left(1/2, p-1/2\right)\right)-\sum\limits_{i=1}^{2}n_i\ln\left(\sigma_i\right) - p\sum\limits_{i=1}^{2}\sum\limits_{j=1}^{n_i}\ln\left(1+\frac{\left(y_{ij}-\mu_{i}\right)^2}{k\sigma_{i}^2}\right), \label{3} \end{aligned} \tag{3}\] where \(N=n_1+n_2\). Then, the likelihood equations are obtained as follows \[\begin{aligned} \frac{\partial \ln L}{\partial \mu_i}=\frac{2p}{k\sigma_i}\sum\limits_{j=1}^{n_i}g\left(z_{ij}\right)=0 \label{4} \end{aligned} \tag{4}\]
\[\begin{aligned} \frac{\partial \ln L}{\partial \sigma_i}= -\frac{n_i}{\sigma_i}+\frac{2p}{k\sigma_i}\sum\limits_{j=1}^{n_i}z_{ij}g\left(z_{ij}\right)=0 \label{5} \end{aligned} \tag{5}\] where \[\begin{aligned} g\left(z_{ij}\right)=\frac{z_{ij}}{1+\left(1/k\right)z_{ij}^2} \quad \text{and} \quad z_{ij}=\frac{y_{ij}-\mu_i}{\sigma_i}. \label{6} \end{aligned} \tag{6}\]
By solving the above likelihood equations, (4) and (5), simultaneously, the maximum likelihood (ML) estimators of the parameters \(\mu_i\) and \(\sigma_i\) are obtained. However, these equations involve nonlinear functions of the parameters, and so the ML estimators cannot be obtained explicitly. Hence, numerical methods can be used to solve these equations. However, numerical methods may cause convergence problems such as non-convergence of iterations, convergence to wrong roots, or multiple roots (Puthenpura and Sinha 1986; Vaughan 1992). The MML methodology proposed by Tiku (1967, 1968) overcomes these problems by providing explicit solutions to the likelihood equations. In the MML methodology, firstly, the standardized observations are ordered in ascending order, i.e., \(z_{i\left(1\right)} \leq z_{i\left(2\right)} \leq ...\leq z_{i\left(n_i\right)}\). Then, the likelihood equations in (4) and (5) are rewritten in terms of \(z_{i\left(j\right)}\) and \(g(z_{i\left(j\right)})\) \(\left(i=1,2; j=1,2,...,n_i\right)\) as shown in (7) and (8) since summation is invariant to ordering, i.e., \(\sum\limits_{j=1}^{n_i}z_{i\left(j\right)}=\sum\limits_{j=1}^{n_i}z_{ij}\). \[\begin{aligned} \frac{\partial \ln L}{\partial \mu_i}=\frac{2p}{k\sigma_i}\sum\limits_{j=1}^{n_i}g\left(z_{i\left(j\right)}\right)=0 \label{7} \end{aligned} \tag{7}\]
\[\begin{aligned} \frac{\partial \ln L}{\partial \sigma_i}= -\frac{n_i}{\sigma_i}+\frac{2p}{k\sigma_i}\sum\limits_{j=1}^{n_i}z_{i\left(j\right)}g\left(z_{i\left(j\right)}\right)=0. \label{8} \end{aligned} \tag{8}\] Here, \(z_{i\left(j\right)}=\frac{y_{i\left(j\right)}-\mu_i}{\sigma_i}\) and \(g(z_{i\left(j\right)})=\frac{z_{i\left(j\right)}}{1+\left(1/k\right)z_{i\left(j\right)}^2}\). The nonlinear function \(g(z_{i\left(j\right)})\) is linearized utilizing the first two terms of the Taylor series expansion around the expected values of the ordered statistics \(E(z_{i(j)})=t_{i(j)}\) as follows \[\begin{aligned} g\left(z_{i\left(j\right)}\right)\cong\alpha_{ij}+\beta_{ij}z_{i\left(j\right)}, \label{9} \end{aligned} \tag{9}\] where \[\begin{aligned} \alpha_{ij}=\frac{\left(2/k\right)t_{i\left(j\right)}^3}{\left(1+\left(1/k\right)t_{i\left(j\right)}^2\right)^2}\quad \text{and} \quad \beta_{ij}=\frac{1-\left(1/k\right)t_{i\left(j\right)}^2}{\left(1+\left(1/k\right)t_{i\left(j\right)}^2\right)^2}. \label{10} \end{aligned} \tag{10}\] Since \(t_{i\left(j\right)}\) values cannot be obtained exactly, approximate values of \(t_{i\left(j\right)}\) which do not affect the efficiencies of the resulting estimators are used, \[\begin{aligned} \int_{-\infty}^{t_{i\left(j\right)}}f\left(z\right)dz=\frac{j}{n_i+1},\quad i=1,2; j=1,2,...,n_i. \label{11} \end{aligned} \tag{11}\] Secondly, modified likelihood equations are obtained by inserting the approximation (9) into Eqs. (7) and (8) \[\begin{aligned} \frac{\partial \ln L^{*}}{\partial \mu_i}=\frac{2p}{k\sigma_i}\sum\limits_{j=1}^{n_i}\left(\alpha_{ij}+\beta_{ij}z_{i\left(j\right)}\right)=0 \label{12} \end{aligned} \tag{12}\]
\[\begin{aligned} \frac{\partial \ln L^{*}}{\partial \sigma_i}= -\frac{n_i}{\sigma_i}+\frac{2p}{k\sigma_i}\sum\limits_{j=1}^{n_i}z_{i(j)}\left(\alpha_{ij}+\beta_{ij}z_{i\left(j\right)}\right)=0. \label{13} \end{aligned} \tag{13}\] Finally, MML estimators of \({\mu_i}\) and \({\sigma_i}\) are found by solving Eqs. (12) and (13). They are given as follows \[\begin{aligned} \hat{\mu}_i=\frac{\sum\limits_{j=1}^{n_i}\beta_{ij}y_{i(j)}}{m_i}\quad \text{and} \quad \hat{\sigma}_i=\frac{B_i+\sqrt{B_i^2+4n_iC_i}}{2\sqrt{n_i\left(n_i-1\right)}}, \label{14} \end{aligned} \tag{14}\] where \[\begin{aligned} B_i=\frac{2p}{k}\sum\limits_{j=1}^{n_i}\alpha_{ij}\left(y_{i\left(j\right)}-\hat{\mu}_i\right), \quad C_i=\frac{2p}{k}\sum\limits_{j=1}^{n_i}\beta_{ij}\left(y_{i(j)}-\hat{\mu}_i\right)^2 \quad \text{and} \quad m_i=\sum\limits_{j=1}^{n_i}\beta_{ij}; \label{15} \end{aligned} \tag{15}\] see Tiku and Suresh (1992). The asymptotic properties of the MML estimators \({\hat \mu} _i\) and \({\hat \sigma}_i\) can be demonstrated with the help of the following theorems.
Theorem 1. \(\hat{\mu}_i\) is the minimum variance bound (MVB) estimator and is asymptotically normally distributed with mean \({\mu_i}\) and variance \(\sigma_i^2/M_i\) \(\left(M_i=2pm_i/k\right)\).
Theorem 2. \(\left(n_i-1\right)\hat{\sigma}_{i}^{2}/{\sigma}_{i}^{2}\) is distributed as chi-square (more accurately a multiple of chi-square) with \(\left(n_i-1\right)\) degrees of freedom.
For the proofs of these theorems, see, e.g., Şenoğlu and Tiku (2001) and Güven et al. (2019).
MML estimators have the same asymptotic properties as ML estimators and are as efficient as ML estimators, even for small samples. They are easy to compute and robust to outliers.
It should be noted that the shape parameter \(p\) is assumed to be known in the MML methodology. However, in some real-life applications, it may be possible to assume that the data come from a certain type of distribution, namely the LTS distribution, without being able to specify the value of the shape parameter. Hence, Tiku and Sürücü (2009) proposed the AMML methodology, which is a new version of the MML methodology; see Dönmez (2010) and Acıtaş et al. (2020, 2021). This methodology relaxes the assumption of a known shape parameter. AMML estimators are computed in two iterations. In the first iteration, initial \(t_{ij}\) values are calculated from the sample data, as shown below \[\begin{aligned} t_{ij}=\left(y_{ij}-T_{0i}\right)/S_{0i} \quad i=1,2; j=1,...,n_i. \label{16} \end{aligned} \tag{16}\] Here, \(T_{0i}\) and \(S_{0i}\) are the initial estimates of \(\mu_i\) and \(\sigma_i\), and they are calculated as \[\begin{aligned} T_{0i}=med\left\{{y_{ij}}\right\}\quad \text{and} \quad S_{0i}=1.483med\left\{\mid{y_{ij}-T_{0i}\mid}\right\} \quad i=1,2; j=1,...,n_i, \label{17} \end{aligned} \tag{17}\] respectively. Using the \(t_{ij}\) values in (16), the \({\alpha}_{ij}\) and \({\beta}_{ij}\) coefficients are calculated as follows \[\begin{aligned} {\alpha}_{ij}=\frac{\left(1/k\right){t}_{ij}}{1+\left(1/k\right){t}_{ij}^{2}} \quad \text{and} \quad {\beta}_{ij}=\frac{1}{1+\left(1/k\right){t}_{ij}^{2}}. \label{18} \end{aligned} \tag{18}\] Then, the AMML estimates of the parameters \(\mu_i\) and \(\sigma_i\) are obtained using Eq. (14) and the \({\alpha}_{ij}\) and \({\beta}_{ij}\) values given in Eq. (18). To distinguish these estimates from the MML estimates, they are represented by \(\hat{\mu}_{i(AMML)}\) and \(\hat{\sigma}_{i(AMML)}\) in the rest of the paper. In the second iteration, the \(t_{ij}\) values are revised as follows \[\begin{aligned} t_{ij}=\left(y_{ij}-\hat{\mu}_{i\left(AMML\right)}\right)/\hat{\sigma}_{i\left(AMML\right)} \quad i=1,2; j=1,...,n_i \label{19} \end{aligned} \tag{19}\] and the \({\alpha}_{ij}\) and \({\beta}_{ij}\) values are recalculated using the equalities in (18) for these \(t_{ij}\) values. Then, the final AMML estimates of \(\mu_i\) and \(\sigma_i\) are obtained.
It should be noted that in AMML methodology, \(y_{ij}\) observations are used rather than the ordered \(y_{i(j)}\) observations since \(t_{ij}\) values are calculated from the sample observations. In addition, the shape parameter \(p\) is taken to be 16.5 in the calculations of \(\alpha_{ij}\) and \(\beta_{ij}\) coefficients since this value makes AMML estimators efficient for normal and near normal distributions. It also makes them robust to mild outliers. The reason why we use AMML methodology in the proposed tests is that it provides the same asymptotic properties as MML methodology and, as mentioned before, relaxes the assumption of known shape parameter \(p\).
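To make the two iterations described above concrete, the following is a minimal sketch of the AMML computation of \(\hat{\mu}_{i(AMML)}\) and \(\hat{\sigma}_{i(AMML)}\) for a single sample. The helper name amml is ours and is intended only for illustration; it is not the internal code of the RobustBF package. The shape parameter is fixed at \(p=16.5\), as described in the text.

# Sketch of the two-step AMML estimation for one sample (illustrative only)
amml <- function(y, p = 16.5) {
  k <- 2 * p - 3
  n <- length(y)
  mu    <- median(y)                                # T0, Eq. (17)
  sigma <- 1.483 * median(abs(y - mu))              # S0, Eq. (17)
  for (iter in 1:2) {
    t     <- (y - mu) / sigma                       # Eqs. (16) and (19)
    alpha <- (t / k) / (1 + t^2 / k)                # Eq. (18)
    beta  <- 1 / (1 + t^2 / k)
    m     <- sum(beta)
    mu    <- sum(beta * y) / m                      # Eq. (14)
    B     <- (2 * p / k) * sum(alpha * (y - mu))    # Eq. (15)
    C     <- (2 * p / k) * sum(beta * (y - mu)^2)
    sigma <- (B + sqrt(B^2 + 4 * n * C)) / (2 * sqrt(n * (n - 1)))
  }
  list(mu = mu, sigma = sigma, M = 2 * p * m / k)   # M_i as in Theorem 1
}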
In this section, we propose two different tests for testing the equality of two LTS means.
In this subsection, we briefly introduce Welch’s \(t\)-test proposed by Welch (1938) under normal theory and then give its robust version. Welch’s \(t\)-test based on LS estimators is defined as \(W=\left\{(\bar{x}_1-\bar{x}_2)-\left(\mu_1-\mu_2\right)\right\}/\sqrt{\left\{\left(s_1^{2}/n_1\right)+\left(s_2^{2}/n_2\right)\right\}}\). It is known that \(W\) is approximately distributed as Student’s \(t\) with degrees of freedom \[\begin{aligned} f=\frac{1}{\left\{c^2/\left(n_1-1\right)+\left(1-c^2\right)/\left(n_2-1\right)\right\}}, \label{20} \end{aligned} \tag{20}\] where \(c=\left(s_1^{2}/n_1\right)/\left\{\left(s_1^{2}/n_1\right)+\left(s_2^{2}/n_2\right)\right\}\). Here, \(\bar{x}_i\) and \(s_i^{2}\) \((i=1,2)\) are the sample means and sample variances, respectively. The value of the \(W\) test can be obtained using the t.test function available in R.
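For instance, given two numeric sample vectors y1 and y2, the \(W\) statistic, its Welch-Satterthwaite degrees of freedom, and the corresponding \(p\)-value are printed by typing

> t.test(y1, y2, var.equal = FALSE)

where var.equal = FALSE (the default) requests the Welch version rather than the pooled-variance Student’s \(t\)-test.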
In this study, we propose the following test statistic based on AMML estimators as a robust alternative to Welch’s \(t\)-test \[\begin{aligned} {RW} = \frac{\left({\hat{\mu}_{1\left(AMML\right)}}-{\hat{\mu}_{2\left(AMML\right)}}\right)-\left(\mu_1-\mu_2\right)}{\sqrt{\left(\hat{\sigma}_{1\left(AMML\right)}^2/M_1\right)+ \left(\hat{\sigma}_{2\left(AMML\right)}^2/M_2\right)}}. \label{21} \end{aligned} \tag{21}\] As we shall see at the end of this section, \(RW\) is approximately distributed as Student’s \(t\) under the null hypothesis, based upon Theorems 1 and 2. The approximate degrees of freedom for this test are obtained using the Satterthwaite (1946) approximation as follows.
Let \[\begin{aligned} {c_1} = \frac{\sigma_{1}^{2}}{\left(n_1-1\right)M_1}, \quad \quad {c_2} = \frac{\sigma_{2}^{2}}{\left(n_2-1\right)M_2} \label{22} \end{aligned} \tag{22}\]
\[\begin{aligned} Q_1=\frac{\left({n_1}-1\right)\hat \sigma _{1\left(AMML\right)}^2}{\sigma_1^2}\quad \text{and} \quad Q_2=\frac{({n_2}-1)\hat \sigma_{2\left(AMML\right)}^2}{\sigma_2^2} \label{23} \end{aligned} \tag{23}\] where \(Q_1\) and \(Q_2\) are independent chi-square random variables with degrees of freedom \({\left(n_1-1\right)}\) and \({\left(n_2-1\right)}\), respectively (see Theorem 2). If the linear combination of \(Q_1\) and \(Q_2\) is written as \[\begin{aligned} Q={c_1}Q_1+{c_2}Q_2={\frac{{\hat \sigma_{1\left(AMML\right)}^2}}{{M_1}} + \frac{{\hat \sigma_{2\left(AMML\right)}^2}}{{{M_2}}}} \label{24}, \end{aligned} \tag{24}\] then \({\nu Q}/{E\left(Q\right)}\) has an approximate \({\chi ^2}\) distribution with the following degrees of freedom \[\begin{aligned} \begin{split} \nu &= \frac{\Big[{c_1}Q_1+{c_2}Q_2\Big]^2}{\Big(\Big[{c_1}Q_1\Big]^2/\nu_1\Big)+\Big(\Big[{c_2}Q_2\Big]^2/\nu_2\Big)}\\\\ &= \frac{\left(\left(\hat \sigma_{1(AMML)}^2/M_1\right)+\left(\hat \sigma_{2\left(AMML\right)}^2/M_2\right)\right)^2}{{\left(\hat \sigma_{1\left(AMML\right)}^2/M_1\right)^2/\left(n_1-1\right)}+{\left(\hat \sigma_{2\left(AMML\right)}^2/M_2\right)^2/\left(n_2-1\right)}}. \end{split} \end{aligned}\] Here, \[\begin{aligned} \nu_1=n_1-1, \quad \nu_2=n_2-1 \quad \text{and} \quad E\left(Q\right) = \frac{{\sigma _1^2}}{{{M_1}}} + \frac{{\sigma _2^2}}{{{M_2}}}. \end{aligned}\] \(RW\) in (21) can be rewritten as follows \[\begin{aligned} RW=\frac{\left(\left({\hat{\mu}_{1\left(AMML\right)}}-{\hat{\mu}_{2\left(AMML\right)}}\right)-\left(\mu_1-\mu_2\right)\right)/\sqrt{\left(\sigma _1^2/M_1\right)+\left(\sigma _2^2/M_2\right)}}{\sqrt{\left(\hat{\sigma}_{1\left(AMML\right)}^2/M_1\right)+\left(\hat{\sigma}_{2\left(AMML\right)}^2/M_2\right)}/\sqrt{\left(\sigma _1^2/M_1\right)+\left(\sigma _2^2/M_2\right)}}. \end{aligned}\] Since this expression is equivalent to \[\begin{aligned} \frac{Z}{\sqrt {Q}/\sqrt{E\left(Q\right)}}, \end{aligned}\] it is obvious that \(RW\) is approximately distributed as Student’s \(t\) with \({\nu}\) degrees of freedom. Here, \(Z\sim N(0,1)\) (see Theorem 1) and \({\sqrt {Q}/\sqrt{E\left(Q\right)}}\sim \sqrt{\chi _\nu ^2/\nu}\).
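Putting the pieces together, a minimal sketch of the \(RW\) statistic and its degrees of freedom \(\nu\), based on the illustrative amml() helper introduced earlier (not the RW() function of the RobustBF package), is

# Sketch: RW statistic of Eq. (21) under H0 and its Satterthwaite-type
# degrees of freedom nu, with a two-sided p-value from Student's t
rw_test <- function(y1, y2) {
  e1 <- amml(y1); e2 <- amml(y2)
  v1 <- e1$sigma^2 / e1$M                 # estimated variance of mu1_hat
  v2 <- e2$sigma^2 / e2$M
  RW <- (e1$mu - e2$mu) / sqrt(v1 + v2)
  nu <- (v1 + v2)^2 /
    (v1^2 / (length(y1) - 1) + v2^2 / (length(y2) - 1))
  c(RW = RW, df = nu, p.value = 2 * pt(-abs(RW), df = nu))
}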
To verify the null distribution of the \(RW\), the probabilities \[\begin{aligned} p_1=Pr\left(\lvert RW \rvert \geq t_{{1-\alpha/2},\nu}\right) \label{29} \end{aligned} \tag{25}\] are simulated from 10,000 Monte Carlo runs for various combinations of the sample sizes \(n_1\) and \(n_2\). The results are demonstrated in Table 1. Here, \(\nu\) is the degrees of freedom for \(RW\).
In this section, a fiducial-based test is proposed using the concept of fiducial inference and the pivotal model; see Fisher (1933, 1935) and Dawid and Stone (1982). Denote the \(RW\) test statistic based on the observed values as \[\begin{aligned} {RW^{*}} = \frac{{\left(\hat{\mu}_{1\left(AMML\right)}^{*}-\hat{\mu}_{2\left(AMML\right)}^{*}\right)}-{\left(\mu_1-\mu_2\right)}}{\sqrt{\frac{{{\hat \sigma }_{1\left(AMML\right)}^{2*}}}{M_1}+\frac{{{\hat \sigma }_{2\left(AMML\right)}^{2*}}}{M_2}}}. \label{30} \end{aligned} \tag{26}\] First, the fiducial distribution of \(RW^{*}\) is derived using pivotal quantities and the fiducial distribution of the parameters of interest. Then, the corresponding \(p\)-value is obtained. Here, \(\left(\hat{\mu}_{i(AMML)}^{*}, \hat {\sigma}_{i(AMML)}^{2*}\right)\) are the observed values of \(\left(\hat{\mu}_{i(AMML)}, \hat {\sigma}_{i(AMML)}^{2}\right)\) \(\left(i=1,2\right)\). Let \[\begin{aligned} Z_i=\frac{\hat{\mu}_{i\left(AMML\right)}-\mu_i}{{\sigma}_{i}/\sqrt{M_i}} \end{aligned}\] and \[\begin{aligned} Q_i=\frac{\left(n_i-1\right)\hat{\sigma}_{i\left(AMML\right)}^{2}}{\sigma_{i}^{2}} \end{aligned}\] be mutually independent pivotal quantities. They have asymptotically \(N(0,1)\) and \(\chi _{({n_i} - 1)}^2\) distributions, respectively (see Theorems 1 and 2). Using the pivotal quantities \(Z_i\) and \(Q_i\), the data generating equations are obtained as given below \[\begin{aligned} \hat{\mu}_{i\left(AMML\right)}=\mu_i+\left({\sigma}_{i}/\sqrt{M_i}\right)Z_i \label{33} \end{aligned} \tag{27}\] and \[\begin{aligned} \hat{\sigma}_{i\left(AMML\right)}^{2}={\sigma}_{i}^{2}Q_i/\left(n_i-1\right). \label{34} \end{aligned} \tag{28}\] Given \(\left(\hat{\mu}_{i\left(AMML\right)}^{*}, \hat {\sigma}_{i\left(AMML\right)}^{2*}\right)\), Eqs. (27) and (28) are expressed as follows \[\begin{aligned} \hat{\mu}_{i\left(AMML\right)}^{*}=\mu_i+\left({\sigma}_{i}/\sqrt{M_i}\right)z_i \label{35} \end{aligned} \tag{29}\] and \[\begin{aligned} \hat{\sigma}_{i\left(AMML\right)}^{2*}={\sigma}_{i}^{2}q_i/\left(n_i-1\right). \label{36} \end{aligned} \tag{30}\] Here, \(\left(z_i, q_i\right)\) are the observed values of \(\left(Z_i, Q_i\right)\). Eqs. (29) and (30) have the unique solutions given below \[\begin{aligned} {\mu_i} = {\hat \mu }_{i\left(AMML\right)}^{*} - \frac{z_i}{\sqrt{q_i/\left(n_i-1\right)}}\frac{{{\hat \sigma }_{i\left(AMML\right)}^{*}}}{\sqrt{M_i}} \end{aligned}\] and \[\begin{aligned} {\sigma_{i}^2} = \frac{\left(n_i-1\right)\hat \sigma _{i\left(AMML\right)}^{2*}}{q_i}. \end{aligned}\] Since \(\frac{Z_i}{\sqrt{Q_i/\left(n_i-1\right)}}\) is distributed as a \(t_i\) variable with \((n_i-1)\) degrees of freedom, the fiducial distribution of \(\mu_i\) is the same as that of \[\begin{aligned} T_{\mu _i}^{*} ={\hat \mu_{i\left(AMML\right)}^{*}}-\frac{t_i{\hat \sigma }_{i\left(AMML\right)}^{*}}{\sqrt{M_i}} \end{aligned}\] for given \(\left({\hat \mu }_{i\left(AMML\right)}^{*}, {{\hat \sigma }_{i\left(AMML\right)}^{2*}}\right)\).
Therefore, the fiducial distribution of \({RW^{*}}\) in (26) is derived by utilizing the fiducial distribution of \(\mu_i\) as follows \[\begin{aligned} T_{RF}=\frac{\left(\left({t_1{\hat \sigma }_{1\left(AMML\right)}^{*}}\right)/{\sqrt{M_1}}\right)-\left(\left({t_2{\hat \sigma }_{2\left(AMML\right)}^{*}}\right)/{\sqrt{M_2}}\right)}{\sqrt{\left({\hat \sigma }_{1\left(AMML\right)}^{2*}\right)/{M_1}+\left({\hat \sigma }_{2\left(AMML\right)}^{2*}\right)/{M_2}}}, \label{40} \end{aligned} \tag{31}\] where \(t_1 \sim t_{\left(n_1-1\right)}\) and \(t_2 \sim t_{\left(n_2-1\right)}\). Since \[\begin{aligned} {RW_{0}^{*}} = \frac{{\left(\hat{\mu}_{1\left(AMML\right)}^{*} - \hat{\mu}_{2(AMML)}^{*}\right)}}{\sqrt{\left({\hat \sigma }_{1\left(AMML\right)}^{2*}\right)/{M_1}+\left({\hat \sigma }_{2\left(AMML\right)}^{2*}\right)/{M_2}}} \label{41} \end{aligned} \tag{32}\] is the observed value of \({T_{RF}}\) under \(H_0:\mu_1=\mu_2\), the corresponding \(p\)-value is given by \[\begin{aligned} p=Pr\left(T_{RF} \geq {RW_{0}^{*}} \right). \label{42} \end{aligned} \tag{33}\] An algorithm for calculating the fiducial \(p\)-value in Eq.(33) via Monte Carlo simulation study is given as follows
Algorithm 1

1. For the given data, compute \(\hat{\mu}_{i(AMML)}^{*}\), \(\hat{\sigma}_{i(AMML)}^{*}\) \(\left(i=1,2\right)\) and then \(RW_{0}^{*}\) utilizing Eq. (32).
2. Generate \(t_i\sim t(n_i-1)\), \((i=1,2)\).
3. Compute \(T_{RF}^{2}\) utilizing Eq. (31).
4. Let \(F_l=1\) if \(T_{RF}^{2}> RW_{0}^{2*}\), else \(F_l=0\).
5. Repeat Steps 2-4 \(K\) times \(\left(l=1,2,...,K\right)\).
6. Compute the simulated \(p\)-value using \(p=\frac{1}{K}\sum\limits_{l=1}^{K} F_l\).
It should be noted that the squares of \(T_{RF}\) and \(RW_{0}^{*}\) in Steps 3 and 4 are taken since the alternative hypothesis is two-sided, i.e., \(H_1:\mu_1-\mu_2\neq 0\); see Li et al. (2011).
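A minimal sketch of Algorithm 1, again written in terms of the illustrative amml() helper sketched earlier rather than the RF() function of the RobustBF package, is

# Sketch of Algorithm 1: Monte Carlo fiducial p-value for the RF test
rf_pvalue <- function(y1, y2, K = 5000) {
  e1 <- amml(y1); e2 <- amml(y2)
  s1 <- e1$sigma / sqrt(e1$M)                     # sigma1_hat* / sqrt(M1)
  s2 <- e2$sigma / sqrt(e2$M)
  RW0 <- (e1$mu - e2$mu) / sqrt(s1^2 + s2^2)      # Eq. (32), Step 1
  t1 <- rt(K, df = length(y1) - 1)                # Step 2
  t2 <- rt(K, df = length(y2) - 1)
  TRF <- (t1 * s1 - t2 * s2) / sqrt(s1^2 + s2^2)  # Eq. (31), Step 3
  mean(TRF^2 > RW0^2)                             # Steps 4-6 (two-sided)
}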
In this section, Type I error rates and powers of the proposed tests (\(RW\) and \(RF\)) are compared with those of the \(W\) test under the specified nominal level \(\alpha=0.05.\) The plan of the simulation study is outlined as follows:
We use the following population distributions while generating samples.
| Model | Population 1 | Population 2 |
|---|---|---|
| (a) | \(Cauchy (0,1)\) | \(Cauchy (0,1)\) |
| (b) | \(5 \times Cauchy (0,1)\) | \(Cauchy (0,1)\) |
| (c) | \(Normal(0,3^2)/Uniform(0,1)\) | \(Normal(0,1)/Uniform(0,1)\) |
| (d) | \(0.8Normal(0,4^2)+0.2\frac{Normal(0,4^2)}{Uniform[0,1]}\) | \(0.8Normal(0,1)+0.2\frac{Normal(0,1)}{Uniform[0,1]}\) |
| (e) | \(3t_2\) | \(t_2\) |
| (f) | \(2t_5\) | \(t_5\) |
| (g) | \(Logistic(0,3)\) | \(Logistic(0,1)\) |
| (h) | \(Laplace(0,1)\) | \(Laplace(0,\sqrt{6})\) |
Here, \(t_a\): Student’s \(t\) distribution with \(a\) degrees of freedom.
For each setting, 10,000 different samples of sizes \(n_1\) and \(n_2\) are generated. Sample sizes are taken as \((n_1,n_2)=(6,6)\), \((6,10)\), \((10,10)\), \((10,15)\), \((10,30)\), \((20,20)\), \((20,30)\), \((20,50)\), \((30,50)\), and \((50,50)\) while comparing the Type I error rates and powers of the tests. Simulations are conducted in R.
To compute the Type I error rates of the \(RW\), \(RF\), and \(W\) tests, samples are first generated under the null hypothesis \(H_0\): \(\mu_1=\mu_2\) for given \(\left(n_1,n_2\right)\). Then the AMML and LS estimates of the parameters are calculated. The probability in Eq. (25) gives the Type I error rates of the \(RW\) test. It should be noted that this probability shows how close the distribution of the \(RW\) test is to Student’s \(t\) with degrees of freedom \(\nu\). The \(RF\) test is carried out using Algorithm 1 with K=5,000. The fiducial \(p\)-value for the \(RF\) test is computed in the final step of the mentioned algorithm. This procedure is repeated for each of the 10,000 samples. The proportion of the 10,000 \(p\)-values that are less than the nominal level \(\alpha=0.05\) gives the Type I error rate of the \(RF\) test.
To compute the powers of the tests, similar steps are followed, but a constant \(d\) is added to the observations in the first population. A test is considered preferable if it attains high power while maintaining the prescribed significance level; a sketch of such a simulation is given below.
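For illustration, the following hedged sketch shows how such size and power values could be simulated for the \(RW\) test under Model (f) listed above (Population 1 \(\sim 2t_5\), Population 2 \(\sim t_5\)), using the illustrative rw_test() helper sketched earlier; the shift d = 1.6 is an arbitrary example value, not one taken from the tables.

# Sketch: simulated size (d = 0) and power (d > 0) of the RW test, Model (f)
sim_rw <- function(n1, n2, d = 0, nrep = 10000, alpha = 0.05) {
  mean(replicate(nrep, {
    y1 <- 2 * rt(n1, df = 5) + d   # Population 1: 2*t_5 shifted by d
    y2 <- rt(n2, df = 5)           # Population 2: t_5
    rw_test(y1, y2)["p.value"] < alpha
  }))
}

set.seed(123)
sim_rw(20, 20, d = 0)     # simulated Type I error rate
sim_rw(20, 20, d = 1.6)   # simulated power at shift d = 1.6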
The results of the Monte Carlo simulation study are given in Tables 1-9. The Type I error rates and power of the tests are given in Table 1 and Tables 2-9, respectively.
Numerical results of Table 1 can be summarized for Models (a)-(h) as follows
The numerical results of Tables 2-9 can be summarized as follows. It should be noted that the first line of Tables 2-9, that is, \(d=0.00\) presents simulated Type I error rates of the tests.
Overall, the \(RW\) test can be recommended for testing the equality of two LTS means under the assumption of heterogeneous variances since it has the best performance with respect to size and power. Although the performance of the \(RF\) test is not as good as the \(RW\) test, it has better performance than the traditional Welch’s \(t\)-test.
In the RobustBF package, we show the implementation of the proposed tests (\(RW\) and \(RF\)), based on AMML estimators, and the \(W\) test, based on LS estimators, using data representing the values of \(10(y-2.0)\), where \(y\) is the pollution level (measurement of lead) in water samples from two lakes. It has been shown that the long-tailed symmetric distribution provides a plausible model for these data; see Tiku and Akkaya (2004) and the references therein.
To use the RobustBF package, we first install the package and then load it by typing:
> install.packages("RobustBF")
> library(RobustBF)
respectively. Next, the pollution level data for each lake (Lake 1 and Lake 2) are entered as vectors, as shown below
y1 <- c(-1.48, 1.25, -0.51, 0.46, 0.60, -4.27, 0.63, -0.14, -0.38, 1.28,
        0.93, 0.51, 1.11, -0.17, -0.79, -1.02, -0.91, 0.10, 0.41, 1.11)
y2 <- c(1.32, 1.81, -0.54, 2.68, 2.27, 2.70, 0.78, -4.62, 1.88, 0.86,
        2.86, 0.47, -0.42, 0.16, 0.69, 0.78, 1.72, 1.57, 2.14, 1.62)
The value of the \(RW\) test statistic, its degrees of freedom, the corresponding \(p\)-value, the AMML estimates of the location parameters (\(\hat{\mu}_{1(AMML)}\), \(\hat{\mu}_{2(AMML)}\)), and the AMML estimates of the scale parameters (\(\hat{\sigma}_{1(AMML)}\), \(\hat{\sigma}_{2(AMML)}\)) are obtained using the function
> RW(y1,y2)
The \(p\)-value and AMML estimates of the location and scale parameters are given for the \(RF\) test by using the function
> RF(y1,y2,iter=5000)
It should be noted that the \(p\)-value for the \(RF\) test is obtained using a computational approach, and it is based on the replication number in Algorithm 1, denoted as iter in the RF function. When the above-mentioned functions in the RobustBF package are performed, the following results are obtained
> RW(y1,y2)
Robust Welch's Two Sample t-Test
data: y1 and y2
RW = -3.1602, df = 36.892, p-value = 0.0031
alternative hypothesis: true difference in means is not equal to 0
sample estimates:
mean of y1 mean of y2 sd of y1 sd of y2
0.0626 1.2391 1.0861 1.2876
> RF(y1,y2,iter=5000)
Robust Fiducial Based Test

data: y1 and y2
p-value = 0.0032
alternative hypothesis: true difference in means is not equal to 0
sample estimates:
mean of y1 mean of y2 sd of y1 sd of y2
0.0626 1.2391 1.0861 1.2876
We also use the t.test function in R to test the null hypothesis \(H_0\):\(\mu_1=\mu_2\) and obtain its \(p\)-value as 0.0243. As can be seen from these results, the \(RW\), \(RF\), and \(W\) tests all reject the null hypothesis at the \(\alpha=0.05\) significance level since the \(p\)-values corresponding to these tests are all less than 0.05. However, the \(p\)-values for the \(RW\) and \(RF\) tests are much smaller than the one obtained for \(W\). The results of the \(RW\) and \(RF\) tests are more reliable since the AMML estimates of \(\sigma_1\) and \(\sigma_2\) (\(\hat{\sigma}_{1(AMML)}=1.0861\), \(\hat{\sigma}_{2(AMML)}=1.2876\)) are less than the corresponding LS estimates (\(\hat{\sigma}_{1(LS)}=1.2819\), \(\hat{\sigma}_{2(LS)}=1.6542\)). It should be noted that the \(RW\) and \(RF\) tests reject the null hypothesis while \(W\) fails to reject it at the \(\alpha=0.01\) significance level. These results are in agreement with the simulation results in the context of long-tailed symmetric distributions.
Reviewing the literature shows that comparing two means is a commonly encountered problem, especially in applied sciences, when the usual normality and homogeneity of variances assumptions are violated. For this reason, in this study, we present the RobustBF package and propose the \(RW\) and \(RF\) tests for testing the equality of two LTS means when the variances are unknown and arbitrary. The first test included in the package is a robust version of Welch’s \(t\)-test, and the other one is a robust fiducial-based test. Both proposed tests are based on AMML estimators. We also use the t.test function available in R to compare the proposed tests with Welch’s \(t\)-test in terms of Type I error rates and powers. Examining the results of the simulation study reveals that the Type I error rates of the \(RW\) test are closer to the nominal level in general. Therefore, the \(RW\) test verifies the obtained null distribution for long-tailed symmetric samples. It is followed by the \(RF\) test, which does not require knowledge of the sampling distribution of the test statistic. The \(W\) test appears to be conservative except for the \(t_5\), Logistic, and Laplace distributions. The \(RW\) test shows the best power performance among the tests considered, besides being robust under the contamination model, for the scenarios studied here. Therefore, the proposed \(RW\) test can be recommended for testing the equality of two LTS means under heterogeneity of variances. The \(W\) test performs poorly in almost all cases. To the best of our knowledge, the proposed tests presented in the RobustBF package are not available in any other R tool.
Model (a) | Model (b) | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
\(n_1\) | \(n_2\) | \(RW\) | \(RF\) | \(W\) | \(RW\) | \(RF\) | \(W\) | ||||
6 | 6 | 0.023 | 0.014 | 0.015 | 0.031 | 0.025 | 0.019 | ||||
6 | 10 | 0.026 | 0.016 | 0.017 | 0.033 | 0.027 | 0.020 | ||||
10 | 10 | 0.025 | 0.020 | 0.018 | 0.031 | 0.027 | 0.020 | ||||
10 | 15 | 0.023 | 0.016 | 0.017 | 0.029 | 0.027 | 0.020 | ||||
10 | 30 | 0.030 | 0.025 | 0.020 | 0.029 | 0.028 | 0.020 | ||||
20 | 20 | 0.024 | 0.022 | 0.020 | 0.028 | 0.026 | 0.021 | ||||
20 | 30 | 0.028 | 0.025 | 0.024 | 0.027 | 0.026 | 0.022 | ||||
20 | 50 | 0.026 | 0.024 | 0.021 | 0.028 | 0.027 | 0.020 | ||||
30 | 50 | 0.027 | 0.025 | 0.021 | 0.026 | 0.026 | 0.022 | ||||
50 | 50 | 0.030 | 0.028 | 0.020 | 0.027 | 0.026 | 0.020 | ||||
Model (c) | Model (d) | ||||||||||
\(n_1\) | \(n_2\) | \(RW\) | \(RF\) | \(W\) | \(RW\) | \(RF\) | \(W\) | ||||
6 | 6 | 0.035 | 0.024 | 0.022 | 0.047 | 0.036 | 0.030 | ||||
6 | 10 | 0.037 | 0.030 | 0.020 | 0.053 | 0.047 | 0.037 | ||||
10 | 10 | 0.030 | 0.025 | 0.020 | 0.046 | 0.042 | 0.033 | ||||
10 | 15 | 0.031 | 0.026 | 0.018 | 0.046 | 0.043 | 0.033 | ||||
10 | 30 | 0.031 | 0.029 | 0.022 | 0.046 | 0.044 | 0.030 | ||||
20 | 20 | 0.030 | 0.027 | 0.019 | 0.042 | 0.040 | 0.027 | ||||
20 | 30 | 0.030 | 0.028 | 0.021 | 0.046 | 0.044 | 0.030 | ||||
20 | 50 | 0.030 | 0.030 | 0.022 | 0.045 | 0.044 | 0.030 | ||||
30 | 50 | 0.031 | 0.029 | 0.020 | 0.042 | 0.041 | 0.026 | ||||
50 | 50 | 0.030 | 0.028 | 0.019 | 0.044 | 0.044 | 0.025 | ||||
Model (e) | Model (f) | ||||||||||
\(n_1\) | \(n_2\) | \(RW\) | \(RF\) | \(W\) | \(RW\) | \(RF\) | \(W\) | ||||
6 | 6 | 0.040 | 0.030 | 0.034 | 0.050 | 0.042 | 0.042 | ||||
6 | 10 | 0.050 | 0.045 | 0.035 | 0.054 | 0.042 | 0.046 | ||||
10 | 10 | 0.044 | 0.038 | 0.038 | 0.049 | 0.043 | 0.044 | ||||
10 | 15 | 0.045 | 0.042 | 0.033 | 0.054 | 0.048 | 0.049 | ||||
10 | 30 | 0.042 | 0.040 | 0.037 | 0.055 | 0.052 | 0.051 | ||||
20 | 20 | 0.044 | 0.042 | 0.028 | 0.054 | 0.049 | 0.047 | ||||
20 | 30 | 0.040 | 0.037 | 0.038 | 0.052 | 0.049 | 0.048 | ||||
20 | 50 | 0.041 | 0.041 | 0.026 | 0.051 | 0.049 | 0.046 | ||||
30 | 50 | 0.043 | 0.042 | 0.036 | 0.053 | 0.053 | 0.049 | ||||
50 | 50 | 0.044 | 0.043 | 0.028 | 0.054 | 0.052 | 0.049 | ||||
Model (g) | Model (h) | ||||||||||
\(n_1\) | \(n_2\) | \(RW\) | \(RF\) | \(W\) | \(RW\) | \(RF\) | \(W\) | ||||
6 | 6 | 0.053 | 0.041 | 0.045 | 0.048 | 0.032 | 0.044 | ||||
6 | 10 | 0.056 | 0.052 | 0.051 | 0.044 | 0.034 | 0.043 | ||||
10 | 10 | 0.055 | 0.048 | 0.047 | 0.044 | 0.036 | 0.042 | ||||
10 | 15 | 0.055 | 0.052 | 0.048 | 0.045 | 0.039 | 0.044 | ||||
10 | 30 | 0.052 | 0.050 | 0.044 | 0.045 | 0.039 | 0.045 | ||||
20 | 20 | 0.054 | 0.053 | 0.049 | 0.044 | 0.041 | 0.044 | ||||
20 | 30 | 0.054 | 0.053 | 0.048 | 0.047 | 0.044 | 0.047 | ||||
20 | 50 | 0.055 | 0.055 | 0.049 | 0.050 | 0.046 | 0.049 | ||||
30 | 50 | 0.054 | 0.054 | 0.048 | 0.046 | 0.045 | 0.046 | ||||
50 | 50 | 0.054 | 0.053 | 0.049 | 0.054 | 0.051 | 0.052 |
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.023 | 0.014 | 0.015 | 0.00 | 0.024 | 0.022 | 0.020 | ||
1.60 | 0.19 | 0.15 | 0.11 | 0.60 | 0.10 | 0.09 | 0.04 | ||
\(n=(6,6)\) | 3.20 | 0.51 | 0.46 | 0.30 | \(n=(20,20)\) | 1.20 | 0.35 | 0.33 | 0.09 |
4.80 | 0.74 | 0.70 | 0.46 | 1.80 | 0.65 | 0.64 | 0.17 | ||
6.40 | 0.84 | 0.82 | 0.56 | 2.40 | 0.84 | 0.83 | 0.25 | ||
8.00 | 0.91 | 0.89 | 0.64 | 3.00 | 0.94 | 0.94 | 0.33 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.026 | 0.016 | 0.017 | 0.00 | 0.028 | 0.025 | 0.024 | ||
1.50 | 0.21 | 0.17 | 0.10 | 0.50 | 0.09 | 0.08 | 0.03 | ||
\(n=(6,10)\) | 3.00 | 0.57 | 0.53 | 0.29 | \(n=(20,30)\) | 1.00 | 0.31 | 0.29 | 0.07 |
4.50 | 0.78 | 0.76 | 0.44 | 1.50 | 0.59 | 0.58 | 0.13 | ||
6.00 | 0.89 | 0.88 | 0.57 | 2.00 | 0.80 | 0.79 | 0.20 | ||
7.50 | 0.94 | 0.93 | 0.62 | 2.50 | 0.92 | 0.91 | 0.27 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.025 | 0.020 | 0.018 | 0.00 | 0.026 | 0.024 | 0.021 | ||
1.00 | 0.13 | 0.11 | 0.06 | 0.46 | 0.10 | 0.09 | 0.04 | ||
\(n=(10,10)\) | 2.00 | 0.43 | 0.39 | 0.18 | \(n=(20,50)\) | 0.92 | 0.32 | 0.30 | 0.06 |
3.00 | 0.70 | 0.67 | 0.32 | 1.38 | 0.60 | 0.59 | 0.12 | ||
4.00 | 0.85 | 0.83 | 0.42 | 1.84 | 0.81 | 0.81 | 0.18 | ||
5.00 | 0.92 | 0.91 | 0.51 | 2.30 | 0.92 | 0.92 | 0.25 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.023 | 0.016 | 0.017 | 0.00 | 0.027 | 0.025 | 0.021 | ||
0.80 | 0.11 | 0.09 | 0.05 | 0.40 | 0.09 | 0.09 | 0.03 | ||
\(n=(10,15)\) | 1.60 | 0.36 | 0.33 | 0.14 | \(n=(30,50)\) | 0.80 | 0.30 | 0.29 | 0.06 |
2.40 | 0.64 | 0.61 | 0.24 | 1.20 | 0.60 | 0.59 | 0.10 | ||
3.20 | 0.80 | 0.79 | 0.35 | 1.60 | 0.83 | 0.83 | 0.16 | ||
4.00 | 0.90 | 0.89 | 0.43 | 2.00 | 0.94 | 0.94 | 0.22 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.030 | 0.025 | 0.020 | 0.00 | 0.030 | 0.028 | 0.020 | ||
0.70 | 0.12 | 0.10 | 0.05 | 0.32 | 0.08 | 0.08 | 0.030 | ||
\(n=(10,30)\) | 1.40 | 0.37 | 0.35 | 0.11 | \(n=(50,50)\) | 0.64 | 0.26 | 0.26 | 0.05 |
2.10 | 0.65 | 0.63 | 0.21 | 0.96 | 0.55 | 0.54 | 0.08 | ||
2.80 | 0.80 | 0.79 | 0.30 | 1.28 | 0.79 | 0.78 | 0.11 | ||
3.50 | 0.90 | 0.89 | 0.39 | 1.60 | 0.93 | 0.92 | 0.16 |
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.031 | 0.025 | 0.019 | 0.00 | 0.028 | 0.026 | 0.021 | ||
5.40 | 0.22 | 0.19 | 0.13 | 2.00 | 0.11 | 0.11 | 0.04 | ||
\(n=(6,6)\) | 10.80 | 0.52 | 0.50 | 0.34 | \(n=(20,20)\) | 4.00 | 0.34 | 0.34 | 0.11 |
16.20 | 0.72 | 0.71 | 0.49 | 6.00 | 0.61 | 0.60 | 0.19 | ||
21.60 | 0.83 | 0.83 | 0.60 | 8.00 | 0.80 | 0.79 | 0.30 | ||
27.00 | 0.90 | 0.90 | 0.68 | 10.00 | 0.90 | 0.90 | 0.38 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.033 | 0.027 | 0.020 | 0.00 | 0.027 | 0.026 | 0.022 | ||
5.30 | 0.22 | 0.21 | 0.13 | 2.00 | 0.10 | 0.10 | 0.05 | ||
\(n=(6,10)\) | 10.60 | 0.53 | 0.52 | 0.35 | \(n=(20,30)\) | 4.00 | 0.33 | 0.33 | 0.10 |
15.90 | 0.72 | 0.71 | 0.50 | 6.00 | 0.62 | 0.62 | 0.20 | ||
21.20 | 0.82 | 0.82 | 0.59 | 8.00 | 0.81 | 0.80 | 0.29 | ||
26.50 | 0.90 | 0.90 | 0.68 | 10.00 | 0.90 | 0.90 | 0.37 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.031 | 0.027 | 0.020 | 0.00 | 0.028 | 0.027 | 0.020 | |
3.60 | 0.17 | 0.16 | 0.09 | 2.00 | 0.11 | 0.11 | 0.05 | ||
\(n=(10,10)\) | 7.20 | 0.47 | 0.46 | 0.23 | \(n=(20,50)\) | 4.00 | 0.35 | 0.34 | 0.11 |
10.80 | 0.71 | 0.71 | 0.38 | 6.00 | 0.62 | 0.62 | 0.20 | ||
14.40 | 0.84 | 0.84 | 0.49 | 8.00 | 0.80 | 0.80 | 0.29 | ||
18.00 | 0.92 | 0.92 | 0.58 | 10.00 | 0.91 | 0.91 | 0.37 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.029 | 0.027 | 0.020 | 0.00 | 0.026 | 0.026 | 0.022 | ||
3.60 | 0.17 | 0.16 | 0.09 | 1.60 | 0.10 | 0.10 | 0.04 | ||
\(n=(10,15)\) | 7.20 | 0.49 | 0.48 | 0.23 | \(n=(30,50)\) | 3.20 | 0.32 | 0.32 | 0.08 |
10.80 | 0.72 | 0.72 | 0.39 | 4.80 | 0.62 | 0.62 | 0.15 | ||
14.40 | 0.85 | 0.85 | 0.50 | 6.40 | 0.82 | 0.82 | 0.23 | ||
18.00 | 0.92 | 0.91 | 0.57 | 8.00 | 0.92 | 0.92 | 0.30 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.029 | 0.028 | 0.020 | 0.00 | 0.027 | 0.026 | 0.020 | ||
3.40 | 0.16 | 0.16 | 0.09 | 1.12 | 0.08 | 0.08 | 0.03 | ||
\(n=(10,30)\) | 6.80 | 0.45 | 0.45 | 0.22 | \(n=(50,50)\) | 2.24 | 0.26 | 0.26 | 0.05 |
10.20 | 0.69 | 0.69 | 0.36 | 3.36 | 0.53 | 0.53 | 0.09 | ||
13.60 | 0.83 | 0.83 | 0.47 | 4.48 | 0.76 | 0.76 | 0.13 | ||
17.00 | 0.90 | 0.90 | 0.55 | 5.60 | 0.90 | 0.90 | 0.18 |
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.035 | 0.024 | 0.022 | 0.00 | 0.030 | 0.027 | 0.019 | ||
5.00 | 0.23 | 0.20 | 0.14 | 1.70 | 0.10 | 0.10 | 0.04 | ||
\(n=(6,6)\) | 10.00 | 0.55 | 0.52 | 0.37 | \(n=(20,20)\) | 3.40 | 0.33 | 0.32 | 0.11 |
15.00 | 0.76 | 0.75 | 0.53 | 5.10 | 0.60 | 0.59 | 0.19 | ||
20.00 | 0.87 | 0.86 | 0.62 | 6.80 | 0.80 | 0.80 | 0.29 | ||
25.00 | 0.92 | 0.92 | 0.70 | 8.50 | 0.92 | 0.91 | 0.38 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.037 | 0.030 | 0.020 | 0.00 | 0.030 | 0.028 | 0.021 | ||
4.80 | 0.23 | 0.21 | 0.14 | 1.70 | 0.10 | 0.10 | 0.04 | ||
\(n=(6,10)\) | 9.60 | 0.56 | 0.54 | 0.36 | \(n=(20,30)\) | 3.40 | 0.33 | 0.33 | 0.11 |
14.40 | 0.75 | 0.74 | 0.52 | 5.10 | 0.62 | 0.61 | 0.20 | ||
19.20 | 0.86 | 0.86 | 0.62 | 6.80 | 0.81 | 0.81 | 0.30 | ||
24.00 | 0.92 | 0.92 | 0.69 | 8.50 | 0.92 | 0.92 | 0.37 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.030 | 0.025 | 0.020 | 0.00 | 0.030 | 0.030 | 0.022 | ||
3.00 | 0.15 | 0.14 | 0.08 | 1.70 | 0.11 | 0.11 | 0.05 | ||
\(n=(10,10)\) | 6.00 | 0.44 | 0.42 | 0.22 | \(n=(20,50)\) | 3.40 | 0.35 | 0.35 | 0.11 |
9.00 | 0.71 | 0.69 | 0.37 | 5.10 | 0.63 | 0.63 | 0.20 | ||
12.00 | 0.84 | 0.84 | 0.48 | 6.80 | 0.82 | 0.82 | 0.30 | ||
15.00 | 0.92 | 0.92 | 0.56 | 8.50 | 0.92 | 0.92 | 0.38 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.031 | 0.026 | 0.018 | 0.00 | 0.031 | 0.029 | 0.020 | |
2.60 | 0.13 | 0.12 | 0.06 | 1.30 | 0.09 | 0.09 | 0.03 | ||
\(n=(10,15)\) | 5.20 | 0.38 | 0.37 | 0.18 | \(n=(30,50)\) | 2.60 | 0.31 | 0.30 | 0.07 |
7.80 | 0.64 | 0.63 | 0.32 | 3.90 | 0.58 | 0.58 | 0.13 | ||
10.40 | 0.79 | 0.79 | 0.44 | 5.20 | 0.79 | 0.79 | 0.20 | ||
13.00 | 0.90 | 0.90 | 0.54 | 6.50 | 0.92 | 0.92 | 0.29 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.031 | 0.029 | 0.022 | 0.00 | 0.030 | 0.028 | 0.019 | ||
2.60 | 0.14 | 0.13 | 0.07 | 0.96 | 0.08 | 0.08 | 0.03 | ||
\(n=(10,30)\) | 5.20 | 0.39 | 0.39 | 0.18 | \(n=(50,50)\) | 1.92 | 0.25 | 0.25 | 0.05 |
7.80 | 0.64 | 0.64 | 0.32 | 2.88 | 0.54 | 0.53 | 0.09 | ||
10.40 | 0.80 | 0.80 | 0.44 | 3.84 | 0.78 | 0.77 | 0.13 | ||
13.00 | 0.90 | 0.90 | 0.54 | 4.80 | 0.91 | 0.91 | 0.20 |
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.047 | 0.036 | 0.030 | 0.00 | 0.042 | 0.040 | 0.027 | |
2.00 | 0.15 | 0.13 | 0.11 | 0.80 | 0.11 | 0.10 | 0.07 | ||
\(n=(6,6)\) | 4.00 | 0.42 | 0.40 | 0.34 | \(n=(20,20)\) | 1.60 | 0.30 | 0.29 | 0.17 |
6.00 | 0.69 | 0.67 | 0.57 | 2.40 | 0.57 | 0.56 | 0.32 | ||
8.00 | 0.84 | 0.83 | 0.71 | 3.20 | 0.78 | 0.78 | 0.46 | ||
10.00 | 0.91 | 0.91 | 0.78 | 4.00 | 0.91 | 0.91 | 0.57 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.053 | 0.047 | 0.037 | 0.00 | 0.046 | 0.044 | 0.030 | ||
2.00 | 0.16 | 0.15 | 0.12 | 0.80 | 0.13 | 0.13 | 0.09 | ||
\(n=(6,10)\) | 4.00 | 0.43 | 0.42 | 0.34 | \(n=(20,30)\) | 1.60 | 0.37 | 0.36 | 0.25 |
6.00 | 0.70 | 0.69 | 0.59 | 2.40 | 0.65 | 0.65 | 0.47 | ||
8.00 | 0.84 | 0.84 | 0.72 | 3.20 | 0.82 | 0.82 | 0.62 | ||
10.00 | 0.91 | 0.91 | 0.79 | 4.00 | 0.91 | 0.91 | 0.71 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.046 | 0.042 | 0.033 | 0.00 | 0.045 | 0.044 | 0.030 | ||
1.30 | 0.12 | 0.12 | 0.09 | 0.80 | 0.11 | 0.11 | 0.06 | ||
\(n=(10,10)\) | 2.60 | 0.36 | 0.34 | 0.26 | \(n=(20,50)\) | 1.60 | 0.31 | 0.31 | 0.17 |
3.90 | 0.63 | 0.62 | 0.46 | 2.40 | 0.57 | 0.56 | 0.32 | ||
5.20 | 0.81 | 0.81 | 0.60 | 3.20 | 0.79 | 0.79 | 0.46 | ||
6.50 | 0.92 | 0.92 | 0.71 | 4.00 | 0.92 | 0.92 | 0.58 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.046 | 0.043 | 0.033 | 0.00 | 0.042 | 0.041 | 0.026 | ||
1.30 | 0.13 | 0.12 | 0.09 | 0.64 | 0.11 | 0.11 | 0.05 | ||
\(n=(10,15)\) | 2.60 | 0.37 | 0.36 | 0.26 | \(n=(30,50)\) | 1.28 | 0.30 | 0.30 | 0.14 |
3.90 | 0.65 | 0.64 | 0.47 | 1.92 | 0.56 | 0.56 | 0.27 | ||
5.20 | 0.83 | 0.82 | 0.62 | 2.56 | 0.79 | 0.79 | 0.39 | ||
6.50 | 0.92 | 0.91 | 0.71 | 3.20 | 0.92 | 0.92 | 0.51 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.046 | 0.044 | 0.030 | 0.00 | 0.044 | 0.044 | 0.025 | ||
1.30 | 0.13 | 0.13 | 0.09 | 0.48 | 0.10 | 0.10 | 0.05 | ||
\(n=(10,30)\) | 2.60 | 0.37 | 0.36 | 0.25 | \(n=(50,50)\) | 0.96 | 0.27 | 0.27 | 0.10 |
3.90 | 0.65 | 0.65 | 0.47 | 1.44 | 0.53 | 0.53 | 0.19 | ||
5.20 | 0.82 | 0.82 | 0.62 | 1.92 | 0.78 | 0.77 | 0.30 | ||
6.50 | 0.91 | 0.91 | 0.71 | 2.40 | 0.92 | 0.92 | 0.39 |
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.040 | 0.030 | 0.034 | 0.00 | 0.044 | 0.042 | 0.028 | ||
2.00 | 0.16 | 0.13 | 0.13 | 0.80 | 0.11 | 0.11 | 0.07 | ||
\(n=(6,6)\) | 4.00 | 0.44 | 0.40 | 0.38 | \(n=(20,20)\) | 1.60 | 0.30 | 0.29 | 0.17 |
6.00 | 0.70 | 0.67 | 0.62 | 2.40 | 0.55 | 0.55 | 0.31 | ||
8.00 | 0.84 | 0.83 | 0.76 | 3.20 | 0.78 | 0.78 | 0.46 | ||
10.00 | 0.92 | 0.91 | 0.85 | 4.00 | 0.91 | 0.91 | 0.57 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.050 | 0.045 | 0.035 | 0.00 | 0.040 | 0.037 | 0.038 | ||
1.90 | 0.15 | 0.14 | 0.11 | 0.80 | 0.10 | 0.10 | 0.08 | ||
\(n=(6,10)\) | 3.80 | 0.41 | 0.40 | 0.33 | \(n=(20,30)\) | 1.60 | 0.31 | 0.30 | 0.23 |
5.70 | 0.67 | 0.66 | 0.55 | 2.40 | 0.59 | 0.58 | 0.42 | ||
7.60 | 0.83 | 0.83 | 0.71 | 3.20 | 0.81 | 0.81 | 0.60 | ||
9.50 | 0.90 | 0.90 | 0.77 | 4.00 | 0.92 | 0.92 | 0.74 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.044 | 0.038 | 0.038 | 0.00 | 0.041 | 0.041 | 0.026 | ||
1.26 | 0.12 | 0.10 | 0.10 | 0.80 | 0.11 | 0.11 | 0.06 | ||
\(n=(10,10)\) | 2.52 | 0.35 | 0.33 | 0.29 | \(n=(20,50)\) | 1.60 | 0.30 | 0.30 | 0.17 |
3.78 | 0.62 | 0.60 | 0.50 | 2.40 | 0.58 | 0.58 | 0.32 | ||
5.04 | 0.80 | 0.80 | 0.67 | 3.20 | 0.79 | 0.79 | 0.46 | ||
6.30 | 0.91 | 0.91 | 0.78 | 4.00 | 0.91 | 0.91 | 0.57 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.045 | 0.042 | 0.033 | 0.00 | 0.043 | 0.042 | 0.041 | ||
1.24 | 0.12 | 0.12 | 0.09 | 0.64 | 0.11 | 0.11 | 0.08 | ||
\(n=(10,15)\) | 2.48 | 0.33 | 0.33 | 0.23 | \(n=(30,50)\) | 1.28 | 0.31 | 0.31 | 0.21 |
3.72 | 0.60 | 0.60 | 0.44 | 1.92 | 0.59 | 0.58 | 0.38 | ||
4.96 | 0.80 | 0.80 | 0.59 | 2.56 | 0.81 | 0.81 | 0.56 | ||
6.20 | 0.90 | 0.90 | 0.68 | 3.20 | 0.93 | 0.93 | 0.70 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.042 | 0.040 | 0.037 | 0.00 | 0.044 | 0.043 | 0.028 | ||
1.24 | 0.13 | 0.13 | 0.11 | 0.48 | 0.10 | 0.10 | 0.05 | ||
\(n=(10,30)\) | 2.48 | 0.36 | 0.36 | 0.29 | \(n=(50,50)\) | 0.96 | 0.27 | 0.27 | 0.10 |
3.72 | 0.63 | 0.63 | 0.52 | 1.44 | 0.55 | 0.55 | 0.20 | ||
4.96 | 0.81 | 0.81 | 0.69 | 1.92 | 0.78 | 0.78 | 0.30 | ||
6.20 | 0.91 | 0.91 | 0.80 | 2.40 | 0.92 | 0.92 | 0.41 |
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.050 | 0.042 | 0.042 | 0.00 | 0.054 | 0.049 | 0.047 | ||
0.90 | 0.13 | 0.10 | 0.11 | 0.40 | 0.11 | 0.11 | 0.10 | ||
\(n=(6,6)\) | 1.80 | 0.33 | 0.29 | 0.30 | \(n=(20,20)\) | 0.80 | 0.27 | 0.26 | 0.24 |
2.70 | 0.60 | 0.54 | 0.56 | 1.20 | 0.53 | 0.51 | 0.47 | ||
3.60 | 0.80 | 0.77 | 0.77 | 1.60 | 0.76 | 0.75 | 0.69 | ||
4.50 | 0.92 | 0.90 | 0.89 | 2.00 | 0.90 | 0.90 | 0.85 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.054 | 0.042 | 0.046 | 0.00 | 0.052 | 0.049 | 0.048 | ||
0.90 | 0.13 | 0.11 | 0.12 | 0.40 | 0.11 | 0.11 | 0.09 | ||
\(n=(6,10)\) | 1.80 | 0.37 | 0.34 | 0.34 | \(n=(20,30)\) | 0.80 | 0.29 | 0.29 | 0.25 |
2.70 | 0.63 | 0.59 | 0.59 | 1.20 | 0.55 | 0.54 | 0.48 | ||
3.60 | 0.83 | 0.81 | 0.80 | 1.60 | 0.77 | 0.77 | 0.71 | ||
4.50 | 0.92 | 0.92 | 0.90 | 2.00 | 0.92 | 0.92 | 0.87 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.049 | 0.043 | 0.044 | 0.00 | 0.051 | 0.049 | 0.046 | ||
0.64 | 0.11 | 0.10 | 0.10 | 0.40 | 0.12 | 0.11 | 0.10 | ||
\(n=(10,10)\) | 1.28 | 0.32 | 0.29 | 0.29 | \(n=(20,50)\) | 0.80 | 0.30 | 0.30 | 0.27 |
1.92 | 0.59 | 0.55 | 0.54 | 1.20 | 0.57 | 0.57 | 0.51 | ||
2.56 | 0.80 | 0.78 | 0.76 | 1.60 | 0.79 | 0.78 | 0.72 | ||
3.20 | 0.93 | 0.92 | 0.90 | 2.00 | 0.92 | 0.92 | 0.88 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.054 | 0.048 | 0.049 | 0.00 | 0.053 | 0.053 | 0.049 | ||
0.60 | 0.12 | 0.11 | 0.10 | 0.32 | 0.11 | 0.11 | 0.10 | ||
\(n=(10,15)\) | 1.20 | 0.31 | 0.29 | 0.28 | \(n=(30,50)\) | 0.64 | 0.29 | 0.28 | 0.25 |
1.80 | 0.56 | 0.54 | 0.52 | 0.96 | 0.55 | 0.55 | 0.49 | ||
2.40 | 0.78 | 0.77 | 0.73 | 1.28 | 0.78 | 0.78 | 0.71 | ||
3.00 | 0.92 | 0.91 | 0.88 | 1.60 | 0.92 | 0.92 | 0.87 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.055 | 0.052 | 0.051 | 0.00 | 0.054 | 0.052 | 0.049 | ||
0.60 | 0.12 | 0.11 | 0.11 | 0.26 | 0.11 | 0.11 | 0.10 | ||
\(n=(10,30)\) | 1.20 | 0.32 | 0.31 | 0.29 | \(n=(50,50)\) | 0.52 | 0.30 | 0.29 | 0.25 |
1.80 | 0.58 | 0.57 | 0.54 | 0.78 | 0.55 | 0.54 | 0.47 | ||
2.40 | 0.80 | 0.79 | 0.75 | 1.04 | 0.79 | 0.79 | 0.71 | ||
3.00 | 0.93 | 0.92 | 0.89 | 1.30 | 0.93 | 0.93 | 0.88 |
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.053 | 0.041 | 0.045 | 0.00 | 0.054 | 0.053 | 0.049 | ||
1.90 | 0.13 | 0.11 | 0.12 | 0.86 | 0.11 | 0.11 | 0.10 | ||
\(n=(6,6)\) | 3.80 | 0.33 | 0.29 | 0.30 | \(n=(20,20)\) | 1.72 | 0.30 | 0.29 | 0.26 |
5.70 | 0.60 | 0.56 | 0.57 | 2.58 | 0.55 | 0.54 | 0.50 | ||
7.60 | 0.81 | 0.78 | 0.78 | 3.44 | 0.78 | 0.78 | 0.74 | ||
9.50 | 0.92 | 0.91 | 0.90 | 4.30 | 0.92 | 0.91 | 0.89 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.056 | 0.052 | 0.051 | 0.00 | 0.054 | 0.053 | 0.048 | ||
1.90 | 0.13 | 0.12 | 0.11 | 0.86 | 0.12 | 0.11 | 0.10 | ||
\(n=(6,10)\) | 3.80 | 0.35 | 0.33 | 0.32 | \(n=(20,30)\) | 1.72 | 0.30 | 0.30 | 0.27 |
5.70 | 0.61 | 0.59 | 0.58 | 2.58 | 0.56 | 0.56 | 0.51 | ||
7.60 | 0.82 | 0.81 | 0.80 | 3.44 | 0.80 | 0.79 | 0.76 | ||
9.50 | 0.93 | 0.92 | 0.91 | 4.30 | 0.92 | 0.92 | 0.90 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.055 | 0.048 | 0.047 | 0.00 | 0.055 | 0.055 | 0.049 | ||
1.24 | 0.11 | 0.10 | 0.10 | 0.82 | 0.11 | 0.11 | 0.10 | ||
\(n=(10,10)\) | 2.48 | 0.28 | 0.27 | 0.25 | \(n=(20,50)\) | 1.64 | 0.29 | 0.29 | 0.26 |
3.72 | 0.53 | 0.51 | 0.49 | 2.46 | 0.53 | 0.53 | 0.49 | ||
4.96 | 0.75 | 0.74 | 0.72 | 3.28 | 0.76 | 0.75 | 0.71 | ||
6.20 | 0.90 | 0.89 | 0.87 | 4.10 | 0.90 | 0.90 | 0.87 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.055 | 0.052 | 0.048 | 0.00 | 0.054 | 0.054 | 0.048 | ||
1.24 | 0.11 | 0.11 | 0.10 | 0.64 | 0.11 | 0.11 | 0.10 | ||
\(n=(10,15)\) | 2.48 | 0.29 | 0.28 | 0.27 | \(n=(30,50)\) | 1.28 | 0.27 | 0.27 | 0.24 |
3.72 | 0.54 | 0.53 | 0.50 | 1.92 | 0.50 | 0.50 | 0.46 | ||
4.96 | 0.76 | 0.75 | 0.72 | 2.56 | 0.74 | 0.74 | 0.70 | ||
6.20 | 0.90 | 0.90 | 0.88 | 3.20 | 0.89 | 0.89 | 0.86 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.052 | 0.050 | 0.044 | 0.00 | 0.054 | 0.053 | 0.049 | ||
1.24 | 0.12 | 0.11 | 0.10 | 0.50 | 0.11 | 0.11 | 0.09 | ||
\(n=(10,30)\) | 2.48 | 0.30 | 0.29 | 0.27 | \(n=(50,50)\) | 1.00 | 0.26 | 0.26 | 0.23 |
3.72 | 0.55 | 0.54 | 0.51 | 1.50 | 0.50 | 0.49 | 0.45 | ||
4.96 | 0.76 | 0.76 | 0.73 | 2.00 | 0.73 | 0.73 | 0.68 | ||
6.20 | 0.90 | 0.90 | 0.88 | 2.50 | 0.90 | 0.89 | 0.86 |
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.048 | 0.032 | 0.044 | 0.00 | 0.044 | 0.041 | 0.044 | ||
1.16 | 0.13 | 0.10 | 0.12 | 0.54 | 0.11 | 0.10 | 0.10 | ||
\(n=(6,6)\) | 2.32 | 0.35 | 0.30 | 0.31 | \(n=(20,20)\) | 1.08 | 0.30 | 0.29 | 0.26 |
3.48 | 0.61 | 0.56 | 0.56 | 1.62 | 0.57 | 0.56 | 0.49 | ||
4.64 | 0.80 | 0.77 | 0.76 | 2.16 | 0.79 | 0.79 | 0.71 | ||
5.80 | 0.91 | 0.90 | 0.88 | 2.70 | 0.93 | 0.92 | 0.86 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.044 | 0.034 | 0.043 | 0.00 | 0.047 | 0.044 | 0.047 | ||
0.84 | 0.11 | 0.09 | 0.10 | 0.44 | 0.11 | 0.10 | 0.09 | ||
\(n=(6,10)\) | 1.68 | 0.30 | 0.26 | 0.27 | \(n=(20,30)\) | 0.88 | 0.28 | 0.27 | 0.24 |
2.52 | 0.57 | 0.52 | 0.51 | 1.32 | 0.56 | 0.54 | 0.47 | ||
3.36 | 0.78 | 0.74 | 0.72 | 1.76 | 0.79 | 0.78 | 0.69 | ||
4.20 | 0.91 | 0.89 | 0.69 | 2.20 | 0.92 | 0.92 | 0.86 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.044 | 0.036 | 0.042 | 0.00 | 0.050 | 0.046 | 0.049 | ||
0.80 | 0.12 | 0.10 | 0.11 | 0.34 | 0.10 | 0.09 | 0.09 | ||
\(n=(10,10)\) | 1.60 | 0.32 | 0.30 | 0.28 | \(n=(20,50)\) | 0.68 | 0.27 | 0.26 | 0.23 |
2.40 | 0.57 | 0.55 | 0.50 | 1.02 | 0.50 | 0.49 | 0.42 | ||
3.20 | 0.79 | 0.77 | 0.73 | 1.36 | 0.74 | 0.73 | 0.64 | ||
4.00 | 0.91 | 0.90 | 0.86 | 1.70 | 0.90 | 0.89 | 0.82 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.045 | 0.039 | 0.044 | 0.00 | 0.046 | 0.045 | 0.046 | ||
0.64 | 0.11 | 0.10 | 0.10 | 0.34 | 0.11 | 0.10 | 0.09 | ||
\(n=(10,15)\) | 1.28 | 0.29 | 0.27 | 0.25 | \(n=(30,50)\) | 0.68 | 0.29 | 0.28 | 0.24 |
1.92 | 0.55 | 0.52 | 0.48 | 1.02 | 0.55 | 0.54 | 0.45 | ||
2.56 | 0.78 | 0.76 | 0.71 | 1.36 | 0.79 | 0.78 | 0.69 | ||
3.20 | 0.91 | 0.90 | 0.85 | 1.70 | 0.92 | 0.92 | 0.85 | ||
\(d\) | \(RW\) | \(RF\) | \(W\) | \(d\) | \(RW\) | \(RF\) | \(W\) | ||
0.00 | 0.045 | 0.039 | 0.045 | 0.00 | 0.054 | 0.051 | 0.052 | ||
0.48 | 0.11 | 0.09 | 0.10 | 0.30 | 0.10 | 0.09 | 0.08 | ||
\(n=(10,30)\) | 0.96 | 0.28 | 0.26 | 0.24 | \(n=(50,50)\) | 0.60 | 0.25 | 0.25 | 0.21 |
1.44 | 0.54 | 0.51 | 0.46 | 0.90 | 0.48 | 0.48 | 0.40 | ||
1.92 | 0.77 | 0.75 | 0.68 | 1.20 | 0.73 | 0.72 | 0.62 | ||
2.40 | 0.91 | 0.90 | 0.85 | 1.50 | 0.90 | 0.89 | 0.80 |