Hopkins statistic (Hopkins and Skellam 1954) can be used to test for spatial randomness of data and for detecting clusters in data. Although the method is nearly 70 years old, there is persistent confusion regarding the definition and calculation of the statistic. We investigate the confusion and its possible origin. Using the most general definition of Hopkins statistic, we perform a small simulation to verify its distributional properties, provide a visualization of how the statistic is calculated, and provide a fast R function to correctly calculate the statistic. Finally, we propose a protocol of five questions to guide the use of Hopkins statistic.
Hopkins and Skellam (1954) introduced a statistic to test for spatial randomness of data. If the null hypothesis of spatial randomness is rejected, then one possible interpretation is that the data may be clustered into distinct groups. Since one of the problems with clustering methods is that they will always identify clusters (even if there are no meaningful clusters in the data), Hopkins statistic can be used to determine if there are clusters in the data before applying clustering methods. In the description below on how to calculate Hopkins statistic, we follow the terminology of earlier authors and refer to an “event” as one of the existing data values in a matrix \(X\), and a “point” as a new, randomly chosen location. For clarity in the discussions below we make a distinction between \(D\), the dimension of the data, and \(d\), the exponent in the formula for Hopkins statistic.
Let \(X\) be a matrix of \(n\) events (in rows) and \(D\) variables (in columns). Let \(U\) be the space defined by \(X\).
Hopkins statistic is calculated with the following algorithm:

1. Sample at random (without replacement) \(m\) of the events in \(X\), where \(m\) is small relative to \(n\) (commonly no more than 10% of \(n\)).
2. For each sampled event, calculate \(w_i\), the distance from the event to its nearest neighboring event in \(X\).
3. Generate \(m\) new points uniformly at random in \(U\). For each point, calculate \(u_i\), the distance from the point to its nearest event in \(X\).
4. Calculate \(H = \sum u_i^d \big/ \sum (u_i^d + w_i^d)\), using the exponent \(d=D\).
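The following R sketch illustrates the algorithm; it is a minimal implementation for exposition, not the packaged function. The use of FNN::get.knnx() for nearest-neighbor search, the bounding-box sampling of new points, and the exclusion of all sampled events from the neighbor search are our assumptions here.

# Minimal sketch of the algorithm above (not the packaged implementation).
hopkins_sketch <- function(X, m = ceiling(nrow(X) / 10)) {
  X <- as.matrix(X)
  n <- nrow(X); D <- ncol(X)
  # Step 1: sample m events from X (without replacement)
  idx <- sample(n, m)
  # Step 2: w_i = distance from each sampled event to its nearest
  # non-sampled event (one common convention)
  w <- FNN::get.knnx(X[-idx, , drop = FALSE],
                     X[idx, , drop = FALSE], k = 1)$nn.dist[, 1]
  # Step 3: u_i = distance from each new uniform point to its nearest event
  U <- sapply(seq_len(D), function(j) runif(m, min(X[, j]), max(X[, j])))
  u <- FNN::get.knnx(X, matrix(U, ncol = D), k = 1)$nn.dist[, 1]
  # Step 4: Hopkins statistic with the general exponent d = D
  sum(u^D) / (sum(u^D) + sum(w^D))
}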
Because of sampling variability, it is common to calculate \(H\) multiple times and take the average. Under the null hypothesis of spatial randomness, this statistic has a Beta(\(m\),\(m\)) distribution and will always lie between 0 and 1. The interpretation of \(H\) follows these guidelines:

- Values near 0.5 indicate spatial randomness of the events.
- Values near 1 indicate that the events are clustered.
- Values near 0 indicate that the events are evenly spaced (more regular than random).
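For example, the Beta(\(m\),\(m\)) reference distribution gives two-sided critical values for \(H\) in base R (the choice \(m=10\) here is purely for illustration):

# Under spatial randomness, H ~ Beta(m, m); values of H outside this
# interval are evidence against randomness at the 5% level.
m <- 10
qbeta(c(0.025, 0.975), m, m)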
There exists considerable confusion about the definition of Hopkins statistic in scientific publications. In particular, when calculating Hopkins statistic, three different values of the exponent \(d\) (in step 4 above) have been used in the statistical literature: \(d=1\), \(d=2\), and the generalized \(d=D\). Here is a brief timeline of how this exponent has been presented.
1954: Hopkins and Skellam (1954) introduced Hopkins statistic in a two-dimensional setting. The formula they presented is in a slightly different form, but is equivalent to \(\sum u_i^2 \big/ \sum (u_i^2 + w_i^2)\). The exponent here is \(d=2\).
1976: Diggle et al. (1976) presented a formula for Hopkins statistic in a two-dimensional setting as \(\sum u_i \big/ \sum (u_i + w_i)\). This formula has no exponents and therefore at first glance appears to use the exponent \(d=1\). However, a careful reading of their text shows that their \(u_i\) and \(w_i\) values were actually squared Euclidean distances. If their \(u_i\) and \(w_i\) had represented ordinary (non-squared) Euclidean distances, then their formula would have been \(\sum u_i^2 \big/ \sum (u_i^2 + w_i^2)\). We suspect this paper is the likely source of confusion among later authors.
1982: Cross and Jain (1982) generalized Hopkins statistic to \(X\) of any dimension \(D\) by using the exponent \(d=D\): \(\sum u_i^d \big/ \sum (u_i^d + w_i^d)\). This formula was also used by Zeng and Dubes (1985a), Dubes and Zeng (1987), and Banerjee and Dave (2004).
1990: Lawson and Jurs (1990) and Jurs and Lawson (1990) gave the formula for Hopkins statistic as \(\sum u_i \big/ \sum (u_i + w_i)\), but used ordinary distances instead of squared distances. Perhaps this was a result of misunderstanding the formula in Diggle et al. (1976).
2015: The R function hopkins() in the clustertend package (YiLan and RuTong 2015 version 1.4) cited Lawson and Jurs (1990) and also used the exponent \(d=1\).
2022: The new function hopkins() in the hopkins package (Wright 2022 version 1.0) uses the general exponent \(d=D\) as found in Cross and Jain (1982).
Having identified the confusion in the statistical literature, we now ask the question, “Does it matter what value of \(d\) is used in the exponent?” In a word, “yes”.
According to Cross and Jain (1982), under the null hypothesis of no structure in the data, the distribution of Hopkins statistic is Beta(\(m\),\(m\)), where \(m\) is the number of rows sampled in \(X\). This distribution can be verified in a simple simulation study:
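A minimal sketch of such a simulation follows. It assumes the hopkins() function from the hopkins package and, as a further assumption, that its d argument can be used to force the exponent \(d=1\).

# Compare Hopkins statistic with exponents d = D and d = 1 against the
# theoretical Beta(m, m) density.
library(hopkins)
n <- 100; D <- 5; m <- 10; B <- 1000
set.seed(1)
H_D <- replicate(B, hopkins(matrix(runif(n * D), ncol = D), m = m))
H_1 <- replicate(B, hopkins(matrix(runif(n * D), ncol = D), m = m, d = 1))
plot(density(H_D), col = "blue", xlim = c(0, 1), main = "")
lines(density(H_1), col = "red")
curve(dbeta(x, m, m), add = TRUE)  # theoretical Beta(m, m) density in black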
In Figure 1, the empirical density of Hopkins statistic with exponent \(d=D\) (blue curve) is similar to the theoretical Beta(\(m\),\(m\)) density (black line), while the empirical density with exponent \(d=1\) (red curve) is clearly dissimilar. The distribution of Hopkins statistic with \(d=1\) is clearly incorrect (except in trivial cases where \(X\) has only 1 column). One more thing to note about the graph is that the blue curve is slightly flatter than the theoretical distribution shown in black. This is not accidental, but is caused by edge effects of the sampling region, which will be discussed in a later section.
The first three examples in this section are adapted from Gastner (2005). The datasets are available in the spatstat.data package (Baddeley et al. 2021). A modified version of the hopkins() function was written for this paper to show how Hopkins statistic is calculated (inspired by Figure 1 of Lawson and Jurs (1990)). To minimize the amount of over-plotting, only \(m=3\) sampling points are used for these examples. In each figure, 3 of the existing events in \(X\) are chosen at random and a light-blue arrow is drawn to the nearest neighboring event in \(X\). In addition, 3 points are drawn uniformly in the plotting region and a light-red arrow is drawn from each point to the nearest event in \(X\). The colored numbers are the lengths of the arrows.
The cells data represent the centers of mass of 42 cells from insect tissue. The scatterplot of the data in Figure 2 shows that events are systematically spaced nearly as far apart as possible. Because the data are two-dimensional, Hopkins statistic is calculated as \(\sum u_i^2 \big/ \sum (u_i^2 + w_i^2)\):
(.046^2 + .081^2 + .021^2) /
( (.046^2 + .081^2 + .021^2) + (.152^2 + .14^2 + .139^2) )
[1] 0.1281644
The hopkins() function returns nearly the same value (the small difference is due to rounding of the displayed distances):
set.seed(17)
hopkins(cells, m=3)
[1] 0.1285197
The value of Hopkins statistic in this calculation is based on only \(m=3\) events and will have sizable sampling error. To reduce the sampling error, a larger sample size can be used, up to approximately 10% of the number of events. To reduce sampling error further while maintaining the independence assumption of the sampling, repeated samples can be drawn. Here we use the idea of Gastner (2005) to calculate Hopkins statistic 100 times (as sketched below) and then calculate the mean and standard deviation of the 100 values, which in this case are 0.21 and 0.06. The mean is quite a bit lower than 0.5 and indicates the events are spaced more evenly than purely-random events (p-value 0.05).
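A sketch of this repeated sampling follows, assuming the hopkins package; extracting the coordinates directly from the cells point pattern is our own shortcut here.

# Repeat Hopkins statistic 100 times on the cells data and summarize.
library(hopkins)
library(spatstat.data)
xy <- data.frame(x = cells$x, y = cells$y)  # coordinates of the 42 events
set.seed(1)
H <- replicate(100, hopkins(as.matrix(xy), m = 4))  # m is roughly 10% of n = 42
c(mean = mean(H), sd = sd(H))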
The japanesepines data contains the locations of 65 Japanese black pine saplings in a square 5.7 meters on a side. The plot of the data in Figure 3 is an example of data in which the events are randomly spaced.
The value of Hopkins statistic using 3 events and points is:
(.023^2+.076^2+.07^2) /
((.023^2+.076^2+.07^2) + (.104^2+.1^2+.058^2))
[1] 0.3166596
The mean and standard deviation of the 100 Hopkins statistics are 0.48 and 0.12. The value of the statistic is close to 0.5 and indicates no evidence against a random distribution of data (p-value 0.9).
The redwood data are the coordinates of 62 redwood seedlings in a square 23 meters on a side. The plot in Figure 4 shows events that exhibit clustering. The value of Hopkins statistic for the plot is:
(.085^2+.078^2+.158^2) /
((.085^2+.078^2+.158^2) + (.028^2+.028^2+.12^2))
[1] 0.7056101
The mean and standard deviation of the 100 Hopkins statistics are 0.79 and 0.13. The value of the statistic is much higher than 0.5, which indicates that the data are somewhat clustered (p-value 0.03).
Adolfsson et al. (2017) provide a review of various methods of detecting clusterability. One of the methods they considered was Hopkins statistic, which they calculated using 10% sampling. They evaluated the clusterability of nine R datasets by calculating Hopkins statistic 100 times and then reporting the proportion of time that Hopkins statistic exceeded the appropriate beta quantile. We can repeat their analysis and calculate Hopkins statistic for both \(d=1\) dimension and \(d=D\) dimensions, where \(D\) is the number of columns for each dataset.
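A sketch of this repeated-testing calculation follows; the 10% sampling and the comparison against the 0.95 beta quantile follow the description above, while the helper name prop_signif is ours. The results for all nine datasets are shown in Table 1.

# Proportion of B Hopkins tests that are significant for a dataset,
# using m = 10% sampling and the 0.95 quantile of Beta(m, m).
library(hopkins)
prop_signif <- function(X, B = 100) {
  X <- as.matrix(X)
  m <- ceiling(nrow(X) / 10)
  H <- replicate(B, hopkins(X, m = m))
  mean(H > qbeta(0.95, m, m))
}
set.seed(1)
prop_signif(datasets::swiss)  # compare with the HopkinsD column of Table 1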
dataset | n | D | Adolfsson | Hopkins1 | HopkinsD |
---|---|---|---|---|---|
faithful | 272 | 2 | 1.00 | 1.00 | 1.00 |
iris | 150 | 5 | 1.00 | 1.00 | 1.00 |
rivers | 141 | 1 | 0.92 | 0.89 | 0.90 |
swiss | 47 | 6 | 0.41 | 0.25 | 0.94 |
attitude | 30 | 7 | 0.00 | 0.00 | 0.59 |
cars | 50 | 2 | 0.19 | 0.23 | 0.68 |
trees | 31 | 3 | 0.18 | 0.22 | 0.71 |
USJudgeRatings | 43 | 12 | 0.69 | 0.53 | 1.00 |
USArrests | 50 | 4 | 0.01 | 0.00 | 0.56 |
In Table 1, since the Adolfsson and Hopkins1 columns are similar (within sampling variability), it appears that Adolfsson et al. (2017) used Hopkins statistic with \(d=1\) as the exponent. This would be expected if they had used the clustertend package (YiLan and RuTong 2015 version 1.4) to calculate Hopkins statistic.
For a few of the datasets, there is substantial disagreement between the last two columns. For example, the swiss data is significantly clusterable 41% of the time according to Adolfsson et al. (2017), but 94% of the time when using Hopkins statistic with exponent \(d=D\). A scatterplot of the swiss data in Figure 5 shows that the data are strongly non-random, which agrees with the 94%.
Similarly, the trees data is significantly clusterable 18% of the time according to the Adolfsson column, but 71% of the time according to HopkinsD. The scatterplot in Figure 6 shows strong non-random patterns, which agrees with the 71%.
Scatterplot matrices of the swiss, attitude, cars, trees, and USArrests datasets can be found in Brownstein et al. (2019). Each scatterplot matrix shows at least one pair of variables with notable correlation, and therefore the data are not randomly distributed, but rather are clustered. For each of these datasets, the proportion of times Hopkins1 is significant is less than 0.5, but the proportion of times HopkinsD is significant is greater than 0.5. The HopkinsD statistic is accurately detecting the presence of clusters in these datasets.
In the cells, japanesepines, and redwood examples above, it is possible or even probable that there are additional events outside of the sampling frame that contains the data. The sampling frame thus has the effect of cutting off potential nearest neighbors from consideration. If the distribution of the data can be assumed to extend beyond the sampling frame, and if the shape of the sampling frame can be viewed as a hypercube, then edge effects due to the sampling frame can be corrected by using a torus geometry that wraps each edge of the sampling frame around to the opposite side (Li and Zhang 2007). To see an illustration of this, look again at the plot of the japanesepines data in Figure 3. The randomly-generated point in the upper-left corner is a distance of 0.076 away from the nearest event. However, if the left edge of the plot is wrapped around an imaginary cylinder and connected to the right edge of the plot, then the nearest neighbor is the event in the upper-right corner at coordinates (0.97, 0.86).
To see what effect the torus geometry has on the distribution of Hopkins statistic, consider the following simulation, sketched below. We generate \(n=100\) events uniformly in a \(D=5\) dimension unit cube and sample \(m=10\) events to calculate the value of Hopkins statistic using both a simple geometry and a torus geometry. These steps are repeated \(B=1000\) times. The calculation of the nearest neighbor using a torus geometry is computationally more demanding than using a simple geometry, especially as the number of dimensions \(D\) increases, so parallel computing can reduce the computing time roughly linearly in the number of processors used. As a point of reference, this small simulation study was performed in less than 1 minute on a reasonably powerful laptop with 8 cores using the doParallel package (Microsoft Corporation and Weston 2020). We found that \(B=1000\) provided results that were stable regardless of the seed value used for random number generation.
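In outline, the simulation might look as follows; method = "torus" matches the geometry option described in the summary section, and the serial replicate() here stands in for the parallel foreach loop used with doParallel.

# B = 1000 Hopkins statistics under simple and torus geometries for
# n = 100 events in a D = 5 unit cube, sampling m = 10 events each time.
library(hopkins)
sim_once <- function(method) {
  X <- matrix(runif(100 * 5), ncol = 5)
  hopkins(X, m = 10, method = method)
}
set.seed(1)
H_simple <- replicate(1000, sim_once("simple"))
H_torus  <- replicate(1000, sim_once("torus"))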
In Figure 7, when a torus geometry is used to correct for edge effects, the empirical distribution of Hopkins statistic is remarkably close to its theoretical distribution. In contrast, when a simple geometry is used, the empirical distribution is somewhat flattened, with heavier tails. The practical result is that when no edge correction is used, Hopkins statistic is more likely to deviate from 0.5 and therefore more likely to suggest the data are not uniformly distributed. This erroneous interpretation becomes a greater risk as the number of dimensions \(D\) increases and edge effects become more pronounced.
Another practical problem affecting the correct use and interpretation of Hopkins statistic has to do with the shape of the sampling frame. Consider the example data in Figure 8. On the left side, 250 random events were simulated in a 2-dimensional unit square. On the right side, the same data are used, but have been subset to keep only the events inside a unit-diameter circle. For both figures, Hopkins statistic was calculated 100 times with 10 events sampled each time.
On the left side, both the bounding box and the actual sampling frame are the unit square; the median of 100 Hopkins statistics is 0.51, providing no evidence against a random distribution. On the right side, the actual sampling frame of the data is a unit circle, but Hopkins statistic still uses the unit square (for generating new points in \(U\)), and the median Hopkins statistic is 0.75, indicating clustering of the data within the sampling frame even though the data were generated uniformly. A few more examples of problems related to the sampling frame can be found in Smith and Jain (1984).
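The two medians can be reproduced in outline as follows; the seed and the construction of the circular subset are ours.

# Uniform events in the unit square, and the subset inside the
# unit-diameter circle centered at (0.5, 0.5).
library(hopkins)
set.seed(1)
X <- matrix(runif(250 * 2), ncol = 2)
inside <- (X[, 1] - 0.5)^2 + (X[, 2] - 0.5)^2 < 0.5^2
Xc <- X[inside, , drop = FALSE]
median(replicate(100, hopkins(X,  m = 10)))  # square frame: near 0.5
median(replicate(100, hopkins(Xc, m = 10)))  # circular data, square frame: higher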
To consider the problem of the sampling frame with real data, refer again to the trees data in Figure 6. Because trees usually grow both in height and girth at the same time, it would be unexpected to find tall trees with narrow girth or short trees with large girth. Also, since the volume is a function of the girth and height, it is correlated with those two variables. In the scatterplot of girth versus volume, it would be nearly impossible to find points in the upper-left or lower-right corner of the square. From a biological point of view, the sampling frame cannot be shaped like a square, and the null hypothesis of uniform distribution of data is violated a priori, which means the distribution of Hopkins statistic does not follow a Beta(\(m\),\(m\)) distribution.
Because Hopkins statistic is not hard to calculate and is easy to interpret, yet can be misused (as shown in the previous sections), we propose a protocol for using Hopkins statistic. The protocol simply asks the practitioner to consider the following five questions before calculating Hopkins statistic:

1. Does the software calculate Hopkins statistic with the correct exponent \(d=D\)?
2. Is the sample size \(m\) no more than about 10% of the number of events \(n\), so that the sampled distances are approximately independent?
3. Have enough repeated samples been drawn to reduce the sampling error of the statistic?
4. Do edge effects of the sampling frame need to be corrected, for example with a torus geometry?
5. Is the sampling frame of the data approximately a hypercube, so that uniformly-generated points are appropriate?
The important point of this protocol is to raise awareness of potential problems. We leave it to the practitioner to decide what to do with the answers to these questions.
The statistical literature regarding Hopkins statistic is filled with confusion about how to calculate the statistic. Some publications have erroneously used the exponent \(d=1\) in the formula for Hopkins statistic and this error has propagated into much statistical software and led to incorrect conclusions. To remedy this situation, the R package hopkins (Wright 2022) provides a function hopkins() that calculates Hopkins statistic using the general exponent \(d=D\) for \(D\)-dimensional data. The function can use simple geometry for fast calculations or torus geometry to correct for edge effects. Using this function, we show that the distribution of Hopkins statistic calculated with the general exponent \(d=D\) aligns closely with the theoretical distribution of the statistic. Because inference with Hopkins statistic can be trickier than expected, we introduce a protocol of five questions to consider when using Hopkins statistic.
Alternative versions of Hopkins statistic have been examined by Zeng and Dubes (1985b), Rotondi (1993), and Li and Zhang (2007). Other methods of examining multivariate uniformity of data have been considered by Smith and Jain (1984), Yang and Modarres (2017), and Petrie and Willemain (2013).
Thanks to Deanne Wright for bringing the confusion about Hopkins statistic to our attention. Thanks to Vanessa Windhausen and Deanne Wright for reading early drafts of this paper and to Dianne Cook for reviewing the final version. Thanks to Wong (2013) for the pdist package for fast computation of nearest neighbors and thanks to Northrop (2021) for the donut package for nearest neighbor search on a torus.
Supplementary materials for this article can be downloaded at RJ-2022-055.zip.