Hopkins statistic (Hopkins and Skellam 1954) can be used to test for spatial randomness of data and to detect clusters in data. Although the method is nearly 70 years old, there is persistent confusion regarding the definition and calculation of the statistic. We investigate this confusion and its possible origin. Using the most general definition of Hopkins statistic, we perform a small simulation to verify its distributional properties, provide a visualization of how the statistic is calculated, and provide a fast R function to correctly calculate the statistic. Finally, we propose a protocol of five questions to guide the use of Hopkins statistic.
Hopkins and Skellam (1954) introduced a statistic to test for spatial randomness of data. If the null hypothesis of spatial randomness is rejected, then one possible interpretation is that the data may be clustered into distinct groups. Since one of the problems with clustering methods is that they will always identify clusters (even if there are no meaningful clusters in the data), Hopkins statistic can be used to determine if there are clusters in the data before applying clustering methods. In the description below of how to calculate Hopkins statistic, we follow the terminology of earlier authors and refer to an “event” as one of the existing data values in a matrix \(X\), and a “point” as a new, randomly chosen location. For clarity in the discussions below, we make a distinction between \(D\), the dimension of the data, and \(d\), the exponent in the formula for Hopkins statistic.
Let \(X\) be a matrix of \(n\) events (in rows) and \(D\) variables (in columns). Let \(U\) be the space defined by \(X\).
Hopkins statistic is calculated with the following algorithm:

1. Sample at random (without replacement) \(m\) of the \(n\) events in \(X\), where \(m\) is small relative to \(n\).
2. For each sampled event, calculate \(w_i\), the Euclidean distance to its nearest neighbor among the other events in \(X\).
3. Generate \(m\) new points uniformly at random in \(U\). For each point, calculate \(u_i\), the Euclidean distance to the nearest event in \(X\).
4. Calculate \(H = \sum_{i=1}^m u_i^d \big/ \sum_{i=1}^m (u_i^d + w_i^d)\), where \(d = D\).
Because of sampling variability, it is common to calculate \(H\) multiple times and take the average. Under the null hypothesis of spatial randomness, this statistic has a Beta(\(m\),\(m\)) distribution and will always lie between 0 and 1. The interpretation of \(H\) follows these guidelines:

- \(H\) near 0.5: the events are consistent with spatial randomness.
- \(H\) near 0: the events are more evenly (systematically) spaced than random events.
- \(H\) near 1: the events are clustered.
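To make the computation concrete, below is a minimal R sketch of this algorithm. The function name hopkins_stat is ours, and using the bounding box of \(X\) as the sampling space \(U\) is an illustrative simplification; the hopkins package provides a proper implementation.

# Minimal sketch of Hopkins statistic with exponent d = D.
# X: numeric matrix of n events (rows) and D variables (columns).
# The bounding box of X stands in for the sampling space U.
hopkins_stat <- function(X, m = ceiling(nrow(X) / 10)) {
  n <- nrow(X)
  D <- ncol(X)
  # w_i: distance from each of m sampled events to its nearest neighbor in X
  idx <- sample(n, m)
  w <- sapply(idx, function(i) {
    d2 <- rowSums((X[-i, , drop = FALSE] -
                   matrix(X[i, ], n - 1, D, byrow = TRUE))^2)
    sqrt(min(d2))
  })
  # u_i: distance from each of m uniform random points to the nearest event in X
  u <- sapply(seq_len(m), function(j) {
    p <- apply(X, 2, function(col) runif(1, min(col), max(col)))
    sqrt(min(rowSums((X - matrix(p, n, D, byrow = TRUE))^2)))
  })
  sum(u^D) / (sum(u^D) + sum(w^D))  # step 4, with d = D
}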
There exists considerable confusion about the definition of Hopkins statistic in scientific publications. In particular, when calculating Hopkins statistic, three different values of the exponent \(d\) (in step 4 above) have been used in the statistical literature: \(d=1\), \(d=2\), and the generalized \(d=D\). Here is a brief timeline of how this exponent has been presented.
1954: Hopkins and Skellam (1954) introduced Hopkins statistic in a two-dimensional setting. The formula they present is in a slightly different form, but is equivalent to \(\sum u_i^2 \big/ \sum (u_i^2 + w_i^2 )\). The exponent here is \(d=2\).
1976: Diggle et al. (1976) presented a formula for Hopkins statistic in a two-dimensional setting as \(\sum u_i \big/ \sum (u_i + w_i )\). This formula has no exponents and therefore at first glance appears to use the exponent \(d=1\) in the equation for Hopkins statistic. However, a careful reading of their text shows that their \(u_i\) and \(w_i\) values were actually squared Euclidean distances. If their \(u_i\) and \(w_i\) had represented ordinary (non-squared) Euclidean distances, then their formula would have been \(\sum u_i^2 \big/ \sum (u_i^2 + w_i^2 )\). We suspect this paper is the likely source of confusion for later authors.
1982: Cross and Jain (1982) generalized Hopkins statistic to \(X\) of any dimension \(D\), using the exponent \(d=D\) in \(\sum u_i^d \big/ \sum (u_i^d + w_i^d )\). This formula was also used by Zeng and Dubes (1985a), Dubes and Zeng (1987), and Banerjee and Dave (2004).
1990: Lawson and Jurs (1990) and Jurs and Lawson (1990) gave the formula for Hopkins statistic as \(\sum u_i \big/ \sum (u_i + w_i)\), but used ordinary distances instead of squared distances. Perhaps this was a result of misunderstanding the formula in Diggle et al. (1976).
2015: The R function hopkins() in the clustertend package (YiLan and RuTong 2015, version 1.4) cited Lawson and Jurs (1990) and also used the exponent \(d=1\).
2022: The new function hopkins() in the hopkins package (Wright 2022, version 1.0) uses the general exponent \(d=D\), as found in Cross and Jain (1982).
Having identified the confusion in the statistical literature, we now ask the question, “Does it matter what value of \(d\) is used in the exponent?” In a word, “yes”.
According to Cross and Jain (1982), under the null hypothesis of no structure in the data, the distribution of Hopkins statistic is Beta(\(m\),\(m\)), where \(m\) is the number of rows sampled in \(X\). This distribution can be verified in a simple simulation study:
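A sketch of such a simulation, using the hopkins_stat() function above (the settings \(n\), \(D\), \(m\), and \(B\) are illustrative; changing the final exponent in hopkins_stat() from D to 1 produces the red curve of Figure 1):

# Simulate B Hopkins statistics under spatial randomness and compare
# their density to the theoretical Beta(m, m) density.
set.seed(1)
B <- 1000; n <- 100; D <- 3; m <- 10
sim <- replicate(B, hopkins_stat(matrix(runif(n * D), n, D), m = m))
plot(density(sim), col = "blue", main = "")  # empirical density, d = D
curve(dbeta(x, m, m), add = TRUE)            # theoretical Beta(m, m) density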
Figure 1: Results of a simulation study of the distribution of Hopkins statistic. The red and blue lines are the empirical density curves of 1000 Hopkins statistics calculated with exponents \(d=1\) (red) and \(d=3\) (blue). The black line is the theoretical distribution of the Hopkins statistic. The red line is very far away from the black line and shows that calculating Hopkins statistic with exponent \(d=1\) is incorrect.
In Figure 1, the empirical density of the blue curve is similar to the theoretical distribution shown by the black line, while the empirical density of the red curve is clearly dissimilar. The distribution of Hopkins statistic with \(d=1\) is clearly incorrect (except in the trivial case where \(X\) has only one column). Note also that the blue curve is slightly flatter than the theoretical distribution shown in black. This is not accidental, but is caused by edge effects of the sampling region and will be discussed in a later section.
The first three examples in this section are adapted from Gastner (2005). The datasets are available in the spatstat.data package (Baddeley et al. 2021). A modified version of the hopkins() function was written for this paper to show how Hopkins statistic is calculated (inspired by Figure 1 of Lawson and Jurs (1990)). To minimize over-plotting, only \(m=3\) sampling points are used for these examples. In each figure, 3 of the existing events in \(X\) are chosen at random and a light-blue arrow is drawn from each to its nearest neighbor in \(X\). In addition, 3 points are drawn uniformly in the plotting region and a light-red arrow is drawn from each to the nearest event in \(X\). The colored numbers are the lengths of the arrows.
Figure 2: An example of how Hopkins statistic is calculated with systematically-spaced data. The black circles are the events of the cells data. Each blue W represents a randomly-chosen event. Each blue arrow points from a W to the nearest-neighboring event. Each red U is a new, randomly-generated point. Each red arrow points from a U to the nearest-neighboring event. The numbers are the lengths of the arrows. In systematically-spaced data, red arrows tend to be shorter than blue arrows.
The cells data represent the centers of mass of 42 cells from insect tissue. The scatterplot of the data in Figure 2 shows that events are systematically spaced, nearly as far apart as possible. Because the data are two-dimensional, Hopkins statistic is calculated as the sum of the squared distances \(u_i^2\) divided by the sum of the squared distances \(u_i^2 + w_i^2\):
(.046^2 + .081^2 + .021^2) /
( (.046^2 + .081^2 + .021^2) + (.152^2 + .14^2 + .139^2) )
[1] 0.1281644
The hopkins() function returns the same value:
set.seed(17)
hopkins(cells, m=3)
[1] 0.1285197
The value of the Hopkins statistic in this calculation is based on only \(m=3\) events and will have sizable sampling error. To reduce the sampling error, a larger sample size can be used, up to approximately 10% of the number of events. To reduce sampling error further while maintaining the independence assumption of the sampling, repeated samples can be drawn. Here we follow the idea of Gastner (2005) and calculate Hopkins statistic 100 times, then compute the mean and standard deviation of the 100 values, which in this case are 0.21 and 0.06. This value is well below 0.5 and indicates the events are spaced more evenly than purely-random events (p-value 0.05).
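A sketch of this repeated-sampling approach (the seed and the choice \(m=4\), roughly 10% of the 42 events, are our assumptions, so the resulting values will differ slightly from those reported):

# Calculate Hopkins statistic 100 times and summarize the results.
set.seed(42)
hop <- replicate(100, hopkins(cells, m = 4))
mean(hop)  # compare with the reported mean of 0.21
sd(hop)    # compare with the reported standard deviation of 0.06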
The japanesepines data contains the locations of 65 Japanese black pine saplings in a square 5.7 meters on a side. The plot of the data in Figure 3 is an example of data in which the events are randomly spaced.
Figure 3: An example of how Hopkins statistic is calculated with randomly-spaced data. The black circles are the events of the japanesepines data. Each blue W represents a randomly-chosen event. Each blue arrow points from a W to the nearest-neighboring event. Each red U is a new, randomly-generated point. Each red arrow points from a U to the nearest-neighboring event. The numbers are the lengths of the arrows. In randomly-spaced data, red arrows tend to be similar in length to blue arrows.
The value of Hopkins statistic using 3 events and 3 points is:
(.023^2+.076^2+.07^2) /
((.023^2+.076^2+.07^2) + (.104^2+.1^2+.058^2))
[1] 0.3166596
The mean and standard deviation of the 100 Hopkins statistics are 0.48 and 0.12. The value of the statistic is close to 0.5 and indicates no evidence against a random distribution of data (p-value 0.9).
Figure 4: An example of how Hopkins statistic is calculated with clustered data. The black circles are the events of the redwood data. Each blue W represents a randomly-chosen event. Each blue arrow points from a W to the nearest-neighboring event. Each red U is a new, randomly-generated point. Each red arrow points from a U to the nearest-neighboring event. The numbers are the lengths of the arrows. In clustered data, red arrows tend to be longer than blue arrows.
The redwood data are the coordinates of 62 redwood seedlings in a square 23 meters on a side. The plot in Figure 4 shows events that exhibit clustering. The value of Hopkins statistic for the plot is:
(.085^2+.078^2+.158^2) /
((.085^2+.078^2+.158^2) + (.028^2+.028^2+.12^2))
[1] 0.7056101
The mean and standard deviation of the 100 Hopkins statistics are 0.79 and 0.13. The value of the statistic is much higher than 0.5, which indicates that the data are somewhat clustered (p-value 0.03).
Adolfsson et al. (2017) provide a review of various methods of detecting clusterability. One of the methods they considered was Hopkins statistic, which they calculated using 10% sampling. They evaluated the clusterability of nine R datasets by calculating Hopkins statistic 100 times and then reporting the proportion of times that Hopkins statistic exceeded the appropriate beta quantile. We can repeat their analysis and calculate Hopkins statistic using both the exponent \(d=1\) and the exponent \(d=D\), where \(D\) is the number of columns of each dataset.
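The following sketch shows the idea for one dataset (trees). Reading “the appropriate beta quantile” as the 95% quantile of the null Beta(\(m\),\(m\)) distribution is our interpretation, and hopkins() computes the \(d=D\) version (the HopkinsD column below):

# For one dataset: proportion of 100 Hopkins statistics (d = D) that
# exceed the 95% quantile of the null Beta(m, m) distribution.
X <- as.matrix(datasets::trees)
m <- ceiling(0.1 * nrow(X))               # 10% sampling
hop <- replicate(100, hopkins(X, m = m))
mean(hop > qbeta(0.95, m, m))             # proportion significant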
Table 1: For nine R datasets, the proportion of 100 Hopkins statistics that were significant, as reported by Adolfsson et al. (2017) and as calculated with exponent \(d=1\) (Hopkins1) and exponent \(d=D\) (HopkinsD).

| dataset | n | D | Adolfsson | Hopkins1 | HopkinsD |
|---|---|---|---|---|---|
| faithful | 272 | 2 | 1.00 | 1.00 | 1.00 |
| iris | 150 | 5 | 1.00 | 1.00 | 1.00 |
| rivers | 141 | 1 | 0.92 | 0.89 | 0.90 |
| swiss | 47 | 6 | 0.41 | 0.25 | 0.94 |
| attitude | 30 | 7 | 0.00 | 0.00 | 0.59 |
| cars | 50 | 2 | 0.19 | 0.23 | 0.68 |
| trees | 31 | 3 | 0.18 | 0.22 | 0.71 |
| USJudgeRatings | 43 | 12 | 0.69 | 0.53 | 1.00 |
| USArrests | 50 | 4 | 0.01 | 0.00 | 0.56 |
In Table 1, the Adolfsson and Hopkins1 columns are similar (within sampling variability), so it appears that Adolfsson et al. (2017) used Hopkins statistic with \(d=1\) as the exponent. This would be expected if they had used the clustertend package (YiLan and RuTong 2015, version 1.4) to calculate Hopkins statistic.
For a few of the datasets, there is substantial disagreement between the last two columns. For example, the swiss data is significantly clusterable 41% of the time according to Adolfsson et al. (2017), but 94% of the time when using Hopkins statistic with exponent \(d=D\). A scatterplot of the swiss data in Figure 5 shows that the data are strongly non-random, which agrees with the 94%.
Figure 5: Pairwise scatterplots of the R dataset swiss. The meaning of the variables is not important here. Because some panels show a lack of spatial randomness of the data, we would expect Hopkins statistic to be significant.
Similarly, the trees data is significantly clusterable 18% of the time according to the Adolfsson column, but 71% of the time according to HopkinsD. The scatterplot in Figure 6 shows strong non-random patterns, which agrees with the 71%.
Figure 6: Pairwise scatterplots of the R dataset trees. The data are Girth, Height, and Volume of 31 black cherry trees. Because all panels show a lack of spatial randomness of the data, we would expect Hopkins statistic to be significant.
Scatterplot matrices of the swiss, attitude, cars, trees, and USArrests datasets can be found in Brownstein et al. (2019). Each scatterplot matrix shows at least one pair of the variables with notable correlation and therefore the data are not randomly-distributed, but rather are clustered. For each of these datasets, the proportion of times Hopkins1 is significant is less than 0.5, but the proportion of times HopkinsD is significant is greater than 0.5. The HopkinsD statistic is accurately detecting the presence of clusters in these datasets.
In the cells, japanesepines and redwood examples above, it is possible or even probable that there are additional events outside of the sampling frame that contains the data. The sampling frame thus has the effect of cutting off potential nearest neighbors from consideration. If the distribution of the data can be assumed to extend beyond the sampling frame and if the shape of the sampling frame can be viewed as a hypercube, then edge effects due to the sampling frame can be corrected by using a torus geometry that wraps the edges of the sampling frame around to the opposite side (Li and Zhang 2007). To see an illustration of this, look again at the plot of the japanesepines data in Figure 3. The randomly-generated point \(U\) in the upper left corner is a distance of \(0.076\) away from the nearest event. However, if the left edge of the plot is wrapped around an imaginary cylinder and connected to the right edge of the plot, then the nearest neighbor is the event in the upper-right corner at coordinates (0.97, 0.86).
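In a unit square (or unit hypercube), the wrapped difference along each coordinate is the smaller of the direct difference and its complement. A small sketch (torus_dist is our illustrative helper, and the example point is hypothetical):

# Distance between two points in the unit hypercube under torus geometry:
# each coordinate difference is the smaller of |a - b| and 1 - |a - b|.
torus_dist <- function(a, b) {
  dd <- abs(a - b)
  sqrt(sum(pmin(dd, 1 - dd)^2))
}
# A hypothetical point near the left edge wraps to the event at (0.97, 0.86):
torus_dist(c(0.03, 0.88), c(0.97, 0.86))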
To see what effect the torus geometry has on the distribution of Hopkins statistic, consider the following simulation. We generate \(n=100\) events uniformly in a \(D=5\) dimensional unit cube and sample \(m=10\) events to calculate the value of Hopkins statistic using both a simple geometry and a torus geometry, repeating these steps \(B=1000\) times. Calculating nearest neighbors with a torus geometry is computationally more demanding than with a simple geometry, especially as the number of dimensions \(D\) increases, but parallel computing can reduce the computing time roughly in proportion to the number of processors used. As a point of reference, this small simulation study was performed in less than 1 minute on a reasonably-powerful laptop with 8 cores using the doParallel package (Microsoft Corporation and Weston 2020). We found that \(B=1000\) provided results that were stable regardless of the seed value for the random number generation.
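A sketch of the simulation skeleton is below. Here hopkins_torus() is a hypothetical variant of the hopkins_stat() sketch in which every Euclidean distance is replaced by torus_dist(); the actual simulation code for this paper may differ.

# Run B = 1000 replicates in parallel: for each, generate uniform data and
# calculate Hopkins statistic under both simple and torus geometries.
library(doParallel)
registerDoParallel(cores = 8)
res <- foreach(b = 1:1000, .combine = rbind) %dopar% {
  X <- matrix(runif(100 * 5), nrow = 100, ncol = 5)  # n = 100 events, D = 5
  c(simple = hopkins_stat(X, m = 10),
    torus  = hopkins_torus(X, m = 10))               # hypothetical variant
}
stopImplicitCluster()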
Figure 7: Results of a simulation study considering how the spatial geometry affects Hopkins statistic. The thin black line is the theoretical distribution of Hopkins statistic. The blue and green lines are the empirical density curves of 1000 Hopkins statistics calculated with simple geometry (blue) and torus geometry (green). Calculating Hopkins statistic with a torus geometry aligns closely to the theoretical distribution.
In Figure 7, when a torus geometry is used to correct for edge effects, the empirical distribution of Hopkins statistic is remarkably close to its theoretical distribution. In contrast, when a simple geometry is used, the empirical distribution is somewhat flattened, with heavier tails. The practical result is that when no edge correction is used, Hopkins statistic is more likely to deviate from 0.5 and therefore more likely to suggest the data are not uniformly distributed. This erroneous interpretation becomes a greater risk as the number of dimensions \(D\) increases and edge effects become more pronounced.