1 Introduction

The BiocNeighbors package implements a few algorithms for exact nearest neighbor searching:

  • The k-means for k-nearest neighbors (KMKNN) algorithm (Wang 2012) uses k-means clustering to create an index. Within each cluster, the distance from each of that cluster’s points to the cluster center is computed and used to sort all points. Given a query point, the distance to each cluster center is determined and the triangle inequality is applied to determine which points in each cluster warrant a full distance calculation.
  • The vantage point (VP) tree algorithm (Yianilos 1993) involves constructing a tree where each node is located at a data point and is associated with a subset of neighboring points. Each node progressively partitions points into two subsets that are either closer to or farther from the node than a given threshold. Given a query point, the triangle inequality is applied at each node in the tree to determine if the child nodes warrant searching.

Both methods involve a component of randomness during index construction, though the k-nearest neighbors result is fully deterministic (except in the presence of ties; see ?findKNN for details).
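
To make the shared pruning idea concrete, the following is a small self-contained sketch of how the triangle inequality rules out candidate points in a KMKNN-style search. It is only an illustration of the principle; the toy data, number of clusters and loop structure are our own choices, and the package performs this work in compiled code with full k-nearest-neighbor bookkeeping.

set.seed(100)
pts <- matrix(runif(2000), ncol=2)  # 1000 toy points in 2 dimensions
km <- kmeans(pts, centers=10)       # stand-in for the index's k-means step

q <- runif(2)  # a single query point
best <- Inf    # distance to the closest point found so far

for (i in seq_len(nrow(km$centers))) {
    d.centre <- sqrt(sum((q - km$centers[i,])^2))
    members <- which(km$cluster == i)
    # Precomputed and sorted in the real index: each member's distance to its center.
    d.members <- sqrt(colSums((t(pts[members,,drop=FALSE]) - km$centers[i,])^2))
    for (j in order(d.members)) {
        lower <- abs(d.centre - d.members[j])  # triangle inequality bound on d(q, member)
        if (lower > best) {
            if (d.members[j] > d.centre) break # later members are bounded even further away
            next                               # this member cannot improve on 'best'
        }
        best <- min(best, sqrt(sum((q - pts[members[j],])^2)))
    }
}
best  # matches the brute-force answer below
min(sqrt(colSums((t(pts) - q)^2)))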

2 Identifying k-nearest neighbors

The most obvious application is to perform a k-nearest neighbors search. We’ll mock up an example here with a hypercube of points, for which we want to identify the 10 nearest neighbors for each point.

nobs <- 10000
ndim <- 20
data <- matrix(runif(nobs*ndim), ncol=ndim)

The findKNN() method expects a numeric matrix as input with data points as the rows and variables/dimensions as the columns. We indicate that we want to use the KMKNN algorithm by setting BNPARAM=KmknnParam() (which is also the default, so this is not strictly necessary here). We could use a VP tree instead by setting BNPARAM=VptreeParam().

fout <- findKNN(data, k=10, BNPARAM=KmknnParam())
head(fout$index)
##      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,] 1619 7201 7573  321 5921  863  312 3310  806  6225
## [2,]   75 7736 8735 3998 9001 7778 9996 5822 8719  4553
## [3,] 2075 8658 3029 5653 7567 6783 1383 4555  116   133
## [4,] 9536 8526 1625 7032 3053 9921 6167 7549 2690  7690
## [5,] 3092 3215 4245  484 6144 5606 6472 9528 7258    11
## [6,] 5278 5015 1460 8038  937 3264  558 4971 4883  7317
head(fout$distance)
##           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]     [,7]
## [1,] 0.9016699 0.9318565 0.9772659 0.9949794 1.038618 1.041176 1.056427
## [2,] 0.9917201 1.0245719 1.0259660 1.0266167 1.032166 1.050191 1.051409
## [3,] 0.8278054 0.9012230 1.0060742 1.0214293 1.037359 1.044597 1.057421
## [4,] 0.9249825 0.9271723 0.9663386 1.0077784 1.012115 1.019327 1.024750
## [5,] 0.9582251 0.9789901 1.0099590 1.0323164 1.036857 1.038603 1.041321
## [6,] 0.8968374 0.9329467 0.9414597 0.9798813 1.006685 1.022083 1.034494
##          [,8]     [,9]    [,10]
## [1,] 1.067170 1.075106 1.085567
## [2,] 1.062184 1.071266 1.080524
## [3,] 1.066949 1.077623 1.078929
## [4,] 1.035183 1.036362 1.041160
## [5,] 1.047111 1.056788 1.058003
## [6,] 1.042925 1.048861 1.051233

Each row of the index matrix corresponds to a point in data and contains the row indices in data that are its nearest neighbors. For example, the 3rd point in data has the following nearest neighbors:

fout$index[3,]
##  [1] 2075 8658 3029 5653 7567 6783 1383 4555  116  133

… with the following distances to those neighbors:

fout$distance[3,]
##  [1] 0.8278054 0.9012230 1.0060742 1.0214293 1.0373587 1.0445967 1.0574210
##  [8] 1.0669490 1.0776234 1.0789287

Note that the reported neighbors are sorted by distance.

3 Querying k-nearest neighbors

Another application is to identify the k-nearest neighbors in one dataset based on query points in another dataset. Again, we mock up a small dataset:

nquery <- 1000
ndim <- 20
query <- matrix(runif(nquery*ndim), ncol=ndim)

We then use the queryKNN() function to identify the 5 nearest neighbors in data for each point in query.

qout <- queryKNN(data, query, k=5, BNPARAM=KmknnParam())
head(qout$index)
##      [,1] [,2] [,3] [,4] [,5]
## [1,] 7754 1432 5154   28 5659
## [2,] 4348 5057 6525 7569 9199
## [3,] 6429 7208 6693 9606 9287
## [4,] 4724 5517 6041 7145 1191
## [5,] 8891 5995 7069 4476 9397
## [6,] 5515    1 6353  312 8961
head(qout$distance)
##           [,1]      [,2]      [,3]      [,4]      [,5]
## [1,] 0.9348642 0.9971274 1.0126300 1.0286586 1.0439788
## [2,] 0.9829474 1.0067578 1.0132464 1.0210246 1.0326293
## [3,] 0.8734023 0.8888589 0.8934396 0.9151888 0.9376425
## [4,] 0.8847359 0.8986644 0.9009389 0.9144456 0.9303230
## [5,] 0.9086042 0.9771849 0.9873951 1.0039442 1.0154987
## [6,] 0.6845494 1.0223525 1.0900201 1.0973710 1.0997798

Each row of the index matrix contains the row indices in data that are the nearest neighbors of a point in query. For example, the 3rd point in query has the following nearest neighbors in data:

qout$index[3,]
## [1] 6429 7208 6693 9606 9287

… with the following distances to those neighbors:

qout$distance[3,]
## [1] 0.8734023 0.8888589 0.8934396 0.9151888 0.9376425

Again, the reported neighbors are sorted by distance.

4 Further options

Users can perform the search for a subset of query points using the subset= argument. This yields the same result as (but is more efficient than) performing the search for all points and subsetting the output.

findKNN(data, k=5, subset=3:5)
## $index
##      [,1] [,2] [,3] [,4] [,5]
## [1,] 2075 8658 3029 5653 7567
## [2,] 9536 8526 1625 7032 3053
## [3,] 3092 3215 4245  484 6144
## 
## $distance
##           [,1]      [,2]      [,3]     [,4]     [,5]
## [1,] 0.8278054 0.9012230 1.0060742 1.021429 1.037359
## [2,] 0.9249825 0.9271723 0.9663386 1.007778 1.012115
## [3,] 0.9582251 0.9789901 1.0099590 1.032316 1.036857

If only the indices are of interest, users can set get.distance=FALSE to avoid returning the matrix of distances. This will save some time and memory.

names(findKNN(data, k=2, get.distance=FALSE))
## [1] "index"

It is also simple to speed up functions by parallelizing the calculations with the BiocParallel framework.

library(BiocParallel)
out <- findKNN(data, k=10, BPPARAM=MulticoreParam(3))

For multiple queries to a constant dataset, the pre-clustering can be performed in a separate step with buildIndex(). The result can then be passed to multiple calls, avoiding the overhead of repeated clustering (the algorithm type is automatically determined when BNINDEX is specified, so there is no need to also specify BNPARAM in the later functions).

pre <- buildIndex(data, BNPARAM=KmknnParam())
out1 <- findKNN(BNINDEX=pre, k=5)
out2 <- queryKNN(BNINDEX=pre, query=query, k=2)
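
As a quick optional check, results obtained through the prebuilt index should agree with those from a direct call on the same data, consistent with the determinism noted in the Introduction (ties in the distances being the only caveat):

direct <- findKNN(data, k=5)
identical(out1$index, direct$index)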

The default setting is to search on the Euclidean distance. Alternatively, we can use the Manhattan distance by setting distance="Manhattan" in the BiocNeighborParam object.

out.m <- findKNN(data, k=5, BNPARAM=KmknnParam(distance="Manhattan"))

Advanced users may also be interested in the raw.index= argument, which returns indices that refer directly to the precomputed object rather than to data. This can be useful inside package functions where it is more convenient to work on a common precomputed object.
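
A minimal sketch of how this might look, reusing the pre index built above; the use of bnorder() to map raw indices back to rows of data reflects our understanding of the index accessors, so check the relevant class documentation before relying on it:

raw <- findKNN(BNINDEX=pre, k=5, raw.index=TRUE)
# Raw indices refer to the reordered data stored inside 'pre';
# bnorder(pre) should map them back to row indices of the original 'data'.
head(bnorder(pre)[raw$index[,1]])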

5 Session information

sessionInfo()
## R Under development (unstable) (2019-10-24 r77329)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 18.04.3 LTS
## 
## Matrix products: default
## BLAS:   /home/biocbuild/bbs-3.11-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.11-bioc/R/lib/libRlapack.so
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] BiocParallel_1.21.0 BiocNeighbors_1.5.1 knitr_1.25         
## [4] BiocStyle_2.15.0   
## 
## loaded via a namespace (and not attached):
##  [1] Rcpp_1.0.2          bookdown_0.14       lattice_0.20-38    
##  [4] digest_0.6.22       grid_4.0.0          stats4_4.0.0       
##  [7] magrittr_1.5        evaluate_0.14       rlang_0.4.1        
## [10] stringi_1.4.3       S4Vectors_0.25.0    Matrix_1.2-17      
## [13] rmarkdown_1.16      tools_4.0.0         stringr_1.4.0      
## [16] parallel_4.0.0      xfun_0.10           yaml_2.2.0         
## [19] compiler_4.0.0      BiocGenerics_0.33.0 BiocManager_1.30.9 
## [22] htmltools_0.4.0

References

Wang, X. 2012. “A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.” Proc Int Jt Conf Neural Netw 43 (6):2351–8.

Yianilos, P. N. 1993. “Data Structures and Algorithms for Nearest Neighbor Search in General Metric Spaces.” In SODA, 93:311–21.