1 Introduction

The BiocNeighbors package implements a few algorithms for exact nearest neighbor searching:

  • The k-means for k-nearest neighbors (KMKNN) algorithm (Wang 2012) uses k-means clustering to create an index. Within each cluster, the distance from each of that cluster’s points to the cluster center is computed and used to sort all points. Given a query point, the distance to each cluster center is determined and the triangle inequality is applied to determine which points in each cluster warrant a full distance calculation.
  • The vantage point (VP) tree algorithm (Yianilos 1993) involves constructing a tree where each node is located at a data point and is associated with a subset of neighboring points. Each node progressively partitions points into two subsets that are either closer to or farther from the node than a given threshold. Given a query point, the triangle inequality is applied at each node in the tree to determine whether the child nodes warrant searching.
  • The exhaustive search is a simple brute-force algorithm that computes distances between all data and query points. This has the worst computational complexity but can actually be faster than the other exact algorithms in situations where indexing provides little benefit, e.g., datasets with few points and/or a very large number of dimensions.

Both KMKNN and VP-trees involve a component of randomness during index construction, though the k-nearest neighbors result is fully deterministic (except in the presence of ties; see ?"BiocNeighbors-ties" for details).
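To make the pruning step concrete, here is a minimal sketch of the triangle-inequality bound that underlies both KMKNN and VP-tree pruning. This is illustrative only and is not the package’s internal code:

# For a query point q, a reference point c (a cluster center or tree node)
# and a data point x, the triangle inequality gives
#     d(q, x) >= |d(q, c) - d(c, x)|.
# If this lower bound already exceeds the current k-th best distance,
# d(q, x) never needs to be computed.
lower.bound <- function(d.query.center, d.center.point) {
    abs(d.query.center - d.center.point)
}

# Example: with d(q, c) = 5 and d(c, x) = 1, x is at least distance 4
# from q, so x can be skipped whenever the k-th best distance is below 4.
lower.bound(5, 1)
## [1] 4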

2 Identifying k-nearest neighbors

The most obvious application is to perform a k-nearest neighbors search. We’ll mock up an example here with points scattered uniformly in a hypercube, for which we want to identify the 10 nearest neighbors of each point.

nobs <- 10000
ndim <- 20
data <- matrix(runif(nobs*ndim), ncol=ndim)
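Note that runif() draws pseudo-random values, so the exact neighbor identities and distances shown below correspond to one particular realization. To reproduce a given run, one would fix the seed before generating the data (the value here is arbitrary):

set.seed(100) # any fixed seed, called before runif(), makes the example reproducible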

The findKNN() method expects a numeric matrix as input with data points as the rows and variables/dimensions as the columns. We indicate that we want to use the KMKNN algorithm by setting BNPARAM=KmknnParam() (which is also the default, so this is not strictly necessary here). We could use a VP tree instead by setting BNPARAM=VptreeParam().

fout <- findKNN(data, k=10, BNPARAM=KmknnParam())
head(fout$index)
##      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,] 6761 6490 7858 3518 9991 2234 9287 6025 6453  9751
## [2,] 6177 8914 5255 7662 6251 3964 7991 3071 4293  5722
## [3,] 4931 8433 7551  887 1049    9 4081 1614 2537  8840
## [4,]   77 3286 6627  904 1952 1296 8185 6153 4158  8354
## [5,]  706 1070 6761 4642 5527 2541  918 4700 3012  1009
## [6,] 3510 3663 9097 7678 2182 9631 2625 9789 6970   780
head(fout$distance)
##           [,1]      [,2]      [,3]      [,4]      [,5]      [,6]      [,7]
## [1,] 0.9311921 0.9533072 0.9609823 0.9679515 0.9707943 0.9928832 1.0175203
## [2,] 0.9749345 1.0742141 1.1105918 1.1189138 1.1246109 1.1255398 1.1552058
## [3,] 0.8354054 0.8572023 0.8938349 0.9341798 0.9382067 0.9430409 0.9487694
## [4,] 0.8482615 0.8823565 0.9305727 0.9823952 0.9882312 0.9947771 1.0032718
## [5,] 0.9930499 0.9959694 0.9979194 1.0212452 1.0548379 1.0710612 1.0717744
## [6,] 0.9782939 0.9837832 1.0039065 1.0121983 1.0275348 1.0300907 1.0412228
##           [,8]     [,9]     [,10]
## [1,] 1.0179058 1.022455 1.0270137
## [2,] 1.1684454 1.203869 1.2088244
## [3,] 0.9582898 0.959495 0.9670596
## [4,] 1.0037243 1.013675 1.0148637
## [5,] 1.0886201 1.096456 1.0989123
## [6,] 1.0517900 1.062558 1.0659374

Each row of the index matrix corresponds to a point in data and contains the row indices of its nearest neighbors in data. For example, the 3rd point in data has the following nearest neighbors:

fout$index[3,]
##  [1] 4931 8433 7551  887 1049    9 4081 1614 2537 8840

… with the following distances to those neighbors:

fout$distance[3,]
##  [1] 0.8354054 0.8572023 0.8938349 0.9341798 0.9382067 0.9430409 0.9487694
##  [8] 0.9582898 0.9594950 0.9670596

Note that the reported neighbors are sorted by distance.
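As a quick sanity check, both properties can be verified directly: the distances within each row are non-decreasing, and each value is the (default) Euclidean distance to the corresponding neighbor.

# Distances in each row are sorted in non-decreasing order...
all(diff(fout$distance[3,]) >= 0)
# ...and match a manual Euclidean calculation for the first neighbor.
sqrt(sum((data[3,] - data[fout$index[3,1],])^2)) # compare to fout$distance[3,1]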

3 Querying k-nearest neighbors

Another application is to identify the k-nearest neighbors in one dataset based on query points in another dataset. Again, we mock up a small dataset:

nquery <- 1000
ndim <- 20
query <- matrix(runif(nquery*ndim), ncol=ndim)

We then use the queryKNN() function to identify the 5 nearest neighbors in data for each point in query.

qout <- queryKNN(data, query, k=5, BNPARAM=KmknnParam())
head(qout$index)
##      [,1] [,2] [,3] [,4] [,5]
## [1,] 9020    5 1070 7537 7912
## [2,] 6532 6540 9380 6428 2441
## [3,] 2219 7960 4363 7889 3310
## [4,] 4234 2639 4829 7530 9032
## [5,] 5634 2927 7073 2402  405
## [6,] 1034 7687  391 2249 8661
head(qout$distance)
##           [,1]      [,2]      [,3]      [,4]      [,5]
## [1,] 0.8902823 0.9543271 0.9679565 0.9762386 0.9912389
## [2,] 0.9175757 0.9600506 0.9620724 0.9741165 0.9866602
## [3,] 0.9224063 0.9388611 1.0451134 1.0566838 1.0736025
## [4,] 0.7767071 0.9495492 0.9980887 1.0071374 1.0428377
## [5,] 0.8486523 0.8806437 0.9405006 0.9476149 0.9530206
## [6,] 0.8454308 0.8562966 0.9424755 0.9503940 0.9528231

Each row of the index matrix contains the row indices of the points in data that are nearest to the corresponding point in query. For example, the 3rd point in query has the following nearest neighbors in data:

qout$index[3,]
## [1] 2219 7960 4363 7889 3310

… with the following distances to those neighbors:

qout$distance[3,]
## [1] 0.9224063 0.9388611 1.0451134 1.0566838 1.0736025

Again, the reported neighbors are sorted by distance.
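As all of these algorithms are exact, the same neighbors should be recovered by the brute-force search described in Section 1. A quick cross-check against ExhaustiveParam() illustrates this (up to tie-breaking, as discussed earlier):

ref <- queryKNN(data, query, k=5, BNPARAM=ExhaustiveParam())
identical(ref$index, qout$index) # expect TRUE, barring ties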

4 Further options

Users can perform the search for a subset of query points using the subset= argument. This yields the same result as performing the search for all points and then subsetting the output, but is more efficient.

findKNN(data, k=5, subset=3:5)
## $index
##      [,1] [,2] [,3] [,4] [,5]
## [1,] 4931 8433 7551  887 1049
## [2,]   77 3286 6627  904 1952
## [3,]  706 1070 6761 4642 5527
## 
## $distance
##           [,1]      [,2]      [,3]      [,4]      [,5]
## [1,] 0.8354054 0.8572023 0.8938349 0.9341798 0.9382067
## [2,] 0.8482615 0.8823565 0.9305727 0.9823952 0.9882312
## [3,] 0.9930499 0.9959694 0.9979194 1.0212452 1.0548379
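The claimed equivalence is easy to check: the rows for the subset should match the corresponding rows of a full search (again, up to tie-breaking).

sub <- findKNN(data, k=5, subset=3:5)
full <- findKNN(data, k=5)
identical(sub$index, full$index[3:5,]) # expect TRUE, barring ties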

If only the indices are of interest, users can set get.distance=FALSE to avoid returning the matrix of distances. This will save some time and memory.

names(findKNN(data, k=2, get.distance=FALSE))
## [1] "index"

It is also simple to speed up functions by parallelizing the calculations with the BiocParallel framework.

library(BiocParallel)
out <- findKNN(data, k=10, BPPARAM=MulticoreParam(3))

For multiple queries against a constant dataset, the pre-clustering can be performed in a separate step with buildIndex(). The result can then be passed to multiple calls, avoiding the overhead of repeated clustering (the algorithm type is automatically determined when BNINDEX is specified, so there is no need to also specify BNPARAM in the later calls).

pre <- buildIndex(data, BNPARAM=KmknnParam())
out1 <- findKNN(BNINDEX=pre, k=5)
out2 <- queryKNN(BNINDEX=pre, query=query, k=2)
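Since the k-nearest neighbors result is deterministic regardless of the random clustering, results obtained from the prebuilt index should match those computed from scratch; for example:

identical(out1$index, findKNN(data, k=5)$index) # expect TRUE, barring ties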

The default setting is to search on the Euclidean distance. Alternatively, we can use the Manhattan distance by setting distance="Manhattan" in the BiocNeighborParam object.

out.m <- findKNN(data, k=5, BNPARAM=KmknnParam(distance="Manhattan"))
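The reported values are then Manhattan (L1) distances, which can again be checked by hand; this is just a sanity check, not part of the original output:

# The first reported distance for point 3 should equal the L1 distance
# to its nearest neighbor under the Manhattan metric.
sum(abs(data[3,] - data[out.m$index[3,1],])) # compare to out.m$distance[3,1]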

Advanced users may also be interested in the raw.index= argument, which returns indices into the precomputed object rather than into data. This can be useful inside package functions where it is more convenient to work with a common precomputed object.
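As a hedged sketch of how this might be used, assuming the bnorder() accessor (which reports the ordering of points stored inside the prebuilt index) can translate raw positions back into rows of data:

# raw.index=TRUE reports positions within 'pre' itself; bnorder(pre) is
# assumed here to map them back to row indices of 'data'.
raw <- findKNN(BNINDEX=pre, k=5, raw.index=TRUE)
remapped <- matrix(bnorder(pre)[raw$index], nrow=nrow(raw$index))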

5 Session information

sessionInfo()
## R version 4.4.0 Patched (2024-04-24 r86482)
## Platform: x86_64-apple-darwin20
## Running under: macOS Monterey 12.7.4
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.4-x86_64/Resources/lib/libRblas.0.dylib 
## LAPACK: /Library/Frameworks/R.framework/Versions/4.4-x86_64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.0
## 
## locale:
## [1] C/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## time zone: America/New_York
## tzcode source: internal
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] BiocParallel_1.39.0  BiocNeighbors_1.23.0 knitr_1.46          
## [4] BiocStyle_2.33.0    
## 
## loaded via a namespace (and not attached):
##  [1] cli_3.6.2           rlang_1.1.3         xfun_0.43          
##  [4] jsonlite_1.8.8      S4Vectors_0.43.0    htmltools_0.5.8.1  
##  [7] stats4_4.4.0        sass_0.4.9          rmarkdown_2.26     
## [10] grid_4.4.0          evaluate_0.23       jquerylib_0.1.4    
## [13] fastmap_1.1.1       yaml_2.3.8          lifecycle_1.0.4    
## [16] bookdown_0.39       BiocManager_1.30.22 compiler_4.4.0     
## [19] codetools_0.2-20    Rcpp_1.0.12         lattice_0.22-6     
## [22] digest_0.6.35       R6_2.5.1            parallel_4.4.0     
## [25] bslib_0.7.0         Matrix_1.7-0        tools_4.4.0        
## [28] BiocGenerics_0.51.0 cachem_1.0.8

References

Wang, X. 2012. “A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.” Proc Int Jt Conf Neural Netw 43 (6): 2351–8.

Yianilos, P. N. 1993. “Data Structures and Algorithms for Nearest Neighbor Search in General Metric Spaces.” In SODA, 93:311–21.