This project contains tools to benchmark various implementations of approximate nearest neighbor (ANN) search for different metrics.
The results of this benchmark are published at [ann-benchmarks.com](http://ann-benchmarks.com). The following libraries are evaluated:
- Annoy
- FLANN
- scikit-learn: LSHForest, KDTree, BallTree
- PANNS
- NearPy
- KGraph
- NMSLIB (Non-Metric Space Library): SWGraph, HNSW, BallTree, MPLSH
- RPForest
- FALCONN
- FAISS
- DolphinnPy
Fast nearest-neighbor search in high-dimensional spaces is an increasingly important problem, but so far little effort has gone into comparing methods objectively.
We have a number of precomputed datasets for this purpose. All datasets are pre-split into train/test and come with ground truth data in the form of the top 100 neighbors. We store them in HDF5 format (a loading sketch follows below the table):
Dataset | Dimensions | Train size | Test size | Neighbors | Distance | Download |
---|---|---|---|---|---|---|
Fashion-MNIST | 784 | 60,000 | 10,000 | 100 | Euclidean | HDF5 (217MB) |
GIST | 960 | 1,000,000 | 1,000 | 100 | Euclidean | N/A (coming shortly) |
GloVe | 25 | 1,133,628 | 59,886 | 100 | Angular | HDF5 (121MB) |
GloVe | 50 | 1,133,628 | 59,886 | 100 | Angular | HDF5 (235MB) |
GloVe | 100 | 1,133,628 | 59,886 | 100 | Angular | HDF5 (463MB) |
GloVe | 200 | 1,133,628 | 59,886 | 100 | Angular | HDF5 (918MB) |
MNIST | 784 | 60,000 | 10,000 | 100 | Euclidean | HDF5 (217MB) |
SIFT | 128 | 1,000,000 | 10,000 | 100 | Euclidean | HDF5 (501MB) |
Note that a few other datasets were used previously, in particular for Hamming and set similarity. We are going to add them back shortly in the more convenient HDF5 format.
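Each file can be loaded directly with `h5py`. Here is a minimal loading sketch; the `train`/`test`/`neighbors` key names below are our assumption, so verify them with `f.keys()`:

```python
# Sketch: load one of the precomputed datasets with h5py.
# The key names ('train', 'test', 'neighbors') are assumed here;
# inspect f.keys() to confirm what a given file contains.
import h5py

with h5py.File("glove-100-angular.hdf5", "r") as f:
    train = f["train"][:]          # vectors to build the index from
    test = f["test"][:]            # query vectors
    neighbors = f["neighbors"][:]  # ground-truth top-100 neighbor indices
    print(train.shape, test.shape, neighbors.shape)
```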
Clone the repo and run `bash install.sh`. This will install all libraries and could take a while. It has been tested on Ubuntu 16.04. We advise running it only in a VM.
Running a set of algorithms with specific parameters works as follows:

- Check that `algos.yaml` contains the parameter settings that you want to test.
- To run experiments on GloVe, invoke `python run.py --dataset glove-100-angular`. See `python run.py --help` for more information on possible settings. Note that experiments can take a long time.
- To process the results, either use `python plot.py --dataset glove-100-angular` or `python createwebsite.py`. An example call: `python createwebsite.py --plottype recall/time --latex --scatter --outputdir website/` (an illustrative sketch of the recall metric being plotted follows this list).
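The plots trade off recall against query time. As an illustration of the recall metric (our own sketch, not the project's actual metrics code), recall@k counts the fraction of the true top-k neighbors that an algorithm returns:

```python
# Illustrative sketch of recall@k: the fraction of the true top-k
# neighbors that the approximate algorithm returned. This is our own
# example, not the project's metrics code.
import numpy as np

def recall_at_k(approx_ids, true_ids, k=10):
    hits = 0
    for approx, true in zip(approx_ids, true_ids):
        hits += len(set(approx[:k]) & set(true[:k]))
    return hits / (k * len(true_ids))

# Sanity check: perfect results give recall 1.0.
true = np.arange(20).reshape(2, 10)
print(recall_at_k(true, true, k=10))  # -> 1.0
```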
There are two ways to include your own algorithm. If your algorithm has a Python wrapper (or is entirely written in Python), then all you need to do is add it to `ann_benchmarks/algorithms` by providing a small wrapper (a sketch follows).
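A minimal wrapper sketch, assuming a `BaseANN`-style base class with `fit` and `query` methods in `ann_benchmarks.algorithms.base` (check the existing wrappers for the exact interface); the class name and brute-force backend are our own illustration:

```python
# Sketch of a wrapper; BaseANN's location and interface are assumed
# here, so check the existing wrappers in ann_benchmarks/algorithms.
# scikit-learn brute force serves as a stand-in backend.
import sklearn.neighbors

from ann_benchmarks.algorithms.base import BaseANN


class BruteForceExample(BaseANN):
    def __init__(self, metric):
        # metric should be a scikit-learn metric name, e.g. 'euclidean'.
        self._metric = metric
        self.name = "BruteForceExample(metric=%s)" % metric

    def fit(self, X):
        # Build the "index" from the train vectors.
        self._nn = sklearn.neighbors.NearestNeighbors(
            algorithm="brute", metric=self._metric)
        self._nn.fit(X)

    def query(self, v, n):
        # Return the indices of the n nearest train vectors to v.
        return self._nn.kneighbors(
            [v], n_neighbors=n, return_distance=False)[0]
```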
- Everyone is welcome to submit pull requests with tweaks and changes to how each library is being used.
- In particular: if you are the author of any of these libraries, and you think the benchmark can be improved, consider making the improvement and submitting a pull request.
- This is meant to be an ongoing project that represents the current state.
- Make everything easy to replicate, including installing and preparing the datasets.
- Try many different values of parameters for each library and ignore the points that are not on the precision-performance frontier.
- High-dimensional datasets with approximately 100-1000 dimensions. This is challenging but also realistic. Not more than 1000 dimensions because those problems should probably be solved by doing dimensionality reduction separately.
- No batching of queries; use single queries by default. ANN-Benchmarks saturates CPU cores by using a thread pool.
- Avoid extremely costly index building (more than several hours).
- Focus on datasets that fit in RAM. Out of core ANN could be the topic of a later comparison.
- We currently support CPU-based ANN algorithms. GPU support is planned as future work.
- Maintain a proper train/test split of index data and query points.
The project is fully tested using Travis, with unit tests run for all different libraries and algorithms.