
CANDOR-Bench: Benchmarking In-Memory Continuous ANNS under Dynamic Open-World Streams

CANDOR-Bench (Continuous Approximate Nearest neighbor search under Dynamic Open-woRld Streams) is a benchmarking framework designed to evaluate in-memory ANNS algorithms under realistic, dynamic data stream conditions.

Project Structure

CANDY-Benchmark/
├── benchmark/             
├── big-ann-benchmarks/             # Core benchmarking framework (Dynamic Open-World conditions)
│   ├── benchmark/
│   │   ├── algorithms/
│   │   ├── concurrent/             # Concurrent Track
│   │   ├── congestion/             # Congestion Track
│   │   ├── main.py
│   │   ├── runner.py
│   │   └── ……
│   ├── create_dataset.py
│   ├── requirements_py3.10.txt
│   ├── logging.conf
│   ├── neurips21/
│   ├── neurips23/                  # NeurIPS'23 benchmark configurations and scripts
│   │   ├── concurrent/             # Concurrent Track
│   │   ├── congestion/             # Congestion Track
│   │   ├── filter/
│   │   ├── ood/
│   │   ├── runbooks/               # Dynamic benchmark scenario definitions (e.g., T1, T3, etc.)
│   │   ├── sparse/
│   │   ├── streaming/              
│   │   └── ……
│   └──……
├── GTI/                            # Integrated GTI algorithm source
├── IP-DiskANN/                     # Integrated IP-DiskANN algorithm source
├── src/                            # Main algorithm implementations
├── include/                        # C++ header files
├── thirdparty/                     # External dependencies
├── Dockerfile                      # Docker build recipe
├── requirements.txt
├── setup.py                        # Python package setup
└── ……

Datasets and Algorithms

Our evaluation involves the following datasets and algorithms.

Summary of Datasets

| Category | Name | Description | Dimension | Data Size | Query Size | Code Identifier |
|---|---|---|---|---|---|---|
| Real-world | SIFT | Image | 128 | 1M | 10K | sift |
| | OpenImagesStreaming | Image | 512 | 1M | 10K | \ |
| | Sun | Image | 512 | 79K | 200 | sun |
| | SIFT100M | Image | 128 | 100M | 10K | sift100M |
| | Trevi | Image | 4096 | 100K | 200 | sift |
| | Msong | Audio | 420 | 990K | 200 | msong |
| | COCO | Multi-Modal | 768 | 100K | 500 | coco |
| | Glove | Text | 100 | 1.192M | 200 | glove |
| | MSTuring | Text | 100 | 30M | 10K | msturing |
| Synthetic | Gaussian | i.i.d. values | Adjustable | 500K | 1000 | \ |
| | Blob | Gaussian Blobs | 768 | 500K | 1000 | \ |
| | WTE | Text | 768 | 100K | 100 | \ |
| | FreewayML | Constructed | 128 | 100K | 1K | \ |

Summary of Algorithms

| Category | Algorithm Name | Description | Code Identifier |
|---|---|---|---|
| Tree-based | SPTAG | Space-partitioning tree structure for efficient data segmentation. | candy_sptag |
| LSH-based | LSH | Data-independent hashing to reduce dimensionality and approximate nearest neighbors. | faiss_lsh |
| | LSHAPG | LSH-driven optimization using LSB-Tree to differentiate graph regions. | candy_lshapg |
| Clustering-based | PQ | Product quantization for efficient clustering into compact subspaces. | faiss_pq |
| | IVFPQ | Inverted index with product quantization for hierarchical clustering. | faiss_IVFPQ |
| | OnlinePQ | Incremental updates of centroids in product quantization for streaming data. | faiss_onlinepq |
| | Puck | Non-orthogonal inverted indexes with multiple quantization, optimized for large-scale datasets. | puck |
| | SCANN | Small-bit quantization to improve register utilization. | faiss_fast_scan |
| Graph-based | NSW | Navigable Small World graph for fast nearest neighbor search. | faiss_NSW |
| | HNSW | Hierarchical Navigable Small World for scalable search. | faiss_HNSW |
| | MNRU | Enhances HNSW with efficient updates to prevent unreachable points in dynamic environments. | candy_mnru |
| | Cufe | Enhances FreshDiskANN with batched neighbor expansion. | cufe |
| | Pyanns | Enhances FreshDiskANN with fixed-size huge pages for optimized memory access. | pyanns |
| | IPDiskANN | Enables efficient in-place deletions for FreshDiskANN, improving update performance without reconstruction. | ipdiskann |
| | GTI | Hybrid tree-graph indexing for efficient, dynamic high-dimensional search, with optimized updates and construction. | gti |

Quick Start Guide


🚨🚨 Strong Recommendation: Use Docker! 🚨🚨

We strongly recommend using Docker to build and run this project.

There are many algorithm libraries with complex dependencies. Setting up the environment locally can be difficult and error-prone. Docker provides a consistent and reproducible environment, saving you time and avoiding compatibility issues.

Note: Building the Docker image may take 15–30 minutes depending on your network and hardware; please be patient.


Build With Docker

To build the project using Docker, simply use the provided Dockerfile located in the root directory. This ensures a consistent and reproducible environment for all dependencies and build steps.

  1. Initialize and update all submodules in the project:
git submodule update --init --recursive
  2. Build the Docker image:
docker build -t <your-image-name> .
  3. Once the image is built, run a container from it:
docker run -it <your-image-name>
  4. Inside the container, navigate to the project directory:
cd /app/big-ann-benchmarks

Example

Prepare dataset and compute groundtruth

cd big-ann-benchmarks
bash scripts/compute_general.sh

Run general experiments

bash scripts/run_general.sh

Wait for the experiments to complete, then generate the results; the output will be written as gen-congestion.csv

python3 data_exporter.py --output gen --track congestion

More Usage

All the following operations are performed in the root directory of big-ann-benchmarks.

2.1 Preparing dataset

Create a small sample dataset. For example, to create a dataset of 10000 random 20-dimensional floating-point vectors, run:

python3 create_dataset.py --dataset random-xs

To see a complete list of datasets, run the following:

python3 create_dataset.py --help

2.2 Running Algorithms on the congestion Track

To evaluate an algorithm under the congestion track, use the following command:

python3 run.py \
  --neurips23track congestion \
  --algorithm "$ALGO" \
  --nodocker \
  --rebuild \
  --runbook_path "$PATH" \
  --dataset "$DS"
  • --algorithm "$ALGO": Name of the algorithm to evaluate. Valid names are listed in the "Code Identifier" column (the last column) of the Summary of Algorithms table.
  • --dataset "$DS": Name of the dataset to use.
  • --runbook_path "$PATH": Path to the runbook file describing the test scenario. For example, the runbook for the general experiment is neurips23/runbooks/congestion/general_experiment/general_experiment.yaml.
  • --rebuild: Rebuild the target before running.
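To sweep several algorithms over the same runbook, the invocation above can be assembled programmatically. A minimal sketch, assuming it is run from the big-ann-benchmarks root; the algorithm and dataset values are examples taken from the tables above, and the commands are only printed here:

```python
def congestion_cmd(algo, dataset, runbook, rebuild=True, nodocker=True):
    """Build the argv for a congestion-track run of run.py."""
    cmd = ["python3", "run.py",
           "--neurips23track", "congestion",
           "--algorithm", algo,
           "--dataset", dataset,
           "--runbook_path", runbook]
    if nodocker:
        cmd.append("--nodocker")
    if rebuild:
        cmd.append("--rebuild")
    return cmd

if __name__ == "__main__":
    runbook = ("neurips23/runbooks/congestion/"
               "general_experiment/general_experiment.yaml")
    # identifiers from the Summary of Algorithms table
    for algo in ["faiss_HNSW", "candy_lshapg"]:
        cmd = congestion_cmd(algo, "sift", runbook)
        print(" ".join(cmd))
        # to actually launch: subprocess.run(cmd, check=True)
```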

2.3 Computing Ground Truth for Runbooks

To compute ground truth for a runbook at its various checkpoints, use the provided script:

python3 benchmark/congestion/compute_gt.py \
  --runbook "$PATH" \
  --dataset "$DS" \
  --gt_cmdline_tool ./DiskANN/build/apps/utils/compute_groundtruth
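Conceptually, the ground truth at a checkpoint is an exact k-nearest-neighbor search over the points currently in the index; the DiskANN tool above does this at scale. A brute-force sketch of that computation, in pure Python and for intuition only:

```python
import heapq
import math

def exact_knn(base, queries, k):
    """Exact k-NN by Euclidean distance: the ground truth that an
    ANNS algorithm's recall is measured against."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    results = []
    for q in queries:
        # indices of the k closest base points, nearest first
        nn = heapq.nsmallest(k, range(len(base)),
                             key=lambda i: dist(base[i], q))
        results.append(nn)
    return results

if __name__ == "__main__":
    base = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [5.0, 5.0]]
    queries = [[0.9, 0.1]]
    print(exact_knn(base, queries, k=2))  # [[1, 0]]
```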

2.4 Exporting Results

  1. To make the results available for post-processing, change the permissions of the results folder:
chmod -R 777 results/
  2. Summarize all result files into a single CSV file:
python3 data_export.py --out "$OUT" --track congestion

The --out parameter "$OUT" should be adjusted according to the testing scenario. For example, the value corresponding to the general experiment is gen. Common values include:

  • gen
  • batch
  • event
  • conceptDrift
  • randomContamination
  • randomDrop
  • wordContamination
  • bulkDeletion
  • batchDeletion
  • multiModal
  • ……
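To export every scenario in one pass, the command above can be looped over the scenario names. A minimal sketch: the commands are only assembled and printed here, since executing data_export.py requires the big-ann-benchmarks root.

```python
# scenario names accepted by --out, per the list above
SCENARIOS = [
    "gen", "batch", "event", "conceptDrift",
    "randomContamination", "randomDrop", "wordContamination",
    "bulkDeletion", "batchDeletion", "multiModal",
]

def export_cmd(out, track="congestion"):
    """Build the data_export.py argv for one scenario."""
    return ["python3", "data_export.py", "--out", out, "--track", track]

if __name__ == "__main__":
    for out in SCENARIOS:
        print(" ".join(export_cmd(out)))
```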
