An empirical study benchmarking LLM inference with KV cache offloading using vLLM and LMCache on NVIDIA GB200 with high-bandwidth NVLink-C2C.
Updated Dec 20, 2025 · Python
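For context on the setup named in the description, below is a minimal sketch of serving with vLLM's LMCache connector and CPU offloading, the pattern this kind of GB200 benchmark exercises (NVLink-C2C makes CPU-side KV storage attractive because of its high CPU-GPU bandwidth). The model name, cache sizes, and environment variable values are illustrative assumptions, and exact configuration fields vary across vLLM and LMCache versions.

```python
import os

from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

# LMCache is configured via environment variables (values here are
# illustrative assumptions, not tuned settings from the study).
os.environ["LMCACHE_CHUNK_SIZE"] = "256"          # tokens per KV cache chunk
os.environ["LMCACHE_LOCAL_CPU"] = "True"          # enable CPU-side KV storage
os.environ["LMCACHE_MAX_LOCAL_CPU_SIZE"] = "5"    # CPU cache budget in GB

# Route KV cache transfers through LMCache's vLLM connector.
ktc = KVTransferConfig(
    kv_connector="LMCacheConnectorV1",
    kv_role="kv_both",  # this instance both stores and retrieves KV blocks
)

# Hypothetical model choice; any vLLM-supported model works the same way.
llm = LLM(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    kv_transfer_config=ktc,
    gpu_memory_utilization=0.8,
)

# Repeated long prefixes are where offloading pays off: the second request
# can reload its KV cache from CPU memory instead of recomputing prefill.
prompts = ["Explain KV cache offloading in one paragraph."]
outputs = llm.generate(prompts, SamplingParams(temperature=0.0, max_tokens=64))
for out in outputs:
    print(out.outputs[0].text)
```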