Notes about LLaMA 2 model
This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT) variant. The implementation focuses on the model architecture and the inference process. The code is restructured and heavily commented to facilitate easy understanding of the key parts of the architecture.
This is a minimal implementation of a GPT-style model, but it includes advanced inference features such as temperature, top-k, and top-p sampling, as well as a KV cache (both sketched below).
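To illustrate the sampling features, here is a minimal sketch of temperature/top-k/top-p sampling in PyTorch. The function name `sample_next_token` and its default values are illustrative assumptions, not taken from this repository's code.

```python
import torch

def sample_next_token(logits: torch.Tensor,
                      temperature: float = 0.8,
                      top_k: int = 50,
                      top_p: float = 0.9) -> int:
    """Pick the next token id from a 1-D tensor of next-token logits."""
    # Temperature: rescale logits; lower values sharpen the distribution.
    logits = logits / temperature

    # Top-k: keep only the k highest-scoring tokens.
    if top_k > 0:
        kth_value = torch.topk(logits, top_k).values[-1]
        logits[logits < kth_value] = float("-inf")

    probs = torch.softmax(logits, dim=-1)

    # Top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability exceeds p, then renormalize and sample from that set.
    if top_p < 1.0:
        sorted_probs, sorted_idx = torch.sort(probs, descending=True)
        cumulative = torch.cumsum(sorted_probs, dim=-1)
        # Mask tokens whose preceding cumulative mass already exceeds p,
        # so the first token crossing the threshold is still kept.
        mask = cumulative - sorted_probs > top_p
        sorted_probs[mask] = 0.0
        sorted_probs /= sorted_probs.sum()
        next_token = sorted_idx[torch.multinomial(sorted_probs, 1)]
    else:
        next_token = torch.multinomial(probs, 1)

    return int(next_token)
```

With `temperature=1.0`, `top_k=0`, and `top_p=1.0` this reduces to plain sampling from the softmax; shrinking any of the three makes generation more conservative.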
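The KV cache works similarly to the original LLaMA inference code: keys and values computed at earlier decoding steps are stored once and reused, so each new step projects only the new tokens while attending over the full prefix. The `KVCache` class below is a minimal sketch under assumed shapes, not this repository's actual API.

```python
import torch

class KVCache:
    """Preallocated per-layer key/value cache for autoregressive decoding."""

    def __init__(self, max_batch: int, max_seq_len: int,
                 n_heads: int, head_dim: int, device: str = "cpu"):
        shape = (max_batch, max_seq_len, n_heads, head_dim)
        self.k = torch.zeros(shape, device=device)
        self.v = torch.zeros(shape, device=device)

    def update(self, start_pos: int, k: torch.Tensor, v: torch.Tensor):
        # Write keys/values for the new positions only; earlier positions
        # were already filled by previous decoding steps.
        bsz, seq_len = k.shape[0], k.shape[1]
        self.k[:bsz, start_pos:start_pos + seq_len] = k
        self.v[:bsz, start_pos:start_pos + seq_len] = v
        # Return the full prefix (cached + new) for the attention computation.
        return (self.k[:bsz, :start_pos + seq_len],
                self.v[:bsz, :start_pos + seq_len])
```

The trade-off is memory for compute: the cache grows linearly with sequence length, but each decoding step avoids recomputing keys and values for the entire prefix.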