SmartCache is an AI-powered caching system that optimizes resource caching by using dynamic Time-To-Live (TTL) values.
Unlike traditional caching systems that rely on static TTLs, SmartCache utilizes LSTM-based prediction models and time-series data to predict the optimal lifetime for each cache entry.
- Dynamic TTL Assignment: Automatically assigns per-resource TTL based on AI predictions rather than static configuration.
- Predictive Caching: Leverages historical access patterns to forecast resource popularity and adjust cache duration accordingly.
- Event-Driven Architecture: Built on a Kafka-based pipeline to ensure high scalability and component decoupling.
- Plug-and-Play SDK: Provides a generic SDK allowing developers to integrate predictive caching with minimal code changes.
- Dockerized Stack: The entire microservices suite is deployable via a single `docker-compose` command.
The system is organized into three distinct layers: a client-facing SDK, an event-driven processing pipeline, and the storage backends.
- User applications integrate via the SmartCache SDK.
  - `set()` calls interact with the SmartCache system to initialize predictive TTLs, while `get()` calls function as standard Redis lookups to ensure low latency.
- Kafka: Acts as the event streaming backbone.
- Go Log Consumer: Ingests access logs into TimescaleDB/Postgres.
- Go TTL Updater: Periodically requests TTL predictions and updates Redis keys.
- Python LSTM Service: Analyzes time-series access logs to predict optimal TTLs (see the sketch after this list).
- Redis: Stores the actual cached resources.
- TimescaleDB/Postgres: Stores historical logs for model training.
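The model internals are not shown here, so the following is only a minimal PyTorch sketch of the idea behind the LSTM service: feed a key's recent access counts through an LSTM and map the predicted demand to a TTL. The class name, input shape, and demand-to-TTL mapping are illustrative assumptions, not the repo's actual code.

```python
# Illustrative sketch only -- not the actual SmartCache model.
import torch
import torch.nn as nn

class TTLPredictor(nn.Module):
    """Predicts next-interval demand for a key from its access history."""

    def __init__(self, hidden_size: int = 32):
        super().__init__()
        # One input feature per time step: accesses per interval.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, counts: torch.Tensor) -> torch.Tensor:
        # counts: (batch, seq_len, 1) recent access counts for one key
        _, (h_n, _) = self.lstm(counts)
        # Non-negative demand estimate for the next interval.
        return torch.relu(self.head(h_n[-1]))

model = TTLPredictor()
history = torch.rand(1, 24, 1)              # e.g. 24 hourly access counts
demand = model(history).item()
# Hypothetical mapping: higher expected demand -> longer TTL (60s to 10min).
ttl_seconds = int(60 + 540 * min(demand, 1.0))
```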
SmartCache is designed as a set of Dockerized microservices. Prerequisites:

- Docker & Docker Compose
To deploy the complete stack (Redis, Kafka, Zookeeper, TimescaleDB, Go Consumers, and Python Predictor):
```bash
# Clone the repository
git clone https://github.com/sathwikshetty33/SmartCache.git

# Navigate to the directory
cd smartcache

# Start the services
docker-compose up -d
```
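Once the containers are running, a quick smoke test is to ping Redis from the host (a sketch using redis-py, assuming the compose file exposes Redis on its default port 6379, as in the SDK example below):

```python
# Smoke test: confirm the Redis container is reachable.
import redis

r = redis.Redis(host="localhost", port=6379)
print(r.ping())  # True if the stack is up
```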
Integrating SmartCache into your Python application is straightforward.

```python
from smartcache import SmartCache
# Initialize the client
cache = SmartCache(
redis_host="localhost",
redis_port=6379,
smartcache_host="localhost",
smartcache_port=5000
)
# Set a value (Logs access and sets initial dynamic TTL)
cache.set("video:123", "data_payload")
# Get a value (Standard Redis retrieval)
value = cache.get("video:123")
```
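Because the TTL lives in Redis itself, you can watch SmartCache adjust it with any Redis client. A small sketch (redis-py; the key name matches the example above):

```python
# Inspect the dynamic TTL that SmartCache assigned to the key.
import redis

r = redis.Redis(host="localhost", port=6379)
print(r.ttl("video:123"))  # remaining lifetime in seconds
```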
Under the hood, each `cache.set()` call triggers the following workflow:

1. Initial Set: The user calls `cache.set()`. An initial short TTL is set, and an access log is sent to Kafka.
2. Log Processing:
   - The Go Log Consumer stores the event in TimescaleDB.
   - The Go TTL Updater requests a prediction.
3. Prediction: The Python LSTM Service analyzes historical access patterns for that specific key.
4. Update: The TTL Updater dynamically adjusts the expiration time of the key in Redis based on the model's output (a sketch of this step follows the list).
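Steps 3 and 4 are performed by separate services (Python and Go respectively); the sketch below compresses them into one Python function to show the flow. The `/predict` endpoint, its JSON payload, and the response shape are assumptions for illustration, not the service's documented API:

```python
# Illustrative sketch of the prediction + update step.
# The real TTL Updater is a Go service; endpoint and payload are assumed.
import redis
import requests

r = redis.Redis(host="localhost", port=6379)

def refresh_ttl(key: str) -> None:
    # Ask the prediction service for this key's optimal TTL...
    resp = requests.post("http://localhost:5000/predict", json={"key": key})
    ttl_seconds = int(resp.json()["ttl"])
    # ...then reset the key's expiration in Redis accordingly.
    r.expire(key, ttl_seconds)
```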
Typical use cases include:

- API Response Caching: Optimizing storage for varying endpoint popularity.
- CDN Content: Managing static and dynamic assets.
- Database Queries: Caching complex query results.
- Edge Computing: Efficient resource management in serverless environments.
The system performance is evaluated based on:
- Cache Hit Ratio
- Average Response Latency
- Backend Load Reduction
- Prediction Accuracy of the LSTM model
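Cache hit ratio, for instance, is hits / (hits + misses), and can be read straight from Redis's own counters (`keyspace_hits` and `keyspace_misses` in the `INFO stats` section):

```python
# Compute the hit ratio from Redis's built-in counters.
import redis

stats = redis.Redis(host="localhost", port=6379).info("stats")
hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
print(f"hit ratio: {hits / max(1, hits + misses):.2%}")
```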