⭐ Star us on GitHub — it helps!
This is a Python project to auto-scale Redis resources on the Heroku platform.
In many applications, Redis is used for queuing data rather than just caching it. In such applications, Redis instances are more susceptible to incoming load.
Having Redis auto scaling avoids many SOS situations that typically arise during peak traffic, while giving you complete control over the way you want to scale.
Top reasons why you should have auto scaling on your Redis instances:
- Dynamic Load Management - Caters to the dynamic load on your application, scaling in accordance with load and demand.
- Avoid Data Drop - Ensures timely scaling to avoid Redis data drop. Scales preemptively by monitoring key metrics, mitigating loss of new incoming data.
- Avoid Downtime - Helps avoid downtime on your application by monitoring key metrics to ensure your Redis is always accessible.
- Cost Saving - Finally, saves cost by scaling down in low-load scenarios.
- Once any Redis instance is configured, a scheduled cron collects the key metrics periodically.
- If any metric breaches its preset threshold, the next higher plan is fetched from the list of all plans, and the current Redis plan is changed to that plan.
- If all metrics are below their respective thresholds, the project evaluates whether a lower plan would still satisfy all the thresholds. Only then is the current Redis plan changed to the lower one.
- All metrics and scaling actions are logged to a database for review / reference.
- Periodic data cleaning is done to avoid exceeding the DB plan.
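The scale-up / scale-down decision described above can be sketched as follows. This is an illustrative outline, not the project's actual code: the function name, metric keys, and the simple "headroom" check used to approximate "a lower plan still satisfies all thresholds" are all assumptions.

```python
def choose_plan(metrics: dict, thresholds: dict, plans: list, current: int) -> int:
    """Return the index of the plan to switch to (may equal `current`)."""
    # Scale up if any metric breaches its preset threshold.
    if any(metrics[k] >= thresholds[k] for k in thresholds):
        return min(current + 1, len(plans) - 1)
    # Otherwise, consider scaling down -- only if the lower plan would still
    # satisfy every threshold (approximated here by a simple headroom check).
    if current > 0 and all(metrics[k] < thresholds[k] / 2 for k in thresholds):
        return current - 1
    return current

plans = ["mini", "standard", "premium"]
# Memory at 85% breaches an 80% threshold, so the next higher plan is chosen.
print(plans[choose_plan({"memory_pct": 85}, {"memory_pct": 80}, plans, 0)])  # → standard
```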
Follow the guidelines below to quickly set up this project on your Heroku account.
Environment Configuration
HEROKU_API_KEY
: API key for your Heroku account

EMAIL_HOST
: Host name for your email

EMAIL_PORT
: Port for your email

EMAIL_HOST_PASSWORD
: Host password

EMAIL_HOST_USER
: Host username

SERVER_EMAIL
: Server email

RECIPIENTS
: Comma-separated emails of notification recipients

ENABLE_EMAILS
: A global flag to enable / disable all types of mails

EMAIL_PREFIX
: Email subject prefix for possible filter addition

N_RETENTION_DAYS
: Number of days to retain metrics logs
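These variables can be set with the Heroku CLI. The values below are placeholders for illustration only, not defaults:

```shell
# Illustrative values only -- replace with your own credentials and app name.
heroku config:set \
  HEROKU_API_KEY=<your-heroku-api-key> \
  EMAIL_HOST=smtp.example.com \
  EMAIL_PORT=587 \
  EMAIL_HOST_USER=alerts@example.com \
  EMAIL_HOST_PASSWORD=<password> \
  SERVER_EMAIL=alerts@example.com \
  RECIPIENTS=ops1@example.com,ops2@example.com \
  ENABLE_EMAILS=True \
  EMAIL_PREFIX="[redis-autoscale]" \
  N_RETENTION_DAYS=30 \
  --app <your-app-name>
```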
Post deployment, run initialize.sh from the console.
This does 3 things:
- Applies all DB migrations
- Creates superuser for django admin panel
- Adds static info to database
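Assuming a standard Django layout, these three steps roughly correspond to the commands below. This is a sketch of what initialize.sh likely runs; the fixture name is hypothetical and the actual script may differ.

```shell
# Sketch of initialize.sh's three steps -- exact commands may differ.
python manage.py migrate                 # apply all DB migrations
python manage.py createsuperuser         # create the Django admin superuser
python manage.py loaddata static_info    # hypothetical fixture name for static data
```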
Run python add_redis_mon.py from the console.
This will prompt you for all the information related to the Redis instance for which auto scaling needs to be enabled.
General Configuration
Name
: An identifier name for your Redis instance. Can be alphanumeric.

Heroku App Name
: The name of the Heroku application to which your Redis instance is attached.

Redis Heroku Name
: The URL name by which your Heroku application identifies the Redis instance. (Example: REDISCLOUD_ORANGE_URL or HEROKU_REDIS_BLANK_URL)
Metrics Configuration
Metric Rate
: Choose the rate of metrics collection from the given options. Select a faster rate only if your application demands quick action.

Avg Memory Percentage
: Choose the average memory usage percentage you wish to maintain for your Redis instance.

Avg Connection Percentage
: Choose the average percentage of client connections you wish to maintain with the Redis instance.
Scaling Configuration
Scaling Enable
: If set, auto scaling actions are taken to maintain the average metric percentages.

Min Plan
: Choose the lower plan limit for scaling. If not set, the lowest available plan is selected.

Max Plan
: Choose the upper plan limit for scaling. If not set, the highest available plan is selected.

Scaling Rate
: Choose the rate at which the scaling check should be performed.
Notification Configuration
Threshold Breach
: Email notification to configured recipients whenever any threshold is crossed.

Scaling Success
: Email notification whenever a successful scaling action is performed.

Scaling Failure
: Email notification whenever there is a failure in a scaling action.
Compatible with Rediscloud and Heroku-Redis
This project has been tested & verified for both Rediscloud and Heroku-Redis providers.
In theory, it will also work for other service providers if you add the list of plans & details for the particular provider.

Steps
- Create new plan entries to logs_redisplan table
- Add plan_name that Heroku identifies in its CLI
- Add memory limit in MB
- Add connection limit (if infinite, add -1)
- Add cost per month for reference
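The steps above amount to inserting one row per plan into logs_redisplan. A minimal, self-contained sketch using an in-memory SQLite database is shown below; the column names are assumptions inferred from the fields listed above, not the project's actual schema.

```python
import sqlite3

# Hypothetical sketch: register a new provider plan in logs_redisplan.
# Column names are assumptions based on the four fields listed above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS logs_redisplan (
        plan_name        TEXT,     -- the plan name Heroku identifies in its CLI
        memory_mb        INTEGER,  -- memory limit in MB
        connection_limit INTEGER,  -- client connection limit (-1 = infinite)
        cost_per_month   REAL      -- monthly cost, for reference only
    )
""")
conn.execute(
    "INSERT INTO logs_redisplan VALUES (?, ?, ?, ?)",
    ("rediscloud:100", 100, 256, 19.0),  # illustrative plan values
)
row = conn.execute("SELECT * FROM logs_redisplan").fetchone()
print(row)  # → ('rediscloud:100', 100, 256, 19.0)
```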
Heroku API limit
Heroku limits API calls to 4500 / hour, which puts a limit on the number of Redis instances that can be managed through this project.
Here is a summary of the API calls made. Excluding the static and less frequent scaling-action calls, the maximum number of Redis instances that can be supported by this project is given by the formulas below.

    # static calls = # server restarts + # new redis instances configured
    # periodic calls per hour = Sum ( (60 / scaling-frequency in min) + # scaling actions ) over all scaling-enabled redis instances

    Max number of redis instances = 4500 / ( 60 / avg scaling-frequency in min )

For example, if all your Redis instances are configured with a scaling frequency of 10 min, you can have a maximum of 750 Redis instances enabled for scaling. If instead the frequency is 1 min, you can have 75 Redis instances enabled for scaling.
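The formula above is straightforward to check numerically. A small sketch (the function name is ours, not the project's):

```python
def max_redis_instances(avg_scaling_freq_min: float, api_limit_per_hour: int = 4500) -> int:
    """Max scaling-enabled Redis instances, ignoring static and scaling-action calls."""
    calls_per_instance_per_hour = 60 / avg_scaling_freq_min
    return int(api_limit_per_hour / calls_per_instance_per_hour)

print(max_redis_instances(10))  # → 750
print(max_redis_instances(1))   # → 75
```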
❓ How should I set this project up locally?
To set up the project locally, follow this reference guide.
❓ My redis instance limit is getting exceeded due to Heroku API limit. How can I tackle it while still managing all the redis instances I want?
Although we expect the limit on the number of Redis instances to be large enough, you might still need to manage more. In such a case, a simple solution is to create another Heroku API key from another team member's account, deploy another Heroku app with this repo, and use the new key. Add all the Redis instances above the limit to this new project and voila! You have just doubled your limit!