
As part of v2's gradual roll-out, there are a few things to consider when migrating an instance from v1 to v2.

## Scheduled maintenance

The only difference is that migrating from v1 to v2 is a big change, particularly for the database schema, which may make the process take a bit longer and could potentially cause some issues.

To avoid causing trouble for our users, we should schedule a maintenance window in which we can stop the service and deal with any issues without affecting them. First, find out which day has the least traffic so we minimize any impact; then set the date and communicate it to users. You can copy the message we sent to Katuma users, the first instance to get v2.

## Bugsnag

To know how things go, we first need access to the instance's Bugsnag account. So first, check whether said instance has a `bugsnag_key` in its secrets file in `ofn-secrets`.
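
A quick way to check, assuming the per-instance secrets live in a YAML file inside your local `ofn-secrets` checkout (the exact path is an assumption; adjust to the repo's actual layout):

```sh
# Hypothetical path inside ofn-secrets; adjust to the real layout.
grep -R "bugsnag_key" path/to/ofn-secrets/uk-prod/
```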

Then, make sure you can access the account from Bitwarden. If not, ask in Slack and add it yourself. For older instances, the person who managed the instance setup might know about it.

### Slack integration

As a next step, set up the Bugsnag Slack integration so that everyone in the team has visibility over any error the instance may experience. Things could go wrong while you are away, and issues that require proactive attention tend to be forgotten; it's better to get notified about them.

You can follow the steps in Bugsnag's documentation. Make sure you choose "devops-notifications".

## Deployment

There is no special process other than the usual deployment with Ansible. As mentioned before, it's just about the size of the change and dealing with potential issues. We've deployed this to other instances before and it's rather stable, but chances are the next instance hits an edge case we haven't seen before.
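
For reference, the deployment itself would look something like the command below, assuming the standard `ofn-install` layout used by the maintenance mode playbook further down this page (`playbooks/deploy.yml` is an assumption; verify the playbook name in your checkout):

```sh
# Run from the root of the ofn-install repository.
ansible-playbook playbooks/deploy.yml --limit uk-prod
```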

### Enable maintenance mode

First of all, disable all traffic by enabling maintenance mode. Run the following:

```sh
ansible-playbook playbooks/maintenance_mode.yml --limit uk-prod
```

You can customize the message by running something like:

```sh
ansible-playbook playbooks/maintenance_mode.yml --limit es-prod -e "maintenance_mode_message='Os dia! Donde fue el sitio!? No pasa nada, estamos manteniendo el servidor...'"
```
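
To confirm the maintenance page is actually being served, a quick external check works well (the URL is a placeholder; use the instance's real domain):

```sh
# Prints the HTTP status line; it should correspond to the maintenance page.
curl -sI https://www.example-instance.org | head -n 1
```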

### Stop Unicorn and Delayed Job

To speed things up and avoid the database being updated while the deployment is taking place, it's worth stopping Unicorn and Delayed Job, which also frees up some RAM. The schema change may also break jobs running at the time, as happened in Belgium: a Delayed Job worker had the old database schema cached, as expected, but the schema changed during the worker's life cycle and it could no longer find the expected column.

```sh
systemctl stop unicorn_openfoodnetwork.service
systemctl stop delayed_job_openfoodnetwork.service
```
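
To double-check that both services are actually down before starting the deployment:

```sh
# Both units should report "Active: inactive (dead)" once stopped.
systemctl status unicorn_openfoodnetwork.service delayed_job_openfoodnetwork.service
```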

### Refresh products cache

For the same reason, after the database schema changes the entire products cache becomes stale, leading to inconsistent shopfronts. To re-cache all order cycles, simply run

```sh
bundle exec rake ofn:cache:warm_products
```

as explained in Refreshing the entire cache.
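
Note that the task has to run on the server, in the production environment, from the app's directory; the path below is an assumption based on a typical OFN deploy layout, so adjust as needed:

```sh
# Path is an assumption; adjust to where the app is deployed on your instance.
cd /home/openfoodnetwork/apps/openfoodnetwork/current
RAILS_ENV=production bundle exec rake ofn:cache:warm_products
```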

### Start Unicorn and Delayed Job again

During the deploy, Ansible will start Unicorn and Delayed Job again.
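
If you ever need to bring them back by hand instead, it's the counterpart of the stop commands above:

```sh
systemctl start unicorn_openfoodnetwork.service
systemctl start delayed_job_openfoodnetwork.service
```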

### Disable maintenance mode

Finally, disable maintenance mode with:

```sh
ansible-playbook playbooks/maintenance_mode.yml --limit uk-prod -e "disable_maintenance=true"
```