Support application-side failover awareness #13

Open
ianunruh opened this issue Nov 2, 2013 · 0 comments
ianunruh commented Nov 2, 2013

Example: MySQL master failover (this would apply to any hot/cold master setup, e.g. PostgreSQL, Pacemaker/Corosync, etc.)

  1. Create new instance with database_master role

  2. Have it synchronize with the live database_master (determine liveness by looking at the instance that is in deploy state)

  3. Change current database_master to undeploy (don't actually push the undeploy configuration though)

  4. Push deploy configuration to webapp and database_slave instances

    At this point, the application and slaves should notice that there is no database_master in the deploy stage; their Puppet logic puts the applications into a read-only or buffered mode.

  5. Once synchronization of the new database_master is finished, push configuration to the old database_master to undeploy.

  6. Push configuration to the new database_master to deploy.

  7. Push configuration to all database_slave and webapp instances to deploy.
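The sequence above could be modeled roughly as follows. This is a toy in-memory sketch, not the real tool: the `Orchestrator` class and its `set_stage`/`push` methods are hypothetical names invented for illustration.

```python
class Orchestrator:
    """Toy in-memory model of instance lifecycle stages (illustrative only)."""

    def __init__(self):
        self.stage = {}   # instance name -> lifecycle stage
        self.pushed = []  # (instance, stage) configs actually pushed

    def set_stage(self, instance, stage):
        # Record the desired stage without pushing configuration.
        self.stage[instance] = stage

    def push(self, instance):
        # Push the instance's current stage as its live configuration.
        self.pushed.append((instance, self.stage[instance]))


def failover(orch, new_master, old_master, slaves, webapps):
    # 1-2. Bring up the new master in a syncing stage (see pre_deploy below).
    orch.set_stage(new_master, "pre_deploy")
    orch.push(new_master)
    # 3. Mark the old master undeploy, but don't push that config yet.
    orch.set_stage(old_master, "undeploy")
    # 4. Push to webapps and slaves: seeing no master in deploy, they
    #    switch to read-only/buffered mode.
    for inst in webapps + slaves:
        orch.push(inst)
    # 5. Once synchronization completes, push undeploy to the old master.
    orch.push(old_master)
    # 6. Promote the new master to deploy and push it.
    orch.set_stage(new_master, "deploy")
    orch.push(new_master)
    # 7. Re-push slaves and webapps so they pick up the new master.
    for inst in slaves + webapps:
        orch.push(inst)
```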

Sequence diagram

I'm probably missing something here. Is this possible with the current model and process? If not, what do we need to do to be able to support it?

  1. You would need a special lifecycle stage for the new master so that it syncs with the current master. Otherwise, if you put it straight into deploy, dependent instances may treat the new instance as the master to use, which would break the application because it assumes a master in deploy is ready for use. In my mind, there would be a pre_deploy stage: the scenario for a master would look for existing masters in deploy and set itself up like a slave until it is promoted to deploy.
  2. How do we know that synchronization of the new master has completed? Polling? Is there a post-sync hook for MySQL/PostgreSQL?
  3. It could be possible to combine block device migration with this. We take an existing slave, grab its block device, and then we just have to wait for the new data to be synchronized. This would be a lot harder to automate though.
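On question 2, if we end up polling, a minimal sketch might look like this. The `check_lag` callable is a hypothetical stand-in for whatever the platform provides, e.g. reading `Seconds_Behind_Master` from MySQL's `SHOW SLAVE STATUS` or querying `pg_stat_replication` on PostgreSQL; the wiring to an actual database is not shown.

```python
import time

def wait_for_sync(check_lag, timeout=600.0, interval=5.0):
    """Poll until replication lag reaches zero or the timeout expires.

    check_lag() should return the current replication lag in seconds
    (how it obtains that number is database-specific). Returns True if
    the new master caught up within the timeout, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_lag() == 0:
            return True
        time.sleep(interval)
    return False
```

The orchestrator would call this between steps 4 and 5, only pushing the undeploy/deploy configurations once it returns True.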