
zero downtime update of cro web app #31

Open
melezhik opened this issue Oct 11, 2021 · 5 comments
melezhik commented Oct 11, 2021

Hi guys! First of all, thanks for the great product you're building.

I use cro for a web application (https://mybf.io); my .cro.yaml file is this:

cro: 1
id: mbf
name: My Butterflies Web
entrypoint: app.raku
ignore:
  - .cache
  - .tom
  - articles
  - conf
  - cro.log
  - js

When I update any file not listed in the ignore list, it takes cro a while to restart the application to pick up the changes. During this time my application is not available and my nginx server returns a 502 error.

Any cure for that?

My web app command is:

nohup cro run > cro.log &

A snippet of app.raku running the Cro web server:

use Cro::HTTP::Server;

my Cro::Service $service = Cro::HTTP::Server.new:
    :host<0.0.0.0>, :port<2000>, :$application;

$service.start;

# Shut down cleanly on Ctrl-C
react whenever signal(SIGINT) {
    $service.stop;
    exit;
}

PS: I know it could be hard to fix this on the cro side; I am just interested in how someone would solve it.

@melezhik melezhik changed the title zero downtime of cro web app zero downtime update of cro web app Oct 11, 2021

jnthn commented Oct 11, 2021

We intended cro run more as a development convenience than an ideal way to run in production (the cro command line tool is documented as a development tool). Of course, that won't stop anyone... :-)

What I personally do for zero downtime updates is leave the lifting to Kubernetes; essentially, upon a deploy it starts the new container while leaving the old one running, then when the new one is ready it starts to route traffic over to it. (While a readiness probe is the reliable way to do this, I've found that there's already some built-in default delay, and it tends to be long enough for smaller Cro applications anyway.)
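The rolling-update pattern jnthn describes could be sketched as a Deployment manifest roughly like this (the image name, probe path, and resource names are hypothetical assumptions; port 2000 matches the app.raku snippet above):

```yaml
# Hypothetical Deployment sketch, not a tested manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cro-app
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep the old pod serving until the new one is ready
      maxSurge: 1
  selector:
    matchLabels:
      app: cro-app
  template:
    metadata:
      labels:
        app: cro-app
    spec:
      containers:
        - name: app
          image: example/cro-app:latest   # assumed image name
          ports:
            - containerPort: 2000
          readinessProbe:                 # traffic is routed only once this succeeds
            httpGet:
              path: /
              port: 2000
```

With `maxUnavailable: 0`, Kubernetes only takes the old pod down after the new one passes its readiness probe, which is what gives the zero-downtime handover.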


melezhik commented Oct 11, 2021

Hi Jonathan! Thanks for the quick response. I get that, but I am looking for a less expressive solution, not something like Kubernetes :-)

Let me maybe reshape my question: can I set up a custom (aka maintenance) page using cro to notify users that my app is being updated right now? I know nginx could do that, but it would be nice if I could do this using Raku/cro ...

Thanks


jnthn commented Oct 11, 2021

I am looking for less expressive solution, not like kubernetes

Yeah, for some things it's like using a flame thrower to swat a fly... :-)

As far as options go:

  • For probably the next Cro release we'll introduce reverse proxy support in Cro::HTTP, which would mean you could write a small service that proxies requests to your application and, upon failure, serves a maintenance page. Presumably this service would only need very occasional updates compared to the larger application, so would help a bit. Not close to zero downtime, but a mitigation. Downside: another service, more latency.
  • If you are using containers at all, you can recreate the Kubernetes style thing by yourself, by launching the container without the port exposed, and then killing the old container and exposing the port when ready.
  • Otherwise, stop using the cro runner and script something along the lines of: have a PID file for the current running service, when you update just start a new instance of the service without killing the existing one. When it is ready, but before calling .start on the Cro::Service, send SIGINT to the PID of the running version. Wait for it to exit, then start listening and write the PID of the new now-running process into the file. Needs a little care to make it robust. Lots of variations on this theme.
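The PID-file handoff from the last bullet could look roughly like this in Raku. This is a hedged sketch, not a tested implementation: the PID-file path and the `kill -0` polling are assumptions, and `$application` is whatever the app already builds.

```raku
# Sketch of a PID-file handoff between an old and a new instance.
use Cro::HTTP::Server;

my $pid-file = 'service.pid'.IO;    # assumed path shared by old and new instances

my Cro::Service $service = Cro::HTTP::Server.new:
    :host<0.0.0.0>, :port<2000>, :$application;

# Ask the currently running instance (if any) to stop before we bind the port.
if $pid-file.e {
    my $old-pid = $pid-file.slurp.trim.Int;
    my $sent = run 'kill', '-INT', ~$old-pid, :err;
    # Poll until the old process is gone (`kill -0` fails once it has exited).
    while $sent.exitcode == 0 && run('kill', '-0', ~$old-pid, :err).exitcode == 0 {
        sleep 0.1;
    }
}

$service.start;
$pid-file.spurt($*PID);             # record ourselves as the current instance

react whenever signal(SIGINT) {
    $service.stop;
    exit;
}
```

The downtime window shrinks to the gap between the old instance releasing the port and the new one calling `.start`, since all the slow work (compilation, module loading) happens before the handoff.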

melezhik commented
Yeah, this cro-based customized proxy server would be a good idea, at least for small/medium-size projects where performance is not that important ...
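Until built-in reverse proxy support lands, a hand-rolled version of the idea might look like this GET-only sketch. The backend address and the maintenance markup are assumptions, and query strings and request headers are deliberately not forwarded to keep it short:

```raku
# Minimal GET-only maintenance-page proxy sketch (assumptions noted above).
use Cro::HTTP::Router;
use Cro::HTTP::Server;
use Cro::HTTP::Client;

my $backend = 'http://127.0.0.1:2000';   # assumed address of the real app

my $application = route {
    get -> *@path {
        my $resp = await Cro::HTTP::Client.get("$backend/@path.join('/')");
        my $body = await $resp.body-blob;
        content $resp.header('content-type') // 'text/html', $body;
        CATCH {
            default {
                # Backend down or restarting: serve a static maintenance page.
                content 'text/html', '<h1>Updating, back in a moment</h1>';
            }
        }
    }
}

my $proxy = Cro::HTTP::Server.new(:host<0.0.0.0>, :port<8080>, :$application);
$proxy.start;
react whenever signal(SIGINT) { $proxy.stop; exit }
```

Since this front process only changes when the proxy logic changes, it can stay up across deployments of the real application behind it.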

niner commented Feb 23, 2022

We have found haproxy to be very useful for this. It's small, stable and scalable. With its retry feature, you should be able to hold requests until the backend is ready again after a restart (users would just notice a delay but will get served eventually). If you scale up to multiple backends, you just need to add them to your configuration and get failover that way (i.e. you restart one of the backends and the other will get all requests until the first one is ready again). It also supports "backup" servers out of the box, i.e. backends that will only get used if all primaries are down. Such a backup server can be a simple nginx serving a static maintenance page.
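A minimal haproxy.cfg sketch of that setup (addresses, ports, and timeouts are assumptions):

```
# Hypothetical haproxy.cfg sketch for retry + failover + maintenance backup.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    retries 3
    option redispatch    # retry the request on another server if one fails

frontend web
    bind *:80
    default_backend cro_app

backend cro_app
    server app1  127.0.0.1:2000 check
    server app2  127.0.0.1:2001 check    # optional second instance for failover
    server maint 127.0.0.1:8080 backup   # static maintenance page, used only
                                         # when all primary servers are down
```

Restarting app1 then routes traffic to app2; only if every primary is down does haproxy fall back to the `backup` server.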

Much of this can be achieved with nginx as frontend proxy as well, but it's much harder to get right and they are pushing their commercial nginx plus offering for this use case hard.
