
Memory Constrained environments #343

Open · hms opened this issue Sep 12, 2024 · 17 comments
@hms (Contributor) commented Sep 12, 2024

@rosa

At the risk of you crafting a voodoo doll of me and using it every time I reach out... Without knowing or understanding your design criteria and objectives, I'm at risk of asking poor questions or making bad suggestions, but here goes anyway. I'm going to apologize in advance for being "That Guy".

With the new v0.9 release, on a fresh startup of Solid Queue (no tasks, nothing run), I see the following memory footprint (macOS):

  • Supervisor: 160 MB
  • Dispatcher: 108 MB
  • Scheduler: 105 MB
  • Worker: 110 MB

Once the jobs actually do something of value, the worker reliably grows to 200 MB plus (I'm looking at you, ActiveRecord...). For those of us running on cloud services and a shoestring budget, that's already tight. In my case, I run a second worker to isolate high-memory jobs so I can "recycle on OOM" while still servicing everything else via the other worker.

I can purchase my way into additional memory resources at a cost of 10x (literally) what I'm paying now. And it only goes up from there. So this issue is real and painful for me, and I would guess for a bunch of other folks running on shoestring budgets.

I'm sure there are use cases where larger deployments would want a Dispatcher without a Supervisor, so I think I understand the rationale for the current design. But it would be nice if there were a way, via configuration, to have a SuperDispatcherVisor... have the supervisor take on the dispatcher's responsibilities and let us reclaim 110 MB+.

@rosa (Member) commented Sep 12, 2024

Yes, I understand... I had an async mode where all processes ran as threads of the supervisor, so there was a single process, but it was decided that we wanted to have a single and only way to run this.

Do you have recurring tasks configured at all? You could skip the scheduler if not. Another question: are you starting Solid Queue via bin/jobs or the Rake task?

I'll see if I get to bring back the async mode.

@hms (Contributor, Author) commented Sep 12, 2024

@rosa

I do have recurring tasks.

I had an async mode where all processes ran as threads of the supervisor, so there was a single process, but it was decided that we wanted to have a single and only way to run this.

For what it's worth, this was a very good call. That being said, per the README, the dispatcher really isn't doing that much anymore. Unless you have big plans for the dispatcher, it seems the supervisor's maintenance task thread (or a second thread) could be doing what's left of the dispatcher's job.

@rosa (Member) commented Sep 12, 2024

I think it depends on your setup 🤔 If you use delayed jobs, then the dispatcher will be making sure they get dispatched. This also applies to jobs automatically retried via Active Job with delay (the default). We do use them heavily and run several of them separate from workers, but perhaps you don't? In that case, would it help to just not run the dispatcher? You can achieve that by not configuring it at all, but I imagine you do need it in some cases 🤔 Although I imagine you need the concurrency maintenance task 😬 I think the async mode was a good idea for this case, TBH.
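
For reference, a worker-only config/queue.yml would look roughly like the sketch below; since only workers are declared, no dispatcher should be started. The queue name, thread count, and polling interval are just placeholders, not recommendations:

```yaml
# config/queue.yml — rough sketch: only workers are configured, so the
# supervisor shouldn't spin up a dispatcher process. Values below are
# placeholders; adjust to your own workload.
production:
  workers:
    - queues: "*"
      threads: 3
      polling_interval: 0.1
```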

@hms (Contributor, Author) commented Sep 12, 2024

I'm starting to worry this last exchange is falling further and further into the "I didn't fully understand" side of things, again :-(

I have to admit to being a bit flummoxed between the complexity required for a competent Async Job subsystem and the desire to fit things into the itty-bitty box of small-scale, affordable, cloud-based deployments.

Given where Solid Queue currently sits with memory utilization, I'm going to have a good think on the trade-offs between running just one worker and assuming it's going to recycle (on OOM) for almost every execution vs. just facing the fact that I have to push that memory dial to the right and eat the bill 😢

@rosa (Member) commented Sep 12, 2024

I have to admit to being a bit flummoxed between the complexity required for a competent Async Job subsystem

So sorry, this is my fault. I call it async mode but it's actually the same thing as now, just that instead of forking processes, the supervisor would just spawn threads for each of its supervised processes. I probably confused you with that name.

Are you starting Solid Queue via bin/jobs or the Rake task?

@hms (Contributor, Author) commented Sep 12, 2024

So sorry, this is my fault. I call it async mode but it's actually the same thing as now, just that instead of forking processes, the supervisor would just spawn threads for each of its supervised processes. I probably confused you with that name.

That one I actually understood.

Adding complexity to the code so it can be configured to run both ways seems like a big lift. I know you had it before, but every line of code in Solid Queue is a line that has to be supported, tested, and will eventually be used in an unexpected way. I would guess threads are OK for very light / IO-intensive workloads, but given the GVL I simply don't understand where the trade-offs are between threads vs. processes. I shouldn't have started this conversation without a better understanding.

Are you starting Solid Queue via bin/jobs or the Rake task?

I've switched to bin/jobs.

I can't thank you enough for being willing to engage, and tolerate / put up with my learning curve on some of these issues.

@rosa (Member) commented Sep 12, 2024

Oh, no, no please, it's me who should thank you for your patience and help to make Solid Queue better! 🙏 ❤️

The reason I asked about bin/jobs is that I learnt not that long ago that Rake tasks don't eager load code by default, even if you set config.eager_load to true (the default in production). For that to happen, you need to set config.rake_eager_load to true, as it's false by default. bin/jobs loads the Rails environment, so it'll respect eager_load and load the app before forking. I think this might help with memory because the forks will share some of that already-loaded code, but I think the savings are quite modest in general.
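
For anyone starting via the Rake task route, this is roughly the setting I mean; a sketch, with MyApp standing in for your application's module name:

```ruby
# config/application.rb (abridged; MyApp is a placeholder)
module MyApp
  class Application < Rails::Application
    # Rake tasks don't eager load even when config.eager_load is true;
    # this flag is false by default and opts them in.
    config.rake_eager_load = true
  end
end
```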

@hms (Contributor, Author) commented Sep 12, 2024

I'll look into the eager vs. lazy loading trade-offs. Thanks for that.

Once I get worker recycling working / finished, I'll have more suggestions to share for things that have helped. For example, SolidQueue.on_start { GC.auto_compact = true } helps and is shared between forks.
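
As a rough sketch, I have that hook in an initializer along these lines (the file name is just my convention):

```ruby
# config/initializers/solid_queue.rb (file name is arbitrary)
# Enable GC compaction when Solid Queue starts; per the note above,
# the setting appears to carry over into the forked processes.
SolidQueue.on_start do
  GC.auto_compact = true
end
```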

@rosa (Member) commented Sep 12, 2024

There's also Process.warmup in Ruby 3.3, which might help too, but I've never used it in production.
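
Something along these lines might work, though again, untested in production on my side; pairing it with the on_start hook above is just an idea:

```ruby
# Rough sketch: Process.warmup (Ruby 3.3+) runs a major GC, compacts the
# heap, and promotes surviving objects to the old generation, which makes
# copy-on-write pages friendlier for forked children.
SolidQueue.on_start do
  Process.warmup if Process.respond_to?(:warmup)
end
```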

@hms (Contributor, Author) commented Sep 12, 2024

Oh that looks interesting! Thank you.

@majkelcc commented

Do you have recurring tasks configured at all? You could skip the scheduler if not.

What's the easiest way to do this? Would an empty recurring.yml already disable the scheduler?

@dhh (Member) commented Sep 16, 2024

@hms If you're running in a super-constrained environment, you could just use the Puma plugin that we use in development. Then everything runs off that single Rails process. Just make sure you keep WEB_CONCURRENCY = 1.
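
Roughly, that's a one-liner in config/puma.rb; a minimal sketch, adapt to your setup:

```ruby
# config/puma.rb (sketch) — run Solid Queue's supervisor inside the Puma
# process instead of as a separate bin/jobs process.
plugin :solid_queue

# Keep WEB_CONCURRENCY at 1 so you don't multiply the memory footprint.
```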

Another option is to stop getting fleeced by cloud providers charging ridiculous prices for tiny hosts 😄. Rails 8 is actually about answering that question in the broad sense.

@rosa (Member) commented Sep 17, 2024

What's the easiest way to do this? Would an empty recurring.yml already disable the scheduler?

@majkelcc yes! If you have no recurring tasks defined at all (the default), then the scheduler will be automatically disabled. Alternatively, if you're running jobs in more than one place and want to disable it in one of them (this is what we do in HEY), then you can pass --skip_recurring to bin/jobs.

@hms (Contributor, Author) commented Sep 17, 2024

@dhh

Oh, how I hate Heroku and the games they play with radically inappropriate "starter" resource sizing (you think Apple is bad), all in an effort to prop up their already insanely high prices and force me into upgrades (does that make them insanely high² prices?). And yes, I'm very jealous of your new monster Dells and the fact that you got off the treadmill (want to rent me a small slice for something I can afford?).

But as a solo developer who is extremely grateful for the technical compression the Rails community and you have delivered over the years, I cannot put a price on the value of A) not having to worry about anything DevOps, and B) the comfort of 10+ years of using a system and feeling like you know all of its corners.

I'm very much crossing my fingers that Rails 8 reduces the moving parts enough that the learning curve of a new deployment strategy comes within reach.

@dhh (Member) commented Sep 17, 2024

@hms You're the ideal target for the progress we're bringing to the deployment story in Rails 8. Stay tuned for Rails World!

But in the meantime, I'd try with the puma plugin approach.

@hms (Contributor, Author) commented Sep 17, 2024

@dhh In my case, I have at least split the web server from the SolidQueue environments, so I'm living large with 512MB x2.

I'm just bristling at the fact that I have to go from $9 to $50 a month to double that memory.

@dhh (Member) commented Sep 17, 2024

Highway robbery. Selling 512MB instances in 2024 is something.
