Optimize for low memory use? #2900

Open
GSI opened this issue Oct 20, 2024 · 11 comments

GSI commented Oct 20, 2024

I'm using Miniflux v2.2.1 with ~30 feeds on an old Raspberry Pi. Typically I have some 150 MB of RAM available.

It used to run fine, but recently the OS has to kill the program regularly:

[1383230.652600] Out of memory: Killed process 16052 (miniflux.app) total-vm:554492kB, anon-rss:137368kB, file-rss:4kB, shmem-rss:0kB, UID:985 pgtables:168kB oom_score_adj:0

I suspect these crashes may be related to having set some feeds to "fetch original content", a feature I only recently learned about. I'm not sure, though.

As a first mitigation attempt I set BATCH_SIZE=1, but that didn't prevent the OOMs.

jvoisin (Collaborator) commented Nov 10, 2024

Wow, this is odd. My instance is only using 40 MB of RAM, and I have a lot of feeds :/

asilentdreamer commented

For me, Miniflux in Docker consistently uses 30-50 MB of RAM, while Postgres can go over 150 MB when refreshing lots of feeds at once. I have a little under 400 feeds.

rdelaage (Contributor) commented

Maybe it would be interesting to find out where this huge memory consumption comes from. A way to do that would be to use the Go profiler (https://go.dev/blog/pprof).
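
For a standalone program, enabling the profiler is just a blank import plus an HTTP listener. A minimal sketch (the address and port are arbitrary, not Miniflux-specific):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// The heap profile then becomes available at http://localhost:6060/debug/pprof/heap.
	log.Println(http.ListenAndServe("localhost:6060", nil))
}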

rdelaage (Contributor) commented

@GSI Did you manage to find out what is consuming so much memory in your setup? If not, you can enable pprof like this (it requires building a custom version of Miniflux):

diff --git a/internal/http/server/httpd.go b/internal/http/server/httpd.go
index c7428a32..3ed8ee1e 100644
--- a/internal/http/server/httpd.go
+++ b/internal/http/server/httpd.go
@@ -9,6 +9,7 @@ import (
        "log/slog"
        "net"
        "net/http"
+       _ "net/http/pprof"
        "os"
        "strconv"
        "strings"
@@ -207,6 +208,8 @@ func setupHandler(store *storage.Storage, pool *worker.Pool) *mux.Router {
                w.Write([]byte(version.Version))
        }).Name("version")
 
+       router.PathPrefix("/debug/pprof/").Handler(http.DefaultServeMux)
+
        if config.Opts.HasMetricsCollector() {
                router.Handle("/metrics", promhttp.Handler()).Name("metrics")
                router.Use(func(next http.Handler) http.Handler {

You can then draw the memory usage graph with go tool pprof -http :7879 http://localhost:7878/debug/pprof/heap (replace the addresses as needed, and open the web UI in a browser).

GSI commented Nov 21, 2024

Thank you, that's an interesting tool. I just enabled it and will have to wait until the next OOM occurs.

The last one was 8 days ago, even though I have miniflux configured to update a single feed every 30 minutes (export BATCH_SIZE=1 and export POLLING_FREQUENCY=30).

GSI commented Dec 5, 2024

In the meantime I had two OOMs, but it seems pprof doesn't keep a history?

rdelaage (Contributor) commented Dec 5, 2024

The profiler doesn't keep anything. You need to poll it with a command like go tool pprof -http :7879 http://localhost:7878/debug/pprof/heap, and of course before the OOM: for example, if it gets OOM-killed about once a week, you can poll after 6 days while it is still running.

jvoisin added a commit to jvoisin/v2 that referenced this issue Dec 8, 2024
While doing some profiling for miniflux#2900, I noticed that
`miniflux.app/v2/internal/locale.LoadCatalogMessages` is responsible for more
than 10% of the consumed memory. As most miniflux instances won't have enough
diverse users to use all the available translations at the same time, it
makes sense to load them on demand.

The overhead is a single function call and a check in a map, per call to
translation-related functions.

This should close miniflux#2975
fguillot pushed a commit that referenced this issue Dec 10, 2024

jvoisin (Collaborator) commented Dec 11, 2024

I have been profiling miniflux for a week now, and the used memory never went above a couple dozen MB.

I sent a couple of (now merged) pull requests to bring this number down. I don't think they'll help in case of huge spikes, but since I can't reproduce your issue, it's the best I could do.
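
One of those changes loads the translation catalogs on demand instead of parsing all of them at startup. Roughly, it's the usual lazy-initialization pattern; the sketch below uses made-up names (getCatalog, parseCatalog) and is not the actual Miniflux code.

package locale // illustration only, not the real Miniflux package layout

import "sync"

// catalogMessages stands in for the parsed translations of one language.
type catalogMessages map[string]string

var (
	catalogMu      sync.Mutex
	loadedCatalogs = map[string]catalogMessages{}
)

// parseCatalog is a placeholder for parsing the embedded translation file of one language.
func parseCatalog(lang string) catalogMessages {
	return catalogMessages{}
}

// getCatalog returns the catalog for lang, parsing it on first use instead of
// loading every language at startup. The per-call overhead is one lock plus one map lookup.
func getCatalog(lang string) catalogMessages {
	catalogMu.Lock()
	defer catalogMu.Unlock()
	if c, ok := loadedCatalogs[lang]; ok {
		return c
	}
	c := parseCatalog(lang)
	loadedCatalogs[lang] = c
	return c
}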

GSI commented Dec 11, 2024

Thank you. I just wanted to create a cron task that fetches and stores something relevant to disk every minute for later inspection, but couldn't yet find a one-shot variant that would only load the pb.gz without then going on to launch the HTTP service.

Can you give me a hint?

Maybe something like pprof's top, or is there anything better suited?
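
For reference, the fetch-and-store part I have in mind is something like the sketch below (it assumes the pprof endpoint from the patch above is reachable on localhost:7878):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// Download the current heap profile from the pprof endpoint and save it to a
// timestamped file, so snapshots taken before an OOM kill can be compared later.
func main() {
	resp, err := http.Get("http://localhost:7878/debug/pprof/heap")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	out, err := os.Create(fmt.Sprintf("heap-%s.pb.gz", time.Now().Format("20060102-150405")))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}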

jvoisin (Collaborator) commented Dec 11, 2024

You can use this package

jvoisin self-assigned this Dec 19, 2024
jvoisin (Collaborator) commented Dec 21, 2024

@GSI do you see less memory consumption with (the freshly released) 2.2.4?
