Optimize for low memory use? #2900
Wow, this is odd. My instance is only using 40 MB of RAM, and I have a lot of feeds :/
For me, Miniflux in Docker consistently uses 30-50 MB of RAM, while Postgres can go over 150 MB when refreshing lots of feeds at once. I have a little under 400 feeds.
Maybe it would be interesting to find where this huge memory consumption comes from. A way to do that would be to use the Go profiler (https://go.dev/blog/pprof).
@GSI Did you manage to find what is consuming so much memory in your setup? If not, you can run pprof like this (it will require building a custom version of Miniflux):

```diff
diff --git a/internal/http/server/httpd.go b/internal/http/server/httpd.go
index c7428a32..3ed8ee1e 100644
--- a/internal/http/server/httpd.go
+++ b/internal/http/server/httpd.go
@@ -9,6 +9,7 @@ import (
 	"log/slog"
 	"net"
 	"net/http"
+	_ "net/http/pprof"
 	"os"
 	"strconv"
 	"strings"
@@ -207,6 +208,8 @@ func setupHandler(store *storage.Storage, pool *worker.Pool) *mux.Router {
 		w.Write([]byte(version.Version))
 	}).Name("version")
 
+	router.PathPrefix("/debug/pprof/").Handler(http.DefaultServeMux)
+
 	if config.Opts.HasMetricsCollector() {
 		router.Handle("/metrics", promhttp.Handler()).Name("metrics")
 		router.Use(func(next http.Handler) http.Handler {
```

And you can draw the memory usage graph like that.
Thank you, that's an interesting tool. I just enabled it and will have to wait until the next OOM occurs. The last one was 8 days ago, even though I have Miniflux configured to update a single feed every 30 minutes (
In the meantime I had two OOMs, but it seems pprof doesn't keep a history?
The profiler doesn't keep anything. You need to poll it using a command like that.
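(The polling command was not captured either. One way to do it, assuming the same local instance, is a cron job or loop that snapshots the heap profile to timestamped files:

```sh
# save a timestamped heap snapshot for later inspection with `go tool pprof`
curl -s -o "heap-$(date +%s).pb.gz" http://localhost:8080/debug/pprof/heap
```
)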
While doing some profiling for miniflux#2900, I noticed that `miniflux.app/v2/internal/locale.LoadCatalogMessages` is responsible for more than 10% of the consumed memory. As most Miniflux instances won't have users diverse enough to need all the available translations at the same time, it makes sense to load them on demand. The overhead is a single function call and a map lookup per call to translation-related functions. This should close miniflux#2975
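(For illustration, here is a minimal sketch of the on-demand loading pattern described above. It is not Miniflux's actual code; `loadCatalogMessages` is a placeholder for the real per-language parser:

```go
package locale

import "sync"

var (
	mu       sync.Mutex
	catalogs = map[string]map[string]string{} // language tag -> message key -> translation
)

// messagesFor returns the catalog for a language, parsing it on first use
// instead of loading every translation at startup.
func messagesFor(language string) map[string]string {
	mu.Lock()
	defer mu.Unlock()
	if c, ok := catalogs[language]; ok { // the "check in a map" mentioned above
		return c
	}
	c := loadCatalogMessages(language) // placeholder: parse this language's embedded file
	catalogs[language] = c
	return c
}

// loadCatalogMessages stands in for the real loader; it is hypothetical.
func loadCatalogMessages(language string) map[string]string {
	return map[string]string{}
}
```
)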
I have been profiling Miniflux for a week now, and the used memory never went above a couple dozen MB. I sent a couple of (now merged) pull requests to bring this number down, although I don't think they'll help in case of huge spikes; but since I can't reproduce your issue, it's the best I could do.
Thank you. I just wanted to create a cron task that fetches and stores something relevant to disk every minute for later inspection, but couldn't yet find a one-shot variant that would only load the `pb.gz` without then going on to launch the HTTP service. Can you give me a hint? Maybe something like pprof's
You can use this package.
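(For a one-shot, non-interactive report, `go tool pprof` can also print a text summary of a stored profile and exit, without launching the HTTP UI; the filename here is hypothetical:

```sh
go tool pprof -top heap-1712345678.pb.gz
```
)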
I'm using Miniflux v2.2.1 with ~30 feeds on an old Raspberry Pi. Typically I have some 150 MB of RAM available.
It used to run fine, but recently the OS has had to kill the process regularly:
I suspect these crashes may be related to having set some feeds to "fetch original content", an option I only recently learned about. I'm not sure, though.
As a first mitigation attempt I set `BATCH_SIZE=1`, but that didn't prevent the OOMs.