This is just to follow up on my prior post about latencies increasing with uptime (see here).
There was a recent update to lemmy.ml (to 0.19.4-rc.2) … and everything is so much snappier. AFAICT, there isn’t any obvious reason for this in the update itself(?) … so it’d be a good bet that there’s some memory leak or something that slows down some of the actions over time.
Also … interesting update … I hadn’t picked up that there’d be some web-UI additions, and they seem nice!
For the moment at least. Whatever problem we had before, it seemed to get worse over time, eventually requiring a restart. So we’ll have to wait and see.
My server seems to get slower until it needs a restart every few days; hoping this provides a fix for me too 🤞
Try switching to PostgreSQL 16.2 or later.
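If you’re not sure which version your instance’s database is actually running, here’s a rough way to check from Python (just a sketch using psycopg2; the connection details below are placeholders, swap in whatever your own compose/.env uses):

```python
# Check which PostgreSQL version Lemmy's database is running.
# Host, dbname, user and password are placeholders for your setup.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="lemmy",
    user="lemmy",
    password="CHANGE_ME",
)

# server_version is an integer, e.g. 160002 means 16.2
major, minor = divmod(conn.server_version, 10000)
print(f"PostgreSQL {major}.{minor}")

if (major, minor) < (16, 2):
    print("Older than 16.2 -- consider upgrading the postgres image.")

conn.close()
```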
What’s new in postgres?
Nothing in particular, but there was a strange bug in previous versions that, in combination with Lemmy, caused a small memory leak.
In my case it’s Lemmy itself that needs to be restarted, not the database server. Is this the same bug you’re referring to?
Yes, restarting Lemmy somehow resets the memory use of the database as well.
Well, I’ve been on this instance through a few updates now (since Jan 2023) and my impression is that it’s a pretty regular pattern (i.e., certain API calls, like those for replying to a post/comment or even posting, show increasing latencies as uptime goes up).
Sounds exactly like the problem I fixed (and mostly caused).
There were optimizations related to database triggers; these are probably responsible for the speedup.
Reddthat is on 0.19.4 too, and it does indeed feel snappier.
Interesting. It could be for the same reason I suggested for lemmy.ml, though. Do you notice latencies getting longer over time?
It’s a smaller server, so I guess latency issues would appear at a slower pace than on lemmy.ml.
Makes sense … but still … you’re noticing a difference. Maybe a “boiling frog” situation?
I would say it still feels snappier today than before the update (a couple weeks ago?), so it’s definitely an improvement.