I know there have been a few posts about this before, but it’s been a month since the last one and the problem is still ongoing. It doesn’t seem that any of the LW admins responded to Zag’s post on their help community, and the last response from lodion/Nath I’m aware of was from 4 months ago, when there were outright federation failures rather than just lengthy delays.
@[email protected] posted a comment on last month’s post about the delays, stating that it’s an issue on our end because our server isn’t keeping up. I’m not sure whether that’s the case, and I don’t know how to interpret the Grafana dashboard they linked to, but since it’s a new reply on an old post, I wanted to flag it here.
Current federation delays seem to be around 7 days. They don’t appear to affect posts themselves on Lemmy.world communities, but they do affect all replies to those posts (even from users on other instances) and all upvotes on them. [Edit: on further investigation, this isn’t the case. The current delays are at least 13 days, and they do affect posts too.]
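(For anyone who wants an actual number rather than eyeballing reply timestamps: 0.19 exposes per-target federation state, so something like the Python sketch below should show how far behind lemmy.world’s outbound queue to us is. The field names are my reading of the `/api/v3/federated_instances` response, so treat them as an assumption and sanity-check against the real response.)

```python
import requests  # pip install requests
from datetime import datetime, timezone

# Ask lemmy.world how far behind its outbound queue to aussie.zone is.
# Field names below are an assumption based on the 0.19 federation-state
# API; verify them against the actual JSON before relying on this.
resp = requests.get("https://lemmy.world/api/v3/federated_instances", timeout=30)
resp.raise_for_status()

for inst in resp.json()["federated_instances"]["linked"]:
    if inst["domain"] != "aussie.zone":
        continue
    state = inst.get("federation_state") or {}
    ts = state.get("last_successful_published_time")
    if ts:
        last = datetime.fromisoformat(ts.replace("Z", "+00:00"))
        lag = datetime.now(timezone.utc) - last
        print(f"lemmy.world -> aussie.zone lag: ~{lag}")
    else:
        print("no federation state reported for aussie.zone")
```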
I don’t want to sound too pushy, since the LW admins and Lodion/Nath are all volunteers, but I was hoping we might get an update on what the cause is and, if it’s an issue in Lemmy itself, whether anybody has opened an issue on GitHub so the developers are aware.
(NB: I don’t interact much with LW, so all of my testing has been in the Boost for Lemmy community.)
The next milestone release of Lemmy is scheduled to include parallel processing per instance for incoming activities, which should resolve the issue.
I don’t know what the timeframe for that is, though.
https://github.com/LemmyNet/lemmy/pull/4623 is on the 0.19.5 milestone; until parallel sending is implemented, there won’t be any benefit from parallel receiving.
0.19.4 will already have some improved logic that moves parts of the receiving work into the background to speed it up a little, but that won’t be enough to deal with this.
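To illustrate why sending is the bottleneck (this is just a toy model in Python, not Lemmy’s actual Rust internals): with sequential sending, the sender keeps at most one activity in flight per target instance, so throughput is capped at one activity per round trip no matter how fast the receiver processes them. Parallel sending lifts that cap:

```python
import asyncio
import time

async def deliver(activity: int, latency: float) -> None:
    """Stand-in for the HTTP POST of one activity to the target's inbox."""
    await asyncio.sleep(latency)

async def sequential_send(activities: range, latency: float) -> None:
    # Roughly the current behaviour: one in-flight request per target
    # instance, strictly in order. Throughput is capped at 1/latency
    # activities per second, so a busy sender falls ever further behind.
    for a in activities:
        await deliver(a, latency)

async def parallel_send(activities: range, latency: float, workers: int = 8) -> None:
    # Loosely what PR #4623 is after: several deliveries in flight at
    # once, multiplying throughput by roughly the worker count.
    sem = asyncio.Semaphore(workers)

    async def one(a: int) -> None:
        async with sem:
            await deliver(a, latency)

    await asyncio.gather(*(one(a) for a in activities))

async def main() -> None:
    for name, sender in [("sequential", sequential_send), ("parallel", parallel_send)]:
        start = time.perf_counter()
        await sender(range(40), 0.05)  # 40 activities, 50 ms round trip each
        print(f"{name}: {time.perf_counter() - start:.2f}s")

asyncio.run(main())
```

With 40 activities at 50 ms each, the sequential version takes ~2 s and the parallel one ~0.25 s; scale that up to Lemmy.world’s activity volume over a long round trip to Australia and you get exactly this kind of growing backlog.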
Oh, that’s good