Presence over federation should be less aggressive when trying to send transactions #11383
Comments
Do you have presence enabled? I think that's known to be particularly heavyweight on federation traffic.
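For reference, presence can be toggled in `homeserver.yaml`. A minimal sketch (the nested `presence.enabled` option applies to recent Synapse versions; older releases used a top-level `use_presence` flag instead):

```yaml
# homeserver.yaml (sketch): disabling presence to cut federation traffic
presence:
  enabled: false
```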
Are you able to narrow down when this started, e.g. via a
Reminds me a little bit of #11049, but I'm not convinced it's the same scenario (and besides, that should be fixed in 1.47).
Some more thoughts:
A little local experimentation seems to show that Docker is prone to bypassing your host's DNS cache. Doing a little bit of research, it seems that Docker basically copies the host's /etc/resolv.conf into the container, but filters out any loopback nameserver entries (since a container can't reach the host's 127.0.0.1), substituting a public resolver if nothing remains.
In other words, it seems like Docker will bypass a DNS cache running on the host's loopback address. Another solution might be for the Synapse docker container to come with a built-in DNS cache.. :/
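To illustrate the point above, here is a small sketch (my own, not Docker's actual code) of the filtering behavior described: loopback `nameserver` entries in the copied resolv.conf would not survive inside the container, so a local DNS cache on the host is effectively invisible.

```python
# Sketch: why a loopback DNS cache on the host is invisible inside a
# Docker container. Docker copies the host's /etc/resolv.conf but drops
# loopback nameservers (a container cannot reach the host's 127.0.0.1).
import ipaddress


def usable_nameservers(resolv_conf: str) -> list:
    """Return nameserver entries that would survive this filtering."""
    kept = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "nameserver":
            try:
                if not ipaddress.ip_address(parts[1]).is_loopback:
                    kept.append(parts[1])
            except ValueError:
                pass  # ignore malformed entries
    return kept


# A host running a local DNS stub resolver (e.g. systemd-resolved):
host_conf = "nameserver 127.0.0.53\noptions edns0\n"
print(usable_nameservers(host_conf))  # -> [] : nothing survives
```

Running this against a resolv.conf that lists only `127.0.0.53` (the systemd-resolved stub) yields an empty list, matching the observed bypass.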
I do, but I've had the same config for months and only noticed this recently.
Didn't try that yet. I did try rolling back to 1.46 and 1.45 and saw the same behavior.
I had some doubts about DNS too. We might be onto something. I will try to set up a local DNS cache and see if that improves things.
Very unlikely. I have fiber and everything is connected by cable.
I think we can exclude DNS. I've set up my federation worker outside of Docker, running it directly on my host with systemd-resolved (so with a DNS cache), and I have observed exactly the same behavior.
@DMRobertson I've run some more experiments... So this would mean my problem comes from the number of homeservers I'm federated with.
It seems #5373 could also be related to my problem.
I have some more findings. Today I tried to re-enable presence, and things exploded right away. So I think it is safe to say that the problem comes from there.
Sounds like it falls under the umbrella of #9478. It's interesting that you mention this has only got worse recently. (But perhaps you only recently joined a room that's federated across multiple homeservers?) |
Could be, I can't tell for sure. I guess I can close this issue, since #9478 already tracks all the work to be done in this area. Thanks for the help @DMRobertson and @reivilibre
Description
Starting a couple of weeks ago, my federation sender worker started to completely flood my LAN, to the point where no other traffic can flow inside it anymore.
Steps to reproduce
I'm running my homeserver with workers. I have:
Everything goes well until I start the federation sender worker. After a few minutes, my LAN is completely unusable: no connection can be made to remote servers or between devices in the LAN. As soon as I stop the federation worker, the storm stops and everything comes back to normal.
Here is a snippet of the worker logs:
sender.log
Version information
If not matrix.org:
Version: server_version 1.47.0 python_version: 3.8.12
Install method: everything runs in Docker containers
Platform: Docker running on Linux (I'm using Arch btw ;-) )
All Synapse processes run on the same host.
I do have monitoring in place, so I can also provide Grafana screenshots if that would be useful.