High Memory usage for DelegatingHandler.init/1 #339
FWIW, I am streaming a huge amount of data from the server to the client, but I would expect the socket to transfer it and then drop it.
Interesting! Versions of Bandit and Thousand Island?
https://github.com/jaronoff97/tails/blob/main/mix.exs#L54 :) Also, it's totally possible this is expected for a very high-load socket, but I wanted a gut check because this seemed really high.
It's not expected that Bandit's memory usage grows long-term based on the volume of data sent (whether over HTTP or WebSocket). There may be places in the lower networking stack that keep some buffers around, but they should not grow without bound. Part of me wonders if it isn't in Phoenix's channels code. A couple of things that would help:
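One way to gather the kind of evidence being asked for here is to rank processes by heap memory from a remote IEx shell, and to check whether a full-sweep GC reclaims the memory (which would point at held-but-collectable refc binaries rather than a true leak). A minimal sketch; the thresholds and the 10-process cutoff are illustrative, not from the thread:

```elixir
# Top 10 processes by heap memory, with their initial call, to see
# whether Bandit/WebSocket handler processes dominate.
Process.list()
|> Enum.map(fn pid ->
  case Process.info(pid, [:memory, :initial_call, :registered_name]) do
    nil -> nil
    info -> {pid, info}
  end
end)
|> Enum.reject(&is_nil/1)
|> Enum.sort_by(fn {_pid, info} -> -info[:memory] end)
|> Enum.take(10)

# Force a GC on every process and compare total memory before and
# after; a large drop suggests garbage (e.g. refc binaries) is being
# held rather than genuinely leaked.
before_gc = :erlang.memory(:total)
Enum.each(Process.list(), &:erlang.garbage_collect/1)
after_gc = :erlang.memory(:total)
IO.puts("Reclaimed: #{before_gc - after_gc} bytes")
```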
I will attempt all this when I get a chance :) Pretty slammed rn. FWIW, I swapped from Cowboy to Bandit because Bandit's performance was noticeably better, but I did see similar issues there. Maybe it is something in Phoenix... are there traces I can emit from the system?
If you think you've seen similar issues in Cowboy, that's the first thing to validate. Given the relative complexity of Phoenix compared to Bandit/Cowboy, I think it's much more likely the issue is there.
Sorry for the late response! I think this may rhyme with #313 and #322. I've pushed up a branch that does a GC pass before switching the handler from HTTP/1 to WebSocket; would you be able to test it and see if that improves matters?
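The idea described here is to run a garbage-collection pass at the HTTP/1 → WebSocket handoff, so that refc binaries referenced while parsing the HTTP request are released before the long-lived WebSocket loop begins. A hedged sketch of the technique — this is not Bandit's actual code, and the module and function names are invented for illustration:

```elixir
defmodule UpgradeSketch do
  # Sketch only: GC at the HTTP/1 -> WebSocket handoff. The upgraded
  # process may live for hours, so any refc binaries pinned by the old
  # heap (request headers, body fragments) would otherwise linger.
  def upgrade_to_websocket(state, loop_fun) do
    # Collect the current process before entering the long-lived loop.
    :erlang.garbage_collect(self())
    loop_fun.(state)
  end
end
```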
Thanks @mtrudel. That helped significantly. You can see where I deployed and how it settled at 3.2 GB of memory instead of 6.5 GB. The top line is the memory usage from the same time yesterday and the bottom line is memory usage from a month ago when we were still running Cowboy. I'm also still running with WebSocket compression off.
Good news, but it seems like we still have work to do! Now that I know this line is producing good results, let me try adding a few more things to this branch for experimentation. We should be able to get well below the Cowboy line, all else being equal.
Actually, could you switch back to Cowboy for a spell just to make sure that all things are equal? It's not impossible that that growth in memory usage is due to something other than Bandit/Cowboy.
It'd also be useful to know what things look like with compression re-enabled.
Is this all, or predominantly, WebSocket load?
These servers are handling WebSocket + GraphQL requests. The majority of the load is GraphQL requests, but when I look at LiveDashboard, all of the processes with high memory usage are WebSockets.
@aaronrenner try the branch again? I've added pdict filtering between requests.
@jaronoff97 would you be able to take a look at Bandit 1.5.2 and see to what extent it resolves your issues? I've been working with @aaronrenner on Slack and I think we've solved his issues (at least these ones), and I'd like to get this issue moved along for your original query.
I have a lead on the remaining performance deficit that @aaronrenner identifies in his most recent post above; I can identify a similar spread in memory usage (an extra 20-25% over Cowboy) and reproduce it locally. Working it over on #345.
@mtrudel sure! I'll give that a try and run a load test and get back to you by tomorrow! Thank you :D
My hunch is that you'll see the same gain @aaronrenner did, but that there will still be a ~20-25% deficit vs. Cowboy (that's what I'm working on now). If you have any similarly comparative graphs for CPU & scheduler utilization, those would also be appreciated.
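For the scheduler-utilization side of that comparison, OTP's `:scheduler` module (shipped with `runtime_tools` since OTP 21) produces numbers that can be compared across the two deployments; a brief sketch, with the 5-second sample window chosen arbitrarily:

```elixir
# Sample scheduler utilization over 5 seconds (blocks while sampling).
# The result includes per-scheduler entries and a :total summary tuple;
# compare the :total line between the Bandit and Cowboy deployments
# under the same load.
:scheduler.utilization(5)
```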
@aaronrenner when you posted your most recent chart on May 1 that showed an improvement but still a ~1 GB difference, do you know if you had compression enabled or not in the Bandit & Cowboy cases?
@mtrudel I just confirmed that compression was disabled for both Cowboy (I believe this is the default) and Bandit. The web server was changed via an environment variable:

```elixir
case System.get_env("WEB_SERVER", "cowboy") do
  "bandit" ->
    config :my_app, MyAppWeb.Endpoint,
      adapter: Bandit.PhoenixAdapter,
      http: [
        websocket_options: [
          compress: false
        ]
      ]

  "cowboy" ->
    config :my_app, MyAppWeb.Endpoint, adapter: Phoenix.Endpoint.Cowboy2Adapter
end
```
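For completeness, this config selects the adapter at boot time; the shell default-expansion below mirrors the `System.get_env("WEB_SERVER", "cowboy")` fallback (the `mix phx.server` invocation is illustrative):

```shell
# Mirrors System.get_env("WEB_SERVER", "cowboy"): unset means Cowboy.
echo "${WEB_SERVER:-cowboy}"   # prints "cowboy" when WEB_SERVER is unset

# Run the same app under Bandit instead (illustrative invocation):
# WEB_SERVER=bandit mix phx.server
```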
Hello! I'm using Bandit as my WebSocket adapter for a Phoenix project where I'm streaming data from a server to the client, and I'm noticing incredibly high memory usage. When I look at the dashboard's Processes tab, I can see it's caused by DelegatingHandler.init/1...
I would expect init to only be called once, so I'm surprised to see it present here.