Hey there,

Sorry for the lack of detail on this one; I'm still trying to dig up more detail myself. I have a private project that has been in production for a while now, and it has been on the 2.x.x development branch since long before there were beta releases for the milestone. I've been on this train for a minute.
My solution is no small project: it receives well over 15 HTTP requests per second (sustained) and encounters malicious queries from time to time. Those malicious queries helped me identify issues like PR 404. I learn about these issues in my production environment, as they happen. It's a good time :D
As I check my notes and possibly A/B test over the next few days, I'll give you the punchline of a suspected issue that has seemingly emerged in my work:
I believe Hummingbird 2 beta 6 introduced some amount of memory leakage that was not present in earlier versions, and possibly nonexistent in the prior beta 5. This is tentative but educated speculation based on an unexpected crash of my environment, which is otherwise very stable. I continue to investigate.
For the sake of discussion: my production server as it runs today is WAY over-provisioned with 24GB of RAM (headless, with no other applications to host), but unfortunately this headroom is currently not enough to keep my process running for more than a week or so.
Frankly, I was caught very off guard by this and, as such, have very minimal info to go off of. But we can start by looking at the syslog and confirming why exactly my process was killed:
Jun 15 18:41:08 localhost kernel: [6328716.711816] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-0.slice/session-11329.scope,task=pricedb,pid=1242483,uid=1000
Jun 15 18:41:08 localhost kernel: [6328716.711850] Out of memory: Killed process 1242483 (pricedb) total-vm:322911892kB, anon-rss:24076600kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:67016kB oom_score_adj:0
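For the next occurrence, something like the following rough sketch could confirm whether RSS climbs steadily between restarts (Linux-only, and the helper is purely illustrative, not part of Hummingbird or my actual code):

```swift
import Foundation

// Rough, Linux-only sketch: read the resident set size of the current
// process from /proc/self/status so it can be logged periodically.
func currentRSSKilobytes() -> Int? {
    guard let status = try? String(contentsOfFile: "/proc/self/status", encoding: .utf8) else {
        return nil
    }
    for line in status.split(separator: "\n") where line.hasPrefix("VmRSS:") {
        // The line looks like "VmRSS:   24076600 kB"; the second field is the value.
        let fields = line.split(whereSeparator: \.isWhitespace)
        if fields.count >= 2, let kilobytes = Int(fields[1]) {
            return kilobytes
        }
    }
    return nil
}
```

Logging that from a background task once a minute would make any steady growth (or lack of it) obvious in the journal well before the OOM killer steps in.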
To be clear: the last time I saw this process outright die like this was when I first discovered PR 404. At this point, I've given beta 6 and beta 5 roughly equivalent time in prod. I can say with certainty that beta 5 never crashed, but of course that does not mean the issue doesn't exist in beta 5.
DISCLAIMER: I have to acknowledge that this could easily be a bug that sits in my stack, not Hummingbird. That is what I'm trying to sort out. For the sake of kicking this off, I'm inclined to believe this is NOT the most likely scenario, simply because my latest update to my process and prod environment was a single change: Hummingbird 2.x.x beta 5 to beta 6 in my Package.swift.
The prior release was seemingly very stable.
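For context, the entire delta between the stable deployment and the crashing one is that dependency pin, roughly like this (a simplified sketch rather than my actual manifest, and I'm assuming the usual 2.0.0-beta.N tag naming):

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "pricedb",
    dependencies: [
        // The only change: beta 5 -> beta 6. Pinning an exact prerelease tag
        // keeps the A/B comparison between the two betas unambiguous.
        .package(url: "https://github.com/hummingbird-project/hummingbird.git", exact: "2.0.0-beta.6"),
    ],
    targets: [
        .executableTarget(
            name: "pricedb",
            dependencies: [.product(name: "Hummingbird", package: "hummingbird")]
        )
    ]
)
```

Rolling back for the A/B test is then just flipping the exact pin back to the beta 5 tag and redeploying.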
Thank you for the continued development of this awesome framework.
There are very few differences between beta 5 and 6. I'll investigate further though. Which Hummingbird packages are you using? Are you certain no other dependencies changed version?
I couldn't find anything obvious. I did a test with a route that responded with the body from the request, and using wrk I hit that route 52,107,270 times, transferring about 153GB; memory usage never went over 9MB according to Activity Monitor.
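For reference, the test route was essentially an echo handler along these lines (paraphrased from memory, so treat the exact calls as a sketch rather than the literal test code):

```swift
import Hummingbird

// Echo server used for the load test: respond with whatever body the
// request carried, so every request streams bytes in and straight back out.
let router = Router()
router.post("/echo") { request, _ in
    // Collect the request body into a single buffer and return it.
    try await request.body.collect(upTo: 1024 * 1024)
}

let app = Application(
    router: router,
    configuration: .init(address: .hostname("127.0.0.1", port: 8080))
)
try await app.runService()
```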