general question regarding memory consumption #76
Comments
Are you using no_ack=True? If so you are probably hitting #34. Basically, with no_ack set to True, RabbitMQ will keep sending messages as fast as it can, and because of how AmqpStorm was designed it will just keep adding those messages to the buffer indefinitely. The easiest solution is to change the consumer to use no_ack=False. This will cause RabbitMQ to only send as many unacknowledged messages as your qos setting allows; qos defaults to 0, which translates to a max of 32k messages. I can look at implementing back-pressure on large message buildups as well; I just don't have a good pattern for it at the moment.
Thank you, this explains it. Appreciate it!
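For anyone landing here later, here is a minimal sketch of the suggested change. The queue name, credentials, and prefetch value are made up for illustration, and the exact AmqpStorm call signatures may differ slightly between versions:

```python
# Hypothetical consumer illustrating the suggested fix: no_ack=False plus an
# explicit qos/prefetch so RabbitMQ only sends a bounded number of
# unacknowledged messages at a time.
import amqpstorm


def on_message(message):
    # ... handle the message body here ...
    message.ack()  # acknowledging lets the broker send the next message


connection = amqpstorm.Connection('localhost', 'guest', 'guest')
channel = connection.channel()
channel.basic.qos(prefetch_count=100)  # bound the number of in-flight messages
channel.basic.consume(on_message, 'my_queue', no_ack=False)  # placeholder queue
channel.start_consuming()
```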
I haven't looked at the code, but would it be possible to stop reading from the socket if the buffer is over a certain size? (That could cause other problems like missed heartbeats, though.)
Yeah, I think that would be worth implementing. When I wrote this originally I relied on heartbeats to keep the connection open, but the design has since changed, and the connection will stay healthy as long as data is flowing (in both directions). One thing that makes it difficult to track how much data has actually built up is that the data is moved off the main buffer and directly onto each channel's inbound queue. So we would need to combine the total size of the data (or the number of frames) across all channels and aggregate it back to the connection layer. Another possible side effect is that one channel might block (or slow down) another channel from getting new messages, since at the socket level we wouldn't know the intended target channel / consumer.
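To make the idea above concrete, here is an illustrative sketch only (not AmqpStorm's actual internals; the class names and the threshold are invented for the example) of aggregating per-channel inbound buffers back to the connection layer and pausing socket reads past a threshold:

```python
# Illustrative back-pressure sketch: combine the inbound queue sizes of every
# channel and stop reading from the socket while the aggregate is too large.
from collections import deque


class Channel:
    def __init__(self):
        self.inbound = deque()  # frames waiting to be consumed


class Connection:
    MAX_BUFFERED_FRAMES = 100_000  # hypothetical back-pressure threshold

    def __init__(self):
        self.channels = {}

    def _total_buffered_frames(self):
        # Aggregate the per-channel inbound buffers back to the connection layer.
        return sum(len(channel.inbound) for channel in self.channels.values())

    def _should_read_socket(self):
        # Pausing reads here can delay every channel, not just the busy one,
        # because the target channel is only known after the frame is parsed.
        return self._total_buffered_frames() < self.MAX_BUFFERED_FRAMES
```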
I seem to have the same problem, but …
Would you be able to provide some example code to illustrate the issue? The …
Sorry, it seems my callback function is the culprit; there is audio processing running in the background. Simple message consumption (without the actual processing) didn't show the abnormal memory/CPU usage.
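As a general pattern (a hypothetical sketch, not taken from this project): keep the consume callback cheap and hand the heavy audio work to a small worker pool, acking only once the work is done, so the number of in-flight messages stays bounded by the qos/prefetch setting:

```python
# Hypothetical mitigation sketch: the callback only schedules work; the
# expensive processing runs on a worker pool, and the message is acknowledged
# after processing so unacked messages never exceed the prefetch count.
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)


def heavy_audio_processing(body):
    pass  # placeholder for the expensive work


def on_message(message):
    def work():
        heavy_audio_processing(message.body)
        message.ack()
    executor.submit(work)
```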
hi,
we ran into another problem: for some reason the amqp stack is consuming ungodly amounts of memory, and we feel we are doing something wrong.
example using tracemalloc:
Top 30 lines
#1: pamqp/frame.py:62: 951.8 MiB
frame_data = data_in[FRAME_HEADER_SIZE:byte_count - 1]
#2: pamqp/decode.py:417: 361.4 MiB
return value.decode('utf-8')
#3: pamqp/decode.py:296: 165.6 MiB
data = {}
#4: pamqp/decode.py:258: 154.9 MiB
return 8, time.gmtime(value[0])
#5: pamqp/header.py:84: 109.0 MiB
self.properties = properties or specification.Basic.Properties()
#6: pamqp/body.py:21: 78.8 MiB
self.value = value
#7: pamqp/header.py:81: 78.8 MiB
self.class_id = None
#8: pamqp/frame.py:135: 57.4 MiB
method = specification.INDEX_MAPPING[method_index]
#9: pamqp/frame.py:157: 40.2 MiB
content_header = header.ContentHeader()
#10: pamqp/frame.py:172: 40.2 MiB
content_body = body.ContentBody()
#11: pamqp/header.py:104: 20.1 MiB
self.body_size) = struct.unpack('>HHQ', data[0:12])
#12: pamqp/decode.py:157: 20.1 MiB
return 8, struct.unpack('>q', value[0:8])[0]
#13: amqpstorm/channel.py:229: 18.9 MiB
self._inbound.append(frame_in)
#14: :525: 9.8 MiB
#15: json/decoder.py:353: 4.6 MiB
obj, end = self.scan_once(s, idx)
This is after running for about 15-20 minutes and receiving maybe a million messages. Any idea why there is such a buildup of memory? It quickly exceeds a couple GB and we feel there is some odd issue happening.
kind regards
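For reference, a report like the one above can be produced with the standard-library tracemalloc module; a minimal sketch:

```python
# Minimal sketch of capturing a tracemalloc report similar to the one above.
import tracemalloc

tracemalloc.start()

# ... run the consumer for a while ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:30]:  # top 30 allocation sites
    print(stat)
```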