Hello,
In an application using smallrye-reactive-messaging-rabbitmq 4.21.0 (a Quarkus application), I am struggling to get an error in my publisher when the queue is full and I try to publish another message.
On the RabbitMQ side, I defined limits on my queue (max-length or max-length-bytes). In addition, I am using the "at-least-once dead-lettering" configuration, i.e. quorum queues with overflow=reject-publish and dead-letter-strategy=at-least-once.
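For clarity, the policy I am applying to the queue looks roughly like this (the queue pattern, limit, and dead-letter exchange name are illustrative, and the exchange is assumed to already exist):

```shell
# Illustrative policy: cap the queue, reject publishes once it is full,
# and dead-letter rejected/expired messages with at-least-once guarantees.
rabbitmqctl set_policy --apply-to queues overflow-dlx "^tasks$" \
  '{"max-length":10000,"overflow":"reject-publish","dead-letter-exchange":"dlx","dead-letter-strategy":"at-least-once"}'
```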
In the application properties, I have configured "publish-confirms" and a "default-ttl" for published messages (by the way, I didn't find these properties in the config reference doc, only in the sources).
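The configuration I mean is roughly the following (the channel name "tasks" and the values are illustrative; the property names are the ones I found in the connector sources):

```properties
# Outgoing channel configuration (channel name "tasks" is illustrative)
mp.messaging.outgoing.tasks.connector=smallrye-rabbitmq
# Ask the broker to confirm (ack/nack) every published message
mp.messaging.outgoing.tasks.publish-confirms=true
# TTL in milliseconds applied to published messages
mp.messaging.outgoing.tasks.default-ttl=10000
```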
When I publish a message once the queue is full, using mutinyEmitter.sendMessage(msg), I believe RabbitMQ is correctly not confirming the published message, but the Uni returned by sendMessage only fails after a very long time (12 minutes in my test).
The "default-ttl" timeout also doesn't seem to change anything. As a workaround, I have to block like this, which is of course not reactive and does not differentiate a "real" timeout from a nack:

```java
mutinyEmitter.sendMessage(msg)
    .await().atMost(Duration.of(5, ChronoUnit.SECONDS));
```
What is the correct way to make sure that sent messages are not lost from an imperative publisher (here, for example, I am publishing from a JAX-RS resource) when the queue has an overflow configuration?
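For context, the non-blocking shape I would like to use in the JAX-RS resource is roughly the following (a sketch only: ifNoItem is Mutiny's timeout operator, "mutinyEmitter" is an injected MutinyEmitter, and all names are illustrative, not the connector's documented API):

```java
import java.time.Duration;

import io.smallrye.mutiny.Uni;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.core.Response;

// Sketch: publish from a JAX-RS resource without blocking the caller.
@POST
public Uni<Response> publish(Msg msg) {
    return mutinyEmitter.sendMessage(msg)
        // Fail after 5 s instead of waiting minutes for a confirm that never comes
        .ifNoItem().after(Duration.ofSeconds(5)).fail()
        .replaceWith(Response.accepted().build())
        // Both a broker nack and the timeout surface here; I cannot tell them apart
        .onFailure().recoverWithItem(t -> Response.status(503).build());
}
```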
Thank you.