Replies: 4 comments
-
Hi @brycegoh, this isn’t something supported directly in River as of today, although it’s definitely something we’re thinking about and we have some ideas for how to make it happen. As of now I believe you’d have to layer on something within your jobs to make them hold until it’s their turn to run. This wouldn’t be a great option though, as there would be actively running jobs taking up worker slots. I think this would have to come from the job system to work well. I’ve converted this to a discussion so it can get upvotes alongside the other feature requests. Let us know if you think there’s anything particularly unique or interesting about your use case here. Thanks!
-
I assume for rate limits with such a small period … For a larger window, a window-based unique constraint would make it a feature-complete rate limiter, I suppose. Disclaimer: I haven't actually used River; I saw this on Twitter yesterday and have only read about it.
-
I'm also interested in this feature. Any ideas on how to implement it? |
-
I have a somewhat related use case. Some of our jobs talk to third-party APIs, and we want to (easily) limit the number of requests we send their way. Right now we do that by configuring the max workers for the queue directly, but this doesn't allow us to scale the worker pool, and even during deployments, when we are spinning up new workers, the parallel work is multiplied. So it would be very handy for us to set a global rate limit on the queues and not have to worry about the number of worker processes.
-
Hi, wondering if there is a way to rate limit a queue, e.g. to 10 jobs per minute? Otherwise, is there a suggested alternative? Thanks!