PR Accept for Resetting the Keys / Removing the Existing Keys #37

Open
romitmisra1 opened this issue May 30, 2019 · 6 comments

@romitmisra1

Hi,
I have started using the module to integrate with our deployment. I am not 100 percent sure, but from the behaviour observed so far, it seems that once a CL.THROTTLE command gets executed, a key entry is made, and subsequent CL.THROTTLE calls use that entry to process the rate limit quota.
I am working on a PR for the following, but wanted to discuss beforehand whether I have interpreted this correctly:

1. Removing a key altogether
2. Resetting / updating a key with a new rate limit tuple

Looking forward to your response
Thanks
Romit

@brandur
Owner

brandur commented May 30, 2019

Hi Romit,

I am not 100 percent sure, but from the behaviour observed so far, it seems that once a CL.THROTTLE command gets executed, a key entry is made, and subsequent CL.THROTTLE calls use that entry to process the rate limit quota.

Yes, sort of. The library uses a pretty novel algorithm called GCRA. What it ends up storing is a "TAT", or theoretical arrival time. Check out the implementation for full details, but it's pretty easy to see this at work for yourself.

# build
$ cargo build --release

# start Redis (extension may depend on your OS -- this is for Mac)
$ redis-server --loadmodule target/release/libredis_cell.dylib

Then from another console:

$ redis-cli

127.0.0.1:6379> cl.throttle user123 5 5 3600
1) (integer) 0
2) (integer) 6
3) (integer) 5
4) (integer) -1
5) (integer) 720

127.0.0.1:6379> keys '*'
1) "user123"

127.0.0.1:6379> get user123
"1559260935732768000"

1. Removing a key altogether

Only one value is stored, so it's easy to do with the built-in DEL:

127.0.0.1:6379> del user123
(integer) 1

2. Resetting / updating a key with a new rate limit tuple

If the intent here is to change a user's rate, I'd recommend the simpler option of just deleting their key and then issuing new CL.THROTTLE commands with the new rate.
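
For example, to reset the key from the demo above and move it to a higher rate (the key name and numbers are just illustrative, and the later reply values will vary with timing):

127.0.0.1:6379> del user123
(integer) 1

127.0.0.1:6379> cl.throttle user123 10 10 3600
1) (integer) 0
2) (integer) 11
3) (integer) 10
4) (integer) -1
5) (integer) 360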

@romitmisra1
Author

Thanks a lot, that makes more sense and is much cleaner and simpler.
I tried to deploy Redis in a master and slave topology (1 master and 2 slaves) with the module.
I was able to execute a whopping 40k requests per second on a SINGLE KEY (CL.THROTTLE invocations).
I was expecting the commands to be synced to the slave nodes as well, but the only command that ever synced was DEL (delete key, when the TTL expired). (I used the MONITOR command on the slaves to see the stream of incoming events.)
Is the replication disabled by design, or has it been disabled in the production code?
My intent is to use the slave as a standby in a separate fault domain, since this layer serves a critical data path.
In the above case, as soon as the switchover happens, the CL.THROTTLE commands would start hitting the new master (the slave that got promoted to master).
I just want to understand why we do not sync keys to the slaves.

@brandur
Owner

brandur commented Jun 9, 2019

Is the replication disabled by design, or has it been disabled in the production code?

Hey @romitmisra1 ,

The results of CL.THROTTLE should be getting replicated out to replicas. You can see the specific code that causes this here, and the Redis Module API function that's getting invoked here. It tells a cluster to invoke the command as is on replicas with exactly the same args. This code was contributed to the project by a user who was using the library in clustered mode, so I would suspect that it works.

Can you double-check your setup and take a look at your logs on the master and each replica to make sure there's nothing that looks amiss? One failure case that I can think of, for example, would be if the redis-cell plugin was loaded on the master but not on the replicas (I'm not sure exactly what would happen in that case, but there's a decent chance it would result in the same symptoms that you're seeing).
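
A quick way to rule that out is to run MODULE LIST against every node and make sure redis-cell shows up on each one (the ports below are just examples -- use whatever your master and replicas actually listen on):

# the master and each replica should all list the module
$ redis-cli -p 6379 module list
$ redis-cli -p 6380 module list
$ redis-cli -p 6381 module list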

If everything looks fine and you can come up with a basic set of commands to bring up a cluster with a similar configuration to yours, I'll dig in and see if I can find what's not working.
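
Something along these lines would probably be enough for a local repro (ports and the module path are illustrative, and the extension again depends on your OS):

# master with the module loaded
$ redis-server --port 6379 --loadmodule target/release/libredis_cell.dylib

# replica with the module loaded, pointing at the master
# (use --slaveof instead of --replicaof on Redis versions before 5)
$ redis-server --port 6380 --loadmodule target/release/libredis_cell.dylib --replicaof 127.0.0.1 6379

# issue a throttle on the master...
$ redis-cli -p 6379 cl.throttle user123 5 5 3600

# ...and watch what arrives on the replica
$ redis-cli -p 6380 monitor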

@romitmisra1
Author

Thanks Brandur, I have already started debugging the code and will get back with detailed logs.
I will see whether this is a config issue or some edge case that I triggered. I will keep you posted, and thanks for the reply.

@brandur
Owner

brandur commented Jun 9, 2019

Sounds good! Thanks.

@jmealo

jmealo commented Apr 13, 2020

@romitmisra1: I hope that you are well. Could you give us an update on how this worked out for you?
