filters/auth: use sync.Map for tokeninfo cache #3267
Conversation
AlexanderYastrebov commented on Oct 9, 2024
- use sync.Map for tokeninfo cache to avoid synchronizing all callers on a single mutex
- evict stale entries periodically instead of least recently used
- store token expiration time instead of creation time
Force-pushed from 67cd561 to 813bf2a
```go
info: info,
href: c.history.PushFront(token),
```

```go
func (c *tokeninfoCache) evictLoop() {
	ticker := time.NewTicker(time.Minute)
```
Does it make sense to make this `time.Minute` configurable in a follow-up PR?
I am not sure which other value would be good.
```go
mu    sync.Mutex
cache map[string]*entry

// least recently used token at the end
history *list.List
```
I see you would like to keep the tokeninfo filter lock-free, which is why you are considering dropping the access history.
However, there is a data structure that acts like a list and has lock-free implementations (at least in theory): the skip list, https://en.wikipedia.org/wiki/Skip_list
Do you think it makes sense to try this data structure, to get lock-free history access and avoid evicting random items?
The history was used to evict the oldest entries when the cache grew over the limit.
This change simplifies that: when the number of cached items exceeds the size limit, random items are removed instead.
It also adds a metric for monitoring.
In a production setup, the cache size should be set large enough that entries are never evicted due to overflow. Then there is no need to complicate the eviction algorithm, keep an access history, or use clever data structures.
Force-pushed from 813bf2a to 818dd4d
Force-pushed from 818dd4d to c429f5e
I hope my review makes sense.
Happy to see it happen!
* use sync.Map for tokeninfo cache to avoid synchronizing all callers on a single mutex
* evict stale entries periodically instead of least recently used
* store token expiration time instead of creation time

```
                                                  │     master      │                HEAD                 │
                                                  │     sec/op      │    sec/op     vs base               │
TokeninfoCache/tokens=1,cacheSize=1,p=0-8               275.5n ±  6%   170.1n ± 4%   -38.26% (p=0.000 n=10)
TokeninfoCache/tokens=2,cacheSize=2,p=0-8               492.9n ± 21%   176.8n ± 2%   -64.12% (p=0.000 n=10)
TokeninfoCache/tokens=100,cacheSize=100,p=0-8           455.9n ±  7%   165.5n ± 1%   -63.70% (p=0.000 n=10)
TokeninfoCache/tokens=100,cacheSize=100,p=10000-8       593.4n ±  4%   179.8n ± 4%   -69.71% (p=0.000 n=10)
TokeninfoCache/tokens=4,cacheSize=2,p=0-8           2571424.0n ±  0%   149.7n ± 3%   -99.99% (p=0.000 n=10)
TokeninfoCache/tokens=100,cacheSize=10,p=0-8        2579227.5n ±  0%   139.3n ± 1%   -99.99% (p=0.000 n=10)
geomean                                                 7.903µ         162.9n        -97.94%

                                                  │    master    │                 HEAD                  │
                                                  │     B/op     │     B/op      vs base                 │
TokeninfoCache/tokens=1,cacheSize=1,p=0-8             344.0 ± 0%    344.0 ± 0%          ~ (p=1.000 n=10) ¹
TokeninfoCache/tokens=2,cacheSize=2,p=0-8             344.0 ± 0%    344.0 ± 0%          ~ (p=1.000 n=10) ¹
TokeninfoCache/tokens=100,cacheSize=100,p=0-8         344.0 ± 0%    344.0 ± 0%          ~ (p=1.000 n=10) ¹
TokeninfoCache/tokens=100,cacheSize=100,p=10000-8     368.0 ± 1%    350.0 ± 0%      -4.89% (p=0.000 n=10)
TokeninfoCache/tokens=4,cacheSize=2,p=0-8             27.00 ± 0%   344.00 ± 0%   +1174.07% (p=0.000 n=10)
TokeninfoCache/tokens=100,cacheSize=10,p=0-8          27.00 ± 7%   344.00 ± 0%   +1174.07% (p=0.000 n=10)
geomean                                               149.0         345.0         +131.62%
¹ all samples are equal

                                                  │    master    │               HEAD                │
                                                  │  allocs/op   │  allocs/op   vs base              │
TokeninfoCache/tokens=1,cacheSize=1,p=0-8             3.000 ± 0%    3.000 ± 0%   ~ (p=1.000 n=10) ¹
TokeninfoCache/tokens=2,cacheSize=2,p=0-8             3.000 ± 0%    3.000 ± 0%   ~ (p=1.000 n=10) ¹
TokeninfoCache/tokens=100,cacheSize=100,p=0-8         3.000 ± 0%    3.000 ± 0%   ~ (p=1.000 n=10) ¹
TokeninfoCache/tokens=100,cacheSize=100,p=10000-8     3.000 ± 0%    3.000 ± 0%   ~ (p=1.000 n=10) ¹
TokeninfoCache/tokens=4,cacheSize=2,p=0-8             0.000 ± 0%    3.000 ± 0%   ? (p=0.000 n=10)
TokeninfoCache/tokens=100,cacheSize=10,p=0-8          0.000 ± 0%    3.000 ± 0%   ? (p=0.000 n=10)
geomean                                                        ²    3.000        ?
¹ all samples are equal
² summaries must be >0 to compute geomean
```

Signed-off-by: Alexander Yastrebov <alexander.yastrebov@zalando.de>
Force-pushed from c429f5e to 294eaa7
👍

1 similar comment

👍