feat(server): server info lookup cache #3808
Conversation
Not using Redis is a change in requirements from what was discussed last week. Removing that requirement certainly simplifies things, but what changed that we feel we can remove it?
I'd have preferred to remove Redis from #3791, which would have removed most of the complication with the layered cache.
} from '@/modules/core/repositories/server'
import { db } from '@/db/knex'
import { Resolvers } from '@/modules/core/graph/generated/graphql'
import { LRUCache } from 'lru-cache'
Why use lru-cache if all we care about is TTL?
From the lru-cache docs:
This is not primarily a TTL cache, and does not make strong TTL guarantees. There is no preemptive pruning of expired items by default, but you may set a TTL on the cache or on a single set. If you do so, it will treat expired items as missing, and delete them when fetched. If you are more interested in TTL caching than LRU caching, check out @isaacs/ttlcache.
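If only expiry matters, the TTL-first sibling library is the closer fit. A minimal sketch, assuming @isaacs/ttlcache with the same one-entry, 60-second setup as this PR and an illustrative ServerInfo shape (not the PR's actual types):

```ts
import TTLCache from '@isaacs/ttlcache'

// Illustrative shape; the PR uses its own ServerInfo type.
type ServerInfo = { name: string; version: string }

// TTL-first cache: expired entries are treated as missing and pruned,
// with no LRU bookkeeping needed for a single entry.
const cache = new TTLCache<string, ServerInfo>({ max: 1, ttl: 60 * 1000 })

cache.set('serverInfo', { name: 'speckle', version: '2.x' })
cache.get('serverInfo') // the entry, or undefined once 60s have elapsed
```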
@@ -22,7 +29,11 @@ const getPublicScopes = getPublicScopesFactory({ db })
 export = {
   Query: {
     async serverInfo() {
-      return await getServerInfo()
+      const cachedServerInfo = getServerInfoFromCache()
Why would we place this complexity in the caller? All callers will now have to implement the caching get/set calls.
getServerInfo is widespread throughout the codebase; if we want to get the full benefit of the cache we need to apply it wherever getServerInfo is called.
Better to move the complexity out of the caller and into the factory.
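As a sketch of that suggestion, with hypothetical names (getServerInfoFactory, loadServerInfo) and an illustrative ServerInfo shape rather than the PR's actual code, the factory could close over the cache so call sites stay unchanged:

```ts
import { LRUCache } from 'lru-cache'

// Illustrative shape; the PR uses its own ServerInfo type.
type ServerInfo = { name: string; version: string }

// Hypothetical factory: the cache hides behind the returned function,
// so every existing call site gets caching without modification.
export const getServerInfoFactory =
  (deps: {
    cache: LRUCache<string, ServerInfo>
    loadServerInfo: () => Promise<ServerInfo> // e.g. the DB-backed lookup
  }) =>
  async (): Promise<ServerInfo> => {
    const cached = deps.cache.get('serverInfo')
    if (cached) return cached
    const fresh = await deps.loadServerInfo()
    deps.cache.set('serverInfo', fresh)
    return fresh
  }
```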
const cache = new LRUCache<string, ServerInfo>({ max: 1, ttl: 60 * 1000 })
Defining it here means we will have a separate cache (and a potentially different response) everywhere that getServerInfo is called. Better to move this into a shared location.
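One way to read that, sketched with a hypothetical module path: construct the cache once in a shared module, so every importer sees the same instance and the same entries:

```ts
// e.g. modules/core/utils/serverInfoCache.ts (hypothetical location)
import { LRUCache } from 'lru-cache'
import type { ServerInfo } from '@/modules/core/graph/generated/graphql' // assumed export

// One shared instance: every importer reads and writes the same cache,
// so all resolvers serve a consistent response.
export const serverInfoCache = new LRUCache<string, ServerInfo>({
  max: 1,
  ttl: 60 * 1000
})
```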
@@ -58,6 +69,9 @@ export = {
       const update = removeNullOrUndefinedKeys(args.info)
       await updateServerInfo(update)
+      // we're currently going to ignore that this should be propagated to all
+      // backend instances, and rely on the TTL in the cache to propagate the changes
We spoke last week of using Redis to minimise split-brain effects. What changed?
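For context, the Redis-backed invalidation under discussion might look roughly like this; a sketch assuming ioredis and a hypothetical channel name, not code from this PR. On update, the mutating instance publishes an invalidation so every backend drops its local entry immediately instead of waiting out the TTL:

```ts
import Redis from 'ioredis'
import { LRUCache } from 'lru-cache'

// Illustrative shape; the PR uses its own ServerInfo type.
type ServerInfo = { name: string; version: string }

const cache = new LRUCache<string, ServerInfo>({ max: 1, ttl: 60 * 1000 })
const pub = new Redis()
const sub = new Redis()

// Every backend instance listens and drops its local copy on invalidation.
await sub.subscribe('serverInfo:invalidate') // hypothetical channel name
sub.on('message', (channel) => {
  if (channel === 'serverInfo:invalidate') cache.delete('serverInfo')
})

// The instance handling the mutation broadcasts after the DB write.
export const invalidateServerInfo = async () => {
  cache.delete('serverInfo')
  await pub.publish('serverInfo:invalidate', Date.now().toString())
}
```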
return serverInfo ?? null
}
export const storeServerInfoInCacheFactory = |
There are no guarantees that storeServerInfoInCacheFactory is interacting with the same cache that getServerInfoFromCacheFactory is.
That now becomes another responsibility of the caller, and places more complexity in the caller.
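A minimal sketch of one fix, using the PR's factory names but illustrative wiring: inject a single cache instance into both factories at a composition root, so sharing is guaranteed by construction rather than left to each caller:

```ts
import { LRUCache } from 'lru-cache'

// Illustrative shape; the PR uses its own ServerInfo type.
type ServerInfo = { name: string; version: string }

const getServerInfoFromCacheFactory =
  (deps: { cache: LRUCache<string, ServerInfo> }) => (): ServerInfo | null =>
    deps.cache.get('serverInfo') ?? null

const storeServerInfoInCacheFactory =
  (deps: { cache: LRUCache<string, ServerInfo> }) =>
  (serverInfo: ServerInfo): void => {
    deps.cache.set('serverInfo', serverInfo)
  }

// Composition root: both functions provably share the one cache instance.
const cache = new LRUCache<string, ServerInfo>({ max: 1, ttl: 60 * 1000 })
export const getServerInfoFromCache = getServerInfoFromCacheFactory({ cache })
export const storeServerInfoInCache = storeServerInfoInCacheFactory({ cache })
```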