
feat(server): server info lookup cache #3808

Merged
merged 2 commits into main from gergo/serverInfoCache on Jan 14, 2025
Conversation

gjedlicska (Contributor)

Description & motivation

Changes:

To-do before merge:

Screenshots:

Validation of changes:

Checklist:

  • My pull request follows the guidelines in the Contributing guide.
  • My pull request does not duplicate any other open Pull Requests for the same update/change.
  • My commits are related to the pull request and do not amend unrelated code or documentation.
  • My code follows a similar style to existing code.
  • I have added appropriate tests.
  • I have updated or added relevant documentation.

References

iainsproat (Contributor) left a comment:


Not using Redis is a change in requirements from what was discussed last week. Removing that requirement certainly simplifies things, but what changed that we feel we can remove it?
I'd have preferred to remove Redis from #3791, which would have removed most of the complication with the layered cache.

} from '@/modules/core/repositories/server'
import { db } from '@/db/knex'
import { Resolvers } from '@/modules/core/graph/generated/graphql'
import { LRUCache } from 'lru-cache'
iainsproat (Contributor):

Why use lru-cache if all we care about is ttl?

From lru-cache docs:

This is not primarily a TTL cache, and does not make strong TTL guarantees. There is no preemptive pruning of expired items by default, but you may set a TTL on the cache or on a single set. If you do so, it will treat expired items as missing, and delete them when fetched. If you are more interested in TTL caching than LRU caching, check out @isaacs/ttlcache.
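For reference, a minimal sketch of what the suggested alternative could look like, assuming the same max/ttl values as this PR (the ServerInfo type is stubbed here for illustration):

```ts
import TTLCache from '@isaacs/ttlcache'

// Illustrative stub; the real type lives in the core module
type ServerInfo = { name: string; description: string }

// TTL-first semantics: expired entries are pruned proactively, rather than
// only being treated as missing when fetched, as lru-cache does
const cache = new TTLCache<string, ServerInfo>({ max: 1, ttl: 60 * 1000 })
```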

@@ -22,7 +29,11 @@ const getPublicScopes = getPublicScopesFactory({ db })
 export = {
   Query: {
     async serverInfo() {
-      return await getServerInfo()
+      const cachedServerInfo = getServerInfoFromCache()
iainsproat (Contributor):

Why would we place this complexity in the caller? All callers will now have to implement the caching get/set calls.
getServerInfo is widespread throughout the codebase; if we want to get the full benefit of the cache we need to apply it wherever getServerInfo is called.
Better to move the complexity out of the caller and into the factory.
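A rough sketch of that suggestion, with caching folded into the factory so callers stay unaware of it (loadServerInfo and the stub types here are illustrative stand-ins, not the actual repository API):

```ts
import { LRUCache } from 'lru-cache'

// Illustrative stand-ins for the real types
type ServerInfo = { name: string; description: string }
type GetServerInfo = () => Promise<ServerInfo>

const SERVER_INFO_KEY = 'server-info'

export const getServerInfoFactory = (deps: {
  loadServerInfo: GetServerInfo // e.g. the repository function hitting the DB
  cache: LRUCache<string, ServerInfo>
}): GetServerInfo => {
  return async () => {
    // Cache hit: no DB round trip, and the caller never sees the cache
    const cached = deps.cache.get(SERVER_INFO_KEY)
    if (cached) return cached

    const serverInfo = await deps.loadServerInfo()
    deps.cache.set(SERVER_INFO_KEY, serverInfo)
    return serverInfo
  }
}
```

Every existing call site that already does getServerInfo() would then get the cache for free, which addresses the "widespread throughout the codebase" concern.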


const cache = new LRUCache<string, ServerInfo>({ max: 1, ttl: 60 * 1000 })
iainsproat (Contributor):

Defining it here means we will have a separate cache (and a potentially different response) everywhere that getServerInfo is called. Better to move this into a shared location.
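One way to read "shared location": a single module-level instance that every factory receives, e.g.:

```ts
// serverInfoCache.ts — hypothetical shared module name
import { LRUCache } from 'lru-cache'
import type { ServerInfo } from './types' // wherever the real type lives

// One process-wide instance: every importer sees the same cache, so all
// getServerInfo call sites agree on the cached value
export const serverInfoCache = new LRUCache<string, ServerInfo>({
  max: 1,
  ttl: 60 * 1000
})
```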

@@ -58,6 +69,9 @@ export = {

       const update = removeNullOrUndefinedKeys(args.info)
       await updateServerInfo(update)
+      // we're currently going to ignore that this should be propagated to all
+      // backend instances, and rely on the TTL in the cache to propagate the changes
iainsproat (Contributor):

We spoke last week of using Redis to minimise split-brain effects. What changed?
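For context, a hedged sketch of what Redis-based invalidation could look like, so instances drop their local copy immediately instead of waiting out the TTL (the channel name and wiring are assumptions, using ioredis):

```ts
import Redis from 'ioredis'

const CHANNEL = 'server-info-invalidated' // assumed channel name

// Publisher side: call after updateServerInfo succeeds
export const publishServerInfoInvalidation = async (redis: Redis) => {
  await redis.publish(CHANNEL, Date.now().toString())
}

// Subscriber side: each backend instance evicts its local cache on message.
// Note: a connection in subscriber mode can't issue other commands, so this
// needs a dedicated Redis connection.
export const subscribeToServerInfoInvalidation = async (
  redis: Redis,
  onInvalidate: () => void
) => {
  await redis.subscribe(CHANNEL)
  redis.on('message', (channel) => {
    if (channel === CHANNEL) onInvalidate()
  })
}
```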

return serverInfo ?? null
}

export const storeServerInfoInCacheFactory =
iainsproat (Contributor):

There are no guarantees that storeServerInfoInCacheFactory is interacting with the same cache that getServerInfoFromCacheFactory is. Ensuring they match becomes yet another responsibility of the caller, placing even more complexity there.
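A sketch of one fix: construct both functions from a single factory so they necessarily close over the same cache (function names mirror the PR; the wrapper factory itself is hypothetical):

```ts
import { LRUCache } from 'lru-cache'

type ServerInfo = { name: string; description: string } // illustrative stub
const SERVER_INFO_KEY = 'server-info'

// Both functions share one cache instance by construction, so the caller
// can no longer pair a getter with the wrong setter
export const serverInfoCacheFactory = (deps: {
  cache: LRUCache<string, ServerInfo>
}) => ({
  getServerInfoFromCache: (): ServerInfo | null =>
    deps.cache.get(SERVER_INFO_KEY) ?? null,
  storeServerInfoInCache: (params: { serverInfo: ServerInfo }): void => {
    deps.cache.set(SERVER_INFO_KEY, params.serverInfo)
  }
})
```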

gjedlicska merged commit 9636a56 into main on Jan 14, 2025
26 of 28 checks passed
gjedlicska deleted the gergo/serverInfoCache branch on January 14, 2025 at 12:02