We currently have a ZSET called `TORRENTS` which stores the infohash of "active" torrents, with the Unix millisecond timestamp as the "score".
In MyWaifu (production) this ZSET has 1.6 million members (active torrents). By comparison, the other ZSETs, e.g. those storing seeders / leechers, usually have under 500 members.
This allows us to calculate the number of active torrents (e.g. via `ZCOUNT`).
Currently, we modify this ZSET in the following ways:
1. Every announce results in a `ZADD TORRENTS Date.now() infohash` (O(log N)).
2. Every 15 seconds (Prometheus scrape) we run `ZCOUNT TORRENTS Date.now() - 30 min Date.now()` to get a count of active torrents (O(log N)).
3. Every 30 minutes we run `ZRANGEBYSCORE TORRENTS 0 Date.now() - 31 min` to get all the stale infohashes for cleanup (O(log N + M)).
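The semantics of these three operations can be sketched with a tiny in-memory stand-in for the ZSET (a plain `Map` from infohash to last-announce timestamp; all function names here are hypothetical, and unlike a real ZSET this simulation scans linearly rather than using a score-ordered skip list):

```javascript
// In-memory stand-in for the TORRENTS ZSET: infohash -> last-announce timestamp (ms).
const torrents = new Map();

// (1) On every announce: ZADD TORRENTS Date.now() infohash
function announce(infohash, now) {
  torrents.set(infohash, now);
}

// (2) Every 15 s: ZCOUNT TORRENTS (now - 30 min) now
function countActive(now) {
  const cutoff = now - 30 * 60 * 1000;
  let count = 0;
  for (const ts of torrents.values()) {
    if (ts >= cutoff && ts <= now) count++;
  }
  return count;
}

// (3) Every 30 min: ZRANGEBYSCORE TORRENTS 0 (now - 31 min)
function staleInfohashes(now) {
  const cutoff = now - 31 * 60 * 1000;
  const stale = [];
  for (const [infohash, ts] of torrents) {
    if (ts <= cutoff) stale.push(infohash);
  }
  return stale;
}
```

In real Redis, members are kept ordered by score, which is what makes (2) O(log N) and (3) O(log N + M) instead of the full scans above.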
(2) is not much of a problem: it runs only once every 15 seconds and returns just an integer.
(3) is worse: it usually returns ~600k members (MyWaifu in production).
(1) is the worst, especially under load, since it runs on every announce (~1,500 per second!) against a very large ZSET.
Because Redis is single-threaded, operations on the `TORRENTS` ZSET block queries for other data, such as fetching seeders and leechers.
Since the `TORRENTS` ZSET exists only for stats (e.g. the active torrent count) and cleanup (deleting seeder / leecher keys for stale torrents), query performance on these operations does not matter, as long as they do not block the response to an actual announce (HTTP request).
So there would be no penalty to announce performance if we moved them to, e.g., a Postgres DB.
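A Postgres version of this could look roughly like the following (a sketch only; table and column names are hypothetical): one table replacing the ZSET, an index on the timestamp so the count and cleanup queries stay cheap, and an upsert replacing the per-announce `ZADD`:

```sql
-- Hypothetical replacement for the TORRENTS ZSET.
CREATE TABLE torrents (
    infohash      bytea PRIMARY KEY,
    last_announce timestamptz NOT NULL
);
CREATE INDEX torrents_last_announce_idx ON torrents (last_announce);

-- (1) On every announce (ideally batched or async so it never blocks the HTTP response):
INSERT INTO torrents (infohash, last_announce)
VALUES ($1, now())
ON CONFLICT (infohash) DO UPDATE SET last_announce = EXCLUDED.last_announce;

-- (2) Active-torrent count for the Prometheus scrape:
SELECT count(*) FROM torrents
WHERE last_announce >= now() - interval '30 minutes';

-- (3) Stale infohashes for cleanup, deleted and returned in one statement:
DELETE FROM torrents
WHERE last_announce < now() - interval '31 minutes'
RETURNING infohash;
```

Since Postgres queries run on their own connections, none of these would sit in front of the Redis calls that serve announces.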