
Improve cleanup #15

Open
ckcr4lyf opened this issue Jan 30, 2023 · 4 comments

Comments

@ckcr4lyf (Owner)

The current cleanup has a 0.1% chance of running per announce:

kiryuu/src/main.rs

Lines 171 to 176 in 9d45a11

// 0.1% chance to trigger a clean
let chance = rand::thread_rng().gen_range(0..1000);
if chance == 0 {
    post_announce_pipeline.cmd("ZREMRANGEBYSCORE").arg(&seeders_key).arg(0).arg(max_limit).ignore();
    post_announce_pipeline.cmd("ZREMRANGEBYSCORE").arg(&leechers_key).arg(0).arg(max_limit).ignore();
}

This affects unpopular torrents where the swarm dies off. E.g. in this one, the total (including stale) count in Redis is 345, whereas the active (last 30 min) count is just 8!

$ redis-cli
127.0.0.1:6379> ZCOUNT '4b0a62cb3d03d6e27a3878293ec9233b09254d74_seeders' 0 +inf
(integer) 345
127.0.0.1:6379> ZCOUNT '4b0a62cb3d03d6e27a3878293ec9233b09254d74_seeders' 1675089032225 1675090832225
(integer) 8

The bloated ZSETs add CPU cost to every announce, yet at ~8 announces per 30 min a 1-in-1000 trigger fires on average only once every ~2.6 days, so stale entries linger; a more regular (scheduled) cleanup might be better for less popular torrents.

@ckcr4lyf (Owner, Author)

From cleaning all torrents:

image

  • Seeders: 60 MiB
  • Leechers: 100 MiB

Total freed: 160/460 MiB = ~35%!

@ckcr4lyf (Owner, Author)

image

On the first (seeder) cleanup I killed metrics, so there is a gap. But the leecher cleanup shows no noticeable change in announces per minute or latency during the cleaning.

@ckcr4lyf (Owner, Author)

ckcr4lyf commented Feb 3, 2023

Cleanup has been added to kouko; it would be better if it lived within kiryuu (kouko is TS/JS, so even SCAN + loops are heavy on CPU).

@ckcr4lyf (Owner, Author)

We've added some cleanup to kiryuu in the postgres branch:

kiryuu/src/bin/clean.rs

Lines 47 to 48 in 88388cf

let (skey, lkey, ckey) = byte_functions::make_redis_keys(&infohash);
let cmd: bool = redis::cmd("DEL").arg(&skey).arg(&lkey).arg(&ckey).query_async(&mut redis_connection).await.expect("fucc");

However, this does not delete old peers from active torrents!

E.g., running the postgres version on kiryuu-test.mywaify.best, we see a large ZSET:

Biggest   zset found '"378aeb09a1b57b4f753b3c59f9976c198d5593db_seeders"' has 9485 members

But if we get the members, then we see there are some that last announced 10 days ago!

127.0.0.1:6379> ZRANGE "378aeb09a1b57b4f753b3c59f9976c198d5593db_seeders" -inf +inf BYSCORE LIMIT 0 10 WITHSCORES
 1) "REDACTED"
 2) "1730257691922"

If we ZCOUNT all vs. last 30 min:

127.0.0.1:6379> ZCOUNT "378aeb09a1b57b4f753b3c59f9976c198d5593db_seeders" -inf +inf
(integer) 9509
127.0.0.1:6379> ZCOUNT "378aeb09a1b57b4f753b3c59f9976c198d5593db_seeders" 1731274069000 +inf
(integer) 223

~97.6% of the space is wasted!

127.0.0.1:6379> MEMORY USAGE "378aeb09a1b57b4f753b3c59f9976c198d5593db_seeders"
(integer) 847028

In this particular example it turns out to be ~827 kB wasted.
