test: add benchmark for peerQueue #204

Open · wants to merge 1 commit into base: main
58 changes: 58 additions & 0 deletions p2p/peer_stats_test.go
@@ -3,9 +3,12 @@ package p2p
import (
"container/heap"
"context"
"fmt"
"math/rand"
"testing"
"time"

"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/stretchr/testify/require"
)
@@ -110,3 +113,58 @@ func Test_StatDecreaseScore(t *testing.T) {
	pStats.decreaseScore()
	require.Equal(t, pStats.score(), float32(80.0))
}

// BenchmarkPeerQueue/push_for_10-14 2373187 489.3 ns/op 448 B/op 8 allocs/op
// BenchmarkPeerQueue/pop_for_10-14 1561532 761.8 ns/op 800 B/op 10 allocs/op
// BenchmarkPeerQueue/push_for_100-14 253057 4590 ns/op 2368 B/op 11 allocs/op
// BenchmarkPeerQueue/pop_for_100-14 151729 7503 ns/op 8000 B/op 100 allocs/op
// BenchmarkPeerQueue/push_for_1000-14 25915 46220 ns/op 17728 B/op 14 allocs/op
// BenchmarkPeerQueue/pop_for_1000-14 15207 75764 ns/op 80000 B/op 1000 allocs/op
// BenchmarkPeerQueue/push_for_1000000-14 15 69555594 ns/op 44948820 B/op 41 allocs/op
// BenchmarkPeerQueue/pop_for_1000000-14 15 75170956 ns/op 80000032 B/op 1000000 allocs/op
func BenchmarkPeerQueue(b *testing.B) {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
	defer cancel()

	peers := [][]*peerStat{
		generatePeerStats(b, 10),
		generatePeerStats(b, 100),
		generatePeerStats(b, 1000),
		generatePeerStats(b, 1000000),
	}

	for _, peerStats := range peers {
		var queue *peerQueue
Contributor:
I think declaring queue inside every subtest would be better, so it can't be misused in the future (for example, if one subtest is moved before another); a sketch of this follows after the diff.

b.Run(fmt.Sprintf("push for %d", len(peerStats)), func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
queue = newPeerQueue(ctx, peerStats)
Contributor:
Initially I thought this was a bad function to benchmark, but it turns out to be interesting.

We create a new peerQueue for every session, and a session is created for every GetRangeByHeight request. Here is the thing: newPeerQueue initializes the priority queue via heap.Init, which builds the heap in an optimal way (O(N) instead of O(N*logN), to be precise). But we aren't using it correctly: we initialize the heap with an empty slice and push elements one by one, which results in O(N*logN) work. (A standalone illustration of the two patterns follows after the diff.)

If we change newPeerStats to something like:

func newPeerStats(peers []*peerStat) peerStats {
	ps := make(peerStats, len(peers))
	copy(ps, peers)
	heap.Init(&ps)
	return ps
}

We can get the following results:

% go-perftuner bstat acopy.txt bcopy.txt 
args: [acopy.txt bcopy.txt]

name                           old time/op    new time/op    delta
PeerQueue/push_for_10-10          784ns ± 0%     546ns ± 1%  -30.31%  (p=0.004 n=5+6)
PeerQueue/push_for_100-10        7.86µs ± 1%    5.95µs ± 0%  -24.29%  (p=0.004 n=6+5)
PeerQueue/push_for_1000-10       84.5µs ± 0%    59.3µs ± 0%  -29.80%  (p=0.002 n=6+6)
PeerQueue/push_for_1000000-10     116ms ± 2%     134ms ± 0%  +14.66%  (p=0.004 n=6+5)

name                           old alloc/op   new alloc/op   delta
PeerQueue/push_for_10-10           448B ± 0%      280B ± 0%  -37.50%  (p=0.002 n=6+6)
PeerQueue/push_for_100-10        2.37kB ± 0%    1.10kB ± 0%  -53.72%  (p=0.002 n=6+6)
PeerQueue/push_for_1000-10       17.7kB ± 0%     8.4kB ± 0%  -52.66%  (p=0.002 n=6+6)
PeerQueue/push_for_1000000-10    44.9MB ± 0%     8.0MB ± 0%  -82.19%  (p=0.002 n=6+6)

name                           old allocs/op  new allocs/op  delta
PeerQueue/push_for_10-10           8.00 ± 0%      4.00 ± 0%  -50.00%  (p=0.002 n=6+6)
PeerQueue/push_for_100-10          11.0 ± 0%       4.0 ± 0%  -63.64%  (p=0.002 n=6+6)
PeerQueue/push_for_1000-10         14.0 ± 0%       4.0 ± 0%  -71.43%  (p=0.002 n=6+6)
PeerQueue/push_for_1000000-10      41.0 ± 0%       4.0 ± 0%  -90.24%  (p=0.002 n=6+6)

(No idea yet why pushing 1M peers resulted in +15% time.)

			}
		})

		b.Run(fmt.Sprintf("pop for %d", len(peerStats)), func(b *testing.B) {
			b.ReportAllocs()
			for i := 0; i < b.N; i++ {
				for range peerStats {
					_ = queue.waitPop(ctx)
				}
			}
		})
	}
}

func generatePeerStats(b *testing.B, number int) []*peerStat {
	stats := make([]*peerStat, number)
	for i := range stats {
		priv, _, err := crypto.GenerateKeyPair(crypto.Ed25519, 256)
		require.NoError(b, err)
		id, err := peer.IDFromPrivateKey(priv)
		require.NoError(b, err)
		stats[i] = &peerStat{
			peerID:    id,
			peerScore: rand.Float32(),
		}
	}
	return stats
}
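
As a concrete version of the first review comment above, here is a minimal sketch (not part of the PR) of a pop subtest that builds its own queue; b.StopTimer and b.StartTimer keep queue construction out of the measured time, and every iteration pops from a freshly built queue:

	for _, peerStats := range peers {
		b.Run(fmt.Sprintf("pop for %d", len(peerStats)), func(b *testing.B) {
			b.ReportAllocs()
			for i := 0; i < b.N; i++ {
				b.StopTimer()
				// Build the queue locally so reordering or removing other
				// subtests cannot leave this one popping from a stale queue.
				queue := newPeerQueue(ctx, peerStats)
				b.StartTimer()
				for range peerStats {
					_ = queue.waitPop(ctx)
				}
			}
		})
	}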
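
The second review comment contrasts pushing elements one by one (O(N*logN)) with heapifying a pre-filled slice via heap.Init (O(N)). Here is a self-contained illustration of the two patterns using only the standard container/heap package; the intHeap type is hypothetical and exists only for this sketch:

package main

import (
	"container/heap"
	"fmt"
)

// intHeap is a minimal heap.Interface implementation, used only for this illustration.
type intHeap []int

func (h intHeap) Len() int           { return len(h) }
func (h intHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h intHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *intHeap) Push(x any)        { *h = append(*h, x.(int)) }
func (h *intHeap) Pop() any {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

func main() {
	values := []int{5, 2, 9, 1, 7}

	// Pattern the benchmark currently exercises (per the review comment):
	// start from an empty heap and push elements one at a time, which costs
	// O(N*logN) and may grow the backing slice several times.
	slow := &intHeap{}
	for _, v := range values {
		heap.Push(slow, v)
	}

	// Suggested pattern: fill the backing slice first, then heapify once with
	// heap.Init, which costs O(N) and allocates the slice a single time.
	fast := make(intHeap, len(values))
	copy(fast, values)
	heap.Init(&fast)

	fmt.Println(heap.Pop(slow), heap.Pop(&fast)) // both print the minimum: 1 1
}

This mirrors the reviewer's proposed newPeerStats change, where the peers are copied into the backing slice before a single heap.Init call.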