In certain configurations, Ceph stresses disks in a fairly unique way. Ceph can offload its journal to a separate disk, and a common configuration is one SSD or NVMe drive serving as the journal device for multiple other disks.
This journal device holds a full data journal, and the workload is a series of small sequential writes with a sync between each of them.
Some SSDs that usually perform well are really bad at this workload. I've seen NVMe SSDs that achieved fewer than 100 IOPS and then even died after a few days.
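To make that pattern concrete, here is a rough illustration (not a proper benchmark) using plain dd: `oflag=direct,dsync` forces every 4k write to be synced to stable storage before the next one is issued, which is roughly what the journal does. `/dev/sdX` is a placeholder for a scratch device, and the command is destructive.

```sh
# WARNING: destructive; /dev/sdX is a placeholder for a scratch device.
# Issue 10000 sequential 4k writes, each synced before the next (O_DIRECT + O_DSYNC).
dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=direct,dsync
```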
A good way to test this, along with results for a lot of disks, can be found in this blog post:
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
I'd suggest adding the following test (copied from the blog post):
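From memory, the fio invocation in the blog post looks roughly like this; please double-check the exact flags against the linked post. `/dev/sdX` stands for the journal device under test, and the run overwrites it.

```sh
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test
```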
...and vary the --numjobs parameter between 1 and 16, reporting the resulting IOPS.
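A simple way to do that sweep (a sketch, assuming fio's JSON output and jq are available; the exact JSON path may differ between fio versions):

```sh
# Hypothetical sweep over --numjobs=1..16, printing aggregated write IOPS per run.
for jobs in $(seq 1 16); do
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs="$jobs" --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test --output-format=json |
        jq -r --arg n "$jobs" '"numjobs=" + $n + " write IOPS=" + (.jobs[0].write.iops | tostring)'
done
```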
Here are example results for 1 to 10 runs on a 240GB Samsung PM863a:
That's a very nice result.
Now let's compare it to a disk that doesn't do as well: a 128GB Samsung PM961 NVMe disk (which has way better specs).
This is essentially unusable as a journal device for multiple disks, but it's currently not possible to tell from the existing benchmarks.