On Mon, Aug 3, 2020 at 8:24 AM Dale <rdalek1967@×××××.com> wrote:
>
> In the past, I've never seen the drive on the larger files be that
> slow even toward the end. Generally, it stays pretty close to
> 180MB/sec or so, which is what I usually get with PMR drives.
|
Yeah, it's just hard to be certain without ditching the filesystem
layer or doing some kind of comparison. The difference in write speed
across the platter is more pronounced on the recent drives I've gotten
than on anything I've seen before, but these drives also tend to be
around 12TB.
|
The thing to watch out for is a big drop in transfer rate once a large
number of blocks have been written continuously, with performance
returning after you let the drive catch up for a while. I've seen
complaints of zfs rebuilds going from hours or days to weeks or
months, so it isn't just a 50% drop when you're doing worst-case
access patterns. On the other hand, I hear that mdadm isn't so bad:
if the writes are sequential the drive may be better at skipping the
cache, and zfs presumably just does its rebuild non-sequentially
(which isn't really ideal anyway).
|
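A rough way to spot that behavior is to time sequential writes in
fixed-size chunks and watch for the throughput to collapse partway
through. Just a sketch: the scratch path and chunk sizes below are
placeholders, and a real test would need to write well past the
drive's CMR cache region (often tens of GB), not the token amount
shown here.

```shell
# Append fixed-size chunks to a scratch file and print the throughput
# dd reports for each chunk.  On a drive-managed SMR disk, the rate
# stays high until the CMR cache fills, then drops sharply.
# TESTFILE and the sizes are placeholders for illustration.
TESTFILE=/tmp/smr-probe.bin
for chunk in 1 2 3; do
    dd if=/dev/zero of="$TESTFILE" bs=1M count=64 \
       oflag=append conv=notrunc 2>&1 | tail -n 1
done
rm -f "$TESTFILE"
```

Writing through the filesystem like this still has the caching layers
in the way, of course; the pattern is what matters, not the absolute
numbers.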
I haven't really dug into the guts of how zfs metadata works, but in
btrfs the chunks are basically their own layer, and the filesystem can
scrub them without caring which files are stored in them. That allows
them to be scrubbed sequentially. When I did rebuilds on btrfs they
tended to run at about the maximum throughput of the drives as long as
there wasn't any other access going on. Btrfs can also do read-only
scrubs that check data integrity sequentially across the disk, which
suggests the checksums are stored at that lower layer, so the data can
be verified without worrying about file fragmentation and so on. This
layering also lets btrfs switch "RAID modes" on the fly, with half the
disk being RAID1 and half being RAID5: each region of the disk is
independent of the others, so a mode change only affects new regions
until you do a full rebalance to rewrite everything. Of course, zfs
has its own advantages.
|
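For reference, those operations map onto btrfs-progs commands roughly
like this. /mnt/pool is an assumed mount point, and the snippet just
prints a note when btrfs isn't available, since scrub and balance need
root and a real btrfs filesystem:

```shell
POOL=/mnt/pool   # assumed mount point of a btrfs filesystem

if command -v btrfs >/dev/null 2>&1 && [ -d "$POOL" ]; then
    # Read-only scrub: walks the chunks and verifies checksums without
    # rewriting anything (-B runs it in the foreground, -r = read-only).
    btrfs scrub start -B -r "$POOL"

    # Full rebalance converting data chunks to raid1: existing chunks
    # get rewritten with the new profile, not just new allocations.
    btrfs balance start -dconvert=raid1 "$POOL"
else
    echo "btrfs not available here; commands shown for reference only"
fi
```

Without the -dconvert balance, only chunks allocated after the profile
change use the new mode, which is the mixed-mode situation described
above.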
--
Rich