
From: Sid Spry <sid@××××.us>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] Testing a used hard drive to make SURE it is good.
Date: Tue, 23 Jun 2020 18:44:42
Message-Id: 8be8c914-dccd-4ccb-b6f2-31f1585765a9@www.fastmail.com
In Reply to: Re: [gentoo-user] Testing a used hard drive to make SURE it is good. by Rich Freeman
On Tue, Jun 23, 2020, at 12:20 PM, Rich Freeman wrote:
> On Tue, Jun 23, 2020 at 12:14 PM Sid Spry <sid@××××.us> wrote:
> >
> > So if I'm understanding properly most drive firmware won't let you
> > operate the device in an append-only mode?
>
> So, there are several types of SMR drives.
>
> There are host-managed, drive-managed, and then hybrid devices that
> default to drive-managed for compatibility reasons but the host can
> send them a command to take full control so that it is the same as
> host-managed.
>
> A host-managed drive just does what the host tells it to. If the host
> tells it to do a write that obliterates some other data on the disk,
> the drive just does it, and it is the job of the host
> OS/filesystem/application to make sure that they protect any data they
> care about. At the drive level these perform identically to CMR
> because they just seek and write like any other drive. At the
> application level these could perform differently since the
> application might end up having to work around the drive. However,
> these drives are generally chosen for applications where this is not a
> big problem or where the problems can be efficiently mitigated.
>
> A drive-managed drive just looks like a regular drive to the host, and
> it ends up having to do a lot of read-before-rewrite operations
> because the host is treating it like it is CMR but the drive has to
> guarantee that nothing gets lost. A drive-managed disk has no way to
> operate in "append-only" mode. I'm not an expert in ATA but I believe
> disks are just given an LBA and a set of data to write. Without
> support for TRIM the drive has no way to know if it is safe to
> overwrite nearby cylinders, which means it has to preserve them.
>
Yeah, this is what I was wondering. It looks like there are devices whose
internal management keeps you from using them at their full performance.

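For anyone trying to figure out what they already own: the kernel reports the
zone model it detected in sysfs, so a quick check (my own untested sketch, in
Python for no particular reason) could look like this:

# Rough sketch (untested, my own): print the zone model the kernel detected
# for each block device. /sys/block/<dev>/queue/zoned reads "none",
# "host-aware", or "host-managed" on reasonably recent kernels.
# A purely drive-managed SMR disk still reports "none" -- it hides the
# shingling from the host, which is exactly the problem being discussed.
from pathlib import Path

for zoned in sorted(Path("/sys/block").glob("*/queue/zoned")):
    device = zoned.parts[3]            # e.g. "sda"
    print(device, zoned.read_text().strip())
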
> The biggest problem is that the vendors were trying to conceal the
> nature of the drives. If they advertised TRIM support it would be
> pretty obvious they were SMR.
>

It looks like I was right then. Maybe the market will settle soon and I
will be able to buy properly marked parts. It's a good thing I stumbled
into this; I was going to be buying more storage shortly.

> > If any do I suspect
> > NILFS (https://en.wikipedia.org/wiki/NILFS) may be a good choice:
> >
> > "NILFS is a log-structured filesystem, in that the storage medium is treated
> > like a circular buffer and new blocks are always written to the end. [...]"
> >
>
> On a host-managed disk this would perform the same as on a CMR disk,
> with the possible exception of any fixed metadata (I haven't read the
> gory details on the filesystem). If it has no fixed metadata (without
> surrounding unused tracks) then it would have no issues at all on SMR.
> F2FS takes a similar approach for SSDs, though it didn't really take
> off because ext4's support is good enough and I suspect that modern
> SSDs are fast enough at erasing.
>

There is not really a lot to NILFS' structure save the fact that it doesn't
delete in place. It ends up being fairly similar to f2fs. On an SMR drive
with TRIM this would imply little or no penalty for write operations, since
all writes are actually just appends. I'm not sure what impact it would have
on seek/read time with a normal workload, but some people report slight
improvements on SMR drives, especially helium-filled ones, where the denser
packing and lighter gas lead to increased read speeds.

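To make the append-vs-overwrite point concrete, here is a toy model I sketched
(purely illustrative, nothing like real firmware): tracks in a shingled zone
overlap, so an in-place update forces everything from there to the write
pointer to be read out and rewritten, while appending at the write pointer
costs nothing extra. A log-structured filesystem only ever does the latter,
which is roughly the read-before-rewrite Rich described.

# Toy model, my own sketch -- not how any real firmware or filesystem works.
class ShingledZone:
    def __init__(self, tracks):
        self.data = [None] * tracks
        self.write_pointer = 0
        self.tracks_rewritten = 0      # crude cost counter

    def append(self, value):
        # Writing at the write pointer only smears over empty tracks,
        # so nothing has to be preserved.
        self.data[self.write_pointer] = value
        self.write_pointer += 1

    def overwrite(self, index, value):
        # An in-place write smears into the overlapping tracks after it,
        # so everything from here to the write pointer must be read out
        # and written back.
        preserved = self.data[index + 1:self.write_pointer]
        self.data[index] = value
        self.data[index + 1:self.write_pointer] = preserved
        self.tracks_rewritten += 1 + len(preserved)

zone = ShingledZone(tracks=100)
for i in range(50):
    zone.append(i)                     # log-style appends: no extra cost
zone.overwrite(5, "update")            # one random update rewrites 45 tracks
print(zone.tracks_rewritten)           # -> 45
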
Actually, f2fs may be the better choice; I had almost forgotten about it.
Bigger rollout and more testing. I would need to check the feature sets in
more detail to make a choice.

---

Weirdly, benchmarking tends to show f2fs lagging behind ext4 in most
cases.

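If I do end up comparing them I'll probably just use fio, but for a quick and
dirty sanity check something like this (made-up path and sizes, not a serious
benchmark) shows the sequential-vs-random write gap on whatever filesystem the
file sits on:

# Quick-and-dirty sketch, my own (fio is the proper tool): time sequential
# writes vs. random in-place overwrites of the same file. Put the file on
# the filesystem you want to poke at; path and sizes below are made up.
import os
import random
import time

PATH = "/mnt/test/bench.dat"
BLOCK = 4096
BLOCKS = 25_000                        # ~100 MiB total

def run(shuffled):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
    buf = os.urandom(BLOCK)
    offsets = list(range(BLOCKS))
    if shuffled:
        random.shuffle(offsets)
    start = time.monotonic()
    for i in offsets:
        os.pwrite(fd, buf, i * BLOCK)
    os.fsync(fd)
    os.close(fd)
    return time.monotonic() - start

print("sequential:", run(False))
print("random:    ", run(True))
os.unlink(PATH)

Everything still goes through the page cache here, so take the numbers with a
grain of salt.
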
For a while I've been interested in f2fs (or now nilfs) as a backing block
layer for a real filesystem. It seems to have a better data model than some
tree-based filesystems, but I think we are seeing filesystems suck up
features like snapshotting and logging as must-haves instead of the older
LVM/RAID-based "storage pipelines."

But then, this is just reimplementing a smart storage controller on your
CPU, though that may be the best place for it.