On Tue, Jun 23, 2020 at 12:14 PM Sid Spry <sid@××××.us> wrote:
>
> So if I'm understanding properly most drive firmware won't let you
> operate the device in an append-only mode?
|
So, there are several types of SMR drives.
|
There are host-managed, drive-managed, and then hybrid devices that
default to drive-managed for compatibility reasons, but the host can
send them a command to take full control so that they behave the same
as host-managed drives.
|
A host-managed drive just does what the host tells it to. If the host
tells it to do a write that obliterates some other data on the disk,
the drive just does it, and it is the job of the host
OS/filesystem/application to protect any data it cares about. At the
drive level these perform identically to CMR drives because they just
seek and write like any other drive. At the application level they
could perform differently, since the application might end up having
to work around the drive. However, these drives are generally chosen
for applications where this is not a big problem or where it can be
efficiently mitigated.
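As a toy illustration (not any real drive's firmware; the classes and
behavior here are invented for the sketch), the division of labor on a
host-managed drive looks something like this:

```python
# Toy model of one shingled band on a host-managed SMR drive.
# Writing track i also destroys the overlapping track i+1, and the
# drive does nothing to prevent that -- protecting live data is
# entirely the host's job.

class HostManagedBand:
    def __init__(self, tracks):
        self.tracks = [None] * tracks

    def write(self, track, data):
        self.tracks[track] = data          # the drive just does the write...
        if track + 1 < len(self.tracks):
            self.tracks[track + 1] = None  # ...clobbering the shingled neighbor

band = HostManagedBand(4)
band.write(0, "a")
band.write(1, "b")   # sequential writes never clobber live data
band.write(2, "c")
band.write(0, "A")   # rewrite in place: track 1's data is silently lost
print(band.tracks)   # ['A', None, 'c', None]
```

Write strictly sequentially and nothing is ever lost; rewrite in the
middle of a band and the host had better not have cared about the
neighboring track.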
|
A drive-managed drive just looks like a regular drive to the host, and
it ends up having to do a lot of read-before-rewrite operations
because the host treats it like CMR while the drive has to guarantee
that nothing gets lost. A drive-managed disk has no way to operate in
"append-only" mode. I'm not an expert in ATA, but I believe disks are
just given an LBA and a set of data to write. Without support for TRIM
the drive has no way to know whether it is safe to overwrite nearby
tracks, which means it has to preserve them.
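A toy model of why that forces read-before-rewrite (again purely
illustrative, with made-up structures; real firmware uses staging
areas and remapping, which this ignores):

```python
# Toy model of a drive-managed SMR band: with no TRIM information the
# drive must assume every downstream track is live, so a write into
# the middle of a band forces it to save and rewrite everything the
# shingled write would clobber.

class DriveManagedBand:
    def __init__(self, tracks):
        self.tracks = ["x"] * tracks
        self.media_writes = 0  # physical track writes actually performed

    def write(self, track, data):
        saved = self.tracks[track + 1:]  # read everything downstream first
        self.tracks[track] = data
        self.media_writes += 1
        for i, d in enumerate(saved, start=track + 1):
            self.tracks[i] = d           # rewrite what the write clobbered
            self.media_writes += 1

band = DriveManagedBand(5)
band.write(0, "y")        # one logical write...
print(band.media_writes)  # ...cost 5 physical writes (1 + 4 rewrites)
```

One logical sector write fans out into a whole band's worth of media
traffic, which is exactly the write-amplification people see when a
drive-managed disk falls off a performance cliff.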
|
The biggest problem is that the vendors were trying to conceal the
nature of the drives. If they advertised TRIM support it would be
pretty obvious they were SMR.
|
> If any do I suspect
> NILFS (https://en.wikipedia.org/wiki/NILFS) may be a good choice:
>
> "NILFS is a log-structured filesystem, in that the storage medium is treated
> like a circular buffer and new blocks are always written to the end. [...]"
>
|
On a host-managed disk this would perform the same as on a CMR disk,
with the possible exception of any fixed metadata (I haven't read the
gory details of the filesystem). If it has no fixed metadata (or the
fixed metadata is surrounded by unused tracks) then it would have no
issues at all on SMR. F2FS takes a similar approach for SSDs, though
it didn't really take off because ext4's support is good enough and I
suspect that modern SSDs are fast enough at erasing.
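The append-only pattern that makes a log-structured filesystem
SMR-friendly can be sketched as a circular log (a gross
simplification of what NILFS does, with invented names):

```python
# Minimal sketch of a log-structured write pattern: new blocks always
# go to the head of a circular log, so the device only ever sees
# sequential writes -- which is exactly what shingled media wants.
# (Real filesystems also need garbage collection before wrapping,
# which this toy skips.)

class CircularLog:
    def __init__(self, capacity):
        self.blocks = [None] * capacity
        self.head = 0

    def append(self, data):
        pos = self.head
        self.blocks[pos] = data
        self.head = (self.head + 1) % len(self.blocks)  # wrap like a ring
        return pos  # a block's "address" is wherever the head was

log = CircularLog(3)
positions = [log.append(b) for b in ["a", "b", "c", "d"]]
print(positions)   # [0, 1, 2, 0] -- the fourth write wraps around
print(log.blocks)  # ['d', 'b', 'c']
```

Since every write lands at the head, the drive never has to preserve
downstream tracks; an overwrite of old data only happens when the log
wraps, which the filesystem controls.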
|
As I mentioned, SSDs have the same issue as SMR; the scale of the
problem is just much smaller. The erase blocks are much smaller, so
less data needs to be read and rewritten following an erase, and the
operations are all just vastly faster. Taking 3x longer on a 1us
operation is far different from taking 100x longer on a 100ms
operation.
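Plugging in those rough magnitudes (the 1us and 100ms figures are the
illustrative numbers from the paragraph above, not measurements):

```python
# Back-of-the-envelope comparison of the two penalties described
# above: a 3x slowdown on a ~1us SSD operation vs a 100x slowdown on
# a ~100ms SMR rewrite. Assumed magnitudes, purely illustrative.

ssd_op = 1e-6    # ~1 microsecond
smr_op = 100e-3  # ~100 milliseconds

ssd_penalized = 3 * ssd_op     # still a few microseconds: invisible
smr_penalized = 100 * smr_op   # ~10 seconds: a very visible stall

print(ssd_penalized, smr_penalized)
```

Roughly seven orders of magnitude separate the two worst cases, which
is why the same architectural problem is a non-issue on flash and a
showstopper on shingled disks.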
|
--
Rich |