On Monday, 6 January 2020 13:53:41 GMT Rich Freeman wrote:
> On Mon, Jan 6, 2020 at 8:25 AM Mick <michaelkintzios@×××××.com> wrote:
> > If they are used as normal PC drives for regular writing
> > of data, or with back up commands which use rsync, cp, etc. then the disk
> > will fail much sooner than expected because of repeated multiple areas
> > being deleted, before each smaller write. I recall reading about how
> > short the life of SMR drives was shown to be when used in NAS devices -
> > check google or youtube if you're interested in the specifics.
>
> Can you give a link - I'm not finding anything, and I'm a bit dubious
> of this claim, because they still are just hard drives. These aren't
> SSDs and hard drives should not have any kind of erasure limit.
|
This (random) link strongly recommends against usage in NAS, but gives no
reliability data:

https://www.storagereview.com/seagate_archive_hdd_review_8tb

This is a youtube video where someone was comparing SMR failures on a NAS:

https://www.youtube.com/watch?v=CR_bfbOTY1o

> Now, an SMR used for random writes is going to be a REALLY busy drive,
> so I could see the drive being subject to a lot more wear and tear.
> I'm just not aware of any kind of serious study. And of course any
> particular model of hard drive can have reliability issues (just look
> up the various reliability studies).

Right, I haven't seen any lab reliability studies published. I would think
more information could be sourced on IRC/MLs, where datacenter sysadmins hide
to compare their ... hardware. :-)
|
Reading another random link, it seems Dale's 8TB SMR drive has a 20GB
conventional PMR platter/area in it to catch and cache any small writes. The
firmware will subsequently transfer the cached data to the SMR area of the
drive in due course, after it rewrites the requisite adjacent overlapping
tracks. This means up to 20GB of initial writes will proceed at normal speed,
dropping to lower speeds thereafter as the PMR cache needs to be flushed:

https://www.ixsystems.com/community/threads/smr-hard-drives-do-you-think-they-are-proper-nas-drives.35805/

If this is so, it explains Dale's observation of a hyperactive disk well
after it was dismounted. Its firmware's been busy!
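
Anyone curious can watch the background destaging with nothing more than the
kernel's per-device counters. A rough sketch - the device pick and the
2-second window are placeholders; on a real system you'd set DEV to the
unmounted SMR drive and watch for minutes, not seconds:

```shell
# Sample the kernel's write counter for a block device twice and report
# the delta. A drive-managed SMR disk destaging its PMR cache will keep
# this climbing even after the filesystem is unmounted.
# Field 7 of /sys/block/<dev>/stat is sectors written (512-byte units).
DEV=$(ls /sys/block | head -n 1)   # placeholder: put your SMR drive here
before=$(awk '{print $7}' "/sys/block/$DEV/stat")
sleep 2
after=$(awk '{print $7}' "/sys/block/$DEV/stat")
echo "$DEV wrote $(( (after - before) * 512 / 1024 )) KiB in 2s"
```

If the number keeps climbing on a dismounted, idle disk, that's the firmware
at work.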
|
[snip ...]
|
> Granted, I don't rewrite it often but unless zfs is
> SMR-aware it is still going to be writing lots of modest-sized files
> as the original files get chunked up and distributed across the nodes.
> On the disk lizardfs data just looks like a browser cache, with
> everything in numbered files about 60MB in size in my case. The files
> also appear to turn over a bit during rebalancing.
|
I would think bit flipping between the 20GB PMR cache and the 8TB SMR tracks
represents an increased risk, vis-à-vis a single-step data transfer. Data
scrubbing well after the write has completed and been committed to the SMR
tracks would reveal any anomalies.
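
That scrubbing needs no special tooling: write a checksum manifest at backup
time and re-verify it once the disk has gone quiet. A minimal sketch - the
paths are made up, and on a real setup DATA would be the SMR mount point with
the manifest kept on a different drive:

```shell
# Record checksums at write time, verify them again later; any bit
# flipped during the cache-to-SMR destage will show up as a mismatch.
DATA=$(mktemp -d)                      # stand-in for the SMR mount point
printf 'backup payload A\n' > "$DATA/a"
printf 'backup payload B\n' > "$DATA/b"
( cd "$DATA" && find . -type f -print0 | xargs -0 sha256sum ) > /tmp/manifest.sha256
# ...hours later, once the drive has stopped churning...
( cd "$DATA" && sha256sum --quiet -c /tmp/manifest.sha256 ) && echo "scrub clean"
```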
|
What would seriously mess things up is creating a RAID with mixed PMR and SMR
disks and running big (bigger than the internal cache) data writes. Some PMR
disks will complete well before the SMR ones. I/O blocking and timeouts could
ensue, and the applications performing the writes could hang/fail.
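
The size of that gap is easy to ballpark. The throughput figures below are
illustrative guesses, not measurements, but they show why the slow member can
lag the rest of the array by a large margin:

```shell
# How long each member of a mixed array might take to absorb a burst
# that overflows the SMR drive's internal cache (integer seconds).
burst_mb=102400                 # 100GB burst, assumed
pmr_mbps=180                    # assumed PMR sustained write speed
smr_mbps=40                     # assumed SMR speed once its cache is full
echo "PMR member: $((burst_mb / pmr_mbps))s, SMR member: $((burst_mb / smr_mbps))s"
# Roughly 9.5 minutes vs 42 minutes - plenty of room for I/O timeouts.
```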
|
Anyway, write once - read often fits the use case for these disks well. They
should be right at home for long-term video and media storage.
--
Regards,
Mick