On Tuesday, 16 June 2020 12:26:01 BST Dale wrote:
> Wols Lists wrote:
> > On 16/06/20 10:04, Dale wrote:
> >> I might add, I don't have LVM on that drive.  I read it does not work
> >> well with LVM, RAID etc. as you say.  Most likely, that drive will
> >> always be an external drive for backups or something.  If it ever finds
> >> itself holding the OS or /home, it'll be a last resort.
> >
> > LVM it's probably fine with.  RAID, much less so.  What you need to make
> > sure does NOT happen is a lot of random writes.  That might make deleting
> > an LVM snapshot slightly painful ...
> >
> > But adding an SMR drive to an existing ZFS raid is a guarantee of pain.
> > I don't know why, but "resilvering" causes a lot of random writes.  I
> > don't think md-raid behaves this way.
> >
> > But it's the very nature of RAID that, as soon as something goes wrong
> > and a drive needs replacing, everything is going to get hammered.  And
> > SMR drives don't take kindly to being hammered ... :-)
> >
> > Even in normal use, an SMR drive is going to cause grief if it's not
> > handled carefully.
> >
> > Cheers,
> > Wol
>
> From what I've read, I agree.  Basically, as some have posted in
> different places, SMR drives are good when you write data once and leave
> it alone.  Basically, about like a DVD-R.  From what I've read, let's
> say I moved a lot of videos around, maybe rearranged the directory
> structure, which means a lot of data to move.  I think I'd risk just
> putting a new file system on the drive and then backing up everything
> from scratch.  It may take a little longer given the amount of data, but
> it would be easier on the drive.  It would keep the drive from being
> hammered to death, as you put it.
>
> I've also read about the resilvering problems too.  I think LVM
> snapshots and something about Btrfs (sp?) have problems.  I've also read
> that on windoze it can cause a system to freeze while the drive is
> rewriting the moved data.  It gets so slow that it actually makes the OS
> stop responding.  I suspect it could happen on Linux too, if the
> conditions are right.
>
> I guess this is about saving money for the drive makers.  The part that
> seems to really get under people's skin, though, is them putting these
> drives out there without telling people they made changes that affect
> performance.  It's bad enough for people who use them where they work
> well, but for the people who use RAID and such, it seems to bring them
> to their knees at times.  I can't count the number of times I've read
> that people would support a class action lawsuit over shipping SMR
> without telling anyone.  It could happen, and I'm not sure it shouldn't.
> People using RAID and such, especially in some systems, need performance,
> not drives that beat themselves to death.
>
> My plan: avoid SMR if at all possible.  Right now, I just don't need the
> headaches.  With the one I've got, I'm lucky it works OK, even if it
> does bump around for quite a while after backups are done.
>
> My new-to-me hard drive is still testing.  It's got a few more hours
> left yet.  Then I'll run some more tests.  It seems to be OK though.
>
> Dale
>
> :-)  :-)

Just to add my 2c before you throw that SMR drive away: the use case for
these drives is to act as disk archives, rather than regular backup
targets.  You write data you want to keep, once.  SMR disks would work
well for your use case of old videos/music/photos that you want to keep
and won't be overwriting every other day/week/month.  Using rsync with
'-c' to compare checksums will also make sure what you've copied is as
good/bad as the original fs source.