Wols Lists wrote:
> On 16/06/20 10:04, Dale wrote:
>> I might add, I don't have LVM on that drive. I read it does not work
>> well with LVM, RAID etc as you say. Most likely, that drive will always
>> be a external drive for backups or something. If it ever finds itself
>> on the OS or /home, it'll be a last resort.
> LVM it's probably fine with. Raid, MUCH less so. What you need to make
> sure does NOT happen is a lot of random writes. That might make deleting
> an lvm snapshot slightly painful ...
>
> But adding a SMR drive to an existing ZFS raid is a guarantee for pain.
> I don't know why, but "resilvering" causes a lot of random writes. I
> don't think md-raid behaves this way.
>
> But it's the very nature of raid that, as soon as something goes wrong
> and a drive needs replacing, everything is going to get hammered. And
> SMR drives don't take kindly to being hammered ... :-)
>
> Even in normal use, a SMR drive is going to cause grief if it's not
> handled carefully.
>
> Cheers,
> Wol

From what I've read, I agree. Basically, as some have posted in
different places, SMR drives are good when you write once and leave
the data alone, about like a DVD-R. Let's say I moved a lot of videos
around, maybe reorganized the directory structure, which means a lot
of data to move. I think I'd risk just putting a new file system on
the drive and then backing everything up from scratch. It may take a
little longer given the amount of data, but it would be easier on the
drive. It would keep that drive from being hammered, as you put it,
to death.

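For what it's worth, that "wipe it and back up from scratch" idea can be
sketched roughly like this. All the names here are made up, and I'm using
plain temp directories to stand in for /home and the freshly formatted
backup drive, so it's safe to try:

```shell
# Toy sketch of the "fresh backup" idea, using temp directories in
# place of the real drives (all names here are made up).
SRC=$(mktemp -d)    # stands in for /home
DST=$(mktemp -d)    # stands in for the freshly formatted backup drive

# Some stand-in "video" files in the old layout.
mkdir -p "$SRC/videos/old-layout"
echo "clip one" > "$SRC/videos/old-layout/a.mkv"
echo "clip two" > "$SRC/videos/old-layout/b.mkv"

# One big sequential copy onto the clean filesystem, instead of an
# incremental sync shuffling renamed files around on shingled tracks.
cp -a "$SRC/." "$DST/"

diff -r "$SRC" "$DST" && echo "backup matches"
```

On the real drive you'd run mkfs and mount it first, and probably use
rsync instead of cp, but the point is the same: one long sequential
write instead of lots of small rewrites scattered over the shingles.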
I've also read about the resilvering problems too. I think LVM
snapshots and something about BTRFS have problems as well. I've also
read that on windoze, it can cause the system to freeze while the
drive is trying to rewrite the moved data. It gets so slow that it
actually makes the OS stop responding. I suspect it could happen on
Linux too if the conditions are right.

I guess this is about saving money for the drive makers. The part that
seems to really get under people's skin, though, is them putting those
drives out there without telling anyone that they made changes that
affect performance. It's bad enough for people who use them where they
work well, but for the people that use RAID and such, it seems to
bring them to their knees at times. I can't count the number of times
I've read that people support a class action lawsuit over shipping SMR
without telling anyone. It could happen, and I'm not sure it
shouldn't. People using RAID and such, especially in some systems,
need performance, not drives that beat themselves to death.

My plan: avoid SMR if at all possible. Right now, I just don't need
the headaches. The one I've got, I'm lucky it works OK, even if it
does bump around for quite a while after backups are done.

My new-to-me hard drive is still testing. Got a few more hours left
yet. Then I'll run some more tests. It seems to be OK, though.

Dale |
:-) :-) |