Gentoo Archives: gentoo-user

From: William Kenworthy <billk@×××××××××.au>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
Date: Sat, 31 Jul 2021 13:00:13
Message-Id: ec5183c6-1bd0-fe1f-90cc-d1992f6457d7@iinet.net.au
In Reply to: Re: [gentoo-user] [OT] SMR drives (WAS: cryptsetup close and device in use when it is not) by Rich Freeman
On 31/7/21 8:21 pm, Rich Freeman wrote:
> On Fri, Jul 30, 2021 at 11:50 PM Wols Lists <antlists@××××××××××××.uk> wrote:
>> btw, you're scrubbing over USB? Are you running a raid over USB? Bad
>> things are likely to happen ...
> So, USB hosts vary in quality I'm sure, but I've been running USB3
> drives on lizardfs for a while now with zero issues.
>
> At first I was shucking them and using LSI HBAs. That was a pain for
> a bunch of reasons, and I would have issues probably due to the HBAs
> being old or maybe cheap cable issues (and new SAS hardware carries a
> hefty price tag).
>
> Then I decided to just try running a drive on USB3 and it worked fine.
> This isn't for heavy use, but it basically performs identically to
> SATA. I did the math and for spinning disks you can get 2 drives per
> host before the data rate starts to become a concern. This is for a
> distributed filesystem and I'm just using gigabit ethernet, and the
> cluster is needed more for capacity than IOPS, so USB3 isn't the
> bottleneck anyway.
>
> I have yet to have a USB drive have any sort of issue, or drop a
> connection. And they're running on cheap Pi4s for the most part
> (which have two USB3 hosts). If for some reason a drive or host
> dropped the filesystem is redundant at the host level, and it also
> gracefully recovers data if a host shows back up, but I have yet to
> see that even happen due to a USB issue. I've had far more issues
> when I was trying to use LSI HBAs on RockPro64 SBCs (which have a PCIe
> slot - I had to also use a powered riser).
>
> Now, if you want to do something where you're going to be pulling
> closer to max bandwidth out of all your disks at once and you have
> more than a few disks and you have it on 10GbE or faster, then USB3
> could be a bottleneck unless you have a lot of hosts (though even then
> adding USB3 hosts to the motherboard might not be any harder than
> adding SATA hosts).
>
I'll generally agree with your USB3 comments - besides the backup disk,
I am running moosefs on 5 Odroid HC2s (one old WD Red or Green on each;
the HC2 is a 32-bit big.LITTLE ARM system with a built-in USB-SATA
bridge - excellent on a 5.12 kernel, just OK on the 4.x series), an
Odroid C4 (arm64) with 2 ASMedia USB3 adaptors from eBay (the adaptors
are crap but do work somewhat with the right tweaks!), and a single
SATA SSD on the master (Intel).  I tried using moosefs with an RPi 3B
in the mix and it didn't go well once I started adding data - RPi 4s
were not available when I set it up.  I think that SMR disks will work
quite well on moosefs or lizardfs - I don't see long continuous writes
to one disk, but rather a random distribution of writes across the
cluster with gaps between writes on each disk (1G network).

With a good adaptor, USB3 is great ... otherwise it's been quite
frustrating :(  I suspect part of the problem is Linux and its pedantic
correctness trying to deal with hardware that isn't truly standardised
(the manufacturer probably supplies a Windows driver that papers over
the bugs).  These adaptors are quite common, and I needed to apply the
ATA command filter and turn off UAS via the usb-storage quirks
mechanism to stop the crashes and data corruption.  The comments in the
kernel driver code for these adaptors are illuminating!
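For anyone hitting the same adaptors, the quirks I'm talking about look
roughly like this - note the 174c:55aa vendor:product ID below is just
the common ASMedia ASM1153E pairing as an example, substitute whatever
lsusb reports for your own bridge:

```shell
# Find the adaptor's vendor:product ID (174c is ASMedia's vendor ID;
# the product ID varies by bridge chip - check your own lsusb output).
lsusb | grep -i asmedia

# Apply the quirks at runtime, before (re)plugging the device:
#   t = NO_ATA_1X  (filter out ATA(12)/ATA(16) pass-through commands)
#   u = IGNORE_UAS (fall back from UAS to plain usb-storage/BOT)
echo "174c:55aa:tu" > /sys/module/usb_storage/parameters/quirks

# Or make it permanent on the kernel command line:
#   usb-storage.quirks=174c:55aa:tu
```

The flag letters are listed under usb-storage.quirks in the kernel's
kernel-parameters documentation.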

BillK
