On 2014-05-27 23:58, Harry Holt wrote:
> On May 27, 2014 6:39 PM, "Bob Sanders" <rsanders@×××.com> wrote:
> >
> > Mark Knecht, mused, then expounded:
> > > Hi all,
> > > The list is quiet. Please excuse me waking it up. (Or trying to...) ;-)
> > >
> > > I'm at the point where I'm a few months from running out of disk
> > > space on my RAID6 so I'm considering how to move forward. I thought
> > > I'd check in here and get any ideas folks have. Thanks in advance.
> > >
> >
> > Beware - if Adobe acroread is used, and you opt for a 3TB home
> > directory, there is a chance it will not work. Or more specifically,
> > acroread is still 32-bit. It's only something I've seen with the xfs
> > filesystem. And Adobe has ignored it for approx. 3yrs now.
> >
> > > The system is a Gentoo 64-bit, mostly stable, using an i7-980x
> > > Extreme Edition processor with 24GB DRAM. Large chassis, 6 removable
> > > HD bays, room for 6 other drives, a large power supply.
> > >
> > > The disk subsystem is a 1.4TB RAID6 built from five SATA2 500GB WD
> > > RAID-Edition 3 drives. The RAID has not had a single glitch in the 4+
> > > years I've used this machine.
> > >
> > > Generally there are 4 classes of data on the RAID:
> > >
> > > 1) Gentoo (obviously), configs backed up every weekend. I plan to
> > > rebuild from scratch using existing configs if there's a failure.
> > > Being down for a couple of days is not an issue.
> > > 2) VMs - about 300GB. Loaded every morning, stopped & saved every
> > > night, backed up every weekend.
> > > 3) Financial data - lots of it - stocks, futures, options, etc.
> > > Performance requirements are pretty low. Backed up every weekend.
> > > 4) Video files - backed up to a different location than items 1/2/3
> > > whenever there are changes
> > >
> > > After eclean-dist/eclean-pkg I'm down to about 80GB free and this
> > > will fill up in 3-6 months so it's time to make some changes.
> > >
> > > My thoughts:
> > >
> > > 1) Buy three (or even just two) 5400 RPM 3TB WD Red drives and go
> > > with RAID1. This would use the internal SATA2 ports so it wouldn't
> > > be the highest performance but likely a lot better than my SATA2
> > > RAID6.
> > >
> > > 2) Buy two 7200 RPM 3TB WD Red drives and an LSI Logic hardware RAID
> > > controller. This would be SATA3 so probably way more performance
> > > than I have now. MUCH more expensive though.
> > >
> >
> > RAID 1 is fine, RAID 10 is better, but consumes 4 drives and SATA
> > ports.
> >
> > > 3) #1 + an SSD. I have an unused 120GB SSD so I could get another,
> > > make a 2-disk RAID1, put Gentoo on that and everything else on the
> > > newer 3TB drives. More complex, probably lower reliability and I'm
> > > not sure I gain much.
> > >
> > > Beyond this I need to talk file system types. I'm fat, dumb and
> > > happy with Ext4 and don't really relish dealing with new stuff but
> > > now's the time to at least look.
> > >
> >
> > If you change, do not use ZFS (and possibly BTRFS) if the system does
> > not have ECC DRAM. A single, unnoticed memory error can corrupt the
> > data pool and be written to the file system, which effectively renders
> > it corrupt without a way to recover.
> >
> > FWIW - a Synology DS414slim can hold 4 x 1TB WD Red NAS 2.5" drives
> > and provide a boot off nfs or iSCSI to your VMs. The downside is the
> > NAS box and drives would go for a bit north of $636. The upside is all
> > your movies and VM files could move off your workstation and the
> > workstation would still host the VMs via a mount of the NAS box.
>
> +1 for the Synology NAS boxes, those things are awesome, fast,
> reliable, upgradable (if you buy a larger one), and the best value
> available for iSCSI attached VMs.
|
While I agree on the +1 for iSCSI storage, there are a few drawbacks.
The modularity is the main win -- it's super simple to spin up a
backup system and "move" data with a single connection command. A top
tip: put the "data" part of the VM on an iSCSI connection too, so you
can easily detach it and reattach it to another VM.
|
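The detach/reattach trick above is just an iSCSI login from whichever host should own the data volume. A minimal sketch with open-iscsi's `iscsiadm` -- the NAS address, target IQN, device name and mount point are all placeholders:

```shell
# Discover the targets exported by the NAS (IP is a placeholder).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the VM's dedicated data LUN (IQN is a placeholder).
iscsiadm -m node -T iqn.2014-05.lan.nas:vm-data -p 192.168.1.50 --login

# The LUN now appears as a local block device (e.g. /dev/sdx); mount it.
mount /dev/sdx /mnt/vm-data

# To hand the volume to another VM: unmount, log out here, log in there.
umount /mnt/vm-data
iscsiadm -m node -T iqn.2014-05.lan.nas:vm-data -p 192.168.1.50 --logout
```

Because the data lives on the NAS rather than inside the VM image, the same login sequence run on a replacement VM picks the volume up unchanged.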
However, depending on the VMs you run, you will probably need more
than one gigabit connection to max out speeds: 1-gigabit ethernet
(~125MB/s) is not the same as 6-gigabit sata3 (~600MB/s), and spinning
rust is not the same as ssd.
|
Looking at the spec of the existing workstation, I'd be tempted to
stay with mdadm rather than a hardware RAID card (which is probably
running an embedded processor anyway) -- though with that i7, you have
disabled turboboost, right?
|
An interesting comparison would be PCI Express speed vs the
motherboard's SATA-to-CPU bridge speed. Obviously spinning disks will
not max out 6gbit, and the motherboard may not give you 6x 6gbit of
real throughput, whereas a dedicated hardware RAID card _might_, if it
has intelligent caching.
|
Other fun to look at would be LVM, because I personally think it's
awesome. For example, the first half of a spinning disk is
substantially faster than the second half, due to the longer tracks on
the outer part of the platter. So I split each disk into three
partitions -- fast, medium, slow -- and add them to an LVM volume
group; you can then group the fast partitions into one RAID, the
mediums into another, and the slows into a third. mdadm allows similar
configs with partitions.
|
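That tiered layout can be sketched as follows, assuming two disks already carved into matching fast/medium/slow partitions -- all device, VG and LV names here are placeholders:

```shell
# Pair the matching partitions from each disk into three RAID1 arrays.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # fast
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # medium
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # slow

# Put all three arrays into one volume group.
pvcreate /dev/md0 /dev/md1 /dev/md2
vgcreate vg0 /dev/md0 /dev/md1 /dev/md2

# Pin each logical volume to the tier that suits its workload
# by naming the physical volume it should allocate from.
lvcreate -L 300G -n vms   vg0 /dev/md0   # VMs on the fast outer tracks
lvcreate -L 500G -n video vg0 /dev/md2   # bulk video on the slow tier
```

The nice part is that the tiering is just an allocation hint: a volume can later be migrated between tiers with `pvmove` without unmounting it.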
ZFS for me lost its lustre when the minimum requirement became 1GB of
RAM per terabyte of storage... I may have my gigabytes and gigabits
mixed up on this one, happy for someone to correct me. BTRFS looks
very interesting to me, though I still haven't played with it --
mostly for the checksums; the rest I can do with LVM.
|
You might also like to consider fun with deduplication, by having a
RAID base, LVM on top with block-level dedupe a la lessfs, then LVM
inside the deduped LVM (yeah, I know I'm sick, but the doctor tells me
the layers of abstraction eventually combine happily :). I'm not sure
you'll get much benefit from virtual machines and movies being
deduped, though.
|
If you add an SSD into the mix you can also look at block-device
caches such as bcache and dm-cache, or even just move the journal of
your ext4 partition there instead.
|
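Moving the ext4 journal to the SSD is the simplest of those options. A sketch, assuming /dev/md0 holds the filesystem and /dev/sdc1 is a small SSD partition (both placeholders); the filesystem must be cleanly unmounted first:

```shell
# Format the SSD partition as a dedicated external journal device
# (its block size must match the filesystem's).
mke2fs -O journal_dev /dev/sdc1

# Unmount, drop the internal journal, then attach the external one.
umount /data
tune2fs -O ^has_journal /dev/md0
tune2fs -j -J device=/dev/sdc1 /dev/md0
mount /data
```

This keeps the metadata-journal writes off the spinning disks without the extra moving parts of a full caching layer.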
Crucially, you need to think about which issues you _need_ to solve
and which you would merely like to solve. Space is obviously one
issue, and performance is not really an issue for you. Depending on
your budget, a pair of large SATA drives + mdadm would be ideal; if
you already had LVM you could simply 'move' then 'enlarge' your
existing stuff (tm) -- I'd like to know how btrfs would do the same,
for anyone who can let me know.

You have RAID6 because you probably know that RAID5 is just waiting
for trouble, so I'd start by looking at btrfs for your financial data,
to get it checksummed. Also consider ECC memory if your motherboard
supports it: never mind the hosing of filesystems, if you are running
VMs you do _not_ want bad memory making them behave oddly (or worse),
and if you have lots of active financial data (bloomberg + analytics)
you run the risk of a butterfly effect producing odd results.
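The 'move' then 'enlarge' trick, sketched for a new 3TB RAID1 pair joining an existing LVM setup -- the device names, VG/LV names and sizes are all placeholders, and this assumes the old array is already a physical volume in the group:

```shell
# Mirror the two new 3TB drives.
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdd /dev/sde

# Add the new array to the existing volume group.
pvcreate /dev/md4
vgextend vg0 /dev/md4

# 'Move': migrate all extents off the old RAID6 array (online), then
# drop it from the group so the old drives can be retired.
pvmove /dev/md0
vgreduce vg0 /dev/md0

# 'Enlarge': grow the logical volume and the ext4 filesystem inside it.
lvextend -L +1T /dev/vg0/data
resize2fs /dev/vg0/data
```

Everything after the mdadm create runs with the filesystem mounted, which is exactly why having LVM in place beforehand makes this kind of migration painless.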