Hi,

> The budget is minuscule - and the performance demands
> (bandwidth and latency) are completely non-challenging.

This IMHO pretty much rules out any kind of server-class hardware,
which tends to be both costly and power-hungry. If you're thinking
about buying used stuff, be sure to factor in the cost and difficulty
of finding spares a few years down the road.

Given the point above I would also stick with software RAID. True HW
RAID controllers are quite expensive and generally come with an x8
PCIe interface, which will require a server motherboard -- the x16
PCIe video card slots on commodity boards are usually only certified
for x16 and x1 operation, so don't expect them to work reliably with
other bus widths.

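If you do end up trying a narrow card in a commodity board, it is
worth checking what link width it actually negotiated; something
along these lines should tell you (LnkCap is what the card supports,
LnkSta is what it trained at):

    lspci -vv | grep -E 'LnkCap|LnkSta'
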
Linux software RAID also has the advantage that the kernel is not
tied to any specific piece of hardware. In case of a failure, your
volumes will be readable on any other Linux system -- provided the
disks themselves are not toast.

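As a sketch of what that recovery looks like: move the disks to any
box with mdadm installed and reassemble (device names and config path
below are hypothetical and vary by distro):

    # scan all disks for md superblocks, record what is found
    mdadm --examine --scan >> /etc/mdadm.conf
    # assemble the arrays and mount read-only for a first look
    mdadm --assemble --scan
    mount -o ro /dev/md0 /mnt/recovery
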
If reliability is your primary concern, I would go for a simple RAID1
setup; if your volumes need to be bigger than a physical disk, you
can build a spanned volume over multiple mirrored pairs. Network
throughput will most likely be your primary bottleneck, so I'd avoid
striping: it would offer little in the way of performance at the
expense of making data recovery extremely difficult in case the worst
should happen.

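For concreteness, a minimal sketch of the mirrored-pairs layout,
using mdadm for the mirrors and LVM for the spanning (all disk and
volume names are made up):

    # two RAID1 pairs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    # concatenate (not stripe!) the pairs into one big volume
    pvcreate /dev/md0 /dev/md1
    vgcreate vg0 /dev/md0 /dev/md1
    lvcreate -l 100%FREE -n vol0 vg0
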
As for availability, I think the best strategy with a limited budget
is to focus on reducing downtime: make sure your data can survive the
failure of any single component, and choose hardware that you can get
easily and for a reasonable price. Sh*t happens, so make it painless
to clean up.

Network protocol:

If you do not need data sharing (i.e. if your volumes are only
mounted by one client at a time), the simplest solution is to avoid
having a FS on the storage server side altogether -- just export the
raw block device via iSCSI and do everything on the client. In my
experience this also works very well with Windows clients using the
free MS iSCSI initiator.

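With the iSCSI Enterprise Target, for instance, exporting a block
device is a couple of lines in /etc/ietd.conf (the target name and
device below are placeholders):

    Target iqn.2010-01.net.example:storage.vol0
        Lun 0 Path=/dev/vg0/vol0,Type=blockio
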
Alternatively, you can consider good old NFS, which performs decently
and tends to behave a bit better -- especially when used over UDP --
in case of network glitches, like accidentally powering off a switch,
yanking cables, losing wireless connectivity...

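A plain NFSv3-over-UDP setup only takes a line on each side (export
path and subnet here are just examples):

    # /etc/exports on the server
    /srv/vol0  192.168.0.0/24(rw,sync,no_subtree_check)

    # on the client
    mount -t nfs -o vers=3,proto=udp server:/srv/vol0 /mnt/vol0
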
CIFS should be avoided at all costs if your clients are not Windows
machines.

File systems:

Avoid complexity. As technically superior as it might be, in this
kind of setup ZFS is only going to be a resource hog and a
maintenance headache; your priority should be a rock-solid
implementation and a reliable set of diagnostic/repair tools in case
disaster strikes. Tried-and-true ext3 fits the bill nicely if you ask
me; just remember to tune it properly according to your planned use
-- e.g. if a volume is going to host huge VM disk images, be sure to
create its filesystem with -T largefile4.

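That is, something like the following (largefile4 allocates one inode
per 4 MiB of space instead of the default one per 16 KiB, which suits
a handful of huge files; the device name is again a placeholder):

    mkfs.ext3 -T largefile4 /dev/vg0/vol0
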
Just my 2 cents,

Andrea