On 16 Mar 2010, at 16:32, Steve wrote:
> ...
>> Given the point above I would also stick with software RAID.
> ...
>> If reliability is your primary concern, I would go for a simple RAID1
>> setup;
> Absolutely. Software raid is cheaper and implies less hardware to
> fail. Similarly, RAID1 minimises the total number of disks required
> to survive a failure. It's the only way for me to go.

How does your system boot if your RAID1 system volume fails? The one
you have grub on? I think you mentioned a flash drive, an approach I've
seen suggested before. That seems sound, but just to point out: it's
another, different, single point of failure.
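For what it's worth, the usual belt-and-braces answer I've seen is to put
the bootloader on every member of the mirror (device names below are just
examples, not anything from your setup):

```shell
# Assuming a two-disk md RAID1 built from sda and sdb (example names):
# install grub's boot code into the MBR of BOTH members, so either
# disk can bring the system up on its own if the other one dies.
grub-install /dev/sda
grub-install /dev/sdb
```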
|
>> If you do not need data sharing (i.e. if your volumes are only
>> mounted by one client at a time), the simplest solution is to
>> completely avoid having a FS on the storage server side -- just
>> export the raw block device via iSCSI, and do everything on the
>> client.
> ...
> Snap-shots, of course, are only really valuable for non-archive
> data... so, in future, I could add a ZFS volume using the same iSCSI
> strategy.
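
In case it helps anyone, a raw block export along those lines might look
something like the following with the Linux LIO target's targetcli tool
(the IQN and device paths are made-up examples; older iSCSI Enterprise
Target setups would use an ietd.conf instead):

```shell
# Server side: export a raw block device (no filesystem on it here)
# as an iSCSI LUN. All names and paths below are illustrative only.
targetcli /backstores/block create name=vol0 dev=/dev/md0
targetcli /iscsi create iqn.2010-03.example:storage.vol0
targetcli /iscsi/iqn.2010-03.example:storage.vol0/tpg1/luns \
    create /backstores/block/vol0

# Client side (open-iscsi): discover and log in, then treat the
# resulting /dev/sdX like a local disk -- partition, mkfs, mount.
iscsiadm -m discovery -t sendtargets -p storage.example
iscsiadm -m node --login
```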
|
I have wondered if it might be possible to create a large file (`dd
if=/dev/zero of=/path/to/large/file bs=1M count=20480`, constrained to
a size of 20gig or 100gig or whatever via the count) and treat it as a
loopback device for stuff like this. It's not true snapshotting (in
the ZFS / Btrfs sense), but you can unmount it and make a copy quite
quickly.
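
To sketch what I mean (paths and sizes are just examples):

```shell
# Create a 20 GiB sparse backing file -- seeking past the end instead
# of writing zeros is instant, and blocks are only allocated as data
# is actually written into the filesystem later.
dd if=/dev/zero of=/tmp/vol.img bs=1M count=0 seek=20480

# Put a filesystem straight on the file and loop-mount it (needs root):
#   mkfs.ext4 /tmp/vol.img
#   mount -o loop /tmp/vol.img /mnt/vol

# The poor man's "snapshot": unmount, then copy the backing file.
# --sparse=always keeps the copy sparse, so only blocks that were
# actually written get duplicated.
#   umount /mnt/vol
#   cp --sparse=always /tmp/vol.img /tmp/vol.snap.img
```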
|
Stroller. |