kashani wrote:

> Running through the Dell storage page you end up spending $20k (list)
> for their 12 SATA drive NAS device w/ 3-year NBD, dual PS, etc. RAID 6 it
> up and you've got 5TB usable. I'm sure there are cheaper options (feel
> free to point them out), but I don't think you're going to save that
> much over going directly to an iSCSI/NFS SAN with a second- or third-tier
> vendor... i.e. not NetApp or EMC. And you've got to manage x number of
> boxes, don't get volume management, snapshots, etc., and still have to
> shuffle data around manually for backups or at least hot storage.
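
(For reference on that math: RAID 6 spends two drives on parity, so usable
space is (N - 2) x drive size. Twelve drives yielding 5TB usable implies
10 data drives of, presumably, 500GB each.)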

An example of cheaper NAS boxes:

http://www.linuxdevices.com/articles/AT3184179979.html

> How about the question, "Is losing 20% of your data any better than
> losing 100% of your data?" IMO data loss is data loss whether it's
> complete or partial. Of course, assuming you have backups, restoring 20%
> is easier, so it's possible I'm wrong here. I'm still not buying the
> scenario where managing nine single points of failure is better than
> managing one. And I think I can eliminate all the single points in a
> single large system more easily than rewriting my application to
> round-robin across 15 data stores that contain partial backups of each
> other.

Frankly, if you are attempting to manage 9TB of data for customers and
managing 9 systems scares you, you need to start thinking about a career
change. Where are these 15 data stores you are talking about?

The point of distributing the load across more devices is not to limit a
loss to 20%. It is to make the data easier to back up, restore, replicate,
upgrade, maintain, and administer, and to provide higher availability,
redundancy, etc. Google "google file system" to bone up on the concept...

Other options are betting your entire business on a single point of
failure (a single device), SANs (which can be clustered/RAIDed), or using
a service like Akamai and not dealing with it at all ;)


--
gentoo-server@g.o mailing list