Sorry... I didn't understand exactly what you were trying to do... If you
want to present the same disks to multiple hosts you would generally use a
cluster-aware application or a cluster-aware filesystem that would
establish quorum and act as a more truly clustered environment with heartbeat.
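
Roughly, quorum just means a strict majority of the configured nodes can
still see each other's heartbeats; a partition that loses the majority is
supposed to stop touching the shared disk so it can't corrupt it. A toy
sketch of the arithmetic (node names and the timeout are made up; real
stacks like cman or heartbeat handle all of this for you):

    import time

    HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a node counts as dead (made-up value)

    def alive_nodes(last_heartbeat, now=None):
        """Nodes whose last heartbeat arrived within the timeout window."""
        now = time.time() if now is None else now
        return {node for node, seen in last_heartbeat.items()
                if now - seen <= HEARTBEAT_TIMEOUT}

    def have_quorum(cluster_nodes, last_heartbeat):
        """A strict majority of all configured nodes must be alive.

        The minority side of a split has to fence itself off the storage,
        otherwise both halves keep writing and trash the filesystem.
        """
        alive = alive_nodes(last_heartbeat)
        return len(alive & set(cluster_nodes)) > len(cluster_nodes) // 2

    # 3-node cluster where node "c" stopped heartbeating a minute ago:
    nodes = ["a", "b", "c"]
    seen = {"a": time.time(), "b": time.time(), "c": time.time() - 60}
    print(have_quorum(nodes, seen))  # True -- 2 of 3 is still a majority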

Going back to GFS, this is actually capable of doing just that. So I guess
we are back to GFS :)
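
Besides quorum, the other piece GFS layers on top of the shared block
device is cluster-wide locking (a distributed lock manager), so two nodes
can't scribble on the same file at once. The single-host analogue is a
POSIX advisory lock; the DLM gives you the same handshake across every
node that has the disk mounted. A minimal sketch, assuming a made-up path
on the shared mount:

    import fcntl

    # Cooperating writers take an exclusive advisory lock before touching
    # the file; on GFS the lock manager arbitrates this between nodes.
    with open("/mnt/shared/content/index.html", "a") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)       # blocks until no other writer holds it
        try:
            f.write("updated by this node\n")
            f.flush()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)   # release so the next writer can go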

http://mail.digicola.com/wiki/index.php?title=User:Martin:GFS

http://www.yolinux.com/TUTORIALS/LinuxClustersAndFileSystems.html

Should give you enough reading to get started... Most of the applications we
do this with are either read-only data sets that don't need clustering
filesystems or Oracle databases that use OCFS, so I can only speak in
theory... I would check out what Veritas products can do. They have an
amazing track record on almost all *nix platforms (Solaris, HP-UX) and have a
lot of clustering capabilities.

Regards,

sean

On 25-Jan-2007, Brian Kroth wrote:
>
>
> Paul Kölle wrote:
> >Sean Cook wrote:
> >>
> >>GFS is ok if you don't want to mess around with a SAN but it has nowhere
> >>near the performance of fiber or iSCSI attached storage.
> >Aren't those apples and oranges? I thought iSCSI is a block level
> >protocol and doesn't do locking and such whereas GFS does...
>
> This is what I was getting at. I know the basics of working with the
> SAN to get a set of machines to at least see a storage array. The next
> step is getting them to read and write to, say, the same file on a
> filesystem on that storage array without stepping on each other's toes or
> corrupting the filesystem that lives on top of that storage array.
> That's where I haven't learned too much yet.
>
> I hadn't actually planned on using the SAN to boot off of, but that
> might be an option for easier configuration/software management. I
> simply wanted to use it almost as if it were an NFS mount that a group
> of servers stored web content on. The problem I had with that model is
> that the NFS server is a single point of failure. If, on the other hand,
> all the servers are directly attached to the data, any one of them can
> go down and the others won't care or notice. At least that's the
> working theory behind it right now.
--
gentoo-server@g.o mailing list