Hi,

momentics wrote:
> (cannot be agreed that grids need to be explicitly defined, but we
> can leave it as is - that's not important actually)
No, that's true. I wasn't actually looking for an explicit definition of
a grid; the description below is just fine.
> Your example is quite competitive… I think that it works well when
> properly used.
> btw I'm working at dell emea :) - glad to hear of dellserv using ;)
Ah, well, we're happy with the hardware; the only complaint is that it
is quite power-hungry ;-)
>
> Well, we are using 3dns/bigip for the web/application/etc. layers. I'm
> an expert there, so no questions on that side. The question is all
> about the Oracle RAC that uses LVS in round robin (LVS is configured
> directly on the database nodes).
>
> The actual question:
>
> Our development and app management is divided into two parts: "the
> database" and "the rest".
> Oracle/linadm are using LVS when they are building databases for us.
> (Just to clarify the question:) Oracle RAC consists of several
> active/active nodes, each of which has its own node address. There is
> also Data Guard, but it is out of scope in this context.
>
Clear so far.
> So, we have, say, N nodes and N node IPs. Normally, I should use these
> IPs in the Oracle client TNS names on the app boxes. But the Linux
> gents are building LVS on top, where we additionally have K LVS IPs
> (K=N), and they are suggesting we use the LVS addresses instead of
> Oracle's. That's probably fine.
> The Oracle client on each app box is configured in LB mode.
> Transactions are short-lived, then the connection is recycled (in the
> test period I'm speaking of - immediately).
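For reference, a client-side tnsnames.ora entry in LB mode looks roughly like the sketch below; the alias, host names, and service name are made up for illustration:

```
RAC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node2)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = rac))
  )
```

With LOAD_BALANCE = ON the client picks an address from the list at random for each new connection, which is what spreads short-lived connections across the nodes.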

>
> So then, I'm using MRTG to see the database load… Yes! It is not
> load-balanced properly over time.
> When I inject an F5 that points to the N database nodes in round robin
> and offers two BigIP IPs in the same Oracle client configuration, it
> is load-balanced properly.
This is where it gets strange.
You are using K LVS IP addresses, and you say that K=N (the number of
nodes)? That would mean that you have a load-balancer address for every
node?

We are using two load balancers with a single IP for each database
cluster. The MySQL setup we're using is a simple replication setup,
meaning that the load-balanced nodes are read-only with a writeable
master. The nodes are reachable through LVS, which we have set up using
a Direct Routing approach.

Previous experience with both our web farm and database farm suggests
that the LVS rr (round-robin) algorithm is not the most efficient one.
We switched to wlc (weighted least connections) for a number of reasons.

The most important being:
queries are not made equal; one query will occupy a database server for
far longer than another.

We've found that the number of open connections on a database server is
a far better indicator of its load.
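For illustration, the switch to wlc is just the scheduler choice on the virtual service; a minimal ipvsadm setup along the lines of what we run might look like this (the VIP 10.0.0.100 and real-server addresses are hypothetical):

```shell
# Create the virtual service with the wlc (weighted least-connections) scheduler:
ipvsadm -A -t 10.0.0.100:3306 -s wlc
# Add the real servers in Direct Routing mode (-g) with equal weights (-w):
ipvsadm -a -t 10.0.0.100:3306 -r 10.0.0.11:3306 -g -w 1
ipvsadm -a -t 10.0.0.100:3306 -r 10.0.0.12:3306 -g -w 1
```

With equal weights, wlc simply sends each new connection to the real server with the fewest active connections.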

Apart from that, you might want to take a look at the stickiness setting
(persistence) of the load balancer. This influences any load-balancing
algorithm, because it forces the load balancers to keep a given client
on its previously allocated node if it returns for a query within a
time threshold. With LVS, if not specified, the default is 300 seconds,
which is quite long. Due to the nature of MySQL connections, we use no
persistence at all on our database load balancers.
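In ipvsadm terms, persistence is the -p option on the virtual service; giving -p with no timeout falls back to the 300-second default mentioned above, and leaving the flag off entirely disables stickiness (addresses again hypothetical):

```shell
# Explicit 60-second client persistence on the virtual service:
ipvsadm -A -t 10.0.0.100:3306 -s wlc -p 60
# No -p at all: no persistence, which is what we use for MySQL:
ipvsadm -E -t 10.0.0.100:3306 -s wlc
```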
73 |
|
74 |
If you want to start using wlc or lc you should be aware that it is |
75 |
dependent on proper fault detection on the nodes. Because it distributes |
76 |
the load according to the amount of connections a broken server (with 0 |
77 |
connections) will suck up all new connections to the loadbalancer. We're |
78 |
using keepalived to check on our realservers and remove them from the |
79 |
loadbalancing pool in case of problems. |
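A minimal keepalived fragment along those lines (same hypothetical addresses as before) uses a TCP_CHECK so a real server that stops answering is dropped from the pool:

```
virtual_server 10.0.0.100 3306 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    protocol TCP

    real_server 10.0.0.11 3306 {
        weight 1
        TCP_CHECK {
            connect_port 3306
            connect_timeout 3
        }
    }
}
```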
>
> The question is - I'm trying to find at least one reason why we need
> to use LVS here.
>
I don't think there is a specific need; you could probably just as well
use the F5 load balancers. It probably depends on your budget ;-)

Regards,

Ramon

--
gentoo-cluster@g.o mailing list