On Oct 26, 2011 8:55 AM, "kashani" <kashani-list@××××××××.net> wrote:
>
> On 10/25/2011 6:27 PM, Pandu Poluan wrote:
>>
>> (Sorry for the late reply; somehow this thread got lost in the mess)
>>
>> On Oct 12, 2011 2:03 AM, "James" <wireless@×××××××××××.com
>> <mailto:wireless@×××××××××××.com>> wrote:
>>
>> >
>> > Pandu Poluan <pandu <at> poluan.info <http://poluan.info>> writes:
>> >
>> > > The head honcho of my company just asked me to "plan for migration of
>> > > X into the cloud" (where "X" is the online trading server that our
>> > > investors used).
>> >
>> > Is this a single server, or many at different locations?
>> > If WAN monitoring is what you are after, along with individual
>> > server resources, you have many choices.
>> >
>>
>> It's a single server that's part of a three-server system. The server
>> needs to communicate with its 2 cohorts continuously, so I have to
>> provision enough backhaul bandwidth from the cloud to my data center.
>>
>> In addition to provisioning enough RAM and CPU, of course.
>>
>> > > Now, I need to monitor how much RAM is used throughout the day by X,
>> > > and also how much bandwidth gets eaten by X throughout the day.
>> >
>> > Most of the packages monitor RAM as well as other resource utilization
>> > of the servers, firewalls, routers, and other SNMP devices in your network.
>> > Some experimentation may be warranted to find what your team likes best.
>> >
>>
>> Currently I've settled on a simple solution: run dstat [1] with nohup 30
>> minutes before the 1st trading session, stop it 30 minutes after the 2nd
>> trading session, and send the CSV record via email. Less intrusion into
>> the system (which the Systems guys rightly have reservations about).
>>
>
> You're not going to be happy with this design, for a couple of reasons.
>
> 1. It's more expensive than your current setup. If the two servers at your
> datacenter are down, I assume the server in the cloud is useless, and vice
> versa. You already have to maintain infrastructure for those two servers, so
> you're realizing no savings by eliminating one server from your
> infrastructure. Buying a $1500 rack server amortized over three years is a
> better deal than paying for equivalent power in the cloud.
>
> 2. Latency. You're increasing it.
>
> 3. Cloud performance varies. Networks split, machines run slow, it
> happens. You'll have more consistent performance on your own machines. It's
> getting better, but it's still something to be aware of.
>
> Migrating to virtual servers makes some sense, but you need to look
> at it on a case-by-case basis.
>
> kashani
>

Indeed.

The fact is that the server-to-be-clouded is a trading server used by
approx. 3000+ clients (investors). Our 20 Mbps line is barely able to cater
to them all, and my company has to pay through the nose for each additional
Mbps.

How big a nose? I'm looking at the quotation on my desk that says my company
needs to shell out 200 USD (excl. tax) per additional Mbps per month...

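A quick back-of-envelope on the figures in this thread (the 5 extra Mbps is just an illustrative assumption) shows why kashani's amortization point stings:

```shell
# 200 USD per additional Mbps per month, vs. a 1500 USD rack server
# amortized over 36 months. The 5 Mbps figure is assumed for illustration.
extra_mbps=5
bw_monthly=$((extra_mbps * 200))                            # 1000 USD/month
server_monthly=$(awk 'BEGIN { printf "%.2f", 1500 / 36 }')  # ~41.67 USD/month
echo "bandwidth: ${bw_monthly} USD/month, server: ${server_monthly} USD/month"
```

In other words, a handful of extra Mbps each month costs far more than the amortized server itself.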
The traffic between that server and the other 2 is less than 20 Mbps, but
how much less is what I'm trying to find out.

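One way to pin that number down from the dstat CSV mentioned above — a minimal sketch, assuming a simplified two-column bytes/sec layout (a real dstat CSV carries extra header rows, so adjust the NR guard accordingly):

```shell
# Capture (on the trading server; filename and 60-second interval are my choice):
#   nohup dstat --net --output /tmp/net.csv 60 >/dev/null 2>&1 &
#
# Post-processing sketch: peak of recv+send converted from bytes/sec to Mbps.
# /tmp/net.csv is faked here with two sample rows for demonstration.
printf 'recv,send\n120000,80000\n450000,300000\n' > /tmp/net.csv
awk -F, 'NR > 1 { mbps = ($1 + $2) * 8 / 1e6; if (mbps > peak) peak = mbps }
         END { printf "peak: %.2f Mbps\n", peak }' /tmp/net.csv
```

The peak, rather than the average, is what matters for sizing the backhaul link.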
As to latency... the "powers that be" have decided that "... sucks to be
trading on your own; better to go through our dealers..." :-P

Rgds,