On 05/19/2016 07:45 PM, Matthew Thode wrote:
> On 05/19/2016 07:42 PM, Matthew Thode wrote:
>> On 05/19/2016 06:47 PM, Robin H. Johnson wrote:
>>> Infra has already discussed most of this hardware planning in
>>> #gentoo-infra, but I thought it might be useful to see any other
>>> comments on the hardware plan. If you wish to make private comments
>>> on this thread, please send them directly to infra@g.o or
>>> gentoo-core@l.g.o instead of the gentoo-project list.
>>>
>>> Remarks like 'you should use ZFS instead of this' aren't directly
>>> helpful to this discussion. What is more useful is pointing out any
>>> potential problems you might see with the plan, or gotchas in the
>>> hardware.
>>>
>>> We've previously run Ganeti [0] with general success, and we'd like
>>> to continue doing so (vs. libvirt or OpenStack). It offers VM storage
>>> redundancy via DRBD (amongst other options), which we're going to
>>> take best advantage of by using a cross-over 10Gbit link between the
>>> two nodes (as we have no 10Gbit switching in the environment). Some
>>> of the VMs will run on spinning disk, others on SSD, and others maybe
>>> with dm-cache. libvirt IS an easy fallback from Ganeti, but lacks
>>> some of the automated failover and DRBD handling options.
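[As a rough illustration of the Ganeti/DRBD setup described above -- a
sketch only: the node names, IPs, OS variant, and disk size are all
hypothetical, and flags should be checked against the installed Ganeti
version:]

```shell
# Initialize a two-node cluster; the secondary IPs put DRBD replication
# traffic on the 10Gbit cross-over link (hypothetical addresses):
gnt-cluster init --secondary-ip 192.168.100.1 cluster.gentoo.org
gnt-node add --secondary-ip 192.168.100.2 node2.gentoo.org

# Create an instance with DRBD-mirrored storage across both nodes:
gnt-instance add -t drbd -n node1.gentoo.org:node2.gentoo.org \
    -o gentoo+default -s 200G woodpecker

# If the primary node fails, the instance can be brought up on its
# secondary -- the kind of automated handling plain libvirt lacks:
gnt-instance failover woodpecker
```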
>>>
>>> This will house at least the following existing VMs, all of which
>>> have large storage needs:
>>> - woodpecker.gentoo.org
>>> - roverlay.dev.g.o
>>> - tinderbox.amd64.dev.g.o
>>> - devbox.amd64.dev.g.o
>>>
>>> And virtualize the following older systems:
>>> [2007 Dells]
>>> - finch.g.o (puppet)
>>> - vulture.g.o (GSoC host)
>>> [2010 Atoms]
>>> - bellbird.g.o (infra services)
>>> - bittern.g.o (blogs webhost)
>>> - bobolink.g.o (rsync.g.o node, DNS slave)
>>> - brambling.g.o (bouncer, devmanual, infra-status)
>>> [Other]
>>> - meadowlark.g.o (infra services)
>>>
>>> And new VMs/services:
>>> - split git-to-rsync & snapshot generation from dipper?
>>> - split blogs (and other) database hosting from dipper?
>>>
>>> We'd probably keep the two other 2011 Dell systems in operation for
>>> the moment, to distribute load better, but have enough capacity to
>>> run their VMs if and when they fail.
>>>
>>> The best prices we've seen so far are from a vendor that's new to
>>> us, WiredZone, and we're willing to give them a try unless somebody
>>> has even better pricing to offer us.
>>>
>>> Hardware (all in USD):
>>> Supermicro SYS-2028TP-DECTR [1][2]
>>> - $2,732.42/ea, quantity 1
>>> - two half-width 2U nodes in a single chassis w/ shared redundant
>>>   PSUs
>>> - each node has:
>>>   - 2x 10GbE ports (there are no SFP options)
>>>   - 12x 2.5" SAS3 bays, controller in JBOD/IT mode
>>> Per node:
>>> Intel Xeon E5-2620 v4 [3]
>>> - $421.56/ea, quantity 2
>>> 32GB DDR4 PC4-19200 (2400MHz) 288-pin RDIMM ECC Registered [4]
>>> - $162.89/ea, quantity 4
>>> - requires a minimum of two DIMMs per CPU
>>> - the price jump to 64GB DIMMs is very high
>>> - buy more RAM later?
>>> Seagate 2TB SAS 12Gb/s 7200RPM 2.5", ST2000NX0273 [5]
>>> - $315.18/ea, quantity 4
>>> - 4-disk RAID5 (mdadm)
>>> Samsung 850 EVO 1TB, MZ-75E1T0B/AM [6]
>>> - $345.00/ea, quantity 2
>>> - RAID1 (mdadm)
>>> = $3,445.40/node
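[For reference, the two per-node arrays above sketched as mdadm
invocations -- device names are hypothetical; the JBOD/IT-mode
controller exposes each disk individually:]

```shell
# 4x 2TB SAS HDDs as RAID5 (~6TB usable) for bulk VM storage:
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# 2x 1TB SSDs as RAID1 (~1TB usable) for latency-sensitive VMs:
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
    /dev/sde /dev/sdf
```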
>>>
>>> Overall cost:
>>> $2,732.42 - chassis
>>> $3,445.40 - left node components
>>> $3,445.40 - right node components
>>> $  315.18 - 1x spare ST2000NX0273 HDD
>>> $   25.00 - 3ft CAT6a patch cable (estimated)
>>>
>>> Parts sub-total:  $9,963.40
>>> Labour sub-total: $300.00 (estimate)
>>> Taxes:            $0.00 (Oregon has no sales tax)
>>> S&H:              $200.00 (estimate)
>>>
>>> Grand total: $10,463.40 (USD)
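[The totals can be re-derived from the per-item prices in the quote;
this only checks the arithmetic, using the figures given above:]

```shell
# Per-node parts: 2x CPU, 4x 32GB DIMM, 4x HDD, 2x SSD
node=$(awk 'BEGIN { printf "%.2f", 2*421.56 + 4*162.89 + 4*315.18 + 2*345.00 }')
# Parts: chassis + both nodes + spare HDD + patch cable
parts=$(awk -v n="$node" 'BEGIN { printf "%.2f", 2732.42 + 2*n + 315.18 + 25.00 }')
# Grand total: parts + labour ($300) + S&H ($200), no sales tax
grand=$(awk -v p="$parts" 'BEGIN { printf "%.2f", p + 300 + 200 }')
echo "per node: \$$node   parts: \$$parts   grand total: \$$grand"
```

This matches the $3,445.40/node, $9,963.40 parts sub-total, and
$10,463.40 grand total quoted above.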
>>>
>>> Future hardware improvement options:
>>> - Add more RAM
>>> - Add up to 6x more disks per node
>>>
>>> [0] http://www.ganeti.org/
>>> [1] http://www.supermicro.com/products/system/2U/2028/SYS-2028TP-DECTR.cfm
>>> [2] http://www.wiredzone.com/supermicro-multi-node-servers-twin-barebone-dual-cpu-2-node-sys-2028tp-dectr-10024389
>>> [3] https://www.wiredzone.com/intel-components-cpu-processors-server-bx80660e52620v4-10025960
>>> [4] https://www.wiredzone.com/supermicro-components-memory-ddr4-mem-dr432l-sl01-er24-10025993
>>> [5] https://www.wiredzone.com/seagate-components-hard-drives-enterprise-st2000nx0273-10024175
>>> [6] https://www.wiredzone.com/samsung-components-hard-drives-enterprise-mz-75e1t0b-am-10024043
>>>
>>
>> +1 to this generally. The one question I have is whether we want to
>> spend ~$1k more on one of the 4-node chassis; it'd let us expand more
>> easily in the future. The reasoning against it that I can think of is
>> that we want a higher disk-to-node ratio (the 4-node chassis gives
>> just 6 bays per node instead of 12).
>>
> Another reason I just thought of: we may only have access to 15A
> outlets. If so, it'd limit us to one of the following (from our
> list), as they are 1600W:
>

http://www.supermicro.com/products/system/2U/2028/SYS-2028TR-HTR.cfm
http://www.supermicro.com/products/system/2U/2028/SYS-2028TR-H72R.cfm
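[A quick check of the 15A concern, assuming 120V circuits and the
usual 80% continuous-load derating -- the derating assumption is
ours, not from the thread:]

```shell
# Continuous power budget of a 15A/120V circuit at 80% derating:
budget=$(awk 'BEGIN { printf "%.0f", 15 * 120 * 0.8 }')
echo "continuous budget per 15A/120V circuit: ${budget}W (vs 1600W PSUs)"
```

At 1440W of continuous budget, a 1600W PSU could not be driven at full
load from a single 15A outlet, which is the limitation being raised.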

fixed...

--
-- Matthew Thode (prometheanfire)