On 05/19/2016 07:42 PM, Matthew Thode wrote:
> On 05/19/2016 06:47 PM, Robin H. Johnson wrote:
>> Infra has already discussed most of this hardware planning in
>> #gentoo-infra, but I thought it might be useful to see any other
>> comments on the hardware plan. If you wish to make private comments to
>> this thread, please send directly to infra@g.o or
>> gentoo-core@l.g.o instead of the gentoo-project list.
>>
>> Remarks like 'you should use ZFS instead of this' aren't directly
>> helpful to this discussion. What is more useful is pointing out any
>> potential problems you might see with the plan, or gotchas in the
>> hardware.
>>
>> We've previously run Ganeti [0] with general success, and we'd like to
>> continue doing so (vs libvirt or openstack). It offers VM storage
>> redundancy via DRBD (amongst other options), which we're going to take
>> best advantage of by using a cross-over 10Gbit link between two nodes
>> (as we have no 10Gbit switching in the environment). Some of the VMs
>> will run on spinning disk, others on SSD, others maybe w/ dm-cache.
>> libvirt IS an easy fallback from Ganeti, but lacks some of the automated
>> failover and DRBD handling options.
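(A back-of-envelope check on why the dedicated 10Gbit cross-over matters for DRBD: a full resync of a 2TB volume is on the order of half an hour at that rate. This is a sketch, assuming the link is the bottleneck and sustains roughly 80% of line rate; both figures are assumptions, not measurements.)

```python
# Rough estimate of DRBD full-resync time over the proposed 10Gbit link.
# Assumptions (not measured): link is the bottleneck, ~80% effective rate.

def resync_hours(disk_gb, link_gbit_per_s=10.0, efficiency=0.8):
    """Estimated hours to resync `disk_gb` gigabytes over the link."""
    gbytes_per_s = link_gbit_per_s / 8.0 * efficiency  # effective GB/s
    return disk_gb / gbytes_per_s / 3600.0

# A 2000 GB (2 TB) volume at ~1 GB/s effective: a bit over half an hour.
print(round(resync_hours(2000), 2))
```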
>>
>> This will house at least the following existing VMs, all of which have
>> large storage needs:
>> - woodpecker.gentoo.org
>> - roverlay.dev.g.o
>> - tinderbox.amd64.dev.g.o
>> - devbox.amd64.dev.g.o
>>
>> And virtualize the following older systems:
>> [2007 Dells]
>> - finch.g.o (puppet)
>> - vulture.g.o (GSoC host)
>> [2010 Atoms]
>> - bellbird.g.o (infra services)
>> - bittern.g.o (blogs webhost)
>> - bobolink.g.o (rsync.g.o node, dns slave)
>> - brambling.g.o (bouncer, devmanual, infra-status)
>> [Other]
>> - meadowlark.g.o (infra services)
>>
>> And New VMs/services:
>> - split git to rsync & snapshot generation from dipper?
>> - split blogs (and other) database hosting from dipper?
>>
>> We'd probably keep the two other 2011 Dell systems in operation for the
>> moment, to distribute load better, but have enough capacity to run their
>> VMs when they fail.
>>
>> The general best prices we've seen are from a vendor that's new to us,
>> WiredZone, and we're willing to give them a try unless somebody has even
>> better pricing to offer us.
>>
>> Hardware (all in $USD):
>> Supermicro SYS-2028TP-DECTR [1][2]
>> - $2,732.42/ea, quantity 1
>> - two half-width 2U nodes in a single chassis w/ shared redundant PSU.
>> - each node has:
>> - 2x 10GbE ports (there are no SFP options)
>> - 12x 2.5" SAS3, controller in JBOD/IT mode
>> Per node:
>> Intel Xeon E5-2620v4 [3]
>> - $421.56/ea, quantity 2
>> 32GB DDR4 PC4-19200 (2400MHz) 288-pin RDIMM ECC Registered [4]
>> - $162.89/ea, quantity 4
>> - require min of two DIMMs per CPU
>> - price jump to 64GB DIMMs very high.
>> - buy more RAM later?
>> Seagate 2TB SAS 12Gb/s 7200RPM 2.5in, ST2000NX0273 [5]
>> - $315.18/ea, quantity 4
>> - 4-disk RAID5 (mdadm)
>> Samsung 850 EVO 1TB, MZ-75E1T0B/AM [6]
>> - $345.00/ea, quantity 2
>> - RAID1 (mdadm)
>> = $3445.40/node
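(For reference, the usable capacity that layout gives each node: a quick sketch in decimal TB, ignoring filesystem and md metadata overhead.)

```python
# Usable capacity per node under the proposed mdadm layout (decimal TB,
# filesystem and metadata overhead ignored).

def raid5_usable(n_disks, disk_tb):
    # RAID5 spends one disk's worth of space on parity across the array.
    return (n_disks - 1) * disk_tb

def raid1_usable(n_disks, disk_tb):
    # RAID1 mirrors: usable space is one disk, however many mirrors.
    return disk_tb

hdd_tb = raid5_usable(4, 2.0)  # 4x 2TB SAS in RAID5 -> 6.0 TB usable
ssd_tb = raid1_usable(2, 1.0)  # 2x 1TB SSD in RAID1 -> 1.0 TB usable
print(hdd_tb, ssd_tb)
```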
>>
>> Overall cost:
>> $2,732.42 - chassis
>> $3,445.40 - left node components
>> $3,445.40 - right node components
>> $  315.18 - 1x spare ST2000NX0273 HDD
>> $   25.00 - 3ft CAT6a patch cable (estimated)
>>
>> Parts sub-total: $9,963.40
>> Labour sub-total: $300 (estimate)
>> Taxes: $0.00 (Oregon has no sales taxes)
>> S&H: $200 (estimate)
>>
>> Grand total: $10,463.40 (USD)
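(The arithmetic checks out; a quick cross-check using the unit prices quoted above:)

```python
# Cross-check of the quoted figures (all prices copied from the list above).

node = 2 * 421.56 + 4 * 162.89 + 4 * 315.18 + 2 * 345.00  # per-node parts
parts = 2732.42 + 2 * node + 315.18 + 25.00  # chassis + 2 nodes + spare + cable
total = parts + 300.00 + 200.00              # + labour + S&H (no OR sales tax)
print(round(node, 2), round(parts, 2), round(total, 2))
```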
>>
>> Future hardware improvement options:
>> - Add more RAM
>> - Add up to 6x more disks per node.
>>
>> [0] http://www.ganeti.org/
>> [1] http://www.supermicro.com/products/system/2U/2028/SYS-2028TP-DECTR.cfm
>> [2] http://www.wiredzone.com/supermicro-multi-node-servers-twin-barebone-dual-cpu-2-node-sys-2028tp-dectr-10024389
>> [3] https://www.wiredzone.com/intel-components-cpu-processors-server-bx80660e52620v4-10025960
>> [4] https://www.wiredzone.com/supermicro-components-memory-ddr4-mem-dr432l-sl01-er24-10025993
>> [5] https://www.wiredzone.com/seagate-components-hard-drives-enterprise-st2000nx0273-10024175
>> [6] https://www.wiredzone.com/samsung-components-hard-drives-enterprise-mz-75e1t0b-am-10024043
>>
>
> +1 to this generally; the one question I have is whether we want to spend
> ~$1k more on one of the 4-node chassis, as it'd allow us to expand more
> easily in the future. The reasoning against this that I can think of is
> that we want a higher disk/node ratio (the 4-node chassis allows just 6
> disks per node instead of 12).
>
Another reason I just thought of: we may only have access to 15A
outlets. If so, it'd limit us to one of the following (from our list),
as they are 1600W.

http://www.supermicro.com/products/system/2U/2028/SYS-2028TR-HTR.cfm
http://www.supermicro.com/products/system/2U/2028/SYS-2028TR-H72R.cfm
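(Rough math behind the 15A concern, assuming 120V circuits and the NEC 80% continuous-load rule; both assumptions are mine, not from the thread:)

```python
# Power math for a 1600W PSU on a 15A circuit.
# Assumptions (not from the thread): 120V outlets, NEC 80% continuous rule.

volts = 120.0
breaker_amps = 15.0
psu_watts = 1600.0

continuous_limit_w = volts * breaker_amps * 0.8  # 1440 W usable continuously
full_draw_amps = psu_watts / volts               # ~13.3 A if PSU ran flat out
print(continuous_limit_w, round(full_draw_amps, 1))
```

So a single 1600W supply running near capacity already exceeds what one 15A circuit can safely carry continuously, which is why the chassis choice is constrained.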
114 |
|
115 |
|
116 |
-- |
117 |
-- Matthew Thode (prometheanfire) |