Gentoo Archives: gentoo-project

From: Matthew Thode <prometheanfire@g.o>
To: gentoo-project@l.g.o
Subject: Re: [gentoo-project] Replacement hardware planning for Gentoo VM hosting
Date: Fri, 20 May 2016 00:42:43
Message-Id: d0c0d66b-7724-b6e5-135e-dc1229dd5e9b@gentoo.org
In Reply to: [gentoo-project] Replacement hardware planning for Gentoo VM hosting by "Robin H. Johnson"
On 05/19/2016 06:47 PM, Robin H. Johnson wrote:
> Infra has already discussed most of this hardware planning in
> #gentoo-infra, but I thought it might be useful to see any other
> comments on the hardware plan. If you wish to make private comments to
> this thread, please send directly to infra@g.o or
> gentoo-core@l.g.o instead of the gentoo-project list.
>
> Remarks like 'you should use ZFS instead of this' aren't directly
> helpful to this discussion. What is more useful is pointing out any
> potential problems you might see with the plan, or gotchas in the
> hardware.
>
> We've previously run Ganeti [0] with general success, and we'd like to
> continue doing so (vs libvirt or openstack). It offers VM storage
> redundancy via DRBD (amongst other options), which we're going to take
> best advantage of by using a cross-over 10Gbit link between two nodes
> (as we have no 10Gbit switching in the environment). Some of the VMs
> will run on spinning disk, others on SSD, others maybe w/ dm-cache.
> libvirt IS an easy fallback from Ganeti, but lacks some of the automated
> failover and DRBD handling options.
>
> This will house at least the following existing VMs, all of which have
> large storage needs:
> - woodpecker.gentoo.org
> - roverlay.dev.g.o
> - tinderbox.amd64.dev.g.o
> - devbox.amd64.dev.g.o
>
> And virtualize the following older systems:
> [2007 Dells]
> - finch.g.o (puppet)
> - vulture.g.o (GSoC host)
> [2010 Atoms]
> - bellbird.g.o (infra services)
> - bittern.g.o (blogs webhost)
> - bobolink.g.o (rsync.g.o node, dns slave)
> - brambling.g.o (bouncer, devmanual, infra-status)
> [Other]
> - meadowlark.g.o (infra services)
>
> And New VMs/services:
> - split git to rsync & snapshot generation from dipper?
> - split blogs (and other) database hosting from dipper?
>
> We'd probably keep the two other 2011 Dell systems in operation for the
> moment, to distribute load better, but have enough capacity to run their
> VMs as and when they fail.
>
> The general best prices we've seen are from a vendor that's new to us,
> WiredZone, and we're willing to give them a try unless somebody has even
> better pricing to offer us.
>
> Hardware (all in $USD):
> Supermicro SYS-2028TP-DECTR [1][2]
> - $2,732.42/ea, quantity 1
> - two half-width 2U nodes in a single chassis w/ shared redundant PSU.
> - each node has:
>   - 2x 10GbE ports (there are no SFP options)
>   - 12x 2.5" SAS3, controller in JBOD/IT mode
> Per node:
> Intel Xeon E5-2620v4 [3]
> - $421.56/ea, quantity 2
> 32GB DDR4 PC4-19200 (2400MHz) 288-pin RDIMM ECC Registered [4]
> - $162.89/ea, quantity 4
> - requires a minimum of two DIMMs per CPU
> - price jump to 64GB DIMMs is very high.
> - buy more RAM later?
> Seagate 2TB SAS 12Gb/s 7200RPM 2.5in, ST2000NX0273 [5]
> - $315.18/ea, quantity 4
> - 4-disk RAID5 (mdadm)
> Samsung 850 EVO 1TB, MZ-75E1T0B/AM [6]
> - $345.00/ea, quantity 2
> - RAID1 (mdadm)
> = $3445.40/node
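
On the RAID5/RAID1 lines above, roughly what I'd expect the per-node disk
layout to look like (device and VG names are placeholders): each mdadm array
becomes an LVM volume group, and Ganeti then carves instance logical volumes
(and the DRBD devices on top of them) out of those.

  # spinning disks: 4-disk RAID5 for the bulk/large-storage VMs
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

  # SSDs: 2-disk RAID1 for the latency-sensitive VMs
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde /dev/sdf

  # expose both arrays to LVM; Ganeti allocates instance disks from the VGs
  pvcreate /dev/md0 /dev/md1
  vgcreate vg_hdd /dev/md0
  vgcreate vg_ssd /dev/md1

dm-cache, if we end up wanting it, would then be layered on top of the
spinning-disk LVs rather than needing extra hardware.
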
>
> Overall cost:
> $2,732.42 - chassis
> $3,445.40 - left node components
> $3,445.40 - right node components
> $  315.18 - 1x spare ST2000NX0273 HDD
> $   25.00 - 3ft CAT6a patch cable (estimated)
>
> Parts sub-total: $9,963.40
> Labour sub-total: $300 (estimate)
> Taxes: $0.00 (Oregon has no sales tax)
> S&H: $200 (estimate)
>
> Grand total: $10,463.40 (USD)
>
> Future hardware improvement options:
> - Add more RAM
> - Add up to 6x more disks per node.
>
> [0] http://www.ganeti.org/
> [1] http://www.supermicro.com/products/system/2U/2028/SYS-2028TP-DECTR.cfm
> [2] http://www.wiredzone.com/supermicro-multi-node-servers-twin-barebone-dual-cpu-2-node-sys-2028tp-dectr-10024389
> [3] https://www.wiredzone.com/intel-components-cpu-processors-server-bx80660e52620v4-10025960
> [4] https://www.wiredzone.com/supermicro-components-memory-ddr4-mem-dr432l-sl01-er24-10025993
> [5] https://www.wiredzone.com/seagate-components-hard-drives-enterprise-st2000nx0273-10024175
> [6] https://www.wiredzone.com/samsung-components-hard-drives-enterprise-mz-75e1t0b-am-10024043
>
+1 to this generally. The one question I have is whether we want to spend
~$1k more for one of the 4-node chassis; it would let us expand more easily
in the future. The argument against it that I can see is that we want a
higher disk-to-node ratio (the 4-node chassis gives only 6 drive bays per
node instead of 12).

--
-- Matthew Thode (prometheanfire)

Attachments

  File name:  signature.asc
  MIME type:  application/pgp-signature
Replies

  Subject: Re: [gentoo-project] Replacement hardware planning for Gentoo VM hosting
  Author:  Matthew Thode <prometheanfire@g.o>