Gentoo Archives: gentoo-cluster

From: Alexey Shvetsov <alexxy@g.o>
To: Andrew Savchenko <bircoph@g.o>
Cc: gentoo-cluster@l.g.o, k_f@g.o
Subject: [gentoo-cluster] Re: Stats on HPC clusters running Gentoo
Date: Thu, 11 Jan 2018 18:42:41
Message-Id: dd17a7cb9f0fc43ea6919aae240761da@gentoo.org
In Reply to: [gentoo-cluster] Stats on HPC clusters running Gentoo by Andrew Savchenko
Hi all!

Sorry for the delay.

Well, we currently have a few systems at SPbSTU (and yes, some of them run
Gentoo, but currently not the one that is in the Top500).
7
So, about the systems:

One old one, called Sigma:

8 nodes:
cpu: 4 x AMD Opteron 6174, 12 cores, 2.2 GHz
mem: 128 GB RAM
net: 40 Gbps QDR InfiniBand, 10 Gbps Ethernet
Storage FS: Lustre
Storage size: 13.5 TB
18
The second one is a slightly exotic system called ccNUMA (actually part of an
HPC complex installed in 2014):

64 nodes:
cpu: 3 x AMD Opteron 6380, 16 cores, 2.5 GHz
mem: 96 GB RAM
net: 56 Gbps FDR InfiniBand, NumaScale
Storage FS: Lustre
Storage size: 1 PB

ccNUMA actually uses the NumaScale interconnect and bootloader, so it can be
used as a jumbo ccNUMA Single System Image with ~3000 cores and 12 TB RAM,
all visible to a single Linux kernel (and yes, it runs Gentoo).
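The "~3000 cores" figure is easy to sanity-check from the per-node specs listed above (illustrative back-of-the-envelope arithmetic only, not official numbers):

```python
# Sanity check of the "~3000 cores" figure for the ccNUMA partition,
# computed from the per-node specs quoted above (illustrative only).
nodes = 64
sockets_per_node = 3      # 3 x AMD Opteron 6380 per node
cores_per_socket = 16

total_cores = nodes * sockets_per_node * cores_per_socket
print(total_cores)  # 3072, i.e. the "~3000 cores" quoted above
```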


The other 3 partitions of our HPC systems currently run CentOS 7 (but may
be migrated to Gentoo in the near future). All these partitions share the
same Lustre FS as ccNUMA:
38
* tornado : nodes: 612
cpu: 2 x Intel Xeon CPU E5-2697 v3 @ 2.60GHz
cores/hwthreads: 28 / 28
mem: 64 GB
net: 56 Gbps FDR InfiniBand
* tornado-k40 : nodes: 56
cpu: 2 x Intel Xeon CPU E5-2697 v3 @ 2.60GHz
cores/hwthreads: 28 / 28
co-processor: 2 x Nvidia Tesla K40x
co-processor mem: 12 GB
mem: 64 GB
net: 56 Gbps FDR InfiniBand
* mic : nodes: 288
cpu: 1 x Intel Xeon Phi 5120D @ 1.10GHz
cores/hwthreads: 60 / 240
mem: 8 GB
net: 56 Gbps FDR InfiniBand

Storage: 1 PB Lustre FS
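For reference, the totals for these three partitions work out as follows (a quick sketch from the per-partition figures listed above, counting physical cores per node; not official numbers):

```python
# Aggregate node/core counts for the three CentOS 7 partitions listed
# above (illustrative arithmetic from the quoted per-node figures).
partitions = {
    "tornado":     {"nodes": 612, "cores_per_node": 28},
    "tornado-k40": {"nodes": 56,  "cores_per_node": 28},
    "mic":         {"nodes": 288, "cores_per_node": 60},
}

total_nodes = sum(p["nodes"] for p in partitions.values())
total_cores = sum(p["nodes"] * p["cores_per_node"]
                  for p in partitions.values())
print(total_nodes, total_cores)  # 956 35984
```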


PS: we also have an exotic storage system used for backups (400+ disks
attached via SAS, running a single ZFS filesystem).
63
Andrew Savchenko wrote on 05-01-2018 02:18:
> Hi Alexxy and all!
>
> While preparing the Gentoo flyer[1,2] for the upcoming FOSDEM event it was
> suggested to include some words about Gentoo in HPC. So I'm asking the
> community about recent stats on using Gentoo on HPC clusters.
>
> I run several clusters at a university on Gentoo, but they are weak
> by modern standards (<10 TFlops CPU, no GPU for the best one, though
> with an IB QDR interconnect), so I'm reluctant to add them to the
> flyer. If someone can provide us with better and up-to-date numbers,
> that would be nice.
>
> IIRC Alexxy is running some HPC setups. Are they on Gentoo? If yes,
> please provide some stats (FLOPS, number of cores, nodes, storage,
> interconnect and so on).
>
> If someone can provide any other relevant info, we'll be grateful.
>
> Years ago there was a Gentoo cluster project page with a list of
> active setups, but I can't find it any more and it is likely to be
> outdated anyway.
>
> [1]
> https://download.sumptuouscapital.com/gentoo/fosdem2018/gentoo-fosdem2018-flyer.pdf
> [2] git://git.sumptuouscapital.com/gentoo/pr-fosdem2018-flyer.git
>
> Best regards,
> Andrew Savchenko

--
Best Regards,
Alexey 'Alexxy' Shvetsov, PhD
Department of Molecular and Radiation Biophysics
FSBI Petersburg Nuclear Physics Institute, NRC Kurchatov Institute,
Leningrad region, Gatchina, Russia
Gentoo Team Ru
Gentoo Linux Dev
mailto:alexxyum@×××××.com
mailto:alexxy@g.o
mailto:alexxy@×××××××××××××.ru

Replies

Subject Author
Re: [gentoo-cluster] Re: Stats on HPC clusters running Gentoo Andrew Savchenko <bircoph@g.o>