public inbox for gentoo-cluster@lists.gentoo.org
From: Alexey Shvetsov <alexxy@gentoo.org>
To: Andrew Savchenko <bircoph@gentoo.org>
Cc: gentoo-cluster@lists.gentoo.org, k_f@gentoo.org
Subject: [gentoo-cluster] Re: Stats on HPC clusters running Gentoo
Date: Thu, 11 Jan 2018 21:42:37 +0300	[thread overview]
Message-ID: <dd17a7cb9f0fc43ea6919aae240761da@gentoo.org> (raw)
In-Reply-To: <20180105021803.c1cf3df2a48ff77d44aede0e@gentoo.org>

Hi all!

Sorry for the delay.

Well, we currently have a few systems at SPbSTU (and yes, some of them 
run Gentoo, but currently not the one that is in the Top500).

So, about the systems:

An older one called Sigma:

  8 nodes:
                 cpu:    4 x AMD Opteron 6174, 12 cores, 2.2 GHz
                 mem:    128G RAM
                 net:    40Gbps QDR InfiniBand, 10Gbps Ethernet
                 Storage FS:     Lustre
                 Storage size:   13.5 TB

The second one is a slightly exotic system called ccNUMA (actually part 
of an HPC complex installed in 2014):

64 nodes:
                 cpu:    3 x AMD Opteron 6380, 16 cores, 2.5 GHz
                 mem:    96G RAM
                 net:    56Gbps FDR InfiniBand, NumaScale
                 Storage FS:     Lustre
                 Storage size:   1 PB

ccNUMA actually uses the NumaScale interconnect and bootloader, so it 
can be used as a jumbo ccNUMA Single System Image with ~3000 cores and 
12 TB of RAM seen by a single Linux kernel (and yes, it runs Gentoo).
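For reference, the ~3000-core figure follows directly from the per-node 
specs above; a quick sanity check (the numbers come from this mail, the 
script is just illustrative arithmetic):

```shell
# Core count of the ccNUMA single-system-image machine,
# from the per-node specs listed above
nodes=64
sockets_per_node=3
cores_per_socket=16
total_cores=$((nodes * sockets_per_node * cores_per_socket))
echo "total cores: $total_cores"   # 3072, i.e. the "~3000 cores" above
```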


The other 3 partitions of our HPC systems currently run CentOS 7 (but 
may be migrated to Gentoo in the near future). All these partitions 
share the same Lustre FS as ccNUMA:

   * tornado     : nodes: 612
                     cpu: 2 x Intel Xeon CPU E5-2697 v3 @ 2.60GHz
         cores/hwthreads: 28 / 28
                     mem: 64G
                     net: 56Gbps FDR InfiniBand
   * tornado-k40 : nodes: 56
                     cpu: 2 x Intel Xeon CPU E5-2697 v3 @ 2.60GHz
         cores/hwthreads: 28 / 28
            co-processor: 2 x Nvidia Tesla K40x
        co-processor mem: 12G
                     mem: 64G
                     net: 56Gbps FDR InfiniBand
   * mic         : nodes: 288
                     cpu: 1 x Intel Xeon Phi 5120D @ 1.10GHz
         cores/hwthreads: 60 / 240
                     mem: 8G
                     net: 56Gbps FDR InfiniBand

Storage: 1 PB Lustre FS
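Since Andrew asked for FLOPS: a rough theoretical peak for the tornado 
partition can be derived from the specs above. The 16 DP FLOPs/cycle/core 
figure assumes Haswell-class AVX2+FMA at the 2.60 GHz base clock, so 
treat this as a back-of-the-envelope estimate, not a measured LINPACK 
number:

```shell
# Theoretical double-precision peak for the tornado partition:
# nodes x cores/node x GHz x FLOPs/cycle (16 assumed for AVX2+FMA)
nodes=612
cores_per_node=28
ghz=2.6
flops_per_cycle=16
awk -v n="$nodes" -v c="$cores_per_node" -v g="$ghz" -v f="$flops_per_cycle" \
    'BEGIN { printf "~%.0f TFLOPS peak\n", n * c * g * f / 1000 }'
# prints "~713 TFLOPS peak"
```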



PS: we also have a somewhat exotic storage system used for backups 
(400+ disks attached via SAS, running a single ZFS filesystem).
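To give a feel for the scale: assuming, purely for illustration (the 
actual vdev layout isn't stated here), that the 400+ disks were grouped 
into 10-wide raidz2 vdevs, the pool geometry would work out to:

```shell
# Hypothetical geometry for a single ZFS pool over ~400 SAS disks,
# assuming 10-wide raidz2 vdevs (2 parity disks each) -- an
# illustrative layout, not the actual configuration
disks=400
vdev_width=10
parity_per_vdev=2
vdevs=$((disks / vdev_width))
data_disks=$((vdevs * (vdev_width - parity_per_vdev)))
echo "$vdevs raidz2 vdevs, $data_disks data-bearing disks"
# prints "40 raidz2 vdevs, 320 data-bearing disks"
```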

Andrew Savchenko wrote on 05-01-2018 02:18:
> Hi Alexxy and all!
> 
> While preparing Gentoo flyer[1,2] on upcoming FOSDEM event it was
> suggested to include some words about Gentoo in HPC. So I'm asking
> community about recent stats of using Gentoo on HPC clusters.
> 
> I ran several clusters at a university on Gentoo, but they are weak
> by modern standards (<10 TFlops CPU, no GPU for the best one, though
> with IB QDR interconnect), so I'm reluctant to add them in the
> flyer. If someone can provide us with better and actual numbers,
> this will be nice.
> 
> IIRC Alexxy is running some HPC setups. Are they on Gentoo? If yes,
> please provide some stats (Flops, number of cores, nodes, storage,
> interconnect and so on).
> 
> If someone can provide any other relevant info, we'll be grateful.
> 
> Years ago there was a Gentoo cluster project page with list of
> active setups, but I can't find it any more and it is likely to be
> outdated anyway.
> 
> [1]
> https://download.sumptuouscapital.com/gentoo/fosdem2018/gentoo-fosdem2018-flyer.pdf
> [2] git://git.sumptuouscapital.com/gentoo/pr-fosdem2018-flyer.git
> 
> Best regards,
> Andrew Savchenko

-- 
Best Regards,
Alexey 'Alexxy' Shvetsov, PhD
Department of Molecular and Radiation Biophysics
FSBI Petersburg Nuclear Physics Institute, NRC Kurchatov Institute,
Leningrad region, Gatchina, Russia
Gentoo Team Ru
Gentoo Linux Dev
mailto:alexxyum@gmail.com
mailto:alexxy@gentoo.org
mailto:alexxy@omrb.pnpi.spb.ru



Thread overview: 4+ messages
2018-01-04 23:18 [gentoo-cluster] Stats on HPC clusters running Gentoo Andrew Savchenko
2018-01-11 18:42 ` Alexey Shvetsov [this message]
2018-01-12 23:32   ` [gentoo-cluster] " Andrew Savchenko
     [not found] ` <a22452a3-faba-dcf3-cecd-558d1a0c7406@gentoo.org>
     [not found]   ` <8be6559a-8116-6a22-7568-8956f77a4935@gentoo.org>
     [not found]     ` <9b749d52-c944-f9c3-44e8-74be8a7e3a61@gentoo.org>
     [not found]       ` <4bd29d6d-a454-d417-54ca-28a3595f468f@gentoo.org>
     [not found]         ` <fae48e63-9fd8-e70e-d815-f52b1ebcbab1@gentoo.org>
2018-01-14 20:03           ` [gentoo-cluster] " Alexey Shvetsov
