The Smart HPC staff handles the installation, management and maintenance of the High Performance Computing infrastructure used by our researchers.
The group also installs and manages software commonly used in scientific environments, such as compilers, libraries and packages for molecular modeling and computational spectroscopy.
The main duty of the HPC staff is the management of the Avogadro computing cluster, which runs GNU/Linux as its operating system and uses Bright Cluster Manager for cluster management and monitoring; PBS Pro handles job submission and scheduling. This setup is complemented by custom in-house scripts that automate management and monitoring tasks.
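The in-house scripts themselves are not published here; purely as an illustration, the sketch below shows how a small Python monitoring helper might query PBS Pro's standard `pbsnodes` command and flag nodes that are down or offline. Only the `pbsnodes` interface is standard PBS Pro; the helper itself is an assumption, not the staff's actual tooling.

```python
#!/usr/bin/env python3
"""Minimal sketch of a node-health check built on PBS Pro's `pbsnodes -a` output.

Hypothetical helper, not the actual in-house tooling: it assumes `pbsnodes` is on
PATH and prints its usual format (an unindented node name followed by indented
"state = ..." attribute lines).
"""
import subprocess


def node_states() -> dict[str, str]:
    """Return a mapping of node name -> PBS state string (e.g. 'free', 'down')."""
    out = subprocess.run(
        ["pbsnodes", "-a"], capture_output=True, text=True, check=True
    ).stdout
    states, current = {}, None
    for line in out.splitlines():
        if line and not line[0].isspace():              # unindented line = node name
            current = line.strip()
        elif current and line.strip().startswith("state ="):
            states[current] = line.split("=", 1)[1].strip()
    return states


if __name__ == "__main__":
    for node, state in node_states().items():
        if any(bad in state for bad in ("down", "offline")):
            print(f"WARNING: {node} is {state}")
```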
The HPC staff is in charge of:
HPC operations are supported by a dedicated and versatile infrastructure. Users access the computing cluster via SSH to a gateway server, which also acts as the job submission node. The PBS Pro workload manager then queues the computing jobs to the nodes, which are grouped into seven clusters according to architecture and role. You can read our Wiki pages to learn more about how to use the cluster.
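As a concrete illustration of this workflow, a submission from the gateway node might look like the following sketch, which wraps PBS Pro's standard `qsub` command from Python. The queue name, resource request, walltime and the `my_simulation` executable are placeholders, not the cluster's actual configuration.

```python
#!/usr/bin/env python3
"""Sketch of submitting a batch job to PBS Pro from the gateway node.

The queue name, resource values and executable below are hypothetical
placeholders; only the qsub interface itself is standard PBS Pro.
"""
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/bash
#PBS -N example_job
#PBS -q workq
#PBS -l select=1:ncpus=16:mem=32gb
#PBS -l walltime=02:00:00
# 'workq' and the resource values above are placeholders
cd "$PBS_O_WORKDIR"
./my_simulation input.dat > output.log
"""


def submit(script_text: str) -> str:
    """Write the job script to a temporary file, submit it and return the job ID."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()      # e.g. "1234.gateway"


if __name__ == "__main__":
    print("Submitted job", submit(JOB_SCRIPT))
```

The main node classes currently available are summarized below.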
| Cluster name | Total nodes | Cores per node | RAM per node | Notes |
| --- | --- | --- | --- | --- |
| Cannizzaro | 5 | 64, 24 or 16 (see notes) | 64 or 128 GB (see notes) | The cluster is made up of AMD and Intel servers with different core counts and RAM amounts. The nodes also host various models of Nvidia GPU cards for CUDA computations. |
| Mulliken | 1 | 224 | 6 TB | HPE Superdome Flex Server with ccNUMA architecture (with Hyper-Threading) |
| Pauling | 1 | 160 | 4 TB | SGI Ultraviolet 3000 with ccNUMA architecture |
| Vanthoff | 1 | 240 | 6 TB | SGI Ultraviolet 2000 with ccNUMA architecture |
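Because the node classes differ widely in core count, memory and accelerators, the resource request in a job script is what steers a job toward the right hardware. The snippet below sketches, in the same hypothetical style as above, how PBS Pro select statements might differ for a GPU run versus a large shared-memory run; the `ngpus` resource name and the exact limits depend on the site configuration and are assumptions here.

```python
# Hypothetical PBS Pro resource requests for different node classes; the actual
# resource names (e.g. ngpus) and limits depend on the Avogadro site configuration.
GPU_JOB = "-l select=1:ncpus=16:mem=64gb:ngpus=1"      # e.g. a GPU-equipped Cannizzaro node
BIG_MEM_JOB = "-l select=1:ncpus=160:mem=3000gb"       # e.g. a large ccNUMA node such as Pauling


def qsub_command(script: str, resources: str) -> list[str]:
    """Build the argument list for a qsub call with the given resource request."""
    return ["qsub", *resources.split(), script]


print(qsub_command("job.pbs", GPU_JOB))
```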