High Performance Computing (HPC)
The SMART HPC staff handles the installation, management and maintenance of the High Performance Computing infrastructure used by our researchers.
The group also takes care of the installation and management of software commonly used in scientific environments, such as compilers, libraries and software for molecular modeling and computational spectroscopy.
The main duty of the HPC staff is the management of the Avogadro computing cluster, which runs GNU/Linux as its operating system and uses Bright Cluster Manager for cluster management and monitoring; additionally, PBS Pro is used for job submission and scheduling. This is complemented by custom in-house scripts that automate management and monitoring tasks.
The HPC staff is in charge of:
- Installation, maintenance, management and delivery of the Avogadro cluster, based on CentOS Linux 6.5 and Bright Cluster Manager 7.0; this also includes the development of custom in-house scripts to automate management and monitoring tasks (a monitoring sketch is shown after this list);
- PBS Professional 12.2 server configuration and fine-tuning of the scheduling process and execution queues;
- Installation and management of software commonly used in scientific environments, such as compilers, libraries and software for molecular modeling and computational spectroscopy, paying particular attention to the ease of use of the cluster;
- Data synchronization and mirroring of the Avogadro cluster user list;
- Installation, maintenance, management and update of the SMARTLAB and SIAS facilities devoted to Cultural Heritage activities in the DREAMS network;
- Configuration of the Hitachi HNAS 3080 storage and of its link aggregation;
- Installation and configuration of the Nvidia CUDA Toolkit for hybrid CPU/GPU computations (a GPU example is sketched after this list);
- Development of software and HPC support for parallel computing (OpenMP/MPI-2), using standard implementations such as OpenMPI or vendor-developed libraries, and for parallel hybrid computing (GPGPU); an MPI example is sketched after this list;
- Benchmarking activities on newly released platforms potentially suited to HPC applications;
- Code development on demand and Linux support on x86-64 processors.
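The in-house scripts themselves are not published; purely as an illustration of the kind of monitoring task they automate, the following Python sketch summarises the jobs currently handled by PBS Professional by parsing the default output of qstat. The column layout assumed below is the standard one; the counting logic and formatting are illustrative only.

```python
#!/usr/bin/env python
"""Illustrative sketch: summarise PBS jobs by queue and state."""
import subprocess
from collections import Counter

def job_summary():
    # Default `qstat` output: two header lines followed by one row per job
    # with the columns  Job id, Name, User, Time Use, S (state), Queue.
    out = subprocess.check_output(["qstat"]).decode()
    counts = Counter()
    for line in out.splitlines()[2:]:   # skip the header lines
        fields = line.split()
        if len(fields) < 6:
            continue
        state, queue = fields[4], fields[5]
        counts[(queue, state)] += 1
    return counts

if __name__ == "__main__":
    for (queue, state), n in sorted(job_summary().items()):
        print("%-15s %s %4d" % (queue, state, n))
```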
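Production GPU codes on the cluster are typically built directly against the CUDA Toolkit in C/C++ or Fortran. Purely as a sketch of what a hybrid CPU/GPU computation looks like, the following Python example defines and launches a SAXPY kernel; it assumes the Numba package is available alongside the CUDA Toolkit, which is an assumption and not a statement about the Avogadro software stack.

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    """out[i] = a * x[i] + y[i], one GPU thread per element."""
    i = cuda.grid(1)
    if i < x.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
# Host arrays are transferred to and from the GPU implicitly by Numba.
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)
print(out[:4])
```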
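Similarly, most parallel codes are written in C/C++ or Fortran with OpenMP and MPI; the following minimal MPI example uses Python and mpi4py (assuming mpi4py is built against the cluster's OpenMPI, which should be verified on the Wiki) to distribute a sum over all ranks.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank computes a partial sum over its own stride of 0..999,
# then the partial results are combined on rank 0.
local = np.arange(rank, 1000, size, dtype=np.int64).sum()
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print("sum over %d ranks: %d (expected %d)" % (size, total, sum(range(1000))))
```

Such a program would typically be launched inside a PBS job with `mpirun -np <ncores> python sum_example.py`.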
Resources
HPC operations are supported by a dedicated and versatile infrastructure: users access the computing cluster via SSH to a gateway server, which also acts as the job submission node. The PBS Pro workload manager then takes care of queueing the computing jobs on the nodes, which are grouped into clusters according to their architecture and role.
Please check our Ganglia server for the current status of the cluster load. You can also read our Wiki pages to learn more about how to use the cluster; a minimal job submission example is sketched after the table below.
| Cluster name | Total nodes | Cores per node | RAM per node | Notes |
| --- | --- | --- | --- | --- |
| Cannizzaro | 7 | 64, 24 or 16 (see notes) | 64 or 128 GB (see notes) | Mixed AMD and Intel servers with different core counts and RAM amounts |
| Curie | 28 | 16 | 64 GB | |
| Hoffmann | 24 | 16 | 128 GB | |
| IIT | 32 | 16 | 24 GB | |
| Kohn | 8 | 16 | 24 GB | |
| Lee | 8 | 24 | 128 GB | |
| Pauling | 1 | 160 | 4 TB | SGI Ultraviolet 3000 with ccNUMA architecture |
| Pople | 8 | 64 | 128 GB | |
| Vanthoff | 1 | 240 | 6 TB | SGI Ultraviolet 2000 with ccNUMA architecture |
| Zewail | 8 | 8 | 12 GB | |
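As a minimal sketch of the submission workflow described above, the script below submits a batch job from the gateway node with qsub. The queue name, resource request and batch script name are placeholders rather than the actual Avogadro configuration; the real queues and limits are documented on the Wiki.

```python
#!/usr/bin/env python
"""Sketch only: submit a batch job to PBS Pro from the gateway node."""
import subprocess

job_script = "run_simulation.sh"      # hypothetical batch script prepared by the user

qsub_cmd = [
    "qsub",
    "-N", "example_job",              # job name
    "-q", "workq",                    # placeholder queue name
    "-l", "select=1:ncpus=16",        # one node, 16 cores (PBS Pro "select" syntax)
    "-l", "walltime=02:00:00",        # wall-clock limit
    job_script,
]

job_id = subprocess.check_output(qsub_cmd).decode().strip()
print("submitted as", job_id)
```

In everyday use the same resource request is more often written as #PBS directive lines at the top of the batch script itself, so that qsub can be invoked without extra options.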
People
Giordano Mancini
Researcher
Piazza San Silvestro, 7 - Office 1.2
Scuola Normale Superiore, Pisa
giordano.mancini@sns.it
Mariano Mirra
Staff
Piazza San Silvestro, 7 - Office 1.6
Scuola Normale Superiore, Pisa
mariano.mirra@sns.it
Sergio Fenicia
Staff
Piazza San Silvestro, 7 - Office 1.6
Scuola Normale Superiore, Pisa
sergio.fenicia@sns.it
Alberto Coduti
Staff
Piazza San Silvestro, 7 - Office 1.1
Scuola Normale Superiore, Pisa
alberto.coduti@sns.it