Resources

This page contains resources for running ONETEP on various computing facilities.

Please note that, due to a recent change in some browsers, the configuration files may not open directly in a browser window. If you are affected by this, download the files and open them locally.

Minerva

Description The Warwick Centre for Scientific Computing's High Performance Computing Cluster.

3000 cores.
Last updated 09 Mar 2015
Config file conf.minerva
Submit Script submit.minerva

Iridis3

Description The Iridis3 supercomputer at the University of Southampton

1008 compute nodes with two 4-core 2.27 GHz Nehalem processors, providing over 72 TFLOPS.
Last updated 01 Aug 2011
Config file conf.iridis3
Submit Script submit.iridis3

Iridis3 GPU nodes

Description The GPU nodes of the Iridis3 supercomputer at the University of Southampton

30 compute nodes, each with one 8-core 2.4 GHz Intel Xeon processor and two NVIDIA Tesla 20-series (M2050) GPUs.
Last updated 19 Feb 2013
Config file conf.iridis3.gpu
Submit Script submit.iridis3.gpu

Iridis4

Description The Iridis4 supercomputer at the University of Southampton

12200 cores (250 TFLOPS), 24 Intel Xeon Phi accelerators (25 TFLOPS)
Last updated 16 Jul 2013
Config file conf.iridis4_openmpi

Iridis5

Description The Iridis5 supercomputer at the University of Southampton
Last updated 08 May 2018
Config file (Intel compiler; currently not working) conf.iridis5.intel18.omp.scalapack
Config file (GNU compiler) conf.iridis5.gfortran640.impi17.imkl17.omp.scalapack
Submit Script submit_onetep_iridis5.gfortran640.impi17.imkl17.omp.scalapack.slurm
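
For orientation only, the sketch below shows the general shape of a SLURM submission script for a hybrid MPI/OpenMP ONETEP run; it is not the submit_onetep_iridis5 script itself, and the module names, executable name, task counts and walltime are placeholders to be replaced with the values given in the downloaded files.

    #!/bin/bash
    #SBATCH --job-name=onetep
    #SBATCH --nodes=2                # placeholder: scale to your calculation
    #SBATCH --ntasks-per-node=10     # MPI ranks per node (placeholder)
    #SBATCH --cpus-per-task=4        # OpenMP threads per rank (placeholder)
    #SBATCH --time=12:00:00

    # Placeholder module names: use the ones listed in the downloaded config/submit files.
    module load gcc/6.4.0 intel-mpi/17 intel-mkl/17

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

    # Hypothetical executable name; the actual name depends on how ONETEP was built.
    mpirun -np $SLURM_NTASKS ./onetep.iridis5 input.dat > input.out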

Emerald GPU cluster

Description The GPU nodes of the Emerald cluster at Rutherford Appleton Laboratory. Please note that using the numawrap8gpu script, together with exclusive use of nodes, is important for optimal performance.

60 HP SL390 compute nodes, each with two 6-core Intel Xeon X5650 processors, three 512-core NVIDIA M2090 GPUs and 48 GB of memory.

24 HP SL390 compute nodes, each with two 6-core Intel Xeon X5650 processors, eight 512-core NVIDIA M2090 GPUs and 96 GB of memory.

Currently, due to issues with the MVAPICH2 implementation of MPI, it is best to use the 8-GPU nodes exclusively and to run n*8 threads.

Last updated 19 Feb 2013
Config file conf.emerald.gpu
Submit Script submit.emerald.gpu
QPI script numawrap8gpu
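
As a rough illustration of the launch step only (the real scheduler directives are in submit.emerald.gpu), the wrapper is typically placed between the MPI launcher and the ONETEP executable. The exact invocation of numawrap8gpu, the executable name and the reading of n*8 as n*8 MPI processes (one per GPU) are assumptions here; check the downloaded scripts.

    # Launch-line sketch only: request whole 8-GPU nodes exclusively through the
    # scheduler, as done in submit.emerald.gpu.
    NODES=2                          # number of 8-GPU nodes requested (placeholder)
    mpirun -np $((NODES * 8)) ./numawrap8gpu ./onetep.emerald input.dat > input.out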

CX1

Description The Imperial High Performance Computing Cluster.

510 dual-core nodes in 3 different configurations (Intel Xeon EM64T CPUs at 3.6 GHz with 2 GB/node; Intel Xeon dual-core 5150 CPUs at 2.66 GHz with 4 GB/node; Intel Xeon dual-core 5150 CPUs at 2.66 GHz with 8 GB/node).
Last updated 08 Nov 2019
Config file conf.cx1
Submit Script submit.cx1

CMTH Network

Description Desktops in the Imperial College Condensed Matter Theory Group and Thomas Young Centre.

Most machines are HP desktops with Intel Core i7-2600 CPUs and 15 GB of RAM.
Last updated 23 Oct 2019
Config file (Intel compiler) conf_intel.cmth
Config file (GNU compiler) conf_gcc.cmth
Submit Script submit.cmth

Workstation PCs in Southampton

Description The RHEL 6.5 workstation PCs in Southampton

These have Intel quad-core Core i7 processors with HyperThreading and 12-16 GB of RAM.
Last updated 11 May 2016
Config file (Intel Fortran 16, OpenMP, ScaLAPACK, Intel MPI) conf.RH65.intel16.omp.scalapack
Config file (gfortran 4.9.1, OpenMP, ScaLAPACK, Intel MPI) conf.RH65.gfortran.intelmpi.omp.scalapack
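
As a rough guide to launching one of these builds interactively on such a workstation (not part of the distributed files; the process/thread split and executable name are illustrative):

    # Illustrative run on a quad-core CPU with HyperThreading:
    # 4 MPI processes x 2 OpenMP threads. The executable name is a placeholder.
    export OMP_NUM_THREADS=2
    mpirun -np 4 ./onetep.RH65 input.dat > input.out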

Thomas

Description The Thomas HPC facility, run by the MMM Hub

The UK National Tier 2 High Performance Computing Hub in Materials and Molecular Modelling. Nodes have two 12-core Intel Broadwell processors and 128 GB of RAM.
Last updated 23 Oct 2019
Config file conf.thomas