Abisko

Abisko is one of the HPC clusters at HPC2N. It comprises 332 nodes with a total of 15936 CPU cores, of which 328 nodes are available to our users. Each node is equipped with 4 AMD Opteron 6238 (Interlagos) 12-core 2.6 GHz processors, except for the 10 'fat' nodes, which are each equipped with 4 AMD Opteron 6344 (Abu Dhabi) 12-core 2.6 GHz processors.
The 10 'fat' nodes have 512 GB RAM each, and the 322 'thin' nodes have 128 GB RAM each.

“The latest AMD technology with several energy-efficient processor cores and large memory capacity per node makes the system unique and provides the researchers with a great flexibility to accomplish many different computations and simulations.”

Professor Bo Kågström, HPC2N Director

Each 'fat' node has 1 TB of local storage. Each 'thin' node has 500 GB of local storage.

The interconnect is 40 Gb/s Mellanox InfiniBand, ensuring fast inter-node communication.

"It triples the computing capacity compared with our earlier systems. Abisko is so far the largest investment, both in economic terms and with respect to total system performance and memory capacity."

Professor Bo Kågström


Abisko was installed in 2011, and upgraded with 10 extra nodes in early 2014.

Abisko has a theoretical peak performance of 163.74 TFlops and reached 131.9 TFlops on the HP LINPACK benchmark (measured on the original 322 compute nodes, which had a theoretical peak performance of 160.7 TFlops, giving an 82.05% efficiency).
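The quoted peak figures can be sanity-checked from the hardware specification. A minimal sketch, assuming 4 double-precision FLOPs per cycle per core (two Bulldozer-family cores share one 256-bit FMA-capable floating-point unit):

```python
# Sanity check of Abisko's quoted peak performance.
# Assumption: 4 double-precision FLOPs per cycle per core (two
# Bulldozer-family cores share one 256-bit FMA-capable FPU module).

def peak_tflops(nodes, cores_per_node=48, clock_ghz=2.6, flops_per_cycle=4):
    """Theoretical peak performance in TFlop/s."""
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

print(f"{peak_tflops(328):.2f}")  # 163.74 -- the full user-available system
print(f"{peak_tflops(322):.2f}")  # 160.74 -- the original system, as quoted
```

With these assumptions the arithmetic reproduces both the 163.74 TFlops figure for the extended system and the 160.7 TFlops figure for the original one.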

Abisko was ranked number 130 on the TOP500 list published in June 2012.

Abisko runs Ubuntu 16.04 (Xenial Xerus). It has the PathScale, Intel, Portland Group, and GNU compiler suites installed, as well as MPI libraries, and a large selection of software applications and numerical libraries.
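On clusters like this, compilers and libraries are typically accessed through environment modules. As a minimal sketch (the module names below are illustrative assumptions, not the exact names used on Abisko):

```shell
# Illustrative module names -- run `module avail` to see what the
# system actually provides.
module load gcc openmpi

# Compile an MPI program with the wrapper supplied by the MPI library.
mpicc -O2 -o hello_mpi hello_mpi.c
```
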

Abisko is running SLURM as its job scheduler. Read more about how to use SLURM on our documentation pages.
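As a minimal sketch of a SLURM batch script for one 48-core Abisko node (the job name and account string are placeholders, not real values):

```shell
#!/bin/bash
#SBATCH --job-name=hello_mpi
#SBATCH --account=SNIC-xxx-yy-zz   # placeholder project account
#SBATCH --ntasks=48                # one task per core on a single node
#SBATCH --time=00:10:00            # wall-clock limit, HH:MM:SS

# srun starts the tasks under SLURM's control.
srun ./hello_mpi
```

Submit the script with `sbatch jobscript.sh`; check its queue status with `squeue -u $USER`.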
 

"HPC2N of course becomes more attractive to our users, and our own research in parallel algorithm and software design also gains more impact when we develop in pace with the technology evolution. For all researchers, it opens the possibility to study even more complex and large-scale computational problems in various fields."

Professor Bo Kågström

Hardware

Abisko
Type: Cluster
Nodes/Cores: 328/15744 available to users (10 'fat', 318 'thin')
CPU: Each 'thin' node has 48 cores: 4 AMD Opteron 6238 (Interlagos) 12-core 2.6 GHz processors. Each 'fat' node has 48 cores: 4 AMD Opteron 6344 (Abu Dhabi) 12-core 2.6 GHz processors.
Memory: 10 'fat' nodes with 512 GB RAM/node, 318 'thin' nodes with 128 GB RAM/node
Disk: 7200 rpm hot-swap SATA disks. 1 TB per 'fat' node, 500 GB per 'thin' node
Interconnect: Mellanox 4X QSFP 40 Gb/s InfiniBand
Theoretical Performance (TFlops): 163.74
HP Linpack (TFlops): 131.9 (for the original 322 nodes, an 82.05% efficiency)
In use since: Fall 2011 (interim system), Spring 2012 (final system). Extended with 10 extra nodes in January 2014.
Named for: Abisko National Park, in the northern Swedish province of Lapland, near the Norwegian border.
Comments: Decommissioned on September 30, 2020

In addition to access to the pfs system, Abisko has local scratch space on each node (352 GB).

Architecture

Please see http://www.hpc2n.umu.se/resources/hardware/abisko/cpuarch for a description of the Abisko compute node CPU architecture.

Case studies

Please see https://www.hpc2n.umu.se/hardware/abisko/case-studies for some Abisko case studies.

The procurement of Abisko was made possible through a grant from the Swedish National Metacenter VR/SNIC and the support of Umeå University.

For more information please contact HPC2N Support.

Updated: 2024-10-10, 12:39