Retired Hardware

Old, Retired Hardware Systems



Abisko

  • 328 nodes/15744 cores (10 'fat', 318 'thin')
  • 4 AMD Opteron 6238 (Interlagos) 12-core 2.6 GHz processors (thin nodes)
  • 4 AMD Opteron 6344 (Abu Dhabi) 12-core 2.6 GHz processors (fat nodes)
  • 128 GB RAM/node (thin)
  • 512 GB RAM/node (fat)
  • Mellanox 4X QSFP 40 Gb/s InfiniBand
  • 163.74 TFlops (theoretical peak; see the note below)

Named for Abisko National Park in the northern Swedish province of Lapland, near the Norwegian border.

The cluster Abisko was installed in the fall of 2011 as an interim system; the final system was installed in spring 2012. It was extended with 10 extra nodes in January 2014 and retired at the end of 2020.
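
The theoretical peak figures quoted on this page follow from the usual cores × clock × FLOPs-per-cycle arithmetic. A minimal sketch, assuming 4 double-precision FLOPs per core and cycle (an assumption not stated on this page, but one that reproduces the quoted numbers):

    def theoretical_peak_tflops(cores, clock_ghz, flops_per_cycle=4):
        """Peak in TFlops = cores * clock (GHz) * FLOPs per core and cycle / 1000."""
        return cores * clock_ghz * flops_per_cycle / 1000.0

    # Assumed 4 FLOPs per core and cycle; the results match the figures quoted on this page.
    print(theoretical_peak_tflops(15744, 2.6))   # Abisko: ~163.74 TFlops
    print(theoretical_peak_tflops(5376, 2.5))    # Akka:   ~53.76 TFlops (quoted as 53.8)
    print(theoretical_peak_tflops(544, 2.66))    # Ritsem: ~5.79 TFlops  (quoted as 5.8)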


Bender

 

Retired in 2019


Akka

  • 672 nodes/5376 cores
  • 2 x Intel Xeon quad-core L5420 (2.5 GHz)
  • 16 GB RAM/node
  • Infiniband 10 Gb/s
  • 53.8 TFlops

 

Akka is a massif in the southwestern corner of Stora Sjöfallet National Park in northern Sweden.

The cluster Akka was installed in 2008 and retired in early 2016.


Ritsem
  • 68 nodes/544 cores
  • 2 x Intel Xeon quad-core E5430 (2.66 GHz)
  • 16 GB RAM/node
  • Gigabit Ethernet
  • 5.8 TFlops

It was named for the location Ritsem, near the Stora Sjöfallet National Park in northern Sweden.

Ritsem was one of the SweGrid machines. On this machine, jobs could only be run over the grid; there was no local PBS job access. It was in use from 2008 to early 2014.


Hamrinsberget
  • 2 AMD six-core Opteron 8431, 2.4 GHz
  • 32 GB shared memory, 250 GB disk
  • 1 NVIDIA GeForce GTX 285 and 1 ATI Radeon HD 5800 card

This machine was used for GPGPU testing. It is named for a small mountain near Umeå University. It was in use 2009-2013.


Sarek
  • A total of 385 processors and 1.54 TB of memory
  • 190 HP DL145 nodes, with dual AMD Opteron 248 (2.2 GHz)
  • 2 HP DL585 with dual AMD Opteron 248 (2.2 GHz)
  • 8 GB memory per node
  • Myrinet 2000 high speed interconnect

This Opteron cluster was named sarek.hpc2n.umu.se after Sarek National Park.


Ingrid
  • 100 nodes with Pentium 4 (2.8 GHz)
  • 2 GB memory per node
  • Fast Ethernet
This Linux cluster was part of the SweGrid initiative, for which HPC2N was one of six sites. Each CPU had a theoretical peak floating-point performance of 5.6 Gflops/s. Its main purpose was to serve as a grid resource within SweGrid, but part of the machine was also available for local use.
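
The 5.6 Gflops/s per-CPU figure is consistent with the same peak arithmetic, under the assumption (ours, not stated on the page) that a Pentium 4 retires 2 double-precision FLOPs per cycle via SSE2:

    clock_ghz = 2.8
    flops_per_cycle = 2        # assumed: SSE2, double precision
    nodes = 100

    per_cpu_gflops = clock_ghz * flops_per_cycle   # 5.6 Gflops/s, as quoted above
    cluster_gflops = per_cpu_gflops * nodes        # roughly 560 Gflops/s for the whole cluster
    print(per_cpu_gflops, cluster_gflops)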


TLB

  • One L22 library manager rack housing tapes, tape drives, and control equipment
  • One D12 drive rack only housing tapes
  • A capacity of 550 tape slots
  • Three 3592 tape drives capable of storing 300 GB of uncompressed data per tape at a sustained transfer speed of 40 MB/s
This was the previous tape storage facility at HPC2N. It was based on an IBM 3494 tape library with IBM 3592 tape drives.
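
For a rough sense of scale, the slot count and per-cartridge capacity above bound what the library could hold; this is our arithmetic, not a figure from the page, and it ignores compression:

    slots = 550          # tape slots in the library
    gb_per_tape = 300    # uncompressed capacity per 3592 cartridge
    total_tb = slots * gb_per_tape / 1000
    print(total_tb)      # at most 165.0 TB uncompressed
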
Seth
  • A total of 240 processors and 120 GB of memory
  • 120 nodes, dual Athlon MP2000+ (1.667 GHz)
  • 1 GB memory per node
  • Wulfkit3 SCI high speed interconnect
This HPC2N Super Cluster was named seth.hpc2n.umu.se after Seth Kempe, grandfather of Carl Kempe. Carl Kempe was at that time the chairman of the Kempe Foundations, which generously provided HPC2N with the means to purchase the cluster.


Knut
  • 64 Compute nodes
    • 1 120 MHz Power2SC Processor
    • 62 nodes with 128 MB Memory
    • 1 node with 256 MB Memory
    • 1 node with 1024 MB Memory
    • 1 IBM DFHSS4W, 4 GB
    • 1 IBM Switch Communications Adapter
  • 2 Compute nodes
    • 1 160 MHz Power2SC Processor
    • 256 MB Memory
    • 1 IBM DFHSS4W, 4GB
    • 1 IBM Switch Communications Adapter
  • 2 SMP nodes (1 login, 1 server)
    • 4 112 MHz PowerPC 604 Processors
    • 256 MB Memory
    • 3 IBM DFHSS2F, 2 GB
    • 1 IBM 10/100 Mbps NIC
    • 1 IBM Switch Communications Adapter
Knut was an IBM SP system up and running for the first time on January 12, 1997.
The acquisition was made possible through a generous donation from the Knut and Alice Wallenberg Foundation.

At the time, Knut was the most powerful computer in Sweden and also the most powerful IBM system in Europe. It quickly became a very stable production resource and remained the main computing system at HPC2N for several years. In fact, Knut was available to users from all over Sweden for a period of seven years (until December 2003), and was later also used for algorithm development, testing, and evaluation. During most of this seven-year period, system utilization was at the level of 75-80%, 24 hours a day, seven days a week.

The 68-node IBM Scalable POWERparallel (SP) system consisted of 64 120 MHz Power2SC nodes, two 160 MHz Power2SC nodes, and two 4-way SMP 112 MHz PowerPC 604 nodes. These were connected with an IBM High Performance Switch.


Flipper
  • One Front-End Control Work Station
    • 1 Midi tower
    • 2 PIII 550 MHz Processors
    • 512 MB Memory
    • 1 FUJITSU MPE3064AT, 6187MB
    • 1 IBM-DTLA-307060, 58644MB
    • 1 floppy unit
    • 1 CD-ROM unit
    • 2 Dlink 10/100 NIC
    • ATI 3D charge Gfx card
  • 8 Compute nodes
    • Midi tower
    • 2 PIII 550 MHz Processors
    • 512 MB Memory
    • 1 FUJITSU MPE3064AT, 6187MB
    • 1 floppy unit
    • 1 Dlink 10/100 NIC
    • 1 Wulfkit SCI
    • ATI 3D charge Gfx card
Flipper was a so-called Beowulf cluster, basically 'a pile of PCs'. It was built as part of a master's thesis by Fredrik Augustsson in the summer and fall of 2000, with Åke Sandgren and Björn Torkelsson as supervisors.

Flipper consisted of 9 nodes, each with dual Intel Pentium III 550 MHz processors. The nodes were connected with a switchless high-capacity network from Dolphin ICS. For maintenance, the cluster was also connected with Fast Ethernet.

The cluster was set up in a 2x4 grid with a Control Work Station (CWS) connected to the 8 nodes via a 100 Mbit Ethernet switch. The CWS was used as a login and development node for users, and as an administration node for the system administrators.

Flipper was mainly intended to be an experimental cluster.


Alice
  • 10 MIPS R10000 195 MHz Processors
  • 3072 MB Main Memory
  • 2 InfiniteReality2 graphics pipes
  • 64 MB Texture memory per pipe
  • 4 Raster Managers (320 MB Frame Buffer) per pipe
  • HIPPI interface
  • DIVO - Digital Video Option
  • 64 GB Disk
  • 2 24" SuperWide monitors
Alice was an SGI Onyx2 with Infinite Reality graphics. The acquisition in 1997 was made possible through a generous donation from the Knut and Alice Wallenberg Foundation. Alice was the main computing facility for scientific visualization and VR applications, but was also used for some computational purposes.

With its two large 24" monitors located in Wonderland and with most of the VR equipment connected to Alice, she was part of almost every visualization and VR project at HPC2N.

If you have any questions about Alice, please send an email to: vrlab@umu.se


Chips
  • 4 375 MHz Power3 Processors
  • 4096 MB Memory
  • 1 IBM DPSS-309170N, 9GB
  • 1 IBM 10/100 Mbps NIC
Chips was an IBM Power3 based SMP system. It was our main facility for low/medium level compute kernel development in cooperation with IBM.


Fish

  • 2 200 MHz Power3 Processors
  • 256 MB Memory
  • 2 IBM DDRS-34560W, 4GB
  • 1 IBM 10/100 Mbps NIC
Fish was an IBM Power3 based SMP system. It was our main facility for low/medium level compute kernel development in cooperation with IBM.



Donald

  • 1 MIPS R10000 195 MHz Processor
  • 256 MB Main Memory
  • MXI Graphics
  • 4 MB Texture Memory
  • 8 GB Disk
Software:
  • IRIX 6.5.9f
  • C/C++/F77/F99
  • Performer
  • More software
  • Additional software installed on request

Donald was an SGI Octane MXI, suitable for smaller projects that did not need the power of Alice.

If you have any questions about Donald, please send an email to: vrlab@umu.se

MSST
Tape robot
  • 1 3494-L12 Library Unit
  • 2 3590E tape units
The mass storage system was based on equipment from IBM and consisted of one Magstar 3494 tape library with 2 Magstar 3590 tape units. These were connected to the server node in Knut. Accompanying this was a disk system consisting of an IBM SSA subsystem.

Access to the mass storage was provided via an HSM filesystem. There was also a fast parallel filesystem, PFS, available on Knut.

If you have any questions about the Mass Storage facilities, please send an email to: support@hpc2n.umu.se

 

Updated: 2024-11-01, 13:56