WRF

Software name: 
WRF
Policy 

WRF is available to all users at HPC2N.

General 

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.

Description 

WRF features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.

Availability 

At HPC2N we have WRF installed as a module on Kebnekaise.

Usage at HPC2N 

The binaries of WRF/WPS are available through the module system.

To access them you need to load the module on the command line and/or in the submit file. Use:

ml spider wrf

and

ml spider wps

to see which versions are available and how to load the module and its dependencies.
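
For example, to see the exact prerequisites for one of the installed versions (here the 3.8.0 dmpar build used in the example further down):

ml spider WRF/3.8.0-dmpar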

WRF/WPS 3.8.0 is built with the Intel compilers and Intel MPI, with support for both MPI and OpenMP parallelism.

WRF 3.9.1.1 is built with the GCC compilers and OpenMPI. Both serial and parallel versions exist - run ml spider wrf to check!
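
A sketch of loading the parallel (dmpar) 3.9.1.1 build; the toolchain module names and versions below are assumptions, so check the output of ml spider wrf for the exact names on the system:

# NOTE: the GCC and OpenMPI versions here are illustrative only
ml GCC/6.4.0-2.28  OpenMPI/2.1.2
ml WRF/3.9.1.1-dmpar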

Example: loading WRF version 3.8.0-dmpar

ml icc/2017.1.132-GCC-6.3.0-2.27
ml ifort/2017.1.132-GCC-6.3.0-2.27 impi/2017.1.132
ml WRF/3.8.0-dmpar

You can read more about loading modules on our Accessing software with Lmod page and our Using modules (Lmod) page.

The name of the wrf binary is wrf.exe and it is built with both MPI and OpenMP.

If that is not sufficient, please contact support@hpc2n.umu.se with details of what you need and we will see if we can build it.

All other WRF binaries are available as normal serial versions.

The input tables are located under /pfs/data/wrf/geog
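
WPS reads this path from the geog_data_path variable in the &geogrid section of namelist.wps. A minimal sketch of that entry (the rest of the namelist is omitted):

&geogrid
 geog_data_path = '/pfs/data/wrf/geog'
/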

The Vtables are located in $EBROOTWPS/WPS/ungrib/Variable_Tables (the environment variable is only available after the WPS module has been loaded).
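
For example, to make ungrib.exe use the GFS Vtable, link it into your case directory under the name Vtable (pick the table that matches your input data):

ln -sf $EBROOTWPS/WPS/ungrib/Variable_Tables/Vtable.GFS Vtable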

Files in $EBROOTWRF/WRFV3/run may need to be copied or linked to your case directory if the program complains about missing files. 
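
A sketch of linking them in (the case directory path is just a placeholder; keep your own namelist.input rather than the linked default):

cd /path/to/your/case       # placeholder for your own case directory
ln -sf $EBROOTWRF/WRFV3/run/* .
rm namelist.input           # remove the linked default before adding your own namelist.input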

Submitfile examples

Since wrf.exe is built as a combined OpenMP/MPI binary, special care must be taken in the submitfile.

#!/bin/bash
# Request 2 nodes exclusively
#SBATCH -N 2
#SBATCH --exclusive
# We want to run OpenMP within one NUMA unit (the cores that share a memory channel).
# On Kebnekaise a NUMA unit has 14 cores. Change -c accordingly.
#SBATCH -c 14
# Slurm will then figure out the correct number of MPI tasks available
#SBATCH --time=6:00:00 

# WRF version 3.8.0 
ml icc/2017.1.132-GCC-6.3.0-2.27 
ml ifort/2017.1.132-GCC-6.3.0-2.27 impi/2017.1.132 
ml WRF/3.8.0-dmpar

# Set OMP_NUM_THREADS to the same value as -c, i.e. 14
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# --cpu_bind=rank_ldom binds one MPI task, together with its 14 OpenMP threads, to each
# NUMA unit. It is only possible if --exclusive was used above.
srun --cpu_bind=rank_ldom wrf.exe 
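
Save the script as, for example, run_wrf.sh (the file name is just an example) and submit it from your case directory:

sbatch run_wrf.sh
squeue -u $USER    # check that the job is queued or running
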
Additional info 

Documentation is available on the WRF homepage and the WRF model information page.

Updated: 2024-11-01, 13:56