Julia

Software name: 
Julia
Policy 

Julia is freely available to users at HPC2N. 

General 

Julia is a high-level, high-performance, dynamic programming language.

Description 

While Julia is a general-purpose language and can be used to write any application, many of its features are well suited for numerical analysis and computational science.

The Julia community has registered over 7,400 Julia packages for community use. These include various mathematical libraries, data manipulation tools, and packages for general-purpose computing. In addition to these, you can easily use libraries from Python, R, C/Fortran, C++, and Java.
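For example, C functions in shared libraries can be called directly with Julia's built-in ccall, without any wrapper code. A minimal sketch, calling the C standard library function clock:

t = ccall(:clock, Int32, ())   # call C's clock() directly, returning an Int32
println("CPU clock ticks: ", t)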

Availability 

On HPC2N we have Julia available as a module on Kebnekaise.

Usage at HPC2N 

To use the Julia module, add it to your environment. Use:

module spider julia

to see which versions are available.

The Julia modules do not have any prerequisites, so you can load a module directly with:

ml Julia/<version>

For example, to load Julia version 1.8.5:

ml Julia/1.8.5-linux-x86_64

You can read more about loading modules on our Accessing software with Lmod page and our Using modules (Lmod) page.

Loading the module should set any needed environment variables as well as the path.
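You can verify this from the command line, for instance:

$ ml Julia/1.8.5-linux-x86_64
$ which julia
$ julia --version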

See the Julia language documentation page for more information about the usage of Julia.

Running Julia on Kebnekaise

You should run Julia through a batch script. Here are some simple examples of Julia batch scripts:

Serial jobs

This is a simple Hello World serial code example (filename: my_julia_test.jl):

println("Hello World")

The following batch script (filename: my_submit-script.sh) can be used to submit this job to the queue:

#!/bin/bash
# Your project id
#SBATCH -A HPC2NXXXX-ZZZ
#SBATCH -n 1

# Asking for walltime (HH:MM:SS) - change to what you need, here 30 min
#SBATCH --time=00:30:00

# Split output and error files - suggestion
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

# Purge modules before you load the Julia module
ml purge > /dev/null 2>&1
ml Julia/1.8.5-linux-x86_64

# Launch your julia script
julia my_julia_test.jl

Submit the batch script with (change to the name you gave your script):

sbatch my_submit-script.sh
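You can then follow the status of your job with the standard Slurm commands, for example:

$ squeue -u $USER            # list your queued and running jobs
$ scontrol show job <jobid>  # detailed information about a specific job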

More memory (all memory in the node)

If the job uses one core (or a few) but requires a considerable amount of memory, you can request a whole node and make use of its entire memory:

#!/bin/bash
# Your project id
#SBATCH -A HPC2NXXXX-ZZZ

# Asking for a full node
#SBATCH -N 1

# Ask for all 28 cores on the node, so the job gets all of the node's memory
#SBATCH -c 28

# Asking for walltime (HH:MM:SS) - change to what you need, here 1 hour and 30 min
#SBATCH --time=01:30:00

# Split output and error files - suggestion
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

# Purge modules before you load the Julia module
ml purge > /dev/null 2>&1
ml Julia/1.8.5-linux-x86_64

# Launch your julia script
julia my_julia_test.jl

Submit the batch script with (change to the name you gave your script):

sbatch my_submit-script.sh

Parallel jobs

Julia offers a wide range of parallelization schemes. Check the documentation of the packages you are using to see whether they already provide a parallelization scheme you can use. The most common ones are listed in the following examples.

Implicit threading

Some libraries have an internal threading mechanism; the most common example is the LinearAlgebra library, which is built into the Julia installation. Here is an example that computes a matrix-matrix multiplication (filename: matmat.jl):

using LinearAlgebra
BLAS.set_num_threads(4)
A = rand(1000,1000)
B = rand(1000,1000)
res = @time A*B

Note how the number of BLAS threads is set (4 in this case). This job can be submitted with the following batch script:

#!/bin/bash
# Your project id
#SBATCH -A HPC2NXXXX-ZZZ
#SBATCH -n 1
#SBATCH -c 4      # one core per BLAS thread

# Asking for walltime (HH:MM:SS) - change to what you need, here 3 min
#SBATCH --time=00:03:00

# Split output and error files - suggestion
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

# Purge modules before you load the Julia module
ml purge > /dev/null 2>&1
ml Julia/1.8.5-linux-x86_64

# Launch your julia script
julia matmat.jl
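If you are unsure how many BLAS threads are actually in use, you can query this from within Julia (BLAS.get_num_threads is available in Julia 1.6 and later):

using LinearAlgebra
println("BLAS threads: ", BLAS.get_num_threads())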

Explicit threading (Julia threads)

Julia has its own mechanism for scheduling threads. Note that these threads are independent of the ones provided by external libraries (as in the previous example). The main idea behind threading is that threads share an address space, which allows fast access to shared memory and a potential speedup. The following example displays each thread's ID and the number of available threads (filename: threaded.jl):

using .Threads

@threads for i = 1:12
  println("I am thread ", threadid(), " out of ", nthreads()) 
end

One can run this job with the following batch script:

#!/bin/bash
# Your project id
#SBATCH -A HPC2NXXXX-ZZZ
#SBATCH -c 4      # choose nr. of cores (one per Julia thread)

# Asking for walltime (HH:MM:SS) - change to what you need, here 3 min
#SBATCH --time=00:03:00

# Split output and error files - suggestion
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

# Purge modules before you load the Julia module
ml purge > /dev/null 2>&1
ml Julia/1.8.5-linux-x86_64

# Launch your julia script and choose the nr. of threads according to the nr. of cores above
julia -t 4 threaded.jl

Notice that one can also set the number of threads with the environment variable JULIA_NUM_THREADS, but it has lower priority than the -t option mentioned above. If you want to use the environment variable option in the batch script, change the following lines:

# Launch your julia script and choose the nr. of threads according to the nr. of cores above
export JULIA_NUM_THREADS=4
julia threaded.jl
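As a slightly more realistic sketch of Julia threading, the loop below sums the elements of an array in parallel. An Atomic accumulator avoids a data race when the threads update the shared total (per-element atomics are simple but slow; per-thread partial sums are faster in practice):

using .Threads

a = rand(1_000_000)
total = Atomic{Float64}(0.0)

@threads for i in eachindex(a)
    atomic_add!(total, a[i])   # thread-safe update of the shared accumulator
end

println("sum = ", total[])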


Distributed computing

In the distributed computing scheme, each worker has its own memory space. This approach is useful if, for instance, you plan to run your code on more cores than a single node provides. A simple example that displays each worker's ID and the number of workers (filename: distributed.jl):

using Distributed

@sync @distributed for i in 1:nworkers()
  println("I am worker ", myid(), " out of ", nworkers()) 
end

and the corresponding batch script is:

#!/bin/bash
# Your project id
#SBATCH -A HPC2NXXXX-ZZZ
#SBATCH -n 4      # choose nr. of cores (one per worker)

# Asking for walltime (HH:MM:SS) - change to what you need, here 3 min
#SBATCH --time=00:03:00

# Split output and error files - suggestion
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

# Purge modules before you load the Julia module
ml purge > /dev/null 2>&1
ml Julia/1.8.5-linux-x86_64

# Launch your julia script and choose the nr. of workers according to the nr. of cores above
julia -p 4 distributed.jl
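In practice you usually want the workers to compute something rather than just print their IDs. A common pattern, sketched below, is to define a function on all workers with @everywhere and distribute the calls with pmap:

using Distributed

# Make the function available on every worker
@everywhere f(x) = sum(rand(1000) .* x)

# pmap distributes the calls over the available workers
results = pmap(f, 1:8)
println(results)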

MPI jobs

The Threads and Distributed packages described above are part of the core Julia installation, so you only need to import them in your Julia script. MPI is not built in, so you need to follow these steps before using it in your code:

# Load a toolchain that contains an MPI library
$ ml foss/2021b
# Load Julia
$ ml Julia/1.8.5-linux-x86_64
# Start Julia on the command line
$ julia
# Change to package mode and add the MPI package
(v1.8) pkg> add MPI
# Back in the julian mode (press backspace to leave pkg mode), run these commands:
julia> using MPI
julia> MPI.install_mpiexecjl()
     [ Info: Installing `mpiexecjl` to `/home/u/username/.julia/bin`...
     [ Info: Done!
# Add the installed mpiexecjl wrapper to your path on the Linux command line
$ export PATH=/home/u/username/.julia/bin:$PATH
# Now the wrapper should be available on the command line
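You can check that the wrapper is found before using it:

$ which mpiexecjl
/home/u/username/.julia/bin/mpiexecjl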

A simple example that displays the rank IDs (filename: mpi.jl):

using MPI
MPI.Init()

# Get the communicator
comm = MPI.COMM_WORLD
# Get the rank of this process
rank = MPI.Comm_rank(comm)
# Get the size of the communicator
size = MPI.Comm_size(comm)

println("I am rank ", rank, " out of ", size)

MPI.Finalize()

Here is an example of a batch script for this MPI job:

#!/bin/bash
# Your project id
#SBATCH -A HPC2NXXXX-ZZZ
#SBATCH -n 4      # choose nr. of MPI tasks (cores)

# Asking for walltime (HH:MM:SS) - change to what you need, here 3 min
#SBATCH --time=00:03:00

# Split output and error files - suggestion
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

# Purge modules before you load the Julia module
ml purge > /dev/null 2>&1
ml Julia/1.8.5-linux-x86_64
# Module containing the MPI library
ml foss/2021b 

# export the PATH of the Julia MPI wrapper
export PATH=/home/u/username/.julia/bin:$PATH

# Launch your julia script and choose the nr. of MPI ranks according to the nr. of cores above
mpiexecjl -np 4 julia mpi.jl
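MPI.jl also exposes the usual MPI operations. As a small sketch, the example above can be extended with a global reduction, where every rank contributes a value and all ranks receive the sum:

using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Every rank contributes its rank number; Allreduce returns the global sum on all ranks
total = MPI.Allreduce(rank, +, comm)
println("Rank ", rank, " sees total = ", total)

MPI.Finalize()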


GPU jobs

Julia offers good support for different types of GPUs. At HPC2N, we have different types of NVIDIA GPUs, which can be used in Julia code through the CUDA package. Follow these steps the first time you use the Julia CUDA package:

$ ml Julia/1.8.5-linux-x86_64   # Load Julia module
$ ml CUDA/11.4.1                # Load CUDA toolkit module
$ julia                         # Start Julia on the terminal and add the CUDA package
(v1.8) pkg> add CUDA 
    Updating registry at `~/.julia/registries/General.toml`
    Resolving package versions...
    Installed CEnum ───────── v0.4.2
    ...

A simple GPU code example (filename: gpu.jl):

using CUDA

CUDA.versioninfo()

N = 2^8
x = rand(N, N)
y = rand(N, N)

A = CuArray(x)
B = CuArray(y)

# Calculation on the CPU (@time also measures compilation on the first call)
@time x*y
# Calculation on the GPU
@time A*B

with the corresponding batch script:

#!/bin/bash
# Your project id
#SBATCH -A HPC2NXXXX-ZZZ
#SBATCH -n 1      # choose nr. of cores

# Asking for walltime (HH:MM:SS) - change to what you need, here 3 min
#SBATCH --time=00:03:00

# Split output and error files - suggestion
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

# 1 GPU K80 card
#SBATCH --gres=gpu:k80:1     

# Purge modules before you load the Julia module
ml purge > /dev/null 2>&1
ml Julia/1.8.5-linux-x86_64
ml CUDA/11.4.1

# Launch your julia script
julia gpu.jl

Notice that only one K80 GPU card was requested. If you request all the GPU cards in a node, the #SBATCH --exclusive directive is also needed.
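Besides array operations like the matrix product above, CUDA.jl also lets you write your own kernels with the @cuda macro. A minimal sketch (the kernel name and launch sizes are only illustrative):

using CUDA

# y .= a .* x .+ y, one thread per element
function axpy_kernel!(y, a, x)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] += a * x[i]
    end
    return nothing
end

n = 1024
x = CUDA.rand(Float32, n)
y = CUDA.rand(Float32, n)

@cuda threads=256 blocks=cld(n, 256) axpy_kernel!(y, 2.0f0, x)
synchronize()   # wait for the kernel to finish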

Additional info 

More information about Julia can be found on the Julia homepage. You can also visit the Community Discourse and the Slack channel (useful tags: #gpu #general).

Updated: 2024-11-01, 13:56