A Permanent Head Damage Student in Colorful Fluid Dynamics

OpenFOAM Job Submission on TACC Lonestar 5

OpenFOAM case structure

The typical structure of an OpenFOAM simulation case is shown below:

$case                    case root directory
├─constant               polyhedral mesh and transport properties
│ ├─polyMesh
│ ├─transportProperties
│ └─...
├─0                      initial and boundary conditions
│ ├─p
│ ├─U
│ └─...
└─system                 configurations of time, I/O, flow and sparse linear system solving
  ├─controlDict          time and I/O controls
  ├─fvSchemes            FVM operator schemes
  ├─fvSolution           flow and sparse linear system solving algorithms
  └─...
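
A quick way to obtain a valid case skeleton is to copy one of the tutorial cases shipped with OpenFOAM. The sketch below assumes the OpenFOAM environment has already been loaded (so that $FOAM_TUTORIALS is set); the exact tutorial path may differ between OpenFOAM versions.

# Copy the lid-driven cavity tutorial as a starting point
# (path is version-dependent; check $FOAM_TUTORIALS on your system)
cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity $WORK/yourCase
ls $WORK/yourCase   # should list 0/, constant/ and system/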


Slurm job script

Slurm’s sbatch command is used for job submission on LS5:

sbatch $yourJobScriptFileName
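
After submission, a couple of standard Slurm commands are useful for monitoring:

# Check the status of your queued and running jobs
squeue -u $USER

# Cancel a job if needed ($jobId is printed by sbatch and listed by squeue)
scancel $jobId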

Serial job

The job script below requests a serial job and 48 hours of run time in the normal queue:

#!/bin/bash
#----------------------------------------------------
# SLURM job script to run applications on
# TACC's Lonestar 5 system.
#
# A serial OpenFOAM case
#----------------------------------------------------

#SBATCH -J yourJobName # Job name
#SBATCH -o yourJobName_%j.out # Name of stdout output file (%j expands to jobId)
#SBATCH -e yourJobName_%j.err # Name of stderr output file (%j expands to jobId)
#SBATCH -p normal # Queue name
#SBATCH -N 1 # Total number of nodes requested
#SBATCH -n 1 # Total number of mpi tasks requested
#SBATCH -t 48:00:00 # Run time (hh:mm:ss) - 48 hours (maximum)

# Slurm email notifications are now working on Lonestar 5
#SBATCH --mail-user=yourEmailAddress
#SBATCH --mail-type=all

# Launch the executable flow solver based on OpenFOAM
$flowSolverName -case $caseRootDir

# $flowSolverName ==> the OpenFOAM flow solver name
# $caseRootDir ==> the absolute (full) path of your root case directory
# For example, if you want to run 'interFoam' solver and your case is located in
# '$WORK/aa/bb/cc/yourCase', the command above should be
# interFoam -case $WORK/aa/bb/cc/yourCase
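
Note that the script above assumes the OpenFOAM environment is already available when the solver starts. If it is not set up in your login files, source it inside the job script before the solver line; in this sketch, $yourOpenFOAMInstallDir is a placeholder for your installation path:

# Set up the OpenFOAM environment inside the job script
source $yourOpenFOAMInstallDir/etc/bashrc
interFoam -case $WORK/aa/bb/cc/yourCase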

Parallel job

The job script below requests a parallel job with 8 MPI tasks on 1 node and 48 hours of run time in the normal queue:

#!/bin/bash
#----------------------------------------------------
# SLURM job script to run applications on
# TACC's Lonestar 5 system.
#
# A parallel OpenFOAM case
#----------------------------------------------------

#SBATCH -J yourJobName # Job name
#SBATCH -o yourJobName_%j.out # Name of stdout output file (%j expands to jobId)
#SBATCH -e yourJobName_%j.err # Name of stderr output file (%j expands to jobId)
#SBATCH -p normal # Queue name
#SBATCH -N 1 # Total number of nodes requested
#SBATCH -n 8 # Total number of mpi tasks requested
#SBATCH -t 48:00:00 # Run time (hh:mm:ss) - 48 hours (maximum)

# Slurm email notifications are now working on Lonestar 5
#SBATCH --mail-user=yourEmailAddress
#SBATCH --mail-type=all

# Launch the executable flow solver based on OpenFOAM
# On TACC systems, 'ibrun' replaces the usual 'mpirun' launcher
ibrun -np 8 $flowSolverName -parallel -case $caseRootDir

# $flowSolverName ==> the OpenFOAM flow solver name
# $caseRootDir ==> the absolute (full) path of your root case directory
# For example, if you want to run 'interFoam' solver and your case is located in
# '$WORK/aa/bb/cc/yourCase', the command above should be
# ibrun -np 8 interFoam -parallel -case $WORK/aa/bb/cc/yourCase
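
Before submitting the parallel job, the case must be decomposed into one subdomain per MPI task, since the '-parallel' option expects processor0 ... processor7 directories to exist. A minimal sketch, assuming system/decomposeParDict requests as many subdomains as MPI tasks (-n):

# system/decomposeParDict should contain, e.g.:
#   numberOfSubdomains 8;
#   method             scotch;

# Decompose the case before submitting the job
decomposePar -case $caseRootDir

# After the run finishes, merge the per-processor results
reconstructPar -case $caseRootDir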

It should be noted that each LS5 compute node has 24 cores in total; once the total number of MPI tasks (-n) exceeds 24, more compute nodes (-N) must be requested.
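
For example, a 48-task run needs at least 2 nodes; mirroring the parallel script above (same placeholders), the relevant lines would become:

#SBATCH -N 2 # Total number of nodes requested (48 tasks / 24 cores per node)
#SBATCH -n 48 # Total number of mpi tasks requested

ibrun -np 48 $flowSolverName -parallel -case $caseRootDir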


This post is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License.
