OpenFOAM containers at Pawsey: Glossary

Key Points

Basic information for OpenFOAM containers at Pawsey
  • A Singularity image is a file that can be stored anywhere, but we recommend agreeing on a defined storage “policy” within your group

  • OpenFOAM images maintained by Pawsey are stored at /group/singularity/pawseyRepository/OpenFOAM/ (see the example after this list)

  • Containers with MPI applications need to be equipped with MPICH in order to run on the Crays
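
  A minimal sketch of how a job script might refer to one of the maintained images (the module name and the image file name below are only illustrative; list the repository directory to see what is actually available):

      # Load the Singularity module (exact module name may differ)
      module load singularity
      # List the OpenFOAM images maintained by Pawsey
      ls /group/singularity/pawseyRepository/OpenFOAM/
      # Pick one of them (this file name is only an example)
      theImage=/group/singularity/pawseyRepository/OpenFOAM/openfoam-v1912-pawsey.sif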

Executing the full workflow of a case
  • Use singularity exec $image <OpenFOAM-Tool> <Tool-Options> to run containerised OpenFOAM tools (see the example after this list)

  • Pre- and post-processing tools are usually single-threaded and should be executed on Zeus

  • Always use the recommended Pawsey Best Practices for OpenFOAM

  • The most recent versions of OpenFOAM are not installed system-wide on Pawsey’s supercomputers, but are available via Singularity containers
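
  A minimal sketch of the full workflow, reusing the illustrative $theImage path from above and assuming a hypothetical case decomposed into 4 subdomains (tool names, task count and where each step runs are only examples):

      # Pre-processing (single-threaded), e.g. on Zeus
      singularity exec $theImage blockMesh
      singularity exec $theImage decomposePar
      # Parallel solver run, e.g. within a Slurm job on the Cray
      srun -n 4 singularity exec $theImage pimpleFoam -parallel
      # Post-processing (single-threaded), e.g. back on Zeus
      singularity exec $theImage reconstructPar -latestTime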

Compile and execute user's own tools
  • Define a host directory that will play the role of WM_PROJECT_USER_DIR

  • For example, projectUserDir=./anyDirectory

  • Then bind that directory to the path defined inside the container for WM_PROJECT_USER_DIR

  • For this exercise, singularity exec -B $projectUserDir:/home/ofuser/OpenFOAM/ofuser-v1912 $theImage <mySolver> <myOptions>
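
  A minimal sketch of compiling and then running a user's own solver this way (the image path and the solver location under $projectUserDir are illustrative, and it is assumed that the container makes the OpenFOAM environment available to exec'd commands; if not, source the installation's etc/bashrc first):

      projectUserDir=./anyDirectory
      theImage=/group/singularity/pawseyRepository/OpenFOAM/openfoam-v1912-pawsey.sif  # illustrative
      # Compile the solver kept under $projectUserDir/applications/solvers/mySolver
      singularity exec -B $projectUserDir:/home/ofuser/OpenFOAM/ofuser-v1912 $theImage \
          bash -c 'cd $WM_PROJECT_USER_DIR/applications/solvers/mySolver && wmake'
      # Run the freshly compiled solver on the case in the current directory
      singularity exec -B $projectUserDir:/home/ofuser/OpenFOAM/ofuser-v1912 $theImage mySolver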

Building an OpenFOAM container with MPICH
  • Take it easy, be patient

  • Use existing definition file examples and the available guides to define the right installation recipe

  • The main differences from a standard OpenFOAM installation guide are (see the sketch after this list):

  • 1. Avoid any step that performs an installation of OpenMPI

  • 2. Settings in prefs.sh for using WM_MPLIB=SYSTEMMPI (MPICH in this case)

  • 3. Settings in bashrc for defining the new location for the installation

  • 4. Settings in bashrc for defining WM_PROJECT_USER_DIR
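
  As a sketch, the relevant fragment of the %post section of a definition file could look like this (the installation path, OpenFOAM version and MPICH prefix are illustrative; the MPI_* variables are the ones OpenFOAM reads when WM_MPLIB=SYSTEMMPI):

      # Assumes MPICH is already installed under /usr inside the container
      cd /opt/OpenFOAM/OpenFOAM-v1912
      # 1.-2. No ThirdParty OpenMPI build: point OpenFOAM to the system MPICH
      echo 'export WM_MPLIB=SYSTEMMPI'                             >  etc/prefs.sh
      echo 'export MPI_ROOT="/usr"'                                >> etc/prefs.sh
      echo 'export MPI_ARCH_FLAGS="-DMPICH_SKIP_MPICXX"'           >> etc/prefs.sh
      echo 'export MPI_ARCH_INC="-isystem ${MPI_ROOT}/include"'    >> etc/prefs.sh
      echo 'export MPI_ARCH_LIBS="-L${MPI_ROOT}/lib -lmpich -lrt"' >> etc/prefs.sh
      # 3.-4. Adjust etc/bashrc for the installation location and the in-container
      #       WM_PROJECT_USER_DIR (the exact edits depend on the OpenFOAM version)
      sed -i 's,^export WM_PROJECT_USER_DIR=.*,export WM_PROJECT_USER_DIR="/home/ofuser/OpenFOAM/ofuser-v1912",' etc/bashrc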

Use OverlayFS to reduce the number of result files
  • Singularity can deal with an OverlayFS, but only one OverlayFS can be mounted per container instance

  • As each core (MPI rank) writes its results to a single processor* directory, this works for saving those results inside the corresponding overlay* file

  • Unfortunately, the reconstructPar tool cannot read results from several overlay* files at the same time. Therefore, decomposed results must be copied back to the host file system before reconstruction.

  • The last point may seem like a deal breaker, but extraction and reconstruction may be performed in small batches, avoiding the appearance of many files at the same time in the host file system.

  • Here the batch size is 1 (just a single result-time), but the following episode deals with larger batches
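
  A hedged sketch of the idea, reusing the illustrative $theImage from above (overlay count and size are examples, the solver name is only a placeholder, and the way the writable inner directories are prepared follows the episode's own recipe):

      mkdir -p overlayFSDir
      # Create one empty ext3 file per MPI rank to act as its private OverlayFS
      for i in 0 1 2 3; do
          dd if=/dev/zero of=overlayFSDir/overlay${i} bs=1M count=1024
          mkfs.ext3 -F overlayFSDir/overlay${i}
      done
      # Conceptually, rank i then runs the solver with its own overlay mounted,
      # so its processor${i} results are written inside overlay${i}:
      #   singularity exec --overlay overlayFSDir/overlay${i} $theImage pimpleFoam -parallel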

Advanced scripts for postprocessing with OverlayFS
  • No, unfortunately a container cannot mount more than 1 OverlayFS file at the same time

  • Yes, this implies that the results need to be copied back to the host file system before reconstruction

  • In order to avoid the presence of many files in the host, this should be done in small batches:

  • 1. Copy a small batch of results from the interior of the ./overlayFSDir/overlay* files towards the ./bakDir/bak.processor* directories in the host file system

  • 2. Now create processor* soft links pointing to the ./bakDir/bak.processor* directories and not to the directory structure inside the OverlayFS files

  • 3. Reconstruct that small batch

  • 4. Remove the decomposed result-times from the ./bakDir/bak.processor* directories. Only the fully reconstructed result-times are kept in the host, and the original decomposed results are only kept inside the OverlayFS files.

  • 5. Continue the cycle until all the result-times needed have been postprocessed
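
  A hedged sketch of one iteration of this cycle for a single result-time (the number of processors, the time value and the case path inside the overlays are placeholders, and it assumes the bakDir/bak.processor* directories already contain the constant/polyMesh data that reconstructPar needs):

      insideDir=/path/to/case/inside/overlay   # placeholder: where the solver wrote results inside each overlay
      time=10                                  # illustrative result-time
      nProcs=4                                 # illustrative decomposition
      for i in $(seq 0 $((nProcs - 1))); do
          # 1. Copy this result-time out of overlay${i} into the host-side bak.processor${i}
          singularity exec --overlay overlayFSDir/overlay${i} $theImage \
              cp -r ${insideDir}/processor${i}/${time} bakDir/bak.processor${i}/
          # 2. Point the processor${i} soft link at the host-side copy
          ln -sfn bakDir/bak.processor${i} processor${i}
      done
      # 3. Reconstruct just this result-time (reads the host copies through the links)
      singularity exec $theImage reconstructPar -time ${time}
      # 4. Remove the decomposed result-time from the host; it is still kept inside the overlays
      rm -rf bakDir/bak.processor*/${time}
      # 5. Repeat with the next result-time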

Glossary

FIXME