This example shows how to run the heat transfer/reduction pipeline with Swift. The application works as shown in the figure, with the grey callouts indicating user-controlled options for the program.

Swift starts two programs: the application (as Nh processes) and the reducer (as Nr processes), connected by the transport. As shown in the figure, it also runs a Swift control program plus (if the DataSpaces transport is used) a DataSpaces process.
- Application: The Heat Transfer program runs a heat transfer simulation for a 2-D domain on a 2-D array of processes. (Domain size and process count can be varied.) It outputs two variables, T and dT, at specified intervals.
- Reducer: The `adios_write` program applies a reduction operation to each variable and writes the uncompressed and compressed results to two files, `heat.bp` and `staged.bp`, respectively. (Each file contains both T and dT.) Three reductions are supported:
  - Identity
  - Compress via SZ, with specified accuracy
  - Compress via ZFP, with specified accuracy
- Transport: Three options are supported, as follows.
  - FLEXPATH: The only one tested so far. Description TBD
  - MPI: Description TBD
  - DataSpaces: Description TBD
As one process is needed for Swift, and one for the DataSpaces process if that transport is used, the pipeline involves Nh + Nr + 1 or Nh + Nr + 2 processes. It is conventional to supply that number as the argument to the `run-workflow.sh` command described below. (Swift does not yet provide control over the mapping of processes to cores and nodes.)
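For example, a launch script might compute the process count like this (a minimal sketch; `NH` and `NR` are illustrative variable names, not part of the workflow):

```
NH=12                 # application (heat transfer) processes
NR=3                  # reducer processes
P=$((NH + NR + 1))    # one extra process for Swift
# P=$((NH + NR + 2))  # use this form instead when the DataSpaces transport is used
echo $P               # 16 with the values above
```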
In the following, we describe in turn:

- How to install the Savanna package that you will use to run the pipeline
- How to perform a single execution of a pipeline such as that shown in the figure
- How to use the Cheetah system to configure and run a campaign involving multiple executions
We assume that Spack is installed on your system. If it is not, the following steps work on an Ubuntu Linux workstation (some elements, such as zlib* and zsh, may already be available on some systems):

```
sudo apt install git
sudo apt install curl
sudo apt install gfortran
sudo apt install bison
sudo apt install flex
sudo apt install environment-modules
sudo apt install zlib*
sudo apt install zsh
git clone https://github.com/CODARcode/spack
. spack/share/spack/setup-env.sh
spack compilers
```
Then edit `~/.spack/compilers.yaml` (on Ubuntu, at least, it is in `~/.spack/linux/compilers.yaml`) and add `/usr/bin/gfortran` for both `f77` and `fc`. It seems to help to restart your shell at this point.
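The edited compiler entry might look like the following (a sketch; the compiler spec, OS, and target fields are illustrative and will differ on your system):

```
compilers:
- compiler:
    spec: gcc@7.5.0
    operating_system: ubuntu18.04
    target: x86_64
    modules: []
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
```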
- Install Savanna via Spack and then `spack load` the required modules:

  ```
  cd spack
  git checkout -t origin/codar
  spack install savanna@develop
  spack load stc turbine adios mpich sz dataspaces
  ```
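  To confirm that the installation and loads succeeded, a quick check (a sketch; `stc` and `turbine` are the Swift/T executables made available by the `spack load` above):

  ```
  spack find savanna
  which stc turbine
  ```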
- Build the heat transfer simulator and stager:
  - Determine the location of the code (`spack find -p savanna`), say `$L`, and `cd $L/Heat_Transfer`. The following will work as long as you have only one Savanna installed:

    ```
    path2savanna=$(spack find -p savanna | grep savanna | awk '{print $2}')
    cd $path2savanna/Heat_Transfer
    ```
  - Edit `Makefile` to set `CC=mpicc` and `FC=mpif90`.
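    To do this non-interactively, something like the following should work (a sketch; it assumes the Makefile assigns `CC` and `FC` at the start of a line):

    ```
    sed -i -e 's/^CC=.*/CC=mpicc/' -e 's/^FC=.*/FC=mpif90/' Makefile
    ```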
  - Build the heat transfer simulator and stager:

    ```
    make
    cd stage_write
    make
    ```
- Run the workflow:
  - Configure the staging method:
    - To use FlexPath: edit `heat_transfer.xml` to enable the `FLEXPATH` method. (Ensure that the following text, near the end of the file, is not commented out: `<method group="heat" method="FLEXPATH">QUEUE_SIZE=4;verbose=3</method>`. All other lines starting with `<method group="heat" method…` should be commented out.)
    - To use DataSpaces: edit `heat_transfer.xml` to enable the `DATASPACES` method. (In this case the following line should not be commented out: `<method group="heat" method="DATASPACES"/>`, while all other lines starting with `<method group="heat" method…` should be commented out.)
    - To use MPI: edit `heat_transfer.xml` to enable the `MPI` method. (In this case the following line should not be commented out: `<method group="heat" method="MPI"/>`, while all other lines starting with `<method group="heat" method…` should be commented out.)
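    With FlexPath selected, for example, the method lines in `heat_transfer.xml` would look like this (a sketch assembled from the lines quoted above; the rest of the file is omitted):

    ```
    <method group="heat" method="FLEXPATH">QUEUE_SIZE=4;verbose=3</method>
    <!-- <method group="heat" method="DATASPACES"/> -->
    <!-- <method group="heat" method="MPI"/> -->
    ```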
  - Edit `run-workflow.sh` to set `LAUNCH` to point to `/dir/to/mpix-launch-swift/src`. For example, with Spack this can be done as follows:

    ```
    LAUNCH=$(spack find -p mpix-launch-swift | grep mpix-launch-swift | awk '{print $2}')/src
    ```
  - You can also edit `workflow.swift` to modify the number of processes and the problem size.
  - Then type `./run-workflow.sh P [DATASPACES|FLEXPATH|MPI]`, where P is the number of MPI processes. (With the default settings in `workflow.swift`, 12 processes are used for the simulation and 3 for the stager, requiring P = 12 + 3 + 1 for Swift = 16. If the DataSpaces transport is used, an additional process is required, for a total of 17. For a quick run, edit `workflow.swift` to use one process each for the simulation and the stager, requiring P = 1 + 1 + 1 = 3, or 4 with DataSpaces.) The second argument specifies the staging transport; if no staging transport is specified, FlexPath is used.
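    For example, using the process counts just described:

    ```
    ./run-workflow.sh 16 FLEXPATH     # default workflow.swift settings
    ./run-workflow.sh 17 DATASPACES   # one extra process for DataSpaces
    ./run-workflow.sh 16              # transport defaults to FlexPath
    ```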
Run with compression

To perform compression through ADIOS, we need to provide additional options for the stager:

- the list of variables to compress
- the compression method and compression parameters to use

These are all specified in the `arguments2` variable in `workflow.swift` (line 62).
For example, the following line requests the stager to compress the T and dT variables with the SZ method, maintaining absolute errors lower than 0.001. (The latter is an SZ-specific parameter; more details can be found in the ADIOS manual.)
```
arguments2 = split("heat.bp staged.bp FLEXPATH \"\" MPI \"\" \"T,dT\" \"SZ:accuracy=0.001\"", " ");
```
This second example does the same thing but uses the ZFP compression library:

```
arguments2 = split("heat.bp staged.bp FLEXPATH \"\" MPI \"\" \"T,dT\" \"ZFP:accuracy=0.001\"", " ");
```
We have described how to execute a single instance of the pipeline. The Cheetah system allows you to run a campaign involving a set of such executions, each with different configuration parameters.
Instructions on how to run with Cheetah are on a separate page.
These notes are for developers who want to work from git clones, not Spack.
- Install all APT packages in the Installation section above (or the equivalent on your OS)
- Download/install EVPath/FlexPath. Directions here: https://www.cc.gatech.edu/systems/projects/EVPath

  ```
  $ mkdir evpath-build
  $ cd evpath-build
  $ wget http://www.cc.gatech.edu/systems/projects/EVPath/chaos_bootstrap.pl
  $ ./chaos_bootstrap.pl -i
  ```

  Now edit `chaos_build_config` to comment out (with `%`) the BUILDLIST entries after evpath. Then:

  ```
  $ perl ./chaos_build.pl
  ```

  Note: nnti will fail; that is OK.
- Download/install ADIOS:

  ```
  $ wget http://users.nccs.gov/~pnorbert/adios-1.11.0.tar.gz
  $ tar xzf adios-1.11.0.tar.gz
  $ cd adios-1.11.0
  $ ./configure --prefix=... --with-flexpath=...
  ```

  Then put the ADIOS bin/ directory in your PATH.
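  For example (a sketch; use the directory you passed to `--prefix` above):

  ```
  $ export PATH=/path/to/adios/bin:$PATH
  ```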
- Clone/install Swift/T:

  ```
  $ git clone git@github.com:swift-lang/swift-t.git
  # or
  $ git clone https://github.com/swift-lang/swift-t.git
  ```

  Follow the directions here:

  - http://swift-lang.github.io/swift-t/guide.html#Build_configuration or
  - http://swift-lang.github.io/swift-t/guide.html#_from_source

  Then put the STC and Turbine bin/ directories in your PATH.
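  For example (a sketch; the paths depend on where Swift/T was installed):

  ```
  $ export PATH=/path/to/swift-t/stc/bin:/path/to/swift-t/turbine/bin:$PATH
  ```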
- Install the MPIX_Launch module. Follow directions here:
- Build the heat_transfer simulator code:

  ```
  make
  ```
- Build the stage_write program:

  ```
  cd stage_write
  make
  cd -
  ```
- Edit run-workflow.sh (see the sketch below):
  - Set the LAUNCH variable to the MPIX_Launch_Swift src/ directory (containing pkgIndex.tcl). This allows STC to find the module.
  - Set the EVPATH variable to the EVPath installation directory (the directory specified in chaos_bootstrap.pl). This is needed so that the ADIOS-linked programs can find EVPath at run time.
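  A minimal sketch of the two settings (paths are illustrative):

  ```
  LAUNCH=/path/to/mpix-launch-swift/src   # directory containing pkgIndex.tcl
  EVPATH=/path/to/evpath                  # EVPath installation directory
  ```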
- Run ./run-workflow.sh. It should take 20 seconds or less; if it hangs, kill it and check for error messages.
- Download/install DataSpaces. Download from: http://personal.cac.rutgers.edu/TASSL/projects/data/download.html

  ```
  ./configure --prefix=... CC=mpicc FC=mpif90
  ```

- Put the DataSpaces bin directory in your PATH
- Rebuild ADIOS: reconfigure with --with-dataspaces=…
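  For example, re-running the earlier configure line with the extra flag (prefixes elided as before):

  ```
  $ ./configure --prefix=... --with-flexpath=... --with-dataspaces=...
  $ make
  $ make install
  ```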
- If you already compiled the simulator (this directory) or stage_write, run `make clean; make` for each of them (because ADIOS was rebuilt)
- Edit `heat_transfer.xml` to enable method="DATASPACES" and disable the other methods.
- Run ./run-workflow.sh as before, but provide more processes.
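  For example, with the default workflow.swift settings discussed earlier (one extra process for the DataSpaces server):

  ```
  ./run-workflow.sh 17 DATASPACES
  ```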