We list some basic steps for beginner users of the code:
- First, ensure that the prerequisites listed here are fulfilled on the machine where you are going to run the code (laptop, workstation or supercomputer);
- Clone or fork the code. To fork it, you can follow the explanation provided here. To clone it, simply type on the command line: `git clone https://github.com/Multiphysics-Flow-Solvers/FluTAS.git`;
- Go inside the `src/` directory of FluTAS, i.e., `cd src/`. Now you are ready to compile the code.
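In summary, these first steps can be carried out as follows (a minimal sketch, assuming the repository is cloned into the current directory):

```bash
# Get the code and move into the source directory
git clone https://github.com/Multiphysics-Flow-Solvers/FluTAS.git
cd FluTAS/src/
```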
The compilation step depends on which application is chosen and on which architecture the user is working. Currently, the code supports the following applications:
- `single_phase`: single-phase, incompressible flow, optionally with heat transfer effects. To create the executable for this application, type on the command line: `make clean APP=single_phase && make ARCH=generic-gnu APP=single_phase DO_DBG=0 -j4`
- `two_phase_inc_isot`: two-phase, incompressible and isothermal flow. To create the executable for this application, type on the command line: `make clean APP=two_phase_inc_isot && make ARCH=generic-gnu APP=two_phase_inc_isot DO_DBG=0 -j4`
- `two_phase_ht`: two-phase, incompressible flow with heat transfer. To create the executable for this application, type on the command line: `make clean APP=two_phase_ht && make ARCH=generic-gnu APP=two_phase_ht DO_DBG=0 -j4`
Note that in the above examples we have compiled the code in parallel (the option `-j4` runs the compilation with 4 parallel jobs). Moreover, we employ the GNU compiler to build the code. Other compilation options can be found in the `targets` folder, where INTEL, NVF and CRAY targets are available and can be selected simply by changing the `ARCH` argument above. For code development and extension, we recommend compiling FluTAS in debugging mode, i.e., setting `DO_DBG=1`. If no application and/or target is chosen, or if the chosen ones are misspelled, `two_phase_inc_isot` is built by default using the GNU compiler, i.e., `target.generic-gnu`.
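For example, the default application can be rebuilt in debugging mode (the recommended setting for code development) as follows:

```bash
# Rebuild two_phase_inc_isot with debugging enabled (DO_DBG=1)
make clean APP=two_phase_inc_isot
make ARCH=generic-gnu APP=two_phase_inc_isot DO_DBG=1 -j4
```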
Once the compilation has been successfully performed, the next step is to fill in the input files, i.e., the files with the extension `*.in`. Several examples of input files for different canonical flows are provided in the `examples` folder. For a correct run of FluTAS, the input files must be placed in the same directory as the executable. We recommend reading the document INFO_INPUT for a detailed description of how the input files should be filled in.
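As a minimal sketch (the example and run-directory names below are placeholders), the input files of one of the bundled examples can be copied into the directory where FluTAS will be run:

```bash
# Copy the input files of a bundled example next to the executable
# (placeholders: <YOUR_EXAMPLE>, <YOUR_RUN_DIRECTORY>)
cp examples/<YOUR_EXAMPLE>/*.in <YOUR_RUN_DIRECTORY>/
```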
Once the code is successfully compiled and the input files properly filled in, FluTAS can be run. We recommend the following steps:
- Create a run folder outside the `src` directory, i.e., `mkdir run`. On your workstation any location is fine, while on clusters we recommend cloning and compiling the code in your home directory (typically backed up but with limited storage) and placing the executables (both `flutas` and `flutas.<YOUR_APPLICATION>`) together with the input files in your project folder (typically not backed up but with more storage);
- On the command line (or in a submission script), type (or write) `mpirun -n NP flutas`, where `NP` is the number of processors or GPUs prescribed in `dns.in`, i.e., the product of the components of `dims_in` (set in the input file `dns.in`, see INFO_INPUT). Note that for different compilation choices (e.g., INTEL, CRAY), the command `mpirun` may need to be adjusted depending on the environment you are working in and the compilation options you have chosen. A minimal sketch of a possible submission script is given after this list.
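As an illustration, a minimal SLURM-style submission script might look like the sketch below, assuming 4 MPI tasks (e.g., `dims_in` set to 2 x 2 in `dns.in`); the scheduler directives, module loads and MPI launcher are assumptions to be adapted to your cluster and compilation choices:

```bash
#!/bin/bash
# Minimal sketch of a submission script (adapt to your scheduler and system)
#SBATCH --job-name=flutas
#SBATCH --nodes=1
#SBATCH --ntasks=4        # must equal the product of the components of dims_in
#SBATCH --time=01:00:00

# run from the folder that contains the executable and all the *.in files
mpirun -n 4 ./flutas
```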
The data generated during the execution of the program are stored in the directory `data`, automatically created by FluTAS in the same location as the executable. Before running the simulation, it is important to decide what to print. To this end, the files `out1d.h90`, `out2d.h90` and `out3d.h90` in `src/apps/<YOUR_APPLICATION>/postp.<YOUR_APPLICATION>/` set which data are written to 1-, 2- and 3-dimensional output files, respectively. Replace `<YOUR_APPLICATION>` with the name of your application. Note that the code must be recompiled after editing the `out*d.h90` files. The generated binary files can be read and visualized following the instructions reported here.
The code is compiled using different preprocessor flags, which also control which modules and subroutines are employed. Flags shared among different applications are set in the Makefile, while flags specific to a certain application are typically placed in `src/apps/<YOUR_APPLICATION>/apps.<YOUR_APPLICATION>/`. As usual, remember to replace `<YOUR_APPLICATION>` with the name of your application.
Currently, the following preprocessor flags can be used:
- `-D_USE_CUDA`: enabled by setting `USE_CUDA=1`. It uses the variable `CUDA_LIB` to locate the CUDA libraries (please adapt it to your system). Also, remember to adapt the compilation flags to your system (currently `-gpu=cc70,cuda11.0`).
- `-D_USE_NVTX`: enables NVTX profiling. Please adapt the variable `LIB_NVTX` to your system.
- `-D_CONSTANT_COEFFS_POISSON`: enabled by setting `CONSTANT_COEFFS_POISSON=1`. Please keep it set to 1; future developments will include other variable-coefficient Poisson solvers.
- `-D_USE_VOF`: enables the VoF module, set with `USE_VOF=1`. Reminder: it forces the usage of the `vof.in` input file (see below).
- `-D_VOF_DBG`: skips the flow solution and imposes pure advection at constant velocity, for VoF debugging purposes.
- `-D_HEAT_TRANSFER`: enables the computation of the energy equation, set with `HEAT_TRANSFER=1`. Reminder: it forces the usage of the `heat_transfer.in` input file (see below).
- `-D_DO_POSTPROC`: enabled with `DO_POSTPROC=1`, allows for the computation of statistics and data extraction on the fly. Reminder: it forces the usage of the `post.in` input file (see below).
- `-D_TURB_FORCING`: enabled with `TURB_FORCING=1`, enables external turbulence forcing, either with the ABC or the TGV method. Reminder: it forces the usage of the `forcing.in` input file (see below).
- `-D_TIMING`: enables timestep timing.
- `-D_TWOD`: enables the computation of 2D cases. The x direction is not computed and should be set with periodic boundaries and 2 grid points.
- `-D_BOUSSINESQ`: enables the solution of the heat transfer equation using the Boussinesq approximation in the gas phase.
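As a sketch, some of these switches could be combined in a single build command as below; whether they can be passed directly on the make command line or must instead be set in the Makefile or in the app-specific file may depend on your setup:

```bash
# A sketch: build two_phase_ht with VoF, heat transfer and the
# constant-coefficient Poisson solver enabled
make clean APP=two_phase_ht
make ARCH=generic-gnu APP=two_phase_ht DO_DBG=0 \
     CONSTANT_COEFFS_POISSON=1 USE_VOF=1 HEAT_TRANSFER=1 -j4
```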
We are happy if researchers choose FluTAS as a base solver for their applications. For further developments, we provide the following recommendations:
- Clone or fork the repository (private or public, depending on the access you have been granted);
- Work on a branch separate from the master or main one. To do so, type on the command line `git checkout -b "<NAME_OF_YOUR_BRANCH>"`. Choose a meaningful name for your branch and work on it;
- You can use the already available applications or create new ones. In the latter case, we recommend following the already available templates;
- If you plan to create new applications:
  - create a new directory with the name of your app in the `apps` folder, i.e., `cp -r "<NAME_OF_AN_EXISTING_APP>" "<NAME_OF_YOUR_NEW_APP>"`. As the existing app, choose the one closest to your expected needs;
  - adjust the names of the existing files and folders inside the new directory. You can type in the terminal: `mv main__"<NAME_OF_AN_EXISTING_APP>".f90 main__"<NAME_OF_YOUR_NEW_APP>".f90`, `mv post."<NAME_OF_AN_EXISTING_APP>" post."<NAME_OF_YOUR_NEW_APP>"` and `mv app."<NAME_OF_AN_EXISTING_APP>" app."<NAME_OF_YOUR_NEW_APP>"`;
  - open the file `app."<NAME_OF_YOUR_NEW_APP>"`, e.g., `vi app."<NAME_OF_YOUR_NEW_APP>"`, and replace the name of the existing `main__"<NAME_OF_AN_EXISTING_APP>".f90` with `main__"<NAME_OF_YOUR_NEW_APP>".f90`. In the same file there is also a list of preprocessor flags specific to this app; adjust them or add new ones depending on your needs;
  - modify the descriptions and comments inside `param.f90` and `main__"<NAME_OF_YOUR_NEW_APP>".f90`;
  - try to create an executable for the new application to test whether everything has been done properly: `make clean APP=<NAME_OF_YOUR_NEW_APP> && make ARCH=generic-gnu APP=<NAME_OF_YOUR_NEW_APP> DO_DBG=0 -j4`. The executable `flutas."<NAME_OF_YOUR_NEW_APP>"` will be created in the directory `src/`. A condensed sketch of this sequence is given below.
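A condensed sketch of the sequence above, using the hypothetical name my_new_app for the new application and two_phase_ht as the starting point (file and folder names follow the steps above; adapt them to your case):

```bash
# Hypothetical example: create my_new_app starting from two_phase_ht
cd src/apps
cp -r two_phase_ht my_new_app
cd my_new_app
mv main__two_phase_ht.f90 main__my_new_app.f90
mv post.two_phase_ht post.my_new_app
mv app.two_phase_ht app.my_new_app
# edit app.my_new_app (main file name and preprocessor flags), param.f90 and
# main__my_new_app.f90, then try to build the new application from src/
cd ../..
make clean APP=my_new_app && make ARCH=generic-gnu APP=my_new_app DO_DBG=0 -j4
```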
- Once the new application is created, the user can start modifying the source code. Many coding styles are possible, but a modular and sustainable programming practice is strongly encouraged, i.e.:
  - Keep the subroutines as pure as possible, with most of the variables declared with a defined intent, i.e., `in`, `out` or `inout`. This input/output approach must be respected for the "large" arrays (e.g., velocity, pressure, temperature fields, etc.);
  - Do not define global arrays visible to all the subroutines; define all the arrays locally and only in the subroutines where they are needed. This is also beneficial for the GPU porting and ensures efficient use of the available memory;
  - For each new module and new subroutine, always employ the statement `implicit none`;
  - For each new module, import only the required subroutines and variables;
  - For each new module, explicitly declare which subroutines are public; the remaining ones are kept private by default;
  - Add comments to the code and respect a minimum indentation for readability;
  - Do not abuse preprocessor macros; sometimes it is better to duplicate a subroutine than to use too many preprocessor macros;
  - For each new development and application, create benchmarks of increasing complexity that serve as examples for the new application and are used to test, debug and continuously integrate the code. In the `examples` directory, create a new folder for the new app and place the examples there. Ideally, after generating the executable for that application, the user should be able to run a specific example just by changing the input files, without modifying the source code.