Compiling, Linking, and Executing Applications That Use the Participant Library
Last update: 10.07.2023
The instructions for compiling and linking applications that use the participant library depend on the target platform, the target language, the MPI distribution, and the compiler.
Supported Languages
System Coupling interfaces are provided for the following target languages:
Language | Standard
---|---
C++ | C++11 or later
C | C89 or later
Fortran | 2003 or later (both fixed form and free form)
Python | 3.10*
* Only some Python interpreters are supported. The CPython interpreter located in `<ANSYSInstallationPath>/commonfiles/CPython` is supported on both Windows and Linux. On Windows, the Python 3.10 interpreter from www.python.org is supported. The Python interpreter that comes with Microsoft Visual Studio is also supported on Windows. The Python interpreter that comes with Intel Parallel Studio is currently not supported. Other Python interpreters have not been tested.
System Coupling Participant Library Resources
Resources are available at the following locations:
Public Header Files
* `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/include/SystemCouplingParticipant`
All public header files for the C++, C, and Fortran interfaces are located in the above directory. Both fixed and free form headers are available for Fortran codes.
C++
Only the SystemCoupling.hpp header file should be included in the source code.
C
Only the syscSystemCoupling.h header file should be included in the source code.
Fortran
Only the syscPartLib.fi header file should be included in the source code.
Python
The pyExt.SystemCouplingParticipant module should be imported in Python. However, the following directory must first be added to the PYTHONPATH environment variable:
<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin
In addition, due to a change in DLL-loading behavior in Python 3.8 and later on Windows, the following code should be added to the Python script before importing the System Coupling participant module:
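The lines shipped with the tutorial may differ; the following is a minimal sketch of the usual workaround, assuming the participant library DLLs live in the bin directory added to PYTHONPATH above.

import os
import sys

# Python 3.8+ on Windows no longer searches PATH for dependent DLLs,
# so register the participant library directory explicitly.
# The path below is illustrative; adjust <ANSYSInstallationPath> to your installation.
if sys.platform == "win32":
    os.add_dll_directory(r"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin")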
Link-Time Dependencies
- Windows
  - `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/lib`
    - `SysC.SystemCouplingParticipant.lib`
    - `SysC.SystemCouplingParticipantFortran.lib` (only if using Fortran APIs)
  - `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/<platform>/<mpi>`
    - `mpi_wrapper.lib`
- Linux
  - `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin`
    - `SysC.SystemCouplingParticipant.so`
    - `SysC.SystemCouplingParticipantFortran.so` (only if using Fortran APIs)
  - `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/<platform>/<mpi>`
    - `libmpi_wrapper.so`
Run-Time Dependencies
- `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin`
  - `SysC.SystemCouplingParticipant.dll` (Windows)
  - `SysC.SystemCouplingParticipantFortran.dll` (Windows, only if using Fortran APIs)
  - `SysC.SystemCouplingParticipant.Python.dll` (Windows, only if using Python APIs)
  - `libSysC.SystemCouplingParticipant.so` (Linux)
  - `libSysC.SystemCouplingParticipantFortran.so` (Linux, only if using Fortran APIs)
  - `libSysC.SystemCouplingParticipant.Python.so` (Linux, only if using Python APIs)
  - `pyExt/_SystemCouplingParticipant.pyd` (Windows, only if using Python APIs)
  - `pyExt/_SystemCouplingParticipant.so` (Linux, only if using Python APIs)
  - `pyExt/SystemCouplingParticipant.pyc` (only if using Python APIs)
- `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin/compiler`
- `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/<platform>/<mpi>`
  - If not using a supported MPI distribution (this includes not using MPI at all), the stub MPI wrapper should be used (replace `<mpi>` with `stub` above).

The above directories need to be included in the `PATH` environment variable (on Windows) or the `LD_LIBRARY_PATH` environment variable (on Linux) at run time.
Ansys CPython Interpreter (for Python only)
- `<ANSYSInstallationPath>/commonfiles/CPython/<version>/release/python`
Examples
In the following examples, mock solver applications from the Heat Transfer in Square Channel Air Flow tutorial are built with different configurations. These examples can be used as a reference when building more complex applications. The actual build steps depend on your system configuration, compilers, and other details. The source code for these applications is provided with the participant library tutorial package.
In the following examples, replace
- <ANSYSInstallationPath> with the correct Ansys installation path.
- <MultiportVersion> with the correct version of the Fluent Multiport library.
- <IntelMPIPath> with the correct path to the Intel MPI library.
The following compiler and MPI versions were used:
- Linux
- g++, gcc, and gfortran 8.2.0
- Intel(R) Fortran Compiler, Version 19.0.5.281
- Intel(R) MPI Library for Linux OS, Version 2017 Update 4
- Windows
- Microsoft (R) C/C++ Optimizing Compiler Version 19.10.25027 for x64
- Intel(R) Visual Fortran Intel(R) 64 Compiler Version 19.0.4.245
- Intel(R) MPI Library for Windows OS, Version 2018 Update 3
Linux
C++
Note that it is important to add the `-D_GLIBCXX_USE_CXX11_ABI=0` compiler flag. If this flag cannot be used, then it is recommended that you use the C APIs.
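For reference, a serial build might look like the following sketch. The source file name is assumed from the tutorial executable name, and the library is assumed to resolve through -lSysC.SystemCouplingParticipant; otherwise link the .so listed under Link-Time Dependencies by its full path.

g++ -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0 \
    -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolver.cpp \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipant \
    -o ChannelFlowMockSolver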
C
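A comparable gcc invocation (a sketch; the source file name is assumed):

gcc -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolver.c \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipant \
    -o ChannelFlowMockSolver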
Fortran
GNU Fortran Compiler
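A representative gfortran build (a sketch; the source file name is assumed, and the Fortran interface library is linked in addition to the main library):

gfortran -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolver.f90 \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipantFortran -lSysC.SystemCouplingParticipant \
    -o ChannelFlowMockSolver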
Intel Fortran Compiler
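The equivalent ifort invocation (a sketch under the same assumptions):

ifort -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolver.f90 \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipantFortran -lSysC.SystemCouplingParticipant \
    -o ChannelFlowMockSolver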
C++ Parallel Version Using Intel MPI
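A sketch using Intel MPI's g++-based wrapper mpicxx (the MPI source file name is assumed):

mpicxx -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0 \
    -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolverMPI.cpp \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipant \
    -o ChannelFlowMockSolverMPI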
C Parallel Version Using Intel MPI
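A sketch using the mpicc wrapper (same assumptions as above):

mpicc -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolverMPI.c \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipant \
    -o ChannelFlowMockSolverMPI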
Fortran Parallel Version Using Intel MPI
GNU Fortran Compiler
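A sketch using Intel MPI's gfortran-based mpif90 wrapper (source file name assumed):

mpif90 -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolverMPI.f90 \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipantFortran -lSysC.SystemCouplingParticipant \
    -o ChannelFlowMockSolverMPI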
Intel Fortran Compiler
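A sketch using the mpiifort wrapper (source file name assumed):

mpiifort -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolverMPI.f90 \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipantFortran -lSysC.SystemCouplingParticipant \
    -o ChannelFlowMockSolverMPI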
C++ Parallel Version Using Fluent MPI Wrapper
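A sketch that links against the Fluent MPI wrapper library instead of an MPI distribution directly; replace <mpi> as described under Run-Time Dependencies, and adjust the assumed source file name as needed:

g++ -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0 \
    -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolverMPI.cpp \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipant \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/<mpi>" \
    -lmpi_wrapper \
    -o ChannelFlowMockSolverMPI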
Fortran Parallel Version Using Fluent MPI Wrapper
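A sketch with gfortran; the -cpp option enables the C preprocessor required by the note below, and the source file name is assumed:

gfortran -cpp \
    -I"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant" \
    ChannelFlowMockSolverMPI.f90 \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin" \
    -lSysC.SystemCouplingParticipantFortran -lSysC.SystemCouplingParticipant \
    -L"<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/<mpi>" \
    -lmpi_wrapper \
    -o ChannelFlowMockSolverMPI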
Note that to link against the MPI wrapper, the `include 'mpif.h'` statement in the application must be replaced with the corresponding C preprocessor directive. The C preprocessor must therefore be used; the GNU Fortran compiler option for this is `-cpp`.
Executing in Standalone Mode
Add the following locations to the LD_LIBRARY_PATH environment variable:
<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin
<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin/compiler
<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub
Now, if compiled (C++, C, or Fortran), execute the program in standalone mode:
./ChannelFlowMockSolver --scname="test"
If using Python, also add the following location to the PYTHONPATH environment variable:
<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin
Now, execute the Python script in standalone mode:
python ChannelFlowMockSolver.py --scname="test"
Executing Parallel Version in Standalone Mode Using Intel MPI
Add the following locations to the LD_LIBRARY_PATH environment variable:
<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin
<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin/compiler
<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/intel
Now, execute the program in standalone mode locally using two processes:
mpirun -n 2 ./ChannelFlowMockSolverMPI --scname="test"
Windows
C++
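A representative release build with cl (a sketch; the source file name is assumed, and /MT is chosen per the run-time library note below):

cl /EHsc /MT ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    ChannelFlowMockSolver.cpp ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    /FeChannelFlowMockSolver.exe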
To compile in debug mode, use the /ZI flag (the /JMC and /RTC1 flags can be added optionally):
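For example (a sketch; the release run-time flag /MT is kept, per the note below):

cl /ZI /JMC /RTC1 /EHsc /MT ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    ChannelFlowMockSolver.cpp ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    /FeChannelFlowMockSolver.exe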
Note that the participant library is compiled with the /MT flag on Windows. You must compile C++ code that links against the participant library with the /MT or /MD flag. The debug run-time flags /MTd and /MDd are not compatible with the participant library. If you cannot change the compiler flags, then it is recommended that you use the C APIs instead.
C
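A comparable cl invocation for the C mock solver (a sketch; source file name assumed):

cl /MT ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    ChannelFlowMockSolver.c ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    /FeChannelFlowMockSolver.exe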
Fortran
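A representative ifort build (a sketch; the source file name is assumed and the Fortran interface library is linked in addition to the main library):

ifort ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    ChannelFlowMockSolver.f90 ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipantFortran.lib" ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    /exe:ChannelFlowMockSolver.exe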
C++ Parallel Version Using Intel MPI
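A sketch that compiles with cl and links Intel MPI directly; the intel64\include and intel64\lib\release\impi.lib locations are typical for the Intel MPI versions listed above and may differ in your installation:

cl /EHsc /MT ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    /I"<IntelMPIPath>\intel64\include" ^
    ChannelFlowMockSolverMPI.cpp ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    "<IntelMPIPath>\intel64\lib\release\impi.lib" ^
    /FeChannelFlowMockSolverMPI.exe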
C Parallel Version Using Intel MPI
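The C variant of the same sketch, with the same assumptions about the Intel MPI layout:

cl /MT ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    /I"<IntelMPIPath>\intel64\include" ^
    ChannelFlowMockSolverMPI.c ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    "<IntelMPIPath>\intel64\lib\release\impi.lib" ^
    /FeChannelFlowMockSolverMPI.exe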
Fortran Parallel Version Using Intel MPI
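A sketch assuming Intel MPI's mpiifort wrapper script is on the PATH; otherwise compile with ifort and link the Intel MPI libraries explicitly:

mpiifort ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    ChannelFlowMockSolverMPI.f90 ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipantFortran.lib" ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    /exe:ChannelFlowMockSolverMPI.exe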
C++ Parallel Version Using Fluent MPI Wrapper
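A sketch that links the Fluent MPI wrapper import library instead of Intel MPI; replace <mpi> as described under Run-Time Dependencies:

cl /EHsc /MT ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    ChannelFlowMockSolverMPI.cpp ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\<mpi>\mpi_wrapper.lib" ^
    /FeChannelFlowMockSolverMPI.exe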
Fortran Parallel Version Using Fluent MPI Wrapper
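A sketch with ifort; the /fpp option enables the C preprocessor required by the note below, and the source file name is assumed:

ifort /fpp ^
    /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant" ^
    ChannelFlowMockSolverMPI.f90 ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipantFortran.lib" ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib\SysC.SystemCouplingParticipant.lib" ^
    "<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\<mpi>\mpi_wrapper.lib" ^
    /exe:ChannelFlowMockSolverMPI.exe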
Note that to link against the MPI wrapper, the `include 'mpif.h'` statement in the application must be replaced with the corresponding C preprocessor directive. The C preprocessor must therefore be used; the Intel Fortran compiler option for this is `-fpp`.
Executing in Standalone Mode
Add the following locations to the PATH environment variable:
<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin
<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin\compiler
<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\stub
Now, if compiled (C++, C, or Fortran), execute the program in standalone mode:
ChannelFlowMockSolver.exe --scname="test"
If using Python, also add the following location to the PYTHONPATH environment variable:
<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin
Now, execute the Python script in standalone mode:
python.exe ChannelFlowMockSolver.py --scname="test"
Executing Parallel Version in Standalone Mode Using Intel MPI
Add the following locations to the PATH environment variable:
<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin
<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin\compiler
<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\intel
Now, execute the program in standalone mode locally using two processes:
mpiexec -localonly -noprompt -n 2 ChannelFlowMockSolverMPI.exe --scname="test"