

Compiling, Linking, and Executing Applications That Use the Participant Library

Last update: 10.07.2023

Instructions for compiling and linking applications that use the participant library depend on the target platform, the target language, the MPI distribution, and the compiler.

Supported Languages

System Coupling interfaces are provided for the following target languages:

| Language | Standard |
| --- | --- |
| C++ | C++11 or later |
| C | C89 or later |
| Fortran | 2003 or later (both fixed form and free form) |
| Python | 3.10* |
  • *Only some Python interpreters are supported. The CPython interpreter located in `<ANSYSInstallationPath>/commonfiles/CPython` is supported on both Windows and Linux. On Windows, the Python 3.10 interpreter from www.python.org and the Python interpreter that comes with Microsoft Visual Studio are also supported. The Python interpreter that comes with Intel Parallel Studio is currently not supported. Other Python interpreters have not been tested.

System Coupling Participant Library Resources

Resources are available at the following locations:

Public Header Files

* `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/include/SystemCouplingParticipant`

All public header files for the C++, C, and Fortran interfaces are located in the above directory. Both fixed and free form headers are available for Fortran codes.

C++

Only the `SystemCoupling.hpp` header file should be included in the source code.

C

Only the `syscSystemCoupling.h` header file should be included in the source code.

Fortran

Only the `syscPartLib.fi` header file should be included in the source code.

Python

The `pyExt.SystemCouplingParticipant` module should be imported in Python. However, the following directory must first be added to the `PYTHONPATH` environment variable: `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin`

In addition, due to a change in the behavior of Python 3.8 and above on Windows platforms, the following code should be added to the Python code before the System Coupling participant module is imported:

import os
import sys

# On Windows, Python 3.8+ no longer searches PATH for dependent DLLs,
# so the library directories must be registered explicitly.
if sys.platform == "win32":
    os.add_dll_directory("<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin")
    os.add_dll_directory("<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin/compiler")
    os.add_dll_directory("<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/<platform>/<mpi>")

# The participant module can now be imported.
import pyExt.SystemCouplingParticipant

Link-Time Dependencies

- Windows
    - `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/lib`
        - `SysC.SystemCouplingParticipant.lib`
        - `SysC.SystemCouplingParticipantFortran.lib` (only if using Fortran APIs)
    - `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/<platform>/<mpi>`
        - `mpi_wrapper.lib`
- Linux
    - `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin`
        - `libSysC.SystemCouplingParticipant.so`
        - `libSysC.SystemCouplingParticipantFortran.so` (only if using Fortran APIs)
    - `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/<platform>/<mpi>`
        - `libmpi_wrapper.so`

Run-Time Dependencies

- `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin`
    - `SysC.SystemCouplingParticipant.dll` (Windows)
    - `SysC.SystemCouplingParticipantFortran.dll` (Windows, only if using Fortran APIs)
    - `SysC.SystemCouplingParticipant.Python.dll` (Windows, only if using Python APIs)
    - `libSysC.SystemCouplingParticipant.so` (Linux)
    - `libSysC.SystemCouplingParticipantFortran.so` (Linux, only if using Fortran APIs)
    - `libSysC.SystemCouplingParticipant.Python.so` (Linux, only if using Python APIs)
    - `pyExt/_SystemCouplingParticipant.pyd` (Windows, only if using Python APIs)
    - `pyExt/_SystemCouplingParticipant.so` (Linux, only if using Python APIs)
    - `pyExt/SystemCouplingParticipant.pyc` (only if using Python APIs)
- `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/bin/compiler`
- `<ANSYSInstallationPath>/SystemCoupling/runTime/<platform>/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/<platform>/<mpi>`
    - If a supported MPI distribution is not being used (including when MPI is not used at all), the stub MPI wrapper should be used (replace `<mpi>` with `stub` above).

The above directories must be included in the `PATH` environment variable (on Windows) or the `LD_LIBRARY_PATH` environment variable (on Linux) at run time.

Ansys CPython Interpreter (for Python only)

- `<ANSYSInstallationPath>/commonfiles/CPython/<version>/release/python`
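For example, a Python participant script can be run with this interpreter directly (a sketch, assuming the path above resolves to the interpreter executable; the script name is hypothetical):

<ANSYSInstallationPath>/commonfiles/CPython/<version>/release/python MyParticipantScript.py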

Examples

In the following examples, mock solver applications from the Heat Transfer in Square Channel Air Flow tutorial are built with different configurations. These examples can be used as a reference when building more complex applications. The actual build steps depend on your system configuration, compilers, and other details. The source code for these applications is provided with the participant library tutorial package.

In the following examples, replace

  • <ANSYSInstallationPath> with the correct Ansys installation path.
  • <MultiportVersion> with the correct version of the Fluent Multiport library.
  • <IntelMPIPath> with the correct path to the Intel MPI library.

The following compiler and MPI versions were used:

  • Linux
    • g++, gcc, and gfortran 8.2.0
    • Intel(R) Fortran Compiler, Version 19.0.5.281
    • Intel(R) MPI Library for Linux OS, Version 2017 Update 4
  • Windows
    • Microsoft (R) C/C++ Optimizing Compiler Version 19.10.25027 for x64
    • Intel(R) Visual Fortran Intel(R) 64 Compiler Version 19.0.4.245
    • Intel(R) MPI Library for Windows OS, Version 2018 Update 3

Linux

C++

g++ -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolver ChannelFlowMockSolver.cpp -lSysC.SystemCouplingParticipant -lmpi_wrapper

Note that it is important to add the `-D_GLIBCXX_USE_CXX11_ABI=0` compiler flag. If this flag cannot be used, then it is recommended that you use the C APIs.

C

gcc -std=c11 -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolver ChannelFlowMockSolver.c -lSysC.SystemCouplingParticipant -lmpi_wrapper

Fortran

GNU Fortran Compiler

gfortran -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant/FortranFixedForm -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolver ChannelFlowMockSolver.f -lSysC.SystemCouplingParticipantFortran -lmpi_wrapper

Intel Fortran Compiler

ifort -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant/FortranFixedForm -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolver ChannelFlowMockSolver.f -lSysC.SystemCouplingParticipantFortran -lmpi_wrapper -lgfortran

C++ Parallel Version Using Intel MPI

g++ -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include -I<IntelMPIPath>/intel64/include -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<IntelMPIPath>/intel64/lib/release -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolverMPI ChannelFlowMockSolverMPI.cpp -lSysC.SystemCouplingParticipant -lmpi -lmpi_wrapper

C Parallel Version Using Intel MPI

gcc -std=c11 -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include -I<IntelMPIPath>/intel64/include -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<IntelMPIPath>/intel64/lib/release -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolverMPI ChannelFlowMockSolverMPI.c -lSysC.SystemCouplingParticipant -lmpi -lmpi_wrapper

Fortran Parallel Version Using Intel MPI

GNU Fortran Compiler

gfortran -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant/FortranFixedForm -I<IntelMPIPath>/intel64/include -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<IntelMPIPath>/intel64/lib -L<IntelMPIPath>/intel64/lib/release -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolverMPI ChannelFlowMockSolverMPI.f -lSysC.SystemCouplingParticipantFortran -lmpifort -lmpi -lmpi_wrapper

Intel Fortran Compiler

ifort -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant/FortranFixedForm -I<IntelMPIPath>/intel64/include -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<IntelMPIPath>/intel64/lib -L<IntelMPIPath>/intel64/lib/release -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolverMPI ChannelFlowMockSolverMPI.f -lSysC.SystemCouplingParticipantFortran -lmpifort -lmpi -lmpi_wrapper -lgfortran

C++ Parallel Version Using Fluent MPI Wrapper

g++ -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/include -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolverMPI ChannelFlowMockSolverMPI.cpp -lSysC.SystemCouplingParticipant -lmpi_wrapper

Fortran Parallel Version Using Fluent MPI Wrapper

Note that to link against the MPI wrapper, the `include 'mpif.h'` statement in the application must be replaced with the following C preprocessor directive:

#include "mpif.h"

The C preprocessor must be used; the GNU Fortran compiler option for this is `-cpp`.

gfortran -cpp -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/include/SystemCouplingParticipant/FortranFixedForm -I<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/include -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin -L<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub -o ChannelFlowMockSolverMPI ChannelFlowMockSolverMPI.f -lSysC.SystemCouplingParticipantFortran -lmpi_wrapper

Executing in Standalone Mode

Add the following locations to the LD_LIBRARY_PATH environment variable:

  • <ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin
  • <ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin/compiler
  • <ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub
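For example, in a bash shell this can be done as follows (a sketch; adjust the paths to your installation):

export LD_LIBRARY_PATH=<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin:<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin/compiler:<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/stub:$LD_LIBRARY_PATH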

Now, if the program was compiled (C++, C, or Fortran), execute it in standalone mode:

./ChannelFlowMockSolver --scname="test"

If using Python, also add the following location to the PYTHONPATH environment variable:

<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin
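For example, in a bash shell:

export PYTHONPATH=<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin:$PYTHONPATH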

Now, execute the Python script in standalone mode:

python ChannelFlowMockSolver.py --scname="test"

Executing Parallel Version in Standalone Mode Using Intel MPI

Add the following locations to the LD_LIBRARY_PATH environment variable:

  • <ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin
  • <ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin/compiler
  • <ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/intel
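For example, in a bash shell:

export LD_LIBRARY_PATH=<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin:<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/bin/compiler:<ANSYSInstallationPath>/SystemCoupling/runTime/linx64/cnlauncher/fluent/fluent<MultiportVersion>/multiport/mpi_wrapper/lnamd64/intel:$LD_LIBRARY_PATH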

Now, execute the program in standalone mode locally using two processes:

mpirun -n 2 ./ChannelFlowMockSolverMPI --scname="test"

Windows

C++

cl /EHsc /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include" ChannelFlowMockSolver.cpp /FeChannelFlowMockSolver.exe /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipant.lib

To compile in debug mode, use the `/ZI` flag (the `/JMC` and `/RTC1` flags can optionally be added):

cl /EHsc /ZI /JMC /RTC1 /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include" ChannelFlowMockSolver.cpp /FeChannelFlowMockSolver.exe /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipant.lib

Note that the participant library is compiled with the `/MT` flag on Windows. You must compile C++ code that links against the participant library with the `/MT` or `/MD` flag. The debug run-times, `/MTd` and `/MDd`, are not compatible with the participant library. If you cannot change the compiler flags, then it is recommended that you use the C APIs instead.
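For example, the first C++ command above with the run-time library flag passed explicitly (a minimal variant; `/MT` is otherwise the compiler's default when no run-time flag is given):

cl /EHsc /MT /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include" ChannelFlowMockSolver.cpp /FeChannelFlowMockSolver.exe /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipant.lib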

C

cl /EHsc /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include" ChannelFlowMockSolver.c /FeChannelFlowMockSolver.exe /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipant.lib

Fortran

ifort -I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant\FortranFixedForm" /FeChannelFlowMockSolver.exe ChannelFlowMockSolver.f /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipantFortran.lib

C++ Parallel Version Using Intel MPI

cl /EHsc /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include" /I"<IntelMPIPath>\intel64\include" ChannelFlowMockSolverMPI.cpp /FeChannelFlowMockSolverMPI.exe /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipant.lib /LIBPATH:"<IntelMPIPath>\intel64\lib" impi.lib

C Parallel Version Using Intel MPI

cl /EHsc /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include" /I"<IntelMPIPath>\intel64\include" ChannelFlowMockSolverMPI.c /FeChannelFlowMockSolverMPI.exe /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipant.lib /LIBPATH:"<IntelMPIPath>\intel64\lib" impi.lib

Fortran Parallel Version Using Intel MPI

ifort -I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant\FortranFixedForm" -I"<IntelMPIPath>\intel64\include" /FeChannelFlowMockSolverMPI.exe ChannelFlowMockSolverMPI.f /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipantFortran.lib /LIBPATH:"<IntelMPIPath>\intel64\lib" impi.lib

C++ Parallel Version Using Fluent MPI Wrapper

cl /EHsc /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include" /I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\include" ChannelFlowMockSolverMPI.cpp /FeChannelFlowMockSolverMPI.exe /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipant.lib /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\stub" mpi_wrapper.lib

Fortran Parallel Version Using Fluent MPI Wrapper

Note that to link against the MPI wrapper, the `include 'mpif.h'` statement in the application must be replaced with the following C preprocessor directive:

#include "mpif.h"

The C preprocessor must be used; the Intel Fortran compiler option for this is `-fpp`.

ifort -fpp -I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\include\SystemCouplingParticipant\FortranFixedForm" -I"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\include" /FeChannelFlowMockSolverMPI.exe ChannelFlowMockSolverMPI.f /link /subsystem:console /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\lib" SysC.SystemCouplingParticipantFortran.lib /LIBPATH:"<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\stub" mpi_wrapper.lib

Executing in Standalone Mode

Add the following locations to the PATH environment variable:

  • <ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin
  • <ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin\compiler
  • <ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\stub
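For example, in a Windows command prompt (a sketch; adjust the paths to your installation):

set PATH=<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin;<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin\compiler;<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\stub;%PATH%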

Now, if the program was compiled (C++, C, or Fortran), execute it in standalone mode:

ChannelFlowMockSolver.exe --scname="test"

If using Python, also add the following location to the PYTHONPATH environment variable:

<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin
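For example:

set PYTHONPATH=<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin;%PYTHONPATH%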

Now, execute the Python script in standalone mode:

python.exe ChannelFlowMockSolver.py --scname="test"

Executing Parallel Version in Standalone Mode Using Intel MPI

Add the following locations to the PATH environment variable:

  • <ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin
  • <ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin\compiler
  • <ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\intel
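For example, in a Windows command prompt:

set PATH=<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin;<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\bin\compiler;<ANSYSInstallationPath>\SystemCoupling\runTime\winx64\cnlauncher\fluent\fluent<MultiportVersion>\multiport\mpi_wrapper\win64\intel;%PATH%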

Now, execute the program in standalone mode locally using two processes:

mpiexec -localonly -noprompt -n 2 ChannelFlowMockSolverMPI.exe --scname="test"