AMROC/VTF Installation

Installation of machine

For instance on Ubuntu, use

  sudo apt-get install mercurial bison flex libfl-dev patch zlib1g-dev \
    libjpeg62 libjpeg62-dev libxi-dev gfortran g++ gnuplot paraview autoconf

to add the packages that are missing in a standard desktop installation. For
parallel program development on Ubuntu it should suffice to add

  mpi-default-bin mpi-default-dev

Note that on Ubuntu the mpirun command will usually be available only in the
form mpirun.openmpi, mpirun.lam, or mpirun.mpich2. Make sure the right version
in your case is used in the file vtf/amroc/python/startup.py below the lines
that include

  string.find(version,'Ubuntu')

For instance on Fedora, activate the following packages:

  autoconf, automake, binutils, gawk, gcc, g++, gfortran, glibc-devel,
  gnuplot, make, python, python-devel, Berkeley yacc, flex

and for parallel program development:

  openmpi, openmpi-devel, openmpi-libs

After verifying your installation, the following UNIX commands should be
available on the machine:

  gcc, g++, ld, ar, ranlib
  gfortran or f77 or g77
  make
  python, awk
  yacc, lex
  autoconf, automake
  gnuplot

and, for parallel program development,

  mpicc
  mpic++ / mpiCC / mpicxx
  mpif90
  mpirun

Among other file formats, VTF/AMROC supports binary VTK files for
visualization. To visualize such files, either Paraview or VisIt can be used.
For instance on Fedora, Paraview is available as a package.

---------------------------------------------------------------------

Installation of AMROC

A convenient script is provided to compile the mandatory HDF4 libraries. The
script also creates the directories as assumed by the VTF software.

  $ cd vtf/third-party/HDF4.2r9
  $ ./install $HOME/hdf4

This creates the libraries once in the user's $HOME directory. On Linux
without a Fortran 77 compiler, set

  export F77=gfortran   (bash syntax)

before running install.
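The toolchain checklist above can be verified with a short shell snippet. The command list follows the text; the helper function check_tools is purely illustrative and not part of the VTF distribution:

```shell
#!/bin/sh
# Hedged sketch: check that the commands required by VTF/AMROC are on PATH.
# check_tools is an illustrative helper, not part of VTF itself.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "MISSING:$missing"
  else
    echo "OK"
  fi
}

# Serial toolchain; for parallel builds also check mpicc mpic++ mpif90 mpirun.
check_tools gcc g++ ld ar ranlib make python awk yacc lex autoconf automake gnuplot
```

Running the script prints OK when everything is found, or a MISSING line naming the commands still to be installed.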
autoconf files

Configuration and compilation

A typical configuration command would be

  $ ./configure -C --enable-opt=yes --enable-mpi=yes --enable-maintainer-mode \
      --enable-shellnewmat HDF4_DIR=$HOME/hdf4

On Linux without a dedicated Fortran 77 compiler, append F77=gfortran to the
above line. Then go into the build directory and compile the libraries:

  $ cd gnu-opt-mpi
  $ make

If the system has multiple cores, the last step can also be replaced by

  $ cd gnu-opt-mpi
  $ ../parmake

which uses all available cores for compilation, except in directories with
F90 modules.

As the build environment for the VTF allows a user to keep multiple top-level
build directories (e.g., gnu-debug-mpi, gnu-opt-mpi, intel-debug, ...) on one
machine, scripts are provided to adjust the environment of the current shell
so that executables can be run transparently from different locations. These
scripts are vtf/ac/paths.sh and vtf/ac/paths.csh. They can be run from an
arbitrary directory in the following way:

  bash:       source paths.sh [path to top-level build directory]
              or . paths.sh [path to top-level build directory]
  sh, ksh:    . paths.sh [path to top-level build directory]
  csh, tcsh:  source paths.csh [path to top-level build directory]

For instance:

  $ cd vtf
  $ source ac/paths.sh gnu-opt-mpi

Testing the installation and the build

Extend your shell environment for your current build as explained directly
above. cd into the top-level build directory and execute

  ../amroc/testrun_lbm.sh -m make -s -r 4

for a parallel test, and

  ../amroc/testrun_lbm.sh -m make -s -r 0

in serial. The test is successful if Gnuplot windows with evolving graphs pop
up one after another. Use

  ../amroc/testrun_lbm.sh -c

to compare the computational results against reference solutions.
For instance:

  $ cd gnu-opt-mpi
  $ ../amroc/testrun.sh -m make -s -r 4
  $ ../amroc/testrun_gfm.sh -m make -s -r 4
  $ ../amroc/testrun_lbm.sh -m make -s -r 4
  $ ../fsi/testrun.sh -m make -s -r 4

To compare with the stored reference solutions:

  $ ../amroc/testrun.sh -c
  $ ../amroc/testrun_gfm.sh -c
  $ ../amroc/testrun_lbm.sh -c
  $ ../fsi/testrun.sh -c

Realistic example

Rigid body motion:

Change to the compilation directory gnu-opt/gnu-opt-mpi:

  cd amroc/lbm/applications/Navier-Stokes/2d/MovingCylinder
  make

Change to the corresponding directory with solver.in:

  cd vtf/amroc/lbm/applications/Navier-Stokes/2d/MovingCylinder
  ./run.py

or

  ./run.py 2

(if you have compiled with MPI on a dual-core system). Progress of the
computation can be monitored by looking at the files out.txt and especially
P*.log.

Create binary VTK files for Paraview or VisIt, if not already done by run.py:

  hdf2tab.sh -m

Execute VisIt or Paraview and load the VTK files for visualization.

For instance:

  $ cd vtf
  $ source ac/paths.sh gnu-opt-mpi
  $ cd gnu-opt-mpi/amroc/lbm/applications/Navier-Stokes/2d/MovingCylinder
  $ make
  $ cd ../../../../../../../amroc/lbm/applications/Navier-Stokes/2d/MovingCylinder
  $ run.py 4

Or:

  $ cd vtf
  $ source ac/paths.sh gnu-opt-mpi
  $ cd gnu-opt-mpi/amroc/clawpack/applications/euler/2d/SphereLiftOff
  $ make
  $ cd ../../../../../../../amroc/clawpack/applications/euler/2d/SphereLiftOff
  $ run.py
  $ hdf2tab.sh "-f display_file_visit.in"

FSI example (requires MPI):

  cd $HOME/vtf/gnu-opt-mpi/vtf/fsi/beam-amroc/VibratingBeam
  make
  cd $HOME/vtf/fsi/beam-amroc/VibratingBeam
  ./run.py 4

Change the LastNode entry in solver.in for a different processor number.

  hdf2tab.sh

Execute VisIt or Paraview and load the VTK files for visualization.

Creating new applications within the autoconf environment

The addition of new applications is simple if the configure command has been
run with the developer option --enable-maintainer-mode. Create the
applications.
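The MovingCylinder walkthrough above can be collected into one script. This is a hedged sketch only: the variable VTF_DIR, the default processor count, and the build-tree name gnu-opt-mpi are assumptions to adapt to your site, while the application path and commands are taken from the text:

```shell
#!/bin/sh
# Hedged sketch of the MovingCylinder walkthrough above as one script.
# VTF_DIR and the processor count are site-specific assumptions; the
# application path and the commands themselves follow the documentation.
VTF_DIR=${VTF_DIR:-$HOME/vtf}
APP=amroc/lbm/applications/Navier-Stokes/2d/MovingCylinder
NPROCS=${1:-2}

cd "$VTF_DIR"
. ac/paths.sh gnu-opt-mpi                    # extend the shell environment
( cd "gnu-opt-mpi/$APP" && make )            # compile in the build tree
cd "$APP"                                    # source tree holds solver.in
./run.py "$NPROCS"                           # run; monitor out.txt and P*.log
hdf2tab.sh -m                                # write binary VTK files
```

After the script finishes, load the resulting VTK files in Paraview or VisIt as described above.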
Normally this is done by copying an entire application directory and
replacing the executable name in src/Makefile.am. Locate the corresponding
configure.ac file in a directory higher up. Add the new files
application/Makefile.am and application/src/Makefile.am to the
AC_CONFIG_FILES() entry. Add the new application directory to the SUBDIRS
entry of the next higher Makefile.am (this is not mandatory).

Navigate with cd into the build location, into a directory at the same level
as or below the changed configure.ac file, e.g.,

  cd gnu-opt-mpi/amroc/lbm

Type make. Thanks to the option --enable-maintainer-mode, this will
automatically regenerate configure and all new Makefiles, and also run
./config.status, which creates the build directory for the new application.
If the new application would appear rather late in the build process, it is
possible at this point to stop the compilation and navigate directly into
the sub-directory of the new application, e.g.,

  cd applications/3d/Navier-Stokes/NewCode; make
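The copy-and-register procedure can be sketched as follows. All names here (OldCode, NewCode, oldcode, newcode, and the application path) are placeholders for illustration, and the configure.ac edit must still be done by hand in an editor:

```shell
#!/bin/sh
# Hedged sketch of adding a new application; every name below is a
# placeholder. Assumes configure was run with --enable-maintainer-mode
# and GNU sed (for sed -i).
cd vtf/amroc/lbm/applications/Navier-Stokes/3d
cp -r OldCode NewCode
# Replace the executable name in the copied src/Makefile.am:
sed -i 's/oldcode/newcode/g' NewCode/src/Makefile.am
# Manually edit the nearest configure.ac higher up: extend AC_CONFIG_FILES()
# with the NewCode Makefile and NewCode src/Makefile entries, and optionally
# add NewCode to SUBDIRS in the parent Makefile.am. Then regenerate and build
# from the build tree:
cd "$HOME/vtf/gnu-opt-mpi/amroc/lbm" && make
```

The final make triggers the automatic regeneration of configure, the new Makefiles, and the build directory for the new application, as described above.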