Installation

Source code

The entire GALAMOST package is free software under the GNU General Public License. The package is distributed both as source code and as a binary program; both can be downloaded from our website http://galamost.ciac.jl.cn/Download.php. The development code is available from https://bitbucket.org/galamostdevelopergroup/source-code/src/master/. At present, only Linux operating systems are supported. The following is a guide for installation from source.

  1. Requirements:

    1. Python >=2.6
    2. Boost library >=1.53.0
    3. NVIDIA CUDA Toolkit >= 7.0
    4. MPI (MVAPICH2 >= 2.3 or OpenMPI >= 4.0.0)
       For MVAPICH2, the environment variable "export MV2_ENABLE_AFFINITY=0" is necessary
       to avoid multiple processes running on the same CPU core
    
    Note: MPI is only needed for version 4, which is configured with '--mpi=on' by default.
    MPI is not needed when configuring with '--mpi=off'.
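
    For MVAPICH2 builds, the affinity variable mentioned above can simply be exported in the shell (or in a job script) before launching a run. A minimal sketch with a sanity check:

    ```shell
    # Disable MVAPICH2 CPU affinity so MPI ranks are not pinned to one core
    export MV2_ENABLE_AFFINITY=0

    # Verify the variable is set before launching a run
    echo "MV2_ENABLE_AFFINITY=$MV2_ENABLE_AFFINITY"
    ```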
    
  2. Before compiling and installing the source code, configure the build system with the configure script.

    Examples:

    ./configure --prefix=/opt/galamost4
    

More configuration options are given here:

Option        Function                    Example                       Default
--prefix      Installation path           --prefix=/opt/galamost4       --prefix=/opt/galamost4
--cuda_arch   Compute capability of GPU   --cuda_arch=75                automatically detected
--precision   Precision format            --precision=double            --precision=single
--gprof       Profiling tool              --gprof=on                    --gprof=off
--gdb         GDB tool                    --gdb=on                      --gdb=off
--cuda        CUDA toolkit path           --cuda=/usr/local/cuda-7.5    automatically detected
--gpu_mpi     GPU direct communication    --gpu_mpi=on                  --gpu_mpi=off
--mpi         Switch MPI on or off        --mpi=on                      --mpi=on
--mpi_dir     MPI path                    --mpi_dir=/usr/local          automatically detected
--boost       Boost path                  --boost=/usr                  automatically detected
--python      Python path                 --python=/usr                 automatically detected
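
As an illustration, several of these options can be combined in a single configure call. The flags below are taken from the table; the specific values (install prefix, compute capability 61, CUDA path) are only example choices and should be adapted to your machine:

```shell
./configure --prefix=/opt/galamost4 \
            --cuda_arch=61 \
            --precision=single \
            --mpi=on \
            --cuda=/usr/local/cuda-7.5
```

After configuration, check the configure output to confirm that Python, Boost, and CUDA were detected at the expected paths.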

If automatic detection fails, please set the value explicitly. The compute capability is normally detected automatically; if detection fails, or the machine is equipped with multiple GPUs with different compute capabilities, you can set the compute capability manually with --cuda_arch. The compute capabilities of some NVIDIA GPUs are listed below. For more GPUs, please visit https://developer.nvidia.com/cuda-gpus.

GPU                   Compute capability
Tesla V100            70
Tesla P100            60
Tesla P40             61
Tesla P4              61
NVIDIA TITAN V        70
NVIDIA TITAN Xp       61
NVIDIA TITAN X        61
GeForce GTX 1080 Ti   61
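
The table above can be encoded as a small shell helper for scripting the configure step. Note that cc_for_gpu is a hypothetical name, not part of GALAMOST, and it covers only the devices listed here:

```shell
# Hypothetical helper: map a GPU model from the table above to its
# compute capability, e.g. ./configure --cuda_arch=$(cc_for_gpu "Tesla V100")
cc_for_gpu() {
  case "$1" in
    "Tesla V100"|"NVIDIA TITAN V")            echo 70 ;;
    "Tesla P100")                             echo 60 ;;
    "Tesla P40"|"Tesla P4"|"NVIDIA TITAN Xp"|\
    "NVIDIA TITAN X"|"GeForce GTX 1080 Ti")   echo 61 ;;
    # For other GPUs, see https://developer.nvidia.com/cuda-gpus
    *)                                        echo "unknown" ;;
  esac
}

cc_for_gpu "GeForce GTX 1080 Ti"   # prints 61
```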
  3. After configuration, a Makefile will be generated in your current directory. Then you can compile and install the package with make install.

    Examples:

    make install -j4
    # -j specifies the number of parallel compile jobs
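
Since GALAMOST simulations are driven by Python scripts, the installed modules must be visible to the Python interpreter. The exact install layout under the prefix is an assumption here, not documented above, so treat this as a hypothetical post-install step:

```shell
# Hypothetical: expose the installed modules to Python; the lib/ subdirectory
# of the install prefix is an assumption about the install layout
export PYTHONPATH=/opt/galamost4/lib:$PYTHONPATH
echo "$PYTHONPATH" | grep -q "/opt/galamost4/lib" && echo "path added"
```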