Building the prerequisites on MacOS X

Homebrew

Homebrew can install most of the required general-purpose development tools, e.g.,

  • autoconf
  • automake
  • libtool
  • cmake
  • ...

If some tool or library is obsolete or missing on the system, installing it with Homebrew may be the fastest option.

Be aware that, to function properly behind a proxy, the brew tool may require the http_proxy and https_proxy variables to be set correctly in your shell (Terminal). Please consult man brew for more information.
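
For example (a sketch; the proxy host and port below are placeholders for your site's actual values):

export http_proxy=http://proxy.example.com:8080
export https_proxy=$http_proxy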

GCC

To install GNU C, C++, and Fortran compilers and libraries, Homebrew may be used:

brew install gcc

Alternatively, see the Installing GCC page for how to install GCC from the source distribution. I have used the latest development sources from SVN (trunk) to successfully bootstrap the compilers. In essence, the procedure (consolidated in the sketch after the list) is to:

  1. Check out the SVN sources.
  2. Download the missing prerequisites (if not already installed on the system) by invoking ./contrib/download_prerequisites from the top-level directory of the working copy created in step 1.
  3. Configure the build. To avoid any clashes with the system's gcc driver, or with a possible Homebrew GCC installation, etc., saying ./configure --prefix=$GCCDIR, where GCCDIR is set to a sub-directory of the user's $HOME, may be a reasonable choice.
  4. Say make bootstrap and then make install.
  5. Be sure to prepend $GCCDIR/bin to PATH, $GCCDIR/lib to DYLD_LIBRARY_PATH, and (optionally) $GCCDIR/share/man to MANPATH, in order to use the newly built compilers, instead of those already present on the system.
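
Putting those steps together, a minimal sketch (GCCDIR, and the --enable-languages selection matching the compilers mentioned above, are assumptions):

svn checkout svn://gcc.gnu.org/svn/gcc/trunk gcc-trunk
cd gcc-trunk
./contrib/download_prerequisites
./configure --prefix=$GCCDIR --enable-languages=c,c++,fortran
make bootstrap
make install

Then, e.g., in ~/.bash_profile:

export PATH=$GCCDIR/bin:$PATH
export DYLD_LIBRARY_PATH=$GCCDIR/lib:$DYLD_LIBRARY_PATH
export MANPATH=$GCCDIR/share/man:$MANPATH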

METIS

Set METISDIR to a directory you wish to install METIS to, and execute from the source directory:

make config prefix=$METISDIR cc=`which gcc` openmp=1
make
make install

Add $METISDIR/bin to PATH, and, if a shared library is built, $METISDIR/lib to DYLD_LIBRARY_PATH.
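
A shared library is not built by default; if one is wanted, adding shared=1 at the configuration step should work (a sketch, analogous to the command above):

make config shared=1 prefix=$METISDIR cc=`which gcc` openmp=1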

MPI

You might wish to choose one of the MPI implementations that Homebrew offers.

Alternatively, I have installed MVAPICH2 2.1 from the sources. The configure line is:

./configure --prefix=$MPIDIR --enable-error-checking=runtime --enable-error-messages=all --enable-fast=all --enable-fortran=yes --enable-cxx --enable-romio --enable-threads=multiple --enable-dependency-tracking --enable-nemesis-shm-collectives --with-device=ch3:nemesis --with-pm=hydra --disable-shared

If, after running make, there is a compile error related to HOST_NAME_MAX in the src/mpid/ch3/src/mpid_abort.c file, it can be avoided by substituting 256 for that undefined symbol.
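
For example (a sketch; the -i.orig form keeps a backup of the file, since BSD sed on MacOS X requires a suffix argument with -i):

sed -i.orig 's/HOST_NAME_MAX/256/g' src/mpid/ch3/src/mpid_abort.c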

If no --prefix option is specified, MVAPICH2 will be installed under /usr/local, so sudo make install-strip would be needed.

Otherwise, add $MPIDIR/bin to PATH, $MPIDIR/lib to DYLD_LIBRARY_PATH (if the shared libraries are built), and (optionally) $MPIDIR/share/man to MANPATH.

CUDA

Downloading and installing the CUDA Toolkit should be straightforward. Afterwards:

  1. Please make sure that you export CUDADIR=/usr/local/cuda.
  2. With such a definition, add $CUDADIR/bin to PATH, $CUDADIR/lib to DYLD_LIBRARY_PATH, and (optionally) $CUDADIR/doc/man to MANPATH.
  3. Check if the CUDA subsystem works properly, by building and playing with the NVIDIA_CUDA-X.Y_Samples in your home directory (just say make...); a consolidated sketch follows the list.
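
Putting those steps together (a sketch; X.Y stands for the installed toolkit version):

export CUDADIR=/usr/local/cuda
export PATH=$CUDADIR/bin:$PATH
export DYLD_LIBRARY_PATH=$CUDADIR/lib:$DYLD_LIBRARY_PATH
export MANPATH=$CUDADIR/doc/man:$MANPATH
cd $HOME/NVIDIA_CUDA-X.Y_Samples && make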

With CUDA 8.0 RC, some of the samples will not build; namely, the ones that require the nvgraph library. To avoid this problem, add the following to the samples' Makefile:

ifeq ($(TARGET_ARCH),x86_64)
FILTER_OUT += 7_CUDALibraries/nvgraph_Pagerank/Makefile
FILTER_OUT += 7_CUDALibraries/nvgraph_SSSP/Makefile
FILTER_OUT += 7_CUDALibraries/nvgraph_SemiRingSpMV/Makefile
endif

A similar trick should work for any sample that fails to build.

MAGMA

The MAGMA 2.1 GPU library can now be built. I have Intel MKL installed, and thus opted for a GCC + MKL configuration, derived from make.inc.mkl-gcc, with some flags imported from make.inc.macos as well. Here is the full make.inc:

#//////////////////////////////////////////////////////////////////////////////
#   -- MAGMA (version 2.1.0) --
#      Univ. of Tennessee, Knoxville
#      Univ. of California, Berkeley
#      Univ. of Colorado, Denver
#      @date August 2016
#//////////////////////////////////////////////////////////////////////////////

# GPU_TARGET contains one or more of Fermi, Kepler, or Maxwell,
# to specify for which GPUs you want to compile MAGMA:
#     Fermi   - NVIDIA compute capability 2.x cards
#     Kepler  - NVIDIA compute capability 3.x cards
#     Maxwell - NVIDIA compute capability 5.x cards
#     Pascal  - NVIDIA compute capability 6.x cards
# The default is "Fermi Kepler".
# Note that NVIDIA no longer supports 1.x cards, as of CUDA 6.5.
# See http://developer.nvidia.com/cuda-gpus
#
GPU_TARGET ?= Kepler

# --------------------
# programs

CC        = gcc
CXX       = g++
NVCC      = nvcc
FORT      = gfortran

ARCH      = ar
ARCHFLAGS = cr
RANLIB    = ranlib


# --------------------
# flags

# Use -fPIC to make shared (.so) and static (.a) library;
# can be commented out if making only static library.
#FPIC      = -fPIC

CFLAGS    = -O3 $(FPIC) -fopenmp -DADD_ -Wall -Wshadow -DMAGMA_WITH_MKL -DMAGMA_NOAFFINITY
FFLAGS    = -O3 $(FPIC)          -DADD_ -Wall -Wno-unused-dummy-argument
F90FLAGS  = -O3 $(FPIC)          -DADD_ -Wall -Wno-unused-dummy-argument -x f95-cpp-input
NVCCFLAGS = -O3                  -DADD_ -Xcompiler "$(FPIC) -Wall -Wno-unused-function"
LDFLAGS   =     $(FPIC) #-fopenmp

# C++11 (gcc >= 4.7) is not required, but has benefits like atomic operations
CXXFLAGS := $(CFLAGS) -std=c++11
CFLAGS   += -std=c99


# --------------------
# libraries

# see MKL Link Advisor at http://software.intel.com/sites/products/mkl/
# gcc with MKL 10.3, GNU OpenMP threads (use -fopenmp in CFLAGS, LDFLAGS)
#LIB       = -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -lpthread -lstdc++ -lm -lgfortran

# Supposedly, gcc can use Intel threads (libiomp5) instead, but be careful that
# libiomp5 and libgomp are NOT BOTH linked. Above, we use gnu threads as a safer option.
# gcc with MKL 10.3, Intel OpenMP threads (remove -fopenmp from LDFLAGS above)
LIB       = -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lstdc++ -lm -lgfortran

LIB      += -lcublas -lcusparse -lcudart


# --------------------
# directories

# define library directories preferably in your environment, or here.
# for MKL run, e.g.: source /opt/intel/composerxe/mkl/bin/mklvars.sh intel64
#MKLROOT ?= /opt/intel/composerxe/mkl
#CUDADIR ?= /usr/local/cuda
-include make.check-mkl
-include make.check-cuda

LIBDIR    = -L$(CUDADIR)/lib \
            -L$(MKLROOT)/lib

INC       = -I$(CUDADIR)/include \
            -I$(MKLROOT)/include

The major changes to the original configuration are:

  1. GPU_TARGET is set to Kepler only. It is reasonable to build only for the GPU architecture that is present in the target machine, since nvcc compilation for multiple architectures takes a long time.
  2. FPIC is commented out, since I was interested in static libraries only (but if shared libraries were required, that would change).
  3. -DMAGMA_NOAFFINITY is added to CFLAGS, since there is no POSIX thread affinity interface in the MacOS API.
  4. -fopenmp is commented out from LDFLAGS, and LIB is changed to a definition with -lmkl_intel_thread (instead of one with -lmkl_gnu_thread), because there is no mkl_gnu_thread library in the MacOS version of MKL. That also means linking with the Intel OpenMP library, not the GNU one, and I can only hope that the two are binary (and otherwise) compatible...
  5. LIBDIR is changed to reflect the directory layout of CUDA and MKL on MacOS.
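
With this make.inc in place, building and installing should go along these lines (a sketch; MAGMADIR is an assumed install prefix, analogous to the other *DIR variables here):

make -j4
make install prefix=$MAGMADIR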

The build and some testing executables work fine, but that is far from a guarantee that this is the proper configuration to follow.

If a shared library is built, $MAGMADIR/lib should be added to DYLD_LIBRARY_PATH.

SPRAL

There are a few changes needed to build SPRAL v2015-04-20:

  1. In Makefile.in (and Makefile.am), change the AM_LD_FLAGS variable to AM_LD_FLAGS = -L/usr/local/cuda/lib -lcuda.
  2. In Makefile.in (and Makefile.am), comment out -lrt from the LDADD variable, since that library is not available on MacOS X.
  3. In the examples/C/ssmfe directory, all 5 C source files need to have their #include <cblas.h> changed to #include <mkl_cblas.h>, in order to compile with Intel MKL (see the one-liner after this list).
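
For change 3, something along these lines may save editing the five files by hand (a sketch; -i.orig keeps backups, since BSD sed requires a suffix argument with -i):

sed -i.orig 's/#include <cblas.h>/#include <mkl_cblas.h>/' examples/C/ssmfe/*.c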

The configure invocation is as follows:

NVCCFLAGS="-g -arch=sm_30" CPPFLAGS="-I$MKLROOT/include" ./configure --prefix=$SPRALDIR --enable-dependency-tracking --with-blas="-L$MKLROOT -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread" --with-lapack="-L$MKLROOT -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread" --with-metis="-L$METISDIR/lib -lmetis"

Note that NVCCFLAGS is changed so that only the hardware actually present is targeted (here, a Kepler-based mobile GPU). After make and make install, SPRAL should be installed in $SPRALDIR.

Add $SPRALDIR/bin to PATH.

HWLOC

For hwloc 1.11.4, ./configure --prefix=$HWLOCDIR --enable-dependency-tracking --enable-static, with make and make install, would do.

Add $HWLOCDIR/bin to PATH, $HWLOCDIR/lib to DYLD_LIBRARY_PATH (if a shared library is built), and (optionally) $HWLOCDIR/share/man to MANPATH.
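
As a quick check of the installation, hwloc's lstopo tool prints the machine topology:

lstopo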

StarPU

StarPU SVN trunk needs a couple of fixes in configure.ac.

The check of whether nvcc supports the sm_13 architecture might fail with newer CUDA releases. Just change NVCCFLAGS="$NVCCFLAGS -arch sm_13" to, e.g., NVCCFLAGS="$NVCCFLAGS -arch sm_30", or whatever the native GPU architecture is.
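
For example (a sketch; adjust sm_30 to the GPU's actual compute capability):

sed -i.orig 's/-arch sm_13/-arch sm_30/' configure.ac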

When checking whether CUDA is working, add -Wl,-rpath /usr/local/cuda/lib after -lcuda in LDFLAGS.

Also, NVCCFLAGS="${NVCCFLAGS} -ccbin \${CC}" should be avoided, since -ccbin \${CC} will call GCC as the host compiler, which is no longer supported on MacOS X.

Then, run ./autogen.sh, and configure with:

MAGMA_CFLAGS="-DADD_ -I$MAGMADIR/include" MAGMA_LIBS="-L$MAGMADIR/lib -lmagma_sparse -lmagma -lcusparse" HWLOC_CFLAGS="-I$HWLOCDIR/include" HWLOC_LIBS="-L$HWLOCDIR/lib -lhwloc" ./configure --prefix=$STARPUDIR --enable-dependency-tracking --disable-starpu-top --enable-openmp --enable-blas-lib=mkl --with-cuda-dir=$CUDADIR --with-mkl-cflags="-I$MKLROOT/include" --with-mkl-ldflags="-L$MKLROOT -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread" --disable-build-doc --without-mpicc --without-smpirun --without-mpiexec --without-mpifort --disable-opencl --disable-socl

The output should look like:

configure:

	CPUs   enabled: yes
	CUDA   enabled: yes
	OpenCL enabled: no
	SCC    enabled: no
	MIC    enabled: no

	Compile-time limits
	(change these with --enable-maxcpus, --enable-maxcudadev,
	--enable-maxopencldev, --enable-maxmicdev, --enable-maxnodes,
	--enable-maxbuffers)
	(Note these numbers do not represent the number of detected
	devices, but the maximum number of devices StarPU can manage)

	Maximum number of CPUs:           64
	Maximum number of CUDA devices:   4
	Maximum number of OpenCL devices: 0
	Maximum number of SCC devices:    0
	Maximum number of MIC threads:    0
	Maximum number of memory nodes:   8
	Maximum number of task buffers:   8

	GPU-GPU transfers: yes
	Allocation cache:  yes

	Magma enabled:     yes
	BLAS library:      mkl
	hwloc:             yes
	FxT trace enabled: no
	StarPU-Top:        no

        Documentation:     no
        Examples:          yes

	StarPU Extensions:
	       MPI enabled:                                 no
	       MPI test suite:                              no
	       FFT Support:                                 yes
	       GCC plug-in:                                 no
	       GCC plug-in test suite (requires GNU Guile): no
	       OpenMP runtime support enabled:              yes
	       SOCL enabled:                                no
	       SOCL test suite:                             no
	       Scheduler Hypervisor:                        no
	       simgrid enabled:                             no
	       ayudame enabled:                             no
	       Native fortran support:                      yes
	       Native MPI fortran support:                  

Then, make, make install, add $STARPUDIR/bin to PATH, and $STARPUDIR/lib to DYLD_LIBRARY_PATH.
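
As a quick check, StarPU's starpu_machine_display tool lists the workers (CPUs and CUDA devices) it detects:

starpu_machine_display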

PLASMA

Copy make.inc.mkl-gcc to make.inc:

# PLASMA example make.inc, using Intel MKL and gcc
#
# PLASMA is a software package provided by:
# University of Tennessee, US,
# University of Manchester, UK.

# --------------------
# programs

CC        = gcc

ARCH      = ar
ARCHFLAGS = cr
RANLIB    = ranlib


# --------------------
# flags

# Use -fPIC to make shared (.so) and static (.a) libraries;
# can be commented out if making only static libraries.
#FPIC      = -fPIC

CFLAGS    = -fopenmp $(FPIC) -O3 -std=c99 -Wall -pedantic -Wshadow -Wno-unused-function
LDFLAGS   = -fopenmp $(FPIC)

# options for MKL
CFLAGS   += -DPLASMA_WITH_MKL \
            -DMKL_Complex16="double _Complex" \
            -DMKL_Complex8="float _Complex"


# --------------------
# libraries
# This assumes $MKLROOT is set in your environment.
# Add these to your .cshrc or .bashrc, adjusting for where MKL is installed:
# in .cshrc:   source /opt/intel/bin/compilervars.csh intel64
# in .bashrc:  source /opt/intel/bin/compilervars.sh  intel64

# With gcc OpenMP (libgomp), use -lmkl_sequential or (-lmkl_gnu_thread   with MKL_NUM_THREADS=1).
# With icc OpenMP (liomp5),  use -lmkl_sequential or (-lmkl_intel_thread with MKL_NUM_THREADS=1).
LIBS      = -L$(MKLROOT)/lib -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lm

INC       = -I$(MKLROOT)/include

Then, make, and make install prefix=$PLASMADIR. If a shared library is built, add $PLASMADIR/lib to DYLD_LIBRARY_PATH.