Building on Panther (ppc64le) with GNU compilers
The GNU C, C++, and Fortran compilers, version 6.1.1, are used throughout.
(In the following examples, $CDS_HOME refers to the user's top-level directory on the Central Data Store.)
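For instance, it might be exported once before any of the builds below; the path here is only a placeholder, not the actual Central Data Store location:
export CDS_HOME=/path/to/your/central-data-store/home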
Run hwloc's configure as:
CC="nvcc -ccbin /usr/bin/g++" CXX="nvcc -ccbin /usr/bin/g++" ./configure --prefix=$CDS_HOME/ppc64le --enable-dependency-tracking --disable-cairo --disable-cpuid --disable-libxml2 --enable-static --disable-shared
Use the development (Git) version of PAPI.
Run configure as:
CC=gcc CFLAGS="-fopenmp -mcpu=power8" F77=gfortran FFLAGS="-fopenmp -frecursive -mcpu=power8" ./configure --prefix=$CDS_HOME/ppc64le --with-static-lib=yes --with-shared-lib=yes
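PAPI should then build and install with the usual steps (again a sketch; adjust the parallelism to taste):
make -j8
make install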
Use Netlib's development (Git) version of LAPACK.
Change the following to create make.inc from INSTALL/make.inc.gfortran:
FORTRAN = gfortran
OPTS = -O3 -frecursive -mcpu=power8
DRVOPTS = $(OPTS)
NOOPT = -O0 -frecursive -mcpu=power8
LOADER = gfortran
LOADOPTS = $(OPTS)
CC = gcc
CFLAGS = -O3 -mcpu=power8
LAPACKE_WITH_TMG = Yes
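With these settings in make.inc, one plausible way to build the reference libraries and stage them where the later steps look for them is sketched below; the target and library names are those of a stock Netlib LAPACK tree, and the copy into $CDS_HOME/ppc64le/lib is an assumption about the intended layout:
make -j8 blaslib lapacklib tmglib lapackelib
mkdir -p $CDS_HOME/ppc64le/lib
cp librefblas.a liblapack.a libtmglib.a liblapacke.a $CDS_HOME/ppc64le/lib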
For MAGMA 2.2.0, create make.inc as follows:
#//////////////////////////////////////////////////////////////////////////////
# -- MAGMA (version 2.2.0) --
# Univ. of Tennessee, Knoxville
# Univ. of California, Berkeley
# Univ. of Colorado, Denver
# @date November 2016
#//////////////////////////////////////////////////////////////////////////////
# GPU_TARGET contains one or more of Fermi, Kepler, or Maxwell,
# to specify for which GPUs you want to compile MAGMA:
# Fermi - NVIDIA compute capability 2.x cards
# Kepler - NVIDIA compute capability 3.x cards
# Maxwell - NVIDIA compute capability 5.x cards
# The default is "Fermi Kepler".
# Note that NVIDIA no longer supports 1.x cards, as of CUDA 6.5.
# See http://developer.nvidia.com/cuda-gpus
#
GPU_TARGET ?= Kepler
# --------------------
# programs
CC = gcc
CXX = g++
NVCC = nvcc -ccbin /usr/bin/g++
FORT = gfortran
ARCH = ar
ARCHFLAGS = cr
RANLIB = ranlib
# --------------------
# flags
# Use -fPIC to make shared (.so) and static (.a) library;
# can be commented out if making only static library.
#FPIC = -fPIC
CFLAGS = -O3 $(FPIC) -DNDEBUG -DADD_ -Wall -fopenmp -mcpu=power8
FFLAGS = -O3 $(FPIC) -DNDEBUG -DADD_ -Wall -Wno-unused-dummy-argument -mcpu=power8
F90FLAGS = -O3 $(FPIC) -DNDEBUG -DADD_ -Wall -Wno-unused-dummy-argument -mcpu=power8 -x f95-cpp-input
NVCCFLAGS = -O3 -DNDEBUG -DADD_ -Xcompiler "-mcpu=power8"
LDFLAGS = $(FPIC) -fopenmp -mcpu=power8
# C++11 (gcc >= 4.7) is not required, but has benefits like atomic operations
CXXFLAGS := $(CFLAGS) -std=c++11
CFLAGS += -std=c99
# --------------------
# libraries
# gcc with reference BLAS and LAPACK
LIB = -llapack -lrefblas /gpfs/panther/local/apps/gcc/gcc/6.1.1/lib64/libgfortran.so
LIB += -lcublas -lcusparse -lcudart -lcudadevrt
# --------------------
# directories
# define library directories preferably in your environment, or here.
BLASDIR ?= $(CDS_HOME)/ppc64le
CUDADIR ?= /usr/local/cuda
-include make.check-cuda
LIBDIR = -L$(CUDADIR)/lib64 -L$(BLASDIR)/lib
INC = -I$(CUDADIR)/include
Edit MAGMA's Makefile according to the following diff, which restricts the Kepler build to compute capability 3.7 (sm_37):
--- /gpfs/fairthorpe/local/SCD/jpf02/vxn61-jpf02/CDS-ppc64le/Makefile 2017-02-10 16:59:30.892637801 +0000
+++ Makefile 2017-02-10 13:20:39.126743960 +0000
@@ -73,7 +73,7 @@
GPU_TARGET += sm20
endif
ifneq ($(findstring Kepler, $(GPU_TARGET)),)
- GPU_TARGET += sm30 sm35
+ GPU_TARGET += sm35
endif
ifneq ($(findstring Maxwell, $(GPU_TARGET)),)
GPU_TARGET += sm50 sm52
@@ -117,9 +117,9 @@
NV_COMP := -gencode arch=compute_30,code=compute_30
endif
ifneq ($(findstring sm35, $(GPU_TARGET)),)
- MIN_ARCH ?= 350
- NV_SM += -gencode arch=compute_35,code=sm_35
- NV_COMP := -gencode arch=compute_35,code=compute_35
+ MIN_ARCH ?= 370
+ NV_SM += -gencode arch=compute_37,code=sm_37
+ NV_COMP := -gencode arch=compute_37,code=compute_37
endif
ifneq ($(findstring sm50, $(GPU_TARGET)),)
MIN_ARCH ?= 500
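With make.inc and the patched Makefile in place, the build and install might then look as follows; note that CDS_HOME has to be exported so that the BLASDIR default in make.inc resolves (a sketch, using MAGMA's prefix-style install target):
make -j8 lib
make install prefix=$CDS_HOME/ppc64le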
Edit SPRAL's Makefile.am to set the nvcc options:
--- /gpfs/fairthorpe/local/SCD/jpf02/vxn61-jpf02/CDS/spral/Makefile.am 2016-10-20 16:30:21.000000000 +0100
+++ Makefile.am 2017-02-10 15:54:52.350148042 +0000
@@ -5,8 +5,8 @@
# NVCC setup
PTX_FLAGS = -v
-#NVCCFLAGS = -Iinclude -arch=sm_20 -g -Xptxas="${PTX_FLAGS}"
-OPENMP_LIB = -lgomp # FIXME: autoconf this
+NVCCFLAGS = -O3 -Iinclude -arch=sm_37 -Xptxas="${PTX_FLAGS}" -Xcompiler="-mcpu=power8"
+OPENMP_LIB = -lgomp
AM_NVCC_FLAGS = -I$(top_srcdir)/include -I$(top_srcdir)/src
AM_LD_FLAGS = -lcuda
NVCCLINK = \
Edit configure.ac to explicitly enable HWLOC:
--- /gpfs/fairthorpe/local/SCD/jpf02/vxn61-jpf02/CDS/spral/configure.ac 2016-10-20 16:30:21.000000000 +0100
+++ configure.ac 2017-02-10 15:54:59.260618000 +0000
@@ -72,12 +72,7 @@
)
# Check for hwloc
-PKG_PROG_PKG_CONFIG # initialise $PKG_CONFIG
-PKG_CONFIG="$PKG_CONFIG --static" # we will be linking statically
-PKG_CHECK_MODULES([HWLOC], [hwloc],
- AC_DEFINE(HAVE_HWLOC,1,[Define if you have hwloc library]),
- AC_MSG_WARN([hwloc not supplied: cannot detect NUMA regions])
- )
+AC_DEFINE(HAVE_HWLOC,1,[Define if you have hwloc library])
AS_IF([test "x$NVCC" != x], [
SPRAL_NVCC_LIB
If needed, update config.guess and config.sub to the latest versions so that the build scripts recognise the ppc64le architecture.
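Current copies of both scripts are distributed in the GNU config repository, and because Makefile.am and configure.ac were edited above, the generated build files should be refreshed before configuring (a sketch, assuming wget and the GNU Autotools are available):
wget -O config.guess 'https://git.savannah.gnu.org/cgit/config.git/plain/config.guess'
wget -O config.sub 'https://git.savannah.gnu.org/cgit/config.git/plain/config.sub'
autoreconf -fi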
Call configure as:
CPPFLAGS="-I$CDS_HOME/ppc64le/include" LIBS="-lhwloc" LDFLAGS="-L$CDS_HOME/ppc64le/lib" CC=gcc CXX=g++ CFLAGS="-O3 -mcpu=power8" CXXFLAGS="-O3 -mcpu=power8" F77=gfortran FFLAGS="-O3 -frecursive -mcpu=power8" FCFLAGS="-O3 -frecursive -mcpu=power8" NVCC="nvcc -ccbin /usr/bin/g++" ./configure --prefix=$CDS_HOME/ppc64le --enable-dependency-tracking --with-metis=$CDS_HOME/ppc64le/lib/libmetis.a --with-blas=$CDS_HOME/ppc64le/lib/librefblas.a --with-lapack=$CDS_HOME/ppc64le/lib/liblapack.a
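If configure succeeds, building and installing SPRAL should again be the standard Automake sequence; make check is Automake's default test hook and is optional here (a sketch):
make -j8
make check
make install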