Spack is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for large supercomputer centers, where many users and application teams share common installations of software on clusters with exotic architectures, using non-standard libraries. Spack is non-destructive: installing a new version does not break existing installations. In this way several configurations can coexist on the same system.
Most importantly, Spack is simple. It offers a simple spec syntax so that users can specify versions and configuration options concisely. Spack is also simple for package authors: package files are written in pure Python, and specs allow package authors to maintain a single file for many different builds of the same package.
These instructions are intended to guide you on how to use Spack on the FAS RC Cannon cluster.
Spack works out of the box. Simply clone Spack to get going. In this example, we will clone the latest version of Spack.
Note: Spack can be installed in your home or lab space. For best performance and efficiency, we recommend installing Spack in your lab directory, e.g., /n/holylabs/LABS/<PI_LAB>/Lab/software, or other lab storage if holylabs is not available.
$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
Cloning into 'spack'...
remote: Enumerating objects: 19108, done.
remote: Counting objects: 100% (19108/19108), done.
remote: Compressing objects: 100% (10461/10461), done.
remote: Total 19108 (delta 2000), reused 13700 (delta 1592), pack-reused 0
Receiving objects: 100% (19108/19108), 12.63 MiB | 25.17 MiB/s, done.
Resolving deltas: 100% (2000/2000), done.
This will create the spack folder in the current directory. Next, go to this directory and add Spack to your path. Spack has some nice command-line integration tools, so instead of simply appending to your PATH variable, source the Spack setup script.
$ cd spack/
$ . share/spack/setup-env.sh
$ spack --version
0.21.0.dev0 (89fc9a9d47108c5d34f3f5180eb10d5253689c29)
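If you want Spack available in every new shell, you can source the setup script from your shell startup file. A minimal sketch, assuming Spack was cloned to the lab path recommended above (adjust the path to your actual clone location):

```shell
# Append the Spack setup line to ~/.bashrc (the install path is an example;
# replace it with your actual clone location).
echo '. /n/holylabs/LABS/<PI_LAB>/Lab/software/spack/share/spack/setup-env.sh' >> ~/.bashrc
```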
By default, Spack will match your usual file permissions, which are typically set up without group write permission. For lab-wide installs of Spack, though, you will want to ensure that group write is enforced. You can set this by going to the etc/spack directory in your Spack installation and adding a file called packages.yaml (or editing the existing one) with the following contents:
packages:
all:
permissions:
write: group
group: jharvard_lab
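To confirm that Spack picked up the setting, you can print the merged packages configuration with spack config get:

```shell
# Prints the merged packages configuration from all scopes;
# the permissions block added above should appear in the output.
$ spack config get packages
```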
By default, Spack will autodetect the architecture of your underlying hardware and build software to match it. However, in cases where you are running on heterogeneous hardware, it is best to use a more generic target. You can set this by going to the etc/spack directory in your Spack installation and adding a file called packages.yaml (or editing the existing one) with the following contents:
packages:
all:
target: [x86_64]
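You can check which architecture string Spack would otherwise use with the spack arch command:

```shell
# Shows the platform-os-target triplet detected for this node,
# e.g., linux-rocky8-icelake on an Ice Lake node.
$ spack arch
```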
Once your Spack environment has been installed, it cannot be easily moved. Some of the packages in Spack hardcode absolute paths into themselves and thus cannot be changed without rebuilding them. As such, simply copying the Spack installation will not actually relocate it.
The easiest way to move a Spack install, if you need to keep the exact same stack of software, is to first create a Spack environment with all the software you need. Once you have that, you can export the environment, similar to how you would for conda environments. You can then use that exported environment file to rebuild the software in the new location.
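A minimal sketch of that workflow, with placeholder environment and path names:

```shell
# In the old Spack instance: create an environment and install into it.
$ spack env create myenv
$ spack env activate myenv
$ spack install bzip2
$ spack env deactivate

# The environment is described by its spack.yaml file; copy it over.
$ cp $SPACK_ROOT/var/spack/environments/myenv/spack.yaml /path/to/new/location/

# In the new Spack clone: recreate the environment from the file
# and rebuild everything it lists.
$ spack env create myenv /path/to/new/location/spack.yaml
$ spack env activate myenv
$ spack install
```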
A complete list of all available Spack packages can also be found here. The spack list command displays the available packages, e.g.,
$ spack list
==> 6752 packages
<omitted output>
NOTE: You can also look for available spack packages at https://packages.spack.io
The spack list command can also take a query string. Spack automatically adds wildcards to both ends of the string, or you can add your own wildcards. For example, we can view all available Python packages.
# with wildcard at both ends of the strings
$ spack list py
==> 1979 packages
<omitted output>
# add your own wildcard: here, list packages that start with py
$ spack list 'py-*'
==> 1960 packages.
<omitted output>
You can also look for specific packages, e.g.,
$ spack list lammps
==> 1 packages.
lammps
You can display available software versions, e.g.,
$ spack versions lammps
==> Safe versions (already checksummed):
master 20211214 20210929.2 20210929 20210831 20210728 20210514 20210310 20200721 20200505 20200227 20200204 20200109 20191030 20190807 20181212 20181127 20181109 20181010 20180905 20180822 20180316 20170922
20220107 20211027 20210929.1 20210920 20210730 20210702 20210408 20201029 20200630 20200303 20200218 20200124 20191120 20190919 20190605 20181207 20181115 20181024 20180918 20180831 20180629 20180222 20170901
==> Remote versions (not yet checksummed):
1Sep2017
Note: for the spack versions command, the package name needs to match exactly. For example, spack versions lamm will not be found:
$ spack versions lamm
==> Error: Package 'lamm' not found.
You may need to run 'spack clean -m'.
Installing packages with Spack is very straightforward. To install a package, simply type spack install PACKAGE_NAME. Large packages with multiple dependencies can take significant time to install, so we recommend doing this in a screen/tmux session or an Open OnDemand Remote Desktop session.
To install the latest version of a package, type:
$ spack install bzip2
To install a specific version (1.0.8) of bzip2, add @ and the version number you need:
$ spack install [email protected]
==> Bootstrapping clingo from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-prqkzynv2nwko5mktitebgkeumuxkveu.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64/gcc-10.2.1/clingo-bootstrap-spack/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-prqkzynv2nwko5mktitebgkeumuxkveu.spack
==> Installing "clingo-bootstrap@spack%[email protected]~docs~ipo+python+static_libstdcpp build_type=Release arch=linux-centos7-x86_64" from a buildcache
==> Installing libiconv-1.16-rc3o6ckaij6pgxu5444faznhssp4gcia
==> No binary for libiconv-1.16-rc3o6ckaij6pgxu5444faznhssp4gcia found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/e6/e6a1b1b589654277ee790cce3734f07876ac4ccfaecbee8afa0b649cf529cc04.tar.gz
==> No patches needed for libiconv
==> libiconv: Executing phase: 'autoreconf'
==> libiconv: Executing phase: 'configure'
==> libiconv: Executing phase: 'build'
==> libiconv: Executing phase: 'install'
==> libiconv: Successfully installed libiconv-1.16-rc3o6ckaij6pgxu5444faznhssp4gcia
Fetch: 0.20s. Build: 32.76s. Total: 32.96s.
[+] /home/spack/opt/spack/linux-rocky8-icelake/gcc-8.5.0/libiconv-1.16-rc3o6ckaij6pgxu5444faznhssp4gcia
==> Installing diffutils-3.8-ejut7cm752b57stai5g6f7nsmte4jvps
==> No binary for diffutils-3.8-ejut7cm752b57stai5g6f7nsmte4jvps found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/a6/a6bdd7d1b31266d11c4f4de6c1b748d4607ab0231af5188fc2533d0ae2438fec.tar.xz
==> No patches needed for diffutils
==> diffutils: Executing phase: 'autoreconf'
==> diffutils: Executing phase: 'configure'
==> diffutils: Executing phase: 'build'
==> diffutils: Executing phase: 'install'
==> diffutils: Successfully installed diffutils-3.8-ejut7cm752b57stai5g6f7nsmte4jvps
Fetch: 0.17s. Build: 46.04s. Total: 46.21s.
[+] /home/spack/opt/spack/linux-rocky8-icelake/gcc-8.5.0/diffutils-3.8-ejut7cm752b57stai5g6f7nsmte4jvps
==> Installing bzip2-1.0.8-aohgpu7zn62kzpanpohuevbkufypbnff
==> No binary for bzip2-1.0.8-aohgpu7zn62kzpanpohuevbkufypbnff found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/ab/ab5a03176ee106d3f0fa90e381da478ddae405918153cca248e682cd0c4a2269.tar.gz
==> Ran patch() for bzip2
==> bzip2: Executing phase: 'install'
==> bzip2: Successfully installed bzip2-1.0.8-aohgpu7zn62kzpanpohuevbkufypbnff
Fetch: 0.23s. Build: 2.07s. Total: 2.30s.
[+] /home/spack/opt/spack/linux-rocky8-icelake/gcc-8.5.0/bzip2-1.0.8-aohgpu7zn62kzpanpohuevbkufypbnff
Here we installed a specific version (1.0.8) of bzip2. The installed packages can be displayed with the command spack find:
$ spack find
-- linux-rocky8-icelake / [email protected] -----------------------------
[email protected] [email protected] [email protected]
==> 3 installed packages
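spack find also accepts flags for more detail; for example, -l adds each installation's short hash and -v its build variants:

```shell
# Long listing: -l prints the hash prefix, -v the build variants.
$ spack find -lv bzip2
```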
One can also request that Spack use a specific compiler flavor/version to install packages, e.g.,
$ spack install [email protected]%[email protected]
==> Installing zlib-1.2.13-xlt7jpku4zv2d4jhrr3azbz2vnktzfeb
==> No binary for zlib-1.2.13-xlt7jpku4zv2d4jhrr3azbz2vnktzfeb found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/b3/b3a24de97a8fdbc835b9833169501030b8977031bcb54b3b3ac13740f846ab30.tar.gz
==> No patches needed for zlib
==> zlib: Executing phase: 'edit'
==> zlib: Executing phase: 'build'
==> zlib: Executing phase: 'install'
==> zlib: Successfully installed zlib-1.2.13-xlt7jpku4zv2d4jhrr3azbz2vnktzfeb
Fetch: 0.46s. Build: 1.87s. Total: 2.33s.
[+] /home/spack/opt/spack/linux-rocky8-icelake/gcc-8.5.0/zlib-1.2.13-xlt7jpku4zv2d4jhrr3azbz2vnktzfeb
To specify the desired compiler, one uses the % sigil. The @ sigil is used to specify versions, both of packages and of compilers, e.g.,
$ spack install [email protected]
$ spack install [email protected]%[email protected]
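Specs can also toggle build variants with + and ~. For instance, the zlib install above reports the variants +optimize+pic+shared in its spec; you could request them explicitly:

```shell
# @ pins the version, % selects the compiler,
# + enables and ~ disables a variant.
$ spack install [email protected] %[email protected] +shared ~optimize
```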
Spack will normally build its own package stack, even if there are libraries available as part of the operating system. If you want Spack to build against system libraries instead of building its own, you will need to have it discover which libraries are available natively on the system. You can do this using the spack external find command:
$ spack external find
==> The following specs have been detected on this system and added to /n/home/jharvard/.spack/packages.yaml
[email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
[email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
This even works with modules loaded from other package managers; you simply have to have those loaded prior to running the find command. After these have been added to Spack, Spack will try to use them in future builds where it can, rather than installing its own versions.
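For example, to make a CMake provided as a software module visible to Spack (the module name here is illustrative):

```shell
# Load the module so its binaries are on PATH, then let Spack detect it.
$ module load cmake
$ spack external find cmake
```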
Spack provides an easy way to uninstall packages with the spack uninstall PACKAGE_NAME command, e.g.,
$ spack uninstall [email protected]%[email protected]
==> The following packages will be uninstalled:
-- linux-rocky8-icelake / [email protected] -----------------------------
xlt7jpk [email protected]
==> Do you want to proceed? [y/N] y
==> Successfully uninstalled [email protected]%[email protected]+optimize+pic+shared build_system=makefile arch=linux-rocky8-icelake/xlt7jpk
Note: The recommended way of uninstalling packages is by specifying the full package name, including the package version and the compiler flavor and version used to install the package in the first place.
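When several installed packages share a name, you can also disambiguate with the hash shown by spack find -l, prefixed with a slash; the hash below is the one from the uninstall output above:

```shell
# Uninstall by hash prefix instead of by full spec.
$ spack uninstall /xlt7jpk
```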
There are several different ways to use Spack packages once you have installed them. The easiest way is to use spack load PACKAGE_NAME to load packages and spack unload PACKAGE_NAME to unload them, e.g.,
$ spack load bzip2
$ which bzip2
/home/spack/opt/spack/linux-rocky8-icelake/gcc-8.5.0/bzip2-1.0.8-aohgpu7zn62kzpanpohuevbkufypbnff/bin/bzip2
The loaded packages can be listed with spack find --loaded, e.g.,
$ spack find --loaded
-- linux-rocky8-icelake / [email protected] -----------------------------
[email protected] [email protected] [email protected]
==> 3 loaded packages
If you no longer need the loaded packages, you can unload them with:
$ spack unload
[pkrastev@builds01 spack]$ spack find --loaded
==> 0 loaded packages
Spack has the ability to build packages with multiple compilers and compiler versions. This can be particularly useful if a package needs to be built with specific compilers and compiler versions. You can display the available compilers with the spack compilers command, e.g.,
$ spack compilers
==> Available compilers
-- gcc rocky8-x86_64 --------------------------------------------
[email protected]
The listed compilers are system-level compilers provided by the OS itself. On the cluster, we support a set of core compilers, such as the GNU (GCC) compiler suite, Intel, and PGI, provided through software modules.
You can easily add additional compilers to Spack by loading the appropriate software modules, running the spack compiler find command, and editing the compilers.yaml configuration file. For instance, if you need GCC version 12.2.0, you need to do the following:
$ module load gcc/12.2.0-fasrc01
$ which gcc
/n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/gcc
$ spack compiler find
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
[email protected]
==> Compilers are defined in the following files:
~/.spack/linux/compilers.yaml
If you run spack compilers again, you will see that the new compiler has been added to the compiler list and made the default (listed first), e.g.,
$ spack compilers
==> Available compilers
-- gcc rocky8-x86_64 --------------------------------------------
[email protected] [email protected]
Note: By default, Spack does not fill in the modules: field in the compilers.yaml file. If you are using a compiler from a module, then you should add this field manually.
Use your favorite text editor, e.g., Vim, Emacs, VSCode, etc., to edit the compiler configuration YAML file ~/.spack/linux/compilers.yaml, e.g.,
$ vi ~/.spack/linux/compilers.yaml
Each - compiler: section in this file is similar to the below:
- compiler:
spec: [email protected]
paths:
cc: /n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/gcc
cxx: /n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/g++
f77: /n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/gfortran
fc: /n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/gfortran
flags: {}
operating_system: rocky8
target: x86_64
modules: []
environment: {}
extra_rpaths: []
We have to edit the modules: [] line to read
modules: [gcc/12.2.0-fasrc01]
and save the compiler configuration file. If more than one module is required by the compiler, the module names need to be separated by commas.
We can display the configuration of a specific compiler with the spack compiler info command, e.g.,
$ spack compiler info [email protected]
[email protected]:
paths:
cc = /n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/gcc
cxx = /n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/g++
f77 = /n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/gfortran
fc = /n/sw/helmod-rocky8/apps/Core/gcc/12.2.0-fasrc01/bin/gfortran
modules = ['gcc/12.2.0-fasrc01']
operating system = rocky8
Once the new compiler is configured, it can be used to build packages. The below example shows how to install the GNU Scientific Library (GSL) with [email protected].
# Check available GSL versions
$ spack versions gsl
==> Safe versions (already checksummed):
2.7.1 2.7 2.6 2.5 2.4 2.3 2.2.1 2.1 2.0 1.16
==> Remote versions (not yet checksummed):
2.2 1.15 1.14 1.13 1.12 1.11 1.10 1.9 1.8 1.7 1.6 1.5 1.4 1.3 1.2 1.1.1 1.1 1.0
# Install GSL version 2.7.1 with GCC version 12.2.0
$ spack install [email protected]%[email protected]
==> Installing gsl-2.7.1-uj6i6eqdsymvgupsqulhgewhb7nkr2vc
==> No binary for gsl-2.7.1-uj6i6eqdsymvgupsqulhgewhb7nkr2vc found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/dc/dcb0fbd43048832b757ff9942691a8dd70026d5da0ff85601e52687f6deeb34b.tar.gz
==> No patches needed for gsl
==> gsl: Executing phase: 'autoreconf'
==> gsl: Executing phase: 'configure'
==> gsl: Executing phase: 'build'
==> gsl: Executing phase: 'install'
==> gsl: Successfully installed gsl-2.7.1-uj6i6eqdsymvgupsqulhgewhb7nkr2vc
Fetch: 0.93s. Build: 1m 38.29s. Total: 1m 39.22s.
[+] /home/spack/opt/spack/linux-rocky8-icelake/gcc-12.2.0/gsl-2.7.1-uj6i6eqdsymvgupsqulhgewhb7nkr2vc
# Load the installed package
$ spack load [email protected]%[email protected]
# List the loaded package
$ spack find --loaded
-- linux-rocky8-icelake / [email protected] ----------------------------
[email protected]
==> 1 loaded package
NOTE: Please note that you first need to do module purge to make sure that all modules are unloaded for this to work.
Many HPC software packages work in parallel using MPI. Although Spack has the ability to install MPI libraries from scratch, the recommended way is to configure Spack to use an MPI library already available on the cluster as a software module, instead of building its own MPI libraries.
MPI is configured through the packages.yaml file. For instance, if we need OpenMPI version 4.1.5 compiled with GCC version 12, we could follow the below steps to add this MPI configuration:
$ module load gcc/12.2.0-fasrc01 openmpi/4.1.5-fasrc03
$ echo $MPI_HOME
/n/sw/helmod-rocky8/apps/Comp/gcc/12.2.0-fasrc01/openmpi/4.1.5-fasrc03
Use your favorite text editor, e.g., Vim, Emacs, VSCode, etc., to edit the packages configuration YAML file ~/.spack/packages.yaml, e.g.,
$ vi ~/.spack/packages.yaml
Note: If the file ~/.spack/packages.yaml does not exist, you will need to create it.
Include the following contents:
packages:
openmpi:
externals:
- spec: [email protected]%[email protected]
prefix: /n/sw/helmod-rocky8/apps/Comp/gcc/12.2.0-fasrc01/openmpi/4.1.5-fasrc03
buildable: False
The option buildable: False ensures that MPI won't be built from source. Instead, Spack will use the MPI provided as a software module in the corresponding prefix.
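Before installing anything against it, you can check that a spec concretizes to the external MPI with spack spec, which prints the resolved dependency tree without building anything:

```shell
# Dry-run concretization: openmpi should resolve to the external
# installation (your prefix) rather than a build-from-source plan.
$ spack spec [email protected] %[email protected] ^[email protected]
```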
Once the MPI is configured, it can be used to build packages. The below example shows how to install HDF5 version 1.12.2 with [email protected] and [email protected].
$ module purge
$ spack install [email protected] % [email protected] ^ [email protected]
...
==> Installing hdf5-1.12.2-lfmo7dvzrgmu35mt74zqjz2mfcwa2urb
==> No binary for hdf5-1.12.2-lfmo7dvzrgmu35mt74zqjz2mfcwa2urb found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/2a/2a89af03d56ce7502dcae18232c241281ad1773561ec00c0f0e8ee2463910f14.tar.gz
==> Ran patch() for hdf5
==> hdf5: Executing phase: 'cmake'
==> hdf5: Executing phase: 'build'
==> hdf5: Executing phase: 'install'
==> hdf5: Successfully installed hdf5-1.12.2-lfmo7dvzrgmu35mt74zqjz2mfcwa2urb
Fetch: 0.58s. Build: 1m 21.39s. Total: 1m 21.98s.
[+] /home/spack/opt/spack/linux-rocky8-icelake/gcc-12.2.0/hdf5-1.12.2-lfmo7dvzrgmu35mt74zqjz2mfcwa2urb
# Load the installed package
$ spack load [email protected]%[email protected]
# List the loaded package
$ spack find --loaded
-- linux-rocky8-icelake / [email protected] ----------------------------
[email protected] ca-certificates-mozilla@2022-10-11 [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
[email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
==> 15 loaded packages
Note: Please note the command module purge. This is required, as otherwise the build fails.
When Spack builds, it uses a stage directory located in /tmp. Spack also cleans up this space once it is done building, regardless of whether the build succeeds or fails. This can make troubleshooting failed builds difficult, as the logs from those builds are stored in stage. To preserve these files for debugging, first set the $TMP environment variable to a location where you want the stage files to be kept. Then add the --keep-stage flag to spack (e.g., spack install --keep-stage), which tells Spack to keep the staging files rather than remove them.
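Put together, a debugging run might look like the following; the scratch path is an example:

```shell
# Point the stage at a location you control...
$ export TMP=/n/holyscratch01/jharvard_lab/jharvard/spack-stage
$ mkdir -p $TMP
# ...and keep the stage files even if the build fails; the build log
# (spack-build-out.txt) will then survive under the stage directory.
$ spack install --keep-stage hdf5
```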
For example, a build that fails because the compiler cannot find one of its shared libraries produces a log like this:
2 errors found in build log:
10 Configured with: ./configure --prefix=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01 --program-prefix= --exec-prefix=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01 --bindir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin
--sbindir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/sbin --sysconfdir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/etc --datadir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/share --includedir=/n/helmod/apps/centos7
/Core/gcc/10.2.0-fasrc01/include --libdir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/lib64 --libexecdir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/libexec --localstatedir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc0
1/var --sharedstatedir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/var/lib --mandir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/share/man --infodir=/n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/share/info
11 Thread model: posix
12 Supported LTO compression algorithms: zlib
13 gcc version 10.2.0 (GCC)
14 COLLECT_GCC_OPTIONS='-o' '/tmp/tmp.LkhfoOt8fH/a.out' '-v' '-mtune=generic' '-march=x86-64'
15 /n/sw/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/../libexec/gcc/x86_64-pc-linux-gnu/10.2.0/cc1 -quiet -v -iprefix /n/sw/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/../lib64/gcc/x86_64-pc-linux-gnu/10.2.0/ /tmp/tmp.
LkhfoOt8fH/hello-49015.c -quiet -dumpbase hello-49015.c -mtune=generic -march=x86-64 -auxbase hello-49015 -version -o /tmp/ccVvRxDx.s
>> 16 /n/sw/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/../libexec/gcc/x86_64-pc-linux-gnu/10.2.0/cc1: error while loading shared libraries: libmpfr.so.6: cannot open shared object file: No such file or directory
17
18 ERROR: Linker : not found
>> 19 ** makelocalrc step has FAILED. Linker not found **
20 ** See gcc output above **
21 Command used:
22 /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/gcc -o /tmp/tmp.LkhfoOt8fH/a.out -v /tmp/tmp.LkhfoOt8fH/hello-49015.c
23 cat /tmp/tmp.LkhfoOt8fH/hello-49015.c:
24 #include <stdio.h>
25 int main()
See build log for details:
/tmp/jharvard
/spack-stage/spack-stage-nvhpc-22.7-iepk6vgndc7hmzs3evxqz6qw2vf6qt7s/spack-build-out.txt
In this error, the compiler cannot find a library it depends on, mpfr. To fix this, we will need to add the relevant library to the compiler definition in ~/.spack/linux/compilers.yaml. In this case we are using gcc/10.2.0-fasrc01, which when loaded also loads:
[jharvard@holy7c22501 ~]# module list
Currently Loaded Modules:
1) gmp/6.2.1-fasrc01 2) mpfr/4.1.0-fasrc01 3) mpc/1.2.1-fasrc01 4) gcc/10.2.0-fasrc01
So we will need to grab the location of these libraries to add them. To find that you can do:
[jharvard@holy7c22501 ~]# module display mpfr/4.1.0-fasrc01
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/n/helmod/modulefiles/centos7/Core/mpfr/4.1.0-fasrc01.lua:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
help([[mpfr-4.1.0-fasrc01
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.
]], [[
]])
whatis("Name: mpfr")
whatis("Version: 4.1.0-fasrc01")
whatis("Description: The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.")
setenv("MPFR_HOME","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01")
setenv("MPFR_INCLUDE","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/include")
setenv("MPFR_LIB","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/lib64")
prepend_path("CPATH","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/include")
prepend_path("FPATH","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/include")
prepend_path("INFOPATH","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/share/info")
prepend_path("LD_LIBRARY_PATH","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/lib64")
prepend_path("LIBRARY_PATH","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/lib64")
prepend_path("PKG_CONFIG_PATH","/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/lib64/pkgconfig")
Then pull out the LIBRARY_PATH. Once we have the paths for all three of these dependencies, we can add them to ~/.spack/linux/compilers.yaml as follows:
- compiler:
spec: [email protected]
paths:
cc: /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/gcc
cxx: /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/g++
f77: /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/gfortran
fc: /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/gfortran
flags: {}
operating_system: centos7
target: x86_64
modules: []
environment:
prepend_path:
LIBRARY_PATH: /n/helmod/apps/centos7/Core/mpc/1.2.1-fasrc01/lib64:/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/lib64:/n/helmod/apps/centos7/Core/gmp/6.2.1-fasrc01/lib64
LD_LIBRARY_PATH: /n/helmod/apps/centos7/Core/mpc/1.2.1-fasrc01/lib64:/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/lib64:/n/helmod/apps/centos7/Core/gmp/6.2.1-fasrc01/lib64
extra_rpaths: []
Namely, we needed to add the prepend_path entries to the environment section. With those additional paths defined, the compiler will now work, because it can find its dependencies.
This is the same type of error as the cannot open shared object file: No such file or directory error; namely, the compiler cannot find the libraries it depends on. See the troubleshooting section for the shared objects error for how to resolve it.
If you are trying to install a package and get an error that it is only supported on macOS:
$ spack install [email protected]
==> Error: Only supported on macOS
You need to update your compilers. For example, here you can see that only Ubuntu compilers are available, which do not work on Rocky 8:
$ spack compiler list
==> Available compilers
-- clang ubuntu18.04-x86_64 -------------------------------------
[email protected]
-- gcc ubuntu18.04-x86_64 ---------------------------------------
[email protected] [email protected]
Then, run spack compiler find to update the compilers:
$ spack compiler find
==> Added 1 new compiler to /n/home01/jharvard/.spack/linux/compilers.yaml
[email protected]
==> Compilers are defined in the following files:
/n/home01/jharvard/.spack/linux/compilers.yaml
Now, you can see that a Rocky 8 compiler is also available:
$ spack compiler list
==> Available compilers
-- clang ubuntu18.04-x86_64 -------------------------------------
[email protected]
-- gcc rocky8-x86_64 --------------------------------------------
[email protected]
-- gcc ubuntu18.04-x86_64 ---------------------------------------
[email protected] [email protected]
And you can proceed with the spack package installs.
- Intel MPI Library Configuration
- Spack Environments
- Spack Recipes: instructions for installing various software packages through Spack