This experiment is based on the article *Primordial Black Hole clusters, phenomenology & implications* by Juan García-Bellido (JGB for short).
The goal of this experiment is to create a self-consistent model with a Plummer density profile and a log-normal mass spectrum, and then evolve it for a Hubble time.
Mass spectra used in the article.
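Schematically, such a model combines masses drawn from the log-normal spectrum with positions drawn from the Plummer profile. The sketch below is illustrative only: the shifted log-normal form $M = \mu + s\,e^{\sigma Z}$ is an assumption, and the actual model (including self-consistent velocities) is built by `create_ic.py`:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                    # number of particles
mu, s, sigma = 0.0, 1.0, 1.5   # log-normal spectrum parameters (M_sun)
a = 10.0                       # Plummer scale radius, pc

# Masses: shifted log-normal spectrum (ASSUMED parameterization, see create_ic.py)
masses = mu + s * np.exp(sigma * rng.standard_normal(N))

# Radii: inverse-CDF sampling of the Plummer profile,
# M(<r)/M_tot = (1 + a^2/r^2)^(-3/2)  =>  r = a / sqrt(u^(-2/3) - 1)
u = rng.uniform(size=N)
r = a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)

# Isotropic angles turn radii into 3D positions
cos_t = rng.uniform(-1.0, 1.0, N)
phi = rng.uniform(0.0, 2.0 * np.pi, N)
sin_t = np.sqrt(1.0 - cos_t**2)
xyz = r[:, None] * np.column_stack((sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t))

# Sanity check: the Plummer half-mass radius is a / sqrt(2^(2/3) - 1) ≈ 1.305 a
print(np.median(r) / a)
```
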
To reproduce the experiment, follow these steps:
- Activate the Agama environment:

  ```bash
  conda activate agama
  ```
- Start NEMO (from the `nemo` repository root):

  ```bash
  source start_nemo.sh
  ```
- Switch to the custom NEMO version (needed for the Nbodyx methods, such as Nbody0 and Nbody6):

  ```bash
  cd $NEMO
  git remote add custom https://github.com/savchenkoyana/nemo.git
  git fetch custom   # fetch the custom branch before checking it out
  git checkout nbodyx
  cd src/nbody/evolve/aarseth/nbody0
  make nmax
  cd $NEMO
  make rebuild
  ```

  If you want to go back to the default NEMO version, run:

  ```bash
  cd $NEMO
  git checkout master
  make rebuild
  ```
- Go to the experiment root directory:

  ```bash
  cd /path/to/Nbody/02_Reproduce_JGB/
  ```
- To reproduce any experiment from the original article, run the corresponding sh-script. For example:

  ```bash
  bash sh_scripts/run_exp_MA.sh
  ```

  You can also choose your own distribution parameters and run the commands from the sh-script one by one (see the next section).

  Note that all scripts in this experiment overwrite existing files. Don't forget to back up your experiments before trying to reproduce them!
The combinations of parameters from the original article and corresponding sh-scripts are listed below:
| Experiment name | $\mu$ | $s$, $M_{☉}$ | $\sigma$ | Plummer radius, pc | Number of particles | Script to reproduce evolution |
|---|---|---|---|---|---|---|
| $\sigma = 0.5$ | 0 | 1 | 0.5 | 10 | $10^4$ | `sh_scripts/run_exp_sigma0.5.sh` |
| $\sigma = 1$ | 0 | 1 | 1 | 10 | $10^4$ | `sh_scripts/run_exp_sigma1.0.sh` |
| $\sigma = 1.5$ | 0 | 1 | 1.5 | 10 | $10^4$ | `sh_scripts/run_exp_sigma1.5.sh` |
| M & A | 10 | 1.5 | 0.954 | 10 | $10^4$ | `sh_scripts/run_exp_MA.sh` |
This section describes how to perform the evolution of a PBH cluster in the external potential of a SMBH.
- Create initial coordinates of the cluster:

  ```bash
  python create_ic.py --mu <MU> --sigma <SIGMA> --scale <SCALE> --r <PLUMMER_RADIUS> --N <N>
  ```

  Here `N` is the number of particles in the simulation; `MU`, `SIGMA`, and `SCALE` are the log-normal distribution parameters of the PBH mass spectrum; and `PLUMMER_RADIUS` is the characteristic size of the Plummer density distribution (type `python create_ic.py --help` for more details).

  The above command automatically creates (or re-creates) a directory named `snap_mu<MU>_s<SCALE>_sigma<SIGMA>_r<PLUMMER_RADIUS>_N<N>` containing the file `IC.nemo` with the initial coordinates for the evolution.

- Preprocess the data. JGB writes:

  > Clusters are themselves immersed in a central gravitational potential with orbital radius $R_c$ = 34 kpc and central mass $M = 4.37 × 10^{10} M_{☉}$ throughout the entire evolution. This is just a point mass approximation which leads to a circular movement of period T = 2.81 Gyr

  The easiest way to create this point-mass potential is to add a new particle representing the central mass to the existing snapshot with the PBH cluster data:

  ```bash
  python preprocess_snap.py \
    --nemo-file <DIRNAME>/IC.nemo \
    --r 10 \
    --r-shift 34 0 0 \
    --v-shift 0 -74.35014 0 \
    --add-point-source \
    --source-mass 4.37e10
  ```

  After this step, you will get a new file `<DIRNAME>/IC_preprocessed.nemo` with the old data concatenated with the new data (a static point mass of $4.37\times10^{10} M_\odot$ at (0, 0, 0)).

  Note that `preprocess_snap.py` also converts parsecs to kiloparsecs for convenience using NEMO's `snapscale`. The reverse transformation is performed by `postprocess.py`. For more details, see the section Units.

- Run the evolution of this snapshot with the SMBH:

  ```bash
  gyrfalcON in=<DIRNAME>/IC_preprocessed.nemo out=<DIRNAME>/<OUT_NAME>.nemo eps=<eps> kmax=<kmax> Grav=<Grav> tstop=<tstop> step=<step> logstep=300
  ```

  A set of recommended parameters for `gyrFalcON` is printed by the `preprocess_snap.py` script at the end of its output.

- It is useful to postprocess your data to plot profiles, mass spectra, etc. To postprocess a snapshot evolved in the JGB potential (the original article), run:

  ```bash
  python postprocess.py --nemo-file <DIRNAME>/<OUT_NAME>.nemo --remove-point-source
  ```

  The postprocessed file, named `<OUT_NAME>_postprocessed.nemo`, will be stored in the `<DIRNAME>` folder.
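As a sanity check, the `--v-shift` value and the quoted orbital period follow directly from circular motion around the point mass, $v = \sqrt{GM/R_c}$. This is a back-of-the-envelope calculation, not part of the repository scripts:

```python
import math

G = 4.300917270e-6   # gravitational constant, kpc * (km/s)^2 / M_sun
M = 4.37e10          # central point mass, M_sun
R = 34.0             # orbital radius, kpc

# Circular velocity around a point mass
v = math.sqrt(G * M / R)

# Orbital period T = 2*pi*R / v, converted from kpc/(km/s) to Gyr
KPC_KM = 3.0856775814913673e16   # km per kpc
GYR_S = 3.15576e16               # seconds per Gyr
T = 2.0 * math.pi * R / v * KPC_KM / GYR_S

print(f"v = {v:.5f} km/s, T = {T:.2f} Gyr")  # v ≈ 74.35 km/s, T ≈ 2.81 Gyr
```

This reproduces both the `--v-shift` magnitude (74.35014 km/s) and the period T = 2.81 Gyr quoted from the article.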
- To visualize the cluster evolution, run:

  ```bash
  snapplot <DIRNAME>/out.nemo
  ```

  Use these options for customization:

  ```bash
  snapplot <DIRNAME>/out.nemo xrange=<xmin>:<xmax> yrange=<ymin>:<ymax> times=<tmin>:<tmax>
  ```
- You can also visualize the evolution using glnemo2.
- Another option is to use the custom visualization script from this repository:

  ```bash
  python animate.py --nemo-file <DIRNAME>/out_postprocessed.nemo --times <t1> <t2> ... <tn> --add-point-source
  ```

  `--times <t1> <t2> ... <tn>` lists the snapshot timestamps you want to plot, separated by spaces, e.g. `--times 0.0 1.0 2.0`. Before passing timestamps, make sure they are present in the snapshot. To get a list of timestamps from a snapshot, run:

  ```bash
  python stat.py --nemo-file <DIRNAME>/<OUT_NAME>_postprocessed.nemo --n-timestamps <N>
  ```

  where `<N>` is the desired number of timestamps. If you used the sh-scripts, check the `txt` file in the directory with your snapshot.
To plot the density profile, run:

```bash
python plot_density_profile.py --nemo-file <DIRNAME>/<OUT_NAME>_postprocessed.nemo --times <t1> <t2> ... <tn> --mu <MU> --sigma <SIGMA> --scale <SCALE> --r <PLUMMER_RADIUS>
```
To plot the Lagrange radius at different timestamps for different experiments, run:

```bash
python plot_lagrange_radius.py --nemo-files <DIRNAME1>/<OUT_NAME>_postprocessed.nemo <DIRNAME2>/<OUT_NAME>_postprocessed.nemo --times <t1> <t2> ... <tn>
```

NEMO's tool for computing the cluster's density center sometimes fails, and this has not been fixed yet, so it is better to add `--remove-outliers` at the end of the command:

```bash
python plot_lagrange_radius.py --nemo-files <DIRNAME1>/<OUT_NAME>_postprocessed.nemo <DIRNAME2>/<OUT_NAME>_postprocessed.nemo --times <t1> <t2> ... <tn> --remove-outliers
```
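For reference, the Lagrange radius for a mass fraction $f$ is the smallest radius around the density center that encloses a fraction $f$ of the total mass. A minimal NumPy sketch of the idea (the actual computation, including the density-center estimate, lives in `plot_lagrange_radius.py`):

```python
import numpy as np

def lagrange_radius(pos, masses, center, f=0.5):
    """Smallest radius around `center` enclosing a fraction `f` of the total mass."""
    r = np.linalg.norm(pos - center, axis=1)
    order = np.argsort(r)
    enclosed = np.cumsum(masses[order])
    k = np.searchsorted(enclosed, f * enclosed[-1])  # first shell reaching f * M_tot
    return r[order][k]

# Toy check: 10 equal-mass particles at radii 1..10 pc from the origin;
# the half-mass radius is the radius of the 5th particle.
pos = np.column_stack((np.arange(1.0, 11.0), np.zeros(10), np.zeros(10)))
m = np.ones(10)
print(lagrange_radius(pos, m, np.zeros(3)))  # -> 5.0
```
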
You can compare your results with plots from the article:
Note that Lagrange radius at
Compute and plot the mass spectrum for a given snapshot along with the original distribution function:

```bash
python plot_mass_spectrum.py --nemo-file <DIRNAME>/<OUT_NAME>_postprocessed.nemo --times <t1> <t2> ... <tn> --mu <MU> --sigma <SIGMA> --scale <SCALE> --r <PLUMMER_RADIUS>
```

The mass distribution for your snapshot (the resulting histograms) and the original pdf (the line plot) should look like a log-normal distribution with your parameters.

To plot the distribution of masses only for particles inside the half-mass radius, run:

```bash
python plot_mass_spectrum.py --nemo-file <DIRNAME>/<OUT_NAME>_postprocessed.nemo --times <t1> <t2> ... <tn> --mu <MU> --sigma <SIGMA> --scale <SCALE> --r <PLUMMER_RADIUS> --lagrange --remove-outliers
```
To test your pipeline, you may evolve a cluster in its own gravitational field (without any external potential). The final density profile should look like the initial one; this indicates that your model is truly self-consistent.
Many researchers use Nbody6++GPU to perform the evolution of clusters. We need to compare our method (the fast gyrFalcON, with complexity $O(N)$) with the precise but slow direct-summation Nbodyx methods (Nbody0, Nbody4, Nbody6, etc.), with complexity $O(N^2)$.
At the moment, I compare gyrFalcON with Nbody0 (the simplest Nbodyx method). To run the gyrFalcON and Nbody0 methods together, pass `1` to the bash script for evolution:

```bash
bash sh_scripts/run_exp_MA.sh 1
```

The default command `bash sh_scripts/run_exp_MA.sh` is equivalent to `bash sh_scripts/run_exp_MA.sh 0`; the last argument controls whether to run Nbody0.
You can compare the results visually (by running `animate.py`) or plot all the statistics: density profile, mass spectrum, or Lagrange radius.
To plot the Lagrange radii for gyrFalcON and Nbody0 together, run this command:

```bash
python plot_lagrange_radius.py \
  --remove-outliers \
  --nemo-files /path/to/dir1/out_postprocessed.nemo ... /path/to/dirn/out_postprocessed.nemo \
  --mu <MU> \
  --sigma <SIGMA> \
  --scale <SCALE> \
  --times t1 ... tn \
  --nbody-nemo-files /path/to/dirn/out_nbody_postprocessed.nemo \
  --nbody-times t1 ... tn
```
Note that the timestamps for gyrFalcON and Nbody0 will likely differ, so we need to feed them separately. There is also a way to get the nearest timestamp in a snapshot using NEMO's `snaptrim` with the option `timefuzz=nearest`. However, there is a bug related to close timestamps in a simulation snapshot, so my way is uglier but less error-prone.
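The manual matching described above boils down to picking, for each requested time, the nearest timestamp actually present in the snapshot. A small illustrative helper (`nearest_times` is hypothetical, not part of this repository):

```python
import numpy as np

def nearest_times(requested, available):
    """For each requested time, return the closest timestamp actually present."""
    req = np.asarray(requested, dtype=float)
    avail = np.sort(np.asarray(available, dtype=float))
    idx = np.clip(np.searchsorted(avail, req), 1, len(avail) - 1)
    left, right = avail[idx - 1], avail[idx]
    # choose whichever neighbor is closer to the requested time
    return np.where(req - left <= right - req, left, right)

gyr_times = [0.0, 1.0, 2.0]            # timestamps you want to plot
nbody_times = [0.0, 0.98, 1.03, 1.97]  # timestamps present in the Nbody0 snapshot
print(nearest_times(gyr_times, nbody_times))  # nearest available: 0.0, 0.98, 1.97
```
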
It takes about 4 hours to run a gyrFalcON simulation on our CPU. Running a Nbody0 simulation takes about 3 days, depending on the task you choose. Running the M & A experiment with `N = 20000` was not reasonable because it would take more than two weeks, so we used `N = 10000` for this experiment.
The main difference between Nbody0 and gyrFalcON is that Nbody0 always uses `G=1`. We, on the other hand, use unusual units in our experiments:

- We use pc (length), km/s (velocity), and $M_{☉}$ (mass) for creating a cluster model, because these units are convenient.
- We use different units for the evolution: kpc for length, km/s for velocity, and $M_{☉}$ for mass. These units are mostly motivated by the fact that they are used with the Milky Way potential from Portail et al. (2017) implemented in Agama, and we would probably like to reuse our code for that task in the future. Also, the characteristic size of the system changes to kpc (the size of the cluster orbit).

With that in mind, I decided that the easiest choice of units for Nbody0 is kpc, km/s, and a mass unit chosen so that $G = 1$:

- In these units, `G=1` by definition.
- It is easy to compare the evolution results of Nbody0 and gyrFalcON and to reuse the same scripts, as we do not need to change the time, velocity, and length scales; we only change the mass scale.
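Concretely, once length is fixed to kpc and velocity to km/s, requiring $G = 1$ pins down the mass unit. An illustrative back-of-the-envelope calculation, not repository code:

```python
# With [L] = kpc and [V] = km/s, the mass unit that makes G = 1 is
# M_unit = [L] * [V]^2 / G
G = 4.300917270e-6   # gravitational constant, kpc * (km/s)^2 / M_sun
M_unit = 1.0 / G     # ~2.325e5 M_sun

print(f"1 mass unit = {M_unit:.4e} M_sun")
# So snapshot masses in M_sun are rescaled by ~2.325e5 before feeding Nbody0,
# while positions (kpc), velocities (km/s), and times (kpc/(km/s)) are unchanged.
```
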
Here is a list of what we need to fully reproduce the article:
- N-body simulation
- Comparison between Nbody6++GPU (or Nbody6) and GyrfalcON
- Gravitational waves
- Black hole mergers