Cherry-picked commits for 2.1.4 release (#4606)
* Remove reference to relative_example in docs

* Use RTLD_GLOBAL for libgomp (#4353)

* Use RTLD_GLOBAL for libgomp

In conda-forge/pycbc-feedstock#74 it was suggested to use RTLD_GLOBAL for libgomp. Let's see if this works fine with the test suite (which should answer @josh-willis's concerns). A sketch of the approach is shown after this list of commits.

* Move import of ctypes/gomp into __enter__

* Try this
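
The following is a minimal sketch of the technique referred to above: loading libgomp with `RTLD_GLOBAL` via ctypes, with the load deferred until a context manager's `__enter__`. The class name and library filename here are illustrative assumptions, not the actual PyCBC implementation.

```python
import ctypes


class OpenMPPreload:
    """Hypothetical context manager that preloads libgomp on entry."""

    def __enter__(self):
        # Load libgomp with RTLD_GLOBAL so its symbols are visible to
        # other OpenMP-linked extension modules loaded afterwards.
        # Doing this in __enter__ (rather than at module import time)
        # avoids the cost unless the context is actually used.
        try:
            self._libgomp = ctypes.CDLL(
                "libgomp.so.1", mode=ctypes.RTLD_GLOBAL
            )
        except OSError:
            # libgomp may not be present (e.g. non-GNU toolchains).
            self._libgomp = None
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        return False
```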

* Revert "Revert "Allow SNR optimizer to use candidate point in initial array (#4393)""

This reverts commit 7be12f1.

We are now catching up with master, where the bug originally introduced
by #4393 is fixed properly, so here I am undoing the temporary fix.

* SNR optimisation options for pycbc_live (#4432)

* Moving the live optimizer option changes to my own branch

* Completing the snr optimization argument group

* updating pycbc_live

* re-adding bug fix

* removing TODO message

* Bug with d_e options

* Adding optimizer-seed

* fixing the d_e optimizer

* replacing run.sh code

* resolve merge conflict

* fixing run.sh

* cleaning up args_to_string func

* changing comment

* codeclimate fixes

* module docstring

* Update module docstring copyright

Co-authored-by: Gareth S Cabourn Davies <[email protected]>

* Add gareth

* removing argv

* argument changing

* removing duplicated arguments

* minor CC points

* remove bug introduced when making CC happier

---------

Co-authored-by: Gareth S Cabourn Davies <[email protected]>
Co-authored-by: Thomas Dent <[email protected]>

* Improvements to single-detector trigger fitting code for PyCBC Live (#4486)

* Cleanup

* Cleanup

* Refactor duration bin parsing code and add support for reading from bank

* Minor fix/cleanup to logging

* Update CLI checks for duration bins

* Cleanup

* Ignore inconsistent config when combining

* Fix bug

* Fix typo

Co-authored-by: Gareth S Cabourn Davies <[email protected]>

* Comment from Gareth

---------

Co-authored-by: Gareth S Cabourn Davies <[email protected]>

* [pycbc live] Don't add snr options to command if they don't exist (#4518)

* Don't run snr optimizer setup if not optimizing snr

* moving the check to a more appropriate place

* setting snr_opt_options to None if not optimizing

* [pycbc live] Allowing the use of psd variation in the ranking statistic for pycbc live (#4533)

* Modifying files to include psd variation in single detector statistic calculation

* ending variation.py with a blank line

* Changing to an increment agnostic solution

* removing change already fixed

* Updating function names and docstrings

* removing ToDos and adding more helpful comments

* Removing unused import

* Codeclimate fixes

* Removing excess logging and whitespace mistakes

* Removing unused objects + codeclimate fixes

* Updating comments and docstrings, removing matchedfilter changes

* Revert "Updating comments and docstrings, removing matchedfilter changes"

This reverts commit 0e6473a.

* Removing matchedfilter changes, updating comments and docstrings

* Move --verbose to the end of the commands

* more comment updates

* Repositioning filter recreation

* Changes to comments and removing whitespace

Co-authored-by: Thomas Dent <[email protected]>

* removing refchecks

* Adding option verification for psd variation

* Apply suggestions from code review

Co-authored-by: Thomas Dent <[email protected]>

* fixing EOL error

* Refactoring the filter creation function

* codeclimate fixes

* undo

* full_filt func

* removing indentation

* code climate

* code climate

* try to quiet codeclimate

* codeclimate doesn't know PEP8

* brackets obviate line continuation

---------

Co-authored-by: Thomas Dent <[email protected]>

* added scaling of initial pop in snr_optimizer (#4561)

* added scaling of initial pop

* init popn in optimize_di & pso func

* added changes in optimize_pso

* using logging.debug for snr

* Do not set matplotlib's backend in internal modules (#4592)

* Set version to 2.1.4

* Remove reference to single_template_examples in docs

* Remove reference to hierarchical_model in docs

* Live: produce empty trigger fit plot for detectors with no triggers (#4600)

* Live: produce empty trigger fit plot for detectors with no triggers

* allow for below-threshold triggers

* fix thinko in option parsing for defaults (#4615)

* fix thinko in option parsing for defaults

When an option is not given at all, getattr on the args object gives None, but we don't want to translate that into "--option-name None" on the command line. A sketch of the check is shown below.

* bugfix 

obviously we needed to define 'key_name' first ..
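
Below is a minimal sketch of the kind of check described above. The names `args_to_string` and `key_name` come from the commit messages; the body shown here is an illustrative assumption, not the actual PyCBC Live code.

```python
def args_to_string(args):
    """Turn parsed argparse options back into command-line pieces,
    skipping options that were never given (their value is None)."""
    pieces = []
    for arg, value in vars(args).items():
        if value is None:
            # Option not supplied: do not emit "--option-name None".
            continue
        key_name = "--" + arg.replace("_", "-")
        if value is True:
            # Boolean store_true flag: emit the flag with no value.
            pieces.append(key_name)
        else:
            pieces.append(f"{key_name} {value}")
    return " ".join(pieces)
```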

* Improvements to single fit plots (#4509)

* Improvements to single fit plots

* Apply suggestions from Gareth

Co-authored-by: Gareth S Cabourn Davies <[email protected]>

---------

Co-authored-by: Gareth S Cabourn Davies <[email protected]>

---------

Co-authored-by: Ian Harry <[email protected]>
Co-authored-by: Arthur Tolley <[email protected]>
Co-authored-by: Gareth S Cabourn Davies <[email protected]>
Co-authored-by: Thomas Dent <[email protected]>
Co-authored-by: Praveen Kumar <[email protected]>
6 people committed Feb 6, 2024
1 parent 5d4d8bb commit b5bba8f
Showing 16 changed files with 1,005 additions and 462 deletions.
121 changes: 69 additions & 52 deletions bin/live/pycbc_live_combine_single_fits
@@ -15,8 +15,10 @@
"""Combine PyCBC Live single-detector trigger fitting parameters from several
different files."""

import h5py, numpy as np, argparse
import argparse
import logging
import numpy as np
import h5py
import pycbc


@@ -45,66 +47,80 @@ if args.conservative_percentile < 50 or \
"otherwise it is either not a percentile, or not "
"conservative.")

counts_all = {ifo: [] for ifo in args.ifos}
alphas_all = {ifo: [] for ifo in args.ifos}
analysis_dates = []
logging.info("%d input files", len(args.trfits_files))

# We only want to combine fit results if they were done with the same
# configuration. So start by finding the most recent fit file and reading its
# configuration parameters.

with h5py.File(args.trfits_files[0], 'r') as fit_f0:
# Store some attributes so we can check that all files are
# comparable
logging.info("Determining the most recent configuration parameters")

# Keep the upper and lower bins
bl = fit_f0['bins_lower'][:]
bu = fit_f0['bins_upper'][:]
latest_date = None
for f in args.trfits_files:
with h5py.File(f, 'r') as fit_f:
if latest_date is None or fit_f.attrs['analysis_date'] > latest_date:
latest_date = fit_f.attrs['analysis_date']
bl = fit_f['bins_lower'][:]
bu = fit_f['bins_upper'][:]
sngl_rank = fit_f.attrs['sngl_ranking']
fit_thresh = fit_f.attrs['fit_threshold']
fit_func = fit_f.attrs['fit_function']

sngl_rank = fit_f0.attrs['sngl_ranking']
fit_thresh = fit_f0.attrs['fit_threshold']
fit_func = fit_f0.attrs['fit_function']
# Now go back through the fit files and read the actual information. Skip the
# files that have fit parameters inconsistent with what we found earlier.

live_times = {ifo: [] for ifo in args.ifos}
logging.info("Reading individual fit results")

live_times = {ifo: [] for ifo in args.ifos}
trigger_file_starts = []
trigger_file_ends = []

n_files = len(args.trfits_files)
logging.info("Checking through %d files", n_files)
counts_all = {ifo: [] for ifo in args.ifos}
alphas_all = {ifo: [] for ifo in args.ifos}

for f in args.trfits_files:
fits_f = h5py.File(f, 'r')
# Check that the file uses the same setup as file 0, to make sure
# all coefficients are comparable

assert fits_f.attrs['sngl_ranking'] == sngl_rank
assert fits_f.attrs['fit_threshold'] == fit_thresh
assert fits_f.attrs['fit_function'] == fit_func
assert all(fits_f['bins_lower'][:] == bl)
assert all(fits_f['bins_upper'][:] == bu)

# Get the time of the first/last triggers in the trigger_fits file
gps_last = 0
gps_first = np.inf
for ifo in args.ifos:
if ifo not in fits_f:
with h5py.File(f, 'r') as fits_f:
# Check that the file uses the same setup as file 0, to make sure
# all coefficients are comparable
same_conf = (fits_f.attrs['sngl_ranking'] == sngl_rank
and fits_f.attrs['fit_threshold'] == fit_thresh
and fits_f.attrs['fit_function'] == fit_func
and all(fits_f['bins_lower'][:] == bl)
and all(fits_f['bins_upper'][:] == bu))
if not same_conf:
logging.warn(
"Found a change in the fit configuration, skipping %s",
f
)
continue
trig_times = fits_f[ifo]['triggers']['end_time'][:]
gps_last = max(gps_last, trig_times.max())
gps_first = min(gps_first, trig_times.min())
trigger_file_starts.append(gps_first)
trigger_file_ends.append(gps_last)

for ifo in args.ifos:
if ifo not in fits_f:
live_times[ifo].append(0)
counts_all[ifo].append(-1 * np.ones_like(bl))
alphas_all[ifo].append(-1 * np.ones_like(bl))
else:
live_times[ifo].append(fits_f[ifo].attrs['live_time'])
counts_all[ifo].append(fits_f[ifo + '/counts'][:])
alphas_all[ifo].append(fits_f[ifo + '/fit_coeff'][:])
if any(np.isnan(fits_f[ifo + '/fit_coeff'][:])):
logging.info("nan in %s, %s", f, ifo)
logging.info(fits_f[ifo + '/fit_coeff'][:])
fits_f.close()

# We now determine the (approximate) start/end times of the
# trigger_fits file via the time of the first/last triggers in it.
# Ideally this would be recorded exactly in the file.
gps_last = 0
gps_first = np.inf
for ifo in args.ifos:
if ifo not in fits_f:
continue
trig_times = fits_f[ifo]['triggers']['end_time'][:]
gps_last = max(gps_last, trig_times.max())
gps_first = min(gps_first, trig_times.min())
trigger_file_starts.append(gps_first)
trigger_file_ends.append(gps_last)

# Read the fitting parameters
for ifo in args.ifos:
if ifo not in fits_f:
live_times[ifo].append(0)
counts_all[ifo].append(-1 * np.ones_like(bl))
alphas_all[ifo].append(-1 * np.ones_like(bl))
else:
ffi = fits_f[ifo]
live_times[ifo].append(ffi.attrs['live_time'])
counts_all[ifo].append(ffi['counts'][:])
alphas_all[ifo].append(ffi['fit_coeff'][:])
if any(np.isnan(ffi['fit_coeff'][:])):
logging.warn("nan in %s, %s", f, ifo)
logging.warn(ffi['fit_coeff'][:])

# Set up the date array, this is stored as an offset from the first trigger time of
# the first file to the last trigger of the file
@@ -115,7 +131,7 @@ ad_order = np.argsort(trigger_file_starts)
start_time_n = trigger_file_starts[ad_order[0]]
ad = trigger_file_ends[ad_order] - start_time_n

# Get the counts and alphas
# Swap the time and bin dimensions for counts and alphas
counts_bin = {ifo: [c for c in zip(*counts_all[ifo])] for ifo in args.ifos}
alphas_bin = {ifo: [a for a in zip(*alphas_all[ifo])] for ifo in args.ifos}

@@ -125,6 +141,7 @@ cons_alphas_out = {ifo: np.zeros(len(alphas_bin[ifo])) for ifo in args.ifos}
cons_counts_out = {ifo: np.inf * np.ones(len(alphas_bin[ifo])) for ifo in args.ifos}

logging.info("Writing results")

fout = h5py.File(args.output, 'w')
fout.attrs['fit_threshold'] = fit_thresh
fout.attrs['conservative_percentile'] = args.conservative_percentile
45 changes: 20 additions & 25 deletions bin/live/pycbc_live_plot_combined_single_fits
@@ -12,18 +12,23 @@
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.

import h5py, numpy as np, argparse
"""Plot the time evolution of fit parameters of PyCBC Live triggers.
"""

import argparse
import logging
import numpy as np
import h5py
import matplotlib
matplotlib.use('agg')
from matplotlib import pyplot as plt
import logging

from lal import gpstime
import pycbc

import pycbc


parser = argparse.ArgumentParser(usage="",
description="Combine fitting parameters from several different files")
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--verbose", action="store_true",
help="Print extra debugging information", default=False)
parser.add_argument("--combined-fits-file", required=True,
@@ -45,7 +50,7 @@ parser.add_argument("--colormap", default="rainbow_r", choices=plt.colormaps(),
parser.add_argument("--log-colormap", action='store_true',
help="Use log spacing for choosing colormap values "
"based on duration bins.")
args=parser.parse_args()
args = parser.parse_args()

if '{ifo}' not in args.output_plot_name_format or \
'{type}' not in args.output_plot_name_format:
@@ -94,21 +99,6 @@ with h5py.File(args.combined_fits_file, 'r') as cff:
bin_starts = bins_edges[:-1]
bin_ends = bins_edges[1:]

bin_max = max(bin_ends)
bin_min = min(bin_starts)

def bin_proportion(upper, lower, log_spacing=False):
if log_spacing:
ll = np.log(lower)
ul = np.log(lower)
centl = (ll + ul) / 2.
minl = np.log(bin_min)
maxl = np.log(bin_max)
return (centl - minl) / (maxl - minl)

else:
return ((lower + upper) / 2. - bin_min) / (bin_max - bin_min)

# Set up the x ticks - note that these are rounded to the nearest
# midnight, so may not line up exactly with the data
min_start = min([separate_starts[ifo].min() for ifo in ifos])
@@ -157,8 +147,7 @@ for ifo in ifos:
mr = mean_count[ifo][i] / live_total[ifo]
cr = cons_count[ifo][i] / live_total[ifo]

bin_prop = bin_proportion(bu, bl,
log_spacing=args.log_colormap)
bin_prop = i / len(bin_starts)
bin_colour = plt.get_cmap(args.colormap)(bin_prop)
bin_label = f"duration {bl:.2f}-{bu:.2f}"
alpha_lines += ax_alpha.plot(separate_starts[ifo], alphas, c=bin_colour,
@@ -199,6 +188,12 @@ for ifo in ifos:
ax.grid(zorder=-30)

fig_count.tight_layout()
fig_count.savefig(args.output_plot_name_format.format(ifo=ifo, type='counts'))
fig_count.savefig(
args.output_plot_name_format.format(ifo=ifo, type='counts')
)
fig_alpha.tight_layout()
fig_alpha.savefig(args.output_plot_name_format.format(ifo=ifo, type='fit_coeffs'))
fig_alpha.savefig(
args.output_plot_name_format.format(ifo=ifo, type='fit_coeffs')
)

logging.info("Done")