Improve flagging and noise estimation (#695)
* Implement consistent handling of flags

* More cleanups, run format_source

* Pass amplitude flags into Offset template kernels.

* More work on flagging consistency

* Update flag masks to be inclusive for filtering and more conservative for binning

* Remove debugging

* Add more debugging plots

* Restore the artificial flagging used in the obsmatrix unit test.

* Restore another check

* Update filterbin flag defaults based on offline conversation.

* Fix handling of flagged detectors in noise estimation.  Better support for disabled templates in template matrix.

* Add detector cutting to all unit tests by default.  Cleanup resulting problems due to assumptions about looping over local detectors.  Add extra debug plot support to offset template.

* Debugging test failures

* Ensure that the noise weighting operator always sets output detdata units, even if not processing detectors from some observations.

* Update platform scripts from llvm-17 to llvm-18.

* Address some review comments

* Split out the per-detector flag mask as a trait separate from the per-sample flag mask

* Many small fixes, address review comments.

* Remove redundant memory clearing.  Fix Offset template band-diagonal preconditioner.  Fix timing imports to point to timing rather than utils module.

* Revert name of demodulation stokes weights.  Fix typo in FlagNoiseFit.

* Comment out debug statements

* Fix stale comment
tskisner authored Jan 4, 2024
1 parent 1b17c7e commit a857f1f
Showing 94 changed files with 2,333 additions and 853 deletions.
6 changes: 3 additions & 3 deletions platforms/linux-llvm.sh
@@ -2,15 +2,15 @@

# This assumes you have created a virtualenv and installed toast
# dependencies (e.g. with wheels/install_deps_linux.sh) into that virtual
- # env. This assumes installation of llvm-17 from LLVM apt sources.
+ # env. This assumes installation of llvm-18 from LLVM apt sources.

venv_path=$(dirname $(dirname $(which python3)))

export LD_LIBRARY_PATH="${venv_path}/lib"

cmake \
- -DCMAKE_C_COMPILER="clang-17" \
- -DCMAKE_CXX_COMPILER="clang++-17" \
+ -DCMAKE_C_COMPILER="clang-18" \
+ -DCMAKE_CXX_COMPILER="clang++-18" \
-DCMAKE_C_FLAGS="-O3 -g -fPIC -pthread -I${venv_path}/include" \
-DCMAKE_CXX_FLAGS="-O3 -g -fPIC -pthread -std=c++11 -stdlib=libc++ -I${venv_path}/include" \
-DUSE_OPENMP_TARGET=TRUE \
8 changes: 4 additions & 4 deletions platforms/venv_dev_llvm.sh
@@ -7,7 +7,7 @@ set -e

opts="$@"

- suf="-17"
+ suf="-18"
if [ "x$(which clang++${suf})" = "x" ]; then
echo "The clang++${suf} compiler is not in your PATH, trying clang++"
if [ "x$(which clang++)" = "x" ]; then
@@ -46,10 +46,10 @@ export CFLAGS="-O3 -g -fPIC -pthread"
export FCFLAGS="-O3 -g -fPIC -pthread"
export CXXFLAGS="-O3 -g -fPIC -pthread -std=c++11 -stdlib=libc++ -I${INCDIR}"

- export LD_LIBRARY_PATH="/usr/lib/llvm-17/lib:/usr/lib/gcc/x86_64-linux-gnu/11:${LIBDIR}"
- export LIBRARY_PATH="/usr/lib/llvm-17/lib:/usr/lib/gcc/x86_64-linux-gnu/11:${LIBDIR}"
+ export LD_LIBRARY_PATH="/usr/lib/llvm-18/lib:/usr/lib/gcc/x86_64-linux-gnu/11:${LIBDIR}"
+ export LIBRARY_PATH="/usr/lib/llvm-18/lib:/usr/lib/gcc/x86_64-linux-gnu/11:${LIBDIR}"

- # # export CMAKE_MODULE_LINKER_FLAGS="-L/usr/lib/llvm-17/lib -stdlib=libc++ "
+ # # export CMAKE_MODULE_LINKER_FLAGS="-L/usr/lib/llvm-18/lib -stdlib=libc++ "
# export LINK_OPTIONS="-stdlib=libc++"

cmake \
8 changes: 4 additions & 4 deletions platforms/venv_dev_llvm_static.sh
@@ -7,7 +7,7 @@ set -e

opts="$@"

- suf="-17"
+ suf="-18"
if [ "x$(which clang++${suf})" = "x" ]; then
echo "The clang++${suf} compiler is not in your PATH, trying clang++"
if [ "x$(which clang++)" = "x" ]; then
@@ -46,10 +46,10 @@ export CFLAGS="-O3 -g -fPIC -pthread"
export FCFLAGS="-O3 -g -fPIC -pthread"
export CXXFLAGS="-O3 -g -fPIC -pthread -std=c++11 -stdlib=libc++ -I${INCDIR}"

- export LD_LIBRARY_PATH="/usr/lib/llvm-17/lib:${LIBDIR}"
- export LIBRARY_PATH="/usr/lib/llvm-17/lib:${LIBDIR}"
+ export LD_LIBRARY_PATH="/usr/lib/llvm-18/lib:${LIBDIR}"
+ export LIBRARY_PATH="/usr/lib/llvm-18/lib:${LIBDIR}"

- # # export CMAKE_MODULE_LINKER_FLAGS="-L/usr/lib/llvm-17/lib -stdlib=libc++ "
+ # # export CMAKE_MODULE_LINKER_FLAGS="-L/usr/lib/llvm-18/lib -stdlib=libc++ "
# export LINK_OPTIONS="-stdlib=libc++"

cmake \
4 changes: 2 additions & 2 deletions platforms/venv_dev_setup_llvm.sh
@@ -20,7 +20,7 @@ popd >/dev/null 2>&1
# Location of build helper tools
venvpkgdir=$(dirname "${scriptdir}")/packaging/venv

- suf="-17"
+ suf="-18"
if [ "x$(which clang++${suf})" = "x" ]; then
echo "The clang++${suf} compiler is not in your PATH, trying clang++"
if [ "x$(which clang++)" = "x" ]; then
@@ -54,6 +54,6 @@ export CXXFLAGS="-O3 -g -fPIC -pthread -std=c++11 -stdlib=libc++"
export OMPFLAGS="-fopenmp"
export FCLIBS="-L/usr/lib/gcc/x86_64-linux-gnu/11 -lgfortran"

- export LD_LIBRARY_PATH="/usr/lib/llvm-17/lib:/usr/lib/gcc/x86_64-linux-gnu/11:${envname}/lib"
+ export LD_LIBRARY_PATH="/usr/lib/llvm-18/lib:/usr/lib/gcc/x86_64-linux-gnu/11:${envname}/lib"

eval "${venvpkgdir}/install_deps_venv.sh" "${envname}" ${optional} no
4 changes: 2 additions & 2 deletions platforms/venv_dev_setup_llvm_static.sh
@@ -20,7 +20,7 @@ popd >/dev/null 2>&1
# Location of build helper tools
venvpkgdir=$(dirname "${scriptdir}")/packaging/venv

- suf="-17"
+ suf="-18"
if [ "x$(which clang++${suf})" = "x" ]; then
echo "The clang++${suf} compiler is not in your PATH, trying clang++"
if [ "x$(which clang++)" = "x" ]; then
@@ -54,6 +54,6 @@ export CXXFLAGS="-O3 -g -fPIC -pthread -std=c++11 -stdlib=libc++"
export OMPFLAGS="-fopenmp"
export FCLIBS="-L/usr/lib/gcc/x86_64-linux-gnu/11 -lgfortran"

- export LD_LIBRARY_PATH="/usr/lib/llvm-17/lib:/usr/lib/gcc/x86_64-linux-gnu/11:${envname}/lib"
+ export LD_LIBRARY_PATH="/usr/lib/llvm-18/lib:/usr/lib/gcc/x86_64-linux-gnu/11:${envname}/lib"

eval "${venvpkgdir}/install_deps_venv.sh" "${envname}" ${optional} yes
21 changes: 11 additions & 10 deletions src/libtoast/src/toast_tod_filter.cpp
@@ -81,6 +81,7 @@ void toast::filter_polynomial(int64_t order, size_t n, uint8_t * flags,
last = &full_templates[(iorder - 1) * scanlen];
lastlast = &full_templates[(iorder - 2) * scanlen];
double orderinv = 1. / iorder;

#pragma omp simd
for (size_t i = 0; i < scanlen; ++i) {
const double x = xstart + i * dx;
@@ -130,6 +131,16 @@
singular_values.data(), rcond_limit,
&rank, WORK.data(), LWORK, &info);

+ if (info != 0) {
+ auto log = toast::Logger::get();
+ std::ostringstream o;
+ o << "DGELLS: " << ngood << "/" << scanlen << " good samples, order " <<
+ norder;
+ o << " failed with info " << info;
+ log.error(o.str().c_str(), TOAST_HERE());
+ throw std::runtime_error(o.str().c_str());
+ }
+
for (int iorder = 0; iorder < norder; ++iorder) {
double * temp = &full_templates[iorder * scanlen];
for (int isignal = 0; isignal < nsignal; ++isignal) {
@@ -143,16 +154,6 @@ void toast::filter_polynomial(int64_t order, size_t n, uint8_t * flags,
}
}
}
-
- if (info != 0) {
- auto log = toast::Logger::get();
- std::ostringstream o;
- o << "DGELLS: " << ngood << "/" << scanlen << " good samples, order " <<
- norder;
- o << " failed with info " << info;
- log.error(o.str().c_str(), TOAST_HERE());
- throw std::runtime_error(o.str().c_str());
- }
}
}

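The change above moves the DGELSS status check to immediately after the solve, so the fit coefficients are only applied when the least-squares problem actually succeeded, and only unflagged samples enter the fit. A minimal NumPy sketch of the same flagged polynomial filtering idea (the helper name is hypothetical; the real kernel is the C++ `toast::filter_polynomial`):

```python
import numpy as np

def filter_polynomial_sketch(order, signal, flags):
    # Fit Legendre templates up to `order` using only samples whose
    # flag byte is zero, then subtract the model from every sample.
    n = signal.size
    x = np.linspace(-1.0, 1.0, n)
    templates = np.polynomial.legendre.legvander(x, order)  # shape (n, order + 1)
    good = flags == 0
    if good.sum() < order + 1:
        # Too few good samples to constrain the fit: leave data unchanged.
        return signal.copy()
    coeff, *_ = np.linalg.lstsq(templates[good], signal[good], rcond=None)
    return signal - templates @ coeff

rng = np.random.default_rng(0)
n = 100
x = np.linspace(-1.0, 1.0, n)
signal = 3.0 + 2.0 * x + 0.01 * rng.standard_normal(n)
flags = np.zeros(n, dtype=np.uint8)
flags[:10] = 1  # flagged samples are excluded from the fit
filtered = filter_polynomial_sketch(1, signal, flags)
```

Note that the model is still subtracted from the flagged samples; only the fit itself excludes them, mirroring the kernel's behavior.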
98 changes: 98 additions & 0 deletions src/toast/_libtoast/ops_filterbin.cpp
@@ -311,6 +311,104 @@ void build_template_covariance(std::vector <int64_t> & starts,
}
}

// Alternative implementation which uses the buffer extraction helper functions
// and is closer to the techniques we have been using for omp target offload
// kernels. Left here as a future starting point.
//
// The python calling code then also needs updated, for example:
//
// array_starts = np.array(templates.starts, dtype=np.int64)
// array_stops = np.array(templates.stops, dtype=np.int64)
// fgood = good.astype(np.float64)
// build_template_covariance(
// array_starts,
// array_stops,
// templates.templates,
// fgood,
// invcov,
// )
//
// void build_template_covariance(
// py::buffer starts,
// py::buffer stops,
// py::list templates,
// py::buffer good,
// py::buffer template_covariance
// ) {
// // The total number of templates
// int64_t n_template = templates.size();

// // This is used to return the actual shape of each buffer
// std::vector <int64_t> temp_shape(2);

// // The "good" array is the length of the full number of samples
// double * raw_good = extract_buffer <double> (
// good, "good", 1, temp_shape, {-1}
// );
// int64_t n_sample = temp_shape[0];

// // Start and stop samples of each template
// int64_t * raw_starts = extract_buffer <int64_t> (
// starts, "starts", 1, temp_shape, {n_template}
// );
// int64_t * raw_stops = extract_buffer <int64_t> (
// stops, "stops", 1, temp_shape, {n_template}
// );

// // Template covariance
// double * raw_template_cov = extract_buffer <double> (
// template_covariance,
// "template_covariance",
// 2,
// temp_shape,
// {n_template, n_template}
// );

// //#pragma omp parallel for private(temp_shape) schedule(static)
// for (int64_t row = 0; row < n_template; ++row) {
// int64_t row_start = raw_starts[row];
// int64_t row_stop = raw_stops[row];

// // Extract the row template
// double * raw_row = extract_buffer <double> (
// templates[row], "row", 1, temp_shape, {row_stop-row_start}
// );

// for (int64_t col = row; col < n_template; ++col) {
// int64_t col_start = raw_starts[col];
// int64_t col_stop = raw_stops[col];

// // Extract the column template
// double * raw_col = extract_buffer <double> (
// templates[col], "col", 1, temp_shape, {col_stop-col_start}
// );

// // The sample intersection
// int64_t istart = std::max(row_start, col_start);
// int64_t istop = std::min(row_stop, col_stop);

// double val = 0;
// if ((row == col) && (istop - istart <= 1)) {
// val = 1;
// }
// for (int64_t isample = istart; isample < istop; ++isample) {
// //std::cerr << "DBG: (" << row << "," << col << "," << isample << ")
// trow[" << isample << "-" << row_start << "]=" << raw_row[isample - row_start] << "
// tcol[" << isample << "-" << col_start << "]=" << raw_col[isample - col_start] << "
// g=" << raw_good[isample] << std::endl;
// val += raw_row[isample - row_start]
// * raw_col[isample - col_start]
// * raw_good[isample];
// }
// raw_template_cov[row * n_template + col] = val;
// if (row != col) {
// raw_template_cov[col * n_template + row] = val;
// }
// }
// }
// return;
// }

void accumulate_observation_matrix(py::array_t <double,
py::array::c_style | py::array::forcecast> c_obs_matrix,
py::array_t <int64_t, py::array::c_style | py::array::forcecast> c_pixels,
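The commented-out alternative above accumulates a dense template covariance from compactly stored templates, weighting each sample by the good mask. The same loop structure in plain Python (a sketch of the algorithm only, not the pybind11 buffer handling):

```python
def build_template_covariance_sketch(starts, stops, templates, good):
    # templates[i] covers samples starts[i]..stops[i]; `good` weights
    # each sample (0 = flagged, 1 = usable).
    n_template = len(templates)
    cov = [[0.0] * n_template for _ in range(n_template)]
    for row in range(n_template):
        for col in range(row, n_template):
            # Sample range where the two templates overlap.
            istart = max(starts[row], starts[col])
            istop = min(stops[row], stops[col])
            val = 0.0
            if row == col and istop - istart <= 1:
                val = 1.0  # regularize a degenerate diagonal entry
            for isample in range(istart, istop):
                val += (
                    templates[row][isample - starts[row]]
                    * templates[col][isample - starts[col]]
                    * good[isample]
                )
            cov[row][col] = val
            cov[col][row] = val  # the covariance is symmetric
    return cov

cov = build_template_covariance_sketch(
    starts=[0, 2],
    stops=[4, 6],
    templates=[[1.0] * 4, [1.0] * 4],
    good=[1.0] * 6,
)
```

Here the two unit templates overlap on samples 2 and 3, so the off-diagonal entry accumulates exactly that overlap.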
3 changes: 3 additions & 0 deletions src/toast/_libtoast/template_offset.cpp
@@ -18,6 +18,7 @@ void init_template_offset(py::module & m) {
int64_t amp_offset,
py::buffer n_amp_views,
py::buffer amplitudes,
+ py::buffer amplitude_flags,
int32_t data_index,
py::buffer det_data,
py::buffer intervals,
@@ -143,6 +144,7 @@
int64_t amp_offset,
py::buffer n_amp_views,
py::buffer amplitudes,
+ py::buffer amplitude_flags,
py::buffer intervals,
bool use_accel
) {
@@ -306,6 +308,7 @@
"template_offset_apply_diag_precond", [](
py::buffer offset_var,
py::buffer amplitudes_in,
+ py::buffer amplitude_flags,
py::buffer amplitudes_out,
bool use_accel
) {
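These bindings now thread an `amplitude_flags` buffer into the Offset template kernels so that cut amplitudes are skipped. A hedged Python sketch of how such flags would enter the projection from offset amplitudes to a timestream (the function name and fixed step layout are illustrative assumptions, not the actual kernel):

```python
import numpy as np

def offset_project_sketch(amplitudes, amplitude_flags, step_length, n_sample):
    # Each amplitude is a constant offset over `step_length` samples;
    # flagged amplitudes contribute nothing to the timestream.
    out = np.zeros(n_sample)
    for iamp, (amp, flg) in enumerate(zip(amplitudes, amplitude_flags)):
        if flg != 0:
            continue  # skip a cut amplitude
        start = iamp * step_length
        stop = min(start + step_length, n_sample)
        out[start:stop] += amp
    return out

tod = offset_project_sketch(
    amplitudes=[1.0, 2.0, 3.0],
    amplitude_flags=[0, 1, 0],
    step_length=2,
    n_sample=6,
)
```

With the middle amplitude flagged, its two samples stay at zero while the first and last steps receive their offsets.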
16 changes: 10 additions & 6 deletions src/toast/_libtoast/tod_filter.cpp
@@ -7,7 +7,9 @@


void sum_detectors(py::array_t <int64_t,
- py::array::c_style | py::array::forcecast> detectors,
+ py::array::c_style | py::array::forcecast> det_indx,
+ py::array_t <int64_t,
+ py::array::c_style | py::array::forcecast> flag_indx,
py::array_t <unsigned char,
py::array::c_style | py::array::forcecast> shared_flags,
unsigned char shared_flag_mask,
@@ -20,14 +22,15 @@ void sum_detectors(py::array_t <int64_t,
py::array::c_style | py::array::forcecast> sum_data,
py::array_t <int64_t,
py::array::c_style | py::array::forcecast> hits) {
- auto fast_detectors = detectors.unchecked <1>();
+ auto fast_detindx = det_indx.unchecked <1>();
+ auto fast_flagindx = flag_indx.unchecked <1>();
auto fast_shared_flags = shared_flags.unchecked <1>();
auto fast_det_data = det_data.unchecked <2>();
auto fast_det_flags = det_flags.unchecked <2>();
auto fast_sum_data = sum_data.mutable_unchecked <1>();
auto fast_hits = hits.mutable_unchecked <1>();

- size_t ndet = fast_detectors.shape(0);
+ size_t ndet = fast_detindx.shape(0);
size_t nsample = fast_det_data.shape(1);

size_t buflen = 10000;
@@ -38,11 +41,12 @@
size_t sample_start = ibuf * buflen;
size_t sample_stop = std::min(sample_start + buflen, nsample);
for (size_t idet = 0; idet < ndet; ++idet) {
- int64_t row = fast_detectors(idet);
+ int64_t datarow = fast_detindx(idet);
+ int64_t flagrow = fast_flagindx(idet);
for (size_t sample = sample_start; sample < sample_stop; ++sample) {
if ((fast_shared_flags(sample) & shared_flag_mask) != 0) continue;
- if ((fast_det_flags(row, sample) & det_flag_mask) != 0) continue;
- fast_sum_data(sample) += fast_det_data(row, sample);
+ if ((fast_det_flags(flagrow, sample) & det_flag_mask) != 0) continue;
+ fast_sum_data(sample) += fast_det_data(datarow, sample);
fast_hits(sample)++;
}
}
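The key fix in `sum_detectors` is that a detector's row in the data array and its row in the flag array are now looked up through separate index arrays. A NumPy sketch of the same co-addition logic (hypothetical helper name; the real kernel is the C++ binding above):

```python
import numpy as np

def sum_detectors_sketch(det_indx, flag_indx, shared_flags, shared_mask,
                         det_data, det_flags, det_mask):
    # Co-add detector data into one timestream, counting hits.  The data
    # row and flag row of each detector come from *separate* index
    # arrays, mirroring the det_indx / flag_indx split.
    nsample = det_data.shape[1]
    sum_data = np.zeros(nsample)
    hits = np.zeros(nsample, dtype=np.int64)
    for datarow, flagrow in zip(det_indx, flag_indx):
        bad = ((shared_flags & shared_mask) != 0) | (
            (det_flags[flagrow] & det_mask) != 0
        )
        sum_data[~bad] += det_data[datarow][~bad]
        hits[~bad] += 1
    return sum_data, hits

det_data = np.array([[1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0]])
det_flags = np.array([[0, 1, 0, 0], [0, 0, 0, 1]], dtype=np.uint8)
shared_flags = np.array([0, 0, 1, 0], dtype=np.uint8)
sum_data, hits = sum_detectors_sketch(
    [0, 1], [0, 1], shared_flags, 1, det_data, det_flags, 1
)
```

Sample 2 is cut for everyone by the shared flag, while samples 1 and 3 are each cut for one detector only.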
9 changes: 7 additions & 2 deletions src/toast/data.py
@@ -77,19 +77,24 @@ def clear(self):
self._internal.clear()
return

- def all_local_detectors(self, selection=None):
+ def all_local_detectors(self, selection=None, flagmask=0):
"""Get the superset of local detectors in all observations.
This builds up the result from calling `select_local_detectors()` on
all observations.
Args:
selection (list): Only consider this list of detectors
+ flagmask (int): Apply this det_mask to the detector selection in
+ each observation.
Returns:
(list): The list of all local detectors across all observations.
"""
all_dets = OrderedDict()
for ob in self.obs:
- dets = ob.select_local_detectors(selection=selection)
+ dets = ob.select_local_detectors(selection=selection, flagmask=flagmask)
for d in dets:
if d not in all_dets:
all_dets[d] = None
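`all_local_detectors` now forwards a `flagmask` to each observation's `select_local_detectors`. A sketch of the union logic with a stand-in observation class (the `FakeObs` class and its keep-when-`flags & mask == 0` convention are assumptions made for illustration):

```python
from collections import OrderedDict

class FakeObs:
    # Stand-in for a toast Observation, exposing only the method used here.
    def __init__(self, dets, flags):
        self._dets = dets
        self._flags = flags

    def select_local_detectors(self, selection=None, flagmask=0):
        out = []
        for d in self._dets:
            if selection is not None and d not in selection:
                continue
            if (self._flags[d] & flagmask) != 0:
                continue  # detector cut by the flag mask
            out.append(d)
        return out

def all_local_detectors_sketch(observations, selection=None, flagmask=0):
    # Union of per-observation detector lists, preserving first-seen order.
    all_dets = OrderedDict()
    for ob in observations:
        for d in ob.select_local_detectors(selection=selection, flagmask=flagmask):
            all_dets.setdefault(d, None)
    return list(all_dets)

obs = [
    FakeObs(["d0", "d1"], {"d0": 0, "d1": 2}),
    FakeObs(["d1", "d2"], {"d1": 0, "d2": 0}),
]
dets = all_local_detectors_sketch(obs, flagmask=2)
```

Detector `d1` is cut in the first observation but unflagged in the second, so it still appears in the union.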
43 changes: 24 additions & 19 deletions src/toast/noise.py
@@ -233,28 +233,33 @@ def _detector_weight(self, det):
first = max(0, first - 1)
last = min(freq.size - 1, last + 1)
noisevar_mid = np.median(psd[first:last])
- # Noise in the end of the PSD
- first = np.searchsorted(freq, rate * 0.45, side="left")
- last = np.searchsorted(freq, rate * 0.50, side="right")
- if first == last:
- first = max(0, first - 1)
- last = min(freq.size - 1, last + 1)
- noisevar_end = np.median(psd[first:last])
- if noisevar_end / noisevar_mid < 0.5:
- # There is a transfer function roll-off. Measure the
- # white noise plateau value
- first = np.searchsorted(freq, rate * 0.2, side="left")
- last = np.searchsorted(freq, rate * 0.4, side="right")
- else:
- # No significant roll-off. Use the last PSD bins for
- # the noise variance
- first = np.searchsorted(freq, rate * 0.45, side="left")
- last = np.searchsorted(freq, rate * 0.50, side="right")
- if first == last:
- first = max(0, first - 1)
- last = min(freq.size - 1, last + 1)
- noisevar = np.median(psd[first:last])
- invvar = (1.0 / noisevar / rate).decompose()
+ if noisevar_mid == 0:
+ # This detector is flagged in the noise model.
+ # Set its weight to zero.
+ invvar = 0.0 * (1.0 / u.K**2)
+ else:
+ # Noise in the end of the PSD
+ first = np.searchsorted(freq, rate * 0.45, side="left")
+ last = np.searchsorted(freq, rate * 0.50, side="right")
+ if first == last:
+ first = max(0, first - 1)
+ last = min(freq.size - 1, last + 1)
+ noisevar_end = np.median(psd[first:last])
+ if noisevar_end / noisevar_mid < 0.5:
+ # There is a transfer function roll-off. Measure the
+ # white noise plateau value
+ first = np.searchsorted(freq, rate * 0.2, side="left")
+ last = np.searchsorted(freq, rate * 0.4, side="right")
+ else:
+ # No significant roll-off. Use the last PSD bins for
+ # the noise variance
+ first = np.searchsorted(freq, rate * 0.45, side="left")
+ last = np.searchsorted(freq, rate * 0.50, side="right")
+ if first == last:
+ first = max(0, first - 1)
+ last = min(freq.size - 1, last + 1)
+ noisevar = np.median(psd[first:last])
+ invvar = (1.0 / noisevar / rate).decompose()
for det in self._dets_for_keys[k]:
self._detweights[det] += mix[det][k] * invvar
return self._detweights[det]
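The restructured branch first treats a zero mid-band PSD as a flagged detector (weight zero), and only then decides between the roll-off plateau and the last PSD bins for the white-noise estimate. A unit-free sketch of that decision logic (the mid-band limits of 0.20–0.40 of the sample rate and the dropped astropy-units handling are assumptions here):

```python
import numpy as np

def detector_inverse_variance_sketch(freq, psd, rate):
    def median_band(flo, fhi):
        # Median PSD between flo*rate and fhi*rate, widening an empty band.
        first = np.searchsorted(freq, rate * flo, side="left")
        last = np.searchsorted(freq, rate * fhi, side="right")
        if first == last:
            first = max(0, first - 1)
            last = min(freq.size - 1, last + 1)
        return np.median(psd[first:last])

    noisevar_mid = median_band(0.20, 0.40)  # mid-band limits assumed
    if noisevar_mid == 0:
        return 0.0  # detector is flagged in the noise model
    noisevar_end = median_band(0.45, 0.50)
    if noisevar_end / noisevar_mid < 0.5:
        # Transfer-function roll-off: measure the white-noise plateau.
        noisevar = median_band(0.20, 0.40)
    else:
        # No significant roll-off: use the last PSD bins.
        noisevar = median_band(0.45, 0.50)
    return 1.0 / (noisevar * rate)

rate = 100.0
freq = np.linspace(0.1, 50.0, 500)
flat = detector_inverse_variance_sketch(freq, np.full(500, 2.0), rate)
cut = detector_inverse_variance_sketch(freq, np.zeros(500), rate)
```

The zero-PSD guard is what prevents a flagged detector from dividing by zero in the ratio test, which was the failure mode the reordering fixes.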
