Commit

Merge branch 'master' of https://github.com/eqcorrscan/EQcorrscan

calum-chamberlain committed Aug 9, 2022
2 parents 5e0d861 + c822f57 commit 72e769d
Showing 29 changed files with 518 additions and 229 deletions.
4 changes: 2 additions & 2 deletions .github/test_conda_env.yml
@@ -5,7 +5,7 @@ channels:
dependencies:
- numpy>=1.12
- matplotlib>=1.3.0
- scipy>=0.18
- scipy>=0.18,<1.9.0 # Pinned due to scipy/obspy hanning renaming
- mock
- obspy>=1.3.0
- h5py
@@ -17,7 +17,7 @@ dependencies:
- pytest-pep8
- pytest-xdist
- pytest-rerunfailures
- pytest-mpl
- pytest-mpl<0.16.0
- codecov
- pip
- pip:
4 changes: 2 additions & 2 deletions .github/test_conda_env_macOS.yml
@@ -14,7 +14,7 @@ dependencies:
- llvm-openmp>=4.0.1
- numpy>=1.12
- matplotlib>=1.3.0
- scipy>=0.18
- scipy>=0.18,<1.9.0 # Pinned due to scipy/obspy hanning renaming
- mock
- obspy>=1.3.0
- h5py<3.2 # Issue with dep resolution: https://github.com/conda-forge/h5py-feedstock/issues/92
@@ -26,7 +26,7 @@ dependencies:
- pytest-pep8
- pytest-xdist
- pytest-rerunfailures
- pytest-mpl
- pytest-mpl<0.16.0
- codecov
- pip
- pip:
2 changes: 1 addition & 1 deletion .github/workflows/runtest.yml
@@ -65,7 +65,7 @@ jobs:
- name: run main test suite
shell: bash -l {0}
run: |
py.test -n 2 -m "not serial and not network and not superslow" --cov-report=xml
py.test -n 2 -m "not serial and not network and not superslow" --cov-report=xml --dist loadscope
- name: run serial test
if: always()
18 changes: 18 additions & 0 deletions CHANGES.md
@@ -1,6 +1,17 @@
## Current
* core.match_filter
- Bug-fix: `peak_cores` could be defined twice in `_group_detect` through kwargs.
Fix: only update `peak_cores` if it is not already set.
* core.match_filter.tribe
- Detect now allows passing of pre-processed data
* core.match_filter.template
- Remove duplicate detections from overlapping windows using `._uniq()`
* core.lag_calc._xcorr_interp
- CC-interpolation replaced with resampling (more robust); the old method is
deprecated. Use the new method by passing `use_new_resamp_method=True` as a keyword argument.
* core.lag_calc:
- Fixed bug where minimum CC defined via min_cc_from_mean_cc_factor was not
set correctly for negative correlation sums.
* utils.correlate
- Fast Matched Filter now supported natively for version >= 1.4.0
- Only full correlation stacks are returned now (e.g. where fewer than
@@ -21,6 +32,13 @@
the old parallelization strategy across traces.
- Now includes `all_horiz`-option that will correlate all matching horizontal
channels no matter to which of these the S-pick is linking.
* utils.clustering
- Allow indirect comparison of event waveforms (i.e., events without
matching traces can be compared indirectly via a third event).
- Allow setting the clustering method, metric, and sort_order from
scipy.cluster.hierarchy.linkage.
* tribe, template, template_gen, archive_read, clustering: remove option to read
from seishub (deprecated in obspy).
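The new clustering options map directly onto scipy's hierarchical clustering API. A minimal sketch, with hypothetical dissimilarity values (1 - CC) for four events, of what choosing a linkage `method` and cutting the resulting tree looks like:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical condensed dissimilarity matrix for 4 events,
# ordered as pairs (0,1), (0,2), (0,3), (1,2), (1,3), (2,3):
dissim = np.array([0.10, 0.80, 0.75, 0.85, 0.90, 0.20])

# 'method' is one of the options the changelog now exposes
# (e.g. 'average', 'complete', 'single'):
Z = linkage(dissim, method='average')

# Cut the dendrogram at a dissimilarity of 0.3 -> two groups of events:
groups = fcluster(Z, t=0.3, criterion='distance')
```

Events 0 and 1 (and likewise 2 and 3) end up in the same group, since their pairwise dissimilarities sit below the 0.3 cut.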

## 0.4.3
* core.match_filter
1 change: 1 addition & 0 deletions CONTRIBUTORS.md
@@ -8,3 +8,4 @@
* Felix Halpaap
* Iman Kahbasi
* eQ Halauwet
* Glenn Nelson
2 changes: 1 addition & 1 deletion eqcorrscan/__init__.py
@@ -25,7 +25,7 @@

__all__ = ['core', 'utils', 'tutorials', 'tests']

__version__ = '0.4.3'
__version__ = '0.4.4'

# Cope with changes to name-space to remove most of the camel-case
_import_map = {}
54 changes: 44 additions & 10 deletions eqcorrscan/core/lag_calc.py
@@ -25,6 +25,7 @@
from eqcorrscan.core.match_filter.template import Template
from eqcorrscan.utils.plotting import plot_repicked

show_interp_deprec_warning = True

Logger = logging.getLogger(__name__)

@@ -43,14 +44,19 @@ def __str__(self):
return 'LagCalcError: ' + self.value


def _xcorr_interp(ccc, dt):
def _xcorr_interp(ccc, dt, resample_factor=10, use_new_resamp_method=False,
**kwargs):
"""
Interpolate around the maximum correlation value for sub-sample precision.
Resample correlation-trace and check if there is a better CCC peak for
sub-sample precision.
:param ccc: Cross-correlation array
:type ccc: numpy.ndarray
:param dt: sample interval
:type dt: float
:param resample_factor:
Factor for upsampling CC-values (only for use_new_resamp_method=True)
:type resample_factor: int
:return: Position of interpolated maximum in seconds from start of ccc
:rtype: float
@@ -59,6 +65,32 @@ def _xcorr_interp(ccc, dt):
cc = ccc[0]
else:
cc = ccc

# New method with resampling - make this the default in a future version
if use_new_resamp_method:
cc_resampled = scipy.signal.resample(cc, len(cc) * resample_factor + 1)
dt_resampled = dt / resample_factor
cc_t = np.arange(0, len(cc_resampled) * dt_resampled, dt_resampled)
peak_index = cc_resampled.argmax()
cc_peak = max(cc_resampled)

shift = cc_t[peak_index]
if (cc_peak < np.amax(cc) or cc_peak > 1.0 or
not 0 < shift < len(ccc) * dt):
# Sometimes the interpolation returns a worse result.
Logger.warning("Interpolation did not give an accurate result, "
"returning maximum in data")
return np.argmax(ccc) * dt, np.amax(ccc)
return shift, cc_peak

# Otherwise use old interpolation method, but warn with deprecation message
# (but show it only once):
global show_interp_deprec_warning
if show_interp_deprec_warning:
Logger.warning(
'This method for interpolating cross-correlations is deprecated, '
'use a more robust method with use_new_resamp_method=True')
show_interp_deprec_warning = False
# Code borrowed from obspy.signal.cross_correlation.xcorr_pick_correction
cc_curvature = np.concatenate((np.zeros(1), np.diff(cc, 2), np.zeros(1)))
cc_t = np.arange(0, len(cc) * dt, dt)
@@ -191,7 +223,8 @@ def xcorr_pick_family(family, stream, shift_len=0.2, min_cc=0.4,
min_cc_from_mean_cc_factor=None,
horizontal_chans=['E', 'N', '1', '2'],
vertical_chans=['Z'], cores=1, interpolate=False,
plot=False, plotdir=None, export_cc=False, cc_dir=None):
plot=False, plotdir=None, export_cc=False, cc_dir=None,
**kwargs):
"""
Compute cross-correlation picks for detections in a family.
@@ -273,8 +306,9 @@ def xcorr_pick_family(family, stream, shift_len=0.2, min_cc=0.4,
checksum, cccsum, used_chans = 0.0, 0.0, 0
event = Event()
if min_cc_from_mean_cc_factor is not None:
cc_thresh = min(detection.detect_val / detection.no_chans
* min_cc_from_mean_cc_factor, min_cc)
cc_thresh = min(abs(detection.detect_val / detection.no_chans
* min_cc_from_mean_cc_factor),
min_cc)
Logger.info('Setting minimum cc-threshold for detection %s to %s',
detection.id, str(cc_thresh))
else:
@@ -285,7 +319,7 @@ def xcorr_pick_family(family, stream, shift_len=0.2, min_cc=0.4,
tr = detect_stream.select(
station=stachan.channel[0], channel=stachan.channel[1])[0]
if interpolate:
shift, cc_max = _xcorr_interp(correlation, dt=delta)
shift, cc_max = _xcorr_interp(correlation, dt=delta, **kwargs)
else:
cc_max = np.amax(correlation)
shift = np.argmax(correlation) * delta
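The corrected `cc_thresh` logic above can be exercised in isolation. A minimal sketch, with a hypothetical helper name, of why the added `abs()` matters when a detection's correlation sum is negative:

```python
def cc_threshold(detect_val, no_chans, factor, min_cc):
    """Sketch of the corrected threshold logic in xcorr_pick_family.

    Without abs(), a negative detection sum would give a negative
    threshold that every correlation value passes; abs() restores a
    sensible per-detection minimum CC, capped at min_cc.
    """
    return min(abs(detect_val / no_chans * factor), min_cc)

# A detection made on a negative correlation sum:
thresh = cc_threshold(detect_val=-2.0, no_chans=10, factor=1.0, min_cc=0.4)
# -> 0.2, rather than the always-passing -0.2 the old code produced
```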
@@ -387,7 +421,7 @@ def _prepare_data(family, detect_data, shift_len):
length = round(length_samples) / family.template.samp_rate
Logger.info("Setting length to {0}s to give an integer number of "
"samples".format(length))
prepick = shift_len
prepick = shift_len + family.template.prepick
detect_streams_dict = family.extract_streams(
stream=detect_data, length=length, prepick=prepick)
for key, detect_stream in detect_streams_dict.items():
@@ -419,7 +453,7 @@ def lag_calc(detections, detect_data, template_names, templates,
shift_len=0.2, min_cc=0.4, min_cc_from_mean_cc_factor=None,
horizontal_chans=['E', 'N', '1', '2'],
vertical_chans=['Z'], cores=1, interpolate=False,
plot=False, plotdir=None, export_cc=False, cc_dir=None):
plot=False, plotdir=None, export_cc=False, cc_dir=None, **kwargs):
"""
Cross-correlation derived picking of seismic events.
@@ -557,7 +591,7 @@ def lag_calc(detections, detect_data, template_names, templates,
detections=template_detections,
template=Template(
name=template_name, st=template,
samp_rate=template[0].stats.sampling_rate))
samp_rate=template[0].stats.sampling_rate, prepick=0.0))
# Make a sparse template
if len(template_detections) > 0:
template_dict = xcorr_pick_family(
Expand All @@ -566,7 +600,7 @@ def lag_calc(detections, detect_data, template_names, templates,
horizontal_chans=horizontal_chans,
vertical_chans=vertical_chans, interpolate=interpolate,
cores=cores, shift_len=shift_len, plot=plot, plotdir=plotdir,
export_cc=export_cc, cc_dir=cc_dir)
export_cc=export_cc, cc_dir=cc_dir, **kwargs)
initial_cat.update(template_dict)
# Order the catalogue to match the input
output_cat = Catalog()
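The new resampling branch of `_xcorr_interp` can be tried standalone. A minimal sketch, assuming scipy is available and using a synthetic correlation trace, that mirrors the logic above, including the fallback to the raw maximum when resampling gives a worse result:

```python
import numpy as np
from scipy.signal import resample


def subsample_cc_peak(cc, dt, resample_factor=10):
    """Locate the CC maximum with sub-sample precision by resampling.

    Mirrors the use_new_resamp_method branch: upsample the correlation
    trace, take its maximum, and fall back to the raw maximum if the
    resampled peak looks unphysical (below the raw max, above 1.0, or
    outside the trace).
    """
    cc_resampled = resample(cc, len(cc) * resample_factor + 1)
    dt_resampled = dt / resample_factor
    peak_index = int(np.argmax(cc_resampled))
    cc_peak = float(cc_resampled[peak_index])
    shift = peak_index * dt_resampled
    if cc_peak < np.amax(cc) or cc_peak > 1.0 or not 0 < shift < len(cc) * dt:
        # Resampling can occasionally give a worse result; fall back.
        return float(np.argmax(cc)) * dt, float(np.amax(cc))
    return shift, cc_peak


# Synthetic correlation trace whose true peak falls between samples:
t = np.arange(0, 1.0, 0.01)
cc = np.exp(-((t - 0.505) ** 2) / (2 * 0.05 ** 2))
shift, peak = subsample_cc_peak(cc, dt=0.01)
```

With a 0.01 s sample interval the raw maximum can only land on a sample, while the resampled estimate recovers a shift much closer to the true 0.505 s peak.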
7 changes: 0 additions & 7 deletions eqcorrscan/core/match_filter/detection.py
@@ -152,13 +152,6 @@ def __gt__(self, other):
def __ge__(self, other):
return not self.__lt__(other)

def __hash__(self):
"""
Cannot hash Detection objects, they may change.
:return: 0
"""
return 0

def __ne__(self, other):
return not self.__eq__(other)

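The removal above leans on standard Python data-model behaviour: the constant-0 `__hash__` forced every Detection into the same hash bucket, and deleting it (while `__eq__` remains defined) makes instances unhashable, which is the safe default for mutable objects. A minimal sketch, with a hypothetical stand-in class:

```python
class Mutable:
    """Stand-in for a mutable object such as a Detection."""

    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, Mutable) and self.value == other.value
    # Defining __eq__ without __hash__ sets __hash__ to None, so
    # instances cannot be used in sets or as dict keys.


try:
    hash(Mutable(1))
    hashable = True
except TypeError:
    hashable = False
```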
5 changes: 3 additions & 2 deletions eqcorrscan/core/match_filter/family.py
@@ -304,7 +304,8 @@ def _uniq(self):
.. rubric:: Example
>>> from eqcorrscan import Template, Detection
>>> from eqcorrscan import Template, Detection, Family
>>> from obspy import UTCDateTime
>>> family = Family(
... template=Template(name='a'), detections=[
... Detection(template_name='a', detect_time=UTCDateTime(0),
@@ -618,7 +619,7 @@ def lag_calc(self, stream, pre_processed, shift_len=0.2, min_cc=0.4,
min_cc_from_mean_cc_factor=min_cc_from_mean_cc_factor,
vertical_chans=vertical_chans, cores=cores,
interpolate=interpolate, plot=plot, plotdir=plotdir,
export_cc=export_cc, cc_dir=cc_dir)
export_cc=export_cc, cc_dir=cc_dir, **kwargs)
catalog_out = Catalog([ev for ev in picked_dict.values()])
for detection_id, event in picked_dict.items():
for pick in event.picks:
1 change: 1 addition & 0 deletions eqcorrscan/core/match_filter/matched_filter.py
@@ -203,6 +203,7 @@ def _group_detect(templates, stream, threshold, threshold_type, trig_int,
n_groups += 1
else:
n_groups = 1
kwargs.update({'peak_cores': kwargs.get('peak_cores', process_cores)})
for st_chunk in streams:
chunk_start, chunk_end = (min(tr.stats.starttime for tr in st_chunk),
max(tr.stats.endtime for tr in st_chunk))
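The one-line fix above is a defaulting pattern for forwarded kwargs: set `peak_cores` from `process_cores` only when the caller has not supplied it. A minimal standalone sketch, with a hypothetical function name:

```python
def group_detect(process_cores=1, **kwargs):
    """Default peak_cores to process_cores without clobbering a
    caller-supplied value -- mirrors the _group_detect fix."""
    kwargs.update({'peak_cores': kwargs.get('peak_cores', process_cores)})
    return kwargs['peak_cores']


default_cores = group_detect(process_cores=4)                 # falls back to 4
explicit_cores = group_detect(process_cores=4, peak_cores=2)  # stays 2
```

`kwargs.setdefault('peak_cores', process_cores)` would achieve the same thing; the `update`/`get` form matches the diff.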
12 changes: 7 additions & 5 deletions eqcorrscan/core/match_filter/template.py
@@ -520,7 +520,10 @@ def detect(self, stream, threshold, threshold_type, trig_int,
parallel_process=parallel_process, xcorr_func=xcorr_func,
concurrency=concurrency, cores=cores, ignore_length=ignore_length,
overlap=overlap, full_peaks=full_peaks, **kwargs)
return party[0]
family = party[0]
# Remove duplicates
family.detections = family._uniq().detections
return family

def construct(self, method, name, lowcut, highcut, samp_rate, filt_order,
length, prepick, swin="all", process_len=86400,
@@ -532,8 +535,8 @@ def construct(self, method, name, lowcut, highcut, samp_rate, filt_order,
:param method:
Method to make the template, the only available method is:
`from_sac`. For all other methods (`from_seishub`, `from_client`
and `from_meta_file`) use `Tribe.construct()`.
`from_sac`. For all other methods (`from_client` and
`from_meta_file`) use `Tribe.construct()`.
:type method: str
:type name: str
:param name: Name for the template
@@ -635,8 +638,7 @@
Tribe.construct instead.
"""
if method in ['from_meta_file', 'from_seishub', 'from_client',
'multi_template_gen']:
if method in ['from_meta_file', 'from_client', 'multi_template_gen']:
raise NotImplementedError('Method is not supported, '
'use Tribe.construct instead.')
streams, events, process_lengths = template_gen.template_gen(
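`Template.detect` above now strips duplicate detections from overlapping windows via `Family._uniq()`. The core of such a de-duplication is an order-preserving filter keyed on the detection's identity. A minimal generic sketch (a hypothetical helper, not EQcorrscan's implementation):

```python
def uniq(items, key=lambda item: item):
    """Order-preserving de-duplication, keeping the first occurrence."""
    seen = set()
    out = []
    for item in items:
        k = key(item)
        if k not in seen:
            seen.add(k)
            out.append(item)
    return out


# Detections repeated across overlapping windows, keyed here by
# (template_name, detect_time) tuples:
detections = [("a", 0.0), ("a", 1.5), ("a", 0.0), ("b", 1.5)]
unique = uniq(detections)
```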
9 changes: 2 additions & 7 deletions eqcorrscan/core/match_filter/tribe.py
@@ -924,8 +924,8 @@ def construct(self, method, lowcut, highcut, samp_rate, filt_order,
:type method: str
:param method:
Method of Tribe generation. Possible options are: `from_client`,
`from_seishub`, `from_meta_file`. See below on the additional
required arguments for each method.
`from_meta_file`. See below on the additional required arguments
for each method.
:type lowcut: float
:param lowcut:
Low cut (Hz), if set to None will not apply a lowcut
@@ -999,11 +999,6 @@
:param `obspy.core.event.Catalog` catalog:
Catalog of events to generate template for
:param float data_pad: Pad length for data-downloads in seconds
- `from_seishub` requires:
:param str url: url to seishub database
:param `obspy.core.event.Catalog` catalog:
Catalog of events to generate template for
:param float data_pad: Pad length for data-downloads in seconds
- `from_meta_file` requires:
:param str meta_file:
Path to obspy-readable event file, or an obspy Catalog