4.6.3.1.3. match_filter.Party

class eqcorrscan.core.match_filter.Party(families=None)[source]

Bases: object

Container for multiple Family objects.
Methods

copy()
    Returns a copy of the Party.
decluster(trig_int[, timing, metric])
    De-cluster a Party of detections by enforcing a detection separation.
get_catalog()
    Get an obspy Catalog object from the Party.
lag_calc(stream, pre_processed[, shift_len, ...])
    Compute picks based on cross-correlation alignment.
min_chans(min_chans)
    Remove detections with fewer channels used than min_chans.
read([filename])
    Read a Party from a file.
rethreshold(new_threshold[, new_threshold_type])
    Remove detections from the Party that are below a new threshold.
sort()
    Sort the families by template name.
write(filename[, format])
    Write the Party out, selecting the output format.
copy()[source]

Returns a copy of the Party.

Returns: Copy of the Party.

Example

>>> party = Party(families=[Family(template=Template(name='a'))])
>>> party_b = party.copy()
>>> party == party_b
True
decluster(trig_int, timing='detect', metric='avg_cor')[source]

De-cluster a Party of detections by enforcing a detection separation.
De-clustering occurs between events detected by different (or the same) templates. If multiple detections occur within trig_int then the preferred detection will be determined by the metric argument. This can be either the average single-station correlation coefficient which is calculated as Detection.detect_val / Detection.no_chans, or the raw cross channel correlation sum which is simply Detection.detect_val.
Parameters: - trig_int (float) Minimum detection separation in seconds.
- metric (str) What metric to sort peaks by. Either ‘avg_cor’ which takes the single station average correlation or ‘cor_sum’ which takes the total correlation sum across all channels.
- timing (str) Either ‘detect’ or ‘origin’ to decluster based on either the detection time or the origin time.
Warning
Works in place on the object; if you need to keep the original, run this method on a copy.
Example
>>> party = Party().read()
>>> len(party)
4
>>> declustered = party.decluster(20)
>>> len(party)
3
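The preference rule described above can be sketched in plain Python. This is an illustration only, not EqCorrScan's implementation: the detection tuples and the `preferred` helper are hypothetical stand-ins for real Detection objects.

```python
# Illustrative sketch of the decluster preference rule described above.
# Each detection is (time_seconds, detect_val, no_chans); names are hypothetical.
def preferred(detections, trig_int, metric="avg_cor"):
    """Greedily keep the best detection within each trig_int window."""
    def score(d):
        _, detect_val, no_chans = d
        # 'avg_cor' = Detection.detect_val / Detection.no_chans;
        # 'cor_sum' = raw Detection.detect_val.
        return detect_val / no_chans if metric == "avg_cor" else detect_val

    kept = []
    for det in sorted(detections, key=score, reverse=True):
        if all(abs(det[0] - k[0]) >= trig_int for k in kept):
            kept.append(det)
    return sorted(kept)

dets = [(0.0, 6.0, 3), (5.0, 8.0, 10), (40.0, 4.5, 5)]
# Within 20 s, (0.0, ...) has avg_cor 2.0 > (5.0, ...)'s 0.8, so it wins.
print(preferred(dets, trig_int=20))
```

Note how the choice of metric changes the survivor: with metric='cor_sum', the detection at 5.0 s (raw sum 8.0) would be preferred instead.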
get_catalog()[source]

Get an obspy Catalog object from the Party.
Returns: obspy.core.event.Catalog
Example
>>> party = Party().read()
>>> cat = party.get_catalog()
>>> print(len(cat))
4
lag_calc(stream, pre_processed, shift_len=0.2, min_cc=0.4, horizontal_chans=['E', 'N', '1', '2'], vertical_chans=['Z'], cores=1, interpolate=False, plot=False, parallel=True, overlap='calculate', debug=0)[source]

Compute picks based on cross-correlation alignment.
Parameters: - stream (obspy.core.stream.Stream) All the data needed to cut from - can be a gappy Stream.
- pre_processed (bool) Whether the stream has been pre-processed or not to match the templates. See note below.
- shift_len (float) Shift length allowed for the pick in seconds, will be plus/minus this amount - default=0.2
- min_cc (float) Minimum cross-correlation value to be considered a pick, default=0.4.
- horizontal_chans (list) List of channel endings for horizontal-channels, on which S-picks will be made.
- vertical_chans (list) List of channel endings for vertical-channels, on which P-picks will be made.
- cores (int) Number of cores to use in parallel processing, defaults to one.
- interpolate (bool) Interpolate the correlation function to achieve sub-sample precision.
- plot (bool) Whether to generate a plot for every detection, defaults to False.
- parallel (bool) Turn parallel processing on or off.
- overlap (float) Either None, 'calculate', or a float number of seconds to overlap detection streams by. This counters the effects of delay-and-stack in calculating cross-correlation sums. Setting overlap='calculate' will work out the appropriate overlap based on the maximum lags within templates.
- debug (int) Debug output level, 0-5 with 5 being the most output.
Returns: Catalog of events with picks. No origin information is included. These events can then be written out via obspy.core.event.Catalog.write(), or to Nordic S-files using eqcorrscan.utils.sfile_util.eventtosfile() and located externally.

Return type: obspy.core.event.Catalog

Note
Note on pre-processing: you can provide a pre-processed stream, which may be beneficial for detections over large time periods (the stream can have gaps, which reduces memory usage). However, in this case the processing steps are not checked, so you must ensure that all the templates in the Party have the same sampling rate and filtering as the stream. If pre-processing has not been done, the data will be processed according to the parameters in the templates; templates will be grouped by processing parameters and run with similarly processed data, so templates do not all need to share the same processing parameters.
Note
Picks are corrected for the template pre-pick time.
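The alignment idea behind shift_len and min_cc can be sketched with NumPy. This is a simplified, hypothetical illustration of cross-correlation pick refinement, not EqCorrScan's internal routine; the shift here is in samples rather than seconds (multiply seconds by the sampling rate to convert), and best_shift is an invented helper name.

```python
import numpy as np

def best_shift(template, data, shift_len_samples, min_cc):
    """Find the shift (in samples) that best aligns template within data.

    Returns (shift, cc), or None if the peak correlation is below min_cc,
    mirroring the role of the min_cc argument above.
    """
    n = len(template)
    ccs = []
    for shift in range(-shift_len_samples, shift_len_samples + 1):
        start = shift_len_samples + shift
        chunk = data[start:start + n]
        cc = np.corrcoef(template, chunk)[0, 1]  # normalised correlation
        ccs.append((cc, shift))
    cc, shift = max(ccs)
    return (shift, cc) if cc >= min_cc else None

rng = np.random.default_rng(42)
template = rng.standard_normal(100)
# Embed the template 3 samples later than the nominal pick time.
data = rng.standard_normal(120)
data[13:113] = template
shift, cc = best_shift(template, data, shift_len_samples=10, min_cc=0.4)
print(shift, round(cc, 2))  # finds the 3-sample shift with correlation ~1.0
```

The refined pick time is the nominal time plus the recovered shift divided by the sampling rate; picks below min_cc are discarded rather than shifted.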
min_chans(min_chans)[source]

Remove detections with fewer channels used than min_chans.
Parameters: min_chans (int) Minimum number of channels to allow a detection.

Returns: Party

Note
Works in place on Party.
Example
>>> party = Party().read()
>>> print(len(party))
4
>>> party = party.min_chans(5)
>>> print(len(party))
1
read(filename=None)[source]

Read a Party from a file.
Parameters: filename (str) File to read from.

Example
>>> Party().read()
Party of 4 Families.
rethreshold(new_threshold, new_threshold_type='MAD')[source]

Remove detections from the Party that are below a new threshold. Note that the threshold can only be raised, not lowered.
Warning
Works in place on Party.
Parameters: - new_threshold (float) New threshold level.

- new_threshold_type (str) Type of threshold, one of 'MAD', 'absolute' or 'av_chan_corr'.

Example
Using the MAD threshold on detections made using the MAD threshold:
>>> party = Party().read()
>>> len(party)
4
>>> party = party.rethreshold(10.0)
>>> len(party)
4
>>> # Note that all detections are self detections
Using the absolute thresholding method on the same Party:
>>> party = Party().read().rethreshold(6.0, 'absolute')
>>> len(party)
1
Using the av_chan_corr method on the same Party:
>>> party = Party().read().rethreshold(0.9, 'av_chan_corr')
>>> len(party)
4
sort()[source]

Sort the families by template name.
Example
>>> party = Party(families=[Family(template=Template(name='b')),
...                         Family(template=Template(name='a'))])
>>> party[0]
Family of 0 detections from template b
>>> party.sort()[0]
Family of 0 detections from template a
write(filename, format='tar')[source]

Write the Party out, selecting the output format.
Parameters: - filename (str) Path of the file to write to.

- format (str) Output format: 'tar', 'csv', or a catalog format such as 'quakeml'.

Note
csv format will write out detection objects, all other outputs will write the catalog. These cannot be rebuilt into a Family object. The only format that can be read back into Family objects is the ‘tar’ type.
Note
We recommend writing to the ‘tar’ format, which will write out all the template information (wavefiles as miniseed and metadata) alongside the detections and store these in a tar archive. This is readable by other programs and maintains all information required for further study.
Example
>>> party = Party().read()
>>> party.write('test_tar_write', format='tar')
Writing family 0
Writing family 1
Writing family 2
Writing family 3
Party of 4 Families.
>>> party.write('test_csv_write.csv', format='csv')
Party of 4 Families.
>>> party.write('test_quakeml.ml', format='quakeml')
Party of 4 Families.