ms3 package¶
Submodules¶
ms3.annotations module¶
class ms3.annotations.Annotations(tsv_path=None, df=None, index_col=None, sep='\t', infer_types={}, logger_name='Annotations', level=None, **kwargs)[source]¶
Bases: object

dcml_double_re = re.compile('^(?P<first> (\\.? ((?P<globalkey>[a-gA-G](b*|\\#*))\\.)? …', re.VERBOSE)¶

dcml_re = re.compile('^(\\.? ((?P<globalkey>[a-gA-G](b*|\\#*))\\.)? ((?P<localkey>(b*|\\#*)(VII|VI|V|IV|III|II|I|vii|vi|v|iv|iii|ii|i))\\.)? …', re.VERBOSE)¶

property expanded¶
get_labels(staff=None, voice=None, label_type=None, positioning=True, decode=False, drop=False)[source]¶
Returns a list of harmony tags from the parsed score (see the usage sketch below).

Parameters
staff (int, optional) – Select harmonies from a given staff only. Pass staff=1 for the upper staff.
label_type ({0, 1, 2, 3, 'dcml', ...}, optional) – If MuseScore's harmony feature has been used, you can filter harmony types by passing 0 for unrecognized strings, 1 for Roman numeral analysis, 2 for Nashville numbers, 3 for encoded absolute chords, 'dcml' for labels from the DCML harmonic annotation standard, or any self-defined type that has been added to self.regex_dict through the use of self.infer_types().
positioning (bool, optional) – Set to True if you want to include information about how labels have been manually positioned.
decode (bool, optional) – Set to True if you don't want to keep labels in their original form as encoded by MuseScore (with root and bass as TPC (tonal pitch class), where C = 14).
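The following is a minimal usage sketch; the TSV file name is hypothetical and only parameters documented above are used.

from ms3.annotations import Annotations

# Hypothetical TSV file containing harmony labels
annotations = Annotations(tsv_path='my_piece_labels.tsv')
# DCML labels from the upper staff, without manual positioning information
dcml_labels = annotations.get_labels(staff=1, label_type='dcml', positioning=False)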
ms3.bs4_measures module¶
class ms3.bs4_measures.MeasureList(df, section_breaks=True, secure=False, reset_index=True, logger_name='MeasureList', level=None)[source]¶
Bases: object
Turns a _MSCX_bs4._measures DataFrame into a measure list and performs a couple of consistency checks on the score.

df¶
The input DataFrame from _MSCX_bs4.raw_measures

section_breaks¶
By default, section breaks allow for several anacrusis measures within the piece (relevant for the mc_offset column) and make it possible to omit a repeat sign in the following bar (relevant for the next column). Set to False if you want to ignore section breaks.
Type: bool, default True

secure¶
By default, measure information from lower staves is considered to contain only redundant information. Set to True if you want to be warned about additional measure information from lower staves that is not taken into account.
Type: bool, default False

reset_index¶
By default, the original index of df is replaced. Pass False to keep the original index values.
Type: bool, default True

level¶
Pass a level name for which (and above which) you want to see log records.
Type: {'W', 'D', 'I', 'E', 'C', 'WARNING', 'DEBUG', 'INFO', 'ERROR', 'CRITICAL'}, optional

ml¶
The measure list in the making; the final result.

volta_structure¶
Keys are the first MCs of volta groups; values are dictionaries of {volta_no: [mc1, mc2, …]}.

check_measure_numbers(mc_col='mc', mn_col='mn', act_dur='act_dur', mc_offset='mc_offset', dont_count='dont_count', numbering_offset='numbering_offset')[source]¶
get_unique_measure_list(**kwargs)[source]¶
Keep only the measure information from the first staff. Uses: keep_one_row_each()

Parameters
mc_col, staff_col (str, optional) – DataFrame columns where MC and staff can be found. The staff column is dropped.
secure (bool) – If the dropped rows contain additional information, set secure to True to be informed about the information lost by keep_one_row_each().
**kwargs – Additional parameters passed on to keep_one_row_each(). Ignored if secure=False.
ms3.bs4_measures.get_volta_structure(df, mc, volta_start, volta_length, frac_col=None)[source]¶
Uses: treat_group()
ms3.bs4_measures.keep_one_row_each(df, compress_col, differentiating_col, differentiating_val=None, ignore_columns=None, fillna=True, drop_differentiating=True)[source]¶
Eliminates duplicates in compress_col but warns about values within the dropped rows that diverge from those of the remaining rows. The differentiating_col serves to identify places where information gets lost during the process. (A toy example follows the parameter list.)
The result of this function is the same as df.drop_duplicates(subset=[compress_col]) if differentiating_val is None, and df[df[compress_col] == differentiating_val] otherwise, with the difference that only adjacent duplicates are eliminated.

Parameters
compress_col (str) – Column with duplicates (e.g. measure counts).
differentiating_col (str) – Column that differentiates duplicates (e.g. staff IDs).
differentiating_val (value, optional) – If you want to keep rows with a certain differentiating_col value, pass that value (e.g. a certain staff). Otherwise, the first row of every compress_col value is kept.
ignore_columns (Iterable, optional) – These columns are not checked.
fillna (bool, optional) – By default, missing values of kept rows are filled if the dropped rows contain one unique value in that particular column. Pass False to keep rows as they are.
drop_differentiating (bool, optional) – By default, the column that differentiates the compress_col is dropped. Pass False to prevent that.
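As an illustration, here is a toy example with made-up data; under the documented behaviour, a warning about the diverging timesig value in mc 2 would be expected.

import pandas as pd
from ms3.bs4_measures import keep_one_row_each

# Two staves report the same measure counts (mc); staff 2 diverges in mc 2.
df = pd.DataFrame({
    'mc':      [1, 1, 2, 2],
    'staff':   [1, 2, 1, 2],
    'timesig': ['4/4', '4/4', '3/4', '4/4'],
})
# Keep one row per mc; the diverging '4/4' of staff 2 in mc 2 should trigger a warning.
result = keep_one_row_each(df, compress_col='mc', differentiating_col='staff')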
ms3.bs4_measures.make_mn_col(df, dont_count, numbering_offset, name='mn')[source]¶
Compute measure numbers where one or two columns can influence the counting (see the sketch below).

Parameters
df (pd.DataFrame) – If no other parameters are given, every row is counted, starting from 1.
dont_count (str, optional) – This column has notna() values for measures where the MuseScore option "Exclude from bar count" is activated, NaN otherwise.
numbering_offset (str, optional) – This column holds the values of the MuseScore option "Add to bar number", which are added to this and all subsequent measures.
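A hypothetical toy input illustrating the two optional columns; the exact column requirements and the resulting numbers are inferred from the description above, not verified against the implementation.

import pandas as pd
from ms3.bs4_measures import make_mn_col

df = pd.DataFrame({
    'dont_count':       [1, None, None, None],   # mc 1 is a pickup bar excluded from the count
    'numbering_offset': [None, None, None, None],
})
mn = make_mn_col(df, dont_count='dont_count', numbering_offset='numbering_offset')
# Under the documented semantics, the pickup bar is not counted,
# so the measure numbers would read something like 0, 1, 2, 3.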
ms3.bs4_measures.make_next_col(df, mc_col='mc', repeats='repeats', volta_structure={}, section_breaks=None, name='next')[source]¶
Uses a NextColumnMaker object to create a column with all MCs that can follow each MC (e.g. due to repetitions).

Parameters
df (pandas.DataFrame) – Raw measure list.
mc_col, repeats (str, optional) – Column names.
volta_structure (dict, optional) – This parameter can be computed by get_volta_structure(). It is empty if there are no voltas in the piece.
section_breaks (str, optional) – If you pass the name of a column, the string 'section' is taken into account as ending a section and therefore potentially ending a repeated part even when the repeat sign is missing.

ms3.bs4_measures.make_offset_col(df, mc_col='mc', timesig='timesig', act_dur='act_dur', next_col='next', section_breaks=None, name='mc_offset')[source]¶
If one MN is composed of two MCs, the resulting column indicates the second MC's offset from the MN's beginning.

Parameters
mc_col, timesig, act_dur, next_col (str, optional) – Names of the required columns.
section_breaks (str, optional) – If you pass the name of a column, the string 'section' is taken into account as ending a section and therefore potentially ending a repeated part even when the repeat sign is missing.
ms3.bs4_parser module¶
ms3.bs4_parser.make_spanner_cols(df, spanner_types=None)[source]¶
From a raw chord list as returned by get_chords(spanners=True), create a DataFrame with spanner IDs for all chords, for every spanner type they are associated with.

Parameters
spanner_types (collection) – If this parameter is passed, only the listed spanner types (e.g. Slur or Pedal) are included.
ms3.expand_dcml module¶
This is the same code as in the corpora repo as copied on September 24, 2020 and then adapted.
class ms3.expand_dcml.SliceMaker[source]¶
Bases: object
This class serves for storing slice notation such as :3 in a variable or passing it as a function argument.

Examples

SM = SliceMaker()
some_function(slice_this, SM[3:8])

select_all = SM[:]
df.loc[select_all]
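The pattern behind such a class is tiny; a minimal sketch (not the library's exact code) looks like this:

class SliceMakerSketch:
    def __getitem__(self, item):
        # Return whatever slice (or index) was used so it can be stored and reused.
        return item

SM = SliceMakerSketch()
SM[3:8]   # slice(3, 8, None)
SM[:]     # slice(None, None, None)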
ms3.expand_dcml.changes2list(changes, sort=True)[source]¶
Splits a string of changes into a list of 4-tuples.

Example

>>> changes2list('+#7b5')
[('+#7', '+', '#', '7'), ('b5', '', 'b', '5')]

class ms3.expand_dcml.expand_labels(df, column, regex, groupby={'group_keys': False, 'level': 0}, cols={}, dropna=False, propagate=True, relative_to_global=False, chord_tones=False, absolute=False, all_in_c=False, logger_name='expand_labels')[source]¶
Bases: object
abs2rel_key(absolute, localkey, global_minor=False)[source]¶
Expresses a Roman numeral as a scale degree relative to a given localkey. The result changes depending on whether Roman numeral and localkey are interpreted within a global major or minor key. Uses: split_sd()

Examples

In a minor context, the key of II would appear within the key of vii as #III.

>>> abs2rel_key('iv', 'VI', global_minor=False)
'bvi'   # F minor expressed with respect to A major
>>> abs2rel_key('iv', 'vi', global_minor=False)
'vi'    # F minor expressed with respect to A minor
>>> abs2rel_key('iv', 'VI', global_minor=True)
'vi'    # F minor expressed with respect to Ab major
>>> abs2rel_key('iv', 'vi', global_minor=True)
'#vi'   # F minor expressed with respect to Ab minor

>>> abs2rel_key('VI', 'IV', global_minor=False)
'III'   # A major expressed with respect to F major
>>> abs2rel_key('VI', 'iv', global_minor=False)
'#III'  # A major expressed with respect to F minor
>>> abs2rel_key('VI', 'IV', global_minor=True)
'bIII'  # Ab major expressed with respect to F major
>>> abs2rel_key('VI', 'iv', global_minor=True)
'III'   # Ab major expressed with respect to F minor
changes2tpc(changes, numeral, minor=False, root_alterations=False)[source]¶
Given a numeral and changes, computes the intervals that the changes represent. Changes do not express absolute intervals but instead depend on the numeral and the mode. Uses: split_sd(), changes2list()

Parameters
changes (str) – A string of changes following the DCML harmony standard.
numeral (str) – Roman numeral. If it is preceded by accidentals, the parameter root_alterations determines whether these are taken into account.
minor (bool, optional) – Set to True if the numeral occurs in a minor context.
root_alterations (bool, optional) – Set to True if accidentals of the root should change the result.

chord2tpcs(chord, regex, **kwargs)[source]¶
Split a chord label into its features and apply features2tpcs(). Uses: features2tpcs()

Parameters
chord (str) – Chord label that can be split into the features ['numeral', 'form', 'figbass', 'changes', 'relativeroot'].
regex (re.Pattern) – Compiled regex with named groups for the five features.
**kwargs – Arguments for features2tpcs() (pass mc to show it in warnings!).
compute_chord_tones(df, bass_only=False, expand=False, cols={})[source]¶
Compute the chord tones for DCML harmony labels. They are returned as lists of tonal pitch classes in close position, starting with the bass note. The tonal pitch classes represent intervals relative to the local tonic: -2 = second below the tonic, -1 = fifth below the tonic, 0 = tonic, 1 = fifth above the tonic, 2 = second above the tonic, etc. The labels need to have undergone split_labels() and propagate_keys(). Pedal points are not taken into account. Uses: features2tpcs(), transform()

Parameters
df (pandas.DataFrame) – DataFrame containing DCML chord labels that have been split by split_labels() and where the keys have been propagated using propagate_keys(add_bool=True).
bass_only (bool, optional) – Pass True if you need only the bass note.
expand (bool, optional) – Pass True if you need chord tones and added tones in separate columns.
cols (dict, optional) – In case the column names for ['mc', 'numeral', 'form', 'figbass', 'changes', 'relativeroot', 'localkey', 'globalkey'] deviate, pass a dict such as {'mc': 'mc', 'numeral': 'numeral_col_name', 'form': 'form_col_name', 'figbass': 'figbass_col_name', 'changes': 'changes_col_name', 'relativeroot': 'relativeroot_col_name', 'localkey': 'localkey_col_name', 'globalkey': 'globalkey_col_name'}. You may also deactivate columns by setting them to None, e.g. {'changes': None}.

Returns
For every row of df, one tuple with chord tones, expressed as tonal pitch classes. If expand is True, the function returns a DataFrame with four columns: two with tuples for chord tones and added tones, one with the chord root, and one with the bass note.
features2tpcs(numeral, form=None, figbass=None, changes=None, relativeroot=None, key='C', minor=None, merge_tones=True, bass_only=False, mc=None)[source]¶
Given the features of a chord label, this function returns the chord tones in the order of the inversion, starting from the bass note. The tones are expressed as tonal pitch classes, where -1 = F, 0 = C, 1 = G, etc. Uses: str_is_minor(), name2tpc(), rn2tpc(), changes2list(), sort_tpcs()

Parameters
numeral (str) – Roman numeral of the chord's root.
form ({None, 'M', 'o', '+', '%'}, optional) – Indicates the chord type if not a major or minor triad (for which form is None). '%' and 'M' can only occur as tetrads, not as triads.
figbass ({None, '6', '64', '7', '65', '43', '2'}, optional) – Indicates the chord's inversion. Pass None for a triad in root position.
changes (str, optional) – Added steps such as '+6' or suspensions such as '4' or any combination such as (9+64). Numbers need to be in descending order.
relativeroot (str, optional) – Pass a Roman scale degree if numeral is to be applied to a different scale degree of the local key, as in 'V65/V'.
key (str or int, optional) – The local key expressed as the root's note name or a tonal pitch class. If it is a name and minor is None, uppercase means major and lowercase means minor. If it is a tonal pitch class, minor needs to be specified.
minor (bool, optional) – Pass True for minor and False for major. Can be omitted if key is a note name. This affects the calculation of chords related to III, VI and VII.
merge_tones (bool, optional) – Pass False if you want the function to return two tuples, one with (potentially suspended) chord tones and one with added notes.
bass_only (bool, optional) – Return only the bass note instead of all chord tones.
mc (int or str) – Pass the measure count to display it in warnings.
features2type(numeral, form=None, figbass=None)[source]¶
Turns a combination of the three chord features into a chord type (a simplified sketch of the mapping follows the list below).

Returns
'M' (major triad)
'm' (minor triad)
'o' (diminished triad)
'+' (augmented triad)
'mm7' (minor seventh chord)
'Mm7' (dominant seventh chord)
'MM7' (major seventh chord)
'mM7' (minor major seventh chord)
'o7' (diminished seventh chord)
'%7' (half-diminished seventh chord)
'+7' (augmented seventh chord)
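A simplified sketch of this mapping, illustrating the documented return values rather than the library's actual implementation:

def features2type_sketch(numeral, form=None, figbass=None):
    if figbass in (None, '6', '64'):                 # triad
        if form in ('o', '+'):
            return form
        return 'M' if numeral.isupper() else 'm'
    # tetrad: figbass is '7', '65', '43' or '2'
    if form in ('o', '+', '%'):
        return form + '7'                            # 'o7', '+7', '%7'
    triad = 'M' if numeral.isupper() else 'm'        # quality of the underlying triad
    seventh = 'M' if form == 'M' else 'm'            # 'M' only for explicit major sevenths
    return triad + seventh + '7'                     # e.g. 'Mm7' for a dominant seventh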
labels2global_tonic(df, cols={}, inplace=False)[source]¶
Transposes all numerals to their position in the global major or minor scale. This eliminates localkeys and relativeroots. The resulting chords are defined by [numeral, figbass, changes, globalkey_is_minor] (and pedal). Uses: transform(), rel2abs_key(), transpose_changes(), series_is_minor(), resolve_relative_keys() -> str_is_minor()

Parameters
df (pandas.DataFrame) – DataFrame containing DCML chord labels that have been split by split_labels() and where the keys have been propagated using propagate_keys(add_bool=True).
cols (dict, optional) – In case the column names for ['numeral', 'form', 'figbass', 'changes', 'relativeroot', 'localkey', 'globalkey'] deviate, pass a dict such as {'chord': 'chord_col_name', 'pedal': 'pedal_col_name', 'numeral': 'numeral_col_name', 'form': 'form_col_name', 'figbass': 'figbass_col_name', 'changes': 'changes_col_name', 'relativeroot': 'relativeroot_col_name', 'localkey': 'localkey_col_name', 'globalkey': 'globalkey_col_name'}.
inplace (bool, optional) – Pass True if you want to mutate the input.

Returns
If inplace=False, the relevant features of the transposed chords are returned. Otherwise, the original DataFrame is mutated.
propagate_keys(df, globalkey='globalkey', localkey='localkey', add_bool=True)[source]¶
Propagate information about global keys and local keys throughout the dataframe (illustrated below). Pass split harmonies for one piece at a time. For concatenated pieces, use apply_to_pieces(). Uses: series_is_minor()

Parameters
df (pandas.DataFrame) – DataFrame containing DCML chord labels that have been split by split_labels().
globalkey, localkey (str, optional) – In case you renamed the columns, pass the column names.
add_bool (bool, optional) – Pass True if you want to add two boolean columns which are True if the respective key is a minor key.
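To illustrate what propagation means here, a pandas sketch with made-up data (not the library's code):

import pandas as pd

labels = pd.DataFrame({
    'globalkey': ['F', None, None, None],   # annotated once at the beginning
    'localkey':  ['I', None, 'V', None],    # annotated only where the local key changes
})
labels['globalkey'] = labels['globalkey'].ffill()   # holds for the whole piece
labels['localkey'] = labels['localkey'].ffill()     # holds until the next key change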
propagate_pedal(df, relative=True, drop_pedalend=True, cols={})[source]¶
Propagate the pedal note for all chords within square brackets. By default, the note is expressed in relation to each label's localkey. Uses: rel2abs_key(), abs2rel_key()

Parameters
df (pandas.DataFrame) – DataFrame containing DCML chord labels that have been split by split_labels() and where the keys have been propagated using propagate_keys().
relative (bool, optional) – Pass False if you want the pedal note to stay the same even if the localkey changes.
drop_pedalend (bool, optional) – Pass False if you don't want the column with the ending brackets to be dropped.
cols (dict, optional) – In case the column names for ['pedal', 'pedalend', 'globalkey', 'localkey'] deviate, pass a dict such as {'pedal': 'pedal_col_name', 'pedalend': 'pedalend_col_name', 'globalkey': 'globalkey_col_name', 'localkey': 'localkey_col_name'}.
rel2abs_key(rel, localkey, global_minor=False)[source]¶
Expresses a Roman numeral that is given relative to a localkey as a scale degree of the global key. For local keys {III, iii, VI, vi, VII, vii}, the result changes depending on whether the global key is major or minor. Uses: split_sd()

Examples

If the label viio6/VI appears in the context of the local key VI or vi, the absolute key to which viio6 applies depends on the global key. The comments express the examples in relation to global C major or C minor.

>>> rel2abs_key('vi', 'VI', global_minor=False)
'#iv'   # vi of A major = F# minor
>>> rel2abs_key('vi', 'vi', global_minor=False)
'iv'    # vi of A minor = F minor
>>> rel2abs_key('vi', 'VI', global_minor=True)
'iv'    # vi of Ab major = F minor
>>> rel2abs_key('vi', 'vi', global_minor=True)
'biv'   # vi of Ab minor = Fb minor

The same examples hold if you're expressing the root of a VI-chord within the local keys VI or vi in terms of the global key.
replace_special(df, regex, merge=False, inplace=False, cols={}, special_map={})[source]¶
Move special symbols in the numeral column to a separate column and replace them by the explicit chords they stand for. In particular, this function replaces the symbols It, Ger, and Fr. Uses: merge_changes()

Parameters
df (pandas.DataFrame) – DataFrame containing DCML chord labels that have been split by split_labels().
regex (re.Pattern) – Compiled regular expression used to split the labels replacing the special symbols. It needs to have named groups. The group names are used as column names unless replaced by cols.
merge (bool, optional) – False: By default, existing values, except figbass, are overwritten. True: Merge existing with new values (for changes and relativeroot).
inplace (bool, optional) – True: Change df in place. False: Return a copy.
cols (dict, optional) – The special symbols appear in the column numeral and are moved to the column special. In case the column names for ['numeral', 'form', 'figbass', 'changes', 'relativeroot', 'special'] deviate, pass a dict such as {'numeral': 'numeral_col_name', 'form': 'form_col_name', 'figbass': 'figbass_col_name', 'changes': 'changes_col_name', 'relativeroot': 'relativeroot_col_name', 'special': 'special_col_name'}.
special_map (dict, optional) – In case you want to add or alter special symbols to be replaced, pass a replacement map, e.g. {'N': 'bII6'}. The column 'figbass' is only altered if it is None, to allow for inversions of special chords.
resolve_relative_keys(relativeroot, minor=False)[source]¶
Resolve nested relative keys, e.g. 'V/V/V' => 'VI'. Uses: rel2abs_key(), str_is_minor()

Parameters
relativeroot (str) – One or several relative keys, e.g. iv/v/VI (fourth scale degree of the fifth scale degree of the sixth scale degree).
minor (bool, optional) – Pass True if the last of the relative keys is to be interpreted within a minor context.

rn2tpc(rn, global_minor=False)[source]¶
Turn a Roman numeral into a TPC interval (e.g. for transposition purposes). Uses: split_sd()
split_labels(df, column, regex, cols={}, dropna=False, **kwargs)[source]¶
Split harmony labels complying with the DCML syntax into columns holding their various features.

Parameters
df (pandas.DataFrame) – DataFrame where one column contains DCML chord labels.
column (str) – Name of the column that holds the harmony labels.
regex (re.Pattern) – Compiled regular expression used to split the labels. It needs to have named groups. The group names are used as column names unless replaced by cols.
cols (dict) – Dictionary to map the regex's group names to deviating column names.
dropna (bool, optional) – Pass True if you want to drop rows where column is NaN/<NA>.

split_sd(sd, count=False)[source]¶
Splits a scale degree such as 'bbVI' or 'b6' into accidentals and numeral.
transform_note_columns(df, to, note_cols=['chord_tones', 'added_tones', 'bass_note', 'root'], minor_col='localkey_is_minor', inplace=False, **kwargs)[source]¶
Turns columns with line-of-fifths tonal pitch classes into another representation. Uses: transform_columns()

Parameters
df (pandas.DataFrame) – DataFrame where columns (or column combinations) work as function arguments.
to ({'name', 'iv', 'pc', 'sd', 'rn'}) – The tone representation that you want to get from the note_cols:
  'name': Note names. Should only be used if the stacked fifths actually represent absolute tonal pitch classes rather than intervals over the local tonic. In other words, make sure to use 'name' only if 0 means C rather than I.
  'iv': Intervals such that 0 = 'P1', 1 = 'P5', 4 = 'M3', -3 = 'm3', 6 = 'A4', -6 = 'D5', etc.
  'pc': (Relative) chromatic pitch class, or distance from the tonic in semitones.
  'sd': Scale degrees such that 0 = '1', -1 = '4', -2 = 'b7' in major and '7' in minor, etc. This representation requires a boolean column minor_col which is True in those rows where the stacks of fifths occur in a local minor context and False for the others. Alternatively, if all pitches are in the same mode or you simply want to express them as degrees of a particular mode, you can pass the boolean keyword argument minor.
  'rn': Roman numerals such that 0 = 'I', -2 = 'bVII' in major and 'VII' in minor, etc. Requires boolean 'minor' values, see 'sd'.
note_cols (list, optional) – List of columns that hold integers or collections of integers representing stacks of fifths (0 = tonal center, 1 = fifth above, -1 = fourth above, etc.).
minor_col (str, optional) – If to is 'sd' or 'rn', specify a boolean column whose value is True in those rows where the stacks of fifths occur in a local minor context and False for the others.
transpose_changes(changes, old_num, new_num, old_minor=False, new_minor=False)[source]¶
Since the interval sizes expressed by the changes of the DCML harmony syntax depend on the numeral's position in the scale, they may change if the numeral is transposed. This function expresses the same changes for the new position. Chord tone alterations (of 3 and 5) stay untouched. Uses: changes2tpc()

Parameters
changes (str) – A string of changes following the DCML harmony standard.
old_num, new_num (str) – Old numeral, new numeral.
old_minor, new_minor (bool, optional) – For each numeral, pass True if it occurs in a minor context.

ms3.expand_dcml.merge_changes(left, right, *args)[source]¶
Merge two changes into one, e.g. b3 and +#7 to +#7b3. Uses: changes2list()
ms3.expand_dcml.transform_columns(df, func, columns=None, param2col=None, inplace=False, **kwargs)[source]¶
Wrapper function to use transform() on df[columns], leaving the other columns untouched.

Parameters
df (pandas.DataFrame) – DataFrame where columns (or column combinations) work as function arguments.
func (callable) – Function you want to apply to all elements in columns.
columns (list) – Columns to which you want to apply func.
param2col (dict or list, optional) – Mapping from parameter names of func to column names. If you pass a list of column names, the columns' values are passed as positional arguments. Pass None if you want to use all columns as positional arguments.
inplace (bool, optional) – Pass True if you want to mutate df rather than getting an altered copy.
**kwargs – Keyword arguments passed to transform().
ms3.logger module¶
class ms3.logger.ContextAdapter(logger, extra)[source]¶
Bases: logging.LoggerAdapter
This LoggerAdapter is designed to include the module and function that called the logger.

process(msg, overwrite={}, stack_info=False, **kwargs)[source]¶
Process the logging message and keyword arguments passed in to a logging call to insert contextual information. You can either manipulate the message itself, the keyword args or both. Return the message and kwargs modified (or not) to suit your needs.
Normally, you'll only need to override this one method in a LoggerAdapter subclass for your specific needs.
ms3.logger.config_logger(name, level=None, logfile=None)[source]¶
Configures the logger with the name name. Overwrites any existing configuration.
ms3.logger.function_logger(f)[source]¶
This decorator ensures that the decorated function can use the variable logger for logging and makes it possible to pass the function the keyword argument logger with either a Logger object or the name of one. If the keyword argument is not passed, the root logger is used.

Example

This is how the decorator can be used:

from ms3.logger import function_logger

@function_logger
def log_this(msg):
    logger.warning(msg)

if __name__ == '__main__':
    log_this('First test', logger='my_logger')
    log_this('Second Test')

Output:

WARNING my_logger -- function_logger.py (line 5) log_this(): First test
WARNING root -- function_logger.py (line 5) log_this(): Second Test
ms3.parse module¶
class ms3.parse.Parse(dir=None, key=None, file_re='.*', folder_re='.*', exclude_re='^(\\.|__)', recursive=True, logger_name='Parse', level=None)[source]¶
Bases: object

add_dir(dir, key=None, file_re='.*', folder_re='.*', exclude_re='^(\\.|__)', recursive=True)[source]¶

get_lists(keys=None, notes=False, rests=False, notes_and_rests=False, measures=False, events=False, labels=False, chords=False, expanded=False)[source]¶

property parsed¶

store_lists(keys=None, root_dir=None, notes_folder=None, notes_suffix='', rests_folder=None, rests_suffix='', notes_and_rests_folder=None, notes_and_rests_suffix='', measures_folder=None, measures_suffix='', events_folder=None, events_suffix='', labels_folder=None, labels_suffix='', chords_folder=None, chords_suffix='', expanded_folder=None, expanded_suffix='', simulate=False)[source]¶
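A hypothetical workflow using only the members listed above; the directory paths are made up, and depending on the ms3 version an explicit parsing step may be required before extraction.

from ms3 import Parse

p = Parse(dir='~/my_scores')                        # scan a directory for parseable files
p.add_dir('~/more_scores', key='extra')             # optionally register further directories
lists = p.get_lists(notes=True, measures=True)      # extract DataFrames per piece
p.store_lists(notes_folder='notes', measures_folder='measures')   # write them to disk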
ms3.score module¶
class ms3.score.MSCX(mscx_src=None, read_only=False, parser='bs4', logger_name='MSCX', level=None)[source]¶
Bases: object
Object for interacting with the XML structure of a MuseScore 3 file.

_parsed¶
Holds the MSCX score parsed by the selected parser.
Type: _MSCX_bs4

infer_label_types¶
For label_type 0 (simple string), mark which ones …
Type: bool, optional

level¶
Pass a level name for which (and above which) you want to see log records.
Type: {'W', 'D', 'I', 'E', 'C', 'WARNING', 'DEBUG', 'INFO', 'ERROR', 'CRITICAL'}, optional

output_mscx(filepath)¶
Write the internal score representation to a file.
add_labels(df, label='label', mc='mc', onset='onset', staff='staff', voice='voice', **kwargs)[source]¶

Parameters
df (pandas.DataFrame) – DataFrame with labels to be added.
label, mc, onset, staff, voice (str) – Names of the DataFrame columns for the five required parameters.
**kwargs – label_type, root, base, leftParen, rightParen, offset_x, offset_y, nashville. For these parameters, the standard column names are used automatically if the columns are present. If the column names have changed, pass them as kwargs, e.g. base='name_of_the_base_column'.
property chords¶

property events¶

property expanded¶

property labels¶

property measures¶

property notes¶

property notes_and_rests¶

property parsed¶

property rests¶

property version¶
MuseScore version with which the file was created (read-only).
class ms3.score.Score(mscx_src=None, infer_label_types=['dcml'], read_only=False, logger_name='Score', level=None, parser='bs4')[source]¶
Bases: object
Object representing a score.
infer_label_types¶
Changing this value results in a call to infer_types().

logger¶
Current logger that the object is using.

parser¶
The only XML parser currently implemented is BeautifulSoup 4.
Type: {'bs4'}

paths, files, fnames, fexts, logger_names¶
Dictionaries for keeping track of file information handled by handle_path().

_annotations¶

_harmony_regex¶

_label_types¶

_types_to_infer¶
handle_path(path, key)[source]¶
Puts the path into the paths, files, fnames, fexts dicts with the given key.

abs_regex = '^\\(?[A-G|a-g](b*|#*).*?(/[A-G|a-g](b*|#*))?$'¶

dcml_regex = re.compile('^(?P<first> (\\.? ((?P<globalkey>[a-gA-G](b*|\\#*))\\.)? …', re.VERBOSE)¶

property infer_label_types¶

property mscx¶
Returns the MSCX object with the parsed score.

nashville_regex = '^(b*|#*)(\\d).*$'¶

rn_regex = '^$'¶

property types¶
ms3.skeleton module¶
This is a skeleton file that can serve as a starting point for a Python console script. To run this script, uncomment the following lines in the [options.entry_points] section of setup.cfg:

console_scripts =
    fibonacci = ms3.skeleton:run

Then run python setup.py install, which will install the command fibonacci inside your current environment. Besides console scripts, the header of this file (i.e. up to _logger…) can also be used as a template for Python modules.
Note: This skeleton file can be safely removed if not needed!
ms3.skeleton.main(args)[source]¶
Main entry point allowing external calls.

Parameters
args ([str]) – Command line parameter list.
ms3.utils module¶
ms3.utils.fifths2acc(fifths)[source]¶
Returns accidentals for a stack of fifths that can be combined with a basic representation of the seven steps.

ms3.utils.fifths2iv(fifths)[source]¶
Return the interval name of a stack of fifths such that 0 = 'P1', -1 = 'P4', -2 = 'm7', 4 = 'M3', etc. Uses: map2elements()
ms3.utils.fifths2name(fifths, midi=None, ms=False)[source]¶
Return the note name of a stack of fifths such that 0 = C, -1 = F, -2 = Bb, 1 = G, etc. Uses: map2elements(), fifths2str()

ms3.utils.fifths2pc(fifths)[source]¶
Turn a stack of fifths into a chromatic pitch class. Uses: map2elements()
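The underlying arithmetic is simple; a sketch, not necessarily the exact implementation:

def fifths2pc_sketch(fifths):
    # Each step on the line of fifths adds 7 semitones modulo 12:
    # 0 -> 0 (C), 1 -> 7 (G), 2 -> 2 (D), -1 -> 5 (F), -2 -> 10 (Bb)
    return (7 * fifths) % 12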
ms3.utils.fifths2rn(fifths, minor=False, auto_key=False)[source]¶
Return the Roman numeral of a stack of fifths such that 0 = I, -1 = IV, 1 = V, -2 = bVII in major and VII in minor, etc. Uses: map2elements(), is_minor_mode()

Parameters
auto_key (bool, optional) – By default, the returned Roman numerals are uppercase. Pass True to return upper- or lowercase according to the position in the scale.
ms3.utils.fifths2sd(fifths, minor=False)[source]¶
Return the scale degree of a stack of fifths such that 0 = '1', -1 = '4', -2 = 'b7' in major and '7' in minor, etc. Uses: map2elements(), fifths2str()

ms3.utils.fifths2str(fifths, steps, inverted=False)[source]¶
Boilerplate used by the fifths2* functions.

ms3.utils.is_minor_mode(fifths, minor=False)[source]¶
Returns True if the scale degree fifths naturally has a minor third in the scale.
ms3.utils.load_tsv(path, index_col=None, sep='\t', converters={}, dtypes={}, stringtype=False, **kwargs)[source]¶
Loads the TSV file path while applying correct type conversion and parsing tuples (see the usage sketch below).

Parameters
path (str) – Path to a TSV file as output by format_data().
index_col (list, optional) – By default, the first two columns are loaded as a MultiIndex. The first level distinguishes pieces and the second level the elements within.
converters, dtypes (dict, optional) – Enhance or overwrite the mapping from column names to types included in the constants.
stringtype (bool, optional) – If you're using pandas >= 1.0.0 you might want to set this to True in order to use the new string datatype that includes the new null type pd.NA.
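Hypothetical usage (the file name and the index_col override are made up for illustration):

from ms3.utils import load_tsv

notes = load_tsv('notes.tsv')                   # first two columns become a MultiIndex by default
notes = load_tsv('notes.tsv', index_col=[0])    # plausible override: a single index column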
ms3.utils.map2elements(e, f, *args, **kwargs)[source]¶
If e is an iterable, f is applied to all elements.

ms3.utils.midi2octave(midi, fifths=None)[source]¶
For a given MIDI pitch, calculate the octave. Middle octave = 4. Uses: fifths2pc(), map2elements()
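The basic arithmetic, ignoring the optional fifths-based spelling correction (a sketch, not the library's code):

def midi2octave_sketch(midi):
    # MIDI 60 (middle C) lies in octave 4: 60 // 12 - 1 == 4
    return midi // 12 - 1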
ms3.utils.name2tpc(nn)[source]¶
Turn a note name such as Ab into a tonal pitch class, such that -1 = F, 0 = C, 1 = G, etc. Uses: split_note_name()
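A sketch of the mapping (not the library's exact code): the natural notes occupy -1..5 on the line of fifths, and every sharp or flat shifts the result by seven steps.

def name2tpc_sketch(nn):
    steps = {'F': -1, 'C': 0, 'G': 1, 'D': 2, 'A': 3, 'E': 4, 'B': 5}
    letter, accidentals = nn[0].upper(), nn[1:]
    # Each sharp moves 7 steps up the line of fifths, each flat 7 steps down.
    return steps[letter] + 7 * accidentals.count('#') - 7 * accidentals.count('b')

name2tpc_sketch('Ab')   # -4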
ms3.utils.scan_directory(dir, file_re='.*', folder_re='.*', exclude_re='^(\\.|__)', recursive=True)[source]¶
Get a list of files.

Parameters
dir (str) – Directory to be scanned for files.
file_re, folder_re (str, optional) – Regular expressions for filtering certain file names or folder names. The regexes are checked with search(), not match(), allowing for fuzzy search.
recursive (bool, optional) – By default, sub-directories are recursively scanned. Pass False to scan only dir.

Returns
List of full paths meeting the criteria.
ms3.utils.sort_tpcs(tpcs, ascending=True, start=None)[source]¶
Sort tonal pitch classes by their order on the piano. Uses: fifths2pc()

ms3.utils.split_note_name(nn, count=False)[source]¶
Splits a note name such as 'Ab' into accidentals and name.
ms3.utils.transform(df, func, param2col=None, column_wise=False, **kwargs)[source]¶
Compute a function for every row of a DataFrame, using several columns as arguments (see the usage sketch below). The result is the same as using df.apply(lambda r: func(param1=r.col1, param2=r.col2, …), axis=1), but the procedure is optimized by precomputing func for all occurring parameter combinations. Uses: inspect.getfullargspec()

Parameters
df (pandas.DataFrame or pandas.Series) – Dataframe containing function parameters.
func (callable) – The result of this function for every row will be returned.
param2col (dict or list, optional) – Mapping from parameter names of func to column names. If you pass a list of column names, the columns' values are passed as positional arguments. Pass None if you want to use all columns as positional arguments.
column_wise (bool, optional) – Pass True if you want to map func to the elements of every column separately. This is simply an optimized version of df.apply(func) but allows for naming columns to use as function arguments. If param2col is None, func is mapped to the elements of all columns, otherwise to all columns that are not named as parameters in param2col. In the case where func does not require a positional first element and you want to pass the elements of the various columns as keyword arguments, give it as param2col={'function_argument': None}.
inplace (bool, optional) – Pass True if you want to mutate df rather than getting an altered copy.
**kwargs – Other parameters passed to func.
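A hypothetical example of the positional param2col mapping (the column names and the helper function are made up):

import pandas as pd
from ms3.utils import transform

df = pd.DataFrame({'low': [60, 62], 'high': [64, 69]})

def interval(low, high):
    return high - low

# The values of 'low' and 'high' are passed to interval() as positional arguments.
result = transform(df, interval, param2col=['low', 'high'])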
Module contents¶
All functionality of the library is available by creating a ms3.Score object for a single score or a ms3.Parse object for multiple scores. A list of annotation labels on its own can be parsed by creating a ms3.Annotations object. A minimal end-to-end sketch follows.
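All paths below are made up, and only members documented above are used.

import ms3

score = ms3.Score('path/to/piece.mscx')        # a single MuseScore 3 file
measures = score.mscx.measures                 # tabular data via the MSCX object
labels = score.mscx.labels

corpus = ms3.Parse(dir='path/to/scores')       # a whole directory of scores
annotations = ms3.Annotations(tsv_path='path/to/labels.tsv')   # stand-alone label lists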