
Version 1.6.0 #398

Open · wants to merge 156 commits into base: main

Conversation

sildater (Member)

PR for version 1.6.0

This new version brings multiple changes: bug fixes, new features, and other updates:

New features

Bug fixes

Other Changes

Todo

  • some PRs are still to be merged
  • set new version

CarlosCancino-Chacon and others added 30 commits April 5, 2024 16:24
add check for empty note array
improve documentation performance note_array
fixes for two bugs during **kern import.
fix bug, update documentation
@CarlosCancino-Chacon (Member) left a comment


Besides the comment about sorting the voice info in exportmusicxml.py (which is more of an internal note for documentation), things look ok.

# get the voice for which merging notes and other has lowest cost
merge_voice = sorted(cost.items(), key=itemgetter(1))[0][0]
# merge_voice = sorted(cost.items(), key=itemgetter(1))[0][0]


I'm not sure if this might end up having some unintended effect down the line... In any case, I think this change is ok as long as it doesn't break anything on the tests. Would it be possible to have a documented list of scores/pieces that had an issue with the sorting (i.e., the piece(s) that prompted this issue)? This could be useful for further testing down the line
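For context, the line in question selects the voice whose merge cost is lowest. A minimal sketch with a hypothetical `cost` dict (the actual keys and costs in partitura will differ):

```python
from operator import itemgetter

# hypothetical cost mapping: voice id -> cost of merging notes into that voice
cost = {1: 0.5, 2: 0.1, 3: 0.9}

# the one-liner under discussion: sort (voice, cost) pairs by cost, take the first voice
merge_voice = sorted(cost.items(), key=itemgetter(1))[0][0]

# an equivalent, arguably clearer form
merge_voice_alt = min(cost, key=cost.get)
```

Both expressions pick voice 2 here; the `min` form avoids sorting the whole dict just to take the first element.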

@manoskary (Member) left a comment


This pull request for version 1.6.0 includes multiple changes, bug fixes, and new features such as measure refactoring for various formats and the addition of measure and clef features.

Minor comments were added, mostly to aid understanding of parts of this PR.

Member commented on partitura/io/importmatch.py:

Was this file supposed to be edited?
Member:

While all these additions of note features are great, and I have no review remarks about the changes themselves, I would like to raise a small concern about the make_note_features function, which calls pt.score.merge_parts to handle scores with multiple parts. This approach, while mostly correct, might result in copying errors or very long computation times due to the part's recursive nature. Before merging this PR, or even after, should we consider replacing merge_parts with some faster and safer np.array merging, as is done for note_array?
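The array-level alternative alluded to above could be sketched roughly like this. The field names follow partitura's note_array convention, but the merging logic itself is an illustration of the suggestion, not the library's implementation:

```python
import numpy as np

# hypothetical per-part note arrays with partitura-style fields
fields = [("onset_beat", float), ("duration_beat", float), ("pitch", int), ("id", "U10")]
part1 = np.array([(0.0, 1.0, 60, "p1n0"), (1.0, 1.0, 62, "p1n1")], dtype=fields)
part2 = np.array([(0.5, 0.5, 64, "p2n0")], dtype=fields)

# concatenate the structured arrays and re-sort by onset, avoiding a recursive
# walk over Part objects entirely
merged = np.concatenate([part1, part2])
merged = merged[np.argsort(merged["onset_beat"], kind="stable")]
```

The stable sort keeps the original relative order of notes that share an onset, which matters if note ids are expected to stay in a deterministic order.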

Comment on lines +750 to +760
# check that all onsets have a duration
# ornaments (grace notes) do not have a duration
score_unique_onset_idxs = np.array(
[
np.where(np.logical_and(score_onsets == u, score_durations > 0))[0]
for u in score_unique_onsets
],
dtype=object,
)
score_unique_onset_idxs = [
np.where(np.logical_and(score_onsets == u, score_durations > 0))[0]
for u in score_unique_onsets
]

else:
score_unique_onset_idxs = np.array(
[np.where(score_onsets == u)[0] for u in score_unique_onsets],
dtype=object,
)
score_unique_onset_idxs = [
np.where(score_onsets == u)[0] for u in score_unique_onsets
]
Member:

Was there any reason that dtype was set to object before? I don't see why it would not simply be int.

Also: if a note with duration 0 is the only note present at a unique onset, then indexing np.where(score_onsets == u)[0] could result in an IndexError.

Reply:

The dtype was set to object before because the output was an array of arrays of different sizes (and numpy used to complain about this).

Regarding your second point, I don't see how that could happen, since it would have a valid onset value.

import numpy as np

note_array = np.array(
    [
        (60, 0.0, 1.0, "n0"),
        (62, 1.0, 1.0, "n1"),
        (63, 2.0, 0.0, "n2"), # this note has duration 0 and is the only one in onset 2
        (64, 2.5, 1.0, "n3"),
    ],
    dtype=[
        ("pitch", int),
        ("onset_beat", float),
        ("duration_beat", float),
        ("id", "U10"),
    ],
)


unique_onsets = np.unique(note_array["onset_beat"])

unique_onset_idxs = [np.where(note_array["onset_beat"] == u)[0] for u in unique_onsets]

# Expected result
# [array([0]), array([1]), array([2]), array([3])]

Or am I missing something?
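As to the first point, a minimal reproduction of the ragged-group situation that originally required dtype=object (made-up onsets; the list-based approach from the diff sidesteps the issue entirely):

```python
import numpy as np

# made-up onsets: onset 1.0 carries two notes, so the index groups are ragged
score_onsets = np.array([0.0, 1.0, 1.0, 2.0])
score_unique_onsets = np.unique(score_onsets)

# plain Python list of index arrays: no dtype juggling, no ragged-array
# restrictions from newer numpy versions
unique_onset_idxs = [np.where(score_onsets == u)[0] for u in score_unique_onsets]
# groups: [array([0]), array([1, 2]), array([3])]
```

Wrapping that ragged list in np.array without dtype=object is what newer numpy versions reject, which is presumably why the diff drops the np.array wrapper rather than changing the dtype.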

)
elif len(staff_clefs) == 1:
# If there is only a single clef
staff_clefs = np.array([staff_clefs[0, :], staff_clefs[0, :]])
Member:

Shouldn't the second element point to self.last_point.t, maybe something like np.r_[self.last_point.t, staff_clefs[1, :]]?
In any case, I am having a bit of a hard time following some parts of this code; could they be documented slightly more?
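If the intent is for the clef map to cover the whole part, the suggestion might look like the following sketch. The attribute name last_point.t is taken from the comment above; the row layout (time, staff, sign, line, octave_change) is an assumption for illustration:

```python
import numpy as np

# hypothetical single-clef array: rows are (time, staff, sign, line, octave_change)
staff_clefs = np.array([[0, 1, 0, 2, 0]])
last_t = 16  # stand-in for self.last_point.t

if len(staff_clefs) == 1:
    # duplicate the lone clef, re-stamping the copy at the part's end time,
    # so interpolation between the two rows spans the full part
    end_row = np.r_[last_t, staff_clefs[0, 1:]]
    staff_clefs = np.vstack([staff_clefs[0], end_row])
```

Whether duplicating the time of the first row (as the diff does) or re-stamping the end time matters depends on how the resulting map interpolates between rows.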

Comment on lines +281 to +284
if staff_clefs[0, 0] > self.first_point.t:
staff_clefs = np.vstack(
((self.first_point.t, *staff_clefs[0, 1:]), staff_clefs)
)
Member:

Are there forced end points in the case of multiple clefs? Maybe a check for that is needed as well.

Comment on lines +243 to +254
clefs = np.array(
[
(
c.start.t,
c.staff,
clef_sign_to_int(c.sign),
c.line,
c.octave_change if c.octave_change is not None else 0,
)
for c in self.iter_all(Clef)
]
)
Member:

I think clefs and their maps could work slightly better as structured arrays; it is not immediately obvious what the values are.
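A structured-array variant of the clef list might look like this sketch. The field names are illustrative, not partitura's API, and the sign encoding (0 for treble, 1 for bass) is an assumption based on the test values below:

```python
import numpy as np

# named fields instead of positional columns, so readers don't have to
# remember what each value means
clef_dtype = [
    ("onset_div", int),
    ("staff", int),
    ("sign", int),          # assumed encoding: 0 = treble, 1 = bass
    ("line", int),
    ("octave_change", int),
]
clefs = np.array([(0, 1, 0, 2, 0), (0, 2, 1, 4, 0)], dtype=clef_dtype)

# filtering by name is self-documenting compared to clefs[clefs[:, 2] == 0]
trebles = clefs[clefs["sign"] == 0]
```

The same named-field access would carry over to the map function's output, which might also help readability of the tests further down.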

Comment on lines +60 to +81
map_fn = part.clef_map
self.assertTrue(
np.all(map_fn(part.first_point.t) == np.array([[1, 0, 2, 0], [2, 1, 4, 0]])) # treble / bass
)
self.assertTrue(
np.all(map_fn(7) == np.array([[1, 0, 2, -2], [2, 0, 2, 0]])) # treble15vb / treble
)
self.assertTrue(
np.all(map_fn(8) == np.array([[1, 0, 2, 1], [2, 1, 4, 0]])) # treble8va / bass
)
self.assertTrue(
np.all(map_fn(11) == np.array([[1, 2, 3, 0], [2, 1, 4, 0]])) # ut3 / bass
)
self.assertTrue(
np.all(map_fn(12) == np.array([[1, 2, 3, 0], [2, 1, 3, 0]])) # ut3 / bass3
)
self.assertTrue(
np.all(map_fn(13) == np.array([[1, 2, 4, 0], [2, 1, 3, 0]])) # ut4 / bass3
)
self.assertTrue(
np.all(map_fn(part.last_point.t) == np.array([[1, 2, 4, 0], [2, 1, 3, 0]])) # ut4 / bass3
)
Member:

I know it is a test, so it is not very crucial, but I am having a hard time following it, and the comments do not help a lot. I think it could benefit from more explanatory inline documentation.
