Conversation

@Anas12091101 (Contributor) commented Sep 29, 2025

Description

This PR migrates the data calculation logic for the GradeSummary table from frontend-app-learning into this repository.

This PR also introduces a new visibility option for assignment scores:

“Never show individual assessment results, but show overall assessment results after the due date.”

With this option, learners cannot see question-level correctness or scores at any time. However, once the due date has passed, they can view their overall score in the total grades section on the Progress page.
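
For orientation, here is a minimal sketch of how the new value might sit alongside the existing show_correctness choices (the class and constant names below are illustrative assumptions, not necessarily the identifiers this PR touches):

class ShowCorrectness:
    """Possible values for a subsection's show_correctness setting (illustrative sketch)."""
    ALWAYS = "always"      # always show per-question results
    PAST_DUE = "past_due"  # show per-question results once the due date has passed
    NEVER = "never"        # never show per-question results or scores
    # New option introduced by this PR: still never show individual results, but
    # include the overall score in the Progress page totals after the due date.
    NEVER_BUT_INCLUDE_GRADE = "never_but_include_grade"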

This PR should be reviewed in conjunction with openedx/frontend-app-learning#1797.


Supporting information

Proposal: https://openedx.atlassian.net/wiki/spaces/OEPM/pages/5150769174/Proposal+Add+option+to+show+overall+assessment+results+after+due+date+but+hide+individual+assessment+results

openedx/platform-roadmap#460

Internal ticket: https://github.com/mitodl/hq/issues/3859

Testing instructions

  • Verify that the Progress page renders correctly and that grades still display as expected, with nothing else broken.

Deadline

Before Ulmo


@openedx-webhooks openedx-webhooks added the open-source-contribution PR author is not from Axim or 2U label Sep 29, 2025
@openedx-webhooks commented Sep 29, 2025

Thanks for the pull request, @Anas12091101!

This repository is currently maintained by @openedx/wg-maintenance-edx-platform.

Once you've gone through the following steps, feel free to tag them in a comment and let them know that your changes are ready for engineering review.

🔘 Get product approval

If you haven't already, check this list to see if your contribution needs to go through the product review process.

  • If it does, you'll need to submit a product proposal for your contribution, and have it reviewed by the Product Working Group.
    • This process (including the steps you'll need to take) is documented here.
  • If it doesn't, simply proceed with the next step.
🔘 Provide context

To help your reviewers and other members of the community understand the purpose and larger context of your changes, feel free to add as much of the following information to the PR description as you can:

  • Dependencies

    This PR must be merged before / after / at the same time as ...

  • Blockers

    This PR is waiting for OEP-1234 to be accepted.

  • Timeline information

    This PR must be merged by XX date because ...

  • Partner information

    This is for a course on edx.org.

  • Supporting documentation
  • Relevant Open edX discussion forum threads
🔘 Get a green build

If one or more checks are failing, continue working on your changes until this is no longer the case and your build turns green.


Where can I find more information?

If you'd like to get more details on all aspects of the review process for open source pull requests (OSPRs), check out the following resources:

When can I expect my changes to be merged?

Our goal is to get community contributions seen and reviewed as efficiently as possible.

However, the amount of time that it takes to review and merge a PR can vary significantly based on factors such as:

  • The size and impact of the changes that it introduces
  • The need for product review
  • Maintenance status of the parent repository

💡 As a result, it may take up to several weeks or months to complete a review and merge your PR.

@github-project-automation github-project-automation bot moved this to Needs Triage in Contributions Sep 29, 2025
@Anas12091101 Anas12091101 marked this pull request as draft September 29, 2025 18:28
@mphilbrick211 mphilbrick211 moved this from Needs Triage to Waiting on Author in Contributions Sep 29, 2025
@Anas12091101 Anas12091101 force-pushed the anas/add-new-grade-visisbility-option branch from cea9397 to c73c019 Compare September 30, 2025 06:43
@Anas12091101 Anas12091101 moved this from Waiting on Author to Ready for Review in Contributions Sep 30, 2025
@Anas12091101 Anas12091101 marked this pull request as ready for review September 30, 2025 07:44
@sarina (Contributor) commented Sep 30, 2025

@Anas12091101 could you add some tests around this new functionality?

@Anas12091101 Anas12091101 force-pushed the anas/add-new-grade-visisbility-option branch 2 times, most recently from e1d16b4 to 602d07e Compare October 1, 2025 11:17
@Anas12091101 (Contributor, PR author):

@sarina I have added the tests.

@sarina (Contributor) commented Oct 1, 2025

Thanks @Anas12091101 - I just checked with @pdpinch and he's going to provide a review.

@Anas12091101 Anas12091101 force-pushed the anas/add-new-grade-visisbility-option branch from afee826 to 9a055ec Compare October 7, 2025 10:29
@asadali145 (Contributor) left a comment

LGTM!

@ormsbee ormsbee self-requested a review October 8, 2025 15:54
@ormsbee (Contributor) left a comment

Minor question and request. But also a higher-level question: don't we need to modify the REST API to suppress the grade when the new status is applied to a subsection?

from datetime import datetime
from logging import getLogger

from pytz import UTC
Contributor:

Please use the standard library's datetime.timezone.utc instead of using the pytz package.
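
A minimal sketch of the suggested swap, using only the standard library (the logger name is illustrative):

from datetime import datetime, timezone
from logging import getLogger

log = getLogger(__name__)

# timezone-aware "now" without the pytz dependency
now = datetime.now(timezone.utc)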

%if hide_url:
<p class="d-inline">${section.display_name}
%if (total > 0 or earned > 0) and section.show_grades(staff_access):
%if (total > 0 or earned > 0) and section.show_grades(staff_access) and not section.show_correctness == 'never_but_include_grade':
Contributor:

In general, we want to keep logic this specific out of the template. Is there a reason it can't be incorporated into the show_grades() logic?
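
A rough sketch of how the check could live on the section object instead of in the template (this is a hypothetical stand-in; the real show_grades() implementation and its surrounding visibility rules are not shown here):

class SubsectionSummary:
    """Hypothetical stand-in for the object the template calls section.show_grades() on."""

    def __init__(self, show_correctness, scores_visible_to_learner=True):
        self.show_correctness = show_correctness
        self._scores_visible_to_learner = scores_visible_to_learner

    def show_grades(self, staff_access):
        """Whether per-assessment scores should be rendered for this subsection."""
        # New option: hide individual assessment results entirely, even though the
        # overall grade is still included after the due date.
        if self.show_correctness == 'never_but_include_grade':
            return False
        # Placeholder for the existing rules (staff access, hide-after-due, etc.).
        return staff_access or self._scores_visible_to_learner

The template condition would then reduce to (total > 0 or earned > 0) and section.show_grades(staff_access).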

@ormsbee (Contributor) commented Oct 8, 2025

@Anas12091101: Did you use an LLM to help generate this PR? And if so, which one? If you have used an LLM, please make sure you have read the project policy on using generative AI tools.

@mphilbrick211 mphilbrick211 moved this from Ready for Review to In Eng Review in Contributions Oct 8, 2025
@Anas12091101 (Contributor, PR author):

@ormsbee Thanks for the pointer. I did not use an LLM to create this PR. I have, however, used GitHub Copilot (GPT-5 agent) on a different Authoring MFE PR to help write tests and speed up some fixes.

@Anas12091101 (Contributor, PR author):

Replying to @ormsbee's question: "Don't we need to modify the REST API to suppress the grade when the new status is applied to a subsection?"

Yes, we need to suppress the grade in the API. This is also being discussed here: openedx/frontend-app-learning#1797 (comment).
I’ll be updating this PR to include the grade calculation on the backend and then remove the corresponding calculation from the Learning MFE.

@Anas12091101 Anas12091101 changed the title feat: add never_but_include_grade visibility option feat: migrate data calculation logic from frontend and add 'never_but_include_grade' grade visibility option Oct 15, 2025
@Anas12091101 Anas12091101 changed the title feat: migrate data calculation logic from frontend and add 'never_but_include_grade' grade visibility option feat: migrate data calculation logic from learning MFE and add 'never_but_include_grade' grade visibility option Oct 15, 2025
@Anas12091101 Anas12091101 changed the title feat: migrate data calculation logic from learning MFE and add 'never_but_include_grade' grade visibility option feat: migrate progress page data calculation logic from learning MFE and add 'never_but_include_grade' grade visibility option Oct 15, 2025
@Anas12091101 Anas12091101 force-pushed the anas/add-new-grade-visisbility-option branch from a213207 to 796a658 Compare October 16, 2025 07:28
@Anas12091101 Anas12091101 requested a review from ormsbee October 16, 2025 09:02
@Anas12091101 (Contributor, PR author) commented Oct 16, 2025

@ormsbee, I have shifted the grades calculation logic to the backend in this PR and adjusted the frontend PR openedx/frontend-app-learning#1797 accordingly. This PR is ready for another look.

return aggregator.run()


User = get_user_model()
Contributor:

Please put your new code under the User assignment here.

Comment on lines 21 to 23
scores: List[float] = field(default_factory=list)
visibilities: List[Optional[bool]] = field(default_factory=list)
included: List[Optional[bool]] = field(default_factory=list)
Contributor:

Nit: edx-platform requires python 3.11 or greater, so you can use the newer notation style for these, e.g. list[float], list[bool | None], etc.
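
For example, the three quoted fields could be written with built-in generics (only those fields are shown here):

from dataclasses import dataclass, field

@dataclass
class _AssignmentBucket:
    scores: list[float] = field(default_factory=list)
    visibilities: list[bool | None] = field(default_factory=list)
    included: list[bool | None] = field(default_factory=list)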

class _AssignmentBucket:
    """Holds scores and visibility info for one assignment type."""
    assignment_type: str
    expected_total: int
Contributor:

Please carefully document what all these fields actually mean, along with examples, even if it seems obvious to you. Someone who is not familiar with grading code might have questions like:

  • Is assignment_type the full name (e.g. Homework), or the abbreviation (HW)?
  • Is expected_total the amount that the user put for "Total number" in the grading policy, or is it "Total number - Number of droppable"?

Grading code is generally extremely confusing to folks approaching the platform, so it's okay to be a little verbose in the comments if it adds clarity.
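
As an illustration of the kind of documentation being asked for, a sketch with guessed meanings (the descriptions below are assumptions for the sake of example, not the PR's confirmed semantics):

from dataclasses import dataclass

@dataclass
class _AssignmentBucket:
    """
    Accumulates the per-problem scores and visibility flags for a single
    assignment type while the Grade Summary data is being built.
    """
    # Full display name of the assignment type from the grading policy,
    # e.g. "Homework", rather than an abbreviation such as "HW".
    assignment_type: str
    # The "Total number" configured for this assignment type in the grading
    # policy, before any droppable lowest scores are subtracted.
    expected_total: int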



@dataclass
class _AssignmentBucket:
Contributor:

Question: Do we use the terminology "bucket" elsewhere in the grading code? I don't hate the name, but I don't want to add a convention if we already have something else we call this, say on the frontend.

Contributor (PR author):

On the frontend, the calculation logic primarily resides in these two functions:
https://github.com/openedx/frontend-app-learning/pull/1797/files#diff-4d3a6d853ff9fd1354a56024f06c6718c856c281e295de92c5b09ac1e86774bfL6-L92

Initially, I tried to replicate the exact calculation flow from the frontend, but it became difficult to follow due to the complexity of the logic. To improve readability and maintainability, I introduced the _AssignmentBucket class to make the code more modular.

We can definitely consider renaming it if “bucket” doesn’t feel intuitive; for example, would something like AssignmentDetails be clearer?

Contributor:

It's fine, we can keep the terminology for now. Thank you.

Comment on lines 164 to 165
'average_grade': round(earned_visible, 4),
'weighted_grade': round(earned_visible * weight, 4),
Contributor:

We have very specific rounding logic in grades:

def _compute_percent(grader_result):
    """
    Computes and returns the grade percentage from the given
    result from the grader.
    """
    # Confused about the addition of .05 here? See https://openedx.atlassian.net/browse/TNL-6972
    return round_away_from_zero(grader_result['percent'] * 100 + 0.05) / 100

This is further complicated by the fact that the behavior of round() changed between Python 2 and Python 3, and we needed to keep consistency. Even if it yields the same result 99.99% of the time, inconsistencies will crop up somewhere in the user base, and people are extremely sensitive to any time their actual grades do not line up with what's displayed on the progress page.

Is it necessary to do these sorts of calculations here, rather than taking existing calculations and rounding already done in the grades app?
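
For reference, the behavioral difference being guarded against, shown with a locally defined helper (edx-platform ships its own round_away_from_zero; the definition below is only for illustration):

import math

def round_away_from_zero(number, digits=0):
    """Round halves away from zero, matching Python 2's round() behavior."""
    factor = 10 ** digits
    return math.copysign(math.floor(abs(number) * factor + 0.5), number) / factor

# Python 3's built-in round() uses banker's rounding (round half to even):
print(round(0.5))                 # 0
print(round(1.5))                 # 2
print(round_away_from_zero(0.5))  # 1.0
print(round_away_from_zero(1.5))  # 2.0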

Contributor (PR author):

The calculation here is performed for each assignment type to determine the total for each assignment individually. From what I understand, the grades app primarily computes the overall final grade rather than per-assignment totals (please correct me if I’m mistaken).

Previously, this calculation was handled on the frontend to display the results of each assignment separately in the Grade Summary table:

[Screenshot, 2025-10-20: Grade Summary table on the Progress page showing per-assignment results, a 77.50 weighted grade, and a 78% overall grade]

I’m also recalculating the final_grades here to display the raw weighted grade (77.50 in the screenshot) on the frontend, while the 78% value still comes from the calculation in the grades app.

To ensure consistent rounding behavior, I can switch from round to round_away_from_zero here. What are your thoughts on that?

Contributor:

I see that you've already shifted to using round_away_from_zero, which is good. It's not enough in the long term, but I'll write that up elsewhere. Thank you.

@ormsbee ormsbee moved this from In Eng Review to Waiting on Author in Contributions Oct 17, 2025
@asadali145 (Contributor) left a comment

Overall LGTM in testing!

Comment on lines +63 to +71
if not self.visibilities:
    return 'none'
all_hidden = all(v is False for v in self.visibilities)
some_hidden = any(v is False for v in self.visibilities)
if all_hidden:
    return 'all'
if some_hidden:
    return 'some'
return 'none'
Contributor:

Can we create constants for all these states?
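
For instance, a possible set of module-level constants (names are illustrative):

# Hidden-score states returned by the bucket's visibility check above
SCORES_HIDDEN_NONE = 'none'  # no scores in this assignment type are hidden
SCORES_HIDDEN_SOME = 'some'  # at least one, but not all, scores are hidden
SCORES_HIDDEN_ALL = 'all'    # every score in this assignment type is hidden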

@Anas12091101 Anas12091101 requested a review from ormsbee October 21, 2025 06:03
@Anas12091101 Anas12091101 moved this from Waiting on Author to Ready for Review in Contributions Oct 21, 2025
@ormsbee (Contributor) left a comment

I'm approving this because it implements a useful feature and does not make the situation worse, but in the medium to long term we need to consolidate the grading logic further so that the Progress page and the actual backend grade calculation stay aligned. I'll make a separate ticket to track that.

@pdpinch: Would consolidating this logic be something that MIT can lead? You folks have a lot of domain expertise in the subtleties of grading at this point.

@ormsbee ormsbee merged commit 4afff6e into openedx:master Oct 22, 2025
49 checks passed
@github-project-automation github-project-automation bot moved this from Ready for Review to Done in Contributions Oct 22, 2025