feat: migrate progress page data calculation logic from learning MFE and add 'never_but_include_grade' grade visibility option #37399
Conversation
Thanks for the pull request, @Anas12091101! Once you've gone through the following steps, feel free to tag the repository's maintainers in a comment and let them know that your changes are ready for engineering review.

🔘 Get product approval: If you haven't already, check this list to see if your contribution needs to go through the product review process.

🔘 Provide context: To help your reviewers and other members of the community understand the purpose and larger context of your changes, feel free to add as much of the following information to the PR description as you can.

🔘 Get a green build: If one or more checks are failing, continue working on your changes until this is no longer the case and your build turns green.

Where can I find more information? If you'd like to get more details on all aspects of the review process for open source pull requests (OSPRs), check out the following resources.

When can I expect my changes to be merged? Our goal is to get community contributions seen and reviewed as efficiently as possible. However, the amount of time that it takes to review and merge a PR can vary significantly based on a number of factors.

💡 As a result it may take up to several weeks or months to complete a review and merge your PR.
Force-pushed from cea9397 to c73c019.
@Anas12091101 could you add some tests around this new functionality?
Force-pushed from e1d16b4 to 602d07e.
@sarina I have added the tests.
Thanks @Anas12091101 - I just checked with @pdpinch and he's going to provide a review.
Force-pushed from afee826 to 9a055ec.
LGTM!
Minor question and request. But also a high level question: Don't we need to modify the REST API to suppress the grade when the new status is applied to a subsection?
```python
from datetime import datetime
from logging import getLogger

from pytz import UTC
```
Please use the standard library's datetime.timezone.utc instead of using the pytz package.
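For instance, a minimal sketch of the swap, assuming the import is only used to build timezone-aware UTC datetimes:

```python
from datetime import datetime, timezone

# Before (third-party):
#   from pytz import UTC
#   deadline = datetime(2024, 1, 1, tzinfo=UTC)

# After (standard library only):
deadline = datetime(2024, 1, 1, tzinfo=timezone.utc)
```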
```diff
 %if hide_url:
   <p class="d-inline">${section.display_name}
-  %if (total > 0 or earned > 0) and section.show_grades(staff_access):
+  %if (total > 0 or earned > 0) and section.show_grades(staff_access) and not section.show_correctness == 'never_but_include_grade':
```
In general, we want to keep logic this specific out of the template. Is there a reason it can't be incorporated into the show_grades() logic?
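A hypothetical sketch of what that could look like; the class and helper below are illustrative stand-ins, not the actual edx-platform implementation:

```python
class SubsectionSketch:
    """Illustrative stand-in; not the real edx-platform subsection class."""

    def __init__(self, show_correctness='always'):
        self.show_correctness = show_correctness

    def show_grades(self, staff_access):
        # Fold the new option into the existing check so the template can
        # keep calling show_grades() without any extra condition.
        if self.show_correctness == 'never_but_include_grade':
            return False
        return self._existing_show_grades(staff_access)

    def _existing_show_grades(self, staff_access):
        # Stand-in for the current visibility rules.
        return True
```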
@Anas12091101: Did you use an LLM to help generate this PR? And if so, which one? If you have used an LLM, please make sure you have read the project policy on using generative AI tools.
@ormsbee Thanks for the pointer. I did not use an LLM to create this PR. I have, however, used GitHub Copilot (GPT-5 agent) on a different Authoring MFE PR to help write tests and speed up some fixes.
Yes, we need to suppress the grade in the API. This is also being discussed here: openedx/frontend-app-learning#1797 (comment).
Force-pushed from a213207 to 796a658.
@ormsbee, I have shifted the grades calculation logic to the backend in this PR and adjusted the frontend PR openedx/frontend-app-learning#1797 accordingly. This PR is ready for another look.
```python
return aggregator.run()


User = get_user_model()
```
Please put your new code under the User assignment here.
```python
scores: List[float] = field(default_factory=list)
visibilities: List[Optional[bool]] = field(default_factory=list)
included: List[Optional[bool]] = field(default_factory=list)
```
Nit: edx-platform requires python 3.11 or greater, so you can use the newer notation style for these, e.g. list[float], list[bool | None], etc.
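Applied to the fields above, that would read:

```python
from dataclasses import dataclass, field


@dataclass
class _AssignmentBucket:
    scores: list[float] = field(default_factory=list)
    visibilities: list[bool | None] = field(default_factory=list)
    included: list[bool | None] = field(default_factory=list)
```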
```python
class _AssignmentBucket:
    """Holds scores and visibility info for one assignment type."""
    assignment_type: str
    expected_total: int
```
Please carefully document what all these fields actually mean, along with examples, even if it seems obvious to you. Someone who is not familiar with grading code might have questions like:
- Is `assignment_type` the full name (e.g. Homework), or the abbreviation (HW)?
- Is `expected_total` the amount that the user put for "Total number" in the grading policy, or is it "Total number - Number of droppable"?
Grading code is generally extremely confusing to folks approaching the platform, so it's okay to be a little verbose in the comments if it adds clarity.
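One possible shape for that documentation; the semantics spelled out in the docstring below are placeholders for illustration, not confirmed answers to the questions above:

```python
from dataclasses import dataclass


@dataclass
class _AssignmentBucket:
    """
    Holds scores and visibility info for one assignment type.

    Attributes:
        assignment_type: Display name of the assignment type, e.g.
            "Homework". (Hypothetical: confirm whether this is the full
            name or the "HW"-style abbreviation.)
        expected_total: Count of assignments of this type defined in the
            grading policy. (Hypothetical: confirm whether droppable
            assignments are subtracted from this number.)
    """
    assignment_type: str
    expected_total: int
```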
```python
@dataclass
class _AssignmentBucket:
```
Question: Do we use the terminology "bucket" elsewhere in the grading code? I don't hate the name, but I don't want to add a convention if we already have something else we call this, say on the frontend.
On the frontend, the calculation logic primarily resides in these two functions:
https://github.com/openedx/frontend-app-learning/pull/1797/files#diff-4d3a6d853ff9fd1354a56024f06c6718c856c281e295de92c5b09ac1e86774bfL6-L92
Initially, I tried to replicate the exact calculation flow from the frontend, but it became difficult to follow due to the complexity of the logic. To improve readability and maintainability, I introduced the _AssignmentBucket class to make the code more modular.
We can definitely consider renaming it if “bucket” doesn’t feel intuitive; for example, something like AssignmentDetails might be clearer?
It's fine, we can keep the terminology for now. Thank you.
```python
'average_grade': round(earned_visible, 4),
'weighted_grade': round(earned_visible * weight, 4),
```
We have very specific rounding logic in grades (edx-platform/lms/djangoapps/grades/course_grade.py, lines 317 to 324 at 9110ae0):

```python
def _compute_percent(grader_result):
    """
    Computes and returns the grade percentage from the given
    result from the grader.
    """
    # Confused about the addition of .05 here? See https://openedx.atlassian.net/browse/TNL-6972
    return round_away_from_zero(grader_result['percent'] * 100 + 0.05) / 100
```

This is further complicated by the fact that the behavior of round() changed between Python 2 and Python 3, and we needed to keep consistency. Even if it yields the same result 99.99% of the time, inconsistencies will crop up somewhere in the user base, and people are extremely sensitive to any time their actual grades do not line up with what's displayed on the progress page.
Is it necessary to do these sorts of calculations here, rather than taking existing calculations and rounding already done in the grades app?
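For context, a standalone sketch (not the platform's actual helper) of why plain round() is risky: Python 3's built-in round() uses banker's rounding, so ties go to the even digit, while a round-away-from-zero helper always moves ties outward:

```python
import math


def round_away_from_zero(number, digits=0):
    """Round half away from zero, unlike Python 3's round-half-to-even."""
    factor = 10 ** digits
    return math.copysign(math.floor(abs(number) * factor + 0.5), number) / factor


print(round(2.5))                 # 2   (banker's rounding: tie goes to even)
print(round_away_from_zero(2.5))  # 3.0 (tie moves away from zero)
```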
The calculation here is performed for each assignment type to determine the total for each assignment individually. From what I understand, the grades app primarily computes the overall final grade rather than per-assignment totals (please correct me if I’m mistaken).
Previously, this calculation was handled on the frontend to display the results of each assignment separately in the Grade Summary table.
I’m also recalculating the final_grades here to display the raw weighted grade (77.50 in the screenshot) on the frontend, while the 78% value still comes from the calculation in the grades app.
To ensure consistent rounding behavior, I can switch from round to round_away_from_zero here. What are your thoughts on that?
I see that you've already shifted to using round_away_from_zero, which is good. It's not enough in the long term, but I'll write that up elsewhere. Thank you.
Overall LGTM in testing!
```python
if not self.visibilities:
    return 'none'
all_hidden = all(v is False for v in self.visibilities)
some_hidden = any(v is False for v in self.visibilities)
if all_hidden:
    return 'all'
if some_hidden:
    return 'some'
return 'none'
```
Can we create constants for all these states?
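For example, something along these lines (the constant names are hypothetical; the string values come from the code above):

```python
# Hypothetical constant names for the visibility states.
HIDDEN_NONE = 'none'
HIDDEN_SOME = 'some'
HIDDEN_ALL = 'all'


def hidden_state(visibilities):
    """Standalone rewrite of the snippet above using the constants."""
    if not visibilities:
        return HIDDEN_NONE
    if all(v is False for v in visibilities):
        return HIDDEN_ALL
    if any(v is False for v in visibilities):
        return HIDDEN_SOME
    return HIDDEN_NONE
```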
I'm approving this because it implements a useful feature and it does not make the situation worse, but in the medium-long term, we need to consolidate the grading logic more to be sure that progress and actual backend grade calculation are more aligned. I'll make a separate ticket to track that.
@pdpinch: Would consolidating this logic be something that MIT can lead? You folks have a lot of domain expertise in the subtleties of grading at this point.
Description
This PR migrates the data calculation logic for the GradeSummary table, which previously lived in frontend-app-learning.
This PR also introduces a new visibility option for assignment scores:
“Never show individual assessment results, but show overall assessment results after the due date.”
With this option, learners cannot see question-level correctness or scores at any time. However, once the due date has passed, they can view their overall score in the total grades section on the Progress page.
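A minimal sketch of the intended visibility rule as described above, using hypothetical names (this is not the platform API):

```python
from datetime import datetime, timezone


def overall_grade_visible(show_correctness, due_date):
    """Hypothetical helper: when the Progress page may show the overall score."""
    if show_correctness == 'never_but_include_grade':
        # Per-question correctness and scores stay hidden at all times; the
        # overall score appears only once the due date has passed.
        return due_date is not None and datetime.now(timezone.utc) > due_date
    return True  # other visibility options keep their existing behavior
```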
This PR should be reviewed in conjunction with openedx/frontend-app-learning#1797.
Supporting information
Proposal: https://openedx.atlassian.net/wiki/spaces/OEPM/pages/5150769174/Proposal+Add+option+to+show+overall+assessment+results+after+due+date+but+hide+individual+assessment+results
openedx/platform-roadmap#460
Internal ticket: https://github.com/mitodl/hq/issues/3859
Deadline
Before Ulmo