
xAPI statement verb "evaluated" presents wrong data #423

Open
adibMindcet opened this issue May 22, 2024 · 3 comments

@adibMindcet

The xAPI statement with the verb "evaluated" is expected to hold data about a student's grade on a given problem.
The statement's "result" section presents the following fields for that grade:

  1. scaled
  2. raw
  3. min
  4. max

When fetching the raw grade for a student on a given problem, we don't get the real raw grade from the LMS (e.g. 4 points, 6 points), but only "1" or "0". Likewise, the "max" field is always 1, not the real maximum grade in the LMS.
In the LMS gradebook and in the "Possible" (= max) and "Earned" (= raw) columns of the "Problem Grade Report", the correct grades do appear.

We need to check why this statement reports the wrong data, and how to make it report the correct raw and max grades.
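To make the discrepancy concrete, here is a hedged sketch of what the statement's "result" section looks like versus what the gradebook shows. The field names follow the xAPI score object; the concrete numbers are illustrative, not copied from a real statement.

```python
# Hypothetical example: a student answers a 6-point problem correctly.

# What the "evaluated" statement currently reports (raw/max normalized to 0..1):
observed_result = {
    "score": {"scaled": 1.0, "raw": 1, "min": 0, "max": 1},
}

# What the LMS gradebook / Problem Grade Report shows for the same attempt
# ("Earned" and "Possible" columns):
expected_result = {
    "score": {"scaled": 1.0, "raw": 6, "min": 0, "max": 6},
}

# The scaled score agrees, but raw/max do not reflect the problem's point values.
assert observed_result["score"]["scaled"] == expected_result["score"]["scaled"]
assert observed_result["score"]["max"] != expected_result["score"]["max"]
```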

@adibMindcet
Author

Further research:
Apparently the problem is that this xAPI statement is transformed from the problem_check event in the server-side tracking log, which takes the problem's grade data from fields called "grade" and "max_grade",
while the Problem Grade Report and the LMS itself take the "possible" and "earned" points from a field called "weight" (and present it as the raw grade, not as "weight").
The weight parameter (which represents the grade shown in the LMS and the PGR) can be found in tracking-log events of type "submitted"; however, it does not seem to be transformed into an xAPI event yet.
This is quite a confusing set of configurations and definitions; we'll dig deeper to understand more.
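The transformation described above could be compensated for as sketched below, assuming edx-platform's weighted-scoring rule (earned = weight × grade / max_grade). The event dict and function name are illustrative assumptions, not the actual transformer code.

```python
# Hedged sketch: reconstruct the Problem Grade Report's "Earned"/"Possible"
# values from a problem_check tracking-log event plus the problem's weight.
# Assumes edx-style weighted scoring: earned = weight * grade / max_grade.

def weighted_score(event: dict, weight: float) -> tuple[float, float]:
    """Return (earned, possible) as shown in the Problem Grade Report."""
    grade = event["grade"]          # correct input fields, e.g. 1
    max_grade = event["max_grade"]  # total input fields, e.g. 1
    earned = weight * grade / max_grade
    return earned, weight

# A correct answer on a 1-input problem weighted to 6 points:
event = {"event_type": "problem_check", "grade": 1, "max_grade": 1}
print(weighted_score(event, weight=6.0))  # -> (6.0, 6.0)
```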

@ziafazal
Contributor

@adibMindcet It appears edx-platform emits tracking events with scaled grades, not raw grades. The event-routing backend transforms those events into Caliper/xAPI statements and avoids manipulating anything unless it does not conform to the Caliper/xAPI standards. In the case of the problem_check event, we parse the grade info provided by the tracking event and transform it into a Caliper/xAPI statement without changing it.

@adibMindcet
Author

@ziafazal
The xAPI statements do provide Scaled score (alongside raw, min and max scores).
However, the terminology is quite poor: what the tracking events and xAPI statements refer to as the "raw", "min" and "max" score is actually the number of input fields the problem has.
Meanwhile, the "3 points possible" etc. shown in the LMS, the Problem Grade Report, and the submissions tables in SQL is driven by a parameter called "weight" in Studio and in the "grades_visibleblocks" table.

In short, the confusion comes from several different words being used for the same thing, some of them misleading: "weight" (Studio and SQL) = "possible points" (LMS and reports); "raw_possible" (SQL) = "max score" (xAPI); "raw" (xAPI) = "raw_earned" (SQL); "earned" (reports) = "scaled" × "weight" (which requires combining xAPI and SQL data to calculate).

Bottom line: all the data seems to exist, but it is spread around without unified, consistent naming, which makes it hard to find.
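The mapping above can be sketched as a small lookup plus the one derived quantity ("earned") that must be computed by joining xAPI and SQL data. This is a summary of the comment, not any real schema; the function name is hypothetical.

```python
# Synonym map between the naming used in Studio/SQL and in xAPI/reports,
# as described in this comment.
SYNONYMS = {
    "weight (Studio/SQL)": "possible points (LMS/reports)",
    "raw_possible (SQL)": "max score (xAPI)",
    "raw (xAPI)": "raw_earned (SQL)",
}

def earned_points(scaled: float, weight: float) -> float:
    """'Earned' column of the Problem Grade Report = xAPI scaled * SQL weight."""
    return scaled * weight

# Full marks (scaled = 1.0) on a problem weighted to 3 points:
print(earned_points(1.0, 3.0))  # -> 3.0
```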
