
Instructor functionality

  • KEY = Key functionality that will be implemented first. (NB: item-specific stuff first, then tests.)
  • NICE = Nice to have later

Assessment preparation and debugging

  • Browse/navigate own assessments (KEY)
    • In the first instance, assessments could be presented as a simple list. We could later add additional metadata for categorisation, or some kind of explicit "folder" structure.
  • Browse/navigate sample assessments (KEY)
    • QTI Works will be used to showcase a number of examples, and instructors should be able to browse, try and deliver these freely.
  • Browse/navigate shared assessments (NICE)
    • If we make it possible for instructors to "share" assessments, either by making them visible to the public, to all instructors, or to named instructors, then we'll need to make it possible for other people to browse them.
  • Upload an assessment into the system (KEY)
    • We will permit either an assessmentTest or a single assessmentItem. These would normally be uploaded as IMS content packages, though we will accept standalone assessmentItem XML as a convenience; this will be wrapped into a trivial CP by the system on import. We will store some metadata about the assessment, such as a name and title/description, which would be inferred from the initial upload data (e.g. the QTI XML) and which the user would be able to override (see the sketch after this list). In the first instance, metadata will be very minimal, but there's scope for more exciting things later, such as metadata for categorisation/tagging/taxonomy and maybe a flag for making things visible to the public (or other instructors).
  • Replace assessment content (KEY)
    • The instructor would upload a new CP (or XML) to replace the existing QTI data for the assessment, which would probably happen quite a lot during the upload/validate/try/debug cycle. We would keep the existing metadata intact.
  • Validate an assessment (KEY)
    • This will invoke the existing validation process and show the results. We will record that the assessment has been validated, and whether or not it was found to be valid.
  • Try an assessment (KEY)
    • This lets the instructor try the assessment out to see how it works. Additional debugging information would be displayed, in a similar way to how MAE currently operates. The assessment would have to be marked as valid first, though for convenience I'll probably just validate it first (if that hasn't been done already) and ask the instructor to sort out any problems before it can be tried. There should be options for tweaking aspects of how the assessment is delivered (see below).
  • Delete assessment (KEY)
    • This would completely remove the assessment from the system. Any data recorded for that assessment would be deleted too, so this is a potentially destructive process and would need to be flagged up accordingly.
  • Modify assessment visibility (NICE)
    • As mentioned, it might be nice to allow instructors to "share" assessments either with the public, all instructors, or specific instructors.
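
To make the metadata inference mentioned under "Upload an assessment" a little more concrete, here is a minimal sketch (not the actual QTI Works import code) of how a default name and title could be pulled from a standalone assessmentItem upload using plain JDK XML parsing. The class and method names are invented for illustration.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

/**
 * Hypothetical illustration only: derives default metadata for a newly
 * uploaded standalone assessmentItem from its root element attributes.
 */
public final class MetadataGuesser {

    /** Returns { name, title } inferred from the uploaded QTI XML. */
    public static String[] guessNameAndTitle(File qtiXmlFile) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Element root = factory.newDocumentBuilder().parse(qtiXmlFile).getDocumentElement();

        if (!"assessmentItem".equals(root.getLocalName())) {
            throw new IllegalArgumentException("Expected a standalone assessmentItem");
        }
        // QTI 2.1 requires 'identifier' and 'title' attributes on assessmentItem,
        // so they make reasonable defaults that the instructor can later override.
        return new String[] { root.getAttribute("identifier"), root.getAttribute("title") };
    }
}
```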

Assessment delivery

  • Create a "delivery" of an assessment for candidates (KEY)
    • A "delivery" of an assessment would be a particular instance of the assessment that a set of candidates (students) would take over a certain time period. All assessments would have at least one delivery; I have seen use cases where the same assessment is delivered more than once. (E.g. at the start and end of a semester.) In the first instance, a delivery would be manually marked as open/closed, and would be available to any LTI candidate coming into with the correct launch data. In the future, we could perhaps add more fine-grained functionality.
    • Each assessment delivery would most likely require some additional data to specify how the assessment would function that is not specified within the QTI. For example, when delivering a single item, you may want to specify the maximum number of attempts. There's no way of doing this within the item XML, so we'd have to allow this to be set. Tests will be different, and potentially more complicated.
  • Update the parameters of a delivery (KEY..NICE)
    • In the first instance, this would simply consist of marking the delivery as open/closed and specifying "how" candidates would access the assessment (normally LTI). Later on, we could have more fine-grained things.
    • For safety and consistency, we probably need to enforce a ban on making changes to an assessment or delivery once a candidate has attempted a delivery. These details could probably be simplified slightly in order to avoid the system appearing too restrictive.
  • Get launch parameters for tool consumers and candidates (KEY)
    • This would create the required URLs and any additional data required to allow candidates to access the assessment delivery.
  • Delete an assessment delivery (KEY)
    • This would delete the delivery, as well as all data accumulated for that delivery, which would include data pertaining to candidates. This is therefore quite a dangerous option!
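
As a rough illustration of the per-delivery data described above (an open/closed flag, an attempt limit, and the launch data handed to an LTI tool consumer), here is a minimal sketch. The class, field names and URL pattern are invented for illustration and do not reflect the real QTI Works domain model; an LTI 1.x consumer essentially just needs a launch URL, a consumer key and a shared secret.

```java
import java.util.UUID;

/**
 * Hypothetical sketch of per-delivery settings and LTI launch data.
 * All names here are illustrative only.
 */
public final class DeliverySketch {

    private final long deliveryId;
    private boolean open;                  // manually opened/closed by the instructor
    private final int maxAttempts;         // e.g. for single item delivery; 0 = unlimited
    private final String ltiConsumerKey;   // given to the tool consumer (e.g. a VLE)
    private final String ltiSharedSecret;  // shared only with that consumer

    public DeliverySketch(long deliveryId, int maxAttempts) {
        this.deliveryId = deliveryId;
        this.open = false;
        this.maxAttempts = maxAttempts;
        this.ltiConsumerKey = "qtiworks_delivery_" + deliveryId;
        this.ltiSharedSecret = UUID.randomUUID().toString();
    }

    public void setOpen(boolean open) { this.open = open; }

    /** The launch data an instructor would copy into their tool consumer. */
    public String describeLaunchData(String baseUrl) {
        return "Launch URL: " + baseUrl + "/lti/launch/" + deliveryId + "\n"
             + "Consumer key: " + ltiConsumerKey + "\n"
             + "Shared secret: " + ltiSharedSecret;
    }
}
```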

Assessment reporting

  • See candidate attempt summary (KEY..NICE)
    • This would show summary details of how each known candidate is progressing. E.g. has she made an attempt or not? What was the (last) score? Etc.
    • There's potential for drilling down quite deeply.
  • Export attempt data (KEY..NICE)
    • Similar to the above. This would need to support different types of reports. At the very least, we would need a way of getting scores in a simple tabular format for course admin; geeks would want access to the full XML results. (A sketch of the simple tabular case appears at the end of this page.)
  • Replay a candidate attempt (NICE)
    • This would show each interaction the candidate made, which might be quite useful. I aim to store all data in order to make this possible.
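
Finally, a minimal sketch of the "simple tabular format" export mentioned above: one row per candidate with their latest score, written as CSV. The names are invented for illustration; the full XML results export would be a separate, richer report.

```java
import java.io.PrintWriter;
import java.io.Writer;
import java.util.Map;

/** Hypothetical sketch of a simple score export for course admin. */
public final class ScoreCsvExporter {

    /** Writes "candidate,score" rows, with a header line, to the given writer. */
    public static void export(Map<String, Double> latestScoreByCandidate, Writer out) {
        PrintWriter writer = new PrintWriter(out);
        writer.println("candidate,score");
        for (Map.Entry<String, Double> entry : latestScoreByCandidate.entrySet()) {
            // Leave the score column blank for candidates who have not yet attempted
            Double score = entry.getValue();
            writer.println(entry.getKey() + "," + (score == null ? "" : score));
        }
        writer.flush();
    }
}
```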