
1.8.9.1 change mode within story #341

Open
JackMostow opened this issue Jun 6, 2018 · 46 comments

@JackMostow
Contributor

This issue succeeds READ add PARROT, HIDE, REVEAL #262.

Allow story mode to change sentence by sentence, so that we can, e.g., insert spoken prompts.

Which is easier to implement and use?

a. Treat story mode as a variable that storydata.json can assign a new value.
+: simple
+: concise
-: context-dependent because its scope lasts until the next assignment

b. Treat story mode as a nestable modifier to wrap around one or more pages or sentences.
+: context-independent insofar as it's not affected by changing the mode of the preceding text
-: verbose
-: modifier could nest around multiple pages, which might be awkward
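To make the trade-off concrete, here are hypothetical storydata.json sketches of each syntax (field names and values are illustrative, not an implemented format). Under option (a), a mode assignment stays in effect until the next one:

```json
{
  "sentences": [
    { "mode": "HEAR", "text": "..." },
    { "text": "..." },
    { "mode": "READ", "text": "..." }
  ]
}
```

Under option (b), the mode wraps the sentences it governs, so changing the mode of preceding text cannot affect it:

```json
{
  "blocks": [
    { "mode": "HEAR", "sentences": [ { "text": "..." } ] },
    { "mode": "READ", "sentences": [ { "text": "..." }, { "text": "..." } ] }
  ]
}
```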

@JackMostow
Contributor Author

This is an architectural change, so it involves Kevin, but may be appropriate for Octav to implement.

@JackMostow
Contributor Author

If animator graph for story does a case split on mode, or mode is a feature, storydata.json can signal which words to narrate from which package, e.g. generic prompts vs. story text.

@JackMostow
Contributor Author

JackMostow commented Jul 10, 2018

Octav - Please implement per-sentence modes.
Ulani - As consumer for this feature, please let Octav know which syntax will make your task easier.

@octavpo
Collaborator

octavpo commented Jul 10, 2018

Let me see if I understand the task correctly. We want to be able to switch between the six modes (HEAR, ECHO, READ, etc.) at the sentence level, right? The simplest approach seems to be adding a 'mode' attribute at the desired levels in storydata.json files, that is, at the sentence, page, and possibly whole-tutor level. For the whole tutor there's already a mode in dev_data.json, though, so one question is whether we get rid of that or keep it as the default mode. Keeping it has the advantage that you could use the same tutor in different modes; I don't know if that's something useful.

@octavpo
Collaborator

octavpo commented Jul 10, 2018

If animator graph for story does a case split on mode, or mode is a feature, storydata.json can signal which words to narrate from which package, e.g. generic prompts vs. story text.

Not sure I understand this comment. Are we saying we also want to communicate the mode as a feature to the graph? And what it would do with it?

@JackMostow
Contributor Author

So I assume, because the animator graph already has mode-specific edges:
"edges": [
    {"constraint": "STORY_STARTING", "edge": "BEGIN_STORY"},
    {"constraint": "ECHO_LINEMODE", "edge": "ECHO_LINE_NODE"},
    {"constraint": "PARROT_LINEMODE", "edge": "PARROT_LINE_NODE"},
    {"constraint": "STORY_COMPLETE", "edge": "NEXT_SCENE"},
    {"constraint": "PAGE_COMPLETE", "edge": "NEXT_PAGE_NODE"},
    {"constraint": "PARAGRAPH_COMPLETE", "edge": "NEXT_PARA_NODE"},
    {"constraint": "LINE_COMPLETE", "edge": "NEXT_LINE_NODE"},
    {"constraint": "", "edge": "NEXT_WORD_NODE"}
]

However, I'm not sure to what if any degree they determine mode-specific behavior. If they do, this seems like the appropriate place to specify it.

@octavpo
Collaborator

octavpo commented Jul 10, 2018

Right, I forgot about that. It does need to behave somewhat differently, because it needs to read different prompts at different times. I already implemented a 'prompt' feature some time ago, by which the json file can indicate what prompt to use for PARROT mode; maybe we want to generalize it? I'm not sure what the intended use is for other modes.

@JackMostow
Contributor Author

(How) does 'prompt' differ from HIDE mode, i.e. play narrated prompt without displaying its text? - Jack

@octavpo
Collaborator

octavpo commented Jul 10, 2018

I think what you're calling 'prompt' above is the sentence read by the tutor. The prompt feature we have now can be used to customize the introductory sentences like 'Listen carefully' or 'Please listen and repeat after me'. I don't know if any tutor actually uses it.

@octavpo
Collaborator

octavpo commented Jul 10, 2018

This mode is called tutorVariant currently, and it's encoded in dev_data.json with values like 'story.hear', 'story.hide', ... If we still want to use that, maybe we should also name the feature in storydata.json 'tutorVariant' or just 'variant' for consistency, and use the same values. Or else we can reduce all the 'story' variants to just 'story' and have a 'mode' in storydata.json?

@JackMostow
Contributor Author

We already reuse the same story in different modes, and I'd like to be able to continue doing so, so we still need an externally specifiable global mode.

We also need the ability to specify per-sentence modes for various purposes, such as scripted prompts.

If a story in a global mode contains a sentence with its own mode, the sentence-specific mode should override the global mode, because:

  1. We need some conflict resolution policy in case it happens.
  2. This one is simple.
  3. It probably makes more sense than letting the global mode override the local mode.

More generally, any grainsize component should be able to specify its own mode that overrides the mode of the component that it's part of in the overall hierarchy:

  • story
  • page
  • paragraph
  • sentence

Let's avoid per-utterance or per-word modes: we don't need them or the complications they'd add.
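As a sketch of this override policy (attribute name and structure illustrative only): the story-level mode comes from dev_data.json, and a page or sentence may carry its own mode that applies only within its scope:

```json
{
  "pages": [
    {
      "mode": "READ",
      "sentences": [
        { "text": "..." },
        { "mode": "HIDE", "text": "..." },
        { "text": "..." }
      ]
    }
  ]
}
```

Here the first and third sentences inherit READ from the page; only the second is in HIDE mode.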

@octavpo
Collaborator

octavpo commented Jul 11, 2018

With the current structure of the storydata.json files we cannot store any additional data at the paragraph level, as a paragraph is stored just as an array of sentences. We'd need to change it into an object to be able to add more parameters. It's probably not too difficult in terms of code changes, but all json files would need to be regenerated to the new format. Should I do it?
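For illustration, assuming a paragraph is currently stored as a bare array of sentence objects, the change would turn

```json
[ { "text": "..." }, { "text": "..." } ]
```

into an object with room for additional parameters:

```json
{
  "mode": "ECHO",
  "sentences": [ { "text": "..." }, { "text": "..." } ]
}
```

(Key names here are hypothetical, not the implemented format.)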

I assume the story level would be what we have now in dev_data.json, we don't want to add another story level setting in storydata.json, right?

@JackMostow
Contributor Author

Then omit paragraph-specific modes -- we can get by with just page- and sentence-specific modes.
If you're asking whether storydata.json can specify a story mode instead of what's in dev_data.json, I agree that it should not, so as to avoid confusion. Thanks. - Jack

@octavpo
Collaborator

octavpo commented Jul 14, 2018

I have an implementation of the mode changes; it wasn't as easy as I thought. Not on GitHub yet, but tested. A few questions:

  • It uses a 'variant' property; let me know if you want to rename it to 'mode' or something else.
  • It doesn't repeat prompts. It probably needs to, since an old prompt might no longer be valid; the question is when: at the start of each sentence, or only when the mode changes? Repeating before every sentence might be tedious if the mode doesn't change.
  • Related to that, it looks like we might also need the ability to customize prompts at the sentence level. Currently we only have that at the story and page level; should I add it?

@JackMostow
Contributor Author

The property name shouldn't matter.
Prompts can be implemented simply as sentences in HEAR mode.
Please clarify "repeat prompts" with a concrete example.

@JackMostow
Contributor Author

Just to clarify: mode should have scope nested within the story - page - sentence hierarchy.
Thus a sentence in HEAR mode would not affect the mode of any other sentence or page.

@octavpo
Collaborator

octavpo commented Jul 14, 2018

Prompts can be implemented simply as sentences in HEAR mode.

I assume you're thinking of HIDE mode, so they're not displayed. That's true, but they'd be stored in a different place in the assets hierarchy, and we'd need to repeat them for each tutor. It might be nicer to specify them as a sentence-level 'prompt' feature, especially since we already have one for pages; it should be easy to extend, and it would be symmetrical with 'variant'.
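A sentence entry with both features might then look something like this (attribute names and values are speculative):

```json
{ "variant": "story.parrot", "prompt": "listen_and_repeat", "text": "..." }
```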

Please clarify "repeat prompts" with a concrete example.

Let's say a story has two sentences, first in HEAR mode, second in READ mode. Currently the tutor would only play the prompt 'Listen carefully' at the start of the tutor. But that prompt is not valid for the second sentence. So it should play the other prompt 'Please read aloud' (or maybe a custom one) before the second sentence.

On the other hand if both sentences are in HEAR mode it should probably not repeat the prompt before the second sentence.

Just to clarify: mode should have scope nested within the story - page - sentence hierarchy.
Thus a sentence in HEAR mode would not affect the mode of any other sentence or page.

Yes that's how I've implemented it.

@JackMostow
Contributor Author

Good catch. Yes, I meant HIDE, not HEAR.

Good point about prompts for mode changes, but:

  1. Mode is specified hierarchically, so we need to define mode change carefully.
    a. A sentence in a different mode than the previous sentence: yes
    b. A sentence in a different mode than the page that contains it: not necessarily if it's the first sentence on the page

  2. At which point(s) would you specify the page/sentence prompts for a mode, applicable how broadly?
    a. In the animator graph, for all text in that mode?
    b. At the top level of storydata.json, for every page/sentence in that story?
    c. In the page/sentence, for that page/sentence?

Thanks. - Jack

@JackMostow
Contributor Author

I'll ask Leonora to record with "Please" (Tafadhali), "Now" (Sasa), and nothing, and get her advice.

@amyogan
Collaborator

amyogan commented Jul 20, 2018 via email

@JackMostow
Contributor Author

In English, "Repeat after me" sounds OK, but "Read aloud." sounds abrupt, "Now read aloud" sounds OK, and "Sasa" is shorter than "Tafadhali".
But we need whatever's shortest without sounding rude in Swahili. I'll ask Leonora when we record today.
We can record all 3 versions and check with Filipo if Tanzanian norms of politeness differ from Kenyan norms.

@JackMostow
Contributor Author

@judithodili - What if any additional prompts from new or revised activities need translated and narrated?

@amyogan
Collaborator

amyogan commented Jul 20, 2018 via email

@JackMostow
Contributor Author

@octavpo and @uhq1 - In case you're not already doing so, please coordinate so that Ulani can use Octav's latest implementation of sentence-specific modes to implement comprehension questions.

Ulani - When can I see each of these demos?
(1) generic questions (I hear you already demo'd; is there more to do?)
(2) cloze questions (coming soon?)
(3) picture-matching questions (aiming for when?)

My priorities for picture-matching are:

  1. listening/reading text comprehension: "Touch the picture that shows what you just heard/read."
    Make sure there's flexibility in terms of whether and when to display the text and the pictures, e.g. first display just the text, then just the pictures, in order to avoid contention for the visual channel.

  2. listening/reading word comprehension (aka "vocabulary"): same prompt, but the text is one word, and the pictures are objects like those in the alphabet story and the animals in FaceLogin. Can reuse the very same code, perhaps with 3-4 pictures instead of 2. Practices retrieving meaning of a given printed word. Tests knowledge of the word.

  3. Vocabulary: "Touch the word illustrated in the picture." same task in the opposite direction. Requires new code to display a list of words to pick from, but you may be able to reuse code from cloze. Mapping picture to word provides marginal value beyond mapping word to picture. Practices retrieving name of a given object and then recognizing its printed form. Gets more mileage out of the same number of pictures, but allows answering without knowing the illustrated word if the kid can eliminate the others.

  4. Vocabulary: "Match the words to the pictures." Requires even more code and a UI/UX used nowhere else in RoboTutor. This activity is popular in schools and printed workbooks, which @judithodili values but I don't. I don't see any extra pedagogical value in it unless it's more engaging, and it's harder to implement in code than simple multiple choice. From an assessment standpoint, it conflates knowledge of one word with knowledge of another: e.g., if you know 2 of the 3 words, you can match the unknown word by process of elimination based on your knowledge of the others.

In other words, I view cloze questions and text-comprehension picture-matching as must-haves, word picture-matching as low-hanging fruit, and the other two as optional.

@octavpo
Collaborator

octavpo commented Jul 24, 2018

I've implemented the changes about colors and about when to repeat prompts (when the current prompt differs from the last one).

We already have prompts for all these modes, so it's not clear to me why we're trying to create new ones. The current prompts are (judging by file names; I have no idea what the Swahili audio actually says):

  • READ, ECHO, REVEAL: "Please read aloud"
  • HEAR, HIDE: "Listen carefully"
  • PARROT: "Please listen and repeat after me" at start, and "Repeat after me" after the reading phase.

We can change them easily (in the program I mean) if you think they're not good, but you guys are talking like that's something new.

@octavpo
Collaborator

octavpo commented Jul 24, 2018

About the idea of reading the prompt before showing the text: I understand what Judith wants, but I don't see how this works with the new idea that modes can change from sentence to sentence, which means we might need to repeat prompts between successive sentences. If we only do it for the first prompt in the story (or even on a page), fine, but that's inconsistent with how the rest of the prompts would work. If we do it for all prompts, then each time we want to say a prompt we hide all the text, say the prompt, and then show the text again. Is that what you want?

@JackMostow
Contributor Author

"Please read aloud" is too long in Swahili.
"Read to me" is shorter and more personable.

Listen. | Sikiliza. | HEAR
Read to RoboTutor. | Somea RoboTutor. | READ, ECHO

Their Swahili narrations are in RT11-POSEIDON MORE PROMPTS 160726.491 07202018.

HIDE should not have a prompt -- it's used for prompts.

Please clarify '... and "Repeat after me" after the reading phase.' What "reading phase"? "Repeat after me" belongs before RoboTutor says the thing to repeat.

@judithodili - You just want something for the first page, right? The problem is that if the text appears before the prompt, kids jump the gun and start reading it without hearing the instructions? The solution is to play the prompt before showing the text?

@octavpo - I don't understand your issue. Isn't this behavior equivalent to starting each story with "Please read aloud" in HIDE mode?

@judithodili
Collaborator

judithodili commented Jul 25, 2018 via email

@amyogan
Collaborator

amyogan commented Jul 25, 2018 via email

@JackMostow
Contributor Author

Oops, I misdescribed it as "Read to me," but the actual prompt is indeed "Read to RoboTutor." In fact I'm very leery of using the first person to refer to RoboTutor. "Repeat after me" didn't bother me; should it?

If RoboTutor, like Project LISTEN's Reading Tutor, displays each sentence only when it's ready to read -- i.e. hides future sentences rather than showing them in gray -- then HIDE will say the prompt before showing the text. What if any good reason is there to display future text in gray rather than not at all?

@octavpo
Collaborator

octavpo commented Jul 25, 2018

HIDE should not have a prompt -- it's used for prompts.

HIDE was originally designed for a different purpose (see issue #262); that's why it has a prompt, although I'm not convinced the prompt is valid for the original purpose. If you're positive we don't need it for the original purpose anymore, I can remove the prompt, but then it might also be clearer to rename the mode to PROMPT, or just add a PROMPT mode.

@octavpo
Collaborator

octavpo commented Jul 25, 2018

Please clarify '... and "Repeat after me" after the reading phase.' What "reading phase"? "Repeat after me" belongs before RoboTutor says the thing to repeat.

The way PARROT is currently implemented, it plays two prompts: one before it reads the sentence and another before it listens to the kid. If you think the first prompt is enough, I can remove the second one. Or maybe a different prompt is better at that point? You were suggesting something like "Now YOU say it".

@octavpo
Collaborator

octavpo commented Jul 25, 2018

@octavpo - I don't understand your issue. Isn't this behavior equivalent to starting each story with "Please read aloud" in HIDE mode?

The problem is that it's not clear either how it should work or how we want to implement it. We have two ways to play sentences with no visible text: either as prompts from the graph, or as sentences in HIDE mode.

If it's a sentence in HIDE mode, it works like this: you add the text as the first sentence of the story, either on a separate page or on the same page as the rest of the sentences, depending on what image you want kids to see while hearing the prompt. Then the next sentence comes, in a different mode, so with a different prompt. The system will play that prompt, so you get two prompts in sequence. Besides, you also need to add the audio file to the proper folder; changing the audio source is actually difficult.

If I understand correctly, your comments indicate that's not what you want, because you don't want to add the prompt sentences to each story, nor the audio to each story folder. So then it's a prompt; the main difference between prompts and story sentences is that prompts are specified in the code and their audio lives in a different place. There's also a whole tracking mechanism used for sentences that's absent for prompts: I cannot play a sentence in any mode when the needed parameters are not actually present in the story.

So if we're playing it as a prompt, it seems what we want to change is that it's played before the sentences are displayed rather than after, as it currently is. That can be done, probably relatively easily, but there are still a few issues. One is the image: should it be visible or not?

And the other issue is consistency of behavior. This request was made when we had only one prompt at the start of the story. But with the implementation of mode switching that's no longer true; we could potentially have a different prompt played before each sentence. So do we implement this new behavior only for the first prompt in the story? Why would the first prompt work differently than the others? If seeing the text while the prompt plays is an issue, it seems like it should be an issue for all of them, so we should rather implement it for all.

If we do that, it will work like this: the first sentence comes, and its prompt is played before the text is displayed (maybe while the image is visible). Then the narration does something with the sentence, depending on mode. Then the next sentence comes. If it's in the same mode, no prompt is played and the narration goes on. If it's in a different mode, its prompt is played while all the text is hidden, then the text is shown again and the narration goes on. So each time the mode changes, the whole page text is hidden while the prompt is played. Is that what you want? I just wanted to make sure you're aware of it before I spend the time implementing it.

I also feel I'm spending too much time on something with potentially little benefit; I should move on to other things, as there's a long list.

@JackMostow
Contributor Author

The way PARROT is currently implemented, it's supposed to play two prompts, one before it reads the sentence and another one before it listens to the kid. If you think the first prompt is enough I can remove the second one. Or maybe a different prompt is better at that place? You were suggesting something like "Now YOU say it".

@judithodili and @amyogan -

  1. Should we rephrase "Repeat after me" to avoid using the first person, e.g. to "Repeat after RoboTutor"?
  2. After reading something to the kid, does PARROT need to say "Now YOU say it" or should it just wait?

@JackMostow
Contributor Author

@octavpo - I agree that this task is getting out of hand in an effort to anticipate all cases and handle them correctly. I'm not sure what to do about it.
At minimum, keep the current story-level modes working, and allow sentence-level modes, especially to allow story-specific prompts.
If it helps, distinguish these two constructs:

  1. HIDE: read the story text to the kid, using sentence narrations from the story folder.
  2. PROMPT: tell the kid what to do, using narrated prompts from the prompts folder.

Rather than anticipate all cases, implement whatever seems reasonable, to fix later when and if needed.

@amyogan and @judithodili - Do you concur? Any other advice on how to move on expeditiously?

@octavpo
Collaborator

octavpo commented Aug 3, 2018

The PROMPT mode should be ready now, but it's hard to test because I have no idea where these prompt audio files need to go and how to deliver them to the tablet. I don't see the old ones anywhere in CodeDrop1_Assets. I see them somewhere in RTAsset_Publisher/RTAsset_Audio_SW_Set1/, but I don't know how to deliver them to the tablet from there; my understanding is that all deliveries now happen from CodeDrop1_Assets. So if somebody has more information, that would be good.

Also, since HIDE is not going to be used for prompts anymore, we need a prompt for it; I assume it should be the same as HEAR? Is REVEAL still the same as READ? And I don't see a revised prompt for PARROT; I see a proposal for "Repeat after me". Are we set on it?

@JackMostow
Contributor Author

@kevindeland - Please explain where to put audio files for tutor prompts for them to get installed.
@octavpo - Please address questions about RoboTutor architecture/infrastructure/installation explicitly to @kevindeland because he might not see them otherwise.

Yes, I suppose the prompt for HIDE should be the same as for HEAR, namely "Listen".

The prompt for REVEAL, if any, should be text-specific, not general.

The revised prompt for PARROT is "Rudia baada ya RoboTutor" ("Repeat after RoboTutor") rather than "Repeat after me," to avoid anthropomorphizing RoboTutor by using the first person in prompts.

@octavpo
Collaborator

octavpo commented Aug 10, 2018

The PROMPT mode should be done now. I've put all changes on branch reading_modes and sent a PR. I also put the new files on RTAsset_Publisher/octav and sent a PR.
