1.8.9.1 change mode within story #341
This is an architectural change, so it involves Kevin, but may be appropriate for Octav to implement. |
If the animator graph for a story does a case split on mode, or mode is a feature, storydata.json can signal which words to narrate from which package, e.g. generic prompts vs. story text. |
Octav - Please implement per-sentence modes. |
Let me see if I understand the task correctly. So we want to be able to switch between the six modes (HEAR, ECHO, READ, etc.) at the sentence level, right? The simplest thing seems to be to add a 'mode' attribute at the desired levels in the storydata.json files: sentence, page, and possibly the whole tutor. Although for the whole tutor there's already a mode in dev_data.json, so one question is whether we get rid of that or keep it as the default mode. It has the advantage that you could use the same tutor in different modes; I don't know if that's something useful. |
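For example, a storydata.json fragment with a page-level default and a sentence-level override might look roughly like this; the property names, nesting, and values are a hypothetical sketch of the proposal above, not the implemented format:

```json
{
  "pages": [
    {
      "mode": "HEAR",
      "sentences": [
        { "text": "..." },
        { "text": "...", "mode": "READ" }
      ]
    }
  ]
}
```

Here the page defaults to HEAR and the second sentence overrides it with READ.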
Not sure I understand the earlier comment about signaling mode to the graph. Are we saying we also want to communicate the mode as a feature to the graph? And what would it do with it? |
So I assume so, because the animator graph already has mode-specific edges. However, I'm not sure to what degree, if any, they determine mode-specific behavior. If they do, this seems like the appropriate place to specify it. |
Right, I forgot about that. It does need to do things somewhat differently because it needs to read different prompts at different times. I already implemented a 'prompt' feature some time ago, by which the json file could indicate what prompt to use for parrot mode; maybe we want to generalize it? I'm not sure what the intended use would be for other modes. |
(How) does 'prompt' differ from HIDE mode, i.e. play narrated prompt without displaying its text? - Jack |
I think what you're calling 'prompt' above is the sentence read by the tutor. The prompt feature we have now can be used to customize the introductory sentences like 'Listen carefully' or 'Please listen and repeat after me'. I don't know if any tutor actually uses it. |
This mode is called tutorVariant currently, and it's encoded in dev_data.json with values like 'story.hear', 'story.hide', ... If we still want to use that, maybe we should also name the feature in storydata.json 'tutorVariant' or just 'variant' for consistency, and use the same values. Or else we can reduce all the 'story' variants to just 'story' and have a 'mode' in storydata.json? |
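For instance, the current dev_data.json encoding (value quoted from the comment above) is:

```json
{ "tutorVariant": "story.hear" }
```

versus one hypothetical storydata.json spelling with the 'story.' prefix reduced away (an option, not a decision):

```json
{ "mode": "hear" }
```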
We already reuse the same story in different modes, and I'd like to be able to continue doing so, so we still need an externally specifiable global mode. We also need the ability to specify per-sentence modes for various purposes, such as scripted prompts. If a story in a global mode contains a sentence with its own mode, the sentence-specific mode should override the global mode, because:
More generally, any grainsize component should be able to specify its own mode that overrides the mode of the component that it's part of in the overall hierarchy:
Let's avoid per-utterance or per-word modes: we don't need them or the complications they'd add. |
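A minimal sketch of the override rule this implies, with all names invented for illustration (not RoboTutor's actual code):

```java
// Illustrative only: class and method names are hypothetical.
public class ModeResolver {

    /** The most specific non-null mode in the story > page > sentence
     *  hierarchy wins; the story-level mode is the externally
     *  specifiable global default. */
    public static String effectiveMode(String storyMode,
                                       String pageMode,
                                       String sentenceMode) {
        if (sentenceMode != null) return sentenceMode; // most specific wins
        if (pageMode != null)     return pageMode;
        return storyMode;                              // global default
    }
}
```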
With the current structure of the storydata.json files we cannot store any additional data at the paragraph level, as a paragraph is stored just as an array of sentences. We'd need to change it into an object to be able to add more parameters. It's probably not too difficult in terms of code changes, but all json files would need to be regenerated in the new format. Should I do it? I assume the story level would be what we have now in dev_data.json; we don't want to add another story-level setting in storydata.json, right? |
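To make the structural issue concrete, a hypothetical before/after (field names and sentence representation invented): today a paragraph is just an array,

```json
[ { "text": "..." }, { "text": "..." } ]
```

whereas attaching paragraph-level parameters would require an object, e.g.

```json
{
  "mode": "HEAR",
  "sentences": [ { "text": "..." }, { "text": "..." } ]
}
```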
Then omit paragraph-specific modes -- we can get by with just page- and sentence-specific modes. |
I have an implementation of the mode changes; it wasn't as easy as I thought. It's not on GitHub yet, but tested. A few questions:
|
The property name shouldn't matter. |
Just to clarify: mode should have scope nested within the story - page - sentence hierarchy. |
I assume you're thinking about HIDE mode, so they're not displayed. That's true, but they'd be stored in a different place in the assets hierarchy, and we'd need to repeat them for each tutor. It might be nicer to have them specified as a sentence 'prompt' feature, especially since we already have it for pages; it should be easy to extend, and it would be symmetrical with 'variant'.
Let's say a story has two sentences, first in HEAR mode, second in READ mode. Currently the tutor would only play the prompt 'Listen carefully' at the start of the tutor. But that prompt is not valid for the second sentence. So it should play the other prompt 'Please read aloud' (or maybe a custom one) before the second sentence. On the other hand if both sentences are in HEAR mode it should probably not repeat the prompt before the second sentence.
Yes that's how I've implemented it. |
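A minimal sketch of that repeat-only-on-change rule, assuming prompts can be compared as strings; the names are invented for illustration, not taken from the actual implementation:

```java
// Hypothetical sketch: play a prompt only when it differs from the last one.
public class PromptGate {

    private String lastPrompt;

    /** Returns the prompt to play before the next sentence,
     *  or null if it matches the prompt last played. */
    public String promptToPlay(String currentPrompt) {
        if (currentPrompt == null || currentPrompt.equals(lastPrompt)) {
            return null;          // same mode as before: don't repeat it
        }
        lastPrompt = currentPrompt;
        return currentPrompt;     // mode changed: play the new prompt
    }
}
```

For the two-sentence example above, HEAR then READ plays both prompts, while HEAR then HEAR plays only the first.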
Good catch. Yes, I meant HIDE, not HEAR. Good point about prompts for mode changes, but:
Thanks. - Jack |
I'll ask Leonora to record with "Please" (Tafadhali), "Now" (Sasa), and nothing, and get her advice. |
I concur with no 'please' - the prompts are so long the kids get impatient!! |
In English, "Repeat after me" sounds OK, but "Read aloud." sounds abrupt, "Now read aloud" sounds OK, and "Sasa" is shorter than "Tafadhali". |
@judithodili - What additional prompts, if any, from new or revised activities need to be translated and narrated? |
Kids don't get told please there :) But I agree Filipo should be able to confirm! |
@octavpo and @uhq1 - In case you're not already doing so, please coordinate so that Ulani can use Octav's latest implementation of sentence-specific modes to implement comprehension questions. Ulani - When can I see each of these demos? My priorities for picture-matching are:
In other words, I view cloze questions and text comprehension picture-matching as must-haves, word picture-matching as low-hanging fruit, and the other two as optional. |
I've implemented the changes about colors and when to repeat prompts (when the current prompt is different from the last one). We already have prompts for all these modes, so it's not clear to me why we're trying to create new ones. The current prompts are (at least by file names; no idea what the Swahili audio is actually saying):
We can change them easily (in the program, I mean) if you think they're not good, but you guys are talking as if that's something new. |
About the idea of reading the prompt before showing the text: I understand what Judith wants, but I don't see how this would work with the new idea that we might change modes from sentence to sentence, so we might need to repeat prompts between successive sentences. If we only do it for the first prompt in the story (or even on a page), fine, but it's inconsistent with how the rest of the prompts would work. If we do it for all prompts, that means each time we want to say a prompt we hide all text, say the prompt, and then show the text back. Is that what you guys want? |
"Please read aloud" is too long in Swahili. Listen. | Sikiliza. | HEAR Their Swahili narrations are in RT11-POSEIDON MORE PROMPTS 160726.491 07202018. HIDE should not have a prompt -- it's used for prompts. Please clarify '... and "Repeat after me" after the reading phase.' What "reading phase"? "Repeat after me" belongs before RoboTutor says the thing to repeat. @judithodili - You just want something for the first page, right? The problem is that if the text appears before the prompt, kids jump the gun and start reading it without hearing the instructions? The solution is to play the prompt before showing the text? @octavpo - I don't understand your issue. Isn't this behavior equivalent to starting each story with "Please read aloud" in HIDE mode? |
‘Read to me’ is *very* risky. Robotutor is not human and referring to it with human terms may have really negative effects with this population. XPRIZE reported that some people already think it is witchcraft... do you want to risk making this problem worse?
Hearing the instruction before seeing the text would be nice but not worth losing sleep over if it’s too much work. Depending on the verdict on how we assess performance, it might not be an issue even if they start reading early.
-- Regards, Judith Odili Uchidiuno, www.judithu.com |
I thought we ended up going with 'Read to Robotutor'? |
Oops, I misdescribed it as "Read to me," but the actual prompt is indeed "Read to RoboTutor." In fact I'm very leery of using the first person to refer to RoboTutor. "Repeat after me" didn't bother me; should it? If RoboTutor, like Project LISTEN's Reading Tutor, displays each sentence only when it's ready to read -- i.e. hides future sentences rather than showing them in gray -- then HIDE will say the prompt before showing the text. What good reason, if any, is there to display future text in gray rather than not at all? |
HIDE was originally designed for a different purpose (see issue #262); that's why it has a prompt, although I'm not convinced the prompt is valid for the original purpose. If you're positive we don't need it anymore for the original purpose, I can remove the prompt, but then it might also be clearer to rename it to PROMPT, or just add a PROMPT mode. |
The way PARROT is currently implemented, it's supposed to play two prompts: one before it reads the sentence and another before it listens to the kid. If you think the first prompt is enough, I can remove the second one. Or maybe a different prompt is better at that place? You were suggesting something like "Now YOU say it". |
The problem is it's not clear either how it should work or how we want to implement it. We have two ways to play sentences with no visible text: either as prompts from the graph, or as sentences in HIDE mode.

If it's a sentence in HIDE mode, it works like this: you add the text as the first sentence of the story, either on a separate page or on the same page as the rest of the sentences, depending on what image you want kids to see while reading the prompt. Then the next sentence comes, with a different mode, so a different prompt. So the system will play that prompt, and you're getting two prompts in sequence. Besides, you also need to add the audio file to the proper folder; changing the audio source is actually difficult.

If I understand correctly, your comments seem to indicate that's not what you want, because you don't want to add the prompt sentences to each story, nor the audio to each story folder. So then it's a prompt. The main difference between prompts and story sentences is that prompts are specified in the code and their audios are in a different place. There's also a whole tracking mechanism that's used for sentences but absent for prompts. I cannot play a sentence in any mode when the needed parameters are not actually present in the story.

So if we're playing it as a prompt, it seems what we want to change is that it's played before the sentences are displayed rather than after, as it currently is. That can be done, probably relatively easily, but there are still a few issues. One is the image: should it be visible or not? The other issue is consistency of behavior. This request was made when we had only one prompt at the start of the story. But with the implementation of mode switching that's no longer true; we could potentially have a different prompt played before each sentence. So do we implement this new behavior only for the first prompt in the story? Why would the first prompt work differently from the others? If seeing the text while the prompt is played is an issue, it seems like it should be an issue for all of them, so we should rather implement it for all.

If we do that, it will work like this: the first sentence comes, and its prompt is played before the text is displayed (maybe while the image is visible). Then the narration does something with the sentence, depending on mode. Then the next sentence comes. If it's the same mode, no prompt is played and the narration goes on. If it's a different mode, then its prompt is played while all the text is hidden, then the text is shown again and the narration goes on. So each time the mode changes, the whole page text is hidden while the prompt is played. Is that what you guys want? I just wanted to make sure you're aware of it before I spend the time implementing it. I also feel I'm spending too much time on something with potentially little benefit; I should move on to other things, there's a long list. |
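A sketch of the flow described in the last paragraph above; every name here is invented for illustration and is not RoboTutor's actual API:

```java
// Hypothetical sketch of playing a prompt before revealing the page text.
public class PromptBeforeTextSketch {

    interface PageView {
        void hideAllText();
        void showAllText();
        void playAudio(String promptFile); // assume blocking playback
    }

    /** When the next sentence's mode differs from the previous one:
     *  hide the page text, play the new prompt, then restore the text. */
    static void playPromptOnModeChange(PageView page, String promptFile) {
        page.hideAllText();          // no visible text to jump the gun on
        page.playAudio(promptFile);  // e.g. "Sikiliza." before a HEAR sentence
        page.showAllText();          // narration then continues as usual
    }
}
```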
@judithodili and @amyogan -
|
@octavpo - I agree that this task is getting out of hand in an effort to anticipate all cases and handle them correctly. I'm not sure what to do about it.
Rather than anticipate all cases, implement whatever seems reasonable, and fix it later when and if needed. @amyogan and @judithodili - Do you concur? Any other advice on how to move on expeditiously? |
The PROMPT mode should be ready now, but it's hard to test because I have no idea where these prompt audio files need to go and how to deliver them to the tablet. I don't see the old ones anywhere in CodeDrop1_Assets. I see them somewhere in RTAsset_Publisher/RTAsset_Audio_SW_Set1/, but then I don't know how to deliver them to the tablet; my understanding now is that all deliveries happen from CodeDrop1_Assets. So if somebody has more information, that would be good. Also, since HIDE is not going to be used for prompts anymore, we need a prompt for it. I assume it should be the same as HEAR? Is REVEAL still the same as READ? And I don't see a revised prompt for PARROT; I see a proposal for "Repeat after me". Are we set on it? |
@kevindeland - Please explain where to put audio files for tutor prompts for them to get installed. Yes, I suppose the prompt for HIDE should be the same as for HEAR, namely "Listen". The prompt for REVEAL, if any, should be text-specific, not general. The revised prompt for PARROT is "Rudia baada ya RoboTutor" ("Repeat after RoboTutor") rather than "Repeat after me," to avoid anthropomorphizing RoboTutor by using the first person in prompts. |
The PROMPT mode should be done now. I've put all changes on branch reading_modes and sent a PR. I also put the new files on RTAsset_Publisher/octav and sent a PR. |
This issue succeeds "READ add PARROT, HIDE, REVEAL" (#262).
Allow story mode to change sentence by sentence, so that we can, e.g., insert spoken prompts.
Which is easier to implement and use?
a. Treat story mode as a variable that storydata.json can assign a new value.
+: simple
+: concise
-: context-dependent because its scope lasts until the next assignment
b. Treat story mode as a nestable modifier to wrap around one or more pages or sentences.
+: context-independent insofar as it's not affected by changing the mode of the preceding text
-: verbose
-: modifier could nest around multiple pages, which might be awkward
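To make the two options concrete, hypothetical storydata.json fragments (property names and shapes invented for illustration):

Option (a), mode as an assignable variable; the first assignment stays in effect until the third sentence reassigns it:

```json
{
  "sentences": [
    { "text": "...", "mode": "HEAR" },
    { "text": "..." },
    { "text": "...", "mode": "READ" }
  ]
}
```

Option (b), mode as a nestable wrapper around a run of sentences:

```json
{
  "blocks": [
    { "mode": "HEAR", "sentences": [ { "text": "..." }, { "text": "..." } ] },
    { "mode": "READ", "sentences": [ { "text": "..." } ] }
  ]
}
```

In (a) a sentence's mode depends on what precedes it; in (b) each block is self-describing, at the cost of extra nesting.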