
Revise READ UI/UX tracking #157

Closed
JackMostow opened this issue Oct 25, 2017 · 4 comments
@JackMostow (Contributor)
  1. (When and why) should we adjust narration speed in HEAR and ECHO?
    1.1. To [10%] faster than kid's recent reading pace after filtering out hesitations longer than [1 second]?
    1.2. To follow kid's finger through the text?
    +: intuitive
    +: trains kid to track position in text
    +: lets kids rehear
    -: lets kids skip ahead (unless RoboFinger refuses)
    +: gives kid agency
    -: line boundary awkward
    ?: how to adapt smoothly to kid's pace?
    ?: what if kid skips line?
    ?: should dragging work the same for kid's reading?
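One way to make 1.1 concrete: estimate the kid's recent pace from word-onset timestamps, drop inter-word gaps longer than the hesitation threshold, and target a rate a fixed fraction faster. This is only a sketch of the bracketed proposal above; the function name, the timestamp representation, and the defaults (10% speedup, 1-second hesitation cutoff) are assumptions, not RoboTutor code.

```python
def narration_rate(word_times, speedup=0.10, max_gap=1.0):
    """Estimate a target narration speed from the kid's recent word onsets.

    word_times: onset timestamps (seconds) of the kid's last few words.
    Gaps longer than max_gap are treated as hesitations and filtered out,
    so only fluent inter-word intervals drive the pace estimate.
    Returns target words per second, [speedup] faster than the kid's pace,
    or None if no fluent stretch has been observed yet.
    """
    gaps = [b - a for a, b in zip(word_times, word_times[1:])]
    fluent = [g for g in gaps if g <= max_gap]  # drop hesitations > [1 s]
    if not fluent:
        return None  # keep the current narration rate
    kid_wps = len(fluent) / sum(fluent)
    return kid_wps * (1 + speedup)  # [10%] faster than the kid
```

For example, onsets at 0, 0.5, and 1.0 s followed by a 2-second hesitation yield a fluent pace of 2 words/s and a target of 2.2 words/s.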

  2. Let READ vary mode by sentence (or utterance?) as in Project LISTEN's Reading Tutor:
    2.1. HEAR reads aloud to the kid.
    2.2. READ listens to the kid read aloud.
    2.3. ECHO listens to the kid read the sentence, then rereads it fluently.
    2.4. REVEAL listens to the kid read, revealing each word when heard.
    2.5. SAY speaks the text with VSS but without displaying it, like As_spoken_only in Project LISTEN's Reading Tutor.
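The five modes in item 2, varied per sentence, could be sketched as an enum plus a repeating per-sentence pattern. This is a hypothetical illustration, not RoboTutor's actual implementation; `mode_for_sentence` and the pattern idea are assumptions.

```python
from enum import Enum

class ReadMode(Enum):
    HEAR = "tutor reads the sentence aloud to the kid"
    READ = "tutor listens to the kid read aloud"
    ECHO = "kid reads the sentence; tutor rereads it fluently"
    REVEAL = "each word is revealed when the kid is heard reading it"
    SAY = "tutor speaks the text without displaying it"

def mode_for_sentence(index, pattern):
    """Pick the mode for sentence `index` by cycling through `pattern`,
    e.g. pattern=[ReadMode.ECHO, ReadMode.READ] alternates echo and read."""
    return pattern[index % len(pattern)]
```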

  3. Why did/does READ lag several words behind reader, at least at first?

  4. How can we get READ's "blame the kid" left-to-right policy not to encourage kids to, read, one, word, at, a, time, unlike Project LISTEN's Reading Tutor, which used a "chase the kid" policy to track the kid through the text?

4.1. Can we reduce false rejections by recognizing visual speech?

4.2. When should READ tolerate a skipped word?

4.3. How should we change READ's UI to encourage fluent reading?
4.3.1. Replace underline with RoboFinger.
4.3.2. When kid is speaking, move RoboFinger through text at kid's recent pace.
4.3.3. When kid hesitates, stop moving RoboFinger.
4.3.4. When kid skips >1 word, move RoboFinger back to first skipped word and turn it red.
4.3.5. When kid hesitates, start tapping RoboFinger on first skipped word instead of audio icon.
+: audio icon is unintuitive
+: tapping RoboFinger is intuitive cue to tap
4.3.6. After [3] seconds, say the word and advance RoboFinger to the next word.
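Rules 4.3.2 through 4.3.6 amount to a small decision policy. A minimal sketch of that policy follows; the function, its word-index representation, and the priority ordering of the rules are assumptions made for illustration (the [3]-second threshold is from 4.3.6).

```python
def robofinger_action(heard_index, finger_index, hesitation_s,
                      speak_after=3.0):
    """Decide RoboFinger's next move per rules 4.3.2-4.3.6 (sketch).

    heard_index: index of the last word credited to the kid.
    finger_index: index of the word RoboFinger currently points at.
    hesitation_s: seconds since the kid last spoke.
    Returns (action, word_index).
    """
    if heard_index - finger_index > 1:
        # 4.3.4: kid skipped >1 word; jump back to the first
        # skipped word and turn RoboFinger red
        return ("move_back_red", finger_index + 1)
    if hesitation_s >= speak_after:
        # 4.3.6: after [3] s, say the word and advance to the next
        return ("say_and_advance", finger_index + 1)
    if hesitation_s > 0:
        # 4.3.3 / 4.3.5: stop moving; tap RoboFinger on the pending word
        return ("tap", finger_index)
    # 4.3.2: kid is speaking; track through the text at the kid's pace
    return ("track", heard_index)
```

On each recognizer update the tutor would call this with the latest credited word and hesitation timer, then animate RoboFinger accordingly.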

@VishnuTejus commented Oct 26, 2017 via email

@JackMostow commented Oct 26, 2017 via email

@JackMostow
It's time to revisit this issue.

@JackMostow

#157 was superseded by #341 and #348.
