-
Hello all, I found the project and started writing code for it; then I found PR #120 and had to throw all my work away. Is there any documentation about the screen reader architecture itself? I mean:
I inferred answers to some of these questions myself, but I am not sure which direction you (the project authors) want to go, so I am posting this before any big code effort.

Another consideration is the speech-related behavior of the screen reader: how messages are structured, how braille output fits in, and so on. I thought about just adding more impls to the screen reader state struct, with commands for notifying about the focused object, caret/line changes, text changes, etc. From there, in the apply_all function, I would match on the screen reader event and call the corresponding function (see the sketch at the end of this post). An alternative, to group all object-tree-related state, is to add an additional struct to hold the cache, the event and focus history, and the active-application history, and implement the screen reader event processing there, passing in the required structs such as the speech backend. Another thing to consider is extracting speech synthesis into an additional struct, to be able to implement, for example, direct calls to espeak-ng later; the same goes for braille.

The last questions are about addons. What could they do? How is event processing related to them? Which programming languages do you plan to allow?

Thank you for your effort. I hope I can help you at least a little.
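For illustration, here is a minimal sketch of the apply_all dispatch I have in mind. Every name in it is hypothetical; nothing here is taken from the Odilia codebase:

```rust
// Hypothetical sketch only: none of these names come from Odilia.
enum ScreenReaderEvent {
    FocusChanged { name: String },
    CaretMoved { line: u32 },
    TextChanged { text: String },
}

struct ScreenReaderState;

impl ScreenReaderState {
    // Match on each event and dispatch to a dedicated handler method.
    fn apply_all(&mut self, events: Vec<ScreenReaderEvent>) {
        for event in events {
            match event {
                ScreenReaderEvent::FocusChanged { name } => self.on_focus_changed(&name),
                ScreenReaderEvent::CaretMoved { line } => self.on_caret_moved(line),
                ScreenReaderEvent::TextChanged { text } => self.on_text_changed(&text),
            }
        }
    }

    fn on_focus_changed(&mut self, _name: &str) { /* speak the newly focused object */ }
    fn on_caret_moved(&mut self, _line: u32) { /* speak the new caret line */ }
    fn on_text_changed(&mut self, _text: &str) { /* announce the text change */ }
}
```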
-
I browsed to the official home page and found answers to these questions.
-
Hi there, I'm just going to bed here in Canada. I promise I'll get to this in the morning.
-
Hi, @francipvb let me explain what I'm attempting to do right now with the mother-of-all-refactors branch.

We were running into problems testing the state and reactions of various components of the screen reader, since, just to test how a function works, we needed to pass a full ScreenReaderState structure into the function, then check which pieces of state were changed. In an attempt to make Odilia more testable and modular, I'm creating an architecture that takes in only the minimal state required for the event, and returns a list of operations that can be tested against. These operations could be "say X" or "focus the user on Y".

I'm very committed to making Odilia easy to test, so each function should take only the state absolutely necessary, and then we can test using that. I think I'm close; with just a bit more patience we'll be there. Odilia has never handled more than 10-ish AT-SPI events, so once the architectural decisions are figured out, it should be only 5-10 hours of work for me to actually implement it. If you have any opinions here about how to use the Rust type system and concurrency to our advantage, please let me know!
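To make that concrete, here is a minimal sketch of the "events in, operations out" shape. The names (Operation, handle_focus_event) are invented; this is not the actual Odilia API:

```rust
// Hypothetical sketch; Operation and handle_focus_event are invented names.
type ObjectId = u32;

#[derive(Debug, PartialEq)]
enum Operation {
    Say(String),
    Focus(ObjectId),
}

// The handler takes only the state it needs and returns operations instead
// of mutating a full ScreenReaderState, so it can be unit-tested directly.
fn handle_focus_event(focused_name: &str, focused_id: ObjectId) -> Vec<Operation> {
    vec![
        Operation::Focus(focused_id),
        Operation::Say(focused_name.to_string()),
    ]
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn focus_event_focuses_then_speaks() {
        let ops = handle_focus_event("OK button", 42);
        assert_eq!(
            ops,
            vec![Operation::Focus(42), Operation::Say("OK button".into())]
        );
    }
}
```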
They will be able to listen to, intercept, and modify various stages of the event-processing pipeline. So they can discard or modify events coming from AT-SPI, and they could discard or modify the resulting operations as well. Addons are not here yet; I just want to build a system which will have easy hook-in points for addons.
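As a rough illustration of what such a hook-in point could look like once addons exist (every name below is invented, since there is no addon API yet):

```rust
// Hypothetical sketch; no addon API exists in Odilia yet.
struct AtspiEvent; // stand-in for whatever event type the pipeline uses

enum HookResult<T> {
    // Pass the (possibly modified) value on to the next pipeline stage.
    Continue(T),
    // Drop the value so that later stages never see it.
    Discard,
}

trait EventHook {
    // Called for every incoming AT-SPI event before Odilia processes it;
    // an addon can modify the event, pass it through unchanged, or discard it.
    fn on_event(&mut self, event: AtspiEvent) -> HookResult<AtspiEvent>;
}
```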
Braille is not supported right now; there's no stable library for it yet.
Events are not cached; only accessible objects are, denoted by a dedicated cache struct.
Those are my attempt at creating a type-system-based guarantee of who is accessing what data at which point in the pipeline. It is not about anything external or touchscreen-support related.
There are three enums to pay attention to:
I think this answers most of your questions. Let me know if you need any clarifications. I'm always happy to help!
-
I like your 5 concepts. My only concern is
I'm basically trying to do this with the refactor branch. The good news is that outside of caching, it's pretty easy to keep state consistent. Caching is by far the hardest thing to get right.
I'm not sure if I understand what you're proposing. Could you rephrase it? The factory pattern is pretty rare in Rust because it requires explicit dynamic dispatch:
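To show what I mean, here is a minimal, invented example: because a factory cannot name a single concrete return type, it has to hand back a boxed trait object, and every later call goes through a vtable:

```rust
// Illustrative sketch; these backends are invented names, not Odilia code.
trait Speaker {
    fn say(&self, text: &str);
}

struct Espeak;
impl Speaker for Espeak {
    fn say(&self, text: &str) { println!("espeak: {text}"); }
}

struct SpeechDispatcher;
impl Speaker for SpeechDispatcher {
    fn say(&self, text: &str) { println!("speech-dispatcher: {text}"); }
}

// The factory must return Box<dyn Speaker>: the caller only knows the trait,
// so every call to say() is dynamically dispatched.
fn make_speaker(backend: &str) -> Box<dyn Speaker> {
    match backend {
        "espeak" => Box::new(Espeak),
        _ => Box::new(SpeechDispatcher),
    }
}
```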
I'm not sure what advantage this has. Traits cannot be accessed by third-party plugins anyway, whether via WASM or DBus. Even if they could be exported and implemented, what functionality would that add for us? Again, it is possible I'm just misunderstanding.
I'd be interested to see where you're going with this. And I could absolutely use some help getting the architecture nailed down. Whether you want to discuss it more or just go off and see what you can whip up as a proof of concept, either is totally good with me! Whatever you are able to contribute would be extremely valuable.
-
Hello,

Another idea is to use the actix framework to implement the various parts and avoid manually implementing the event listening and processing loop, also for other subsystems like braille and speech. Implementing a subsystem is very simple: just implement the Actor trait and add the logic for starting and stopping that particular subsystem. You also implement the Handler trait to listen for messages from other subsystems.
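For example, a minimal sketch using the actix crate might look like this; the Speech actor and Speak message are invented for illustration:

```rust
use actix::prelude::*;
use std::time::Duration;

// A hypothetical speech subsystem modeled as an actor.
struct Speech;

impl Actor for Speech {
    type Context = Context<Self>;

    // Start/stop logic for the subsystem lives in the actor lifecycle hooks.
    fn started(&mut self, _ctx: &mut Self::Context) {
        println!("speech subsystem up");
    }
    fn stopped(&mut self, _ctx: &mut Self::Context) {
        println!("speech subsystem down");
    }
}

// A message that other subsystems (e.g. event processing) can send.
#[derive(Message)]
#[rtype(result = "()")]
struct Speak(String);

impl Handler<Speak> for Speech {
    type Result = ();

    fn handle(&mut self, msg: Speak, _ctx: &mut Context<Self>) {
        println!("saying: {}", msg.0);
    }
}

#[actix::main]
async fn main() {
    let addr = Speech.start();
    addr.do_send(Speak("hello".into()));
    // Give the actor a moment to process before the system shuts down.
    actix::clock::sleep(Duration::from_millis(10)).await;
}
```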
-
Hello again,

What about your at-spi packages? Are the enumerations, interfaces and so on generated automatically? Another thing is about the examples: I just tried to run the focused-async-std and focused-tokio examples, but neither of them works.