
Move workspace loading to workflows #233

Merged
merged 8 commits into main
Aug 5, 2024

Conversation

luca-della-vedova
Member

New feature implementation

Implemented feature

Refactor workspace loading to use bevy_impulse workflows.

Implementation description

As noted in the title, this is my first try at using the library, so I'm happy to hear ways I can do it better or more efficiently.
Previously we created channels and used one system that spawned a detached async task sending messages to a channel (for file dialogs), plus another system that continuously polled that channel to see whether any event had been received.
By contrast, this is now done in a workflow where the output of the file selection is piped into the function that will use it.

I went for an API that removes events altogether and uses workflows instead.
This has, to my understanding, a few benefits:

  • We can avoid the overhead of running systems that do nothing 99.99% of the time; note how this PR removes all workspace loading systems.
  • We can use a Res<> instead of an EventWriter (which is a ResMut behind the scenes), potentially increasing parallelism.
  • The workflows can be used in a more flexible way. For example, we can trigger a workflow from any system and it will be run in the next command flush, rather than confining the work to one point in the schedule.
  • More of a future point, but this architecture should be more flexible: we can reuse workflows, or parts of them, in different apps, and we don't have to commit to a single event-reading/processing style of architecture, which is a bit harder to extend.
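To make the contrast concrete, here is a plain-Rust sketch of the two patterns (illustrative only; the names are made up and the mpsc channel and thread stand in for the real dialog task, this is not the code from this PR):

```rust
use std::sync::mpsc;
use std::thread;

// Old pattern: a detached task sends the dialog result into a channel,
// and a separate system polls that channel every frame.
fn old_pattern() -> Option<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // stand-in for the async file dialog task
        let _ = tx.send("path/to/site.json".to_string());
    });
    // stand-in for the polling system: on most real frames this would
    // find nothing and do no useful work
    rx.recv().ok()
}

// Workflow-style pattern: the dialog result is piped straight into the
// consumer, so nothing polls while there is no request in flight.
fn workflow_pattern() -> Option<String> {
    let pick_file = || Some("path/to/site.json".to_string());
    pick_file().map(|path| format!("loaded {path}"))
}
```

In the real PR the second shape is expressed as a bevy_impulse workflow rather than a closure chain, but the data-flow idea is the same.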

I was on the fence about how to store the workflows in the app. I went for a single resource holding "all workspace loading workflows", but we can play with modularity: for example, one resource per workflow, marker components, splitting the services, etc. That is a whole different discussion, though.

If this looks good I'll tackle workspace saving next, then go for the really complex one: model handling.

@luca-della-vedova
Member Author

The web CI failure is real: it seems bevy_impulse cannot compile on web because of the different types used to represent tasks in wasm (where multithreading is not implemented).

@mxgrey
Collaborator

mxgrey commented Aug 2, 2024

I was hoping the wasm situation would have had a fix upstream by now, but I'm not surprised that it isn't working.

One solution that comes to mind is to implement a simple single-threaded async executor that polls all the ready tasks during flush_impulses. We would use feature flags to choose between the async task pool vs single threaded flusher.

I think the main issue is this would make async providers unsuitable for long-running tasks that don't await often when they're run in web assembly, even though they'd work fine in a native build. That kind of inconsistency may cause confusion and frustration for users.

In any case I'll take a stab at this and see how it goes.

@mxgrey
Collaborator

mxgrey commented Aug 2, 2024

I've added a single_threaded_async feature for bevy_impulse which seems to help the wasm issue.

However there are some new compilation errors that show up in site editor when compiling for web assembly, all related to this map_async call:

error: future cannot be sent between threads safely
   --> rmf_site_editor/src/workspace.rs:386:18
    |
386 |                 .map_async(|_| async {
    |                  ^^^^^^^^^ future created by async block is not `Send`

I'm pretty baffled about why the future would be Send when not targeting wasm but not Send when targeting wasm. Maybe AsyncFileDialog doesn't support Send when it targets wasm..?

Theoretically when single_threaded_async is active we might not need the Send bound on tasks since they'll always be run on one thread..? I don't know if it's realistic to relax that bound, but it might be worth trying.
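One hypothetical way to express such a relaxed bound (a sketch only; the trait name is made up and bevy_impulse may do this differently) is a marker trait that aliases Send on multi-threaded builds and is empty when the single_threaded_async feature is on:

```rust
// Hypothetical marker trait: equal to `Send` on multi-threaded builds,
// a no-op when the `single_threaded_async` feature is enabled.
#[cfg(not(feature = "single_threaded_async"))]
pub trait MaybeSend: Send {}
#[cfg(not(feature = "single_threaded_async"))]
impl<T: Send> MaybeSend for T {}

#[cfg(feature = "single_threaded_async")]
pub trait MaybeSend {}
#[cfg(feature = "single_threaded_async")]
impl<T> MaybeSend for T {}

// A spawn function can then require `MaybeSend` instead of `Send`, so a
// non-Send future (e.g. one holding an AsyncFileDialog on wasm) would be
// accepted when everything runs on one thread.
fn assert_spawnable<T: MaybeSend>(value: T) -> T {
    value
}
```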

@mxgrey
Collaborator

mxgrey commented Aug 3, 2024

I've figured out a way to relax the Send bound when the single_threaded_async feature is on. Testing the web build of rmf_site_editor locally, it seems to be working very well. The dialog selection workflow is working exactly as expected.

The only issue I noticed is that zooming in/out seems to have some probability of making the whole screen go blank for no apparent reason. I doubt that has anything to do with this PR; in fact it could easily be a weird issue with the version of Firefox on my laptop rather than any problem in the site editor itself, but it may be something to keep an eye on.

Signed-off-by: Michael X. Grey <[email protected]>
/// Services that deal with workspace loading
pub struct WorkspaceLoadingServices {
/// Service that spawns an open file dialog and loads a site accordingly.
pub load_workspace_from_dialog: Service<(), (), ()>,
Collaborator

Very minor remark: the third generic argument is for streams, which are optional (the absence of streams is represented by an empty tuple). If you just leave it out, it will fill in the empty tuple automatically.

So for example this could be Service<(), ()>. It's a very minor thing, but I think it's nicer to just see the input and output in the Service signature when there are no streams.
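The mechanism behind this is Rust's default type parameters. A toy stand-in (this Service is not bevy_impulse's actual type) shows why the two spellings name the same type:

```rust
use std::marker::PhantomData;

// Toy stand-in for bevy_impulse's Service: the streams parameter
// defaults to the empty tuple.
struct Service<Request, Response, Streams = ()> {
    _marker: PhantomData<(Request, Response, Streams)>,
}

// Because of the default, these two spellings are interchangeable:
fn takes_two(_: Service<(), ()>) {}
fn takes_three(s: Service<(), (), ()>) {
    takes_two(s); // compiles: Service<(), ()> == Service<(), (), ()>
}
```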

Member Author

Didn't know that! Done 659da70


impl WorkspaceLoadingServices {
/// Request to spawn a dialog and load a workspace
pub fn load_from_dialog(&self, commands: &mut Commands) {
Collaborator

I don't have any objection to doing it this way, but I can suggest two alternatives that might be more ergonomic or idiomatic:

1. SystemParam

You could make a custom SystemParam struct that queries for both this resource and Commands, then offers these load_from_dialog etc. methods and takes care of using Commands and the resource.

2. Extension trait for Commands

You could define an extension trait for Commands that implements these methods directly on Commands. However, you'd also have to define one or more custom structs that implement the Command trait to access the resource from the World during the command flush, so this would be a bit more troublesome.
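For option 1, a rough sketch of what that SystemParam could look like, assuming bevy_impulse's request(...).detach() command extension as used elsewhere in this PR (untested here; field names are hypothetical, and WorkspaceLoadingParams is the name the author later used):

```rust
use bevy::ecs::system::SystemParam;
use bevy::prelude::{Commands, Res};

// Hypothetical sketch: bundle the services resource and Commands into one
// SystemParam so call sites don't have to juggle both.
#[derive(SystemParam)]
pub struct WorkspaceLoadingParams<'w, 's> {
    services: Res<'w, WorkspaceLoadingServices>,
    commands: Commands<'w, 's>,
}

impl<'w, 's> WorkspaceLoadingParams<'w, 's> {
    /// Request to spawn a dialog and load a workspace.
    pub fn load_from_dialog(&mut self) {
        self.commands
            .request((), self.services.load_workspace_from_dialog)
            .detach();
    }
}
```

A system can then take `mut loader: WorkspaceLoadingParams` and call `loader.load_from_dialog()` with no explicit Commands argument.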

Member Author

+10 for the SystemParam approach; it's definitely a net gain since we can avoid this awkward &mut commands API. I forgot that it existed.
Added in 5d08fbf

I do agree that the extension trait approach tends to be a bit boilerplate-y and harder to trace, but in any case we can mix and match and/or revisit when it's time to split the workcell editor mode.

Signed-off-by: Luca Della Vedova <[email protected]>
Signed-off-by: Luca Della Vedova <[email protected]>

/// `SystemParam` used to request workspace loading operations
#[derive(SystemParam)]
pub struct WorkspaceLoadingParams<'w, 's> {
Collaborator

Super super minor nitpick, but what if we call this WorkspaceLoader instead of WorkspaceLoadingParams? I think WorkspaceLoader would make its purpose clearer. Users shouldn't really have to care that it's a struct of system params.

Member Author

error!("Failed converting to site {:?}", err);
}
let Some(LoadWorkspaceFile(default_file, data)) = request else {
return;
Collaborator

@mxgrey mxgrey Aug 5, 2024

It looks like this function accepts an Option<LoadWorkspaceFile> purely so that the services before it in the chain can return a None if they want to cancel the workflow of loading a file.

This does work, but it causes implementation details of the workflow to leak into the individual services. Now every service that wants to feed a LoadWorkspaceFile into process_load_workspace_files needs to wrap its output in Some, even if the service producing the LoadWorkspaceFile would never fail or want to cancel. Also, process_load_workspace_files becomes responsible for filtering out the None cases when that shouldn't really be its job. If a service says its input is Option<T>, that should be an indication that None values will still be used somehow to perform the service (e.g. maybe a default value is filled in) rather than being immediately discarded.

Chain has two APIs that would help here: dispose_on_none and cancel_on_none. In this case they would do the same thing, since disposing an output in a simple linear chain will immediately cancel the workflow. I would recommend we tweak the services and workflows like this:

  1. process_load_workspace_files takes in a simple LoadWorkspaceFile instead of Option<LoadWorkspaceFile>
  2. The load_workspace_from_data workflow doesn't need to wrap its mapped value in Some
  3. The other workflows add .cancel_on_none() to their chains right before .then(process_load_files)

This way the big-picture details of how the workflows behave remain in the workflow definitions, while process_load_workspace_files doesn't need to worry about whether or how upstream services might fail/cancel/etc.
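The principle can be shown with plain-Rust stand-ins (an analogue, not the bevy_impulse API; the type and function names mirror the PR but are simplified): the final step takes the bare value, and the chain owns the None handling, analogous to cancel_on_none() running right before then(...):

```rust
// Simplified stand-in for the type in this PR.
#[derive(Debug, PartialEq)]
struct LoadWorkspaceFile(String);

// The final step takes the bare value; it no longer filters out None.
fn process_load_workspace_file(file: LoadWorkspaceFile) -> String {
    format!("loading {}", file.0)
}

// The chain handles the None case: a None request short-circuits the
// rest of the pipeline, analogous to `.cancel_on_none()` cancelling
// the workflow before the processing step runs.
fn run_chain(request: Option<LoadWorkspaceFile>) -> Option<String> {
    request.map(process_load_workspace_file)
}
```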

Member Author

Signed-off-by: Luca Della Vedova <[email protected]>
Collaborator

@mxgrey mxgrey left a comment

This is an awesome introduction of bevy_impulse into actual use 👍

@mxgrey mxgrey merged commit 47552ea into main Aug 5, 2024
5 checks passed
@mxgrey mxgrey deleted the luca/impulse_workspaces branch August 5, 2024 06:39