draft of single component rebuilds for spin watch #2478
Conversation
Thanks for looking at this @garikAsplund - I've bounced off it a couple of times myself but had forgotten to remove the "good first issue" tag... Unfortunately, I am sceptical about this being the way to go. There are some easy ways to improve it (e.g. send a list of affected files rather than the CSV debug string), but I'm not sure this tackles the broad problem of "what happens if two changes happen close together": you'll start a build for one change, and another change can land before it has been picked up.

I'm also a bit confused about how the buildifier is inferring which components to rebuild. It seems to be receiving a CSV string of changed files and scanning that to see if it contains any component IDs? But that can't work: component IDs are not file names. Perhaps I'm misunderstanding? It wouldn't be the first time... *grin* But generally I'm wary of having to infer the changed components from the list of files: this seems like it involves a lot of re-reading and glob matching.

The strategy I've taken with this - and, to be clear, which has failed twice, so take this with a grain of salt - was to create independent watchers per component, each knowing the associated component ID. Each watcher would then be able to say "a file for component X has changed." But then something needed to accumulate those into the watch channel, so that when the receiver detected a change, it atomically grabbed the accumulated list and could process the full set. (Plus if more came in before it had processed the list, they would either be ignored or continue to accumulate.)

As you can see, the problem ends up being quite a subtle synchronisation issue, and my suspicion is that we're going to need a new synchronisation structure rather than relying solely on the watch channel.

I apologise that the issue labelling implies that this is more approachable than (I think) it really is. Please do feel free to keep at it if you relish the challenge, or if you have an insight that bypasses my worries! And please do correct me if I've misunderstood your approach here! And thanks again for taking a crack at this!
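For illustration, here is a minimal sketch of the per-component-watcher shape described above, assuming tokio; `report_change` and `drain_changed_components` are invented names, and the real watcher wiring (watchexec handlers, debouncing, paths) is omitted.

```rust
use std::collections::HashSet;
use tokio::sync::mpsc;

/// One sender handle per component: the watcher for component `id` calls this
/// whenever any of that component's source files changes.
fn report_change(id: &str, tx: &mpsc::UnboundedSender<String>) {
    // Ignore the error: if the receiver is gone, the watch loop has ended.
    let _ = tx.send(id.to_string());
}

/// The build side drains everything that has accumulated since it last looked,
/// so a burst of changes becomes one set of components rather than "the latest".
fn drain_changed_components(rx: &mut mpsc::UnboundedReceiver<String>) -> HashSet<String> {
    let mut changed = HashSet::new();
    while let Ok(id) = rx.try_recv() {
        changed.insert(id);
    }
    changed
}
```

This gives the "full set of changed components" semantics on the consumer side, but it does not by itself answer when the build loop should wake up and drain, which is the synchronisation question the rest of this thread circles around.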
@itowlson Thanks for the feedback. That seems about right! My first inclination was that every component needed its own watcher, but when I realized there was other info available I ran with that since it seemed simpler.

Do you have shared code available for attempts at componentized watchers? I may take a look at that and see about adding a collection of component IDs to build values on refresh.

Also, do you have any input about desired/expected behavior of manifest file changes? Alternatively I could open an issue there and investigate that.
I don't have anything even close to working, I'm afraid. Manifest file changes must trigger a full rebuild and restart, and must reload the watch configuration (because we may now be watching different paths). This is currently managed by the ManifestFilterFactory, Reconfiguriser and ReconfigurableWatcher.
Is that working? Whenever I make changes to the manifest, nothing seems to rebuild.
It should work... if it's not then that's a bug.
I think you're right. It's not firing for me either. That's a bug: I'll take a look. Thanks!
@itowlson I'm not saying this is by any means complete, but I think this updated version takes into account some of the points you brought up yesterday. I kept at the indirect approach of using the changed file paths rather than per-component watchers. It seems like the watcher joins the paths of changed files with commas.

There's some hacky stuff with selecting when to do a full rebuild at startup and when the manifest changes. It's doubtful I've taken all considerations into account since I'm not as familiar with the whole process there.

Please take a look and see if this is way off base, somewhat fruitful, or in between.
Thanks for this! I do think this could be a fruitful avenue and I appreciate you coming up with it and exploring it. However, I have a couple of comments: one minor and tactical, the other fundamental.
Hope this is helpful! It's a knotty problem...
Yes, multiple changes within the debounce period get accumulated within watchexec, so they trigger a single update. The race condition happens when you have changes that happen outside the debounce period - so that it sends multiple notifications - but within the check-to-check interval (basically, within an iteration of the loop) - so that one "latest" value gets overwritten by another before ever being seen.

And for sure I'm not surprised it works for you. I'd expect it to work for me too! This sort of thing manifests as an unpredictable race condition: it will cut in only when the timing is just right. (The specific timing condition is during a major customer demo.)

Again, the fundamental problem is that the current synchronisation mechanism is not well suited to the goal. We now want a mechanism that tells us which components have changed since the last build. A mechanism that tells us the latest component that changed, with a hope that we pick up changes fast enough that we don't miss anything, is not that. Even if we get lucky 99% of the time! We shouldn't be reluctant to change the mechanism to meet the new need.

Hope that all makes sense - I've already rewritten this message a few times to try to focus it, but still not quite happy with how it's come out... happy to discuss further for sure!
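As a concrete illustration of the "latest value overwritten before ever being seen" behaviour, here is a small sketch using `tokio::sync::watch`. spin watch's own plumbing goes through watchexec, so treat this as an analogy for the keep-only-the-latest channel semantics rather than the actual wiring.

```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel(String::from("startup"));

    // Two changes arrive between checks of the loop: far enough apart to be
    // sent as separate notifications, but before the receiver looks at either.
    tx.send("component-a source changed".to_string()).unwrap();
    tx.send("component-b source changed".to_string()).unwrap();

    // A watch channel only retains the latest value; the first notification
    // has already been overwritten and can never be observed.
    rx.changed().await.unwrap();
    println!("saw: {}", *rx.borrow_and_update());
    // Prints: saw: component-b source changed
}
```

Both sends succeed, but only the second value is observable; that is exactly the window the debounce period does not cover.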
Ok, I think I more fully understand the issue and am able to reproduce it myself by staggering a bunch of file changes in and out of debounce periods--there is something lost in the mix. Appreciate the thoughtful response. I'm still curious and will poke around, but I'm not promising anything.

I'm not opposed to looking at the options you mentioned, but is a queue of use here? Or how/when/where exactly are the values within the check-to-check interval being replaced or lost? I guess, are they ever accessible, or is that what you're getting at by bringing up alternative methods?
Yes, a queue is definitely useful, but I wasn't able to find a queue that was awaitable. It could certainly be a building block though! The characteristics I was thinking of for the synchronisation structure were something like:

- it accumulates changes rather than keeping only the latest one;
- the consumer can await "something has changed"; and
- the consumer can atomically take everything that has accumulated so far.
We might be able to do this by combining a queue with something that can be awaited.

And I may be overthinking this. Polling a shared queue on an interval that's imperceptibly short to users but extremely long at the CPU scale (say every 10ms) might feel less beautiful than a simple channel, but it would get the job done.
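For what it's worth, here is one possible shape for such a structure - a sketch only, combining `tokio::sync::Notify` with a mutex-guarded set. The type and method names (`PendingChanges`, `record`, `wait_and_drain`) are made up for illustration and are not part of Spin.

```rust
use std::collections::HashSet;
use std::sync::Mutex;
use tokio::sync::Notify;

/// Accumulates changed component IDs and lets the build loop await
/// "something has changed", then atomically take the whole set.
#[derive(Default)]
struct PendingChanges {
    // A std Mutex is fine here because it is never held across an await point.
    changed: Mutex<HashSet<String>>,
    notify: Notify,
}

impl PendingChanges {
    /// Called by the watcher side for every detected change.
    fn record(&self, component_id: String) {
        self.changed.lock().unwrap().insert(component_id);
        self.notify.notify_one();
    }

    /// Called by the build loop: waits until at least one change is pending,
    /// then drains and returns the full accumulated set.
    async fn wait_and_drain(&self) -> HashSet<String> {
        loop {
            {
                let mut changed = self.changed.lock().unwrap();
                if !changed.is_empty() {
                    return std::mem::take(&mut *changed);
                }
            }
            self.notify.notified().await;
        }
    }
}
```

The loop around `notified()` tolerates spurious wakeups, and taking the whole set under the lock gives the "atomically grab the accumulated list" behaviour described earlier; `notify_one` stores a permit if no task is waiting yet, so a change recorded just before the build loop starts waiting is not lost.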
Good stuff. I definitely have more of an appreciation for the nuances now! Appears that at least one other project has a structure along these lines:

```rust
pub struct WatcherCommunicator {
  /// Send a list of paths that should be watched for changes.
  paths_to_watch_tx: tokio::sync::mpsc::UnboundedSender<Vec<PathBuf>>,
  /// Listen for a list of paths that were changed.
  changed_paths_rx: tokio::sync::broadcast::Receiver<Option<Vec<PathBuf>>>,
  /// Send a message to force a restart.
  restart_tx: tokio::sync::mpsc::UnboundedSender<()>,
  restart_mode: Mutex<WatcherRestartMode>,
  banner: String,
}
...
impl WatcherCommunicator {
  pub fn watch_paths(&self, paths: Vec<PathBuf>) -> Result<(), AnyError> {
    self.paths_to_watch_tx.send(paths).map_err(AnyError::from)
  }

  pub fn force_restart(&self) -> Result<(), AnyError> {
    // Change back to automatic mode, so that HMR can set up watching
    // from scratch.
    *self.restart_mode.lock() = WatcherRestartMode::Automatic;
    self.restart_tx.send(()).map_err(AnyError::from)
  }

  pub async fn watch_for_changed_paths(
    &self,
  ) -> Result<Option<Vec<PathBuf>>, AnyError> {
    let mut rx = self.changed_paths_rx.resubscribe();
    rx.recv().await.map_err(AnyError::from)
  }

  pub fn change_restart_mode(&self, restart_mode: WatcherRestartMode) {
    *self.restart_mode.lock() = restart_mode;
  }

  pub fn print(&self, msg: String) {
    log::info!("{} {}", self.banner, msg);
  }
}
```

They use notify instead of watchexec, but those projects look to be closely related.

I'm not sure exactly how it's all related, but then there's the whole separate issue of HMR or workarounds in Rust and web spaces like what Dioxus did last year with hot reloading. Not at all diving into that mess, just mentioning it to be thorough--also see here. It depends on what the full spec of spin watch ends up being.
I would definitely not worry about hot reloading for now...!
I'm still new to larger projects and Rust in general, so hopefully this isn't too janky.

This potentially closes #1417.

What this draft does is make the `watched_changes` receiver a tuple with a `Uuid` and a `String`. This allows the paths where the file change originated to be passed into the `build_components` function in the buildifier. The logic in `build_components` is effective for the projects I have up and running, but could use a test or inspection. Essentially, if there is an empty string in the receiver tuple, it does a full build since it's the first execution. Any changes after that are matched to component IDs and selectively built with `-c`.

Unfortunately, since `spawn_watchexec` contains the notifier, the tuples have to be added to the artifact and manifest senders and receivers as well. I tried to make the `String` an `Option<String>` but that seemed to overly complicate things. That could be because I'm not as familiar with idiomatic Rust.

Another small thing I noticed was that the debounce value is a `u64`. That could be set very high and never rebuild. I don't know if it's worth changing that to a `u16` and then passing it to `Duration::from_millis` as a `u64`? If the user enters too large a value, clap will emit an error.

Also, if the debounce value is high--anything over a few seconds--then there is potential for multiple components to be rebuilt. In that case a `Vec<String>` or some other similar structure is better suited and would need to be refactored.

One last thing--I tried making changes within the manifest but never got it to rebuild automatically in response. That doesn't appear to be the intent though. I'm guessing that stems from `ManifestFilterFactory`.
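To make the selection step concrete, here is a rough sketch of how changed paths could be mapped to component IDs before a selective build; the `ComponentWatch` struct and the prefix-matching rule are hypothetical stand-ins, not the actual types in the buildifier.

```rust
use std::path::PathBuf;

/// Hypothetical stand-in for the per-component information the buildifier
/// would need: the component ID plus the directories its build watches.
struct ComponentWatch {
    id: String,
    watched_dirs: Vec<PathBuf>,
}

/// Given the set of changed paths, return the IDs of components whose watched
/// directories contain at least one changed path. An empty `changed_paths`
/// (startup) or a manifest change would instead fall back to a full build.
fn components_to_rebuild<'a>(
    components: &'a [ComponentWatch],
    changed_paths: &[PathBuf],
) -> Vec<&'a str> {
    components
        .iter()
        .filter(|c| {
            changed_paths
                .iter()
                .any(|p| c.watched_dirs.iter().any(|dir| p.starts_with(dir)))
        })
        .map(|c| c.id.as_str())
        .collect()
}
```

Each returned ID could then be handed to the build step the way the draft's `-c` selective build does, with the full-build fallback covering startup and manifest changes as described above.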