Rewrite score calculation in Rust #25
I agree, rewriting in Rust makes sense. Originally we were planning to have the dashboard show the scores for just 1 year, with a fresh graph for each year like the interop dashboard, but now that we have a single graph, there is a lot of data to process. Another option is to truncate and archive data beyond a cutoff date (which is what I do when testing changes locally).
Also, at some point we'll be dropping the data for Layout 2013.
We could also split the chart and only show 2024 (or only the last 12 months) in the main view at wpt.servo.org. We could even keep something like wpt.servo.org/2023 (or whatever) for future reference with the old data, if that helps to alleviate things and it's not too complex to implement.
I've looked into this a little more, and I don't think RIIR will be a huge performance win for computing historical data, although I think it could still be a nice improvement for code quality / maintainability. The performance bottleneck seems to be xz (de)compression, which uses a C library in Node and is much the same speed in Rust. I'm getting ~1.5s just to decompress and deserialize the xz-compressed "run score" JSON files (as stored in this repo), or ~400ms if operating on an already-decompressed JSON file, and 1.5s x 600+ files is going to be slow no matter what (a rough sketch of that decompress-and-deserialize step follows this comment). I think there are a couple of things which would make a big difference here:
I am tentatively planning to:
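For reference, here is a minimal Rust sketch of the decompress-and-deserialize step measured above, assuming the xz2 and serde/serde_json crates and a hypothetical RunScores shape; the actual schema of the compressed "run score" files in this repo may differ:

```rust
use std::collections::HashMap;
use std::fs::File;
use std::io::BufReader;

use serde::Deserialize;
use xz2::read::XzDecoder;

// Hypothetical shape of a "run score" file; the real schema may differ.
#[derive(Deserialize)]
struct RunScores {
    run_id: String,
    // area name -> score
    scores: HashMap<String, f64>,
}

fn load_run_scores(path: &str) -> Result<RunScores, Box<dyn std::error::Error>> {
    // Stream the xz-compressed file through the decoder straight into serde,
    // so the decompressed JSON is never materialized as one big String.
    let file = File::open(path)?;
    let decoder = XzDecoder::new(BufReader::new(file));
    Ok(serde_json::from_reader(BufReader::new(decoder))?)
}
```

This mirrors the observation above: the time is spent inside the xz library itself, so the host language of the caller makes little difference.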
The scores for past runs change because we score all past runs against the current day's run. This is similar to what the Interop dashboard does to account for the addition and deletion of tests and subtests. Is the proposal here to have pre-computed scores as an optimization for the case where the tests & subtests in an "area" have not changed between runs, or just to not update the scores of past runs?
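If the intent is the first of those two options, one hypothetical shape for it is a score cache keyed by a digest of the area's current test and subtest list, so a past run's score is only reused while that list is unchanged. The names and types below are illustrative, not taken from the existing scoring code:

```rust
use std::collections::HashMap;

use sha2::{Digest, Sha256};

// Hypothetical cache: (run id, area, digest of the area's current test list) -> score.
type ScoreCache = HashMap<(String, String, [u8; 32]), f64>;

/// Digest of the sorted test and subtest names for an area, so cached scores
/// are invalidated whenever tests or subtests are added or deleted.
fn test_set_digest(mut test_names: Vec<String>) -> [u8; 32] {
    test_names.sort();
    let mut hasher = Sha256::new();
    for name in &test_names {
        hasher.update(name.as_bytes());
        hasher.update([0u8]); // separator so ["a", "bc"] != ["ab", "c"]
    }
    hasher.finalize().into()
}

/// Return the cached score if this (run, area, test set) was already scored,
/// otherwise compute it once and remember it.
fn score_run(
    cache: &mut ScoreCache,
    run_id: &str,
    area: &str,
    digest: [u8; 32],
    compute: impl FnOnce() -> f64,
) -> f64 {
    *cache
        .entry((run_id.to_string(), area.to_string(), digest))
        .or_insert_with(compute)
}
```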
The MANIFEST.json in the
One thing that I like about the current setup is that it is dead simple to deploy and has no dependencies like a database or server. I'm not against moving to a more complicated setup for Servo if we can gain a lot of new features. But at that point, I'm not sure if the whole thing should be a wpt.fyi 2.0 kind of project.
Score recalc is pretty slow currently. Slow enough that I left it and came back to it several times and it still wasn't done. We should consider rewriting it in Rust for better performance. Such an implementation could:
Deserialize … implementation, avoiding allocating huge arrays of test results. This could be done both for top-level tests and sub-tests.
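For what it's worth, here is one way that streaming deserialization could look with serde, assuming the results are stored as one large JSON array of per-test entries; the real report format is more nested, and the helper names here are made up for illustration:

```rust
use std::fmt;
use std::marker::PhantomData;

use serde::de::{DeserializeOwned, Deserializer, SeqAccess, Visitor};

/// Visits a JSON array element by element, handing each item to a callback
/// instead of collecting everything into a Vec of test results.
struct ForEach<T, F> {
    callback: F,
    marker: PhantomData<T>,
}

impl<'de, T, F> Visitor<'de> for ForEach<T, F>
where
    T: DeserializeOwned,
    F: FnMut(T),
{
    type Value = ();

    fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str("an array of test results")
    }

    fn visit_seq<A>(mut self, mut seq: A) -> Result<(), A::Error>
    where
        A: SeqAccess<'de>,
    {
        // Each element is deserialized, passed to the callback, and dropped,
        // so memory use stays flat no matter how many results there are.
        while let Some(item) = seq.next_element::<T>()? {
            (self.callback)(item);
        }
        Ok(())
    }
}

/// Drive the visitor over a serde Deserializer (e.g. one built with
/// serde_json::Deserializer::from_reader on the decompressed stream).
fn for_each_in_array<'de, D, T, F>(deserializer: D, callback: F) -> Result<(), D::Error>
where
    D: Deserializer<'de>,
    T: DeserializeOwned,
    F: FnMut(T),
{
    deserializer.deserialize_seq(ForEach {
        callback,
        marker: PhantomData,
    })
}
```

A caller would fold each item into the running score inside the callback rather than building the full array of results first; the same pattern could be applied to sub-test lists nested inside each test entry.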