Engine CI rework #1707
Conversation
```yaml
on:
  workflow_dispatch:
  schedule:
    - cron: "36 00 * * *"
```
Does it do an early exit (or at least skip most of the steps) if there are no changes compared to yesterday's build so as not to waste minutes?
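One way to implement the early exit asked about here would be a cheap guard job that compares HEAD against the commit of the last successful run and gates the expensive build on it. A minimal sketch (job and output names are illustrative, not from this PR):

```yaml
# Hypothetical guard: skip the build when HEAD matches the commit of the
# last successful run of this workflow, so scheduled runs with no changes
# don't burn minutes.
jobs:
  check-changes:
    runs-on: ubuntu-latest
    outputs:
      should-build: ${{ steps.check.outputs.should-build }}
    steps:
      - uses: actions/checkout@v4
      - id: check
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          last=$(gh run list --workflow "${{ github.workflow }}" \
                 --status success --limit 1 --json headSha -q '.[0].headSha')
          if [ "$last" = "$GITHUB_SHA" ]; then
            echo "should-build=false" >> "$GITHUB_OUTPUT"
          else
            echo "should-build=true" >> "$GITHUB_OUTPUT"
          fi
  build:
    needs: check-changes
    if: needs.check-changes.outputs.should-build == 'true'
```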
The triggering of the workflow is still not finalized. Right now it runs on cron just so it rebuilds periodically in my fork, giving me data on performance and letting me see how cache behavior looks over time (currently daily; I've also tested a 4h interval), etc.
For the final shape, I want it to be triggered on every push to master and release branches, and for PRs, with no cron trigger at all.
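The intended final trigger set described above could be sketched like this (the exact release branch naming is an assumption):

```yaml
# Sketch of the intended final triggers: pushes to master and release
# branches, plus pull requests; no cron schedule.
on:
  push:
    branches:
      - master
      - 'release*'
  pull_request:
```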
Will reopen a fresh one when I'm ready for review; this draft one triggers unnecessary runs.
Note: as an exception to my regular PRs, don't look at the individual commits in isolation; they are bad.
This is a draft proposal for reworking the engine CI build, shared for feedback before I polish it.
Build environment
There are two separate build images, one for Windows and one for Linux. The docker images are built as part of a separate workflow and stored in the GitHub package registry (will look like this), not in the cache like the previous version. The images are relatively small (~300-400 MiB) and building them takes 2-3 minutes (no mxe).
Each image contains the complete required build environment, with all dependencies installed (including mingwlibs etc.) and configured for proper resolution from the engine CMake configuration.
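Publishing such an image to the GitHub package registry typically looks like the sketch below; the image name, Dockerfile context, and tags are assumptions, not taken from this PR:

```yaml
# Illustrative image-publishing workflow using the standard Docker actions.
jobs:
  build-image:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: docker-build   # hypothetical Dockerfile location
          push: true
          tags: ghcr.io/${{ github.repository }}/build-linux:latest
```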
The build step is then a platform-agnostic invocation of CMake with a generic release build configuration.
To sum up, there is a clean separation between the build environment (the docker images) and the build itself (a generic CMake invocation).
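A platform-agnostic build step of this kind might look like the following sketch; the build type and ccache launcher flags are assumptions for illustration:

```yaml
# Hedged sketch of a generic CMake build step running inside the build image.
- name: Configure and build
  run: |
    cmake -S . -B build \
          -DCMAKE_BUILD_TYPE=Release \
          -DCMAKE_C_COMPILER_LAUNCHER=ccache \
          -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
    cmake --build build --parallel 8
```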
Optimization
The various scripts, based on the existing ones, were tuned to maximize parallelism and to use faster methods where available:
Workers
The GitHub Actions jobs that compile the engine are configured to use custom https://namespace.so/ runners (this is yet to be configured to work in the engine repo). The workers use new high-performance EPYC processors, so a full build using 8 cores takes only ~13 minutes. When there are plenty of ccache hits, the build, packaging, and upload take ~2 minutes.
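Selecting a Namespace runner is just a matter of the `runs-on` label; the label below follows Namespace's naming scheme but the concrete profile is an assumption (profiles are defined in the Namespace dashboard):

```yaml
# Hypothetical job targeting an 8-core Namespace runner.
jobs:
  build-engine:
    runs-on: nscloud-ubuntu-22.04-amd64-8x16  # assumed profile label
```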
namespace uses a cache architecture where cache volumes are simply mounted into the machine, which allows instant restore. However, it means a fresh cache is not available to subsequent runs scheduled on a different compute node, even though namespace tries to perform cache-aware scheduling. Given the high cost of a cache miss or stale cache (much more needs to be rebuilt), I'm not sure yet whether this will be the most efficient cache for engine builds, so I'm still evaluating it in my fork: https://github.com/p2004a/spring/actions
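Using such a mounted cache volume for ccache might be sketched as below; the action name and inputs are assumptions to be verified against the Namespace documentation, not something confirmed in this PR:

```yaml
# Sketch: mount a Namespace cache volume for the ccache directory
# (instant restore, but tied to the compute node it was written on).
- uses: namespacelabs/nscloud-cache-action@v1  # assumed action name
  with:
    path: ~/.cache/ccache
- run: ccache --max-size=5G --show-stats
```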