Moving away from TaskCluster #3317
What do you think about GitLab's built-in CI features? I'm using it for my Jaco-Assistant project and I'm quite happy with it, because it currently supports almost all my requirements. The pipeline does linting checks and some code statistics calculation, and I'm using it to provide prebuilt container images (you could build and provide the training images from there, for example). See my CI setup file here. There is also an official tutorial for usage with GitHub: https://about.gitlab.com/solutions/github/
That would mean moving to GitLab, which raises other questions. I don't have experience with their CI, even though I use GitLab for some personal projects (from gitorious.org). Maybe I should post a detailed explanation of our usage of TaskCluster to help there?
No, you can use it with GitHub too. From: https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/
I think this is a good idea. But you should be able to do everything on GitLab CI as soon as you can run it in a Docker container without special flags.
We also need support for Windows, macOS and iOS, which cannot be covered by Docker.
Our current usage of TaskCluster: we leverage the following features:
Hardware:
I have been using GitLab CI (the on-prem community edition) for about three years at my workplace, and so far I have been very happy with it. @lissyx I believe GitLab CI supports all the requirements you listed above - I've personally used most of those features. The thing I really like about GitLab CI is that it seems to be a very important feature for the company - they release updates frequently.
Don't hesitate if you want to; I'd be happy to see how you can do macOS or Windows builds / tests.
Windows builds might be covered with some of their beta features: For iOS I think you would need to create your own runners on the MacBooks and link them to the CI. They made a blog post for this:
I have no time to take a look at that, sadly.
@DanBmh @opensorceror Let me be super clear: what you shared looks very interesting, but I have no time to dig into it myself. If you guys are willing, please go ahead. One thing I should add: for macOS, we would really need something hosted, since the biggest pain was maintaining those machines ourselves. If we move to GitLab CI but still need to babysit them, it's not really worth the effort.
Personally I'm a bit hesitant to work on this by myself, because the CI config of this repo seems too complex for a lone newcomer to tackle. FWIW, I did a test connecting a GitHub repo with GitLab CI... works pretty well. I'm not sure where we would find hosted macOS options though.
Of course
That's nice, I will have a look.
That might be the biggest pain point.
Looks like Travis supports macOS builds. Never used it though, not aware of the limitations if any.
Can it do something like we do with TC, i.e., precompile bits and fetch them as needed? To overcome this, we have https://github.com/mozilla/DeepSpeech/blob/master/taskcluster/generic_tc_caching-linux-opt-base.tyml + e.g., https://github.com/mozilla/DeepSpeech/blob/master/taskcluster/tf_linux-amd64-cpu-opt.yml It basically:
This allows us to have caching we can periodically update, as you can see here: https://github.com/mozilla/DeepSpeech/blob/master/taskcluster/.shared.yml#L186-L260 We use the same mechanism for many components (SWIG, pyenv, Homebrew, etc.) to make sure we can keep build times decent on PRs (~10-20 min of build more or less, ~2 min for tests), so that a PR can complete in under 30-60 minutes.
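To make the mechanism described above concrete, here is a rough shell sketch of the pattern those task definitions implement; the index URL, artifact name and helper scripts below are placeholders for illustration, not the real ones from the .tyml files:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder cache index URL and artifact name, for illustration only.
INDEX_URL="https://ci.example.invalid/index/deepspeech.tensorflow.linux-amd64-cpu/artifacts/home.tar.xz"
ARTIFACT="home.tar.xz"

# 1. Try to fetch a previously built artifact from the cache index.
if curl -fsSL "$INDEX_URL" -o "$ARTIFACT"; then
    echo "Cache hit: reusing the prebuilt TensorFlow artifact"
    tar -xf "$ARTIFACT"
else
    # 2. Cache miss: run the expensive build and publish the result
    #    so that later tasks and PRs can reuse it.
    echo "Cache miss: building TensorFlow from source"
    ./build-tensorflow.sh                           # placeholder for the real build step
    tar -cJf "$ARTIFACT" bazel-bin/
    ./upload-to-index.sh "$ARTIFACT" "$INDEX_URL"   # placeholder upload/index step
fi
```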
Nice, and can those be indexed like what TaskCluster has? |
Not sure what you mean by this. You can give them custom names or save folders depending on your branch names for example, if this is what you mean.
OK, I think I will try to use GitLab CI for a pet project of mine that lacks CI :) - that will help me get a grasp of the landscape.
@DanBmh @opensorceror I have been able to play with a small project of mine on GitLab CI, and I have to admit that, after scratching the surface, it seems to be nice. I'm pretty sure we can replicate the same things, but obviously it requires a rework of the CI handling. However, I doubt this can work well on a free tier plan, so I think if there is a move in that direction it will require some investment, including to have support for Windows and macOS. We have been able to get access to our current TaskCluster cost usage, and thanks to the latest optimizations we landed back in August, we can run the same workload as previously for a fairly small amount of money. I guess it's mostly a question of people stepping up and doing it, at some point :)
@lissyx you can also look into Azure Pipelines; it has a free tier and self-hosted agents that can be run locally.
Thanks, but I'm sorry, I can't spend more time than I already have; I'm not 100% on DeepSpeech anymore, and I have been spending too much time on it in the past weeks.
@DanBmh @opensorceror @stepkillah Do you know of something that would allow us to have beefy managed macOS (and Windows) instances on GitLab CI? After a few weeks of hacking over there, I'm afraid we'd be in exactly the same position as we are today with TaskCluster, with the big difference that we know TaskCluster, and we are still in direct contact with the people managing it, so fixing issues is quite simple for us. I insist on beefy, because building TensorFlow on the machines we have (MacBook Pro circa 2017, running several VMs) already takes hours even on bare metal. We now have some caching in place everywhere to limit the impact, but even
Define "beefy". |
Could you please remind me, what was the reason for building TensorFlow ourselves? If building TensorFlow is really that complicated and time-consuming, wouldn't using a prebuilt version for all GPU devices, and the TFLite runtime (optionally with a non-quantized model) for all other devices, be an easier option?
At least 8GB of RAM, preferably 16GB, and at least 8 CPUs.
We already have some prebuilding in place on TaskCluster, but producing this artifact takes varying amounts of time
So each time we work on TensorFlow (upgrading to newer releases, etc.), it's "complicated". Currently, what we achieve is "sustainable", although painful. However, given the performance of what I could test on GitLab CI / AppVeyor, it's not impossible that this would make our build times skyrocket, and thus it would significantly slow things down.
Yes. But Windows servers, despite being rare, are at least a thing, and the platform is not as hard to support as macOS. Official TensorFlow builds on macOS don't have GPU support anymore, so I don't see how doing the work to move to them would be beneficial.
I've been taking a look at GitHub Actions lately and it seems like a good fit:
The biggest caveat seems to be that the self-managed workers don't have a really good security story for public repositories, to avoid random PRs with new CI code exploiting your infra. The best solution to that problem seems to be an approach based on the idea detailed here:
From my reading so far, the main questions about porting from TaskCluster to GitHub Actions seem to be:
Some useful things I found in the process:
On a specific hardware requirement we have: KVM-enabled VMs. GitHub-hosted workers don't have it, but Cirrus CI provides some level of free support for OSS and has KVM-enabled workers: https://cirrus-ci.org/guide/linux/#kvm-enabled-privileged-containers Could be something to look into. Another, more exotic, possibility I read somewhere is running the Android emulator tasks on macOS hosts. I don't know if that would work on GitHub workers though, and it could also be a net negative, since macOS tasks are harder to maintain than Linux ones.
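As a side note, a trivial probe like the following (purely illustrative, not taken from any existing CI config) is enough to check whether a given worker actually exposes a usable KVM device:

```bash
#!/usr/bin/env bash
# Check whether this worker exposes a usable KVM device,
# which hardware-accelerated VMs / the Android emulator rely on.
if [ -e /dev/kvm ] && [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
    echo "KVM is available: hardware-accelerated virtualization should work"
else
    echo "No usable /dev/kvm: VM tasks would fall back to slow software emulation" >&2
    exit 1
fi
```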
Posting for posterity here a multi-stage Dockerfile I've been playing with to build Python wheels and also start exploring the caching situation. It's written for our coqui-ai/STT fork, but the only difference is the repo and artifact names.

# Base stage: clone STT and initialize the tensorflow submodule
FROM quay.io/pypa/manylinux_2_24_x86_64 as base
RUN git clone https://github.com/coqui-ai/STT.git STT
WORKDIR /STT
RUN git submodule sync tensorflow/
RUN git submodule update --init tensorflow/
# Build stage: compile libstt.so with Bazel
FROM base as tfbuild
RUN curl -L https://github.com/bazelbuild/bazelisk/releases/download/v1.7.5/bazelisk-linux-amd64 > /usr/local/bin/bazel && chmod +x /usr/local/bin/bazel
WORKDIR /STT/tensorflow/
ENV VIRTUAL_ENV=/tmp/cp36-cp36m-venv
RUN /opt/python/cp36-cp36m/bin/python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ENV TF_ENABLE_XLA=0
ENV TF_NEED_JEMALLOC=1
ENV TF_NEED_OPENCL_SYCL=0
ENV TF_NEED_MKL=0
ENV TF_NEED_VERBS=0
ENV TF_NEED_MPI=0
ENV TF_NEED_IGNITE=0
ENV TF_NEED_GDR=0
ENV TF_NEED_NGRAPH=0
ENV TF_DOWNLOAD_CLANG=0
ENV TF_SET_ANDROID_WORKSPACE=0
ENV TF_NEED_TENSORRT=0
ENV TF_NEED_ROCM=0
RUN echo "" | TF_NEED_CUDA=0 ./configure
RUN bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=noaws --config=nogcp --config=nohdfs --config=nonccl --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libstt.so
# Python bindings base: copy only the built libstt.so out of the tfbuild stage
FROM base as pybase
RUN mkdir -p /STT/tensorflow/bazel-bin/native_client
COPY --from=tfbuild /STT/tensorflow/bazel-bin/native_client/libstt.so /STT/tensorflow/bazel-bin/native_client/libstt.so
WORKDIR /STT/native_client/python
RUN apt-get update && apt-get install -y --no-install-recommends wget && rm -rf /var/lib/apt/lists/*
# CPython 3.6 stage: build the Python wheel against the prebuilt libstt.so
FROM pybase as py36
ENV VIRTUAL_ENV=/tmp/cp36-cp36m-venv
RUN /opt/python/cp36-cp36m/bin/python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip install -U pip
RUN pip install numpy==1.7.0
ENV NUMPY_DEP_VERSION=">=1.7.0"
RUN make bindings TFDIR=/STT/tensorflow
# Artifact-only stage: a scratch image containing just the built wheel
FROM scratch as py36-artifact
COPY --from=py36 /STT/native_client/python/dist/STT-*-cp36* /

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
linux-py-wheels latest 258d85e4a936 5 minutes ago 10.3MB
linux-py-wheels py36-artifact 258d85e4a936 5 minutes ago 10.3MB
linux-py-wheels py36 d2413e500df9 5 minutes ago 1.76GB
linux-py-wheels pybase b31d4f23682f 8 minutes ago 1.68GB
linux-py-wheels tfbuild 16120c3975b7 31 minutes ago 2.42GB
linux-py-wheels base dba6ce2faceb 54 minutes ago 1.64GB
Looks like generically we can use a sequence somewhat like this:

docker login
docker pull docker.pkg.github.com/.../image:stage1 || true
docker pull docker.pkg.github.com/.../image:stage2 || true
...
docker pull docker.pkg.github.com/.../image:stageN || true
docker build -t image:stage1 --target stage1 --cache-from=docker.pkg.github.com/.../image:stage1 .
docker build -t image:stage2 --target stage2 --cache-from=docker.pkg.github.com/.../image:stage1 --cache-from=docker.pkg.github.com/.../image:stage2 .
...
docker build -t image:stageN --target stageN --cache-from=docker.pkg.github.com/.../image:stage1 --cache-from=docker.pkg.github.com/.../image:stage2 ... --cache-from=docker.pkg.github.com/.../image:stageN --output type=local,dest=artifacts .
docker tag ... && docker push ...
# upload artifacts from the artifacts/ folder

This should cache all the intermediate stages and allow for easy sharing between workflows/jobs as well.
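One caveat worth noting (my addition, not part of the sequence above): the --output flag of docker build is only available when BuildKit is enabled, so the job would need something along these lines first:

```bash
# Enable BuildKit so that `docker build --output ...` is available
export DOCKER_BUILDKIT=1
```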
I'm having a hard time getting both the remote and the local cache to work when building multiple targets of the same Dockerfile in a row, in order to tag all the intermediate images. But I found this approach, which should let me avoid building more than once: https://forums.docker.com/t/tag-intermediate-build-stages-multi-stage-build/34795
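For the record, applied to the stages of the Dockerfile above, the gist of that forum approach is simply building each target in sequence so every intermediate stage gets its own tag (a sketch, assuming the local layer cache is available):

```bash
# Build each stage explicitly so every intermediate image gets a tag;
# after the first (expensive) target, the local layer cache makes the
# remaining builds essentially free.
for stage in base tfbuild pybase py36 py36-artifact; do
    docker build --target "$stage" -t "linux-py-wheels:$stage" .
done
```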
So far I have been able to start getting a GitHub Actions workflow "working" for the macOS build process:
I could get this to work end-to-end with the artifact serving as a cache and being properly re-populated / used as expected:
Since yesterday, I have been trying to get a full-blown TensorFlow build, and while the build of TensorFlow itself could complete successfully several times (~3h of build, better than expected), there were issues related to artifact handling, basically making the caching I have put in place not work (the artifact is missing even though I can see it in the UI). Also, this is starting to get a bit messy in the YAML file, but maybe we can refine the workflow into several smaller pieces and rely on https://docs.github.com/en/actions/reference/events-that-trigger-workflows#workflow_run ; however, I still lack a proper understanding of workflow_run.
Yes. The need to have task definitions already merged before you can trigger them is what makes using workflow_run problematic.
I guess the only viable alternative would be https://docs.github.com/en/rest/reference/actions#create-a-workflow-dispatch-event And somehow, we will end up re-creating tc-decision.
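For reference, triggering a workflow through that endpoint is a single authenticated POST; roughly like this (the workflow file name, branch and inputs below are placeholders for illustration):

```bash
# Trigger a workflow_dispatch event on a given workflow file and ref.
# "build.yml" and the inputs are hypothetical; $GITHUB_TOKEN needs repo access.
curl -fsS -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/mozilla/DeepSpeech/actions/workflows/build.yml/dispatches \
  -d '{"ref": "master", "inputs": {"flavor": "tflite"}}'
```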
I hope by "viable" you're joking hehe. That's just as limited as workflow_run :P
No, I was actually serious; there's no mention of the limitations of workflow_dispatch.
I think re-creating tc-decision is switching TC for a bad TC imitation. We should try to stick to the happy path in GitHub Actions as much as possible. Dealing with repetitive YAML files is way better than having to understand a custom dispatch solution.
Yes, that was implied in my comment: better to have something smaller in scope and/or repetitive but ownable than to recreate a perfect solution that relies on us, and that people would end up rewriting anyway in order to own it.
So:
Weirdly, it's only corrupted in
I could get something green, using
We can get caching via GitHub Actions artifacts. A full-blown TensorFlow build seems to be consistently ~3h on their hardware (a good surprise, I was expecting much worse), and a TFLite-only build is ~10 min; the current door-to-door workflow is ~15 min when re-using the cache of a full-blown TensorFlow build, and it would be ~25-30 min with no cache and only TFLite. Currently, it requires a small piece of specific JS code for a specific GitHub Actions implementation, because the default handling of artifacts makes them too tied to the workflow. This needs to be reviewed to ensure it is an acceptable increase of the code surface.
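For context on why that custom glue is needed: artifacts uploaded by a workflow are, by default, only directly accessible from that same workflow run, so reusing them across workflows means going through the REST API. A rough shell sketch of what that looks like (the repo and artifact names are placeholders, error handling is omitted, and $GITHUB_TOKEN needs appropriate permissions):

```bash
#!/usr/bin/env bash
set -euo pipefail

REPO="mozilla/DeepSpeech"          # placeholder repository
ARTIFACT_NAME="tensorflow-cache"   # placeholder artifact name

# Look up an artifact with the expected name via the Actions REST API.
ARTIFACT_ID=$(curl -fsSL \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  "https://api.github.com/repos/$REPO/actions/artifacts?per_page=100" \
  | jq -r --arg name "$ARTIFACT_NAME" '[.artifacts[] | select(.name == $name)][0].id')

# Download and unpack it so the current job can reuse the cached build;
# the zip endpoint redirects to a short-lived download URL, which curl -L follows.
curl -fsSL -H "Authorization: token $GITHUB_TOKEN" \
  -o "$ARTIFACT_NAME.zip" \
  "https://api.github.com/repos/$REPO/actions/artifacts/$ARTIFACT_ID/zip"
unzip -o "$ARTIFACT_NAME.zip"
```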
Heads up, I have opened a first PR to discuss: #3563
TaskCluster is a CI service provided by Mozilla, available both to Firefox development (the Firefox-CI instance) and to the community on GitHub (Community TaskCluster). It is widely used across Mozilla projects, and it has its own advantages. In our case, control over tasks and over workers for specific needs and long build times was easier to achieve by working with the TaskCluster team than by relying on other CI services.
However, this has led to the CI code being very specific to the project, and it has become a source of frustration for non-employees trying to send patches and get involved in the project; specifically, some of the CI parts were "hand-crafted", and triggering builds and tests requires being a "collaborator" on the GitHub project, which has other implications that make it complicated to enable for just anyone. In the end, this creates an artificial barrier to contributing to this project; even though we happily run PRs manually, it is still frustrating for everyone. Issue #3228 was an attempt to fix that, but we came to the conclusion that it would be more beneficial for everyone to switch to a well-known CI service and a setup that is less intimidating. While TaskCluster is a great tool and has helped us a lot, we feel its limitations now make it ill-suited for stimulating and enabling external contributions to the project.
We would also like to take this opportunity to enable more contributors to hack on and own the code related to CI, so the discussion is open.
Issues for GitHub Actions: