[FR] Multiarch support #830
I enabled aarch64 builds for our 15.5 dependencies in https://build.opensuse.org/project/show/filesystems:ceph:s3gw. These all built successfully, which is enough to allow
Not knowing what's up with that, I tried switching to using make instead of ninja:

diff --git a/qa/rgw/store/sfs/build-radosgw.sh b/qa/rgw/store/sfs/build-radosgw.sh
index 4f0f6192a04..062eb4acd0c 100755
--- a/qa/rgw/store/sfs/build-radosgw.sh
+++ b/qa/rgw/store/sfs/build-radosgw.sh
@@ -37,7 +37,6 @@ CC=${CC:-"gcc-12"}
CXX=${CXX:-"g++-12"}
CEPH_CMAKE_ARGS=(
- "-GNinja"
"-DBOOST_J=${NPROC}"
"-DCMAKE_C_COMPILER=${CC}"
"-DCMAKE_CXX_COMPILER=${CXX}"
@@ -111,14 +110,7 @@ _configure() {
_build() {
pushd "${SFS_BUILD_DIR}"
- ninja -j "${NPROC}" bin/radosgw crypto_plugins
-
- if [ "${WITH_TESTS}" == "ON" ] ; then
- # discover tests from ctest tags. Selects all tests which have the tag s3gw
- mapfile -t \
- UNIT_TESTS <<< "$(ctest -N -L s3gw | grep "Test #" | awk '{print $3}')"
- ninja -j "${NPROC}" "${UNIT_TESTS[@]}"
- fi
+ make -j "${NPROC}" radosgw crypto_plugins
popd
}

Narrator: It did not give us a quick win.

This failed with:
So we should really try to figure out what's up with that ninja failure.
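Not something from the thread, but one way to narrow that down would be to re-run the failing target by hand, serially and verbosely, so the exact failing command and its full output show up:

```sh
# Illustrative only: the build directory is whatever SFS_BUILD_DIR points at
# in build-radosgw.sh; targets are the ones the script already builds.
cd "${SFS_BUILD_DIR}"
ninja -j 1 -v bin/radosgw crypto_plugins
```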
I figured it out. There's a pretty straightforward workaround for that:

--- a/cmake/modules/LimitJobs.cmake
+++ b/cmake/modules/LimitJobs.cmake
@@ -2,6 +2,11 @@ set(MAX_COMPILE_MEM 3500 CACHE INTERNAL "maximum memory used by each compiling j
set(MAX_LINK_MEM 4500 CACHE INTERNAL "maximum memory used by each linking job (in MiB)")
cmake_host_system_information(RESULT _num_cores QUERY NUMBER_OF_LOGICAL_CORES)
+# This will never be zero on a real system, but it can be if you're doing
+# weird things like trying to cross-compile using qemu emulation.
+if(_num_cores EQUAL 0)
+ set(_num_cores 1)
+endif()
cmake_host_system_information(RESULT _total_mem QUERY TOTAL_PHYSICAL_MEMORY)
math(EXPR _avg_compile_jobs "${_total_mem} / ${MAX_COMPILE_MEM}")

...so now I've got a build running. More to follow if/when it eventually completes (it is not fast)...
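As an aside, a quick way to see what CMake actually detects for the core count inside the emulated environment (an untested sketch; the script path is arbitrary):

```sh
# Print what CMake reports for logical cores; per the workaround above, this
# can come back as 0 under qemu emulation.
cat > /tmp/check-cores.cmake <<'EOF'
cmake_host_system_information(RESULT n QUERY NUMBER_OF_LOGICAL_CORES)
message(STATUS "logical cores detected: ${n}")
EOF
cmake -P /tmp/check-cores.cmake
```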
With aquarist-labs/ceph#256 applied, I eventually got an aarch64 s3gw container built. It took TWENTY TWO HOURS running on a single core under qemu-aarch64, but it did build successfully, so at least we know it works. To do this for real, we'll want actual aarch64 builders.
With aquarist-labs/ceph#259 applied, which falls back to using
This is good news. I wonder if there's a bottleneck elsewhere, since it still takes way longer than I anticipated. Anyway, we can throw up to 32 cores at the problem with our current workers, which should make this workable.
It could just be that my desktop system sucks a bit. I can try to force an aarch64 build through CI if you like, by opening a PR with something like this:

--- a/.github/workflows/test-s3gw.yml
+++ b/.github/workflows/test-s3gw.yml
@@ -56,6 +56,7 @@ jobs:
- name: Build Unittests
run: |
docker build \
+ --platform linux/aarch64 \
--build-arg CMAKE_BUILD_TYPE=Debug \
--build-arg NPROC=16 \
--file s3gw/Dockerfile \
@@ -65,11 +66,12 @@ jobs:
- name: Run Unittests
run: |
- docker run --rm s3gw-unittests:${IMAGE_TAG}
+ docker run --platform linux/aarch64 --rm s3gw-unittests:${IMAGE_TAG}
- name: Build s3gw Container Image
run: |
docker build \
+ --platform linux/aarch64 \
--build-arg CMAKE_BUILD_TYPE=Debug \
--build-arg NPROC=16 \
--build-arg SRC_S3GW_DIR=s3gw \
@@ -85,7 +87,7 @@ jobs:
source ceph/qa/rgw/store/sfs/tests/helpers.sh
mkdir -p integration/storage
- CONTAINER=$(docker run --rm -d \
+ CONTAINER=$(docker run --platform linux/aarch64 --rm -d \
-p 7480:7480 \
-v $GITHUB_WORKSPACE/integration/storage:/data \
s3gw:${IMAGE_TAG} \
@@ -110,7 +112,7 @@ jobs:
source ceph/qa/rgw/store/sfs/tests/helpers.sh
mkdir -p smoke/storage
- CONTAINER=$(docker run --rm -d \
+ CONTAINER=$(docker run --platform linux/aarch64 --rm -d \
-p 7480:7480 \
-v $GITHUB_WORKSPACE/smoke/storage:/data \
s3gw:${IMAGE_TAG} \
@@ -128,7 +130,7 @@ jobs:
run: |
set -x
- docker run --rm \
+ docker run --platform linux/aarch64 --rm \
-v /run/docker.sock:/run/docker.sock \
-v ${GITHUB_WORKSPACE}/s3tr-out:/out \
--pull=always \
@@ -149,7 +151,7 @@ jobs:
run: |
set -x
- docker run --rm \
+ docker run --platform linux/aarch64 --rm \
-v ${GITHUB_WORKSPACE}/s3tr-out:/out \
-v ${GITHUB_WORKSPACE}/ceph:/ceph:ro \
ghcr.io/aquarist-labs/s3tr:latest \

(The above is untested, but the
You're using buildx and QEMU, right? This would need to be set up for the workers first. There are GH actions to do that easily. We even had them set up until last week, when they gave us trouble (again 🙄).
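For reference, this is roughly what that setup amounts to, whether done via those actions or by hand on a worker (a sketch, not our actual CI configuration):

```sh
# Roughly what docker/setup-qemu-action and docker/setup-buildx-action do on
# a runner, expressed as plain commands.
docker run --privileged --rm tonistiigi/binfmt --install arm64
docker buildx create --name multiarch --use   # builder name is arbitrary
docker buildx ls                              # linux/arm64 should now be listed
```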
Yeah, but it Just Worked[TM], i.e. I didn't do anything other than install docker on my desktop and somehow it knew how to do the right thing when I ran |
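That transparent behaviour usually means a qemu binfmt handler was already registered on the host; a quick, illustrative way to check:

```sh
# If the kernel has a qemu handler registered for aarch64 binaries, plain
# `docker build/run --platform linux/aarch64` can work without extra setup
# (entry names vary by distro).
ls /proc/sys/fs/binfmt_misc/ | grep -i aarch64
```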
Longhorn ships for amd64, arm64 and experimentally for s390x.
To complete integration with Longhorn, the s3gw needs to be built for all of Longhorn's supported architectures.
The current roadmap for aligning s3gw's target architectures and Longhorn's supported architectures is as follows: