Binfmt and QEMU are not configured for x86 #24
The issue with creating an x86 image appears to be due to the way that Ubuntu (and Debian) set up binfmt/qemu-user: on an x86_64 system they set up binfmt entries for armv7 and aarch64, but they do not set up an entry for x86 (unlike binfmt on Alpine, which does). So it is not exactly an issue with the script, more a difference in behaviour on Ubuntu/Debian regarding x86 emulation. I suspect this is done because Ubuntu and Debian tend to have multilib installed, permitting them to run x86 binaries directly on an x86_64 system. I will have to think about how to resolve this for Ubuntu and Debian in the script.

I am somewhat confused by the options you are passing to create-alpine-disk-image:

- "--root-part-size 128" - from a brief test this is an insufficient size for an Alpine rootfs to install into. How did you determine the 128MB/MiB figure?
- Why are you adding the "linux-virt" package? The appropriate kernel package is automatically selected by the script. Likewise, the open-vm-tools packages are automatically selected when you pass the "--virtual vmware" option. Also, I only just noticed that Alpine does not actually provide open-vm-tools packages for x86 - I have changed the script accordingly.
- Why are you adding the ca-certificates and wget packages? Both should be installed automatically by the script.
- Why are you installing "mesa-dri-gallium"? That is graphics-related, not something typically installed on a server.

I noticed you have made some changes in your fork of my repo. In particular I am confused why you have added Alpine 3.12.x to the script. Firstly, Alpine 3.12 has been out of support since the end of May. Secondly, there is no cloud-init package for Alpine 3.12 (it was added in Alpine 3.13.0) and, as my script relies on cloud-init, I specifically did not add 3.12.x support for that reason.
I also don't understand why you added a parted check to the create-alpine-disk-image script when it makes no use of parted at all - the secondary script created by create-alpine-disk-image does have a dependency on parted, but a check for that is already present here: https://github.com/HugoFlorentino/alpine-image/blob/main/lib/common-functions#L755 |
Eventually I did find a way to get past that error:
Apparently, to keep this registry persistent, it has to be added to

Some changes I made in my fork were rather blind attempts, because I could not find comments explaining the purpose of some options. I am learning as I go. You are correct, many packages were redundant. I added v3.12 because I couldn't find some drivers which I thought needed to be declared manually.
I tried the tool on a system without parted installed and it failed without outputting useful errors; that's why I added it. Better to fail early if it's not installed, IMHO. The same goes for shellcheck: the script failed without any useful error. |
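The fail-early idea being argued for here can be sketched as follows. This is an illustrative function, not the actual create-alpine-disk-image code; the function name and message wording are made up for the example:

```shell
#!/bin/sh
# Hedged sketch of a fail-early dependency check: verify that each
# named binary is on the PATH before doing any real work, and report
# all missing ones at once instead of failing mid-run.
check_required_binaries() {
  _missing=""
  for _bin in "$@"; do
    command -v "$_bin" >/dev/null 2>&1 || _missing="$_missing $_bin"
  done
  if [ -n "$_missing" ]; then
    echo "The following binaries need to be installed:$_missing" >&2
    return 1
  fi
  return 0
}

# Example usage (parted may legitimately be absent on the build host):
# check_required_binaries parted blkid sfdisk || exit 1
```

Checking `command -v` tests for the binary itself rather than the package that provides it, so it works the same on Alpine, Debian, and Ubuntu regardless of the package manager.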
Right, this is working around the issue I mentioned earlier that the Ubuntu and Debian binfmt-support/qemu-user-static packages do not set this up for x86. I will look at modifying my script to set this up on Debian/Ubuntu machines.
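For reference, detecting the missing handler can be done by looking for the entry under the binfmt_misc mount point. This is a minimal sketch, assuming the conventional "qemu-i386" entry name used by qemu-user registration and the usual /proc mount path (both assumptions, not taken from the script); the directory is a parameter so the logic can be exercised against a test location:

```shell
#!/bin/sh
# Hedged sketch: report whether a binfmt_misc handler for 32-bit x86
# is registered. "qemu-i386" is the conventional entry name; the
# directory defaults to the usual binfmt_misc mount point.
check_x86_binfmt() {
  _dir="${1:-/proc/sys/fs/binfmt_misc}"
  if [ -e "$_dir/qemu-i386" ]; then
    echo "x86 binfmt entry present"
  else
    echo "x86 binfmt entry missing"
  fi
}

# Example usage on a real host:
# check_x86_binfmt
```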
I don't see how selecting Alpine release 3.12 would make any difference regarding kernel drivers; the enabled virtio and VMware-specific drivers have not changed between kernels in Alpine releases for quite some time. Were you perhaps trying to use emulated physical devices on VMware (rather than virtualised virtio/VMware devices)? The Alpine linux-virt package does not include the full set of physical-device drivers that the linux-lts package does, as it is designed for virtual machines (rather than virtual machines "pretending" to be physical machines).
The create-alpine-disk-image script does not use parted at all and therefore does not check that parted is installed. Any script, such as create.sh, generated by create-alpine-disk-image will make use of parted, but such scripts do already include a check that the parted package is installed. If the parted package check failed then the create.sh script should produce a "The following packages need to be installed:" message and stop. So I'm confused how you could see a problem that was resolved simply by installing the parted package, unless you are testing on a non-Alpine/Debian/Ubuntu system - those are the only distros the script currently supports.
I'm using shellcheck as a debugging aid and so didn't add it to dependencies. I've changed the script to now make it a conditional dependency, I'll do a new MR in the next few hours to push this change. |
This is right now on Ubuntu 20.04 without parted installed:
It simply breaks and finishes, no error message at all. |
I just merged some changes including the Debian/Ubuntu setup for binfmt x86. |
If you create the script with the "--debug" flag, then once you run it there will be a logfile in the same directory with more information about the script run - that may highlight what is going on. |
Right:
Perhaps displaying the error would make the script more intuitive. Apparently this is what you intended with the "The following packages need to be installed:" message, but it's simply not shown, either with or without --debug. |
Hmm, that part of create.sh comes after it calls the check_for_required_packages function, which should give an error if the parted package is not installed. The script doesn't check whether the parted binary exists because it has already checked that the parted package is installed. I know I tested this on Debian not too long ago - Ubuntu generally functions 99% the same as Debian. I'll need to set up an Ubuntu VM to test further. |
Actually, I don't see that this function checks the presence of packages at all on Debian/Ubuntu. There is a |
There are 2 calls to dpkg-query - one in check_for_required_packages, the other in check_binfmt_packages. |
Sorry, I missed that. Funny it's not working though. |
Indeed. Perhaps you could try pasting that code into a separate script or on the command-line to check what is going on? |
I tried querying all the packages at once and also one at a time using a loop:
The first variant did not report things correctly (I have all packages installed right now):
|
Yeah, I was "lazy" in not using a loop and checking each package individually, as the existing code certainly works fine for me on Debian. Using a loop is the obvious solution. Out of interest, what does this show with the same $_package_list value?
|
It exits with a return value of 1. Go figure. Plus, I uninstalled parted and the loop didn't work as expected either. |
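The per-package loop being discussed could look something like the sketch below. The pkg_installed wrapper and missing_packages function are illustrative names, not the actual script's code; the dpkg-query invocation is the standard Debian/Ubuntu way to ask for a package's install status:

```shell
#!/bin/sh
# Hedged sketch of checking packages one at a time rather than in a
# single dpkg-query call, so one unknown package cannot mask the
# status of the others.

# Wrapper around dpkg-query: succeeds only if the package is in the
# "install ok installed" state on a Debian/Ubuntu host.
pkg_installed() {
  dpkg-query -W -f='${Status}' "$1" 2>/dev/null | grep -q 'ok installed'
}

# Print the space-separated list of packages that are NOT installed.
missing_packages() {
  _missing=""
  for _p in "$@"; do
    pkg_installed "$_p" || _missing="$_missing $_p"
  done
  printf '%s\n' "${_missing# }"
}

# Example usage:
# missing_packages parted qemu-user-static binfmt-support
```

Looping also preserves a meaningful exit path: the caller can test whether the printed list is empty instead of relying on the aggregate return code of one combined query.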
I cloned the project with the latest changes, prepared the script with debug enabled, and this is the output when trying to build an image for x86:
The script aborts at this point, without installing the list of additional packages I declared into the image. |
This appears to be a fundamental issue with installing packages - the busybox-1.35.0-r17.post-install script just runs busybox. All I can assume is that the binfmt/qemu-user workaround for x86 on Debian/Ubuntu is not working as expected. Do you see the same errors when creating an x86_64 disk image? |
No, with that architecture the script finishes building the image normally. Even while building the image for x86, there is a log line which attracted my attention:
Shouldn't it download a statically built APK tool for x86? |
Right, which seems to confirm it is a binfmt/qemu-user-x86 issue on Ubuntu x86_64.
No. The static APK is run on the machine before any binfmt/qemu-user activity (which happens inside the chroot). |
But x86 code is compatible with x86_64 hardware anyway, so it should work.
Debian and Ubuntu support multiarch, Alpine does not. Most binaries are not 100% statically linked and so require some libraries (the C library at the very least) in order to run. Therefore you can't run an Alpine x86 binary on a x86_64 machine if you do not also have the x86 C library (Musl in Alpine's case) available. Also Alpine uses Musl as its C library (and so Alpine binaries are linked to that), Debian and Ubuntu typically use Glibc. |
I thought the whole point of a static build was to not depend on libraries. |
I tried something else, after using the script with
Could that be the reason it fails? |
If a binary is completely statically linked, yes. However, "statically linked" often only means that at least one library has been statically linked, not all of them. Alpine binaries always dynamically link at least the C library (musl). If you run "ldd" on something like Alpine's /bin/busybox you will see this. |
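One rough way to see the distinction without ldd: a dynamically linked ELF embeds the path of its dynamic loader (the PT_INTERP segment), so that path is visible in the file's bytes. The following is a heuristic sketch, not a definitive test - loader strings could in principle appear in a static binary's data too:

```shell
#!/bin/sh
# Hedged heuristic: dynamically linked Linux executables embed their
# dynamic loader path (e.g. ld-linux-* for glibc, ld-musl-* for musl)
# in the PT_INTERP segment, so grepping the raw bytes for common
# loader names usually distinguishes dynamic from fully static.
has_interp() {
  grep -aq -e 'ld-linux' -e 'ld-musl' "$1"
}

# Example usage:
# has_interp /bin/busybox && echo "dynamically linked" || echo "probably static"
```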
Rather than grepping, did you actually look at those sections of the create.sh script? No, those lines are not relevant. |
I am trying to use the tool from Ubuntu 20.04, but it fails. I prepared the script with this command, which gave no error:
create-alpine-disk-image --debug --script-filename create.sh --virtual vmware --arch x86 --cpu-vendor intel --enable-watchdog --os-device-media disk --os-device-type scsi --ipv4only --keymap 'us us-intl' --optimise --username cloud --password cloud --auth-control both --root-part-size 128 --add-packages 'linux-virt mesa-dri-gallium open-vm-tools-deploypkg open-vm-tools-guestinfo open-vm-tools-openrc open-vm-tools-static open-vm-tools-timesync open-vm-tools-vmbackup busybox-extras binutils wget ca-certificates'
However, when I try to run it (as root), following error message appears:
Binfmt and QEMU are not configured for x86
I think I have all dependencies installed, but since I couldn't see a list of required packages, I installed them basically through trial and error.
Edit: I found out what the problem was. My system has x86_64 as its architecture, but my intention was to build a strict x86 Alpine image. Apparently, in order to build an image for a different architecture, some extra configuration must be done which the script doesn't currently support.