
Build enclave image without Docker daemon from local docker archive #235

Open
sphw opened this issue Mar 6, 2021 · 22 comments
Labels
enhancement New feature or request

Comments

@sphw

sphw commented Mar 6, 2021

What

Right now, you need a running Docker daemon to build an enclave image. The LinuxKit version currently included attempts to pull the image using the Docker daemon. linuxkit/linuxkit#3573 now lets LinuxKit pull the images directly without the need for a Docker daemon.

Why

As part of our enclave support, we (M10 Networks) want to be able to build enclave images entirely in a Dockerfile. We distribute all of our services through Docker images, and all of our builds are performed entirely inside Dockerfiles. While it would be possible to change this for Nitro, I don't think that should be necessary, since only a simple change is needed here.

How

To supply this functionality more or less out of the box, all that would be required is updating the included LinuxKit to one of the latest builds. Users could then use skopeo, or a similar tool, to transfer their local image into LinuxKit's cache at ~/.linuxkit/cache themselves.

A more user-friendly follow-on would be to let users simply pass in a path to their own Docker image archive. At that point, the CLI would need to copy the image into the LinuxKit cache.
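For illustration, the skopeo route might look like the following. The archive name and tag are placeholders, and exactly how LinuxKit keys entries in its cache can vary between versions, so treat this as a sketch to verify:

```shell
# Copy a local Docker archive into LinuxKit's content cache (an OCI layout
# under ~/.linuxkit/cache) without talking to the Docker daemon.
skopeo copy \
  docker-archive:myapp.tar \
  "oci:${HOME}/.linuxkit/cache:docker.io/library/myapp:latest"
```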

As a temporary workaround for our use case, I can simply replace the LinuxKit binary in /usr/share/nitro_enclaves/blobs with my own. I think this use case is common enough that it should either be supported through easier means or documented in some way.

@sphw
Author

sphw commented Mar 7, 2021

On further investigation, there is another reason nitro-cli calls out to the Docker daemon: it inspects the Docker image:

fn inspect_image(&self) -> Result<(Vec<String>, Vec<String>), DockerError> {

There are a few ways this could be fixed. It might make sense to let users configure the env vars and cmd themselves; that would be useful as a feature in its own right. I am happy to open a PR adding those options to the CLI if there is interest.
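As an illustration (not nitro-cli's current behavior), the same Env and Cmd data can be read without a daemon straight from the image config, e.g. with skopeo and jq against a Docker archive (the archive path is a placeholder):

```shell
# Read Env and Cmd from the image config itself, no Docker daemon involved.
skopeo inspect --config docker-archive:myapp.tar | jq '.config.Env, .config.Cmd'
```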

@petreeftime
Contributor

Our LinuxKit also has an additional feature which doesn't seem to be merged: linuxkit/linuxkit#3446 and needs to be rebased. @neacsu do you know what is missing for upstreaming the feature? Are there any changes we would need to do in the codebase to support it?

@sphw
Author

sphw commented Mar 8, 2021

> Our LinuxKit also has an additional feature which doesn't seem to be merged: linuxkit/linuxkit#3446 and needs to be rebased. @neacsu do you know what is missing for upstreaming the feature? Are there any changes we would need to do in the codebase to support it?

I ran into that today when I was making these changes. I rebased one of the commits from that PR onto develop: https://github.com/m10io/linuxkit

I also made some preliminary changes here: https://github.com/m10io/aws-nitro-enclaves-cli/tree/cmd-env-params

I can clean those up and open a PR if there is interest

@neacsu
Contributor

neacsu commented Mar 8, 2021

> I ran into that today when I was making these changes. I rebased one of the commits from that PR onto develop: https://github.com/m10io/linuxkit

The patch there is actually the first version from the PR, the one that works with the current aws-nitro-enclaves-cli codebase, i.e. the one that takes the prefix as a command option (-prefix).

@petreeftime The PR hasn't gotten more traction since I last addressed the required changes. I will try to ping the maintainers. The changes to the linuxkit repo would also require some changes in aws-nitro-enclaves-cli: the prefix would have to be passed in the YML file instead of as a command option.

@sphw Thanks for taking an interest in this. The proposal to specify cmd and env separately sounds good. It would definitely be nice to have a way of doing things based only on Docker format and not necessarily on the Docker daemon.

In that sense, we should try to integrate the linuxkit changes in some way. Right now, the best way to do that would probably be to update the linuxkit blob with our feature patch applied, until we can merge it upstream.

@sphw sphw changed the title Build enclave image without Docker daemon from local docker achive Build enclave image without Docker daemon from local docker archive Mar 17, 2021
@andraprs andraprs added the enhancement New feature or request label Apr 13, 2021
@exFalso

exFalso commented Jan 30, 2022

We had a similar problem. We're using Nix for reproducible builds, and Dockerfile + daemon builds don't play well with the build sandbox (and the resulting images are also not reproducible).
Thankfully, the Docker abstraction is not actually used later in the .eif build: the layers are unpacked and repacked into an initramfs. This means you can produce an initramfs from your Docker archive yourself and use it for your .eif builds with the eif_build command: https://github.com/aws/aws-nitro-enclaves-cli/blob/8f6ed740b05225512d86163f8b02292668c4b056/eif_utils/src/bin/eif_build.rs.
Note that the official nitro init assumes a particular initramfs layout, which you must follow.
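A sketch of that layout, based on the ramdisks nitro-cli itself assembles. The file names here are assumptions to verify against your nitro-cli version (the bootstrap ramdisk carries init and the NSM driver; the user ramdisk carries the pieces below):

```shell
set -eu
# Skeleton of the user ramdisk contents the stock init expects (assumed names).
mkdir -p initramfs/rootfs   # unpacked image filesystem goes here
: > initramfs/cmd           # command to execute inside the enclave
: > initramfs/env           # environment variables for that command
find initramfs
```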

@petreeftime
Contributor

bollard, as an alternative to shiplift, might support podman, which would remove the Docker daemon requirement. It also seems to be actively maintained, but this needs to be tested first.

@eugkoira
Contributor

eugkoira commented May 6, 2022

We see three issues in this ticket:

@Mikescops

Mikescops commented Jun 13, 2022

Hello,
We are also facing this limitation and would like the ability to build Nitro .eif images on our CI without the Docker daemon (using Kaniko, for instance).
Thanks @shtaked for pushing this issue; will it be prioritized at some point?

@awnumar

awnumar commented Jul 15, 2022

We've run into this issue while trying to build a docker image.tar file into an enclave image.

@shtaked

> Actually shiplift can also work with podman if you specify DOCKER_HOST environment variable pointing to podman docker-compatible REST service:

This is quite tricky to set up, in my experience.

  • It's not possible to start podman system service inside the container without first disabling seccomp, e.g. by passing the --security-opt seccomp=unconfined flag to docker run. This is a potential security risk, and may not be possible in some trusted environments (e.g. concourse).
  • Due to some upstream issue, podman is difficult to install on amazonlinux, whereas nitro-cli is difficult to install anywhere except amazonlinux.
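For reference, the podman route described above looks roughly like this (the socket path is a placeholder, and as noted it may require relaxed seccomp inside a container):

```shell
# Start podman's Docker-compatible API service and point clients at it.
podman system service --time=0 unix:///tmp/podman.sock &
export DOCKER_HOST=unix:///tmp/podman.sock
nitro-cli build-enclave --docker-uri myapp:latest --output-file myapp.eif
```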

> we don't have ability to use container image tar file as an input for nitro-cli. It looks like a fair request feasible to implement

This would be the ideal solution for us. We currently don't have a good way of automating these builds while there continues to be a dependency on a docker daemon.

@eugkoira
Contributor

Update from our side:

  • We are currently removing the Docker daemon dependency when pulling images: essentially dropping the shiplift library and pulling images from OCI registries ourselves.
  • We updated linuxkit to the latest version, and it can now be used without the Docker daemon.
  • We are exploring options to provide a flag for an input container image file. The main question is which image format to support: Docker v1 or OCI. To us, switching completely to OCI looks like the more generic approach, but we would appreciate input on your use cases (sphw, Mikescops, awnumar).

@Mikescops

Thanks for the updates, sounds great!
@shtaked definitely OCI. Our CI builds with Kaniko to OCI, and in most cases OCI seems to be the way to go now (also, Docker v1 is deprecated).

@sphw
Author

sphw commented Sep 20, 2022

@shtaked We are currently using Docker v1 images built with Nix. I'm not 100% sure where Nix's support for OCI images stands. If it doesn't exist yet, I can commit to adding it to nixpkgs; since the formats are similar, it shouldn't be a big lift.

@Mikescops

@shtaked hey, do you have any updates on this work? We're still blocked by this issue at the moment. Thanks!

@exFalso

exFalso commented Nov 8, 2022

Again, I'd like to point out that .eif images may be built without involving Docker at all. Like @sphw, we also use Nix, and have been building images for production use for quite a while now. There's no need for Docker or OCI indirection; you can just use a plain initrd that wraps a folder. See #235 (comment) for details.

@Mikescops

Mikescops commented Nov 9, 2022

@exFalso How do you handle the signature part of the .eif? This signature is critical for us.
Also, even if your solution works, it's not convenient for us: we use Docker images everywhere to version our builds.

@exFalso

exFalso commented Nov 9, 2022

You can just unpack the layers into the rootfs folder if you really want to start from a Docker image. That's basically what the AWS tooling does as well, though for some reason it jumps through many hoops to do it.
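A minimal sketch of that unpacking. It fabricates a tiny two-layer `docker save`-style archive first so the snippet is self-contained; the sed-based manifest parsing is a stand-in for a proper JSON parser like jq, and real images would also need whiteout (.wh.*) handling, omitted here:

```shell
set -eu
# Demo setup: fabricate a tiny `docker save`-style archive with two layers.
workdir=$(mktemp -d); cd "$workdir"
mkdir l1 l2
echo hello > l1/motd
echo world > l2/release
tar -cf layer1.tar -C l1 .
tar -cf layer2.tar -C l2 .
printf '[{"Layers":["layer1.tar","layer2.tar"]}]' > manifest.json
tar -cf image.tar manifest.json layer1.tar layer2.tar

# The actual unpacking: apply each layer listed in manifest.json, in order.
mkdir -p rootfs
layers=$(tar -xOf image.tar manifest.json |
         tr ',' '\n' | sed -n 's/.*"\([^"]*\.tar\)".*/\1/p')
for layer in $layers; do
  # Each layer is itself a tar archive; later layers overwrite earlier files.
  tar -xOf image.tar "$layer" | tar -xf - -C rootfs
done
find rootfs -type f
```

The resulting rootfs/ directory is then ready to be wrapped into an initrd as described earlier in the thread.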

By signatures do you mean the built-in way of signing the images (with PCR8 and whatnot)? eif_build has the corresponding options:

$ ./result/bin/eif_build --help
Enclave image format builder
Builds an eif file

USAGE:
    eif_build [FLAGS] [OPTIONS] --cmdline <String> --kernel <FILE> --output <FILE> --ramdisk <FILE>...

FLAGS:
    -h, --help       Prints help information
        --sha256     Sets algorithm to be used for measuring the image
        --sha384     Sets algorithm to be used for measuring the image
        --sha512     Sets algorithm to be used for measuring the image
    -V, --version    Prints version information

OPTIONS:
        --cmdline <String>                             Sets the cmdline
        --kernel <FILE>                                Sets path to a bzImage/Image file for x86_64/aarch64 architecture
        --output <FILE>                                Specify output file path
        --private-key <private-key>                    Specify the path to the private-key
        --ramdisk <FILE>...                            Sets path to a ramdisk file representing a cpio.gz archive
        --signing-certificate <signing-certificate>    Specify the path to the signing certificate
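Putting those options together, a signed build might look like this (all file paths are placeholders):

```shell
./eif_build \
    --kernel bzImage \
    --cmdline "panic=30" \
    --ramdisk initramfs.cpio.gz \
    --output app.eif \
    --sha384 \
    --private-key signing-key.pem \
    --signing-certificate signing-cert.pem
```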

(Sidenote: I don't quite understand the benefit of this signature scheme. Attestation will verify the image hash already, which you can sign out of band if you really want to. Having it built in would only really make sense if specific Nitro functionality were tied to the signature, similar to SGX MRSIGNER, e.g. sealing capabilities tied to the signing key.)

@Mikescops

Thanks, that's helpful. The benefit of the signature (and then the PCR8 check) for us is that we trust the environment building the image but not the rest of the infrastructure. In other words, we want to prevent an operator on the infrastructure from launching a rogue .eif.

@petreeftime
Contributor

> You can just unpack the layers into the rootfs folder if you really want to start from a docker image. That's basically what the aws tooling does as well, but for some reason it jumps through many hoops to do it.

Yes, the process to build from a docker container is a bit complex and can certainly be simplified, but just using docker (or podman) was not sufficient, as there was no guarantee that a given container image always yields the same cpio archive. This is something to bear in mind if this sort of reproducibility is important; if you rely solely on PCR8 for validation, it's probably not a requirement.

@exFalso

exFalso commented Nov 10, 2022

Yeah, reproducibility was precisely why we ended up not using Docker at all and creating the initrd directly. Dockerfiles in particular almost encourage users to create non-reproducible images by downloading non-pinned stuff from the internet.

If you do have e.g. a root/ directory with the right structure (so the unpacked layers are under root/rootfs), you can get quite far by just normalizing timestamps and calling cpio with the right magic flags. For example, if you want the cpio result at $out, you can do:

    find root -exec touch -h --date=@1 {} +
    (cd root && find * .[^.*] -print0 | sort -z | cpio -o -H newc -R +0:+0 --reproducible --null | gzip -n > $out)

which will also normalize the uid/gid.

@jovanni-hernandez

Hey all,

I'm also running into this limitation - it would be awesome to build EIFs in our CI using the native nitro-cli tooling. Is this still a feature that is being looked into?

@cottand

cottand commented Jun 11, 2024

Hi all,

We ran into this limitation and went with @exFalso's approach above, building the enclaves 'from scratch' with aws/aws-nitro-enclaves-image-format directly rather than using the Nitro CLI. This uses Nix instead of Docker, so you need neither the daemon nor other privileged builders, and you do not need Docker images (though you can tweak it to use tars if you want that). You get some other benefits, such as the ability to use your own kernel or init process (the Nitro CLI instead ships hard-to-reproduce prebuilt binaries).

We made these efforts open-source at monzo/aws-nitro-util. We are using this to build enclaves in production.

@meerd
Contributor

meerd commented Jun 14, 2024

Hi @cottand,

Congratulations on your new project! We're excited to see the added value that aws-nitro-util brings to Nitro Enclaves users.

Recently, @foersleo has been hard at work enabling reproducible builds in aws-nitro-enclaves-sdk-bootstrap. Once this effort is complete, all the blobs distributed via the aws-nitro-enclaves-cli-devel package will be reproducible and verifiable.
