
Use new docker build provider #1278

Merged · 8 commits merged into master on Jan 6, 2025
Conversation

blampe
Contributor

@blampe blampe commented Apr 26, 2024

This replaces the old Docker provider with the new Docker Build provider.

User-facing AWSX types are unchanged.

Behavior is unchanged, except we will now automatically store and use an inline cache if the user hasn't configured anything explicitly.

Fixes #1349
Fixes #1072

@blampe blampe changed the title docker-build Use new docker build provider Apr 26, 2024
@blampe blampe marked this pull request as draft April 26, 2024 22:32
Contributor

@lukehoban lukehoban left a comment


SDK types are unchanged -- those remain the old Docker API.

The @pulumi/docker types don't show up in the public API of this component, right? So what do you mean by "SDK types are unchanged"? I assume we can remove the dependency on @pulumi/docker from the implementation and update the implied dependency in the schema (which I assume is only there to hint at the internal dependency, even though it's not strictly needed). Feels like maybe I'm missing something that motivated this comment, though?

@@ -22,8 +22,9 @@
},
"//": "Pulumi sub-provider dependencies must be pinned at an exact version because we extract this value to generate the correct dependency in the schema",
"dependencies": {
"@pulumi/aws": "6.32.0",
"@pulumi/aws": "6.9.0",
"@pulumi/docker": "4.5.1",
Contributor

Can we remove this dependency?

Contributor Author

I think removing this will break anyone who indirectly depends on it -- is that OK? In other words I can specify only awsx in package.json but still use docker in my program.
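To illustrate the concern (version numbers illustrative), a program that declares only awsx can still import docker, because npm installs the transitive dependency into a flat node_modules tree:

```typescript
// package.json declares only the direct dependency:
// {
//   "dependencies": { "@pulumi/awsx": "^2.0.0" }
// }

// ...yet this import still resolves, because @pulumi/docker is installed
// transitively via awsx and hoisted by npm. Dropping the dependency from
// awsx would break programs that rely on this.
import * as docker from "@pulumi/docker";
```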

@blampe
Contributor Author

blampe commented Aug 9, 2024

So what do you mean that "SDK types are unchanged"?

Sorry, this meant that awsx's own wrapper types around Docker are unchanged. Only the internals are changing, the user-facing API remains the same.

@blampe blampe requested a review from flostadler August 9, 2024 17:35
@blampe blampe marked this pull request as ready for review August 9, 2024 17:35
@mjeffryes mjeffryes added this to the 0.108 milestone Aug 16, 2024
Contributor

@flostadler flostadler left a comment

LGTM, just two small questions.
I was thinking that we should add an upgrade test for going from the old to the new docker provider, what do you think?

tags: [canonicalImageName],
buildArgs: dockerInputs.args,
cacheFrom: cacheFrom,
cacheTo: [{ inline: {} }],
Contributor

Do you know if this will increase image sizes substantially? IIRC this is only metadata about the layers so the size increase shouldn't be substantial.

Contributor Author

Great question! It's actually full cache artifacts, so it can increase the size a bit. I think it would be best to take this out and stay as close to the existing behavior of not doing any caching stuff unless cacheFrom is set.

However this makes me realize we probably do want inline caching when cacheFrom is set. Currently it's using a registry cache output, but this isn't compatible with the existing behavior because users currently point this at an image (not a cache artifact) due to the way the old Docker provider did caching. (It would simply pull images to get layers into your local build cache.) In other words we don't want to export a cache to image:latest, but it would be reasonable to push an image with inline caching.

Caching Docker images is hard enough as it is and the old provider's behavior was also easy to get wrong. For example cacheFrom might not do anything without a magic env var:

new docker.Image("my-app-image", {
    build: {
        args: {
            BUILDKIT_INLINE_CACHE: "1",
        },
        cacheFrom: {
            images: ["foo:latest"],
        },
        context: "app/",
        dockerfile: "app/Dockerfile",
    },
    imageName: "foo:latest",
});

So for us to stay as close to the existing behavior while also allowing caching to somewhat work, I think we basically just want to enable this BUILDKIT_INLINE_CACHE behavior when cacheFrom is present.
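With the new docker-build provider, that same build can be sketched roughly as follows (image names illustrative; `cacheFrom` pulls layers from an existing image and `cacheTo: inline` embeds the cache in the pushed image, so no magic `BUILDKIT_INLINE_CACHE` build arg is needed):

```typescript
import * as dockerBuild from "@pulumi/docker-build";

// Sketch of the equivalent build under the new provider: reuse layers
// from a previously pushed image and write an inline cache into the
// image being pushed.
const image = new dockerBuild.Image("my-app-image", {
    context: { location: "app/" },
    dockerfile: { location: "app/Dockerfile" },
    tags: ["foo:latest"],
    cacheFrom: [{ registry: { ref: "foo:latest" } }],
    cacheTo: [{ inline: {} }],
    push: true,
});
```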

Contributor

I tried a couple of different sample images and there was no substantial increase in image size. For some the image was even smaller. So the current approach sounds good to me

if (dockerInputs.context !== undefined) {
context = dockerInputs.context;
}

const dockerImageArgs: docker.ImageArgs = {
Contributor

Are there any new input properties we should expose as well?

Contributor Author

Probably! There's a lot more we can do now, e.g. multiple tags, multiple platforms, more refined caching behavior, etc.

I didn't want to add any of those right now because (a) these are opinionated APIs and opinions take more time to reach consensus on, and (b) it's not clear how much (if any) we want to continue investing in this wrapper versus giving users more composable building blocks (like your example of registry auth helpers). The latter is my preference, but it's not my call to make.
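As a hypothetical sketch (not part of this PR), options like multiple tags and multi-platform builds that the new provider could surface through awsx would look something like:

```typescript
import * as dockerBuild from "@pulumi/docker-build";

// Hypothetical: inputs the new provider makes possible, such as
// several tags on one build and a multi-architecture image.
const image = new dockerBuild.Image("multi-arch", {
    context: { location: "app/" },
    tags: ["repo/app:latest", "repo/app:v1.2.3"],
    platforms: ["linux/amd64", "linux/arm64"],
    push: true,
});
```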

@mikhailshilkov mikhailshilkov removed this from the 0.108 milestone Aug 22, 2024
@mjeffryes mjeffryes modified the milestones: 0.109, 0.110 Sep 12, 2024
@mjeffryes mjeffryes removed this from the 0.110 milestone Oct 2, 2024
@t0yv0
Member

t0yv0 commented Nov 6, 2024

Are we going forward with this @blampe or should we pick up on AWS squad?

What is your confidence that the upgrade is going to be "invisible" to users? Does new Docker Build provider Image have an alias to the old image and is ready to accept old image states and rewire them? Or is there going to be a replacement during upgrade but we believe it is benign?

I was thinking that we should add an upgrade test for going from the old to the new docker provider, what do you think?

This would be good but I also wonder if we know enough to cover the upgrade space well enough to test it properly.

Perhaps instead we can just include this change in the next major version?
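For reference, one way to attempt an "invisible" upgrade is aliasing the new resource to the old provider's type token so the engine treats it as the same resource rather than a replacement (a sketch; whether the old state can actually be rewired into the new shape is exactly the open question above):

```typescript
import * as dockerBuild from "@pulumi/docker-build";

// Sketch: alias the new Image to the classic docker provider's type
// token. The token string is the classic provider's, shown here for
// illustration; state-shape compatibility would still need verifying.
const image = new dockerBuild.Image("my-image", {
    context: { location: "app/" },
    tags: ["foo:latest"],
    push: true,
}, {
    aliases: [{ type: "docker:index/image:Image" }],
});
```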

@t0yv0
Member

t0yv0 commented Nov 6, 2024

This likely will also fix #1401

@flostadler
Contributor

@t0yv0 In my opinion we shouldn't wait for the next major with this. Even if the image does get rebuilt it shouldn't be a problem. It's not a breaking change like infrastructure being replaced.

As part of another change I'm going to add a function to the provider for retrieving ECR creds. If there are users who do not want to go with docker-build, they could use that and pipe it into the docker.Image resource.
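That escape hatch could look roughly like this today using the AWS provider's existing authorization-token function (the awsx helper mentioned above landed separately; repository URL illustrative):

```typescript
import * as aws from "@pulumi/aws";
import * as docker from "@pulumi/docker";

// Sketch: fetch ECR credentials and pass them to the classic
// docker.Image resource for users staying on the old provider.
const auth = aws.ecr.getAuthorizationTokenOutput({});

const image = new docker.Image("legacy-image", {
    imageName: "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
    build: { context: "app/" },
    registry: {
        server: auth.proxyEndpoint,
        username: auth.userName,
        password: auth.password,
    },
});
```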

@flostadler
Contributor

I resolved the merge conflicts with the main branch, updated to the latest docker-build (0.0.8) and added an upgrade test. I think this is good to go now!

@flostadler flostadler requested review from t0yv0 and corymhall January 2, 2025 14:41
@flostadler
Contributor

@blampe I updated the PR, could you have a look as well?

@@ -25,7 +25,7 @@
"//": "Pulumi sub-provider dependencies must be pinned at an exact version because we extract this value to generate the correct dependency in the schema",
"dependencies": {
"@pulumi/aws": "6.65.0",
"@pulumi/docker": "4.5.1",
"@pulumi/docker-build": "0.0.8",
Contributor Author

@blampe blampe Jan 4, 2025

Is this possibly a breaking change? Like if I pull in p/docker indirectly via awsx and then reference it from my code, my imports would break?

I don't know if that's what would actually happen or if it's avoidable. Would you leave in the old p/docker dependency even if it's unused?

Mostly just asking to make sure we've thought through any edge cases.

Contributor

IMO it's not a breaking change. Users shouldn't depend on transitive dependencies directly. They should add those to their own dependencies.

That being said, I'm currently working on another awsx feature that'll add back the docker provider for push-without-build scenarios, so we're going to be covered here.

@blampe
Contributor Author

blampe commented Jan 4, 2025

@blampe I updated the PR, could you have a look as well?

@flostadler it looks great to me! Just one small question re: dependencies but not blocking. Feel free to merge whenever you want!

@flostadler flostadler merged commit fe08e89 into master Jan 6, 2025
11 checks passed
@flostadler flostadler deleted the buildx branch January 6, 2025 12:19
@pulumi-bot
Contributor

This PR has been shipped in release v2.20.0.
