
✨ Add kubemark provider #3847

Closed
benmoss wants to merge 2 commits

Conversation

@benmoss commented Oct 22, 2020

What this PR does / why we need it:
Adds the kubemark provider to test/infrastructure to live alongside its BFF, CAPD.

Right now it lives in https://github.com/benmoss/cluster-api-provider-kubemark which is less than ideal.

Alternatively, we could move it to a new kubernetes-sigs repo of its own, but I figured I would propose this first since we already have CAPD as precedent for these development-minded providers living in this repo.

I think the release Makefile bits will need some twiddling; it's mostly based on CAPA's Makefile but should probably converge more with CAPD's.

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Oct 22, 2020
@k8s-ci-robot commented:
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign ncdc after the PR has been reviewed.
You can assign the PR to them by writing /assign @ncdc in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the size/XXL label (denotes a PR that changes 1000+ lines, ignoring generated files) on Oct 22, 2020
@benmoss force-pushed the kubemark-provider branch 2 times, most recently from 0991602 to e7b5a21 on October 22, 2020 at 14:29
test/infrastructure/kubemark/Dockerfile: 2 resolved review threads (outdated)
test/infrastructure/kubemark/PROJECT: 2 resolved review threads (outdated)
Comment on lines 233 to 247
// Create a certificate manager that uses the bootstrap token's client
// credentials to request a kubelet-style client certificate for the hollow
// node (CN "system:node:<name>", organization "system:nodes").
mgr, err := certificate.NewManager(&certificate.Config{
	BootstrapCertificatePEM: cfg.AuthInfos[TokenUser].ClientCertificateData,
	BootstrapKeyPEM:         cfg.AuthInfos[TokenUser].ClientKeyData,
	CertificateStore:        certificateStore,
	Template: &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   fmt.Sprintf("system:node:%s", kubemarkMachine.Name),
			Organization: []string{"system:nodes"},
		},
	},
	Usages: []certificates.KeyUsage{
		certificates.UsageDigitalSignature,
		certificates.UsageKeyEncipherment,
		certificates.UsageClientAuth,
	},
	ClientsetFn: newClientFn,
})
if err != nil {
	logger.Error(err, "error creating cert manager")
	return ctrl.Result{}, err
}

// Start the manager so it obtains (and later rotates) the certificate in the background.
mgr.Start()
Member
We already have access to the cluster CA (through the secrets); I suspect quite a bit of this could be simplified by using that CA directly rather than using the bootstrap token to request a certificate to be signed by the cluster CA.

That said, it might be good to keep this workflow if we intend to support control plane providers other than Kubeadm.
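As a rough sketch of that simplification (not code from this PR; the function name, the RSA/PKCS#1 CA key format, and how the CA cert/key PEM data is pulled from the cluster's CA secret are all assumptions), signing the node client certificate directly with the cluster CA could look something like this:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// signNodeClientCert issues a client certificate for system:node:<nodeName>,
// signed directly by the cluster CA (e.g. the cert/key stored in the cluster's
// CA secret), instead of going through the bootstrap-token CSR flow.
func signNodeClientCert(caCertPEM, caKeyPEM []byte, nodeName string) (certPEM, keyPEM []byte, err error) {
	caBlock, _ := pem.Decode(caCertPEM)
	if caBlock == nil {
		return nil, nil, fmt.Errorf("invalid CA certificate PEM")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	// Assumes an RSA CA key in PKCS#1 PEM form, as kubeadm generates by default.
	keyBlock, _ := pem.Decode(caKeyPEM)
	if keyBlock == nil {
		return nil, nil, fmt.Errorf("invalid CA key PEM")
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}

	// Fresh key pair for the hollow node.
	nodeKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject: pkix.Name{
			CommonName:   fmt.Sprintf("system:node:%s", nodeName),
			Organization: []string{"system:nodes"},
		},
		NotBefore:   time.Now().Add(-5 * time.Minute),
		NotAfter:    time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &nodeKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}

	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(nodeKey)})
	return certPEM, keyPEM, nil
}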

@benmoss commented Oct 22, 2020
Yeah, I reverse-engineered this by figuring out how both the kubeadm join and kubelet bootstrap phases converted the token into a full-fledged kubeconfig. I am sure it can be made simpler, and yes, some head-scratching might have to go into figuring out how we could support other bootstrap providers; right now it's quite tied to CAPBK.
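For context on that flow, here is a minimal sketch (assumed names; not code from this PR) of the kind of bootstrap kubeconfig that both kubeadm join and the kubelet's TLS bootstrap start from: its only credential is the bootstrap token, which the certificate manager above then trades for a signed client certificate.

package sketch

import (
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// bootstrapKubeconfig builds an in-memory kubeconfig whose only credential is
// a bootstrap token ("<id>.<secret>"). This is the starting point from which a
// real, certificate-backed kubeconfig is derived once the CSR is approved.
func bootstrapKubeconfig(server string, caCertPEM []byte, token string) *clientcmdapi.Config {
	const name = "bootstrap"
	return &clientcmdapi.Config{
		Clusters: map[string]*clientcmdapi.Cluster{
			name: {
				Server:                   server,
				CertificateAuthorityData: caCertPEM,
			},
		},
		AuthInfos: map[string]*clientcmdapi.AuthInfo{
			name: {Token: token},
		},
		Contexts: map[string]*clientcmdapi.Context{
			name: {Cluster: name, AuthInfo: name},
		},
		CurrentContext: name,
	}
}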

Member
If that's the case, then I'd say you can likely just rip out a good chunk of this by snagging the CA secret and creating the node cert/kubeconfig directly. We can always add additional support later if need be.

test/infrastructure/kubemark/controllers/copy_pasta.go: resolved review thread (outdated)
@benmoss commented Oct 22, 2020

One important note with all this: there are no official kubemark images published right now.

We have two options:

  • publish our own under some sigs.k8s.io
  • wait until we can convince sig-scalability / sig-release to publish them under k8s.gcr.io

I've tried to start the conversation (today) with sig-scalability, but so far no movement

(ASCII-art "Kubemark" banner)
Member
fancy ;)

@neolit123 commented:
SGTM, thanks for the contribution @benmoss !

@neolit123 commented Oct 22, 2020

> One important note with all this: there are no official kubemark images published right now.
>
> We have two options:
>
>   • publish our own under some sigs.k8s.io
>   • wait until we can convince sig-scalability / sig-release to publish them under k8s.gcr.io
>
> I've tried to start the conversation (today) with sig-scalability, but so far no movement

is this about a test image for kubemark or the CAPI provider for kubemark?

for kubemark itself there is already a gcr.io/k8s-testimages/kubemark.
(but there is also this issue to migrate everything away from k8s-testimages:
kubernetes/k8s.io#1312)

for provider images, my assumption is this is where this should go:
https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images

e.g. there is a:
https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images/k8s-staging-capi-docker

i don't think we should include kubemark* as part of the root k8s.gcr.io/ release, thus #sig-release redirected this to sig-scalability.

@k8s-ci-robot commented:
@benmoss: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-cluster-api-verify fe84dd1 link /test pull-cluster-api-verify
pull-cluster-api-test-main fe84dd1 link /test pull-cluster-api-test-main

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@benmoss commented Oct 23, 2020

> is this about a test image for kubemark or the CAPI provider for kubemark?
>
> for kubemark itself there is already a gcr.io/k8s-testimages/kubemark.
> (but there is also this issue to migrate everything away from k8s-testimages:
> kubernetes/k8s.io#1312)

Yes, this is about the kubemark image itself, not the CAPK image. I didn't know about the k8s-testimages one. As far as I can tell it's broken: it has an entrypoint of /kubemark, but that file doesn't appear to be runnable. I can't get anything to run from it; everything just returns this error:

standard_init_linux.go:211: exec user process caused "no such file or directory"

I extracted the kubemark binary from the image tarball, and given the presence of glog in the error logs I assume it was built in 2018 at the latest.

> for provider images, my assumption is this is where this should go:
> https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images

agreed

@neolit123 commented Oct 23, 2020 via email

@detiber commented Oct 23, 2020

> One important note with all this: there are no official kubemark images published right now.
>
> We have two options:
>
>   • publish our own under some sigs.k8s.io

Definitely worth pursuing at least as a stopgap/short term solution to unblock this work.

>   • wait until we can convince sig-scalability / sig-release to publish them under k8s.gcr.io

I do think this is the preferred long-term solution, especially if it can be standardized using the image promotion process.

> I've tried to start the conversation (today) with sig-scalability, but so far no movement

It might be worth seeing whether they would be interested in having this work contributed to them; in general sig-scalability tends to have more work to do than time.

@@ -0,0 +1,7 @@
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
Member
The whole samples directory can likely be deleted; I believe we already do that for the other generated content as well.

@benmoss commented Oct 23, 2020

I think we are going to move this to an outside repository; I sent out an email to sig-cluster-lifecycle for approval. Thanks for the reviews, this will all make it into the other repo for sure!

@benmoss closed this Oct 23, 2020