✨ Add kubemark provider #3847
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@benmoss force-pushed from 0991602 to e7b5a21.
mgr, err := certificate.NewManager(&certificate.Config{
	// Bootstrap client credentials pulled from the kubeconfig for the token user.
	BootstrapCertificatePEM: cfg.AuthInfos[TokenUser].ClientCertificateData,
	BootstrapKeyPEM:         cfg.AuthInfos[TokenUser].ClientKeyData,
	CertificateStore:        certificateStore,
	// CSR template carrying the node identity the kubelet needs:
	// CN=system:node:<name> in the system:nodes group.
	Template: &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   fmt.Sprintf("system:node:%s", kubemarkMachine.Name),
			Organization: []string{"system:nodes"},
		},
	},
	Usages: []certificates.KeyUsage{
		certificates.UsageDigitalSignature,
		certificates.UsageKeyEncipherment,
		certificates.UsageClientAuth,
	},
	ClientsetFn: newClientFn,
})
if err != nil {
	logger.Error(err, "error creating cert manager")
	return ctrl.Result{}, err
}

mgr.Start()
We already have access to the cluster CA (through the secrets). I suspect quite a bit of this could be simplified by using that CA directly rather than using the bootstrap token to request a certificate signed by the cluster CA.
That said, it might be good to keep this workflow if we intend to support control plane providers other than Kubeadm.
Yeah, I reverse-engineered this by figuring out how both the kubeadm join and kubelet bootstrap phases converted the token into a full-fledged kubeconfig. I am sure it can be made simpler, and some head-scratching will have to go into figuring out how we could support other bootstrap providers; right now it's quite tied to CAPBK.
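For context, here is a minimal sketch (not the PR's code) of what "a full-fledged kubeconfig" ends up looking like once the bootstrap flow has produced a signed node certificate: a client-go kubeconfig whose AuthInfo carries the node's client certificate and key instead of the bootstrap token. The package name, the nodeKubeconfig helper, and all parameter names are illustrative.

package kubemark // hypothetical package name for this sketch

import (
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// nodeKubeconfig assembles the post-bootstrap kubeconfig for a hollow node.
// server and caData come from the workload cluster; certPEM/keyPEM are the
// signed node client certificate and key.
func nodeKubeconfig(server string, caData, certPEM, keyPEM []byte) *clientcmdapi.Config {
	return &clientcmdapi.Config{
		Clusters: map[string]*clientcmdapi.Cluster{
			"default": {
				Server:                   server,
				CertificateAuthorityData: caData,
			},
		},
		AuthInfos: map[string]*clientcmdapi.AuthInfo{
			"default-auth": {
				// Client cert/key replace the bootstrap token once the CSR is signed.
				ClientCertificateData: certPEM,
				ClientKeyData:         keyPEM,
			},
		},
		Contexts: map[string]*clientcmdapi.Context{
			"default-context": {
				Cluster:  "default",
				AuthInfo: "default-auth",
			},
		},
		CurrentContext: "default-context",
	}
}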
If that's the case, then I'd say you can likely just rip out a good chunk of this by snagging the CA secret and creating the node cert/kubeconfig directly. We can always add additional support later if need be.
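If the controller does go this route (signing against the cluster CA from the secret instead of driving the CSR/bootstrap-token flow), the core of it could look roughly like the sketch below. This is not the PR's implementation: the package name and the signNodeClientCert helper are made up, and it assumes the CA key stored in the secret is a PKCS#1-encoded RSA key.

package kubemark // hypothetical package name for this sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"errors"
	"fmt"
	"math/big"
	"time"
)

// signNodeClientCert signs a kubelet client certificate for nodeName directly
// with the cluster CA, skipping the bootstrap token entirely.
func signNodeClientCert(caCertPEM, caKeyPEM []byte, nodeName string) (certPEM, keyPEM []byte, err error) {
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		return nil, nil, errors.New("failed to decode CA certificate or key PEM")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	// Assumption: the CA key is an RSA key in PKCS#1 form.
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}

	// Fresh key pair for the hollow node.
	nodeKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}

	// Client certificate carrying the identity the kubelet needs:
	// CN=system:node:<name>, O=system:nodes, client-auth usage.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // good enough for a sketch
		Subject: pkix.Name{
			CommonName:   fmt.Sprintf("system:node:%s", nodeName),
			Organization: []string{"system:nodes"},
		},
		NotBefore:   time.Now().Add(-time.Minute),
		NotAfter:    time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &nodeKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}

	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(nodeKey)})
	return certPEM, keyPEM, nil
}

The resulting cert/key pair could then be dropped straight into a kubeconfig like the one sketched earlier, with no certificate.Manager involved.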
(Three resolved review threads on test/infrastructure/kubemark/controllers/kubemarkmachine_controller.go, all marked outdated.)
One important note with all this: there are no official kubemark images published right now. We have two options:
I've tried to start the conversation (today) with sig-scalability, but so far no movement.
@benmoss force-pushed from e7b5a21 to a7a76f8.
   / //_/_  __/ /_  ___  ____ ___  ____ ______/ /__
  / ,< / / / / __ \/ _ \/ __ `__ \/ __ `/ ___/ //_/
 / /| / /_/ / /_/ /  __/ / / / / / /_/ / /  / ,<
/_/ |_\__,_/_.___/\___/_/ /_/ /_/\__,_/_/  /_/|_|
fancy ;)
SGTM, thanks for the contribution @benmoss!
is this about a test image for kubemark or the CAPI provider for kubemark?
for kubemark itself there is already a
for provider images, my assumption is this is where this should go: e.g. there is a:
i don't think we should include kubemark* as part of the root
@benmoss force-pushed from 4f1ed6d to fe84dd1.
@benmoss: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Yes, this is about the kubemark image itself, not the CAPK image. I didn't know about the k8s-testimages one. It appears to be broken as far as I can tell; it has an entrypoint of
I extracted the kubemark binary from the image tarball, and given the presence of glog in the error logs I can assume it was built in 2018 at the latest.
agreed
Ok, sig scale should know more about the broken image.
Definitely worth pursuing at least as a stopgap/short term solution to unblock this work.
I do think this is definitely the preferred long term solution, especially if it can be standardized using the image promotion process.
Might be worth seeing if this is something they'd be interested in having contributed; in general sig-scalability tends to have more work to do than time.
@@ -0,0 +1,7 @@
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
The whole samples directory can likely be deleted; I believe we already do that for the other generated content as well.
I think we are going to move this to an outside repository; I sent out an email to sig-cluster-lifecycle for approval. Thanks for the reviews, this will all make it into the other repo for sure!
What this PR does / why we need it:
Adds the kubemark provider to test/infrastructure to live alongside its BFF, CAPD.
Right now it lives in https://github.com/benmoss/cluster-api-provider-kubemark which is less than ideal.
Alternatively, we could move it to a new kubernetes-sigs repo of its own, but I figured I would propose this first since we already have CAPD as precedent for these development-minded providers living in this repo.
I think the release Makefile bits will need some twiddling; it's mostly based on CAPA's Makefile but probably should converge more with CAPD's.