feat: add autoscaler local exec option #895
base: main
Conversation
This commit adds the ability to deploy the Kubernetes cluster autoscaler using local-exec rather than remote-exec. Using local-exec is helpful when you don't use the operator/bastion features of this module. If you set the `cluster_autoscaler_remote_exec` variable to `false`, Terraform will run a `kubectl apply` command on the same machine where you are running Terraform. More info in this issue: oracle-terraform-modules#894
Signed-off-by: Chris Wiggins ([email protected])
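For context, a minimal usage sketch of the new option; the module source is the published registry address and everything other than `cluster_autoscaler_remote_exec` is illustrative:

```hcl
module "oke" {
  source = "oracle-terraform-modules/oke/oci"
  # ...other required inputs elided...

  # New in this PR: run `kubectl apply` on the machine running Terraform
  # instead of over SSH via the operator host.
  cluster_autoscaler_remote_exec = false
}
```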
modules/extensions/autoscaler.tf
Outdated
}

provisioner "local-exec" {
  inline = ["cat ${local.cluster_autoscaler_manifest} | kubectl apply --dry-run='client' -f -"]
@@ -118,7 +127,7 @@ data "helm_template" "cluster_autoscaler" {
}

resource "null_resource" "cluster_autoscaler" {
I think it's cleaner to change this to `remote_cluster_autoscaler` so it matches the new `local_cluster_autoscaler`; however, I didn't want to break anything upstream. Options:
- Change the name and add a `moved {}` block (see the sketch below).
- Leave the name as it is.

Any thoughts?
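For the first option, a minimal sketch of the `moved {}` block, assuming the resource were renamed as suggested (requires Terraform 1.1 or later):

```hcl
# Retarget existing state to the new resource address so current users
# don't see a destroy/recreate on their next apply.
moved {
  from = null_resource.cluster_autoscaler
  to   = null_resource.remote_cluster_autoscaler
}
```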
In that case, all extensions will eventually need this kind of feature. I like your idea of using a variable to control remote or local exec. Perhaps just use the same null resource but with different conditional blocks based on the variable?
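Terraform can't toggle a provisioner block inside a single resource with a variable, so one way to approximate this suggestion is a pair of count-gated resources driven by the new variable. This is only a sketch; the connection details, paths, and local names are assumptions, not the module's actual code:

```hcl
resource "null_resource" "remote_cluster_autoscaler" {
  count = var.cluster_autoscaler_remote_exec ? 1 : 0

  # SSH to the operator host, as the existing extension resources do
  # (hypothetical variable names).
  connection {
    type        = "ssh"
    host        = var.operator_private_ip
    user        = var.operator_user
    private_key = var.ssh_private_key
  }

  provisioner "remote-exec" {
    inline = ["kubectl apply -f /home/${var.operator_user}/cluster-autoscaler.yaml"]
  }
}

resource "null_resource" "local_cluster_autoscaler" {
  count = var.cluster_autoscaler_remote_exec ? 0 : 1

  provisioner "local-exec" {
    command = "kubectl apply -f ${local.cluster_autoscaler_manifest_path}"
  }
}
```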
I ended up going with the `helm_release` resource; I think that is the better approach. I attempted to use a separate null_resource with a local-exec provisioner, but I would also have to add a `local_file` resource to create the manifest and then apply it with kubectl. Using kubectl makes sense remotely, where you can't use the `helm_release` resource, but locally you can use it directly.
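A rough sketch of what the local path could look like with `helm_release`; the repository and chart point at the upstream cluster-autoscaler chart, and the values local is an assumption standing in for whatever already feeds the `helm_template` data source:

```hcl
resource "helm_release" "local_cluster_autoscaler" {
  count = var.cluster_autoscaler_remote_exec ? 0 : 1

  name       = "cluster-autoscaler"
  repository = "https://kubernetes.github.io/autoscaler"
  chart      = "cluster-autoscaler"
  namespace  = "kube-system"

  # Reuse the same values that feed the existing helm_template data source,
  # e.g. OKE node pool IDs and min/max sizes (illustrative local name).
  values = [local.cluster_autoscaler_helm_values]
}
```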
I don't see the helm provider being initialized.
I recommend using a kube-config datasource and creating a local kube-config file instead of relying on the existing one and the currently configured context.
Note that this approach assumes that OCI CLI is already installed and configured locally.
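A hedged sketch of that recommendation, using the OCI provider's cluster kube-config data source and the local provider; the variable, path, and resource names are illustrative:

```hcl
# Fetch a kubeconfig for the cluster from OCI instead of relying on whatever
# context happens to be active on the machine running Terraform.
data "oci_containerengine_cluster_kube_config" "autoscaler" {
  cluster_id = var.cluster_id
}

resource "local_sensitive_file" "kube_config" {
  content         = data.oci_containerengine_cluster_kube_config.autoscaler.content
  filename        = "${path.root}/generated/kubeconfig"
  file_permission = "0600"
}

# Point the helm provider at the generated file. The kubeconfig OCI returns
# authenticates through an `oci ce cluster generate-token` exec plugin, so the
# OCI CLI still has to be installed and configured locally.
provider "helm" {
  kubernetes {
    config_path = local_sensitive_file.kube_config.filename
  }
}
```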
modules/extensions/autoscaler.tf
Outdated
}

provisioner "local-exec" {
  command = "cat ${local.cluster_autoscaler_manifest} | kubectl apply --dry-run='client' -f -"
cat-ing that local variable into kubectl apply doesn't seem to work. I get an error like the following:
│ serviceAccountName: cluster-autoscaler-oci-oke-cluster-autoscaler
│ tolerations:
│ []
│ | kubectl apply --dry-run='client' -f -': exit status 2. Output: : -: not found
│ /bin/sh: 81: -: not found
│ /bin/sh: 82: -: not found
│ /bin/sh: 83: -: not found
│ /bin/sh: 84: -: not found
Maybe there are newlines in the variable that are messing it up?
Options:
- Use the helm provider that is already in `versions.tf` to just apply the helm chart.
- Use the `local_file` resource to actually save the manifest YAML, which would have a similar effect to what the remote-exec is doing (see the sketch below).
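A sketch of the second option, writing the rendered manifest to disk so the multi-line YAML is never interpolated into a shell command; the file path and resource names are illustrative:

```hcl
resource "local_file" "cluster_autoscaler_manifest" {
  content  = local.cluster_autoscaler_manifest
  filename = "${path.root}/generated/cluster-autoscaler.yaml"
}

resource "null_resource" "local_cluster_autoscaler" {
  # Re-run the apply whenever the rendered manifest changes.
  triggers = {
    manifest_sha = sha256(local.cluster_autoscaler_manifest)
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${local_file.cluster_autoscaler_manifest.filename}"
  }
}
```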
This commit updates the autoscaler so that when you want it to run locally, it uses the helm_release resource instead of a null_resource with a local-exec provisioner.
I was able to get this working with the `helm_release` resource. I did run into an issue when trying to apply the Helm chart locally after I had already deployed it via the remote method: it looks like when it is deployed via the manifest generated from the `helm_template` data source, Helm has no record of an existing release, so the local `helm_release` can't simply take over those resources.

TL;DR: if you are switching from a remote to a local install of the cluster autoscaler helm chart, you are going to have some manual work to do. Right now I only need this feature to work with the cluster autoscaler, but I could see adding it to the other extensions. Hopefully this works for y'all; let me know if I need to make any changes.
…form-oci-oke into 894-local-autoscaler-deploy
Can I get a maintainer to take a look at this PR? I've been using it in production for a few weeks.
Sorry, problem solved. I found out I had the Google Auth app set up, and it worked. Thanks. You can close the ticket.
F.Costa
wrong PR?
I think if this works as expected, then the other extension code needs to be updated too, like metrics-server etc. Thanks.
I agree; however, this PR has been open for over 6 months without any response from the maintainers. I'd be happy to add this feature to other extensions if there were traction on this PR.
In this setup, the local We already support two options: