
Error after running terraform apply #8

Open
friendshipmarker opened this issue Apr 23, 2021 · 3 comments
Labels
bug Something isn't working

Comments

@friendshipmarker

I'm following the guide here for bootstrapping Kubernetes clusters with Terraform: https://itnext.io/bootstrapping-kubernetes-clusters-on-aws-with-terraform-b7c0371aaea0 . After running terraform apply, I get the following error. When I SSH into the machine 35.81.x.x, I see that /home/ubuntu/admin.conf is still owned by root:root, as though the chown command in ../modules/cluster/main.tf isn't being respected.

│ Error: Error running command 'alias scp='scp -q -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
│ scp [email protected]:/home/ubuntu/admin.conf /Users/alluser/aws/terraform-kubeadm/selected-lark.conf >/dev/null
│ ': exit status 1. Output: scp: /home/ubuntu/admin.conf: Permission denied

Did something happen with a recent update? I'm running this version:

module "cluster" {
  source  = "weibeld/kubeadm/aws"
  version = "~> 0.2.4"
}
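Before patching anything, it helps to confirm the ownership described above. A minimal sketch, with a placeholder IP and key path (the diagnostic command is only printed here, since it targets a remote node):

```shell
# Placeholders -- substitute your master node's public IP and key path.
MASTER_IP="35.81.0.0"
KEY="$HOME/.ssh/id_rsa"

# Build the diagnostic command: print the owner and group of admin.conf.
CHECK="ssh -q -i $KEY -o StrictHostKeyChecking=no ubuntu@$MASTER_IP 'stat -c %U:%G /home/ubuntu/admin.conf'"
echo "$CHECK"
# On an affected node this command prints root:root instead of ubuntu:ubuntu.
```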

@mrpaff

mrpaff commented May 3, 2021

From what I can see, admin.conf is owned by root and other users have no read access to it. Since scp cannot elevate its privileges on the remote server, a better option is to use rsync. I replaced lines 256 and 257 in .terraform\modules\cluster\main.tf with:

alias rsync='rsync --rsync-path="sudo rsync" -e "ssh -q -i ${var.private_key_file} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"'
rsync ubuntu@${aws_eip.master.public_ip}:/home/ubuntu/admin.conf ${local.kubeconfig_file} >/dev/null

This worked for me. I will open a pull request with this change so the code owner can review and merge it.
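An alternative that avoids rsync entirely is to stream the file through a remote sudo shell, assuming the ubuntu user has passwordless sudo (as on the stock Ubuntu AMIs). A sketch, not from the module; the host and key are placeholders and the command is printed rather than executed here:

```shell
# Placeholders -- substitute the real IP and key path.
MASTER_IP="35.81.0.0"
KEY="$HOME/.ssh/id_rsa"

# sudo runs on the remote side, so the ubuntu user never needs read
# access to admin.conf; the file contents arrive on stdout.
FETCH="ssh -q -i $KEY -o StrictHostKeyChecking=no ubuntu@$MASTER_IP 'sudo cat /home/ubuntu/admin.conf' > admin.conf"
echo "$FETCH"
```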

@greenszpila

I am getting the following error on my local machine as well as on the EC2 master node:

@C02FGA9JMD6M terraform-kubeadm % kubectl --kubeconfig admin.conf get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

as well as the scp error already mentioned, but I was able to copy the config file with rsync.
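The localhost:8080 message is what kubectl reports when the kubeconfig it was given is empty or unreadable: it falls back to its compiled-in default server. Since the scp step failed, the local admin.conf is likely empty, which would explain this symptom. A self-contained illustration:

```shell
# An empty file stands in for the admin.conf left behind by the failed copy.
kubeconfig=$(mktemp)

# A zero-byte kubeconfig is treated as no configuration at all, so the
# client tries its default server instead.
if ! test -s "$kubeconfig"; then
  msg="empty kubeconfig: kubectl falls back to its default, localhost:8080"
fi
echo "$msg"

rm -f "$kubeconfig"
```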

@weibeld
Owner

weibeld commented Jan 11, 2023

@friendshipmarker @mrpaff @greenszpila If you encounter this error, please log in to the node with SSH and post the output of:

tail -n 20 /var/log/cloud-init-output.log

This includes the logs of the bootstrap script, so we can see what's going on.

I'm working on fixing these issues.

@weibeld weibeld added the bug Something isn't working label Jan 12, 2023