
Error: Variables not allowed #143

Open
outbackdingo opened this issue Nov 15, 2024 · 11 comments

Comments

@outbackdingo

kubernetes git:(main) tofu plan
var.proxmox
Enter a value:


│ Error: Variables not allowed

│ on line 1:
│ (source code not available)

│ Variables may not be used here.


│ Error: No value for required variable

│ on variables.tf line 1:
│ 1: variable "proxmox" {

│ The root module input variable "proxmox" is not set, and has no default value. Use a -var or -var-file command line argument to provide a value for this variable.

@sebiklamar
Contributor

sebiklamar commented Nov 15, 2024

Hi @outbackdingo,

you just need to:

  1. In Proxmox: create i) a user, ii) a role and iii) a token that tofu can use to connect to your Proxmox environment
  2. In the tofu/kubernetes folder: create a *.auto.tfvars file where you provide the values in question

For (1) proxmox setup see: http://thomasgrant.net/2024/06/16/deploying-proxmox-with-opentofu-part-i/
For (2) you can start from an example credentials.auto.tfvars file:

proxmox = {
    name = "this value is not used in the code"
    cluster_name = "foo"
    endpoint = "https://my.proxmox.ip.or.serverfqdn.example.com:8006"
    insecure = false
    username = "root"
    api_token = "<user>@pve!<tokenID>=<tokenPassword>"
}

...where <user> and the <token*> credentials are those created in step (1).
The api_token assignment might look like:
api_token = "terraform@pve!killertofu=12345678-0abc-def0-1234-56789abcdef"

I'm in the process of creating a PR to enhance the README and help newbies -- I'm in the same boat as you, though 4 weeks ahead.

HTH -- Sebastian

Edit:
I don't want to withhold Vegard's excellent blog article on implementing K8s with tofu and Proxmox, where he also documents the Proxmox provider configuration (deep link to "Main Course") -- though it lacks details on the <UUID> part.

Edit 2: Added the name attribute to the example file.

@outbackdingo
Author

outbackdingo commented Nov 15, 2024

I had previously done all of step 1 when I deployed "c0depool-iac", which worked. I'm now moving to try homelab as it has more features.
For step 2 I created a proxmox.auto.tfvars with

proxmox = {
    cluster_name = "pve"
    endpoint = "https://192.168.10.245:8006"
    insecure = false
    username = "root"
    api_token = "terraform-prov@pve!uic=d1ee6e3c-0f31-48d1-9c15-f148c56203cc"
}

which is now throwing

Changes to Outputs:
  + talos_config = (sensitive value)
╷
│ Error: Invalid value for input variable
│
│   on variables.tf line 1:
│    1: variable "proxmox" {
│
│ The given value is not suitable for var.proxmox, which is sensitive: attribute "name" is required. Invalid value defined at proxmox.auto.tfvars:1,11-7,2.
╵

@sebiklamar
Contributor

You need to provide the name attribute with any value assignment (it's not used in Vegard's homelab code). I'd missed that in my posting. Added it.

Sorry for the oversight; I copy-pasted my *.auto.tfvars file, which is based on a patched version of the tofu/kubernetes/variables.tf file where I've removed the name = string definition because it's unused in the code. PR also pending.

@sebiklamar
Contributor

@outbackdingo, the next error you will run into is the one I stumbled over, which is documented with a workaround in #106

@sebiklamar
Contributor

Let me paste my current revised version of the README (which also gives hints on creating the sealed secrets key as part of the bootstrapping). Please note that you also need to change the nodes definition in main.tf -- and maybe also deactivate some Proxmox volume definitions as well as change IPs for the cluster definition in the same file.

cd tofu/kubernetes      # if not there already
mkdir bootstrap/sealed-secrets/certificate
openssl req -x509 -days 365 -nodes -newkey rsa:4096 -keyout bootstrap/sealed-secrets/certificate/sealed-secrets.key -out bootstrap/sealed-secrets/certificate/sealed-secrets.cert -subj "/CN=sealed-secret/O=sealed-secret"

vi credentials.auto.tfvars

tofu init
tofu apply -target=module.talos.talos_image_factory_schematic.this
tofu apply

talosctl config merge output/talos-config.yaml

CLUSTER="talos"; kubectl config delete-context admin@$CLUSTER; kubectl config delete-user admin@$CLUSTER; kubectl config delete-cluster $CLUSTER

cp ~/.kube/config ~/.kube/config.bak && KUBECONFIG="$HOME/.kube/config:output/kube-config.yaml" kubectl config view --flatten > /tmp/config && mv /tmp/config ~/.kube/config
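
After merging, a quick sanity check might be (the admin@talos context name is inferred from the delete commands above):

kubectl config get-contexts
kubectl --context admin@talos get nodes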

After the cluster is bootstrapped, you would also need to adapt the K8s code to your environment:

  • adapt any IP (192.168.1.x) and domain (*.stonegarden.dev) -- see the grep sketch after this list for finding them
  • change the email in the letsencrypt definition in k8s/infra/controllers/cert-manager/cluster-issuer.yaml
  • change sealed secrets, see #141 (On documenting sealed secrets usage in README) (WIP)
  • change parameters for the cloudflared tunnel in k8s/infra/network/cloudflared/config.yaml
  • for running ArgoCD with the built-in admin user, set admin.enabled: true in k8s/infra/controllers/argocd/values.yaml
  • before doing so, also change the GitHub URLs in all application-set.yaml and project.yaml files used by ArgoCD
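
A quick way to find everything the first bullet refers to (the patterns are just the example IP range and domain from above; adjust them to what you actually need to replace):

grep -rn -e '192\.168\.1\.' -e 'stonegarden\.dev' k8s/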

@outbackdingo
Author

and next.... yeah, after your comments and help above I started poking around the tree, and it appears quite proprietary to his environment/needs, which somewhat also fulfils my needs short of a few added things I don't need. I'm thinking to fork it and make it less proprietary, as in DNS names can be vars and such. I'd like to see it working first though so I know what I'm dealing with.

Changes to Outputs:

  + kube_config = (sensitive value)

Do you want to perform these actions?
OpenTofu will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

module.talos.proxmox_virtual_environment_download_file.this["abel_dcac6b92c17d1d8947a0cee5e0e6b6904089aa878c70d66196bb1138dbd05d1a_v1.8.1"]: Creating...
module.talos.proxmox_virtual_environment_download_file.this["euclid_dcac6b92c17d1d8947a0cee5e0e6b6904089aa878c70d66196bb1138dbd05d1a_v1.8.1"]: Creating...
module.talos.proxmox_virtual_environment_download_file.this["cantor_dcac6b92c17d1d8947a0cee5e0e6b6904089aa878c70d66196bb1138dbd05d1a_v1.8.1"]: Creating...

│ Error: Error initiating file download

│ with module.talos.proxmox_virtual_environment_download_file.this["abel_dcac6b92c17d1d8947a0cee5e0e6b6904089aa878c70d66196bb1138dbd05d1a_v1.8.1"],
│ on talos/image.tf line 19, in resource "proxmox_virtual_environment_download_file" "this":
│ 19: resource "proxmox_virtual_environment_download_file" "this" {

│ Could not get file metadata, unexpected error: error fetching metadata from download url, unexpected error in GetQueryURLMetadata: error retrieving URL metadata for
│ 0xc000492c38: received an HTTP 500 response - Reason: hostname lookup 'abel' failed - failed to get address info for: abel: No address associated with hostname


│ Error: Error initiating file download

│ with module.talos.proxmox_virtual_environment_download_file.this["cantor_dcac6b92c17d1d8947a0cee5e0e6b6904089aa878c70d66196bb1138dbd05d1a_v1.8.1"],
│ on talos/image.tf line 19, in resource "proxmox_virtual_environment_download_file" "this":
│ 19: resource "proxmox_virtual_environment_download_file" "this" {

│ Could not get file metadata, unexpected error: error fetching metadata from download url, unexpected error in GetQueryURLMetadata: error retrieving URL metadata for
│ 0xc00089afc8: received an HTTP 500 response - Reason: hostname lookup 'cantor' failed - failed to get address info for: cantor: No address associated with hostname


│ Error: Error initiating file download

│ with module.talos.proxmox_virtual_environment_download_file.this["euclid_dcac6b92c17d1d8947a0cee5e0e6b6904089aa878c70d66196bb1138dbd05d1a_v1.8.1"],
│ on talos/image.tf line 19, in resource "proxmox_virtual_environment_download_file" "this":
│ 19: resource "proxmox_virtual_environment_download_file" "this" {

│ Could not get file metadata, unexpected error: error fetching metadata from download url, unexpected error in GetQueryURLMetadata: error retrieving URL metadata for
│ 0xc00058ac80: received an HTTP 500 response - Reason: hostname lookup 'euclid' failed - failed to get address info for: euclid: No address associated with hostname

@vehagn
Owner

vehagn commented Nov 17, 2024

@outbackdingo

it appears quite proprietary to his environment/needs

I feel proprietary is too harsh, but I understand what you mean. I only have a limited time to work on this as a hobby, though I like to think I'm doing incremental improvements.

I'm thinking to fork it and make it less proprietary, as in DNS names can be vars and such.

That would be very welcome! I've been thinking about a template similar to onedr0p's cluster template, but I've prioritised features instead.

A system similar to Spring Initializr, but with more templating would be awesome. Maybe a mega Helm chart?

The biggest hurdle would be customisability, though... Nothing wrong with some copy, paste and edit.

@sebiklamar
Contributor

sebiklamar commented Nov 17, 2024

@outbackdingo, I think I know what you mean, though I see it differently.

I was also wondering why the nodes definition is hard-coded in main.tf and doesn't leverage variables that you could encrypt with SOPS. However, in the end it makes little difference.

@vehagn, you've used variables before according to a code/commit comment, I remember; how come you've changed that?

I was also disappointed by the need to change every domain and IP reference (and the other things mentioned above). I liked the possibility of e.g. defining the domain for all Kubernetes manifests in an environment variable, as done in https://k3s.rocks/install-setup/.
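
A minimal sketch of that envsubst pattern (the variable name and template file below are made up for illustration):

# substitute $DOMAIN into a manifest template before applying it
export DOMAIN=example.net
envsubst < httproute.template.yaml | kubectl apply -f -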

However, that approach doesn't scale well (you would need a dedicated variable for each and every IP) and it won't work with CI/CD tools like Flux/ArgoCD (where the command-line tool envsubst is not available).

@outbackdingo, how would you implement a more flexible k8s cluster using variables?

I also think that in the end a global search-and-replace in a forked repo works well because it's a simple editor change. Of course, any adaptations in the code will create some challenges when merging upstream code back into the fork.
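
Such a search-and-replace could be as simple as the following (the domains are the ones used elsewhere in this thread; GNU sed syntax, and review the result with git diff before committing):

grep -rl 'stonegarden\.dev' k8s/ | xargs sed -i 's/stonegarden\.dev/sub.domain.example.com/g'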

I therefore decided to go the kustomize route, which is a standard tool in the k8s/CNCF ecosystem. While it doesn't allow templating or using variables, it allows patching. Once you get to know the (new) way of patching (not patchesStrategicMerge, which is outdated, but the patches: - path: file.yaml syntax), it is quite powerful. I even managed to change the domain references (like a sed "s/foo.example.com/bar.sub.example.net/") using kustomize's (powerful, yet complex) transformers.

See a PoC example for my homelab/k8s/apps/dev/whoami app. The base environment defines a common setup. The overlay poc changes the replica count and container version as well as the IP used -- all without needing to define the other/base stuff again (DRY). In addition, kustomize's transformer definitions change the domain (mentioned above). Of course, you could also change the hostname to something completely different in the overlay.

I have to admit that this approach has some complexity, though. However, it allows defining a common base and then putting one's environment-specific adaptations on top. Normally, that overlay approach is used for different environments (e.g. dev, staging, prod). I liked the approach of using Vegard's homelab code as a base and then putting my adaptations on top -- and of course having different environments for trial-and-error on my route to evolving a prod k8s cluster (and for test rebuilds).

@vehagn, I know onedr0p's cluster template. How would you allow the user to gain from any code changes from upstream that happen after template instantiation?

FYI kustomize example for homelab/k8s/apps/dev/whoami

# show base configuration 
kustomize build --enable-helm k8s/apps/dev/whoami/base

# poc config. is generated with 2 small patch files
kustomize build --enable-helm k8s/apps/dev/whoami/envs/poc

A diff shows the outcome of the patching and transformers quite well.

@@ -7,8 +7,8 @@
 kind: Service
 metadata:
   annotations:
-    io.cilium/lb-ipam-ips: 192.168.8.7
-  name: whoami
+    io.cilium/lb-ipam-ips: 172.17.6.5
+  name: poc-whoami
   namespace: whoami
 spec:
   ports:
@@ -24,10 +24,10 @@
 metadata:
   labels:
     app: whoami
-  name: whoami
+  name: poc-whoami
   namespace: whoami
 spec:
-  replicas: 3
+  replicas: 1
   selector:
     matchLabels:
       app: whoami
@@ -38,7 +38,7 @@
       namespace: whoami
     spec:
       containers:
-      - image: ghcr.io/traefik/whoami:latest
+      - image: ghcr.io/traefik/whoami:v1.10
         name: whoami
         ports:
         - containerPort: 80
@@ -68,12 +68,12 @@
 apiVersion: gateway.networking.k8s.io/v1
 kind: HTTPRoute
 metadata:
-  name: whoami-ingress
+  name: poc-whoami-ingress
   namespace: whoami
 spec:
   hostnames:
-  - whoami-ingress.example.net
-  - whoami-ingress2.example.net
+  - poc-whoami-ingress.sub.domain.example.com
+  - poc-whoami-ingress2.sub.domain.example.com
   parentRefs:
   - name: internal
     namespace: gateway

See the simplicity of poc's definition, which nevertheless generates a full-blown resource definition.

# k8s/apps/dev/whoami/envs/poc/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: whoami
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: whoami
          image: ghcr.io/traefik/whoami:v1.10

# k8s/apps/dev/whoami/envs/poc/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: whoami
  annotations:
    io.cilium/lb-ipam-ips: 172.17.6.5
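
For completeness, the overlay's kustomization.yaml that ties these patch files to the base might look roughly like this -- a sketch based on the layout described above, not necessarily the exact file in the repo (the domain rewrite via kustomize transformers is omitted):

cat > k8s/apps/dev/whoami/envs/poc/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # reuse the common base (DRY)
namePrefix: poc-        # turns whoami into poc-whoami, as seen in the diff
patches:
  - path: deployment.yaml
  - path: svc.yaml
EOF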

See sebiklamar/homelab/k8s/apps/dev/whoami for more details.

@outbackdingo
Author

@vehagn your docs are awesome, almost TL;DR... but you have definitely helped me comprehend a lot of what you did

That would be very welcome! I've been thinking about a template similar to onedr0p's cluster template, but I've prioritised features instead.

I started with c0depool/c0depool-iac.git, which also uses sops, and after some tweaks had it spun up in a day; the caveat is that it had no Argo and some manual requirements like adding cilium and things. It was pretty clear-cut in explaining how to get it spun up, basically step by step, which in my opinion simply makes it easier for those wanting to learn. I'm simply trying to get to a point where I have the added features I want, like auth and security, and since I'm on a static IP at home, it's in my "lab". I understand you're fitting the needs to your environment, like cloudflare tunnels and multimedia things, where I don't require them, so I'm looking to minimize what's deployed.

@vehagn
Owner

vehagn commented Nov 18, 2024

@sebiklamar

@vehagn, you've used variables before according to a code/commit comment, I remember; how come you've changed that?

That's for my own convenience. I could alternatively separate it into two variable files, one with public information I can check into the repo and another with private information, but I've been lazy.
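
That split could then be wired up on the command line roughly like this (the file names are made up; -var-file can be given multiple times, with later values taking precedence):

tofu plan -var-file=public.tfvars -var-file=private.tfvars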

See a PoC example for my homelab/k8s/apps/dev/whoami app.

That's a neat approach! I haven't dug into the details yet, but I think I get the gist of it.

I've been planning to replace some of the Argo CD Application resources with ApplicationSets following this article by The Norwegian Mapping Authority. I think that could allow for a more centralised config.

@vehagn, I know onedr0p's cluster template. How would you allow the user to gain from any code changes from upstream that happen after template instantiation?

I would have to dogfood the template myself then.

@outbackdingo

@vehagn your docs are awesome, almost TL;DR... but you have definitely helped me comprehend a lot of what you did

Thanks! I do have a habit of over-explaining, but that's mostly for myself when I have to understand what I did before.

For the time being this is just a hobby -- so time is limited, though I do feel the urge to make it more accessible.

@outbackdingo
Author

outbackdingo commented Nov 18, 2024

@vehagn I think if I could integrate your mini-lab into my deployment I would be there. My overall issue is there's a ton to change to simply get a cluster up, as our environments are a bit different: I'm on a single Dell R640 in my home office/den on a public IP, behind an OPNsense firewall. Simply put, your ArgoCD apps deployment, cilium, gateway, cert-manager, sealed secrets, keycloak, dns -- without the cloudflared bits and maybe proxmox-csi -- are mostly what I need to add/integrate to get my lab up. I took down the c0depool deployment, but can easily spin that back up; I think I'd get stuck at adding the argocd and the missing apps to it.
