
version next (currently 1.68.0) does not support anymore config.executions schema #125

Open
finkinfridom opened this issue Nov 18, 2024 · 38 comments
Labels
enhancement New feature or request

Comments

@finkinfridom

When deploying the next version of n8n (currently 1.68.0), the pod keeps restarting because the config.executions schema no longer matches.
Is this a breaking change for the upcoming version? What's the suggested way to migrate to the next version?

Currently, the helm chart provides a couple of configuration keys:
https://github.com/8gears/n8n-helm-chart/blob/main/charts/n8n/values.yaml#L8

Given that the n8n documentation describes configuring this through environment variables (https://docs.n8n.io/hosting/scaling/execution-data/#enable-data-pruning), wouldn't it be easier to move everything to env variables instead of the ConfigMap?

@medwingADMIN

The issue actually begins in 1.67.1

@MaSpeng

MaSpeng commented Nov 18, 2024

There are multiple pull requests which remove sections from the configuration schema since version 1.50.0, it seems:

These changes all seem to be breaking and are not documented either :(

@Vad1mo
Member

Vad1mo commented Nov 18, 2024

@MaSpeng, @finkinfridom @medwingADMIN thanks for bringing that up!

I am not thrilled to fully switch from the config file to env vars. The main reason is that this would have quite an impact on existing chart users, and we would have to bump the major version of the chart.
We would also need to change how we approach configs and docs in that chart.

Maybe @ivov from n8n can shed some light on the strategy.

As a first step, I suggest removing executions from values.yaml

@mayphilc

mayphilc commented Nov 18, 2024

https://truecharts.org/charts/stable/n8n/

supposedly working with 1.68.0

@finkinfridom
Author

Given that most of our needed configuration can be set through environment variables, and given that setting

config:
  executions: {}

didn't fix the issue (the default values were still there), the only working solution we found was to set

config:
  executions: null

in your values.yaml.

With the above change, the ConfigMap for the config no longer exists on the cluster and the app upgrades correctly through helm.
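For context, the reason executions: null works while executions: {} does not lies in Helm's value coalescing: an explicit null deletes the key together with its chart defaults, whereas an empty map still inherits them. A rough Python illustration of that merge (not Helm's actual code):

```python
def coalesce(user, defaults):
    """Helm-like value merging: an explicit null (None) removes the key,
    while an empty dict ({}) still inherits the chart defaults."""
    merged = dict(defaults)
    for key, value in user.items():
        if value is None:
            merged.pop(key, None)  # null deletes the default subtree
        elif isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = coalesce(value, merged[key])
        else:
            merged[key] = value
    return merged
```

With this model, coalesce({"executions": {}}, defaults) keeps the default execution keys, while coalesce({"executions": None}, defaults) drops the whole subtree, which is why only null removed the ConfigMap entries.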

@mayphilc

I know I'm a noob, but my friend who set up my k8s for me used the other chart I linked above. I'm on 1.68 now with no problems in queue mode with workers, webhook processors, and replicas.

@ivov

ivov commented Nov 19, 2024

@MaSpeng, @finkinfridom @medwingADMIN thanks for bringing that up!

I am not thrilled to fully switch from the config file to env vars. The main reason is that this would have quite an impact on existing chart users, and we would have to bump the major version of the chart. We would also need to change how we approach configs and docs in that chart.

Maybe @ivov from n8n can shed some light on the strategy.

As a first step, I suggest removing executions from values.yaml

@MaSpeng Thanks for bringing this up.

Our config schema has grown too large and does not support dependency injection, so we are slowly moving towards smaller independent configs that support DI.

The env vars continue to work as before, but we did not realize that the internal structure of the config schema was being relied on externally. Sorry for the inconvenience - we'll rename back those keys. But please bear in mind we will likely deprecate and later drop N8N_CONFIG_FILES in n8n v2, in favor of .env files and *_FILE env vars.

@Vad1mo
Member

Vad1mo commented Nov 20, 2024

Thank you all for the input and contributions.
The next step is to prepare this chart for the breaking change.
My main intent is to provide a clear and actionable upgrade path for users of this chart.

I find applications with multiple levels of configuration easier to operate and reason about, because one typically ends up with a config file for the general setup and env vars for environment-specific changes. This makes it easy to see what is a general config option and what is environment-specific.

For the upcoming iteration of the chart, I propose maintaining the structured format and converting it to environment variables.

db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 12345
...      

will become env variables

DB_TYPE=postgresdb
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_HOST=localhost
DB_POSTGRESDB_PORT=12345
...

There are also the .*_FILE env vars; I am not sure if we should incorporate those too.
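The structured-to-env-var conversion proposed above can be sketched in a few lines (an illustrative Python helper, assuming keys are simply upper-cased and joined with underscores; the chart's actual template logic may differ):

```python
def flatten_config(config, prefix=""):
    """Flatten a nested values-style dict into n8n env-var names.

    Assumption: each nesting level is upper-cased and joined with "_",
    e.g. db.postgresdb.host -> DB_POSTGRESDB_HOST.
    """
    env = {}
    for key, value in config.items():
        name = f"{prefix}_{key}".upper() if prefix else key.upper()
        if isinstance(value, dict):
            # recurse into nested sections, carrying the accumulated prefix
            env.update(flatten_config(value, name))
        else:
            env[name] = str(value)
    return env
```

With the db example above, this yields DB_TYPE, DB_POSTGRESDB_DATABASE, DB_POSTGRESDB_HOST, and DB_POSTGRESDB_PORT.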

@MaSpeng

MaSpeng commented Nov 20, 2024

@Vad1mo I appreciate the changes you want to make.

The incorporation of the _FILE environment variables could be useful, for example when using a secret management solution/vault which provides secrets through files; DB_POSTGRESDB_PASSWORD_FILE would be an example for this use case.

I think this would look like this in the end:

db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 5432
    password_file: /var/secrets/postgres_secret

I would suggest also providing a schema to ensure that a specific setting is only provided through either an environment variable or a file.

This might be a general improvement to ensure the configuration is sensible, so that no one tries to configure sqlite as the database when the type is already set to postgresdb.

By the way, this could also be a good time to remove deprecated configurations like mysqldb.
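On the application side, the *_FILE convention usually resolves like this (a hypothetical sketch; that n8n prefers the _FILE variant over the plain variable is an assumption here):

```python
import os

def resolve_setting(name, environ=os.environ):
    """Return a setting from <NAME>_FILE (file contents) if set, else from <NAME>."""
    file_path = environ.get(f"{name}_FILE")
    if file_path:
        # the secret manager mounted the value as a file; read and trim it
        with open(file_path) as fh:
            return fh.read().strip()
    return environ.get(name)
```

For example, resolve_setting("DB_POSTGRESDB_PASSWORD") would read the mounted /var/secrets/postgres_secret file when DB_POSTGRESDB_PASSWORD_FILE points there.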

@Vad1mo
Member

Vad1mo commented Nov 20, 2024

The incorporation of the _FILE environment variables could be useful, for example when using a secret management solution/vault which provides secrets through files; DB_POSTGRESDB_PASSWORD_FILE would be an example for this use case.

I think this would look like this in the end:

db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 5432
    password_file: /var/secrets/postgres_secret

I see, password_file: /var/secrets/postgres_secret would be supported out of the box; I wonder how /var/secrets/postgres_secret gets mounted.
I guess, given the variety of solutions, this should be out of the scope of this chart.

I would suggest also providing a schema to ensure that a specific setting is only provided through either an environment variable or a file.

This might be a general improvement to ensure the configuration is sensible, so that no one tries to configure sqlite as the database when the type is already set to postgresdb.

Schema is a good keyword; I had that in mind too, especially since I recently stumbled across helm-cel. Do you know if there is a list of supported env vars for n8n? The closest I got was this.

However, I would rather not limit the chart too strictly in a way that prevents people from setting env vars that are not yet supported by this chart.

By the way, this could also be a good time to remove deprecated configurations like mysqldb.

Agreed.

@MaSpeng

MaSpeng commented Nov 22, 2024

@Vad1mo

I guess, given the variety of solutions, this should be out of the scope of this chart.

I agree with the "out of scope" part.

Schema is a good keyword; I had that in mind too, especially since I recently stumbled across helm-cel. Do you know if there is a list of supported env vars for n8n? The closest I got was this.

Personally, I would use the documentation for this topic: https://docs.n8n.io/hosting/configuration/environment-variables/

However, I would rather not limit the chart too strictly in a way that prevents people from setting env vars that are not yet supported by this chart.

I would also not limit the extraEnvs part; I was mostly thinking about configuration parameters like the database and so on.

@Vad1mo
Member

Vad1mo commented Dec 15, 2024

These are the n8n helm chart values for the next version.

I designed it so that you can now configure n8n, worker, and webhook individually.
Please look at the values file and let me know if it makes sense from a user perspective.

@finkinfridom
@MaSpeng
@mayphilc
@ivov
@medwingADMIN

# README
# High level values structure, overview and explanation of the values.yaml file.
# 1. chart wide values, like the image repository, image tag, etc.
# 2. ingress (tested with nginx, but it likely works with others too)
# 3. n8n app configuration + kubernetes specific settings
# 4. worker related settings + kubernetes specific settings
# 5. webhook related settings + kubernetes specific settings
# 6. Redis related settings + kubernetes specific settings

##
##
## Common Kubernetes Config Settings for this entire n8n deployment
##
image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
imagePullSecrets: []

ingress:
  enabled: false
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

  # define a custom ingressClassName, like "traefik" or "nginx"
  className: ""



# n8n application related n8n configuration + Kubernetes specific settings
n8n:
  # The config: {} dictionary is converted to environmental variables in the ConfigMap.
  # Example: the YAML entry db.postgresdb.host: localhost is transformed into DB_POSTGRESDB_HOST=localhost
  # See https://docs.n8n.io/hosting/configuration/environment-variables/ for all values.
  config:
    n8n:
      # If not specified, n8n will create a random encryption key for encrypting saved credentials and save it in the ~/.n8n folder.
      # If you run a stateless n8n, you should provide an encryption key here.
      encryption_key:
    #  db:
    #    type: postgresdb
    #    postgresdb:
    #      host: 192.168.0.52

  # Dictionary for secrets. Unlike config:, the values here will end up in the Secret, base64-encoded.
  # Example: the YAML entry db.postgresdb.password: my_secret is transformed into DB_POSTGRESDB_PASSWORD=bXlfc2VjcmV0
  # See https://docs.n8n.io/hosting/configuration/environment-variables/
  secret:
  #    database:
  #      postgresdb:
  #        password: 'big secret'

  ##
  ## N8n Kubernetes specific settings
  ##
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
    # what type of volume; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
    type: emptyDir
    ## Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ## PVC annotations
    #
    # If you need this annotation include it under `values.yml` file and pvc.yml template will add it.
    # This is not maintained at Helm v3 anymore.
    # https://github.com/8gears/n8n-helm-chart/issues/8
    #
    # annotations:
    #   helm.sh/resource-policy: keep
    ## Persistent Volume Access Mode
    ##
    accessModes:
      - ReadWriteOnce
    ## Persistent Volume size
    ##
    size: 1Gi
    ## Use an existing PVC
    ##
    # existingClaim:

  replicaCount: 1

  # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
  # If these options are not set, default values are 25%
  # deploymentStrategy:
  #  type: RollingUpdate
  #  maxSurge: "50%"
  #  maxUnavailable: "50%"

  deploymentStrategy:
    type: "Recreate"

  nameOverride: ""
  fullnameOverride: ""

  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # Annotations to add to the service account
    annotations: {}
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template
    name: ""

  podAnnotations: {}

  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
    #  runAsNonRoot: true
  #  runAsUser: 1000

  # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
  # your own docker image
  # see https://github.com/8gears/n8n-helm-chart/pull/30
  lifecycle:
    {}

  #  here's the sample configuration to add mysql-client to the container
  # lifecycle:
  #  postStart:
  #    exec:
  #      command: ["/bin/sh", "-c", "apk add mysql-client"]

  # here you can override a command for main container
  # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
  # run additional preparation steps (e.g., installing additional software)
  command: []

  # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
  # command:
  #  - tini
  #  - --
  #  - /bin/sh
  #  - -c
  #  - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n

  # here you can override the livenessProbe for the main container
  # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like

  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can override the readinessProbe for the main container
  # it may be used to increase the timeout for the readinessProbe (e.g., to resolve slow-startup issues)

  readinessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can add init containers to the various deployments
  initContainers: []

  service:
    type: ClusterIP
    port: 80
    annotations: {}
  workerResources:
    {}
  webhookResources:
    {}
  resources:
    {}
    # We usually recommend not specifying default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
    #   cpu: 100m
  #   memory: 128Mi

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  nodeSelector: {}
  tolerations: []
  affinity: {}


## Worker related settings
worker:
  enabled: false
  count: 2
  concurrency: 2
  config: {}
  secret: {}

  ##
  ## Worker Kubernetes specific settings
  ##
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
    # what type of volume; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
    type: emptyDir
    ## Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ## PVC annotations
    #
    # If you need this annotation include it under `values.yml` file and pvc.yml template will add it.
    # This is not maintained at Helm v3 anymore.
    # https://github.com/8gears/n8n-helm-chart/issues/8
    #
    # annotations:
    #   helm.sh/resource-policy: keep
    ## Persistent Volume Access Mode
    ##
    accessModes:
      - ReadWriteOnce
    ## Persistent Volume size
    ##
    size: 1Gi
    ## Use an existing PVC
    ##
    # existingClaim:

  replicaCount: 1

  # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
  # If these options are not set, default values are 25%
  # deploymentStrategy:
  #  type: RollingUpdate
  #  maxSurge: "50%"
  #  maxUnavailable: "50%"

  deploymentStrategy:
    type: "Recreate"

  nameOverride: ""
  fullnameOverride: ""

  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # Annotations to add to the service account
    annotations: {}
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template
    name: ""

  podAnnotations: {}

  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
  #  runAsNonRoot: true
  #  runAsUser: 1000

  # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
  # your own docker image
  # see https://github.com/8gears/n8n-helm-chart/pull/30
  lifecycle:
    {}

  #  here's the sample configuration to add mysql-client to the container
  # lifecycle:
  #  postStart:
  #    exec:
  #      command: ["/bin/sh", "-c", "apk add mysql-client"]

  # here you can override a command for main container
  # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
  # run additional preparation steps (e.g., installing additional software)
  command: []

  # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
  # command:
  #  - tini
  #  - --
  #  - /bin/sh
  #  - -c
  #  - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n

  # here you can override the livenessProbe for the main container
  # it may be used to increase the timeout for the livenessProbe (e.g., to resolve slow-startup issues)

  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can override the readinessProbe for the main container
  # it may be used to increase the timeout for the readinessProbe (e.g., to resolve slow-startup issues)

  readinessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can add init containers to the various deployments
  initContainers: []

  service:
    type: ClusterIP
    port: 80
    annotations: {}



  workerResources:
    {}

  webhookResources:
    {}

  resources:
    {}
    # We usually recommend not specifying default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
  #   cpu: 100m
  #   memory: 128Mi

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80

  nodeSelector: {}

  tolerations: []

  affinity: {}

## Webhook related settings
# With webhooks.enabled=true you disable webhook processing in the main process and enable it on separate webhook instances.
# See https://github.com/8gears/n8n-helm-chart/issues/39#issuecomment-1579991754 for the full explanation.
webhooks:
  enabled: false
  count: 1
  config: {}
  secret: {}
  ##
  ## Webhook Kubernetes specific settings
  ##
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
    # what type of volume; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
    type: emptyDir
    ## Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ## PVC annotations
    #
    # If you need this annotation include it under `values.yml` file and pvc.yml template will add it.
    # This is not maintained at Helm v3 anymore.
    # https://github.com/8gears/n8n-helm-chart/issues/8
    #
    # annotations:
    #   helm.sh/resource-policy: keep
    ## Persistent Volume Access Mode
    ##
    accessModes:
      - ReadWriteOnce
    ## Persistent Volume size
    ##
    size: 1Gi
    ## Use an existing PVC
    ##
    # existingClaim:

  replicaCount: 1

  # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
  # If these options are not set, default values are 25%
  # deploymentStrategy:
  #  type: RollingUpdate
  #  maxSurge: "50%"
  #  maxUnavailable: "50%"

  deploymentStrategy:
    type: "Recreate"

  nameOverride: ""
  fullnameOverride: ""

  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # Annotations to add to the service account
    annotations: {}
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template
    name: ""

  podAnnotations: {}

  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
  #  runAsNonRoot: true
  #  runAsUser: 1000

  # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
  # your own docker image
  # see https://github.com/8gears/n8n-helm-chart/pull/30
  lifecycle:
    {}

  #  here's the sample configuration to add mysql-client to the container
  # lifecycle:
  #  postStart:
  #    exec:
  #      command: ["/bin/sh", "-c", "apk add mysql-client"]

  # here you can override a command for main container
  # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
  # run additional preparation steps (e.g., installing additional software)
  command: []

  # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
  # command:
  #  - tini
  #  - --
  #  - /bin/sh
  #  - -c
  #  - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n

  # here you can override the livenessProbe for the main container
  # it may be used to increase the timeout for the livenessProbe (e.g., to resolve slow-startup issues)

  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can override the readinessProbe for the main container
  # it may be used to increase the timeout for the readinessProbe (e.g., to resolve slow-startup issues)

  readinessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can add init containers to the various deployments
  initContainers: []

  service:
    type: ClusterIP
    port: 80
    annotations: {}



  workerResources:
    {}

  webhookResources:
    {}

  resources:
    {}
    # We usually recommend not specifying default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
  #   cpu: 100m
  #   memory: 128Mi

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80

  nodeSelector: {}

  tolerations: []

  affinity: {}



## Bitnami Valkey configuration
## https://artifacthub.io/packages/helm/bitnami/valkey
redis:
  enabled: false
  architecture: standalone

  master:
    persistence:
      enabled: true
      existingClaim: ""
      size: 2Gi

@mayphilc

mayphilc commented Dec 15, 2024

@Vad1mo

I cant think of anything else it would need

execution mode?

@MaSpeng

MaSpeng commented Dec 16, 2024

@Vad1mo looks good to me 👍. The execution mode is a good topic; in the past this was conditionally set based on .Values.scaling.enabled. How do you plan to do this in the future?

@finkinfridom
Author

@Vad1mo looks good to me too. thanks a lot

@Vad1mo
Member

Vad1mo commented Dec 17, 2024

@Vad1mo looks good to me 👍. The execution mode is a good topic; in the past this was conditionally set based on .Values.scaling.enabled. How do you plan to do this in the future?

Now you do it with enabled: true, like:

webhooks:
  enabled: false
  config: {}
  secret: {}
  ...
worker:
  enabled: false
  count: 2
  concurrency: 2
  config: {}
  secret: {}

@mayphilc

@Vad1mo looks good to me 👍. The execution mode is a good topic; in the past this was conditionally set based on .Values.scaling.enabled. How do you plan to do this in the future?

Now you do it with enabled: true, like:

webhooks:
  enabled: false
  config: {}
  secret: {}
  ...
worker:
  enabled: false
  count: 2
  concurrency: 2
  config: {}
  secret: {}

What about enabling queue mode?

@mayphilc

mayphilc commented Dec 18, 2024

Is count/concurrency equivalent to replicas?

@MaSpeng

MaSpeng commented Dec 18, 2024

@Vad1mo looks good to me 👍. The execution mode is a good topic; in the past this was conditionally set based on .Values.scaling.enabled. How do you plan to do this in the future?

Now you do it with enabled: true, like:

webhooks:
  enabled: false
  config: {}
  secret: {}
  ...
worker:
  enabled: false
  count: 2
  concurrency: 2
  config: {}
  secret: {}

What about enabling queue mode?

@mayphilc That's what was discussed in this topic: the "queue" mode is actually the "EXECUTIONS_MODE", so when enabling the webhook/worker options, the chart will set the execution mode to "queue"; at least that's what I expect by now :).
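That expectation could be sketched like this (hypothetical; whether the chart actually derives EXECUTIONS_MODE this way from the new values is an assumption):

```python
def execution_mode(values):
    """Derive n8n's EXECUTIONS_MODE from the chart's worker/webhooks toggles."""
    workers_on = values.get("worker", {}).get("enabled", False)
    webhooks_on = values.get("webhooks", {}).get("enabled", False)
    # queue mode implies separate worker/webhook pods (plus Redis); "regular" is the default
    return "queue" if workers_on or webhooks_on else "regular"
```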

@Vad1mo
Member

Vad1mo commented Dec 18, 2024

We haven't used webhook/worker so far, and I didn't look into the underlying code too deeply. I am looking at it purely from a consumer perspective.

I now see that I need to take Redis into account when enabling the queue mode. So it will likely have an influence on each other.

I have other questions regarding webhook/worker.
In the future, webhook/worker will have their own dedicated configs.

  1. Do webhook/worker always have the same config as main?
  2. Should webhook/worker be able to have a config that differs from main?
  3. Should webhook/worker be able to extend the config of main?

@MaSpeng

MaSpeng commented Dec 18, 2024

@Vad1mo From the environment variable documentation for n8n, it looks like you could at least use each environment variable on each service. Whether it has any effect is decided at runtime and does not seem to be documented.

As a consequence, we normally share the same environment variables across all services (main, webhook, worker), which is especially important if you allow additional built-in or external modules for code nodes.
You could enable them for main, and while you are developing and testing a workflow everything would be fine; but if you forgot to do the same on the worker service, the workflow would fail when executed regularly through a webhook or a schedule, for example.

I think in the end you should at least be able to overwrite or nullify a particular setting for a specific service, so that the consumer of this helm chart has the freedom to configure their specific needs. But it could also be a case of YAGNI ^^.
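The overwrite-or-nullify idea could look like this (a hypothetical helper, not part of the chart; the env var names below are only examples):

```python
def service_env(shared, overrides):
    """Compose the env vars for one service (main, webhook, or worker).

    Start from the shared config, apply per-service overrides,
    and treat an override of None as "nullify this shared setting".
    """
    env = dict(shared)
    for name, value in overrides.items():
        if value is None:
            env.pop(name, None)  # nullify the shared setting for this service
        else:
            env[name] = value
    return env
```

This keeps main/webhook/worker consistent by default while still letting a single service drop or change one setting.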

@Vad1mo
Member

Vad1mo commented Dec 18, 2024

I think in the end you should at least be able to overwrite or nullify a particular setting for a specific service, so that the consumer of this helm chart has the freedom to configure their specific needs. But it could also be a case of YAGNI ^^.

Thanks for your view. I was thinking the same; glad to have my view confirmed.

@dadangnh
Contributor

dadangnh commented Jan 4, 2025

Hi @Vad1mo, as raised in this PR, I haven't seen support for custom deployment labels or annotations in the example above. Apart from that, I believe we can implement a topologySpreadConstraints config for each deployment as well. Hence, I'd like to propose adding something like the following:

globalDeploymentAnnotations: {} # Applied to all deployments
globalDeploymentLabels: {} # Applied to all deployments
n8n:
  # other keys
  deploymentAnnotations: {} # Applied only to n8n deployment
  deploymentLabels: {} # Applied only to n8n deployment
  topologySpreadConstraints: {}  # Applied only to n8n deployment
webhook:
  # other keys
  deploymentAnnotations: {} # Applied only to webhook deployment
  deploymentLabels: {} # Applied only to webhook deployment
  topologySpreadConstraints: {}  # Applied only to webhook deployment
worker:
  # other keys
  deploymentAnnotations: {} # Applied only to worker deployment
  deploymentLabels: {} # Applied only to worker deployment
  topologySpreadConstraints: {}  # Applied only to worker deployment

What do you think?

Vad1mo mentioned this issue Jan 10, 2025
@Vad1mo
Member

Vad1mo commented Jan 10, 2025

Hey all, I guess you are waiting for the new release; I have been slowly working on making the needed changes.
Meanwhile, I am pushing the not-yet-working code for transparency, and in case you are up for a review.

At the moment, I would consider deployment.yaml to be the closest to complete.

@Vad1mo
Member

Vad1mo commented Jan 24, 2025

Hello, here's a quick status update. The happy path scenario (without worker/webhook) is working.

Also take a look at this example values file

Great things achieved so far:

  1. Maintain a YAML config in Helm values, where the YAML is mapped 1:1 onto ENV vars (IMHO trees are better than flat ENV vars).
  2. All n8n options supported.
  3. Extra env vars, for example to reference ConfigMaps and Secrets defined in other files (GitOps, external secrets, etc.).
  4. Fully configurable deployment, webhook, and worker.
  5. Escape hatch option to inject any manifest (Gateway API, Istio, CloudNativePG, ...).
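
As an aside on point 1, the 1:1 YAML-to-ENV mapping can be illustrated with a rough sketch (not the chart's actual code) that flattens a nested values tree into n8n-style variable names such as DB_POSTGRESDB_HOST:

```python
def yaml_to_env(tree: dict, prefix: str = "") -> dict:
    """Flatten a nested config dict into uppercase, underscore-joined
    env-var names, e.g. {"db": {"postgresdb": {"host": ...}}}
    becomes {"DB_POSTGRESDB_HOST": ...}."""
    env = {}
    for key, value in tree.items():
        name = (f"{prefix}_{key}" if prefix else key).upper()
        if isinstance(value, dict):
            env.update(yaml_to_env(value, name))
        else:
            env[name] = str(value)
    return env

config = {"db": {"postgresdb": {"host": "postgres", "port": 5432}}}
print(yaml_to_env(config))
# {'DB_POSTGRESDB_HOST': 'postgres', 'DB_POSTGRESDB_PORT': '5432'}
```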

Please give it a try and let me know what is missing or is still incorrect.

@finkinfridom @dadangnh @MaSpeng @mayphilc

@galovics
Copy link

@Vad1mo any update on this and when this new branch will be production-ready? On the frontpage of the GH repo I read it was supposed to be January 2025 but I'd assume it's still not complete.

@Vad1mo
Copy link
Member

Vad1mo commented Feb 13, 2025

@Vad1mo any update on this and when this new branch will be production-ready? On the frontpage of the GH repo I read it was supposed to be January 2025 but I'd assume it's still not complete.

It is already working; the only missing parts are the webhook and worker. You can already give it a go.

@galovics
Copy link

@Vad1mo are they essential to n8n? I haven't used it before and I'm not sure about the impact here. Mostly I was working with make.com and zapier as managed solutions.

@Vad1mo
Copy link
Member

Vad1mo commented Feb 14, 2025

are they essential to n8n?

Yes, but they are part of "main" too. In this context it's about scalability and high throughput, where webhook and worker run as separate pods rather than in the main process. FYI, we don't use it ourselves, but some in the community do.
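
For background, n8n's queue mode (which the separate webhook and worker pods rely on) is enabled through a handful of documented environment variables; a minimal sketch, with a placeholder Redis host:

```shell
# Queue mode: main, webhook, and worker processes coordinate via Redis.
# The host value below is a placeholder.
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.example.internal
export QUEUE_BULL_REDIS_PORT=6379
```

The webhook and worker pods then start with `n8n webhook` and `n8n worker` respectively, while the main pod runs plain `n8n`.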

@galovics
Copy link

@Vad1mo got it. So let's say I start using the "next" branch of this Helm chart: when it gets merged back into main, will it be compatible, or will I need to recreate all workflows when that happens and we upgrade?

@mayphilc
Copy link

Just stick with the latest tag; it's usually very stable. Anything fresher than that is experimental.

There's no real reason you'd lose anything as long as you install and perform updates/rollbacks correctly.

Don't delete your persistent volumes!

@Vad1mo
Copy link
Member

Vad1mo commented Feb 14, 2025

let's say I start utilizing the "next" branch of this helm chart, when it goes back into main, will it be compatible

Next will be the new main. I expect all values.yaml settings will stay as they are now in next.

@galovics
Copy link

galovics commented Feb 14, 2025

@mayphilc

Don't delete your persistent volumes!

I was planning to use PostgreSQL for this instead of SQLite. Does n8n store anything on disk in case there's an external database connected to it?

@Vad1mo any plans when the "next will be the new main" will happen?

@mayphilc
Copy link

@mayphilc

Don't delete your persistent volumes!

I was planning to use PostgreSQL for this instead of SQLite. Does n8n store anything on disk in case there's an external database connected to it?

@Vad1mo any plans when the "next will be the new main" will happen?

I feel like I should explain how n8n uses storage, because the answer to your question is yes, no, and maybe.

#1 n8n uses SQLite or Postgres/Supabase for its SQL database. This stores structured data the same way a website does: settings, workflow definitions, credentials, execution data, and other structural data.

#2 With the default SQLite setup, that database is a single file on the machine it's installed on, so unless you back it up and store it elsewhere, you'll lose your workflows if the drive goes.

#3 The data you manage with your flows, like chat context and documents you parse, typically ends up in vector databases, and these can be remote or local; that depends on you. I prefer to stay self-hosted; I like control over my data.

80% or more of anything you will ever build in n8n is basically just routing connections with logic to and from separate online services that normally never work together the way you need. For example, using AI to field calls for your business after hours could involve local or remote knowledge bases, local or remote AI, and a plethora of online services and communication tools. There's really no limit to what you can build, other than n8n charging around $500/mo if you want a production-worthy n8n; otherwise you are limited to the constraints of either the cheap cloud plans or the community edition, both of which will absolutely do anything one person could want. Just don't expect them to handle a lot of concurrent tasks or be secure, unless you want to spin up separate installations behind your own auth setup.

My point is that the majority of everything you'll do will leverage online services, including the storage of actual data, unless you self-host your own vector databases and so on.
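
To answer the disk question concretely: with an external Postgres, workflows, credentials, and execution data live in that database, configured via n8n's documented DB_* environment variables. n8n does still keep a small amount of state on disk, notably the encryption key in ~/.n8n, unless you pin it via N8N_ENCRYPTION_KEY. A hedged sketch with placeholder values:

```shell
# Point n8n at an external Postgres (all values are placeholders).
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=postgres.example.internal
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=change-me

# Avoid depending on the key file in ~/.n8n by setting the key explicitly.
export N8N_ENCRYPTION_KEY=change-me-too
```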

@galovics
Copy link

@mayphilc thanks for the explanation; I don't see why the arrogance, though. If you'd checked out my profile, you'd see I run a cloud services company, so I know very well how databases work.

The only reason I asked about disk storage is that you didn't say whether block storage is used even when an external DB is connected; my bad if it was too obvious a question.

@mayphilc
Copy link

@mayphilc thanks for the explanation, I don't see why the arrogance though. If you've checked out my profile, you'd see I have a Cloud services company, so I know very well how databases work.

The only reason I asked about the disk storage is because you didn't say whether block storage is used even if I have an external DB connected to it, my bad if it was too much of an obvious question..

How was I arrogant? Asking whether an external database affects disk storage is not a straightforward question, and honestly it makes little sense at face value; they have nothing to do with each other in any simple way. I spent my time explaining most of the ways n8n can utilize combinations of storage to try to clarify it for you, and you think that's me being arrogant?

Here's me being arrogant:

I saw your website before I even answered you, and it looks like a very well-built website made by somebody desperately trying to succeed in the cloud services industry, because not even one service you supposedly provide is defined anywhere beyond generic "be in the cloud" and "use genAI with your product". Good luck, bud; I hope you didn't invest too much into whatever you think your company does.

@galovics
Copy link

@mayphilc jeez dude, chill out.

@mayphilc
Copy link

I'm fine, I just regret taking two minutes of my life to stop and educate you.
