version `next` (currently 1.68.0) no longer supports the `config.executions` schema #125
The issue actually begins in 1.67.1.
@MaSpeng, @finkinfridom, @medwingADMIN thanks for bringing that up! I am not thrilled to fully switch from the config file to env vars. The main reason is that this would have quite an impact on existing chart users, and we would have to bump the chart's major version. Maybe @ivov from n8n can shed some light on the strategy. As a first step, I suggest removing
https://truecharts.org/charts/stable/n8n/ supposedly works with 1.68.0.
Given that most of the configuration we need can be set as environment variables, and given that setting
didn't fix the issue (the default values were still there), the only working solution found was to set
in your values.yaml. With the above changes, the configmap for the
I know I'm a noob, but the friend who set up my k8s for me used the other chart I linked above. I'm on 1.68 now with no problems in queue mode, with workers, webhook processors, and replicas.
@MaSpeng Thanks for bringing this up. Our config schema has grown too large and does not support dependency injection, so we are slowly moving towards smaller independent configs that support DI. The env vars continue to work as before, but we did not realize that the internal structure of the config schema was being relied on externally. Sorry for the inconvenience - we'll rename those keys back. But please bear in mind we will likely deprecate and later drop
Thank you all for the input and contributions. I find applications with multiple levels of configuration easier to operate and reason about, because one typically ends up with a config file for the general setup and env vars for environment-specific changes. This makes it easy to see what is a general config option and what is env-specific. For the upcoming iteration of the chart, I propose keeping the structured format and converting it to environment variables.

```yaml
db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 12345
# ...
```

will become the env variables

```
DB_TYPE=postgresdb
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_HOST=localhost
DB_POSTGRESDB_PORT=12345
...
```

There are also the
@Vad1mo I appreciate the changes you want to make, including the incorporation of the
I think this would look like this in the end:

```yaml
db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 5432
    password_file: /var/secrets/postgres_secret
```

I would also suggest providing a schema to ensure that a specific configuration is only provided through either an environment variable or a file. This might be a general improvement to ensure the configuration is sensible, so that nobody tries to configure
By the way, this could also be a good time to remove deprecated configurations like
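n8n supports appending `_FILE` to many environment variables to read the value from a file. The either/or validation suggested here could look roughly like this hypothetical helper (not part of the chart or of n8n):

```python
def resolve_setting(name, environ):
    """Resolve a setting given either directly (NAME) or via a file
    path (NAME_FILE), rejecting configurations that set both."""
    direct = environ.get(name)
    file_path = environ.get(f"{name}_FILE")
    if direct is not None and file_path is not None:
        raise ValueError(f"set either {name} or {name}_FILE, not both")
    if file_path is not None:
        with open(file_path) as fh:
            return fh.read().strip()
    return direct

# Direct value wins when only NAME is set.
print(resolve_setting("DB_POSTGRESDB_PASSWORD",
                      {"DB_POSTGRESDB_PASSWORD": "pw"}))  # pw
```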
I see, the
Schema is a good keyword; I had that in mind too, especially since I recently stumbled across helm-cel. Do you know if there is a list of supported env vars for n8n? The closest I got was this. However, I would rather not restrict the chart so strictly that it prevents people from setting env vars that are not yet supported by this chart.
I agree with the part about being out of scope.
Personally, I would use the documentation for this topic: https://docs.n8n.io/hosting/configuration/environment-variables/
I would also not limit the
This is the n8n helm chart values file for the next version. I designed it so that you can now configure n8n, worker, and webhook individually. @finkinfridom

# README
# High level values structure, overview and explanation of the values.yaml file.
# 1. chart wide values, like the image repository, image tag, etc.
# 2. ingress (tested with nginx, but it likely works with others too)
# 3. n8n app configuration + kubernetes specific settings
# 4. worker related settings + kubernetes specific settings
# 5. webhook related settings + kubernetes specific settings
# 6. Redis related settings + kubernetes specific settings
##
##
## Common Kubernetes Config Settings for this entire n8n deployment
##
image:
repository: n8nio/n8n
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# define a custom ingressClassName, like "traefik" or "nginx"
className: ""
# n8n application related n8n configuration + Kubernetes specific settings
n8n:
# The config: {} dictionary is converted to environment variables in the ConfigMap.
# Example: the YAML entry db.postgresdb.host: localhost is transformed to DB_POSTGRESDB_HOST=localhost
# See https://docs.n8n.io/hosting/configuration/environment-variables/ for all values.
config:
n8n:
# If not specified, n8n creates a random encryption key for encrypting saved credentials and saves it in the ~/.n8n folder.
# If you run a stateless n8n, you should provide an encryption key here.
encryption_key:
# db:
# type: postgresdb
# postgresdb:
# host: 192.168.0.52
# Dictionary for secrets; unlike config:, the values here end up in the Secret.
# The YAML entry db.postgresdb.password: my_secret is transformed to DB_POSTGRESDB_PASSWORD with the base64-encoded value bXlfc2VjcmV0
# See https://docs.n8n.io/hosting/configuration/environment-variables/
secret:
# database:
# postgresdb:
# password: 'big secret'
##
## N8n Kubernetes specific settings
##
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: false
# which volume type to use; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
type: emptyDir
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
## PVC annotations
#
# If you need this annotation, include it in your `values.yml` file and the pvc.yml template will add it.
# This is no longer handled by Helm v3 itself.
# https://github.com/8gears/n8n-helm-chart/issues/8
#
# annotations:
# helm.sh/resource-policy: keep
## Persistent Volume Access Mode
##
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 1Gi
## Use an existing PVC
##
# existingClaim:
replicaCount: 1
# here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
# If these options are not set, default values are 25%
# deploymentStrategy:
# type: RollingUpdate
# maxSurge: "50%"
# maxUnavailable: "50%"
deploymentStrategy:
type: "Recreate"
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
# your own docker image
# see https://github.com/8gears/n8n-helm-chart/pull/30
lifecycle:
{}
# here's the sample configuration to add mysql-client to the container
# lifecycle:
# postStart:
# exec:
# command: ["/bin/sh", "-c", "apk add mysql-client"]
# here you can override a command for main container
# it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
# run additional preparation steps (e.g., installing additional software)
command: []
# sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
# command:
# - tini
# - --
# - /bin/sh
# - -c
# - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n
# here you can override the livenessProbe for the main container
# it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like
livenessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can override the readinessProbe for the main container
# it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like
readinessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can add init containers to the various deployments
initContainers: []
service:
type: ClusterIP
port: 80
annotations: {}
workerResources:
{}
webhookResources:
{}
resources:
{}
# We usually recommend not specifying default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
## Worker related settings
worker:
enabled: false
count: 2
concurrency: 2
config: {}
secret: {}
##
## Worker Kubernetes specific settings
##
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: false
# which volume type to use; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
type: emptyDir
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
## PVC annotations
#
# If you need this annotation, include it in your `values.yml` file and the pvc.yml template will add it.
# This is no longer handled by Helm v3 itself.
# https://github.com/8gears/n8n-helm-chart/issues/8
#
# annotations:
# helm.sh/resource-policy: keep
## Persistent Volume Access Mode
##
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 1Gi
## Use an existing PVC
##
# existingClaim:
replicaCount: 1
# here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
# If these options are not set, default values are 25%
# deploymentStrategy:
# type: RollingUpdate
# maxSurge: "50%"
# maxUnavailable: "50%"
deploymentStrategy:
type: "Recreate"
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
# your own docker image
# see https://github.com/8gears/n8n-helm-chart/pull/30
lifecycle:
{}
# here's the sample configuration to add mysql-client to the container
# lifecycle:
# postStart:
# exec:
# command: ["/bin/sh", "-c", "apk add mysql-client"]
# here you can override a command for main container
# it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
# run additional preparation steps (e.g., installing additional software)
command: []
# sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
# command:
# - tini
# - --
# - /bin/sh
# - -c
# - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n
# here you can override the livenessProbe for the main container
# it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like
livenessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can override the readinessProbe for the main container
# it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like
readinessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can add init containers to the various deployments
initContainers: []
service:
type: ClusterIP
port: 80
annotations: {}
workerResources:
{}
webhookResources:
{}
resources:
{}
# We usually recommend not specifying default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
## Webhook related settings
# With .Values.scaling.webhook.enabled=true you disable webhook processing in the main process and enable it on a separate webhook instance.
# See https://github.com/8gears/n8n-helm-chart/issues/39#issuecomment-1579991754 for the full explanation.
webhooks:
enabled: false
count: 1
config: {}
secret: {}
##
## Webhook Kubernetes specific settings
##
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: false
# which volume type to use; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
type: emptyDir
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
## PVC annotations
#
# If you need this annotation, include it in your `values.yml` file and the pvc.yml template will add it.
# This is no longer handled by Helm v3 itself.
# https://github.com/8gears/n8n-helm-chart/issues/8
#
# annotations:
# helm.sh/resource-policy: keep
## Persistent Volume Access Mode
##
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 1Gi
## Use an existing PVC
##
# existingClaim:
replicaCount: 1
# here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
# If these options are not set, default values are 25%
# deploymentStrategy:
# type: RollingUpdate
# maxSurge: "50%"
# maxUnavailable: "50%"
deploymentStrategy:
type: "Recreate"
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
# your own docker image
# see https://github.com/8gears/n8n-helm-chart/pull/30
lifecycle:
{}
# here's the sample configuration to add mysql-client to the container
# lifecycle:
# postStart:
# exec:
# command: ["/bin/sh", "-c", "apk add mysql-client"]
# here you can override a command for main container
# it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
# run additional preparation steps (e.g., installing additional software)
command: []
# sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
# command:
# - tini
# - --
# - /bin/sh
# - -c
# - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n
# here you can override the livenessProbe for the main container
# it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like
livenessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can override the readinessProbe for the main container
# it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like
readinessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can add init containers to the various deployments
initContainers: []
service:
type: ClusterIP
port: 80
annotations: {}
workerResources:
{}
webhookResources:
{}
resources:
{}
# We usually recommend not specifying default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
## Bitnami Valkey configuration
## https://artifacthub.io/packages/helm/bitnami/valkey
redis:
enabled: false
architecture: standalone
master:
persistence:
enabled: true
existingClaim: ""
size: 2Gi
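The secret transformation described in the values comments above (a plain value ending up base64-encoded in the Secret, as Kubernetes requires) can be verified with a couple of lines of Python:

```python
import base64

# Kubernetes Secret data is base64-encoded; the 'my_secret' value from
# the values.yaml example encodes to the string shown in the comments.
encoded = base64.b64encode(b"my_secret").decode("ascii")
print(encoded)  # bXlfc2VjcmV0
```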
I can't think of anything else. Would it need an execution mode?
@Vad1mo looks good to me 👍. The execution mode is a good topic; in the past, this was conditionally set based on
@Vad1mo looks good to me too. Thanks a lot.
Now you do it like this, with enabled: true, like:

```yaml
webhooks:
  enabled: false
  config: {}
  secret: {}
# ...
worker:
  enabled: false
  count: 2
  concurrency: 2
  config: {}
  secret: {}
```
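The expectation voiced in this thread, that enabling worker or webhook toggles should imply queue mode, could be expressed as a rule like the following. This is a sketch of that expectation, not the chart's confirmed template behavior ("regular" is n8n's default `EXECUTIONS_MODE`):

```python
def derive_executions_mode(values):
    """Return 'queue' when worker or webhook processing is enabled,
    since separate worker/webhook pods require n8n's queue mode."""
    worker_on = values.get("worker", {}).get("enabled", False)
    webhooks_on = values.get("webhooks", {}).get("enabled", False)
    return "queue" if worker_on or webhooks_on else "regular"

print(derive_executions_mode({"worker": {"enabled": True}}))  # queue
print(derive_executions_mode({}))  # regular
```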
What about enabling queue mode?
Is count/concurrency equivalent to replicas?
@mayphilc That's what was discussed in this topic: the "queue" mode is actually the "EXECUTIONS_MODE", so when enabling the webhook/worker options, the chart will set the execution mode to "queue"; at least that's what I expect by now :).
I now see that I need to take Redis into account when enabling queue mode, so they will likely influence each other. I have other questions regarding webhook/worker.
@Vad1mo From the environment variable configuration in the n8n documentation, it looks like you could at least set each environment variable on each service. Whether it has any effect is decided at runtime and seems not to be documented. As a consequence, we normally share the same environment variables across all services (main, webhook, worker), which is especially important if you allow additional built-in or external modules for code nodes. In the end, you should at least be able to overwrite or nullify a particular setting for a specific service, so the consumer of this helm chart has the freedom to configure their specific needs; but this could also be a case of YAGNI ^^.
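The overwrite-or-nullify idea could be sketched as a merge where a per-service override of null removes the shared variable. This is a hypothetical helper illustrating the semantics, not anything the chart implements:

```python
def merge_service_env(shared, overrides):
    """Merge shared env vars with per-service overrides; an override
    of None removes ('nullifies') the variable for that service."""
    merged = dict(shared)
    for key, value in overrides.items():
        if value is None:
            merged.pop(key, None)
        else:
            merged[key] = value
    return merged

shared = {"DB_TYPE": "postgresdb", "N8N_METRICS": "true"}
# The worker opts out of metrics while keeping the shared DB settings.
print(merge_service_env(shared, {"N8N_METRICS": None}))
# {'DB_TYPE': 'postgresdb'}
```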
Thanks for your view. I was thinking the same; glad I was able to confirm my view.
Hi @Vad1mo, as raised in this PR, I haven't seen deployment custom labels or annotations supported in the example above. Apart from that, I believe we can implement:

```yaml
globalDeploymentAnnotations: {} # Applied to all deployments
globalDeploymentLabels: {} # Applied to all deployments
n8n:
  # other keys
  deploymentAnnotations: {} # Applied only to n8n deployment
  deploymentLabels: {} # Applied only to n8n deployment
  topologySpreadConstraints: {} # Applied only to n8n deployment
webhook:
  # other keys
  deploymentAnnotations: {} # Applied only to webhook deployment
  deploymentLabels: {} # Applied only to webhook deployment
  topologySpreadConstraints: {} # Applied only to webhook deployment
worker:
  # other keys
  deploymentAnnotations: {} # Applied only to worker deployment
  deploymentLabels: {} # Applied only to worker deployment
  topologySpreadConstraints: {} # Applied only to worker deployment
```

What do you think?
Hey all, I guess you are waiting for the new release. I have been working slowly on making the needed changes. At the moment I would consider deployment.yaml to be the closest to complete.
Hello, here is a quick status update. The happy-path scenario (without worker/webhook) is working. Also take a look at this example values file. Great things achieved so far:
Please give it a try and let me know what is missing or still incorrect.
@Vad1mo any update on this, and when will this new branch be production-ready? On the front page of the GH repo I read it was supposed to land in January 2025, but I'd assume it's still not complete.
It is already working; the only missing parts are webhooks and workers. You can already give it a go.
@Vad1mo are they essential to n8n? I haven't used it before and I'm not sure about the impact here. Mostly I have worked with make.com and Zapier as managed solutions.
Yes, but they are part of "main" too. In this context it's about scalability and high throughput, where webhook and worker run as separate pods and not in the main process. FYI, we don't use it ourselves, but some in the community do.
@Vad1mo got it. So let's say I start using the "next" branch of this helm chart: when it is merged back into main, will it be compatible, or do I need to recreate all workflows when that happens and we upgrade?
Just stick with the latest tag; it's usually very stable. Anything fresher than that is experimental. There's no real reason you'd lose anything as long as you install and perform updates/rollbacks correctly. Don't delete your persistent volumes!
Next will be the new main. I expect all values.yaml settings to stay as they are now in next.
I feel like I should explain how databases work, because the answer to your question is yes, no, and maybe.
1. n8n uses SQLite or Postgres/Supabase for its SQL. This just stores structured data the same way a website does, and yes, that's going to be on the machine/cloud it's installed on. That's just things like settings and other structural data, though.
2. Your workflows are stored on a hard drive but not in any database, so unless you back them up and store them elsewhere, you'll lose them if the drive goes.
3. The data you manage with your flows, like chat context and documents you parse, gets stored in vector databases, and these can be remote or local; that depends on you. I prefer to stay self-hosted; I like control over my data.
80% or more of anything you will ever build in n8n is basically just routing connections with logic to and from separate online services that normally never work together the way you need. For example, using AI to field calls for your business after hours could involve local or remote knowledge bases, local or remote AI, and a plethora of online services and communication tools. There's really no limit to what you can build, other than n8n charging you $500/mo if you want a production-worthy n8n; otherwise you are limited to the constraints of either the cheap cloud plans or the community edition, both of which will absolutely do anything one person could want. Just don't expect them to handle a lot of concurrent tasks or to be secure, unless you want to spin up separate installations behind your own auth setup. My point is that the majority of everything you'll do will leverage online services, including the storage of actual data, unless you self-host your own vector databases.
@mayphilc thanks for the explanation; I don't see why the arrogance, though. If you'd checked out my profile, you'd see I have a cloud services company, so I know very well how databases work. The only reason I asked about disk storage is because you didn't say whether block storage is still used even if an external DB is connected. My bad if it was too obvious a question.
How was I arrogant? Asking whether having an external database connected affects disk storage is not a straightforward question and honestly makes no sense; they have nothing to do with each other in any simple way. I spent my time explaining most of the ways n8n can utilize combinations of storage to try to clarify it for you, and you think that's me being arrogant? Here's me being arrogant: I saw your website before I even answered you, and it looks like a very well-built website made by somebody who is desperately trying to succeed in the cloud services industry, because not even one service you supposedly provide is defined anywhere beyond generic "be in the cloud" and "use genAI with your product". Good luck, bud; I hope you didn't invest too much into whatever you think your company does.
@mayphilc jeez dude, chill out.
I'm fine, I just regret taking two minutes of my life to stop and educate you.
When deploying the `next` version of n8n (currently 1.68.0), the pod keeps restarting because the `config.executions` schema no longer matches. Is this a breaking change for the upcoming version? What's the suggested way to migrate to the next version?
Currently, the helm chart provides a couple of configuration keys:
https://github.com/8gears/n8n-helm-chart/blob/main/charts/n8n/values.yaml#L8
Given that the current n8n documentation describes environment variable usage (https://docs.n8n.io/hosting/scaling/execution-data/#enable-data-pruning), wouldn't it be easier to move everything to env variables instead of the config map?
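For the data-pruning settings the linked docs describe, the env-var route would look something like this. The variable names are taken from the n8n environment-variable documentation, but the values.yaml nesting below is a hypothetical application of the chart's key-to-env mapping, not a tested configuration:

```yaml
n8n:
  config:
    executions:
      data_prune: true   # -> EXECUTIONS_DATA_PRUNE=true
      data_max_age: 336  # -> EXECUTIONS_DATA_MAX_AGE=336 (hours)
```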
The text was updated successfully, but these errors were encountered: