ERROR, Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1 #796
Comments
I'm having the exact same issue :(
I have the same issue :'(
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 30 days.
In my case, the pod was running as a non-root user and wasn't able to read the token file.
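(For anyone hitting the non-root case above: a minimal sketch of one common workaround, which makes the projected token file readable through the pod securityContext. The deployment name, UID, and image tag are placeholders, not from this issue.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-external-secrets   # placeholder name
spec:
  selector:
    matchLabels:
      app: kubernetes-external-secrets
  template:
    metadata:
      labels:
        app: kubernetes-external-secrets
    spec:
      securityContext:
        runAsUser: 1000    # example non-root UID
        fsGroup: 65534     # gives the pod's group read access to the projected token file
      containers:
        - name: kubernetes-external-secrets
          image: godaddy/kubernetes-external-secrets:8.1.3  # version mentioned later in this thread
```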
I have the same problem. Can someone give some tips?
The error message
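(That error text comes from the AWS SDK for JavaScript, which KES uses: the SDK only reads the shared config file pointed to by AWS_CONFIG_FILE when AWS_SDK_LOAD_CONFIG=1 is set. A minimal sketch of setting it through the chart's env values, assuming credentials really do live in a shared config file; this is a guess at the commenter's point, not a confirmed fix.)

```yaml
# Helm values.yaml (sketch)
env:
  AWS_SDK_LOAD_CONFIG: "1"   # tell the JS SDK to load AWS_CONFIG_FILE / shared config
```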
Seeing the same thing on my side, has anyone figured out a way around it? Passing a service account with the associated IAM role ARN to the pod also does not seem to work.
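(For comparison, the usual IRSA wiring on EKS is an annotation on the service account rather than on the pod; a minimal sketch, with placeholder names and role ARN:)

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-external-secrets   # placeholder name
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/external-secrets  # placeholder ARN
```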
If you are not already running KES see
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 30 days.
This issue was closed because it has been stalled for 30 days with no activity. |
We started seeing ExternalSecrets with status
ERROR, Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
intermittently after upgrading from 6.0.0 to 8.1.3 in one of our clusters. At times the status is SUCCESS for the same secret. We have multiple clusters running external-secrets 8.1.3: in the clusters with up to 120 secrets every external secret has status SUCCESS, while the cluster where we see failures has more than 500 secrets.
We think it might be throttling from AWS, but the logs do not indicate this clearly.
We are using per-pod IAM authentication via kube2iam, configured through the chart's env values. The log message is the same error as the status above.
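(With kube2iam the role is requested through an annotation on the pod template; the annotation key below is kube2iam's documented one, while the role value is a placeholder. A sketch of the relevant deployment fragment:)

```yaml
# Deployment pod template fragment (sketch)
spec:
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::111122223333:role/external-secrets  # placeholder role
```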
Do you have any suggestions or fixes?
We did not see any option that could help with throttling other than WATCH_TIMEOUT, and removing the WATCH_TIMEOUT variable did not help either.
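(If AWS throttling is the cause, one other knob worth trying is the poller interval: raising POLLER_INTERVAL_MILLISECONDS makes KES hit the AWS API less often per secret, which matters with 500+ secrets. A minimal sketch via the chart's env values; the 5-minute figure is an arbitrary example, not a recommendation:)

```yaml
# Helm values.yaml (sketch)
env:
  POLLER_INTERVAL_MILLISECONDS: "300000"  # poll every 5 minutes instead of the 10s default
```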