Vault is sealed. Unsealing... storage: object doesn't exist #12
This issue is definitely related to the KMS key ring and key. If you run everything in a new project, it all works. If you try to delete and recreate the vault StatefulSet, Vault fails to initialize, raising a "can't access core/migration" error.

Try deleting everything in the GCS bucket, then deleting and re-deploying the StatefulSet.
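For completeness, that reset looks something like this (a sketch; the bucket variable and manifest filename are placeholders for this setup):

```sh
# Wipe all Vault state from the bucket, including core/migration and any .enc files
gsutil -m rm -r "gs://${GCS_BUCKET_NAME}/**"

# Delete and re-deploy the StatefulSet
kubectl delete statefulset vault
kubectl apply -f vault.yaml
```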
That's the first thing I tried :-), didn't work.

Stuck in the same place :(

I've still not been able to reproduce this (I run these scripts from scratch every week). Can someone share a reproduction?
Thanks for the fantastic work on this and the documentation. I have the same issue, though, as BrentDorsey and vnbx.

Steps to reproduce: on GCP, in an already existing project (I cannot create a new project, as I do not have permissions to do so in our subscription), I cloned the vault-on-gke repo and followed the steps therein to the T. I hit the issue below when deploying the StatefulSet; the init container logs showed the same error as in this issue.

First round of debugging: I deleted everything, including the GCS bucket, the service account, and the KMS key, and recreated everything from scratch (except that I could not recreate it in a new project). I faced the same issue.

Second round: I changed the vault-init image to version "1.0.0" and the Vault image to 1.0.2 in the manifest. This was strange, since my key had the IAM roles "roles/editor" and "roles/cloudkms.cryptoKeyEncrypterDecrypter".

Round three: I cloned the vault-kubernetes-workshop repo. I expanded the permissions for both the key and the keyring to "roles/owner" for the service account and still got the above error (I reverted this later); see the gcloud sketch after this comment for what that binding looks like.

I have followed the steps exactly as in the repos mentioned above in each trial. The only change made was the vault-init and Vault image tags in the second-to-last trial. Could you kindly help resolve this? Thanks.
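For reference, granting the KMS role mentioned above on the key used in this thread looks something like this (the service account email is a placeholder; the keyring and key names match the seal stanza in the manifest below):

```sh
# Grant encrypt/decrypt on the "vault-init" key in the "vault" keyring
# (service account email is hypothetical; substitute your own)
gcloud kms keys add-iam-policy-binding vault-init \
  --location global \
  --keyring vault \
  --member "serviceAccount:vault-server@PROJECT-ID-HERE.iam.gserviceaccount.com" \
  --role "roles/cloudkms.cryptoKeyEncrypterDecrypter"
```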
Hi @sethvargo, here is the output of kubectl logs vault-0 -c vault-init:
Can you post the .hcl file that you are constructing? You need to make sure that the bucket is defined there with the right permissions.
@priyeshgpatel, the .hcl I am using is below; it is given as an argument to the init container, as defined at https://github.com/kelseyhightower/vault-on-google-kubernetes-engine/blob/e7e24127b62b8f120ff24a0de8413263ca54b0e3/vault.yaml:

listener "tcp" {
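To see the full rendered config inside a running pod (assuming the mount path used in this manifest), something like this works:

```sh
# Print the vault.hcl the init container wrote into the shared volume
kubectl exec vault-0 -c vault -- cat /etc/vault/config/vault.hcl
```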
You need to add the seal stanza with the keyring and key.
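For concreteness, the stanza being referred to looks like the following; the project, keyring, and key values are taken from the full manifest posted later in this thread:

```hcl
seal "gcpckms" {
  project    = "PROJECT-ID-HERE"
  region     = "global"
  key_ring   = "vault"
  crypto_key = "vault-init"
}
```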
@priyeshgpatel adding that piece didn't make any difference. It is still the same error.
Well, you would need that stanza to create the seal keys.
Here is how to make it work. This is some race condition where, for some reason, the initialization does not complete successfully and does not store the .enc files in the bucket. This setup is extremely outdated and should be modified to use the new Vault 1.1 with KMS seal support built in. Here's vault.yaml with the changes needed to make it work with the new version (notes follow the manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vault
spec:
  clusterIP: None
  ports:
    - name: http
      port: 8200
    - name: server
      port: 8201
  selector:
    app: vault
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vault
  labels:
    app: vault
spec:
  serviceName: "vault"
  selector:
    matchLabels:
      app: vault
  replicas: 2
  template:
    metadata:
      labels:
        app: vault
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - vault
              topologyKey: kubernetes.io/hostname
      initContainers:
        - name: config
          image: busybox
          env:
            - name: GCS_BUCKET_NAME
              valueFrom:
                configMapKeyRef:
                  name: vault
                  key: gcs-bucket-name
          command: ["/bin/sh", "-c"]
          args:
            - |
              cat > /etc/vault/config/vault.hcl <<EOF
              listener "tcp" {
                address = "0.0.0.0:8200"
                tls_cert_file = "/etc/vault/tls/vault.pem"
                tls_key_file = "/etc/vault/tls/vault-key.pem"
                tls_min_version = "tls12"
              }
              storage "gcs" {
                bucket = "${GCS_BUCKET_NAME}"
                ha_enabled = "true"
              }
              seal "gcpckms" {
                project = "PROJECT-ID-HERE"
                region = "global"
                key_ring = "vault"
                crypto_key = "vault-init"
              }
              ui = true
              EOF
          volumeMounts:
            - name: vault-config
              mountPath: /etc/vault/config
      containers:
        - name: vault
          image: "vault:1.1.3"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: "status.podIP"
            - name: "VAULT_API_ADDR"
              valueFrom:
                configMapKeyRef:
                  name: vault
                  key: api-addr
            - name: "VAULT_CLUSTER_ADDR"
              value: "https://$(POD_IP):8201"
          args:
            - "server"
            - "-config=/etc/vault/config/vault.hcl"
          ports:
            - name: http
              containerPort: 8200
              protocol: "TCP"
            - name: server
              containerPort: 8201
              protocol: "TCP"
          readinessProbe:
            httpGet:
              path: "/v1/sys/health?standbyok=true"
              port: 8200
              scheme: HTTPS
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          volumeMounts:
            - name: vault-config
              mountPath: /etc/vault/config
            - name: vault-tls
              mountPath: /etc/vault/tls
      volumes:
        - name: vault-config
          emptyDir: {}
        - name: vault-tls
          secret:
            secretName: vault
```

Notes: when applied for the first time, it will start sealed and uninitialized. You must port-forward to the vault-0 pod and initialize Vault yourself, which will use KMS:

```
$ kubectl port-forward vault-0 8200:8200
$ vault operator init
Recovery Key 1: DQPWFQjcZSjo04Jjvgosxwz7dPATlbAanY+qxoOAPey+
Recovery Key 2: 2GupUmF//LIIN7kxEJMaVfQkN4MSA8JUDVRr/f+3pyWP
Recovery Key 3: 38maDcchw+Qj8/tl9jWM+yjCGNFOUe4bnfr9Rsd1TkN+
Recovery Key 4: Tcjax6o9uoHyNwj2Er6ll9lq5nape2NZOHIn2Lxtf0ZS
Recovery Key 5: nfz3wqVqWtmLtK2LBhPRMwBQE/V0eP3Qo0ItLAgw0EQy
Initial Root Token: s.Spnah49tLX7DR7EJgyHEnd35
Success! Vault is initialized
Recovery key initialized with 5 key shares and a key threshold of 3. Please
securely distribute the key shares printed above.
```
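With the gcpckms seal, the pods should unseal themselves once initialized; a quick spot check, assuming the pod and container names from the manifest above:

```sh
# The CLI defaults to https://127.0.0.1:8200; VAULT_SKIP_VERIFY skips the
# self-signed TLS check for this one-off test
kubectl exec vault-0 -c vault -- env VAULT_SKIP_VERIFY=true vault status
```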
You can use https://github.com/sethvargo/vault-on-gke for a more updated version.
Had the exact same issue, and my problem was the OAuth scopes on the cluster, which I forgot to update, or at least to give it a bucket storage write scope; by default the nodes only get read-only storage access.
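A sketch of setting broader scopes at cluster creation (the cluster name is a placeholder, and cloud-platform is one common choice rather than the only fix):

```sh
# Node OAuth scopes can only be set when the cluster/node pool is created
gcloud container clusters create vault-cluster \
  --scopes "https://www.googleapis.com/auth/cloud-platform"
```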
Thanks for putting this together, man, I love your work!

I'm hoping you can help me resolve the issue I'm having. I've gone through the instructions several times, and I keep running into the same "storage: object doesn't exist" error when the init container is trying to unseal the Vault.

The missing storage object is unseal-keys.json.enc.

For some reason the init container is not able to authenticate to the Vault API and is unable to generate unseal-keys.json.enc?

The only changes I made to the instructions were to use the us-central region, and I had to remove the cluster version because 1.11.2-gke.9 is no longer supported.
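One way to confirm which objects the init container actually stored (the bucket variable is a placeholder for the one created during setup):

```sh
# If initialization succeeded, unseal-keys.json.enc should be listed here
gsutil ls "gs://${GCS_BUCKET_NAME}/"
```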