User Story
As a developer
I want to make sure the Terraform state that describes our infrastructure is backed up after each successful provisioning
So that we can restore the infrastructure to a known working state if the Terraform state gets corrupted or out of sync
Acceptance criteria
Given an environment has been provisioned
When the associated Terraform state is no longer in sync
Then we can download and reload a known working Terraform state
Additional info
Is your feature request related to a problem? Please describe.
Problems:
```
gigadb-website/ops/infrastructure/envs/live (develop%=) % ansible-playbook -i ../../inventories users_playbook.yml -e "newuser=bastien" -e "gigadb_env=live"
[WARNING]: Could not match supplied host pattern, ignoring: name_bastion_server_live*

PLAY [Create an additional user on bastion for curators team] ****************************************************************************************************************************************************

skipping: no hosts matched

PLAY RECAP *******************************************************************************************************************************************************************************************************
```
It seems that the Terraform state for Upstream/gigadb-website on the live production environment is out of sync with the actual state on AWS.
Unfortunately, neither `terraform init` nor `terraform refresh` is able to fix the issue.
As it stands, the Terraform state is missing several resources, including those for the EC2 instances.
As a result, the Terraform inventory script cannot retrieve the tags associated with these resources, including the "Name" tag that is used to construct the host name Ansible uses for host filtering.
The issue may have been caused by either (or both) of the following pieces of work:
- The work to deploy the files server included changes to the tag naming convention, but the Terraform changes were not applied on live production because applying them would destroy and recreate resources, some of which (e.g. EFS) were not yet fully backed up
- Setting up AWS for the alternative infrastructure, and the introduction of AWS_PROFILE and the GigaDBAlt IAM user
The correct fix is to rebuild Upstream/gigadb-website, which we can only do when the PRs related to alternative infrastructure are merged and we can do a blue/green deployment.
Describe the solution you'd like
Possible solutions:
- Back up the last known working Terraform state to an S3 bucket
- Keep one Terraform state file per developer per environment
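As a rough illustration of these two options combined (a sketch only: the bucket name `gigadb-tfstate-backups` and the key layout are assumptions, not an agreed convention), a post-provisioning backup step could look like:

```shell
# Hypothetical sketch: after a successful `terraform apply`, copy the local
# state file to S3 under a per-developer, per-environment key, so a known
# working state can later be downloaded and reloaded.
# The bucket name is an assumption, not an agreed convention.
developer="bastien"   # in practice this could be derived from $USER
gigadb_env="live"
backup_key="s3://gigadb-tfstate-backups/${developer}/${gigadb_env}/terraform.tfstate"
# Uncomment to actually run the backup (requires the terraform and aws CLIs):
# terraform apply && aws s3 cp terraform.tfstate "${backup_key}"
echo "backup target: ${backup_key}"
```

Restoring would then be the reverse: `aws s3 cp` the object back down and load it into the workspace with `terraform state push`.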
Temporary solution
Create a temporary static inventory file (e.g. `temp_inventory.txt`) in your `ops/infrastructure/envs/live/` directory for Upstream/gigadb-website with the hardcoded hosts:
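For example (the host name follows the `name_bastion_server_live*` pattern from the warning above; the IP address and SSH user below are placeholders, not the real values, which must be looked up in the AWS console):

```
; temp_inventory.txt -- temporary, hardcoded static inventory
; The IP address and ansible_user are hypothetical placeholders.
name_bastion_server_live_0 ansible_host=203.0.113.10 ansible_user=centos
```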
Then pass an additional -i parameter to the Ansible playbook you want to run:
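Reusing the failing command from the problem description above, the extra inventory would be passed like this (a sketch; it assumes the temporary inventory file exists in the current directory):

```shell
# Run the playbook with both the usual inventory directory and the
# temporary static inventory as fallback.
ansible-playbook -i ../../inventories -i temp_inventory.txt users_playbook.yml -e "newuser=bastien" -e "gigadb_env=live"
```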