This repo is part of a multi-part guide that shows how to configure and deploy the example.com reference architecture described in the Google Cloud security foundations guide. The following table lists the parts of the guide.
Part | Description |
---|---|
0-bootstrap | Bootstraps a Google Cloud organization, creating all the required resources and permissions to start using the Cloud Foundation Toolkit (CFT). This step also configures a CI/CD pipeline for foundations code in subsequent stages. |
1-org (this file) | Sets up top-level shared folders, monitoring and networking projects, organization-level logging, and baseline security settings through organizational policy. |
2-environments | Sets up development, non-production, and production environments within the Google Cloud organization that you've created. |
3-networks-dual-svpc | Sets up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC Service Controls, on-premises Dedicated Interconnect, and baseline firewall rules for each environment. It also sets up the global DNS hub. |
3-networks-hub-and-spoke | Sets up base and restricted shared VPCs with the same default configuration found in step 3-networks-dual-svpc, but with an architecture based on the hub-and-spoke network model. It also sets up the global DNS hub. |
4-projects | Sets up a folder structure, projects, and an application infrastructure pipeline for applications, which are connected as service projects to the shared VPC created in the previous stage. |
5-app-infra | Deploys a simple Compute Engine instance in one of the business unit projects using the infra pipeline set up in 4-projects. |
For an overview of the architecture and the parts, see the terraform-example-foundation README.
The purpose of this step is to set up top-level shared folders, monitoring and networking projects, organization-level logging, and baseline security settings through organizational policies.
- 0-bootstrap executed successfully.
- Security Command Center notifications require that you choose a Security Command Center tier and create and grant permissions for the Security Command Center service account, as outlined in Setting up Security Command Center.
- Ensure that you have requested a sufficient project quota, as the Terraform scripts will create multiple projects from this point onwards. For more information, please see the FAQ.
Note: Make sure that you use version 1.0.0 of Terraform throughout this series, otherwise you might experience Terraform state snapshot lock errors.
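One way to guard against accidental version drift is a `required_version` constraint. A minimal sketch, assuming you keep a `versions.tf` (or similar) file in each stage you run locally:

```hcl
terraform {
  # Pin the CLI version used throughout the series so a newer binary
  # cannot silently upgrade the state snapshot format.
  required_version = "1.0.0"
}
```

With this in place, running any other Terraform version fails fast at `terraform init` instead of producing a state snapshot lock error later.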
Please refer to troubleshooting if you run into issues during this step.
Disclaimer: This step enables Data Access logs for all services in your organization. Enabling Data Access logs might result in your project being charged for the additional logs usage. For details on costs you might incur, go to Pricing. You can choose not to enable the Data Access logs by setting the variable `data_access_logs_enabled` to false.
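If you decide to opt out, the change is a single line. A hypothetical `terraform.tfvars` fragment (the variable name comes from the disclaimer above):

```hcl
# Disable Data Access audit logs to avoid the additional logging charges.
data_access_logs_enabled = false
```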
Note: This module creates a sink to export all logs to Cloud Storage and a Log Bucket. It also creates sinks to export a subset of security-related logs to BigQuery and Pub/Sub. This will result in additional charges for those copies of the logs. For the Log Bucket destination, logs retained for the default retention period (30 days) don't incur a storage cost. You can change the filters and sinks by modifying the configuration in `envs/shared/log_sinks.tf`.
Note: Currently, this module does not enable bucket policy retention for organization logs. Enable it if needed.
Note: You need to set the variable `enable_hub_and_spoke` to `true` to be able to use the hub-and-spoke architecture detailed in the Networking section of the Google Cloud security foundations guide.
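For reference, a minimal `terraform.tfvars` fragment enabling the hub-and-spoke model (variable name from the note above):

```hcl
# Switch the network architecture from dual shared VPC to hub-and-spoke.
enable_hub_and_spoke = true
```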
Note: If you are using macOS, replace `cp -RT` with `cp -R` in the relevant commands. The `-T` flag is needed for Linux, but causes problems for macOS.
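To see the difference the flag makes, here is a small sketch using throwaway directories (GNU cp semantics, i.e. Linux):

```shell
# With -T the destination is treated as the target itself, so the *contents*
# of src land directly in dst rather than in a nested dst/src directory.
mkdir -p src dst
touch src/file.txt
cp -RT src dst
ls dst    # file.txt
```

On macOS (BSD cp), a trailing slash on the source path gives the same contents-into-destination behavior, which is why the `cp -R` variant still works with the source paths used in this guide.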
Note: This module creates a Security Command Center notification. The notification name must be unique within the organization. The suggested name in the `terraform.tfvars` file is `scc-notify`. To check if it already exists, run:

```shell
gcloud scc notifications describe <scc_notification_name> --organization=<org_id>
```
Note: This module manages contacts for notifications using the Essential Contacts API. Contacts are assigned at the parent level (organization or folder) you configured, to be inherited by all child resources. It is also possible to assign Essential Contacts directly to projects using the project-factory essential_contacts submodule. Billing notifications are sent to the mandatory `group_billing_admins` group. Legal and Suspension notifications are sent to the mandatory `group_org_admins` group. If you provide all the other groups, notifications will be configured as in the table below:
Group | Notification Category | Fallback Group |
---|---|---|
gcp_network_viewer | Technical | Org Admins |
gcp_platform_viewer | Product Updates and Technical | Org Admins |
gcp_scc_admin | Product Updates and Security | Org Admins |
gcp_security_reviewer | Security and Technical | Org Admins |
- Clone the policy repo based on the Terraform output from the previous section. Clone the repo at the same level as the `terraform-example-foundation` folder; the next instructions assume that layout. Run `terraform output cloudbuild_project_id` in the `0-bootstrap` folder to see the project ID again.

  ```shell
  gcloud source repos clone gcp-policies --project=YOUR_CLOUD_BUILD_PROJECT_ID
  ```
- Navigate into the repo and create a main branch. All subsequent steps assume you are running them from the gcp-policies directory. If you run them from another directory, adjust your copy paths accordingly.

  ```shell
  cd gcp-policies
  git checkout -b main
  ```
- Copy contents of policy-library to the new repo.

  ```shell
  cp -RT ../terraform-example-foundation/policy-library/ .
  ```
- Commit changes.

  ```shell
  git add .
  git commit -m 'Your message'
  ```
- Push your main branch to the new repo.

  ```shell
  git push --set-upstream origin main
  ```
- Navigate out of the repo.

  ```shell
  cd ..
  ```
- Clone the gcp-org repo.

  ```shell
  gcloud source repos clone gcp-org --project=YOUR_CLOUD_BUILD_PROJECT_ID
  ```

  The message `warning: You appear to have cloned an empty repository.` is normal and can be ignored.

- Navigate into the repo and change to a non-production branch. All subsequent steps assume you are running them from the gcp-org directory. If you run them from another directory, adjust your copy paths accordingly.

  ```shell
  cd gcp-org
  git checkout -b plan
  ```
- Copy contents of foundation to the new repo (Terraform variables will be updated in a future step).

  ```shell
  cp -RT ../terraform-example-foundation/1-org/ .
  ```
- Copy the Cloud Build configuration files for Terraform. You may need to modify the command to reflect your current directory.

  ```shell
  cp ../terraform-example-foundation/build/cloudbuild-tf-* .
  ```
- Copy the Terraform wrapper script to the root of your new repository (modify accordingly based on your current directory).

  ```shell
  cp ../terraform-example-foundation/build/tf-wrapper.sh .
  ```
- Ensure the wrapper script can be executed.

  ```shell
  chmod 755 ./tf-wrapper.sh
  ```
- Check if your organization already has an Access Context Manager policy.

  ```shell
  gcloud access-context-manager policies list --organization YOUR_ORGANIZATION_ID --format="value(name)"
  ```
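The result of that command drives the `create_access_context_manager_access_policy` setting later in the `terraform.tfvars` step. A small sketch of the decision, with the policy ID stubbed in as a plain variable:

```shell
# POLICY_ID would normally be captured from the gcloud command above, e.g.:
#   POLICY_ID=$(gcloud access-context-manager policies list \
#     --organization YOUR_ORGANIZATION_ID --format="value(name)")
POLICY_ID=""   # stand-in: empty means no existing policy

if [ -z "${POLICY_ID}" ]; then
  # No policy yet: leave the variable commented so Terraform creates one.
  echo "create_access_context_manager_access_policy = true"
else
  # A numeric policy ID exists: un-comment the false setting instead.
  echo "create_access_context_manager_access_policy = false"
fi
```

Run with an empty `POLICY_ID`, this prints the `true` line.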
- Rename `./envs/shared/terraform.example.tfvars` to `./envs/shared/terraform.tfvars` and update the file with values from your environment and bootstrap step (you can re-run `terraform output` in the 0-bootstrap directory to find these values). Make sure that `default_region` is set to a valid BigQuery dataset region. Also, if the previous step showed a numeric value, make sure to un-comment the variable `create_access_context_manager_access_policy = false`. See the shared folder README.md for additional information on the values in the `terraform.tfvars` file.

- Commit changes.

  ```shell
  git add .
  git commit -m 'Your message'
  ```
- Push your plan branch to trigger a plan for all environments. Because the plan branch is not a named environment branch, pushing it triggers `terraform plan` but not `terraform apply`.

  ```shell
  git push --set-upstream origin plan
  ```
- Review the plan output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
- Merge changes to the production branch. Because the production branch is a named environment branch, pushing to it triggers both `terraform plan` and `terraform apply`.

  ```shell
  git checkout -b production
  git push origin production
  ```
- Review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
- You can now move to the instructions in the 2-environments step.
Troubleshooting: If you received a `PERMISSION_DENIED` error while running the `gcloud access-context-manager` or the `gcloud scc notifications` commands, you can append

```shell
--impersonate-service-account=terraform-org-sa@<SEED_PROJECT_ID>.iam.gserviceaccount.com
```

to run the command as the Terraform Service Account.
- Clone the repo you created manually in 0-bootstrap.

  ```shell
  git clone <YOUR_NEW_REPO-1-org>
  ```
- Navigate into the repo and change to a non-production branch. All subsequent steps assume you are running them from the <YOUR_NEW_REPO-1-org> directory. If you run them from another directory, adjust your copy paths accordingly.

  ```shell
  cd YOUR_NEW_REPO_CLONE-1-org
  git checkout -b plan
  ```
- Copy contents of foundation to the new repo.

  ```shell
  cp -RT ../terraform-example-foundation/1-org/ .
  ```
- Copy contents of policy-library to the new repo.

  ```shell
  cp -RT ../terraform-example-foundation/policy-library/ ./policy-library
  ```
- Copy the Jenkinsfile script to the root of your new repository.

  ```shell
  cp ../terraform-example-foundation/build/Jenkinsfile .
  ```
- Update the variables located in the `environment {}` section of the `Jenkinsfile` with values from your environment:
  - `_TF_SA_EMAIL`
  - `_STATE_BUCKET_NAME`
  - `_PROJECT_ID` (the CI/CD project ID)
- Copy the Terraform wrapper script to the root of your new repository.

  ```shell
  cp ../terraform-example-foundation/build/tf-wrapper.sh .
  ```
- Ensure the wrapper script can be executed.

  ```shell
  chmod 755 ./tf-wrapper.sh
  ```
- Check if your organization already has an Access Context Manager policy.

  ```shell
  gcloud access-context-manager policies list --organization YOUR_ORGANIZATION_ID --format="value(name)"
  ```
- Rename `./envs/shared/terraform.example.tfvars` to `./envs/shared/terraform.tfvars` and update the file with values from your environment and bootstrap. You can re-run `terraform output` in the 0-bootstrap directory to find these values. Make sure that `default_region` is set to a valid BigQuery dataset region. Also, if the previous step showed a numeric value, make sure to un-comment the variable `create_access_context_manager_access_policy = false`. See the shared folder README.md for additional information on the values in the `terraform.tfvars` file.

- Commit changes.

  ```shell
  git add .
  git commit -m 'Your message'
  ```
- Push your plan branch. Assuming you configured an automatic trigger in your Jenkins controller (see the Jenkins sub-module README), this will trigger a plan. You can also trigger a Jenkins job manually; given the many ways to do this in Jenkins, that is out of the scope of this document, so see the Jenkins website for more details.

  ```shell
  git push --set-upstream origin plan
  ```
- Review the plan output in your controller's web UI.
- Merge changes to the production branch.

  ```shell
  git checkout -b production
  git push origin production
  ```
- Review the apply output in your controller's web UI. (You might want to use the "Scan Multibranch Pipeline Now" option in your Jenkins controller UI.)
- Change into the `1-org` folder, copy the Terraform wrapper script, and ensure it can be executed.

  ```shell
  cd 1-org
  cp ../build/tf-wrapper.sh .
  chmod 755 ./tf-wrapper.sh
  ```
- Change into the `envs/shared` folder and rename `terraform.example.tfvars` to `terraform.tfvars`.

  ```shell
  cd envs/shared
  mv terraform.example.tfvars terraform.tfvars
  ```
- Update the file with values from your environment and 0-bootstrap output.
- Use `terraform output` to get the backend bucket value from the 0-bootstrap output.

  ```shell
  export backend_bucket=$(terraform -chdir="../../../0-bootstrap/" output -raw gcs_bucket_tfstate)
  echo "backend_bucket = ${backend_bucket}"
  sed -i "s/TERRAFORM_STATE_BUCKET/${backend_bucket}/" ./terraform.tfvars
  ```
- Also update `backend.tf` with your backend bucket from the 0-bootstrap output.

  ```shell
  for i in $(find . -name 'backend.tf'); do sed -i "s/UPDATE_ME/${backend_bucket}/" "$i"; done
  ```
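To sanity-check what the substitution does, here is the same `sed` pattern applied to a sample line (pure string manipulation, no files touched; the bucket name is a stand-in):

```shell
backend_bucket="my-tfstate-bucket"   # stand-in for the 0-bootstrap output value
line='bucket = "UPDATE_ME"'
echo "$line" | sed "s/UPDATE_ME/${backend_bucket}/"
# bucket = "my-tfstate-bucket"
```

After running the loop, you can verify that no placeholders remain with `grep -r UPDATE_ME .` (it should print nothing).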
- Return to the `1-org` folder.

  ```shell
  cd ../../../1-org
  ```
We will now deploy our environment (production) using this script. When using Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 1-org step, and only the corresponding environment is applied.
To use the `validate` option of the `tf-wrapper.sh` script, please follow the instructions to install the terraform-tools component.
- Use `terraform output` to get the Cloud Build project ID and the organization step Terraform Service Account from the 0-bootstrap output. The environment variable `GOOGLE_IMPERSONATE_SERVICE_ACCOUNT` will be set to the Terraform Service Account to enable impersonation.

  ```shell
  export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="../0-bootstrap/" output -raw cloudbuild_project_id)
  echo ${CLOUD_BUILD_PROJECT_ID}

  export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=$(terraform -chdir="../0-bootstrap/" output -raw organization_step_terraform_service_account_email)
  echo ${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}
  ```
- Run `init` and `plan` and review the output.

  ```shell
  ./tf-wrapper.sh init production
  ./tf-wrapper.sh plan production
  ```
- Run `validate` and check for violations.

  ```shell
  ./tf-wrapper.sh validate production $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  ```
- Run `apply` for production.

  ```shell
  ./tf-wrapper.sh apply production
  ```
If you received any errors or made any changes to the Terraform config or `terraform.tfvars`, you must re-run `./tf-wrapper.sh plan production` before running `./tf-wrapper.sh apply production`.
Before executing the next stages, unset the `GOOGLE_IMPERSONATE_SERVICE_ACCOUNT` environment variable.

```shell
unset GOOGLE_IMPERSONATE_SERVICE_ACCOUNT
```