From fec510659239264540ecbecaeebdc194cd1ff209 Mon Sep 17 00:00:00 2001 From: Dan Miller Date: Wed, 7 Aug 2024 14:04:22 -0400 Subject: [PATCH] Replace Admonition Style (#1092) Co-authored-by: Erik Osterman (CEO @ Cloud Posse) --- .../eks/karpenter-provisioner/README.md | 16 ++--- modules/account/README.md | 56 +++++++-------- modules/auth0/tenant/README.md | 10 +-- modules/aws-config/README.md | 69 ++++++++++--------- modules/aws-sso/README.md | 14 ++-- modules/dns-primary/README.md | 10 ++- modules/ecr/README.md | 10 ++- .../actions-runner-controller/CHANGELOG.md | 12 ++-- modules/eks/cluster/CHANGELOG.md | 48 +++++++------ modules/eks/cluster/README.md | 64 ++++++++--------- modules/eks/datadog-agent/README.md | 10 ++- modules/eks/karpenter/CHANGELOG.md | 54 +++++++-------- modules/eks/karpenter/README.md | 46 ++++++------- modules/github-runners/README.md | 30 ++++---- modules/network-firewall/README.md | 38 +++++----- modules/spacelift/README.md | 18 ++--- modules/tfstate-backend/README.md | 20 +++--- 17 files changed, 242 insertions(+), 283 deletions(-) diff --git a/deprecated/eks/karpenter-provisioner/README.md b/deprecated/eks/karpenter-provisioner/README.md index 9f0ce2010..5b79ab02d 100644 --- a/deprecated/eks/karpenter-provisioner/README.md +++ b/deprecated/eks/karpenter-provisioner/README.md @@ -1,13 +1,13 @@ # Component: `eks/karpenter-provisioner` -:::warning This component is DEPRECATED - -With v1beta1 of Karpenter, the `provisioner` component is deprecated. -Please use the `eks/karpenter-node-group` component instead. - -For more details, see the [Karpenter v1beta1 release notes](/modules/eks/karpenter/CHANGELOG.md). - -::: +> [!WARNING] +> +> #### This component is DEPRECATED +> +> With v1beta1 of Karpenter, the `provisioner` component is deprecated. +> Please use the `eks/karpenter-node-group` component instead. +> +> For more details, see the [Karpenter v1beta1 release notes](/modules/eks/karpenter/CHANGELOG.md). This component deploys [Karpenter provisioners](https://karpenter.sh/v0.18.0/aws/provisioning) on an EKS cluster. diff --git a/modules/account/README.md b/modules/account/README.md index 9e0613bfb..1f35f30b8 100644 --- a/modules/account/README.md +++ b/modules/account/README.md @@ -4,13 +4,11 @@ This component is responsible for provisioning the full account hierarchy along includes the ability to associate Service Control Policies (SCPs) to the Organization, each Organizational Unit and account. -:::info - -Part of a -[cold start](https://docs.cloudposse.com/reference-architecture/how-to-guides/implementation/enterprise/implement-aws-cold-start) -so it has to be initially run with `SuperAdmin` role. - -::: +> [!NOTE] +> +> Part of a +> [cold start](https://docs.cloudposse.com/reference-architecture/how-to-guides/implementation/enterprise/implement-aws-cold-start) +> so it has to be initially run with `SuperAdmin` role. In addition, it enables [AWS IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html), which helps @@ -178,15 +176,13 @@ SuperAdmin) credentials you have saved in 1Password. #### Request an increase in the maximum number of accounts allowed -:::caution - -Make sure your support plan for the _root_ account was upgraded to the "Business" level (or Higher). This is necessary -to expedite the quota increase requests, which could take several days on a basic support plan. 
Without it, AWS support -will claim that since we’re not currently utilizing any of the resources, so they do not want to approve the requests. -AWS support is not aware of your other organization. If AWS still gives you problems, please escalate to your AWS TAM. -See [AWS](https://docs.cloudposse.com/reference-architecture/reference/aws). - -::: +> [!WARNING] +> +> Make sure your support plan for the _root_ account was upgraded to the "Business" level (or Higher). This is necessary +> to expedite the quota increase requests, which could take several days on a basic support plan. Without it, AWS +> support will claim that since we’re not currently utilizing any of the resources, so they do not want to approve the +> requests. AWS support is not aware of your other organization. If AWS still gives you problems, please escalate to +> your AWS TAM. See [AWS](https://docs.cloudposse.com/reference-architecture/reference/aws). 1. From the region list, select "US East (N. Virginia) us-east-1". @@ -318,21 +314,19 @@ atmos terraform import account --stack core-gbl-root 'aws_organizations_organiza AWS accounts and organizational units are generated dynamically by the `terraform/account` component using the configuration in the `gbl-root` stack. -:::info _**Special note:**_ - -In the rare case where you will need to be enabling non-default AWS Regions, temporarily comment out the -`DenyRootAccountAccess` service control policy setting in `gbl-root.yaml`. You will restore it later, after enabling the -optional Regions. See related: -[Decide on Opting Into Non-default Regions](https://docs.cloudposse.com/reference-architecture/design-decisions/cold-start/decide-on-opting-into-non-default-regions) - -::: - -:::caution You must wait until your quota increase request has been granted - -If you try to create the accounts before the quota increase is granted, you can expect to see failures like -`ACCOUNT_NUMBER_LIMIT_EXCEEDED`. - -::: +> [!IMPORTANT] +> +> In the rare case where you will need to be enabling non-default AWS Regions, temporarily comment out the +> `DenyRootAccountAccess` service control policy setting in `gbl-root.yaml`. You will restore it later, after enabling +> the optional Regions. See related: +> [Decide on Opting Into Non-default Regions](https://docs.cloudposse.com/reference-architecture/design-decisions/cold-start/decide-on-opting-into-non-default-regions) + +> [!TIP] +> +> #### You must wait until your quota increase request has been granted +> +> If you try to create the accounts before the quota increase is granted, you can expect to see failures like +> `ACCOUNT_NUMBER_LIMIT_EXCEEDED`. In the Geodesic shell, execute the following commands to provision AWS Organizational Units and AWS accounts: diff --git a/modules/auth0/tenant/README.md b/modules/auth0/tenant/README.md index 2562725f6..e165ca834 100644 --- a/modules/auth0/tenant/README.md +++ b/modules/auth0/tenant/README.md @@ -42,11 +42,11 @@ in Terraform. Follow the [Auth0 provider documentation](https://registry.terraform.io/providers/auth0/auth0/latest/docs/guides/quickstart) to create a Machine to Machine application. -:::tip Machine to Machine App Name - -Use the Context Label format for the machine name for consistency. For example, `acme-plat-gbl-prod-auth0-provider`. - -::: +> [!TIP] +> +> #### Machine to Machine App Name +> +> Use the Context Label format for the machine name for consistency. For example, `acme-plat-gbl-prod-auth0-provider`. 
After creating the Machine to Machine application, add the app's domain, client ID, and client secret to AWS Systems Manager Parameter Store in the same account and region as this component deployment. The path for the parameters are diff --git a/modules/aws-config/README.md b/modules/aws-config/README.md index c8f35b94c..c280c627b 100644 --- a/modules/aws-config/README.md +++ b/modules/aws-config/README.md @@ -20,25 +20,25 @@ Some of the key features of AWS Config include: - Notifications and alerts: AWS Config can send notifications and alerts when changes are made to your AWS resources that could impact their compliance or security posture. -:::caution AWS Config Limitations - -You'll also want to be aware of some limitations with AWS Config: - -- The maximum number of AWS Config rules that can be evaluated in a single account is 1000. - - This can be mitigated by removing rules that are duplicated across packs. You'll have to manually search for these - duplicates. - - You can also look for rules that do not apply to any resources and remove those. You'll have to manually click - through rules in the AWS Config interface to see which rules are not being evaluated. - - If you end up still needing more than 1000 rules, one recommendation is to only run packs on a schedule with a - lambda that removes the pack after results are collected. If you had different schedule for each day of the week, - that would mean 7000 rules over the week. The aggregators would not be able to handle this, so you would need to - make sure to store them somewhere else (i.e. S3) so the findings are not lost. - - See the - [Audit Manager docs](https://aws.amazon.com/blogs/mt/integrate-across-the-three-lines-model-part-2-transform-aws-config-conformance-packs-into-aws-audit-manager-assessments/) - if you think you would like to convert conformance packs to custom Audit Manager assessments. -- The maximum number of AWS Config conformance packs that can be created in a single account is 50. - -::: +> [!WARNING] +> +> #### AWS Config Limitations +> +> You'll also want to be aware of some limitations with AWS Config: +> +> - The maximum number of AWS Config rules that can be evaluated in a single account is 1000. +> - This can be mitigated by removing rules that are duplicated across packs. You'll have to manually search for these +> duplicates. +> - You can also look for rules that do not apply to any resources and remove those. You'll have to manually click +> through rules in the AWS Config interface to see which rules are not being evaluated. +> - If you end up still needing more than 1000 rules, one recommendation is to only run packs on a schedule with a +> lambda that removes the pack after results are collected. If you had different schedule for each day of the week, +> that would mean 7000 rules over the week. The aggregators would not be able to handle this, so you would need to +> make sure to store them somewhere else (i.e. S3) so the findings are not lost. +> - See the +> [Audit Manager docs](https://aws.amazon.com/blogs/mt/integrate-across-the-three-lines-model-part-2-transform-aws-config-conformance-packs-into-aws-audit-manager-assessments/) +> if you think you would like to convert conformance packs to custom Audit Manager assessments. +> - The maximum number of AWS Config conformance packs that can be created in a single account is 50. 
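A quick, hedged way to gauge how close an account and region are to the 1,000-rule limit described in the warning above is to query AWS Config directly with the AWS CLI. This is illustrative only; it assumes credentials for the target account and relies on the CLI's automatic pagination:

```shell
# Count the AWS Config rules currently deployed in this account/region
aws configservice describe-config-rules --query 'length(ConfigRules)'

# List rule names, sorted, to make overlapping rules deployed by different
# conformance packs easier to spot by eye
aws configservice describe-config-rules \
  --query 'ConfigRules[].ConfigRuleName' \
  --output text | tr '\t' '\n' | sort
```
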
Overall, AWS Config provides you with a powerful toolset to help you monitor and manage the configurations of your AWS resources, ensuring that they remain compliant, secure, and properly configured over time. @@ -79,21 +79,22 @@ Before deploying this AWS Config component `config-bucket` and `cloudtrail-bucke This component has a `default_scope` variable for configuring if it will be an organization-wide or account-level component by default. Note that this can be overridden by the `scope` variable in the `conformance_packs` items. -:::info Using the account default_scope - -If default_scope == `account`, AWS Config is regional AWS service, so this component needs to be deployed to all -regions. If an individual `conformance_packs` item has `scope` set to `organization`, that particular pack will be -deployed to the organization level. - -::: - -:::info Using the organization default_scope - -If default_scope == `organization`, AWS Config is global unless overriden in the `conformance_packs` items. You will -need to update your org to allow the `config-multiaccountsetup.amazonaws.com` service access principal for this to work. -If you are using our `account` component, just add that principal to the `aws_service_access_principals` variable. - -::: +> [!TIP] +> +> #### Using the account default_scope +> +> If default_scope == `account`, AWS Config is regional AWS service, so this component needs to be deployed to all +> regions. If an individual `conformance_packs` item has `scope` set to `organization`, that particular pack will be +> deployed to the organization level. + +> [!TIP] +> +> #### Using the organization default_scope +> +> If default_scope == `organization`, AWS Config is global unless overriden in the `conformance_packs` items. You will +> need to update your org to allow the `config-multiaccountsetup.amazonaws.com` service access principal for this to +> work. If you are using our `account` component, just add that principal to the `aws_service_access_principals` +> variable. At the AWS Organizational level, the Components designate an account to be the `central collection account` and a single region to be the `central collection region` so that compliance information can be aggregated into a central location. diff --git a/modules/aws-sso/README.md b/modules/aws-sso/README.md index e351537be..d51fa0db4 100644 --- a/modules/aws-sso/README.md +++ b/modules/aws-sso/README.md @@ -32,14 +32,12 @@ recommended `gbl-root` stack. ### Google Workspace -:::important - -> Your identity source is currently configured as 'External identity provider'. To add new groups or edit their -> memberships, you must do this using your external identity provider. - -Groups _cannot_ be created with ClickOps in the AWS console and instead must be created with AWS API. - -::: +> [!IMPORTANT] +> +> > Your identity source is currently configured as 'External identity provider'. To add new groups or edit their +> > memberships, you must do this using your external identity provider. +> +> Groups _cannot_ be created with ClickOps in the AWS console and instead must be created with AWS API. Google Workspace is now supported by AWS Identity Center, but Group creation is not automatically handled. 
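As a hedged illustration of creating a group through the API rather than the console, the Identity Store API can be called from the AWS CLI. The identity store ID and group name below are placeholders, not values from this guide:

```shell
# Look up the Identity Store ID for the IAM Identity Center instance
aws sso-admin list-instances --query 'Instances[0].IdentityStoreId' --output text

# Create a group in the identity store (display name is a placeholder)
aws identitystore create-group \
  --identity-store-id d-1234567890 \
  --display-name "Developers"
```
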
After [configuring SAML and SCIM with Google Workspace and IAM Identity Center following the AWS documentation](https://docs.aws.amazon.com/singlesignon/latest/userguide/gs-gwp.html), diff --git a/modules/dns-primary/README.md b/modules/dns-primary/README.md index 9d6d0df29..b53c42776 100644 --- a/modules/dns-primary/README.md +++ b/modules/dns-primary/README.md @@ -93,12 +93,10 @@ components: YourVeryLongStringGoesHere ``` -:::info - -Use the [acm](https://docs.cloudposse.com/components/library/aws/acm) component for more advanced certificate -requirements. - -::: +> [!TIP] +> +> Use the [acm](https://docs.cloudposse.com/components/library/aws/acm) component for more advanced certificate +> requirements. diff --git a/modules/ecr/README.md b/modules/ecr/README.md index ea55966cf..7ee7c4396 100644 --- a/modules/ecr/README.md +++ b/modules/ecr/README.md @@ -6,12 +6,10 @@ This utilizes to assign accounts to various roles. It is also compatible with the [GitHub Actions IAM Role mixin](https://github.com/cloudposse/terraform-aws-components/blob/master/mixins/github-actions-iam-role/README-github-action-iam-role.md). -:::caution - -Older versions of our reference architecture have an`eks-iam` component that needs to be updated to provide sufficient -IAM roles to allow pods to pull from ECR repos - -::: +> [!WARNING] +> +> Older versions of our reference architecture have an`eks-iam` component that needs to be updated to provide sufficient +> IAM roles to allow pods to pull from ECR repos ## Usage diff --git a/modules/eks/actions-runner-controller/CHANGELOG.md b/modules/eks/actions-runner-controller/CHANGELOG.md index d3c2cc338..5fa8bdc77 100644 --- a/modules/eks/actions-runner-controller/CHANGELOG.md +++ b/modules/eks/actions-runner-controller/CHANGELOG.md @@ -76,12 +76,12 @@ of memory allocated to the runner Pod to account for this. This is generally not small enough amount of disk space that it can be reasonably stored in the RAM allocated to a single CPU in an EC2 instance, so it is the CPU that remains the limiting factor in how many Runners can be run on an instance. -:::warning You must configure a memory request for the runner Pod - -When using `tmpfs_enabled`, you must configure a memory request for the runner Pod. If you do not, a single Pod would be -allowed to consume half the Node's memory just for its disk storage. - -::: +> [!WARNING] +> +> #### You must configure a memory request for the runner Pod +> +> When using `tmpfs_enabled`, you must configure a memory request for the runner Pod. If you do not, a single Pod would +> be allowed to consume half the Node's memory just for its disk storage. #### Configure startup timeout via `wait_for_docker_seconds` diff --git a/modules/eks/cluster/CHANGELOG.md b/modules/eks/cluster/CHANGELOG.md index bef5b7e2f..ae11e83df 100644 --- a/modules/eks/cluster/CHANGELOG.md +++ b/modules/eks/cluster/CHANGELOG.md @@ -49,13 +49,13 @@ Components PR [#1033](https://github.com/cloudposse/terraform-aws-components/pul ### Major Breaking Changes -:::warning Major Breaking Changes, Manual Intervention Required - -This release includes a major breaking change that requires manual intervention to migrate existing clusters. The change -is necessary to support the new AWS Access Control API, which is more secure and more reliable than the old `aws-auth` -ConfigMap. - -::: +> [!WARNING] +> +> #### Major Breaking Changes, Manual Intervention Required +> +> This release includes a major breaking change that requires manual intervention to migrate existing clusters. 
The +> change is necessary to support the new AWS Access Control API, which is more secure and more reliable than the old +> `aws-auth` ConfigMap. This release drops support for the `aws-auth` ConfigMap and switches to managing access control with the new AWS Access Control API. This change allows for more secure and reliable access control, and removes the requirement that Terraform @@ -65,18 +65,18 @@ In this release, this component only supports assigning "team roles" to Kubernet Access Policies is not yet implemented. However, if you specify `system:masters` as a group, that will be translated into assigning the `AmazonEKSClusterAdminPolicy` to the role. Any other `system:*` group will cause an error. -:::tip Network Access Considerations - -Previously, this component required network access to the EKS control plane to manage the `aws-auth` ConfigMap. This -meant having the EKS control plane accessible from the public internet, or using a bastion host or VPN to access the -control plane. With the new AWS Access Control API, Terraform operations on the EKS cluster no longer require network -access to the EKS control plane. - -This may seem like it makes it easier to secure the EKS control plane, but Terraform users will still require network -access to the EKS control plane to manage any deployments or other Kubernetes resources in the cluster. This means that -this upgrade does not substantially change the need for network access. - -::: +> [!TIP] +> +> #### Network Access Considerations +> +> Previously, this component required network access to the EKS control plane to manage the `aws-auth` ConfigMap. This +> meant having the EKS control plane accessible from the public internet, or using a bastion host or VPN to access the +> control plane. With the new AWS Access Control API, Terraform operations on the EKS cluster no longer require network +> access to the EKS control plane. +> +> This may seem like it makes it easier to secure the EKS control plane, but Terraform users will still require network +> access to the EKS control plane to manage any deployments or other Kubernetes resources in the cluster. This means +> that this upgrade does not substantially change the need for network access. ### Minor Changes @@ -94,12 +94,10 @@ Full details of the migration process can be found in the `cloudposse/terraform- [migration document](https://github.com/cloudposse/terraform-aws-eks-cluster/blob/main/docs/migration-v3-v4.md). This section is a streamlined version for users of this `eks/cluster` component. -:::important - -The commands below assume the component is named "eks/cluster". If you are using a different name, replace "eks/cluster" -with the correct component name. - -::: +> [!IMPORTANT] +> +> The commands below assume the component is named "eks/cluster". If you are using a different name, replace +> "eks/cluster" with the correct component name. #### Prepare for Migration diff --git a/modules/eks/cluster/README.md b/modules/eks/cluster/README.md index f8e6c29ed..cb86484ad 100644 --- a/modules/eks/cluster/README.md +++ b/modules/eks/cluster/README.md @@ -3,14 +3,14 @@ This component is responsible for provisioning an end-to-end EKS Cluster, including managed node groups and Fargate profiles. -:::note Windows not supported - -This component has not been tested with Windows worker nodes of any launch type. 
Although upstream modules support -Windows nodes, there are likely issues around incorrect or insufficient IAM permissions or other configuration that -would need to be resolved for this component to properly configure the upstream modules for Windows nodes. If you need -Windows nodes, please experiment and be on the lookout for issues, and then report any issues to Cloud Posse. - -::: +> [!NOTE] +> +> #### Windows not supported +> +> This component has not been tested with Windows worker nodes of any launch type. Although upstream modules support +> Windows nodes, there are likely issues around incorrect or insufficient IAM permissions or other configuration that +> would need to be resolved for this component to properly configure the upstream modules for Windows nodes. If you need +> Windows nodes, please experiment and be on the lookout for issues, and then report any issues to Cloud Posse. ## Usage @@ -191,9 +191,9 @@ components: # Also, it is only supported for AL2 and some Windows AMIs, not BottleRocket or AL2023. # Kubernetes docs: https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ kubelet_extra_args: >- - --kube-reserved cpu=100m,memory=0.6Gi,ephemeral-storage=1Gi - --system-reserved cpu=100m,memory=0.2Gi,ephemeral-storage=1Gi - --eviction-hard memory.available<200Mi,nodefs.available<10%,imagefs.available<15% + --kube-reserved cpu=100m,memory=0.6Gi,ephemeral-storage=1Gi --system-reserved + cpu=100m,memory=0.2Gi,ephemeral-storage=1Gi --eviction-hard + memory.available<200Mi,nodefs.available<10%,imagefs.available<15% block_device_map: # EBS volume for local ephemeral storage # IGNORED if legacy `disk_encryption_enabled` or `disk_size` are set! @@ -294,14 +294,12 @@ You can also view the release and support timeline for EKS clusters support “Addons” that can be automatically installed on a cluster. Install these addons with the [`var.addons` input](https://docs.cloudposse.com/components/library/aws/eks/cluster/#input_addons). -:::info - -Run the following command to see all available addons, their type, and their publisher. You can also see the URL for -addons that are available through the AWS Marketplace. Replace 1.27 with the version of your cluster. See -[Creating an addon](https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#creating-an-add-on) for more -details. - -::: +> [!TIP] +> +> Run the following command to see all available addons, their type, and their publisher. You can also see the URL for +> addons that are available through the AWS Marketplace. Replace 1.27 with the version of your cluster. See +> [Creating an addon](https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#creating-an-add-on) for +> more details. ```shell EKS_K8S_VERSION=1.29 # replace with your cluster version @@ -309,12 +307,10 @@ aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION \ --query 'addons[].{MarketplaceProductUrl: marketplaceInformation.productUrl, Name: addonName, Owner: owner Publisher: publisher, Type: type}' --output table ``` -:::info - -You can see which versions are available for each addon by executing the following commands. Replace 1.29 with the -version of your cluster. - -::: +> [!TIP] +> +> You can see which versions are available for each addon by executing the following commands. Replace 1.29 with the +> version of your cluster. 
```shell EKS_K8S_VERSION=1.29 # replace with your cluster version @@ -394,16 +390,14 @@ addons: addon_version: "v1.8.7-eksbuild.1" ``` -:::warning - -Addons may not be suitable for all use-cases! For example, if you are deploying Karpenter to Fargate and using Karpenter -to provision all nodes, these nodes will never be available before the cluster component is deployed if you are using -the CoreDNS addon (for example). - -This is one of the reasons we recommend deploying a managed node group: to ensure that the addons will become fully -functional during deployment of the cluster. - -::: +> [!WARNING] +> +> Addons may not be suitable for all use-cases! For example, if you are deploying Karpenter to Fargate and using +> Karpenter to provision all nodes, these nodes will never be available before the cluster component is deployed if you +> are using the CoreDNS addon (for example). +> +> This is one of the reasons we recommend deploying a managed node group: to ensure that the addons will become fully +> functional during deployment of the cluster. For more information on upgrading EKS Addons, see ["How to Upgrade EKS Cluster Addons"](https://docs.cloudposse.com/reference-architecture/how-to-guides/upgrades/how-to-upgrade-eks-cluster-addons/) diff --git a/modules/eks/datadog-agent/README.md b/modules/eks/datadog-agent/README.md index c4373bab5..58791fa45 100644 --- a/modules/eks/datadog-agent/README.md +++ b/modules/eks/datadog-agent/README.md @@ -105,12 +105,10 @@ for `.yaml`. #### Sample Yaml -:::caution - -The key of a filename must match datadog docs, which is `.yaml` -[Datadog Cluster Checks](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/?tab=helm#configuration-from-static-configuration-files) - -::: +> [!WARNING] +> +> The key of a filename must match datadog docs, which is `.yaml` > +> [Datadog Cluster Checks](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/?tab=helm#configuration-from-static-configuration-files) Cluster Checks **can** be used for external URL testing (loadbalancer endpoints), whereas annotations **must** be used for kubernetes services. diff --git a/modules/eks/karpenter/CHANGELOG.md b/modules/eks/karpenter/CHANGELOG.md index bd5b3bc24..55e5d90f3 100644 --- a/modules/eks/karpenter/CHANGELOG.md +++ b/modules/eks/karpenter/CHANGELOG.md @@ -22,27 +22,27 @@ Policy. This has also been fixed by making the `v1alpha` policy a separate manag controller's role, rather than merging the statements into the `v1beta` policy. This change also avoids potential conflicts with policy SIDs. -:::note Innocuous Changes - -Terraform will show IAM Policy changes, including deletion of statements from the existing policy and creation of a new -policy. This is expected and innocuous. The IAM Policy has been split into 2 to avoid exceeding length limits, but the -current (`v1beta`) policy remains the same and the now separate (`v1alpha`) policy has been corrected. - -::: +> [!NOTE] +> +> #### Innocuous Changes +> +> Terraform will show IAM Policy changes, including deletion of statements from the existing policy and creation of a +> new policy. This is expected and innocuous. The IAM Policy has been split into 2 to avoid exceeding length limits, but +> the current (`v1beta`) policy remains the same and the now separate (`v1alpha`) policy has been corrected. 
## Version 1.445.0 Components [PR #1039](https://github.com/cloudposse/terraform-aws-components/pull/1039) -:::warning Major Breaking Changes - -Karpenter at version v0.33.0 transitioned from the `v1alpha` API to the `v1beta` API with many breaking changes. This -component (`eks/karpenter`) changed as well, dropping support for the `v1alpha` API and adding support for the `v1beta` -API. At the same time, the corresponding `eks/karpenter-provisioner` component was replaced with the -`eks/karpenter-node-pool` component. The old components remain available under the -[`deprecated/`](https://github.com/cloudposse/terraform-aws-components/tree/main/deprecated) directory. - -::: +> [!WARNING] +> +> #### Major Breaking Changes +> +> Karpenter at version v0.33.0 transitioned from the `v1alpha` API to the `v1beta` API with many breaking changes. This +> component (`eks/karpenter`) changed as well, dropping support for the `v1alpha` API and adding support for the +> `v1beta` API. At the same time, the corresponding `eks/karpenter-provisioner` component was replaced with the +> `eks/karpenter-node-pool` component. The old components remain available under the +> [`deprecated/`](https://github.com/cloudposse/terraform-aws-components/tree/main/deprecated) directory. The full list of changes in Karpenter is too extensive to repeat here. See the [Karpenter v1beta Migration Guide](https://karpenter.sh/v0.32/upgrading/v1beta1-migration/) and the @@ -106,18 +106,16 @@ kubectl annotate crd awsnodetemplates.karpenter.k8s.aws provisioners.karpenter.s kubectl annotate crd awsnodetemplates.karpenter.k8s.aws provisioners.karpenter.sh meta.helm.sh/release-namespace=karpenter --overwrite ``` -:::info - -Previously the `karpenter-crd-upgrade` script included deploying the `karpenter-crd` chart. Now that this chart is moved -to Terraform, that helm deployment is no longer necessary. - -For reference, the `karpenter-crd` chart can be installed with helm with the following: - -```bash -helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "$VERSION" --namespace karpenter -``` - -::: +> [!NOTE] +> +> Previously the `karpenter-crd-upgrade` script included deploying the `karpenter-crd` chart. Now that this chart is +> moved to Terraform, that helm deployment is no longer necessary. +> +> For reference, the `karpenter-crd` chart can be installed with helm with the following: +> +> ```bash +> helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "$VERSION" --namespace karpenter +> ``` Now that the CRDs are upgraded, the component is ready to be applied. Apply the `eks/karpenter` component and then apply `eks/karpenter-provisioner`. diff --git a/modules/eks/karpenter/README.md b/modules/eks/karpenter/README.md index 5732ce94c..f13cbcfaa 100644 --- a/modules/eks/karpenter/README.md +++ b/modules/eks/karpenter/README.md @@ -94,12 +94,12 @@ The process of provisioning Karpenter on an EKS cluster consists of 3 steps. ### 1. Provision EKS IAM Role for Nodes Launched by Karpenter -:::note VPC assumptions being made - -We assume you've already created a VPC using our [VPC component](/modules/vpc) and have private subnets already set up. -The Karpenter node pools will be launched in the private subnets. - -::: +> [!NOTE] +> +> #### VPC assumptions being made +> +> We assume you've already created a VPC using our [VPC component](/modules/vpc) and have private subnets already set +> up. The Karpenter node pools will be launched in the private subnets. 
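If that VPC is not in place yet, a minimal sketch of the `vpc` component configuration might look like the following. The variable names and values here are assumptions based on a typical deployment, not part of this guide; adjust them to your own stacks:

```yaml
components:
  terraform:
    vpc:
      vars:
        enabled: true
        name: vpc
        availability_zones: ["us-east-1a", "us-east-1b", "us-east-1c"]
        ipv4_primary_cidr_block: "10.111.0.0/18"
        # NAT gateways give the private subnets (where Karpenter nodes run)
        # outbound internet access to pull images
        nat_gateway_enabled: true
        max_subnet_count: 3
```
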
EKS IAM Role for Nodes launched by Karpenter are provisioned by the `eks/cluster` component. (EKS can also provision a Fargate Profile for Karpenter, but deploying Karpenter to Fargate is not recommended.): @@ -116,11 +116,9 @@ components: karpenter_iam_role_enabled: true ``` -:::note Authorization - -- The AWS Auth API for EKS is used to authorize the Karpenter controller to interact with the EKS cluster. - -::: +> [!NOTE] +> +> The AWS Auth API for EKS is used to authorize the Karpenter controller to interact with the EKS cluster. Karpenter is installed using a Helm chart. The Helm chart installs the Karpenter controller and a webhook pod as a Deployment that needs to run before the controller can be used for scaling your cluster. We recommend a minimum of one @@ -189,12 +187,12 @@ In this step, we provision the `components/terraform/eks/karpenter-node-pool` co [NodePools](https://karpenter.sh/v0.36/getting-started/getting-started-with-karpenter/#5-create-nodepool) using the `kubernetes_manifest` resource. -:::note Why use a separate component for NodePools? - -We create the NodePools as a separate component since the CRDs for the NodePools are created by the Karpenter component. -This helps manage dependencies. - -::: +> [!TIP] +> +> #### Why use a separate component for NodePools? +> +> We create the NodePools as a separate component since the CRDs for the NodePools are created by the Karpenter +> component. This helps manage dependencies. First, create an abstract component for the `eks/karpenter-node-pool` component: @@ -287,13 +285,13 @@ interruption events include: - Instance Terminating Events - Instance Stopping Events -:::info Interruption Handler vs. Termination Handler - -The Node Interruption Handler is not the same as the Node Termination Handler. The latter is always enabled and cleanly -shuts down the node in 2 minutes in response to a Node Termination event. The former gets advance notice that a node -will soon be terminated, so it can have 5-10 minutes to shut down a node. - -::: +> [!TIP] +> +> #### Interruption Handler vs. Termination Handler +> +> The Node Interruption Handler is not the same as the Node Termination Handler. The latter is always enabled and +> cleanly shuts down the node in 2 minutes in response to a Node Termination event. The former gets advance notice that +> a node will soon be terminated, so it can have 5-10 minutes to shut down a node. For more details, see refer to the [Karpenter docs](https://karpenter.sh/v0.32/concepts/disruption/#interruption) and [FAQ](https://karpenter.sh/v0.32/faq/#interruption-handling) diff --git a/modules/github-runners/README.md b/modules/github-runners/README.md index 8cd4fa30a..e36a78b29 100644 --- a/modules/github-runners/README.md +++ b/modules/github-runners/README.md @@ -2,12 +2,10 @@ This component is responsible for provisioning EC2 instances for GitHub runners. -:::info - -We also have a similar component based on -[actions-runner-controller](https://github.com/actions-runner-controller/actions-runner-controller) for Kubernetes. - -::: +> [!TIP] +> +> We also have a similar component based on +> [actions-runner-controller](https://github.com/actions-runner-controller/actions-runner-controller) for Kubernetes. ## Requirements @@ -179,13 +177,11 @@ permissions “mode” for Self-hosted runners to Read-Only. The instructions fo ### Creating Registration Token -:::info - -We highly recommend using a GitHub Application with the github-action-token-rotator module to generate the Registration -Token. 
This will ensure that the token is rotated and that the token is stored in SSM Parameter Store encrypted with -KMS. - -::: +> [!TIP] +> +> We highly recommend using a GitHub Application with the github-action-token-rotator module to generate the +> Registration Token. This will ensure that the token is rotated and that the token is stored in SSM Parameter Store +> encrypted with KMS. #### GitHub Application @@ -224,11 +220,9 @@ and skip the rest. Otherwise, complete the private key setup in `core- [!TIP] +> +> If you change the Private Key saved in SSM, redeploy `github-action-token-rotator` #### (ClickOps) Obtain the Runner Registration Token diff --git a/modules/network-firewall/README.md b/modules/network-firewall/README.md index a7fe6c867..4d2b122a8 100644 --- a/modules/network-firewall/README.md +++ b/modules/network-firewall/README.md @@ -9,16 +9,14 @@ including Network Firewall, firewall policy, rule groups, and logging configurat Example of a Network Firewall with stateful 5-tuple rules: -:::info - -The "5-tuple" means the five items (columns) that each rule (row, or tuple) in a firewall policy uses to define whether -to block or allow traffic: source and destination IP, source and destination port, and protocol. - -Refer to -[Standard stateful rule groups in AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-basic.html) -for more details. - -::: +> [!TIP] +> +> The "5-tuple" means the five items (columns) that each rule (row, or tuple) in a firewall policy uses to define +> whether to block or allow traffic: source and destination IP, source and destination port, and protocol. +> +> Refer to +> [Standard stateful rule groups in AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-basic.html) +> for more details. ```yaml components: @@ -89,17 +87,15 @@ components: Example of a Network Firewall with [Suricata](https://suricata.readthedocs.io/en/suricata-6.0.0/rules/) rules: -:::info - -For [Suricata](https://suricata.io/) rule group type, you provide match and action settings in a string, in a Suricata -compatible specification. The specification fully defines what the stateful rules engine looks for in a traffic flow and -the action to take on the packets in a flow that matches the inspection criteria. - -Refer to -[Suricata compatible rule strings in AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-suricata.html) -for more details. - -::: +> [!TIP] +> +> For [Suricata](https://suricata.io/) rule group type, you provide match and action settings in a string, in a Suricata +> compatible specification. The specification fully defines what the stateful rules engine looks for in a traffic flow +> and the action to take on the packets in a flow that matches the inspection criteria. +> +> Refer to +> [Suricata compatible rule strings in AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-suricata.html) +> for more details. ```yaml components: diff --git a/modules/spacelift/README.md b/modules/spacelift/README.md index 4adc6c1d4..864cdbbb7 100644 --- a/modules/spacelift/README.md +++ b/modules/spacelift/README.md @@ -122,11 +122,9 @@ components: #### Deployment -:::info - -The following steps assume that you've already authenticated with Spacelift locally. 
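One hedged way to do that is to export API key credentials for the Spacelift Terraform provider before running the commands below. The endpoint and key values are placeholders, and any of the provider's documented authentication methods will work equally well:

```shell
# Credentials for the Spacelift Terraform provider (API key created in the Spacelift UI)
export SPACELIFT_API_KEY_ENDPOINT="https://<your-account>.app.spacelift.io"
export SPACELIFT_API_KEY_ID="<api-key-id>"
export SPACELIFT_API_KEY_SECRET="<api-key-secret>"
```
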
- -::: +> [!TIP] +> +> The following steps assume that you've already authenticated with Spacelift locally. First deploy Spaces and policies with the `spaces` component: @@ -153,12 +151,10 @@ following: + core-ue1-auto-spacelift-worker-pool ``` -:::info - -The `spacelift/worker-pool` component is deployed to a specific tenant, stage, and region but is still deployed by the -root administrator stack. Verify the administrator stack by checking the `managed-by:` label. - -::: +> [!TIP] +> +> The `spacelift/worker-pool` component is deployed to a specific tenant, stage, and region but is still deployed by the +> root administrator stack. Verify the administrator stack by checking the `managed-by:` label. Finally, deploy the Spacelift Worker Pool (change the stack-slug to match your configuration): diff --git a/modules/tfstate-backend/README.md b/modules/tfstate-backend/README.md index 70da2f2ba..03aaa0818 100644 --- a/modules/tfstate-backend/README.md +++ b/modules/tfstate-backend/README.md @@ -10,14 +10,12 @@ wish to restrict who can read the production Terraform state backend S3 bucket. all Terraform users require read access to the most sensitive accounts, such as `root` and `audit`, in order to read security configuration information, so careful planning is required when architecting backend splits. -:::info - -Part of cold start, so it has to initially be run with `SuperAdmin`, multiple -times: to create the S3 bucket and then to move the state into it. Follow -the guide **[here](https://docs.cloudposse.com/reference-architecture/how-to-guides/implementation/enterprise/implement-aws-cold-start/#provision-tfstate-backend-component)** -to get started. - -::: +> [!TIP] +> +> Part of cold start, so it has to initially be run with `SuperAdmin`, multiple times: to create the S3 bucket and then +> to move the state into it. Follow the guide +> **[here](https://docs.cloudposse.com/reference-architecture/how-to-guides/implementation/enterprise/implement-aws-cold-start/#provision-tfstate-backend-component)** +> to get started. ### Access Control @@ -58,9 +56,9 @@ access. You can configure who is allowed to assume these roles. - For convenience, the component automatically grants access to the backend to the user deploying it. This is helpful because it allows that user, presumably SuperAdmin, to deploy the normal components that expect the user does not have - direct access to Terraform state, without requiring custom configuration. However, you may want to explicitly - grant SuperAdmin access to the backend in the `allowed_principal_arns` configuration, to ensure that SuperAdmin - can always access the backend, even if the component is later updated by the `root-admin` role. + direct access to Terraform state, without requiring custom configuration. However, you may want to explicitly grant + SuperAdmin access to the backend in the `allowed_principal_arns` configuration, to ensure that SuperAdmin can always + access the backend, even if the component is later updated by the `root-admin` role. ### Quotas