Use prose wrap for Markdown files
cloudlena committed Feb 3, 2025
1 parent f6b2841 commit 6a6b7ce
Showing 30 changed files with 3,332 additions and 1,013 deletions.
1 change: 1 addition & 0 deletions .prettierrc
@@ -0,0 +1 @@
{ "proseWrap": "always" }
6 changes: 4 additions & 2 deletions README.md
@@ -1,6 +1,7 @@
# bespinian Blog

The bespinian blog at <https://blog.bespinian.io>. Created with
[Hugo](https://gohugo.io).

## Initialize the Project

@@ -22,4 +23,5 @@ The bespinian blog at <https://blog.bespinian.io>. Created with [Hugo](https://g

### Post doesn't Show Up

If your post doesn't show up, ensure the date you have set in the front matter
is in the past.
142 changes: 118 additions & 24 deletions content/posts/api-contract-definitions.md

Large diffs are not rendered by default.

98 changes: 54 additions & 44 deletions content/posts/aws-config-sharing/index.md
@@ -1,18 +1,21 @@
---
title:
  "Share network and organization info with AWS landing zone member accounts"
date: 2024-07-04T08:27:12+02:00
---

In an AWS landing zone setup, you typically have several infrastructure
accounts, which create resources and share them with other member accounts.

A typical AWS landing zone organization structure might look something like
this:

![AWS Landing Zone](org-structure.drawio.png)

The management account contains your definition of the organizational structure.
The network account defines several VPCs with their subnets and routing. In the
case of the network resources, you might share the subnets with your member
accounts using the AWS Resource Access Manager, or RAM.

```hcl
# create a subnet in your network account and share it with member accounts
# @@ -38,32 +41,36 @@ (lines collapsed in the diff view)
resource "aws_ram_principal_association" "dev_subnets_dev" {
}
```
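
Since the block above is collapsed in this view, here is a hedged sketch of what
a complete share of this kind typically looks like. The resource names, the
CIDR, and the account ID are illustrative, and the VPC is assumed to be defined
elsewhere:

```hcl
# create a subnet in the network account (assumes an aws_vpc.main elsewhere)
resource "aws_subnet" "dev_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-central-1a"

  tags = {
    Name = "dev-a" # tags like this are NOT propagated by RAM
  }
}

# a RAM share to hold the resources we want to hand out
resource "aws_ram_resource_share" "dev_subnets" {
  name                      = "dev-subnets"
  allow_external_principals = false
}

# put the subnet into the share
resource "aws_ram_resource_association" "dev_subnet_a" {
  resource_arn       = aws_subnet.dev_a.arn
  resource_share_arn = aws_ram_resource_share.dev_subnets.arn
}

# invite the member account (placeholder account ID)
resource "aws_ram_principal_association" "dev_subnets_dev" {
  principal          = "123456789012"
  resource_share_arn = aws_ram_resource_share.dev_subnets.arn
}
```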

So far, so good. Easy and straightforward. Now let's take a look at what this
new subnet looks like in the member account.

![List of subnets without names](subnets.png)

Well, it worked. But the subnet has no name! This is because the name of a
subnet is stored in its tags (see the Terraform example above), and RAM does not
share tags with target accounts. So if we only have access to the member
account, we don't know which subnet to choose unless we already know its ID. If
you work in the UI, this makes it pretty error-prone, and your resources might
end up using the wrong subnet. And if you're working with Infrastructure as
Code, which you should, you might have to hard-code these subnet IDs somewhere,
which is also meh.

One simple solution to this would be to grant member accounts read access to
subnets in the network account through a special role. This is not very
difficult to set up, but when you need the subnet information, you always have
to assume a role first. In Terraform, this is done through a separate provider
definition, which you need to configure just for this purpose.

You could also just tag the subnets manually in all member accounts. But again,
this is not very pretty.

Enter AWS Systems Manager Parameter Store! This service lets you store simple
key-value pairs to be consumed by other services. And these parameters can even
be shared, for a small fee, with other accounts through RAM. When we create and
share our subnets, we just store their IDs in an SSM parameter and share it in
the same RAM share with the member accounts. In your IaC definition, you can
then pull the subnet ID out of the shared parameter and use it to attach an EC2
instance, for example.

![Schema of the resource share mechanism](architecture.drawio.png)

```hcl
# @@ -87,14 +94,15 @@ (lines collapsed in the diff view)
resource "aws_ram_resource_association" "param_dev_subnet_ids" {
}
```

As you can see, we store the value as a JSON object. This makes it easier to
share multiple IDs in the same parameter. This saves a few cents and also makes
accessing the values on the other side faster and simpler.

Make sure the `tier` is set to "Advanced". Standard parameters are not shareable
via RAM!
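
A shareable parameter might then be defined roughly like this. The parameter
name, the JSON keys, and the subnet references are illustrative:

```hcl
# a shareable (advanced-tier) parameter holding the subnet IDs as JSON
resource "aws_ssm_parameter" "dev_subnet_ids" {
  name = "/shared/dev-subnet-ids" # hypothetical name
  type = "String"
  tier = "Advanced" # standard-tier parameters cannot be shared via RAM

  value = jsonencode({
    a = aws_subnet.dev_a.id
    b = aws_subnet.dev_b.id
  })
}
```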

On the consuming side, which will be the member accounts, we can access the
subnet IDs simply with a Terraform data source.

```hcl
# read the ids from the SSM parameter
# @@ -113,14 +121,16 @@ (lines collapsed in the diff view)
resource "aws_instance" "web" {
}
```
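
A hedged sketch of that consuming side, assuming the parameter name from the
network account; the AMI ID and the JSON key are placeholders:

```hcl
# read the shared parameter in the member account
data "aws_ssm_parameter" "dev_subnet_ids" {
  name = "/shared/dev-subnet-ids" # hypothetical name
}

locals {
  # the parameter stores a JSON object, so decode it first
  dev_subnet_ids = jsondecode(data.aws_ssm_parameter.dev_subnet_ids.value)
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
  subnet_id     = local.dev_subnet_ids["a"] # hypothetical key
}
```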

That's it. If you use this a lot, you could also put the reading part into a
Terraform module and put the IDs into its output.
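
Such a wrapper module could be sketched like this; the module layout and names
are hypothetical:

```hcl
# hypothetical file: modules/shared-subnets/main.tf
variable "parameter_name" {
  type        = string
  description = "Name of the shared SSM parameter holding the subnet IDs"
}

data "aws_ssm_parameter" "subnet_ids" {
  name = var.parameter_name
}

output "subnet_ids" {
  # decode the JSON once here so consumers get a plain map
  value = jsondecode(data.aws_ssm_parameter.subnet_ids.value)
}
```

Consumers would then just reference `module.shared_subnets.subnet_ids` instead
of dealing with the parameter themselves.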

For the sake of simplicity, I left some details out, for example the handling of
sensitive values and the possibility to store SSM parameters hierarchically
using paths (see `aws_ssm_parameters_by_path`). But I'm sure you'll figure that
out.
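
For the hierarchical variant, the provider offers a data source that reads a
whole path at once; the path here is illustrative:

```hcl
# read every shared parameter below a common path in one go
data "aws_ssm_parameters_by_path" "shared" {
  path = "/shared" # hypothetical path prefix
}
```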

Also, we only shared some subnet IDs here. But this method lets you share all
sorts of things with member accounts. You could also share the organization
structure (IDs of organizational units or of other accounts). This allows you to
have such information available in member accounts without having to grant any
additional cross-account permissions.
47 changes: 36 additions & 11 deletions content/posts/blue-green-deployment-on-cloud-foundry.md
@@ -5,11 +5,25 @@ comments: true
date: 2016-12-12
---

Imagine you have one of your apps in production and want to `cf push` an update
to it. If you do so, your app will experience a short downtime because CF needs
to stop your old application and then power up the new one. During this short
period of time, your users will receive `404`s when trying to access your
application. Now, what if the new version of your app has an error in it and
doesn't even start on Cloud Foundry? Your users will face an even longer
downtime until you have found and fixed the bug.

To prevent these inconveniences for your users, Cloud Foundry allows you to do a
so-called "Blue-Green Deployment". I won't go into the depths of this concept
because you can read all about it in the
[Cloud Foundry documentation](https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html).
Generally, it allows you to have two instances of your application running
simultaneously, one with the old version and one with the new version. Your
users are then load balanced between the two apps, and as soon as the new
version is running correctly, the old one is shut down.

Cloud Foundry doesn't provide this functionality out of the box. That's why I
wrote a simple shell script to do this blue-green deployment for you.

```bash
#!/usr/bin/env bash
# @@ -93,16 +107,23 @@ (lines collapsed in the diff view)
cf delete -f -r "${app_name}"
cf rename "${temp_app_name}" "${app_name}"
```

The script tries to guess some variables from your `manifest.yml` file, but
you'll still need to set some environment variables for a successful deployment:

- `CF_API`: The API endpoint of the CF instance you intend to use (e.g.,
`https://api.lyra-836.appcloud.swisscom.com`)
- `CF_SHARED_DOMAIN`: The shared domain you want to use for temporary routes
used to smoke test your app
- `CF_USERNAME`: Your Cloud Foundry username
- `CF_PASSWORD`: Your Cloud Foundry password
- `CF_ORG`: The Cloud Foundry org you wish to deploy to
- `CF_SPACE`: The Cloud Foundry space you intend to deploy to
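
For example, a small wrapper could set them before running the script. All
values below are placeholders, and the script filename is illustrative:

```shell
# hypothetical values — substitute your own before deploying
export CF_API="https://api.lyra-836.appcloud.swisscom.com"
export CF_SHARED_DOMAIN="apps.example.com"
export CF_USERNAME="me@example.com"
export CF_PASSWORD="s3cret"
export CF_ORG="my-org"
export CF_SPACE="production"

# ./blue-green-deploy.sh  # illustrative script name
```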

As soon as you've set all of these variables, you can simply execute the script,
and it will do a verbose blue-green deployment for you. The script will deploy
the new version of your app and wait for it to become healthy. You can change
the `expected_response` parameter to something else, like `401`, if your app
doesn't return a `200` status code without authentication.
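
The health check at the heart of this idea can be sketched roughly like the
snippet below. The function names `fetch_status` and `smoke_test` are mine, not
the script's:

```shell
#!/usr/bin/env bash

# fetch only the HTTP status code of a URL
fetch_status() {
  curl -s -o /dev/null -w '%{http_code}' "$1"
}

# poll a URL until it returns the expected status or we run out of attempts
smoke_test() {
  local url=$1 expected=${2:-200} attempts=${3:-30}
  local status
  for _ in $(seq 1 "$attempts"); do
    status=$(fetch_status "$url")
    [ "$status" = "$expected" ] && return 0
    sleep 2
  done
  return 1
}

# e.g.: smoke_test "https://myapp-temp.${CF_SHARED_DOMAIN}" "$expected_response"
```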

## Caveats

@@ -111,9 +132,13 @@

## Further reading

There are two plugins for the Cloud Foundry CLI that also automate certain steps
of blue-green deployment:

- [Autopilot](https://github.com/contraband/autopilot)
- [BlueGreenDeploy](https://github.com/bluemixgaragelondon/cf-blue-green-deploy)

My script is there to show you what happens behind the scenes and to be used by
CI/CD systems or if you need more fine-grained control over what happens during
the deployment. Personally, I really like the BlueGreenDeploy plugin. It's easy
to use and does the job.
