Commit a16e17e

add some prereqs for private installations, some notes in the private installs and update the version of ocp in the examples

rcarrata committed Apr 13, 2021
1 parent 97495a3 commit a16e17e

Showing 13 changed files with 350 additions and 6 deletions.
6 changes: 4 additions & 2 deletions README.md
@@ -4,11 +4,13 @@ Ansible Repository for deploy Openshift 4 clusters with IPI installation in conn

There are different approaches for each mode, but an IPI installation is always used.

<img align="center" width="450" src="docs/pics/azure_ocp4_pic.png">
<img align="center" width="450" src="docs/pics/azure_ocp4_pic.png">

## 1. Prereqs

* [Prerequisites for Launching the Automatic Installation](/docs/prereqs.md)

* [Prerequisites for Private / Disconnected Installation](/docs/prereqs-restricted.md)

## 2. Installation Modes

9 changes: 8 additions & 1 deletion docs/egress-default.md
@@ -1,4 +1,4 @@
# Egress Default Private - Load Balancer

## 1 Install and Configure Azure with Load Balancer as Egress Outbound

@@ -13,3 +13,10 @@ ansible-playbook install-private.yml -e "egress=Loadbalancer" -e "azure_outbound
## 2. Diagram Openshift Install using the Azure Load Balancer Outbound

<img align="center" width="750" src="pics/egress_azure_lb.png">

The following items are not required or created when you install a private cluster:

* A BaseDomainResourceGroup, since the cluster does not create public records
* Public IP addresses
* Public DNS records
* Public endpoints
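
For reference, a hedged sketch of the install-config.yaml fragment this egress mode corresponds to (per the OpenShift docs; the values here are placeholders, not the playbook's rendered output):

```
platform:
  azure:
    outboundType: Loadbalancer   # default egress: a public standard LB used for outbound only
publish: Internal                # keep the API and Ingress endpoints private
```
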
7 changes: 7 additions & 0 deletions docs/egress-disconnected.md
@@ -1,3 +1,10 @@
# Openshift 4 Egress in Disconnected Mode

<img align="center" width="750" src="pics/egress_azure_disconnected.png">

The following items are not required or created when you install a private cluster:

* A BaseDomainResourceGroup, since the cluster does not create public records
* Public IP addresses
* Public DNS records
* Public endpoints
6 changes: 6 additions & 0 deletions docs/egress-firewall.md
@@ -29,3 +29,9 @@ Azure](https://docs.openshift.com/container-platform/4.6/installing/installing_a

<img align="center" width="850" src="pics/egress_azure_fw.png">

The following items are not required or created when you install a private cluster:

* A BaseDomainResourceGroup, since the cluster does not create public records
* Public IP addresses
* Public DNS records
* Public endpoints
9 changes: 8 additions & 1 deletion docs/egress-nat.md
@@ -14,4 +14,11 @@ TODO

## 2. Diagram Openshift Install using the Azure Nat Gateway Outbound

<img align="center" width="750" src="pics/egress_azure_nat_gw.png">
<img align="center" width="750" src="pics/egress_azure_nat_gw.png">

The following items are not required or created when you install a private cluster:

* A BaseDomainResourceGroup, since the cluster does not create public records
* Public IP addresses
* Public DNS records
* Public endpoints
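
Since the NAT gateway steps are still marked TODO above, here is a hedged az CLI sketch of the general shape (hypothetical names; not the playbook's actual tasks):

```
# Public IP for the NAT gateway (hypothetical names)
az network public-ip create --resource-group ocp4-rg --name natgw-pip \
  --sku standard --allocation-method static

# Create the NAT gateway and attach the public IP
az network nat gateway create --resource-group ocp4-rg --name ocp4-natgw \
  --public-ip-addresses natgw-pip --idle-timeout 10

# Point a cluster subnet's outbound traffic at the NAT gateway
az network vnet subnet update --resource-group ocp4-rg \
  --vnet-name ocp4-vnet --name worker-subnet --nat-gateway ocp4-natgw
```
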
228 changes: 228 additions & 0 deletions docs/manual-egress-azure.md
@@ -0,0 +1,228 @@
# 1. Create Firewall with the az CLI (manual mode - not recommended)

```
VNET=$(az network vnet list --query "[?contains(name, '${CLUSTER_NAME}')].{Name:name}" -o tsv)
RG=$(az network vnet list --query "[?contains(name, '${CLUSTER_NAME}')].{ResourceGroup:resourceGroup}" -o tsv)
```

## Create vnet for FW

```
az network vnet create -g ${RG} -n fw-vnet --address-prefix 10.1.0.0/16 --subnet-name AzureFirewallSubnet --subnet-prefix 10.1.1.0/24
```

## Peer vnets
### fw-net -> cluster-net
```
az network vnet peering create -g ${RG} -n fw2cluster --vnet-name fw-vnet --remote-vnet ${VNET} --allow-vnet-access
```

### cluster-net -> fw-net with forwarding
```
az network vnet peering create -g ${RG} -n cluster2fw --vnet-name ${VNET} --remote-vnet fw-vnet --allow-forwarded-traffic --allow-vnet-access
```

## Create Firewall
```
export FW="myFirewall"
az extension add -n azure-firewall
az network firewall create -g ${RG} -n ${FW} --location eastus
```

NOTE: To leverage FQDNs in network rules, the DNS proxy must be enabled. When it is enabled, the firewall listens on port 53 and forwards DNS requests to the configured DNS server, which allows the firewall to resolve those FQDNs automatically.

To enable the DNS proxy, pass `--enable-dns-proxy true`:

```
az network firewall create -g ${RG} -n ${FW} --location eastus --enable-dns-proxy true
```
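
Once the DNS proxy is enabled, VMs in the peered VNets can use the firewall's private IP (10.1.1.4 in the ip-config output below) as their DNS server. A quick hedged check from the bastion, assuming `dig` is available:

```
# Resolve a public name through the firewall's DNS proxy on port 53
dig +short @10.1.1.4 quay.io
```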

## Create static IP

```
az network public-ip create --name fw-pip --resource-group ${RG} \
--location eastus \
--allocation-method static \
--sku standard
```

## Configure FW public IP
```
az network firewall ip-config create \
--firewall-name ${FW} \
--name FW-config \
--public-ip-address fw-pip \
--resource-group ${RG} \
--vnet-name fw-vnet
...
{
  "id": "/subscriptions/xxxx/resourceGroups/ocp4-rg/providers/Microsoft.Network/azureFirewalls/myFirewall/azureFirewallIpConfigurations/FW-config",
  "name": "FW-config",
  "privateIpAddress": "10.1.1.4",
  "provisioningState": "Succeeded",
  "publicIpAddress": {
    "id": "/subscriptions/xxxx/resourceGroups/ocp4-rg/providers/Microsoft.Network/publicIPAddresses/fw-pip",
    "resourceGroup": "ocp4-rg"
  },
  "resourceGroup": "ocp4-rg",
  "subnet": {
    "id": "/subscriptions/xxx/resourceGroups/ocp4-rg/providers/Microsoft.Network/virtualNetworks/fw-vnet/subnets/AzureFirewallSubnet",
    "resourceGroup": "ocp4-rg"
  },
  "type": "Microsoft.Network/azureFirewalls/azureFirewallIpConfigurations"
}
```

## Update config

```
az network firewall update \
--name ${FW} \
--resource-group ${RG}
```

## Get the private IP of the FW

```
fwprivaddr="$(az network firewall ip-config list -g ${RG} -f ${FW} --query "[?name=='FW-config'].privateIpAddress" --output tsv)"
```

## Create a new route table
```
az network route-table create \
--name Firewall-rt-table \
--resource-group ${RG} \
--location eastus \
--disable-bgp-route-propagation true
```

## Create default route

Azure automatically routes traffic between Azure subnets, virtual networks, and on-premises networks. If you want to change any of Azure's default routing, you do so by creating a route table.

Create an empty route table to be associated with a given subnet. The route table will define the next hop as the Azure Firewall created above. Each subnet can have zero or one route table associated to it.

```
az network route-table route create \
--resource-group ${RG} \
--name DG-Route \
--route-table-name Firewall-rt-table \
--address-prefix 0.0.0.0/0 \
--next-hop-type VirtualAppliance \
--next-hop-ip-address $fwprivaddr
```

## NAT Rule

Create this rule before associating the route table with the bastion subnet, to avoid locking yourself out.

```
bastion_private_ip="$(az network nic show -n bastion01 -g ocp4-rg | jq -r .ipConfigurations[0].privateIpAddress)"
```

```
fw_ip="$(az network public-ip show -n fw-pip -g ocp4-rg | jq -r .ipAddress)"
```

```
az network firewall nat-rule create --collection-name access2bastion \
--destination-addresses $fw_ip --destination-ports 22 \
--firewall-name myFirewall --name inboundrule --protocols Any \
--resource-group ocp4-rg --source-addresses '*' \
--translated-port 22 --action Dnat --priority 100 --translated-address $bastion_private_ip
```

## Associate the route table to the cluster subnets

### Get subnets from the used VNet
```
subnets=`az network vnet subnet list --vnet-name ${VNET} -g ${RG} -o tsv --query '[].{Name:name}'`
```
```
for subnet in ${subnets}
do
az network vnet subnet update \
--resource-group ${RG} \
--vnet-name ${VNET} \
-n ${subnet} \
--route-table Firewall-rt-table
done
```

NOTE: Does the bastion subnet actually need the route to the Azure Firewall? It might work with the regular routing instead of going through the firewall. Check whether the bastion subnet can be left out of the route table; if so, the NAT rule above is no longer needed.
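
If the bastion subnet can indeed stay outside the route table, the association loop above can simply skip it. A minimal sketch, assuming the bastion subnet's name contains `bastion`:

```
# Associate the route table with every cluster subnet except the bastion subnet
for subnet in ${subnets}
do
  if [[ "${subnet}" == *bastion* ]]; then
    echo "Skipping bastion subnet ${subnet}"
    continue
  fi
  az network vnet subnet update \
    --resource-group ${RG} \
    --vnet-name ${VNET} \
    -n ${subnet} \
    --route-table Firewall-rt-table
done
```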

## Configure APP rules

### Get subnet prefixes
```
addressPrefix=`az network vnet subnet list --vnet-name ${VNET} -g ${RG} -o tsv --query '[].{AddressPrefix:addressPrefix}'`
```

### Allow google
```
az network firewall application-rule create \
--collection-name App-Coll01 \
--firewall-name ${FW} \
--name Allow-Google \
--protocols Http=80 Https=443 \
--resource-group ${RG} \
--target-fqdns "*google.com" \
--source-addresses ${addressPrefix} \
--priority 200 \
--action Allow
```

### Allow azure / microsoft / windows stuff
```
az network firewall application-rule create \
--collection-name azure_ms \
--firewall-name ${FW} \
--name azure \
--protocols Http=80 Https=443 \
--resource-group ${RG} \
--target-fqdns "*azure.com" "*microsoft.com" "*microsoftonline.com" "*windows.net" \
--source-addresses ${addressPrefix} \
--priority 300 \
--action Allow
```

### Allow redhat / openshift / quay stuff
```
az network firewall application-rule create \
--collection-name redhat \
--firewall-name ${FW} \
--name redhat \
--protocols Http=80 Https=443 \
--resource-group ${RG} \
--target-fqdns "*redhat.com" "*redhat.io" "*quay.io" "*openshift.com" \
--source-addresses ${addressPrefix} \
--priority 400 \
--action Allow
```

### Allow github
```
az network firewall application-rule create \
--collection-name github \
--firewall-name ${FW} \
--name github \
--protocols Http=80 Https=443 \
--resource-group ${RG} \
--target-fqdns "*github.com" \
--source-addresses ${addressPrefix} \
--priority 500 \
--action Allow
```

### Allow docker.io
```
az network firewall application-rule create \
--collection-name docker \
--firewall-name ${FW} \
--name docker \
--protocols Http=80 Https=443 \
--resource-group ${RG} \
--target-fqdns "*docker.io" "*docker.com" \
--source-addresses ${addressPrefix} \
--priority 600 \
--action Allow
```
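
A quick hedged sanity check of the application rules from the bastion or a node routed through the firewall: an allowed FQDN (www.redhat.com is covered by the redhat collection above) should respond, while a domain on no allow list should be blocked.

```
# Allowed FQDN: expect an HTTP status line back
curl -sI https://www.redhat.com | head -n 1

# FQDN not covered by any application rule: expect the firewall to block it
curl -sI --max-time 10 https://example.org || echo "blocked by Azure Firewall"
```
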
7 changes: 7 additions & 0 deletions docs/mode-disconnected.md
@@ -18,6 +18,13 @@ ansible-playbook install-disconnected.yml --vault-password-file .vault-file-pass

<img align="center" width="750" src="pics/egress_azure_proxy.png">

The following items are not required or created when you install a private cluster:

* A BaseDomainResourceGroup, since the cluster does not create public records
* Public IP addresses
* Public DNS records
* Public endpoints

## OutboundMode

OutboundMode is forced to be "User-Defined Outbound Routing"
7 changes: 7 additions & 0 deletions docs/mode-private.md
@@ -20,6 +20,13 @@ NOTE: This will use the proxy by default as egress / OutboundType. For other egr

<img align="center" width="750" src="pics/egress_azure_proxy.png">

The following items are not required or created when you install a private cluster:

* A BaseDomainResourceGroup, since the cluster does not create public records
* Public IP addresses
* Public DNS records
* Public endpoints

## Egress modes [Alternatives]

* [Default with LoadBalancer](/docs/egress-default.md)
Binary file added docs/pics/azure-prerequisites.png
68 changes: 68 additions & 0 deletions docs/prereqs-restricted.md
@@ -0,0 +1,68 @@
# Prerequisites for Restricted (Private / Disconnected) Installation in Azure

When installing OpenShift 4 into a pre-existing VNet in a restricted environment (private
or disconnected), some prerequisites need to be in place.

To create a private cluster on Microsoft Azure, you must provide an existing private VNet and
subnets to host the cluster.

The installation program must also be able to resolve the DNS records
that the cluster requires. The installation program configures the Ingress Operator and API server
for only internal traffic.
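
For example, internal name resolution can come from an Azure Private DNS zone linked to the VNet (see the prerequisites list at the end of this document). A hedged az CLI sketch, with hypothetical resource names:

```
# Private DNS zone for the cluster domain (hypothetical names)
az network private-dns zone create \
  --resource-group ocp4-rg \
  --name ocp4.example.com

# Link the zone to the existing VNet so its records resolve inside it
az network private-dns link vnet create \
  --resource-group ocp4-rg \
  --zone-name ocp4.example.com \
  --name ocp4-vnet-link \
  --virtual-network ocp4-vnet \
  --registration-enabled false
```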

## [User-Defined Outbound Routing](https://docs.openshift.com/container-platform/4.7/installing/installing_azure/installing-azure-private.html#installation-azure-user-defined-routing_installing-azure-private)

In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the Internet. This allows you to skip the creation of public IP addresses and the public load balancer.

You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this.
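
In install-config.yaml this is controlled by the `publish` and `platform.azure.outboundType` fields. A hedged sketch, with placeholder names for the pre-existing network resources:

```
apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp4
platform:
  azure:
    region: eastus
    networkResourceGroupName: ocp4-rg    # resource group holding the existing VNet
    virtualNetwork: ocp4-vnet            # pre-existing VNet
    controlPlaneSubnet: master-subnet    # existing control plane subnet
    computeSubnet: worker-subnet         # existing compute subnet
    outboundType: UserDefinedRouting     # skip public IPs and the public load balancer
publish: Internal                        # private cluster: internal endpoints only
```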

## [Requirements for using your VNet](https://docs.openshift.com/container-platform/4.7/installing/installing_azure/installing-azure-private.html#installation-about-custom-azure-vnet-requirements_installing-azure-private)

When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster.

In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:

* Subnets
* Route tables
* VNets
* Network Security Groups

The cluster must be able to access the resource group that contains the existing VNet and subnets.
Some cluster Operators must be able to access resources in both resource groups.

You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines.

Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

* All the subnets that you specify exist.
* You provide two private subnets for each availability zone.
* The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for.
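
A hedged sketch of creating the two required subnets in an existing VNet (hypothetical names; the CIDRs must fall inside the machine CIDR):

```
# Subnet for the control plane machines
az network vnet subnet create \
  --resource-group ocp4-rg \
  --vnet-name ocp4-vnet \
  --name master-subnet \
  --address-prefixes 10.0.0.0/24

# Subnet for the compute machines
az network vnet subnet create \
  --resource-group ocp4-rg \
  --vnet-name ocp4-vnet \
  --name worker-subnet \
  --address-prefixes 10.0.1.0/24
```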

## [Network security group requirements](https://docs.openshift.com/container-platform/4.7/installing/installing_azure/installing-azure-private.html#installation-about-custom-azure-vnet-nsg-requirements_installing-azure-private)

The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.

The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.

| Port  | Description                                        |
| ----- | -------------------------------------------------- |
| 80    | Allows HTTP traffic                                |
| 443   | Allows HTTPS traffic                               |
| 6443  | Allows communication to the control plane machines |
| 22623 | Allows communication to the machine config server  |
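
A hedged example of opening these ports with the az CLI (the NSG name is hypothetical; adjust the source and destination scopes to your network layout):

```
# NSG with the required cluster communication ports (hypothetical name)
az network nsg create --resource-group ocp4-rg --name ocp4-nsg

az network nsg rule create --resource-group ocp4-rg --nsg-name ocp4-nsg \
  --name allow-http --priority 100 --access Allow --protocol Tcp \
  --destination-port-ranges 80

az network nsg rule create --resource-group ocp4-rg --nsg-name ocp4-nsg \
  --name allow-https --priority 110 --access Allow --protocol Tcp \
  --destination-port-ranges 443

az network nsg rule create --resource-group ocp4-rg --nsg-name ocp4-nsg \
  --name allow-api --priority 120 --access Allow --protocol Tcp \
  --destination-port-ranges 6443

az network nsg rule create --resource-group ocp4-rg --nsg-name ocp4-nsg \
  --name allow-machine-config --priority 130 --access Allow --protocol Tcp \
  --destination-port-ranges 22623
```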

## Diagram for Prerequisites in Azure

<img align="center" width="750" src="pics/azure-prerequisites.png">

Before launching the installation, these prerequisites need to be created:

- 1 VNet
- 1 Private DNS Zone
- 1 Virtual Private Link to link Private Hosted Zone and VNet
- 3 Subnets (1 for Masters, 1 for Workers, 1 for Bastion)
- 1 Bastion (optional) + 1 public IP to reach it (see the sketch after this list)
- 1 Network Security Group (?? To Check)
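
A hedged sketch of the optional bastion with its public IP (hypothetical names; `az vm create` attaches a public IP by default):

```
# Optional bastion host reachable through a public IP
az vm create \
  --resource-group ocp4-rg \
  --name bastion01 \
  --image UbuntuLTS \
  --vnet-name ocp4-vnet \
  --subnet bastion-subnet \
  --admin-username azureuser \
  --generate-ssh-keys
```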