This Terraform module can be used to deploy IBM Cloud Private on any supported infrastructure vendor. It has been tested on Ubuntu 16.04 and RHEL 7 on SoftLayer, VMware, AWS and Azure.
If the default SSH user is not the root user, the default user must have password-less sudo access.
Variable | Default | Required | Description |
---|---|---|---|
Cluster settings | | | |
icp-version | 2.1.0.2 | No | Version of ICP to provision. See below for details on using a private registry |
icp-master | | No* | IP addresses of ICP Master nodes. Required if you don't use icp-host-groups |
icp-worker | | No* | IP addresses of ICP Worker nodes. Required if you don't use icp-host-groups |
icp-proxy | | No* | IP addresses of ICP Proxy nodes. Required if you don't use icp-host-groups |
icp-management | | No | IP addresses of ICP Management nodes, if management is to be separated from master nodes. Optional |
icp-host-groups | | No* | Map of host types and IPs. See below for details |
boot-node | | No* | IP address of the boot node. Required when using icp-host-groups or when using a boot node separate from the first master node. If separate, it must be included in cluster_size |
cluster_size | | Yes | Total size of the cluster. Workaround for terraform issue #10857. Normally computed |
ICP Configuration | | | |
icp_config_file | | No | YAML configuration file for the ICP installation |
icp_configuration | | No | Configuration items for the ICP installation. See the KnowledgeCenter for reference. Note: boolean values (true/false) must be supplied as strings |
config_strategy | merge | No | Strategy for the original config.yaml shipped with ICP. Default is merge; everything else means override |
ICP Boot node to cluster communication | | | |
generate_key | True | No | Whether to generate a new ssh key for the ICP boot master to communicate with the other nodes |
icp_pub_key | | No | Public ssh key for the ICP boot master to connect to the ICP cluster. Only use when generate_key = false |
icp_priv_key | | No | Private ssh key for the ICP boot master to connect to the ICP cluster. Only use when generate_key = false |
Terraform installation process | | | |
hooks | | No | Hooks into different stages of the cluster setup process. See below for details |
install-verbosity | | No | Verbosity of the ICP ansible installer. -v to -vvvv. See the ansible documentation for verbosity information |
Terraform to cluster ssh configuration | | | |
ssh_user | root | No | Username for Terraform to ssh into the ICP cluster. This is typically the default user for the relevant cloud vendor |
ssh_key_base64 | | No | base64-encoded content of the private ssh key |
ssh_key_file | | No | Location of the private ssh key, e.g. ~/.ssh/id_rsa |
ssh_agent | True | No | Enable or disable SSH agent. Can correct some connectivity issues. Default: true (enabled) |
ssh_key | ~/.ssh/id_rsa | No | Private key corresponding to the public key that the cloud servers are provisioned with. DEPRECATED. Use ssh_key_file or ssh_key_base64 |
bastion_host | | No | Hostname or IP of an SSH bastion host to connect to the nodes through. Assumes the same SSH key and username as the cluster nodes |
Docker and ICP Enterprise Edition image configuration | | | |
docker_package_location | | No | http or nfs location of the docker installer which ships with ICP. Typically used for RHEL, which does not support docker-ce |
image_location | False | No | Location of the image file. Start with nfs: or http: to indicate the protocol to download with |
image_file | /dev/null | No | Filename of the image. Only required for Enterprise Edition |
enterprise-edition | False | No | Whether to provision Enterprise Edition (EE) or Community Edition (CE). EE requires image files to be provided |
parallell-image-pull | False | No | Download and pull docker images on all nodes in parallel before starting the ICP installation. Can speed up installation time |
- `icp_public_key` - The public key used by the boot master to connect via ssh for cluster setup
- `icp_private_key` - The private key used by the boot master to connect via ssh for cluster setup
- `install_complete` - Boolean value that is set to true when the ICP installation process is completed
- `icp_version` - The ICP version that has been installed
- `cluster_ips` - List of IPs of the cluster
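These outputs can be consumed from the calling template like any other Terraform module outputs. A minimal sketch, assuming the module is named icpprovision as in the examples below:

```hcl
# Surface selected module outputs from the root template
output "installed_icp_version" {
  value = "${module.icpprovision.icp_version}"
}

output "icp_cluster_ips" {
  value = "${module.icpprovision.cluster_ips}"
}
```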
The icp-version field supports the format `org/repo:version`. `ibmcom` is the default organisation and `icp-inception` is the default repo, so if you're installing, for example, version 2.1.0.2 from Docker Hub it's sufficient to specify `2.1.0.2` as the version number.

It is also supported to install from private docker registries. In this case the format is `username:password@private_registry_server/org/repo:version`.
So for example: `myuser:mypassword@myprivateregistry.local/ibmcom/icp-inception:2.1.0.2`
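In the module block this could look like the following (a minimal sketch; the registry hostname and credentials are placeholders, and the remaining module inputs are omitted):

```hcl
module "icpprovision" {
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy?ref=2.3.1"

  # Pull the installer image from a private registry instead of Docker Hub.
  # The hostname and credentials below are placeholders.
  icp-version = "myuser:mypassword@myprivateregistry.local/ibmcom/icp-inception:2.1.0.2"

  # ... remaining module inputs as in the examples below
}
```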
It is possible to execute arbitrary commands between the various phases of the cluster setup and installation process. Currently, the following hooks are defined:

Hook name | Where executed | When executed |
---|---|---|
cluster-preconfig | all nodes | Before any of the module scripts |
cluster-postconfig | all nodes | After prerequisites are installed |
boot-preconfig | boot master | Before any module scripts on the boot master |
preinstall | boot master | After image load and configuration generation |
postinstall | boot master | After successful ICP installation |
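Hooks are passed to the module as a map from hook name to a list of shell commands. A minimal sketch (the echo commands are placeholders; the first full example below also demonstrates hooks inside a complete module block):

```hcl
hooks = {
  # Runs on every node before any of the module's own scripts
  "cluster-preconfig" = [
    "echo Preparing node"
  ]

  # Runs on the boot master after a successful installation
  "postinstall" = [
    "echo ICP installation finished"
  ]
}
```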
In ICP version 2.1.0.2 the concept of host groups was introduced. This allows users to define groups of hosts, by an arbitrary name, that will be labelled such that they can be dedicated to particular workloads. You can read more about host groups on the KnowledgeCenter.

To support this, an input map called `icp-host-groups` was introduced. It is used to generate the relevant hosts file for the ICP installer, and when used it should replace the `icp-master`, `icp-worker`, etc. fields. The examples below show usage both with the individual node lists and with `icp-host-groups`.
```hcl
module "icpprovision" {
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy?ref=2.3.1"

  icp-master = ["${softlayer_virtual_guest.icpmaster.ipv4_address}"]
  icp-worker = ["${softlayer_virtual_guest.icpworker.*.ipv4_address}"]
  icp-proxy  = ["${softlayer_virtual_guest.icpproxy.*.ipv4_address}"]

  icp-version = "2.1.0.1"

  cluster_size = "${var.master["nodes"] + var.worker["nodes"] + var.proxy["nodes"]}"

  icp_configuration = {
    "network_cidr"             = "192.168.0.0/16"
    "service_cluster_ip_range" = "172.16.0.1/24"
    "default_admin_password"   = "My0wnPassw0rd"
  }

  generate_key = true

  ssh_user     = "ubuntu"
  ssh_key_file = "~/.ssh/id_rsa"

  hooks = {
    "cluster-preconfig" = [
      "echo This will run on all nodes",
      "echo And I can run as many commands",
      "echo as I want",
      "echo ....they will run in order"
    ]
    "postinstall" = [
      "echo Performing some post install backup",
      "${ var.postinstallbackup != "true" ? "" : "sudo chmod a+x /tmp/icp_backup.sh ; sudo /tmp/icp_backup.sh" }"
    ]
  }
}
```
```hcl
module "icpprovision" {
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy?ref=2.3.1"

  # We will define master, management, worker, proxy and va (Vulnerability Advisor) as well as a custom db2 group
  icp-host-groups = {
    master        = "${openstack_compute_instance_v2.icpmaster.*.access_ip_v4}"
    management    = "${openstack_compute_instance_v2.icpmanagement.*.access_ip_v4}"
    worker        = "${openstack_compute_instance_v2.icpworker.*.access_ip_v4}"
    proxy         = "${openstack_compute_instance_v2.icpproxy.*.access_ip_v4}"
    va            = "${openstack_compute_instance_v2.icpva.*.access_ip_v4}"
    hostgroup-db2 = "${openstack_compute_instance_v2.icpdb2.*.access_ip_v4}"
  }

  # We always have to specify a node to bootstrap the cluster. It can be any of the cluster nodes,
  # or a separate node that has network access to the cluster.
  # We will use the first master node as the boot node to run the ansible installer from.
  boot-node = "${openstack_compute_instance_v2.icpmaster.0.access_ip_v4}"

  icp-version = "2.1.0.2"

  cluster_size = "${var.master["nodes"] + var.worker["nodes"] + var.proxy["nodes"]}"

  icp_configuration = {
    "network_cidr"             = "192.168.0.0/16"
    "service_cluster_ip_range" = "172.16.0.1/24"
    "default_admin_password"   = "My0wnPassw0rd"
  }
}
```
```hcl
module "icpprovision" {
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy?ref=2.0.0"

  icp-master = ["${softlayer_virtual_guest.icpmaster.ipv4_address}"]
  icp-worker = ["${softlayer_virtual_guest.icpworker.*.ipv4_address}"]
  icp-proxy  = ["${softlayer_virtual_guest.icpproxy.*.ipv4_address}"]

  icp-version = "2.1.0.1"

  cluster_size = "${var.master["nodes"] + var.worker["nodes"] + var.proxy["nodes"]}"

  icp_configuration = {
    "network_cidr"             = "192.168.0.0/16"
    "service_cluster_ip_range" = "172.16.0.1/24"
    "default_admin_password"   = "My0wnPassw0rd"
  }

  generate_key = true

  ssh_user     = "ubuntu"
  ssh_key_file = "~/.ssh/id_rsa"
}
```
```hcl
module "icpprovision" {
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy?ref=2.0.0"

  icp-master = ["${softlayer_virtual_guest.icpmaster.ipv4_address}"]
  icp-worker = ["${softlayer_virtual_guest.icpworker.*.ipv4_address}"]
  icp-proxy  = ["${softlayer_virtual_guest.icpproxy.*.ipv4_address}"]

  icp-version    = "2.1.0.1-ee"
  image_location = "nfs:fsf-lon0601b-fz.adn.networklayer.com:/IBM02S6275/data01/ibm-cloud-private-x86_64-2.1.0.1.tar.gz"

  parallell-image-pull = true

  cluster_size = "${var.master["nodes"] + var.worker["nodes"] + var.proxy["nodes"]}"

  icp_configuration = {
    "network_cidr"             = "192.168.0.0/16"
    "service_cluster_ip_range" = "172.16.0.1/24"
    "default_admin_password"   = "My0wnPassw0rd"
  }

  generate_key = true

  ssh_user     = "ubuntu"
  ssh_key_file = "~/.ssh/id_rsa"
}
```
There are several examples for different providers available on the IBM Cloud Architecture Solutions Group github page.
The configuration file is generated from items in the following order, with later items taking precedence:

- config.yaml shipped with ICP (if config_strategy = merge, else blank)
- config.yaml specified in `icp_config_file`
- key: value items specified in `icp_configuration`

Details on configuration items can be found in the ICP KnowledgeCenter.
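For example, a base configuration file can be combined with a few individual overrides. A minimal sketch (the file name is illustrative and the other required module inputs are omitted):

```hcl
module "icpprovision" {
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy?ref=2.3.1"

  # Base settings are read from a local yaml file (the file name is illustrative)
  icp_config_file = "./icp-config.yaml"

  # Individual items specified here are applied last and take precedence.
  # Note: boolean values must be supplied as strings.
  icp_configuration = {
    "network_cidr"             = "192.168.0.0/16"
    "service_cluster_ip_range" = "172.16.0.1/24"
    "default_admin_password"   = "My0wnPassw0rd"
  }

  # ... remaining module inputs (nodes, cluster_size, ssh settings) omitted for brevity
}
```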
The module supports automatic scaling of worker nodes. To scale, simply add more nodes in the root resource supplying the `icp-worker` variable.
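A minimal sketch of this pattern, assuming the workers are SoftLayer guests created with a count as in the examples above (the worker_count, master_count and proxy_count variables are illustrative, and resource details are omitted):

```hcl
# Increase this number and re-run terraform apply to scale out the workers
variable "worker_count" {
  default = 3
}

resource "softlayer_virtual_guest" "icpworker" {
  count = "${var.worker_count}"
  # ... remaining instance settings omitted for brevity
}

module "icpprovision" {
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy?ref=2.3.1"

  # The splat expression automatically includes any newly created workers
  icp-worker = ["${softlayer_virtual_guest.icpworker.*.ipv4_address}"]

  # Remember that cluster_size must reflect the new total number of nodes
  cluster_size = "${var.master_count + var.worker_count + var.proxy_count}"

  # ... remaining module inputs as in the examples above
}
```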
You can see working examples for SoftLayer in the icp-softlayer repository.
Please note: because of how Terraform handles module dependencies and triggers, it is currently necessary to retrigger the scaling resource after scaling down nodes. If you don't do this, ICP will continue to report inactive nodes until the next scaling event. To manually trigger the removal of deleted nodes, run these commands:

```
terraform taint --module icpprovision null_resource.icp-worker-scaler
terraform apply
```
As new use cases and best practices emerge, code will be added to and changed in the module. Any change in the code leads to a new release version, and the module versions follow a semantic versioning scheme.

To avoid breaking existing templates which depend on the module, it is recommended to add a version tag to the module source when pulling directly from git.
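For example, pinning to the 2.3.1 tag used elsewhere in this document:

```hcl
module "icpprovision" {
  # The ?ref= query pins the module to a specific release tag rather than master
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy?ref=2.3.1"

  # ... module inputs as in the examples above
}
```

The main changes in each release are summarised below.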
- Retry ssh from boot to cluster nodes when generating /etc/hosts entries. Fixes issues when some cluster nodes are provisioned substantially slower.
- Report exit code from docker when running the ansible installer, rather than the last command in the pipeline (tee)
- Skip blanks when generating config.yaml as yaml.safe_dump exports them as '' which ansible installer doesn't like
- Create backup copy of original config.yaml to keep options and comments
- Support nested dictionaries when parsing icp_configuration to convert true/false strings to booleans
- Fix empty icp-master list issue when using icp-host-groups
- Fix issue with docker package install from nfs source
- Make docker check silent when docker is not installed
- Fix issues with terraform formatting of boolean values in config.yaml
- Fix issue with non-hostgroups installations not generating hosts files
- Fix boot-node not being optional in non-hostgroups installations
- Fix issue with boot node trying to ssh itself
- Install docker from repository if no other method selected (ubuntu only)
- Fix apt install issue for prerequisites
- Add full support for separate boot node
- Save icp install log output to /tmp/icp-install-log.txt
- Add option for verbosity on icp install log output
- Fix issues with email usernames when using private registry
- Fix passwords containing ':' when using private registry
- Fix scaler error when using hostgroups
- Added support for hostgroups
- Updated prerequisites scripts to avoid immediate failure in airgapped installations
- Include module outputs
- Added support for install hooks
- Added support for converged proxy nodes (combined master/proxy)
- Added support for private docker registry
- Fixed problem with worker scaler
- Added support for ssh bastion host
- Added support for dedicated management hosts
- Split up null_resource provisioners to increase granularity
- Added support for parallel load of EE images
- Various fixes
- Initial release