The DO500 lab environment provisioning depends upon the following projects:
- casl-ansible

  This Red Hat Communities of Practice project provides the main playbooks for building the OpenShift cluster. It depends on the following projects:

  - infra-ansible

    These playbooks create add-on infrastructure for the cluster.

    Note: We are currently using Jim Rigsbee's fork for automated provisioning with AWS.

  - openshift-ansible

    These playbooks install the OpenShift cluster.

  - openshift-applier

    These playbooks apply resources to the OpenShift cluster after it is built.
Prerequisites:

- AWS account with an IAM account enabled. You need the access and secret keys for the IAM account. Use the "Download CSV" button (available only during account creation) to save the credentials.
- Access to the Employee SKU with redhat.com credentials.
- A registry.redhat.io service account.
- A hosted zone must be established in the AWS account. These instructions currently assume nextcle.com.
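  As a quick sanity check, you can confirm the hosted zone exists with the AWS CLI (a sketch; assumes your CLI credentials can read Route 53, and note the trailing dot in the zone name):

  ```bash
  aws route53 list-hosted-zones --query "HostedZones[?Name=='nextcle.com.']"
  ```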
- The Red Hat "Gold Images" need to be added to the AWS account; see https://access.redhat.com/articles/2962171. Go to the EC2 console and select AMIs. Search for RHEL-7.5 and select RHEL-7.5_HVM_GA-20180322-x86_64-1-Access2-GP2. Record the AMI ID; it looks like this: ami-c2603bad. Note that the ID is different for every AWS region, and the images have to be added to each AWS base account (see the URL above).
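  One way to look up the region-specific AMI ID from the CLI (a sketch; the owner ID below is assumed to be Red Hat's AWS account, and the Gold Image must already be shared with your account):

  ```bash
  # Query by the image name recorded above; repeat per region as needed
  aws ec2 describe-images --owners 309956199498 \
    --filters "Name=name,Values=RHEL-7.5_HVM_GA-20180322-x86_64-1-Access2-GP2" \
    --query 'Images[0].ImageId' --output text --region us-east-1
  ```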
- A private key file for access to the instances. You may create and upload one to your AWS account, or use the glsdemo2.pem key (contact Jim Rigsbee for the key file). Make sure a private key is defined in the AWS region you will target.
To provision the OpenShift cluster:

- Copy sample.env.sh to env.sh.
- Edit env.sh:

  ```bash
  RHSM_USER=<subscription-manager username>
  RHSM_PASSWD=<subscription-manager password>
  RHSM_POOL=<subscription pool for Employee SKU>
  REG_USERNAME=<registry.redhat.io service account username>
  REG_TOKEN=<registry.redhat.io service account token>
  ENV_ID=<subdomain for this cluster>
  OCP_USERNAME=<cluster admin account>
  OCP_PASSWORD=<cluster admin password>
  IDM_DM_PASSWORD=D0ma1n123!
  IDM_ADMIN_PASSWORD=Adm1n123!
  ```
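  To confirm the file parses cleanly after editing (a sketch; assumes the values were quoted where needed):

  ```bash
  source env.sh && echo "$RHSM_USER $ENV_ID $OCP_USERNAME"
  ```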
- Make sure your AWS credentials CSV file (downloaded from the IAM account creation page) is located in $HOME as aws-credentials.csv. The file contents look like this:

  ```csv
  User name,Password,Access key ID,Secret access key,Console login link
  bob,,AKIANNNNNNNNNNNNN,BBBBB/ccccc,https://933309984444.signin.aws.amazon.com/console
  ```
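  If you ever need the keys in your shell, they can be extracted from the CSV (a sketch; assumes the exact two-line format shown above):

  ```bash
  export AWS_ACCESS_KEY_ID=$(tail -1 "$HOME/aws-credentials.csv" | cut -d, -f3)
  export AWS_SECRET_ACCESS_KEY=$(tail -1 "$HOME/aws-credentials.csv" | cut -d, -f4)
  ```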
- Pull in playbook dependencies using Ansible Galaxy:

  ```bash
  ansible-galaxy install -r do500-requirements.yml -p galaxy
  cp casl-requirements.yml galaxy/casl-ansible
  cd galaxy/casl-ansible
  ansible-galaxy install -r casl-requirements.yml -p galaxy
  ```

  Note: We are using Jim Rigsbee's fork of infra-ansible at this time.
- Copy the sample OCP inventory to inventory_ocp:

  ```bash
  cp -a sample_inventory_ocp inventory_ocp
  ```
- Edit inventory_ocp/ec2.ini:

  ```ini
  regions = <region to place the CloudFormation stack in, e.g. us-east-1>
  instance_filters = tag:env_id=<env_id, e.g. labs>
  ```
- Edit inventory_ocp/group_vars/all.yaml:

  ```yaml
  env_id: "labs"              # change to a unique name; used in hostnames for each AWS instance
  ...
  cloud_infrastructure:
    region: us-east-1         # adjust accordingly; must match the ec2.ini values
    image_name: ami-0d70a070  # unique to your account/region; the Red Hat Gold AMI for RHEL 7.5
    masters:
      count: 1                # usually 1
      flavor: m4.large        # this should be sufficient
      zones:
      - us-east-1a            # adjust accordingly
      name_prefix: master
      root_volume_size: 40
      docker_volume_size: 30
    appnodes:
      count: 2                # adjust based on the size of the class
      flavor: m4.xlarge       # adjust based on the size of the class for higher/lower memory requirements
      zones:
      - us-east-1a            # adjust accordingly
      name_prefix: app-node
      root_volume_size: 40
      docker_volume_size: 30
    infranodes:
      count: 1                # usually 1
      flavor: m4.xlarge       # this should be sufficient
      zones:
      - us-east-1a            # adjust accordingly
      name_prefix: infra-node
      root_volume_size: 40
      docker_volume_size: 30
  ...
  dns_domain: "nextcle.com"   # adjust if the registered domain is different for your environment
  ```
- Edit the provision-ocp.sh file:

  ```bash
  -e OPTS="-e aws_key_name=glsdemo2" -t \    <--- make sure this matches the AWS key name you uploaded
  ```

  Make sure the paths for the private key and the do500 sources are changed in the script, if needed.
- Execute ./provision-ocp.sh. This script:

  - Sources the environment variables needed for various secrets.

  - Creates an inventory in the galaxy/casl-ansible folder, based on the entries in inventory_ocp and the sample AWS inventory provided in GitHub.

  - Starts the redhatcop/casl-ansible container to provision, install, and configure the OpenShift cluster using the Ansible playbooks and roles provided by casl-ansible and its related modules.
To provision the IdM server:

- Copy the sample IdM inventory to inventory_idm:

  ```bash
  cp -a sample_inventory_idm inventory_idm
  ```
- Edit the inventory_idm/hosts file and change the name of the server to match the subdomain and domain used by the OpenShift cluster.
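  A hypothetical example of the edited file, assuming env_id "labs" and a host group named idm-server (the group name is a guess based on the idm-server.yml group_vars file):

  ```ini
  [idm-server]
  idm.labs.nextcle.com
  ```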
- Edit the inventory_idm/group_vars/all.yml file:

  ```yaml
  ...
  vpc_name: labs                          # make sure this matches the env_id for the cluster
  aws_region: us-east-1                   # make sure this matches the cluster region
  # This should be a Gold AMI for Red Hat Linux
  ami_id: ami-0d70a070                    # make sure this matches the cluster AMI
  dns_domain: nextcle.com                 # verify the domain name
  instance:
    flavor: t2.medium
    zone: us-east-1a                      # use the same availability zone as the cluster
    public_ip: yes
    reverse_lookup: yes
    reverse_zone: 1.20.10.in-addr.arpa.   # change based on the subnet for the cluster
    hostname_prefix: idm
    root_volume_size: 25
  ...
  ```
- Edit the inventory_idm/group_vars/idm-server.yml file:

  ```yaml
  idm_master_hostname: idm.labs.nextcle.com   # adjust subdomain / domain
  idm_domain: labs.nextcle.com                # adjust subdomain / domain
  idm_realm: labs.nextcle.com                 # adjust subdomain / domain
  ```
- Edit provision-idm.sh and make sure the paths and keys are correct.
- Execute ./provision-idm.sh.
- On the master, add a stanza to /etc/origin/master/master-config.yaml for the identityProvider. See master-config.yaml.ldap for the specific settings; a rough sketch of the stanza shape follows.
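  This is only a sketch of what such a stanza looks like in OpenShift 3.x; the provider name, bind settings, and LDAP URL below are assumptions, and master-config.yaml.ldap remains the authoritative reference:

  ```yaml
  oauthConfig:
    identityProviders:
    - name: idm                 # assumed provider name
      challenge: true
      login: true
      mappingMethod: claim
      provider:
        apiVersion: v1
        kind: LDAPPasswordIdentityProvider
        attributes:
          id: ["dn"]
          email: ["mail"]
          name: ["cn"]
          preferredUsername: ["uid"]
        bindDN: ""              # anonymous bind assumed
        bindPassword: ""
        insecure: true          # assumes plain LDAP; use a CA and ldaps:// otherwise
        url: "ldap://idm.<env_id>.nextcle.com/cn=users,cn=accounts,dc=<env_id>,dc=nextcle,dc=com?uid"
  ```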
- Add a group called "users" via the identity manager console.
- Add user accounts via https://idm.<env_id>.nextcle.com. Assign the users to the "users" group. Reset the password on each account.

  Note: There is a playbook to do this, but I could not get it to work.
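  A hypothetical alternative is the ipa CLI on the IdM server (the user name here is an example; --password prompts for a temporary password that the user must reset at first login):

  ```bash
  ipa user-add jdoe --first=Jane --last=Doe --password
  ipa group-add-member users --users=jdoe
  ```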
- Restart the master API and controllers. On master.<env_id>.nextcle.com:

  ```bash
  /usr/local/bin/master-restart api
  /usr/local/bin/master-restart controllers
  ```

  Note: You can ssh into the IdM instance, set up your ssh keys and config, and ssh from there to the master.
- Assign cluster-admin privileges to at least one of the accounts you added in IdM. Run this on the master as root:

  ```bash
  oc adm policy add-cluster-role-to-user cluster-admin <username>
  ```
- You should now be able to log in with the LDAP credentials from IdM. Note that oc login targets the OpenShift master API, not the IdM host:

  ```bash
  oc login -u <username> -p <password> https://master.<env_id>.nextcle.com
  ```
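  If the login succeeds, oc whoami should print the IdM user name.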
To provision GitLab:

- Double-check your env.sh file to ensure that these variables are defined properly:

  ```bash
  $ENV_ID        # env id for the OpenShift cluster
  $OCP_USERNAME  # the admin account you created in IdM
  $OCP_PASSWORD  # password for this admin account
  ```
- Execute ./provision-gitlab.sh.
- Verify via the CLI or the OpenShift web console that GitLab deployed successfully.
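  A minimal CLI check (a sketch; assumes GitLab was deployed into a project named "gitlab"):

  ```bash
  oc get pods -n gitlab     # all pods should reach Running or Completed
  oc get routes -n gitlab   # note the route for the GitLab web UI
  ```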
- Log in with one of the IdM accounts on the GitLab LDAP login page. The login should succeed.
Known issues and remaining work:

- Provision user accounts on IdM via an automated means.

- GitLab fails to deploy on clusters that use cri-o instead of docker. The images are hard-coded to look for docker marker files.

- Automate de-provisioning of the IdM infrastructure in AWS.

- Find a way to delete the PVC/PV elastic storage in an automated fashion; a possible starting point is sketched below.
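A hypothetical starting point for that last item (assumes the GitLab PVCs live in a project named "gitlab" and carry an app label):

```bash
# Deleting the claims releases the bound PVs per their reclaim policy
oc delete pvc -n gitlab -l app=gitlab
```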