DO500 Lab Environment Provisioning Guide

Dependencies

The DO500 lab environment provisioning depends upon the following projects:

  • casl-ansible

    This Red Hat Communities of Practice project provides the main playbooks for building the OpenShift cluster. It depends on the following projects:

    • infra-ansible

      These playbooks create add-on infrastructure for the cluster.

      Note
      We are currently using Jim Rigsbee’s fork for automated provisioning with AWS.
    • openshift-ansible

      These playbooks install the OpenShift cluster.

    • openshift-applier

      These playbooks apply resources to the OpenShift cluster after it is built.

Prerequisites

  1. An AWS account with an IAM user enabled. You need the access key and secret key for this user. Use the "Download CSV" button (available only during account creation) to save the credentials.

  2. Access to the Employee SKU with redhat.com credentials.

  3. A registry.redhat.io service account.

  4. A hosted zone must be established in the AWS account. These instructions currently assume nextcle.com.

  5. The Red Hat "Gold Images" need to be added to the AWS account. See https://access.redhat.com/articles/2962171.

    Go to the EC2 console and select AMIs. Search for RHEL-7.5 and select RHEL-7.5_HVM_GA-20180322-x86_64-1-Access2-GP2. Record the AMI ID; it looks like ami-c2603bad and is different in every AWS region. The Gold Images also have to be added to each AWS base account; see the URL above. A lookup sketch follows this list.

  6. A private key file for access to the instances. You may create and upload one to your AWS account or use the glsdemo2.pem key. Contact Jim Rigsbee for the key file. Make sure the key pair is defined in the AWS region you will target (see the sketch after this list).
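
A minimal sketch for prerequisites 5 and 6, assuming the AWS CLI is installed and configured with the IAM credentials from prerequisite 1; the region and key name below are examples only:

    # Find the RHEL 7.5 Gold AMI ID in your target region
    aws ec2 describe-images \
        --region us-east-1 \
        --filters 'Name=name,Values=RHEL-7.5_HVM_GA-20180322-x86_64-1-Access2-GP2' \
        --query 'Images[].[ImageId,Name]' --output text

    # Confirm the private key pair is registered in the same region
    aws ec2 describe-key-pairs --region us-east-1 --key-names glsdemo2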

Provisioning Steps

Provisioning OpenShift Cluster

  1. Copy sample.env.sh to env.sh.

  2. Edit env.sh (a sketch for looking up the RHSM pool ID follows this list):

    RHSM_USER=<subscription-manager username>
    RHSM_PASSWD=<subscription-manager password>
    RHSM_POOL=<subscription pool for Employee SKU>
    REG_USERNAME=<registry.redhat.io service account username>
    REG_TOKEN=<registry.redhat.io service account token>
    ENV_ID=<subdomain for this cluster>
    OCP_USERNAME=<cluster admin account>
    OCP_PASSWORD=<cluster admin password>
    IDM_DM_PASSWORD=D0ma1n123!
    IDM_ADMIN_PASSWORD=Adm1n123!
  3. Make sure your AWS credentials CSV file (downloaded from the IAM account creation page) is located in $HOME as aws-credentials.csv. The file contents look like this:

    User name,Password,Access key ID,Secret access key,Console login link
    bob,,AKIANNNNNNNNNNNNN,BBBBB/ccccc,https://933309984444.signin.aws.amazon.com/console
  4. Pull in playbook dependencies using Ansible Galaxy:

    ansible-galaxy install -r do500-requirements.yml -p galaxy
    cp casl-requirements.yml galaxy/casl-ansible
    cd galaxy/casl-ansible
    ansible-galaxy install -r casl-requirements.yml -p galaxy
    Note
    We are using Jim Rigsbee’s fork of infra-ansible at this time.
  5. Copy sample OCP inventory to inventory_ocp:

    cp -a sample_inventory_ocp inventory_ocp
  6. Edit inventory_ocp/ec2.ini:

    regions = <region to place cloud formation in, e.g. us-east-1>
    instance_filters = tag:env_id=<env_id, e.g. labs>
  7. Edit inventory_ocp/group_vars/all.yaml:

    env_id: "labs" <-- change to a unique name; it is used in hostnames for each AWS instance
    ...
    cloud_infrastructure:
       region: us-east-1   <---- Adjust accordingly, make it match ec2.ini values
       image_name: ami-0d70a070    <--- This is unique for your account/region, Red Hat Gold AMI for RHEL 7.5
       masters:
         count: 1 <--- usually 1
         flavor: m4.large  <---- this should be sufficient
         zones:
         - us-east-1a  <--- Adjust accordingly
         name_prefix: master
         root_volume_size: 40
         docker_volume_size: 30
       appnodes:
         count: 2  <--- Adjust based on the size of the class
         flavor: m4.xlarge   <---- Adjust based on the size of the class for higher/lower memory requirements
         zones:
         - us-east-1a   <---- Adjust accordingly
         name_prefix: app-node
         root_volume_size: 40
         docker_volume_size: 30
       infranodes:
         count: 1    <----- usually 1
         flavor: m4.xlarge  <---- this should be sufficient
         zones:
         - us-east-1a   <----- Adjust accordingly
         name_prefix: infra-node
         root_volume_size: 40
         docker_volume_size: 30
    ...
    dns_domain: "nextcle.com"   <---- Adjust if domain registered is different for your environment
  8. Edit the provision-ocp.sh file:

      -e OPTS="-e aws_key_name=glsdemo2" -t \   <--- make sure this matches the AWS key name you uploaded

    Make sure the paths for the private key and do500 sources are changed if needed in the script.

  9. Execute ./provision-ocp.sh.

    This script:

    • Sources the environment variables needed for various secrets.

    • Creates an inventory in the galaxy/casl-ansible folder based on the entries in inventory_ocp, using the sample AWS inventory provided on GitHub.

    • Starts the redhatcop/casl-ansible container to provision, install, and configure the OpenShift cluster using the Ansible playbooks and roles provided by casl-ansible and its related modules.
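
For step 2, the Employee SKU pool ID can be looked up from any registered RHEL system, and once provision-ocp.sh finishes a quick sanity check of the cluster is worthwhile. A minimal sketch, assuming subscription-manager is available and the match string fits your subscription name:

    # Look up the pool ID for RHSM_POOL (the 'Employee SKU' match string is
    # an assumption; adjust it to your subscription's name)
    subscription-manager list --available --matches 'Employee SKU' --pool-only

    # After provision-ocp.sh completes, verify node status from the master
    oc get nodes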

Provisioning Identity Manager (LDAP)

  1. Copy sample IdM inventory to inventory_idm:

    cp -a sample_inventory_idm inventory_idm
  2. Edit the inventory_idm/hosts file and change the name of the server to match the subdomain and domain used by the OpenShift cluster.

  3. Edit the inventory_idm/group_vars/all.yml file:

    ...
    vpc_name: labs   <---- make sure this matches the env_id for the cluster
    aws_region: us-east-1   <---- make sure this matches the cluster region
    # This should be a Gold AMI for Red Hat Linux
    ami_id: ami-0d70a070   <---- make sure this matches the cluster AMI
    dns_domain: nextcle.com  <---- verify the domain name
    
    instance:
      flavor: t2.medium
      zone: us-east-1a    <----- use the same availability zone as cluster
      public_ip: yes
      reverse_lookup: yes
      reverse_zone: 1.20.10.in-addr.arpa. <----- change to match the cluster subnet; e.g., a 10.20.1.0/24 subnet reverses to 1.20.10.in-addr.arpa.
      hostname_prefix: idm
      root_volume_size: 25
    ...
  4. Edit the inventory_idm/group_vars/idm-server.yml file:

    idm_master_hostname: idm.labs.nextcle.com  <---- adjust subdomain / domain
    idm_domain: labs.nextcle.com  <---- adjust subdomain / domain
    idm_realm: labs.nextcle.com  <---- adjust subdomain / domain
  5. Edit provision-idm.sh and make sure the paths and keys are correct.

  6. Execute ./provision-idm.sh.

  7. On the master, add an identityProviders stanza to /etc/origin/master/master-config.yaml. See master-config.yaml.ldap for the specific settings.

  8. Add a group called "users" via the identity manager console (or via the CLI; see the sketch after this list).

  9. Add user accounts via https://idm.<env_id>.nextcle.com, assign them to the "users" group, and reset the password on each account.

    Note
    There is a playbook to do this but I could not get it to work.
  10. Restart the master API and controllers. On master.<env_id>.nextcle.com:

    Note
    You can SSH into the IdM instance, set up your SSH keys and config, and SSH from there to the master.
      /usr/local/bin/master-restart api
      /usr/local/bin/master-restart controllers
  11. Assign cluster-admin privileges to at least one of the accounts you added in IdM. Run this on the master as root:

    oc adm policy add-cluster-role-to-user cluster-admin <username>
  12. You should now be able to log in to the cluster with the LDAP credentials from IdM:

    oc login -u <username> -p <password> https://master.<env_id>.nextcle.com
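
For steps 8 and 9, the group and users can also be created from the IdM server's CLI with the ipa tool, since the note above says the playbook does not work yet. A minimal sketch, assuming you can kinit as admin; the user details are examples:

    kinit admin
    ipa group-add users --desc='DO500 lab users'
    ipa user-add student1 --first=Student --last=One --password
    ipa group-add-member users --users=student1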

Provisioning GitLab Instance in OpenShift Cluster

  1. Double-check your env.sh file to ensure that these variables are defined properly:

    $ENV_ID   <---- env id for the openshift cluster
    $OCP_USERNAME   <---- the admin account you created in IdM
    $OCP_PASSWORD   <--- password for this admin account
  2. Execute ./provision-gitlab.sh.

  3. Verify via the CLI or the OpenShift web console that GitLab deployed successfully (see the sketch after this list).

  4. Log in with one of the IdM accounts on the GitLab LDAP login page. The login should succeed.
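
A minimal sketch for step 3, assuming the deployment lands in a project named gitlab; adjust to whatever project provision-gitlab.sh actually creates:

    oc login -u <username> -p <password> https://master.<env_id>.nextcle.com
    oc get pods -n gitlab
    oc get routes -n gitlab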

Provisioning the Jenkins NPM Slave

  1. Execute ./provision-jenkins-slave.sh.

TO DO List

  1. Provision user accounts on IdM via an automated means.

  2. GitLab fails to deploy on clusters that use CRI-O instead of Docker; the images are hard-coded to look for Docker marker files.

  3. Automate de-provisioning of the IdM infrastructure in AWS.

  4. Find a way to delete the PVC/PV elastic storage in an automated fashion (see the sketch below).
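
One possible starting point for item 4; the project name and label selector are hypothetical, and any automation should first confirm how the storage is actually labeled:

    # Delete the PVCs for the workload (label selector is hypothetical)
    oc delete pvc -n <project> -l <label-selector>

    # Then delete any PVs left behind in the Released state
    oc get pv --no-headers | awk '$5 == "Released" {print $1}' | xargs -r oc delete pv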