Fix / Enable OAuth configuration #24

Merged
merged 9 commits into cddr on Mar 12, 2021
Conversation

@bodom0015 (Member) commented Feb 6, 2021

Problem

Login via OAuth2 is supported via ingress annotations in the NGINX ingress controller, but there is currently no way to configure this automatically via the Helm chart.

NOTE: values.yaml contains some stubbed-out configs for this that are not currently wired up to anything

Approach

Allow the user to configure the OAuth proxy paths in values.yaml

How to Test

NOTE: This uses the test Docker images built from nds-org/ndslabs#336

Prerequisites:

  • Kubernetes Cluster deployed
  • NGINX Ingress controller running
  • At least one StorageClass available, or a cluster-level default set
  • TLS cert generated, or cert-manager properly configured w/ ACMEDNS support for wildcard cert generation

Initial Setup

NOTE: By default, this Helm chart deploys version ~5.1 of the oauth2-proxy. Newest release is 7.0.0

  1. Checkout this branch locally
  2. Create a secret from your TLS certificate (if not using cert-manager): kubectl create secret tls ndslabs-tls --key ${KEY_FILE} --cert ${CERT_FILE}
  3. Edit values.yaml to set the following:
    • workbench.image.webui: ndslabs/angular-ui:external-auth - image has additional code to send _oauth2_proxy cookie to new API endpoint
    • workbench.image.apiserver: ndslabs/apiserver:external-auth - image has additional code to send _oauth2_proxy cookie back to the oauth2-proxy to fetch from its /userinfo endpoint
    • workbench.support.email / smtp.gmail_user / smtp_gmail_pass to your GMail app credentials
    • workbench.domain / workbench.subdomain_prefix (Defaults to: local.ndslabs.org / www)
    • workbench.etcd_storage.storage_class / workbench.home_storage.pvc_storage_class (Defaults to: "")
  4. Deploy Workbench: helm upgrade --install workbench . -f values.yaml
  5. Attempt to navigate directly to https://www.local.ndslabs.org/dashboard
  6. Log in to the Workbench
    • You should be redirected to where your ?rd= parameter was pointing

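The `?rd=` parameter mentioned in step 6 is the URL-escaped original request URI, which the ingress appends when redirecting unauthenticated users to the oauth2-proxy sign-in endpoint (the `signin_url` value `https://$host/oauth2/start?rd=$escaped_request_uri` configured later in this PR). A small illustrative sketch of how that URL is formed; the hostname and helper name here are examples, not part of the chart:

```python
from urllib.parse import quote

def signin_url(host: str, request_uri: str) -> str:
    """Build the oauth2-proxy sign-in URL the ingress redirects to,
    mirroring "https://$host/oauth2/start?rd=$escaped_request_uri"."""
    # safe='' escapes '/' as well, like NGINX's $escaped_request_uri
    return f"https://{host}/oauth2/start?rd={quote(request_uri, safe='')}"

print(signin_url("www.local.ndslabs.org", "/dashboard"))
# https://www.local.ndslabs.org/oauth2/start?rd=%2Fdashboard
```

After login, oauth2-proxy unescapes `rd` and redirects the browser back to that original URI.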
Setup: Configuring oauth2-proxy to use GitHub

Final chain: Workbench -> oauth2-proxy -> GitHub -> Workbench

  1. Create a new OAuth application in GitHub - be sure to save the clientId and clientSecret: https://github.com/settings/developers
  2. Generate a new cookieSecret by executing the following (Python 3 syntax): python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode())'
  3. Create a Kubernetes Secret from your new GitHub OAuth credentials:
kubectl -n default create secret generic oauth2-proxy-creds \
--from-literal=cookie-secret=${COOKIE_SECRET} \
--from-literal=client-id=${GITHUB_CLIENT_ID} \
--from-literal=client-secret=${GITHUB_CLIENT_SECRET}
  4. Create a file called oauth-proxy-config.github.yaml:
config:
  existingSecret: oauth2-proxy-creds

extraArgs:
  whitelist-domain: .local.ndslabs.org
  cookie-domain: .local.ndslabs.org
  provider: github

authenticatedEmailsFile:
  enabled: true
  restricted_access: |-
    [email protected]

ingress:
  enabled: true
  path: /oauth2
  hosts:
    - www.local.ndslabs.org
  annotations:
  tls:
    - secretName: ndslabs-tls
      hosts:
        - www.local.ndslabs.org
  5. Deploy the oauth2-proxy Helm chart: helm upgrade --install oauth2-proxy stable/oauth2-proxy --values oauth2-proxy-config.github.yaml
  6. Edit the Workbench Helm chart's values.yaml, and set the following:
workbench:
  image:
    apiserver: ndslabs/apiserver:external-auth
    webui: ndslabs/angular-ui:external-auth
oauth:
  enabled: true
  auth_url: "https://$host/oauth2/auth"
  signin_url: "https://$host/oauth2/start?rd=$escaped_request_uri"
  auth_response_headers: "x-auth-request-user, x-auth-request-email"
  7. Deploy the new Helm chart values into Workbench: helm upgrade --install workbench . -f values.yaml
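Step 2 above generates the cookieSecret as 16 random bytes, base64-encoded, which is the form oauth2-proxy accepts for its cookie secret. The same thing as a short Python 3 script (the function name is illustrative):

```python
import base64
import os

def generate_cookie_secret(nbytes: int = 16) -> str:
    """Return a base64-encoded random secret for oauth2-proxy's
    cookie-secret: 16 raw bytes encode to 24 base64 characters."""
    return base64.b64encode(os.urandom(nbytes)).decode("ascii")

print(generate_cookie_secret())
```

The resulting string is what goes into the `cookie-secret` key of the `oauth2-proxy-creds` Secret.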

Setup: Configuring oauth2-proxy to use Globus

Final chain: Workbench -> oauth2-proxy -> Globus -> NCSA -> 2FA -> Workbench

NOTE: These steps require a custom-built version of the oauth2-proxy Docker image that adds the Globus provider. This will be submitted back to the project as a PR.

  1. Redeploy the oauth2-proxy Helm chart with the following oauth2-values.globus.yaml:
image:
  repository: ndslabs/oauth2-proxy
  tag: globus-provider
  pullPolicy: Always

extraArgs:
  whitelist-domain: .local.ndslabs.org
  cookie-domain: .local.ndslabs.org
  provider: globus
  client-id: ""
  client-secret: ""
  cookie-secret: ""   # Generate with python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode())'

authenticatedEmailsFile:
  enabled: true
  restricted_access: |-
    [email protected]

ingress:
  enabled: true
  path: /oauth2
  hosts:
    - www.local.ndslabs.org
  annotations:
  tls:
    # Reuse ndslabs wildcard certificate
    - secretName: ndslabs-tls
      hosts:
        - www.local.ndslabs.org
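The provider configs above set `cookie-domain` and `whitelist-domain` to `.local.ndslabs.org` with a leading dot so the session cookie is shared across every subdomain (www, keycloak, per-application hosts). A simplified sketch of the matching rule this relies on (real cookie matching follows RFC 6265; this helper is illustrative):

```python
def cookie_domain_matches(host: str, cookie_domain: str) -> bool:
    """Simplified domain-match: a cookie scoped to ".local.ndslabs.org"
    is sent to the apex domain and to any host beneath it."""
    parent = cookie_domain.lstrip(".")
    return host == parent or host.endswith("." + parent)

print(cookie_domain_matches("www.local.ndslabs.org", ".local.ndslabs.org"))       # True
print(cookie_domain_matches("keycloak.local.ndslabs.org", ".local.ndslabs.org"))  # True
print(cookie_domain_matches("evil-local.ndslabs.org", ".local.ndslabs.org"))      # False
```

Without the leading dot, the `_oauth2_proxy` cookie set on one subdomain would not be visible on the others, breaking SSO across the Workbench hosts.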

Setup: Configuring oauth2-proxy to use Keycloak + CILogon

Final chain: Workbench -> oauth2-proxy -> Keycloak -> CILogon -> NCSA -> 2FA -> Workbench

NOTE 1: These steps require a custom-built version of the oauth2-proxy Docker image that enriches the Keycloak session data. This will be submitted back to the project as a PR.

NOTE 2: The names of the Realm / Client / Identity Provider created below are important; they are used later to build the callback URLs used by the oauth2-proxy

Deploying Keycloak
  1. Deploy Keycloak to your Kubernetes cluster as described in the docs: kubectl create -f https://raw.githubusercontent.com/keycloak/keycloak-quickstarts/latest/kubernetes-examples/keycloak.yaml
    • I changed the ingress rules here to keycloak.local.ndslabs.org, allowing me to reuse the wildcard cert for *.local.ndslabs.org
    • NOTE: Eventually, we will want to deploy Keycloak via Helm chart
  2. Log into your Keycloak instance via the default credentials: admin / admin
Configuring Keycloak + CILogon

Source: https://osc.github.io/ood-documentation/release-1.7/authentication/tutorial-oidc-keycloak-rhel7/configure-cilogon.html#register-your-keycloak-instance-with-cilogon

  1. Create a Realm named workbench in Keycloak, switch to the Realm using the selector at the top-left
  2. Register a new OAuth application with CILogon: https://cilogon.org/oauth2/register
    • NOTE: You should save the ClientId and ClientSecret
  3. On the left side, click on Identity Providers - create an OIDC Identity Provider in Keycloak named CILogon using the ClientId and ClientSecret that CILogon returned to you
  4. Wait for the CILogon administrators to approve your application
    • NOTE: this shouldn't take longer than a few hours on a normal business day
  5. On the left side, click on Clients - create a Client in Keycloak named cilogon for the CILogon Identity Provider
  6. On the left side, click on Authentication - create a new Authentication Flow called "Simple First Login" and add a single "REQUIRED" execution step of "Create New User (if Unique)"
    • This tells Keycloak how to handle our users on first login - in our case, we simply want to ensure the user exists in Keycloak
    • NOTE: Eventually, we will want to expand this with a more robust new sign-up process (e.g. handling linking multiple accounts and more advanced use cases)

NOTE: There may be more that we can do here with Mappings and Scopes to provide more info, but I don't understand enough about Keycloak/OAuth2 to know for sure if that gains us anything valuable for Workbench.

Configuring Keycloak + oauth2-proxy

Source: https://docs.syseleven.de/metakube/de/tutorials/setup-ingress-auth-to-use-keycloak-oauth

  1. If you're running on localhost or .local.ndslabs.org, then you'll need to add the following magic snippet in your templates/deployment.yaml:
      hostAliases:
      # TODO: How can we parameterize this IP?
      # To get, run: "kubectl get svc -n kube-system" and get nginx service IP
      - ip: "10.110.50.133"    
        hostnames:
        - "{{ .Values.workbench.subdomain_prefix}}.{{ .Values.workbench.domain }}"
* Background: Since `www.local.ndslabs.org` resolves to `127.0.0.1`, within a container it doesn't resolve to the ingress controller, so traffic never reaches our Pod. To fix this and talk directly to the oauth2-proxy, we can use `hostAliases` to add static entries to the `/etc/hosts` file so that we can reach the ingress controller. Obviously this is for development environments only; `hostAliases` configurations should not be used in production systems.
  2. Redeploy the oauth2-proxy Helm chart with the following oauth2-values.keycloak.yaml:
image:
  repository: ndslabs/oauth2-proxy
  tag: enrich-keycloak-session
  pullPolicy: Always

extraArgs:
  whitelist-domain: .local.ndslabs.org
  cookie-domain: .local.ndslabs.org
  scope: "openid email profile"
  provider: keycloak
  provider-display-name: Workbench Login
  login-url: "https://keycloak.local.ndslabs.org/auth/realms/workbench/protocol/openid-connect/auth" # Change "workbench" to your realm name
  redeem-url: "https://keycloak.local.ndslabs.org/auth/realms/workbench/protocol/openid-connect/token" # Change "workbench" to your realm name
  profile-url: "https://keycloak.local.ndslabs.org/auth/realms/workbench/protocol/openid-connect/userinfo" # Change "workbench" to your realm name
  validate-url: "https://keycloak.local.ndslabs.org/auth/realms/workbench/protocol/openid-connect/userinfo" # Change "workbench" to your realm name
  set-xauthrequest: true
  set-authorization-header: true
  client-id: "cilogon"    # Must match "Client" ID in Keycloak
  client-secret: "f8ec195f-b104-4c36-8ede-f1a268a5626a"     # Must match "Client" Secret (See Credentials tabs in Keycloak)
  cookie-secret: ""      # Generate with python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode())'
  #keycloak-group: "approved"    # Change "approved" to the group that should have access - this was not tested

authenticatedEmailsFile:
  enabled: true
  restricted_access: |-
    [email protected]

ingress:
  enabled: true
  path: /oauth2
  hosts:
    - www.local.ndslabs.org
  annotations:
  tls:
    # Reuse ndslabs wildcard certificate
    - secretName: ndslabs-tls
      hosts:
        - www.local.ndslabs.org
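The four Keycloak URLs above differ only in their final path segment, and each needs the same realm name substituted, so a typo in one is easy to make. A small helper that derives all four from the base URL and realm; the function name is illustrative, and the `/auth` prefix matches the legacy Keycloak path layout used in this deployment:

```python
def keycloak_oidc_urls(base: str, realm: str) -> dict:
    """Build the oauth2-proxy login/redeem/profile/validate URLs for a
    Keycloak realm (old-style /auth prefix, as deployed above)."""
    oidc = f"{base}/auth/realms/{realm}/protocol/openid-connect"
    return {
        "login-url": f"{oidc}/auth",
        "redeem-url": f"{oidc}/token",
        "profile-url": f"{oidc}/userinfo",
        "validate-url": f"{oidc}/userinfo",
    }

urls = keycloak_oidc_urls("https://keycloak.local.ndslabs.org", "workbench")
print(urls["login-url"])
# https://keycloak.local.ndslabs.org/auth/realms/workbench/protocol/openid-connect/auth
```

Note that profile-url and validate-url intentionally point at the same /userinfo endpoint.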

Test Steps

With any of the above providers configured, test steps should be identical:

  1. Attempt to navigate directly to https://www.local.ndslabs.org/dashboard
  2. Log in to the Workbench
    • You should be redirected through one or more providers to sign in / authorize
  3. Sign in and authorize this application
    • You should be redirected back to the dashboard page with a valid session
    • You should see your user info appear at the top-right
    • You should see any applications that you have previously added on your Workbench account load into your Dashboard
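The redirects in steps 1-3 are driven by NGINX's external auth: for each request the ingress calls the configured auth_url and, on a 401, sends the browser to signin_url with `?rd=` set to the original URI. A simplified sketch of that decision; this is illustrative pseudologic following the behavior of ingress-nginx's auth-url/auth-signin annotations, not the controller's actual code:

```python
def route_request(auth_status: int, signin_url: str):
    """Mimic ingress-nginx external auth: a 2xx from auth-url lets the
    request through, a 401 redirects the browser to signin-url, and
    anything else is denied outright."""
    if 200 <= auth_status < 300:
        return ("pass", None)
    if auth_status == 401:
        return ("redirect", signin_url)
    return ("deny", None)

print(route_request(401, "https://www.local.ndslabs.org/oauth2/start?rd=%2Fdashboard"))
# ('redirect', 'https://www.local.ndslabs.org/oauth2/start?rd=%2Fdashboard')
```

Once oauth2-proxy has a valid session cookie, auth_url starts returning 2xx and the dashboard loads directly, with the x-auth-request-user / x-auth-request-email headers passed upstream.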

@bodom0015 bodom0015 changed the base branch from master to cddr March 12, 2021 21:48
@bodom0015 bodom0015 merged commit 3c6d0d8 into cddr Mar 12, 2021
bodom0015 added a commit that referenced this pull request Nov 30, 2021
* Fix / Enable OAuth configuration (#24)

* Remove duplicate /dashboard ingress path

This fixes /cauth and SSO with oauth2-proxy

* Enable OAuth2 configuration via Helm chart values.yaml

* Simplify ingress configuration considerably

* Expose admin port internally if oauth enabled

* Fix auth-repsonse-headers annotation name, fix hard-coded secret name

* Fix default values.yaml entry for auth_response_headers

* Remove port mapping for 30002

Added a secure endpoint that can run on the usual 30001 instead

* Include root Ingress (where did this go??)

* Add back ingress rules that were accidentally removed

* Parameterize ProductName and ProductLandingHtml for the webui (#28)

* Parameterize AdvancedFeatures and ProductLandingHtml

* Fix typo in deployment.yaml

* Add favicon and logo path configs

* Add brand logo and favicon path configs

* Add brand_logo and favicon path configs

* Consistency is important

* Update config.yaml

* Update deployment.yaml

* Update values.yaml

* Fix typo in configmap

* Remove duplicate ingress rule

* Fix AdvancedFeatures overrides by adding to ConfigMap

* Fix AdvancedFeatures overrides by adding to ConfigMap

* Redirect non-WWW to the correct subdomain

* Added a hacky override for help_links (#29)

* Parameterize AdvancedFeatures and ProductLandingHtml

* Fix typo in deployment.yaml

* feat: add override for help_links

* added default values for help_links

* feat: start moving to configmap -> json mounted into pod

* feat: support traefik ingress (#31)