Clusterpool for on-prem cloud providers

HIVE-1367

Summary

As a cluster administrator, I want to be able to use ClusterPools to provision clusters on on-prem clouds like vSphere, OpenStack, and bare metal.

Problem Statement

Provisioning a cluster requires configuration unique to that cluster. For example, the cluster's name is used to build the hostnames for the API and console. Someone has to handle the DNS that resolves those hostnames to the IPs of the services created by the installer. One option is to create your ClusterDeployment with ManageDNS=True and we'll create a DNSZone on the fly.

But that doesn't work for some cloud providers, such as vSphere, OpenStack, and bare metal.

In such cases, one solution is to configure DNS manually and then create your ClusterDeployment with the right Name and BaseDomain so that the assembled hostnames match those entries. This is fine if you're creating ClusterDeployments by hand, but breaks down if ClusterDeployment names are being generated with random slugs, as is the case with ClusterPools.

Proposal

Summary

Allow ClusterPool to accept an inventory of ClusterDeploymentCustomizations that provide install-config JSON patches to be applied when generating ClusterDeployments for the pool.

ClusterPool.Spec.Inventory

Add a field to ClusterPool.Spec called Inventory.

It holds an array of object references to ClusterDeploymentCustomization custom resources. Each CR carries a JSON patch (RFC 6902) against the default install config generated by the clusterpool controller. The patch contains cluster-specific changes; for vSphere this corresponds to the preconfigured DNS hostnames. For example:

spec:
  inventory:
    ClusterDeploymentCustomizations:
      - name: foo-cluster-deployment-customization
      - name: bar-cluster-deployment-customization
      - name: baz-cluster-deployment-customization

and the ClusterDeploymentCustomization CR will look like:

apiVersion: hive.openshift.io/v1
kind: ClusterDeploymentCustomization
metadata:
  name: foo-cluster-deployment-customization
  namespace: my-project
spec:
  installConfigPatches:
    - op: replace
      path: /metadata/name
      value: foo
status:
  clusterDeploymentRef:
    name: foo
    namespace: foo-namespace
  conditions:
    - lastProbeTime: "2020-11-05T14:49:26Z"
      lastTransitionTime: "2020-11-05T14:49:26Z"
      message: Currently in use by cluster deployment foo of clusterpool foo-pool
      reason: ClusterDeploymentCustomizationInUse
      status: "False"
      type: Available

If DNS is configured for the name foo, the ClusterDeploymentCustomization.spec.installConfigPatches content to patch the vSphere install config will be as follows:

spec:
  installConfigPatches:
    - op: replace
      path: /metadata/name
      value: foo

When adding a ClusterDeployment, if such an Inventory is present, the ClusterPool controller will:

  • Load up the inventory list.
  • For each reference, do a GET to fetch the ClusterDeploymentCustomization and check whether the Available condition is true and .status.clusterDeploymentRef is nil.
  • If a referenced ClusterDeploymentCustomization is missing or invalid, set the InventoryValid condition to false on the ClusterPool, indicating that some entries in the inventory are invalid or missing.
  • For the first ClusterDeploymentCustomization that is available, use the patches in its spec.installConfigPatches field to update the default install config generated by the clusterpool controller. The patches are applied in the order listed. Also update its status with a reference to the ClusterDeployment that is using it, and set the Available condition to false.
  • Set the spec.clusterPoolReference.ClusterDeploymentCustomizationRef field in the ClusterDeployment to a reference to the ClusterDeploymentCustomization CR (see the sketch after this list).
  • Set a finalizer on the ClusterDeployment that prevents it from being deleted before the lease on the ClusterDeploymentCustomization CR is released.
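
A minimal sketch of the last two steps, using the field names proposed in this document; the Go types, finalizer name, and controller-runtime usage are illustrative assumptions rather than the shipped Hive API.

// Sketch only: field names per this proposal; finalizer name is hypothetical.
package clusterpool

import (
	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	hivev1 "github.com/openshift/hive/apis/hive/v1"
)

// customizationFinalizer is a hypothetical finalizer name.
const customizationFinalizer = "hive.openshift.io/clusterdeploymentcustomization"

func wireCustomization(cd *hivev1.ClusterDeployment, cdc *hivev1.ClusterDeploymentCustomization) {
	// Record which inventory entry this ClusterDeployment consumes so the
	// lease can be released when the CD goes away.
	cd.Spec.ClusterPoolRef.ClusterDeploymentCustomizationRef = &corev1.LocalObjectReference{Name: cdc.Name}
	// Block deletion of the CD until the lease on the customization is released.
	controllerutil.AddFinalizer(cd, customizationFinalizer)
}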

Absent an Inventory, ClusterPool will continue to use the generated default install config as it does today.

When a ClusterDeployment is deleted, the ClusterPool controller will:

  • Determine whether the ClusterDeployment used a ClusterDeploymentCustomization by checking the spec.clusterPoolReference.ClusterDeploymentCustomizationRef field.
  • If so, update the ClusterDeploymentCustomization, setting status.clusterDeploymentRef to nil and the Available condition back to true, as sketched below.
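
A sketch of that release step, again with the proposed field names treated as illustrative assumptions (condition handling is elided):

// Sketch only: field names follow this proposal; condition handling elided.
package clusterpool

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	hivev1 "github.com/openshift/hive/apis/hive/v1"
)

func releaseCustomization(ctx context.Context, c client.Client, cd *hivev1.ClusterDeployment) error {
	ref := cd.Spec.ClusterPoolRef.ClusterDeploymentCustomizationRef
	if ref == nil {
		return nil // this ClusterDeployment did not use an inventory entry
	}
	cdc := &hivev1.ClusterDeploymentCustomization{}
	key := client.ObjectKey{Namespace: cd.Spec.ClusterPoolRef.Namespace, Name: ref.Name}
	if err := c.Get(ctx, key, cdc); err != nil {
		return err
	}
	// Clear the back-reference; the controller would also flip the Available
	// condition back to True here.
	cdc.Status.ClusterDeploymentRef = nil
	return c.Status().Update(ctx, cdc)
}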

How To Use

For the vSphere case, this allows the administrator to:

  • Preconfigure DNS with the following entries (assuming the cluster name is foo):
    10.0.0.10  api.foo.example.com
    10.0.0.11  *.apps.foo.example.com
    
  • Create a ClusterDeploymentCustomization CR to patch the metadata.name field of the default install config generated by the clusterpool controller. Refer to the sample CR in the section above. The content of the spec.installConfigPatches field should be as follows:
    spec:
      installConfigPatches:
        - op: replace
          path: /metadata/name
          value: foo
  • Add the name of the ClusterDeploymentCustomization CR to the clusterPool.spec.inventory.ClusterDeploymentCustomizations list. For a ClusterDeploymentCustomization named foo-cluster-deployment-customization, the clusterpool should be configured as follows:
    spec:
      inventory:
        ClusterDeploymentCustomizations:
          - name: foo-cluster-deployment-customization

Validation

Webhook validation will ensure that for a clusterpool:

  • if spec.inventory.ClusterDeploymentCustomizations is specified, it is not an empty list (see the sketch at the end of this section).

Other validations include:

  • If clusterpool.spec.size is greater than the length of the inventory, we set a status condition on the clusterpool indicating that the size requirement cannot be satisfied.
  • If the inventory references a ClusterDeploymentCustomization that does not exist, we log an error, set a status condition on the ClusterPool, and move on to the next entry in the list. We do not ignore that entry on the next iteration, as the user might have created it in the meantime.
  • If the inventory references a malformed ClusterDeploymentCustomization, we log an error, set a status condition on the ClusterDeploymentCustomization and the ClusterPool, and move on to the next entry in the list. We do not ignore that ClusterDeploymentCustomization on the next iteration, as the user might have fixed it.
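
A minimal sketch of the webhook check above, using stand-in Go types that mirror the proposed fields (names are illustrative):

// Sketch only: stand-in types mirroring the proposed ClusterPool fields.
package validation

import "fmt"

type InventoryEntry struct{ Name string }

type Inventory struct {
	ClusterDeploymentCustomizations []InventoryEntry
}

type ClusterPoolSpec struct {
	Inventory *Inventory
}

// validateInventory rejects a pool whose inventory is specified but empty.
func validateInventory(spec *ClusterPoolSpec) error {
	if spec.Inventory == nil {
		return nil // no inventory: nothing to validate
	}
	if len(spec.Inventory.ClusterDeploymentCustomizations) == 0 {
		return fmt.Errorf("spec.inventory.ClusterDeploymentCustomizations must not be empty when specified")
	}
	return nil
}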

Size and MaxSize

If an Inventory is used, ClusterPool.Spec.Size and .MaxSize are implicitly constrained to the length of the (nonempty) spec.inventory.ClusterDeploymentCustomizations list. Setting either or both to a smaller number still works as you would expect; see the sketch below.
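
A tiny sketch of that implicit clamp (illustrative only):

// Sketch only: the effective number of clusters the pool maintains.
package clusterpool

func effectiveSize(size int32, inventoryLen int) int32 {
	if int32(inventoryLen) < size {
		return int32(inventoryLen)
	}
	return size
}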

Pool Version

To make adding and deleting Inventory work sanely, we will adjust the computation of the pool version used for stale CD detection/replacement as follows:

  • When Inventory is present, append an arbitrary fixed string before the final hash operation. (We don't want to recalculate the hash every time the inventory changes, as that would treat all existing unclaimed CDs as stale and replace them; see the sketch after this list.)
  • When Inventory is absent, compute the pool version as before. This ensures the version does not change (triggering replacement of all unclaimed CDs) for existing pools when hive is upgraded to include this feature.
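
A sketch of the adjusted computation; the concrete hash inputs and the marker string are illustrative assumptions, not the actual Hive hash:

// Sketch only: only the *presence* of an Inventory feeds the hash, never its
// contents, so changing entries does not churn unclaimed CDs.
package clusterpool

import (
	"crypto/md5"
	"fmt"
)

func poolVersion(baseDomain, imageSetName string, hasInventory bool) string {
	h := md5.New()
	h.Write([]byte(baseDomain))
	h.Write([]byte(imageSetName))
	if hasInventory {
		// Arbitrary fixed string: changes the pool version when an Inventory is
		// added or removed, but not when individual entries change.
		h.Write([]byte("hasInventory"))
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}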

Handling Inventory Updates

Adding An Inventory

Adding an Inventory to a ClusterPool which previously didn't have one will cause the controller to recompute the pool version, rendering all existing unclaimed clusters stale and causing them to be replaced gradually.

Adding An Entry to the Inventory

If MaxSize is unset or less than/equal to the length of the inventory, and Size allows, adding a new entry will cause a new ClusterDeployment to be added to the pool. If the pool is already at [Max]Size there is no immediate effect.

Removing An Entry from the Inventory

  • If the entry is unused (no pool CD with that name exists), this is a no-op.
  • If an unclaimed CD exists with that name, we delete it. The controller will replace it, assuming an unused entry is available.
  • If a claimed CD exists with that name, no-op.

These behaviors are designed to mirror, as closely as possible, what happens when editing a pool's Size.

Deleting The Inventory

This will change the pool version, rendering existing unclaimed clusters stale and causing the controller to replace them gradually. The administrator may wish to speed up this process by manually deleting CDs, or scaling the pool size to zero and back.

Maintaining the lease of the ClusterDeploymentCustomization

If two controller pods building ClusterDeployments for the ClusterPool end up fetching the same ClusterDeploymentCustomization, they will try to claim the same preconfigured DNS hostname, and the resulting clusters will collide. To avoid this, we need to maintain a lease for each ClusterDeploymentCustomization.

To solve this, we will have a status.clusterDeploymentRef field and an Available condition on ClusterDeploymentCustomization.

  • When the Available condition is true and status.clusterDeploymentRef is nil, the ClusterDeploymentCustomization hasn't been claimed yet.

  • When we need to build a new ClusterDeployment, we

    • Load up the ClusterDeploymentCustomization list from the inventory.
    • Do a GET on each entry in the list and check whether the Available condition is true.
    • For the first ClusterDeploymentCustomization that is available, use the patches in the spec.installConfigPatches field to update the default install config generated by the clusterpool.
    • Update the spec.clusterPoolReference.ClusterDeploymentCustomizationRef field in the ClusterDeployment with a reference to the ClusterDeploymentCustomization.
    • Post an update to the ClusterDeploymentCustomization with a reference to the ClusterDeployment, and set the Available condition to false. This is where we bounce if two controllers try to grab the same ClusterDeploymentCustomization: the loser's update is rejected with a 409 Conflict, so we requeue and start over (see the sketch after this list).
    • Proceed with the existing algorithm to create the new cluster.
  • Add a top-level routine to ensure all unavailable ClusterDeploymentCustomizations have valid ClusterDeployments associated with them. If not, attempt to create them through the same flow, within the bounds of the various pool capacities.

    This is to guard against leaking entries if a controller crashes between updating the lease on ClusterDeploymentCustomization and creating the CD.
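
A sketch of the claim-and-bounce step; types and field names follow this proposal and are illustrative, with condition handling elided:

// Sketch only: the 409-conflict pattern is the point; types/fields per this
// proposal are illustrative assumptions.
package clusterpool

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	hivev1 "github.com/openshift/hive/apis/hive/v1"
)

func claimCustomization(ctx context.Context, c client.Client, cdc *hivev1.ClusterDeploymentCustomization, cd *hivev1.ClusterDeployment) (ctrl.Result, error) {
	// Record the ClusterDeployment that will consume this customization. The
	// controller would also set the Available condition to False here.
	cdc.Status.ClusterDeploymentRef = &corev1.ObjectReference{
		Name:      cd.Name,
		Namespace: cd.Namespace,
	}
	if err := c.Status().Update(ctx, cdc); err != nil {
		if apierrors.IsConflict(err) {
			// Another controller pod claimed this customization first (409).
			// Requeue and start over with a fresh view of the inventory.
			return ctrl.Result{Requeue: true}, nil
		}
		return ctrl.Result{}, err
	}
	// Lease acquired: proceed with ClusterDeployment creation.
	return ctrl.Result{}, nil
}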

Fairness

We will assume it is not important that we rotate through the list of supplied install config patches in any particular order, or with any nod to "fairness". For example: If there are five customizations and the usage pattern happens to only use two at a time, there is no guarantee that we will round-robin through all five. We may reuse the same two over and over, or pick at random, or something else -- any of which has the potential to "starve" some patches.

Future

Recognizing that there are other aspects of the ClusterDeployment that may need to be customizable for inventory purposes, we've left this design open to adding e.g. clusterDeploymentSpecPatches to the ClusterDeploymentCustomization spec.

Alternatives

In conceiving this design, a number of alternative designs were considered:

Bespoke Inventory Definition

As in a precursor to this RFE: creating a bespoke inventory structure with hand-picked fields whose behavior has to be custom-implemented every time, and may even need different branches for different cloud providers, depending on where the values actually have to land (e.g. install configs have different structures for different cloud providers).

Pro: Simple UX.

Con: Difficult to code and maintain, since we have to do new work every time we need to support a new field or provider, or even in cases where underlying formats/APIs change.

Full Spec

The inventory is simply (a ref to) a list of opaque install configs, or even ClusterDeploymentSpecs.

Pro: One and done coding for the hive team (modulo bugs). We never again have to introspect install configs.

Con: This is pretty terrible UX. The user has to maintain an entire install config for every possible cluster. If all you need is e.g. a different name/IP for each, that's really heavy. The user probably needs to go write their own templating tool, etc. Also, this is potentially ripe for abuse: nothing would stop the user from making their pool truly heterogeneous, potentially even to the point of different cloud providers. This is anathema to the philosophy of ClusterPool, where every cluster is supposed to be substantially "the same".

Hooks

Support a hook calling some external service to mutate install config and/or ClusterDeployment spec.

Pro: Opacity for hive again. Ultimate flexibility for the user again.

Con: Really heavy for the user – now they have to write code and deploy a service.