move config for subcapacities/subresources into plugin config
This is a weird wart from way back when plugin config was organized
completely differently. Nowadays, it's just way easier to have this as
part of the plugin config.
majewsky committed Nov 29, 2023
1 parent 3cc00ce commit 5f560ef
Showing 25 changed files with 77 additions and 113 deletions.
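As a rough before/after sketch of the migration this commit performs (field names taken from the examples in the diff below; other required plugin parameters are abbreviated away):

```yaml
# Old style (removed by this commit): subresource/subcapacity opt-ins at the
# top level of the cluster config, keyed by service type and resource name.
subresources:
  compute: [ instances ]
subcapacities:
  compute: [ cores, ram ]

# New style: each quota/capacity plugin opts in through its own params section.
services:
  - type: compute
    params:
      with_subresources: true
capacitors:
  - id: nova
    type: nova
    params:
      with_subcapacities: true
```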
56 changes: 25 additions & 31 deletions docs/operators/config.md
@@ -64,13 +64,6 @@ services:
capacitors:
- id: nova
type: nova
subresources:
compute:
- instances
subcapacities:
compute:
- cores
- ram
bursting:
max_multiplier: 0.2
```
@@ -86,8 +79,6 @@ The following fields and sections are supported:
| `discovery.only_domains` | no | May contain a regex. If given, only domains whose names match the regex will be considered by Limes. If `except_domains` is also given, it takes precedence over `only_domains`. |
| `discovery.params` | yes/no | A subsection containing additional parameters for the specific discovery method. Whether this is required depends on the discovery method; see [*Supported discovery methods*](#supported-discovery-methods) for details. |
| `services` | yes | List of backend services for which to scrape quota/usage data. Service types for which Limes does not include a suitable *quota plugin* will be ignored. See below for supported service types. |
| `subresources` | no | List of resources where subresource scraping is requested. This is an object with service types as keys, and a list of resource names as values. |
| `subcapacities` | no | List of resources where subcapacity scraping is requested. This is an object with service types as keys, and a list of resource names as values. |
| `capacitors` | no | List of capacity plugins to use for scraping capacity data. See below for supported capacity plugins. |
| `lowpriv_raise` | no | Configuration options for low-privilege quota raising. See [*low-privilege quota raising*](#low-privilege-quota-raising) for details. |
| `resource_behavior` | no | Configuration options for special resource behaviors. See [*resource behavior*](#resource-behavior) for details. |
@@ -264,7 +255,14 @@ become more generic once I have more than this singular usecase and a general pa
#### Instance subresources
The `instances` resource supports subresource scraping. Subresources bear the following attributes:
```yaml
services:
- type: compute
params:
with_subresources: true
```
The `instances` resource supports subresource scraping. If enabled, subresources bear the following attributes:

| Attribute | Type | Comment |
| --- | --- | --- |
@@ -539,6 +537,8 @@ services:
- type: volumev2
params:
volume_types: [ vmware, vmware_hdd ]
with_volume_subresources: true
with_snapshot_subresources: true
```

The area for this service is `storage`. The following resources are always exposed:
@@ -564,8 +564,8 @@ In Cinder, besides the volume-type-specific quotas, the general quotas
(`gigabytes`, `snapshots`, `volumes`) are set to the sum across all volume
types.

The `volumes` and `volumes_${volume_type}` resources supports subresource
scraping. Subresources bear the following attributes:
When subresource scraping is enabled (as shown above) for the `volumes` and `volumes_${volume_type}` resources,
volume subresources bear the following attributes:

| Attribute | Type | Comment |
| --- | --- | --- |
@@ -575,8 +575,8 @@ scraping. Subresources bear the following attributes:
| `size` | integer value with unit | volume size |
| `availability_zone` | string | availability zone where volume is located |

The `snapshots` and `snapshots_${volume_type}` resources supports subresource
scraping. Subresources bear the following attributes:
When subresource scraping is enabled (as shown above) for the `snapshots` and `snapshots_${volume_type}` resources,
snapshot subresources bear the following attributes:

| Attribute | Type | Comment |
| --- | --- | --- |
@@ -607,8 +607,7 @@ capacitors:
volume_types:
vmware: { volume_backend_name: vmware_ssd, default: true }
vmware_hdd: { volume_backend_name: vmware_hdd, default: false }
subcapacities:
- volumev2/capacity
with_subcapacities: true
```

| Resource | Method |
@@ -647,9 +646,7 @@ capacitors:
shares_per_pool: 1000
snapshots_per_share: 5
capacity_balance: 0.5
subcapacities:
- sharev2/share_capacity
- sharev2/snapshot_capacity
with_subcapacities: true
```

| Resource | Method |
@@ -668,8 +665,8 @@ considering pools with that share type.
The `mapping_rules` inside a share type have the same semantics as for the `sharev2` quota plugin, and must be set
identically to ensure that the capacity values make sense in context.

When subcapacity scraping is enabled (as shown above), subcapacities will be scraped for the respective resources. Each
subcapacity corresponds to one Manila pool, and bears the following attributes:
When subcapacity scraping is enabled (as shown above), subcapacities will be scraped for the `share_capacity` and
`snapshot_capacity` resources. Each subcapacity corresponds to one Manila pool, and bears the following attributes:

| Name | Type | Comment |
| --- | --- | --- |
@@ -725,10 +722,7 @@ capacitors:
extra_specs:
first: 'foo'
second: 'bar'
subcapacities:
- compute/cores
- compute/instances
- compute/ram
with_subcapacities: true
```
| Resource | Method |
@@ -751,7 +745,7 @@ The `params.extra_specs` parameter can be used to control how flavors are enumer
considered which have all the extra specs noted in this map, with the same values as defined in the configuration file.
This is particularly useful to filter Ironic flavors, which usually have much larger root disk sizes.

When subcapacity scraping is enabled (as shown above), subcapacities will be scraped for the respective resources. Each
When subcapacity scraping is enabled (as shown above), subcapacities will be scraped for all three resources. Each
subcapacity corresponds to one Nova hypervisor. If the `params.hypervisor_type_pattern` parameter is set, only matching
hypervisors will be shown. Aggregates with no matching hypervisor will not be considered. Subcapacities bear the
following attributes:
@@ -793,7 +787,7 @@ certificate (`params.api.ca_cert`) and/or specify a TLS client certificate
(`params.api.cert`) and private key (`params.api.key`) combination that
will be used by the HTTP client to make requests to the Prometheus API.

For example, the following configuration can be used with [swift-health-statsd][shs] to find the net capacity of a Swift cluster with 3 replicas:
For example, the following configuration can be used with [swift-health-exporter][she] to find the net capacity of a Swift cluster with 3 replicas:

```yaml
capacitors:
@@ -817,6 +811,7 @@ capacitors:
flavor_aliases:
newflavor1: [ oldflavor1 ]
newflavor2: [ oldflavor2, oldflavor3 ]
with_subcapacities: true
```

This capacity plugin reports capacity for the special `compute/instances_<flavorname>` resources that exist on SAP
@@ -835,9 +830,8 @@ subcapacities:
- compute: [ instances-baremetal ]
```

When the "compute/instances-baremetal" pseudo-resource is set up for subcapacity scraping (as shown above),
subcapacities will be scraped for all resources reported by this plugin. Subcapacities correspond to Ironic nodes and
bear the following attributes:
When subcapacity scraping is enabled (as shown above), subcapacities will be scraped for all resources reported by
this plugin. Subcapacities correspond to Ironic nodes and bear the following attributes:

| Attribute | Type | Comment |
| --- | --- | --- |
@@ -854,7 +848,7 @@ bear the following attributes:
[policy]: https://docs.openstack.org/security-guide/identity/policies.html
[ex-pol]: ../example-policy.yaml
[prom]: https://prometheus.io
[shs]: https://github.com/sapcc/swift-health-statsd
[she]: https://github.com/sapcc/swift-health-exporter

## Rate Limits

17 changes: 2 additions & 15 deletions internal/core/cluster.go
@@ -108,40 +108,27 @@ func (c *Cluster) Connect(provider *gophercloud.ProviderClient, eo gophercloud.E

//initialize quota plugins
for _, srv := range c.Config.Services {
scrapeSubresources := map[string]bool{}
for _, resName := range c.Config.Subresources[srv.ServiceType] {
scrapeSubresources[resName] = true
}

plugin := c.QuotaPlugins[srv.ServiceType]
err = yaml.UnmarshalStrict([]byte(srv.Parameters), plugin)
if err != nil {
errs.Addf("failed to supply params to service %s: %w", srv.ServiceType, err)
continue
}
err := plugin.Init(provider, eo, scrapeSubresources)
err := plugin.Init(provider, eo)
if err != nil {
errs.Addf("failed to initialize service %s: %w", srv.ServiceType, util.UnpackError(err))
}
}

//initialize capacity plugins
scrapeSubcapacities := make(map[string]map[string]bool)
for serviceType, resourceNames := range c.Config.Subcapacities {
m := make(map[string]bool)
for _, resourceName := range resourceNames {
m[resourceName] = true
}
scrapeSubcapacities[serviceType] = m
}
for _, capa := range c.Config.Capacitors {
plugin := c.CapacityPlugins[capa.ID]
err = yaml.UnmarshalStrict([]byte(capa.Parameters), plugin)
if err != nil {
errs.Addf("failed to supply params to capacitor %s: %w", capa.ID, err)
continue
}
err := plugin.Init(provider, eo, scrapeSubcapacities)
err := plugin.Init(provider, eo)
if err != nil {
errs.Addf("failed to initialize capacitor %s: %w", capa.ID, util.UnpackError(err))
}
2 changes: 0 additions & 2 deletions internal/core/config.go
@@ -42,8 +42,6 @@ type ClusterConfiguration struct {
Services []ServiceConfiguration `yaml:"services"`
Capacitors []CapacitorConfiguration `yaml:"capacitors"`
//^ Sorry for the stupid pun. Not.
Subresources map[string][]string `yaml:"subresources"`
Subcapacities map[string][]string `yaml:"subcapacities"`
LowPrivilegeRaise LowPrivilegeRaiseConfiguration `yaml:"lowpriv_raise"`
ResourceBehaviors []ResourceBehavior `yaml:"resource_behavior"`
Bursting BurstingConfiguration `yaml:"bursting"`
2 changes: 1 addition & 1 deletion internal/core/constraints_test.go
@@ -162,7 +162,7 @@ func expectQuotaConstraintInvalid(t *testing.T, path string, expectedErrors ...s

type quotaConstraintTestPlugin struct{}

func (p quotaConstraintTestPlugin) Init(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubresources map[string]bool) error {
func (p quotaConstraintTestPlugin) Init(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) error {
return nil
}
func (p quotaConstraintTestPlugin) PluginTypeID() string {
4 changes: 2 additions & 2 deletions internal/core/plugin.go
@@ -98,7 +98,7 @@ type QuotaPlugin interface {
//
//Before Init is called, the `services[].params` provided in the config
//file will be yaml.Unmarshal()ed into the plugin object itself.
Init(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubresources map[string]bool) error
Init(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) error

//ServiceInfo returns metadata for this service.
//
@@ -196,7 +196,7 @@ type CapacityPlugin interface {
//
//Before Init is called, the `capacitors[].params` provided in the config
//file will be yaml.Unmarshal()ed into the plugin object itself.
Init(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubcapacities map[string]map[string]bool) error
Init(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) error
//Scrape queries the backend service(s) for the capacities of the resources
//that this plugin is concerned with. The result is a two-dimensional map,
//with the first key being the service type, and the second key being the
2 changes: 1 addition & 1 deletion internal/plugins/archer.go
@@ -53,7 +53,7 @@ func init() {
}

// Init implements the core.QuotaPlugin interface.
func (p *archerPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubresources map[string]bool) error {
func (p *archerPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) error {
serviceType := "endpoint-services"
eo.ApplyDefaults(serviceType)

9 changes: 3 additions & 6 deletions internal/plugins/capacity_cinder.go
@@ -42,8 +42,7 @@ type capacityCinderPlugin struct {
VolumeBackendName string `yaml:"volume_backend_name"`
IsDefault bool `yaml:"default"`
} `yaml:"volume_types"`
//computed state
reportSubcapacities map[string]bool `yaml:"-"`
WithSubcapacities bool `yaml:"with_subcapacities"`
//connections
CinderV3 *gophercloud.ServiceClient `yaml:"-"`
}
@@ -53,9 +52,7 @@ func init() {
}

// Init implements the core.CapacityPlugin interface.
func (p *capacityCinderPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubcapacities map[string]map[string]bool) (err error) {
p.reportSubcapacities = scrapeSubcapacities["volumev2"]

func (p *capacityCinderPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (err error) {
if len(p.VolumeTypes) == 0 {
//nolint:stylecheck //Cinder is a proper name
return errors.New("Cinder capacity plugin: missing required configuration field cinder.volume_types")
@@ -181,7 +178,7 @@ func (p *capacityCinderPlugin) Scrape() (result map[string]map[string]core.PerAZ
}
*capa.Usage += uint64(pool.Capabilities.AllocatedCapacityGB)

if p.reportSubcapacities["capacity"] {
if p.WithSubcapacities {
capa.Subcapacities = append(capa.Subcapacities, storagePoolSubcapacity{
PoolName: pool.Name,
AvailabilityZone: poolAZ,
26 changes: 10 additions & 16 deletions internal/plugins/capacity_manila.go
@@ -42,8 +42,7 @@ type capacityManilaPlugin struct {
SharesPerPool uint64 `yaml:"shares_per_pool"`
SnapshotsPerShare uint64 `yaml:"snapshots_per_share"`
CapacityBalance float64 `yaml:"capacity_balance"`
//computed state
reportSubcapacities map[string]bool `yaml:"-"`
WithSubcapacities bool `yaml:"with_subcapacities"`
//connections
ManilaV2 *gophercloud.ServiceClient `yaml:"-"`
}
@@ -63,9 +62,7 @@ func init() {
}

// Init implements the core.CapacityPlugin interface.
func (p *capacityManilaPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubcapacities map[string]map[string]bool) (err error) {
p.reportSubcapacities = scrapeSubcapacities["sharev2"]

func (p *capacityManilaPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (err error) {
if len(p.ShareTypes) == 0 {
return errors.New("capacity plugin manila: missing required configuration field manila.share_types")
}
@@ -218,29 +215,26 @@ func (p *capacityManilaPlugin) scrapeForShareType(shareType ManilaShareTypeSpec,
allocatedCapacityGbPerAZ[poolAZ] += pool.Capabilities.AllocatedCapacityGB
}

if p.reportSubcapacities["share_capacity"] {
subcapa := storagePoolSubcapacity{
if p.WithSubcapacities {
shareSubcapa := storagePoolSubcapacity{
PoolName: pool.Name,
AvailabilityZone: poolAZ,
CapacityGiB: getShareCapacity(pool.Capabilities.TotalCapacityGB, capBalance),
UsageGiB: getShareCapacity(pool.Capabilities.AllocatedCapacityGB, capBalance),
}
if !isIncluded {
subcapa.ExclusionReason = "hardware_state = " + pool.Capabilities.HardwareState
}
shareSubcapacitiesPerAZ[poolAZ] = append(shareSubcapacitiesPerAZ[poolAZ], subcapa)
}
if p.reportSubcapacities["snapshot_capacity"] {
subcapa := storagePoolSubcapacity{
snapshotSubcapa := storagePoolSubcapacity{
PoolName: pool.Name,
AvailabilityZone: poolAZ,
CapacityGiB: getSnapshotCapacity(pool.Capabilities.TotalCapacityGB, capBalance),
UsageGiB: getSnapshotCapacity(pool.Capabilities.AllocatedCapacityGB, capBalance),
}

if !isIncluded {
subcapa.ExclusionReason = "hardware_state = " + pool.Capabilities.HardwareState
shareSubcapa.ExclusionReason = "hardware_state = " + pool.Capabilities.HardwareState
snapshotSubcapa.ExclusionReason = "hardware_state = " + pool.Capabilities.HardwareState
}
snapshotSubcapacitiesPerAZ[poolAZ] = append(snapshotSubcapacitiesPerAZ[poolAZ], subcapa)
shareSubcapacitiesPerAZ[poolAZ] = append(shareSubcapacitiesPerAZ[poolAZ], shareSubcapa)
snapshotSubcapacitiesPerAZ[poolAZ] = append(snapshotSubcapacitiesPerAZ[poolAZ], snapshotSubcapa)
}
}

2 changes: 1 addition & 1 deletion internal/plugins/capacity_manual.go
@@ -37,7 +37,7 @@ func init() {
}

// Init implements the core.CapacityPlugin interface.
func (p *capacityManualPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubcapacities map[string]map[string]bool) error {
func (p *capacityManualPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) error {
return nil
}

9 changes: 4 additions & 5 deletions internal/plugins/capacity_nova.go
@@ -47,6 +47,7 @@ type capacityNovaPlugin struct {
MaxInstancesPerAggregate uint64 `yaml:"max_instances_per_aggregate"`
ExtraSpecs map[string]string `yaml:"extra_specs"`
HypervisorTypeRx regexpext.PlainRegexp `yaml:"hypervisor_type_pattern"`
WithSubcapacities bool `yaml:"with_subcapacities"`
//computed state
reportSubcapacities map[string]bool `yaml:"-"`

Check failure on line 52 in internal/plugins/capacity_nova.go (GitHub Actions / Build & Lint): field `reportSubcapacities` is unused (unused)
//connections
@@ -70,9 +71,7 @@ func init() {
}

// Init implements the core.CapacityPlugin interface.
func (p *capacityNovaPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubcapacities map[string]map[string]bool) (err error) {
p.reportSubcapacities = scrapeSubcapacities["compute"]

func (p *capacityNovaPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (err error) {
if p.AggregateNameRx == "" {
return errors.New("missing value for nova.aggregate_name_pattern")
}
@@ -259,8 +258,8 @@ func (p *capacityNovaPlugin) Scrape() (result map[string]map[string]core.PerAZ[c
azCapacity.Add(hvCapacity)

//report subcapacity for this hypervisor if requested
for _, resName := range resourceNames {
if p.reportSubcapacities[resName] {
if p.WithSubcapacities {
for _, resName := range resourceNames {
resCapa := hvCapacity.GetCapacity(resName, maxRootDiskSize)
azCapacity.Subcapacities = append(azCapacity.Subcapacities, novaHypervisorSubcapacity{
ServiceHost: hypervisor.Service.Host,
2 changes: 1 addition & 1 deletion internal/plugins/capacity_prometheus.go
@@ -37,7 +37,7 @@ func init() {
}

// Init implements the core.CapacityPlugin interface.
func (p *capacityPrometheusPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts, scrapeSubcapacities map[string]map[string]bool) error {
func (p *capacityPrometheusPlugin) Init(provider *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) error {
return nil
}

