
Stac-22253: release notes 2.3.0 #1575

Merged
merged 9 commits into from
Jan 30, 2025
1 change: 1 addition & 0 deletions SUMMARY.md
Original file line number Diff line number Diff line change
Expand Up @@ -129,6 +129,7 @@
* [v2.1.0 - 29/Oct/2024](setup/release-notes/v2.1.0.md)
* [v2.2.0 - 09/Dec/2024](setup/release-notes/v2.2.0.md)
* [v2.2.1 - 10/Dec/2024](setup/release-notes/v2.2.1.md)
* [v2.3.0 - 30/Jan/2025](setup/release-notes/v2.3.0.md)
* [Upgrade SUSE Observability](setup/upgrade-stackstate/README.md)
* [Migration from StackState](setup/upgrade-stackstate/migrate-from-6.md)
* [Steps to upgrade](setup/upgrade-stackstate/steps-to-upgrade.md)
Expand Down
25 changes: 13 additions & 12 deletions k8s-suse-rancher-prime.md
Original file line number Diff line number Diff line change
Expand Up @@ -23,23 +23,24 @@ To install SUSE Observability, ensure that the nodes have enough CPU and memory

There are different installation options available for SUSE Observability. You can install SUSE Observability either in a High-Availability (HA) or a single-instance (non-HA) setup. The non-HA setup is recommended for testing purposes or small environments. For production environments, it is recommended to install SUSE Observability in an HA setup.

The HA production setup can support from 150 up to 500 observed nodes. An observed node in this sizing table is taken to be 4 vCPUs and 16GB of memory, our `default node size`.
The HA production setup can support from 150 up to 4000 observed nodes. An observed node in this sizing table is taken to be 4 vCPUs and 16GB of memory, our `default node size`.
If nodes in your observed cluster are bigger, they can count as multiple `default nodes`; for example, a node of 12 vCPU and 48GB counts as 3 `default nodes` under observation when picking a profile.
The non-HA setup can support up to 100 nodes under observation.
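The conversion to `default nodes` can be sketched as a small helper. This is an illustration only: the rule that the larger of the CPU and memory ratios determines the count is our assumption, though the 12 vCPU / 48GB example above yields 3 either way.

```python
import math

def default_node_equivalents(vcpus: int, memory_gib: int) -> int:
    """Express an observed node's size in 'default node' units,
    where a default node is 4 vCPUs and 16GB of memory.

    Assumption (ours): the larger of the CPU and memory ratios,
    rounded up, determines the count."""
    return max(math.ceil(vcpus / 4), math.ceil(memory_gib / 16))

# The example from the text: a 12 vCPU / 48GB node counts as 3 default nodes.
print(default_node_equivalents(12, 48))  # → 3
```

Sum the equivalents across all observed nodes, then pick the smallest profile whose node count covers that total.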

| | trial | 10 non-HA | 20 non-HA | 50 non-HA | 100 non-HA | 150 HA | 250 HA | 500 HA |
| ------------------- | ------ | --------- | --------- | --------- | ---------- | ------ | ------- | ------- |
| **CPU Requests** | 7.5 | 7.5 | 10.5 | 15 | 25 | 49 | 62 | 86.5 |
| **CPU Limits** | 16 | 16 | 21.5 | 30.5 | 50 | 103 | 128 | 176 |
| **Memory Requests** | 22.5Gi | 22.5Gi | 28Gi | 32Gi | 51Gi | 67Gi | 143Gi | 161.5Gi |
| **Memory Limits** | 23.5Gi | 23.5Gi | 29Gi | 33.5Gi | 51.5Gi | 131Gi | 147.5Gi | 166Gi |
| | trial | 10 non-HA | 20 non-HA | 50 non-HA | 100 non-HA | 150 HA | 250 HA | 500 HA | 4000 HA |
| ------------------- | ------ | --------- | --------- | --------- | ---------- | ------ | ------- | ------- | ------- |
| **CPU Requests** | 7.5 | 7.5 | 10.5 | 15 | 25 | 49 | 62 | 86.5 | 210 |
| **CPU Limits** | 16 | 16 | 21.5 | 30.5 | 50 | 103 | 128 | 176 | 278 |
| **Memory Requests** | 22.5Gi | 22.5Gi | 28Gi | 32Gi | 51Gi | 67Gi | 143Gi | 161.5Gi | 256Gi |
| **Memory Limits** | 23.5Gi | 23.5Gi | 29Gi | 33.5Gi | 51.5Gi | 131Gi | 147.5Gi | 166Gi | 317.5Gi |

{% hint style="info" %}
The requirements shown for each profile represent the total amount of resources needed to run the SUSE Observability server.
To ensure that all the different services of the SUSE Observability server can be allocated:
* For non-HA installations the recommended node size is 4 vCPU, 8GB
* For HA installations the min recommended node size is 8VCPU, 16GB
* For HA installations up to 500 nodes the minimum recommended node size is 8 vCPU, 16GB
* For 4000-node HA installations the minimum recommended node size is 16 vCPU, 32GB
{% endhint %}

{% hint style="info" %}
Expand All @@ -58,10 +59,10 @@ SUSE Observability uses persistent volume claims for the services that need to s

For our different installation profiles, the following are the defaulted storage requirements:

| | trial | 10 non-HA | 20 non-HA | 50 non-HA | 100 non-HA | 150 HA | 250 HA | 500 HA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Retention (days)** | 3 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
| **Storage requirement** | 125GB | 280GB | 420GB | 420GB | 600GB | 2TB | 2TB | 2.5TB |
| | trial | 10 non-HA | 20 non-HA | 50 non-HA | 100 non-HA | 150 HA | 250 HA | 500 HA | 4000 HA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Retention (days)** | 3 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
| **Storage requirement** | 125GB | 280GB | 420GB | 420GB | 600GB | 2TB | 2TB | 2.5TB | 5.5TB |

For more details on the defaults used, see the page [Configure storage](/setup/install-stackstate/kubernetes_openshift/storage.md).

Expand Down
24 changes: 13 additions & 11 deletions setup/install-stackstate/requirements.md
Original file line number Diff line number Diff line change
Expand Up @@ -25,18 +25,20 @@ An observed node in this sizing table is taken to be 4 vCPUs and 16GB of memory,
If nodes in your observed cluster are bigger, they can count as multiple `default nodes`; for example, a node of 12 vCPU and 48GB counts as 3 `default nodes` under observation when picking a profile.

| | trial | 10 non-HA | 20 non-HA | 50 non-HA | 100 non-HA | 150 HA | 250 HA | 500 HA |
| ------------------- | ------ | --------- | --------- | --------- | ---------- | ------ | ------- | ------- |
| **CPU Requests** | 7.5 | 7.5 | 10.5 | 15 | 25 | 49 | 62 | 86.5 |
| **CPU Limits** | 16 | 16 | 21.5 | 30.5 | 50 | 103 | 128 | 176 |
| **Memory Requests** | 22.5Gi | 22.5Gi | 28Gi | 32Gi | 51Gi | 67Gi | 143Gi | 161.5Gi |
| **Memory Limits** | 23.5Gi | 23.5Gi | 29Gi | 33.5Gi | 51.5Gi | 131Gi | 147.5Gi | 166Gi |

| | trial | 10 non-HA | 20 non-HA | 50 non-HA | 100 non-HA | 150 HA | 250 HA | 500 HA | 4000 HA |
| ------------------- | ------ | --------- | --------- | --------- | ---------- | ------ | ------- | ------- | ------- |
| **CPU Requests** | 7.5 | 7.5 | 10.5 | 15 | 25 | 49 | 62 | 86.5 | 210 |
| **CPU Limits** | 16 | 16 | 21.5 | 30.5 | 50 | 103 | 128 | 176 | 278 |
| **Memory Requests** | 22.5Gi | 22.5Gi | 28Gi | 32Gi | 51Gi | 67Gi | 143Gi | 161.5Gi | 256Gi |
| **Memory Limits** | 23.5Gi | 23.5Gi | 29Gi | 33.5Gi | 51.5Gi | 131Gi | 147.5Gi | 166Gi | 317.5Gi |

{% hint style="info" %}
The requirements shown for each profile represent the total amount of resources needed to run the SUSE Observability server.
To ensure that all the different services of the SUSE Observability server can be allocated:
* For non-HA installations the recommended node size is 4 vCPU, 8GB
* For HA installations the min recommended node size is 8VCPU, 16GB
* For HA installations up to 500 nodes the minimum recommended node size is 8 vCPU, 16GB
* For 4000-node HA installations the minimum recommended node size is 16 vCPU, 32GB
{% endhint %}

These are just the upper and lower bounds of the resources that can be consumed by SUSE Observability in the different installation options. The actual resource usage will depend on the features used, the configured resource limits, and dynamic usage patterns, such as Deployment or DaemonSet scaling. For our self-hosted customers, we recommend starting with the default requirements and monitoring the resource usage of the SUSE Observability components.
Expand All @@ -55,10 +57,10 @@ SUSE Observability uses persistent volume claims for the services that need to s

For our different installation profiles, the following are the defaulted storage requirements:

| | trial | 10 non-HA | 20 non-HA | 50 non-HA | 100 non-HA | 150 HA | 250 HA | 500 HA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Retention (days)** | 3 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
| **Storage requirement** | 125GB | 280GB | 420GB | 420GB | 600GB | 2TB | 2TB | 2.5TB |
| | trial | 10 non-HA | 20 non-HA | 50 non-HA | 100 non-HA | 150 HA | 250 HA | 500 HA | 4000 HA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Retention (days)** | 3 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
| **Storage requirement** | 125GB | 280GB | 420GB | 420GB | 600GB | 2TB | 2TB | 2.5TB | 5.5TB |

{% hint style="info" %}
The storage estimates presented take into account a default retention of 14 days for non-HA and 1 month for HA installations. For short-lived test instances the storage sizes can be reduced further.
Expand Down
36 changes: 36 additions & 0 deletions setup/release-notes/v2.3.0.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
---
description: SUSE Observability Self-hosted
---

# v2.3.0 - 30/Jan/2025

## Release Notes SUSE Observability Helm Chart version 2.3.0

### New Features & Enhancements
* Added support for a 4000-node HA deployment profile.
* The following libraries and container images were upgraded to fix CVEs in SUSE Observability:
  * HDFS
  * Avro
  * Protobuf
  * Pac4j
  * Logback
  * Sts-toolbox
  * All SUSE Observability base images
  * Kafka and the Kafka operator
  * VictoriaMetrics
  * Container tools
  * MinIO
  * NGINX Prometheus exporter
  * Elasticsearch Prometheus exporter

### Bug Fixes
* Fixed an issue where the `vmrestore` Docker image could not be pulled from the Rancher Docker repositories.
* Fixed an issue where `helm install` would fail because the `suse-observability-backup-conf` job exited so quickly that Helm did not observe it.
* Fixed a bug where `fullComponents()` in the script API would fail with a `Could not find elements` message.

### Breaking changes
* Using `stackstate.components.all.image.pullSecretUserName` to define pull secrets was removed from the `suse-observability` Helm chart. The way to define a pull secret is through the `suse-observability-values` chart (see the [air-gapped installation instructions](/k8s-suse-rancher-prime-air-gapped.md#installing-suse-observability)) or through `pull-secret.credentials` in the `suse-observability` chart.
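As an illustration only, a pull secret defined via `pull-secret.credentials` might look like the sketch below. The field names under `credentials` are an assumption on our part, as are the placeholder values; verify them against the chart's values reference before use.

```yaml
# Hypothetical sketch -- the structure under `credentials` is assumed,
# check the suse-observability chart's values.yaml for the real schema.
pull-secret:
  credentials:
    - registry: registry.example.com   # placeholder registry host
      username: <registry-user>        # placeholder
      password: <registry-password>    # placeholder
```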

## Agent Bug Fixes
* Fixed an issue where the SUSE Observability agent would not install because its pull secret had not yet been created while a Helm pre-hook was running.
* Fixed an issue where the SUSE Observability agent prevented containerd tmp mounts from being unmounted.
12 changes: 6 additions & 6 deletions setup/security/authentication/troubleshooting.md
Original file line number Diff line number Diff line change
Expand Up @@ -17,14 +17,14 @@ stackstate:
components:
server:
additionalLogging: |
logger("org.pac4j.core.engine", DEBUG)
logger("org.pac4j.oidc.profile.creator", DEBUG)
logger("org.pac4j.oidc.credentials.authenticator", DEBUG)
<logger name="org.pac4j.core.engine" level="DEBUG"/>
<logger name="org.pac4j.oidc.profile.creator" level="DEBUG"/>
<logger name="org.pac4j.oidc.credentials.authenticator" level="DEBUG"/>
api:
additionalLogging: |
logger("org.pac4j.core.engine", DEBUG)
logger("org.pac4j.oidc.profile.creator", DEBUG)
logger("org.pac4j.oidc.credentials.authenticator", DEBUG)
<logger name="org.pac4j.core.engine" level="DEBUG"/>
<logger name="org.pac4j.oidc.profile.creator" level="DEBUG"/>
<logger name="org.pac4j.oidc.credentials.authenticator" level="DEBUG"/>
```

Now run the `helm upgrade` command you used before, but include this one extra YAML file (so `helm upgrade .... --values debug-auth.yaml`) to enable debug logging. No pods will restart; the logging configuration changes are loaded automatically after about 30 seconds.
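Put together, the extra values file might look like the sketch below. The filename `debug-auth.yaml` and the release/chart names in the comment are placeholders for whatever you used originally, not fixed names.

```yaml
# debug-auth.yaml -- extra values file enabling pac4j debug logging.
# Apply it on top of your existing values, e.g. (placeholders in <>):
#   helm upgrade <release> <chart> --values <your-values>.yaml --values debug-auth.yaml
stackstate:
  components:
    server:
      additionalLogging: |
        <logger name="org.pac4j.core.engine" level="DEBUG"/>
        <logger name="org.pac4j.oidc.profile.creator" level="DEBUG"/>
        <logger name="org.pac4j.oidc.credentials.authenticator" level="DEBUG"/>
    api:
      additionalLogging: |
        <logger name="org.pac4j.core.engine" level="DEBUG"/>
        <logger name="org.pac4j.oidc.profile.creator" level="DEBUG"/>
        <logger name="org.pac4j.oidc.credentials.authenticator" level="DEBUG"/>
```

Remove the file from the `helm upgrade` invocation (and upgrade again) to turn the debug logging back off.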
Expand Down