Merge pull request #12 from skysqlinc/DEV-234/fix-storage-auto-scale
Fix auto-scale storage formatting in table
karsov authored May 10, 2024
2 parents 0923dd0 + d0c38a1 commit 9b9714b
Showing 1 changed file with 7 additions and 7 deletions.
14 changes: 7 additions & 7 deletions docs/Autonomously scale Compute, Storage/README.md
@@ -49,17 +49,17 @@ To manage Autonomous settings:

Automatic scaling occurs based on the following rules; an illustrative sketch of how these conditions might be evaluated appears after the table.

| Policy | Condition | Action |
|-----------------------|--------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|
| Auto-Scale Disk | Disk utilization > 90% sustained for 5 minutes. <br> The disk is expected to run out of capacity in the next 24 hours (predicted based on the last 6 hours of service usage). | Upgrade storage to the next available size in 100 GB increments. <br> Note: you cannot downgrade storage; the upgrade is irreversible. |
| Auto-Scale Nodes Out | CPU utilization > 75% over all replicas sustained for 30 minutes. <br> Number of concurrent sessions > 90% over all replicas sustained for 1 hour. <br> Number of concurrent sessions is expected to hit the maximum within 4 hours (predicted based on the last 2 hours of service usage). | Add a new replica or node. <br> Additional nodes will be of the same size and configuration as existing nodes. |
| Auto-Scale Nodes In | CPU utilization < 50% over all replicas sustained for 1 hour. <br> Number of concurrent sessions < 50% over all replicas sustained for 1 hour. | Remove a replica or node. <br> Node count will not decrease below the initial count set at launch. |
| Auto-Scale Nodes Up | Number of concurrent sessions is expected to hit the maximum within 4 hours (predicted based on the last 2 hours of service usage). | Upgrade all nodes to the next available size. |
| Auto-Scale Nodes Down | CPU utilization < 50% over all replicas sustained for 1 hour. <br> Number of concurrent sessions < 50% over all replicas sustained for 1 hour. | Downgrade nodes. <br> Node size will not decrease below the initial node size set at launch. |
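
To make the conditions above concrete, here is a minimal Python sketch of how such thresholds might be evaluated. The `Metrics` fields, the action names, and the `evaluate_policies` function are illustrative assumptions for this example only, not SkySQL's actual scaling engine; in particular, the table does not state whether the paired scale-in/scale-down conditions are combined with AND or OR, so this sketch assumes both must hold.

```python
from dataclasses import dataclass


@dataclass
class Metrics:
    """Hypothetical rolling aggregates over all replicas (illustrative only)."""
    disk_utilization: float          # fraction 0..1, sustained for 5 minutes
    disk_full_within_24h: bool       # predicted from the last 6 hours of usage
    cpu_utilization_30m: float       # fraction 0..1, sustained for 30 minutes
    cpu_utilization_1h: float        # fraction 0..1, sustained for 1 hour
    sessions_utilization_1h: float   # fraction of max sessions, sustained for 1 hour
    sessions_max_within_4h: bool     # predicted from the last 2 hours of usage


def evaluate_policies(m: Metrics) -> list[str]:
    """Return the auto-scale actions whose conditions are met (sketch only)."""
    actions = []

    # Auto-Scale Disk: > 90% disk for 5 minutes, or predicted full within 24 hours.
    if m.disk_utilization > 0.90 or m.disk_full_within_24h:
        actions.append("upgrade-storage-100GB")      # irreversible per the table

    # Auto-Scale Nodes Out: any of the three listed triggers (assumed OR).
    if (m.cpu_utilization_30m > 0.75
            or m.sessions_utilization_1h > 0.90
            or m.sessions_max_within_4h):
        actions.append("add-replica")                # same size/config as existing nodes

    # Auto-Scale Nodes In / Down: low CPU and low sessions for 1 hour (assumed AND).
    if m.cpu_utilization_1h < 0.50 and m.sessions_utilization_1h < 0.50:
        actions.append("remove-replica")             # never below the launch node count
        actions.append("downgrade-node-size")        # never below the launch node size

    # Auto-Scale Nodes Up: sessions predicted to hit the maximum within 4 hours.
    if m.sessions_max_within_4h:
        actions.append("upgrade-node-size")

    return actions


# Example: a service that is busy on sessions but not on disk.
print(evaluate_policies(Metrics(0.40, False, 0.60, 0.55, 0.95, True)))
# -> ['add-replica', 'upgrade-node-size']
```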


Autonomous actions are not instantaneous.

Cooldown periods may apply. A cooldown period is the time after a scaling operation completes before another scaling operation can occur. The cooldown period for storage scaling is 6 hours.
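
For illustration, a minimal sketch of a cooldown check, assuming a hypothetical `can_scale` helper and a record of when the last scaling operation completed; only the 6-hour storage cooldown comes from this page, everything else is an assumption.

```python
from datetime import datetime, timedelta, timezone

# Only the 6-hour storage cooldown is documented above; cooldowns for any
# other operation types would be assumptions.
COOLDOWNS = {"storage": timedelta(hours=6)}


def can_scale(operation: str, last_completed: datetime | None) -> bool:
    """Return True if the cooldown for `operation` has elapsed (sketch only)."""
    if last_completed is None:
        return True                                   # no prior scaling operation
    cooldown = COOLDOWNS.get(operation, timedelta(0))
    return datetime.now(timezone.utc) - last_completed >= cooldown


# Example: storage was scaled 2 hours ago, so another upgrade must wait ~4 more hours.
two_hours_ago = datetime.now(timezone.utc) - timedelta(hours=2)
print(can_scale("storage", two_hours_ago))            # False
```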

[Uptime SLA](Uptime%20SLA%20ea985d9f82fc48b8bc0476cd359f48ce.md)
