diff --git a/articles/cosmos-db/nosql/TOC.yml b/articles/cosmos-db/nosql/TOC.yml index ab0b0a9fb0..baab1fcf8a 100644 --- a/articles/cosmos-db/nosql/TOC.yml +++ b/articles/cosmos-db/nosql/TOC.yml @@ -340,6 +340,8 @@ href: ../integrated-cache-faq.md - name: Integrated vector database href: ../vector-search.md + - name: Materialized views (preview) + href: materialized-views.md - name: Analytics and BI items: - name: Analytics and BI overview @@ -593,8 +595,8 @@ href: how-to-java-change-feed.md - name: NoSQL query href: query/toc.yml - - name: Materialized views (preview) - href: materialized-views.md + - name: Configure materialized views (preview) + href: how-to-configure-materialized-views.md - name: Index and query geospatial data displayName: geospatial, geojson, spatial, index, query, geography, geometry href: how-to-geospatial-index-query.md diff --git a/articles/cosmos-db/nosql/how-to-configure-materialized-views.md b/articles/cosmos-db/nosql/how-to-configure-materialized-views.md new file mode 100644 index 0000000000..d817bf2946 --- /dev/null +++ b/articles/cosmos-db/nosql/how-to-configure-materialized-views.md @@ -0,0 +1,332 @@ +--- +title: How to configure materialized views (preview) +titleSuffix: Azure Cosmos DB for NoSQL +description: Learn how to configure materialized views and use them to avoid expensive cross-partition queries. +author: jcocchi +ms.author: jucocchi +ms.service: azure-cosmos-db +ms.subservice: nosql +ms.topic: how-to +ms.date: 12/13/2024 +--- + +# How to configure Azure Cosmos DB for NoSQL materialized views (preview) + +[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] + +> [!IMPORTANT] +> Azure Cosmos DB for NoSQL materialized views are currently in preview. You can enable this feature by using the Azure portal. This preview is provided without a service-level agreement. At this time, we don't recommend that you use materialized views for production workloads. 
Certain features of this preview might not be supported or might have constrained capabilities. For more information, see the [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). + +Materialized views provide a powerful way to optimize query performance and simplify application logic by creating views of your data with a different partition key and/or data model. This article describes how to create materialized views and how to use them to handle cross-partition queries efficiently. + +## Prerequisites + +- An existing Azure Cosmos DB account. + - If you have an Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal). + - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit. + +## Enable materialized views + +The materialized views feature must be enabled for your Azure Cosmos DB account before you provision a builder or create views. + +### [Azure portal](#tab/azure-portal) + +1. Sign in to the [Azure portal](https://portal.azure.com/). + +1. Go to your Azure Cosmos DB for NoSQL account. + +1. In the resource menu, select **Settings**. + +1. In the **Features** section under **Settings**, toggle the **Materialized View for NoSQL API (preview)** option to **On**. + +1. In the new dialog, select **Enable** to enable this feature for the account. + +### [Azure CLI](#tab/azure-cli) + +Use the Azure CLI to enable the materialized views feature either by using a native command or a REST API operation on your Azure Cosmos DB for NoSQL account. + +1. Sign in to the Azure CLI. + + ```azurecli + az login + ``` + + > [!NOTE] + > This step requires the Azure CLI. To install it, see [How to install the Azure CLI](/cli/azure/install-azure-cli). + +1. 
Define the variables for the resource group and account name of your existing Azure Cosmos DB for NoSQL account. + + ```azurecli + # Variable for resource group name + resourceGroupName="" + + # Variable for account name + accountName="" + + # Variable for Azure subscription + subscriptionId="" + ``` + +1. Create a new JSON file named *capabilities.json* by using the capabilities manifest. + + ```json + { + "properties": { + "enableMaterializedViews": true + } + } + ``` + +1. Get the identifier of the account and store it in a shell variable named `$accountId`. + + ```azurecli + accountId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$accountName" + ``` + +1. Enable the preview materialized views feature for the account by using the REST API and [az rest](/cli/azure/reference-index#az-rest) with an HTTP `PATCH` verb. + + ```azurecli + az rest \ + --method PATCH \ + --uri "https://management.azure.com$accountId?api-version=2022-11-15-preview" \ + --body @capabilities.json + ``` + +--- + +## Create a materialized view builder + +After the materialized views feature is enabled for your account, you'll see a new page in the **Settings** section of the Azure portal for **Materialized Views Builder**. You must provision a materialized views builder before creating views in your account. The builder is responsible for automatically hydrating data in the views and keeping them in sync with source containers. Learn more about options for [provisioning the materialized views builder](./materialized-views.md#provisioning-the-materialized-views-builder). + +### [Azure portal](#tab/azure-portal) + +1. Sign in to the [Azure portal](https://portal.azure.com/). + +1. Go to your Azure Cosmos DB for NoSQL account. + +1. In the resource menu, select **Materialized Views Builder**. + +1. On the **Materialized Views Builder** page, configure the SKU and the number of instances for the builder. 
+ + > [!NOTE] > This resource menu option and page appear only when the materialized views feature is enabled for the account. + +1. Select **Save**. + +### [Azure CLI](#tab/azure-cli) + +1. Define the variables for the resource group and account name of your existing Azure Cosmos DB for NoSQL account. + + ```azurecli + # Variable for resource group name + resourceGroupName="" + + # Variable for account name + accountName="" + + # Variable for Azure subscription + subscriptionId="" + ``` + +1. Create a new JSON file named *builder.json* by using the builder manifest. Update the `instanceCount` and `instanceSize` as needed. + + ```json + { + "properties": { + "serviceType": "materializedViewsBuilder", + "instanceCount": 1, + "instanceSize": "Cosmos.D4s" + } + } + ``` + +1. Get the identifier of the account and store it in a shell variable named `$accountId`. + + ```azurecli + accountId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$accountName" + ``` + +1. Enable the materialized views builder for the account by using the REST API and `az rest` with an HTTP `PUT` verb: + + ```azurecli + az rest \ + --method PUT \ + --uri "https://management.azure.com$accountId/services/materializedViewsBuilder?api-version=2022-11-15-preview" \ + --body @builder.json + ``` + +1. Wait for a couple of minutes, and then check the status by using `az rest` again with the HTTP `GET` verb. The status in the output should now be `Running`. + + ```azurecli + az rest \ + --method GET \ + --uri "https://management.azure.com$accountId/services/materializedViewsBuilder?api-version=2022-11-15-preview" + ``` + +--- + +## Create a materialized view + +After the feature is enabled and the materialized view builder is provisioned, you can create materialized views by using the REST API. + +1. 
Use the Azure portal, the Azure SDKs, the Azure CLI, or the REST API to create a source container that has `/customerId` as the partition key path. Name this source container `mv-src`. + + > [!NOTE] + > The `/customerId` field is used only as an example in this article. For your own containers, select a partition key that works for your solution. + +1. Insert a few items in the source container. To follow the examples that are shown in this article, make sure that the items have `customerId` and `emailAddress` fields. A sample item might look like this: + + ```json + { + "id": "eaf0338e-2b61-4163-822f-7bef75bf51de", + "customerId": "36c7cc3d-1709-45c6-819f-10e5586a6cb7", + "emailAddress": "justine@contoso.com", + "name": "Justine" + } + ``` + + > [!NOTE] + > In this example, you populate the source container with sample data before adding a view. You can also create a materialized view from an empty source container. + +1. Now, create a materialized view named `mv-target` with a partition key path that is different from the source container. For this example, specify `/emailAddress` as the partition key path for the `mv-target` container. + + 1. Create a definition manifest for a materialized view and save it in a JSON file named *mv-definition.json*: + + ```json + { + "location": "North Central US", + "tags": {}, + "properties": { + "resource": { + "id": "mv-target", + "partitionKey": { + "paths": [ + "/emailAddress" + ] + }, + "materializedViewDefinition": { + "sourceCollectionId": "mv-src", + "definition": "SELECT c.customerId, c.emailAddress FROM c" + } + }, + "options": { + "throughput": 400 + } + } + } + ``` + + > [!IMPORTANT] + > In the template, notice that the partition key path is set as `/emailAddress`. The `sourceCollectionId` defines the source container for the view and the `definition` contains a query to determine the data model of the view. 
Learn more about [defining materialized views](materialized-views.md#defining-materialized-views) and the query constraints. + > + > The materialized view source container and definition query can't be changed once created. + +1. Next, make a REST API call to create the materialized view as defined in the *mv-definition.json* file. Use the Azure CLI to make the REST API call. + + 1. Create variables for the name of the materialized view and the source database name: + + ```azurecli + # This should match the resource ID you defined in your JSON file + materializedViewName="mv-target" + + # Database name for the source and view containers + databaseName="" + + # Azure Cosmos DB account name + accountName="" + + # Resource group name for your Azure Cosmos DB account + resourceGroupName="" + + # Subscription ID for your Azure Cosmos DB account + subscriptionId="" + ``` + + 1. Construct the resource ID by using these variables. + + ```azurecli + accountId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$accountName" + ``` + + 1. Make a REST API call to create the materialized view: + + ```azurecli + az rest \ + --method PUT \ + --uri "https://management.azure.com$accountId/sqlDatabases/$databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ + --body @mv-definition.json \ + --headers content-type=application/json + ``` + + 1. Check the status of the materialized view container creation by using the REST API: + + ```azurecli + az rest \ + --method GET \ + --uri "https://management.azure.com$accountId/sqlDatabases/$databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ + --headers content-type=application/json \ + --query "{mvCreateStatus: properties.Status}" + ``` + +1. After the materialized view is created, the materialized view builder automatically syncs changes with the source container. 
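Conceptually, the builder applies the view's `definition` query to each changed item and stores the result under the view's partition key. The following toy, in-memory Python sketch (an illustration only, not the service implementation or the Azure Cosmos DB SDK) shows the shape of that projection for the `mv-src`/`mv-target` example above:

```python
# Illustration only: mimics how the view definition
# "SELECT c.customerId, c.emailAddress FROM c" shapes items in mv-target.
# The real projection and sync are performed server-side by the builder.

def project_to_view(source_item):
    """Keep only the properties named in the view definition query."""
    return {
        "customerId": source_item["customerId"],
        "emailAddress": source_item["emailAddress"],
    }

source_item = {
    "id": "eaf0338e-2b61-4163-822f-7bef75bf51de",
    "customerId": "36c7cc3d-1709-45c6-819f-10e5586a6cb7",
    "emailAddress": "justine@contoso.com",
    "name": "Justine",  # not in the definition, so absent from the view
}

# mv-target is partitioned on /emailAddress instead of /customerId,
# so the projected item lands in the partition for its email address.
view_partitions = {}
view_item = project_to_view(source_item)
view_partitions.setdefault(view_item["emailAddress"], []).append(view_item)

print(sorted(view_partitions["justine@contoso.com"][0]))  # ['customerId', 'emailAddress']
```

Because only the projected properties are copied into the view, queries against the view should reference just the fields named in the definition.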
Try executing create, update, and delete operations in the source container. You'll see the same changes propagated to the materialized view container. + +## Query data from materialized views + +In this example, we have a source container partitioned on `customerId` and a view partitioned on `emailAddress`. Without the view, queries that only include the `emailAddress` would be cross-partition, but now they can be executed against the view instead to increase efficiency. + +Querying data from materialized views is similar to querying data from any other container. You can use the Azure portal, the Azure SDKs, or the REST API to query data in materialized views. + +### [.NET](#tab/dotnet) + +```csharp +Container container = client.GetDatabase("mv-db").GetContainer("mv-target"); + +FeedIterator<dynamic> myQuery = container.GetItemQueryIterator<dynamic>(new QueryDefinition("SELECT * FROM c WHERE c.emailAddress = 'justine@contoso.com'")); +``` + +### [Java](#tab/java) + +```java +CosmosAsyncDatabase database = client.getDatabase("mv-db"); +CosmosAsyncContainer container = database.getContainer("mv-target"); + +CosmosPagedFlux<MyClass> pagedFluxResponse = container.queryItems( + "SELECT * FROM c WHERE c.emailAddress = 'justine@contoso.com'", null, MyClass.class); +``` + +### [Node.js](#tab/nodejs) + +```javascript +const database = client.database("mv-db"); +const container = database.container("mv-target"); + +const querySpec = { + query: "SELECT * FROM c WHERE c.emailAddress = 'justine@contoso.com'" + }; +const { resources: items } = await container.items + .query(querySpec) + .fetchAll(); +``` + +### [Python](#tab/python) + +```python +database = client.get_database_client("mv-db") +container = database.get_container_client("mv-target") + +query = "SELECT * FROM c WHERE c.emailAddress = 'justine@contoso.com'" +items = list(container.query_items( + query=query, + partition_key="justine@contoso.com" +)) +``` + +--- + +## Next steps + +> [!div class="nextstepaction"] +> [Data modeling and partitioning](model-partition-example.md) +> [Materialized 
views overview](materialized-views.md) diff --git a/articles/cosmos-db/nosql/materialized-views.md b/articles/cosmos-db/nosql/materialized-views.md index 30ff8fd2b6..c2e173e495 100644 --- a/articles/cosmos-db/nosql/materialized-views.md +++ b/articles/cosmos-db/nosql/materialized-views.md @@ -1,311 +1,131 @@ --- title: Materialized views (preview) titleSuffix: Azure Cosmos DB for NoSQL -description: Learn how to efficiently query a base container by using predefined filters in materialized views for Azure Cosmos DB for NoSQL. Use materilaized views as global secondary indexes to avoid expensive cross-partition queries. -author: AbhinavTrips -ms.author: abtripathi +description: Materialized views are read-only containers with a persistent copy of data from a source container. They can be used to implement the Global Secondary Index pattern on Azure Cosmos DB. +author: jcocchi +ms.author: jucocchi ms.service: azure-cosmos-db ms.subservice: nosql ms.custom: build-2023, devx-track-azurecli -ms.topic: how-to -ms.date: 06/09/2023 +ms.topic: conceptual +ms.date: 12/13/2024 --- -# Materialized views for Azure Cosmos DB for NoSQL (preview) +# Azure Cosmos DB for NoSQL materialized views (preview) [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!IMPORTANT] -> The materialized view feature of Azure Cosmos DB for NoSQL is currently in preview. You can enable this feature by using the Azure portal. This preview is provided without a service-level agreement. At this time, we don't recommend that you use materialized views for production workloads. Certain features of this preview might not be supported or might have constrained capabilities. For more information, see the [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> Azure Cosmos DB for NoSQL materialized views are currently in preview. You can enable this feature by using the Azure portal. 
This preview is provided without a service-level agreement. At this time, we don't recommend that you use materialized views for production workloads. Certain features of this preview might not be supported or might have constrained capabilities. For more information, see the [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -Applications frequently are required to make queries that don't specify a partition key. In these cases, the queries might scan through all data for a small result set. The queries end up being expensive because they inadvertently run as a cross-partition query. +Materialized views are read-only containers that store a persistent copy of data from a source container. These views have their own settings, separate from the source container, such as partition key, indexing policy, Request Unit (RU) limit, and data model, which can be customized by selecting a subset of item properties. Materialized views are automatically kept in sync with the source container using change feed, managed by the materialized views builder. The materialized views builder is dedicated compute provisioned for your Azure Cosmos DB account to maintain views. -Materialized views, when defined, help provide a way to efficiently query a base container in Azure Cosmos DB by using filters that don't include the partition key. When users write to the base container, the materialized view is built automatically in the background. This view can have a different partition key for efficient lookups. The view also contains only fields that are explicitly projected from the base container. This view is a read-only table. The Azure Cosmos DB materialized views can be used as global secondary indexes to avoid expensive cross-partition queries. +## Use cases -> [!IMPORTANT] -> The materialized view feature of Azure Cosmos DB for NoSQL can be used as Global Secondary Indexes. 
Users can specify the fields that are projected from the base container to the materialized view and they can choose a different partition key for the materialized view. Choosing a different partition key based on the most common queries, helps in scoping the queries to a single logical partition and avoiding cross-partition queries.. +Applications often need to query data without specifying a partition key. These queries must be executed across all partitions, even if some partitions don't contain data that matches the filter criteria. As a result, queries that don't include the partition key consume more RUs and have higher latency. With a materialized view, you can: -- Use the view as a lookup or mapping container to persist cross-partition scans that would otherwise be expensive queries. -- Provide a SQL-based predicate (without conditions) to populate only specific fields. -- Use change feed triggers to create real-time views to simplify event-based scenarios that are commonly stored as separate containers. - -The benefits of using Azure Cosmos DB Materiliazed Views include, but aren't limited to: - -- You can implement server-side denormalization by using materialized views. With server-side denormalization, you can avoid multiple independent tables and computationally complex denormalization in client applications. -- Materialized views automatically update views to keep views consistent with the base container. This automatic update abstracts the responsibilities of your client applications that would otherwise typically implement custom logic to perform dual writes to the base container and the view. -- Materialized views optimize read performance by reading from a single view. -- You can specify throughput for the materialized view independently. -- You can configure a materialized view builder layer to map to your requirements to hydrate a view. 
-- Materialized views improve write performance (compared to a multi-container-write strategy) because write operations need to be written only to the base container. -- The Azure Cosmos DB implementation of materialized views is based on a pull model. This implementation doesn't affect write performance. -- Azure Cosmos DB materialized views for NoSQL API caters to the Global Secondary Index use cases as well. Global Secondary Indexes are also used to maintain secondary data views and help in reducing cross-partition queries. - -> [!NOTE] -> The "id" field in the materialized view is auto populated with "_rid" from source document. This is done to maintain the one-to-one relationship between materialized view and source container documents. - -## Prerequisites - -- An existing Azure Cosmos DB account. - - If you have an Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal). - - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit. - -## Enable materialized views - -Use the Azure CLI to enable the materialized views feature either by using a native command or a REST API operation on your Cosmos DB for NoSQL account. - -### [Azure portal](#tab/azure-portal) - -1. Sign in to the [Azure portal](https://portal.azure.com/). - -1. Go to your API for NOSQL account. - -1. In the resource menu, select **Settings**. - -1. In the **Features** section under **Settings**, toggle the **Materialized View for NoSQL API (Preview)** option to **On**. - -1. In the new dialog, select **Enable** to enable this feature for the account. +- Maintain a copy of data with a different partition key, allowing cross-partition queries to be retargeted to the view for more efficient lookups. +- Provide a SQL-based predicate (without conditions) to populate only specific item properties. 
+- Create real-time views to handle event-based data, which is often stored in separate containers. -### [Azure CLI](#tab/azure-cli) +### Implement the Global Secondary Index pattern -1. Sign in to the Azure CLI. +Materialized views can act as a Global Secondary Index (GSI), enabling efficient querying on properties other than the partition key of the source container. By creating a materialized view with a different partition key, you can achieve a similar effect to a GSI. Once the materialized view is created, queries that would otherwise be cross-partition can be retargeted to the view container, leading to reduced RU consumption and reduced latency. - ```azurecli - az login - ``` +## Materialized views features - > [!NOTE] - > If you need to first install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli). +Azure Cosmos DB materialized views offer the following features: -1. Define the variables for the resource group and account name for your existing API for NoSQL account. +- Automatic Syncing: Views are automatically synced with the source container, eliminating the need for custom logic in client applications. +- Eventual Consistency: Views are eventually consistent with the source container regardless of the [consistency level](../consistency-levels.md) set for the account. +- Performance Isolation: View containers have their own storage and RU limits, providing performance isolation. +- Optimized Read Performance: Fine-tuned data model, partition key, and indexing policy for optimized read performance. +- Improved Write Performance: Clients only need to write to the source container, improving write performance compared to a multi-container-write strategy. +- Read-Only Containers: Writes to the view are asynchronous and managed by the materialized view builder. Client applications can't write directly to views. +- Multiple Views: You can create multiple views for the same source container without extra overhead. 
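The Global Secondary Index pattern above can be sketched with a toy in-memory model (plain Python with hypothetical data, not the Azure Cosmos DB SDK) that counts how many partitions a query has to consult with and without a view:

```python
# Toy model: a "container" is a dict of partition-key value -> items.
# A filter on a non-partition-key property must consult every partition;
# the same filter retargeted to a view keyed on that property touches one.

def query(container, key_value=None, predicate=None):
    """Return (matching items, number of partitions consulted)."""
    if key_value is not None:                # single-partition lookup
        return list(container.get(key_value, [])), 1
    matches = []                             # cross-partition scan
    for items in container.values():
        matches.extend(i for i in items if predicate(i))
    return matches, len(container)

# Source container partitioned on customerId.
source = {
    "c1": [{"customerId": "c1", "emailAddress": "justine@contoso.com"}],
    "c2": [{"customerId": "c2", "emailAddress": "han@contoso.com"}],
    "c3": [{"customerId": "c3", "emailAddress": "mia@contoso.com"}],
}
# A view over the same data, partitioned on emailAddress.
view = {i["emailAddress"]: [i] for items in source.values() for i in items}

_, scanned = query(source, predicate=lambda i: i["emailAddress"] == "han@contoso.com")
print(scanned)  # 3: every source partition is consulted

_, scanned = query(view, key_value="han@contoso.com")
print(scanned)  # 1: only the matching view partition is consulted
```

In the real service, the saving shows up as lower RU consumption and latency for the retargeted query rather than a visible partition count.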
- ```azurecli - # Variable for resource group name - resourceGroupName="" - - # Variable for account name - accountName="" - - # Variable for Azure subscription - subscriptionId="" - ``` +## Defining materialized views -1. Create a new JSON file named *capabilities.json* by using the capabilities manifest. +Creating a materialized view is similar to creating a new container, with requirements to specify the source container and a query defining the view. Each item in the materialized view has a one-to-one mapping to an item in the source container. To maintain this mapping, the `id` field in materialized view items is auto populated. The value of `id` from the source collection is represented as `_id` in the view. - ```json - { - "properties": { - "enableMaterializedViews": true - } - } - ``` +The query used to define a materialized view must adhere to the following constraints: + - The SELECT statement allows projection of only one level of properties in the JSON tree, or it can be SELECT * to include all properties. + - Aliasing property names using AS isn’t supported. + - Queries can’t include a WHERE clause or other clauses such as JOIN, DISTINCT, GROUP BY, ORDER BY, TOP, OFFSET LIMIT, and EXISTS. + - System functions and user-defined functions (UDFs) aren't supported. -1. Get the identifier of the account and store it in a shell variable named `$accountId`. - - ```azurecli - accountId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$accountName" - ``` - -1. Enable the preview materialized views feature for the account by using the REST API and [az rest](/cli/azure/reference-index#az-rest) with an HTTP `PATCH` verb. 
- - ```azurecli - az rest \ - --method PATCH \ - --uri "https://management.azure.com/$accountId?api-version=2022-11-15-preview" \ - --body @capabilities.json - ``` - ---- - -## Create a materialized view builder - -Create a materialized view builder to automatically transform data and write to a materialized view. - -### [Azure portal](#tab/azure-portal) + For example, a valid query could be: `SELECT c.userName, c.emailAddress FROM c`, which selects the `userName` and `emailAddress` properties from the source container `c`. This query defines the structure of the materialized view, determining which properties are included in the view. The materialized view source container and definition query can't be changed once created. + + [Learn how to create materialized views.](how-to-configure-materialized-views.md#create-a-materialized-view) + +> [!NOTE] +> Once views are created, if you want to delete the source container, you must first delete all materialized views that are created for it. -1. Sign in to the [Azure portal](https://portal.azure.com/). +## Provisioning the materialized views builder -1. Go to your API for NoSQL account. +The materialized views builder is a dedicated compute layer provisioned for your Azure Cosmos DB account that automatically maintains views defined for source containers. The builder reads from the [change feed](../change-feed.md) of the source container and writes changes to the materialized views according to the view definition, keeping them in sync. Updating views is asynchronous and doesn't affect writes to the source container. Updates to the views are eventually consistent with the source container regardless of the consistency level set for the account. -1. In the resource menu, select **Materialized Views Builder**. +You must provision a materialized views builder for your Azure Cosmos DB account for views to begin populating. 
The amount of compute provisioned in the builder, including the SKU and the number of nodes, as well as the RUs provisioned on the view container, determines how quickly views are hydrated and synced. The builder can have up to five nodes by default and you can add or remove nodes at any time. Scaling the number of nodes up and down helps control the rate at which views are built. -1. On the **Materialized Views Builder** page, configure the SKU and the number of instances for the builder. +The materialized views builder is available in the following sizes: - > [!NOTE] - > This resource menu option and page appear only when the materialized views feature is enabled for the account. +| **SKU name** | **vCPU** | **Memory** | +| ------------ | -------- | ----------- | +| **D2s** | **2** | **8 GB** | +| **D4s** | **4** | **16 GB** | +| **D8s** | **8** | **32 GB** | +| **D16s** | **16** | **64 GB** | -1. Select **Save**. +> [!TIP] +> Once created, you can add or remove builder nodes, but you can't modify the size of the nodes. To change the size of your materialized view builder nodes, deprovision the builder and provision it again in a different size. Views don't need to be re-created and will catch up to the source once the builder is provisioned again. -### [Azure CLI](#tab/azure-cli) +### Materialized views builders in multi-region accounts -1. Create a new JSON file named *builder.json* by using the builder manifest: +For Azure Cosmos DB accounts with a single region, the materialized views builder is provisioned in that region. In a multi-region account with a single write region, the builder is provisioned in the write region and reads change feed from there. In an account with multiple write regions, the builder is provisioned in one of the write regions and reads change feed from the same region it's provisioned in. 
- ```json - { - "properties": { - "serviceType": "materializedViewsBuilder", - "instanceCount": 1, - "instanceSize": "Cosmos.D4s" - } - } - ``` +[Learn how to provision the materialized views builder.](how-to-configure-materialized-views.md#create-a-materialized-view-builder) -1. Enable the materialized views builder for the account by using the REST API and `az rest` with an HTTP `PUT` verb: +> [!IMPORTANT] +> In the event of a failover for your account, the materialized views builder is deprovisioned and re-provisioned in the new write region. +> +> Manual failovers (change write region operation) are graceful operations, and views are guaranteed to be consistent with the source. However, service managed failovers are not guaranteed to be graceful and can result in inconsistencies between the source and view containers. In such cases, it's recommended to re-build the view containers and fall back to executing cross-partition queries on the source container until the view is updated. +> +> Learn more about [service managed failover.](/azure/reliability/reliability-cosmos-db-nosql#service-managed-failover) - ```azurecli - az rest \ - --method PUT \ - --uri "https://management.azure.com$accountId/services/materializedViewsBuilder?api-version=2022-11-15-preview" \ - --body @builder.json - ``` +## Monitoring -1. Wait for a couple of minutes, and then check the status by using `az rest` again with the HTTP `GET` verb. The status in the output should now be `Running`. +You can monitor the lag in building views and the health of the materialized views builder through Metrics in the Azure portal. To learn about these metrics, see [Supported metrics for Microsoft.DocumentDB/DatabaseAccounts](../monitor-reference.md#supported-metrics-for-microsoftdocumentdbdatabaseaccounts). 
- ```azurecli - az rest \ - --method GET \ - --uri "https://management.azure.com$accountId/services/materializedViewsBuilder?api-version=2022-11-15-preview" - ``` +:::image type="content" source="./media/materialized-views/materialized-views-metrics.png" alt-text="Screenshot of the Materialized Views Builder Average CPU Usage metric in the Azure portal." ::: ---- +### Troubleshooting common issues -Azure Cosmos DB For NoSQL uses a materialized view builder compute layer to maintain the views. - -You have the flexibility of configuring the view builder's compute instances based on your latency and lag requirements to hydrate the views. From a technical standpoint, this compute layer helps you manage connections between partitions in a more efficient manner, even when the data size is large and the number of partitions is high. - -The compute containers are shared among all materialized views within an Azure Cosmos DB account. Each provisioned compute container initiates multiple tasks that read the change feed from the base container partitions and write data to the target materialized view or views. The compute container transforms the data per the materialized view definition for each materialized view in the account. - -## Create a materialized view - -After your account and the materialized view builder are set up, you should be able to create materialized views by using the REST API. - -### [Azure portal / Azure CLI](#tab/azure-portal+azure-cli) - -1. Use the Azure portal, the Azure SDK, the Azure CLI, or the REST API to create a source container that has `/accountId` as the partition key path. Name this source container `mv-src. - - > [!NOTE] - > The `/accountId` field is used only as an example in this article. For your own containers, select a partition key that works for your solution. - -1. Insert a few items in the source container. 
To follow the examples that are shown in this article, make sure that the items have `accountId`, `fullName`, and `emailAddress` fields. A sample item might look like this example: - - ```json - { - "accountId": "prikrylova-libuse", - "emailAddress": "libpri@contoso.com", - "name": { - "first": "Libuse", - "last": "Prikrylova" - } - } - ``` - - > [!NOTE] - > In this example, you populate the source container with sample data. You can also create a materialized view from an empty source container. - -1. Now, create a materialized view named `mv-target` with a partition key path that is different from the source container. For this example, specify `/emailAddress` as the partition key path for the `mv-target` container. - - 1. First, create a definition manifest for a materialized view and save it in a JSON file named *definition.json*: - - ```json - { - "location": "North Central US", - "tags": {}, - "properties": { - "resource": { - "id": "mv-target", - "partitionKey": { - "paths": [ - "/emailAddress" - ], - "kind": "Hash" - }, - "materializedViewDefinition": { - "sourceCollectionId": "mv-src", - "definition": "SELECT s.accountId, s.emailAddress FROM s" - } - }, - "options": { - "throughput": 400 - } - } - } - ``` - - > [!NOTE] - > In the template, notice that the partitionKey path is set as `/emailAddress`. We also have more parameters to specify the source collection and the definition to populate the materialized view. - -1. Now, make a REST API call to create the materialized view as defined in the *mv_definition.json* file. Use the Azure CLI to make the REST API call. - - 1. Create a variable for the name of the materialized view and source database name: - - ```azurecli - materializedViewName="mv-target" - - # Variable for database name used in later section - databaseName="" - ``` - - 1. If you haven't already, get the identifier of the account and store it in a shell variable named `$accountId`. 
- - ```azurecli - accountId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$accountName" - ``` - - 1. Make a REST API call to create the materialized view: - - ```azurecli - az rest \ - --method PUT \ - --uri "https://management.azure.com$accountId/sqlDatabases/ - $databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ - --body @definition.json \ - --headers content-type=application/json - ``` - - 1. Check the status of the materialized view container creation by using the REST API: - - ```azurecli - az rest \ - --method GET \ - --uri "https://management.azure.com$accountId/sqlDatabases/ - $databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ - --headers content-type=application/json \ - --query "{mvCreateStatus: properties.Status}" - ``` +#### I want to understand the lag between my source container and views ---- +The **MaterializedViewCatchupGapInMinutes** metric shows the maximum difference in minutes between data in a source container and a view. While there can be multiple views created in a single account, this metric exposes the highest lag among all views. A high value indicates the builder needs more compute to keep up with the volume of changes to source containers. The RUs provisioned on source and view containers can also affect the rate at which changes are propagated to the view. Check the **Total Requests** metric and split by **StatusCode** to determine if there are throttled requests on these containers. Throttled requests have status code 429. -After the materialized view is created, the materialized view container automatically syncs changes with the source container. Try executing create, read, update, and delete (CRUD) operations in the source container. You'll see the same changes in the materialized view container. 
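+The triage steps above can be sketched as a small script. This is an illustrative sketch only: the metric names and the 429 status code come from this article, but the `diagnose_lag` helper and its sample inputs are hypothetical.
+
+```python
+# Illustrative sketch of the triage described above (hypothetical helper, not a supported tool).
+# Inputs mimic two portal metrics: the per-view catch-up gap and Total Requests split by StatusCode.
+
+def diagnose_lag(view_gaps_minutes, requests_by_status):
+    """Suggest a next step from the lag and throttling signals described above."""
+    # MaterializedViewCatchupGapInMinutes reports the highest lag among all views.
+    reported_gap = max(view_gaps_minutes.values())
+    # Status code 429 indicates throttled (rate-limited) requests.
+    throttled = requests_by_status.get(429, 0)
+    if throttled > 0:
+        return reported_gap, "Increase RUs on the source or view containers (429s observed)."
+    if reported_gap > 0:
+        return reported_gap, "Add compute to the materialized views builder."
+    return reported_gap, "Builder is keeping up."
+
+gap, advice = diagnose_lag({"mv-by-email": 12, "mv-by-region": 3},
+                           {200: 950, 429: 50})
+print(gap, advice)
+```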
+#### I want to understand if my materialized views builder has the right number of nodes -> [!NOTE] -> Materialized view containers are read-only containers for users. The containers can be automatically modified only by a materialized view builder. +The **MaterializedViewsBuilderAverageCPUUsage** and **MaterializedViewsBuilderAverageMemoryUsage** metrics show the average CPU usage and memory consumption across all nodes in the builder. If these metrics are too high, add nodes to scale up the cluster. If these metrics show under-utilization of CPU and memory, remove nodes by scaling down the cluster. For optimal performance, CPU usage should be no higher than 70 percent. -## Current limitations +## Limitations There are a few limitations with the Azure Cosmos DB for NoSQL API materialized view feature while it is in preview: -- `WHERE` clauses aren't supported in the materialized view definition. -- You can project only the source container item's JSON `object` property list in the materialized view definition. Currently, the list can contain only one level of properties in the JSON tree. -- In the materialized view definition, aliases aren't supported for fields of documents. -- We recommend that you create a materialized view when the source container is still empty or has only a few items. -- Restoring a container from a backup doesn't restore materialized views. You must re-create the materialized views after the restore process is finished. -- You must delete all materialized views that are defined on a specific source container before you delete the source container. +- Role-based access control isn't supported for materialized views. +- Materialized views can't be enabled on accounts that have partition merge, analytical store, or continuous backups. - Point-in-time restore, hierarchical partitioning, and end-to-end encryption aren't supported on source containers that have materialized views associated with them. 
-- Role-based access control is currently not supported for materialized views. - Cross-tenant customer-managed key (CMK) encryption isn't supported on materialized views. -- Currently, this feature can't be enabled if any of the following features are enabled: partition merge, analytical store, or continuous backup. - -Note the additional following limitations: - - Availability zones - Materialized views can't be enabled on an account that has availability zone-enabled regions. - - Adding a new region with an availability zone isn't supported after `enableMaterializedViews` is set to `true` on the account. + - Adding a new region with an availability zone isn't supported after materialized views are enabled on an account. - Periodic backup and restore - - Materialized views aren't automatically restored by using the restore process. You must re-create the materialized views after the restore process is finished. Then, you should configure `enableMaterializedViews` on the restored account before you create the materialized views and builders again. + - Materialized views aren't automatically restored during the restore process. You must enable the materialized views feature on the restored account after the restore process is finished. Then, you can create the materialized views and builder again. 
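+Although view containers are read-only for users, it can help to picture the transformation the builder applies. The following sketch is a purely local illustration, assuming the example definition `SELECT s.accountId, s.emailAddress FROM s` and a view partition key of `/emailAddress` from the how-to article; it isn't SDK code.
+
+```python
+# Local illustration of the transformation a materialized view applies:
+# project each source item per the view definition and re-key the result by a
+# different partition key. Field names mirror the how-to article's example.
+
+def apply_view_definition(source_items):
+    """Project accountId/emailAddress, partitioned by /emailAddress instead of /accountId."""
+    view = {}
+    for item in source_items:
+        projected = {"accountId": item["accountId"], "emailAddress": item["emailAddress"]}
+        # The view's logical partition key is /emailAddress.
+        view.setdefault(projected["emailAddress"], []).append(projected)
+    return view
+
+source = [
+    {"accountId": "prikrylova-libuse", "emailAddress": "libpri@contoso.com",
+     "name": {"first": "Libuse", "last": "Prikrylova"}},
+]
+print(apply_view_definition(source))
+```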
## Next steps > [!div class="nextstepaction"] > [Data modeling and partitioning](model-partition-example.md) +> [Learn how to configure materialized views](how-to-configure-materialized-views.md) diff --git a/articles/cosmos-db/nosql/media/materialized-views/materialized-views-metrics.png b/articles/cosmos-db/nosql/media/materialized-views/materialized-views-metrics.png new file mode 100644 index 0000000000..17ff309992 Binary files /dev/null and b/articles/cosmos-db/nosql/media/materialized-views/materialized-views-metrics.png differ diff --git a/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md b/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md index 09d9e9393f..3afa08ad35 100644 --- a/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md +++ b/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md @@ -28,10 +28,6 @@ You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Sp ## Version support policy -### Spring Boot version support - -This project supports multiple Spring Boot Versions. Visit [spring boot support policy](https://github.com/Azure/-sdk-for-java/tree/main/sdk/spring/-spring-data-cosmos#spring-boot-support-policy) for more information. Maven users can inherit from the `spring-boot-starter-parent` project to obtain a dependency management section to let Spring manage the versions for dependencies. Visit [spring boot version support](https://github.com/Azure/-sdk-for-java/tree/main/sdk/spring/-spring-data-cosmos#spring-boot-version-support) for more information. - ### Spring Data version support This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-data-cosmos#spring-data-version-support) for more information. 
diff --git a/articles/postgresql/flexible-server/concepts-compute.md b/articles/postgresql/flexible-server/concepts-compute.md index 1275126048..f05aa5e271 100644 --- a/articles/postgresql/flexible-server/concepts-compute.md +++ b/articles/postgresql/flexible-server/concepts-compute.md @@ -4,7 +4,7 @@ description: This article describes the compute options in Azure Database for Po author: kabharati ms.author: kabharati ms.reviewer: maghan -ms.date: 12/16/2024 +ms.date: 05/01/2024 ms.service: azure-database-postgresql ms.subservice: flexible-server ms.topic: conceptual @@ -24,18 +24,18 @@ You can create an Azure Database for PostgreSQL flexible server instance in one | vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64, 96 | 2, 4, 8, 16, 20 (v4/v5), 32, 48, 64, 96 | | Memory per vCore | Variable | 4 GiB | 6.75 GiB to 8 GiB | | Storage size | 32 GiB to 64 TiB | 32 GiB to 64 TiB | 32 GiB to 64 TiB | -| Automated database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days | -| Long term database backup retention period | up to 10 years | up to 10 years | up to 10 years | +| Automated Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days | +| Long term Database backup retention period | up to 10 years | up to 10 years | up to 10 years | -To help you choose the pricing tier that better adjusts to your needs, use the guidelines provided in the following table as a starting point: +To choose a pricing tier, use the following table as a starting point: | Pricing tier | Target workloads | | :--- | :--- | | Burstable | Workloads that don't need the full CPU continuously. | -| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile applications and other enterprise applications. 
|
-| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical applications. |
+| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications. |
+| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps. |
 
-After you create a server for the compute tier, you can change the number of vCores (up or down) and the storage size (up) in seconds. You can also increase or decrease the backup retention period independently. For more information, see [scaling resources in Azure Database for PostgreSQL - Flexible Server](concepts-scaling-resources.md).
+After you create a server for the compute tier, you can change the number of vCores (up or down) and the storage size (up) in seconds. You can also adjust the backup retention period up or down independently. For more information, see the [Scaling resources in Azure Database for PostgreSQL flexible server](concepts-scaling-resources.md) page.
 
 ## Compute tiers, vCores, and server types
 
@@ -88,23 +88,9 @@ The detailed specifications of the available server types are as follows:
 | E64ds_v5 / E64ads_v4 | 64 | 512 GiB | 80,000 | 1200 MiB/sec |
 | E96ds_v5 /E96ads_v5 | 96 | 672 GiB | 80,000 | 1200 MiB/sec |
 > [!IMPORTANT]
+> Minimum and maximum IOPS are also determined by the storage tier, so choose a storage tier and instance type that can scale according to your workload requirements.
 
-## Price
-
-For the most up-to-date pricing information, see [Azure Database for PostgreSQL - Flexible Server pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
-
-[Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) also shows you an estimation of the monthly costs of a server configuration, based on the options selected.
-
-That estimation can be seen throughout the server creation experience, in the **New Azure Database for PostgreSQL Flexible server** page:
-
-:::image type="content" source="./media/concepts-compute/new-server-estimated-costs.png" alt-text="Screenshot that shows the estimated monthly costs in the New Azure Database for PostgreSQL Flexible server wizard." lightbox="./media/concepts-compute/new-server-estimated-costs.png":::
-
-It can also be seen for existing servers if, in the resource menu of an existing instance, under the **Settings** section, you select **Compute + storage**:
-
-:::image type="content" source="./media/concepts-compute/existing-server-estimated-costs.png" alt-text="Screenshot that shows the estimated monthly costs in the Compute + storage page of an existing Azure Database for PostgreSQL flexible server instance." lightbox="./media/concepts-compute/existing-server-estimated-costs.png":::
-
-If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. In the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Databases** category, and then select **Azure Database for PostgreSQL** to add the service to your estimate and then customize the options.
+[!INCLUDE [pricing](./includes/compute-storage-pricing.md)]
 
 [Share your suggestions and bugs with the Azure Database for PostgreSQL product team](https://aka.ms/pgfeedback).
diff --git a/articles/postgresql/flexible-server/concepts-storage.md b/articles/postgresql/flexible-server/concepts-storage.md index 07f02062ca..dd0dd4c38a 100644 --- a/articles/postgresql/flexible-server/concepts-storage.md +++ b/articles/postgresql/flexible-server/concepts-storage.md @@ -4,7 +4,7 @@ description: This article describes the storage options in Azure Database for Po author: kabharati ms.author: kabharati ms.reviewer: maghan -ms.date: 11/19/2024 +ms.date: 12/16/2024 ms.service: azure-database-postgresql ms.subservice: flexible-server ms.topic: conceptual @@ -14,58 +14,57 @@ ms.topic: conceptual [!INCLUDE [applies-to-postgresql-flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)] -You can create an Azure Database for PostgreSQL flexible server instance using Azure managed disks, which are block-level storage volumes managed by Azure and used with Azure Virtual Machines. Managed disks are like a physical disk in an on-premises server but, virtualized. With managed disks, all you have to do is specify the disk size, the disk type, and provision the disk. Once you provision the disk, Azure handles the rest. Azure Database for PostgreSQL Flexible Server supports premium solid-state drives (SSD) and Premium SSD v2 and the pricing is calculated based on the compute, memory, and storage tier you provision. +You can create an Azure Database for PostgreSQL flexible server instance using [Azure managed disks](/azure/virtual-machines/managed-disks-overview), which are block-level storage volumes managed by Azure and used with Azure Virtual Machines. Managed disks are like a physical disk in an on-premises server, but they're virtualized. With managed disks, all you have to do is specify the disk size, the disk type, and provision the disk. Once you provision the disk, Azure handles the rest. 
Azure Database for PostgreSQL - Flexible Server supports premium solid-state drives (Premium SSD) and premium solid-state drives version 2 (Premium SSD v2), and the pricing is calculated based on the compute, memory, and storage tier you provision.
 
 ## Premium SSD
 
-Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs support the 512E sector size.
+Azure Premium SSD disks deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. Premium SSD disks are suitable for mission-critical production applications.
 
 ## Premium SSD v2 (preview)
 
-Premium SSD v2 offers higher performance than Premium SSDs while also generally being less costly. You can individually tweak the performance (capacity, throughput, and IOPS(input/output operations per second)) of Premium SSD v2 disks at any time, allowing workloads to be cost-efficient while meeting shifting performance needs. For example, a transaction-intensive database might need a large amount of IOPS at a small size, or a gaming application might need a large amount of IOPS but only during peak hours. Hence, for most general-purpose workloads, Premium SSD v2 can provide the best price performance. You can now deploy Azure Database for PostgreSQL flexible server instances with Premium SSD v2 disk in all supported regions.
+Premium SSD v2 offers higher performance than Premium SSD, while also being less costly, as a general rule. You can individually tweak the performance (capacity, throughput, and input/output operations per second, referred to as IOPS) of Premium SSD v2 at any time. The ability to make these adjustments allows workloads to remain cost-efficient while meeting shifting performance needs.
For example, a transaction-intensive database might need to cope with a large number of IOPS for a couple of exceptionally high-demand days, or a gaming application might demand higher throughput during peak hours only. Hence, for most general-purpose workloads, Premium SSD v2 can provide the best price for performance.
 
 > [!NOTE]
-> Premium SSD v2 is currently in preview for Azure Database for PostgreSQL flexible server.
+> Premium SSD v2 is currently in preview for Azure Database for PostgreSQL - Flexible Server.
 
 ### Differences between Premium SSD and Premium SSD v2
 
-Unlike Premium SSDs, Premium SSD v2 doesn't have dedicated sizes. You can set a Premium SSD v2 to any supported size you prefer, and make granular adjustments (1-GiB increments) as per your workload requirements. Premium SSD v2 doesn't support host caching but still provides lower latency than Premium SSD. Premium SSD v2 capacities range from 1 GiB to 64 TiBs.
+Unlike Premium SSD, Premium SSD v2 doesn't have dedicated sizes. You can set a Premium SSD v2 disk to any size you prefer, and make granular adjustments as per your workload requirements. Those granular increments can go in steps of 1 GiB. Premium SSD v2 doesn't support host caching, but still provides lower latency than Premium SSD. Premium SSD v2 capacities range from 1 GiB to 64 TiB.
 
-The following table provides a comparison of the five disk types to help you decide which one to use.
+The following table provides a comparison of different aspects of the disk types supported by Azure Database for PostgreSQL - Flexible Server, to help you decide which one best suits your needs.
| | Premium SSD v2 | Premium SSD |
 | --- | --- | --- |
 | **Disk type** | SSD | SSD |
-| **Scenario** | Production and performance-sensitive workloads that consistently require low latency and high IOPS and throughput | Production and performance-sensitive workloads |
+| **Scenario** | Production and performance-sensitive workloads that consistently require low latency and high IOPS and throughput. | Production and performance-sensitive workloads. |
 | **Max disk size** | 65,536 GiB | 32,767 GiB |
 | **Max throughput** | 1,200 MB/s | 900 MB/s |
 | **Max IOPS** | 80,000 | 20,000 |
-| **Usable as OS Disk?** | No | Yes |
 
-Premium SSD v2 offers up to 32 TiBs per region per subscription by default, but supports higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
+Premium SSD v2 offers up to 32 TiB per region per subscription by default, but supports higher capacity by request. To request an increase in capacity, request a quota increase or contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
 
-#### Premium SSD v2 IOPS
+#### Premium SSD v2 - IOPS
 
-All Premium SSD v2 disks have a baseline of 3000 IOPS that is free of charge. After 6 GiB, the maximum IOPS a disk can have increases at a rate of 500 per GiB, up to 80,000 IOPS. So, an 8-GiB disk can have up to 4,000 IOPS, and a 10 GiB can have up to 5,000 IOPS. To be able to set 80,000 IOPS on a disk, that disk must have at least 160 GiBs. Increasing your IOPS beyond 3,000 increases the price of your disk.
+All Premium SSD v2 disks have a baseline of 3,000 IOPS that is free of charge. After 6 GiB, the maximum IOPS a disk can have increases at a rate of 500 per GiB, up to 80,000 IOPS. So, a disk of 8 GiB can have up to 4,000 IOPS, and a disk of 10 GiB can have up to 5,000 IOPS. To be able to set 80,000 IOPS on a disk, that disk must have at least 160 GiB.
 Increasing your IOPS beyond 3,000 increases the price of your disk.
 
-#### Premium SSD v2 throughput
+#### Premium SSD v2 - Throughput
 
-All Premium SSD v2 disks have a baseline throughput of 125 MB/s that is free of charge. After 6 GiB, the maximum throughput that can be set increases by 0.25 MB/s per set IOPS. If a disk has 3,000 IOPS, the maximum throughput it can set is 750 MB/s. To raise the throughput for this disk beyond 750 MB/s, its IOPS must be increased. For example, if you increase the IOPS to 4,000, then the maximum throughput that can be set is 1,000. 1,200 MB/s is the maximum throughput supported for disks that have 5,000 IOPS or more. Increasing your throughput beyond 125 increases the price of your disk.
+All Premium SSD v2 disks have a baseline throughput of 125 MB/s that is free of charge. After 6 GiB, the maximum throughput that can be set increases by 0.25 MB/s per set IOPS. If a disk has 3,000 IOPS, the maximum throughput it can be set to is 750 MB/s. To raise the throughput for this disk beyond 750 MB/s, its IOPS must be increased. For example, if you increase the IOPS to 4,000, then the maximum throughput that can be set is 1,000 MB/s. 1,200 MB/s is the maximum throughput supported for disks that have 5,000 IOPS or more. Increasing your throughput beyond 125 MB/s increases the price of your disk.
 
 > [!NOTE]
-> Premium SSD v2 is currently in preview for Azure Database for PostgreSQL flexible server.
+> Premium SSD v2 is currently in preview for Azure Database for PostgreSQL - Flexible Server.
 
-#### Premium SSD v2 early preview limitations
+#### Premium SSD v2 - Limitations during preview
 
-- During the preview, features like High Availability, Read Replicas, Geo Redundant Backups, Customer Managed Keys, or Storage Autogrow features aren't supported for PV2.
+- The [high availability](/azure/reliability/reliability-postgresql-flexible-server), [read replicas](concepts-read-replicas.md), [geographically redundant backups](concepts-geo-disaster-recovery.md), [data encryption with customer managed keys](concepts-data-encryption.md), and [storage autogrowth](#limitations-and-considerations-of-storage-autogrowth) features aren't supported for Premium SSD v2.
 
-- During the preview, online migration from PV1 to PV2 isn't supported. Customers can perform PITR (Point-In-Time-Restore) to migrate from PV1 to PV2.
+- Online migration from Premium SSD (PV1) to Premium SSD v2 (PV2) isn't supported. As an alternative, you can perform a [point-in-time-restore](concepts-backup-restore.md#point-in-time-recovery) of your existing server to a new one that is provisioned with a different storage type.
 
-- During the preview, you can enable Premium SSD V2 only for newly created servers. Enabling Premium SSD V2 on existing servers is currently not supported.
+- Premium SSD v2 can only be enabled for newly created servers. Enabling Premium SSD v2 on existing servers isn't supported.
 
-The storage that you provision is the amount of storage capacity available to your Azure Database for PostgreSQL server. The storage is used for the database files, temporary files, transaction logs, and PostgreSQL server logs. The total amount of storage that you provision also defines the I/O capacity available to your server.
+The storage that you provision is the amount of storage capacity available to your Azure Database for PostgreSQL flexible server instance. This storage is used for database files, temporary files, transaction logs, and PostgreSQL server logs. The total amount of storage that you provision also defines the I/O capacity available to your server.
-| Disk size | Premium SSD IOPS | Premium SSD V2 IOPS |
+| Disk size | Premium SSD IOPS | Premium SSD v2 IOPS |
 | :--- | :--- | :--- |
 | 32 GiB | Provisioned 120; up to 3,500 | First 3000 IOPS free can scale up to 17179 |
 | 64 GiB | Provisioned 240; up to 3,500 | First 3000 IOPS free can scale up to 34359 |
@@ -80,48 +79,50 @@ The storage that you provision is the amount of storage capacity available to yo
 | 32 TiB | 20,000 | First 3000 IOPS free can scale up to 80000 |
 | 64 TiB | N/A | First 3000 IOPS free can scale up to 80000 |
 
-The following table provides an overview of premium SSD V2 disk capacities and performance maximums to help you decide which to use.
+The following table provides an overview of Premium SSD v2 disk capacities and performance maximums to help you decide which one you should use.
 
-| SSD v2 Disk size | Maximum available IOPS | Maximum available throughput (MB/s) |
+| SSD v2 disk size | Maximum available IOPS | Maximum available throughput (MB/s) |
 | :--- | :--- | :--- |
 | 1 GiB-64 TiB | 3,000-80,000 (Increases by 500 IOPS per GiB) | 125-1,200 (increases by 0.25 MB/s per set IOPS) |
 
-Your VM type also has IOPS limits. Even though you can select any storage size independently from the server type, you might not be able to use all IOPS that the storage provides, especially when you choose a server with a few vCores.
-You can learn more about flexible server [Compute options in Azure Database for PostgreSQL - Flexible Server](concepts-compute.md).
+Your virtual machine type also has IOPS limits. Although you can select any storage size independently from the server type, you might not be able to use all IOPS that the storage provides, especially when you choose a server with a few vCores.
+To learn more, see [compute options in Azure Database for PostgreSQL - Flexible Server](concepts-compute.md).
 
 > [!NOTE]
-> Storage can only be scaled up, not down.
+> Regardless of the type of storage you assign to your instance, storage can only be scaled up, not down. -You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and I/O percentage](concepts-monitoring.md). +You can monitor your I/O consumption in the [Azure portal](https://portal.azure.com/), or by using [Azure CLI commands](/cli/azure/monitor/metrics). The relevant metrics to monitor are [storage limit, storage percentage, storage used, and I/O percentage](concepts-monitoring.md). -### Reach Storage Limits +### Disk full conditions -When you reach the storage limit, the server starts returning errors and prevents any further modifications. Reaching the limit might also cause problems with other operational activities, such as backups and write-ahead log (WAL) archiving. -To avoid this situation, the server is automatically switched to read-only mode when the storage usage reaches 95 percent or when the available capacity is less than 5 GiB. You can use storage autogrow feature to avoid this issue with Premium SSD disk. +When your disk becomes full, the server starts returning errors and prevents any further modifications. Reaching the limit might also cause problems with other operational activities, such as backups and write-ahead log (WAL) archiving. -We recommend that you actively monitor the disk space that's in use and increase the disk size before you run out of storage. You can set up an alert to notify you when your server storage is approaching an out-of-disk state. For more information, see [Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server](how-to-alert-on-metrics.md). +To avoid this situation, the server is automatically switched to read-only mode when the storage usage reaches 95 percent, or when the available capacity is less than 5 GiB. 
 If you're using the Premium SSD storage type, you can use the [storage autogrow](#storage-autogrow-premium-ssd) feature to prevent this issue from occurring.
+
+We recommend that you actively monitor the disk space that's in use, and increase the disk size before you run out of available space in your storage. You can set up an alert to notify you when your server storage is approaching an out-of-disk state. For more information, see how to [use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server](how-to-alert-on-metrics.md).
 
 ### Storage autogrow (Premium SSD)
 
-Storage autogrow can help ensure that your server always has enough storage capacity and doesn't become read-only. When you turn on storage autogrow, disk size increases without affecting the workload. Storage Autogrow is only supported for Premium SSD storage tier. Premium SSD v2 doesn't support storage autogrow.
+Storage autogrow can help ensure that your server always has enough free space available, and doesn't become read-only. When you turn on storage autogrow, disk size increases without affecting the workload. Storage autogrow is only supported for the Premium SSD storage tier.
 
-For servers with more than 1 TiB of provisioned storage, the storage autogrow mechanism activates when the available space falls to less than 10% of the total capacity or 64 GiB of free space, whichever of the two values are smaller. Conversely, for servers with storage under 1 TiB, this threshold is adjusted to 20% of the available free space or 64 GiB, depending on which of these values is smaller.
+For servers with more than 1 TiB of provisioned storage, the storage autogrow mechanism activates when the available space falls below 10% of the total capacity or 64 GiB, whichever of the two values is smaller. Conversely, for servers with storage under 1 TiB, this threshold is adjusted to 20% of the available free space or 64 GiB, depending on which of these values is smaller.
-As an illustration, take a server with a storage capacity of 2 TiB (greater than 1 TiB). In this case, the autogrow limit is set at 64 GiB. This choice is made because 64 GiB is the smaller value when compared to 10% of 2 TiB, which is roughly 204.8 GiB. In contrast, for a server with a storage size of 128 GiB (less than 1 TiB), the autogrow feature activates when there's only 25.8 GiB of space left. This activation is based on the 20% threshold of the total allocated storage (128 GiB), which is smaller than 64 GiB.
+As an illustrative example, let's consider a server with a storage capacity of 2 TiB (which is greater than 1 TiB). In this case, the autogrow limit is set at 64 GiB. This choice is made because 64 GiB is the smaller value when compared to 10% of 2 TiB, which is roughly 204.8 GiB. In contrast, for a server with a storage size of 128 GiB (which is smaller than 1 TiB), the autogrow feature activates when there's only 25.6 GiB of space left. This activation is based on the 20% threshold of the total allocated storage (128 GiB), which is smaller than 64 GiB.
 
-The default behavior is to increase the disk size to the next premium SSD storage tier. This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage autogrow. Enabling storage autogrow is valuable when you're managing unpredictable workloads, because it automatically detects low-storage conditions and scales up the storage accordingly.
+The default behavior increases the disk size to the next Premium SSD storage size. This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage autogrow. Enabling storage autogrow is valuable when you're managing unpredictable workloads, because it automatically detects low-storage conditions and scales up the storage accordingly.
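+The threshold rules in this example can be checked with a short calculation. This is a sketch of the documented rule only (10 or 20 percent of provisioned size, capped at 64 GiB of free space); the `autogrow_trigger_gib` helper is hypothetical and not part of the service.
+
+```python
+# Sketch of the documented autogrow trigger threshold (not service code):
+# servers over 1 TiB trigger at min(10% of capacity, 64 GiB) of free space,
+# smaller servers at min(20% of capacity, 64 GiB).
+
+def autogrow_trigger_gib(provisioned_gib):
+    fraction = 0.10 if provisioned_gib > 1024 else 0.20
+    return min(64.0, fraction * provisioned_gib)
+
+print(autogrow_trigger_gib(2048))  # 2 TiB server: 10% is ~204.8 GiB, so the 64 GiB cap applies
+print(autogrow_trigger_gib(128))   # 128 GiB server: 20% is 25.6 GiB, below the 64 GiB cap
+```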
-The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure Managed disks. If a disk is already 4,096 GiB, the storage scaling activity isn't triggered, even if storage autogrow is turned on. In such cases, you need to scale your storage manually. Please rememeber that in this specific case, manual scaling is an offline operation and should be scheduled in alignment with your business needs. +The process of scaling storage is performed online, without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of [Azure managed disks](/azure/virtual-machines/managed-disks-overview). If a disk is already 4,096 GiB, the storage scaling activity isn't triggered, even if storage autogrow is turned on. In such cases, you need to scale your storage manually. Remember that, in this specific case, manual scaling is an offline operation and should be scheduled in alignment with your business needs. -Remember that storage can only be scaled up, not down. +> [!NOTE] +> Regardless of the type of storage you assign to your instance, storage can only be scaled up, not down. -## Storage Autogrow Limitations and Considerations +## Limitations and considerations of storage autogrow -- Disk scaling operations are generally performed online, except in specific scenarios involving the 4,096-GiB boundary. These scenarios include reaching or crossing the 4,096-GiB limit. For instance, scaling from 2,048 GiB to 8,192 GiB will trigger an offline operation. In the Azure portal, moving to 4 TB, which is represented as 4,095 GiB, will keep the operation online. However, if you explicitly specify 4 TB as 4,096 GiB, such as in Azure CLI, the scaling operation will be offline since it reaches the 4,096-GiB limit. +- Disk scaling operations are typically performed online, except in specific scenarios involving the boundary of 4,096 GiB.
These scenarios include reaching or crossing the limit of 4,096 GiB. For instance, scaling from 2,048 GiB to 8,192 GiB triggers an offline operation. In the Azure portal, moving to 4 TB, which is represented as 4,095 GiB, keeps the operation online. However, if you explicitly specify 4 TB as 4,096 GiB, such as in the Azure CLI, the scaling operation is completed in offline mode, since it reaches the limit of 4,096 GiB. -- Host Caching (ReadOnly and Read/Write) is supported on disk sizes less than 4 TiB. Any disk that is provisioned up to 4,095 GiB can take advantage of Host Caching. Host caching isn't supported for disk sizes more than or equal to 4,096 GiB. For example, a P50 premium disk provisioned at 4,095 GiB can take advantage of Host caching and a P50 disk provisioned at 4,096 GiB can't take advantage of Host Caching. Customers moving from lower disk size to 4,096 GiB or higher won't get disk caching ability. +- Host caching (ReadOnly and Read/Write) is supported on disk sizes smaller than 4 TiB. Any disk provisioned up to 4,095 GiB can take advantage of host caching. Host caching isn't supported for disk sizes greater than or equal to 4,096 GiB. For example, a P50 premium disk provisioned at 4,095 GiB can take advantage of host caching, while a P50 disk provisioned at 4,096 GiB can't. Customers moving from a lower disk size to 4,096 GiB or higher lose the ability to use disk caching. - This limitation is due to the underlying Azure Managed disk, which needs a manual disk scaling operation. You receive an informational message in the portal when you approach this limit. + This limitation is due to the underlying [Azure managed disks](/azure/virtual-machines/managed-disks-overview), which require a manual disk scaling operation. You receive an informational message in the portal when you approach this limit. - Storage autogrow isn't triggered when you have high WAL usage.
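The online/offline rule for the 4,096-GiB boundary described in the limitations above can be sketched as a quick check (a hypothetical helper, not an Azure SDK function):

```python
OFFLINE_BOUNDARY_GIB = 4096

def scaling_is_online(current_gib: int, target_gib: int) -> bool:
    """Per the documented rule, a storage scaling operation stays online
    only while both the current and target sizes are below 4,096 GiB;
    reaching or crossing that boundary forces an offline operation."""
    return current_gib < OFFLINE_BOUNDARY_GIB and target_gib < OFFLINE_BOUNDARY_GIB

print(scaling_is_online(2048, 4095))  # True: stays under the boundary
print(scaling_is_online(2048, 8192))  # False: crosses 4,096 GiB
print(scaling_is_online(2048, 4096))  # False: reaches exactly 4,096 GiB
```

This mirrors the portal behavior noted above: scaling to 4,095 GiB remains online, while explicitly specifying 4,096 GiB does not.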
@@ -130,23 +131,20 @@ Remember that storage can only be scaled up, not down. ## IOPS scaling -Azure Database for PostgreSQL flexible server supports provisioning of extra IOPS. This feature enables you to provision more IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time. +Azure Database for PostgreSQL - Flexible Server supports provisioning of extra IOPS. This feature enables you to provision more IOPS beyond the complimentary IOPS limit. Using this feature, you can increase or decrease the number of provisioned IOPS to match your workload requirements at any time. - -The minimum and maximum IOPS are determined by the selected compute size. To learn more about the minimum and maximum IOPS per compute size refer to the [compute size](concepts-compute.md). +The selected compute size determines the minimum and maximum IOPS. To learn more about the minimum and maximum IOPS per compute size, see [compute size](concepts-compute.md). > [!IMPORTANT] -> Minimum and maximum IOPS are determined by the selected compute size. +> The selected compute size determines the minimum and maximum IOPS. Learn how to [scale up or down IOPS](how-to-scale-compute-storage-portal.md). -## Price - -For the most up-to-date pricing information, see the [Azure Database for PostgreSQL flexible server pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/) page. The [Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) shows the monthly cost on the **Pricing tier** tab, based on the options that you select. +[!INCLUDE [pricing](./includes/compute-storage-pricing.md)] -If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and then select **Azure Database for PostgreSQL** to customize the options. +[Share your suggestions and bugs with the Azure Database for PostgreSQL product team](https://aka.ms/pgfeedback). ## Related content -- [Manage Azure Database for PostgreSQL - Flexible Server using the Azure portal](how-to-manage-server-portal.md) +- [Manage Azure Database for PostgreSQL - Flexible Server](how-to-manage-server-portal.md) - [Limits in Azure Database for PostgreSQL - Flexible Server](concepts-limits.md) diff --git a/articles/postgresql/flexible-server/includes/compute-storage-pricing.md b/articles/postgresql/flexible-server/includes/compute-storage-pricing.md new file mode 100644 index 0000000000..b2e8d0a2b4 --- /dev/null +++ b/articles/postgresql/flexible-server/includes/compute-storage-pricing.md @@ -0,0 +1,26 @@ +--- +author: kabharati +ms.author: kabharati +ms.reviewer: maghan +ms.date: 12/16/2024 +ms.service: azure-database-postgresql +ms.subservice: flexible-server +ms.topic: include +--- +## Price + +For the most up-to-date pricing information, see [Azure Database for PostgreSQL - Flexible Server pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/). + +The [Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) also shows you an estimate of the monthly costs of a server configuration, based on the options selected. + +That estimate is shown throughout the server creation experience, in the **New Azure Database for PostgreSQL Flexible server** page: + +:::image type="content" source="../media/compute-storage-pricing/new-server-estimated-costs.png" alt-text="Screenshot that shows the estimated monthly costs in the New Azure Database for PostgreSQL Flexible server wizard." lightbox="../media/compute-storage-pricing/new-server-estimated-costs.png"::: + +It can also be seen for existing servers if, in the resource menu of an existing instance, under the **Settings** section, you select **Compute + storage**: + +:::image type="content" source="../media/compute-storage-pricing/existing-server-estimated-costs.png" alt-text="Screenshot that shows the estimated monthly costs in the Compute + storage page of an existing Azure Database for PostgreSQL flexible server instance." lightbox="../media/compute-storage-pricing/existing-server-estimated-costs.png"::: + +If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select the **Databases** category, and then select **Azure Database for PostgreSQL** to add the service to your estimate and customize the options. + +:::image type="content" source="../media/compute-storage-pricing/pricing-calculator.png" alt-text="Screenshot that shows the Azure pricing calculator." lightbox="../media/compute-storage-pricing/pricing-calculator.png"::: \ No newline at end of file diff --git a/articles/postgresql/flexible-server/media/concepts-compute/existing-server-estimated-costs.png b/articles/postgresql/flexible-server/media/compute-storage-pricing/existing-server-estimated-costs.png similarity index 100% rename from articles/postgresql/flexible-server/media/concepts-compute/existing-server-estimated-costs.png rename to articles/postgresql/flexible-server/media/compute-storage-pricing/existing-server-estimated-costs.png diff --git a/articles/postgresql/flexible-server/media/concepts-compute/new-server-estimated-costs.png b/articles/postgresql/flexible-server/media/compute-storage-pricing/new-server-estimated-costs.png similarity index 100% rename from articles/postgresql/flexible-server/media/concepts-compute/new-server-estimated-costs.png rename to articles/postgresql/flexible-server/media/compute-storage-pricing/new-server-estimated-costs.png diff --git a/articles/postgresql/flexible-server/media/concepts-compute/pricing-calculator.png b/articles/postgresql/flexible-server/media/compute-storage-pricing/pricing-calculator.png similarity index 100% rename from articles/postgresql/flexible-server/media/concepts-compute/pricing-calculator.png rename to articles/postgresql/flexible-server/media/compute-storage-pricing/pricing-calculator.png