diff --git a/articles/cosmos-db/emulator-linux.md b/articles/cosmos-db/emulator-linux.md index 47006db48a..be14595bc7 100644 --- a/articles/cosmos-db/emulator-linux.md +++ b/articles/cosmos-db/emulator-linux.md @@ -52,6 +52,13 @@ c1bb8cf53f8a mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:vnext-prev > > The emulator gateway endpoint is typically available on port `8081` at the address . To navigate to the data explorer, use the address in your web browser. It may take a few seconds for data explorer to be available. The gateway endpoint is typically available immediately. +> [!IMPORTANT] +> The .NET and Java SDKs don't support HTTP mode in the emulator. Since this version of the emulator starts with HTTP by default, you will need to explicitly enable HTTPS when starting the container (see below). For the Java SDK, you will also need to [install certificates](#installing-certificates-for-java-sdk). +> +> ```bash +> docker run --detach --publish 8081:8081 --publish 1234:1234 mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:vnext-preview --protocol https +> ``` + ## Docker commands The following table summarizes the available Docker commands for configuring the emulator. This table details the corresponding arguments, environment variables, allowed values, default settings, and descriptions of each command. @@ -75,63 +82,103 @@ The following table summarizes the available Docker commands for configuring the This emulator is in active development and preview. As a result, not all Azure Cosmos DB features are supported. Some features will also not be supported in the future. This table includes the state of various features and their level of support. -| | Support | +| Feature | Support | |---|---| -| **Create database** | ✅ Supported | -| **Read database** | ✅ Supported | -| **Delete database** | ✅ Supported | -| **Read database feed** | ✅ Supported | -| **Create database twice conflict** | ✅ Supported | +| **Batch API** | ✅ Supported | +| **Bulk API** | ✅ Supported | +| **Change Feed** | ⚠️ Not yet implemented | +| **Create and read document with utf data** | ✅ Supported | | **Create collection** | ✅ Supported | -| **Read collection** | ✅ Supported | -| **Update collection** | ✅ Supported | -| **Delete collection** | ✅ Supported | -| **Read collection feed** | ✅ Supported | | **Create collection twice conflict** | ✅ Supported | -| **Create collection with custom index policy** | ✅ Supported | -| **Create collection with ttl expiration** | ✅ Supported | -| **Create partitioned collection** | ✅ Supported | -| **Get and change collection performance** | ✅ Supported | +| **Create collection with custom index policy** | ⚠️ Not yet implemented | +| **Create collection with ttl expiration** | ⚠️ Not yet implemented | +| **Create database** | ✅ Supported | +| **Create database twice conflict** | ✅ Supported | | **Create document** | ✅ Supported | -| **Read document** | ✅ Supported | -| **Update document** | ✅ Supported | -| **Patch document** | ✅ Supported | +| **Create partitioned collection** | ⚠️ Not yet implemented | +| **Delete collection** | ✅ Supported | +| **Delete database** | ✅ Supported | | **Delete document** | ✅ Supported | -| **Read document feed** | ✅ Supported | +| **Get and change collection performance** | ⚠️ Not yet implemented | | **Insert large document** | ✅ Supported | -| **Create and read document with utf data** | ✅ Supported | -| **Query with sql query spec** | ✅ Supported | -| **Query with equality** | ✅ Supported | -| **Query with and filter and projection** 
| ⚠️ Not yet implemented | +| **Patch document** | ✅ Supported | +| **Query partitioned collection in parallel** | ⚠️ Not yet implemented | +| **Query with aggregates** | ⚠️ Not yet implemented | | **Query with and filter** | ⚠️ Not yet implemented | +| **Query with and filter and projection** | ⚠️ Not yet implemented | +| **Query with equality** | ✅ Supported | | **Query with equals on id** | ✅ Supported | -| **Query with inequality** | ⚠️ Not yet implemented | -| **Query with range operators on numbers** | ⚠️ Not yet implemented | -| **Query with range operators on strings** | ⚠️ Not yet implemented | -| **Query with range operators date times** | ⚠️ Not yet implemented | +| **Query with joins** | ⚠️ Not yet implemented | | **Query with order by** | ✅ Supported | +| **Query with order by for partitioned collection** | ⚠️ Not yet implemented | | **Query with order by numbers** | ✅ Supported | | **Query with order by strings** | ⚠️ Not yet implemented | -| **Query with aggregates** | ⚠️ Not yet implemented | +| **Query with paging** | ⚠️ Not yet implemented | +| **Query with range operators date times** | ⚠️ Not yet implemented | +| **Query with range operators on numbers** | ⚠️ Not yet implemented | +| **Query with range operators on strings** | ⚠️ Not yet implemented | +| **Query with single join** | ⚠️ Not yet implemented | +| **Query with string math and array operators** | ⚠️ Not yet implemented | | **Query with subdocuments** | ⚠️ Not yet implemented | -| **Query with joins** | ⚠️ Not yet implemented | | **Query with two joins** | ⚠️ Not yet implemented | | **Query with two joins and filter** | ⚠️ Not yet implemented | -| **Query with single join** | ⚠️ Not yet implemented | -| **Query with string math and array operators** | ⚠️ Not yet implemented | -| **Query with paging** | ⚠️ Not yet implemented | -| **Query partitioned collection in parallel** | ⚠️ Not yet implemented | -| **Query with order by for partitioned collection** | ⚠️ Not yet implemented | -| **Stored procedure** | ❌ Not planned | +| **Read collection** | ✅ Supported | +| **Read collection feed** | ⚠️ Not yet implemented | +| **Read database** | ✅ Supported | +| **Read database feed** | ⚠️ Not yet implemented | +| **Read document** | ✅ Supported | +| **Read document feed** | ✅ Supported | +| **Replace document** | ✅ Supported | +| **Request Units** | ⚠️ Not yet implemented | +| **Stored procedures** | ❌ Not planned | | **Triggers** | ❌ Not planned | | **UDFs** | ❌ Not planned | +| **Update collection** | ⚠️ Not yet implemented | +| **Update document** | ✅ Supported | + ## Limitations In addition to features not yet supported or not planned, the following list includes current limitations of the emulator. - The .NET SDK for Azure Cosmos DB doesn't support bulk execution in the emulator. -- The .NET SDK doesn't support HTTP mode in the emulator. +- The .NET and Java SDKs don't support HTTP mode in the emulator. + +## Installing certificates for Java SDK + +When using the [Java SDK for Azure Cosmos DB](./nosql/sdk-java-v4.md) with this version of the emulator in https mode, it is necessary to install it's certificates to your local Java trust store. 
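+
+Once the certificate is imported using the steps that follow, the Java SDK can reach the emulator over HTTPS. Here's a minimal connection sketch; it assumes the emulator is listening on the default `https://localhost:8081/` endpoint and is using the well-known emulator account key (adjust both if you started the container with different settings):
+
+```java
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+
+public class EmulatorConnectionSketch {
+    public static void main(String[] args) {
+        // Assumption: default emulator endpoint and the well-known emulator account key.
+        String endpoint = "https://localhost:8081/";
+        String key = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==";
+
+        // The emulator is typically accessed through the gateway.
+        CosmosClient client = new CosmosClientBuilder()
+            .endpoint(endpoint)
+            .key(key)
+            .gatewayMode()
+            .buildClient();
+
+        client.createDatabaseIfNotExists("demo-db");
+        client.close();
+    }
+}
+```
+
+If the client fails with an SSL handshake (PKIX path building) error, the certificate hasn't been imported into the trust store used by the JVM that runs your application.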
+ +### Get certificate + +In a `bash` window, run the following: + +```bash +# If the emulator was started with /AllowNetworkAccess, replace localhost with the actual IP address of it: +EMULATOR_HOST=localhost +EMULATOR_PORT=8081 +EMULATOR_CERT_PATH=/tmp/cosmos_emulator.cert +openssl s_client -connect ${EMULATOR_HOST}:${EMULATOR_PORT} $EMULATOR_CERT_PATH +``` + +### Install certificate + +Navigate to the directory of your java installation where `cacerts` file is located (replace below with correct directory): + +```bash +cd "C:/Program Files/Eclipse Adoptium/jdk-17.0.10.7-hotspot/bin" +``` + +Import the cert (you may be asked for a password, the default value is "changeit"): + +```bash +keytool -cacerts -importcert -alias cosmos_emulator -file $EMULATOR_CERT_PATH +``` + +If you get an error because the alias already exists, delete it and then run the above again: + +```bash +keytool -cacerts -delete -alias cosmos_emulator +``` ## Reporting issues diff --git a/articles/cosmos-db/how-to-configure-nsp.md b/articles/cosmos-db/how-to-configure-nsp.md new file mode 100644 index 0000000000..de22ec5e54 --- /dev/null +++ b/articles/cosmos-db/how-to-configure-nsp.md @@ -0,0 +1,55 @@ +--- +title: Configure Network Security Perimeter for an Azure Cosmos DB account +description: Learn how to secure your Cosmos DB account using Network Service Perimeter. +ms.service: azure-cosmos-db +ms.topic: how-to +ms.date: 11/20/2024 +ms.author: iriaosara +author: iriaosara +--- + +# Configure Network Security Perimeter for an Azure Cosmos DB account +[!INCLUDE[NoSQL](includes/appliesto-nosql.md)] + +This article explains how to configure Network Security Perimeter on your Azure Cosmos DB account. + +> [!IMPORTANT] +> Network Security Perimeter is in public preview. +> This feature is provided without a service level agreement. +> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). + +## Feature overview +Network administrators can define a network isolation boundary for their PaaS services, which allows communication between their Azure Cosmos DB account and Keyvault, SQL, and other services using Azure Network Security Perimeter. Securing public access on Azure Service can be accomplished in several ways: + +- Securing inbound connections: Restrict public exposure of your Azure Cosmos DB account by explicitly granting ingress access to resources inside the perimeter. By default, access from unauthorized networks is denied, and access from private endpoints into the perimeter or resources in a subscription can be configured. +- Securing service-to-service communication: All resources inside the perimeter can communicate with any other resources within the perimeter, preventing data exfiltration. +- Securing outbound connections: If Network Security Perimeter doesn't manage the destination tenant, it blocks access when attempting to copy data from one tenant to another. Access is granted based on FQDN or access from other network perimeters; all other access attempts are denied. + +:::image type="content" source="./media/network-service-perimeter/nsp-overview.png" alt-text="Screenshot showing network service perimeter."::: + +All of these communications are taken care of automatically once Network Security Perimeter is set up, and users don't have to manage them. 
Instead of setting up a private endpoint for each resource to enable communication or configure virtual network, Network Security Perimeter at the top level enables this functionality. + +> [!NOTE] +> Azure Network security perimeter complements what we currently have in place today, including private endpoint, which allows access to a private resource within the perimeter, and VNet injection, which enables managed VNet offerings to access resources within the perimeter. +> We currently do not support the combination of network security perimeter, customer-managed keys (CMK), and log store features. If you need to perform restores on a CMK with a network security perimeter account, you'll temporarily need to relax the perimeter settings in the key vault to allow your Cosmos DB account access to the key. + +## Getting started +> [!IMPORTANT] +> Before setting up a network security perimeter [create a managed identity in Azure](./how-to-setup-managed-identity.md#add-a-user-assigned-identity). + +* In the Azure portal, search for **network security perimeters** in the resource list and select **Create +**. +* From the list of resources, select the resources that you want to associate with the perimeter. +* Add an inbound access rule, the source type can be either an IP address or a subscription. +* Add outbound access rules to allow resources inside the perimeter to connect to the internet and resources outside of the perimeter. + +In cases where you have existing Azure Cosmos DB account and looking to add security perimeter: +* Select **Networking** from the **Settings** + +:::image type="content" source="./media/network-service-perimeter/add-nsp.png" alt-text="Screenshot showing how to add NSP to an Azure resource."::: + +* Then select **Associate NSP** to associate this resource with your network security perimeter to enable communication with other Azure resources in the same perimeter while restricting public access to only allow the connections you specify. + +## Next steps + +* Overview of [network service perimeter](https://aka.ms/networksecurityperimeter) +* Learn to monitor with [diagnostic logs in network security perimeter](https://aka.ms/networksecurityperimeter) \ No newline at end of file diff --git a/articles/cosmos-db/index-policy.md b/articles/cosmos-db/index-policy.md index 181fb3cd31..29b2fc2da6 100644 --- a/articles/cosmos-db/index-policy.md +++ b/articles/cosmos-db/index-policy.md @@ -159,14 +159,14 @@ Here are some rules for included and excluded paths precedence in Azure Cosmos D | **`diskANN`** | Creates an index based on DiskANN for fast and efficient approximate search. | 4096 | A few points to note: - - The `flat` and `quantizedFlat` index types apply Azure Cosmos DB's index to store and read each vector when performing a vector search. Vector searches with a `flat` index are brute-force searches and produce 100% accuracy or recall. That is, it's guaranteed to find the most similar vectors in the dataset. However, there's a limitation of `505` dimensions for vectors on a flat index. +- The `flat` and `quantizedFlat` index types apply Azure Cosmos DB's index to store and read each vector when performing a vector search. Vector searches with a `flat` index are brute-force searches and produce 100% accuracy or recall. That is, it's guaranteed to find the most similar vectors in the dataset. However, there's a limitation of `505` dimensions for vectors on a flat index. - The `quantizedFlat` index stores quantized (compressed) vectors on the index. 
Vector searches with `quantizedFlat` index are also brute-force searches, however their accuracy might be slightly less than 100% since the vectors are quantized before adding to the index. However, vector searches with `quantized flat` should have lower latency, higher throughput, and lower RU cost than vector searches on a `flat` index. This is a good option for scenarios where you're using query filters to narrow down the vector search to a relatively small set of vectors, and high accuracy is required. - The `diskANN` index is a separate index defined specifically for vectors applying [DiskANN](https://www.microsoft.com/research/publication/diskann-fast-accurate-billion-point-nearest-neighbor-search-on-a-single-node/), a suite of high performance vector indexing algorithms developed by Microsoft Research. DiskANN indexes can offer some of the lowest latency, highest throughput, and lowest RU cost queries, while still maintaining high accuracy. However, since DiskANN is an approximate nearest neighbors (ANN) index, the accuracy might be lower than `quantizedFlat` or `flat`. The `diskANN` and `quantizedFlat` indexes can take optional index build parameters that can be used to tune the accuracy versus latency trade-off that applies to every Approximate Nearest Neighbors vector index. - - `quantizationByteSize`: Sets the size (in bytes) for product quantization. Min=1, Default=dynamic (system decides), Max=512. Setting this larger may result in higher accuracy vector searches at expense of higher RU cost and higher latency. This applies to both `quantizedFlat` and `DiskANN` index types. +- `quantizationByteSize`: Sets the size (in bytes) for product quantization. Min=1, Default=dynamic (system decides), Max=512. Setting this larger may result in higher accuracy vector searches at expense of higher RU cost and higher latency. This applies to both `quantizedFlat` and `DiskANN` index types. - `indexingSearchListSize`: Sets how many vectors to search over during index build construction. Min=10, Default=100, Max=500. Setting this larger may result in higher accuracy vector searches at the expense of longer index build times and higher vector ingest latencies. This applies to `DiskANN` indexes only. Here's an example of an indexing policy with a vector index: @@ -494,7 +494,7 @@ WHERE r.familyname = 'Anderson' AND ch.age > 20 A container's indexing policy can be updated at any time [by using the Azure portal or one of the supported SDKs](how-to-manage-indexing-policy.md). An update to the indexing policy triggers a transformation from the old index to the new one, which is performed online and in-place (so no extra storage space is consumed during the operation). The old indexing policy is efficiently transformed to the new policy without affecting the write availability, read availability, or the throughput provisioned on the container. Index transformation is an asynchronous operation, and the time it takes to complete depends on the provisioned throughput, the number of items and their size. If multiple indexing policy updates have to be made, it's recommended to do all the changes as a single operation in order to have the index transformation complete as quickly as possible. > [!IMPORTANT] -> Index transformation is an operation that consumes [request units](request-units.md). +> Index transformation is an operation that consumes [request units](request-units.md) and updating the index policy is an RU bound operation. 
If any indexing term is missed, the customer will see queries consuming more overall RUs. > [!NOTE] > You can track the progress of index transformation in the [Azure portal](how-to-manage-indexing-policy.md#use-the-azure-portal) or by [using one of the SDKs](how-to-manage-indexing-policy.md#dotnet-sdk). diff --git a/articles/cosmos-db/media/network-service-perimeter/add-nsp.png b/articles/cosmos-db/media/network-service-perimeter/add-nsp.png new file mode 100644 index 0000000000..bab2f5f5ad Binary files /dev/null and b/articles/cosmos-db/media/network-service-perimeter/add-nsp.png differ diff --git a/articles/cosmos-db/media/network-service-perimeter/nsp-overview.png b/articles/cosmos-db/media/network-service-perimeter/nsp-overview.png new file mode 100644 index 0000000000..80b5925116 Binary files /dev/null and b/articles/cosmos-db/media/network-service-perimeter/nsp-overview.png differ diff --git a/articles/cosmos-db/mongodb/vcore/TOC.yml b/articles/cosmos-db/mongodb/vcore/TOC.yml index 523f59eaa5..7d2bd9bfc0 100644 --- a/articles/cosmos-db/mongodb/vcore/TOC.yml +++ b/articles/cosmos-db/mongodb/vcore/TOC.yml @@ -24,6 +24,8 @@ href: tutorial-nodejs-web-app.md - name: Concepts items: + - name: Autoscale + href: autoscale.md - name: Free tier href: free-tier.md - name: Burstable tier diff --git a/articles/cosmos-db/mongodb/vcore/autoscale.md b/articles/cosmos-db/mongodb/vcore/autoscale.md new file mode 100644 index 0000000000..aa68912376 --- /dev/null +++ b/articles/cosmos-db/mongodb/vcore/autoscale.md @@ -0,0 +1,111 @@ +--- +title: Autoscale on vCore based Azure Cosmos DB for MongoDB +titleSuffix: Azure Cosmos DB for MongoDB (vCore) +description: Autoscale on vCore based Azure Cosmos DB for MongoDB. +author: suvishodcitus +ms.author: suvishod +ms.service: azure-cosmos-db +ms.subservice: mongodb-vcore +ms.topic: conceptual +ms.date: 11/18/2024 +ms.custom: references_regions +# CustomerIntent: As a PM, we wanted to offer our customers a feature that allows database adapts immediately to changing workloads, eliminating performance bottlenecks +--- + + +# Autoscale for vCore-based Azure Cosmos DB for MongoDB (public preview) + +[!INCLUDE[MongoDB (vCore)](~/reusable-content/ce-skilling/azure/includes/cosmos-db/includes/appliesto-mongodb-vcore.md)] + +Managing databases with fluctuating workloads can be complex and costly, especially when unpredictable traffic spikes require overprovisioning resources. To address this +challenge, Azure Cosmos DB for MongoDB introduces Autoscale for its vCore-based clusters. Autoscale is designed to handle variable workloads by dynamically adjusting capacity +in real-time, scaling up or down based on application demands. + +Unlike other managed MongoDB solutions, which often experience delays of several hours when scaling up and more than 24 hours +for scaling down, Azure Cosmos DB's Autoscale offers instant scalability. This feature ensures that your database adapts +immediately to changing workloads, eliminating performance bottlenecks and avoiding unnecessary costs. + +## Get started + +Follow this document to [create a new Azure Cosmos DB for MongoDB (vCore)](quickstart-portal.md) cluster and select the 'M200-Autoscale tier (Preview)' checkbox. +Alternatively, you can also use [Bicep template](quickstart-bicep.md) to provision the resource. 
+ +:::image type="content" source="media/how-to-scale-cluster/provision-autoscale-tier.jpg" alt-text="Screenshot of the free tier provisioning."::: + +## Benefits + +- **Instant Scale** + + - Automatically adjusts capacity without downtime, maintaining performance during unexpected workload spikes. + - Eliminates the need for manual scaling, reducing the risk of service disruptions. + +- **Cost Efficiency** + + - Reduces expenses by preventing overprovisioning, utilizing resources only when necessary. + - Pay-as-you-use pricing ensures that you’re only billed for actual usage, maximizing resource utilization. + +- **Predictable Pricing** + + - Core-based pricing with transparent cost calculations makes budgeting and forecasting easier. + - Flexible pricing model adapts to workload demands, avoiding unexpected cost spikes. + +## Pricing Model + +For simplicity it uses a core-based pricing model, where charges are based on the higher of CPU or committed memory usage +in the last hour, compared to a 35% utilization threshold. + +* Upto 35% Utilization: Minimum price applies. +* Above 35% Utilization: Maximum price applies. +* Autoscale clusters incur a 50% premium over the base tier due to their instant scaling capabilities. +* Billing Frequency: Costs are calculated and billed hourly, ensuring you only pay for the capacity you use. + +### Example: +In a scenario where an application experiences usage spikes for 10% of its runtime: + +* Without Autoscale: An overprovisioned M200 cluster would cost $1,185.24. +* With Autoscale: An M200-Autoscale cluster would cost $968.41, offering a savings of 18.29%. + +This flexible pricing model helps reduce costs while maintaining optimal performance during peak demand. + +## Restrictions + +- Currently, only the M200 Autoscale tier is supported, allowing scaling within the range of M80 to M200 tiers. +- Autoscale applies only to compute resources. Storage capacity must still be scaled manually. +- Upgrades or downgrades between the General Tier and Autoscale Tier are not supported at this time. + +## Frequently Asked Questions (FAQs) + +- Which clusters support Autoscale? + +Currently, Autoscale is only available for the M200 tier, with scaling capabilities from M80 to M200. + +- Does Autoscale manage both compute and storage scaling? + +No, Autoscale only manages compute resources. Storage must be scaled manually. + +- Can I switch between the General Tier and Autoscale Tier? + +No, upgrades or downgrades between the General Tier and Autoscale Tier are not supported at this time. + +- Is there any downtime when Autoscale adjusts capacity? + +No, Autoscale adjusts capacity instantly and seamlessly, without any downtime or impact on performance. + +- What happens if my workload exceeds the M200 tier limits? + +If your workload consistently exceeds the M200 limits, you may need to consider a higher tier or alternative scaling strategies, as Autoscale currently only supports up to M200. + +- Is Autoscale available in all Azure regions? + +Autoscale support may vary by region. Please check the Azure portal for availability in your preferred region. + +- How can I verify the charges incurred with Autoscale? + +To provide cost transparency, we’ve introduced a new metric called “Autoscale Utilization Percentage.” This metric shows the maximum of CPU or committed memory usage over time, allowing you to compare it against the charges incurred. 
+ +## Next steps + +Having explored the capabilities of the Autoscale tier in Azure Cosmos DB for MongoDB (vCore), the next step is to dive into the migration journey. This involves understanding how to conduct a migration assessment and planning a seamless transfer of your existing MongoDB workloads to Azure. + +> [!div class="nextstepaction"] +> [Migration options for Azure Cosmos DB for MongoDB (vCore)](migration-options.md) diff --git a/articles/cosmos-db/mongodb/vcore/compatibility.md b/articles/cosmos-db/mongodb/vcore/compatibility.md index 75e39c9eff..75da56506e 100644 --- a/articles/cosmos-db/mongodb/vcore/compatibility.md +++ b/articles/cosmos-db/mongodb/vcore/compatibility.md @@ -495,7 +495,7 @@ Below are the list of operators currently supported on Azure Cosmos DB for Mongo $collStatsYesYesYes $countYesYesYes $densifyYesYes -$documentsNoNo +$documentsYesYes $facetYesYesYes $fillYesYes $geoNearYesYesYes diff --git a/articles/cosmos-db/mongodb/vcore/free-tier.md b/articles/cosmos-db/mongodb/vcore/free-tier.md index 822b8cf53f..0f5f7a2651 100644 --- a/articles/cosmos-db/mongodb/vcore/free-tier.md +++ b/articles/cosmos-db/mongodb/vcore/free-tier.md @@ -19,7 +19,7 @@ ms.custom: references_regions Azure Cosmos DB for MongoDB (vCore) now introduces a new SKU, the "Free Tier," enabling users to explore the platform without any financial commitments. The free tier lasts for the lifetime of your account, boasting command and feature parity with a regular Azure Cosmos DB for MongoDB (vCore) account. -It makes it easy for you to get started, develop, test your applications, or even run small production workloads for free. With Free Tier, you get a dedicated MongoDB cluster with 32-GB storage, perfect for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available in the Southeast Asia region. +It makes it easy for you to get started, develop, test your applications, or even run small production workloads for free. With Free Tier, you get a dedicated MongoDB cluster with 32-GB storage, perfect for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available in the South India region. ## Get started @@ -48,7 +48,7 @@ specify your storage requirements, and you're all set. Rest assured, your data, ## Restrictions * For a given subscription, only one free tier account is permissible. -* Free tier is currently available in the Southeast Asia region only. +* Free tier is currently available in the South India region only. * High availability, Azure Active Directory (Azure AD) and Diagnostic Logging are not supported. 
diff --git a/articles/cosmos-db/mongodb/vcore/media/how-to-scale-cluster/provision-autoscale-tier.jpg b/articles/cosmos-db/mongodb/vcore/media/how-to-scale-cluster/provision-autoscale-tier.jpg new file mode 100644 index 0000000000..95f99d3fbf Binary files /dev/null and b/articles/cosmos-db/mongodb/vcore/media/how-to-scale-cluster/provision-autoscale-tier.jpg differ diff --git a/articles/cosmos-db/mongodb/vcore/media/how-to-scale-cluster/provision-free-tier.jpg b/articles/cosmos-db/mongodb/vcore/media/how-to-scale-cluster/provision-free-tier.jpg index 2b917d902d..7260c61fd3 100644 Binary files a/articles/cosmos-db/mongodb/vcore/media/how-to-scale-cluster/provision-free-tier.jpg and b/articles/cosmos-db/mongodb/vcore/media/how-to-scale-cluster/provision-free-tier.jpg differ diff --git a/articles/cosmos-db/nosql/TOC.yml b/articles/cosmos-db/nosql/TOC.yml index ddc81ac5e2..e2a5088a80 100644 --- a/articles/cosmos-db/nosql/TOC.yml +++ b/articles/cosmos-db/nosql/TOC.yml @@ -903,6 +903,8 @@ href: ../how-to-configure-vnet-service-endpoint.md - name: Configure access from private endpoints href: ../how-to-configure-private-endpoints.md + - name: Configure Network Security Perimeter + href: ../how-to-configure-nsp.md - name: Encryption items: - name: Use Always Encrypted diff --git a/articles/cosmos-db/nosql/change-feed-modes.md b/articles/cosmos-db/nosql/change-feed-modes.md index 4bd698391c..69cb0dd3fa 100644 --- a/articles/cosmos-db/nosql/change-feed-modes.md +++ b/articles/cosmos-db/nosql/change-feed-modes.md @@ -6,7 +6,7 @@ ms.author: jucocchi ms.service: azure-cosmos-db ms.custom: build-2023 ms.topic: conceptual -ms.date: 07/25/2024 +ms.date: 11/11/2024 --- # Change feed modes in Azure Cosmos DB @@ -85,7 +85,7 @@ In addition to the [common features across all change feed modes](../change-feed * Change feed items come in the order of their modification time. Deletes from TTL expirations aren't guaranteed to appear in the feed immediately after the item expires. They appear when the item is purged from the container. -* All changes that occurred within the retention window that's set for continuous backups on the account can be read. Attempting to read changes that occurred outside of the retention window results in an error. For example, if your container was created eight days ago and your continuous backup period retention period is seven days, then you can only read changes from the last seven days. +* All changes that occurred within the retention window for continuous backups on the account can be read. Attempting to read changes that occurred outside of the retention window results in an error. For example, if your container was created eight days ago and your continuous backup period retention period is seven days, then you can only read changes from the last seven days. * The change feed starting point can be from "now" or from a specific checkpoint within your retention period. You can't read changes from the beginning of the container or from a specific point in time by using this mode. @@ -107,7 +107,7 @@ You can use the following ways to consume changes from change feed in latest ver ### Parse the response object -In latest version mode, the default response object is an array of items that have changed. Each item contains the standard metadata for any Azure Cosmos DB item, including `_etag` and `_ts`, with the addition of a new property, `_lsn`. +In latest version mode, the default response object is an array of items that changed. 
Each item contains the standard metadata for any Azure Cosmos DB item, including `_etag` and `_ts`, with the addition of a new property, `_lsn`. The `_etag` format is internal and you shouldn't take dependency on it because it can change anytime. `_ts` is a modification or a creation time stamp. You can use `_ts` for chronological comparison. `_lsn` is a batch ID that is added for change feed only that represents the transaction ID. Many items can have same `_lsn`. @@ -120,7 +120,7 @@ During the preview, the following methods to read the change feed are available | **Method to read change feed** | **.NET** | **Java** | **Python** | **Node.js** | | --- | --- | --- | --- | --- | | [Change feed pull model](change-feed-pull-model.md) | [>= 3.32.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.32.0-preview) | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.37.0) | No | [>= 4.1.0](https://www.npmjs.com/package/@azure/cosmos?activeTab=versions) | -| [Change feed processor](change-feed-processor.md) | No | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.42.0) | No | No | +| [Change feed processor](change-feed-processor.md) | [>= 3.40.0-preview.0](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.40.0-preview.0) | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.42.0) | No | No | | Azure Functions trigger | No | No | No | No | > [!NOTE] @@ -132,33 +132,58 @@ To get started using all versions and deletes change feed mode, enroll in the pr :::image type="content" source="media/change-feed-modes/enroll-in-preview.png" alt-text="Screenshot of All versions and deletes change feed mode feature in Preview Features page in Subscriptions overview in Azure portal."::: -Before you submit your request, ensure that you have at least one Azure Cosmos DB account in the subscription. This account can be an existing account or a new account that you created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, the request is declined because there are no accounts to apply the feature to. +Before you submit your request, ensure that you have at least one Azure Cosmos DB account in the subscription. This account can be an existing account or a new account that you created to try out the preview feature. If you have no accounts in the subscription when your request is received, the request is declined because there are no accounts to apply the feature to. The Azure Cosmos DB team reviews your request and contacts you via email to confirm which Azure Cosmos DB accounts in the subscription you want to enroll in the preview. To use the preview, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Continuous backups can be enabled either before or after being admitted to the preview, but continuous backups must be enabled before you attempt to read from the change feed in all versions and deletes mode. ### Parse the response object -The response object is an array of items that represent each change. The array looks like the following example: - -```json -[ - {  - "current": { - - },  - "previous" : { - - },  - "metadata": { - "lsn": , - "operationType": , - "previousImageLSN" : , - "timeToLiveExpired" : , - "crts": - } +The response object is an array of items that represent each change. Different properties will be populated depending on the change type. 
There's currently no way to get the previous version of items for either replace or delete operations. + +* Create operations + ```json + { + "current": { + + }, + "metadata": { + "operationType": "create", + "lsn": , + "crts": + } } -] -``` + ``` + +* Replace operations + ```json + { + "current": { + + }, + "metadata": { + "operationType": "replace", + "lsn": , + "crts": , + "previousImageLSN" : , + } + } + ``` + +* Delete operations + ```json + { + "metadata": { + "operationType": "delete", + "lsn": , + "crts": , + "previousImageLSN" : , + "id": "", + "partitionKey": { + "": "" + } + } + } + ``` ### Limitations @@ -170,9 +195,9 @@ The response object is an array of items that represent each change. The array l * The ability to start reading the change feed from the beginning or to select a start time based on a past time stamp isn't currently supported. You can either start from "now" or from a previous [lease](change-feed-processor.md#components-of-the-change-feed-processor) or [continuation token](change-feed-pull-model.md#save-continuation-tokens). -* Receiving the previous version of items that have been updated isn't currently available. +* Receiving the previous version of items that were deleted or updated isn't currently available. -* Accounts that have enabled [merging partitions](../merge.md) aren't supported. +* Accounts that enabled [merging partitions](../merge.md) aren't supported. --- diff --git a/articles/cosmos-db/nosql/query/vectordistance.md b/articles/cosmos-db/nosql/query/vectordistance.md index 4b321896bc..3826cd5dd5 100644 --- a/articles/cosmos-db/nosql/query/vectordistance.md +++ b/articles/cosmos-db/nosql/query/vectordistance.md @@ -64,9 +64,9 @@ ORDER BY VectorDistance(c.vector1, ) This next example also includes optional arguments for `VectorDistance` ```nosql -SELECT TOP 10 s.name, VectorDistance(c.vector1, , true, {'distanceFunction':'cosine', 'dataType':'float32',}) +SELECT TOP 10 s.name, VectorDistance(c.vector1, , true, {'distanceFunction':'cosine', 'dataType':'float32'}) FROM c -ORDER BY VectorDistance(c.vector1, , true, {'distanceFunction':'cosine', 'dataType':'float32',}) +ORDER BY VectorDistance(c.vector1, , true, {'distanceFunction':'cosine', 'dataType':'float32'}) ``` >[!IMPORTANT] diff --git a/articles/cosmos-db/nosql/security/how-to-grant-control-plane-role-based-access.md b/articles/cosmos-db/nosql/security/how-to-grant-control-plane-role-based-access.md index 958273066b..1a53b99f6d 100644 --- a/articles/cosmos-db/nosql/security/how-to-grant-control-plane-role-based-access.md +++ b/articles/cosmos-db/nosql/security/how-to-grant-control-plane-role-based-access.md @@ -24,7 +24,7 @@ Diagram of the sequence of the deployment guide including these locations, in or This article walks through the steps to grant an identity access to manage an Azure Cosmos DB for NoSQL account and its resources. > [!IMPORTANT] -> The steps in this article only cover control plane access to perform operations on the account itself of any resources in the account's hierarchy. To learn how to manage roles, definitions, and assignments for the control plane, see [grant data plane role-based access](how-to-grant-data-plane-role-based-access.md). +> The steps in this article only cover control plane access to perform operations on the account itself of any resources in the account's hierarchy. To learn how to manage items and execute queries for the data plane, see [grant data plane role-based access](how-to-grant-data-plane-role-based-access.md). 
[!INCLUDE[Grant control plane role-based access](../../includes/grant-control-plane-role-based-access.md)] diff --git a/articles/cosmos-db/nosql/security/how-to-grant-data-plane-role-based-access.md b/articles/cosmos-db/nosql/security/how-to-grant-data-plane-role-based-access.md index 3055b2282d..bdc5225d0a 100644 --- a/articles/cosmos-db/nosql/security/how-to-grant-data-plane-role-based-access.md +++ b/articles/cosmos-db/nosql/security/how-to-grant-data-plane-role-based-access.md @@ -27,7 +27,7 @@ Diagram of the sequence of the deployment guide including these locations, in or This article walks through the steps to grant an identity access to manage data in an Azure Cosmos DB for NoSQL account. > [!IMPORTANT] -> The steps in this article only cover data plane access to perform operations on individual items and run queries. To learn how to manage roles, definitions, and assignments for the control plane, see [grant control plane role-based access](how-to-grant-control-plane-role-based-access.md). +> The steps in this article only cover data plane access to perform operations on individual items and run queries. To learn how to manage databases and containers for the control plane, see [grant control plane role-based access](how-to-grant-control-plane-role-based-access.md). ## Prerequisites diff --git a/articles/cosmos-db/partitioning-overview.md b/articles/cosmos-db/partitioning-overview.md index 3fb0073358..495fe4d14f 100644 --- a/articles/cosmos-db/partitioning-overview.md +++ b/articles/cosmos-db/partitioning-overview.md @@ -102,6 +102,16 @@ If you need [multi-item ACID transactions](database-transactions-optimistic-conc > [!NOTE] > If you only have one physical partition, the value of the partition key may not be relevant as all queries will target the same physical partition. +## Types of partition keys + + +| **Partitioning Strategy** | **When to Use** | **Pros** | **Cons** | +|------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +|**Regular Partition Key** (e.g., CustomerId, OrderId) | - Use when the partition key has high cardinality and aligns with query patterns (e.g., filtering by CustomerId).
- Suitable for workloads where queries mostly target a single customer’s data (e.g., retrieving all orders for a customer). | - Simple to manage.
- Efficient queries when the access pattern matches the partition key (e.g., querying all orders by CustomerId).
- Prevents cross-partition queries if access patterns are consistent. | - Risk of hot partitions if some values (e.g., a few high-traffic customers) generate significantly more data than others.
- May hit the 20 GB limit per logical partition if data volume for a specific key grows rapidly. | +|**Synthetic Partition Key** (e.g., CustomerId + OrderDate) | - Use when no single field has both high cardinality and matches query patterns.
- Good for write-heavy workloads where data needs to be evenly distributed across physical partitions (e.g., many orders placed on the same date). | - Helps distribute data evenly across partitions, reducing hot partitions (e.g., distributing orders by both CustomerId and OrderDate).
- Spreads writes across multiple partitions, improving throughput. | - Queries that only filter by one field (e.g., CustomerId only) could result in cross-partition queries.
- Cross-partition queries can lead to higher RU consumption (2-3 RU/s additional charge for every physical partition that exists) and added latency. | +| **Hierarchical Partition Key (HPK)** (e.g., CustomerId/OrderId, StoreId/ProductId) | - Use when you need multi-level partitioning to support large-scale datasets.
- Ideal when queries filter on first and second levels of the hierarchy. | - Helps avoid the 20 GB limit by creating multiple levels of partitioning.
- Efficient querying on both hierarchical levels (e.g., filtering first by CustomerID, then by OrderID).
- Minimizes cross-partition queries for queries targeting the top level (e.g., retrieving all data from a specific CustomerID). | - Requires careful planning to ensure the first-level key has high cardinality and is included in most queries.
- More complex to manage than a regular partition key.
- If queries don’t align with the hierarchy (e.g., filtering only by OrderID when CustomerID is the first level), query performance could suffer. | + + ## Partition keys for read-heavy containers For most containers, the above criteria are all you need to consider when picking a partition key. For large read-heavy containers, however, you might want to choose a partition key that appears frequently as a filter in your queries. Queries can be [efficiently routed to only the relevant physical partitions](how-to-query-container.md#in-partition-query) by including the partition key in the filter predicate. diff --git a/articles/dms/migration-dms-powershell-cli.md b/articles/dms/migration-dms-powershell-cli.md index d3a38edd5e..f6ced87b41 100644 --- a/articles/dms/migration-dms-powershell-cli.md +++ b/articles/dms/migration-dms-powershell-cli.md @@ -129,7 +129,7 @@ az datamigration sql-managed-instance create ` --migration-service "/subscriptions/mySubscriptionID/resourceGroups/myRG/providers/Microsoft.DataMigration/SqlMigrationServices/myMigrationService" ` --scope "/subscriptions/mySubscriptionID/resourceGroups/myRG/providers/Microsoft.Sql/managedInstances/mySQLMI" ` --source-database-name "AdventureWorks2008" ` ---source-sql-connection authentication="SqlAuthentication" data-source="mySQLServer" password="myPassword" user-name="sqluser" ` +--source-sql-connection authentication="SqlAuthentication" data-source="mySQLServer" password="" user-name="sqluser" ` --target-db-name "AdventureWorks2008" ` --resource-group myRG ` --managed-instance-name mySQLMI diff --git a/articles/mysql/flexible-server/concept-reserved-pricing.md b/articles/mysql/flexible-server/concept-reserved-pricing.md index c89a747741..d3eee17b82 100644 --- a/articles/mysql/flexible-server/concept-reserved-pricing.md +++ b/articles/mysql/flexible-server/concept-reserved-pricing.md @@ -16,22 +16,22 @@ ms.topic: conceptual [!INCLUDE[azure-database-for-mysql-single-server-deprecation](~/reusable-content/ce-skilling/azure/includes/mysql/includes/azure-database-for-mysql-single-server-deprecation.md)] -Azure Database for MySQL flexible server now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL flexible server reserved instances, you make an upfront commitment on Azure Database for MySQL flexible server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL flexible server reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. +Azure Database for MySQL Flexible Server now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL Flexible Server reserved instances, you make an upfront commitment on Azure Database for MySQL Flexible Server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL Flexible Server reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. ## How does the instance reservation work? -You don't need to assign the reservation to specific Azure Database for MySQL flexible server instances. An already running Azure Database for MySQL flexible server instance or ones that are newly deployed automatically get the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. 
As soon as you buy a reservation, the Azure Database for MySQL flexible server compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation doesn't cover software, networking, or storage charges associated with Azure Database for MySQL flexible server. At the end of the reservation term, the billing benefit expires, and Azure Database for MySQL flexible server is billed at the pay-as-you go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for MySQL flexible server reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/). +You don't need to assign the reservation to specific Azure Database for MySQL Flexible Server instances. An already running Azure Database for MySQL Flexible Server instance or ones that are newly deployed automatically get the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MySQL Flexible Server compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation doesn't cover software, networking, or storage charges associated with Azure Database for MySQL Flexible Server. At the end of the reservation term, the billing benefit expires, and Azure Database for MySQL Flexible Server is billed at the pay-as-you go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for MySQL Flexible Server reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/). -You can buy Azure Database for MySQL flexible server reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](/azure/cost-management-billing/reservations/prepare-buy-reservation). To buy the reserved capacity: +You can buy Azure Database for MySQL Flexible Server reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](/azure/cost-management-billing/reservations/prepare-buy-reservation). To buy the reserved capacity: * To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription. * For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription. -* For Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for MySQL flexible server reserved capacity.
+* For Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for MySQL Flexible Server reserved capacity.
The details on how enterprise customers and Pay-As-You-Go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](/azure/cost-management-billing/reservations/understand-reserved-instance-usage-ea) and [understand Azure reservation usage for your Pay-As-You-Go subscription](/azure/cost-management-billing/reservations/understand-reserved-instance-usage). ## Reservation exchanges and refunds -You can exchange a reservation for another reservation of the same type. You can also exchange a reservation from Azure Database for MySQL - Single Server with one for Azure Database for MySQL flexible server. It's also possible to refund a reservation, if you no longer need it. The Azure portal can be used to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](/azure/cost-management-billing/reservations/exchange-and-refund-azure-reservations). +You can exchange a reservation for another reservation of the same type. You can also exchange a reservation from Azure Database for MySQL - Single Server with one for Azure Database for MySQL Flexible Server. It's also possible to refund a reservation, if you no longer need it. The Azure portal can be used to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](/azure/cost-management-billing/reservations/exchange-and-refund-azure-reservations). ## Reservation discount @@ -42,15 +42,15 @@ You may save up to 67% on compute costs with reserved instances. In order to fin The size of reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed server within a specific region and using the same performance tier and hardware generation.
-For example, let's suppose that you're running one general purpose, Gen5 – 32 vCore Azure Database for MySQL flexible server database, and two memory optimized, Gen5 – 16 vCore Azure Database for MySQL flexible server databases. Further, let's supposed that you plan to deploy within the next month an additional general purpose, Gen5 – 32 vCore database server, and one memory optimized, Gen5 – 16 vCore database server. Let's suppose that you know that you'll need these resources for at least 1 year. In this case, you should purchase a 64 (2x32) vCores, 1 year reservation for single database general purpose - Gen5 and a 48 (2x16 + 16) vCore 1 year reservation for single database memory optimized - Gen5. +For example, let's suppose that you're running one general purpose, Gen5 – 32 vCore Azure Database for MySQL Flexible Server database, and two memory optimized, Gen5 – 16 vCore Azure Database for MySQL Flexible Server databases. Further, let's supposed that you plan to deploy within the next month an additional general purpose, Gen5 – 32 vCore database server, and one memory optimized, Gen5 – 16 vCore database server. Let's suppose that you know that you'll need these resources for at least 1 year. In this case, you should purchase a 64 (2x32) vCores, 1 year reservation for single database general purpose - Gen5 and a 48 (2x16 + 16) vCore 1 year reservation for single database memory optimized - Gen5. ## Buy Azure Database for MySQL reserved capacity 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **All services** > **Reservations**. -3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for MySQL** to purchase a new reservation for your Azure Database for MySQL flexible server databases. -4. Fill-in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for MySQL flexible server instances that get the discount depend on the scope and quantity selected. +3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for MySQL** to purchase a new reservation for your Azure Database for MySQL Flexible Server databases. +4. Fill-in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for MySQL Flexible Server instances that get the discount depend on the scope and quantity selected. :::image type="content" source="media/concepts-reserved-pricing/mysql-reserved-price.png" alt-text="Overview of reserved pricing"::: @@ -60,13 +60,13 @@ The following table describes required fields. | Field | Description | | :------------ | :------- | -| Subscription | The subscription used to pay for the Azure Database for MySQL flexible server reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MySQL flexible server reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. 
For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription. -| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select:

**Shared**, the vCore reservation discount is applied to Azure Database for MySQL flexible server instances running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.

**Single subscription**, the vCore reservation discount is applied to Azure Database for MySQL flexible server instances in this subscription.

**Single resource group**, the reservation discount is applied to Azure Database for MySQL flexible server instances in the selected subscription and the selected resource group within that subscription. -| Region | The Azure region that's covered by the Azure Database for MySQL flexible server reserved capacity reservation. -| Deployment Type | The Azure Database for MySQL flexible server resource type that you want to buy the reservation for. -| Performance Tier | The service tier for the Azure Database for MySQL flexible server instances. +| Subscription | The subscription used to pay for the Azure Database for MySQL Flexible Server reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MySQL Flexible Server reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription. +| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select:

**Shared**, the vCore reservation discount is applied to Azure Database for MySQL Flexible Server instances running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.

**Single subscription**, the vCore reservation discount is applied to Azure Database for MySQL Flexible Server instances in this subscription.

**Single resource group**, the reservation discount is applied to Azure Database for MySQL Flexible Server instances in the selected subscription and the selected resource group within that subscription. +| Region | The Azure region that's covered by the Azure Database for MySQL Flexible Server reserved capacity reservation. +| Deployment Type | The Azure Database for MySQL Flexible Server resource type that you want to buy the reservation for. +| Performance Tier | The service tier for the Azure Database for MySQL Flexible Server instances. | Term | One year or three years -| Quantity | The amount of compute resources being purchased within the Azure Database for MySQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you're running or planning to run Azure Database for MySQL flexible server instances with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers. +| Quantity | The amount of compute resources being purchased within the Azure Database for MySQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you're running or planning to run Azure Database for MySQL Flexible Server instances with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers. ## Reserved instances API support @@ -87,11 +87,11 @@ vCore size flexibility helps you scale up or down within a performance tier and ## How to view reserved instance purchase details -You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations). For more information, see [How a reservation discount is applied to Azure Database for MySQL flexible server](/azure/cost-management-billing/reservations/understand-reservation-charges-mysql). +You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations). For more information, see [How a reservation discount is applied to Azure Database for MySQL Flexible Server](/azure/cost-management-billing/reservations/understand-reservation-charges-mysql). ## Reserved instance expiration -You receive email notifications, the first one 30 days prior to reservation expiry and the other one at expiration. Once the reservation expires, deployed VMs will continue to run and be billed at a pay-as-you-go rate. For more information, see [Reserved Instances for Azure Database for MySQL flexible server](/azure/cost-management-billing/reservations/understand-reservation-charges-mysql). +You receive email notifications, the first one 30 days prior to reservation expiry and the other one at expiration. Once the reservation expires, deployed VMs will continue to run and be billed at a pay-as-you-go rate. For more information, see [Reserved Instances for Azure Database for MySQL Flexible Server](/azure/cost-management-billing/reservations/understand-reservation-charges-mysql). ## Need help ? 
Contact us @@ -99,7 +99,7 @@ If you have questions or need help, [create a support request](https://portal.az ## Next steps -The vCore reservation discount is applied automatically to the number of Azure Database for MySQL flexible server instances that match the Azure Database for MySQL flexible server reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for MySQL flexible server reserved capacity reservation through the Azure portal, PowerShell, Azure CLI, or through the API. +The vCore reservation discount is applied automatically to the number of Azure Database for MySQL Flexible Server instances that match the Azure Database for MySQL Flexible Server reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for MySQL Flexible Server reserved capacity reservation through the Azure portal, PowerShell, Azure CLI, or through the API. To learn more about Azure Reservations, see the following articles: diff --git a/articles/mysql/flexible-server/concepts-slow-query-logs.md b/articles/mysql/flexible-server/concepts-slow-query-logs.md index e64c412fe1..fa39849b9a 100644 --- a/articles/mysql/flexible-server/concepts-slow-query-logs.md +++ b/articles/mysql/flexible-server/concepts-slow-query-logs.md @@ -13,7 +13,7 @@ ms.topic: conceptual [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] -In Azure Database for MySQL flexible server, the slow query log is available to users to configure and access. Slow query logs are disabled by default and can be enabled to assist with identifying performance bottlenecks during troubleshooting. +In Azure Database for MySQL Flexible Server, the slow query log is available to users to configure and access. Slow query logs are disabled by default and can be enabled to assist with identifying performance bottlenecks during troubleshooting. For more information about the MySQL slow query log, see the [slow query log section](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) in the MySQL engine documentation. @@ -22,7 +22,7 @@ By default, the slow query log is disabled. To enable logs, set the `slow_query_ Other parameters you can adjust to control slow query logging behavior include: -- **long_query_time**: log a query if it takes longer than `long_query_time` (in seconds) to complete. The default is 10 seconds. Server parameter `long_query_time` applies globally to all newly established connections in MySQL. However, it doesn't affect threads that are already connected. It's recommended to reconnect to Azure Database for MySQL flexible server from the application, or restarting the server will help clear out threads with older values of "long_query_time" and apply the updated parameter value. +- **long_query_time**: log a query if it takes longer than `long_query_time` (in seconds) to complete. The default is 10 seconds. Server parameter `long_query_time` applies globally to all newly established connections in MySQL. However, it doesn't affect threads that are already connected. It's recommended to reconnect to Azure Database for MySQL Flexible Server from the application, or restarting the server will help clear out threads with older values of "long_query_time" and apply the updated parameter value. - **log_slow_admin_statements**: determines if administrative statements (ex. `ALTER_TABLE`, `ANALYZE_TABLE`) are logged. - **log_queries_not_using_indexes**: determines if queries that don't use indexes are logged. 
- **log_throttle_queries_not_using_indexes**: limits the number of non-indexed queries that can be written to the slow query log. This parameter takes effect when `log_queries_not_using_indexes` is set to *ON* @@ -34,7 +34,7 @@ See the MySQL [slow query log documentation](https://dev.mysql.com/doc/refman/5. ## Access slow query logs -Slow query logs are integrated with Azure Monitor diagnostic settings. Once you've enabled slow query logs on your Azure Database for MySQL flexible server instance, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about diagnostic settings, see the [diagnostic logs documentation](/azure/azure-monitor/essentials/platform-logs-overview). To learn more about how to enable diagnostic settings in the Azure portal, see the [slow query log portal article](tutorial-query-performance-insights.md#set-up-diagnostics). +Slow query logs are integrated with Azure Monitor diagnostic settings. Once you've enabled slow query logs on your Azure Database for MySQL Flexible Server instance, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about diagnostic settings, see the [diagnostic logs documentation](/azure/azure-monitor/essentials/platform-logs-overview). To learn more about how to enable diagnostic settings in the Azure portal, see the [slow query log portal article](tutorial-query-performance-insights.md#set-up-diagnostics). >[!Note] >Premium Storage accounts are not supported if you are sending the logs to Azure storage via diagnostics and settings. @@ -118,7 +118,7 @@ Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Log | render timechart ``` -- Display queries longer than 10 seconds across all Azure Database for MySQL flexible server instances with Diagnostic Logs enabled +- Display queries longer than 10 seconds across all Azure Database for MySQL Flexible Server instances with Diagnostic Logs enabled ```Kusto AzureDiagnostics diff --git a/articles/mysql/flexible-server/how-to-azure-ad.md b/articles/mysql/flexible-server/how-to-azure-ad.md index ceda1bf523..2e7a7a29ea 100644 --- a/articles/mysql/flexible-server/how-to-azure-ad.md +++ b/articles/mysql/flexible-server/how-to-azure-ad.md @@ -17,12 +17,12 @@ ms.custom: [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] -This tutorial shows you how to set up Microsoft Entra authentication for Azure Database for MySQL flexible server. +This tutorial shows you how to set up Microsoft Entra authentication for Azure Database for MySQL Flexible Server. In this tutorial, you learn how to: - Configure the Microsoft Entra Admin. -- Connect to Azure Database for MySQL flexible server using Microsoft Entra ID. +- Connect to Azure Database for MySQL Flexible Server using Microsoft Entra ID. ## Prerequisites @@ -31,7 +31,7 @@ In this tutorial, you learn how to: - If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free) before you begin. > [!NOTE] - > With an Azure free account, you can now try Azure Database for MySQL flexible server for free for 12 months. For more information, see [Try Azure Database for MySQL flexible server for free](how-to-deploy-on-azure-free-account.md). + > With an Azure free account, you can now try Azure Database for MySQL Flexible Server for free for 12 months. For more information, see [Try Azure Database for MySQL Flexible Server for free](how-to-deploy-on-azure-free-account.md). 
- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli). @@ -41,7 +41,7 @@ In this tutorial, you learn how to: To create a Microsoft Entra Admin user, follow the following steps. -- In the Azure portal, select the instance of Azure Database for MySQL flexible server that you want to enable for Microsoft Entra ID. +- In the Azure portal, select the instance of Azure Database for MySQL Flexible Server that you want to enable for Microsoft Entra ID. - Under the Security pane, select **Authentication**: :::image type="content" source="media//how-to-Azure-ad/Azure-ad-configuration.jpg" alt-text="Diagram of how to configure Microsoft Entra authentication."::: @@ -137,7 +137,7 @@ After you grant the permissions to the UMI, they're enabled for all servers crea
-## Connect to Azure Database for MySQL flexible server using Microsoft Entra ID +## Connect to Azure Database for MySQL Flexible Server using Microsoft Entra ID @@ -164,7 +164,7 @@ The command launches a browser window to the Microsoft Entra authentication page ### 2 - Retrieve Microsoft Entra access token -Invoke the Azure CLI tool to acquire an access token for the Microsoft Entra authenticated user from step 1 to access Azure Database for MySQL flexible server. +Invoke the Azure CLI tool to acquire an access token for the Microsoft Entra authenticated user from step 1 to access Azure Database for MySQL Flexible Server. - Example (for Public Cloud): @@ -205,7 +205,7 @@ After authentication is successful, Microsoft Entra ID returns an access token: The token is a Base 64 string that encodes all the information about the authenticated user and is targeted to the Azure Database for MySQL service. -The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token before initiating the sign-in to Azure Database for MySQL flexible server. +The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token before initiating the sign-in to Azure Database for MySQL Flexible Server. - You can use the following PowerShell command to see the token validity. @@ -217,7 +217,7 @@ The access token validity is anywhere between 5 minutes to 60 minutes. We recomm You need to use the access token as the MySQL user password when connecting. You can use the method described above to retrieve the token using GUI clients such as MySQL workbench. -## Connect to Azure Database for MySQL flexible server using MySQL CLI +## Connect to Azure Database for MySQL Flexible Server using MySQL CLI When using the CLI, you can use this shorthand to connect: @@ -245,7 +245,7 @@ mysql -h mydb.mysql.database.azure.com \ --password=$((Get-AzAccessToken -ResourceUrl https://ossrdbms-aad.database.windows.net).Token) ``` -## Connect to Azure Database for MySQL flexible server using MySQL Workbench +## Connect to Azure Database for MySQL Flexible Server using MySQL Workbench - Launch MySQL Workbench and Select the Database option, then select **Connect to database**. - In the hostname field, enter the MySQL FQDN for example, mysql.database.azure.com. 
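For the Azure CLI path described in the hunks above, a minimal Bash sketch of acquiring the token and using it as the MySQL password follows. It's illustrative only: the user principal is a placeholder, the `--enable-cleartext-plugin` requirement is an assumption drawn from the token sign-in flow, and the Azure CLI session is assumed to already be signed in with `az login`.

```bash
# Sketch only: acquire a Microsoft Entra access token and use it as the MySQL password.
# Assumes `az login` has already been run; user and server names are placeholders.
TOKEN=$(az account get-access-token \
  --resource https://ossrdbms-aad.database.windows.net \
  --query accessToken --output tsv)

mysql -h mydb.mysql.database.azure.com \
  --user "entra-user@contoso.com" \
  --enable-cleartext-plugin \
  --password="$TOKEN"
```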
@@ -421,4 +421,4 @@ Most drivers are supported; however, make sure to use the settings for sending t ## Next steps -- Review the concepts for [Microsoft Entra authentication with Azure Database for MySQL flexible server](concepts-azure-ad-authentication.md) +- Review the concepts for [Microsoft Entra authentication with Azure Database for MySQL Flexible Server](concepts-azure-ad-authentication.md) diff --git a/articles/mysql/flexible-server/how-to-connect-tls-ssl.md b/articles/mysql/flexible-server/how-to-connect-tls-ssl.md index e1807cf23f..e7d124d816 100644 --- a/articles/mysql/flexible-server/how-to-connect-tls-ssl.md +++ b/articles/mysql/flexible-server/how-to-connect-tls-ssl.md @@ -211,7 +211,7 @@ $db = new PDO('mysql:host=mydemoserver.mysql.database.azure.com;port=3306;dbname ```python try: conn = mysql.connector.connect(user='myadmin', - password='yourpassword', + password='', database='quickstartdb', host='mydemoserver.mysql.database.azure.com', ssl_ca='/var/www/html/DigiCertGlobalRootCA.crt.pem') @@ -223,7 +223,7 @@ except mysql.connector.Error as err: ```python conn = pymysql.connect(user='myadmin', - password='yourpassword', + password='', database='quickstartdb', host='mydemoserver.mysql.database.azure.com', ssl={'ca': '/var/www/html/DigiCertGlobalRootCA.crt.pem'}) diff --git a/articles/mysql/flexible-server/how-to-move-regions.md b/articles/mysql/flexible-server/how-to-move-regions.md index 7d10d0e705..714d128e92 100644 --- a/articles/mysql/flexible-server/how-to-move-regions.md +++ b/articles/mysql/flexible-server/how-to-move-regions.md @@ -16,9 +16,9 @@ ms.custom: [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] -There are various scenarios for moving an existing Azure Database for MySQL flexible server instance from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning. +There are various scenarios for moving an existing Azure Database for MySQL Flexible Server instance from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning. -You can use Azure Database for MySQL flexible server's [geo restore](concepts-backup-restore.md#geo-restore) feature to complete the move to another region. To do so, first ensure geo-redundancy is enabled for your Azure Database for MySQL flexible server instance. Next, trigger geo-restore for your geo-redundant server and move your server to the geo-paired region. +You can use Azure Database for MySQL Flexible Server's [geo restore](concepts-backup-restore.md#geo-restore) feature to complete the move to another region. To do so, first ensure geo-redundancy is enabled for your Azure Database for MySQL Flexible Server instance. Next, trigger geo-restore for your geo-redundant server and move your server to the geo-paired region. > [!NOTE] > This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](/azure/azure-resource-manager/management/move-resource-group-and-subscription) article. @@ -27,13 +27,13 @@ You can use Azure Database for MySQL flexible server's [geo restore](concepts-ba - Ensure the source server has geo-redundancy enabled. You can enable geo-redundancy post server-create for locally redundant or same-zone redundant servers. 
Currently, for a Zone-redundant High Availability server geo-redundancy can only be enabled/disabled at server create time. -- Make sure that your source Azure Database for MySQL flexible server instance is deployed in the Azure region that you want to move from. +- Make sure that your source Azure Database for MySQL Flexible Server instance is deployed in the Azure region that you want to move from. ## Move -To move the Azure Database for MySQL flexible server instance to the geo-paired region using the Azure portal, use the following steps: +To move the Azure Database for MySQL Flexible Server instance to the geo-paired region using the Azure portal, use the following steps: -1. In the [Azure portal](https://portal.azure.com/), choose your Azure Database for MySQL flexible server instance that you want to restore the backup from. +1. In the [Azure portal](https://portal.azure.com/), choose your Azure Database for MySQL Flexible Server instance that you want to restore the backup from. 1. Select **Overview** from the left panel. @@ -63,17 +63,17 @@ The new server created by geo-restore has the same server admin sign-in name and ## Clean up source server -You may want to delete the source Azure Database for MySQL flexible server instance. To do so, use the following steps: +You may want to delete the source Azure Database for MySQL Flexible Server instance. To do so, use the following steps: -1. Once the replica has been created, locate and select your Azure Database for MySQL flexible server source instance. +1. Once the replica has been created, locate and select your Azure Database for MySQL Flexible Server source instance. 1. In the **Overview** window, select **Delete**. 1. Type in the name of the source server to confirm you want to delete. 1. Select **Delete**. ## Next steps -In this tutorial, you moved an Azure Database for MySQL flexible server instance from one region to another by using the Azure portal and then cleaned up the unneeded source resources. +In this tutorial, you moved an Azure Database for MySQL Flexible Server instance from one region to another by using the Azure portal and then cleaned up the unneeded source resources. - Learn more about [geo-restore](concepts-backup-restore.md#geo-restore) -- Learn more about [Azure paired regions](overview.md#azure-regions) supported for Azure Database for MySQL flexible server +- Learn more about [Azure paired regions](overview.md#azure-regions) supported for Azure Database for MySQL Flexible Server - Learn more about [business continuity](concepts-business-continuity.md) options diff --git a/articles/mysql/flexible-server/how-to-troubleshoot-connectivity-issues.md b/articles/mysql/flexible-server/how-to-troubleshoot-connectivity-issues.md index 62af4df90f..c6aa8af478 100644 --- a/articles/mysql/flexible-server/how-to-troubleshoot-connectivity-issues.md +++ b/articles/mysql/flexible-server/how-to-troubleshoot-connectivity-issues.md @@ -22,7 +22,7 @@ There are potential issues associated with this type of connection handling. For ## Diagnosing common connectivity errors -Whenever your instance of Azure Database for MySQL flexible server is experiencing connectivity issues, remember that problems can exist in any of the three layers involved: the client device, the network, or your Azure Database for MySQL flexible server instance. 
+Whenever your instance of Azure Database for MySQL Flexible Server is experiencing connectivity issues, remember that problems can exist in any of the three layers involved: the client device, the network, or your Azure Database for MySQL Flexible Server instance. As a result, whenever you’re diagnosing connectivity errors, be sure to consider full details of the: @@ -46,11 +46,11 @@ Quick reference notes for some client-side error 2005 codes appear in the follow | **ERROR 2005 code** | **Notes** | |----------|----------| -| **(11) "EAI_SYSTEM - system error"** | There's an error on the DNS resolution on the client side. Not an Azure Database for MySQL flexible server issue. Use dig/nslookup on the client to troubleshoot. | -| **(110) "ETIMEDOUT - Connection timed out"** | There was a timeout connecting to the client's DNS server. Not an Azure Database for MySQL flexible server issue. Use dig/nslookup on the client to troubleshoot. | -| **(0) "name unknown"** | The name specified wasn't resolvable by DNS. Check the input on the client. This is very likely not an issue with Azure Database for MySQL flexible server. | +| **(11) "EAI_SYSTEM - system error"** | There's an error on the DNS resolution on the client side. Not an Azure Database for MySQL Flexible Server issue. Use dig/nslookup on the client to troubleshoot. | +| **(110) "ETIMEDOUT - Connection timed out"** | There was a timeout connecting to the client's DNS server. Not an Azure Database for MySQL Flexible Server issue. Use dig/nslookup on the client to troubleshoot. | +| **(0) "name unknown"** | The name specified wasn't resolvable by DNS. Check the input on the client. This is very likely not an issue with Azure Database for MySQL Flexible Server. | -The second call in mysql is with socket connectivity and when looking at an error message like "ERROR 2003 (HY000): Can't connect to Azure Database for MySQL flexible server on 'mysql-example.mysql.database.azure.com' (111)", the number in the end (99, 110, 111, 113, etc.). +The second call in mysql is with socket connectivity and when looking at an error message like "ERROR 2003 (HY000): Can't connect to Azure Database for MySQL Flexible Server on 'mysql-example.mysql.database.azure.com' (111)", the number in the end (99, 110, 111, 113, etc.). ### Client-side error 2003 codes @@ -58,9 +58,9 @@ Quick reference notes for some client-side error 2003 codes appear in the follow | **ERROR 2003 code** | **Notes** | |----------|----------| -| **(99) "EADDRNOTAVAIL - Cannot assign requested address"** | This error isn’t caused by Azure Database for MySQL flexible server, rather it is on the client side. | -| **(110) "ETIMEDOUT - Connection timed out"** | TThere was a timeout connecting to the IP address provided. Likely a security (firewall rules) or networking (routing) issue. Usually, this isn’t an issue with Azure Database for MySQL flexible server. Use `nc/telnet/TCPtraceroute` on the client device to troubleshoot. | -| **(111) "ECONNREFUSED - Connection refused"** | While the packets reached the target server, the server rejected the connection. This might be an attempt to connect to the wrong server or the wrong port. This also might relate to the target service (Azure Database for MySQL flexible server) being down, recovering from failover, or going through crash recovery, and not yet accepting connections. This issue could be on either the client side or the server side. Use `nc/telnet/TCPtraceroute` on the client device to troubleshoot. 
| +| **(99) "EADDRNOTAVAIL - Cannot assign requested address"** | This error isn’t caused by Azure Database for MySQL Flexible Server, rather it is on the client side. | +| **(110) "ETIMEDOUT - Connection timed out"** | TThere was a timeout connecting to the IP address provided. Likely a security (firewall rules) or networking (routing) issue. Usually, this isn’t an issue with Azure Database for MySQL Flexible Server. Use `nc/telnet/TCPtraceroute` on the client device to troubleshoot. | +| **(111) "ECONNREFUSED - Connection refused"** | While the packets reached the target server, the server rejected the connection. This might be an attempt to connect to the wrong server or the wrong port. This also might relate to the target service (Azure Database for MySQL Flexible Server) being down, recovering from failover, or going through crash recovery, and not yet accepting connections. This issue could be on either the client side or the server side. Use `nc/telnet/TCPtraceroute` on the client device to troubleshoot. | | **(113) "EHOSTUNREACH - Host unreachable"** | The client device’s routing table doesn’t include a path to the network on which the database server is located. Check the client device's networking configuration. | ### Other error codes @@ -77,7 +77,7 @@ Quick reference notes for some other error codes related to issues that occur af | **ERROR 1129 "Host '1.2.3.4' is blocked because of many connection errors”** | Unblock with 'mysqladmin flush-hosts'" - all clients in a single machine will be blocked if one client of that machine attempts several times to use the wrong protocol to connect with MySQL (telnetting to the MySQL port is one example). As the error message says, the database’s admin user has to run `FLUSH HOSTS;` to clear the issue. | > [!NOTE] -> For more information about connectivity errors, see the blog post [Investigating connection issues with Azure Database for MySQL flexible server](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/investigating-connection-issues-with-azure-database-for-mysql/ba-p/2121204). +> For more information about connectivity errors, see the blog post [Investigating connection issues with Azure Database for MySQL Flexible Server](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/investigating-connection-issues-with-azure-database-for-mysql/ba-p/2121204). ## Next steps diff --git a/articles/mysql/flexible-server/how-to-troubleshoot-high-cpu-utilization.md b/articles/mysql/flexible-server/how-to-troubleshoot-high-cpu-utilization.md index 389395e569..2d6e27a24a 100644 --- a/articles/mysql/flexible-server/how-to-troubleshoot-high-cpu-utilization.md +++ b/articles/mysql/flexible-server/how-to-troubleshoot-high-cpu-utilization.md @@ -16,7 +16,7 @@ ms.topic: troubleshooting [!INCLUDE[azure-database-for-mysql-single-server-deprecation](~/reusable-content/ce-skilling/azure/includes/mysql/includes/azure-database-for-mysql-single-server-deprecation.md)] -Azure Database for MySQL flexible server provides a range of metrics that you can use to identify resource bottlenecks and performance issues on the server. To determine whether your server is experiencing high CPU utilization, monitor metrics such as “Host CPU percent”, “Total Connections”, “Host Memory Percent”, and “IO Percent”. At times, viewing a combination of these metrics will provide insights into what might be causing the increased CPU utilization on your Azure Database for MySQL flexible server instance. 
+Azure Database for MySQL Flexible Server provides a range of metrics that you can use to identify resource bottlenecks and performance issues on the server. To determine whether your server is experiencing high CPU utilization, monitor metrics such as “Host CPU percent”, “Total Connections”, “Host Memory Percent”, and “IO Percent”. At times, viewing a combination of these metrics will provide insights into what might be causing the increased CPU utilization on your Azure Database for MySQL Flexible Server instance. For example, consider a sudden surge in connections that initiates surge of database queries that cause CPU utilization to shoot up. @@ -50,7 +50,7 @@ Queries that are expensive to execute and scan a large number of rows without an ## Capturing details of the current workload -The SHOW (FULL) PROCESSLIST command displays a list of all user sessions currently connected to the Azure Database for MySQL flexible server instance. It also provides details about the current state and activity of each session. +The SHOW (FULL) PROCESSLIST command displays a list of all user sessions currently connected to the Azure Database for MySQL Flexible Server instance. It also provides details about the current state and activity of each session. This command only produces a snapshot of the current session status and doesn't provide information about historical session activity. @@ -150,7 +150,7 @@ This state usually means the open table operation is consuming a long time. Usua ### Sending data -While this state can mean that the thread is sending data through the network, it can also indicate that the query is reading data from the disk or memory. This state can be caused by a sequential table scan. You should check the values of the innodb_buffer_pool_reads and innodb_buffer_pool_read_requests to determine whether a large number of pages are being served from the disk into the memory. For more information, see [Troubleshoot low memory issues in Azure Database for MySQL flexible server](how-to-troubleshoot-low-memory-issues.md). +While this state can mean that the thread is sending data through the network, it can also indicate that the query is reading data from the disk or memory. This state can be caused by a sequential table scan. You should check the values of the innodb_buffer_pool_reads and innodb_buffer_pool_read_requests to determine whether a large number of pages are being served from the disk into the memory. For more information, see [Troubleshoot low memory issues in Azure Database for MySQL Flexible Server](how-to-troubleshoot-low-memory-issues.md). ### Updating diff --git a/articles/mysql/flexible-server/how-to-troubleshoot-sys-schema.md b/articles/mysql/flexible-server/how-to-troubleshoot-sys-schema.md index 43027057e8..a4a9a62673 100644 --- a/articles/mysql/flexible-server/how-to-troubleshoot-sys-schema.md +++ b/articles/mysql/flexible-server/how-to-troubleshoot-sys-schema.md @@ -16,7 +16,7 @@ ms.topic: troubleshooting [!INCLUDE[azure-database-for-mysql-single-server-deprecation](~/reusable-content/ce-skilling/azure/includes/mysql/includes/azure-database-for-mysql-single-server-deprecation.md)] -The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, metadata locking, etc. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema, and tables from the information_schema. 
Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL flexible server version 5.7. +The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, metadata locking, etc. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema, and tables from the information_schema. Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL Flexible Server version 5.7. :::image type="content" source="./media/how-to-troubleshoot-sys-schema/sys-schema-views.png" alt-text="Views of sys_schema."::: @@ -40,7 +40,7 @@ IO is the most expensive operation in the database. We can find out the average :::image type="content" source="./media/how-to-troubleshoot-sys-schema/io-latency-125GB.png" alt-text="IO latency: 125 GB."::: -Because Azure Database for MySQL flexible server scales IO with respect to storage, after increasing my provisioned storage to 1 TB, my IO latency reduces to 571 ms. +Because Azure Database for MySQL Flexible Server scales IO with respect to storage, after increasing my provisioned storage to 1 TB, my IO latency reduces to 571 ms. :::image type="content" source="./media/how-to-troubleshoot-sys-schema/io-latency-1TB.png" alt-text="IO latency: 1TB."::: @@ -56,7 +56,7 @@ To troubleshoot database performance issues, it may be beneficial to identify th :::image type="content" source="./media/how-to-troubleshoot-sys-schema/summary-by-statement.png" alt-text="Summary by statement."::: -In this example, Azure Database for MySQL flexible server spent 53 minutes flushing the slow query log 44579 times. That's a long time and many IOs. You can reduce this activity by either disabling your slow query log or decreasing the frequency of slow query login to the Azure portal. +In this example, Azure Database for MySQL Flexible Server spent 53 minutes flushing the slow query log 44579 times. That's a long time and many IOs. You can reduce this activity by either disabling your slow query log or decreasing the frequency of slow query login to the Azure portal. ## Database maintenance @@ -81,7 +81,7 @@ Indexes are great tools to improve read performance, but they do incur additiona ## Conclusion -In summary, the sys_schema is a great tool for both performance tuning and database maintenance. Make sure to take advantage of this feature in your Azure Database for MySQL flexible server instance. +In summary, the sys_schema is a great tool for both performance tuning and database maintenance. Make sure to take advantage of this feature in your Azure Database for MySQL Flexible Server instance. ## Next steps diff --git a/articles/mysql/flexible-server/quickstart-create-server-cli.md b/articles/mysql/flexible-server/quickstart-create-server-cli.md index caf136f2ba..6b01671dfc 100644 --- a/articles/mysql/flexible-server/quickstart-create-server-cli.md +++ b/articles/mysql/flexible-server/quickstart-create-server-cli.md @@ -85,12 +85,12 @@ Your server 'serverXXXXXXXXX' is using SKU 'Standard_B1ms' (Paid Tier). 
For pric Creating MySQL database 'flexibleserverdb'... Make a note of your password. If you forget your password, reset the password by running 'az mysql flexible-server update -n serverXXXXXXXXX -g groupXXXXXXXXXX -p '. { - "connectionString": "server=serverXXXXXXXXX.mysql.database.azure.com;database=flexibleserverdb;uid=secureusername;pwd=securepasswordstring", + "connectionString": "server=.mysql.database.azure.com;database=flexibleserverdb;uid=secureusername;pwd=", "databaseName": "flexibleserverdb", "host": "serverXXXXXXXXX.mysql.database.azure.com", "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/groupXXXXXXXXXX/providers/Microsoft.DBforMySQL/flexibleServers/serverXXXXXXXXX", "location": "East US 2", - "password": "securepasswordstring", + "password": "", "resourceGroup": "groupXXXXXXXXXX", "skuname": "Standard_B1ms", "subnetId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/groupXXXXXXXXXX/providers/Microsoft.Network/virtualNetworks/serverXXXXXXXXXVNET/subnets/serverXXXXXXXXXSubnet", diff --git a/articles/mysql/flexible-server/tutorial-add-redis-to-mysql.md b/articles/mysql/flexible-server/tutorial-add-redis-to-mysql.md index 84b0e255be..d543fe1793 100644 --- a/articles/mysql/flexible-server/tutorial-add-redis-to-mysql.md +++ b/articles/mysql/flexible-server/tutorial-add-redis-to-mysql.md @@ -76,7 +76,7 @@ r = redis.Redis( port=6379, password='azure-redis-primary-access-key') -mysqlcnx = mysql.connector.connect(user='your-admin-username', password='db-user-password', +mysqlcnx = mysql.connector.connect(user='your-admin-username', password='', host='database-servername.mysql.database.azure.com', database='your-databsae-name') diff --git a/articles/mysql/flexible-server/tutorial-query-performance-insights.md b/articles/mysql/flexible-server/tutorial-query-performance-insights.md index a4a56ff50e..c7f282082a 100644 --- a/articles/mysql/flexible-server/tutorial-query-performance-insights.md +++ b/articles/mysql/flexible-server/tutorial-query-performance-insights.md @@ -23,7 +23,7 @@ Query Performance Insight is designed to help you spend less time troubleshootin * The query details: view the history of execution with minimum, maximum, average, and standard deviation query time. * The resource utilizations (CPU, memory, and storage). -This article discusses how to use Azure Database for MySQL flexible server slow query logs, the Log Analytics tool, and workbooks templates to visualize Query Performance Insight for Azure Database for MySQL flexible server. +This article discusses how to use Azure Database for MySQL Flexible Server slow query logs, the Log Analytics tool, and workbooks templates to visualize Query Performance Insight for Azure Database for MySQL Flexible Server. In this tutorial, you'll learn how to: >[!div class="checklist"] @@ -34,7 +34,7 @@ In this tutorial, you'll learn how to: ## Prerequisites -- [Create an Azure Database for MySQL flexible server instance](./quickstart-create-server-portal.md). +- [Create an Azure Database for MySQL Flexible Server instance](./quickstart-create-server-portal.md). - [Create a Log Analytics workspace](/azure/azure-monitor/logs/quick-create-workspace). @@ -42,7 +42,7 @@ In this tutorial, you'll learn how to: 1. Sign in to the [Azure portal](https://portal.azure.com/). -1. Select your Azure Database for MySQL flexible server instance. +1. Select your Azure Database for MySQL Flexible Server instance. 1. On the left pane, under **Settings**, select **Server parameters**. 
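The same two server parameters can also be set without the portal. The following is a minimal Bash sketch using the Azure CLI, with placeholder resource group and server names:

```bash
# Sketch only: enable the slow query log and set the threshold to 10 seconds.
# Resource group and server names are placeholders.
az mysql flexible-server parameter set \
  --resource-group myresourcegroup --server-name mydemoserver \
  --name slow_query_log --value ON

az mysql flexible-server parameter set \
  --resource-group myresourcegroup --server-name mydemoserver \
  --name long_query_time --value 10
```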
@@ -64,10 +64,10 @@ You can return to the list of logs by closing the **Server parameters** page. ## Configure slow query logs by using the Azure CLI -Alternatively, you can enable and configure slow query logs for your Azure Database for MySQL flexible server instance from the Azure CLI by running the following command: +Alternatively, you can enable and configure slow query logs for your Azure Database for MySQL Flexible Server instance from the Azure CLI by running the following command: > [!IMPORTANT] -> To ensure that your Azure Database for MySQL flexible server instance's performance is not heavily affected, we recommend that you log only the event types and users that are required for your auditing purposes. +> To ensure that your Azure Database for MySQL Flexible Server instance's performance is not heavily affected, we recommend that you log only the event types and users that are required for your auditing purposes. - Enable slow query logs. @@ -145,7 +145,7 @@ Slow query logs are integrated with Azure Monitor diagnostic settings to allow y ## View query insights by using workbooks -1. In the Azure portal, on the left pane, under **Monitoring** for your Azure Database for MySQL flexible server instance, select **Workbooks**. +1. In the Azure portal, on the left pane, under **Monitoring** for your Azure Database for MySQL Flexible Server instance, select **Workbooks**. 1. Select the **Query Performance Insight** template. diff --git a/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md b/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md index f2ce78c42f..b236dfdca9 100644 --- a/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md +++ b/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md @@ -91,21 +91,21 @@ To configure Data in replication, perform the following steps: If you're using SSL, run the following command: ```sql - CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword'; + CREATE USER 'syncuser'@'%' IDENTIFIED BY ''; GRANT REPLICATION SLAVE ON *.* TO ' syncuser'@'%' REQUIRE SSL; ``` If you're not using SSL, run the following command: ```sql - CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword'; + CREATE USER 'syncuser'@'%' IDENTIFIED BY ''; GRANT REPLICATION SLAVE ON *.* TO ' syncuser'@'%'; ``` 1. To back up the database using mydumper, run the following command on the Azure VM where we installed the mydumper\myloader: ```bash - mydumper --host=.mysql.database.azure.com --user=@ --password= --outputdir=./backup --rows=100000 -G -E -R -z --trx-consistency-only --compress --build-empty-files --threads=16 --compress-protocol --ssl --regex '^(classicmodels\.)' -L mydumper-logs.txt + mydumper --host=.mysql.database.azure.com --user=@ --password= --outputdir=./backup --rows=100000 -G -E -R -z --trx-consistency-only --compress --build-empty-files --threads=16 --compress-protocol --ssl --regex '^(classicmodels\.)' -L mydumper-logs.txt ``` > [!TIP] @@ -143,7 +143,7 @@ To configure Data in replication, perform the following steps: 1. 
Restore the database using myloader by running the following command: ```bash - myloader --host=.mysql.database.azure.com --user= --password= --directory=./backup --queries-per-transaction=100 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt + myloader --host=.mysql.database.azure.com --user= --password= --directory=./backup --queries-per-transaction=100 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt ``` The variables in this command are explained below: @@ -169,7 +169,7 @@ To configure Data in replication, perform the following steps: iii. To configure Data in replication, run the following command: ```sql - CALL mysql.az_replication_change_master('.mysql.database.azure.com', '@', '', 3306, '', , @cert); + CALL mysql.az_replication_change_master('.mysql.database.azure.com', '@', '', 3306, '', , @cert); ``` > [!NOTE] @@ -178,7 +178,7 @@ To configure Data in replication, perform the following steps: - If SSL enforcement isn't enabled, then run the following command: ```sql - CALL mysql.az_replication_change_master('.mysql.database.azure.com', '@', '', 3306, '', , ''); + CALL mysql.az_replication_change_master('.mysql.database.azure.com', '@', '', 3306, '', , ''); ``` 1. To start replication from the replica server, call the below stored procedure. diff --git a/articles/postgresql/flexible-server/concepts-backup-restore.md b/articles/postgresql/flexible-server/concepts-backup-restore.md index d4fb8aa8fa..e84cdaa907 100644 --- a/articles/postgresql/flexible-server/concepts-backup-restore.md +++ b/articles/postgresql/flexible-server/concepts-backup-restore.md @@ -183,6 +183,10 @@ On-demand backups can be taken in addition to scheduled automatic backups. These For more information about performing a on-demand backup, visit the [how-to guide](./how-to-perform-on-demand-backup-portal.md). +#### Limitations + +On-demand backup feature is currently not supported with the Burstable server compute tier. + ## Long-term retention Azure Backup and Azure Database for PostgreSQL flexible server services have built an enterprise-class long-term backup solution for Azure Database for PostgreSQL flexible server instances that retains backups for up to 10 years. You can use long-term retention (LTR) independently or in addition to the automated backup solution offered by Azure Database for PostgreSQL flexible server, which offers retention of up to 35 days. Automated backups are physical backups suited for operational recoveries, especially when you want to restore from the latest backups. Long-term backups help you with your compliance needs, are more granular, and are taken as logical backups using native pg_dump. In addition to long-term retention, the solution offers the following capabilities: diff --git a/articles/postgresql/flexible-server/concepts-index-tuning.md b/articles/postgresql/flexible-server/concepts-index-tuning.md index 215edbb266..070607f44c 100644 --- a/articles/postgresql/flexible-server/concepts-index-tuning.md +++ b/articles/postgresql/flexible-server/concepts-index-tuning.md @@ -41,7 +41,7 @@ The algorithm iterates over the target databases, searching for possible indexes ### CREATE INDEX recommendations -For each database identified as a candidate to analyze for producing index recommendations, all SELECT queries executed during the lookup interval and in the context of that specific database are factored in. 
+For each database identified as a candidate to analyze for producing index recommendations, all SELECT, UPDATE, INSERT, and DELETE queries executed during the lookup interval and in the context of that specific database are factored in. > [!NOTE] > Index tuning analyzes not only SELECT statements, but also DML (UPDATE, INSERT, and DELETE) statements. @@ -155,7 +155,7 @@ Index tuning is supported on all [currently available tiers](concepts-compute.md ### Supported versions of PostgreSQL -Index tuning is supported on [major versions](concepts-supported-versions.md) **14 or greater** of Azure Database for PostgreSQL Flexible Server. +Index tuning is supported on [major versions](concepts-supported-versions.md) **12 or greater** of Azure Database for PostgreSQL Flexible Server. ### Use of search_path diff --git a/articles/postgresql/flexible-server/concepts-logical.md b/articles/postgresql/flexible-server/concepts-logical.md index 65de5cae6d..79b70c4bd3 100644 --- a/articles/postgresql/flexible-server/concepts-logical.md +++ b/articles/postgresql/flexible-server/concepts-logical.md @@ -189,7 +189,7 @@ Here's an example of configuring pglogical at the provider database server and t ```sql select pglogical.create_node( node_name := 'provider1', - dsn := ' host=myProviderServer.postgres.database.azure.com port=5432 dbname=myDB user=myUser password=myPassword'); + dsn := ' host=myProviderServer.postgres.database.azure.com port=5432 dbname=myDB user=myUser password='); ``` 1. Create a replication set. @@ -214,7 +214,7 @@ Here's an example of configuring pglogical at the provider database server and t ```sql select pglogical.create_node( node_name := 'subscriber1', - dsn := ' host=mySubscriberServer.postgres.database.azure.com port=5432 dbname=myDB user=myUser password=myPasword' ); + dsn := ' host=mySubscriberServer.postgres.database.azure.com port=5432 dbname=myDB user=myUser password=' ); ``` 1. Create a subscription to start the synchronization and the replication process. @@ -223,7 +223,7 @@ Here's an example of configuring pglogical at the provider database server and t select pglogical.create_subscription ( subscription_name := 'subscription1', replication_sets := array['myreplicationset'], - provider_dsn := 'host=myProviderServer.postgres.database.azure.com port=5432 dbname=myDB user=myUser password=myPassword'); + provider_dsn := 'host=myProviderServer.postgres.database.azure.com port=5432 dbname=myDB user=myUser password='); ``` 1. You can then verify the subscription status. diff --git a/articles/postgresql/flexible-server/concepts-pgbouncer.md b/articles/postgresql/flexible-server/concepts-pgbouncer.md index bc81b9cf9c..22f1e43a12 100644 --- a/articles/postgresql/flexible-server/concepts-pgbouncer.md +++ b/articles/postgresql/flexible-server/concepts-pgbouncer.md @@ -79,7 +79,7 @@ To connect to the `pgbouncer` database: 1. Connect to the `pgbouncer` database as this user and set the port as `6432`: ```sql - psql "host=myPgServer.postgres.database.azure.com port=6432 dbname=pgbouncer user=myUser password=myPassword sslmode=require" + psql "host=myPgServer.postgres.database.azure.com port=6432 dbname=pgbouncer user=myUser password= sslmode=require" ``` After you're connected to the database, use `SHOW` commands to view PgBouncer statistics: @@ -98,7 +98,7 @@ To start using PgBouncer, follow these steps: 1. Connect to your database server, but use port 6432 instead of the regular port 5432. Verify that this connection works. 
```azurecli-interactive - psql "host=myPgServer.postgres.database.azure.com port=6432 dbname=postgres user=myUser password=myPassword sslmode=require" + psql "host=myPgServer.postgres.database.azure.com port=6432 dbname=postgres user=myUser password= sslmode=require" ``` 2. Test your application in a QA environment against PgBouncer, to make sure you don't have any compatibility problems. The PgBouncer project provides a compatibility matrix, and we recommend [transaction pooling](https://www.PgBouncer.org/features.html#sql-feature-map-for-pooling-modes) for most users. diff --git a/articles/postgresql/flexible-server/concepts-query-store.md b/articles/postgresql/flexible-server/concepts-query-store.md index 729b20a88a..546f0a0405 100644 --- a/articles/postgresql/flexible-server/concepts-query-store.md +++ b/articles/postgresql/flexible-server/concepts-query-store.md @@ -80,7 +80,7 @@ Here are some examples of how you can gain more insights into your workload usin ## Configuration options -When query store is enabled, it saves data in aggregation windows of length determined by the [pg_qs.interval_length_minutes](server-parameters-table-customized-options.md?pivots=postgresql-16#pg_qsinterval_length_minutes) server parameter (defaults to 15 minutes). For each window, it stores up to 500 distinct queries per window. Attributes that distinguish the uniqueness of each query are userid (identifier of the user who executes the query), dbid (identifier of the database in whose context the query exeutes), and queryid (an integer value uniquely identifying the query executed). If the number of distinct queries reaches 500 during the configured interval, 5% of the ones that are recorded are deallocated to make room for more. The ones deallocated first are the ones which were executed the least number of times. +When query store is enabled, it saves data in aggregation windows of length determined by the [pg_qs.interval_length_minutes](server-parameters-table-customized-options.md?pivots=postgresql-16#pg_qsinterval_length_minutes) server parameter (defaults to 15 minutes). For each window, it stores up to 500 distinct queries per window. Attributes that distinguish the uniqueness of each query are user_id (identifier of the user who executes the query), db_id (identifier of the database in whose context the query executes), and query_id (an integer value uniquely identifying the query executed). If the number of distinct queries reaches 500 during the configured interval, 5% of the ones that are recorded are deallocated to make room for more. The ones deallocated first are the ones which were executed the least number of times. The following options are available for configuring Query Store parameters: diff --git a/articles/postgresql/flexible-server/concepts-storage.md b/articles/postgresql/flexible-server/concepts-storage.md index 99c217c578..07f02062ca 100644 --- a/articles/postgresql/flexible-server/concepts-storage.md +++ b/articles/postgresql/flexible-server/concepts-storage.md @@ -4,7 +4,7 @@ description: This article describes the storage options in Azure Database for Po author: kabharati ms.author: kabharati ms.reviewer: maghan -ms.date: 10/28/2024 +ms.date: 11/19/2024 ms.service: azure-database-postgresql ms.subservice: flexible-server ms.topic: conceptual @@ -111,7 +111,7 @@ As an illustration, take a server with a storage capacity of 2 TiB (greater than The default behavior is to increase the disk size to the next premium SSD storage tier. 
This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage autogrow. Enabling storage autogrow is valuable when you're managing unpredictable workloads, because it automatically detects low-storage conditions and scales up the storage accordingly. -The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure Managed disks. If a disk is already 4,096 GiB, the storage scaling activity isn't triggered, even if storage autogrow is turned on. In such cases, you need to scale your storage manually. Manual scaling is an offline operation that you should plan according to your business requirements. +The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure Managed disks. If a disk is already 4,096 GiB, the storage scaling activity isn't triggered, even if storage autogrow is turned on. In such cases, you need to scale your storage manually. Please remember that in this specific case, manual scaling is an offline operation and should be scheduled in alignment with your business needs. Remember that storage can only be scaled up, not down. diff --git a/articles/postgresql/flexible-server/how-to-connect-scram.md index 5a50539958..f83fea9f74 100644 --- a/articles/postgresql/flexible-server/how-to-connect-scram.md +++ b/articles/postgresql/flexible-server/how-to-connect-scram.md @@ -34,7 +34,7 @@ Salted Challenge Response Authentication Mechanism (SCRAM) is a password-based m 1. From your Azure Database for PostgreSQL flexible server client, connect to the Azure Database for PostgreSQL flexible server instance. For example, ```bash - psql "host=myPGServer.postgres.database.azure.com port=5432 dbname=postgres user=myDemoUser password=MyPassword sslmode=require" + psql "host=myPGServer.postgres.database.azure.com port=5432 dbname=postgres user=myDemoUser password= sslmode=require" psql (12.3 (Ubuntu 12.3-1.pgdg18.04+1), server 12.6) SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) diff --git a/articles/postgresql/flexible-server/how-to-perform-fullvacuum-pg-repack.md b/articles/postgresql/flexible-server/how-to-perform-fullvacuum-pg-repack.md index c04f49d2fa..cdadbdc5fa 100644 --- a/articles/postgresql/flexible-server/how-to-perform-fullvacuum-pg-repack.md +++ b/articles/postgresql/flexible-server/how-to-perform-fullvacuum-pg-repack.md @@ -88,7 +88,7 @@ Example of how to run pg_repack on a table named info in a public schema within 1. Connect to the Azure Database for PostgreSQL flexible server instance. This article uses psql for simplicity. ```psql - psql "host=xxxxxxxxx.postgres.database.azure.com port=5432 dbname=foo user=xxxxxxxxxxxxx password=[my_password] sslmode=require" + psql "host=xxxxxxxxx.postgres.database.azure.com port=5432 dbname=foo user=xxxxxxxxxxxxx password= sslmode=require" ``` 2. Create the pg_repack extension in the databases intended to be repacked.
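To complement the pg_repack steps above, here's a minimal Bash sketch of the client-side run. The connection details mirror the article's placeholders, and the exact option set is an assumption rather than the article's own command:

```bash
# Sketch only: repack table public.info in database foo (an online alternative to VACUUM FULL).
# Supply the password via PGPASSWORD or an interactive prompt; host and user are placeholders.
export PGPASSWORD=""

pg_repack --host=xxxxxxxxx.postgres.database.azure.com \
  --username=xxxxxxxxxxxxx \
  --dbname=foo \
  --table=public.info \
  --no-superuser-check \
  --no-kill-backend
```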
diff --git a/articles/postgresql/flexible-server/how-to-use-pg-azure-storage.md b/articles/postgresql/flexible-server/how-to-use-pg-azure-storage.md index 269fce8ca9..ce5c5e5974 100644 --- a/articles/postgresql/flexible-server/how-to-use-pg-azure-storage.md +++ b/articles/postgresql/flexible-server/how-to-use-pg-azure-storage.md @@ -63,6 +63,7 @@ Using [Configurations - Put](/rest/api/postgresql/flexibleserver/configurations/ Because the `shared_preload_libraries` is static, the server must be restarted for a change to take effect. For restarting the server, you can use the [Server - Restart](/rest/api/postgresql/flexibleserver/servers/restart) REST API. --- + 4. Include `azure_storage` in `azure.extensions`: # [Azure portal](#tab/portal-02) @@ -119,8 +120,9 @@ az rest --method patch --url https://management.azure.com/subscriptions/ --na Using [Storage Accounts - Update](/rest/api/storagerp/storage-accounts/update) REST API. --- -1. To pass it to the [azure_storage.account_add](#azure_storageaccount_add) function, [fetch either of the two access keys](/azure/storage/common/storage-account-keys-manage?tabs=azure-portal#view-account-access-keys) of the Azure Storage account. + +2. To pass it to the [azure_storage.account_add](#azure_storageaccount_add) function, [fetch either of the two access keys](/azure/storage/common/storage-account-keys-manage?tabs=azure-portal#view-account-access-keys) of the Azure Storage account. # [Azure portal](#tab/portal-05) @@ -156,9 +159,12 @@ az storage account keys list --resource-group - # [REST API](#tab/rest-05) Using [Storage Accounts - List Keys](/rest/api/storagerp/storage-accounts/list-keys) REST API. + --- -## azure_storage.account_add +## Functions + +### azure_storage.account_add Function that allows adding a storage account, and its associated access key, to the list of storage accounts that the `pg_azure_storage` extension can access. @@ -177,29 +183,29 @@ There's an overloaded version of this function, which accepts an `account_config azure_storage.account_add(account_config jsonb); ``` -### Permissions +#### Permissions Must be a member of `azure_storage_admin`. -### Arguments +#### Arguments -#### account_name_p +##### account_name_p `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### account_key_p +##### account_key_p `text` the value of one of the access keys for the storage account. Your Azure blob storage access keys are similar to a root password for your storage account. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. The account key is stored in a table that is accessible only by the superuser. Users granted the `azure_storage_admin` role can interact with this table via functions. To see which storage accounts are added, use the function [azure_storage.account_list](#azure_storageaccount_list). -#### account_config +##### account_config `jsonb` the name of the Azure Storage account and all the required settings like authentication type, account type, or storage credentials. 
We recommend the use of the utility functions [azure_storage.account_options_managed_identity](#azure_storageaccount_options_managed_identity), [azure_storage.account_options_credentials](#azure_storageaccount_options_credentials), or [azure_storage.account_options](#azure_storageaccount_options) to create any of the valid values that must be passed as this argument. -### Return type +#### Return type `VOID` -## azure_storage.account_options_managed_identity +### azure_storage.account_options_managed_identity Function that acts as a utility function, which can be called as a parameter within [azure_storage.account_add](#azure_storageaccount_add), and is useful to produce a valid value for the `account_config` argument, when using a system assigned managed identity to interact with the Azure Storage account. @@ -207,25 +213,25 @@ Function that acts as a utility function, which can be called as a parameter wit azure_storage.account_options_managed_identity(name text, type azure_storage.storage_type); ``` -### Permissions +#### Permissions Any user or role can invoke this function. -### Arguments +#### Arguments -#### name +##### name `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### type +##### type `azure_storage.storage_type` the value of one of the types of storage supported. Only supported value is `blob`. -### Return type +#### Return type `jsonb` -## azure_storage.account_options_credentials +### azure_storage.account_options_credentials Function that acts as a utility function, which can be called as a parameter within [azure_storage.account_add](#azure_storageaccount_add), and is useful to produce a valid value for the `account_config` argument, when using an Azure Storage access key to interact with the Azure Storage account. @@ -233,29 +239,29 @@ Function that acts as a utility function, which can be called as a parameter wit azure_storage.account_options_credentials(name text, credentials text, type azure_storage.storage_type); ``` -### Permissions +#### Permissions Any user or role can invoke this function. -### Arguments +#### Arguments -#### name +##### name `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### credentials +##### credentials `text` the value of one of the access keys for the storage account. Your Azure blob storage access keys are similar to a root password for your storage account. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. The account key is stored in a table that is accessible only by the superuser. Users granted the `azure_storage_admin` role can interact with this table via functions. To see which storage accounts are added, use the function [azure_storage.account_list](#azure_storageaccount_list). -#### type +##### type `azure_storage.storage_type` the value of one of the types of storage supported. Only supported value is `blob`. 
-### Return type +#### Return type `jsonb` -## azure_storage.account_options +### azure_storage.account_options Function that acts as a utility function, which can be called as a parameter within [azure_storage.account_add](#azure_storageaccount_add), and is useful to produce a valid value for the `account_config` argument, when using an Azure Storage access key or a system assigned managed identity to interact with the Azure Storage account. @@ -263,33 +269,33 @@ Function that acts as a utility function, which can be called as a parameter wit azure_storage.account_options(name text, auth_type azure_storage.auth_type, storage_type azure_storage.storage_type, credentials text DEFAULT NULL); ``` -### Permissions +#### Permissions Any user or role can invoke this function. -### Arguments +#### Arguments -#### name +##### name `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### auth_type +##### auth_type `azure_storage.auth_type` the value of one of the types of storage supported. Only supported values are `access-key`, and `managed-identity`. -#### storage_type +##### storage_type `azure_storage.storage_type` the value of one of the types of storage supported. Only supported value is `blob`. -#### credentials +##### credentials `text` the value of one of the access keys for the storage account. Your Azure blob storage access keys are similar to a root password for your storage account. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. The account key is stored in a table that is accessible only by the superuser. Users granted the `azure_storage_admin` role can interact with this table via functions. To see which storage accounts are added, use the function [azure_storage.account_list](#azure_storageaccount_list). -### Return type +#### Return type `jsonb` -## azure_storage.account_remove +### azure_storage.account_remove Function that allows removing a storage account and its associated access key from the list of storage accounts that the `pg_azure_storage` extension can access. @@ -297,21 +303,21 @@ Function that allows removing a storage account and its associated access key fr azure_storage.account_remove(account_name_p text); ``` -### Permissions +#### Permissions Must be a member of `azure_storage_admin`. -### Arguments +#### Arguments -#### account_name_p +##### account_name_p `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -### Return type +#### Return type `VOID` -## azure_storage.account_user_add +### azure_storage.account_user_add Function that allows granting a PostgreSQL user or role access to a storage account through the functions provided by the `pg_azure_storage` extension. @@ -322,25 +328,25 @@ Function that allows granting a PostgreSQL user or role access to a storage acco azure_storage.account_add(account_name_p text, user_p regrole); ``` -### Permissions +#### Permissions Must be a member of `azure_storage_admin`. -### Arguments +#### Arguments -#### account_name_p +##### account_name_p `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. 
The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### user_p +##### user_p `regrole` the name of a PostgreSQL user or role available on the server. -### Return type +#### Return type `VOID` -## azure_storage.account_user_remove +### azure_storage.account_user_remove Function that allows revoking a PostgreSQL user or role access to a storage account through the functions provided by the `pg_azure_storage` extension. @@ -352,25 +358,25 @@ Function that allows revoking a PostgreSQL user or role access to a storage acco azure_storage.account_user_remove(account_name_p text, user_p regrole); ``` -### Permissions +#### Permissions Must be a member of `azure_storage_admin`. -### Arguments +#### Arguments -#### account_name_p +##### account_name_p `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### user_p +##### user_p `regrole` the name of a PostgreSQL user or role available on the server. -### Return type +#### Return type `VOID` -## azure_storage.account_list +### azure_storage.account_list Function that lists the names of the storage accounts that were configured via the [azure_storage.account_add](#azure_storageaccount_add) function, together with the PostgreSQL users or roles that are granted permissions to interact with that storage account through the functions provided by the `pg_azure_storage` extension. @@ -378,19 +384,19 @@ Function that lists the names of the storage accounts that were configured via t azure_storage.account_list(); ``` -### Permissions +#### Permissions Must be a member of `azure_storage_admin`. -### Arguments +#### Arguments This function doesn't take any arguments. -### Return type +#### Return type `TABLE(account_name text, auth_type azure_storage.auth_type, azure_storage_type azure_storage.storage_type, allowed_users regrole[])` a four-column table with the list of Azure Storage accounts added, the type of authentication used to interact with each account, the type of storage, and the list of PostgreSQL users or roles that are granted access to it. -## azure_storage.blob_list +### azure_storage.blob_list Function that lists the names and other properties (size, lastModified, eTag, contentType, contentEncoding, and contentHash) of blobs stored in the given container of the referred storage account. @@ -398,17 +404,17 @@ Function that lists the names and other properties (size, lastModified, eTag, co azure_storage.blob_list(account_name text, container_name text, prefix text DEFAULT ''::text); ``` -### Permissions +#### Permissions User or role invoking this function must be added to the allowed list for the `account_name` referred, by executing [azure_storage.account_user_add](#azure_storageaccount_user_add). Members of `azure_storage_admin` are automatically allowed to reference all Azure Storage accounts whose references were added using [azure_storage.account_add](#azure_storageaccount_add). -### Arguments +#### Arguments -#### account_name +##### account_name `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### container_name +##### container_name `text` the name of a container. 
A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. A container name must be a valid Domain Name System (DNS) name, as it forms part of the unique URI used to address the container or its blobs. @@ -417,44 +423,44 @@ When naming a container, make sure to follow [these rules](/rest/api/storageserv The URI for a container is similar to: `https://myaccount.blob.core.windows.net/mycontainer` -#### prefix +##### prefix `text` when specified, the function returns the blobs whose names begin with the value provided in this parameter. Defaults to an empty string. -### Return type +#### Return type `TABLE(path text, bytes bigint, last_modified timestamp with time zone, etag text, content_type text, content_encoding text, content_hash text)` a table with one record per blob returned, including the full name of the blob, and some other properties. -#### path +##### path `text` the full name of the blob. -#### bytes +##### bytes `bigint` the size of blob in bytes. -#### last_modified +##### last_modified `timestamp with time zone`the date and time the blob was last modified. Any operation that modifies the blob, including an update of the blob's metadata or properties, changes the last-modified time of the blob. -#### etag +##### etag `text` the ETag property is used for optimistic concurrency during updates. It isn't a timestamp as there's another property called Timestamp that stores the last time a record was updated. For example, if you load an entity and want to update it, the ETag must match what is currently stored. Setting the appropriate ETag is important because if you have multiple users editing the same item, you don't want them overwriting each other's changes. -#### content_type +##### content_type `text` the content type specified for the blob. The default content type is `application/octet-stream`. -#### content_encoding +##### content_encoding `text` the Content-Encoding property of a blob that Azure Storage allows you to define. For compressed content, you could set the property to be Gzip. When the browser accesses the content, it automatically decompresses the content. -#### content_hash +##### content_hash `text` the hash used to verify the integrity of the blob during transport. When this header is specified, the storage service checks the provided hash with one computed from content. If the two hashes don't match, the operation fails with error code 400 (Bad Request). -## azure_storage.blob_get +### azure_storage.blob_get Function that allows importing data. It downloads one or more files from a blob container in an Azure Storage account. Then it translates the contents into rows, which can be consumed and processed with SQL language constructs. This function adds support to filter and manipulate the data fetched from the blob container before importing it. @@ -471,17 +477,17 @@ There's an overloaded version of this function, which accepts a `rec` parameter azure_storage.blob_get(account_name text, container_name text, path text, rec anyelement, decoder text DEFAULT 'auto'::text, compression text DEFAULT 'auto'::text, options jsonb DEFAULT NULL::jsonb); ``` -### Permissions +#### Permissions User or role invoking this function must be added to the allowed list for the `account_name` referred, by executing [azure_storage.account_user_add](#azure_storageaccount_user_add). 
Members of `azure_storage_admin` are automatically allowed to reference all Azure Storage accounts whose references were added using [azure_storage.account_add](#azure_storageaccount_add). -### Arguments +#### Arguments -#### account_name +##### account_name `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### container_name +##### container_name `text` the name of a container. A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. A container name must be a valid Domain Name System (DNS) name, as it forms part of the unique URI used to address the container or its blobs. @@ -490,15 +496,15 @@ When naming a container, make sure to follow [these rules](/rest/api/storageserv The URI for a container is similar to: `https://myaccount.blob.core.windows.net/mycontainer` -#### path +##### path `text` the full name of the blob. -#### rec +##### rec `anyelement` the definition of the record output structure. -#### decoder +##### decoder `text` the specification of the blob format. Can be set to any of the following values: @@ -510,7 +516,7 @@ The URI for a container is similar to: | `binary` | | Binary PostgreSQL COPY format. | | `text` \| `xml` \| `json` | | A file containing a single text value. | -#### compression +##### compression `text` the specification of compression type. Can be set to any of the following values: @@ -522,16 +528,16 @@ The URI for a container is similar to: The extension doesn't support any other compression types. -#### options +##### options `jsonb` the settings that define handling of custom headers, custom separators, escape characters, etc. `options` affects the behavior of this function in a way similar to how the options you can pass to the [`COPY`](https://www.postgresql.org/docs/current/sql-copy.html) command in PostgreSQL affect its behavior. -### Return type +#### Return type `SETOF record` `SETOF anyelement` -## azure_storage.blob_put +### azure_storage.blob_put Function that allows exporting data, by uploading files to a blob container in an Azure Storage account. The content of the files is produced from rows in PostgreSQL. @@ -564,17 +570,17 @@ azure_storage.blob_put(account_name text, container_name text, path text, tuple RETURNS VOID; ``` -### Permissions +#### Permissions User or role invoking this function must be added to the allowed list for the `account_name` referred, by executing [azure_storage.account_user_add](#azure_storageaccount_user_add). Members of `azure_storage_admin` are automatically allowed to reference all Azure Storage accounts whose references were added using [azure_storage.account_add](#azure_storageaccount_add). -### Arguments +#### Arguments -#### account_name +##### account_name `text` the name of the Azure blob storage account that contains all of your objects: blobs, files, queues, and tables. The storage account provides a unique namespace that is accessible from anywhere in the world over HTTPS. -#### container_name +##### container_name `text` the name of a container. A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. 
A container name must be a valid Domain Name System (DNS) name, as it forms part of the unique URI used to address the container or its blobs. @@ -583,15 +589,15 @@ When naming a container, make sure to follow [these rules](/rest/api/storageserv The URI for a container is similar to: `https://myaccount.blob.core.windows.net/mycontainer` -#### path +##### path `text` the full name of the blob. -#### tuple +##### tuple `record` the definition of the record output structure. -#### encoder +##### encoder `text` the specification of the blob format. Can be set to any of the following values: @@ -603,7 +609,7 @@ The URI for a container is similar to: | `binary` | | Binary PostgreSQL COPY format. | | `text` \| `xml` \| `json` | | A file containing a single text value. | -#### compression +##### compression `text` the specification of compression type. Can be set to any of the following values: @@ -615,15 +621,15 @@ The URI for a container is similar to: The extension doesn't support any other compression types. -#### options +##### options `jsonb` the settings that define handling of custom headers, custom separators, escape characters, etc. `options` affects the behavior of this function in a way similar to how the options you can pass to the [`COPY`](https://www.postgresql.org/docs/current/sql-copy.html) command in PostgreSQL affect its behavior. -### Return type +#### Return type `VOID` -## azure_storage.options_csv_get +### azure_storage.options_csv_get Function that acts as a utility function, which can be called as a parameter within `blob_get`, and is useful for decoding the content of a csv file. @@ -631,49 +637,49 @@ Function that acts as a utility function, which can be called as a parameter wit azure_storage.options_csv_get(delimiter text DEFAULT NULL::text, null_string text DEFAULT NULL::text, header boolean DEFAULT NULL::boolean, quote text DEFAULT NULL::text, escape text DEFAULT NULL::text, force_not_null text[] DEFAULT NULL::text[], force_null text[] DEFAULT NULL::text[], content_encoding text DEFAULT NULL::text); ``` -### Permissions +#### Permissions Any user or role can invoke this function. -### Arguments +#### Arguments -#### delimiter +##### delimiter `text` the character that separates columns within each row (line) of the file. It must be a single 1-byte character. Although this function supports delimiters of any number of characters, if you try to use more than a single 1-byte character, PostgreSQL reports back a `COPY delimiter must be a single one-byte character` error. -#### null_string +##### null_string `text` the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. -#### header +##### header `boolean` flag that indicates if the file contains a header line with the names of each column in the file. On output, the initial line contains the column names from the table. -#### quote +##### quote `text` the quoting character to be used when a data value is quoted. The default is double-quote. It must be a single 1-byte character. Although this function supports delimiters of any number of characters, if you try to use more than a single 1-byte character, PostgreSQL reports back a `COPY quote must be a single one-byte character` error. -#### escape +##### escape `text` the character that should appear before a data character that matches the QUOTE value. 
The default is the same as the QUOTE value (so that the quoting character is doubled if it appears in the data). It must be a single 1-byte character. Although this function supports delimiters of any number of characters, if you try to use more than a single 1-byte character, PostgreSQL reports back a `COPY escape must be a single one-byte character` error. -#### force_not_null +##### force_not_null `text[]` don't match the specified columns' values against the null string. In the default case where the null string is empty, it means that empty values are read as zero-length strings rather than nulls, even when they aren't quoted. -#### force_null +##### force_null `text[]` match the specified columns' values against the null string, even if quoted, and if a match is found, set the value to NULL. In the default case where the null string is empty, it converts a quoted empty string into NULL. -#### content_encoding +##### content_encoding `text` name of the encoding with which the file is encoded. If the option is omitted, the current client encoding is used. -### Return type +#### Return type `jsonb` -## azure_storage.options_copy +### azure_storage.options_copy Function that acts as a utility function, which can be called as a parameter within `blob_get`. It acts as a helper function for [options_csv_get](#azure_storageoptions_csv_get), [options_tsv](#azure_storageoptions_tsv), and [options_binary](#azure_storageoptions_binary). @@ -681,53 +687,53 @@ Function that acts as a utility function, which can be called as a parameter wit azure_storage.options_copy(delimiter text DEFAULT NULL::text, null_string text DEFAULT NULL::text, header boolean DEFAULT NULL::boolean, quote text DEFAULT NULL::text, escape text DEFAULT NULL::text, force_quote text[] DEFAULT NULL::text[], force_not_null text[] DEFAULT NULL::text[], force_null text[] DEFAULT NULL::text[], content_encoding text DEFAULT NULL::text); ``` -### Permissions +#### Permissions Any user or role can invoke this function. -### Arguments +#### Arguments -#### delimiter +##### delimiter `text` the character that separates columns within each row (line) of the file. It must be a single 1-byte character. Although this function supports delimiters of any number of characters, if you try to use more than a single 1-byte character, PostgreSQL reports back a `COPY delimiter must be a single one-byte character` error. -#### null_string +##### null_string `text` the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. -#### header +##### header `boolean` flag that indicates if the file contains a header line with the names of each column in the file. On output, the initial line contains the column names from the table. -#### quote +##### quote `text` the quoting character to be used when a data value is quoted. The default is double-quote. It must be a single 1-byte character. Although this function supports delimiters of any number of characters, if you try to use more than a single 1-byte character, PostgreSQL reports back a `COPY quote must be a single one-byte character` error. -#### escape +##### escape `text` the character that should appear before a data character that matches the QUOTE value. The default is the same as the QUOTE value (so that the quoting character is doubled if it appears in the data). It must be a single 1-byte character. 
Although this function supports delimiters of any number of characters, if you try to use more than a single 1-byte character, PostgreSQL reports back a `COPY escape must be a single one-byte character` error. -#### force_quote +##### force_quote `text[]` forces quoting to be used for all non-NULL values in each specified column. NULL output is never quoted. If * is specified, non-NULL values are quoted in all columns. -#### force_not_null +##### force_not_null `text[]` don't match the specified columns' values against the null string. In the default case where the null string is empty, it means that empty values are read as zero-length strings rather than nulls, even when they aren't quoted. -#### force_null +##### force_null `text[]` match the specified columns' values against the null string, even if quoted, and if a match is found, set the value to NULL. In the default case where the null string is empty, it converts a quoted empty string into NULL. -#### content_encoding +##### content_encoding `text` name of the encoding with which the file is encoded. If the option is omitted, the current client encoding is used. -### Return type +#### Return type `jsonb` -## azure_storage.options_tsv +### azure_storage.options_tsv Function that acts as a utility function, which can be called as a parameter within `blob_get`, and is useful for decoding the content of a tsv file. @@ -735,29 +741,29 @@ Function that acts as a utility function, which can be called as a parameter wit azure_storage.options_tsv(delimiter text DEFAULT NULL::text, null_string text DEFAULT NULL::text, content_encoding text DEFAULT NULL::text); ``` -### Permissions +#### Permissions Any user or role can invoke this function. -### Arguments +#### Arguments -#### delimiter +##### delimiter `text` the character that separates columns within each row (line) of the file. It must be a single 1-byte character. Although this function supports delimiters of any number of characters, if you try to use more than a single 1-byte character, PostgreSQL reports back a `COPY delimiter must be a single one-byte character` error. -#### null_string +##### null_string `text` the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. -#### content_encoding +##### content_encoding `text` name of the encoding with which the file is encoded. If the option is omitted, the current client encoding is used. -### Return type +#### Return type `jsonb` -## azure_storage.options_binary +### azure_storage.options_binary Function that acts as a utility function, which can be called as a parameter within `blob_get`, and is useful for decoding the content of a binary file. @@ -765,17 +771,17 @@ Function that acts as a utility function, which can be called as a parameter wit azure_storage.options_binary(content_encoding text DEFAULT NULL::text); ``` -### Permissions +#### Permissions Any user or role can invoke this function. -### Arguments +#### Arguments -#### content_encoding +##### content_encoding `text` name of the encoding with which the file is encoded. If the option is omitted, the current client encoding is used. 
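To show where these option helpers plug in, here's a hypothetical end-to-end sketch using [azure_storage.blob_get](#azure_storageblob_get) and [azure_storage.blob_put](#azure_storageblob_put); the storage account, container, blob paths, column list, and the `events` table are placeholder assumptions, not values taken from this article.

```sql
-- Import: read a headered CSV blob into rows, decoding it with options_csv_get.
-- The column definition list must match the columns in the file.
SELECT *
FROM azure_storage.blob_get(
         'mystorageaccount',
         'mycontainer',
         'input/events.csv',
         options => azure_storage.options_csv_get(header => true)
     ) AS imported(event_id int, event_name text, event_time timestamptz);

-- Export: write the result of a query back to the container as a CSV blob,
-- letting the default 'auto' encoder infer the format from the .csv extension.
SELECT azure_storage.blob_put(
           'mystorageaccount',
           'mycontainer',
           'output/events.csv',
           res
       )
FROM (SELECT event_id, event_name, event_time FROM events) AS res;
```

The user running these statements must either be allowed on `mystorageaccount` via [azure_storage.account_user_add](#azure_storageaccount_user_add) or be a member of `azure_storage_admin`.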
-### Return type +#### Return type `jsonb` diff --git a/articles/postgresql/flexible-server/includes/extensions-table.md b/articles/postgresql/flexible-server/includes/extensions-table.md index 406d91bc37..9f119d8793 100644 --- a/articles/postgresql/flexible-server/includes/extensions-table.md +++ b/articles/postgresql/flexible-server/includes/extensions-table.md @@ -2,7 +2,7 @@ author: akashraokm ms.author: akashrao ms.reviewer: maghan -ms.date: 11/04/2024 +ms.date: 11/18/2024 ms.service: azure-database-postgresql ms.subservice: flexible-server ms.topic: include @@ -34,7 +34,7 @@ ms.topic: include | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1| | [login_hook](https://github.com/splendiddata/login_hook) | Login_hook - hook to execute login_hook.login() at login time | 1.5 | 1.5 | 1.4 | 1.4 | 1.4 | 1.4 | 1.4| | [ltree](https://www.postgresql.org/docs/current/ltree.html) | Data type for hierarchical tree-like structures | 1.3 | 1.2 | 1.2 | 1.2 | 1.2 | 1.1 | 1.1| -| [oracle_fdw](https://github.com/laurenz/oracle_fdw) | Foreign data wrapper for Oracle databases | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | N/A| +| [oracle_fdw](https://github.com/laurenz/oracle_fdw) | Foreign data wrapper for Oracle access | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | N/A| | [orafce](https://github.com/orafce/orafce) | Functions and operators that emulate a subset of functions and packages from the Oracle RDBMS | 4.9 | 4.4 | 3.24 | 3.18 | 3.18 | 3.18 | 3.7| | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level | 1.12 | 1.12 | 1.11 | 1.9 | 1.8 | 1.7 | 1.7| | [pgaudit](https://www.pgaudit.org/) | Provides auditing functionality | 16.0 :heavy_check_mark: | 16.0 :heavy_check_mark: | 1.7 :heavy_check_mark: | 1.6.2 :heavy_check_mark: | 1.5 :heavy_check_mark: | 1.4.3 :heavy_check_mark: | 1.3.2 :heavy_check_mark:| @@ -50,7 +50,7 @@ ms.topic: include | [pgrouting](https://pgrouting.org/) | PgRouting Extension | N/A | N/A | 3.5.0 | 3.3.0 | 3.3.0 | 3.3.0 | 3.3.0| | [pgrowlocks](https://www.postgresql.org/docs/current/pgrowlocks.html) | Show row-level locking information | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2| | [pg_squeeze](https://github.com/cybertec-postgresql/pg_squeeze) | A tool to remove unused space from a relation. 
| 1.7 :heavy_check_mark: | 1.6 :heavy_check_mark: | 1.6 :heavy_check_mark: | 1.5 :heavy_check_mark: | 1.5 :heavy_check_mark: | 1.5 :heavy_check_mark: | 1.5 :heavy_check_mark:| -| [pg_stat_statements](https://www.postgresql.org/docs/current/pgstatstatements.html) | Track execution statistics of all SQL statements executed | 1.11 :heavy_check_mark: | 1.10 :heavy_check_mark: | 1.10 :heavy_check_mark: | 1.9 :heavy_check_mark: | 1.8 :heavy_check_mark: | 1.7 :heavy_check_mark: | 1.6 :heavy_check_mark:| +| [pg_stat_statements](https://www.postgresql.org/docs/current/pgstatstatements.html) | Track planning and execution statistics of all SQL statements executed | 1.11 :heavy_check_mark: | 1.10 :heavy_check_mark: | 1.10 :heavy_check_mark: | 1.9 :heavy_check_mark: | 1.8 :heavy_check_mark: | 1.7 :heavy_check_mark: | 1.6 :heavy_check_mark:| | [pgstattuple](https://www.postgresql.org/docs/current/pgstattuple.html) | Show tuple-level statistics | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5| | [pg_trgm](https://www.postgresql.org/docs/current/pgtrgm.html) | Text similarity measurement and index searching based on trigrams | 1.6 | 1.6 | 1.6 | 1.6 | 1.5 | 1.4 | 1.4| | [pg_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility info | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2| @@ -68,7 +68,7 @@ ms.topic: include | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about SSL certificates | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2| | [tablefunc](https://www.postgresql.org/docs/current/tablefunc.html) | Functions that manipulate whole tables, including crosstab | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0| | [tds_fdw](https://github.com/tds-fdw/tds_fdw) | Foreign data wrapper for querying a TDS database (Sybase or Microsoft SQL Server) | 2.0.3 | 2.0.3 | 2.0.3 | 2.0.3 | 2.0.3 | 2.0.3 | 2.0.3| -| [timescaledb](https://github.com/timescale/timescaledb) | Enables scalable inserts and complex queries for time-series data | N/A | 2.13.0 :heavy_check_mark: | 2.10.0 :heavy_check_mark: | 2.10.0 :heavy_check_mark: | 2.10.0 :heavy_check_mark: | 2.10.0 :heavy_check_mark: | 1.7.4 :heavy_check_mark:| +| [timescaledb](https://github.com/timescale/timescaledb) | Enables scalable inserts and complex queries for time-series data (Apache 2 Edition) | N/A | 2.13.0 :heavy_check_mark: | 2.10.0 :heavy_check_mark: | 2.10.0 :heavy_check_mark: | 2.10.0 :heavy_check_mark: | 2.10.0 :heavy_check_mark: | 1.7.4 :heavy_check_mark:| | [tsm_system_rows](https://www.postgresql.org/docs/13/tsm-system-rows.html) | TABLESAMPLE method which accepts number of rows as a limit | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0| | [tsm_system_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method which accepts time in milliseconds as a limit | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0| | [unaccent](https://www.postgresql.org/docs/current/unaccent.html) | Text search dictionary that removes accents | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1| diff --git a/articles/postgresql/flexible-server/quickstart-create-server-python-sdk.md b/articles/postgresql/flexible-server/quickstart-create-server-python-sdk.md index c6ab6d9253..4b98613c96 100644 --- a/articles/postgresql/flexible-server/quickstart-create-server-python-sdk.md +++ b/articles/postgresql/flexible-server/quickstart-create-server-python-sdk.md @@ -61,7 +61,7 @@ def main(): "location": "westus", "properties": { "administratorLogin": "cloudsa", - "administratorLoginPassword": "password", + 
"administratorLoginPassword": "", "availabilityZone": "1", "backup": {"backupRetentionDays": 7, "geoRedundantBackup": "Disabled"}, "createMode": "Create", diff --git a/articles/postgresql/migrate/automigration-single-to-flexible-postgresql.md b/articles/postgresql/migrate/automigration-single-to-flexible-postgresql.md index fd3e3002a3..52b2a91328 100644 --- a/articles/postgresql/migrate/automigration-single-to-flexible-postgresql.md +++ b/articles/postgresql/migrate/automigration-single-to-flexible-postgresql.md @@ -17,56 +17,53 @@ ms.custom: [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] -**Automigration** from Azure Database for Postgresql – Single Server to Flexible Server is a service-initiated migration during a planned downtime window for Single Server running PostgreSQL 11 and database workloads with **Basic, General Purpose or Memory Optimized SKU**, data storage used **<= 10 GiB** and **no complex features (CMK, Microsoft Entra ID, Read Replica or Private Link) enabled**. The eligible servers are identified by the service and are sent advance notifications detailing steps to review migration details and make modifications if necessary. +**Automigration** from Azure Database for PostgreSQL – Single Server to Flexible Server is a service-initiated migration that takes place during a planned downtime window for Single Server, separate from its patching or maintenance window. The service identifies eligible servers and sends advance notifications with detailed steps about the automigration process. You can review and adjust the migration schedule if needed or submit a support request to opt out of automigration for your servers. -The automigration provides a highly resilient and self-healing offline migration experience during a planned migration window, with up to **20 mins** of downtime. The migration service is a hosted solution using the [pgcopydb](https://github.com/dimitri/pgcopydb) binary and provides a fast and efficient way of copying databases from the source PostgreSQL instance to the target. This migration removes the overhead to manually migrate your server. Post migration, you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. Following described are the key phases of the migration: +Automigration leverages the [Azure PostgreSQL migration service](../migrate/migration-service/overview-migration-service-postgresql.md) to deliver a resilient offline migration during the planned migration window. Downtime will vary based on workload characteristics, with larger workloads potentially requiring up to 20 minutes. For migration speed benchmarks, see [Azure PostgreSQL Migration Speed Benchmarking](../migrate/migration-service/best-practices-migration-service-postgresql.md#migration-speed-benchmarking). This migration eliminates the need for manual server migration, allowing you to benefit from Flexible Server features post-migration, including improved price-performance, granular database configuration control, and custom maintenance windows. -- **Target Flexible Server is deployed** and matches your Single server SKU in terms of performance and cost, inheriting all firewall rules from source Single Server. 
+> [!NOTE] +> The Automigration service selects Single server to migrate based on the following criteria: +> - Single server version 11 +> - Servers with no complex feature such as CMK, Microsoft Entra ID, Read Replica and Private end-point +> - Size of data <= 10 GB +> - Public access is enabled -- **Date is migrated** during the migration window chosen by the service or elected by you. If the window is chosen by the service, it's typically outside business hours of the specific region the server is hosted in. Source Single Server is set to read-only and the data & schema is migrated from the source Single Server to the target Flexible Server. User roles, privileges, and ownership of all database objects are also migrated to the flexible server. +## Automigration Process -- **DNS switch and cutover** are performed within the planned migration window with minimal downtime, allowing usage of the same connection string post-migration. Client applications seamlessly connect to the target flexible server without any user driven manual updates or changes. In addition to both connection string formats (Single and Flexible Server) being supported on migrated Flexible Server, both username formats – username@server_name and username are also supported on the migrated Flexible Server. +The automigration process includes several key phases: -- The **migrated Flexible Server is online** and can now be managed via Azure portal/CLI. + - **Target Flexible Server Creation** - A Flexible Server is created to match the performance and cost of your Single Server SKU. It inherits all firewall rules from the source Single Server. -- The **updated connection strings** to connect to your old single server are shared with you by email if you have enabled Service health notifications on the Azure portal. Alternatively, you can find the connection strings in the Single server portal page under **Settings->Connection strings**. The connection strings can be used to log in to the Single server if you want to copy any settings to your new Flexible server. + - **Data Migration** - Data migration occurs during the designated migration window, typically scheduled outside business hours for the server’s hosting region (if the window is chosen by the service). The source Single Server is set to read-only, and all data, schemas, user roles, privileges, and ownership of database objects are migrated to the Flexible Server. -- The **legacy Single Server** is deleted **seven days** after the migration. + - **DNS Switch** - After data migration, a DNS switch is performed, allowing the existing Single Server connection string to seamlessly connect to the new Flexible Server. Both Single and Flexible Server connection string formats, as well as username formats (**username@server_name** and **username**), are supported on the migrated Flexible Server. -> [!NOTE] -> The Automigration service selects Single server to migrate based on the following criteria: -> - The server runs PostgreSQL version 11 -> - Servers with no complex feature such as CMK, Microsoft Entra ID, Read Replica and Private end-point -> - Size of data <= 10 GB -> - Public access is enabled + - **Flexible Server Visibility** - After a successful data migration and DNS switch, the new Flexible Server appears under your subscription and can be managed via the Azure portal or CLI. -The preceding filters are used to select servers to be Automigrated. Servers can also be nominated for Automigration by the user. 
The nomination process is more flexible and not all filters are applicable. + - **Updated Single Server Connection Strings** - Updated connection strings for the legacy Single Server are sent via Service Health notifications on the Azure portal. They are also accessible on the Single Server portal page under **Settings -> Connection Strings**. -## Nominate Single servers for Automigration + - **Single Server Deletion** - The Single Server is retained for seven days post-migration before it is deleted. -The nomination process is for users who want to voluntarily fast-track their migration to Flexible server. If you own a Single Server workload, you can now nominate yourself (if not already scheduled by the service) for automigration. Submit your server details through this [form](https://forms.office.com/r/4pF55L8TxY). -## Configure migration alerts and review migration schedule -Servers eligible for automigration are sent advance Azure health notifications by the service. The health notifications are sent **30 days, 14 days and 7 days** before the migration date. Notifications are also sent when the migration is **in progress, has completed, and 6 days after migration** before the legacy Single server is dropped. You can check and configure the Azure portal to receive the automigration notifications via email or SMS. +## Nominate Single servers for Automigration -Following described are the ways to check and configure automigration notifications: +The nomination process is for users who want to voluntarily fast-track their migration to Flexible server. If you own a Single Server workload, you can now nominate yourself (if not already scheduled by the service) for automigration. Submit your server details through this [form](https://forms.office.com/r/4pF55L8TxY). -- Subscription owners for Single Servers scheduled for automigration receive an email notification. -- Configure **service health alerts** to receive automigration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification). -- Check the automigration **notification on the Azure portal** by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal). +## How to check if your Single Server is scheduled for Automigration -Following described are the ways to review your migration schedule once you receive the automigration notification: +To determine if your Single Server is selected for automigration, follow these steps: + - **[Service Health Notifications](https://learn.microsoft.com/azure/service-health/service-health-portal-update)** - In the Azure portal, go to **Service Health > Planned Maintenance** events. Look for events labeled **'Notification for Scheduled Auto Migration to Azure Database for PostgreSQL Single Server'**. The notifications are sent 30, 14, and 7 days before the migration date, and again during migration stages: in progress, completed, and six days before the Single Server is decommissioned. > [!NOTE] -> The migration schedule will be locked 7 days prior to the scheduled migration window during which you'll be unable to reschedule. +> These notifications do not land in your inbox by default. 
To receive them via email or SMS, you need to set up Service Health Alerts by following the steps [here](https://learn.microsoft.com/previous-versions/azure/postgresql/single-server/concepts-planned-maintenance-notification#to-receive-planned-maintenance-notification) -- The **Single Server overview page** for your instance displays a portal banner with information about your migration schedule. -- For Single Servers scheduled for automigration, the **Overview** page is updated with the relevant information. You can review the migration schedule by navigating to the Overview page of your Single Server instance. -- If you wish to defer the migration, you can defer by a month at a time on the Azure portal. You can reschedule the migration by selecting another migration window within a month. +- **Single Server Overview Page** - Navigate to your Single Server instance in the Azure portal and check the Overview page. If scheduled for automigration, you’ll find details here, including an option to defer the migration by one month at a time or reschedule within the current month. > [!NOTE] -> Typically, candidate servers short-listed for automigration do not use cross region or Geo redundant backups. And these features can only be enabled during create time for a postgresql Flexible Server. In case you plan to use any of these features, it's recommended to opt out of the automigration schedule and migrate your server manually. +> The migration schedule will be locked 7 days prior to the scheduled migration window during which you'll be unable to reschedule. + +- **Azure CXP email notifications** - Azure Customer Experience(CXP) also sends direct emails to classic roles and RBAC roles associated with the subscription containing the Single Server, providing information on upcoming automigrations. ## Prerequisite checks for automigration @@ -115,7 +112,7 @@ Here's the info you need to know post automigration: In Azure Database for PostgreSQL Single Server, a virtual network (VNet) rule is a subnet listed in the server’s access control list (ACL). This rule allows the Single Server to accept communication from nodes within that particular subnet. For Flexible Server, VNet rules are not supported. Instead, Flexible Server allows the creation of [private endpoints](../flexible-server/concepts-networking-private-link.md), enabling the server to function within your virtual network. A private endpoint assigns a private IP to the Flexible Server, and all traffic between your virtual network and the server travels securely via the Azure backbone network, eliminating the need for public internet exposure. -After the migration, you must add a private endpoint to your Flexible Server for all subnets previously covered by VNet rules on your Single Server. You can complete this process using either the [Azure Portal](../flexible-server/how-to-manage-virtual-network-private-endpoint-portal.md) or the [Azure CLI](../flexible-server/how-to-manage-virtual-network-private-endpoint-cli.md). +After the migration, you must add a private endpoint to your Flexible Server for all subnets previously covered by VNet rules on your Single Server. You can complete this process using either the [Azure portal](../flexible-server/how-to-manage-virtual-network-private-endpoint-portal.md) or the [Azure CLI](../flexible-server/how-to-manage-virtual-network-private-endpoint-cli.md). Once this step is completed, your network connectivity will remain intact on the Flexible Server after the migration from Single Server. 
## Frequently Asked Questions (FAQs) @@ -149,7 +146,7 @@ Once this step is completed, your network connectivity will remain intact on the **Q. I see a pricing difference on my potential move from postgresql Basic Single Server to postgresql Flexible Server??** -**A.** Few servers might see a minor price revision after migration as the minimum storage limit on both offerings is different (5 GiB on Single Server and 32 GiB on Flexible Server). Storage cost for Flexible Server is marginally higher than Single Server. Any price increase is offset through better throughput and performance compared to Single Server. For more information on Flexible server pricing, click [here](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/) +**A.** Few servers might see a minor price revision after migration as the minimum storage limit on both offerings is different (5 GiB on Single Server and 32 GiB on Flexible Server). Storage cost for Flexible Server is marginally higher than Single Server. Any price increase is offset through better throughput and performance compared to Single Server. For more information on Flexible Server pricing, see [Azure Database for PostgreSQL Flexible Server pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/). ## Related content diff --git a/docfx.json b/docfx.json index 64f2afe24b..20d57a38a4 100644 --- a/docfx.json +++ b/docfx.json @@ -227,7 +227,11 @@ "articles/mysql/flexible-server/**/*.md": "Azure Database for MySQL - Flexible Server", "articles/postgresql/scripts/**/*.md": "Azure Database for PostgreSQL - Flexible Server", "articles/postgresql/flexible-server/**/*.md": "Azure Database for PostgreSQL - Flexible Server", - "articles/postgresql/migrate/**/*.md": "Azure Database for PostgreSQL - Flexible Server" + "articles/postgresql/migrate/**/*.md": "Azure Database for PostgreSQL - Flexible Server", + "articles/mysql/flexible-server/**/*.yml": "Azure Database for MySQL - Flexible Server", + "articles/postgresql/scripts/**/*.yml": "Azure Database for PostgreSQL - Flexible Server", + "articles/postgresql/flexible-server/**/*.yml": "Azure Database for PostgreSQL - Flexible Server", + "articles/postgresql/migrate/**/*.yml": "Azure Database for PostgreSQL - Flexible Server" } }, "overwrite": [],