diff --git a/articles/azure-resource-manager/management/azure-services-resource-providers.md b/articles/azure-resource-manager/management/azure-services-resource-providers.md index 5672c430bf0c7..2fa7f78983990 100644 --- a/articles/azure-resource-manager/management/azure-services-resource-providers.md +++ b/articles/azure-resource-manager/management/azure-services-resource-providers.md @@ -63,7 +63,7 @@ The resource providers for compute services are: | Resource provider namespace | Azure service | | --------------------------- | ------------- | -| Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/enterprise/overview.md) | +| Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/basic-standard/overview.md) | | Microsoft.AVS | [Azure VMware Solution](../../azure-vmware/index.yml) | | Microsoft.Batch | [Batch](../../batch/index.yml) | | Microsoft.ClassicCompute | Classic deployment model virtual machine | @@ -73,7 +73,7 @@ The resource providers for compute services are: | Microsoft.HanaOnAzure | [SAP HANA on Azure Large Instances](/azure/virtual-machines/workloads/sap/hana-overview-architecture) | | Microsoft.LabServices | [Azure Lab Services](../../lab-services/index.yml) | | Microsoft.Maintenance | [Azure Maintenance](/azure/virtual-machines/maintenance-configurations) | -| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/enterprise/overview.md) | +| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/basic-standard/overview.md) | | Microsoft.Quantum | [Azure Quantum](https://azure.microsoft.com/services/quantum/) | | Microsoft.SerialConsole - [registered by default](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) | | Microsoft.ServiceFabric | [Service Fabric](/azure/service-fabric/) | @@ -135,6 +135,7 @@ The resource providers for developer tools services are: | Microsoft.AppConfiguration | [Azure App Configuration](../../azure-app-configuration/index.yml) | | Microsoft.DevCenter | [Microsoft Dev Box](../../dev-box/index.yml) | | Microsoft.DevSpaces | [Azure Dev Spaces](/previous-versions/azure/dev-spaces/) | +| Microsoft.LoadTestService | [Azure Load Testing](/azure/load-testing/) | | Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) | | Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) | diff --git a/articles/azure-resource-manager/management/azure-subscription-service-limits.md b/articles/azure-resource-manager/management/azure-subscription-service-limits.md index edb39362a35af..f6769d8dd3a87 100644 --- a/articles/azure-resource-manager/management/azure-subscription-service-limits.md +++ b/articles/azure-resource-manager/management/azure-subscription-service-limits.md @@ -257,7 +257,7 @@ The following limits apply to [Azure role-based access control (Azure RBAC)](../ ## Azure Spring Apps limits -To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/enterprise/quotas.md). +To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/basic-standard/quotas.md). 
## Azure Storage limits diff --git a/articles/azure-resource-manager/management/lock-resources.md b/articles/azure-resource-manager/management/lock-resources.md index cd4f4f0ae4e26..edb607cc0763f 100644 --- a/articles/azure-resource-manager/management/lock-resources.md +++ b/articles/azure-resource-manager/management/lock-resources.md @@ -101,7 +101,7 @@ Applying locks can lead to unexpected results. Some operations, which don't seem - A cannot-delete lock on a **Virtual Machine** that is protected by **Site Recovery** prevents certain resource links related to Site Recovery from being removed properly when you remove the protection or disable replication. If you plan to protect the VM again later, you need to remove the lock before disabling protection. If you don't remove the lock, you need to follow certain steps to clean up the stale links before you can protect the VM. For more information, see [Troubleshoot Azure VM replication](../../site-recovery/azure-to-azure-troubleshoot-errors.md#replication-not-enabled-on-vm-with-stale-resources-error-code-150226). -- For **Postgresql**, the virtual network shouldn't have any resource locks set at the virtual network or subnet level, as locks may interfere with network and DNS operations. Before creating the server in a virtual network, ensure you remove any delete or read-only locks from your virtual network and all subnets. You can reapply the locks after the server is created. +- For **PostgreSQL**, the virtual network shouldn't have any resource locks set at the virtual network or subnet level, as locks may interfere with network and DNS operations. Before creating the server in a virtual network, ensure you remove any delete or read-only locks from your virtual network and all subnets. You can reapply the locks after the server is created. ## Who can create or delete locks diff --git a/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md b/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md index 3221f798ff482..e272ea3b54066 100644 --- a/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md +++ b/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md @@ -119,7 +119,7 @@ Use the following steps to guide you through the process to onboard Azure Arc fo } ``` -3. Run the installation scripts. You can optionionally setup this preview from a Windows or Linux-based jump box/VM. +3. Run the installation scripts. You can optionally setup this preview from a Windows or Linux-based jump box/VM. Run the following commands to execute the installation script. diff --git a/articles/container-apps/compare-options.md b/articles/container-apps/compare-options.md index 0338100174756..a8fa848685a4b 100644 --- a/articles/container-apps/compare-options.md +++ b/articles/container-apps/compare-options.md @@ -51,7 +51,7 @@ You can get started building your first container app [using the quickstarts](ge [Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of your functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. 
The Azure Functions programming model is available as a base container image, making it portable to other container based compute platforms allowing teams to reuse code as environment requirements change. ### Azure Spring Apps -[Azure Spring Apps](../spring-apps/enterprise/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. +[Azure Spring Apps](../spring-apps/basic-standard/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. ### Azure Red Hat OpenShift [Azure Red Hat OpenShift](../openshift/intro-openshift.md) is an integrated product with Red Hat and Microsoft jointly engineered, operated, and supported. This collaboration provides an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions. Alternatively, they can use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option. diff --git a/articles/event-grid/authenticate-with-entra-id-namespaces.md b/articles/event-grid/authenticate-with-entra-id-namespaces.md index 622e008aa245d..81f7afd32e964 100644 --- a/articles/event-grid/authenticate-with-entra-id-namespaces.md +++ b/articles/event-grid/authenticate-with-entra-id-namespaces.md @@ -52,7 +52,7 @@ Besides managed identities, another identity option is to create a security prin Once you have an application security principal and followed above steps, [assign the permission to publish events to that identity](#assign-permission-to-a-security-principal-to-publish-events). > [!NOTE] -> When you register an application in the portal, an [application object](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#application-object) and a [service principal](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object) are created automatically in your home tenant. Alternatively, you can use Microsot Graph to register your application. However, if you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step. +> When you register an application in the portal, an [application object](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#application-object) and a [service principal](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object) are created automatically in your home tenant. 
Alternatively, you can use Microsoft Graph to register your application. However, if you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.

## Assign permission to a security principal to publish events

diff --git a/articles/event-grid/authenticate-with-microsoft-entra-id.md b/articles/event-grid/authenticate-with-microsoft-entra-id.md
index 117dd3d69023d..bfdaca00821ba 100644
--- a/articles/event-grid/authenticate-with-microsoft-entra-id.md
+++ b/articles/event-grid/authenticate-with-microsoft-entra-id.md
@@ -54,7 +54,7 @@ Besides managed identities, another identity option is to create a security prin
Once you have an application security principal and followed above steps, [assign the permission to publish events to that identity](#assign-permission-to-a-security-principal-to-publish-events).

> [!NOTE]
-> When you register an application in the portal, an [application object](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#application-object) and a [service principal](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object) are created automatically in your home tenant. Alternatively, you can use Microsot Graph to register your application. However, if you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.
+> When you register an application in the portal, an [application object](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#application-object) and a [service principal](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object) are created automatically in your home tenant. Alternatively, you can use Microsoft Graph to register your application. However, if you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.

## Assign permission to a security principal to publish events

@@ -136,7 +136,7 @@ Microsoft Entra authentication provides a superior authentication support than t
Once you decide to use Microsoft Entra authentication, you can disable authentication based on access keys or SAS tokens.

> [!NOTE]
-> Acess keys or SAS token authentication is a form of **local authentication**. you'll hear sometimes referring to "local auth" when discussing this category of authentication mechanisms that don't rely on Microsoft Entra ID. The API parameter used to disable local authentication is called, appropriately so, ``disableLocalAuth``.
+> Access keys or SAS token authentication is a form of **local authentication**. You'll sometimes hear "local auth" used to refer to this category of authentication mechanisms that don't rely on Microsoft Entra ID. The API parameter used to disable local authentication is called, appropriately so, ``disableLocalAuth``.
### Azure portal diff --git a/articles/event-grid/communication-services-advanced-messaging-events.md b/articles/event-grid/communication-services-advanced-messaging-events.md index 2c2496f501514..b411b896199c1 100644 --- a/articles/event-grid/communication-services-advanced-messaging-events.md +++ b/articles/event-grid/communication-services-advanced-messaging-events.md @@ -266,7 +266,7 @@ Details for the attributes specific to `Microsoft.Communication.AdvancedMessageA | Attribute | Type | Nullable | Description | |:---------------|:--------:|:--------:|---------------------------------------------| -| language | `string` | ✔️ | The languege detected. | +| language | `string` | ✔️ | The language detected. | | confidenceScore | `float` | ✔️ | The confidence score of the language detected. | | translation | `string` | ✔️ | The message translation. | diff --git a/articles/event-grid/configure-private-endpoints.md b/articles/event-grid/configure-private-endpoints.md index 74c4c8fc1395b..3bde21bf80b7b 100644 --- a/articles/event-grid/configure-private-endpoints.md +++ b/articles/event-grid/configure-private-endpoints.md @@ -129,7 +129,7 @@ To create a private endpoint, use the [az network private-endpoint create](/cli/ ```azurecli-interactive az network private-endpoint create \ - --resource-group \ + --resource-group \ --name \ --vnet-name \ --subnet \ @@ -147,7 +147,7 @@ For descriptions of the parameters used in the example, see documentation for [a To delete a private endpoint, use the [az network private-endpoint delete](/cli/azure/network/private-endpoint?#az-network-private-endpoint-delete) method as shown in the following example: ```azurecli-interactive -az network private-endpoint delete --resource-group --name +az network private-endpoint delete --resource-group --name ``` > [!NOTE] @@ -173,7 +173,7 @@ To create a private endpoint, use the [az network private-endpoint create](/cli/ ```azurecli-interactive az network private-endpoint create \ - --resource-group \ + --resource-group \ --name \ --vnet-name \ --subnet \ @@ -191,7 +191,7 @@ For descriptions of the parameters used in the example, see documentation for [a To delete a private endpoint, use the [az network private-endpoint delete](/cli/azure/network/private-endpoint?#az-network-private-endpoint-delete) method as shown in the following example: ```azurecli-interactive -az network private-endpoint delete --resource-group --name +az network private-endpoint delete --resource-group --name ``` > [!NOTE] diff --git a/articles/event-grid/cross-tenant-delivery-using-managed-identity.md b/articles/event-grid/cross-tenant-delivery-using-managed-identity.md index 45213df67580f..250668c42fd0f 100644 --- a/articles/event-grid/cross-tenant-delivery-using-managed-identity.md +++ b/articles/event-grid/cross-tenant-delivery-using-managed-identity.md @@ -57,7 +57,7 @@ For more information, see the following articles: - In the URL, use the multitenant app object ID. - For **Name**, provide a unique name for the federated client credential. - - For **Issuer**, use `https://login.microsoftonline.com/TENANTAID/v2.0` where `TENANTAID` is the ID of the tenant where the user-assigned identity is located. + - For **Issuer**, use `https://login.microsoftonline.com/TENANTID/v2.0` where `TENANTID` is the ID of the tenant where the user-assigned identity is located. - For **Subject**, specify the client ID of the user-assigned identity. Verify and wait for the API call to succeed. 
diff --git a/articles/event-grid/dead-letter-event-subscriptions-namespace-topics.md b/articles/event-grid/dead-letter-event-subscriptions-namespace-topics.md index ade168a4f68f2..add7666f06307 100644 --- a/articles/event-grid/dead-letter-event-subscriptions-namespace-topics.md +++ b/articles/event-grid/dead-letter-event-subscriptions-namespace-topics.md @@ -35,7 +35,7 @@ The format used when storing dead-letter events is the [CloudEvents JSON format] - `deliveryresult` - The last result during the last time the service attempted to deliver the event. - `publishutc` - The UTC time at which the event was persisted and accepted (HTTP 200 OK, for example) by Event Grid. - `deliveryattemptutc` - The UTC time of the last delivery attempt. -- `customDeliveryProperties` - Headers (custom push delivery properties) configured on the event subscription to go with every outgoing HTTP push delivery request. One or more of these custom properties might be present in the persisted dead-letter JSON. Custom properties identified as secrets aren't stored. This metadata are described using a separate object whose key name is `customDeliveryProperties`. The property key names inside that object and their values are exactly the same as the ones set in the event subscription. Here's an exmaple: +- `customDeliveryProperties` - Headers (custom push delivery properties) configured on the event subscription to go with every outgoing HTTP push delivery request. One or more of these custom properties might be present in the persisted dead-letter JSON. Custom properties identified as secrets aren't stored. This metadata are described using a separate object whose key name is `customDeliveryProperties`. The property key names inside that object and their values are exactly the same as the ones set in the event subscription. Here's an example: ``` Custom-Header-1: value1 diff --git a/articles/event-grid/enable-diagnostic-logs-topic.md b/articles/event-grid/enable-diagnostic-logs-topic.md index 32640450593d9..71ad7e1eb5db4 100644 --- a/articles/event-grid/enable-diagnostic-logs-topic.md +++ b/articles/event-grid/enable-diagnostic-logs-topic.md @@ -104,7 +104,7 @@ You can also enable collection of all **metrics** for the system topic. ```json { "time": "2019-11-01T00:17:13.4389048Z", - "resourceId": "/SUBSCRIPTIONS/SAMPLE-SUBSCTIPTION-ID /RESOURCEGROUPS/SAMPLE-RESOURCEGROUP-NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/SAMPLE-TOPIC-NAME ", + "resourceId": "/SUBSCRIPTIONS/SAMPLE-SUBSCRIPTION-ID /RESOURCEGROUPS/SAMPLE-RESOURCEGROUP-NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/SAMPLE-TOPIC-NAME ", "eventSubscriptionName": "SAMPLEDESTINATION", "category": "DeliveryFailures", "operationName": "Deliver", diff --git a/articles/event-grid/event-schema-compatibility.md b/articles/event-grid/event-schema-compatibility.md index 4f6ecb52f5a56..35d5fdb2f59c6 100644 --- a/articles/event-grid/event-schema-compatibility.md +++ b/articles/event-grid/event-schema-compatibility.md @@ -11,7 +11,7 @@ ms.custom: FY25Q1-Linter # Event schema compatibility When a topic is created, an incoming event schema is defined. And, when a subscription is created, an outgoing event schema is defined. This article shows you the compatibility between input and output schema that's allowed when creating an event subscription. -## Input schema to outupt schema +## Input schema to output schema The following table shows you the compatibility allowed when creating a subscription. 
| Incoming event schema | Outgoing event schema | Supported | diff --git a/articles/event-grid/event-schema-resources.md b/articles/event-grid/event-schema-resources.md index 7e5f5b19e0519..01f5f899a5a9d 100644 --- a/articles/event-grid/event-schema-resources.md +++ b/articles/event-grid/event-schema-resources.md @@ -243,7 +243,7 @@ This section shows the `CreatedOrUpdated` event generated when an Azure Storage "data": { "resourceInfo": { "tags": {}, - "id": "/subscriptions/{subcription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}", + "id": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}", "name": "StorageAccount-name", "type": "Microsoft.Storage/storageAccounts", "location": "eastus", @@ -320,7 +320,7 @@ This section shows the `CreatedOrUpdated` event generated when an Azure Storage "data": { "resourceInfo": { "tags": {}, - "id": "/subscriptions/{subcription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}", + "id": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}", "name": "StorageAccount-name", "type": "Microsoft.Storage/storageAccounts", "location": "eastus", diff --git a/articles/event-grid/handle-health-resources-events-using-azure-monitor-alerts.md b/articles/event-grid/handle-health-resources-events-using-azure-monitor-alerts.md index 6781bdb528e32..f400f654e659c 100644 --- a/articles/event-grid/handle-health-resources-events-using-azure-monitor-alerts.md +++ b/articles/event-grid/handle-health-resources-events-using-azure-monitor-alerts.md @@ -128,7 +128,7 @@ Here's a sample `AvailabilityStatusChanged` event. Notice that the `type` is set "time": "2024-02-22T01:40:17.6532683Z", "data": { "resourceInfo": { - "id": "/subscriptions/sample-subscription/resourceGroups/sample-rg/providers/Microsoft.Compute/virtualMachines/sample-machinee/providers/Microsoft.ResourceHealth/availabilityStatuses/current", + "id": "/subscriptions/sample-subscription/resourceGroups/sample-rg/providers/Microsoft.Compute/virtualMachines/sample-machine/providers/Microsoft.ResourceHealth/availabilityStatuses/current", "name": "current", "type": "Microsoft.ResourceHealth/availabilityStatuses", "properties": { diff --git a/articles/event-grid/manage-event-delivery.md b/articles/event-grid/manage-event-delivery.md index 65b73294eca9b..dfb7fa44bd162 100644 --- a/articles/event-grid/manage-event-delivery.md +++ b/articles/event-grid/manage-event-delivery.md @@ -75,7 +75,7 @@ New-AzEventGridSubscription ` To turn off dead-lettering, rerun the command to create the event subscription but don't provide a value for `DeadLetterEndpoint`. You don't need to delete the event subscription. > [!NOTE] -> If you are using Azure Poweshell on your local machine, use Azure PowerShell version 1.1.0 or greater. Download and install the latest Azure PowerShell from [Azure downloads](https://azure.microsoft.com/downloads/). +> If you are using Azure PowerShell on your local machine, use Azure PowerShell version 1.1.0 or greater. Download and install the latest Azure PowerShell from [Azure downloads](https://azure.microsoft.com/downloads/). 
## Set retry policy diff --git a/articles/event-grid/monitor-pull-reference.md b/articles/event-grid/monitor-pull-reference.md index 834ea8ec441ef..7269c2f675dc3 100644 --- a/articles/event-grid/monitor-pull-reference.md +++ b/articles/event-grid/monitor-pull-reference.md @@ -24,7 +24,7 @@ This article provides a reference of log and metric data collected to analyze th | FailedPublishedEvents | Failed to publish events | Number of events that failed because Event Grid didn't accept them. This count doesn't include events that were published but failed to reach Event Grid due to a network issue. | | SuccessfulReceivedEvents | Successful received event | Number of events that were successfully returned to (received by) clients. | | FailedReceivedEvents | Failed to receive event | Number of events requested by clients that Event Grid couldn't deliver successfully. | -| SuccessfulAcknowlegedEvents | Successful acknowledged events | Number of events acknowledged by clients. | +| SuccessfulAcknowledgedEvents | Successful acknowledged events | Number of events acknowledged by clients. | | FailedAcknowledgedEvents | Failed to acknowledge events | Number of events that clients didn't acknowledge. | | SuccessfulReleasedEvents | Successful released events | Number of events released by queue subscriber clients. | | FailedReleasedEvents | Failed to release event counts | Number of events that failed to be released back to Event Grid. | diff --git a/articles/event-grid/monitor-push-reference.md b/articles/event-grid/monitor-push-reference.md index e19639f762215..2f73d25d092cf 100644 --- a/articles/event-grid/monitor-push-reference.md +++ b/articles/event-grid/monitor-push-reference.md @@ -125,7 +125,7 @@ Diagnostic settings allow Event Grid users to capture and view **publish and del ```json { "time": "2019-11-01T00:17:13.4389048Z", - "resourceId": "/SUBSCRIPTIONS/SAMPLE-SUBSCTIPTION-ID /RESOURCEGROUPS/SAMPLE-RESOURCEGROUP-NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/SAMPLE-TOPIC-NAME ", + "resourceId": "/SUBSCRIPTIONS/SAMPLE-SUBSCRIPTION-ID /RESOURCEGROUPS/SAMPLE-RESOURCEGROUP-NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/SAMPLE-TOPIC-NAME ", "eventSubscriptionName": "SAMPLEDESTINATION", "category": "DeliveryFailures", "operationName": "Deliver", diff --git a/articles/event-grid/mqtt-clients.md b/articles/event-grid/mqtt-clients.md index f8aeddb18b481..148c0d824d030 100644 --- a/articles/event-grid/mqtt-clients.md +++ b/articles/event-grid/mqtt-clients.md @@ -54,10 +54,10 @@ Use the "Thumbprint Match" option while using self-signed certificate to authent > [!NOTE] > - clientCertificateAuthentication is always required with a valid value of validationScheme. -> - authenticationName is not required, but after the first create request, authenticatioName value defaults to ARM name, and then it can not be updated. +> - authenticationName is not required, but after the first create request, authenticationName value defaults to ARM name, and then it can not be updated. > - authenticationName can not be updated. > - If validationScheme is anything other than ThumbprintMatch, then allowedThumbprints list can not be provided. -> - allowedThumbprints list can only be provided and must be provided if validationScheme is ThumbprintMatch with atleast one thumbprint. +> - allowedThumbprints list can only be provided and must be provided if validationScheme is ThumbprintMatch with at least one thumbprint. > - allowedThumbprints can only hold maximum of 2 thumbprints. 
> - Allowed validationScheme values are SubjectMatchesAuthenticationName, DnsMatchesAuthenticationName, UriMatchesAuthenticationName, IpMatchesAuthenticationName, EmailMatchesAuthenticationName, ThumbprintMatch > - Using thumbprint with allow reuse of the same certificate across multiple clients. For other types of validation, the authentication name needs to be in the chosen field of the client certificate. diff --git a/articles/event-grid/mqtt-establishing-multiple-sessions-per-client.md b/articles/event-grid/mqtt-establishing-multiple-sessions-per-client.md index f529d3117b345..7cbf9cb8c1774 100644 --- a/articles/event-grid/mqtt-establishing-multiple-sessions-per-client.md +++ b/articles/event-grid/mqtt-establishing-multiple-sessions-per-client.md @@ -54,6 +54,6 @@ Second connect packet: - username: “ipv4=127.0.0.1” - clientId: “sessionId2” -:::image type="content" source="./media/mqtt-establishing-multiple-sessions-per-client/mqtt-mqttx-app-session-2-connect-configuration.png" alt-text="creenshot showing the MQTTX application client configuration with second session."::: +:::image type="content" source="./media/mqtt-establishing-multiple-sessions-per-client/mqtt-mqttx-app-session-2-connect-configuration.png" alt-text="Screenshot showing the MQTTX application client configuration with second session."::: You can use the same client certificate credentials to authenticate both the sessions. diff --git a/articles/event-grid/mqtt-routing-event-schema.md b/articles/event-grid/mqtt-routing-event-schema.md index 66a902caf53f8..8a28c266bae15 100644 --- a/articles/event-grid/mqtt-routing-event-schema.md +++ b/articles/event-grid/mqtt-routing-event-schema.md @@ -55,7 +55,7 @@ For MQTT v5 messages that are already enveloped in a CloudEvent according to the ```json { - "specverion": "1.0", + "specversion": "1.0", "id": "9aeb0fdf-c01e-0131-0922-9eb54906e20", // original id stamped by the client. "time": "2019-11-18T15:13:39.4589254Z", // timestamp when the message was received by the client "type": "Custom.Type", // original type value stamped by the client. diff --git a/articles/event-grid/mqtt-routing-filtering.md b/articles/event-grid/mqtt-routing-filtering.md index 9e0a9d6e66734..80dc1119d49e3 100644 --- a/articles/event-grid/mqtt-routing-filtering.md +++ b/articles/event-grid/mqtt-routing-filtering.md @@ -20,7 +20,7 @@ You can use the Event Grid Subscription’s filtering capability to filter the r You can filter on the messages’ MQTT topics through filtering on the "subject" property in the Cloud Event schema. Event Grid Subscriptions supports free simple subject filtering by specifying a starting or ending value for the subject. For example, - If each vehicle is publishing its location on its own topic (vehicles/vehicle1/gps, vehicles/vehicle2/gps, etc.), you can use the filter: subject ends with "gps" to route only all the location messages. -- If machines from each section of each factory are publishing on topics that mimic the factory hierarchy (for example, factory1/area2/machine4/telemetry), you can use the filter: subject begins with "factory1/area2/" to route only the messages that belong to facotry1 and area 2 to a specific endpoint. You can replicate this configuration to route messages from other factories/areas to different endpoints. 
+- If machines from each section of each factory are publishing on topics that mimic the factory hierarchy (for example, factory1/area2/machine4/telemetry), you can use the filter: subject begins with "factory1/area2/" to route only the messages that belong to factory1 and area 2 to a specific endpoint. You can replicate this configuration to route messages from other factories/areas to different endpoints. You can also take advantage of the [Event Subscription’s advanced filtering](event-filtering.md) to filter based on the MQTT topic through filtering on the subject property in the Cloud Event Schema. Advanced filters enable you to set more complex filters by specifying a comparison operator, key, and value. diff --git a/articles/event-grid/mqtt-support.md b/articles/event-grid/mqtt-support.md index 4591ca05e2449..35b60ceaa856e 100644 --- a/articles/event-grid/mqtt-support.md +++ b/articles/event-grid/mqtt-support.md @@ -104,7 +104,7 @@ MQTT broker maintains a queue of messages for each active MQTT session that isn' ### Last Will and Testament (LWT) messages Last Will and Testament (LWT) notifies your MQTT clients with the abrupt disconnections of other MQTT clients. You can use LWT to ensure predictable and reliable flow of communication among MQTT clients during unexpected disconnections, which is valuable for scenarios where real-time communication, system reliability, and coordinated actions are critical. Clients that collaborate to perform complex tasks can react to LWT messages from each other by adjusting their behavior, redistributing tasks, or taking over certain responsibilities to maintain the system’s performance and stability. -To use LWT, a client can specify the will message, will topic, and the rest of the will properties in the CONNECT packet during connection. When the client disconnects abruptly, the MQTT broker publishes the will message to all the clients that subscribed to the will topic. To reduce the noise from fluctuating disconnections, the client can set the will delay interval to a value greater than zero. In that case, if the client disconnects abruptly but resotres the connection before the will delay interval expires, the will message isn't published. +To use LWT, a client can specify the will message, will topic, and the rest of the will properties in the CONNECT packet during connection. When the client disconnects abruptly, the MQTT broker publishes the will message to all the clients that subscribed to the will topic. To reduce the noise from fluctuating disconnections, the client can set the will delay interval to a value greater than zero. In that case, if the client disconnects abruptly but restores the connection before the will delay interval expires, the will message isn't published. ### User properties MQTT broker supports user properties on MQTT v5 PUBLISH packets that allow you to add custom key-value pairs in the message header to provide more context about the message. The use cases for user properties are versatile. You can use this feature to include the purpose or origin of the message so the receiver can handle the message without parsing the payload, saving computing resources. For example, a message with a user property indicating its purpose as a "warning" could trigger different handling logic than one with the purpose of "information." 
diff --git a/articles/event-grid/namespace-handler-webhook.md b/articles/event-grid/namespace-handler-webhook.md index 855d990a1dbd5..ce619f108a99c 100644 --- a/articles/event-grid/namespace-handler-webhook.md +++ b/articles/event-grid/namespace-handler-webhook.md @@ -21,7 +21,7 @@ If your webhook endpoint is known by malicious actors, they could exploit attack >[!IMPORTANT] >Event Grid doesn't support the following functionality when [validating webhooks](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#41-validation-request): ->- `WebHook-Request-Callback`. That means that you or your webhook cannot respond asyncronously to Event Grid's validation request. +>- `WebHook-Request-Callback`. That means that you or your webhook cannot respond asynchronously to Event Grid's validation request. >- `WebHook-Request-Rate`. That is, Event Grid does not request a data rate at which it communicates with your webhook endpoint. If your webhook responds with a `WebHook-Allowed-Rate`header, it is ignored. ## Webhooks diff --git a/articles/event-grid/namespaces-cloud-events.md b/articles/event-grid/namespaces-cloud-events.md index bb8d5fe27545a..760d9cf3b79f5 100644 --- a/articles/event-grid/namespaces-cloud-events.md +++ b/articles/event-grid/namespaces-cloud-events.md @@ -3,7 +3,7 @@ ms.date: 09/25/2024 author: robece ms.author: robece title: Event Grid Namespaces - support for CloudEvents schema -description: Desbribes how Event Grid Namespaces support CloudEvents schema, which is an open source standard for defining events. +description: Describes how Event Grid Namespaces support CloudEvents schema, which is an open source standard for defining events. ms.topic: concept-article ms.custom: FY25Q1-Linter #customer intent: As a developer or architect, I want to know whether and how Azure Event Grid Namespaces support CloudEvents schema. diff --git a/articles/event-grid/outlook-events.md b/articles/event-grid/outlook-events.md index 7b1cafa789dba..c670aa97aa2fd 100644 --- a/articles/event-grid/outlook-events.md +++ b/articles/event-grid/outlook-events.md @@ -212,7 +212,7 @@ When an event is triggered, the Event Grid service sends data about that event t "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce", "type": "Microsoft.Graph.MessageCreated", "source": "/tenants//applications/", - "subject": "Users//Messages/", + "subject": "Users//Messages/", "time": "2024-05-22T22:24:31.3062901Z", "datacontenttype": "application/json", "specversion": "1.0", @@ -272,7 +272,7 @@ When an event is triggered, the Event Grid service sends data about that event t "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce", "type": "Microsoft.Graph.MessageDeleted", "source": "/tenants//applications/", - "subject": "Message/", + "subject": "Message/", "time": "2024-05-22T22:24:31.3062901Z", "datacontenttype": "application/json", "specversion": "1.0", diff --git a/articles/event-grid/partner-events-overview-for-partners.md b/articles/event-grid/partner-events-overview-for-partners.md index 1ad63023e99f0..af4e642cc264b 100644 --- a/articles/event-grid/partner-events-overview-for-partners.md +++ b/articles/event-grid/partner-events-overview-for-partners.md @@ -63,7 +63,7 @@ A Channel is a nested resource to a Partner Namespace. A channel has two main pu A channel has the same lifecycle as its associated customer partner topic or destination. When a channel of type `partner topic` is deleted, for example, the associated customer's partner topic is deleted. 
Similarly, if the partner topic is deleted by the customer, the associated channel on your Azure subscription is deleted. - It's a resource that is used to route events. A channel of type ``partner topic`` is used to route events to a customer's partner topic. It supports two types of routing modes. - - **Channel name routing**. With this kind of routing, you publish events using an http header called `aeg-channel-name` where you provide the name of the channel to which events should be routed. As channels are a partner's representation of partner topics, the events routed to the channel show on the customer's parter topic. This kind of routing is a new capability not present in `event channels`, which support only source-based routing. Channel name routing enables more use cases than the source-based routing and it's the recommended routing mode to choose. For example, with channel name routing a customer can request events that originate in different event sources to land on a single partner topic. + - **Channel name routing**. With this kind of routing, you publish events using an http header called `aeg-channel-name` where you provide the name of the channel to which events should be routed. As channels are a partner's representation of partner topics, the events routed to the channel show on the customer's partner topic. This kind of routing is a new capability not present in `event channels`, which support only source-based routing. Channel name routing enables more use cases than the source-based routing and it's the recommended routing mode to choose. For example, with channel name routing a customer can request events that originate in different event sources to land on a single partner topic. - **Source-based routing**. This routing approach is based on the value of the `source` context attribute in the event. Sources are mapped to channels and when an event comes with a source, say, of value "A" that event is routed to the partner topic associated to the channel that contains "A" in its source property. You may want to declare the event types that are routed to the channel and to its associated partner topic. Event types are shown to customers when creating event subscriptions on the partner topic and are used to select the specific event types to send to an event handler destination. [Learn more](onboard-partner.md#create-a-channel). 
diff --git a/articles/event-grid/receive-events-from-namespace-topics-java.md b/articles/event-grid/receive-events-from-namespace-topics-java.md index d57e60c187ee5..83649bd922c56 100644 --- a/articles/event-grid/receive-events-from-namespace-topics-java.md +++ b/articles/event-grid/receive-events-from-namespace-topics-java.md @@ -68,7 +68,7 @@ import java.util.List; */ public class NamespaceTopicConsumer { private static final String TOPIC_NAME = ""; - public static final String EVENT_SUBSCRIPTION_NAME = ""; + public static final String EVENT_SUBSCRIPTION_NAME = ""; public static final String ENDPOINT = ""; public static final int MAX_NUMBER_OF_EVENTS_TO_RECEIVE = 10; public static final Duration MAX_WAIT_TIME_FOR_EVENTS = Duration.ofSeconds(10); diff --git a/articles/event-grid/troubleshoot-network-connectivity.md b/articles/event-grid/troubleshoot-network-connectivity.md index 8dd73300c97c2..981fcec40f964 100644 --- a/articles/event-grid/troubleshoot-network-connectivity.md +++ b/articles/event-grid/troubleshoot-network-connectivity.md @@ -71,10 +71,10 @@ Enable diagnostic logs for Event Grid topic/domain [Enable diagnostic logs](enab ```json { "time": "2019-11-01T00:17:13.4389048Z", - "resourceId": "/SUBSCRIPTIONS/SAMPLE-SUBSCTIPTION-ID/RESOURCEGROUPS/SAMPLE-RESOURCEGROUP-NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/SAMPLE-TOPIC-NAME", + "resourceId": "/SUBSCRIPTIONS/SAMPLE-SUBSCRIPTION-ID/RESOURCEGROUPS/SAMPLE-RESOURCEGROUP-NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/SAMPLE-TOPIC-NAME", "category": "PublishFailures", "operationName": "Post", - "message": "inputEventsCount=null, requestUri=https://SAMPLE-TOPIC-NAME.region-suffix.eventgrid.azure.net/api/events, publisherInfo=PublisherInfo(category=User, inputSchema=EventGridEvent, armResourceId=/SUBSCRIPTIONS/SAMPLE-SUBSCTIPTION-ID/RESOURCEGROUPS/SAMPLE-RESOURCEGROUP-NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/SAMPLE-TOPIC-NAME), httpStatusCode=Forbidden, errorType=ClientIPRejected, errorMessage=Publishing to SAMPLE-TOPIC-NAME.{region}-{suffix}.EVENTGRID.AZURE.NET by client {clientIp} is rejected due to IpAddress filtering rules." + "message": "inputEventsCount=null, requestUri=https://SAMPLE-TOPIC-NAME.region-suffix.eventgrid.azure.net/api/events, publisherInfo=PublisherInfo(category=User, inputSchema=EventGridEvent, armResourceId=/SUBSCRIPTIONS/SAMPLE-SUBSCRIPTION-ID/RESOURCEGROUPS/SAMPLE-RESOURCEGROUP-NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/SAMPLE-TOPIC-NAME), httpStatusCode=Forbidden, errorType=ClientIPRejected, errorMessage=Publishing to SAMPLE-TOPIC-NAME.{region}-{suffix}.EVENTGRID.AZURE.NET by client {clientIp} is rejected due to IpAddress filtering rules." } ``` diff --git a/articles/event-hubs/event-hubs-data-explorer.md b/articles/event-hubs/event-hubs-data-explorer.md index 3587bf32f0ee4..83264742a46b1 100644 --- a/articles/event-hubs/event-hubs-data-explorer.md +++ b/articles/event-hubs/event-hubs-data-explorer.md @@ -22,7 +22,7 @@ Operations run on an Azure Event Hubs namespace are of two kinds. > * The Event Hubs Data Explorer doesn't support **management operations**. The event hub must be created before the data explorer can send or view events from that event hub. > * While events payloads (known as **values** in Kafka) sent using the **Kafka protocol** will be visible via the data explorer, the **key** for the specific event will not be visible. 
> * We advise against using the Event Hubs Data Explorer for larger messages, as this may result in timeouts, depending on the message size, network latency between client and Service Bus service etc. Instead, we recommend that you use your own client to work with larger messages, where you can specify your own timeout values. -> * The operations that a user can perform using Event Hubs Data Exploerer is determined by the [role-based access control (RBAC)](authorize-access-azure-active-directory.md#azure-built-in-roles-for-azure-event-hubs) role that the user is assigned to. +> * The operations that a user can perform using Event Hubs Data Explorer is determined by the [role-based access control (RBAC)](authorize-access-azure-active-directory.md#azure-built-in-roles-for-azure-event-hubs) role that the user is assigned to. ## Prerequisites diff --git a/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md b/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md index d083305ad4753..e8ba8fc8bcd71 100644 --- a/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md +++ b/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md @@ -276,7 +276,7 @@ Replace the contents of **Program.cs** with the following code: using System.Text; // Create a blob container client that the event processor will use - // TODO: Replace and with actual names + // TODO: Replace and with actual names BlobContainerClient storageClient = new BlobContainerClient( new Uri("https://.blob.core.windows.net/"), new DefaultAzureCredential()); diff --git a/articles/event-hubs/event-hubs-kafka-connect-debezium.md b/articles/event-hubs/event-hubs-kafka-connect-debezium.md index fa7274afdef7d..b7659269e6052 100644 --- a/articles/event-hubs/event-hubs-kafka-connect-debezium.md +++ b/articles/event-hubs/event-hubs-kafka-connect-debezium.md @@ -220,7 +220,7 @@ You should see the JSON payloads representing the change data events generated i "source": { "version": "1.2.0.Final", "connector": "postgresql", - "name": "fullfillment", + "name": "fulfillment", "ts_ms": 1593018069944, "snapshot": "last", "db": "postgres", diff --git a/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md b/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md index 7a8565a77b4a4..b66ef45c70080 100644 --- a/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md +++ b/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md @@ -174,7 +174,7 @@ The name format used by Event Hubs Capture to write the Avro files. The capture "type": "string", "defaultValue": "{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}", "metadata": { - "description": "A Capture Name Format must contain {Namespace}, {EventHub}, {PartitionId}, {Year}, {Month}, {Day}, {Hour}, {Minute} and {Second} fields. These can be arranged in any order with or without delimeters. E.g. Prod_{EventHub}/{Namespace}\\{PartitionId}_{Year}_{Month}/{Day}/{Hour}/{Minute}/{Second}" + "description": "A Capture Name Format must contain {Namespace}, {EventHub}, {PartitionId}, {Year}, {Month}, {Day}, {Hour}, {Minute} and {Second} fields. These can be arranged in any order with or without delimiters. E.g. 
Prod_{EventHub}/{Namespace}\\{PartitionId}_{Year}_{Month}/{Day}/{Hour}/{Minute}/{Second}"
 }
 }
diff --git a/articles/event-hubs/geo-replication.md b/articles/event-hubs/geo-replication.md
index fd5655f0cbf24..51fabccd342c3 100644
--- a/articles/event-hubs/geo-replication.md
+++ b/articles/event-hubs/geo-replication.md
@@ -48,7 +48,7 @@ The Metadata DR feature replicates configuration information for a namespace fro

The newer Geo-replication feature replicates configuration information and all of the data from a primary namespace to one, or more secondary namespaces. When a failover is performed, the selected secondary becomes the primary and the previous primary becomes a secondary. Users can perform a failover back to the original primary when desired.

-This rest of this article focuses on the Geo-replication feature. For details on the metadata DR feature, see [Event Hubs Geo-disater recovery for metadata](./event-hubs-geo-dr.md).
+The rest of this article focuses on the Geo-replication feature. For details on the metadata DR feature, see [Event Hubs Geo-disaster recovery for metadata](./event-hubs-geo-dr.md).

## Geo-replication
The public preview of the Geo-replication feature is supported for namespaces in Event Hubs self-serve scaling dedicated clusters. You can use the feature with new, or existing namespaces in dedicated self-serve clusters. The following features aren't supported with Geo-replication:
diff --git a/articles/event-hubs/resource-governance-with-app-groups.md b/articles/event-hubs/resource-governance-with-app-groups.md
index bdb31993ab162..8da17753d65ac 100644
--- a/articles/event-hubs/resource-governance-with-app-groups.md
+++ b/articles/event-hubs/resource-governance-with-app-groups.md
@@ -266,7 +266,7 @@ az eventhubs namespace application-group policy add --namespace-name mynamespace
```

### [Azure PowerShell](#tab/powershell)
-Use the [Set-AzEventHubApplicationGroup](/powershell/module/az.eventhub/set-azeventhubapplicationgroup) command with `-ThrottingPolicyConfig` set to appropriate values.
+Use the [Set-AzEventHubApplicationGroup](/powershell/module/az.eventhub/set-azeventhubapplicationgroup) command with `-ThrottlingPolicyConfig` set to appropriate values.

**Example:**
```azurepowershell-interactive
diff --git a/articles/event-hubs/test-locally-with-event-hub-emulator.md b/articles/event-hubs/test-locally-with-event-hub-emulator.md
index 1629270287dc0..fe9414dd370cc 100644
--- a/articles/event-hubs/test-locally-with-event-hub-emulator.md
+++ b/articles/event-hubs/test-locally-with-event-hub-emulator.md
@@ -79,7 +79,7 @@ To run the Event Hubs emulator locally on Linux or macOS:
```

-2. To sping up containers for Event Hubs emulator, Save the following .yaml file as *docker-compose.yaml*.
+2. To spin up containers for the Event Hubs emulator, save the following .yaml file as *docker-compose.yaml*.

```
name: microsoft-azure-eventhubs
diff --git a/articles/event-hubs/use-geo-replication.md b/articles/event-hubs/use-geo-replication.md
index 50635a33e5322..ff0ac02d310db 100644
--- a/articles/event-hubs/use-geo-replication.md
+++ b/articles/event-hubs/use-geo-replication.md
@@ -68,7 +68,7 @@ In the case where your primary region goes down completely, you can still perfor

## Remove a secondary
To remove a Geo-replication pairing with a secondary, select **Geo-replication** on the left menu, select the secondary region, and then select **Remove**. At the prompt, enter the word **delete**, and then you can delete the secondary.
-:::image type="content" source="./media/use-geo-replication/remove-secondary.png" alt-text="Screenshot of the Remove secondary function in the geo-replcation UI."::: +:::image type="content" source="./media/use-geo-replication/remove-secondary.png" alt-text="Screenshot of the Remove secondary function in the geo-replication UI."::: When a secondary region is removed, all of the data that it held is also removed. If you wish to re-enable Geo-replication with that region and cluster, it has to replicate the primary region data all over again. diff --git a/articles/expressroute/circuit-placement-api.md b/articles/expressroute/circuit-placement-api.md index 4cfb49e8aced4..9ab06ea0836bd 100644 --- a/articles/expressroute/circuit-placement-api.md +++ b/articles/expressroute/circuit-placement-api.md @@ -1,5 +1,5 @@ --- -title: 'Azure ExpressRoute CrossConnnections circuit placement API' +title: 'Azure ExpressRoute CrossConnections circuit placement API' description: This article provides a detailed overview for ExpressRoute partners about the ExpressRoute CrossConnections circuit placement API. services: expressroute author: mialdrid diff --git a/articles/expressroute/cross-connections-api-development.md b/articles/expressroute/cross-connections-api-development.md index 2432a52ca0e85..e0329c1ec69bf 100644 --- a/articles/expressroute/cross-connections-api-development.md +++ b/articles/expressroute/cross-connections-api-development.md @@ -1,5 +1,5 @@ --- -title: 'Azure ExpressRoute CrossConnnections API development and integration' +title: 'Azure ExpressRoute CrossConnections API development and integration' description: This article provides a detailed overview for ExpressRoute partners about the expressRouteCrossConnections resource type. services: expressroute author: duongau @@ -12,13 +12,13 @@ ms.author: duau --- -# ExpressRoute CrossConnnections API development and integration +# ExpressRoute CrossConnections API development and integration The ExpressRoute Partner Resource Manager API allows ExpressRoute partners to manage the layer-2 and layer-3 configuration of customer ExpressRoute circuits. The ExpressRoute Partner Resource Manager API introduces a new resource type, **expressRouteCrossConnections**. Partners use this resource to manage customer ExpressRoute circuits. ## Workflow -The expressRouteCrossConnections resource is a shadow resource to the ExpressRoute circuit. When an Azure customer creates an ExpressRoute circuit and selects a specific ExpressRoute partner, Microsoft creates an expressRouteCrossConnections resource in the partner's Azure ExpressRoute management subscription. In doing so, Microsoft defines a resource group to create the expressRouteCrossConnections resource in. The naming standard for the resource group is **CrossConnection-*PeeringLocation***; where PeeringLocation = the ExpressRoute Location. For example, if a customer creates an ExpressRoute circuit in Denver, the CrossConnection will be created in the partner's Azure subscription in the following resource group: **CrossConnnection-Denver**. +The expressRouteCrossConnections resource is a shadow resource to the ExpressRoute circuit. When an Azure customer creates an ExpressRoute circuit and selects a specific ExpressRoute partner, Microsoft creates an expressRouteCrossConnections resource in the partner's Azure ExpressRoute management subscription. In doing so, Microsoft defines a resource group to create the expressRouteCrossConnections resource in. 
The naming standard for the resource group is **CrossConnection-*PeeringLocation***; where PeeringLocation = the ExpressRoute Location. For example, if a customer creates an ExpressRoute circuit in Denver, the CrossConnection will be created in the partner's Azure subscription in the following resource group: **CrossConnection-Denver**. ExpressRoute partners manage layer-2 and layer-3 configuration by issuing REST operations against the expressRouteCrossConnections resource. diff --git a/articles/expressroute/expressroute-faqs.md b/articles/expressroute/expressroute-faqs.md index 9c187b35fcda4..c0a915b604352 100644 --- a/articles/expressroute/expressroute-faqs.md +++ b/articles/expressroute/expressroute-faqs.md @@ -454,9 +454,9 @@ See the recommendation for [High availability and failover with Azure ExpressRou Yes. Office 365 GCC service endpoints are reachable through the Azure US Government ExpressRoute. However, you first need to open a support ticket on the Azure portal to provide the prefixes you intend to advertise to Microsoft. Your connectivity to Office 365 GCC services will be established after the support ticket is resolved. -### Can I have ExpressRoute Private Peering in an Azure Goverment environment with Virtual Network Gateways in Azure commercial cloud? +### Can I have ExpressRoute Private Peering in an Azure Government environment with Virtual Network Gateways in Azure commercial cloud? -No, it's not possible to establish ExpressRoute Private peering in an Azure Goverment environment with a virtual network gateway in Azure commercial cloud environments. Furthermore, the scope of the ExpressRoute Government Microsoft Peering is limited to only public IPs within Azure government regions and doesn't extend to the broader ranges of commercial public IPs. +No, it's not possible to establish ExpressRoute Private peering in an Azure Government environment with a virtual network gateway in Azure commercial cloud environments. Furthermore, the scope of the ExpressRoute Government Microsoft Peering is limited to only public IPs within Azure government regions and doesn't extend to the broader ranges of commercial public IPs. ## Route filters for Microsoft peering diff --git a/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md b/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md index 5dc0e896f96e7..827fc5b48c6a5 100644 --- a/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md +++ b/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md @@ -342,7 +342,7 @@ When your results are ready, you have two sets of them for the primary and secon * **One MSEE shows no matches, but the other shows good matches**: This result indicates that one MSEE isn't receiving or passing any traffic. It might be offline (for example, BGP/ARP is down). * You can run additional testing to confirm the unhealthy path by advertising a unique /32 on-premises route over the BGP session on this path. - * Run "Test your private peering connectivity" using the unique /32 advertised as the on-premise destination address and reveiw the results to confirm the path health. + * Run "Test your private peering connectivity" using the unique /32 advertised as the on-premise destination address and review the results to confirm the path health. 
Your test results for each MSEE device look like the following example: diff --git a/articles/expressroute/site-to-site-vpn-over-microsoft-peering.md b/articles/expressroute/site-to-site-vpn-over-microsoft-peering.md index b774406f3ddc4..24c174622e2e3 100644 --- a/articles/expressroute/site-to-site-vpn-over-microsoft-peering.md +++ b/articles/expressroute/site-to-site-vpn-over-microsoft-peering.md @@ -156,7 +156,7 @@ In this example, the variable declarations correspond to the example network. Wh "vpnType": "RouteBased", // type of VPN gateway "sharedKey": "string", // shared secret needs to match with on-premises configuration "asnVpnGateway": 65000, // BGP Autonomous System number assigned to the VPN Gateway - "asnRemote": 65010, // BGP Autonmous Syste number assigned to the on-premises device + "asnRemote": 65010, // BGP Autonomous System number assigned to the on-premises device "bgpPeeringAddress": "172.16.0.3", // IP address of the remote BGP peer on-premises "connectionName": "vpn2local1", "vnetID": "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", diff --git a/articles/external-attack-surface-management/understanding-dashboards.md b/articles/external-attack-surface-management/understanding-dashboards.md index 493f88f74629c..26fdeb57a27a1 100644 --- a/articles/external-attack-surface-management/understanding-dashboards.md +++ b/articles/external-attack-surface-management/understanding-dashboards.md @@ -257,7 +257,7 @@ This chart displays live PII sites by their usage of SSL certificates. By refere A login page is a page on a website where a user has the option to enter a username and password to gain access to services hosted on that site. Login pages have specific requirements under GDPR, so Defender EASM references the DOM of all scanned pages to search for code that correlates to a login. For instance, login pages must be secure to be compliant. This first chart displays Login websites by protocol (HTTP or HTTPS) and the second by certificate posture. 
-![Screenshot of Login websites by protcol chart.](media/Dashboards-25.png) +![Screenshot of Login websites by protocol chart.](media/Dashboards-25.png) ![Screenshot of Login websites by certificate posture chart.](media/Dashboards-26.png) diff --git a/articles/firewall-manager/secure-cloud-network-powershell.md b/articles/firewall-manager/secure-cloud-network-powershell.md index 149621f8afe41..48deb59779412 100644 --- a/articles/firewall-manager/secure-cloud-network-powershell.md +++ b/articles/firewall-manager/secure-cloud-network-powershell.md @@ -91,7 +91,7 @@ $AzFW = New-AzFirewall -Name "azfw1" -ResourceGroupName $RG -Location $Location Enabling logging from the Azure Firewall to Azure Monitor is optional, but in this example you use the Firewall logs to prove that traffic is traversing the firewall: ```azurepowershell -# Optionally, enable looging of Azure Firewall to Azure Monitor +# Optionally, enable logging of Azure Firewall to Azure Monitor $LogWSName = "vwan-" + (Get-Random -Maximum 99999) + "-" + $RG $LogWS = New-AzOperationalInsightsWorkspace -Location $Location -Name $LogWSName -Sku Standard -ResourceGroupName $RG Set-AzDiagnosticSetting -ResourceId $AzFW.Id -Enabled $True -Category AzureFirewallApplicationRule, AzureFirewallNetworkRule -WorkspaceId $LogWS.ResourceId diff --git a/articles/firewall/deploy-rules-powershell.md b/articles/firewall/deploy-rules-powershell.md index 241deee871c9d..37d9b9c2bc0d2 100644 --- a/articles/firewall/deploy-rules-powershell.md +++ b/articles/firewall/deploy-rules-powershell.md @@ -29,7 +29,7 @@ Carefully review the following steps. You should first try it on a test policy t ```azurepowershell Connect-AzAccount -Set-AzContext -Subscription "" +Set-AzContext -Subscription "" ``` diff --git a/articles/firewall/firewall-copilot.md b/articles/firewall/firewall-copilot.md index 68f7fa76cab69..f2af952247aa9 100644 --- a/articles/firewall/firewall-copilot.md +++ b/articles/firewall/firewall-copilot.md @@ -80,7 +80,7 @@ To view the list of built-in system capabilities for Azure Firewall, use the fol > [!IMPORTANT] > Use of Copilot in Azure to query Azure Firewall is included with Security Copilot and requires [security compute units (SCUs)](/security-copilot/get-started-security-copilot#security-compute-units). You can provision SCUs and increase or decrease them at any time. For more information on SCUs, see [Get started with Microsoft Security Copilot](/security-copilot/get-started-security-copilot). - > If you do not have Security Copilot properly configured but ask a question relavent to the Azure Firewall capabilities via the Copilot in Azure experience then you will see an error message. + > If you do not have Security Copilot properly configured but ask a question relevant to the Azure Firewall capabilities via the Copilot in Azure experience then you will see an error message. ## Sample Azure Firewall prompts diff --git a/articles/firewall/monitor-firewall-reference.md b/articles/firewall/monitor-firewall-reference.md index 923071a9d5161..4a0d9d1cf0293 100644 --- a/articles/firewall/monitor-firewall-reference.md +++ b/articles/firewall/monitor-firewall-reference.md @@ -56,7 +56,7 @@ The *AZFW Latency Probe* metric measures the overall or average latency of Azure **What the AZFW Latency Probe Metric Measures (and Doesn't):** - What it measures: The latency of the Azure Firewall within the Azure platform -- What it doesn't meaure: The metric does not capture end-to-end latency for the entire network path. 
Instead, it reflects the performance within the firewall, rather than how much latency Azure Firewall introduces into the network. +- What it doesn't measure: The metric does not capture end-to-end latency for the entire network path. Instead, it reflects the performance within the firewall, rather than how much latency Azure Firewall introduces into the network. - Error reporting: If the latency metric isn't functioning correct, it reports a value of 0 in the metrics dashboard, indicating a probe failure or interruption. **Factors that impact latency:** @@ -65,10 +65,10 @@ The *AZFW Latency Probe* metric measures the overall or average latency of Azure - Networking issues within the Azure platform **Latency Probes: From ICMP to TCP** -The latency probe currently uses Microsoft's Ping Mesh technology, which is based on ICMP (Internet Control Message Protcol). ICMP is suitable for quick health checks, like ping requests, but it may not accurately represent real-world application traffic, which typically relis on TCP.However, ICMP probes prioritize differently across the Azure platform, which can result in variation across SKUs. To reduce these discrepancies, Azure Firewall plans to transition to TCP-based probes. +The latency probe currently uses Microsoft's Ping Mesh technology, which is based on ICMP (Internet Control Message Protocol). ICMP is suitable for quick health checks, like ping requests, but it may not accurately represent real-world application traffic, which typically relis on TCP.However, ICMP probes prioritize differently across the Azure platform, which can result in variation across SKUs. To reduce these discrepancies, Azure Firewall plans to transition to TCP-based probes. - Latency spikes: With ICMP probes, intermittent spikes are normal and are part of the host network's standard behavior. These should not be misinterpreted as firewall issues unless they are persistent. -- Average latency: On average, the latency of Azure Firewall is expected to range from 1ms to 10 ms, dpending on the Firewall SKU and deployment size. +- Average latency: On average, the latency of Azure Firewall is expected to range from 1ms to 10 ms, depending on the Firewall SKU and deployment size. **Best Practices for Monitoring Latency** - Set a baseline: Establish a latency baseline under light traffic conditions for accurate comparisons during normal or peak usage. diff --git a/articles/firewall/premium-deploy.md b/articles/firewall/premium-deploy.md index dd576f873ea72..ee15413ad6482 100644 --- a/articles/firewall/premium-deploy.md +++ b/articles/firewall/premium-deploy.md @@ -85,7 +85,7 @@ You can use `curl` to control various HTTP headers and simulate malicious traffi ``` { “msg” : “TCP request from 10.0.100.5:16036 to 10.0.20.10:80. Action: Alert. Rule: 2032081. IDS: - USER_AGENTS Suspicious User Agent (HaxerMen). Priority: 1. Classification: A Network Tojan was + USER_AGENTS Suspicious User Agent (HaxerMen). Priority: 1. Classification: A Network Trojan was detected”} ``` diff --git a/articles/firewall/snat-private-range.md b/articles/firewall/snat-private-range.md index 748c3a4d7a389..fd96aedab120a 100644 --- a/articles/firewall/snat-private-range.md +++ b/articles/firewall/snat-private-range.md @@ -180,7 +180,7 @@ You can use the Azure portal to specify private IP address ranges for the firewa You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. 
These learned address ranges are considered to be internal to the network, so traffic to destinations in the learned ranges aren't SNATed. Auto-learn SNAT ranges requires Azure Route Server to be deployed in the same VNet as the Azure Firewall. The firewall must be associated with the Azure Route Server and configured to auto-learn SNAT ranges in the Azure Firewall Policy. You can currently use an ARM template, Azure PowerShell, or the Azure portal to configure auto-learn SNAT routes. > [!NOTE] -> Auto-learn SNAT routes is availalable only on VNet deployments (hub virtual network). It isn't availble on VWAN deployments (secured virtual hub). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md) +> Auto-learn SNAT routes is available only on VNet deployments (hub virtual network). It isn't available on VWAN deployments (secured virtual hub). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md) ### Configure using an ARM template diff --git a/articles/healthcare-apis/deidentification/quickstart-sdk-net.md b/articles/healthcare-apis/deidentification/quickstart-sdk-net.md index ff716b68f1707..49509577fe8bd 100644 --- a/articles/healthcare-apis/deidentification/quickstart-sdk-net.md +++ b/articles/healthcare-apis/deidentification/quickstart-sdk-net.md @@ -36,7 +36,14 @@ A de-identification service provides you with an endpoint URL. This endpoint url DEID_SERVICE_NAME="" az resource create -g $RESOURCE_GROUP_NAME -n $DEID_SERVICE_NAME --resource-type microsoft.healthdataaiservices/deidservices --is-full-object -p "{\"identity\":{\"type\":\"SystemAssigned\"},\"properties\":{},\"location\":\"$REGION\"}" ``` - +### Assign RBAC Roles to the de-identification service + +We need to assign a role to our de-identification service so we have permissions to perform the actions in this quickstart. + +Since we're using real-time and job endpoints, we assign the `DeID Data Owner` roles. + +To learn how to assign this role to your de-identification service, refer to: [Manage access to the de-identification service with Azure role-based access control (RBAC) in Azure Health Data Services](manage-access-rbac.md) + ### Create an Azure Storage account 1. Install [Azure CLI](/cli/azure/install-azure-cli) @@ -141,16 +148,16 @@ To create the job, we need the URL to the blob endpoint of the Azure Storage Acc az resource show -n $STORAGE_ACCOUNT_NAME -g $RESOURCE_GROUP_NAME --resource-type Microsoft.Storage/storageAccounts --query properties.primaryEndpoints.blob --output tsv ``` -Now we can create the job. This example uses `folder1/` as the prefix. The job will de-identify any document that matches this prefix and write the de-identified version with the `output_files/` prefix. +Now we can create the job. This example uses `folder1/` as the prefix. The job de-identifies any document that matches this prefix and write the de-identified version with the `output_files/` prefix. 
```csharp using Azure; -Uri storageAccountUri = new(""); +Uri storageAccountContainerUri = new("https://exampleStorageAccount.blob.core.windows.net/containerName"); DeidentificationJob job = new( - new SourceStorageLocation(new Uri(storageAccountUrl), "folder1/"), - new TargetStorageLocation(new Uri(storageAccountUrl), "output_files/") + new SourceStorageLocation(storageAccountContainerUri, "folder1/"), + new TargetStorageLocation(storageAccountContainerUri, "output_files/") ); job = client.CreateJob(WaitUntil.Started, "my-job-1", job).Value; diff --git a/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md b/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md index f3d3d0f8ffd11..72cf2e5d2b064 100644 --- a/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md +++ b/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md @@ -235,7 +235,7 @@ These VMs are added to the backend pool of the load balancer that was created ea | Setting | VM 2 | | ------- | ----- | | Name | **myVM2** | - | Availability set | Select the existing **myAvailabiltySet** | + | Availability set | Select the existing **myAvailabilitySet** | | Network security group | Select the existing **myNSG** | [!INCLUDE [ephemeral-ip-note.md](~/reusable-content/ce-skilling/azure/includes/ephemeral-ip-note.md)] diff --git a/articles/load-balancer/create-custom-http-health-probe-howto.md b/articles/load-balancer/create-custom-http-health-probe-howto.md index 96996344dd163..bcb61e7492c01 100644 --- a/articles/load-balancer/create-custom-http-health-probe-howto.md +++ b/articles/load-balancer/create-custom-http-health-probe-howto.md @@ -134,7 +134,7 @@ In this section, you create the load balancer rule that uses the HTTP health pro | **Protocol** | Select **TCP** | | **Port** | Enter **5000** | | **Backend port** | Enter **5000** | - | **Health probe** | Select **HTTP_Health (HTTP:5000/health_checkk/)** | + | **Health probe** | Select **HTTP_Health (HTTP:5000/health_check/)** | | **Session persistence** | Select **None** | | **Idle timeout (minutes)** | Enter **5** | diff --git a/articles/load-balancer/inbound-nat-rules.md b/articles/load-balancer/inbound-nat-rules.md index 0fb93ae342294..05c205285c20b 100644 --- a/articles/load-balancer/inbound-nat-rules.md +++ b/articles/load-balancer/inbound-nat-rules.md @@ -27,7 +27,7 @@ There are two types of inbound NAT rule available for Azure Load Balancer, versi ### Inbound NAT rule V1 -Inbound NAT rule V1 is defined for a single target virtual machine. Inbound NAT pools are feature of Inbound NAT rules V1 and automatically creates Inbound NAT rules per VMSS intance. The load balancer's frontend IP address and the selected frontend port are used for connections to the virtual machine. +Inbound NAT rule V1 is defined for a single target virtual machine. Inbound NAT pools are feature of Inbound NAT rules V1 and automatically creates Inbound NAT rules per VMSS instance. The load balancer's frontend IP address and the selected frontend port are used for connections to the virtual machine. >[!Important] > On September 30, 2027, Inbound NAT rules v1 will be retired. If you are currently using Inbound NAT rules v1, make sure to upgrade to Inbound NAT rules v2 prior to the retirement date. 
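As a rough sketch of what an upgraded rule can look like, the following Azure CLI command creates a version 2 style inbound NAT rule that maps a frontend port range to a backend pool instead of a single virtual machine. All resource names and port values are placeholders, and the range and pool parameters shown here are assumptions that may differ across CLI versions.

```azurecli
# Hypothetical example of an inbound NAT rule in the v2 model (placeholders throughout):
# a frontend port range is mapped onto the members of a backend pool.
az network lb inbound-nat-rule create \
  --resource-group MyRG \
  --lb-name MyLoadBalancer \
  --name MyNatRuleV2 \
  --protocol Tcp \
  --frontend-ip-name MyFrontendIP \
  --frontend-port-range-start 5000 \
  --frontend-port-range-end 5100 \
  --backend-pool-name MyBackendPool \
  --backend-port 22
```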
diff --git a/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md b/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md index b5d79d513fe8d..80b90769f7885 100644 --- a/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md +++ b/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md @@ -105,7 +105,7 @@ Add IPv6 address ranges to the virtual network and subnet hosting the VMs as fol ```azurepowershell-interactive #Add IPv6 ranges to the VNET and subnet -#Retreive the VNET object +#Retrieve the VNET object $vnet = Get-AzVirtualNetwork -ResourceGroupName $rg.ResourceGroupName -Name "myVnet" #Add IPv6 prefix to the VNET diff --git a/articles/load-balancer/load-balancer-common-deployment-errors.md b/articles/load-balancer/load-balancer-common-deployment-errors.md index 0a3ce82bf27e7..9cd732998601b 100644 --- a/articles/load-balancer/load-balancer-common-deployment-errors.md +++ b/articles/load-balancer/load-balancer-common-deployment-errors.md @@ -36,7 +36,7 @@ This article describes some common Azure Load Balancer deployment errors and pro | LoadBalancerInUseByVirtualMachineScaleSet | The Load Balancer resource is in use by a Virtual Machine Scale Set and can't be deleted. Use the Azure Resource Manager ID provided in the error message to search for the Virtual Machine Scale Set in order to delete it. | | SpecifiedAllocatedOutboundPortsForOutboundRuleIsNotAMultipleOfEight | The number of specified [SNAT](outbound-rules.md) ports is not a multiply of 8. | SpecifiedAllocatedOutboundPortsForOutboundRuleExceedsTotalNumberOfAllowedPortsPerRule | The number of specified [SNAT](outbound-rules.md) ports is greater than 64000. -| SpecifiedAllocatedOutboundPortsForOutboundRuleExceedsTotalNumberOfAvailablePorts | The number of specified [SNAT](outbound-rules.md) ports is greater than currently avaliable. +| SpecifiedAllocatedOutboundPortsForOutboundRuleExceedsTotalNumberOfAvailablePorts | The number of specified [SNAT](outbound-rules.md) ports is greater than currently available. ## Next steps diff --git a/articles/load-balancer/load-balancer-manage-health-status.md b/articles/load-balancer/load-balancer-manage-health-status.md index 07815fc6e6dce..0065edfbdfa0f 100644 --- a/articles/load-balancer/load-balancer-manage-health-status.md +++ b/articles/load-balancer/load-balancer-manage-health-status.md @@ -37,7 +37,7 @@ The following table describes the success reason codes where the backend state i | **Reason Code** | **Portal displayed reason** | **Description** | |-------------|-------------------------|-------------| -| **Up_Probe_Succes**s | The backend instance is responding to health probe successfully. | Your backend instance is responding to the health probe successfully. | +| **Up_Probe_Success** | The backend instance is responding to health probe successfully. | Your backend instance is responding to the health probe successfully. | | **Up_Probe_AllDownIsUp** | The backend instance is considered healthy due to enablement of *NoHealthyBackendsBehavior*. | Health probe state of the backend instance is ignored because *NoHealthyBackendsBehavior* is enabled. The backend instance is considered healthy and can receive traffic. | | **Up_Probe_ApproachingUnhealthyThreshold** | Health probe is approaching an unhealthy threshold but backend instance remains healthy based on last response. | The most recent probe has failed to respond but the backend instance remains healthy enough based on earlier responses. 
| | **Up_Admin**| The backend instance is healthy due to Admin State set to *Up*. | Health probe state of the backend instance is ignored because the Admin State is set to *UP*. The backend instance is considered healthy and can receive traffic. | @@ -97,7 +97,7 @@ Health status can be retrieved on a per load balancing rule basis. This is suppo To retrieve the health status information via REST API, you need to do a two request process. > [!NOTE] -> Using the REST API method requres that you have a **Bearer access token** for autorization. For assistance retrieving the access token, see [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken) for details. +> Using the REST API method requires that you have a **Bearer access token** for authorization. For assistance retrieving the access token, see [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken) for details. 1. Use the following POST request to obtain the Location URI from the Response Headers. diff --git a/articles/load-balancer/load-balancer-monitor-metrics-cli.md b/articles/load-balancer/load-balancer-monitor-metrics-cli.md index f06d24cc3cd38..bcd67bbfa1a56 100644 --- a/articles/load-balancer/load-balancer-monitor-metrics-cli.md +++ b/articles/load-balancer/load-balancer-monitor-metrics-cli.md @@ -19,7 +19,7 @@ Complete reference documentation and other samples for retrieving metrics using ## Table of metric names via CLI -When you use CLI, Load Balancer metrics may use a different metric name for the CLI parameter value. When specifying the metric name via the `--metric dimension` parameter, use the CLI metric name instead. For example, the metric Data path availability would be used by specifying a parameter of `--metric VipAvaialbility`. +When you use CLI, Load Balancer metrics may use a different metric name for the CLI parameter value. When specifying the metric name via the `--metric dimension` parameter, use the CLI metric name instead. For example, the metric Data path availability would be used by specifying a parameter of `--metric VipAvailability`. Here's a table of common Load Balancer metrics, the CLI metric name, and recommend aggregation values for queries: diff --git a/articles/load-balancer/load-balancer-nat-pool-migration.md b/articles/load-balancer/load-balancer-nat-pool-migration.md index 75dba871d91f0..c03791a3e4aed 100644 --- a/articles/load-balancer/load-balancer-nat-pool-migration.md +++ b/articles/load-balancer/load-balancer-nat-pool-migration.md @@ -18,7 +18,7 @@ An [inbound NAT rule](inbound-nat-rules.md) is used to forward traffic from a lo ## NAT rule version 1 -[Version 1](inbound-nat-rules.md) is the legacy approach for assigning an Azure Load Balancer’s frontend port to each backend instance. Rules are applied to the backend instance’s network interface card (NIC). For Azure Virtual Machine Scale Sets (VMSS) instances, inbound NAT rules are automatically created/deleted as new instances are scaled up/down. For VMSS instanes use the `Inbound NAT Pools` property to manage Inbound NAT rules version 1. +[Version 1](inbound-nat-rules.md) is the legacy approach for assigning an Azure Load Balancer’s frontend port to each backend instance. Rules are applied to the backend instance’s network interface card (NIC). For Azure Virtual Machine Scale Sets (VMSS) instances, inbound NAT rules are automatically created/deleted as new instances are scaled up/down. For VMSS instances use the `Inbound NAT Pools` property to manage Inbound NAT rules version 1. 
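To check whether a load balancer still relies on the version 1 model, you can look for any NAT pools defined on it. A minimal sketch, assuming a load balancer named `MyLoadBalancer` in resource group `MyRG`:

```azurecli
# List any inbound NAT pools still defined on the load balancer.
# An empty result suggests the load balancer already uses inbound NAT rules only.
az network lb show \
  --resource-group MyRG \
  --name MyLoadBalancer \
  --query "inboundNatPools[].{name:name, protocol:protocol, backendPort:backendPort}" \
  --output table
```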
## NAT rule version 2 diff --git a/articles/load-balancer/load-balancer-outbound-connections.md b/articles/load-balancer/load-balancer-outbound-connections.md index fe49527fff1c6..d1163cb035bc9 100644 --- a/articles/load-balancer/load-balancer-outbound-connections.md +++ b/articles/load-balancer/load-balancer-outbound-connections.md @@ -89,7 +89,7 @@ A public IP assigned to a VM is a 1:1 relationship (rather than 1: many) and imp In Azure, virtual machines created in a virtual network without explicit outbound connectivity defined are assigned a default outbound public IP address. This IP address enables outbound connectivity from the resources to the Internet. This access is referred to as [default outbound access](../virtual-network/ip-services/default-outbound-access.md). This method of access is **not recommended** as it's insecure and the IP addresses are subject to change. >[!Important] ->On September 30, 2025, default outbound access for new deployments will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). It is recommended to use one the explict forms of connectivity as shown in options 1-3 above. +>On September 30, 2025, default outbound access for new deployments will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). It is recommended to use one the explicit forms of connectivity as shown in options 1-3 above. ### What are SNAT ports? diff --git a/articles/load-balancer/manage-admin-state-how-to.md b/articles/load-balancer/manage-admin-state-how-to.md index 75ae1416712f0..ce765dcfc337e 100644 --- a/articles/load-balancer/manage-admin-state-how-to.md +++ b/articles/load-balancer/manage-admin-state-how-to.md @@ -367,7 +367,7 @@ In this section, you learn how to remove an existing admin state from an existin # [Azure PowerShell](#tab/azurepowershell) 1. Connect to your Azure subscription with Azure PowerShell. -2. Remove an existing backend pool instance. This is done by setting the admin state value to **NONE** with [New-AzLoadBlancerBackendAddressConfig](/powershell/module/az.network/new-azloadbalancerbackendaddressconfig). Replace the values in brackets with the names of the resources in your configuration. +2. Remove an existing backend pool instance. This is done by setting the admin state value to **NONE** with [New-AzLoadBalancerBackendAddressConfig](/powershell/module/az.network/new-azloadbalancerbackendaddressconfig). Replace the values in brackets with the names of the resources in your configuration. ```azurepowershell diff --git a/articles/load-balancer/manage.md b/articles/load-balancer/manage.md index 1a25b60b9a2e6..389cac3c885ef 100644 --- a/articles/load-balancer/manage.md +++ b/articles/load-balancer/manage.md @@ -235,7 +235,7 @@ The following is displayed in the **Add outbound rule** creation page: | Port allocation | Your choices are:
**Manually choose number of outbound ports**
**Use the default number of outbound ports**
The recommended selection is the default of **Manually choose number of outbound ports** to prevent SNAT port exhaustion. If **Use the default number of outbound ports** is chosen, the **Outbound ports** selection is disabled. | | Outbound ports | Your choices are:
**Ports per instance**
**Maximum number of backend instances**.
The recommended selections are select **Ports per instance** and enter **10,000**. | -:::image type="content" source="./media/manage/add-outbound-rule.png" alt-text="Screenshot of add outbound rule." border="true"::: +:::image type="content" source="./media/manage/add-outbound-rule.png" alt-text="Screenshot of add outbound rule. Allowing to set the outbound ports." border="true"::: ## Portal settings @@ -361,7 +361,7 @@ If you want to add an outbound rule to your load balancer, go to your load balan | Available Frontend ports | Displayed value of total available frontend ports based on selected port allocation. | | Maximum number of backend instances | Enter the maximum number of back end instances. This entry is only available when choosing **Maximum number of backend instances** for outbound ports above.
You can't scale your backend pool above this number of instances. Increasing the number of instances decreases the number of ports per instance unless you also add more frontend IP addresses. | -:::image type="content" source="./media/manage/outbound-rule.png" alt-text="Screehshot of add outbound rule." border="true"::: +:::image type="content" source="./media/manage/outbound-rule.png" alt-text="Screenshot of add outbound rule. Allowing to set the maximum number of backend instances." border="true"::: ## Next Steps diff --git a/articles/load-balancer/quickstart-load-balancer-standard-internal-terraform.md b/articles/load-balancer/quickstart-load-balancer-standard-internal-terraform.md index 7959c170ac616..5d5ef721cc954 100644 --- a/articles/load-balancer/quickstart-load-balancer-standard-internal-terraform.md +++ b/articles/load-balancer/quickstart-load-balancer-standard-internal-terraform.md @@ -322,7 +322,7 @@ This quickstart shows you how to deploy a standard internal load balancer and tw variable "password" {   type = string   default = "Microsoft@123" -   description = "The passoword for the local account that will be created on the new VM." +   description = "The password for the local account that will be created on the new VM." } variable "virtual_network_name" { diff --git a/articles/load-balancer/quickstart-load-balancer-standard-public-terraform.md b/articles/load-balancer/quickstart-load-balancer-standard-public-terraform.md index 20cbfe37e05ed..4336553fde997 100644 --- a/articles/load-balancer/quickstart-load-balancer-standard-public-terraform.md +++ b/articles/load-balancer/quickstart-load-balancer-standard-public-terraform.md @@ -270,7 +270,7 @@ This quickstart shows you how to deploy a standard load balancer to load balance variable "password" {   type        = string   default     = "Microsoft@123" -   description = "The passoword for the local account that will be created on the new VM." +   description = "The password for the local account that will be created on the new VM." } variable "virtual_network_name" { diff --git a/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md b/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md index 5c79f3f22f871..32ce6a917b313 100644 --- a/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md +++ b/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md @@ -252,7 +252,7 @@ These VMs are added to the backend pool of the load balancer that was created ea | Availability zone | Select **Zone 1** | | Image | Select **Windows Server 2022 Datacenter: Azure Edition - x64 Gen2** | | Azure Spot instance | Leave the default | - | Size | Select **Standar_DS1_v2** or another image size. | + | Size | Select **Standard_DS1_v2** or another image size. | | **Administrator account** | | | Username | Enter a username | | Password | Enter a password | diff --git a/articles/load-testing/troubleshoot-private-endpoint-tests.md b/articles/load-testing/troubleshoot-private-endpoint-tests.md index 258534cff6e99..1ed64e2ba6462 100644 --- a/articles/load-testing/troubleshoot-private-endpoint-tests.md +++ b/articles/load-testing/troubleshoot-private-endpoint-tests.md @@ -24,6 +24,9 @@ Azure Load Testing service requires outbound connectivity from the virtual netwo Optionally, outbound connectivity is needed to *.maven.org and *.github.com to download any plugins that are included in your test configuration. 
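One way to validate these outbound requirements before running a test is to use Azure Network Watcher connection troubleshoot from a VM in the injected virtual network. The sketch below assumes a jump box VM named `myJumpBoxVM` in resource group `MyRG` and uses `api.github.com` only as an illustrative destination; substitute the endpoints that apply to your setup.

```azurecli
# Illustrative connectivity check from a VM inside the virtual network
# (requires Network Watcher in the region; names and destination are placeholders).
az network watcher test-connectivity \
  --resource-group MyRG \
  --source-resource myJumpBoxVM \
  --dest-address api.github.com \
  --dest-port 443
```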
+> [!NOTE] +> For [Azure Government](/azure/azure-government/documentation-government-welcome) regions, ensure outbound connectivity to *.azure.us, *.usgovcloudapi.net and *.azurecr.us. For more information on Azure Government endpoints, see [Guidance for developers](/azure/azure-government/compare-azure-government-global-azure#guidance-for-developers). + ## Troubleshoot connectivity from the virtual network by deploying an Azure Virtual Machine To test connectivity from your virtual network: diff --git a/articles/logic-apps/authenticate-with-managed-identity.md b/articles/logic-apps/authenticate-with-managed-identity.md index d33df9c5971b1..cad7c7abab7de 100644 --- a/articles/logic-apps/authenticate-with-managed-identity.md +++ b/articles/logic-apps/authenticate-with-managed-identity.md @@ -742,7 +742,7 @@ The following steps show how to use the managed identity with a trigger or actio > user-assigned managed identity defined and enabled. However, your logic app should > use only one managed identity at a time. > - > For example, a workflow that acceses different Azure Service Bus messaging entities + > For example, a workflow that accesses different Azure Service Bus messaging entities > should use only one managed identity. See [Connect to Azure Service Bus from workflows](../connectors/connectors-create-api-servicebus.md#prerequisites). For more information, see [Example: Authenticate built-in trigger or action with a managed identity](#authenticate-built-in-managed-identity). @@ -1303,7 +1303,7 @@ For more information, see [Microsoft.Web/connections/accesspolicies (ARM templat --- - + ## Set up advanced control over API connection authentication diff --git a/articles/logic-apps/biztalk-server-migration-overview.md b/articles/logic-apps/biztalk-server-migration-overview.md index 99a764615582d..217bc83229f55 100644 --- a/articles/logic-apps/biztalk-server-migration-overview.md +++ b/articles/logic-apps/biztalk-server-migration-overview.md @@ -187,7 +187,7 @@ Integration platforms offer ways to solve problems in a consistent and unified m - .NET Framework assemblies - You can share these assemblies across BizTalk Server projects. These assemblies are easier to manage from a dependency perspective. Provided that no breaking changes exist, an update to a .NET Fx assembly requires updating the DLL in the Global Assembly Cache (GAC), which automatically makes the changes available to other assemblies. If breaking changes exist, you must also update the dependent project to accommodate the changes in the .NET Franework assembly. + You can share these assemblies across BizTalk Server projects. These assemblies are easier to manage from a dependency perspective. Provided that no breaking changes exist, an update to a .NET Fx assembly requires updating the DLL in the Global Assembly Cache (GAC), which automatically makes the changes available to other assemblies. If breaking changes exist, you must also update the dependent project to accommodate the changes in the .NET Framework assembly. - Custom pipelines and pipeline components @@ -281,7 +281,7 @@ Web services support is a popular capability in BizTalk Server and is available WCF adapters support [Single Sign On (SSO)](/biztalk/core/single-sign-on-support-for-the-wcf-adapters) through impersonation and acquire the Enterprise SSO ticket for using SSO with WCF adapters. This capability enables the user context to flow across systems. 
From an authentication perspective, [Service Authentication](/biztalk/core/what-are-the-wcf-adapters#service-authentication-types) supports the following types: None, Windows, and Certificate. [Client Authentication](/biztalk/core/what-are-the-wcf-adapters#client-authentication-types) supports the following types: Anonymous, UserName, Windows, and Certificate. Supported [security modes](/biztalk/core/what-are-the-wcf-adapters#security-modes) include the following types: Transport, Message, and Mixed. -WCF supports transactions using the [WS-AutomicTransaction protocol](/biztalk/core/what-are-the-wcf-adapters#ws-atomictransaction), which you can find in WCF adapters such as WCF-WsHttp, WCF-NetTcp, and WCF-NetMsmq. This capability is supported in the following scenarios: +WCF supports transactions using the [WS-AtomicTransaction protocol](/biztalk/core/what-are-the-wcf-adapters#ws-atomictransaction), which you can find in WCF adapters such as WCF-WsHttp, WCF-NetTcp, and WCF-NetMsmq. This capability is supported in the following scenarios: - Transactional submission of messages to the MessageBox database - Transactional transmission of messages from the MessageBox to a transactional destination @@ -422,7 +422,7 @@ Beyond the core XML transformations, BizTalk Server also provides encoding and d - Azure Functions - You can execute XSLT or Liquid template transformations by using C# or any other programing language to create an Azure function that you can call with Azure API Management or Azure Logic Apps. + You can execute XSLT or Liquid template transformations by using C# or any other programming language to create an Azure function that you can call with Azure API Management or Azure Logic Apps. ### Network connectivity diff --git a/articles/logic-apps/connectors/sap-generate-schemas-for-artifacts.md b/articles/logic-apps/connectors/sap-generate-schemas-for-artifacts.md index 7fc2fe562407e..4f567a3935161 100644 --- a/articles/logic-apps/connectors/sap-generate-schemas-for-artifacts.md +++ b/articles/logic-apps/connectors/sap-generate-schemas-for-artifacts.md @@ -350,7 +350,7 @@ You can begin your XML schema with an optional XML prolog. The SAP connector wor ### XML samples for RFC requests -The following example shows a basic RFC call where the RFC name is `STFC_CONNECTION`. This request uses the default namespace named `xmlns=`. However, you can assign and use namespace aliases such as `xmmlns:exampleAlias=`. The namespace value is the namespace for all the RFCs in SAP for Microsoft services. The request has a simple input parameter named ``. +The following example shows a basic RFC call where the RFC name is `STFC_CONNECTION`. This request uses the default namespace named `xmlns=`. However, you can assign and use namespace aliases such as `xmlns:exampleAlias=`. The namespace value is the namespace for all the RFCs in SAP for Microsoft services. The request has a simple input parameter named ``. ```xml diff --git a/articles/logic-apps/create-replication-tasks-azure-resources.md b/articles/logic-apps/create-replication-tasks-azure-resources.md index 38278e43c8c84..274a34fb3cea1 100644 --- a/articles/logic-apps/create-replication-tasks-azure-resources.md +++ b/articles/logic-apps/create-replication-tasks-azure-resources.md @@ -113,7 +113,7 @@ For Service Bus, you must enable sessions so that message sequences with the sam > a disruptive event. 
To prevent reprocessing already processed messages, you have to set up a way to > track the already processed messages so that processing resumes only with the unprocessed messages. > -> For example, you can set up a database that stores the proccessing state for each message. +> For example, you can set up a database that stores the processing state for each message. > When a message arrives, check the message's state and process only when the message is unprocessed. > That way, no processing happens for an already processed message. > diff --git a/articles/logic-apps/create-single-tenant-workflows-azure-portal.md b/articles/logic-apps/create-single-tenant-workflows-azure-portal.md index 31e3c0a7b387d..e7f2f4ec52f60 100644 --- a/articles/logic-apps/create-single-tenant-workflows-azure-portal.md +++ b/articles/logic-apps/create-single-tenant-workflows-azure-portal.md @@ -32,7 +32,7 @@ You can have multiple workflows in a Standard logic app. Workflows in the same l > > - *What's Azure Logic Apps?* > - *What's a Standard logic app workflow?* -> - *What's the Request triger?* +> - *What's the Request trigger?* > - *What's the Office 365 Outlook connector?* > > To find Azure Copilot, on the [Azure portal](https://portal.azure.com) toolbar, select **Copilot**. diff --git a/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md b/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md index b8c69093d1a54..9055be1cb6c12 100644 --- a/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md +++ b/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md @@ -339,7 +339,7 @@ Before you can create your logic app, create a local project so that you can man > [**FUNCTIONS_WORKER_RUNTIME** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). > > The **APP_KIND** app setting is required for your Standard logic app, and the value - > must be **workflowApp**. Howeever, in some scenarios, this app setting might be missing, + > must be **workflowApp**. However, in some scenarios, this app setting might be missing, > for example, due to automation using Azure Resource Manager templates or other scenarios > where the setting isn't included. If certain actions don't work, such as the **Execute JavaScript Code** > action, or if the workflow stops working, check that the **APP_KIND** app setting exists and is set to to **workflowApp**. diff --git a/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md b/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md index bf3b16aaf00c2..def461828c6da 100644 --- a/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md +++ b/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md @@ -6,7 +6,7 @@ ms.suite: integration ms.reviewer: estfan, azla ms.topic: conceptual ms.date: 10/23/2024 -# Customer intent: As a developer, I want to learn about DevOps deployment support for Standard logi apps in single-tenant Azure Logic Apps. +# Customer intent: As a developer, I want to learn about DevOps deployment support for Standard logic apps in single-tenant Azure Logic Apps. 
--- # DevOps deployment for Standard logic apps in single-tenant Azure Logic Apps diff --git a/articles/logic-apps/enterprise-integration/create-integration-account.md b/articles/logic-apps/enterprise-integration/create-integration-account.md index a1523c85c4063..f6ae0f004a38d 100644 --- a/articles/logic-apps/enterprise-integration/create-integration-account.md +++ b/articles/logic-apps/enterprise-integration/create-integration-account.md @@ -437,7 +437,7 @@ To make this change, you can use either the Azure portal or the Azure CLI. 1. In the Azure portal, open the [Azure Cloud Shell](../../cloud-shell/overview.md) environment. - ![Screenshot shows Azure portal toolbar with selected Cloud Shell optiond.](./media/create-integration-account/open-azure-cloud-shell-window.png) + ![Screenshot shows Azure portal toolbar with selected Cloud Shell options.](./media/create-integration-account/open-azure-cloud-shell-window.png) 1. At the command prompt, enter the [**az resource** command](/cli/azure/resource#az-resource-update), and set `skuName` to the higher tier that you want. diff --git a/articles/logic-apps/estimate-storage-costs.md b/articles/logic-apps/estimate-storage-costs.md index bc5c2c12d2822..5fe4c156676cc 100644 --- a/articles/logic-apps/estimate-storage-costs.md +++ b/articles/logic-apps/estimate-storage-costs.md @@ -15,7 +15,7 @@ ms.date: 01/10/2024 Azure Logic Apps uses [Azure Storage](../storage/index.yml) for any storage operations. In traditional *multitenant* Azure Logic Apps, any storage usage and costs are attached to the logic app. Now, in *single-tenant* Azure Logic Apps, you can use your own storage account. These storage costs are listed separately in your Azure billing invoice. This capability gives you more flexibility and control over your logic app data. > [!NOTE] -> This article applies to workflows in the single-tenant Azure Logic Apps environment. These workflows exist in the same logic app and in a single tenant that share the same storage. For more information, see [Single-tenant versus multitenant in Azure Logic Appst](single-tenant-overview-compare.md). +> This article applies to workflows in the single-tenant Azure Logic Apps environment. These workflows exist in the same logic app and in a single tenant that share the same storage. For more information, see [Single-tenant versus multitenant in Azure Logic Apps](single-tenant-overview-compare.md). Storage costs change based on your workflows' content. Different triggers, actions, and payloads result in different storage operations and needs. This article describes how to estimate your storage costs when you're using your own Azure Storage account with single-tenant based logic apps. First, you can [estimate the number of storage operations you'll perform](#estimate-storage-needs) using the Logic Apps storage calculator. Then, you can [estimate your possible storage costs](#estimate-storage-costs) using these numbers in the Azure pricing calculator. 
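If you already have a Standard logic app running against its own storage account, you can also sanity-check the calculator's estimate against the actual transaction volume that Azure Storage reports. The following Azure CLI sketch assumes a storage account named `mylogicappstorage` in resource group `MyRG`; both names are placeholders.

```azurecli
# Illustrative query (placeholder names): total storage transactions per hour,
# which you can compare against the estimate from the storage calculator.
az monitor metrics list \
  --resource $(az storage account show --name mylogicappstorage --resource-group MyRG --query id --output tsv) \
  --metric Transactions \
  --aggregation Total \
  --interval PT1H \
  --output table
```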
diff --git a/articles/logic-apps/logic-apps-control-flow-run-steps-group-scopes.md b/articles/logic-apps/logic-apps-control-flow-run-steps-group-scopes.md index 8d96679e8c2c6..37e4b85a47777 100644 --- a/articles/logic-apps/logic-apps-control-flow-run-steps-group-scopes.md +++ b/articles/logic-apps/logic-apps-control-flow-run-steps-group-scopes.md @@ -334,7 +334,7 @@ here is the JSON definition for trigger and actions in the previous logic app: }, "else": { "actions": { - "Scope_succeded": { + "Scope_succeeded": { "type": "ApiConnection", "inputs": { "body": { diff --git a/articles/logic-apps/logic-apps-diagnosing-failures.md b/articles/logic-apps/logic-apps-diagnosing-failures.md index 6a5787d89bb30..cdd580652ae36 100644 --- a/articles/logic-apps/logic-apps-diagnosing-failures.md +++ b/articles/logic-apps/logic-apps-diagnosing-failures.md @@ -155,7 +155,7 @@ Standard logic apps store all artifacts in an Azure storage account. You might g | Azure portal location | Error | |-----------------------|-------| -| Overview pane | - **System.private.corelib:Access to the path 'C:\\home\\site\\wwwroot\\hostj.son is denied**

- **Azure.Storage.Blobs: This request is not authorized to perform this operation** | +| Overview pane | - **System.private.corelib:Access to the path 'C:\\home\\site\\wwwroot\\host.json' is denied**

- **Azure.Storage.Blobs: This request is not authorized to perform this operation** | | Workflows pane | - **Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (InternalServerError) from host runtime.'**

- **Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (ServiceUnavailable) from host runtime.'**

- **Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (BadGateway) from host runtime.'** | | During workflow creation and execution | - **Failed to save workflow**

- **Error in the designer: GetCallFailed. Failed fetching operations**

- **ajaxExtended call failed** | diff --git a/articles/logic-apps/logic-apps-perform-data-operations.md b/articles/logic-apps/logic-apps-perform-data-operations.md index 4c0d928d1da30..f067f91e8bfb0 100644 --- a/articles/logic-apps/logic-apps-perform-data-operations.md +++ b/articles/logic-apps/logic-apps-perform-data-operations.md @@ -969,7 +969,7 @@ To try the **Select** action, follow these steps by using the workflow designer. > [!TIP] > > For an example that creates create an array with strings or integers built from the values in a JSON object array, -> see the **Select** and **Initliaze variable** action definitions in +> see the **Select** and **Initialize variable** action definitions in > [Data operation code examples - Select](logic-apps-data-operations-code-samples.md#select-action-example). ### [Consumption](#tab/consumption) diff --git a/articles/logic-apps/rules-engine/add-rules-operators.md b/articles/logic-apps/rules-engine/add-rules-operators.md index 96659b0a3f2ed..ac4279f4c7633 100644 --- a/articles/logic-apps/rules-engine/add-rules-operators.md +++ b/articles/logic-apps/rules-engine/add-rules-operators.md @@ -142,6 +142,6 @@ In the preceding example, the `` function checks th ## Related content - [Create rules with the Microsoft Rules Composer](create-rules.md) -- [Add control functions to actions for optimizing rules exection](add-rules-control-functions.md) +- [Add control functions to actions for optimizing rules execution](add-rules-control-functions.md) - [Test your rulesets](test-rulesets.md) - [Create an Azure Logic Apps Rules Engine project](create-rules-engine-project.md) diff --git a/articles/logic-apps/rules-engine/create-manage-vocabularies.md b/articles/logic-apps/rules-engine/create-manage-vocabularies.md index 6fbe9eba5c6aa..f41e9530a9b04 100644 --- a/articles/logic-apps/rules-engine/create-manage-vocabularies.md +++ b/articles/logic-apps/rules-engine/create-manage-vocabularies.md @@ -34,7 +34,7 @@ This guide shows how to create and define vocabularies that are placed in the sh The terms that you use to define rule conditions and actions are often expressed using domain or industry-specific nomenclature. For example, an e-mail user writes rules using terms such as "messages received from" and "messages received after". An insurance business analyst writes rules using terms such as "risk factors" and "coverage amount". -As another exmaple, a variable for an approval status might point at a certain value in an XML schema. Rather than insert this complex representation in a rule, you might instead create a vocabulary definition that is associated with that variable value, and use "Status" as the friendly name. You can then use "Status" in any number of rules. Technology artifacts, such as XML objects and XML documents, that implement the rule conditions and rule actions lie under this domain-specific terminology. However, the rules engine can retrieve the corresponding data from the table that stores that data. +As another example, a variable for an approval status might point at a certain value in an XML schema. Rather than insert this complex representation in a rule, you might instead create a vocabulary definition that is associated with that variable value, and use "Status" as the friendly name. You can then use "Status" in any number of rules. Technology artifacts, such as XML objects and XML documents, that implement the rule conditions and rule actions lie under this domain-specific terminology. 
However, the rules engine can retrieve the corresponding data from the table that stores that data. Rule conditions and actions are based on data sources that might have detailed, difficult-to-read binding information, which tells the user little or nothing about what the bindings reference. The rules engine empowers you to create vocabularies that simplify the rules development by offering intuitive, domain-specific terminology that you can associate with rule conditions and actions. @@ -96,7 +96,7 @@ When you want to make changes in a vocabulary, create a new vocabulary version t > [!IMPORTANT] > > When you create a new vocabulary version, the rules built using a previous vocabulary version still reference -> the previous version. Make sure that you update the references between thoes rules and the new vocabulary version. +> the previous version. Make sure that you update the references between those rules and the new vocabulary version. ## Create an empty vocabulary version diff --git a/articles/logic-apps/rules-engine/create-rules-engine-project.md b/articles/logic-apps/rules-engine/create-rules-engine-project.md index 6569767508e69..81e3741519b93 100644 --- a/articles/logic-apps/rules-engine/create-rules-engine-project.md +++ b/articles/logic-apps/rules-engine/create-rules-engine-project.md @@ -206,7 +206,7 @@ To reuse existing rules from Microsoft BizTalk Server, you can export them. Howe // Create rules engine instance. var ruleEngine = new RuleEngine(ruleSet: ruleSet); - // Create one or more typedXmlDcoument facts from one or more input XML documents. + // Create one or more typedXmlDocument facts from one or more input XML documents. XmlDocument doc = new XmlDocument(); doc.LoadXml(inputXml); var typedXmlDocument = new TypedXmlDocument(documentType, doc); @@ -219,13 +219,13 @@ To reuse existing rules from Microsoft BizTalk Server, you can export them. Howe // Send back the relevant results (facts). var updatedDoc = typedXmlDocument.Document as XmlDocument; - var ruleExectionOutput = new RuleExecutionResult() + var ruleExecutionOutput = new RuleExecutionResult() { XmlDoc = updatedDoc.OuterXml, PurchaseAmountPostTax = currentPurchase.PurchaseAmount + currentPurchase.GetSalesTax() }; - return Task.FromResult(ruleExectionOutput); + return Task.FromResult(ruleExecutionOutput); } catch(RuleEngineException ruleEngineException) { @@ -324,13 +324,13 @@ To reuse existing rules from Microsoft BizTalk Server, you can export them. Howe 1. The engine uses the **`RuleExecutionResult`** custom class to return the values to the **`RunRules`** method: ```csharp - var ruleExectionOutput = new RuleExecutionResult() + var ruleExecutionOutput = new RuleExecutionResult() { XmlDoc = updatedDoc.OuterXml, PurchaseAmountPostTax = currentPurchase.PurchaseAmount + currentPurchase.GetSalesTax() }; - return Task.FromResult(ruleExectionOutput); + return Task.FromResult(ruleExecutionOutput); ``` 1. Replace the sample function code with your own, and edit the default **`RunRules`** method for your own scenarios. 
diff --git a/articles/logic-apps/rules-engine/create-rules.md b/articles/logic-apps/rules-engine/create-rules.md index 4c3b833bb2426..513ce77d2efab 100644 --- a/articles/logic-apps/rules-engine/create-rules.md +++ b/articles/logic-apps/rules-engine/create-rules.md @@ -403,7 +403,7 @@ For example, suppose you have the following XML schema: ## Related content - [Add arithmetic and logical operators to rules](add-rules-operators.md) -- [Add control functions to actions for optimizing rules exection](add-rules-control-functions.md) +- [Add control functions to actions for optimizing rules execution](add-rules-control-functions.md) - [Perform advanced tasks on rulesets](perform-advanced-ruleset-tasks.md) - [Test your rulesets](test-rulesets.md) - [Create an Azure Logic Apps Rules Engine project](create-rules-engine-project.md) diff --git a/articles/logic-apps/rules-engine/perform-advanced-ruleset-tasks.md b/articles/logic-apps/rules-engine/perform-advanced-ruleset-tasks.md index b69d833d28cc8..2face52ccb301 100644 --- a/articles/logic-apps/rules-engine/perform-advanced-ruleset-tasks.md +++ b/articles/logic-apps/rules-engine/perform-advanced-ruleset-tasks.md @@ -9,7 +9,7 @@ ms.reviewer: estfan, azla ms.topic: how-to ms.date: 06/10/2024 -#CustomerIntent: As a developer, I want to perform more advanced tasks and operations on rulesets using the Microft Rules Composer. +#CustomerIntent: As a developer, I want to perform more advanced tasks and operations on rulesets using the Microsoft Rules Composer. --- # Perform advanced tasks on rulesets with the Microsoft Rules Composer (Preview) @@ -227,7 +227,7 @@ Now, assume that a **Father** instance and a **Son** instance are asserted into > > The **Instance ID** field is only used within the context of a specific rule evaluation. This field > isn't affixed to an object instance across the ruleset execution and isn't related to the order used -> for assesrting objects. Each object instance is evaluated in all rule arguments for that type. +> for asserting objects. Each object instance is evaluated in all rule arguments for that type. ## Related content diff --git a/articles/logic-apps/rules-engine/rules-engine-overview.md b/articles/logic-apps/rules-engine/rules-engine-overview.md index 806206f08eb99..f5543d521a1cf 100644 --- a/articles/logic-apps/rules-engine/rules-engine-overview.md +++ b/articles/logic-apps/rules-engine/rules-engine-overview.md @@ -21,7 +21,7 @@ ms.date: 06/10/2024 Organizations deal with decisions every day, but when you have clear business rules that govern your organization's business logic, these decisions are easier to make. Business rules are the guidelines that shape how a business operates. You can find these rules in manuals, contracts, or agreements, or they can be the unwritten knowledge or expertise of employees. Business rules change over time and affect different types of applications. Many business domains, such as finance, healthcare, insurance, transportation, and telecommunications need to communicate their business rules to their staff so they can implement them in software applications. -Traditional programming languages, such as C++, Java, COBOL, Python, JavaScript, or C#, are designed for programmers. So, nonprogrammers have difficulties changing the business rules that guide how software applications work. These languages also require much time and work to create and update applications. 
However, business rules engines solve this problem by offering a low-code environment that lets you build applications faster and easier. You can use a rules engine to create and change business rules without having to write code or restart the applications that use them. +Traditional programming languages, such as C++, Java, COBOL, Python, JavaScript, or C#, are designed for programmers. So, non-programmers have difficulties changing the business rules that guide how software applications work. These languages also require much time and work to create and update applications. However, business rules engines solve this problem by offering a low-code environment that lets you build applications faster and easier. You can use a rules engine to create and change business rules without having to write code or restart the applications that use them. ## Rules engines in a world of microservices diff --git a/articles/logic-apps/rules-engine/test-rulesets.md b/articles/logic-apps/rules-engine/test-rulesets.md index 20c536f7e6181..1fad7c0d20bb7 100644 --- a/articles/logic-apps/rules-engine/test-rulesets.md +++ b/articles/logic-apps/rules-engine/test-rulesets.md @@ -46,7 +46,7 @@ If you wait to test your rules all at the same time or when you're all done, and > [!NOTE] > > If you assert a derived class into a rule, but the rules are directly written - > against the base class members, a base class instance is assserted instead, + > against the base class members, a base class instance is asserted instead, > and the conditions are evaluated against the base class instance. 1. To remove a fact instance, select the corresponding fact type, and then select **Remove Instance**. diff --git a/articles/nat-gateway/monitor-nat-gateway-reference.md b/articles/nat-gateway/monitor-nat-gateway-reference.md new file mode 100644 index 0000000000000..f5bde200e8152 --- /dev/null +++ b/articles/nat-gateway/monitor-nat-gateway-reference.md @@ -0,0 +1,63 @@ +--- +title: Monitoring data reference for Azure NAT Gateway +description: This article contains important reference material you need when you monitor Azure NAT Gateway by using Azure Monitor. +ms.date: 12/02/2024 +ms.custom: horz-monitor +ms.topic: reference +author: asudbring +ms.author: allensu +ms.service: azure-nat-gateway +--- +# Azure NAT Gateway monitoring data reference + +[!INCLUDE [horz-monitor-ref-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-intro.md)] + +See [Monitor Azure NAT Gateway](monitor-nat-gateway.md) for details on the data you can collect for Azure NAT Gateway and how to use it. + +[!INCLUDE [horz-monitor-ref-metrics-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-intro.md)] + +NAT gateway metrics can be found in the following locations in the Azure portal. + +- **Metrics** page under **Monitoring** from a NAT gateway's resource page. + +- **Insights** page under **Monitoring** from a NAT gateway's resource page. + + :::image type="content" source="./media/nat-metrics/nat-insights-metrics.png" alt-text="Screenshot of the insights and metrics options in NAT gateway overview."::: + +- Azure Monitor page under **Metrics**. + + :::image type="content" source="./media/nat-metrics/azure-monitor.png" alt-text="Screenshot of the metrics section of Azure Monitor."::: + +### Supported metrics for Microsoft.Network/natgateways + +The following table lists the metrics available for the Microsoft.Network/natgateways resource type. 
+ +[!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] + +[!INCLUDE [Microsoft.Network/natgateways](~/reusable-content/ce-skilling/azure/includes/azure-monitor/reference/metrics/microsoft-network-natgateways-metrics-include.md)] + +> [!NOTE] +> Count aggregation is not recommended for any of the NAT gateway metrics. Count aggregation adds up the number of metric values and not the metric values themselves. Use Total aggregation instead to get the best representation of data values for connection count, bytes, and packets metrics. +> +> Use Average for best represented health data for the datapath availability metric. +> +> For information about aggregation types, see [aggregation types](/azure/azure-monitor/essentials/metrics-aggregation-explained#aggregation-types). + +For more information, see [How to use NAT gateway metrics](nat-metrics.md#how-to-use-nat-gateway-metrics). + +[!INCLUDE [horz-monitor-ref-metrics-dimensions-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions-intro.md)] + +[!INCLUDE [horz-monitor-ref-metrics-dimensions](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions.md)] + +- ConnectionState: Attempted, Failed +- Direction: In, Out +- Protocol: 6 TCP, 17 UDP + +[!INCLUDE [horz-monitor-ref-activity-log](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-activity-log.md)] + +- [Microsoft.Network resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftnetwork) + +## Related content + +- See [Monitor Azure NAT Gateway](monitor-nat-gateway.md) for a description of monitoring Azure NAT Gateway. +- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. diff --git a/articles/nat-gateway/monitor-nat-gateway.md b/articles/nat-gateway/monitor-nat-gateway.md new file mode 100644 index 0000000000000..eb98e270bf9a4 --- /dev/null +++ b/articles/nat-gateway/monitor-nat-gateway.md @@ -0,0 +1,49 @@ +--- +title: Monitor Azure NAT Gateway +description: Start here to learn how to monitor Azure NAT Gateway by using the available Azure Monitor metrics and alerts. +ms.date: 12/02/2024 +ms.custom: horz-monitor +ms.topic: conceptual +author: asudbring +ms.author: allensu +ms.service: azure-nat-gateway +--- + +# Monitor Azure NAT Gateway + +[!INCLUDE [azmon-horz-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/azmon-horz-intro.md)] + +## Collect data with Azure Monitor + +This table describes how you can collect data to monitor your service, and what you can do with the data once collected: + +|Data to collect|Description|How to collect and route the data|Where to view the data|Supported data| +|---------|---------|---------|---------|---------| +|Metric data|Metrics are numerical values that describe an aspect of a system at a particular point in time. Metrics can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time.|[- Collected automatically at regular intervals.
- You can route some platform metrics to a Log Analytics workspace to query with other data. Check the **DS export** setting for each metric to see if you can use a diagnostic setting to route the metric data.]|[Metrics explorer](/azure/azure-monitor/essentials/metrics-getting-started)| [Azure NAT Gateway metrics supported by Azure Monitor](/azure/nat-gateway/monitor-nat-gateway-reference#metrics)| +|Resource log data|Logs are recorded system events with a timestamp. Logs can contain different types of data, and be structured or free-form text. You can route resource log data to Log Analytics workspaces for querying and analysis.|[Create a diagnostic setting](/azure/azure-monitor/essentials/create-diagnostic-settings) to collect and route resource log data.| [Log Analytics](/azure/azure-monitor/learn/quick-create-workspace)|[Azure NAT Gateway resource log data supported by Azure Monitor](/azure/nat-gateway/monitor-nat-gateway-reference#activity-log) | +|Activity log data|The Azure Monitor activity log provides insight into subscription-level events. The activity log includes information like when a resource is modified or a virtual machine is started.|- Collected automatically.
- [Create a diagnostic setting](/azure/azure-monitor/essentials/create-diagnostic-settings) to a Log Analytics workspace at no charge.|[Activity log](/azure/azure-monitor/essentials/activity-log)| | + +[!INCLUDE [azmon-horz-supported-data](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/azmon-horz-supported-data.md)] + +## Built in monitoring for Azure NAT Gateway + +[Azure Monitor Network Insights](../network-watcher/network-insights-overview.md) allows you to visualize your Azure infrastructure setup and to review all metrics for your NAT gateway resource from a preconfigured metrics dashboard. These visual tools help you diagnose and troubleshoot any issues with your NAT gateway resource. + +For more information on NAT Gateway Insights, see [Network Insights](nat-metrics.md#network-insights). + +[!INCLUDE [azmon-horz-tools](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/azmon-horz-tools.md)] + +[!INCLUDE [azmon-horz-export-data](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/azmon-horz-export-data.md)] + +[!INCLUDE [azmon-horz-kusto](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/azmon-horz-kusto.md)] + +[!INCLUDE [azmon-horz-alerts-part-one](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/azmon-horz-alerts-part-one.md)] + +[!INCLUDE [azmon-horz-alerts-part-two](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/azmon-horz-alerts-part-two.md)] + +[!INCLUDE [azmon-horz-advisor](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/azmon-horz-advisor.md)] + +## Related content + +- See [Azure NAT Gateway monitoring data reference](monitor-nat-gateway-reference.md) for a reference of the metrics, logs, and other important values created for Azure NAT Gateway. +- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources. \ No newline at end of file diff --git a/articles/nat-gateway/nat-metrics.md b/articles/nat-gateway/nat-metrics.md index 36f6bb139f768..72c8aeb0f0478 100644 --- a/articles/nat-gateway/nat-metrics.md +++ b/articles/nat-gateway/nat-metrics.md @@ -5,7 +5,7 @@ description: Get started learning about Azure Monitor metrics and alerts availab author: asudbring ms.service: azure-nat-gateway ms.topic: how-to -ms.date: 04/29/2024 +ms.date: 09/16/2024 ms.author: allensu # Customer intent: As an IT administrator, I want to understand available Azure Monitor metrics and alerts for Virtual Network NAT. 
--- @@ -25,16 +25,9 @@ Azure NAT Gateway provides the following diagnostic capabilities: ## Metrics overview -NAT gateway provides the following multi-dimensional metrics in Azure Monitor: +[!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] -| Metric | Description | Recommended aggregation | Dimensions | -|---|---|---|---| -| Bytes | Bytes processed inbound and outbound | Sum | **Direction (In; Out)**, **Protocol (6 TCP; 17 UDP)** | -| Packets | Packets processed inbound and outbound | Sum | **Direction (In; Out)**, **Protocol (6 TCP; 17 UDP)** | -| Dropped Packets | Packets dropped by the NAT gateway | Sum | / | -| SNAT Connection Count | Number of new SNAT connections over a given interval of time | Sum | **Connection State (Attempted, Failed)**, **Protocol (6 TCP; 17 UDP)** | -| Total SNAT Connection Count | Total number of active SNAT connections | Sum | **Protocol (6 TCP; 17 UDP)** | -| Datapath Availability | Availability of the data path of the NAT gateway. Used to determine whether the NAT gateway endpoints are available for outbound traffic flow. | Avg | **Availability (0, 100)** | +[!INCLUDE [Microsoft.Network/natgateways](~/reusable-content/ce-skilling/azure/includes/azure-monitor/reference/metrics/microsoft-network-natgateways-metrics-include.md)] >[!NOTE] > Count aggregation is not recommended for any of the NAT gateway metrics. Count aggregation adds up the number of metric values and not the metric values themselves. Use Sum aggregation instead to get the best representation of data values for connection count, bytes, and packets metrics. @@ -77,7 +70,7 @@ The following sections detail how to use each NAT gateway metric to monitor, man ### Bytes -The **Bytes** metric shows you the amount of data going outbound through NAT gateway and returning inbound in response to an outbound connection. +The Bytes metric shows you the amount of data going outbound through NAT gateway and returning inbound in response to an outbound connection. Use this metric to: @@ -99,11 +92,11 @@ To view the amount of data passing through NAT gateway: 1. To see data processed inbound or outbound as their own individual lines in the metric graph, select **Apply splitting**. -1. In the **Values** drop-down menu, select **Direction (Out | In)**. +1. In the **Values** drop-down menu, select **Direction (Out | In)**. ### Packets -The packets metric shows you the number of data packets passing through NAT gateway. +The Packets metric shows you the number of data packets passing through NAT gateway. Use this metric to: @@ -113,9 +106,9 @@ Use this metric to: To view the number of packets sent in one or both directions through NAT gateway, follow the same steps in the [Bytes](#bytes) section. -### Dropped packets +### Dropped Packets -The dropped packets metric shows you the number of data packets dropped by NAT gateway when traffic goes outbound or returns inbound in response to an outbound connection. +The Dropped Packets metric shows you the number of data packets dropped by NAT gateway when traffic goes outbound or returns inbound in response to an outbound connection. Use this metric to: @@ -127,9 +120,9 @@ Possible reasons for dropped packets: - Outbound connectivity failure can cause packets to drop. Connectivity failure can happen for various reasons. See the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity) to help you further diagnose. 
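To check this metric outside the portal, the following Azure CLI sketch is one option; it isn't part of the original steps. It assumes the Dropped Packets metric is exposed under the metric name `PacketDropCount` and uses placeholder resource names, so confirm the exact metric name in the metrics overview table before relying on it.

```azurecli
# Hedged example: query dropped packets for a NAT gateway over the last hour.
# <subscription-id>, <resource-group>, and <nat-gateway-name> are placeholders, and
# PacketDropCount is an assumed metric name for the Dropped Packets metric.
# "Total" is the CLI equivalent of the Sum aggregation shown in the portal.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/natGateways/<nat-gateway-name>" \
  --metric "PacketDropCount" \
  --aggregation Total \
  --interval PT5M \
  --offset 1h \
  --output table
```

A sustained nonzero value here, combined with the possible causes listed above, is a signal to continue with the troubleshooting guide.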
-### SNAT connection count +### SNAT Connection Count -The SNAT connection count metric shows you the number of new SNAT connections within a specified time frame. This metric can be filtered by **Attempted** and **Failed** connection states. A failed connection volume greater than zero can indicate SNAT port exhaustion. +The SNAT Connection Count metric shows you the number of new SNAT connections within a specified time frame. This metric can be filtered by **Attempted** and **Failed** connection states. A failed connection volume greater than zero can indicate SNAT port exhaustion. Use this metric to: @@ -159,9 +152,9 @@ To view the connection state of your connections: :::image type="content" source="./media/nat-metrics/nat-metrics-3.png" alt-text="Screenshot of the metrics configuration."::: -### Total SNAT connection count +### Total SNAT Connection Count -The **Total SNAT connection count** metric shows you the total number of active SNAT connections passing through NAT gateway. +The Total SNAT Connection Count metric shows you the total number of active SNAT connections passing through NAT gateway. You can use this metric to: @@ -178,7 +171,7 @@ Possible reasons for failed connections: >[!NOTE] > When NAT gateway is attached to a subnet and public IP address, the Azure platform verifies NAT gateway is healthy by conducting health checks. These health checks appear in NAT gateway's SNAT Connection Count metrics. The amount of health check related connections may vary as the health check service is optimized, but is negligible and doesn’t impact NAT gateway’s ability to connect outbound. -### Datapath availability +### Datapath Availability The datapath availability metric measures the health of the NAT gateway resource over time. This metric indicates if NAT gateway is available for directing outbound traffic to the internet. This metric is a reflection of the health of the Azure infrastructure. @@ -238,7 +231,7 @@ Setting the aggregation granularity to less than 5 minutes may trigger false pos ### Alerts for SNAT port exhaustion -Set up an alert on the **SNAT connection count** metric to notify you of connection failures on your NAT gateway. A failed connection volume greater than zero can indicate that you reached the connection limit on your NAT gateway or that you hit SNAT port exhaustion. Investigate further to determine the root cause of these failures. +Set up an alert on the **SNAT Connection Count** metric to notify you of connection failures on your NAT gateway. A failed connection volume greater than zero can indicate that you reached the connection limit on your NAT gateway or that you hit SNAT port exhaustion. Investigate further to determine the root cause of these failures. To create the alert, use the following steps: @@ -317,7 +310,7 @@ For more information on what each metric is showing you and how to analyze these ### What type of metrics are available for NAT gateway? -The NAT gateway supports [multi-dimensional metrics](/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics). You can filter the multi-dimensional metrics by different dimensions to gain greater insight into the provided data. The [SNAT connection count](#snat-connection-count) metric allows you to filter the connections by Attempted and Failed connections, enabling you to distinguish between different types of connections made by the NAT gateway. 
+The NAT gateway supports [multi-dimensional metrics](/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics). You can filter the multi-dimensional metrics by different dimensions to gain greater insight into the provided data. The [SNAT Connection Count](#snat-connection-count) metric allows you to filter the connections by Attempted and Failed connections, enabling you to distinguish between different types of connections made by the NAT gateway. Refer to the dimensions column in the [metrics overview](#metrics-overview) table to see which dimensions are available for each NAT gateway metric. diff --git a/articles/nat-gateway/toc.yml b/articles/nat-gateway/toc.yml index 33282a6ce1abd..f2424bf21fb5f 100644 --- a/articles/nat-gateway/toc.yml +++ b/articles/nat-gateway/toc.yml @@ -52,10 +52,12 @@ items: href: nat-gateway-snat.md - name: NAT gateway design guidance href: nat-gateway-design.md -- name: How-to - items: - name: Metrics and alerts href: nat-metrics.md +- name: How-to + items: + - name: Monitor NAT gateway + href: monitor-nat-gateway.md - name: Resource health href: resource-health.md - name: Manage NAT gateway @@ -92,6 +94,8 @@ items: href: /azure/templates/microsoft.network/allversions - name: Code samples href: https://azure.microsoft.com/resources/samples/?service=virtual-network + - name: Monitoring data reference + href: monitor-nat-gateway-reference.md - name: Resources items: - name: Build your skills with Microsoft Learn training diff --git a/articles/operator-nexus/quickstarts-virtual-machine-deployment-arm.md b/articles/operator-nexus/quickstarts-virtual-machine-deployment-arm.md index 6b949ac3375dd..10fa370b62108 100644 --- a/articles/operator-nexus/quickstarts-virtual-machine-deployment-arm.md +++ b/articles/operator-nexus/quickstarts-virtual-machine-deployment-arm.md @@ -26,6 +26,9 @@ Before deploying the virtual machine template, let's review the content to under :::code language="json" source="includes/virtual-machine/virtual-machine-arm-template.json"::: +> [!IMPORTANT] +> Please ensure that any sensitive data, such as secrets, passwords, private keys, and so on, are encrypted before they are submitted in the userData or networkData fields. + Once you have reviewed and saved the template file named ```virtual-machine-arm-template.json```, proceed to the next section to deploy the template. ## Deploy the template diff --git a/articles/operator-nexus/quickstarts-virtual-machine-deployment-bicep.md b/articles/operator-nexus/quickstarts-virtual-machine-deployment-bicep.md index d3113aeeb752d..e026346b7890d 100644 --- a/articles/operator-nexus/quickstarts-virtual-machine-deployment-bicep.md +++ b/articles/operator-nexus/quickstarts-virtual-machine-deployment-bicep.md @@ -26,6 +26,9 @@ Before deploying the virtual machine template, let's review the content to under :::code language="bicep" source="includes/virtual-machine/virtual-machine-bicep-template.bicep"::: +> [!IMPORTANT] +> Please ensure that any sensitive data, such as secrets, passwords, private keys, and so on, are encrypted before they are submitted in the userData or networkData fields. + Once you have reviewed and saved the template file named ```virtual-machine-bicep-template.bicep```, proceed to the next section to deploy the template. 
## Deploy the template diff --git a/articles/operator-nexus/quickstarts-virtual-machine-deployment-cli.md b/articles/operator-nexus/quickstarts-virtual-machine-deployment-cli.md index 74a3e73ae56b8..3888928bcc976 100644 --- a/articles/operator-nexus/quickstarts-virtual-machine-deployment-cli.md +++ b/articles/operator-nexus/quickstarts-virtual-machine-deployment-cli.md @@ -45,6 +45,9 @@ Before you run the commands, you need to set several variables to define the con | ACR_USERNAME | The username for the Azure Container Registry. | | ACR_PASSWORD | The password for the Azure Container Registry. | +> [!IMPORTANT] +> Please ensure that any sensitive data, such as secrets, passwords, private keys, and so on, are encrypted before they are submitted in the userData or networkData fields. + Once you've defined these variables, you can run the Azure CLI command to create the virtual machine. Add the ```--debug``` flag at the end to provide more detailed output for troubleshooting purposes. To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example: diff --git a/articles/operator-nexus/quickstarts-virtual-machine-deployment-ps.md b/articles/operator-nexus/quickstarts-virtual-machine-deployment-ps.md index 66900a1b86201..690f51895c254 100644 --- a/articles/operator-nexus/quickstarts-virtual-machine-deployment-ps.md +++ b/articles/operator-nexus/quickstarts-virtual-machine-deployment-ps.md @@ -53,6 +53,9 @@ Before you run the commands, you need to set several variables to define the con | NETWORKATTACHMENTNAME | The name of the Network to attach for workload. | | NETWORKDATA | The base64 encoded string of cloud-init network data. | +> [!IMPORTANT] +> Please ensure that any sensitive data, such as secrets, passwords, private keys, and so on, are encrypted before they are submitted in the userData or networkData fields. + Once you've defined these variables, you can run the Azure PowerShell command to create the virtual machine. Add the ```-Debug``` flag at the end to provide more detailed output for troubleshooting purposes. To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example: diff --git a/articles/orbital/initiate-licensing.md b/articles/orbital/initiate-licensing.md index 85a657c8b0246..e3a55be2ccff3 100644 --- a/articles/orbital/initiate-licensing.md +++ b/articles/orbital/initiate-licensing.md @@ -12,6 +12,9 @@ ms.author: mosagie # Initiate ground station licensing +> [!NOTE] +> [Azure Orbital Ground Station is retiring on December 18th, 2024](https://azure.microsoft.com/updates?id=Azure-Orbital-Ground-Station-Retirement) and has stopped issuing new authorizations. + ## About satellite and ground station licensing Both satellites and ground stations require authorizations from federal regulators and other government agencies to operate. 
diff --git a/articles/orbital/overview.md b/articles/orbital/overview.md index 2e4ccc4d14801..2af3b6d674446 100644 --- a/articles/orbital/overview.md +++ b/articles/orbital/overview.md @@ -12,6 +12,10 @@ ms.author: mosagie # Azure Orbital Ground Station overview +> [!NOTE] +> [Azure Orbital Ground Station is retiring on December 18th, 2024](https://azure.microsoft.com/updates?id=Azure-Orbital-Ground-Station-Retirement) and has stopped issuing new authorizations. + + > [!VIDEO https://www.youtube.com/embed/hQbGZi9iwE4] With Azure Orbital Ground Station, your space data is delivered with near-zero latency to your Azure region over the secure and highly available Microsoft network. Azure Orbital Ground Station supports both Microsoft and industry leading Partner ground station networks, ensuring access to the best sites and networks to support your space missions. Deploying and operating a large, globally distributed ground station solution for your space mission can now be done with the reliability and flexibility of the cloud—at any classification level. diff --git a/articles/orbital/register-spacecraft.md b/articles/orbital/register-spacecraft.md index e3f15bf1817d5..46e7d88158def 100644 --- a/articles/orbital/register-spacecraft.md +++ b/articles/orbital/register-spacecraft.md @@ -12,6 +12,9 @@ ms.author: mosagie # Create and authorize a spacecraft resource +> [!NOTE] +> [Azure Orbital Ground Station is retiring on December 18th, 2024](https://azure.microsoft.com/updates?id=Azure-Orbital-Ground-Station-Retirement) and has stopped issuing new authorizations. + To contact a satellite, it must be registered and authorized as a spacecraft resource with Azure Orbital Ground Station. ## Prerequisites diff --git a/articles/partner-solutions/qumulo/qumulo-virtual-desktop.md b/articles/partner-solutions/qumulo/qumulo-virtual-desktop.md index 67a6f005f6387..459a6ee865f7b 100644 --- a/articles/partner-solutions/qumulo/qumulo-virtual-desktop.md +++ b/articles/partner-solutions/qumulo/qumulo-virtual-desktop.md @@ -71,7 +71,7 @@ The solution architecture comprises the following components: - [FSLogix Profile](/fslogix/overview-what-is-fslogix) [Containers](/fslogix/concepts-container-types#profile-container) to connect each AVD user to their assigned profile on the ANQ storage as part of the sign-in process. - [Microsoft Entra Domain Services](/azure/active-directory-domain-services/overview) to provide user authentication and manage access to Azure-based resources. - [Azure Virtual Networking](/azure/virtual-network/virtual-networks-overview) -- [VNet Injection](../../spring-apps/enterprise/how-to-deploy-in-azure-virtual-network.md?tabs=azure-portal) to connect each region’s ANQ instance to the customer’s own Azure subscription resources. +- [VNet Injection](../../spring-apps/basic-standard/how-to-deploy-in-azure-virtual-network.md?tabs=azure-portal) to connect each region’s ANQ instance to the customer’s own Azure subscription resources. ## Considerations diff --git a/articles/reliability/reliability-spring-apps.md b/articles/reliability/reliability-spring-apps.md index f0c0c96d1c7bf..46ee57f3934b6 100644 --- a/articles/reliability/reliability-spring-apps.md +++ b/articles/reliability/reliability-spring-apps.md @@ -77,7 +77,7 @@ To create a service in Azure Spring Apps with zone-redundancy enabled using the ### Enable your own resource with availability zones enabled -You can enable your own resource in Azure Spring Apps, such as your own persistent storage. 
However, you must make sure to enable zone-redundancy for your resource. For more information, see [How to enable your own persistent storage in Azure Spring Apps](../spring-apps/enterprise/how-to-custom-persistent-storage.md). +You can enable your own resource in Azure Spring Apps, such as your own persistent storage. However, you must make sure to enable zone-redundancy for your resource. For more information, see [How to enable your own persistent storage in Azure Spring Apps](../spring-apps/basic-standard/how-to-custom-persistent-storage.md). ### Zone down experience @@ -117,7 +117,7 @@ Use the following steps to create an Azure Traffic Manager instance for Azure Sp | service-sample-a | East US | gateway / auth-service / account-service | | service-sample-b | West Europe | gateway / auth-service / account-service | -1. Set up a custom domain for the service instances. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](../spring-apps/enterprise/how-to-custom-domain.md). After successful setup, both service instances will bind to the same custom domain, such as `bcdr-test.contoso.com`. +1. Set up a custom domain for the service instances. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](../spring-apps/basic-standard/how-to-custom-domain.md). After successful setup, both service instances will bind to the same custom domain, such as `bcdr-test.contoso.com`. 1. Create a traffic manager and two endpoints. For instructions, see [Quickstart: Create a Traffic Manager profile using the Azure portal](../traffic-manager/quickstart-create-traffic-manager-profile.md), which produces the following Traffic Manager profile: @@ -137,12 +137,12 @@ The environment is now set up. If you used the example values in the linked arti Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. Azure Front Door provides the same multi-geo redundancy and routing to the closest region as Azure Traffic Manager. Azure Front Door also provides advanced features such as TLS protocol termination, application layer processing, and Web Application Firewall (WAF). For more information, see [What is Azure Front Door?](../frontdoor/front-door-overview.md) -The following diagram shows the architecture of a multi-region redundancy, virtual-network-integrated Azure Spring Apps service instance. The diagram shows the correct reverse proxy configuration for Application Gateway and Front Door with a custom domain. This architecture is based on the scenario described in [Expose applications with end-to-end TLS in a virtual network](../spring-apps/enterprise/expose-apps-gateway-end-to-end-tls.md). This approach combines two Application-Gateway-integrated Azure Spring Apps virtual-network-injection instances into a geo-redundant instance. +The following diagram shows the architecture of a multi-region redundancy, virtual-network-integrated Azure Spring Apps service instance. The diagram shows the correct reverse proxy configuration for Application Gateway and Front Door with a custom domain. This architecture is based on the scenario described in [Expose applications with end-to-end TLS in a virtual network](../spring-apps/basic-standard/expose-apps-gateway-end-to-end-tls.md). This approach combines two Application-Gateway-integrated Azure Spring Apps virtual-network-injection instances into a geo-redundant instance. 
:::image type="content" source="media/reliability-spring-apps/multi-region-spring-apps-reference-architecture.png" alt-text="Diagram showing the architecture of a multi-region Azure Spring Apps service instance." lightbox="media/reliability-spring-apps/multi-region-spring-apps-reference-architecture.png"::: ## Next steps -- [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](../spring-apps/enterprise/quickstart.md) +- [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](../spring-apps/basic-standard/quickstart.md) - [Reliability in Azure](./overview.md) diff --git a/articles/sap/center-sap-solutions/quickstart-register-system-powershell.md b/articles/sap/center-sap-solutions/quickstart-register-system-powershell.md index 59d5217a9a98e..749835423b178 100644 --- a/articles/sap/center-sap-solutions/quickstart-register-system-powershell.md +++ b/articles/sap/center-sap-solutions/quickstart-register-system-powershell.md @@ -61,6 +61,7 @@ To register an existing SAP system in Azure Center for SAP solutions: -Tag @{k1 = "v1"; k2 = "v2"} ` -ManagedResourceGroupName "acss-L46-rg" ` -ManagedRgStorageAccountName 'acssstoragel46' ` + -ManagedResourcesNetworkAccessType 'private' ` -IdentityType 'UserAssigned' ` -UserAssignedIdentity @{'/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ACSS-MSI'= @{}} ` ``` @@ -101,7 +102,8 @@ To register an existing SAP system in Azure Center for SAP solutions: - **Environment** is used to specify the type of SAP environment you're registering. Valid values are *NonProd* and *Prod*. - **SapProduct** is used to specify the type of SAP product you're registering. Valid values are *S4HANA*, *ECC*, *Other*. - **ManagedResourceGroupName** is used to specify the name of the managed resource group which is deployed by ACSS service in your Subscription. This RG is unique for each SAP system (SID) you register. If you don't specify the name, ACSS service sets a name with this naming convention 'mrg-{SID}-{random string}'. - - **ManagedRgStorageAccountName** is used to specify the name of the Storage Account which is deployed into the managed resource group. This storage account is unique for each SAP system (SID) you register. ACSS service sets a default name using '{SID}{random string}' naming convention. + - **ManagedRgStorageAccountName** is used to specify the name of the Storage Account which is deployed into the managed resource group. This storage account is unique for each SAP system (SID) you register. ACSS service sets a default name using '{SID}{random string}' naming convention. + - **ManagedResourcesNetworkAccessType** specifies the network access configuration for the resources that will be deployed in the Managed Resource Group. The options to choose from are Public and Private. If 'Private' is chosen, the Storage Account service tag should be enabled on the subnets in which the SAP VMs exist. This is required for establishing connectivity between VM extensions and the managed resource group storage account. This setting is currently applicable only to Storage Account. 3. Once you trigger the registration process, you can view its status by getting the status of the Virtual Instance for SAP solutions resource that gets deployed as part of the registration process. 
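As a hedged illustration of the subnet requirement described for `ManagedResourcesNetworkAccessType` above, the following Azure CLI sketch adds an outbound allow rule for the **Storage** service tag to a network security group attached to an SAP VM subnet. It's shown with the Azure CLI for brevity; an equivalent rule can be created with Azure PowerShell or the portal. All names and the priority are placeholders, and your environment may already allow this traffic.

```azurecli
# Hedged example: allow outbound HTTPS from the SAP subnet to the Storage service tag
# so VM extensions can reach the managed resource group storage account.
# <resource-group> and <nsg-name> are placeholders for your own resources.
az network nsg rule create \
  --resource-group "<resource-group>" \
  --nsg-name "<nsg-name>" \
  --name "AllowStorageServiceTagOutbound" \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --source-port-ranges "*" \
  --destination-address-prefixes Storage \
  --destination-port-ranges 443
```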
diff --git a/articles/service-connector/includes/code-keyvault-me-id.md b/articles/service-connector/includes/code-keyvault-me-id.md index 7f65d624a6d81..c187539fcd5b0 100644 --- a/articles/service-connector/includes/code-keyvault-me-id.md +++ b/articles/service-connector/includes/code-keyvault-me-id.md @@ -94,7 +94,7 @@ ms.author: wchi ### [SpringBoot](#tab/springBoot) -Refer to [Tutorial: Connect Azure Spring Apps to Key Vault using managed identities](../../spring-apps/enterprise/tutorial-managed-identities-key-vault.md?tabs=system-assigned-managed-identity) to set up your Spring application. Two sets of configuration properties are added to Spring Apps by Service Connector, according to Spring Cloud Azure version below 4.0 and above 4.0. For more information, check [Migration Guide for 4.0](https://microsoft.github.io/spring-cloud-azure/current/reference/html/appendix.html#configuration-spring-cloud-azure-starter-keyvault-secrets) +Refer to [Tutorial: Connect Azure Spring Apps to Key Vault using managed identities](../../spring-apps/basic-standard/tutorial-managed-identities-key-vault.md?tabs=system-assigned-managed-identity) to set up your Spring application. Two sets of configuration properties are added to Spring Apps by Service Connector, according to Spring Cloud Azure version below 4.0 and above 4.0. For more information, check [Migration Guide for 4.0](https://microsoft.github.io/spring-cloud-azure/current/reference/html/appendix.html#configuration-spring-cloud-azure-starter-keyvault-secrets) ### [Python](#tab/python) diff --git a/articles/service-connector/includes/code-mysql-me-id.md b/articles/service-connector/includes/code-mysql-me-id.md index 4b4c1909d0bf4..a7a8267128ba7 100644 --- a/articles/service-connector/includes/code-mysql-me-id.md +++ b/articles/service-connector/includes/code-mysql-me-id.md @@ -77,7 +77,7 @@ For more information, see [Use Java and JDBC with Azure Database for MySQL - Fle For a Spring application, if you create a connection with option `--client-type springboot`, Service Connector sets the properties `spring.datasource.azure.passwordless-enabled`, `spring.datasource.url`, and `spring.datasource.username` to Azure Spring Apps. -Update your application following the tutorial [Connect an Azure Database for MySQL instance to your application in Azure Spring Apps](../../spring-apps/enterprise/how-to-bind-mysql.md#prepare-your-project). Remember to remove the `spring.datasource.password` configuration property if it was set before and add the correct dependencies to your Spring application. +Update your application following the tutorial [Connect an Azure Database for MySQL instance to your application in Azure Spring Apps](../../spring-apps/basic-standard/how-to-bind-mysql.md#prepare-your-project). Remember to remove the `spring.datasource.password` configuration property if it was set before and add the correct dependencies to your Spring application. 
For more tutorials, see [Use Spring Data JDBC with Azure Database for MySQL](/azure/developer/java/spring-framework/configure-spring-data-jdbc-with-azure-mysql?tabs=passwordless%2Cservice-connector&pivots=mysql-passwordless-flexible-server#store-data-from-azure-database-for-mysql) diff --git a/articles/service-connector/includes/code-postgres-me-id.md b/articles/service-connector/includes/code-postgres-me-id.md index 203f2a03bd8d9..b1de0d6c92c8c 100644 --- a/articles/service-connector/includes/code-postgres-me-id.md +++ b/articles/service-connector/includes/code-postgres-me-id.md @@ -90,7 +90,7 @@ For more information, see the following resources: For a Spring application, if you create a connection with option `--client-type springboot`, Service Connector sets the properties `spring.datasource.azure.passwordless-enabled`, `spring.datasource.url`, and `spring.datasource.username` to Azure Spring Apps. -Update your application following the tutorial [Bind an Azure Database for PostgreSQL to your application in Azure Spring Apps](../../spring-apps/enterprise/how-to-bind-postgres.md#prepare-your-project). Remember to remove the `spring.datasource.password` configuration property if it was set before and add the correct dependencies to your Spring application. +Update your application following the tutorial [Bind an Azure Database for PostgreSQL to your application in Azure Spring Apps](../../spring-apps/basic-standard/how-to-bind-postgres.md#prepare-your-project). Remember to remove the `spring.datasource.password` configuration property if it was set before and add the correct dependencies to your Spring application. For more tutorials, see [Use Spring Data JDBC with Azure Database for PostgreSQL](/azure/developer/java/spring-framework/configure-spring-data-jdbc-with-azure-postgresql?tabs=passwordless%2Cservice-connector&pivots=postgresql-passwordless-flexible-server#store-data-from-azure-database-for-postgresql) and [Tutorial: Deploy a Spring application to Azure Spring Apps with a passwordless connection to an Azure database](/azure/developer/java/spring-framework/deploy-passwordless-spring-database-app?tabs=postgresq). diff --git a/articles/service-connector/quickstart-cli-spring-cloud-connection.md b/articles/service-connector/quickstart-cli-spring-cloud-connection.md index d204cd448eb0b..0c8d838cd2e27 100644 --- a/articles/service-connector/quickstart-cli-spring-cloud-connection.md +++ b/articles/service-connector/quickstart-cli-spring-cloud-connection.md @@ -20,7 +20,7 @@ Service Connector lets you quickly connect compute services to cloud services, w - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] -- At least one application hosted by Azure Spring Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [deploy your first application to Azure Spring Apps](../spring-apps/enterprise/quickstart.md). +- At least one application hosted by Azure Spring Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [deploy your first application to Azure Spring Apps](../spring-apps/basic-standard/quickstart.md). 
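For orientation before the environment preparation that follows, here's a hedged sketch of the kind of command this quickstart builds toward: using Service Connector to connect an app hosted in Azure Spring Apps to an Azure Key Vault with a system-assigned managed identity. All resource names are placeholders, and the target service and parameters used in the actual steps may differ.

```azurecli
# Hedged example: create a Service Connector connection from an Azure Spring Apps app
# to a Key Vault by using a system-assigned managed identity. All names are placeholders.
az spring connection create keyvault \
  --resource-group "<spring-apps-resource-group>" \
  --service "<spring-apps-instance-name>" \
  --app "<app-name>" \
  --target-resource-group "<key-vault-resource-group>" \
  --vault "<key-vault-name>" \
  --client-type springboot \
  --system-identity
```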
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] diff --git a/articles/service-connector/quickstart-portal-spring-cloud-connection.md b/articles/service-connector/quickstart-portal-spring-cloud-connection.md index 1db4a60049b03..45f67101d93bd 100644 --- a/articles/service-connector/quickstart-portal-spring-cloud-connection.md +++ b/articles/service-connector/quickstart-portal-spring-cloud-connection.md @@ -22,7 +22,7 @@ This quickstart shows you how to connect Azure Spring Apps to other Cloud resour ## Prerequisites - An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free). -- An app deployed to [Azure Spring Apps](../spring-apps/enterprise/quickstart.md) in a [region supported by Service Connector](./concept-region-support.md). +- An app deployed to [Azure Spring Apps](../spring-apps/basic-standard/quickstart.md) in a [region supported by Service Connector](./concept-region-support.md). - A target resource to connect Azure Spring Apps to. For example, a [Azure Key Vault](/azure/key-vault/general/quick-create-portal). ## Sign in to Azure diff --git a/articles/service-connector/toc.yml b/articles/service-connector/toc.yml index a966b21c43ce4..64c825071ae5d 100644 --- a/articles/service-connector/toc.yml +++ b/articles/service-connector/toc.yml @@ -84,7 +84,7 @@ items: expanded: false items: - name: Java app to PostgreSQL - href: ../spring-apps/enterprise/how-to-bind-postgres.md?bc=%2fazure%2fservice-connector%2fbreadcrumb%2ftoc.json&tabs=Passwordlessflex&toc=%2fazure%2fservice-connector%2fTOC.json + href: ../spring-apps/basic-standard/how-to-bind-postgres.md?bc=%2fazure%2fservice-connector%2fbreadcrumb%2ftoc.json&tabs=Passwordlessflex&toc=%2fazure%2fservice-connector%2fTOC.json - name: Spring Boot app to Kafka on Confluent Cloud href: tutorial-java-spring-confluent-kafka.md - name: Spring app to MySQL diff --git a/articles/service-connector/tutorial-java-spring-confluent-kafka.md b/articles/service-connector/tutorial-java-spring-confluent-kafka.md index e36eef99c12d1..b62b4eec703fc 100644 --- a/articles/service-connector/tutorial-java-spring-confluent-kafka.md +++ b/articles/service-connector/tutorial-java-spring-confluent-kafka.md @@ -78,7 +78,7 @@ Create an instance of Apache Kafka for Confluent Cloud by following [this guidan ### Create an Azure Spring Apps instance -Create an instance of Azure Spring Apps by following [the Azure Spring Apps quickstart](../spring-apps/enterprise/quickstart.md) in Java. Make sure your Azure Spring Apps instance is created in [a region that has Service Connector support](concept-region-support.md). +Create an instance of Azure Spring Apps by following [the Azure Spring Apps quickstart](../spring-apps/basic-standard/quickstart.md) in Java. Make sure your Azure Spring Apps instance is created in [a region that has Service Connector support](concept-region-support.md). ## Build and deploy the app diff --git a/articles/spring-apps/index.yml b/articles/spring-apps/index.yml index 08920a71e11ae..d589b061f9ba8 100644 --- a/articles/spring-apps/index.yml +++ b/articles/spring-apps/index.yml @@ -24,7 +24,7 @@ highlightedContent: url: basic-standard/retirement-announcement.md - title: What is Azure Spring Apps? 
itemType: overview - url: enterprise/overview.md + url: basic-standard/overview.md # Card - title: Basic/Standard plan itemType: overview diff --git a/articles/storage/blobs/immutable-storage-overview.md b/articles/storage/blobs/immutable-storage-overview.md index aff64c0c113ed..64033208eb794 100644 --- a/articles/storage/blobs/immutable-storage-overview.md +++ b/articles/storage/blobs/immutable-storage-overview.md @@ -179,7 +179,7 @@ If you fail to pay your bill and your account has an active time-based retention This feature is incompatible with point in time restore and last access tracking. This feature is compatible with customer-managed unplanned failover, however, any changes that are made to the immutable policy after the last sync time (such as locking a time based retention policy, extending it, etc.) will not be synced to the secondary region. Once failover is completed, you can redo the changes to the secondary region to ensure it is up-to-date with your immutability requirements. Immutability policies aren't supported in accounts that have Network File System (NFS) 3.0 protocol or the SSH File Transfer Protocol (SFTP) enabled on them. -Some workloads, such as SQL Backup to URL, create a blob and then add to it. If a container has an active time-based retention policy or legal hold in place, this pattern won't succeed. See the Allow protected append blob writes for more detail. +Some workloads, such as SQL Backup to URL, create a blob and then add to it. If a container has an active time-based retention policy or legal hold in place, this pattern won't succeed. For more information, see [Allow protected append blob writes](immutable-container-level-worm-policies.md#allow-protected-append-blobs-writes). For more information, see [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md). diff --git a/articles/storage/blobs/network-file-system-protocol-support-how-to.md b/articles/storage/blobs/network-file-system-protocol-support-how-to.md index ce22a92a6797a..669eff5a72ccd 100644 --- a/articles/storage/blobs/network-file-system-protocol-support-how-to.md +++ b/articles/storage/blobs/network-file-system-protocol-support-how-to.md @@ -29,7 +29,7 @@ Currently, the only way to secure the data in your storage account is by using a Any other tools used to secure data, including account key authorization, Microsoft Entra security, and access control lists (ACLs) can't be used to authorize an NFS 3.0 request. In fact, if you add an entry for a named user or group to the ACL of a blob or directory, that file becomes inaccessible on the client for non-root users. You would have to remove that entry to restore access to non-root users on the client. > [!IMPORTANT] -> The NFS 3.0 protocol uses ports 111 and 2049. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports. +> The NFS 3.0 protocol uses ports 111 and 2048. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports. 
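A hedged way to act on this note, outside the official steps, is to list the rules on any network security group associated with a granted subnet and confirm that nothing blocks the ports called out above. The names below are placeholders.

```azurecli
# Hedged example: review NSG rules (including default rules) on a granted subnet's NSG
# to confirm that inbound traffic on ports 111 and 2048 isn't blocked.
# <resource-group> and <nsg-name> are placeholders.
az network nsg rule list \
  --resource-group "<resource-group>" \
  --nsg-name "<nsg-name>" \
  --include-default \
  --output table
```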
## Step 3: Create and configure a storage account

@@ -148,7 +148,7 @@ Create a directory on your Linux system and then mount the container in the stor
 |`NFS3ERR_IO/EIO ("Input/output error"`) |This error can appear when a client attempts to read, write, or set attributes on blobs that are stored in the archive access tier. |
 |`OperationNotSupportedOnSymLink` error| This error can be returned during a write operation via a Blob Storage or Azure Data Lake Storage API. Using these APIs to write or delete symbolic links that are created by using NFS 3.0 is not allowed. Make sure to use the NFS 3.0 endpoint to work with symbolic links. |
 |`mount: /nfsdata: bad option;`| Install the NFS helper program by using `sudo apt install nfs-common`.|
-|`Connection Timed Out`| Make sure that client allows outgoing communication through ports 111 and 2049. The NFS 3.0 protocol uses these ports. Makes sure to mount the storage account by using the Blob service endpoint and not the Data Lake Storage endpoint. |
+|`Connection Timed Out`| Make sure that the client allows outgoing communication through ports 111 and 2048. The NFS 3.0 protocol uses these ports. Make sure to mount the storage account by using the Blob service endpoint and not the Data Lake Storage endpoint. |

 ## Limitations and troubleshooting for AZNFS Mount Helper
diff --git a/articles/storage/blobs/network-file-system-protocol-support.md b/articles/storage/blobs/network-file-system-protocol-support.md
index 4a26cb9588c1f..f9a49d437607d 100644
--- a/articles/storage/blobs/network-file-system-protocol-support.md
+++ b/articles/storage/blobs/network-file-system-protocol-support.md
@@ -79,7 +79,7 @@ A client can connect over a public or a [private endpoint](../common/storage-pri

This can be done by using [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or an [ExpressRoute gateway](../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md) along with [Gateway transit](/azure/architecture/reference-architectures/hybrid-networking/vnet-peering#gateway-transit).

> [!IMPORTANT]
-> The NFS 3.0 protocol uses ports 111 and 2049. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports.
+> The NFS 3.0 protocol uses ports 111 and 2048. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports.


diff --git a/articles/storage/common/storage-network-security.md b/articles/storage/common/storage-network-security.md
index d6ddc481d6dca..b954d57dcd2b1 100644
--- a/articles/storage/common/storage-network-security.md
+++ b/articles/storage/common/storage-network-security.md
@@ -90,7 +90,11 @@ After you apply network rules, they're enforced for all requests. SAS tokens tha

[Network Security Perimeter](../../private-link/network-security-perimeter-concepts.md) (preview) allows organizations to define a logical network isolation boundary for PaaS resources (for example, Azure Blob Storage and SQL Database) that are deployed outside their virtual networks.
The feature restricts public network access to PaaS resources outside the perimeter. However, you can exempt access by using explicit access rules for public inbound and outbound traffic. By design, access to a storage account from within a Network Security Perimeter takes the highest precedence over other network access restrictions.

-Currently, Network Security Perimeter is in public preview for Azure Blobs, Azure Files (REST), Azure Tables, and Azure Queues. See [Transition to a Network Security Perimeter](../../private-link/network-security-perimeter-transition.md).
+Currently, network security perimeter is in public preview for Azure Blobs, Azure Files (REST), Azure Tables, and Azure Queues. See [Transition to a network security perimeter](../../private-link/network-security-perimeter-transition.md).
+
+For the list of services that have been onboarded to network security perimeter, see [Onboarded private-link resources](../../private-link/network-security-perimeter-concepts.md#onboarded-private-link-resources).
+
+For services that aren't on this list because they haven't yet been onboarded to network security perimeter, you can allow access by using a subscription-based rule on the network security perimeter. All resources within that subscription are then given access to that network security perimeter. For more information about adding a subscription-based access rule, see [NSP access rules - Create or Update](/rest/api/networkmanager/nsp-access-rules/create-or-update).

> [!IMPORTANT]
> Private endpoint traffic is considered highly secure and therefore isn't subject to Network Security Perimeter rules. All other traffic, including trusted services, will be subject to Network Security Perimeter rules if the storage account is associated with a perimeter.
diff --git a/articles/virtual-desktop/windows-multisession-faq.yml b/articles/virtual-desktop/windows-multisession-faq.yml
index a9a808d3aef8d..c011600f02672 100644
--- a/articles/virtual-desktop/windows-multisession-faq.yml
+++ b/articles/virtual-desktop/windows-multisession-faq.yml
@@ -27,7 +27,7 @@ sections:
 Windows Enterprise multi-session is a virtual edition of Windows Enterprise. One of the differences is that this operating system (OS) reports the [ProductType](/windows/win32/cimwin32prov/win32-operatingsystem) as having a value of 3, the same value as Windows Server. This property keeps the OS compatible with existing RDSH management tooling, RDSH multi-session-aware applications, and mostly low-level system performance optimizations for RDSH environments. Some application installers can block installation on Windows multi-session depending on whether they detect the ProductType is set to Client. If your app won't install, contact your application vendor for an updated version.
 - question: Can I run Windows Enterprise multi-session outside of the Azure Virtual Desktop service?
-  answer: We don't allow customers to run Windows Enterprise multi-session in production environments outside of the Azure Virtual Desktop service. Only Microsoft or the Azure Virtual Desktop Approved Providers, Citrix and VMware, can provide access to the Azure Virtual Desktop service. It's against the licensing agreement to run Windows multi-session outside of the Azure Virtual Desktop service for production purposes. Windows multi-session also won’t activate against on-premises Key Management Services (KMS).
+  answer: We don't allow customers to run Windows Enterprise multi-session in production environments outside of the Azure Virtual Desktop service.
Only Microsoft or the Azure Virtual Desktop Approved Providers, Citrix and Omnissa, can provide access to the Azure Virtual Desktop service. It's against the licensing agreement to run Windows multi-session outside of the Azure Virtual Desktop service for production purposes. Windows multi-session also won’t activate against on-premises Key Management Services (KMS). - question: Can I upgrade a Windows VM to Windows Enterprise multi-session? answer: No. It's not currently possible to upgrade an existing virtual machine (VM) that's running Windows Professional or Enterprise to Windows Enterprise multi-session. Also, if you deploy a Windows Enterprise multi-session VM and then update the product key to another edition, you won't be able to switch the VM back to Windows Enterprise multi-session and will need to redeploy the VM. Changing your Azure Virtual Desktop VM SKU to another edition is not supported. diff --git a/articles/virtual-network/virtual-network-for-azure-services.md b/articles/virtual-network/virtual-network-for-azure-services.md index 72abc2987f5a1..fb8219ac2317a 100644 --- a/articles/virtual-network/virtual-network-for-azure-services.md +++ b/articles/virtual-network/virtual-network-for-azure-services.md @@ -44,7 +44,7 @@ Deploying services within a virtual network provides the following capabilities: | Containers | [Azure Kubernetes Service (AKS)](/azure/aks/concepts-network?toc=%2fazure%2fvirtual-network%2ftoc.json)
[Azure Container Instance (ACI)](https://www.aka.ms/acivnet)
[Azure Container Service Engine](https://github.com/Azure/acs-engine) with Azure Virtual Network CNI [plug-in](https://github.com/Azure/acs-engine/tree/master/examples/vnet)
[Azure Functions](../azure-functions/functions-networking-options.md#virtual-network-integration) |No2
Yes
No
Yes | Web | [API Management](../api-management/api-management-using-with-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
[Web Apps](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
[App Service Environment](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
[Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
[Azure Container Apps environments](../container-apps/networking.md)
|Yes
Yes
Yes
Yes
Yes | Hosted | [Azure Dedicated HSM](/azure/dedicated-hsm/?toc=%2fazure%2fvirtual-network%2ftoc.json)
[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
|Yes
Yes
-| Azure Spring Apps | [Deploy in Azure virtual network (VNet injection)](../spring-apps/enterprise/how-to-deploy-in-azure-virtual-network.md)
| Yes
+| Azure Spring Apps | [Deploy in Azure virtual network (VNet injection)](../spring-apps/basic-standard/how-to-deploy-in-azure-virtual-network.md)
| Yes
| Virtual desktop infrastructure| [Azure Lab Services](../lab-services/how-to-connect-vnet-injection.md)
| Yes
| DevOps | [Azure Load Testing](/azure/load-testing/concept-azure-load-testing-vnet-injection)
| Yes