Merge pull request #291649 from MicrosoftDocs/main
Publish to live, Monday 4 AM PST, 12/9
ttorble authored Dec 9, 2024
2 parents 3118284 + 5e624f5 commit 210dff9
Showing 6 changed files with 51 additions and 17 deletions.
16 changes: 15 additions & 1 deletion articles/automation/change-tracking/overview-monitoring-agent.md
@@ -3,7 +3,7 @@ title: Azure Automation Change Tracking and Inventory overview using Azure Monit
description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent, which helps you identify software and Microsoft service changes in your environment.
services: automation
ms.subservice: change-inventory-management
ms.date: 11/15/2024
ms.date: 12/09/2024
ms.topic: overview
ms.service: azure-automation
---
@@ -21,6 +21,20 @@ This article explains the latest version of change tracking support using Azu
> - [FIM with Change Tracking and Inventory using AMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-ama).
> - [FIM with Change Tracking and Inventory using MMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-mma).

## What is Change Tracking & Inventory?

The Azure Change Tracking & Inventory service enhances auditing and governance for in-guest operations by monitoring changes and providing detailed inventory logs for servers across Azure, on-premises, and other cloud environments.

1. **Change Tracking**

    a. Monitors changes, including modifications to files, registry keys, software installations, and Windows services or Linux daemons.<br>
    b. Provides detailed logs of what changes were made, when they were made, and who made them, enabling you to quickly detect configuration drift or unauthorized changes.

1. **Inventory**

    a. Collects and maintains an updated list of installed software, operating system details, and other server configurations in the linked Log Analytics workspace.<br>
    b. Helps create an overview of system assets, which is useful for compliance, audits, and proactive maintenance. A sample query against this data follows the list.
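
To see what the service collects, you can query the linked Log Analytics workspace. The following is a minimal sketch using the Azure CLI `log-analytics` extension; the workspace GUID is a placeholder, and `ConfigurationChange`/`ConfigurationData` are the tables that Change Tracking & Inventory writes to.

```bash
# Requires the extension: az extension add --name log-analytics
# Replace the GUID with your Log Analytics workspace ID.

# Recent change records (files, registry, software, services/daemons).
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "ConfigurationChange | where TimeGenerated > ago(1d) | take 10"

# Inventory data (installed software, services) lands in ConfigurationData.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "ConfigurationData | summarize count() by ConfigDataType"
```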

## Support matrix

|**Component**| **Applies to**|
5 changes: 4 additions & 1 deletion articles/backup/backup-azure-immutable-vault-concept.md
@@ -4,7 +4,7 @@ description: This article explains the concept of Immutable vault for Azur
ms.topic: overview
ms.service: azure-backup
ms.custom: references_regions, engagement-fy24, ignite-2024
ms.date: 11/20/2024
ms.date: 12/09/2024
author: AbhishekMallick-MS
ms.author: v-abhmallick
---
@@ -27,6 +27,9 @@ Immutable vault can help you protect your backup data by blocking any operations
- Immutable vault applies to all the data in the vault. Therefore, all instances that are protected in the vault have immutability applied to them.
- Immutability doesn't apply to operational backups, such as operational backup of blobs, files, and disks.

> [!NOTE]
> Ensure that the `Microsoft.RecoveryServices` resource provider is registered in your subscription; otherwise, zone-redundant options and vault properties such as **Immutability settings** aren't accessible.
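
A quick way to check and fix the registration is the Azure CLI; this is a minimal sketch using the standard `az provider` commands.

```bash
# Check the current registration state of the resource provider.
az provider show --namespace Microsoft.RecoveryServices \
  --query registrationState --output tsv

# Register it if the state isn't "Registered" (registration can take a few minutes).
az provider register --namespace Microsoft.RecoveryServices
```
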
## How does immutability work?

While Azure Backup stores data in isolation from production workloads, it allows management operations that help you manage your backups, including operations that delete recovery points. However, in certain scenarios, you might want to make the backup data immutable by preventing any such operations that, if used by malicious actors, could lead to the loss of backups. The Immutable vault setting on your vault lets you block such operations to ensure that your backup data is protected, even if a malicious actor tries to delete it to affect the recoverability of data.
@@ -6,17 +6,17 @@ author: KarlErickson
ms.topic: tutorial
ms.author: edburns
ms.service: azure-container-apps
ms.date: 10/10/2024
ms.date: 12/09/2024
ms.custom: devx-track-azurecli, devx-track-extended-java, devx-track-java, devx-track-javaee, devx-track-javaee-quarkus, passwordless-java, service-connector, devx-track-javaee-quarkus-aca
---

# Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity

[Azure Container Apps](overview.md) provides a [managed identity](managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in Container Apps make your app more secure by eliminating secrets from your app, such as credentials in the environment variables.

This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](/azure/postgresql/) database with a managed identity running on [Container Apps](overview.md).
This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](/azure/postgresql/) database with a managed identity running on [Container Apps](overview.md).

What you will learn:
What you learn:

> [!div class="checklist"]
> * Configure a Quarkus app to authenticate using Microsoft Entra ID with a PostgreSQL Database.
@@ -88,7 +88,7 @@ cd quarkus-quickstarts/hibernate-orm-panache-quickstart

1. Configure the Quarkus app properties.

The Quarkus configuration is located in the *src/main/resources/application.properties* file. Open this file in your editor, and observe several default properties. The properties prefixed with `%prod` are only used when the application is built and deployed, for example when deployed to Azure Container Apps. When the application runs locally, `%prod` properties are ignored. Similarly, `%dev` properties are used in Quarkus' Live Coding / Dev mode, and `%test` properties are used during continuous testing.
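
Which profile applies depends on how you launch the app. As a hedged sketch, using the Maven wrapper that ships with the quickstart:

```bash
# Live Coding / Dev mode: %dev properties apply.
./mvnw quarkus:dev

# Continuous testing: %test properties apply.
./mvnw quarkus:test

# Packaged (production) run: %prod properties apply.
./mvnw package
java -jar target/quarkus-app/quarkus-run.jar
```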

Delete the existing content in *application.properties* and replace it with the following to configure the database for dev, test, and production modes:

@@ -211,7 +211,7 @@
## 5. Create and connect a PostgreSQL database with identity connectivity

Next, create a PostgreSQL Database and configure your container app to connect to a PostgreSQL Database with a system-assigned managed identity. The Quarkus app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
Next, create a PostgreSQL Database and configure your container app to connect to a PostgreSQL Database with a system-assigned managed identity. The Quarkus app connects to this database and stores its data when running, persisting the application state no matter where you run the application.

1. Create the database service.

@@ -236,7 +236,7 @@ Next, create a PostgreSQL Database and configure your container app to connect t
* *resource-group* &rarr; Use the same resource group name in which you created the web app - for example, `msdocs-quarkus-postgres-webapp-rg`.
* *name* &rarr; The PostgreSQL database server name. This name must be **unique across all Azure** (the server endpoint becomes `<name>.postgres.database.azure.com`). Allowed characters are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and server identifier. (`msdocs-quarkus-postgres-webapp-db`)
* *location* &rarr; Use the same location used for the web app. Change to a different location if it doesn't work.
* *public-access* &rarr; `None` which sets the server in public access mode with no firewall rules. Rules will be created in a later step.
* *public-access* &rarr; `None`, which sets the server in public access mode with no firewall rules. Rules are created in a later step.
* *sku-name* &rarr; The name of the pricing tier and compute configuration - for example, `Standard_B1ms`. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
* *tier* &rarr; The compute tier of the server. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
* *active-directory-auth* &rarr; `Enabled` to enable Microsoft Entra authentication. A command assembled from these parameters is sketched after this list.
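
The collapsed part of the diff contains the actual command; assembled from the parameters described above, a minimal sketch might look like the following. The resource group and server name are the sample values from this tutorial, the location is a placeholder, and `Burstable` is an assumed tier to match `Standard_B1ms`.

```bash
az postgres flexible-server create \
  --resource-group msdocs-quarkus-postgres-webapp-rg \
  --name msdocs-quarkus-postgres-webapp-db \
  --location eastus \
  --public-access None \
  --sku-name Standard_B1ms \
  --tier Burstable \
  --active-directory-auth Enabled
```
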
22 changes: 16 additions & 6 deletions articles/data-factory/connector-google-bigquery-legacy.md
@@ -7,7 +7,7 @@ author: jianleishen
ms.subservice: data-movement
ms.topic: conceptual
ms.custom: synapse
ms.date: 11/05/2024
ms.date: 12/02/2024
---

# Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics (legacy)
@@ -35,12 +35,19 @@ The service provides a built-in driver to enable connectivity. Therefore, you do

The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).

The connector no longer supports P12 key files. If you rely on service accounts, we recommend that you use JSON key files instead. The P12CustomPwd property used for supporting the P12 key file is also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes).

>[!NOTE]
>This Google BigQuery connector is built on top of the BigQuery APIs. Be aware that BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis. For more information, see [Quotas & Limits - API requests](https://cloud.google.com/bigquery/quotas#api_requests). Make sure you don't trigger too many concurrent requests to the account.

## Prerequisites

To use this connector, you need the following minimum permissions in Google BigQuery (one way to grant them is sketched after the list):

- `bigquery.connections.*`
- `bigquery.datasets.*`
- `bigquery.jobs.*`
- `bigquery.readsessions.*`
- `bigquery.routines.*`
- `bigquery.tables.*`
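
These wildcard permissions are all included in BigQuery's predefined admin role. A hedged sketch of granting that role to the service account used by the connector, where the project ID and service-account email are placeholders:

```bash
# Grant the BigQuery Admin role, which includes the bigquery.* permissions above.
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:adf-connector@my-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.admin"
```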

## Get started

[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
@@ -128,10 +135,13 @@ Set "authenticationType" property to **ServiceAuthentication**, and specify the
| Property | Description | Required |
|:--- |:--- |:--- |
| email | The service account email ID that is used for ServiceAuthentication. It can be used only on Self-hosted Integration Runtime. | No |
| keyFilePath | The full path to the `.p12` or `.json` key file that is used to authenticate the service account email address. | Yes |
| keyFilePath | The full path to the `.json` key file that is used to authenticate the service account email address. | Yes |
| trustedCertPath | The full path of the .pem file that contains trusted CA certificates used to verify the server when you connect over TLS. This property can be set only when you use TLS on Self-hosted Integration Runtime. The default value is the cacerts.pem file installed with the integration runtime. | No |
| useSystemTrustStore | Specifies whether to use a CA certificate from the system trust store or from a specified .pem file. The default value is **false**. | No |

> [!NOTE]
> The connector no longer supports P12 key files. If you rely on service accounts, we recommend that you use JSON key files instead. The P12CustomPwd property used for supporting the P12 key file is also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes).
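
If you still have a P12 key file, you can issue a new JSON key for the same service account; here's a minimal sketch with a placeholder service-account email:

```bash
# Creates a new JSON key for the service account and writes it to key.json.
gcloud iam service-accounts keys create key.json \
  --iam-account=adf-connector@my-project-id.iam.gserviceaccount.com
```
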
**Example:**

```json
Expand All @@ -144,7 +154,7 @@ Set "authenticationType" property to **ServiceAuthentication**, and specify the
"requestGoogleDriveScope" : true,
"authenticationType" : "ServiceAuthentication",
"email": "<email>",
"keyFilePath": "<.p12 or .json key path on the IR machine>"
"keyFilePath": "<.json key path on the IR machine>"
},
"connectVia": {
"referenceName": "<name of Self-hosted Integration Runtime>",
11 changes: 9 additions & 2 deletions articles/data-factory/connector-netezza.md
@@ -6,7 +6,7 @@ author: jianleishen
ms.subservice: data-movement
ms.custom: synapse
ms.topic: conceptual
ms.date: 06/28/2024
ms.date: 12/02/2024
ms.author: jianleishen
---
# Copy data from Netezza by using Azure Data Factory or Synapse Analytics
@@ -30,7 +30,11 @@ This Netezza connector is supported for the following capabilities:

For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).

Netezza connector supports parallel copying from source. See the [Parallel copy from Netezza](#parallel-copy-from-netezza) section for details.
This Netezza connector supports:

- Parallel copying from source. See the [Parallel copy from Netezza](#parallel-copy-from-netezza) section for details.
- Netezza Performance Server version 11.
- Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).

The service provides a built-in driver to enable connectivity. You don't need to manually install any driver to use this connector.

@@ -85,6 +89,9 @@ A typical connection string is `Server=<server>;Port=<port>;Database=<database>;
|:--- |:--- |:--- |
| SecurityLevel | The level of security that the driver uses for the connection to the data store. <br>Example: `SecurityLevel=preferredUnSecured`. Supported values are:<br/>- **Only unsecured** (**onlyUnSecured**): The driver doesn't use SSL.<br/>- **Preferred unsecured (preferredUnSecured) (default)**: If the server provides a choice, the driver doesn't use SSL. | No |

> [!NOTE]
> The connector doesn't support SSLv3 because it's [officially deprecated by Netezza](https://www.ibm.com/docs/en/netezza?topic=npssac-netezza-performance-server-client-encryption-security-1).

@@ -39,7 +39,7 @@ The VMware vSphere hypervisor requirements are:
- **VMware vCenter Server** - Version 5.5, 6.0, 6.5, 6.7, 7.0, 8.0.
- **VMware vSphere ESXi host** - Version 5.5, 6.0, 6.5, 6.7, 7.0, 8.0.
- **Multiple vCenter Servers** - A single appliance can connect to up to 10 vCenter Servers.
- **vCenter Server permissions** - VMware account used to access the vCenter server from the Azure Migrate appliance needs below permissions to replicate virtual machines:
- **vCenter Server permissions** - The VMware account used to access the vCenter server from the Azure Migrate appliance must have the following permissions assigned at all required levels - datacenter, cluster, host, VM, and datastore. Ensure permissions are applied at each level to avoid replication errors.

**Privilege Name in the vSphere Client** | **The purpose for the privilege** | **Required On** | **Privilege Name in the API**
--- | --- | --- | ---
