diff --git a/connectors/dashboard/domo-dashboard/index.mdx b/connectors/dashboard/domo-dashboard/index.mdx
index 7bfc3a8a..0ff152dd 100644
--- a/connectors/dashboard/domo-dashboard/index.mdx
+++ b/connectors/dashboard/domo-dashboard/index.mdx
@@ -27,7 +27,7 @@ For metadata ingestion, make sure to add at least `data` scopes to the clientId
For questions related to scopes, click [here](https://developer.domo.com/portal/1845fc11bbe5d-api-authentication).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/lightdash/index.mdx b/connectors/dashboard/lightdash/index.mdx
index 88e345d0..47ef12bf 100644
--- a/connectors/dashboard/lightdash/index.mdx
+++ b/connectors/dashboard/lightdash/index.mdx
@@ -28,7 +28,7 @@ Configure and schedule Lightdash metadata and profiler workflows from the Collat
To integrate Lightdash, ensure you are using OpenMetadata version 1.2.x or higher.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/looker/index.mdx b/connectors/dashboard/looker/index.mdx
index ccc10f2c..fb68cace 100644
--- a/connectors/dashboard/looker/index.mdx
+++ b/connectors/dashboard/looker/index.mdx
@@ -44,7 +44,7 @@ We do not yet support liquid variables.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/metabase/index.mdx b/connectors/dashboard/metabase/index.mdx
index aaf3b007..9011a22f 100644
--- a/connectors/dashboard/metabase/index.mdx
+++ b/connectors/dashboard/metabase/index.mdx
@@ -28,7 +28,7 @@ The service account must have view access to all dashboards and charts that need
**Note:** We have tested Metabase with Versions `0.42.4` and `0.43.4`.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/microstrategy/index.mdx b/connectors/dashboard/microstrategy/index.mdx
index 7dfeb7bc..2a481f74 100644
--- a/connectors/dashboard/microstrategy/index.mdx
+++ b/connectors/dashboard/microstrategy/index.mdx
@@ -34,7 +34,7 @@ However, if the user still cannot access the APIs, the following should be check
- Browse Repository: Object navigation within projects/folders.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/mode/index.mdx b/connectors/dashboard/mode/index.mdx
index ba07b403..d7178e86 100644
--- a/connectors/dashboard/mode/index.mdx
+++ b/connectors/dashboard/mode/index.mdx
@@ -27,7 +27,7 @@ Configure and schedule Mode metadata and profiler workflows from the Collate UI:
OpenMetadata relies on Mode's API, which is exclusive to members of the Mode Business Workspace. This means that only resources that belong to a Mode Business Workspace can be accessed via the API.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/powerbi/index.mdx b/connectors/dashboard/powerbi/index.mdx
index 7bd30f19..3b17ab10 100644
--- a/connectors/dashboard/powerbi/index.mdx
+++ b/connectors/dashboard/powerbi/index.mdx
@@ -76,7 +76,7 @@ Create new workspaces in PowerBI by following the document [here](https://docs.m
For reference, here is a [thread](https://community.powerbi.com/t5/Service/Error-while-executing-Get-dataset-call-quot-API-is-not/m-p/912360#M85711) referring to the same issue.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/powerbireportserver/index.mdx b/connectors/dashboard/powerbireportserver/index.mdx
index e2866fd9..e00567e2 100644
--- a/connectors/dashboard/powerbireportserver/index.mdx
+++ b/connectors/dashboard/powerbireportserver/index.mdx
@@ -27,7 +27,7 @@ Configure and schedule PowerBI Report Server metadata from CLI:
The PowerBI Report Server should be accessible from the ingestion environment.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/qlikcloud/api_keys.mdx b/connectors/dashboard/qlikcloud/api_keys.mdx
index a24a33a7..fe81d418 100644
--- a/connectors/dashboard/qlikcloud/api_keys.mdx
+++ b/connectors/dashboard/qlikcloud/api_keys.mdx
@@ -12,13 +12,13 @@ OpenMetadata Uses [REST APIs](https://qlik.dev/apis/) to communicate with Qlik C
In this document we will explain how you can generate this token so that OpenMetadata can communicate with Qlik Cloud.
-# Step 1: Open Qlik Cloud Management Console (QMC)
+## Step 1: Open Qlik Cloud Management Console (QMC)
Open your Qlik Cloud Management Console (QMC) and navigate to the API Keys section.
-# Step 2: Provide Name and Generate API Key
+## Step 2: Provide Name and Generate API Key
1. Provide a name for the API key you will generate.
diff --git a/connectors/dashboard/qlikcloud/index.mdx b/connectors/dashboard/qlikcloud/index.mdx
index e25d67e6..41ce8187 100644
--- a/connectors/dashboard/qlikcloud/index.mdx
+++ b/connectors/dashboard/qlikcloud/index.mdx
@@ -29,7 +29,7 @@ Configure and schedule QlikCloud metadata and profiler workflows from the Collat
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/qliksense/certificates.mdx b/connectors/dashboard/qliksense/certificates.mdx
index 26370048..675cf501 100644
--- a/connectors/dashboard/qliksense/certificates.mdx
+++ b/connectors/dashboard/qliksense/certificates.mdx
@@ -13,13 +13,13 @@ OpenMetadata Uses [Qlik Engine APIs](https://help.qlik.com/en-US/sense-developer
In this document we will explain how you can generate these certificates so that OpenMetadata can communicate with Qlik Sense.
-# Step 1: Open Qlik Management Console (QMC)
+## Step 1: Open Qlik Management Console (QMC)
Open your Qlik Management Console (QMC) and navigate to the certificates section.
-# Step 2: Provide Details and Export Certificates
+## Step 2: Provide Details and Export Certificates
1. In the Machine name box, type the full computer name of the computer you are creating the certificates for (for example, MYMACHINE.mydomain.com) or its IP address.
@@ -35,7 +35,7 @@ Open your Qlik Management Console (QMC) and navigate to certificates section.
-# Step 3: Locate the certificates
+## Step 3: Locate the certificates
Once you have exported the certificates, their location is shown just below the certificate details page. When you navigate to that location you will find the `root.pem`, `client.pem` & `client_key.pem` certificates, which will be used by OpenMetadata.
diff --git a/connectors/dashboard/qliksense/index.mdx b/connectors/dashboard/qliksense/index.mdx
index b31bd315..ca55d1a9 100644
--- a/connectors/dashboard/qliksense/index.mdx
+++ b/connectors/dashboard/qliksense/index.mdx
@@ -30,7 +30,7 @@ Configure and schedule Metabase metadata and profiler workflows from the Collate
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/quicksight/index.mdx b/connectors/dashboard/quicksight/index.mdx
index 52840b53..b93bc054 100644
--- a/connectors/dashboard/quicksight/index.mdx
+++ b/connectors/dashboard/quicksight/index.mdx
@@ -64,7 +64,7 @@ Here is how to add Permissions to an IAM user.
```
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/redash/index.mdx b/connectors/dashboard/redash/index.mdx
index fc7a4467..ed0a0178 100644
--- a/connectors/dashboard/redash/index.mdx
+++ b/connectors/dashboard/redash/index.mdx
@@ -29,7 +29,7 @@ we use in the configuration to ingest data must have enough permissions to view
permissions, please visit the Redash documentation [here](https://redash.io/help/user-guide/users/permissions-groups).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/sigma/index.mdx b/connectors/dashboard/sigma/index.mdx
index d7844a6f..a42d886b 100644
--- a/connectors/dashboard/sigma/index.mdx
+++ b/connectors/dashboard/sigma/index.mdx
@@ -27,7 +27,7 @@ Configure and schedule Sigma metadata and profiler workflows from the Collate UI
OpenMetadata relies on Sigma's REST API. To know more you can read the [Sigma API Get Started docs](https://help.sigmacomputing.com/reference/get-started-sigma-api#about-the-api). To [generate API client credentials](https://help.sigmacomputing.com/reference/generate-client-credentials#user-requirements), you must be assigned the Admin account type.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/dashboard/superset/index.mdx b/connectors/dashboard/superset/index.mdx
index 61b65d03..82c995d8 100644
--- a/connectors/dashboard/superset/index.mdx
+++ b/connectors/dashboard/superset/index.mdx
@@ -31,7 +31,7 @@ The ingestion also works with Superset 2.0.0 🎉
**Database Connection**: To extract metadata from Superset via the MySQL or Postgres database, the database user must have at least the `SELECT` privilege on the `dashboards` & `slices` tables within the superset schema.
## Metadata Ingestion
-# Connection Details
+## Connection Details
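As a note on the Superset requirement above, the `SELECT` privilege on the two tables can be granted along these lines — an illustrative sketch using MySQL syntax, where the schema name (`superset`) and the user (`openmetadata_user`) are assumptions that may differ in your deployment:

```sql
-- Hypothetical grants for the metadata-extraction user (MySQL syntax shown;
-- adjust schema, user, and host for your Superset backing database).
GRANT SELECT ON superset.dashboards TO 'openmetadata_user'@'%';
GRANT SELECT ON superset.slices TO 'openmetadata_user'@'%';
```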
diff --git a/connectors/dashboard/tableau/index.mdx b/connectors/dashboard/tableau/index.mdx
index 5c5e7037..f5257fb6 100644
--- a/connectors/dashboard/tableau/index.mdx
+++ b/connectors/dashboard/tableau/index.mdx
@@ -71,7 +71,7 @@ This mapping ensures that:
- Chrome Extension compatibility is maintained
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/cassandra/hybrid-runner.mdx b/connectors/database/cassandra/hybrid-runner.mdx
index fbf1dfc7..b33089a7 100644
--- a/connectors/database/cassandra/hybrid-runner.mdx
+++ b/connectors/database/cassandra/hybrid-runner.mdx
@@ -29,7 +29,7 @@ To extract metadata using the Cassandra connector, ensure the user in the connec
- Schema Operations: Access to list and describe keyspaces and tables.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/cassandra/index.mdx b/connectors/database/cassandra/index.mdx
index edd210fb..ead5781f 100644
--- a/connectors/database/cassandra/index.mdx
+++ b/connectors/database/cassandra/index.mdx
@@ -29,7 +29,7 @@ To extract metadata using the Cassandra connector, ensure the user in the connec
- Schema Operations: Access to list and describe keyspaces and tables.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Username to connect to Cassandra. This user must have the necessary permissions to perform metadata extraction and table queries.
diff --git a/connectors/database/clickhouse/hybrid-runner.mdx b/connectors/database/clickhouse/hybrid-runner.mdx
index 4287f06b..474a932d 100644
--- a/connectors/database/clickhouse/hybrid-runner.mdx
+++ b/connectors/database/clickhouse/hybrid-runner.mdx
@@ -50,7 +50,7 @@ Executing the profiler workflow or data quality tests, will require the user to
For the usage and lineage workflow, the user will need `SELECT` privilege. You can find more information on the usage workflow [here](/how-to-guides/guide-for-data-users/ingestion/workflows/usage) and the lineage workflow [here](/how-to-guides/guide-for-data-users/ingestion/workflows/lineage).
## Metadata Ingestion
-# Connection Options
+## Connection Options
diff --git a/connectors/database/cockroach/hybrid-runner.mdx b/connectors/database/cockroach/hybrid-runner.mdx
index 31567825..ef7d6273 100644
--- a/connectors/database/cockroach/hybrid-runner.mdx
+++ b/connectors/database/cockroach/hybrid-runner.mdx
@@ -33,7 +33,7 @@ To use the Cockroach connector with the Hybrid Ingestion Runner, ensure:
- The Hybrid Ingestion Runner is installed and configured, and can reach both CockroachDB and the Collate SaaS control plane.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/cockroach/index.mdx b/connectors/database/cockroach/index.mdx
index 46f252b7..e2240f39 100644
--- a/connectors/database/cockroach/index.mdx
+++ b/connectors/database/cockroach/index.mdx
@@ -27,7 +27,7 @@ Configure and schedule Cockroach metadata workflows from the Collate UI:
## Requirements
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to Cockroach. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/couchbase/hybrid-runner.mdx b/connectors/database/couchbase/hybrid-runner.mdx
index 193cc4a0..6fd8e432 100644
--- a/connectors/database/couchbase/hybrid-runner.mdx
+++ b/connectors/database/couchbase/hybrid-runner.mdx
@@ -24,7 +24,7 @@ Configure and schedule Couchbase metadata workflows from the Collate UI:
- [Troubleshooting](/connectors/database/couchbase/troubleshooting)
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/couchbase/index.mdx b/connectors/database/couchbase/index.mdx
index 2608f275..3111ba87 100644
--- a/connectors/database/couchbase/index.mdx
+++ b/connectors/database/couchbase/index.mdx
@@ -24,7 +24,7 @@ Configure and schedule Couchbase metadata workflows from the Collate UI:
- [Troubleshooting](/connectors/database/couchbase/troubleshooting)
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Username to connect to Couchbase.
diff --git a/connectors/database/databricks/hybrid-runner.mdx b/connectors/database/databricks/hybrid-runner.mdx
index 1ca4efdf..04c8c927 100644
--- a/connectors/database/databricks/hybrid-runner.mdx
+++ b/connectors/database/databricks/hybrid-runner.mdx
@@ -65,7 +65,7 @@ Adjust <user>, <catalog_name>, <schema_name>, and <table_na
If you are using Unity Catalog in Databricks, then check out the [Unity Catalog](/connectors/database/unity-catalog) connector.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/databricks/index.mdx b/connectors/database/databricks/index.mdx
index 46a9ff9f..33b2da9c 100644
--- a/connectors/database/databricks/index.mdx
+++ b/connectors/database/databricks/index.mdx
@@ -65,7 +65,7 @@ Adjust <user>, <catalog_name>, <schema_name>, and <table_na
If you are using Unity Catalog in Databricks, then check out the [Unity Catalog](/connectors/database/unity-catalog) connector.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Host and Port**: Enter the fully qualified hostname and port number for your Databricks deployment in the Host and Port field.
diff --git a/connectors/database/db2/hybrid-runner.mdx b/connectors/database/db2/hybrid-runner.mdx
index 8bc73c94..84f0163f 100644
--- a/connectors/database/db2/hybrid-runner.mdx
+++ b/connectors/database/db2/hybrid-runner.mdx
@@ -115,7 +115,7 @@ GRANT SELECT ON SYSIBM.SYSSEQUENCES TO USER_NAME;
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and on data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/db2/index.mdx b/connectors/database/db2/index.mdx
index d754eb9f..8a5ba7e3 100644
--- a/connectors/database/db2/index.mdx
+++ b/connectors/database/db2/index.mdx
@@ -115,7 +115,7 @@ GRANT SELECT ON SYSIBM.SYSSEQUENCES TO USER_NAME;
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and on data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to DB2. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/dbt/configure-dbt-workflow.mdx b/connectors/database/dbt/configure-dbt-workflow.mdx
index 304c9fd4..9f79fad3 100644
--- a/connectors/database/dbt/configure-dbt-workflow.mdx
+++ b/connectors/database/dbt/configure-dbt-workflow.mdx
@@ -13,7 +13,7 @@ name="dbt"
stage="PROD"
availableFeatures={["Metadata", "Queries", "Lineage", "Tags", "Tiers", "Domains", "Custom Properties", "Glossary", "Owners", "Descriptions", "Tests", "Exposures"]} />
-# Configure dbt workflow
+## Configure dbt workflow
Learn how to configure the dbt workflow to ingest dbt data from your data sources.
diff --git a/connectors/database/domo-database/index.mdx b/connectors/database/domo-database/index.mdx
index 831d2254..b2afd5c5 100644
--- a/connectors/database/domo-database/index.mdx
+++ b/connectors/database/domo-database/index.mdx
@@ -29,7 +29,7 @@ For metadata ingestion, make sure to add at least `data` scopes to the clientId
For questions related to scopes, click [here](https://developer.domo.com/portal/1845fc11bbe5d-api-authentication).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/doris/index.mdx b/connectors/database/doris/index.mdx
index 477575b9..8869c45d 100644
--- a/connectors/database/doris/index.mdx
+++ b/connectors/database/doris/index.mdx
@@ -31,7 +31,7 @@ Configure and schedule Doris metadata and profiler workflows from the Collate UI
Metadata: Doris >= 1.2.0, Data Profiler: Doris >= 2.0.2
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/dremio/index.mdx b/connectors/database/dremio/index.mdx
index 39d53538..ef74c6dd 100644
--- a/connectors/database/dremio/index.mdx
+++ b/connectors/database/dremio/index.mdx
@@ -66,7 +66,7 @@ The connector uses Dremio's REST API and SQL interfaces for metadata extraction.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/druid/index.mdx b/connectors/database/druid/index.mdx
index b60b0745..06b77464 100644
--- a/connectors/database/druid/index.mdx
+++ b/connectors/database/druid/index.mdx
@@ -28,7 +28,7 @@ Configure and schedule Druid metadata and profiler workflows from the Collate UI
- [Troubleshooting](/connectors/database/druid/troubleshooting)
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/dynamodb/index.mdx b/connectors/database/dynamodb/index.mdx
index 24e9314a..e77a8db4 100644
--- a/connectors/database/dynamodb/index.mdx
+++ b/connectors/database/dynamodb/index.mdx
@@ -43,7 +43,7 @@ Below defined policy grants the permissions to list all tables in DynamoDB:
For more information on DynamoDB permissions, visit the [AWS DynamoDB official documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/api-permissions-reference.html).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/epic/index.mdx b/connectors/database/epic/index.mdx
index 07ef531b..c5577b87 100644
--- a/connectors/database/epic/index.mdx
+++ b/connectors/database/epic/index.mdx
@@ -29,7 +29,7 @@ To fetch metadata from Epic FHIR into OpenMetadata you will need:
2. The FHIR version supported by your Epic server. Supported values are: `R4`, `STU3`, and `DSTU2`.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/glue/index.mdx b/connectors/database/glue/index.mdx
index ea19d5db..0ec74096 100644
--- a/connectors/database/glue/index.mdx
+++ b/connectors/database/glue/index.mdx
@@ -27,7 +27,7 @@ Configure and schedule Glue metadata and profiler workflows from the Collate UI:
The user must have `glue:GetDatabases` and `glue:GetTables` permissions to ingest the basic metadata.
## Metadata Ingestion
-# Connection Details
+## Connection Details
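The Glue permissions noted above can be expressed as a minimal IAM policy — a sketch only; in production you would typically scope `Resource` down to the specific catalog, databases, and tables rather than `*`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["glue:GetDatabases", "glue:GetTables"],
      "Resource": "*"
    }
  ]
}
```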
diff --git a/connectors/database/greenplum/index.mdx b/connectors/database/greenplum/index.mdx
index 7a2318ac..5c2a7ac6 100644
--- a/connectors/database/greenplum/index.mdx
+++ b/connectors/database/greenplum/index.mdx
@@ -32,7 +32,7 @@ Configure and schedule Greenplum metadata and profiler workflows from the Collat
## Requirements
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/hive/index.mdx b/connectors/database/hive/index.mdx
index 679c255f..e7dfafae 100644
--- a/connectors/database/hive/index.mdx
+++ b/connectors/database/hive/index.mdx
@@ -35,7 +35,7 @@ To extract metadata, the user used in the connection needs to be able to perform
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and on data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/impala/index.mdx b/connectors/database/impala/index.mdx
index 4de6702b..27bca716 100644
--- a/connectors/database/impala/index.mdx
+++ b/connectors/database/impala/index.mdx
@@ -29,7 +29,7 @@ Configure and schedule Impala metadata and profiler workflows from the Collate U
- [Troubleshooting](/connectors/database/impala/troubleshooting)
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/mariadb/hybrid-runner.mdx b/connectors/database/mariadb/hybrid-runner.mdx
index a216834b..09c6d96f 100644
--- a/connectors/database/mariadb/hybrid-runner.mdx
+++ b/connectors/database/mariadb/hybrid-runner.mdx
@@ -43,7 +43,7 @@ GRANT SELECT ON world.hello TO '';
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and on data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to MariaDB. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/mariadb/index.mdx b/connectors/database/mariadb/index.mdx
index 019e92fa..3d95194d 100644
--- a/connectors/database/mariadb/index.mdx
+++ b/connectors/database/mariadb/index.mdx
@@ -43,7 +43,7 @@ GRANT SELECT ON world.hello TO '';
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and on data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to MariaDB. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/mongodb/hybrid-runner.mdx b/connectors/database/mongodb/hybrid-runner.mdx
index 1fb0544d..66e15cd0 100644
--- a/connectors/database/mongodb/hybrid-runner.mdx
+++ b/connectors/database/mongodb/hybrid-runner.mdx
@@ -28,7 +28,7 @@ Configure and schedule MongoDB metadata workflows from the OpenMetadata UI:
To fetch metadata from MongoDB into OpenMetadata, the MongoDB user must be able to perform the `find` operation on collections and the `listCollections` operation on the databases available in MongoDB.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Username to connect to MongoDB. This user must be able to perform the `find` operation on collections and the `listCollections` operation on the databases available in MongoDB.
diff --git a/connectors/database/mongodb/index.mdx b/connectors/database/mongodb/index.mdx
index 72d5389f..80a7e822 100644
--- a/connectors/database/mongodb/index.mdx
+++ b/connectors/database/mongodb/index.mdx
@@ -28,7 +28,7 @@ Configure and schedule MongoDB metadata workflows from the Collate UI:
To fetch metadata from MongoDB into OpenMetadata, the MongoDB user must be able to perform the `find` operation on collections and the `listCollections` operation on the databases available in MongoDB.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Username to connect to MongoDB. This user must be able to perform the `find` operation on collections and the `listCollections` operation on the databases available in MongoDB.
diff --git a/connectors/database/mssql/hybrid-runner.mdx b/connectors/database/mssql/hybrid-runner.mdx
index b8341fab..7d5c440d 100644
--- a/connectors/database/mssql/hybrid-runner.mdx
+++ b/connectors/database/mssql/hybrid-runner.mdx
@@ -63,7 +63,7 @@ If you are using SQL server on windows, you must configure the firewall on the c
For detailed steps, please refer to this [link](https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access?view=sql-server-ver15).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Connection Scheme**: Defines how to connect to MSSQL. We support `mssql+pytds`, `mssql+pyodbc`, and `mssql+pymssql`. (If you are using Windows authentication from a Linux deployment, please use `pymssql`.)
diff --git a/connectors/database/mssql/index.mdx b/connectors/database/mssql/index.mdx
index 04b57442..95472894 100644
--- a/connectors/database/mssql/index.mdx
+++ b/connectors/database/mssql/index.mdx
@@ -63,7 +63,7 @@ If you are using SQL server on windows, you must configure the firewall on the c
For detailed steps, please refer to this [link](https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access?view=sql-server-ver15).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Connection Scheme**: Defines how to connect to MSSQL. We support `mssql+pytds`, `mssql+pyodbc`, and `mssql+pymssql`. (If you are using Windows authentication from a Linux deployment, please use `pymssql`.)
diff --git a/connectors/database/mysql/hybrid-runner.mdx b/connectors/database/mysql/hybrid-runner.mdx
index f94b209d..c3ff31bc 100644
--- a/connectors/database/mysql/hybrid-runner.mdx
+++ b/connectors/database/mysql/hybrid-runner.mdx
@@ -110,7 +110,7 @@ You can also check below docs about more info on logs & its rotation methods.
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and on data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to MySQL. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/mysql/index.mdx b/connectors/database/mysql/index.mdx
index 0ee12b0d..a5ff7645 100644
--- a/connectors/database/mysql/index.mdx
+++ b/connectors/database/mysql/index.mdx
@@ -110,7 +110,7 @@ You can also check below docs about more info on logs & its rotation methods.
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and on data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to MySQL. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/oracle/hybrid-runner.mdx b/connectors/database/oracle/hybrid-runner.mdx
index 0fba695f..2693bea8 100644
--- a/connectors/database/oracle/hybrid-runner.mdx
+++ b/connectors/database/oracle/hybrid-runner.mdx
@@ -76,7 +76,7 @@ You can find further information [here](https://docs.oracle.com/javadb/10.8.3.0/
there is no routine out of the box in Oracle to grant SELECT to a full schema.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to Oracle. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/oracle/index.mdx b/connectors/database/oracle/index.mdx
index 714cf5f5..308073a4 100644
--- a/connectors/database/oracle/index.mdx
+++ b/connectors/database/oracle/index.mdx
@@ -76,7 +76,7 @@ You can find further information [here](https://docs.oracle.com/javadb/10.8.3.0/
there is no routine out of the box in Oracle to grant SELECT to a full schema.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to Oracle. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/pinotdb/hybrid-runner.mdx b/connectors/database/pinotdb/hybrid-runner.mdx
index 9136ad69..fa053f60 100644
--- a/connectors/database/pinotdb/hybrid-runner.mdx
+++ b/connectors/database/pinotdb/hybrid-runner.mdx
@@ -34,7 +34,7 @@ To extract metadata, the user needs to have access to the Pinot broker and contr
Executing the profiler workflow or data quality tests will require the user to have query execution permissions on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to PinotDB. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/pinotdb/index.mdx b/connectors/database/pinotdb/index.mdx
index d62a373c..f94abab2 100644
--- a/connectors/database/pinotdb/index.mdx
+++ b/connectors/database/pinotdb/index.mdx
@@ -33,7 +33,7 @@ To extract metadata, the user needs to have access to the Pinot broker and contr
Executing the profiler workflow or data quality tests will require the user to have query execution permissions on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to PinotDB. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/postgres/hybrid-runner.mdx b/connectors/database/postgres/hybrid-runner.mdx
index 3afc9cd6..775f7aaa 100644
--- a/connectors/database/postgres/hybrid-runner.mdx
+++ b/connectors/database/postgres/hybrid-runner.mdx
@@ -98,7 +98,7 @@ By default, `pg_stat_statements` may only capture top-level procedure calls and
This ensures that statements executed within procedures are recorded.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to PostgreSQL. It should have enough privileges to read all the metadata.
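The `pg_stat_statements` behavior described above (capturing statements executed inside procedures, not just top-level calls) is controlled by the `pg_stat_statements.track` setting; a minimal sketch, assuming the extension is already installed and preloaded:

```sql
-- Track nested statements (inside functions/procedures) as well as top-level ones.
ALTER SYSTEM SET pg_stat_statements.track = 'all';
-- Reload the configuration so the change takes effect without a restart.
SELECT pg_reload_conf();
```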
diff --git a/connectors/database/postgres/index.mdx b/connectors/database/postgres/index.mdx
index 71abefdb..1dc15e2c 100644
--- a/connectors/database/postgres/index.mdx
+++ b/connectors/database/postgres/index.mdx
@@ -98,7 +98,7 @@ By default, `pg_stat_statements` may only capture top-level procedure calls and
This ensures that statements executed within procedures are recorded.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to PostgreSQL. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/redshift/hybrid-runner.mdx b/connectors/database/redshift/hybrid-runner.mdx
index c82b458b..fd0345ce 100644
--- a/connectors/database/redshift/hybrid-runner.mdx
+++ b/connectors/database/redshift/hybrid-runner.mdx
@@ -58,7 +58,7 @@ For the usage and lineage workflow, the user will need `SELECT` privilege on `ST
## Metadata Ingestion
It is recommended to exclude the schema "information_schema" from the metadata ingestion as it contains system tables and views.
-# Connection Details
+## Connection Details
diff --git a/connectors/database/redshift/index.mdx b/connectors/database/redshift/index.mdx
index 7f5be556..faa118c5 100644
--- a/connectors/database/redshift/index.mdx
+++ b/connectors/database/redshift/index.mdx
@@ -58,7 +58,7 @@ For the usage and lineage workflow, the user will need `SELECT` privilege on `ST
## Metadata Ingestion
It is recommended to exclude the schema "information_schema" from the metadata ingestion as it contains system tables and views.
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to Redshift. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/salesforce/hybrid-runner.mdx b/connectors/database/salesforce/hybrid-runner.mdx
index d08b8182..b6ca77c7 100644
--- a/connectors/database/salesforce/hybrid-runner.mdx
+++ b/connectors/database/salesforce/hybrid-runner.mdx
@@ -31,7 +31,7 @@ These are the permissions you will require to fetch the metadata from Salesforce
- **Object Permissions**: You must have read access to the Salesforce objects that you want to ingest.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/salesforce/index.mdx b/connectors/database/salesforce/index.mdx
index 592f6df2..b0671e1a 100644
--- a/connectors/database/salesforce/index.mdx
+++ b/connectors/database/salesforce/index.mdx
@@ -31,7 +31,7 @@ These are the permissions you will require to fetch the metadata from Salesforce
- **Object Permissions**: You must have read access to the Salesforce objects that you want to ingest.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Username to connect to the Salesforce. This user should have the access as defined in requirements.
diff --git a/connectors/database/sap-erp/hybrid-runner.mdx b/connectors/database/sap-erp/hybrid-runner.mdx
index 03aef08d..b9e5b51c 100644
--- a/connectors/database/sap-erp/hybrid-runner.mdx
+++ b/connectors/database/sap-erp/hybrid-runner.mdx
@@ -29,7 +29,7 @@ To ingest the SAP ERP metadata, CDS Views and OData services need to be set up to
Follow the guide [here](/connectors/database/sap-erp/setup-sap-apis) to set up the APIs.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/sap-erp/index.mdx b/connectors/database/sap-erp/index.mdx
index 22f5cf68..c895403e 100644
--- a/connectors/database/sap-erp/index.mdx
+++ b/connectors/database/sap-erp/index.mdx
@@ -29,7 +29,7 @@ To ingest the SAP ERP metadata, CDS Views and OData services need to be set up to
Follow the guide [here](/connectors/database/sap-erp/setup-sap-apis) to set up the APIs.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Host and Port**: This parameter specifies the host and port of the SAP ERP instance. This should be specified as a string in the format `https://hostname.com`.
diff --git a/connectors/database/sap-hana/hybrid-runner.mdx b/connectors/database/sap-hana/hybrid-runner.mdx
index 9e718ba1..412347bf 100644
--- a/connectors/database/sap-hana/hybrid-runner.mdx
+++ b/connectors/database/sap-hana/hybrid-runner.mdx
@@ -51,7 +51,7 @@ The same applies to the `_SYS_REPO` schema, required for lineage extraction.
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. The user should also be allowed to view information in `tables` for all objects in the database. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/sap-hana/index.mdx b/connectors/database/sap-hana/index.mdx
index 37106551..7b10906a 100644
--- a/connectors/database/sap-hana/index.mdx
+++ b/connectors/database/sap-hana/index.mdx
@@ -51,7 +51,7 @@ The same applies to the `_SYS_REPO` schema, required for lineage extraction.
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. The user should also be allowed to view information in `tables` for all objects in the database. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
We support two possible connection types:
diff --git a/connectors/database/sas/hybrid-runner.mdx b/connectors/database/sas/hybrid-runner.mdx
index 9fe45a2d..e9d14fcc 100644
--- a/connectors/database/sas/hybrid-runner.mdx
+++ b/connectors/database/sas/hybrid-runner.mdx
@@ -28,7 +28,7 @@ Configure and schedule SAS metadata workflow from the OpenMetadata UI:
## Metadata Ingestion
Prepare the SAS Service and configure the Ingestion:
-# Connection Details
+## Connection Details
diff --git a/connectors/database/sas/index.mdx b/connectors/database/sas/index.mdx
index 83a025af..723b31a6 100644
--- a/connectors/database/sas/index.mdx
+++ b/connectors/database/sas/index.mdx
@@ -28,7 +28,7 @@ Configure and schedule SAS metadata workflow from the Collate UI:
## Metadata Ingestion
Prepare the SAS Service and configure the Ingestion:
-# Connection Details
+## Connection Details
- **ServerHost**: Host and port of the SAS Viya deployment.
diff --git a/connectors/database/servicenow/hybrid-runner.mdx b/connectors/database/servicenow/hybrid-runner.mdx
index fce6a93e..74e8fb59 100644
--- a/connectors/database/servicenow/hybrid-runner.mdx
+++ b/connectors/database/servicenow/hybrid-runner.mdx
@@ -30,7 +30,7 @@ To fetch metadata from ServiceNow into OpenMetadata you will need:
3. The password for the ServiceNow user.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/servicenow/index.mdx b/connectors/database/servicenow/index.mdx
index 51453bca..fcf5130e 100644
--- a/connectors/database/servicenow/index.mdx
+++ b/connectors/database/servicenow/index.mdx
@@ -30,7 +30,7 @@ To fetch metadata from ServiceNow into OpenMetadata you will need:
3. The password for the ServiceNow user.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **ServiceNow Instance URL**: Your ServiceNow instance URL (e.g., `https://your-instance.service-now.com`).
diff --git a/connectors/database/singlestore/index.mdx b/connectors/database/singlestore/index.mdx
index 7aa9faa1..a7363c72 100644
--- a/connectors/database/singlestore/index.mdx
+++ b/connectors/database/singlestore/index.mdx
@@ -45,7 +45,7 @@ GRANT SELECT ON world.hello TO '';
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/snowflake/hybrid-runner.mdx b/connectors/database/snowflake/hybrid-runner.mdx
index ac2f6357..c4ed1ede 100644
--- a/connectors/database/snowflake/hybrid-runner.mdx
+++ b/connectors/database/snowflake/hybrid-runner.mdx
@@ -74,7 +74,7 @@ You can find more information about the `account_usage` schema [here](https://do
- **Ingesting Stored Procedures**: OpenMetadata fetches the information by querying `snowflake.account_usage.procedures` & `snowflake.account_usage.functions`.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to Snowflake. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/snowflake/index.mdx b/connectors/database/snowflake/index.mdx
index 3bd3483d..ce2bbd5d 100644
--- a/connectors/database/snowflake/index.mdx
+++ b/connectors/database/snowflake/index.mdx
@@ -78,7 +78,7 @@ You can find more information about the `account_usage` schema [here](https://do
- **Ingesting Stored Procedures**: OpenMetadata fetches the information by querying `snowflake.account_usage.procedures` & `snowflake.account_usage.functions`.
## Metadata Ingestion
-# Connection Details
+## Connection Details
- **Username**: Specify the User to connect to Snowflake. It should have enough privileges to read all the metadata.
diff --git a/connectors/database/sqlite/index.mdx b/connectors/database/sqlite/index.mdx
index 09c72790..60c47cd5 100644
--- a/connectors/database/sqlite/index.mdx
+++ b/connectors/database/sqlite/index.mdx
@@ -32,7 +32,7 @@ Configure and schedule Presto metadata and profiler workflows from the Collate U
To extract metadata, the user needs to be able to perform `.tables` and `.schema` on the database you wish to extract metadata from, and have `SELECT` permission on `sqlite_temp_master`. Access to resources will be different based on the connector used.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/ssas/index.mdx b/connectors/database/ssas/index.mdx
index fdeb6c80..460f4bba 100644
--- a/connectors/database/ssas/index.mdx
+++ b/connectors/database/ssas/index.mdx
@@ -31,7 +31,7 @@ To extract metadata from SSAS, ensure the following requirements are met:
These steps are necessary to allow the connector to communicate with your SSAS instance and retrieve metadata successfully.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/teradata/index.mdx b/connectors/database/teradata/index.mdx
index 7d1e8cd6..9d5e0ef3 100644
--- a/connectors/database/teradata/index.mdx
+++ b/connectors/database/teradata/index.mdx
@@ -34,7 +34,7 @@ Connector was tested on Teradata DBS version 17.20. Since there are no significa
## Metadata Ingestion
By default, all valid users in Teradata DB have full access to metadata objects, so there are no specific requirements for user privileges.
-# Connection Details
+## Connection Details
diff --git a/connectors/database/trino/index.mdx b/connectors/database/trino/index.mdx
index f31c2a04..6f499d40 100644
--- a/connectors/database/trino/index.mdx
+++ b/connectors/database/trino/index.mdx
@@ -37,7 +37,7 @@ Access to resources will be based on the user access permission to access specif
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](/how-to-guides/data-quality-observability/profiler/profiler-workflow) and data quality tests [here](/how-to-guides/data-quality-observability/quality).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/unity-catalog/index.mdx b/connectors/database/unity-catalog/index.mdx
index 244f658d..2ecbfc9a 100644
--- a/connectors/database/unity-catalog/index.mdx
+++ b/connectors/database/unity-catalog/index.mdx
@@ -53,7 +53,7 @@ Adjust <user>, <catalog_name>, <schema_name>, and <table_na
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/database/vertica/index.mdx b/connectors/database/vertica/index.mdx
index 15595216..f6a188c9 100644
--- a/connectors/database/vertica/index.mdx
+++ b/connectors/database/vertica/index.mdx
@@ -63,7 +63,7 @@ GRANT SELECT ON ALL TABLES IN SCHEMA TO openmetadata;
```
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/messaging/kafka/index.mdx b/connectors/messaging/kafka/index.mdx
index 81247a57..79930bc6 100644
--- a/connectors/messaging/kafka/index.mdx
+++ b/connectors/messaging/kafka/index.mdx
@@ -30,7 +30,7 @@ The ingestion of the Kafka topics' schema is done separately by configuring the
- READ CLUSTER
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/messaging/kinesis/index.mdx b/connectors/messaging/kinesis/index.mdx
index 3e8a7669..f5191954 100644
--- a/connectors/messaging/kinesis/index.mdx
+++ b/connectors/messaging/kinesis/index.mdx
@@ -47,7 +47,7 @@ The user must have the following policy set to access the metadata from Kinesis.
For more information on Kinesis permissions visit the [AWS Kinesis official documentation](https://docs.aws.amazon.com/streams/latest/dev/controlling-access.html).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/messaging/redpanda/index.mdx b/connectors/messaging/redpanda/index.mdx
index 226ad909..2d99e2e1 100644
--- a/connectors/messaging/redpanda/index.mdx
+++ b/connectors/messaging/redpanda/index.mdx
@@ -26,7 +26,7 @@ Connecting to Redpanda does not require any previous configuration.
The ingestion of the Kafka topics' schema is done separately by configuring the **Schema Registry URL**. However, only the **Bootstrap Servers** information is mandatory.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/metadata/alationsink/index.mdx b/connectors/metadata/alationsink/index.mdx
index 959d98b2..e7ddfa17 100644
--- a/connectors/metadata/alationsink/index.mdx
+++ b/connectors/metadata/alationsink/index.mdx
@@ -41,7 +41,7 @@ The following entities are supported and will be mapped from OpenMetadata to
## Metadata Ingestion
Then, prepare the Alation Sink Service and configure the Ingestion:
-# Connection Details
+## Connection Details
diff --git a/connectors/metadata/amundsen/index.mdx b/connectors/metadata/amundsen/index.mdx
index ca21162c..7c182f6f 100644
--- a/connectors/metadata/amundsen/index.mdx
+++ b/connectors/metadata/amundsen/index.mdx
@@ -18,7 +18,7 @@ availableFeatures={["Metadata"]}
unavailableFeatures={[]} />
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/metadata/atlas/index.mdx b/connectors/metadata/atlas/index.mdx
index 5605c9a1..3b21db28 100644
--- a/connectors/metadata/atlas/index.mdx
+++ b/connectors/metadata/atlas/index.mdx
@@ -29,7 +29,7 @@ For example, to create a Hive Service you can follow these steps:
## 2. Atlas Metadata Ingestion
Then, prepare the Atlas Service and configure the Ingestion:
-# Connection Details
+## Connection Details
diff --git a/connectors/ml-model/mlflow/index.mdx b/connectors/ml-model/mlflow/index.mdx
index 88c4cee2..aa1c922e 100644
--- a/connectors/ml-model/mlflow/index.mdx
+++ b/connectors/ml-model/mlflow/index.mdx
@@ -27,7 +27,7 @@ To extract metadata, OpenMetadata needs two elements:
- **Registry URI**: Address of local or remote model registry server.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/ml-model/sagemaker/index.mdx b/connectors/ml-model/sagemaker/index.mdx
index ea71eef9..17a77f9e 100644
--- a/connectors/ml-model/sagemaker/index.mdx
+++ b/connectors/ml-model/sagemaker/index.mdx
@@ -48,7 +48,7 @@ SageMaker also supports metadata ingestion of SageMaker Unified Studio models. T
For more information on Sagemaker permissions visit the [AWS Sagemaker official documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/airbyte/index.mdx b/connectors/pipeline/airbyte/index.mdx
index 42264fa3..373ee2a9 100644
--- a/connectors/pipeline/airbyte/index.mdx
+++ b/connectors/pipeline/airbyte/index.mdx
@@ -22,7 +22,7 @@ Configure and schedule Airbyte metadata and profiler workflows from the Collate
- [Troubleshooting](/connectors/pipeline/airbyte/troubleshooting)
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/airflow/gcp-composer.mdx b/connectors/pipeline/airflow/gcp-composer.mdx
index 1422950a..aa6ffbaf 100644
--- a/connectors/pipeline/airflow/gcp-composer.mdx
+++ b/connectors/pipeline/airflow/gcp-composer.mdx
@@ -249,7 +249,7 @@ KubernetesPodOperator(
You can find more information about the `KubernetesPodOperator` and how to tune its configurations
[here](https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator).
-# OpenMetadata Server Config
+## OpenMetadata Server Config
The easiest approach here is to generate a bot with a **JWT** token directly from the Collate UI. You can then use
the following workflow config:
diff --git a/connectors/pipeline/airflow/index.mdx b/connectors/pipeline/airflow/index.mdx
index 4452f235..28d769a3 100644
--- a/connectors/pipeline/airflow/index.mdx
+++ b/connectors/pipeline/airflow/index.mdx
@@ -35,7 +35,7 @@ You can check the version list [here](https://airflow.apache.org/docs/apache-air
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/dagster/index.mdx b/connectors/pipeline/dagster/index.mdx
index a07f6500..f2dd9e80 100644
--- a/connectors/pipeline/dagster/index.mdx
+++ b/connectors/pipeline/dagster/index.mdx
@@ -30,7 +30,7 @@ OpenMetadata is integrated with dagster up to version [1.0.13](https://docs.dags
The ingestion framework uses the [dagster graphql python client](https://docs.dagster.io/_apidocs/libraries/dagster-graphql#dagster_graphql.DagsterGraphQLClient) to connect to the dagster instance and perform the API calls.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/databricks-pipeline/index.mdx b/connectors/pipeline/databricks-pipeline/index.mdx
index 180e1f52..517d808d 100644
--- a/connectors/pipeline/databricks-pipeline/index.mdx
+++ b/connectors/pipeline/databricks-pipeline/index.mdx
@@ -22,7 +22,7 @@ Configure and schedule Databricks Pipeline metadata workflows from the Collate U
- [Troubleshooting](/connectors/pipeline/databricks-pipeline/troubleshooting)
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/datafactory/index.mdx b/connectors/pipeline/datafactory/index.mdx
index c009f114..b4e8e6e1 100644
--- a/connectors/pipeline/datafactory/index.mdx
+++ b/connectors/pipeline/datafactory/index.mdx
@@ -33,7 +33,7 @@ You can find further information on the Azure Data Factory connector in the [doc
Ensure that the service principal or managed identity you’re using has the necessary permissions in the Data Factory resource (Reader, Contributor or Data Factory Contributor role at minimum).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/dbtcloud/index.mdx b/connectors/pipeline/dbtcloud/index.mdx
index 4b2021d2..cd095ba5 100644
--- a/connectors/pipeline/dbtcloud/index.mdx
+++ b/connectors/pipeline/dbtcloud/index.mdx
@@ -37,7 +37,7 @@ To know more about permissions required refer [here](https://docs.getdbt.com/doc
- Your projects must be on dbt version 1.0 or later. Refer to [Upgrade dbt version in Cloud](https://docs.getdbt.com/docs/dbt-versions/upgrade-dbt-version-in-cloud) to upgrade.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/domo-pipeline/index.mdx b/connectors/pipeline/domo-pipeline/index.mdx
index cf23181d..1ca1499a 100644
--- a/connectors/pipeline/domo-pipeline/index.mdx
+++ b/connectors/pipeline/domo-pipeline/index.mdx
@@ -26,7 +26,7 @@ For metadata ingestion, make sure to add at least `data` scopes to the clientId
For questions related to scopes, click [here](https://developer.domo.com/portal/1845fc11bbe5d-api-authentication).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/fivetran/index.mdx b/connectors/pipeline/fivetran/index.mdx
index 0e0708bd..98cbcdfb 100644
--- a/connectors/pipeline/fivetran/index.mdx
+++ b/connectors/pipeline/fivetran/index.mdx
@@ -25,7 +25,7 @@ Configure and schedule Fivetran metadata and profiler workflows from the Collate
To access Fivetran APIs, a Fivetran account on a Standard, Enterprise, or Business Critical plan is required.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/flink/index.mdx b/connectors/pipeline/flink/index.mdx
index fcf0fa34..574fc4c2 100644
--- a/connectors/pipeline/flink/index.mdx
+++ b/connectors/pipeline/flink/index.mdx
@@ -30,7 +30,7 @@ OpenMetadata is integrated with flink up to version [1.19.0](https://nightlies.a
The ingestion framework uses Flink REST APIs to connect to the instance and perform the API calls.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/glue-pipeline/index.mdx b/connectors/pipeline/glue-pipeline/index.mdx
index fed92b0e..e5de21f1 100644
--- a/connectors/pipeline/glue-pipeline/index.mdx
+++ b/connectors/pipeline/glue-pipeline/index.mdx
@@ -30,7 +30,7 @@ The user must have the following permissions for the ingestion to run successful
- `glue:GetJobRuns`
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/kafkaconnect/index.mdx b/connectors/pipeline/kafkaconnect/index.mdx
index a4b1cc64..5e817a26 100644
--- a/connectors/pipeline/kafkaconnect/index.mdx
+++ b/connectors/pipeline/kafkaconnect/index.mdx
@@ -30,7 +30,7 @@ OpenMetadata is integrated with kafkaconnect up to version [3.6.1](https://docs.
The ingestion framework uses the [kafkaconnect python client](https://libraries.io/pypi/kafka-connect-py) to connect to the kafkaconnect instance and perform the API calls.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/matillion/index.mdx b/connectors/pipeline/matillion/index.mdx
index 1ca84b62..fe8cf0b0 100644
--- a/connectors/pipeline/matillion/index.mdx
+++ b/connectors/pipeline/matillion/index.mdx
@@ -33,7 +33,7 @@ To extract metadata from Matillion, you need to create a user with the following
OpenMetadata is integrated with matillion up to version [1.75.0](https://docs.matillion.io/getting-started).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/nifi/index.mdx b/connectors/pipeline/nifi/index.mdx
index e1a9cbca..cac29296 100644
--- a/connectors/pipeline/nifi/index.mdx
+++ b/connectors/pipeline/nifi/index.mdx
@@ -29,7 +29,7 @@ OpenMetadata supports 2 types of connection for the NiFi connector:
The user should be able to send requests to the NiFi API and access the `Resources` endpoint.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/snowplow/index.mdx b/connectors/pipeline/snowplow/index.mdx
index 2c8f6d22..f0b3c650 100644
--- a/connectors/pipeline/snowplow/index.mdx
+++ b/connectors/pipeline/snowplow/index.mdx
@@ -37,7 +37,7 @@ For self-hosted Snowplow Community deployments, you need:
- The file system path to your configuration files
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/spline/index.mdx b/connectors/pipeline/spline/index.mdx
index 4230420d..c05a44d6 100644
--- a/connectors/pipeline/spline/index.mdx
+++ b/connectors/pipeline/spline/index.mdx
@@ -29,7 +29,7 @@ Currently, we do not support data sources of type AWS S3 or any other cloud stora
You can refer [this](https://github.com/AbsaOSS/spline-getting-started/tree/main/spline-on-databricks) documentation on how to configure Databricks with Spline.
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/ssis/index.mdx b/connectors/pipeline/ssis/index.mdx
index c2dfd5f6..76e05006 100644
--- a/connectors/pipeline/ssis/index.mdx
+++ b/connectors/pipeline/ssis/index.mdx
@@ -28,7 +28,7 @@ To extract SSIS metadata, we need the database connection details where the meta
- To retrieve lineage data, the user must be granted [Component-level permissions](https://docs.matillion.com/metl/docs/2932106/#component).
## Metadata Ingestion
-# Connection Details
+## Connection Details
diff --git a/connectors/pipeline/stitch/index.mdx b/connectors/pipeline/stitch/index.mdx
index e0d98709..03033032 100644
--- a/connectors/pipeline/stitch/index.mdx
+++ b/connectors/pipeline/stitch/index.mdx
@@ -27,7 +27,7 @@ To extract metadata from Stitch, users first need to create API credentials:
- `Token`: Token to access Stitch metadata.
## Metadata Ingestion
-# Connection Details
+## Connection Details