1,299 changes: 1,299 additions & 0 deletions notebooks/move_training_data_across_analyzers.ipynb
**Contributor:** Do we need this file? I saw that this was from an October commit


34 changes: 28 additions & 6 deletions python/di_to_cu_migration_tool/README.md
@@ -1,13 +1,13 @@
# Document Intelligence to Content Understanding Migration Tool (Python)

Welcome! This tool helps convert your Document Intelligence (DI) datasets to the Content Understanding (CU) **Preview.2** 2025-05-01-preview format, as used in AI Foundry. The following DI versions are supported:
Welcome! This tool helps convert your Document Intelligence (DI) datasets to the Content Understanding (CU) **GA** 2025-11-01 format, as used in AI Foundry. The following DI versions are supported:
**Contributor:** I think it would be best to support both CU Preview and GA conversions?


- Custom Extraction Model DI 3.1 GA (2023-07-31) to DI 4.0 GA (2024-11-30) (Document Intelligence Studio) → DI-version = neural
- Document Field Extraction Model 4.0 Preview (2024-07-31-preview) (AI Foundry / AI Services / Vision + Document / Document Field Extraction) → DI-version = generative

To identify the version of your Document Intelligence dataset, please consult the sample documents in this folder to match your format. You can also verify the version by reviewing your DI project's user experience. For instance, Custom Extraction DI 3.1/4.0 GA appears in Document Intelligence Studio (https://documentintelligence.ai.azure.com/studio), whereas Document Field Extraction DI 4.0 Preview is only available on Azure AI Foundry's preview service (https://ai.azure.com/explore/aiservices/vision/document/extraction).

For migrating from these DI versions to Content Understanding Preview.2, this tool first converts the DI dataset into a CU-compatible format. After conversion, you can create a Content Understanding Analyzer trained on your converted CU dataset. Additionally, you have the option to test its quality against any sample documents.
For migrating from these DI versions to Content Understanding GA (2025-11-01), this tool first converts the DI dataset into a CU-compatible format. After conversion, you can create a Content Understanding Analyzer trained on your converted CU dataset. Additionally, you have the option to test its quality against any sample documents.

## Details About the Tools

@@ -27,8 +27,26 @@ Here is a detailed breakdown of the three CLI tools and their functionality:
* **call_analyze.py**
* This CLI tool verifies that the migration completed successfully and assesses the quality of the created analyzer.


## Setup

## Prerequisites

⚠️ **IMPORTANT: Before using this migration tool**, ensure your Azure AI Foundry resource is properly configured for Content Understanding:

1. **Configure Default Model Deployments**: You must set default model deployments for Content Understanding in your Foundry resource before creating or running analyzers.

   To do this, walk through the prerequisites here:
- [REST API Quickstart Guide](https://learn.microsoft.com/en-us/azure/ai-services/content-understanding/quickstart/use-rest-api?tabs=portal%2Cdocument)

   For more details about default model deployments, see this documentation:
- [Models and Deployments Documentation](https://learn.microsoft.com/en-us/azure/ai-services/content-understanding/concepts/models-deployments)

2. **Verify you can create and use a basic Content Understanding analyzer** in your Azure AI Foundry resource before attempting migration. This ensures all prerequisites are met (a minimal verification sketch follows this list).

3. Complete all setup steps outlined in the REST API documentation above, including authentication and model deployment configuration.

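As a quick sanity check for prerequisite 2, the sketch below creates a throwaway analyzer against the GA endpoint and reports the HTTP status. This is only an illustration: the endpoint, key, analyzer ID, and model names are placeholders you must replace with your own values, and it assumes key-based authentication via the `Ocp-Apim-Subscription-Key` header.

```python
import requests

# Placeholder values -- replace with your own resource details
ENDPOINT = "https://yourEndpoint"           # Azure AI Foundry resource endpoint
SUBSCRIPTION_KEY = "<your-api-key>"         # or swap in an AAD bearer token
API_VERSION = "2025-11-01"

analyzer_id = "migration-prereq-check"      # throwaway analyzer ID for the check
url = f"{ENDPOINT}/contentunderstanding/analyzers/{analyzer_id}?api-version={API_VERSION}"

# Minimal body mirroring the defaults this tool writes into analyzer.json
body = {
    "baseAnalyzerId": "prebuilt-document",
    "models": {
        "completion": "gpt-4.1",                # must match your default model deployments
        "embedding": "text-embedding-3-large",
    },
    "fieldSchema": {},
}

response = requests.put(url, json=body, headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY})
print(response.status_code, response.headers.get("Operation-Location", ""))
# A 201 means creation was accepted and your deployments are configured;
# a 400/401 usually means the prerequisites above are not yet complete.
```
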
### Tool Setup
Please follow these steps to set up the tool:

1. Install dependencies by running:
@@ -43,7 +61,7 @@ Please follow these steps to set up the tool:
- **SUBSCRIPTION_KEY:** Update to your Azure AI Service API Key or Subscription ID to authenticate the API requests.
- Locate your API Key here: ![Azure AI Service Endpoints With Keys](assets/endpoint-with-keys.png)
- If using Azure Active Directory (AAD), please refer to your Subscription ID: ![Azure AI Service Subscription ID](assets/subscription-id.png)
- **API_VERSION:** This is preset to the CU Preview.2 version; no changes are needed.
- **API_VERSION:** This is preset to the CU GA version (2025-11-01); no changes are needed.

## How to Locate Your Document Field Extraction Dataset for Migration

@@ -73,8 +91,12 @@ To obtain SAS URLs for a file or folder for any container URL arguments, please
3. Configure permissions and expiry for your SAS URL as follows:

- For the **DI source dataset**, please select permissions: _**Read & List**_
https://jfilcikditestdata.blob.core.windows.net/didata?sv=2025-07-05&spr=https&st=2025-12-16T22%3A17%3A06Z&se=2025-12-17T22%3A17%3A06Z&sr=c&sp=rl&sig=nvUIelZQ9yWEJx3jA%2FjUOIdHn6OVnp5gvKSJ3zgzwvE%3D
**Collaborator:** Need to remove this secret SAS URL.


- For the **CU target dataset**, please select permissions: _**Read, Add, Create, & Write**_

https://jfilcikditestdata.blob.core.windows.net/cudata?sv=2025-07-05&spr=https&st=2025-12-16T22%3A19%3A39Z&se=2025-12-17T22%3A19%3A39Z&sr=c&sp=racwl&sig=K82dxEFNpYhuf5JRq3xJ4vc5SYE8A7FfsBnTJbB1VJY%3D
**Collaborator:** We won't want to check in the secret blob SAS URL.


After configuring, click **Generate SAS Token and URL** and copy the URL shown under **Blob SAS URL**.

![Generate SAS Pop-Up](assets/generate-sas-pop-up.png)
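If you prefer to generate these SAS URLs programmatically instead of through the portal, here is a rough sketch using the `azure-storage-blob` SDK. The account name, key, and container names are placeholders, and the permissions mirror the requirements listed above.

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

# Placeholders -- replace with your storage account details
ACCOUNT_NAME = "yourstorageaccount"
ACCOUNT_KEY = "<your-account-key>"

def container_sas_url(container: str, permission: ContainerSasPermissions) -> str:
    """Build a container-level SAS URL valid for 24 hours."""
    token = generate_container_sas(
        account_name=ACCOUNT_NAME,
        container_name=container,
        account_key=ACCOUNT_KEY,
        permission=permission,
        expiry=datetime.now(timezone.utc) + timedelta(hours=24),
    )
    return f"https://{ACCOUNT_NAME}.blob.core.windows.net/{container}?{token}"

# DI source dataset: Read & List
di_source_url = container_sas_url("didata", ContainerSasPermissions(read=True, list=True))
# CU target dataset: Read, Add, Create & Write
cu_target_url = container_sas_url(
    "cudata", ContainerSasPermissions(read=True, add=True, create=True, write=True)
)
```
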
@@ -155,7 +177,7 @@ Below are common issues you might encounter when creating an analyzer or running
- **400 Bad Request** errors:
Please validate the following:
- The endpoint URL is valid. Example:
`https://yourEndpoint/contentunderstanding/analyzers/yourAnalyzerID?api-version=2025-05-01-preview`
`https://yourEndpoint/contentunderstanding/analyzers/yourAnalyzerID?api-version=2025-11-01`
- Your converted CU dataset respects the naming constraints below. If needed, please manually correct the `analyzer.json` fields:
- Field names start with a letter or underscore
- Field name length must be between 1 and 64 characters
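As a convenience (not part of the tool), here is a small sketch that flags field names violating the constraints listed above; it assumes the converted schema keeps its field definitions under `fieldSchema.fields` in `analyzer.json`.

```python
import json
from pathlib import Path

analyzer = json.loads(Path("analyzer.json").read_text())
fields = analyzer.get("fieldSchema", {}).get("fields", {})  # assumed layout

for name in fields:
    # Listed constraints: starts with a letter or underscore, length 1-64
    valid = 1 <= len(name) <= 64 and (name[0].isalpha() or name[0] == "_")
    if not valid:
        print(f"Field name violates the naming constraints: {name!r}")
```
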
@@ -174,7 +196,7 @@ Below are common issues you might encounter when creating an analyzer or running

- **400 Bad Request**:
This implies that you might have an incorrect endpoint or SAS URL. Please ensure that your endpoint is valid and that you are using the correct SAS URL for the document:
`https://yourendpoint/contentunderstanding/analyzers/yourAnalyzerID:analyze?api-version=2025-05-01-preview`
`https://yourendpoint/contentunderstanding/analyzers/yourAnalyzerID:analyze?api-version=2025-11-01`
Confirm you are using the correct SAS URL for the document.

- **401 Unauthorized**:
@@ -189,4 +211,4 @@ Below are common issues you might encounter when creating an analyzer or running
2. Signature field types (e.g., in previous DI versions) are not yet supported in Content Understanding. These will be ignored during migration when creating the analyzer.
3. The content of your training documents is retained in the CU model's metadata, specifically under storage. You can find more details at:
https://learn.microsoft.com/en-us/legal/cognitive-services/content-understanding/transparency-note?toc=%2Fazure%2Fai-services%2Fcontent-understanding%2Ftoc.json&bc=%2Fazure%2Fai-services%2Fcontent-understanding%2Fbreadcrumb%2Ftoc.json
4. All conversions are for Content Understanding preview.2 version only.
4. All conversions are for the Content Understanding GA (2025-11-01) version only.
2 changes: 1 addition & 1 deletion python/di_to_cu_migration_tool/constants.py
@@ -1,6 +1,6 @@
# Supported DI versions
DI_VERSIONS = ["generative", "neural"]
CU_API_VERSION = "2025-05-01-preview"
CU_API_VERSION = "2025-11-01"

# constants
MAX_FIELD_COUNT = 100
19 changes: 17 additions & 2 deletions python/di_to_cu_migration_tool/cu_converter_generative.py
@@ -48,7 +48,7 @@ def format_angle(angle: float) -> float:
formatted_num = f"{rounded_angle:.7f}".rstrip('0') # Remove trailing zeros
return float(formatted_num)

def convert_fields_to_analyzer(fields_json_path: Path, analyzer_prefix: Optional[str], target_dir: Path, field_definitions: FieldDefinitions) -> dict:
def convert_fields_to_analyzer(fields_json_path: Path, analyzer_prefix: Optional[str], target_dir: Path, field_definitions: FieldDefinitions, target_container_sas_url: str = None, target_blob_folder: str = None) -> dict:
"""
Convert DI 4.0 preview Custom Document fields.json to analyzer.json format.
Args:
@@ -79,7 +79,11 @@ def convert_fields_to_analyzer(fields_json_path: Path, analyzer_prefix: Optional
# build analyzer.json appropriately
analyzer_data = {
"analyzerId": analyzer_id,
"baseAnalyzerId": "prebuilt-documentAnalyzer",
"baseAnalyzerId": "prebuilt-document",
"models": {
"completion": "gpt-4.1",
"embedding": "text-embedding-3-large"
},
"config": {
"returnDetails": True,
# Add the following line as a temp workaround before service issue is fixed.
@@ -121,6 +125,17 @@ def convert_fields_to_analyzer(fields_json_path: Path, analyzer_prefix: Optional
else:
analyzer_json_path = fields_json_path.parent / 'analyzer.json'

# Add knowledgeSources section if container info is provided
if target_container_sas_url and target_blob_folder:
analyzer_data["knowledgeSources"] = [
{
"kind": "labeledData",
"containerUrl": target_container_sas_url,
"prefix": target_blob_folder,
"fileListPath": ""
}
]

# Ensure target directory exists
analyzer_json_path.parent.mkdir(parents=True, exist_ok=True)

19 changes: 17 additions & 2 deletions python/di_to_cu_migration_tool/cu_converter_neural.py
@@ -37,7 +37,7 @@ def convert_bounding_regions_to_source(page_number: int, polygon: list) -> str:
source = f"D({page_number},{polygon_str})"
return source

def convert_fields_to_analyzer_neural(fields_json_path: Path, analyzer_prefix: Optional[str], target_dir: Optional[Path], field_definitions: FieldDefinitions) -> Tuple[dict, dict]:
def convert_fields_to_analyzer_neural(fields_json_path: Path, analyzer_prefix: Optional[str], target_dir: Optional[Path], field_definitions: FieldDefinitions, target_container_sas_url: str = None, target_blob_folder: str = None) -> Tuple[dict, dict]:
"""
Convert DI 3.1/4.0GA Custom Neural fields.json to analyzer.json format.
Args:
@@ -67,7 +67,11 @@ def convert_fields_to_analyzer_neural(fields_json_path: Path, analyzer_prefix: O
# Build analyzer.json content
analyzer_data = {
"analyzerId": analyzer_prefix,
"baseAnalyzerId": "prebuilt-documentAnalyzer",
"baseAnalyzerId": "prebuilt-document",
"models": {
"completion": "gpt-4.1",
**Collaborator:** We will need to set completion and embedding models in multiple converters. It will be good to have these default values in constants.py to be reused. Another possible option could be allowing the users to put these as arguments when running the converter.

"embedding": "text-embedding-3-large"
},
"config": {
"returnDetails": True,
# Add the following line as a temp workaround before service issue is fixed.
@@ -132,6 +136,17 @@ def convert_fields_to_analyzer_neural(fields_json_path: Path, analyzer_prefix: O
else:
analyzer_json_path = fields_json_path.parent / 'analyzer.json'

# Add knowledgeSources section if container info is provided
if target_container_sas_url and target_blob_folder:
analyzer_data["knowledgeSources"] = [
{
"kind": "labeledData",
"containerUrl": target_container_sas_url,
"prefix": target_blob_folder,
"fileListPath": ""
}
]

# Ensure target directory exists
analyzer_json_path.parent.mkdir(parents=True, exist_ok=True)

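Following up on the Collaborator's suggestion above about the hard-coded model names: one possible shape is to keep the defaults in `constants.py` and import them in each converter. This is only a sketch of the suggestion, not part of this PR, and the names are illustrative.

```python
# constants.py (proposed addition)
DEFAULT_COMPLETION_MODEL = "gpt-4.1"
DEFAULT_EMBEDDING_MODEL = "text-embedding-3-large"

# cu_converter_generative.py / cu_converter_neural.py / get_ocr.py (proposed usage)
from constants import DEFAULT_COMPLETION_MODEL, DEFAULT_EMBEDDING_MODEL

models = {
    "completion": DEFAULT_COMPLETION_MODEL,
    "embedding": DEFAULT_EMBEDDING_MODEL,
}
```
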
14 changes: 8 additions & 6 deletions python/di_to_cu_migration_tool/di_to_cu_converter.py
@@ -8,7 +8,7 @@
import shutil
import tempfile
import typer
from typing import Tuple
from typing import Optional, Tuple

# imports from external packages (in requirements.txt)
from rich import print # For colored output
@@ -161,7 +161,7 @@ def main(
print(f"[yellow]WARNING: The following signatures were removed from the dataset: {removed_signatures}[/yellow]\n")

print("Second: Running DI to CU dataset conversion...")
analyzer_data, ocr_files = running_cu_conversion(temp_dir, temp_target_dir, DI_version, analyzer_prefix, removed_signatures)
analyzer_data, ocr_files = running_cu_conversion(temp_dir, temp_target_dir, DI_version, analyzer_prefix, removed_signatures, target_container_sas_url, target_blob_folder)

# Run OCR on the pdf files
run_cu_layout_ocr(ocr_files, temp_target_dir, subscription_key)
@@ -232,15 +232,17 @@ def running_field_type_conversion(temp_source_dir: Path, temp_dir: Path, DI_vers

return removed_signatures

def running_cu_conversion(temp_dir: Path, temp_target_dir: Path, DI_version: str, analyzer_prefix: str, removed_signatures: list) -> Tuple[dict, list]:
def running_cu_conversion(temp_dir: Path, temp_target_dir: Path, DI_version: str, analyzer_prefix: Optional[str], removed_signatures: list, target_container_sas_url: str, target_blob_folder: str) -> Tuple[dict, list]:
"""
Function to run the DI to CU conversion
Function to run the CU conversion
Args:
temp_dir (Path): The path to the source directory
temp_target_dir (Path): The path to the target directory
DI_version (str): The version of DI being used
analyzer_prefix (str): The prefix for the analyzer name
removed_signatures (list): The list of removed signatures that will not be used in the CU converter
target_container_sas_url (str): The target container SAS URL for training data
target_blob_folder (str): The target blob folder prefix for training data
"""
# Creating a FieldDefinitions object to handle the conversion of definitions in the fields.json
field_definitions = FieldDefinitions()
Expand All @@ -251,9 +253,9 @@ def running_cu_conversion(temp_dir: Path, temp_target_dir: Path, DI_version: str

assert fields_path.exists(), "fields.json is needed. Fields.json is missing from the given dataset."
if DI_version == "generative":
analyzer_data = cu_converter_generative.convert_fields_to_analyzer(fields_path, analyzer_prefix, temp_target_dir, field_definitions)
analyzer_data = cu_converter_generative.convert_fields_to_analyzer(fields_path, analyzer_prefix, temp_target_dir, field_definitions, target_container_sas_url, target_blob_folder)
elif DI_version == "neural":
analyzer_data, fields_dict = cu_converter_neural.convert_fields_to_analyzer_neural(fields_path, analyzer_prefix, temp_target_dir, field_definitions)
analyzer_data, fields_dict = cu_converter_neural.convert_fields_to_analyzer_neural(fields_path, analyzer_prefix, temp_target_dir, field_definitions, target_container_sas_url, target_blob_folder)

ocr_files = [] # List to store paths to pdf files to get OCR results from later
for file in files:
16 changes: 9 additions & 7 deletions python/di_to_cu_migration_tool/get_ocr.py
@@ -70,7 +70,11 @@ def build_analyzer(credential, current_token, host, api_version, subscriptionKey
request_body = {
"analyzerId": analyzer_id,
"description": "Sample analyzer",
"baseAnalyzerId": "prebuilt-documentAnalyzer",
"baseAnalyzerId": "prebuilt-document",
"models": {
"completion": "gpt-4.1",
"embedding": "text-embedding-3-large"
},
"config": {
"returnDetails": True,
"enableOcr": True,
@@ -82,8 +86,7 @@
"fieldSchema": {},
"warnings": [],
"status": "ready",
"processingLocation": "geography",
"mode": "standard"
"processingLocation": "geography"
}
endpoint = f"{host}/contentunderstanding/analyzers/{analyzer_id}?api-version={api_version}"
print("[yellow]Creating sample analyzer to attain CU Layout results...[/yellow]")
@@ -138,9 +141,8 @@ def run_cu_layout_ocr(input_files: list, output_dir_string: str, subscription_ke
output_dir = Path(output_dir_string)
output_dir.mkdir(parents=True, exist_ok=True)

# Need to create analyzer with empty schema
analyzer_id = build_analyzer(credential, current_token, host, api_version, subscription_key)
url = f"{host}/contentunderstanding/analyzers/{analyzer_id}:analyze?api-version={api_version}"
# Use prebuilt-read analyzer directly - no need to create a custom analyzer
url = f"{host}/contentunderstanding/analyzers/prebuilt-read:analyze?api-version={api_version}"

for file in input_files:
try:
Expand All @@ -150,7 +152,7 @@ def run_cu_layout_ocr(input_files: list, output_dir_string: str, subscription_ke
current_token = get_token(credential, current_token)
headers = {
"Authorization": f"Bearer {current_token.token}",
"Apim-Subscription-id": f"{subscription_key}",
"Ocp-Apim-Subscription-Key": f"{subscription_key}",
"Content-Type": "application/pdf",
}

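For context on the switch to `prebuilt-read` above: the analyze call is asynchronous, so you POST the PDF bytes and then poll the operation URL returned by the service. Below is a rough sketch of that flow; the endpoint, key, and file path are placeholders, and the `Operation-Location` polling pattern follows the REST quickstart linked in the README.

```python
import time
import requests

# Placeholders -- replace with your own values
ENDPOINT = "https://yourEndpoint"
SUBSCRIPTION_KEY = "<your-api-key>"
API_VERSION = "2025-11-01"

url = f"{ENDPOINT}/contentunderstanding/analyzers/prebuilt-read:analyze?api-version={API_VERSION}"
headers = {
    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    "Content-Type": "application/pdf",
}

with open("sample.pdf", "rb") as f:
    response = requests.post(url, headers=headers, data=f.read())
response.raise_for_status()

# Poll the async operation until the OCR results are ready
operation_url = response.headers["Operation-Location"]
while True:
    result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}).json()
    if result.get("status", "").lower() in ("succeeded", "failed"):
        break
    time.sleep(2)
print(result.get("status"))
```
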
@@ -3,7 +3,7 @@
"status": "Succeeded",
"result": {
"analyzerId": "mySampleAnalyzer",
"apiVersion": "2025-05-01-preview",
"apiVersion": "2025-11-01",
"createdAt": "2025-05-30T15:47:15Z",
"warnings": [],
"contents": [