From 6d04e205b7fc3b919f1f41994edea260d08fd48e Mon Sep 17 00:00:00 2001
From: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Date: Wed, 19 Nov 2025 08:34:11 +0000
Subject: [PATCH 1/4] Add n8n integration documentation

Co-Authored-By: Ryan Seams

---
 fern/docs.yml                        |   2 +
 fern/pages/06-integrations/index.mdx |   7 +
 fern/pages/06-integrations/n8n.mdx   | 258 +++++++++++++++++++++++++++
 3 files changed, 267 insertions(+)
 create mode 100644 fern/pages/06-integrations/n8n.mdx

diff --git a/fern/docs.yml b/fern/docs.yml
index cf3612d9..c0000c9e 100644
--- a/fern/docs.yml
+++ b/fern/docs.yml
@@ -333,6 +333,8 @@ navigation:
         path: pages/06-integrations/zapier.mdx
       - page: Make
         path: pages/06-integrations/make.mdx
+      - page: n8n
+        path: pages/06-integrations/n8n.mdx
       - page: Postman
         path: pages/06-integrations/postman.mdx
       - section: Meeting transcriber tools

diff --git a/fern/pages/06-integrations/index.mdx b/fern/pages/06-integrations/index.mdx
index 70c62d41..f1c9acf5 100644
--- a/fern/pages/06-integrations/index.mdx
+++ b/fern/pages/06-integrations/index.mdx
@@ -59,6 +59,13 @@ AssemblyAI seamlessly integrates with a variety of tools and platforms to enhanc
   >
     Build complex automation scenarios with Make's visual workflow builder.
+
+    Automate workflows with n8n's fair-code automation platform and integrate AssemblyAI.
+

diff --git a/fern/pages/06-integrations/n8n.mdx b/fern/pages/06-integrations/n8n.mdx
new file mode 100644
index 00000000..aa1f3a10
--- /dev/null
+++ b/fern/pages/06-integrations/n8n.mdx
@@ -0,0 +1,258 @@
+---
+title: "n8n Integration with AssemblyAI"
+description: "Use AssemblyAI with n8n to transcribe and analyze audio in your automation workflows."
+hide-nav-links: true
+---
+
+[n8n](https://n8n.io/) is a powerful workflow automation tool that allows you to connect various services and build complex automations without extensive coding. With n8n's HTTP Request node, you can integrate AssemblyAI's speech-to-text and audio intelligence capabilities into your workflows to transcribe audio files, analyze conversations, and extract insights.
+
+n8n offers both a cloud-hosted version and a self-hosted option, giving you flexibility in how you deploy your automations.
+
+## Prerequisites
+
+Before you begin, you'll need:
+
+- An [AssemblyAI API key](https://www.assemblyai.com/app/api-keys)
+- An n8n account (either [n8n Cloud](https://app.n8n.cloud/register) or a self-hosted instance)
+
+## Quickstart
+
+This guide shows you how to transcribe an audio file using AssemblyAI in an n8n workflow.
+
+
+
+Create a new workflow in n8n or open an existing one. Add an HTTP Request node to your workflow canvas.
+
+
+
+
+
+Configure the HTTP Request node to submit a transcription request to AssemblyAI:
+
+1. Set **Method** to `POST`
+2. Set **URL** to `https://api.assemblyai.com/v2/transcript`
+3. Under **Authentication**, select **Generic Credential Type** and choose **Header Auth**
+4. Create a new credential:
+   - Set **Name** to `authorization`
+   - Set **Value** to your AssemblyAI API key
+5. Under **Body**, select **JSON** and add your transcription parameters:
+
+```json
+{
+  "audio_url": "https://example.com/your-audio-file.mp3"
+}
+```
+
+
+
+
+
+Add a second HTTP Request node to poll for the transcript result:
+
+1. Set **Method** to `GET`
+2. Set **URL** to `https://api.assemblyai.com/v2/transcript/{{$json["id"]}}`
+   - This uses the transcript ID from the previous step
+3. Use the same **Header Auth** credential you created earlier
+
+
+
+
+
+Add a **Wait** node between the two HTTP Request nodes to give AssemblyAI time to process the audio. Configure it to wait for a reasonable amount of time based on your audio length (e.g., 30 seconds for short files).
+
+Alternatively, you can use a **Loop** node with a condition to poll the transcript status until it's `completed` or `error`.
+
+
+
+
+
+Test your workflow. The final HTTP Request node will return the completed transcript with the transcribed text and any additional features you enabled.
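+
+If you want to sanity-check the two HTTP Request nodes outside n8n, the following minimal Python sketch performs the same submit-and-poll sequence directly against the AssemblyAI API. It is an illustration only: it assumes the third-party `requests` package, a placeholder API key, and the example audio URL used above.
+
+```python
+import time
+import requests
+
+API_KEY = "<YOUR_API_KEY>"  # placeholder: your AssemblyAI API key
+headers = {"authorization": API_KEY}
+
+# Submit the transcription request (what the first HTTP Request node does)
+transcript = requests.post(
+    "https://api.assemblyai.com/v2/transcript",
+    headers=headers,
+    json={"audio_url": "https://example.com/your-audio-file.mp3"},
+).json()
+
+# Poll until processing finishes (what the Wait node and second HTTP Request node do)
+while transcript["status"] not in ("completed", "error"):
+    time.sleep(5)
+    transcript = requests.get(
+        f"https://api.assemblyai.com/v2/transcript/{transcript['id']}",
+        headers=headers,
+    ).json()
+
+if transcript["status"] == "completed":
+    print(transcript["text"])
+else:
+    print(transcript["error"])
+```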
+
+
+
+
+## Using AssemblyAI Features
+
+You can enable various AssemblyAI features by adding parameters to the JSON body in your initial transcription request:
+
+### Speaker Diarization
+
+Identify different speakers in your audio:
+
+```json
+{
+  "audio_url": "https://example.com/your-audio-file.mp3",
+  "speaker_labels": true
+}
+```
+
+### Sentiment Analysis
+
+Analyze the sentiment of the transcribed text:
+
+```json
+{
+  "audio_url": "https://example.com/your-audio-file.mp3",
+  "sentiment_analysis": true
+}
+```
+
+### Auto Chapters
+
+Automatically segment your audio into chapters:
+
+```json
+{
+  "audio_url": "https://example.com/your-audio-file.mp3",
+  "auto_chapters": true
+}
+```
+
+### PII Redaction
+
+Redact personally identifiable information:
+
+```json
+{
+  "audio_url": "https://example.com/your-audio-file.mp3",
+  "redact_pii": true,
+  "redact_pii_policies": ["medical_condition", "credit_card_number", "us_social_security_number"]
+}
+```
+
+### Entity Detection
+
+Extract entities like names, organizations, and locations:
+
+```json
+{
+  "audio_url": "https://example.com/your-audio-file.mp3",
+  "entity_detection": true
+}
+```
+
+## Common Workflow Patterns
+
+### Transcribe and Store in Google Sheets
+
+1. Use the HTTP Request nodes to transcribe audio with AssemblyAI
+2. Add a Google Sheets node to append the transcript to a spreadsheet
+3. Map the transcript text and metadata to your sheet columns
+
+### Transcribe and Send via Email
+
+1. Transcribe audio using AssemblyAI
+2. Add a Gmail or Send Email node
+3. Include the transcript text in the email body
+
+### Transcribe and Analyze with AI
+
+1. Transcribe audio with AssemblyAI
+2. Add an OpenAI or other LLM node
+3. Use the transcript as input for further analysis, summarization, or question answering
+
+### Process Audio from Cloud Storage
+
+1. Add a trigger node for Google Drive, Dropbox, or S3
+2. When a new audio file is uploaded, get its public URL
+3. Submit the URL to AssemblyAI for transcription
+4. Store or process the results as needed
+
+## Polling for Transcript Completion
+
+Since transcription is asynchronous, you need to poll for the result. Here's a recommended approach:
+
+1. Submit the transcription request and capture the transcript ID
+2. Use a **Loop** node with the following configuration:
+   - Add an HTTP Request node inside the loop to check the transcript status
+   - Add an **IF** node to check if `status` equals `completed` or `error`
+   - If not complete, add a **Wait** node (e.g., 5 seconds) before the next iteration
+   - Exit the loop when the status is `completed` or `error`
+
+## Using Webhooks for Completion Notifications
+
+For a more efficient approach, you can use webhooks instead of polling:
+
+
+
+Add a **Webhook** node to your workflow and copy the webhook URL.
+
+
+
+
+
+In your initial transcription request, add the webhook URL:
+
+```json
+{
+  "audio_url": "https://example.com/your-audio-file.mp3",
+  "webhook_url": "https://your-n8n-instance.com/webhook/your-webhook-id"
+}
+```
+
+
+
+
+
+When the transcription is complete, AssemblyAI will send a POST request to your webhook with the transcript ID. You can then retrieve the full transcript using another HTTP Request node.
+
+
+
+
+## Uploading Local Files
+
+If your audio file is not publicly accessible, you can upload it to AssemblyAI first:
+
+
+
+Add an HTTP Request node to upload the file:
+
+1. Set **Method** to `POST`
+2. Set **URL** to `https://api.assemblyai.com/v2/upload`
+3. Use **Header Auth** with your API key
+4. Under **Body**, select **Binary File** and choose the field that contains your audio file (see the request-level sketch below)
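+
+For reference, here is the same upload call made directly against the API — a minimal Python sketch, assuming the third-party `requests` package, a placeholder API key, and a hypothetical local file `audio.mp3`:
+
+```python
+import requests
+
+API_KEY = "<YOUR_API_KEY>"  # placeholder: your AssemblyAI API key
+
+# Send the raw file bytes as the request body, as the HTTP Request node does
+with open("audio.mp3", "rb") as f:
+    response = requests.post(
+        "https://api.assemblyai.com/v2/upload",
+        headers={"authorization": API_KEY},
+        data=f,
+    )
+response.raise_for_status()
+
+# Use this upload_url as the audio_url in your transcription request
+print(response.json()["upload_url"])
+```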
+
+
+
+
+
+The upload will return an `upload_url`. Use this URL as the `audio_url` in your transcription request.
+
+
+
+
+## Error Handling
+
+Add error handling to your workflow to manage failed transcriptions:
+
+1. Use the **IF** node to check if the transcript status is `error`
+2. Add nodes to handle the error case (e.g., send a notification, log the error)
+3. Access the error message in `$json["error"]`
+
+## Additional Resources
+
+- [n8n AssemblyAI Integration Page](https://n8n.io/integrations/assemblyai/)
+- [n8n HTTP Request Node Documentation](https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.httprequest/)
+- [AssemblyAI API Reference](/api-reference)
+- [n8n Workflow Templates](https://n8n.io/workflows/)
+
+## Example Use Cases
+
+### Meeting Transcription Pipeline
+
+Automatically transcribe meeting recordings uploaded to cloud storage, surface key people, topics, and tone with entity detection and sentiment analysis, and send summaries to team members.
+
+### Podcast Processing
+
+Transcribe podcast episodes, generate chapters automatically, create searchable transcripts, and publish them to your CMS.
+
+### Customer Support Analysis
+
+Transcribe support calls, analyze sentiment, detect PII for compliance, and store insights in your CRM or database.
+
+### Content Moderation
+
+Transcribe user-generated audio content, use content moderation to flag inappropriate content, and automatically filter or review flagged items.

From e968608dca4a684138c7ca7a4fc08486420a7750 Mon Sep 17 00:00:00 2001
From: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Date: Wed, 19 Nov 2025 08:41:59 +0000
Subject: [PATCH 2/4] Fix n8n integration URL slug to match Fern-generated slug

Co-Authored-By: Ryan Seams

---
 fern/pages/06-integrations/index.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fern/pages/06-integrations/index.mdx b/fern/pages/06-integrations/index.mdx
index f1c9acf5..4a224c42 100644
--- a/fern/pages/06-integrations/index.mdx
+++ b/fern/pages/06-integrations/index.mdx
@@ -62,7 +62,7 @@ AssemblyAI seamlessly integrates with a variety of tools and platforms to enhanc
     Automate workflows with n8n's fair-code automation platform and integrate AssemblyAI.
 

From 47c6432a8ba75518b152a505184561ebebaecf00 Mon Sep 17 00:00:00 2001
From: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Date: Fri, 21 Nov 2025 00:35:40 +0000
Subject: [PATCH 3/4] Update n8n integration docs with official messaging and
 fix CSS syntax error

- Update n8n integration documentation to match official n8n.io page messaging
- Add emphasis on 1000+ app integrations and n8n's automation platform
- Add popular integration pairings section (Google Sheets, Gmail, Slack, OpenAI, etc.)
- Fix CSS syntax error in custom-styles.css (missing closing parenthesis in rgba functions) - This CSS fix unblocks CI for all PRs Co-Authored-By: Ryan Seams --- fern/assets/custom-styles.css | 6 +++--- fern/pages/06-integrations/n8n.mdx | 20 ++++++++++++++++++-- 2 files changed, 21 insertions(+), 5 deletions(-) diff --git a/fern/assets/custom-styles.css b/fern/assets/custom-styles.css index 3e5a11a4..3f3083f4 100644 --- a/fern/assets/custom-styles.css +++ b/fern/assets/custom-styles.css @@ -344,11 +344,11 @@ body#fern-docs .fern-card h2 { .gradient-card .fern-card .text-base { font-weight: 500; - color: rgba(var(--body-text),var(--tw-text-opacity,1) !important; + color: rgba(var(--body-text),var(--tw-text-opacity,1)) !important; } .gradient-card .fern-card .t-muted p { - color: rgba(var(--body-text),var(--tw-text-opacity,1) !important; + color: rgba(var(--body-text),var(--tw-text-opacity,1)) !important; } .sdk-card .fern-card { @@ -429,4 +429,4 @@ body#fern-docs .fern-card h2 { @keyframes oppositePulse { 0%, 100% { transform: scale(1.1); } 50% { transform: scale(1); } -} \ No newline at end of file +} diff --git a/fern/pages/06-integrations/n8n.mdx b/fern/pages/06-integrations/n8n.mdx index aa1f3a10..2752aa5e 100644 --- a/fern/pages/06-integrations/n8n.mdx +++ b/fern/pages/06-integrations/n8n.mdx @@ -1,10 +1,12 @@ --- title: "n8n Integration with AssemblyAI" -description: "Use AssemblyAI with n8n to transcribe and analyze audio in your automation workflows." +description: "Integrate AssemblyAI with 1000+ apps and services using n8n's automation platform." hide-nav-links: true --- -[n8n](https://n8n.io/) is a powerful workflow automation tool that allows you to connect various services and build complex automations without extensive coding. With n8n's HTTP Request node, you can integrate AssemblyAI's speech-to-text and audio intelligence capabilities into your workflows to transcribe audio files, analyze conversations, and extract insights. +Unlock the full potential of AssemblyAI and [n8n's](https://n8n.io/) automation platform by connecting AssemblyAI's speech-to-text and audio intelligence capabilities with over 1,000 apps, data sources, services, and n8n's built-in AI features. + +Use n8n's pre-authenticated HTTP Request node to create powerful automations with AssemblyAI, giving you the flexibility to build workflows on any stack. The AssemblyAI integration is built and maintained by AssemblyAI and verified by n8n. n8n offers both a cloud-hosted version and a self-hosted option, giving you flexibility in how you deploy your automations. 
@@ -15,6 +17,20 @@ Before you begin, you'll need: - An [AssemblyAI API key](https://www.assemblyai.com/app/api-keys) - An n8n account (either [n8n Cloud](https://app.n8n.cloud/register) or a self-hosted instance) +## Popular Integration Pairings + +AssemblyAI works seamlessly with n8n's most popular nodes, enabling you to build powerful automation workflows: + +- **Google Sheets** - Store transcripts and analysis results in spreadsheets +- **Gmail** - Send transcription results via email +- **Slack** - Post transcripts to channels or send direct messages +- **OpenAI** - Combine transcription with AI analysis and summarization +- **Google Drive** - Automatically transcribe audio files from cloud storage +- **Notion** - Save transcripts to your knowledge base +- **Airtable** - Organize transcription data in databases +- **Discord** - Share transcripts in Discord channels +- **Telegram** - Send transcription results via Telegram bots + ## Quickstart This guide shows you how to transcribe an audio file using AssemblyAI in an n8n workflow. From 25971ae699cb0a2962ed6861e89efd2ccbcbaeed Mon Sep 17 00:00:00 2001 From: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com> Date: Fri, 21 Nov 2025 00:56:04 +0000 Subject: [PATCH 4/4] Add comprehensive n8n workflow tutorial with 5-step guide - Step 1: Upload audio file (local/remote/Google Drive options) - Step 2: Submit transcription with available features (speaker labels, language detection, etc.) - Step 3: Poll for completion with loop and status checking - Step 4: Process transcript output (format, save to Google Sheets, email, Slack) - Step 5: Retrieve additional data (redacted audio, sentences, paragraphs) - Step 6: Delete transcript for cleanup - Added webhook alternative to polling - Added error handling section - Added example workflow JSON for import - Included code examples for formatting transcripts Co-Authored-By: Ryan Seams --- fern/pages/06-integrations/n8n.mdx | 433 ++++++++++++++++++++--------- 1 file changed, 294 insertions(+), 139 deletions(-) diff --git a/fern/pages/06-integrations/n8n.mdx b/fern/pages/06-integrations/n8n.mdx index 2752aa5e..5a5c9b55 100644 --- a/fern/pages/06-integrations/n8n.mdx +++ b/fern/pages/06-integrations/n8n.mdx @@ -31,222 +31,377 @@ AssemblyAI works seamlessly with n8n's most popular nodes, enabling you to build - **Discord** - Share transcripts in Discord channels - **Telegram** - Send transcription results via Telegram bots -## Quickstart +## Complete Workflow Tutorial -This guide shows you how to transcribe an audio file using AssemblyAI in an n8n workflow. +This comprehensive tutorial walks you through building a complete AssemblyAI workflow in n8n that: +1. Uploads an audio file to AssemblyAI +2. Submits a transcription request with advanced features +3. Polls for completion +4. Processes the transcript output +5. Retrieves additional data (redacted audio, sentences) +6. Cleans up by deleting the transcript - - +### Step 1: Upload Audio File -Create a new workflow in n8n or open an existing one. Add an HTTP Request node to your workflow canvas. +The first step is to upload your audio file to AssemblyAI. You have several options depending on where your audio file is located. - +#### Option A: Upload Local File - +If you have a local audio file, use an HTTP Request node to upload it to AssemblyAI: -Configure the HTTP Request node to submit a transcription request to AssemblyAI: +1. Add an **HTTP Request** node to your workflow +2. 
Configure the node: + - **Method**: `POST` + - **URL**: `https://api.assemblyai.com/v2/upload` + - **Authentication**: Select **Generic Credential Type** → **Header Auth** + - **Name**: `authorization` + - **Value**: Your AssemblyAI API key + - **Body**: Select **Binary File** + - **Input Binary Field**: Select the field containing your audio file data -1. Set **Method** to `POST` -2. Set **URL** to `https://api.assemblyai.com/v2/transcript` -3. Under **Authentication**, select **Generic Credential Type** and choose **Header Auth** -4. Create a new credential: - - Set **Name** to `authorization` - - Set **Value** to your AssemblyAI API key -5. Under **Body**, select **JSON** and add your transcription parameters: +The response will contain an `upload_url` that you'll use in the next step. + +#### Option B: Use Remote File URL + +If your audio file is already hosted online (e.g., on a CDN, S3, or public URL), you can skip the upload step and use the URL directly in Step 2. + +#### Option C: Get File from Google Drive + +To transcribe files from Google Drive: + +1. Add a **Google Drive** trigger node or **Google Drive** node +2. Configure it to: + - **Operation**: Download a file + - **File ID**: The ID of the audio file you want to transcribe +3. Connect this to the upload HTTP Request node from Option A + +The Google Drive node will output the file as binary data, which the upload node will send to AssemblyAI. + +### Step 2: Submit Transcription Request + +Now submit the transcription request with your desired features enabled. + +1. Add an **HTTP Request** node +2. Configure the node: + - **Method**: `POST` + - **URL**: `https://api.assemblyai.com/v2/transcript` + - **Authentication**: Use the same **Header Auth** credential from Step 1 + - **Body**: Select **JSON** + - **JSON Body**: ```json { - "audio_url": "https://example.com/your-audio-file.mp3" + "audio_url": "{{$node['HTTP Request'].json['upload_url']}}", + "speaker_labels": true, + "language_detection": true, + "sentiment_analysis": true, + "entity_detection": true, + "auto_chapters": true } ``` - - - +If you're using a remote file URL (Option B), replace the `audio_url` value with your file URL. -Add a second HTTP Request node to poll for the transcript result: +#### Available Features -1. Set **Method** to `GET` -2. Set **URL** to `https://api.assemblyai.com/v2/transcript/{{$json["id"]}}` - - This uses the transcript ID from the previous step -3. Use the same **Header Auth** credential you created earlier +You can enable any combination of these features in your transcription request: - +- **speaker_labels**: Identify different speakers (diarization) +- **language_detection**: Automatically detect the language +- **sentiment_analysis**: Analyze sentiment of each sentence +- **entity_detection**: Extract names, organizations, locations +- **auto_chapters**: Automatically segment into chapters +- **auto_highlights**: Extract key highlights +- **content_safety**: Detect sensitive content +- **iab_categories**: Categorize content by IAB taxonomy +- **summarization**: Generate a summary (requires model parameter) +- **redact_pii**: Redact personally identifiable information +- **redact_pii_audio**: Generate redacted audio file +- **redact_pii_policies**: Specify which PII types to redact - +The response will contain a `transcript_id` that you'll use to poll for completion. -Add a **Wait** node between the two HTTP Request nodes to give AssemblyAI time to process the audio. 
Configure it to wait for a reasonable amount of time based on your audio length (e.g., 30 seconds for short files). +### Step 3: Poll for Transcription Completion -Alternatively, you can use a **Loop** node with a condition to poll the transcript status until it's `completed` or `error`. +Since transcription is asynchronous, you need to poll the API until the transcript is ready. - +1. Add a **Loop** node after the transcription request +2. Inside the loop, add an **HTTP Request** node: + - **Method**: `GET` + - **URL**: `https://api.assemblyai.com/v2/transcript/{{$node['HTTP Request 1'].json['id']}}` + - **Authentication**: Use the same **Header Auth** credential - +3. Add an **IF** node to check the status: + - **Condition**: `{{$json['status']}}` equals `completed` or `error` + - **True branch**: Exit the loop (transcript is ready) + - **False branch**: Continue polling -Test your workflow. The final HTTP Request node will return the completed transcript with the transcribed text and any additional features you enabled. +4. On the False branch, add a **Wait** node: + - **Wait Time**: 5 seconds + - Connect back to the HTTP Request node to poll again - - +5. Configure the Loop node: + - **Max Iterations**: 100 (to prevent infinite loops) -## Using AssemblyAI Features +When the transcript is complete, the response will contain the full transcript data including: +- `text`: The complete transcription +- `words`: Array of individual words with timestamps +- `utterances`: Array of speaker utterances (if speaker_labels enabled) +- `sentiment_analysis_results`: Sentiment data (if enabled) +- `entities`: Detected entities (if enabled) +- `chapters`: Auto-generated chapters (if enabled) -You can enable various AssemblyAI features by adding parameters to the JSON body in your initial transcription request: +### Step 4: Process Transcript Output -### Speaker Diarization +Now that you have the completed transcript, you can process it and send it to other services. -Identify different speakers in your audio: +#### Create Human-Readable Transcript -```json -{ - "audio_url": "https://example.com/your-audio-file.mp3", - "speaker_labels": true -} -``` +To format the transcript into a readable format: -### Sentiment Analysis +1. Add a **Code** node (JavaScript or Python) +2. 
Use this code to format utterances into a readable transcript: -Analyze the sentiment of the transcribed text: +**JavaScript:** +```javascript +const transcript = $input.item.json; +let formattedText = ''; -```json -{ - "audio_url": "https://example.com/your-audio-file.mp3", - "sentiment_analysis": true +if (transcript.utterances) { + // Format with speaker labels + for (const utterance of transcript.utterances) { + formattedText += `Speaker ${utterance.speaker}: ${utterance.text}\n\n`; + } +} else { + // Use plain text if no speaker labels + formattedText = transcript.text; } -``` - -### Auto Chapters - -Automatically segment your audio into chapters: -```json -{ - "audio_url": "https://example.com/your-audio-file.mp3", - "auto_chapters": true -} +return { formattedText }; ``` -### PII Redaction +**Python:** +```python +transcript = _input.item.json +formatted_text = '' -Redact personally identifiable information: +if 'utterances' in transcript and transcript['utterances']: + # Format with speaker labels + for utterance in transcript['utterances']: + formatted_text += f"Speaker {utterance['speaker']}: {utterance['text']}\n\n" +else: + # Use plain text if no speaker labels + formatted_text = transcript['text'] -```json -{ - "audio_url": "https://example.com/your-audio-file.mp3", - "redact_pii": true, - "redact_pii_policies": ["medical_condition", "credit_card_number", "ssn"] -} +return {'formattedText': formatted_text} ``` -### Entity Detection +#### Save to Google Sheets -Extract entities like names, organizations, and locations: +To save the transcript to Google Sheets: -```json -{ - "audio_url": "https://example.com/your-audio-file.mp3", - "entity_detection": true -} -``` +1. Add a **Google Sheets** node +2. Configure it: + - **Operation**: Append or Update + - **Document**: Select your spreadsheet + - **Sheet**: Select the sheet name + - **Columns**: Map the data: + - Column A: `{{$node['HTTP Request 2'].json['id']}}` (Transcript ID) + - Column B: `{{$node['Code'].json['formattedText']}}` (Formatted transcript) + - Column C: `{{$node['HTTP Request 2'].json['audio_duration']}}` (Duration) + - Column D: `{{new Date().toISOString()}}` (Timestamp) -## Common Workflow Patterns +#### Send via Email -### Transcribe and Store in Google Sheets +To email the transcript: -1. Use the HTTP Request nodes to transcribe audio with AssemblyAI -2. Add a Google Sheets node to append the transcript to a spreadsheet -3. Map the transcript text and metadata to your sheet columns +1. Add a **Gmail** or **Send Email** node +2. Configure it: + - **To**: Recipient email address + - **Subject**: `Transcript Ready: {{$node['HTTP Request 2'].json['id']}}` + - **Body**: `{{$node['Code'].json['formattedText']}}` -### Transcribe and Send via Email +#### Post to Slack -1. Transcribe audio using AssemblyAI -2. Add a Gmail or Send Email node -3. Include the transcript text in the email body +To post the transcript to Slack: -### Transcribe and Analyze with AI +1. Add a **Slack** node +2. Configure it: + - **Operation**: Send Message + - **Channel**: Select your channel + - **Text**: `New transcript completed:\n\n{{$node['Code'].json['formattedText']}}` -1. Transcribe audio with AssemblyAI -2. Add an OpenAI or other LLM node -3. Use the transcript as input for further analysis, summarization, or question answering +### Step 5: Retrieve Additional Data -### Process Audio from Cloud Storage +AssemblyAI provides additional endpoints to retrieve more data from your completed transcript. -1. 
Add a trigger node for Google Drive, Dropbox, or S3 -2. When a new audio file is uploaded, get its public URL -3. Submit the URL to AssemblyAI for transcription -4. Store or process the results as needed +#### Get Redacted Audio -## Polling for Transcript Completion +If you enabled `redact_pii_audio` in your transcription request, you can retrieve the redacted audio file: -Since transcription is asynchronous, you need to poll for the result. Here's a recommended approach: +1. Add an **HTTP Request** node +2. Configure it: + - **Method**: `GET` + - **URL**: `https://api.assemblyai.com/v2/transcript/{{$node['HTTP Request 2'].json['id']}}/redacted-audio` + - **Authentication**: Use the same **Header Auth** credential + - **Response Format**: Binary -1. Submit the transcription request and capture the transcript ID -2. Use a **Loop** node with the following configuration: - - Add an HTTP Request node inside the loop to check the transcript status - - Add an **IF** node to check if `status` equals `completed` or `error` - - If not complete, add a **Wait** node (e.g., 5 seconds) before the next iteration - - Exit the loop when the status is `completed` or `error` +The response will contain the redacted audio file with PII removed. -## Using Webhooks for Completion Notifications +#### Get Sentences -For a more efficient approach, you can use webhooks instead of polling: +To retrieve the transcript broken down by sentences: - - +1. Add an **HTTP Request** node +2. Configure it: + - **Method**: `GET` + - **URL**: `https://api.assemblyai.com/v2/transcript/{{$node['HTTP Request 2'].json['id']}}/sentences` + - **Authentication**: Use the same **Header Auth** credential -Add a **Webhook** node to your workflow and copy the webhook URL. +The response will contain an array of sentences with: +- `text`: The sentence text +- `start`: Start time in milliseconds +- `end`: End time in milliseconds +- `confidence`: Confidence score +- `speaker`: Speaker label (if speaker_labels enabled) - +#### Get Paragraphs - +To retrieve the transcript broken down by paragraphs: -In your initial transcription request, add the webhook URL: +1. Add an **HTTP Request** node +2. Configure it: + - **Method**: `GET` + - **URL**: `https://api.assemblyai.com/v2/transcript/{{$node['HTTP Request 2'].json['id']}}/paragraphs` + - **Authentication**: Use the same **Header Auth** credential -```json -{ - "audio_url": "https://example.com/your-audio-file.mp3", - "webhook_url": "https://your-n8n-instance.com/webhook/your-webhook-id" -} -``` +### Step 6: Delete Transcript - +Once you're done processing the transcript, you can delete it from AssemblyAI to clean up: - +1. Add an **HTTP Request** node at the end of your workflow +2. Configure it: + - **Method**: `DELETE` + - **URL**: `https://api.assemblyai.com/v2/transcript/{{$node['HTTP Request 2'].json['id']}}` + - **Authentication**: Use the same **Header Auth** credential -When the transcription is complete, AssemblyAI will send a POST request to your webhook with the transcript ID. You can then retrieve the full transcript using another HTTP Request node. +The transcript will be permanently deleted from AssemblyAI's servers. - - +## Alternative: Using Webhooks -## Uploading Local Files +Instead of polling for completion, you can use webhooks for a more efficient approach: -If your audio file is not publicly accessible, you can upload it to AssemblyAI first: +1. Add a **Webhook** node to your workflow +2. Set it to **Production** mode and copy the webhook URL +3. 
In your transcription request (Step 2), add the webhook URL: - - +```json +{ + "audio_url": "{{$node['HTTP Request'].json['upload_url']}}", + "webhook_url": "https://your-n8n-instance.com/webhook/your-webhook-id", + "speaker_labels": true +} +``` -Add an HTTP Request node to upload the file: +When the transcription is complete, AssemblyAI will send a POST request to your webhook with the transcript status. You can then retrieve the full transcript using the transcript ID from the webhook payload. -1. Set **Method** to `POST` -2. Set **URL** to `https://api.assemblyai.com/v2/upload` -3. Use **Header Auth** with your API key -4. Under **Body**, select **Binary File** and select your audio file +## Error Handling - +Add error handling to your workflow to manage failures gracefully: - +1. After the polling loop, add an **IF** node to check for errors: + - **Condition**: `{{$json['status']}}` equals `error` + +2. On the True branch (error case): + - Add a **Slack** or **Email** node to send an error notification + - Include the error message: `{{$json['error']}}` -The upload will return an `upload_url`. Use this URL as the `audio_url` in your transcription request. +3. On the False branch (success case): + - Continue with the normal workflow - - +## Example Workflow JSON -## Error Handling +Here's a complete workflow JSON that you can import into n8n: -Add error handling to your workflow to manage failed transcriptions: +```json +{ + "name": "AssemblyAI Complete Workflow", + "nodes": [ + { + "parameters": {}, + "name": "Start", + "type": "n8n-nodes-base.start", + "position": [250, 300] + }, + { + "parameters": { + "method": "POST", + "url": "https://api.assemblyai.com/v2/upload", + "authentication": "genericCredentialType", + "genericAuthType": "httpHeaderAuth", + "sendBody": true, + "bodyContentType": "raw", + "rawContentType": "application/octet-stream" + }, + "name": "Upload Audio", + "type": "n8n-nodes-base.httpRequest", + "position": [450, 300] + }, + { + "parameters": { + "method": "POST", + "url": "https://api.assemblyai.com/v2/transcript", + "authentication": "genericCredentialType", + "genericAuthType": "httpHeaderAuth", + "sendBody": true, + "bodyContentType": "json", + "jsonBody": "{\n \"audio_url\": \"{{$json['upload_url']}}\",\n \"speaker_labels\": true,\n \"language_detection\": true\n}" + }, + "name": "Submit Transcription", + "type": "n8n-nodes-base.httpRequest", + "position": [650, 300] + }, + { + "parameters": { + "method": "GET", + "url": "https://api.assemblyai.com/v2/transcript/{{$node['Submit Transcription'].json['id']}}", + "authentication": "genericCredentialType", + "genericAuthType": "httpHeaderAuth" + }, + "name": "Check Status", + "type": "n8n-nodes-base.httpRequest", + "position": [850, 300] + }, + { + "parameters": { + "method": "DELETE", + "url": "https://api.assemblyai.com/v2/transcript/{{$node['Check Status'].json['id']}}", + "authentication": "genericCredentialType", + "genericAuthType": "httpHeaderAuth" + }, + "name": "Delete Transcript", + "type": "n8n-nodes-base.httpRequest", + "position": [1250, 300] + } + ], + "connections": { + "Start": { + "main": [[{"node": "Upload Audio", "type": "main", "index": 0}]] + }, + "Upload Audio": { + "main": [[{"node": "Submit Transcription", "type": "main", "index": 0}]] + }, + "Submit Transcription": { + "main": [[{"node": "Check Status", "type": "main", "index": 0}]] + }, + "Check Status": { + "main": [[{"node": "Delete Transcript", "type": "main", "index": 0}]] + } + } +} +``` -1. 
Use the **IF** node to check if the transcript status is `error` -2. Add nodes to handle the error case (e.g., send a notification, log the error) -3. Access the error message in `$json["error"]` +This is a simplified version. You'll need to add the polling loop, error handling, and output processing nodes as described in the tutorial above ## Additional Resources