
@steb-mich

Hi,
I tried integrating an n8n workflow as an AI assistant. I'm running LM Studio locally, and inside Docker I have both n8n and Qdrant running. I created a workflow there and saved its JSON under ai_assistant_hub/n8n.

If someone wants to add, for example, the Godot documentation as RAG (Retrieval-Augmented Generation) context for the assistant, that can be done via the n8n workflow. I did it quick and dirty; for instance, I generate the session ID in ai_assistant_hub/assistants/ai_assistant_resource.gd. However, one could also use the assistant's name as the session ID.

If you continue working on the assistant, it would be great if you could incorporate that. I think n8n is a really cool way to get better answers from the AI. You could even, for example, add project code to the Qdrant database to give the AI knowledge about your own project.

Thanks for creating such a cool coding assistant!

@FlamxGames (Owner)

Hi, this is nice, but I want to be careful about not merging things that would not work on other people's computers or that would make the UI confusing for users on a different LLM provider. For instance, the GODOT_Doc.json file seems to be specific to your setup, and in n8n_ai_api.gd the model deepseek-r1-0528-qwen3-8b is hardcoded, as well as a URL.
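One possible way to avoid the hardcoded values would be to expose them as exported properties, so each user configures their own endpoint in the inspector. A rough sketch (the property names and defaults here are made up for illustration, not the plugin's real API):

```gdscript
extends Node

# Hypothetical settings for an n8n-backed API client; defaults are examples only.
@export var n8n_webhook_url: String = "http://localhost:5678/webhook/ai-assistant"
# An empty string could mean "let the n8n workflow decide which model to use".
@export var model_name: String = ""
```

That way nothing setup-specific would need to live in n8n_ai_api.gd itself.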

In addition, there are a few things to discuss / decide:

  1. What is the best way to store the session unique ID? Adding it to AIAssistantResource goes against the design, as one assistant type can have multiple instances. Unless I'm missing something about how this ID is being used.

    • This is similar to what was discussed here. We first need to figure out a good way to make assistants persistent without going against the current design.
  2. What would be the simplest setup for other users? If someone uses n8n but does not have your setup (LM Studio, Docker, Qdrant) will this still work for them?

@steb-mich (Author)

Hi, that makes total sense — and I completely agree that care is needed to avoid introducing setup-specific or confusing elements into the main codebase.

Just to clarify, my goal wasn't to propose this as a direct merge, but rather to demonstrate one possible way to integrate an n8n workflow as part of an AI assistant. It's more of a proof of concept to inspire future integrations or ideas.

Some things in my code are indeed redundant or specific to my setup. For example, the model is actually chosen inside the n8n workflow, so the hardcoded model in n8n_ai_api.gd isn't necessary. And regarding the session ID — I noticed your assistants have unique names, which could be repurposed as session identifiers instead of generating one manually. The ID is used in the n8n workflow to identify the chat memory: the same workflow can be used with different chat histories, so assistant X has a different chat history from assistant Y, and so on.
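As a rough sketch of what I mean (this assumes an HTTPRequest node and an n8n Chat Trigger webhook; the URL and payload field names are from my setup and would need adjusting for yours):

```gdscript
# Send a prompt to the n8n workflow, reusing the assistant's unique name
# as the chat-memory session ID so each assistant keeps its own history.
func ask_n8n(http: HTTPRequest, assistant_name: String, prompt: String) -> void:
    var body := JSON.stringify({
        "sessionId": assistant_name,  # one chat history per assistant name
        "chatInput": prompt,
    })
    http.request(
        "http://localhost:5678/webhook/godot-assistant",  # example URL
        ["Content-Type: application/json"],
        HTTPClient.METHOD_POST,
        body
    )
```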

Also, I am running n8n locally because that gives me more flexibility with my setup. But the n8n workflow can just as well be set up on the official n8n website, and it works with OpenAI, Grok, Ollama, or another LLM provider. It can also easily be changed and modified if someone wants to use it.

Thanks again for reviewing it. I just wanted to share the idea and show what's possible with tools like n8n, even if the current form isn't suitable for merging as-is. Another benefit of n8n is that you can easily add one or more documents as RAG sources for better answers from the LLM. If someone more competent than me takes a look at this approach, maybe they can create a better solution.

@FlamxGames (Owner)

FlamxGames commented Jun 7, 2025

Ok, yes I got the feeling that was your intention, but I was not sure. Thank you for contributing.
