Query Node
Interfaces with Large Language Models (LLMs) to process text prompts and generate responses. Acts as a bridge between the node graph system and local LLM installations, enabling prompt-based text generation and processing. Supports both single and batch prompt processing, with options to limit processing for development or resource management.
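The single-versus-batch handling with an optional processing limit can be sketched as follows. This is a minimal illustration; the function and parameter names (`select_prompts`, `limit_to_first`) are hypothetical, not the node's actual API:

```python
def select_prompts(prompts, limit_to_first=False):
    """Choose which prompts to process.

    Accepts a single prompt string or a batch (list of strings). The
    hypothetical `limit_to_first` flag mirrors the development /
    resource-management option described above: when True, only the
    first prompt is processed.
    """
    # Normalize a lone string into a one-element batch
    batch = [prompts] if isinstance(prompts, str) else list(prompts)
    return batch[:1] if limit_to_first else batch
```

A single string and a list are thus handled uniformly, which keeps the downstream processing loop identical for both cases.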
- Automatically detects and connects to local LLM installations
- Supports dynamic LLM selection and switching
- Provides fallback mechanisms when preferred LLM is unavailable
- Handles both single and batch prompt processing
- Maintains response history
- Supports forced response regeneration
- Provides clean, formatted LLM responses
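The auto-detection and fallback behavior above can be sketched by probing candidate local endpoints and falling back to the default when none respond. The candidate table and helper name are assumptions for illustration; Ollama's HTTP API does listen on port 11434 by default, and `/api/tags` is its model-listing endpoint:

```python
import urllib.request
import urllib.error

def detect_local_llm(candidates=None, timeout=0.5):
    """Return the name of the first reachable local LLM installation.

    Probes each candidate endpoint with a short timeout and falls back
    to "Ollama" (the documented default) when nothing responds. The
    candidate mapping is an illustrative assumption.
    """
    if candidates is None:
        candidates = {"Ollama": "http://localhost:11434/api/tags"}
    for name, url in candidates.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return name  # endpoint answered: use this installation
        except OSError:
            continue  # unreachable or refused: try the next candidate
    return "Ollama"  # fallback default when no installation is found
```

Keeping the probe timeout short matters here, since detection runs inside an interactive node graph and should not stall the UI when no LLM is installed.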
- Limit processing: when True, restricts processing to the first prompt only. Useful for testing or managing resource usage.
- Response history: stores the history of LLM responses, updated after each successful processing run.
- llm_name: identifier for the target LLM (e.g., "Ollama"). Defaults to "Ollama" but can be auto-detected.
- Auto-detect: triggers automatic LLM detection and updates llm_name with the found installation.
- Force regenerate: reprocesses the current prompts and updates responses regardless of cache.
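The interaction between the response history and forced regeneration can be sketched as a small cache wrapper. The class and method names are illustrative assumptions, not the node's actual interface:

```python
class ResponseCache:
    """Minimal sketch: cached LLM responses with a force-regenerate flag.

    `generate` is any callable mapping a prompt string to a response
    string (e.g. a call into a local LLM). Structure and names here are
    assumptions for illustration.
    """

    def __init__(self, generate):
        self._generate = generate
        self.history = []  # list of (prompt, response) pairs

    def query(self, prompt, force=False):
        if not force:
            # Reuse the most recent response for this prompt, if any
            for past_prompt, past_response in reversed(self.history):
                if past_prompt == prompt:
                    return past_response
        # Cache miss, or regeneration forced: call the LLM again
        response = self._generate(prompt)
        self.history.append((prompt, response))
        return response
```

With `force=False` a repeated prompt returns the stored response without touching the LLM; `force=True` always regenerates and appends a fresh entry to the history.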