This directory contains implementations of clients that can interact with AIOpsLab. These clients are baselines that we have implemented and evaluated to help you get started.
- GPT: A naive GPT series LLM agent with only shell access.
- DeepSeek: A naive DeepSeek series LLM agent with only shell access.
- Qwen: A naive Qwen series LLM agent with only shell access.
- vLLM: A naive vLLM agent with any open source LLM deployed locally and only shell access.
- GPT with Azure OpenAI: A naive GPT4-based LLM agent for Azure OpenAI, using identity-based authentication.
- ReAct: A naive LLM agent that uses the ReAct framework.
- FLASH: A naive LLM agent that uses status supervision and hindsight integration components to ensure high reliability of workflow execution.
- OpenRouter: A naive OpenRouter LLM agent with only shell access.
- Generic OpenAI: A generic agent that works with any provider exposing the OpenAI Chat Completions API (`/v1/chat/completions`), such as Poe, vLLM, LocalAI, standard OpenAI deployments, or other compatible services. The `base_url` and model are fully configurable via environment variables.
The vLLM client allows you to run local open-source models as an agent for AIOpsLab tasks. This approach is particularly useful when you want to:
- Use your own hardware for inference
- Experiment with different open-source models
- Work in environments without internet access to cloud LLM providers
1. Launch the vLLM server:

   ```bash
   # Make the script executable
   chmod +x ./clients/launch_vllm.sh
   # Run the script
   ./clients/launch_vllm.sh
   ```

   This will launch vLLM in the background using the default model (Qwen/Qwen2.5-3B-Instruct).

2. Check server status:

   ```bash
   # View the log file to confirm the server is running
   cat vllm_Qwen_Qwen2.5-3B-Instruct.log
   ```

3. Customize the model (optional): Edit `launch_vllm.sh` to change the model:

   ```bash
   # Open the file
   nano ./clients/launch_vllm.sh
   # Change the MODEL variable to your preferred model
   # Example: MODEL="mistralai/Mistral-7B-Instruct-v0.1"
   ```

4. Run the vLLM agent:

   ```bash
   python clients/vllm.py
   ```
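Under the hood, the agent talks to the server's OpenAI-compatible endpoint. As a rough sketch using only the standard library (the model name and port are the defaults above; `build_chat_request` is illustrative, not part of the repo):

```python
import json
from urllib import request

# Defaults from the launch script above; adjust if you changed them.
BASE_URL = "http://localhost:8000/v1"
MODEL = "Qwen/Qwen2.5-3B-Instruct"

def build_chat_request(prompt: str) -> request.Request:
    """Build a chat-completion request for the local vLLM server."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("List the pods in the default namespace.")
# Uncomment once the server from step 1 is running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```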
Requirements:

- Poetry for dependency management
- Sufficient GPU resources for your chosen model
- The model must support the OpenAI chat completion API format
The vLLM client connects to `http://localhost:8000/v1` by default. If you've configured vLLM to use a different port or host, update the `base_url` in `clients/utils/llm.py` in the `vLLMClient` class.
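For example, the value you set as `base_url` must match where the server actually listens (the `make_base_url` helper below is purely illustrative, not part of the repo):

```python
# Default address the client ships with:
DEFAULT_BASE_URL = "http://localhost:8000/v1"

def make_base_url(host: str = "localhost", port: int = 8000) -> str:
    """Build the OpenAI-compatible base URL for a vLLM server."""
    return f"http://{host}:{port}/v1"

print(make_base_url())            # default
print(make_base_url(port=8080))   # e.g. after changing the vLLM port
```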
The following environment variables are used by various clients. Copy .env.example to .env and fill in your API keys:
```bash
cp .env.example .env
# Edit .env with your actual API keys
```

- `OPENAI_API_KEY`: OpenAI API key for GPT clients
- `DEEPSEEK_API_KEY`: DeepSeek API key for DeepSeek client
- `DASHSCOPE_API_KEY`: Alibaba DashScope API key for Qwen client
- `OPENROUTER_API_KEY`: OpenRouter API key for OpenRouter client
- `GROQ_API_KEY`: Groq API key for Groq-based clients
- `OPENROUTER_MODEL`: OpenRouter model to use (default: `openai/gpt-4o-mini`)
- `USE_WANDB`: Enable Weights & Biases logging (default: `false`)
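A minimal sketch of how a client might read these settings with their documented defaults (the `setting` helper is illustrative, not part of the repo):

```python
import os

def setting(name, default=None):
    """Read an environment variable, falling back to a default."""
    return os.environ.get(name, default)

# Defaults documented above
OPENROUTER_MODEL = setting("OPENROUTER_MODEL", "openai/gpt-4o-mini")
USE_WANDB = setting("USE_WANDB", "false").lower() in ("1", "true", "yes")
```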
The Generic OpenAI client works with any provider that implements the OpenAI Chat Completions API (/v1/chat/completions), such as Poe, vLLM, LocalAI, or standard OpenAI.
Set the following environment variables:
- `OPENAI_COMPATIBLE_API_KEY`: API key for your target endpoint (required)
- `OPENAI_COMPATIBLE_BASE_URL`: Base URL of your target endpoint, e.g. `https://api.poe.com/llm/v1` (required)
- `OPENAI_COMPATIBLE_MODEL`: Model name to use, e.g. `MiniMax-Text-01` (default: `gpt-4o`)
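For example, to target a local vLLM server (the values below are illustrative; vLLM typically does not validate the key, but the client expects one to be set):

```shell
export OPENAI_COMPATIBLE_API_KEY="dummy-key"
export OPENAI_COMPATIBLE_BASE_URL="http://localhost:8000/v1"
export OPENAI_COMPATIBLE_MODEL="Qwen/Qwen2.5-3B-Instruct"
```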
Then run:
```bash
python clients/generic_openai.py
```

The script `gpt_azure_identity.py` supports keyless authentication for securely accessing Azure OpenAI endpoints. It supports two authentication methods:
Steps

1. Azure CLI authentication:
   - The user must have the appropriate role assigned (e.g., Cognitive Services OpenAI User) on the Azure OpenAI resource.
   - Run the following command to authenticate (see: How to install the Azure CLI?):

     ```bash
     az login --scope https://cognitiveservices.azure.com/.default
     ```

2. Managed identity authentication:
   - Follow the official documentation to assign a user-assigned managed identity to the VM where the client script will run: Add a user-assigned managed identity to a VM.
   - The managed identity must have the appropriate role assigned (e.g., Cognitive Services OpenAI User) on the Azure OpenAI resource.
   - Specify the managed identity to use by setting the following environment variable before running the script:

     ```bash
     export AZURE_CLIENT_ID=<client-id>
     ```

Please ensure the required Azure configuration is provided using the `/configs/example_azure_config.yml` file, or use it as a template to create a new configuration file.