
Conversation


@mateuszruszkowski mateuszruszkowski commented Jan 20, 2026

Summary

  • Add Z.AI LLM provider using OpenAI-compatible API with GLM-4 models
  • Add Z.AI configuration to LLMProvider enum and config fields
  • Improve env loading to check CWD first for project-specific overrides
  • Add Z.AI configuration documentation to .env.example
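
The CWD-first env loading mentioned above could look like the following sketch. The function shape and helper names are assumptions for illustration, not the actual `apps/backend/cli/utils.py` implementation:

```python
from pathlib import Path


def _load_env_file(path: Path, env: dict) -> None:
    """Parse simple KEY=VALUE lines; earlier loads win via setdefault."""
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env.setdefault(key.strip(), value.strip())


def setup_environment(script_dir: Path) -> dict:
    """Load ./.env from the CWD first so project-specific values take
    precedence, then fill in missing keys from the script directory's .env."""
    env: dict = {}
    cwd_env = Path.cwd() / ".env"
    script_env = script_dir / ".env"
    if cwd_env.is_file() and cwd_env.resolve() != script_env.resolve():
        _load_env_file(cwd_env, env)
    if script_env.is_file():
        _load_env_file(script_env, env)
    return env
```

Because `setdefault` keeps the first value seen, loading the CWD file first gives project-local overrides priority without discarding base keys.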

Files Changed

  • apps/backend/.env.example - Z.AI configuration documentation
  • apps/backend/cli/utils.py - Improved env loading (CWD first)
  • apps/backend/integrations/graphiti/config.py - Z.AI config fields and provider enum
  • apps/backend/integrations/graphiti/providers_pkg/factory.py - Z.AI routing
  • apps/backend/integrations/graphiti/providers_pkg/llm_providers/__init__.py - Export
  • apps/backend/integrations/graphiti/providers_pkg/llm_providers/zai_llm.py - New Z.AI provider

Test plan

  • Module imports successfully verified
  • Z.AI provider enum accessible
  • Backend tests pass (1621 tests)

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added Z.AI (ZAI) as a selectable LLM provider with environment configuration, examples, and provider listing updates.
    • Support for a custom OpenAI base URL to enable alternative OpenAI-compatible endpoints for LLMs and embeddings.
  • Chores

    • Environment loading precedence improved so project-specific .env values override base example values when present.


@gemini-code-assist
Contributor

Summary of Changes

Hello @mateuszruszkowski, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the flexibility of the Graphiti backend by integrating Z.AI as a new LLM provider. By leveraging Z.AI's OpenAI-compatible API, the system can now utilize GLM-4 models, offering more choices for users. The changes also include a more robust environment variable loading process, allowing for easier project-specific configurations, and comprehensive documentation to guide setup.

Highlights

  • New LLM Provider Integration: Added support for Z.AI as a new Large Language Model (LLM) provider, leveraging its OpenAI-compatible API and GLM-4 based models.
  • Configuration Enhancements: Introduced new configuration fields for Z.AI (ZAI_API_KEY, ZAI_BASE_URL, ZAI_MODEL) and a generic OPENAI_BASE_URL to support custom OpenAI-compatible endpoints.
  • Improved Environment Variable Loading: Modified the .env file loading mechanism to prioritize a .env file in the current working directory (CWD), enabling project-specific overrides.
  • Documentation Update: Updated .env.example with detailed documentation for the new Z.AI configuration and the OPENAI_BASE_URL.
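
Putting the variables above together, a project-local `.env` enabling Z.AI might look like the fragment below. The key and endpoint values are illustrative placeholders, not real credentials; check Z.AI's own documentation for the actual base URL:

```shell
# Select the dedicated Z.AI provider
GRAPHITI_LLM_PROVIDER=zai

# Z.AI credentials and endpoint (OpenAI-compatible API; URL is illustrative)
ZAI_API_KEY=zai-xxxxxxxx
ZAI_BASE_URL=https://api.z.example/v4
ZAI_MODEL=glm-4

# Alternative: route through the generic OpenAI provider instead
# GRAPHITI_LLM_PROVIDER=openai
# OPENAI_BASE_URL=https://api.z.example/v4
```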

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@coderabbitai
Contributor

coderabbitai bot commented Jan 20, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Walkthrough

Adds Z.AI provider support and optional OpenAI base URL: updates env examples, env loading precedence, Graphiti config/enum, factory wiring, package exports, new ZAI provider module, and conditional base_url support for LLM/embedder clients.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Environment & Examples**<br>`apps/backend/.env.example` | Added ZAI env keys (`ZAI_API_KEY`, `ZAI_BASE_URL`, `ZAI_MODEL`) and examples showing Z.AI via OpenAI-compatible base URL and via the dedicated ZAI provider. |
| **Env Loader**<br>`apps/backend/cli/utils.py` | `setup_environment` now loads a CWD `./.env` (if present and distinct) before the script-dir env file, giving CWD values precedence for missing keys. |
| **Graphiti Config & Providers List**<br>`apps/backend/integrations/graphiti/config.py` | Added `ZAI` to `LLMProvider`; new fields `openai_base_url`, `zai_api_key`, `zai_base_url`, `zai_model`; `from_env` reads `OPENAI_BASE_URL` and `ZAI_*`; `get_available_providers` advertises `zai` when configured. |
| **Factory & Exports**<br>`apps/backend/integrations/graphiti/providers_pkg/factory.py`, `apps/backend/integrations/graphiti/providers_pkg/llm_providers/__init__.py` | Exported `create_zai_llm_client` and added a `zai` branch in `create_llm_client` to construct the ZAI client. |
| **ZAI Provider Implementation**<br>`apps/backend/integrations/graphiti/providers_pkg/llm_providers/zai_llm.py` | New module: `create_zai_llm_client(config)` validates ZAI config, dynamically imports graphiti-core types, builds `LLMConfig` with api_key/model/base_url, and returns an OpenAI-compatible client with Z.AI compatibility adjustments. |
| **Embedder & OpenAI LLM config changes**<br>`apps/backend/integrations/graphiti/providers_pkg/embedder_providers/openai_embedder.py`, `apps/backend/integrations/graphiti/providers_pkg/llm_providers/openai_llm.py` | Both now conditionally include `base_url` in constructed config objects when `openai_base_url` is provided, enabling custom OpenAI/OpenRouter-style endpoints. |
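
The "conditionally include base_url" behavior described for the embedder and OpenAI LLM clients can be sketched as follows. The `LLMConfig` here is a minimal stand-in for the graphiti-core type, and the builder function name is an assumption:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LLMConfig:
    """Minimal stand-in for graphiti-core's LLMConfig."""
    api_key: str
    model: str
    base_url: Optional[str] = None


def build_openai_llm_config(api_key: str, model: str,
                            openai_base_url: Optional[str] = None) -> LLMConfig:
    """Only pass base_url when a custom endpoint is configured, so the
    client falls back to the default OpenAI endpoint otherwise."""
    kwargs = {"api_key": api_key, "model": model}
    if openai_base_url:
        kwargs["base_url"] = openai_base_url
    return LLMConfig(**kwargs)
```

Omitting the key entirely (rather than passing `None` or an empty string) keeps the default-endpoint path unchanged for existing users.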

Sequence Diagram

sequenceDiagram
    actor App
    participant EnvLoader as Environment Loader
    participant Config as GraphitiConfig
    participant Factory as LLM Factory
    participant ZAIModule as ZAI Provider
    participant OpenAIClient as OpenAI Client

    App->>EnvLoader: setup_environment()
    EnvLoader->>EnvLoader: Load cwd/.env (if exists)
    EnvLoader->>EnvLoader: Load script_dir/.env

    App->>Config: GraphitiConfig.from_env()
    Config->>Config: Read OPENAI_BASE_URL, ZAI_API_KEY, ZAI_BASE_URL, ZAI_MODEL
    Config-->>App: config with zai_* and openai_base_url

    App->>Factory: create_llm_client("zai", config)
    Factory->>ZAIModule: create_zai_llm_client(config)
    ZAIModule->>ZAIModule: Validate zai_api_key & zai_base_url
    ZAIModule->>ZAIModule: Import graphiti-core deps
    ZAIModule->>ZAIModule: Build LLMConfig(api_key, model, base_url)
    ZAIModule->>OpenAIClient: OpenAIClient(LLMConfig)
    OpenAIClient-->>ZAIModule: client instance
    ZAIModule-->>Factory: client instance
    Factory-->>App: OpenAI-compatible client
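The factory routing shown in the diagram reduces to a provider dispatch plus validation. The sketch below is an assumption about the shape of `create_llm_client` and `create_zai_llm_client`; it returns a plain dict instead of a graphiti-core `OpenAIClient` to stay self-contained:

```python
def create_zai_llm_client(config: dict) -> dict:
    """Validate Z.AI settings and build an OpenAI-compatible client config.

    Stand-in for the real zai_llm.py module, which constructs a
    graphiti-core OpenAIClient from the same fields.
    """
    if not config.get("zai_api_key") or not config.get("zai_base_url"):
        raise ValueError("ZAI_API_KEY and ZAI_BASE_URL must both be set")
    return {
        "api_key": config["zai_api_key"],
        "model": config.get("zai_model", "glm-4"),
        "base_url": config["zai_base_url"],
    }


def create_llm_client(provider: str, config: dict) -> dict:
    """Dispatch to a provider-specific factory; 'zai' is the new branch."""
    if provider == "zai":
        return create_zai_llm_client(config)
    raise ValueError(f"Unknown LLM provider: {provider}")
```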

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • AlexMadera

Poem

🐰 I hopped through envs with a curious cheer,
Found Z.AI keys and tucked them near,
Wired the factory, set base_url right,
Clients now chat through morning and night,
I twitch my nose and ship with delight.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped because CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main change: adding Z.AI provider support to Graphiti, which is the primary objective across all modified files. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |




@sentry

sentry bot commented Jan 20, 2026

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for the Z.AI LLM provider, enhancing Graphiti's multi-provider capabilities. It includes necessary configuration fields, environment variable updates, and a new client implementation that leverages the OpenAI-compatible API. Additionally, the environment loading mechanism has been improved to prioritize project-specific .env files. The changes are well-structured and integrate smoothly with the existing provider framework. I've identified a couple of areas for improvement regarding provider availability checks and model capability handling for the new Z.AI client.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@apps/backend/.env.example`:
- Around line 357-365: Clarify Z.AI integration choices in the .env.example by
either adding a short explanatory comment near the existing Example 1b or adding
a new example block showing the dedicated Z.AI provider; mention the difference
between using GRAPHITI_LLM_PROVIDER=openai with OPENAI_BASE_URL (custom
OpenAI-compatible endpoint) versus using GRAPHITI_LLM_PROVIDER=zai (native zai
handling), and include corresponding env variables like
OPENAI_API_KEY/OPENAI_BASE_URL/OPENAI_MODEL/GRAPHITI_EMBEDDER_PROVIDER=voyage
and the alternative vars for the zai provider (e.g., GRAPHITI_LLM_PROVIDER=zai
and VOYAGE_API_KEY), so users know when to pick each approach and which vars to
set.

In `@apps/backend/integrations/graphiti/providers_pkg/llm_providers/zai_llm.py`:
- Around line 54-56: Remove the dead/orphan comment about determining if the
model supports reasoning that appears above the return
OpenAIClient(config=llm_config) in zai_llm.py; either delete that comment
entirely or replace it with real reasoning-detection logic that inspects
llm_config (or the chosen model identifier) and sets a flag or calls a function
(e.g., detect_reasoning_capability) before returning the OpenAIClient, ensuring
you update any related variables instead of leaving an unused comment.
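
One way to act on the second comment is a small capability check performed before constructing the client. The `detect_reasoning_capability` helper and the model-prefix list below are assumptions for illustration, not code from `zai_llm.py`; adjust the prefixes to the models your deployment actually exposes:

```python
# Assumed set of Z.AI model families that support reasoning; this list
# is illustrative and should be verified against Z.AI's model docs.
_REASONING_MODEL_PREFIXES = ("glm-4", "glm-z1")


def detect_reasoning_capability(model: str) -> bool:
    """Return True when the model identifier matches a known reasoning family."""
    return model.lower().startswith(_REASONING_MODEL_PREFIXES)
```

The result can then drive a flag on the client config instead of leaving an unexplained comment above the `return` statement.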

@mateuszruszkowski force-pushed the feat/zai-provider-support branch from 4ed851d to 62096db on January 20, 2026 at 22:09
@mateuszruszkowski force-pushed the feat/zai-provider-support branch from 0a1cce3 to 03aefab on January 21, 2026 at 11:02
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@apps/backend/.env.example`:
- Around line 282-299: Update the Z.AI environment variable placeholders to
match the project's established convention (e.g., use a masked key like
sk-xxxxxxxx or pa-xxxxxxxx) instead of "your_key_here"; specifically change the
ZAI_API_KEY placeholder and any example values for ZAI_BASE_URL and ZAI_MODEL to
use the consistent masked format alongside explanatory comments so they match
other variables such as OPENAI_API_KEY and PA_API_KEY in the file.

AndyMik90 and others added 6 commits on January 21, 2026 at 13:39
The previous fix only removed partial directories but not broken symlinks.
CI logs showed that an existing broken symlink pointing to
"../../../../../../apps/frontend/node_modules" was skipped, causing
electron-builder to fail with ENOENT during macOS code signing.

Changes:
- Check if existing symlink points to correct target (../../node_modules)
- Remove incorrect or broken symlinks before creating new one
- Add strict verification that symlink exists AND resolves correctly
- Fail fast with clear error if verification fails

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Windows junctions don't appear as symlinks to bash's -L test, causing
the verification step to fail even when the junction was created
successfully.

Changes:
- Skip symlink check (-L) on Windows since junctions are different
- Verify link works by checking electron package is accessible
- Add more diagnostic output on failure

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix get_available_providers to check both zai_api_key AND zai_base_url
- Implement reasoning detection for Z.AI LLM client (GLM-4 models support reasoning)
- Pass openai_base_url to OpenAI LLM client for custom endpoint support
- Pass openai_base_url to OpenAI embedder client for custom endpoint support
- Clarify .env.example with two Z.AI integration approaches:
  - Via OpenAI provider with custom base URL (Example 1b)
  - Via dedicated zai provider (Example 1c)

Addresses review comments from:
- Gemini Code Assist (config.py, zai_llm.py)
- CodeRabbit (.env.example, zai_llm.py)
- Sentry (HIGH severity bug: openai_base_url not passed to clients)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Z.AI uses its own parameter names (e.g., 'thinking') and doesn't support
OpenAI's 'reasoning' or 'verbosity' parameters. Always disable them for
compatibility, similar to how OpenRouter provider handles this.

Addresses: Sentry review comment about incompatible parameters

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
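
The compatibility adjustment described in the commit above (always disabling OpenAI's `reasoning` and `verbosity` parameters for Z.AI) can be sketched as follows. The parameter names come from the commit message; the helper itself is hypothetical:

```python
def strip_incompatible_params(request_kwargs: dict) -> dict:
    """Drop OpenAI-only parameters that Z.AI's API does not accept.

    Z.AI uses its own 'thinking' parameter instead of OpenAI's
    'reasoning'/'verbosity', so those keys are removed unconditionally,
    mirroring how the OpenRouter provider handles the same mismatch.
    """
    cleaned = dict(request_kwargs)  # copy so the caller's dict is untouched
    for key in ("reasoning", "verbosity"):
        cleaned.pop(key, None)
    return cleaned
```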
Changed placeholder from 'your_key_here' to 'zai-xxxxxxxx...' to match
the convention used by other API keys in the file.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@mateuszruszkowski force-pushed the feat/zai-provider-support branch from b532a3b to 34a3993 on January 21, 2026 at 12:46
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/backend/integrations/graphiti/config.py (1)

10-12: Update docstring to include Z.AI provider.

The module docstring lists supported providers but doesn't include Z.AI.

📝 Proposed fix
 Multi-Provider Support (V2):
-- LLM Providers: OpenAI, Anthropic, Azure OpenAI, Ollama, Google AI, OpenRouter
+- LLM Providers: OpenAI, Anthropic, Azure OpenAI, Ollama, Google AI, OpenRouter, Z.AI
 - Embedder Providers: OpenAI, Voyage AI, Azure OpenAI, Ollama, Google AI, OpenRouter
🤖 Fix all issues with AI agents
In `@apps/backend/.env.example`:
- Around line 282-299: Update the LLM provider selection line in the
.env.example to include "zai" so readers can choose Z.AI; ensure this aligns
with the existing ZAI_* entries (ZAI_API_KEY, ZAI_BASE_URL, ZAI_MODEL) and add
"zai" to the comma-separated provider list or enum that documents available
providers (the same line that lists other providers like openai, anthropic,
etc.), keeping formatting consistent with surrounding comments.

mateuszruszkowski added a commit to mateuszruszkowski/Auto-Claude that referenced this pull request Jan 22, 2026
