
chore(wren-ai-service): Exclude 'alias' from model parameters in LLM and Embedder processors #1353

Merged 1 commit on Mar 4, 2025

Conversation

paopa (Member) commented Mar 4, 2025

This PR fixes an issue where the 'alias' parameter was being incorrectly included in the model's additional parameters for both LLM and Embedder processors. The fix ensures that 'alias' is properly excluded along with 'model' and 'kwargs' when processing model configurations.

Changes made:

  • Updated model_additional_params filtering in llm_processor to exclude 'alias'
  • Updated model_additional_params filtering in embedder_processor to exclude 'alias'
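The filtering change described above can be sketched as follows. This is a minimal, hypothetical reconstruction: `build_additional_params` is an illustrative name, not the actual helper — the real change edits dictionary comprehensions inside `llm_processor` and `embedder_processor` in `wren-ai-service/src/providers/__init__.py`.

```python
# Hypothetical sketch of the exclusion logic; names are illustrative.
def build_additional_params(model: dict) -> dict:
    # Keep every key except those used purely for identification and
    # argument plumbing. Before the fix, only "model" and "kwargs" were
    # excluded, so "alias" leaked into the provider parameters.
    return {
        key: value
        for key, value in model.items()
        if key not in ("model", "kwargs", "alias")  # 'alias' newly excluded
    }

params = build_additional_params(
    {"model": "gpt-4o-mini", "alias": "default", "kwargs": {}, "temperature": 0.7}
)
print(params)  # {'temperature': 0.7} — 'alias' no longer reaches the provider
```

With the old filter, the same input would have produced `{'alias': 'default', 'temperature': 0.7}`, passing an unexpected keyword to the provider API.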

Summary by CodeRabbit

  • Refactor
    • Improved model-parameter processing by excluding the 'alias' attribute, so that only relevant parameters are carried into the final configuration.

@paopa paopa added module/ai-service ai-service related ci/ai-service ai-service related labels Mar 4, 2025
coderabbitai bot (Contributor) commented Mar 4, 2025

Walkthrough

This pull request modifies the llm_processor and embedder_processor functions in the wren-ai-service/src/providers/__init__.py file. The dictionary comprehension in both functions is updated to additionally exclude the "alias" key when constructing the model_additional_params dictionary. The overall structure of each function remains the same while altering the data filtering process to omit the "alias" parameter.

Changes

File: wren-ai-service/.../__init__.py
Change summary: Updated dictionary comprehensions in llm_processor and embedder_processor to exclude the "alias" key from the parameters.

Poem

I'm a rabbit, hopping through lines of code,
Skipping the "alias" on my joyful road.
Keys and values in a playful dance,
Adjusted parameters now have their chance.
With every change, I nod and smile,
Coding carrots in style for a little while!
🥕🐇


@paopa paopa force-pushed the fix/remove-alias-from-model-additional-params branch from 97ba669 to 5ec380d Compare March 4, 2025 06:01
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
wren-ai-service/src/providers/__init__.py (1)

125-127: Consider adding unit tests for this parameter exclusion.

While the code change is straightforward, it is worth adding unit tests that verify 'alias' is excluded from the generated model configurations, to prevent a future regression.

def test_alias_exclusion_in_llm_processor():
    """Test that 'alias' is correctly excluded from model_additional_params in llm_processor."""
    entry = {
        "type": "llm",
        "provider": "test_provider",
        "models": [
            {
                "model": "test_model",
                "alias": "test_alias",
                "kwargs": {}
            }
        ]
    }
    result = llm_processor(entry)
    model_config = result["test_provider.test_alias"]
    assert "alias" not in model_config, "alias should be excluded from model configuration"

def test_alias_exclusion_in_embedder_processor():
    """Test that 'alias' is correctly excluded from model_additional_params in embedder_processor."""
    entry = {
        "type": "embedder",
        "provider": "test_provider",
        "models": [
            {
                "model": "test_model",
                "alias": "test_alias"
            }
        ]
    }
    result = embedder_processor(entry)
    model_config = result["test_provider.test_alias"]
    assert "alias" not in model_config, "alias should be excluded from model configuration"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 236c87c and 5ec380d.

📒 Files selected for processing (1)
  • wren-ai-service/src/providers/__init__.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: pytest
  • GitHub Check: Analyze (go)
🔇 Additional comments (3)
wren-ai-service/src/providers/__init__.py (3)

73-73: Correctly excludes 'alias' from model parameters.

The change ensures that the 'alias' parameter is used only for model identification and is not included in the model_additional_params dictionary passed to the actual LLM provider implementation. This is consistent with how 'model' and 'kwargs' are handled, and prevents potentially unwanted parameters from reaching the provider API.


127-127: Correctly excludes 'alias' from embedder model parameters.

Similar to the LLM processor change, this ensures 'alias' is excluded from model_additional_params in the embedder processor. The change maintains consistency between both processor functions and prevents potentially unwanted parameters from being passed to the embedder provider.


71-73:

❓ Verification inconclusive

Verify the alias usage in existing configurations.

The change looks good, but I'd recommend verifying that all existing configurations that use 'alias' continue to work correctly after this change. The 'alias' field is still used for identification purposes on line 71, but it will no longer appear in the final model configuration.
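To make that identification-vs-parameters split concrete, here is a hedged sketch. The function name and entry shape are assumptions modeled on the suggested tests above, not the real code: the alias (falling back to the model name) still forms the registry key, while being stripped from the per-model parameters.

```python
# Hypothetical illustration of the split; `build_registry` and the entry
# shape are assumptions, mirroring the suggested unit tests.
def build_registry(entry: dict) -> dict:
    registry = {}
    for model in entry["models"]:
        # 'alias' (or 'model' when no alias is set) is still used for
        # identification as the registry key...
        key = f"{entry['provider']}.{model.get('alias', model['model'])}"
        # ...but is excluded from the parameters handed to the provider.
        registry[key] = {
            k: v for k, v in model.items() if k not in ("model", "kwargs", "alias")
        }
    return registry

registry = build_registry(
    {
        "provider": "test_provider",
        "models": [{"model": "test_model", "alias": "test_alias", "timeout": 30}],
    }
)
print(registry)  # {'test_provider.test_alias': {'timeout': 30}}
```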


🏁 Script executed:

#!/bin/bash
# Search for any code or configuration files that might be using 'alias' with these models
echo "Looking for configuration files that might use the alias parameter:"
rg -l "'alias'|\"alias\"" --type yaml --type json
rg -l "\.get\(.*'alias'|\.get\(.*\"alias\"" --type python  # fails: ripgrep names the Python file type 'py', not 'python'

echo "Checking if tests might be affected by this change:"
rg -l "model_additional_params" --type python

Length of output: 557


Below is an updated verification script that re-runs the search for "alias" usage in Python files using a file glob pattern. This catches any alias retrievals (e.g. via the .get() method) that the original command missed because of the unrecognized-file-type error:

Next Steps:

  • Manually verify if the occurrence in wren-ai-service/src/pipelines/generation/utils/vega-lite-schema-v5.json using the alias field is expected.
  • Check the output of the above script to ensure no unexpected usage of the alias field occurs elsewhere in Python code.
  • Confirm that the removal of alias from the final model configuration does not break any functionality in tests or configurations.

Action: Review the new search results and confirm that all configurations referencing "alias" are either updated or unaffected by the change.

Review priority: Verify that removing 'alias' from the final configuration is safe in all contexts.


🏁 Script executed:

#!/bin/bash
echo "Searching for alias usage in Python files:"
rg -g '*.py' -l "\.get\(.*('alias'|\"alias\")" .

echo "Searching for model_additional_params usage in Python files:"
rg -g '*.py' -l "model_additional_params" .

Length of output: 534


Action Required: Verify alias usage in external configuration files.

  • Verification indicates that the only Python reference to alias extraction is in wren-ai-service/src/providers/__init__.py (as expected).
  • A configuration file (wren-ai-service/src/pipelines/generation/utils/vega-lite-schema-v5.json) still contains the alias parameter. Please manually verify that its usage is correct and that removing alias from the final model configuration does not lead to unintended side effects.
  • Confirm that tests referencing model_additional_params in the providers file and elsewhere remain valid.

@paopa paopa requested a review from wwwy3y3 March 4, 2025 06:08
@paopa paopa merged commit 436935b into main Mar 4, 2025
10 checks passed
@paopa paopa deleted the fix/remove-alias-from-model-additional-params branch March 4, 2025 08:19