feat: Two-stage evaluation pipeline for efficiency #14

@don-petry

Description

Problem

Every interaction is sent through full LLM evaluation, even interactions with no rule content (casual questions, code generation, etc.). This wastes LLM calls and adds unnecessary latency and cost.

Proposed Solution

Implement a two-stage evaluation pipeline:

  • Stage 1 (lightweight): Quick classification — "Does this interaction contain a persistent preference or rule?" — using a low-cost model or heuristic
  • Stage 2 (full): Extract and professionalize the rule using the full evaluator

Expected 50%+ reduction in LLM calls since many interactions are routine code assistance with no rule content.
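The proposed pipeline could be sketched roughly as below. This is a minimal illustration, not the actual implementation: `looks_like_rule`, `evaluate`, and the keyword list are hypothetical, and Stage 1 is shown as a pure keyword heuristic, where the issue also allows a low-cost classifier model.

```python
from typing import Callable, Optional

# Stage 1 signal words (hypothetical; a low-cost model could replace this).
RULE_SIGNALS = ("always", "never", "prefer", "from now on", "going forward")

def looks_like_rule(text: str) -> bool:
    """Stage 1: cheap heuristic gate -- no LLM call is made here."""
    lowered = text.lower()
    return any(signal in lowered for signal in RULE_SIGNALS)

def evaluate(text: str, full_evaluator: Callable[[str], str]) -> Optional[str]:
    """Two-stage pipeline: only rule-like interactions reach Stage 2."""
    if not looks_like_rule(text):
        return None                  # routine interaction, skip the LLM
    return full_evaluator(text)      # Stage 2: full extraction/professionalization

# Usage with a stub standing in for the real LLM evaluator:
calls = []
def stub_evaluator(text: str) -> str:
    calls.append(text)
    return f"RULE: {text}"

evaluate("How do I reverse a list in Python?", stub_evaluator)  # skipped by Stage 1
evaluate("Always use tabs in this repo.", stub_evaluator)       # reaches Stage 2
```

Only the second interaction triggers a full-evaluator call, which is where the expected reduction in LLM usage would come from.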

Metadata

Assignees: No one assigned

Labels: enhancement (New feature or request)

Projects: No projects

Milestone: No milestone

Relationships: None yet

Development: No branches or pull requests