Response timeout causes duplicate memory entries when LLM retries #50

@kornelrabczak

Description
Problem

When using the agentic_memory_write tool via an LLM agent (e.g., Claude, GPT), responses frequently take a long time and eventually time out (MCP error -32001: Request timed out).

The LLM, not knowing whether the write actually succeeded before the timeout, retries sending the same note. This creates duplicate memory entries for the same factual statement.

Steps to Reproduce

  1. Use an LLM agent connected to agentic-memory MCP
  2. Call agentic_memory_write with any content multiple times
  3. Observe MCP timeout error -32001
  4. LLM retries the same call
  5. Check stored memories - the note may now exist twice

Expected Behavior

  • Either the write succeeds and a timely confirmation is returned, or
  • If a timeout occurs, there should be an idempotency mechanism (e.g., content-based deduplication) so retries don't produce duplicates.

Suggested Fixes

  1. Response time optimization - investigate why writes are slow and reduce latency to stay within MCP timeouts.
  2. Content deduplication - hash incoming content and skip insertion if an identical memory already exists.
  3. Idempotency key - allow callers to pass an optional idempotency key so retries are safely ignored.
  4. Timeout guidance - document recommended MCP client timeout settings for this server.
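To make fixes 2 and 3 concrete, here is a minimal sketch of write-side deduplication plus an optional idempotency key. This is illustrative only: `MemoryStore`, `write`, and the `(memory_id, created)` return shape are hypothetical names, not the actual agentic-memory API, and a real server would persist these maps rather than hold them in memory.

```python
import hashlib


class MemoryStore:
    """Toy in-memory store illustrating retry-safe writes (hypothetical API)."""

    def __init__(self):
        self._by_hash = {}      # sha256 digest of content -> memory id
        self._by_idem_key = {}  # caller-supplied idempotency key -> memory id
        self._memories = {}     # memory id -> content
        self._next_id = 1

    def write(self, content, idempotency_key=None):
        """Store content once; return (memory_id, created)."""
        # Fix 3: a retry carrying the same idempotency key returns the
        # original result instead of inserting a second entry.
        if idempotency_key is not None and idempotency_key in self._by_idem_key:
            return self._by_idem_key[idempotency_key], False

        # Fix 2: identical content hashes to the same digest, so a retried
        # write after a timeout becomes a no-op even without a key.
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if digest in self._by_hash:
            memory_id, created = self._by_hash[digest], False
        else:
            memory_id, created = self._next_id, True
            self._next_id += 1
            self._memories[memory_id] = content
            self._by_hash[digest] = memory_id

        # Remember the key so later retries short-circuit at the top.
        if idempotency_key is not None:
            self._by_idem_key[idempotency_key] = memory_id
        return memory_id, created


store = MemoryStore()
first_id, created = store.write("The sky is blue", idempotency_key="req-1")
retry_id, retried = store.write("The sky is blue", idempotency_key="req-1")
dup_id, duped = store.write("The sky is blue")  # same content, no key
```

The two mechanisms are complementary: content hashing catches exact-duplicate retries for free, while an idempotency key also covers the case where the caller rewords nothing but the server normalizes or enriches content before storing, which would change the hash.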

Impact

  • Knowledge base integrity is compromised by duplicate entries.
  • Downstream search and linking are polluted with redundant data.
  • Agent workflows become unreliable; operators can't tell what was actually stored.

Metadata

Labels: 0.0.x, bug