feat: add memory API, search, session export, and normalize API #68

Merged
Koopa0 merged 1 commit into main from feat/api-normalization-search-memory
Feb 20, 2026

Conversation


Koopa0 (Owner) commented Feb 20, 2026

  • Memory CRUD endpoints (GET/PATCH/DELETE /api/v1/memories)
  • Cross-session search with CJK trigram fallback
  • Session export (JSON + Markdown)
  • Stats endpoint
  • Paginated list responses ({items, total})
  • Error envelope with status field
  • Input validation limits (WriteFile, URL, selector, command args)
  • Data lifecycle (retention, cleanup scheduler)
  • Strict audit fixes (mapMemoryError helper, package-level types, tests)

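For reference, the two normalized shapes look roughly like this (illustrative payloads: the {items, total} fields and the error envelope fields come from this PR, the example values are placeholders):

{ "items": [ { "id": "mem_123", "content": "..." } ], "total": 42 }

{ "error": { "status": 404, "code": "not_found", "message": "memory not found" } }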
Koopa0 merged commit 71d7d7d into main on Feb 20, 2026
4 of 8 checks passed
Koopa0 deleted the feat/api-normalization-search-memory branch on February 20, 2026 at 17:34
SessionStore: a.SessionStore,
MemoryStore: a.MemoryStore,
Pool: a.DBPool,
CSRFSecret: []byte(cfg.HMACSecret),


Security: CSRF Secret Handling

The CSRF secret is set as []byte(cfg.HMACSecret). If cfg.HMACSecret is not validated for sufficient entropy and length, this could compromise CSRF protection. Ensure that cfg.HMACSecret is at least 32 bytes of cryptographically secure random data. If not, add validation logic in the configuration loading or validation phase to enforce this requirement.

Recommended solution:

if len(cfg.HMACSecret) < 32 {
    return fmt.Errorf("HMACSecret must be at least 32 bytes for CSRF protection")
}
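A minimal sketch of where that check could live, assuming a Config struct with an HMACSecret string field (names here are illustrative, not the actual types in this PR):

package config

import "fmt"

// Config holds server settings; only the field relevant to this
// check is shown.
type Config struct {
    HMACSecret string
}

// Validate fails fast at startup if the secret is too short to
// provide meaningful CSRF protection.
func (c *Config) Validate() error {
    if len(c.HMACSecret) < 32 {
        return fmt.Errorf("HMACSecret must be at least 32 bytes for CSRF protection, got %d", len(c.HMACSecret))
    }
    return nil
}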

Comment on lines +99 to 100
CREATE UNIQUE INDEX IF NOT EXISTS idx_memories_owner_content_unique
ON memories(owner_id, md5(content)) WHERE active = true;


Potential collision risk with md5(content) for uniqueness

The unique index on md5(content) (lines 99-100) enforces deduplication of active memories per owner. However, MD5 is not collision-resistant: accidental collisions are astronomically unlikely, but colliding inputs can be constructed deliberately, which would make two distinct memories look identical to the index. For critical deduplication, consider a stronger hash such as SHA-256, available in PostgreSQL via the pgcrypto extension:

-- digest() comes from the pgcrypto extension, so enable it first:
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE UNIQUE INDEX IF NOT EXISTS idx_memories_owner_content_unique
ON memories(owner_id, encode(digest(content, 'sha256'), 'hex')) WHERE active = true;

This removes the practical risk of two distinct memories colliding on the same hash and one being wrongly rejected as a duplicate.

Comment on lines +2 to +3
ALTER TABLE messages DROP COLUMN IF EXISTS search_text;
ALTER TABLE messages DROP COLUMN IF EXISTS text_content;


Irreversible Data Loss Risk: Dropping the search_text and text_content columns will permanently remove all data stored in them. Ensure that this migration is only run when data loss is acceptable, or consider backing up the data before executing.

Recommended Solution: Add a warning or backup step before running this migration, or wrap the migration in a transaction to ensure atomicity.
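One possible backup step, as a sketch (messages_text_backup is an illustrative name, and an id primary key on messages is assumed):

-- Preserve the columns before the destructive migration drops them.
CREATE TABLE IF NOT EXISTS messages_text_backup AS
SELECT id, search_text, text_content
FROM messages
WHERE search_text IS NOT NULL OR text_content IS NOT NULL;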

Comment on lines +1 to +3
DROP INDEX IF EXISTS idx_messages_search_text;
ALTER TABLE messages DROP COLUMN IF EXISTS search_text;
ALTER TABLE messages DROP COLUMN IF EXISTS text_content;


Potential Dependency Conflicts: Dropping columns and indexes without checking for dependencies (such as triggers, views, or application code) may cause runtime errors or inconsistencies.

Recommended Solution: Verify that no other database objects or application logic depend on these columns or the index before executing the migration.
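For view dependencies, the information schema can surface dependents before the migration runs (a sketch; this catches views, but not, for example, column references inside trigger function bodies):

-- List views that still reference the columns being dropped.
SELECT view_schema, view_name
FROM information_schema.view_column_usage
WHERE table_name = 'messages'
  AND column_name IN ('search_text', 'text_content');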

Comment on lines +5 to +9
-- NOTE: CONCURRENTLY cannot run inside a transaction block.
-- golang-migrate runs each file in a transaction by default;
-- the operator must run this migration manually with:
-- psql -f 000004_trigram_search.up.sql
-- or disable transactions in the migration tool.


Migration Execution Risk:
The migration requires manual execution due to the use of CONCURRENTLY, which cannot run inside a transaction block. If the operator overlooks this requirement, the migration will fail, potentially leaving the database in an inconsistent state.

Recommendation:
Consider splitting the migration into two files: one for enabling the extension (which can run in a transaction), and another for index creation (to be run manually or with transaction disabled). Alternatively, provide automated checks or scripts to ensure proper execution.
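A sketch of the split, following the existing file numbering (index and column names are placeholders, not the actual definitions in this PR):

-- 000004_trigram_extension.up.sql (safe inside a transaction)
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- 000005_trigram_index.up.sql (requires transactions disabled, since
-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_messages_content_trgm
ON messages USING gin (content gin_trgm_ops);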

Comment on lines +56 to 61
if err := json.NewEncoder(w).Encode(envelope{Error: &Error{Status: status, Code: code, Message: message}}); err != nil {
    if logger == nil {
        logger = slog.Default()
    }
    logger.Error("encoding JSON error response", "error", err)
}


Error Handling in WriteError:
If JSON encoding fails, the client receives only the status code and no response body, which may hinder client-side error handling and debugging. Consider returning a minimal error message in plain text or a fallback JSON structure to ensure the client always receives a response body.

Example:

w.Header().Set("Content-Type", "text/plain")
w.Write([]byte("Internal Server Error"))
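Note that if WriteError sets the status and headers before calling Encode, a fallback written at this point would be appended to a partially sent body. A more robust variant, as a sketch, encodes into a buffer first and writes only on success (assuming the envelope and Error types from this file, plus bytes and net/http imports):

if logger == nil {
    logger = slog.Default()
}
var buf bytes.Buffer
if err := json.NewEncoder(&buf).Encode(envelope{Error: &Error{Status: status, Code: code, Message: message}}); err != nil {
    // Nothing has been sent yet, so the client still gets a
    // well-formed fallback response.
    logger.Error("encoding JSON error response", "error", err)
    http.Error(w, "internal server error", http.StatusInternalServerError)
    return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)
if _, err := w.Write(buf.Bytes()); err != nil {
    logger.Error("writing JSON error response", "error", err)
}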

Comment on lines +69 to +71
if !ok || userID == "" {
WriteError(w, http.StatusForbidden, "forbidden", "user identity required", logger)
return "", false


Audit Logging in requireUserID:
When user identity extraction fails, the function returns a 403 error but does not log the failure. For security and audit purposes, consider logging these events to track unauthorized access attempts. Log specific request fields rather than the whole *http.Request, so headers or credentials do not leak into the logs.

Example:

if logger == nil {
    logger = slog.Default()
}
logger.Warn("user identity missing or invalid", "method", r.Method, "path", r.URL.Path, "remote_addr", r.RemoteAddr)

return
}

limit := min(parseIntParam(r, "limit", 20), 100)


Potential Negative Limit Issue:
The calculation for limit uses min(parseIntParam(r, "limit", 20), 100), but if parseIntParam returns a negative value, limit could be negative. This could lead to unexpected behavior or security vulnerabilities in downstream code.

Recommended Solution:
Add a check to ensure limit is non-negative:

limit := min(max(parseIntParam(r, "limit", 20), 0), 100)

Comment on lines +40 to +42
offset := parseIntParam(r, "offset", 0)
if offset > 10000 {
WriteError(w, http.StatusBadRequest, "invalid_offset", "offset must be 10000 or less", h.logger)


Offset Parameter Not Checked for Negativity:
The offset parameter is only checked for being greater than 10000, but negative values are not prevented. Passing a negative offset could result in undefined or erroneous behavior in the search logic.

Recommended Solution:
Add a check to ensure offset is non-negative:

if offset < 0 || offset > 10000 {
    WriteError(w, http.StatusBadRequest, "invalid_offset", "offset must be between 0 and 10000", h.logger)
    return
}
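Since this comment and the limit comment above guard the same pattern, a small helper could centralize the bounds (a sketch: parseIntParam is the existing helper, boundedIntParam is hypothetical, and the min/max builtins require Go 1.21+). Note that it clamps rather than rejecting, so it matches the limit behavior but would replace the 400 response proposed above for out-of-range offsets:

// boundedIntParam parses an integer query parameter and clamps it to
// [lo, hi], so negative or oversized values never reach the store layer.
func boundedIntParam(r *http.Request, name string, def, lo, hi int) int {
    return min(max(parseIntParam(r, name, def), lo), hi)
}

limit := boundedIntParam(r, "limit", 20, 0, 100)
offset := boundedIntParam(r, "offset", 0, 0, 10000)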

Comment on lines +67 to +68
go ch.startPendingCleanup(ctx)


Potential goroutine failure without recovery:

The goroutine started for pending query cleanup (go ch.startPendingCleanup(ctx)) has no panic recovery. An unrecovered panic in any goroutine terminates the entire process, and an early error return would silently stop cleanup, leading to resource leaks or unhandled expired queries.

Recommendation:
Wrap the goroutine logic in a deferred recovery block and log any errors or panics to ensure failures are visible and do not compromise server stability.

go func() {
    defer func() {
        if r := recover(); r != nil {
            logger.Error("pending cleanup goroutine panicked", "error", r)
        }
    }()
    ch.startPendingCleanup(ctx)
}()
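If more background tasks are added later, the same pattern could be factored into a helper (a sketch; safeGo is hypothetical, not part of this PR):

// safeGo runs fn in a goroutine and logs panics instead of letting
// one misbehaving background task take down the whole process.
func safeGo(logger *slog.Logger, name string, fn func()) {
    go func() {
        defer func() {
            if r := recover(); r != nil {
                logger.Error("background goroutine panicked", "name", name, "panic", r)
            }
        }()
        fn()
    }()
}

safeGo(logger, "pending-cleanup", func() { ch.startPendingCleanup(ctx) })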

