diff --git a/README.md b/README.md
index 9110ea9..91a9dd3 100644
--- a/README.md
+++ b/README.md
@@ -169,6 +169,49 @@ Cortex Memory includes a powerful web-based dashboard (`cortex-mem-insights`) th
 These visual tools help you understand how Cortex Memory is performing and how your AI agent's memory is evolving over time.
 
+# 🌟 Community Showcase: Cortex TARS
+
+Meet **Cortex TARS** — a production-ready AI-native TUI (Terminal User Interface) application that demonstrates the true power of Cortex Memory. Built as a "second brain" companion that can truly hear and remember your voice in the real world, Cortex TARS brings **auditory presence** to your AI experience and showcases how persistent memory transforms AI interactions from fleeting chats into lasting, intelligent partnerships.
+
+## What Makes Cortex TARS Special?
+
+Cortex TARS is more than just a chatbot — it's a comprehensive AI assistant platform that leverages Cortex Memory's advanced capabilities:
+
+### 🎭 Multi-Agent Management
+Create and manage multiple AI personas, each with distinct personalities, system prompts, and specialized knowledge areas. Whether you need a coding assistant, a creative writing partner, or a productivity coach, Cortex TARS lets you run them all simultaneously with complete separation.
+
+### 💾 Persistent Role Memory
+Every agent maintains its own long-term memory, learning from interactions over time. Your coding assistant remembers your coding style and preferences; your writing coach adapts to your voice and goals. No more repeating yourself — each agent grows smarter with every conversation.
+
+### 🔒 Memory Isolation
+Advanced memory architecture ensures complete isolation between agents and users. Each agent's knowledge base is separate, preventing cross-contamination while enabling personalized experiences across different contexts and use cases.
+
+### 🎤 Real-Time Audio-to-Memory (The Game Changer)
+**This is where Cortex TARS truly shines.** With real-time device audio capture, Cortex TARS can listen to your conversations, meetings, or lectures and automatically convert them into structured, searchable memories. Imagine attending a meeting while Cortex TARS silently captures key insights, decisions, and action items — all stored and ready for instant retrieval later. No more frantic note-taking or forgotten details!
+
+## Why Cortex TARS Matters
+
+Cortex TARS isn't just an example — it's a fully functional application that demonstrates:
+
+- **Real-world production readiness**: Built with Rust, it's fast, reliable, and memory-safe
+- **Seamless Cortex Memory integration**: Shows best practices for leveraging the memory framework
+- **Practical AI workflows**: From multi-agent conversations to audio capture and memory extraction
+- **User-centric design**: Beautiful TUI interface with intuitive controls and rich features
+
+## Explore Cortex TARS
+
+Ready to see Cortex Memory in action? Dive into the Cortex TARS project:
+
+```bash
+cd examples/cortex-mem-tars
+cargo build --release
+cargo run --release
+```
+
+Check out the [Cortex TARS README](examples/cortex-mem-tars/README.md) for detailed setup instructions, configuration guides, and usage examples.
+ +**Cortex TARS proves that Cortex Memory isn't just a framework — it's the foundation for building intelligent, memory-aware applications that truly understand and remember.** + # 🏆 Benchmark Cortex Memory has been rigorously evaluated against LangMem using the **LOCOMO dataset** (50 conversations, 150 questions) through a standardized memory system evaluation framework. The results demonstrate Cortex Memory's superior performance across multiple dimensions. diff --git a/examples/cortex-mem-tars/README.md b/examples/cortex-mem-tars/README.md index 408c55e..daaaa2a 100644 --- a/examples/cortex-mem-tars/README.md +++ b/examples/cortex-mem-tars/README.md @@ -1,184 +1,365 @@ -# Cortex Memory TARS +# Cortex TARS -这是一个基于 Cortex Memory 的 TUI(终端用户界面)聊天应用,具有记忆功能。它能够记住用户的对话历史和个人信息,提供更智能的对话体验。 +

+ 🎧 Share Your Auditory Presence with AI — A Next-Gen Personal Agent Powered by Cortex Memory +

-## 功能特性 +Cortex TARS is a production-ready TUI (Terminal User Interface) application that brings **auditory presence** to your AI experience. Built on **Cortex Memory**, it's not just a chatbot — it's an intelligent AI assistant platform that can truly hear and remember your voice in the real world. Cortex TARS maintains deep links with your memory, capturing and preserving your auditory experiences through its extensible API capabilities. -- 🧠 **记忆功能**:自动记忆用户的对话历史和个人信息 -- 🤖 **智能 AI 助手**:支持多个机器人配置,每个机器人可以有不同的系统提示词 -- 📝 **Markdown 渲染**:支持 Markdown 格式的消息显示 -- 💾 **对话导出**:可以将对话导出到剪贴板 -- 🔧 **灵活配置**:支持自定义 LLM API、向量存储等配置 -- 🎨 **现代化 TUI**:基于 ratatui 的美观终端界面 +## ✨ Key Features -## 安装 +### 🎭 Multi-Agent Management +Create and manage multiple AI personas, each with distinct personalities, system prompts, and specialized knowledge areas. Whether you need a coding assistant, a creative writing partner, or a productivity coach, Cortex TARS lets you run them all simultaneously. -### 前置要求 +### 💾 Persistent Role Memory +Every agent maintains its own long-term memory, learning from interactions over time. Your coding assistant remembers your coding style and preferences; your writing coach adapts to your voice and goals. Powered by Cortex Memory's intelligent memory management. -- Rust 1.70 或更高版本 -- Qdrant 向量数据库(可选,用于记忆功能) -- OpenAI API 密钥或其他兼容的 LLM API +### 🔒 Memory Isolation +Advanced memory architecture ensures complete isolation between agents and users. Each agent's knowledge base is separate, preventing cross-contamination while enabling personalized experiences across different contexts. -### 构建项目 +### 🎨 Modern TUI Experience +- **Beautiful Interface**: Built with ratatui for a polished, responsive terminal experience +- **Multiple Themes**: Choose from 5 pre-built themes (Default, Dark, Forest, Ocean, Sunset) +- **Markdown Support**: Rich text rendering with full Markdown syntax +- **Stream Responses**: Real-time streaming AI responses for smooth conversations +- **Message Export**: Export conversations to clipboard with a single command + +### 🔌 Extensible API Integration +Cortex TARS provides a REST API server that enables external services to interact with the memory system: + +- **Store Mode**: External services can store information directly to the memory system +- **Chat Mode**: External messages can be injected as user input for AI processing +- **Health Check**: Monitor API service status +- **Memory Retrieval**: Query and list stored memories programmatically + +## 📋 Prerequisites + +- **Rust** 1.70 or later +- **Qdrant** vector database (for memory functionality) +- **OpenAI-compatible** LLM API endpoint + +## 🚀 Installation + +### Clone and Build ```bash -cd examples/cortex-mem-tars-new +cd examples/cortex-mem-tars cargo build --release ``` -## 配置 +The compiled binary will be available at `target/release/cortex-mem-tars`. -### 1. 创建配置文件 +## ⚙️ Configuration -将 `config.example.toml` 复制为 `config.toml` 并修改相应的配置: +### 1. Create Configuration File + +Copy the example configuration: ```bash cp config.example.toml config.toml ``` -### 2. 修改配置 +### 2. 
Edit Configuration -编辑 `config.toml` 文件,至少需要配置以下内容: +Edit `config.toml` with your settings: ```toml +[qdrant] +url = "http://localhost:6334" +collection_name = "cortex_mem" +timeout_secs = 30 + [llm] api_base_url = "https://api.openai.com/v1" -api_key = "your-actual-api-key" +api_key = "your-api-key-here" model_efficient = "gpt-4o-mini" +temperature = 0.7 +max_tokens = 2000 [embedding] api_base_url = "https://api.openai.com/v1" -api_key = "your-actual-api-key" +api_key = "your-api-key-here" model_name = "text-embedding-3-small" - -[qdrant] -url = "http://localhost:6334" +batch_size = 100 + +[memory] +max_memories = 10000 +similarity_threshold = 0.65 +max_search_results = 50 +auto_enhance = true +deduplicate = true + +[api] +port = 8080 +api_key = "ANYTHING_YOU_LIKE" +enable_cors = true ``` -### 3. 启动 Qdrant(可选,用于记忆功能) - -如果你使用记忆功能,需要启动 Qdrant 向量数据库: +### 3. Start Qdrant ```bash -# 使用 Docker +# Using Docker docker run -p 6334:6334 qdrant/qdrant -# 或使用本地安装 +# Or use local installation qdrant ``` -## 使用方法 +## 🎮 Usage -### 运行应用 +### Basic Commands ```bash -cargo run --release +# Run with enhanced memory saving (saves conversations on exit) +cortex-mem-tars --enhance-memory-saver + +# Run with API server enabled for external integrations +cortex-mem-tars --enable-audio-connect --audio-connect-mode store + +# Chat mode: external messages are treated as user input +cortex-mem-tars --enable-audio-connect --audio-connect-mode chat ``` -### 基本操作 +### Keyboard Shortcuts -- **Enter**:发送消息 -- **Shift+Enter**:换行 -- **Ctrl+L**:打开/关闭日志面板 -- **Esc**:关闭日志面板 -- **Ctrl+H**:显示帮助信息 -- **Ctrl+C**:清空会话 -- **Ctrl+D**:导出对话到剪贴板 -- **q**:退出程序 +| Key | Action | +|-----|--------| +| `Enter` | Send message | +| `Shift+Enter` | New line in input | +| `Ctrl+C` | Clear current session | +| `Ctrl+D` | Export conversation to clipboard | +| `Ctrl+H` | Show help modal | +| `Ctrl+T` | Open theme selector | +| `Ctrl+B` | Open bot management | +| `q` | Quit application (in bot selection) | +| `Esc` | Close modal / Return to previous state | -### 命令 +### Bot Management -在输入框中输入以下命令: +Cortex TARS supports multiple AI bots with different personalities: -- `/quit`:退出程序 -- `/clear`:清空会话 -- `/help`:显示帮助信息 -- `/dump`:导出对话到剪贴板 +1. **Create a New Bot**: Press `Ctrl+B` → Select "Create Bot" +2. **Set Bot Properties**: + - **Name**: Display name for the bot + - **System Prompt**: The bot's personality and behavior instructions + - **Password**: Optional access password for security +3. **Edit/Delete Bots**: Manage existing bots through the bot management interface -## 项目结构 +Each bot maintains its own independent memory, ensuring complete separation of knowledge and context. +## 🔌 API Integration + +Cortex TARS provides a REST API for external services to interact with the memory system. 
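+
+For a rough picture of how an external service might call this API from Rust, here is a minimal client sketch. It is only an illustration: it assumes the `reqwest` crate (with the `blocking` and `json` features) plus `serde_json`, uses the default `http://localhost:8080` address, reuses the sample payload shown below, and omits authentication since the way the configured `[api] api_key` is transmitted is not covered in this README.
+
+```rust
+use serde_json::json;
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+    // Body matching the StoreMemoryRequest fields documented below (placeholder values).
+    let body = json!({
+        "content": "The user mentioned they prefer Rust over Python",
+        "source": "audio_listener",
+        "timestamp": "2024-01-07T10:30:00Z",
+        "speaker_type": "user",
+        "speaker_confidence": 0.95
+    });
+
+    // POST the snippet to the store endpoint exposed by the Cortex TARS API server.
+    let response = reqwest::blocking::Client::new()
+        .post("http://localhost:8080/api/memory/store")
+        .json(&body)
+        .send()?;
+
+    println!("store returned HTTP {}", response.status());
+    Ok(())
+}
+```
+
+Once stored, the entry can be looked up through the retrieval endpoints described next.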
+ +### API Endpoints + +#### Health Check +```bash +GET http://localhost:8080/api/memory/health ``` -cortex-mem-tars-new/ -├── src/ -│ ├── main.rs # 主程序入口 -│ ├── app.rs # 应用程序主逻辑 -│ ├── agent.rs # Agent 实现(包括记忆功能) -│ ├── config.rs # 配置管理 -│ ├── infrastructure.rs # 基础设施(LLM、向量存储、记忆管理器) -│ ├── logger.rs # 日志系统 -│ └── ui.rs # TUI 界面 -├── config.example.toml # 配置文件示例 -└── README.md # 本文件 + +#### Store Memory (Store Mode) +```bash +POST http://localhost:8080/api/memory/store +Content-Type: application/json + +{ + "content": "The user mentioned they prefer Rust over Python", + "source": "audio_listener", + "timestamp": "2024-01-07T10:30:00Z", + "speaker_type": "user", + "speaker_confidence": 0.95 +} +``` + +#### Retrieve Memories +```bash +GET http://localhost:8080/api/memory/retrieve?query=user%20preferences&limit=5 ``` -## 核心功能 +#### List Memories +```bash +GET http://localhost:8080/api/memory/list?speaker_type=user&limit=10 +``` -### 1. 记忆功能 +### Request/Response Models -应用会自动: +**StoreMemoryRequest**: +- `content` (string): Text content to store +- `source` (string): Source identifier (e.g., "audio_listener") +- `timestamp` (string): ISO 8601 timestamp +- `speaker_type` (string): "user" or "other" +- `speaker_confidence` (float): 0-1 confidence score -- 在启动时加载用户的基本信息(个人特征、事实信息等) -- 在对话过程中使用记忆工具检索相关信息 -- 在退出时将对话历史保存到记忆系统 +**MemoryItem**: +- `id` (string): Unique memory ID +- `content` (string): Stored content +- `timestamp` (string): When it was stored +- `speaker_type` (string): Speaker identifier +- `relevance` (float): Search relevance score -### 2. 多机器人支持 +## 🏗️ Architecture -可以在配置目录中创建多个机器人配置,每个机器人可以有: +``` +cortex-mem-tars/ +├── src/ +│ ├── main.rs # Application entry point +│ ├── app.rs # Core application logic +│ ├── agent.rs # AI agent with Cortex Memory integration +│ ├── config.rs # Configuration management +│ ├── infrastructure.rs # LLM, vector store, memory manager setup +│ ├── api_server.rs # REST API server +│ ├── api_models.rs # API request/response models +│ ├── logger.rs # Logging system +│ ├── ui.rs # TUI interface and rendering +│ └── lib.rs # Library exports +├── config.example.toml # Configuration template +└── README.md # This file +``` -- 不同的名称 -- 不同的系统提示词 -- 不同的访问密码 +## 🧠 How Memory Works -### 3. 流式响应 +Cortex TARS leverages Cortex Memory's intelligent memory system: -支持实时的流式 AI 响应,提供更流畅的对话体验。 +1. **Automatic Extraction**: The system automatically extracts key facts and insights from conversations +2. **Semantic Storage**: Memories are stored as vectors for intelligent retrieval +3. **Context Awareness**: The agent retrieves relevant memories before generating responses +4. **Memory Optimization**: Periodic optimization consolidates and refines memories +5. **Agent Isolation**: Each agent's memory is completely separate from others -## 故障排除 +### Memory Flow -### 1. 无法连接到 Qdrant +```mermaid +sequenceDiagram + participant User + participant TARS as Cortex TARS + participant Memory as Cortex Memory + participant LLM as LLM Service -确保 Qdrant 正在运行并且 URL 配置正确: + User->>TARS: Send message + TARS->>Memory: Retrieve relevant memories + Memory-->>TARS: Return context + TARS->>LLM: Generate response with context + LLM-->>TARS: Stream response + TARS->>User: Display response + TARS->>Memory: Store conversation (if enabled) +``` + +## 🔍 Advanced Features + +### Memory Enhancement + +Enable enhanced memory saving to automatically store conversations: ```bash -curl http://localhost:6334/health +cortex-mem-tars --enhance-memory-saver ``` -### 2. 
API 密钥错误 +This feature: +- Saves entire conversation history to memory on exit +- Preserves context across sessions +- Enables long-term learning and personalization -检查 `config.toml` 中的 API 密钥是否正确。 +### External Integration -### 3. 记忆功能不工作 +The API server enables external services to: -- 确保 Qdrant 正在运行 -- 检查 API 密钥是否正确 -- 查看日志面板获取详细错误信息 +1. **Store Information**: External services can push data to memory +2. **Inject Messages**: Send messages as if typed by the user +3. **Query Memory**: Retrieve stored information programmatically -## 开发 +Example use cases: +- Voice recognition services storing transcribed conversations +- Meeting assistants capturing action items +- Automation tools logging system events +- IoT devices storing sensor data with context -### 运行测试 +### Service Status Monitoring + +Cortex TARS continuously monitors LLM service availability and displays status in the UI: +- 🟢 **Active**: Service is responding normally +- 🔴 **Inactive**: Service is unavailable +- 🟡 **Initing**: Service is initializing + +## 🛠️ Development + +### Run Tests ```bash cargo test ``` -### 检查代码 +### Check Code ```bash cargo check ``` -### 格式化代码 +### Format Code ```bash cargo fmt ``` -## 许可证 +### Build with Optimizations + +```bash +cargo build --release +``` + +## 🐛 Troubleshooting + +### Qdrant Connection Issues + +Verify Qdrant is running: + +```bash +curl http://localhost:6334/health +``` + +Check your `config.toml` Qdrant URL configuration. + +### LLM API Errors + +- Verify API key is correct in `config.toml` +- Check API endpoint URL +- Ensure you have sufficient API credits +- Review logs for detailed error messages + +### Memory Not Working + +- Ensure Qdrant is running and accessible +- Verify API keys for both LLM and embedding services +- Check memory configuration thresholds +- Enable logging for detailed diagnostics + +### Bot Configuration Issues + +Bot configurations are stored in: +- Current directory: `./bots.json` +- System config: `~/.config/cortex/mem-tars/bots.json` + +Check file permissions and JSON syntax if bots don't load. + +## 📚 Resources + +- [Cortex Memory Documentation](https://github.com/sopaco/cortex-mem/tree/main/litho.docs) +- [Cortex Memory Core](../../cortex-mem-core) +- [Cortex Memory Rig Integration](../../cortex-mem-rig) +- [RatATUI Framework](https://github.com/ratatui-org/ratatui) +- [Rig Agent Framework](https://github.com/0xPlaygrounds/rig) + +## 📄 License + +MIT License - see [LICENSE](../../LICENSE) for details. + +## 🙏 Acknowledgments -MIT +- **Cortex Memory**: The intelligent memory framework powering persistent AI memory +- **RatATUI**: Beautiful terminal UI framework +- **Rig**: LLM agent framework for building intelligent systems +- **Qdrant**: High-performance vector database for semantic search -## 致谢 +--- -- [Cortex Memory](https://github.com/sopaco/cortex-mem) - 记忆管理系统 -- [RatATUI](https://github.com/ratatui-org/ratatui) - TUI 框架 -- [Rig](https://github.com/0xPlaygrounds/rig) - LLM Agent 框架 +**Cortex TARS** - Where AI meets persistent memory in the terminal. 
🚀

diff --git a/examples/cortex-mem-tars/src/app.rs b/examples/cortex-mem-tars/src/app.rs
index b0babb6..861f180 100644
--- a/examples/cortex-mem-tars/src/app.rs
+++ b/examples/cortex-mem-tars/src/app.rs
@@ -162,6 +162,15 @@ impl App {
         let backend = CrosstermBackend::new(stdout);
         let mut terminal = ratatui::Terminal::new(backend).context("无法创建终端")?;
 
+        // Add a short delay so that any automatically sent events get processed first.
+        // On Windows in particular, some terminals may send an Enter key event automatically at startup.
+        tokio::time::sleep(Duration::from_millis(100)).await;
+
+        // Drain the event queue, ignoring any automatic events generated at startup.
+        while event::poll(Duration::from_millis(10)).unwrap_or(false) {
+            let _ = event::read();
+        }
+
         let mut last_log_update = Instant::now();
         let mut last_service_check = Instant::now();
         let tick_rate = Duration::from_millis(100);
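The added block above works around terminals (notably on Windows) that emit an automatic Enter key event right after startup. If the same cleanup is ever needed outside this setup path, it could live in a small helper; a sketch, reusing the same `crossterm` `event::poll`/`event::read` calls as the diff (the function name is only illustrative):

```rust
use std::time::Duration;
use crossterm::event;

/// Discard any input events that some terminals (notably on Windows) emit
/// automatically right after startup, such as a stray Enter key press.
fn drain_startup_events() {
    // Keep reading while an event is already queued; stop once the queue is empty.
    while event::poll(Duration::from_millis(10)).unwrap_or(false) {
        let _ = event::read();
    }
}
```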