Releases: LettuceAI/app
1.2.0
Highlights
- Major investment in desktop UX, especially the Create Character flow.
- Significant expansion of the prompt system, including a new Prompt Structure Viewer and new runtime prompt-injection controls.
- Broad chat stability/performance hardening and large dynamic-memory improvements.
- Continued embedding/ONNX runtime reliability work across desktop, Android, and Windows.
- Expanded provider/model ecosystem (including NVIDIA NIM) and safer import behavior.
Desktop UI and Character Creation Redesign
- Added responsive desktop layouts across character creation steps.
- Reworked character creation step order to improve setup flow.
- Redesigned character “extras” inputs for clearer, faster editing.
- Improved character create/edit with fallback-model selector support.
- Fixed UX friction in character flows, including create-step ordering and navigation consistency.
- Added metadata handling improvements for imported cards and avatar URL behavior.
- Added lorebook import support in character creation workflows.
Prompt System Upgrades
- Added Prompt Structure Viewer in the system prompt editor to preview message composition.
- Added conditional prompt injection mode.
- Added interval prompt injection mode.
- Added runtime option to condense prompts into a single system message.
- Fixed prompt import behavior to correctly respect `prompt_order`.
- Fixed buggy drag-and-drop reorder behavior in prompt entry editor.
- Improved prompt-related import UX and editor predictability.
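For readers curious how the condense option fits together, here is a minimal sketch of folding ordered prompt entries into a single system message at request time; the `PromptEntry` shape and `buildSystemMessages` name are illustrative, not the app's actual types.

```ts
// Minimal sketch (not the app's actual implementation): condensing ordered
// prompt entries into one system message when the runtime option is enabled.
type Role = "system" | "user" | "assistant";

interface PromptEntry {
  role: Role;
  content: string;
  enabled: boolean;
}

function buildSystemMessages(
  entries: PromptEntry[],
  condense: boolean,
): { role: Role; content: string }[] {
  const active = entries.filter((e) => e.enabled && e.content.trim().length > 0);
  if (!condense) {
    return active.map((e) => ({ role: e.role, content: e.content }));
  }
  // Join every active entry into a single system message, preserving entry order.
  const merged = active.map((e) => e.content.trim()).join("\n\n");
  return [{ role: "system", content: merged }];
}
```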
Chat, Group Chat, and UI
- Added shared `ChatLayout` for persistent background behavior across chat sub-routes.
- Added shared `GroupChatLayout` with lifted data loading.
- Added branch-to-group-chat action from message actions.
- Added lorebook-usage visibility per message.
- Added safe-area padding fixes for chat footer and bottom menu.
- Removed unwanted dark overlay above background images.
- Fixed chat search back-button sizing and related UI polish.
- Improved session back-stack handling across settings/history navigation.
- Fixed persona selection conflicts during scroll interactions.
- Removed duplicate dismiss controls in chat memories error state.
Chat Stability and Performance
- Fixed dynamic-memory listener leak during async chat setup.
- Bounded attachment cache growth in session hooks.
- Ignored stale attachment loads after chat state transitions.
- Fixed cleanup of jump-to-message RAF/timeout resources.
- Improved message memo checks with derived display props.
- Reduced attachment diff cost in chat memoization.
- Added fallback model retry logic with usage attribution.
- Disabled fallback attempts when no fallback model is configured.
- Added swap-places mode with role-aware generation.
- Reverted one streaming animation perf change after validation feedback.
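The stale-attachment fix is essentially a request-token guard. A minimal sketch, assuming a hypothetical `loadAttachments` helper rather than the app's real hook:

```ts
// Minimal sketch of a stale-load guard: only the most recent request is
// allowed to apply its result after a chat state transition.
let currentLoadToken = 0;

async function loadAttachments(
  sessionId: string,
  fetchAttachments: (id: string) => Promise<string[]>,
  apply: (urls: string[]) => void,
): Promise<void> {
  const token = ++currentLoadToken;        // token for this request
  const urls = await fetchAttachments(sessionId);
  if (token !== currentLoadToken) return;  // a newer load superseded this one
  apply(urls);
}
```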
Dynamic Memory System
- Added cursor-delta summarization of new messages.
- Added self-healing cursor behavior after deletes/rewinds.
- Added deduplication by cosine similarity at memory creation.
- Added adaptive decay rate based on access count.
- Added category tagging for memories.
- Added hybrid retrieval using similarity + recency + access frequency.
- Added configurable retrieval selection limit.
- Added smart and cosine retrieval strategies.
- Added memory panel category filter chips.
- Added memory activity log redesign with timeline/collapsible UX.
- Auto-refresh of memory views after dynamic-memory completion.
- Enforced gating behavior for dynamic-memory manual mode.
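Hybrid retrieval combines the three signals into one score. The sketch below is illustrative only; the weights, field names, and decay constants are assumptions, not the app's actual values.

```ts
// Illustrative only: one way to blend similarity, recency, and access
// frequency into a single retrieval score for ranking memories.
interface MemoryEntry {
  embedding: number[];
  createdAt: number;   // epoch ms
  accessCount: number;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function hybridScore(query: number[], m: MemoryEntry, now = Date.now()): number {
  const similarity = cosine(query, m.embedding);
  const ageDays = (now - m.createdAt) / 86_400_000;
  const recency = Math.exp(-ageDays / 30);                       // fades over ~a month
  const frequency = Math.log1p(m.accessCount) / Math.log1p(100); // saturating boost
  return 0.6 * similarity + 0.25 * recency + 0.15 * frequency;   // assumed weights
}
```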
Embeddings, ONNX Runtime, and Android
- Fixed ONNX runtime bundling and dylib path handling.
- Fixed dev rebuild-loop behavior tied to ONNX runtime integration.
- Pinned/standardized dylib preloading and path behavior.
- Ensured Android ONNX resource directory and packaging consistency.
- Improved desktop guards around ONNX runtime initialization.
- Improved handling of ORT init result variants/booleans.
- Added pre-step for embedding download and runtime ORT fetch.
- Extracted Windows DLL dependencies for ONNX runtime packaging.
- Locked ORT version to `2.0.0-rc.10`.
- Added embedding model v3 support and multi-version management.
- Added experimental keep-loaded embedding runtime with cache reset on version switch.
- Fixed Android post-regenerate WebView freeze and tracing consistency.
- Made Android ONNX runtime init deterministic.
Providers, Models, Endpoints, and Security
- Added NVIDIA NIM provider.
- Added custom-provider tool-choice mode configurability.
- Added OpenRouter free-model toggle in model selector.
- Improved model selector search and suggestions.
- Added custom endpoint config persistence and auth/model-fetch mapping controls.
- Hid `llama.cpp` provider on mobile onboarding/settings where unsupported.
- Added security toggle to disable remote avatar downloads on card import.
- Disabled Chutes API key validation where it blocked onboarding flows.
Lorebooks, Import/Export, and Content Controls
- Added world-info import/export and creation import action.
- Added character card metadata support and lorebook import path improvements.
- Added new “pure mode” content filtering system.
Usage, Sync, and Reliability
- Added app-time tracking backend support and analytics view.
- Enforced host-authoritative manifest diff in sync logic.
- Improved DB reset error surfacing and reset-in-place behavior.
Tooling, CI, and Maintenance
- Migrated workflows to Blacksmith.
- Switched workflows to Bun and refreshed README/tooling docs.
- Added `libclang` dependency for Windows CI builds.
- Removed duplicate Cargo libraries and cleaned project config.
- Added `.gitignore` updates and docs-folder ignore adjustments.
- Fixed Tailwind warning noise in UI build paths.
What's Changed
- .github/workflows: Migrate workflows to Blacksmith runners by @blacksmith-sh[bot] in #8
- Lock ort dependency to exact version 2.0.0-rc.10 by @Copilot in #9
New Contributors
- @blacksmith-sh[bot] made their first contribution in #8
Full Changelog: 1.1.0...1.2.0
Android 1.1.0 & Desktop Beta 3
LettuceAI Android 1.1.0 & Desktop Beta 3: Discovery, Group Chats, Smart Creator, Prompt Editor & Local Inference
Hi everyone! 👋
This release introduces the new Discovery system, full multi-character group chats, a redesigned Smart Creator, a modular Prompt Editor, and deeper local inference controls across Android and Desktop. It also includes broad UI, stability, and workflow refinements!
Discovery
Browse, preview, and import community characters directly from Character Tavern.
- Browse Trending, Popular, and Newest cards or search directly
- Preview full card details before importing
- Pure Mode filters NSFW results automatically (blurred avatars until added)
Group Chats
A new chat mode where multiple characters share one conversation.
- Automatic speaker selection, or force a character with `@mention`
- Start roleplay groups with custom scenes
- More stable long sessions with improved abort handling and streaming fixes
Smart Creator
A goal-based creation flow with streaming and expanded entity support.
- Create Characters, Personas, and Lorebooks with a new goal selector
- Streaming responses and inline previews during creation
- Smart Tool Selection toggle with manual tool presets and per-tool control
- Image generation support with model selection in Advanced Settings
- Preview modes for Personas and Lorebooks
Help Me Reply
Reply suggestions are now faster, stream in real time, and support multiple styles.
- Streaming suggestions with conversation and roleplay styles
- Per-feature model selection and max token controls
Prompting System
Prompts are now modular, entry-based, and easier to manage.
- Redesigned Prompt Editor with auto-scroll and mobile renaming
- Per-entry roles and injection controls (including in-chat entries)
- Import and export System Prompt presets
- Redesigned System Prompts UI and removal of model-level prompts
- Added `{{user}}` placeholder support and improved scene directions
Import & Export
Broader format compatibility for sharing and migration.
- Unified Entity Card (UEC) import/export
- Chara Card v1, v2, and v3 import support
- Export characters as UEC, Chara Card v2, and v3
- Personas can now be exported from the Library
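Routing between these formats usually comes down to inspecting the card JSON. A rough sketch, assuming the public Chara Card `spec` markers; the UEC check is a placeholder guess and the app's actual detection may differ.

```ts
// Rough sketch of import format detection. "chara_card_v2"/"chara_card_v3"
// follow the public card specs; the UEC marker below is a hypothetical field.
type CardFormat = "uec" | "chara_v1" | "chara_v2" | "chara_v3" | "unknown";

function detectCardFormat(raw: string): CardFormat {
  let card: any;
  try {
    card = JSON.parse(raw);
  } catch {
    return "unknown";
  }
  if (card?.spec === "chara_card_v3") return "chara_v3";
  if (card?.spec === "chara_card_v2") return "chara_v2";
  if (typeof card?.uec_version === "string") return "uec"; // hypothetical UEC marker
  // v1 cards carry no spec field, just flat name/description/personality fields.
  if (typeof card?.name === "string" && typeof card?.description === "string") {
    return "chara_v1";
  }
  return "unknown";
}
```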
Local Inference
Stronger local model support and advanced controls.
- Built-in llama.cpp runtime for desktop builds
- Ollama now uses native endpoints
- Automatic context length recommendations to prevent hardware crashes
- Toggle to merge same-role messages for Ollama/llama.cpp compatibility
- Advanced settings for local inference and `<think>` tag support
- CUDA support attempted for llama.cpp (currently disabled)
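The merge-same-role toggle exists because some local backends reject consecutive messages with the same role. A minimal sketch of the idea, with illustrative types:

```ts
// Minimal sketch of the "merge same-role messages" toggle: adjacent messages
// with the same role are folded together before the request is sent.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function mergeSameRoleMessages(messages: ChatMessage[]): ChatMessage[] {
  const merged: ChatMessage[] = [];
  for (const msg of messages) {
    const last = merged[merged.length - 1];
    if (last && last.role === msg.role) {
      last.content += "\n\n" + msg.content; // fold into the previous message
    } else {
      merged.push({ ...msg });
    }
  }
  return merged;
}
```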
UI, UX & Stability
Large refinements across editors, settings, and performance.
- Redesigned Advanced Settings and Dynamic Memory pages
- Refreshed creation menu and new full-screen scene editor
- Improved persona selector and chat model selector menus
- Long-press reordering for Lorebook and System Prompt entries on mobile
- Redesigned toasts with sticky unsaved changes protection (mobile bottom)
- Fixed avatar display issues and Usage page text overflow
- Simplified bottom navigation with larger icons and hidden labels
- Dynamic Memory works correctly after 120+ messages with fixed counters
- Fixed Mistral reasoning parameter handling and custom endpoint base URL display
- Cost calculation fixes with a new Recalculate option
- ONNX Runtime downgraded for broader device compatibility
- Improved logging with a diagnostics section and global error integration
- Embedding model loading now includes additional fail-safes
Full Changelog: 1.0.0...1.1.0
Android 1.0 Release & Desktop Beta 2
🚀 LettuceAI Android Release & Desktop Beta 2: Text-to-Speech, Character Creator, Reply Helper Sync
Hi everyone! 👋
This release brings LettuceAI to Android along with the second desktop beta update.
It introduces Text-to-Speech voices, reply generation assistance, encrypted device-to-device sync, enhanced accessibility features, and per-character voice playback controls.
These updates focus on expressiveness, comfort, and smoother roleplay workflows.
🌟 Highlights
✨ AI Character Creator
The AI Character Creator helps you build fully-formed characters through conversation. Instead of filling forms, you describe the character you have in mind, and the Creator guides you step-by-step.
- Conversational guided character creation
- Automatic field filling (name, traits, description, etc.)
- Optional starting scenes to define tone
- Attach avatars and reference material
You can stop at any time; everything remains editable in the manual editor.
🔊 Text-to-Speech Voices
Characters can now speak using natural-sounding generated voices. You can assign a voice per-character and optionally enable automatic playback for new replies.
- Device TTS: Uses your system’s built-in voice engine
- ElevenLabs: Natural voice synthesis with custom voice support
- Gemini TTS: Neural speech generation with custom voice support
- Create custom voices with style descriptions and reuse them across characters
Generated audio is cached locally to reduce repeated regenerations.
💭 Reply Helper
Reply Helper can generate a suggested reply for your current message, either starting from scratch or expanding the text you already wrote.
- Use my text as base: Improve or complete your draft
- Write something new: Generate a fresh reply
- Regenerate: Try multiple suggestions
🔄 Encrypted Device Sync
Sync lets you transfer your data securely between your own devices without accounts or cloud storage.
- Peer-to-peer encrypted transfer
- No servers or permanent connections
- Manual sync started when needed
💚 Accessibility Improvements
Accessibility options now include sound and haptic feedback for key chat events such as sending, successful replies, and errors.
- Per-event volume controls
- Optional haptic feedback with selectable intensity
- Lightweight and non-intrusive
🗣️ Character Voice Playback
You can now play voice audio for individual messages, or enable autoplay so your character always speaks when replying.
- Assign a default voice per character
- Optional autoplay
- Manual playback button per message
🎬 Scene Directions
Scenes now support private “direction” notes that are hidden from the chat UI and used only to guide model behavior during the opening context of a scene.
🛠️ General Improvements
- Improved character editing workflow
- Better consistency across Android and Desktop
- Internal cleanup and UI polish
🐛 Bug Fixes & Behaviour Improvements
- Reasoning now works correctly with the Google Gemini endpoint
- Fixed an issue where Dynamic Memory processing could cancel when switching pages
- Fixed an issue where characters could be duplicated unexpectedly
- Added a retry button to the embedding download screen
- Fixed Backup settings failing to load existing backups
- Redesigned the Edit Model page into a single-page layout
- Disabled reasoning controls for the Mistral endpoint
- Optimised entry animations in Settings
- Optimised Markdown rendering performance
- Added support for `(...)` and `[...]` as italic formatting shortcuts
- Added Scene Directions to help guide starting scene behaviour
Full Changelog: 1.0-beta.6.2...1.0.0
v1.0-beta-1
Pre-release v1.0-beta-1 - First public beta release
After months of development, LettuceAI is finally ready for its first public beta!
This release marks the beginning of our privacy-first AI role-playing experience, which is powered entirely by local storage, user-owned API keys, and a fully cross-platform architecture.
⚠️ Beta is only for Android devices ⚠️
Known issues & limitations
This is a beta, so please expect rough edges:
- Some UI animations and transitions may stutter on mobile
- Welcome/Onboarding process can fail in rare cases (workaround: press to skip, then add a provider/model from Settings)
- Model switching may not persist between sessions yet
- No voice or gesture support (planned for v1.0)
Feedback & bug reports
Your feedback will directly shape the 1.0 release.
Please report issues via [GitHub Issues] or join our community chat for quick discussions and testing feedback.
1.0-beta.6.2
LettuceAI 1.0-Beta 6.2
Bug Fixes:
- Fix backup system not saving all data
- Fix character context loss after backup restore
- Fix OpenRouter and MistralAI reasoning support
- Fix image backups failing to load
New Features:
- Add Ollama and LM Studio endpoint support
- Add custom OpenAI / Anthropic-compatible endpoints
Improvement:
- Increase request timeout to 15 minutes
Full Changelog: 1.0-beta.6...1.0-beta.6.1
1.0-beta.6
🚀 LettuceAI 1.0-beta.6: Dynamic Memory v2, Lorebooks, In-Chat Image Generation & Major Performance Improvements
Hi everyone! 👋
Beta 6 is a major systems and UX update focused on memory accuracy, world consistency, creative flexibility, and performance.
This release introduces Dynamic Memory v2, a new embedding model, Lorebooks, in-chat image generation, and a large number of UI and internal improvements.
This update is all about making long conversations faster, more coherent, and easier to control, while expanding what’s possible inside a single chat.
🌟 Highlights
🧠 Dynamic Memory v2
Dynamic Memory has been significantly upgraded.
- Faster and more responsive memory handling
- Much higher accuracy when recalling relevant information
- Improved behavior in long-running chats
- Better stability after multiple memory cycles
Dynamic Memory v2 is designed to scale cleanly as conversations grow.
🧬 New Embedding Model
A new embedding model powers memory retrieval in Beta 6.
- ~50% smaller than the previous model
- Faster inference
- Supports up to 4096 tokens (previously 512)
- Enables higher-capacity and more accurate memory queries
Existing memories remain compatible.
🔍 Context Enrichment (Experimental)
An experimental Context Enrichment feature has been introduced.
- Enhances memory queries using the new embedding model
- Improves recall accuracy, especially in follow-up messages
- Helps reduce ambiguity during semantic search
This feature is currently experimental and may evolve in future releases.
📖 Lorebooks (New)
Lorebooks introduce a structured way to inject world, character, and knowledge information into chats.
- Define locations, factions, rules, history, and concepts
- Automatically injected when relevant
- Treated as established canon
Lorebooks improve consistency across scenes and long roleplay sessions while staying separate from character memory.
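Conceptually, "injected when relevant" means matching recent chat text against each entry's trigger terms. A rough sketch with hypothetical field names; the actual matching logic may be more involved.

```ts
// Rough sketch of keyword-triggered lorebook injection; field names and the
// matching rule are assumptions, not the actual implementation.
interface LorebookEntry {
  keywords: string[];
  content: string;
}

function selectRelevantEntries(recentText: string, entries: LorebookEntry[]): string[] {
  const haystack = recentText.toLowerCase();
  return entries
    .filter((e) => e.keywords.some((k) => haystack.includes(k.toLowerCase())))
    .map((e) => e.content); // injected into context as established canon
}
```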
🖼️ In-Chat Image Generation
Images can now be generated directly inside conversations.
- Supported for models that expose image generation
- Enables visual storytelling and richer creative workflows
- Integrated directly into chat flow
🤖 Model & API Improvements
- Added support for the Chutes API endpoint
- Introduced an OpenAI-compatible API endpoint with extensive customization:
  - Custom user / assistant role names
  - Flexible chat completion behavior
- Added Reasoning support for models that expose reasoning tokens
🧭 Chat & Workflow Improvements
🔁 Rewind to Here
- Resume conversations from any previous user message
- Explore alternate paths without losing history
⚙️ Chat Settings Panel
- New per-chat configuration panel
- Easier control over chat-specific behavior
🎨 UI & Layout Improvements
- Redesigned Character Cards for better clarity and hierarchy
- Chat Header memory button now shows:
  - Memory status
  - Amount of memory currently in use
- Improved consistency across chat, settings, and character screens
- Refined spacing, typography, and interaction feedback
- Reduced visual noise in frequently used views
- Redesigned chat history layout for readability
🖥️ Desktop Builds (Desktop-Beta-1)
LettuceAI continues to be available as beta desktop builds alongside mobile.
- Windows: `.msi` installer, `.exe` portable build
- Linux: `.AppImage`, `.deb`, `.rpm`
Desktop builds are still considered beta while platform-specific issues and edge cases are being refined.
Functionality matches the mobile app unless otherwise noted.
⚡ Performance Improvements
- Long chats now load up to ~8× faster
- Character list on the homepage loads faster and scrolls more smoothly
- Improved internal state handling and caching logic
- Backup system robustness significantly improved
🐛 Bug Fixes
- Fixed an issue where Dynamic Memory could get stuck after cycle 2
- Fixed an app freeze caused by corrupted or invalid backup files
- Fixed an incorrect Google API endpoint URL
❤️ Thank You
Beta 6 is a foundational release that strengthens LettuceAI’s core systems while expanding creative and technical flexibility.
Your feedback continues to shape LettuceAI into a deeply customizable, privacy-first AI companion built for long-term conversations and roleplay.
Full Changelog: 1.0-beta.5...1.0-beta.6
1.0-beta.5
🚀 LettuceAI 1.0-beta.5 - Multimodel Support, Chat Branching, Advanced Editing Tools & Major UI Redesigns
Hi everyone! 👋
Beta 5 is a massive feature update focused on creative control, customization, and workflow improvements.
This release brings multimodel support, chat branching, encrypted backups, and a suite of new editing tools, along with major UI redesigns and big internal optimizations.
This update is all about giving users more power over how they create, roleplay, organize, and customize their experience in LettuceAI.
🌟 Highlights
🎨 Avatar Gradient Customization
You can now fully customize avatar gradients with up to three colors, allowing complete creative freedom over character aesthetics.
🤖 Multimodel Support
LettuceAI now allows users to mix different models across different tasks:
- Chat models
- Image generation models
- Memory summarizer
- Embedding manager
This gives users more flexibility and much finer control over performance, cost, and creativity.
🌿 Chat Branching Arrives
One of the most requested features is finally here.
- Create alternate conversation paths
- Explore “what if?” scenarios
- Continue a branch with a different character
- Preserve original timelines while experimenting freely
Perfect for roleplaying, storytelling, or testing character behaviors.
🖼️ Image Features
🧬 Generate Avatar Images
Users can now create or regenerate avatars using models that support image generation.
📷 Post Images in Chat
If the model supports image input, you can now attach and send images within sessions.
This unlocks visual storytelling, image-based prompts, and much deeper creative possibilities.
🧭 Customization & Editing Tools
✂️ Avatar Positioning & Scene Editing
You can now:
- Move & resize character and persona avatars
- Reposition & crop the chat background image
This provides pixel-level control over visual layouts.
🔐 Encrypted Backup System
Securely back up and restore your entire setup with password-protected encryption.
📚 UI & Layout Redesigns
🧩 Redesigned Character Card Layout
Cleaner structure, better hierarchy, and improved readability inside chats.
🗂️ Tabbed Edit Character Page
All editing tools are now organized into tabs, making the experience more intuitive.
🖼️ Avatar Editor Overhaul
A new unified design for both Characters and Personas.
🧬 Persona Creation Page Refresh
Now visually consistent with the “Create Character” page.
🫙 Improved Library Empty States
Cleaner, more informative empty state when no content exists.
⚡ Performance & System Improvements
🔄 Dynamic Memory Responsiveness Boost
The memory system now reacts and updates faster, reducing latency and improving consistency on long sessions.
🧭 Predictable Navigation System
Navigation state logic has been rewritten to be more stable across view transitions.
🚀 Internal Optimizations (a lot of them)
This update includes a deep performance pass:
- Reduced unnecessary component re-renders
- Optimized SQLite queries for lower overhead
- Improved store subscription batching
- Smoother UI transitions
- Lower memory usage on long-running sessions
- Faster startup times
Beta 5 should feel noticeably faster, especially on mid-range devices.
🐛 Bug Fixes
- Skipping the welcome page no longer causes a database panic.
- Summaries are now properly included in outgoing API requests.
- Branches now generate valid, unique IDs.
Extra Improvements
- New `{{context_summary}}` and `{{key_memories}}` placeholders added to the Prompt system.
- Added more consistent layout structure across creation/edit flows.
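The new placeholders behave like the existing ones: they are substituted when the prompt is built. A tiny illustrative resolver (not the app's real one):

```ts
// Tiny sketch of how placeholders like {{context_summary}} and {{key_memories}}
// could be substituted at prompt-build time; the real resolver may differ.
function fillPlaceholders(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) => values[key] ?? match);
}

// Example:
// fillPlaceholders("Summary: {{context_summary}}", { context_summary: "..." })
```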
❤️ Thank You
Beta 5 continues to build on the foundation established in Beta 4. More power, more stability, more creativity.
Your feedback is helping shape LettuceAI into a fully-customizable, privacy-first role-play companion that grows with you.
Full Changelog: 1.0-beta_4...1.0-beta.5
1.0-beta_4
🚀 LettuceAI 1.0-beta_4 - Memory System, Library UI, Navigation Overhaul & More
Hi everyone! 👋
This is the biggest and most important update LettuceAI has had so far.
Beta 4 has a brand-new navigation layout, a fully redesigned Library page, an in-house embedding model, and the long-awaited Manual + Dynamic Memory systems, all powered by a new SQLite database backend.
🌟 Highlights
Modernized Navigation
- Completely redesigned top navigation bar following modern UI standards
- Settings moved from bottom navigation to the top
- Bottom navigation now hosts the brand-new Library section
New Library Page
- Characters and Personas are now displayed in beautiful card layouts
- Avatar-based dynamic background colors
- Cleaner, more visual experience
- Character/Persona management moved out of Settings
Manual & Dynamic Memory Arrive
Introducing a full memory system designed for long-term, consistent roleplay:
📌 Manual Memory
- Users can pin important facts or details
- Not restricted by token window
⚠️ May increase token usage on paid models if used excessively
🔄 Dynamic Memory (Efficient + Long-Term)
Dynamic Memory uses a sliding window of the last N messages.
Every N messages, the app:
- Summarizes them in the background
- Stores the summary as a long-term memory
- Uses our in-house embedding model (lettuce-emb-512d-v1) to retrieve only the most relevant memories for each response. The model is trained on a large dataset of roleplay conversations and is fully optimized for medium-end mobile devices, using less energy and achieving response times under 400ms.
This system has a huge benefit:
Dynamic Memory avoids sending the entire chat history to the model, which consumes far fewer tokens and makes long-term roleplay cheaper, especially with paid models.
Users can also choose which model powers:
- The memory summarizer
- The memory manager (embedding search)
Together, this provides a persistent experience without burning tokens.
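Put together, the sliding-window flow can be sketched as below; `WINDOW_SIZE`, the function names, and the cursor handling are simplifications of the behaviour described above.

```ts
// Simplified sketch of the sliding-window idea: every N new messages, the
// oldest unsummarized slice is summarized in the background and stored as a
// long-term memory. Names and the window size are illustrative.
const WINDOW_SIZE = 20; // "N" in the description above

async function maybeSummarize(
  messages: string[],
  lastSummarizedIndex: number,
  summarize: (chunk: string[]) => Promise<string>,
  store: (summary: string) => Promise<void>,
): Promise<number> {
  const unsummarized = messages.length - lastSummarizedIndex;
  if (unsummarized < WINDOW_SIZE) return lastSummarizedIndex;
  const chunk = messages.slice(lastSummarizedIndex, lastSummarizedIndex + WINDOW_SIZE);
  const summary = await summarize(chunk); // runs in the background
  await store(summary);                   // stored as a long-term memory
  return lastSummarizedIndex + WINDOW_SIZE;
}
```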
🌐 New Provider Endpoint Support
Beta 4 brings a huge wave of new AI provider integrations.
LettuceAI now supports the following additional endpoints:
- Deepseek
- Featherless AI
- Google Gemini
- MoonShot AI
- NanoGPT
- Qwen
- Anannas AI
- xAI
- zAI
Storage System Rebuilt (Breaking Change)
All storage has been migrated from mixed .json / .bin formats to a robust SQLite (.db) backend.
Character & Persona Import/Export
- Added import/export support via `.json` files
- Easy backup, sharing, and syncing between devices
UI & UX Improvements
- Massive consistency pass across the entire app
- Optimized ChatHistory and Character/Persona edit pages
- Pinned messages are now fully functional
- Required-variable checks added to the Prompt Editor
- Bottom popup animations redesigned to eliminate stutter
Improved Usage Tracking
Usage events are now typed, including:
- Chat
- Regenerate
- Continue
- Summaries (tool calls)
- Memory manager actions (tool calls)
General Improvements
- Smoother transitions and cleaner component layouts
- Many polish fixes across spacing, alignment, and typography
Migration Notes
- This update performs major database migrations
- Old `.json`/`.bin` formats are no longer supported (App will auto migrate)
❤️ Thank You
Beta 4 lays the foundation for the next generation of LettuceAI. A more intelligent, persistent, and efficient role-play experience with memory that actually works.
Full Changelog: 1.0-beta_3.2...1.0-beta_4
1.0-beta_3.2
Hi everyone! 👋
This update focuses on better character customisation, more control over model behaviour, and several UX improvements that make the app feel more consistent.
Persona Avatars & Character Visuals
Persona avatars are now fully supported and are saved correctly across sessions.
We have also introduced dynamic gradient backgrounds for character cards.
These gradients are generated from the colours of the character avatars, giving every character a distinct visual identity.
This feature is optional (enabled by default) and can be toggled in Settings.

Model Parameter Controls
We have added support for additional sampling and behaviour controls:
- Frequency Penalty
- Presence Penalty
- Top-K Sampling
You can now adjust how repetitive, creative or structured the model's behaviour is, character by character.
There is also a new API Parameter Support List modal.
This shows which parameters are supported by the currently selected model, whether app-default or character-specific.
Fixes & Stability
- Fixed avatars not being saved properly
- Improved spacing in the chat header, history, and settings
- Improved persona loading behavior across restarts
- Fixed the Response Styles Custom menu still using the old UI design
- Added ability to cancel message regenerations.
Thanks again for your support 💚
Full Changelog: 1.0-beta_3.1...1.0-beta_3.2
v1.0-beta_3.1
Hi everyone! 👋
This update focuses on simplifying the system prompt architecture, improving character navigation, and preparing the app for the upcoming Manual Memory system.
Custom System Prompt Rework
Previously, we used multiple system prompt scopes (App-wide, Model, Character).
While this approach was flexible, it made the app harder to understand and maintain.
This update removes all scope layers and replaces them with a single, simplified prompt flow.
Now:
- The system prompts manager is easier to understand and use.
- System behaviour is more predictable.
- Editing prompts is much easier.
- Characters behave more consistently in long conversations.
New default system prompt (much better performance)
We also rewrote the default system prompt entirely.
This version produces:
- A more stable tone and personality
- Better conversation depth
- Stronger memory-like consistency, even without memory enabled yet
To switch to the new default prompt:
Settings > System Prompts > App default > "Reset" (next to prompt content) > scroll to the bottom > "Update Template".
Note: This change only affects new chats/sessions.
Character & Persona Search Page
There is now a search page to help you quickly filter:
- Characters
- Personas
This is especially helpful if you have many character setups or role variations.
Message Pinning (Early Feature)
You can now pin messages in chat.
Currently, this feature is visual-only and does not affect AI behaviour.
This feature forms the basis of the upcoming Manual Memory system.
Pinned messages will soon act as memory anchors.
The AI will use these to maintain context and identity during sessions.
For now, pin any messages that you want the AI to remember later.
Other Improvements
- General UI clarity improvements
- Cleaned up old prompt and scene-handling code
- Smoother request/cancel flow
- Logging updates and internal stability improvements
Thanks again for your support 💚
This update paves the way for Beta_4, which will introduce Manual Memory.
Full Changelog: 1.0-beta_3...1.0-beta_3.1