Support Memory Weave is a FastAPI + LangGraph + PostgreSQL system that transforms raw customer support messages into structured, searchable, and LLM-ready tickets.
The system includes an ingestion pipeline, automated ticket structuring, semantic vector retrieval, and suggested reply generation, all backed by realistic conversational data.
- Accepts raw customer support messages
- Reconstructs conversation context
- Creates or extends conversation threads
- Writes conversations, messages, and tickets into PostgreSQL (see the endpoint sketch below)
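The sketch below shows the shape of this flow. The route path, payload fields, and the in-memory thread store are illustrative stand-ins, not the project's actual implementation (the real routes live in app/api/v1/tickets.py and persist to PostgreSQL).

```python
# Minimal, self-contained sketch of the ingestion flow; an in-memory dict stands
# in for PostgreSQL, and the route path and field names are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Toy thread store: author_id -> list of raw messages (stand-in for the conversations table).
threads: dict[str, list[str]] = {}


class RawMessageIn(BaseModel):
    author_id: str  # hypothetical field name
    text: str


@app.post("/tickets/ingest")  # hypothetical path; see app/api/v1/tickets.py for the real routes
def ingest_message(payload: RawMessageIn):
    # Create the thread on first contact, otherwise extend the existing one.
    thread = threads.setdefault(payload.author_id, [])
    thread.append(payload.text)
    # In the real pipeline the full thread would now be persisted and handed
    # to the LangGraph structurer; here we just report the thread length.
    return {"author_id": payload.author_id, "messages_in_thread": len(thread)}
```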
A multi-step workflow that generates a structured support ticket:
- Issue type classification
- Severity assessment
- Sentiment detection
- Short and long summaries
- Action recommendations
- Fully deterministic, extensible state-machine design (see the LangGraph sketch below)
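A minimal sketch of how such a deterministic LangGraph state machine can be wired. The node names, state fields, and keyword heuristics are assumptions for illustration; the project's actual graph lives in app/graphs/ticket_structurer_graph.py.

```python
# Illustrative LangGraph wiring; node names, state fields, and the keyword
# heuristics are assumptions, not the project's actual graph.
from typing import TypedDict
from langgraph.graph import StateGraph, END


class TicketState(TypedDict, total=False):
    raw_text: str
    issue_type: str
    severity: str
    sentiment: str
    short_summary: str
    action: str


def classify_issue(state: TicketState) -> TicketState:
    text = state["raw_text"].lower()
    return {"issue_type": "billing" if "charge" in text or "refund" in text else "technical"}


def assess_severity(state: TicketState) -> TicketState:
    return {"severity": "high" if "urgent" in state["raw_text"].lower() else "normal"}


def detect_sentiment(state: TicketState) -> TicketState:
    return {"sentiment": "negative" if "!" in state["raw_text"] else "neutral"}


def summarize(state: TicketState) -> TicketState:
    return {"short_summary": state["raw_text"][:80]}


def recommend_action(state: TicketState) -> TicketState:
    return {"action": "escalate" if state["severity"] == "high" else "respond"}


graph = StateGraph(TicketState)
for name, fn in [
    ("classify_issue", classify_issue),
    ("assess_severity", assess_severity),
    ("detect_sentiment", detect_sentiment),
    ("summarize", summarize),
    ("recommend_action", recommend_action),
]:
    graph.add_node(name, fn)

graph.set_entry_point("classify_issue")
graph.add_edge("classify_issue", "assess_severity")
graph.add_edge("assess_severity", "detect_sentiment")
graph.add_edge("detect_sentiment", "summarize")
graph.add_edge("summarize", "recommend_action")
graph.add_edge("recommend_action", END)

structurer = graph.compile()
result = structurer.invoke({"raw_text": "Urgent: I was charged twice for my subscription!"})
```

Each node returns only the keys it computes, and LangGraph merges them into the shared state, which keeps the pipeline deterministic and easy to extend with new nodes.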
Using PostgreSQL + SQLAlchemy:
- Stores conversations, messages, structured tickets
- Ensures relational consistency
- Provides a backend for retrieval and analytics (see the model sketch below)
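A rough sketch of the relational layout, assuming table and column names; the authoritative definitions live in app/db/models.py.

```python
# Sketch of the relational layout; table and column names are assumptions.
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, Text, JSON, func
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Conversation(Base):
    __tablename__ = "conversations"
    id = Column(Integer, primary_key=True)
    author_id = Column(String, index=True)
    created_at = Column(DateTime, server_default=func.now())
    messages = relationship("Message", back_populates="conversation")
    tickets = relationship("Ticket", back_populates="conversation")


class Message(Base):
    __tablename__ = "messages"
    id = Column(Integer, primary_key=True)
    conversation_id = Column(Integer, ForeignKey("conversations.id"), nullable=False)
    text = Column(Text, nullable=False)
    conversation = relationship("Conversation", back_populates="messages")


class Ticket(Base):
    __tablename__ = "tickets"
    id = Column(Integer, primary_key=True)
    conversation_id = Column(Integer, ForeignKey("conversations.id"), nullable=False)
    issue_type = Column(String)
    severity = Column(String)
    sentiment = Column(String)
    short_summary = Column(Text)
    long_summary = Column(Text)
    recommended_action = Column(Text)
    embedding = Column(JSON)  # vector stored as a JSON array of floats
    conversation = relationship("Conversation", back_populates="tickets")
```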
- Script to ingest curated subsets of the Customer Support on Twitter dataset
- Loads real inbound user tweets
- Automatically transforms them into structured tickets
- Enables realistic testing and evaluation (see the loader sketch below)
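A loader sketch in the spirit of scripts/load_twitter_dataset.py. The CSV location, column names, ingestion route, and the use of httpx are all assumptions.

```python
# Loader sketch: reads the curated CSV and pushes each inbound tweet through
# the running API. File path, column names, and endpoint are assumptions.
import csv

import httpx

CSV_PATH = "data/inbound_support_tweets.csv"  # hypothetical location of the curated subset
API_URL = "http://localhost:8000/tickets/ingest"  # hypothetical ingestion route


def load_dataset(limit: int = 100) -> None:
    with open(CSV_PATH, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        for i, row in enumerate(reader):
            if i >= limit:
                break
            # Each inbound tweet becomes a raw support message in the pipeline.
            payload = {"author_id": row["author_id"], "text": row["text"]}
            httpx.post(API_URL, json=payload, timeout=30.0)


if __name__ == "__main__":
    load_dataset()
```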
- Uses SentenceTransformers (all-MiniLM-L6-v2) to embed ticket summaries
- Stores vector embeddings directly in PostgreSQL (JSON format)
- Retrieval powered by cosine similarity
The `/tickets/{id}/suggest-reply` endpoint returns semantically similar tickets (see the retrieval sketch below)
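A compact sketch of the embedding and cosine-similarity step, with storage and function names simplified relative to app/core/embedding_client.py and app/services/retrieval.py.

```python
# Sketch of embedding + cosine-similarity retrieval; storage layout and
# function names are simplified assumptions.
import json

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")


def embed_summary(summary: str) -> str:
    # Returned as a JSON array of floats, matching the JSON-in-PostgreSQL storage.
    return json.dumps(model.encode(summary).tolist())


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def most_similar(query_embedding_json: str, stored: dict[int, str], top_k: int = 5):
    # `stored` maps ticket_id -> JSON-encoded embedding, as persisted in PostgreSQL.
    query = np.array(json.loads(query_embedding_json))
    scored = [
        (ticket_id, cosine_similarity(query, np.array(json.loads(emb))))
        for ticket_id, emb in stored.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]
```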
- Retrieves similar tickets as evidence
- Synthesizes a suggested reply using contextual patterns
- Modular design, ready to be swapped for real LLM calls later (see the reply sketch below)
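A sketch of what a template-based synthesizer could look like; the field names and wording are illustrative, and the single function marks the seam where a real LLM call could later be swapped in.

```python
# Illustrative template-based reply synthesizer; ticket field names are assumptions.
def suggest_reply(ticket: dict, similar_tickets: list[dict]) -> str:
    # Reuse the most common recommended action among similar past tickets as evidence.
    actions = [t["recommended_action"] for t in similar_tickets if t.get("recommended_action")]
    precedent = max(set(actions), key=actions.count) if actions else "investigate the issue"
    return (
        f"Thanks for reaching out about your {ticket.get('issue_type', 'issue')}. "
        f"We're sorry for the trouble. Based on similar cases, our next step is to {precedent}. "
        "We'll follow up with you shortly."
    )
```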
Interactive Swagger UI is available for all routes at FastAPI's default /docs endpoint once the server is running.
Raw message
↓
FastAPI → PostgreSQL storage
↓
LangGraph Ticket Structurer
↓
Structured Ticket (issue type, severity, sentiment, summaries, action)
↓
Embedding generation → stored in PostgreSQL
↓
Semantic vector search → similar tickets
↓
Suggested reply generation
app/
├── api/
│   └── v1/
│       └── tickets.py
├── core/
│   ├── config.py
│   └── embedding_client.py
├── db/
│   ├── base.py
│   ├── models.py
│   └── session.py
├── graphs/
│   └── ticket_structurer_graph.py
├── schemas/
│   └── tickets.py
├── services/
│   ├── structuring.py
│   └── retrieval.py
└── main.py
scripts/
├── load_twitter_dataset.py
└── embed_all_tickets.py
| Layer | Tool |
|---|---|
| API | FastAPI |
| Workflow | LangGraph |
| Database | PostgreSQL + SQLAlchemy |
| Embeddings | SentenceTransformers |
| Retrieval | Cosine similarity |
| LLM Integration | OpenAI / Gemini / Vertex (planned) |
| Dataset | Customer Support on Twitter (Kaggle) |
This project uses a curated subset of the Customer Support on Twitter dataset:
Axelbrooke, S. (2017). Customer Support on Twitter. Kaggle.
DOI: 10.34740/KAGGLE/DSV/8841
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
The full dataset contains 2.8M+ tweets across a wide range of brands and support scenarios.
For local development, this project uses a filtered subset consisting only of inbound customer messages (inbound=True), prepared via a Kaggle notebook and exported as a lightweight CSV.
This dataset provides realistic, modern conversational support data for evaluating ticket structuring, memory, embedding-based retrieval, and reply suggestion workflows.
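The preparation step could look roughly like the notebook sketch below; the source file name, the columns kept, and the row cap are assumptions about the curated export.

```python
# Sketch of the Kaggle-notebook preparation step; file names, retained columns,
# and the row cap are assumptions about the curated export.
import pandas as pd

# twcs.csv is the full Customer Support on Twitter dump (~2.8M tweets).
df = pd.read_csv("twcs.csv")

# Keep only inbound customer messages (inbound=True); the column may be read
# as booleans or as "True"/"False" strings depending on the dump.
inbound = df[df["inbound"].astype(str) == "True"]

# Export a lightweight subset for local development.
inbound[["tweet_id", "author_id", "created_at", "text"]].head(5000).to_csv(
    "inbound_support_tweets.csv", index=False
)
```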
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=support_memory_weave
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
REDIS_HOST=localhost
REDIS_PORT=6379
docker-compose up -d
uvicorn app.main:app --reload
- Added dataset ingestion pipeline
- Added local embeddings (SentenceTransformers)
- Added semantic vector search via cosine similarity
- Enhanced suggested reply engine
- Retains all v0.1.0 functionality
- v0.1.0: Initial MVP with FastAPI + LangGraph structuring + PostgreSQL storage