| 🤖 AI Assistant | ✍️ Compile & Preview | 📚 Templates |
|---|---|---|
| Chat / Agent history, Tools multi-step edits | TexLive / Tectonic / Auto, PDF preview & download | ACL / CVPR / NeurIPS / ICML, one-click conversion |
| 🔧 Advanced Editing | 🗂️ Project Management | ⚙️ Configuration |
|---|---|---|
| AI autocomplete / Diff / diagnose | Multi-project + file tree + upload | OpenAI-compatible endpoint, local-first privacy |
| 🔍 Search | 📊 Charting | 🧠 Recognition |
|---|---|---|
| WebSearch / PaperSearch | Chart from tables | Formula/Chart recognition |
| 👥 Collaboration | 📝 Peer Review | |
|---|---|---|
| Multi-user real-time editing, cursor sync & online management | AI Review Report / Consistency / Missing Citations / Compile Summary | |
> [!WARNING]
> 🚧 **Template Transfer is under testing**
>
> The Template Transfer feature is currently in beta and may contain known or unknown bugs. If you encounter any issues, please report them via Issues.
> [!TIP]
> 🆕 **2025-02 · Real-time Collaboration**
>
> Multi-user simultaneous editing is now available, powered by CRDT with automatic conflict resolution and cursor sync. The current version requires a server with a public IP; invite remote collaborators via token-based links.
OpenPrism is a local-first LaTeX + AI workspace for academic writing, optimized for fast editing, controlled changes, and privacy.
- Chat mode: read-only Q&A
- Agent mode: generate diffs for confirmation
- Tools mode: multi-step tools + multi-file edits
- Tasks: polish, rewrite, restructure, translate, custom
- Autocomplete: Option/Alt + / or Cmd/Ctrl + Space, Tab to accept
- Engines: TexLive / Tectonic / Auto fallback
- Preview toolbar: zoom, fit width, 100%, download PDF
- Compile log: error parsing + one-click diagnose + jump to error
- Views: PDF / Figures / Diff
- Built-ins: ACL / CVPR / NeurIPS / ICML
- Conversion: one-click template switch with content preserved
- Projects panel: manage multiple projects
- File tree: create/rename/delete/upload/drag
- BibTeX: quick-create `references.bib`
- LLM Endpoint: OpenAI-compatible, supports custom base_url
- Local storage: settings saved to browser localStorage
- TexLive config: customizable TexLive resources
- Language switch: toggle 中文/English in the top bar
- WebSearch: online search with summaries
- PaperSearch: academic paper search with citation info
- Table-to-chart: generate charts directly from tables
- Smart recognition: formulas and charts auto-detected
- AI Quality Check: automated paper quality assessment
- Full Review Report: generate detailed reviewer-style review comments
- Consistency Check: terminology and symbol consistency detection
- Missing Citations: find statements that need citations
- Compile Log Summary: summarize compile errors and fix suggestions
- Multi-user editing: multiple users edit the same document simultaneously with real-time sync
- Cursor & selection sync: each user's cursor displayed in a distinct color, visible in real time
- Online user list: collaboration panel shows currently connected users and their status
- Invite to collaborate: invite others via link or token to join the editing session
- Node.js >= 18.0.0
- npm >= 9.0.0
- OS: Windows / macOS / Linux
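A quick way to confirm these prerequisites is a small shell check. This is a sketch assuming a POSIX shell with `node` and `npm` on PATH:

```shell
# Prerequisite check (sketch; assumes a POSIX shell)
if command -v node >/dev/null 2>&1; then
  # Strip the leading "v" and everything after the major version
  node_major=$(node --version | sed 's/^v\([0-9]*\).*/\1/')
  [ "$node_major" -ge 18 ] && echo "Node.js OK (major $node_major)" || echo "Node.js >= 18 required"
else
  echo "node not found"
fi
if command -v npm >/dev/null 2>&1; then
  npm_major=$(npm --version | sed 's/^\([0-9]*\).*/\1/')
  [ "$npm_major" -ge 9 ] && echo "npm OK (major $npm_major)" || echo "npm >= 9 required"
else
  echo "npm not found"
fi
```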
OpenPrism requires a LaTeX engine to generate PDFs. Choose one of the following options based on your OS:
**Option 1: TexLive (Recommended)**

- Linux (Ubuntu/Debian):

  ```shell
  sudo apt-get update
  sudo apt-get install texlive-full
  ```

- Linux (CentOS/RHEL):

  ```shell
  sudo yum install texlive texlive-*
  ```

- macOS:

  ```shell
  brew install --cask mactex
  ```

- Windows: Download the TexLive installer
**Option 2: Tectonic (Lightweight)**

- Linux/macOS:

  ```shell
  curl --proto '=https' --tlsv1.2 -fsSL https://drop-sh.fullyjustified.net | sh
  ```

- Windows: Download the Tectonic installer
Note: A full TexLive installation is ~5-7 GB; Tectonic is lighter but has fewer features. TexLive is recommended for Linux servers.
```shell
# 1. Clone repository
git clone https://github.com/OpenDCAI/OpenPrism.git
cd OpenPrism

# 2. Install dependencies
npm install

# 3. Start dev server (frontend + backend)
npm run dev
```

Access:
- Frontend: http://localhost:5173
- Backend: http://localhost:8787
```shell
# 1. Build frontend and backend
npm run build

# 2. Start production server
npm start
```

```shell
# 1. Install Node.js (Ubuntu example)
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs

# 2. Install TexLive
sudo apt-get update
sudo apt-get install -y texlive-full

# 3. Verify installation
node --version      # Should show >= 18.0.0
pdflatex --version  # Should show TexLive version

# 4. Clone and deploy project
git clone https://github.com/OpenDCAI/OpenPrism.git
cd OpenPrism
npm install
npm run build

# 5. Configure environment variables (optional)
cat > .env << EOF
OPENPRISM_LLM_ENDPOINT=https://api.openai.com/v1/chat/completions
OPENPRISM_LLM_API_KEY=your-api-key
OPENPRISM_LLM_MODEL=gpt-4o-mini
OPENPRISM_DATA_DIR=/var/openprism/data
PORT=8787
EOF

# 6. Start service
npm start

# 7. Use PM2 for process management (recommended)
sudo npm install -g pm2
pm2 start npm --name "openprism" -- start
pm2 save
pm2 startup
```

Create a `.env` file in the project root (optional):
```shell
# LLM configuration
OPENPRISM_LLM_ENDPOINT=https://api.openai.com/v1/chat/completions
OPENPRISM_LLM_API_KEY=your-api-key
OPENPRISM_LLM_MODEL=gpt-4o-mini

# Data storage path
OPENPRISM_DATA_DIR=./data

# Backend service port
PORT=8787
```

OpenPrism supports any OpenAI-compatible endpoint, including a custom `base_url`:
**Method 1: Environment Variables**

```shell
# .env file
OPENPRISM_LLM_ENDPOINT=https://api.openai.com/v1/chat/completions
OPENPRISM_LLM_API_KEY=sk-your-api-key
OPENPRISM_LLM_MODEL=gpt-4o-mini
```

**Method 2: Frontend Settings Panel**
- Click the "Settings" button in the frontend interface
- Fill in API Endpoint, API Key, and Model
- Configuration is automatically saved to browser localStorage
**Supported Third-party Services:**

- OpenAI: `https://api.openai.com/v1`
- Azure OpenAI: `https://your-resource.openai.azure.com/openai/deployments/your-deployment`
- Other compatible services: `https://api.apiyi.com/v1`
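Before wiring an endpoint into OpenPrism, you can smoke-test the key/endpoint pair directly. This sketch uses the standard OpenAI chat-completions request shape; the key, model, and endpoint are placeholders to substitute with your own:

```shell
# Smoke-test an OpenAI-compatible endpoint (sketch; key/model/endpoint are placeholders)
ENDPOINT="https://api.openai.com/v1/chat/completions"
payload='{"model":"gpt-4o-mini","messages":[{"role":"user","content":"ping"}],"max_tokens":5}'
curl -sS --max-time 15 "$ENDPOINT" \
  -H "Authorization: Bearer $OPENPRISM_LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$payload" \
  || echo "request failed (no network or bad endpoint)"
```

A working setup returns a JSON completion; an invalid key returns a JSON error object, which still confirms the endpoint is reachable.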
**Supported Compilation Engines:**

- `pdflatex` - standard LaTeX engine
- `xelatex` - supports Unicode and Chinese
- `lualatex` - supports Lua scripting
- `latexmk` - automated build tool
- `tectonic` - modern lightweight engine
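Assuming a POSIX shell, you can check which of these engines are actually on PATH before picking one in Settings:

```shell
# List which LaTeX engines are available on this machine (sketch)
for engine in pdflatex xelatex lualatex latexmk tectonic; do
  if command -v "$engine" >/dev/null 2>&1; then
    echo "$engine: found at $(command -v "$engine")"
  else
    echo "$engine: not installed"
  fi
done
```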
**Configuration Method:**

- Select the compilation engine in the frontend "Settings" panel
- Set to "Auto" for automatic fallback to available engines
- Customize the TexLive resource path
Data is stored in the `./data` directory by default; this can be changed via an environment variable:

```shell
# Custom data directory
OPENPRISM_DATA_DIR=/var/openprism/data
```

Directory structure:

```
data/
├── projects/        # User projects
│   ├── project-1/
│   │   ├── main.tex
│   │   └── references.bib
│   └── project-2/
└── templates/       # Template cache
```
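Since all projects live under this one directory, backing it up is a plain archive job. A minimal sketch, assuming the default `./data` path unless `OPENPRISM_DATA_DIR` is set:

```shell
# Back up the OpenPrism data directory (sketch; assumes default ./data)
DATA_DIR="${OPENPRISM_DATA_DIR:-./data}"
mkdir -p "$DATA_DIR"   # no-op if it already exists
BACKUP="openprism-data-$(date +%Y%m%d).tar.gz"
# Archive the directory relative to its parent so paths inside stay short
tar -czf "$BACKUP" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"
echo "Backup written to $BACKUP"
```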
OpenPrism includes a built-in real-time collaboration system based on CRDT (Yjs) + WebSocket, allowing multiple users to edit the same document simultaneously without any third-party service.
Add the following to your `.env` file:

```shell
# Token signing secret (must change for production)
OPENPRISM_COLLAB_TOKEN_SECRET=your-secure-random-string

# Require token for collaboration (default: true, set false for local dev)
OPENPRISM_COLLAB_REQUIRE_TOKEN=true

# Token TTL in seconds (default: 86400 = 24 hours)
OPENPRISM_COLLAB_TOKEN_TTL=86400
```

- Deploy: Deploy OpenPrism to a server with a public IP, configure a domain and HTTPS
- Generate invite: Click "Generate Invite Link" in the collaboration panel on the editor page
- Share link: Send the generated link to your collaborator
- Join: Collaborator opens the link, token is verified automatically, and they enter the editor
- Edit together: Multiple cursors visible in real time, edits sync automatically, conflicts resolved by CRDT
**Nginx Reverse Proxy (Recommended for Public Servers)**

Collaboration requires WebSocket, so Nginx must be configured with upgrade headers:

```nginx
server {
    listen 443 ssl;
    server_name your-domain.com;

    location / {
        proxy_pass http://127.0.0.1:8787;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Tip: Local access (127.0.0.1) bypasses token verification by default, suitable for local development.
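To confirm the upgrade headers survive the proxy, you can send a raw WebSocket handshake with curl. The domain is a placeholder and the exact WebSocket path depends on your deployment, so treat this as a sketch:

```shell
# Probe the WebSocket handshake through the proxy (sketch; domain is a placeholder)
WS_KEY=$(openssl rand -base64 16)   # a random 16-byte key, as the handshake requires
curl -i -sS --max-time 5 "https://your-domain.com/" \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $WS_KEY" \
  || echo "handshake failed (check DNS/TLS/proxy config)"
```

A correctly proxied WebSocket endpoint answers `HTTP/1.1 101 Switching Protocols`; a plain `200` or `400` usually means the upgrade headers were dropped.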
**No Public Server? Use a Tunnel (ngrok)**

You can collaborate remotely without a public server. OpenPrism has built-in tunnel support: one command exposes your local service to the internet.

- Sign up for a free ngrok account and get your authtoken
- Run the following commands:

  ```shell
  export NGROK_AUTHTOKEN=your_token_here
  npm run tunnel:ngrok
  ```

- On startup, the terminal prints a public URL. Share it with your collaborator:

  ```
  OpenPrism started at http://localhost:8787
  Tunnel active (ngrok):
    Public URL: https://xxxx.ngrok-free.app
  Share this URL to collaborate remotely!
  ```

- Your collaborator opens the URL in their browser and starts editing in real time
| Option | Command | Notes |
|---|---|---|
| localtunnel | `npm run tunnel` | Zero-config, but may be unstable |
| Cloudflare Tunnel | `npm run tunnel:cf` | Requires `cloudflared` installed |

Note: The tunnel is off by default; a regular `npm start` does not create one. You can also enable it via an env var: `OPENPRISM_TUNNEL=ngrok npm start`
- Create Project: Create new project in Projects panel and select template
- Write Paper: Edit LaTeX in Files tree
- AI Edits: Switch to Agent / Tools, generate diff and confirm
- Compile & Preview: Click "Compile PDF", preview on right side
- Export PDF: Click "Download PDF" in preview toolbar
```
OpenPrism/
├── apps/
│   ├── frontend/              # React + Vite frontend
│   │   ├── src/
│   │   │   ├── app/App.tsx    # Main application logic
│   │   │   ├── api/client.ts  # API calls
│   │   │   └── latex/         # TexLive integration
│   └── backend/               # Fastify backend
│       └── src/index.js       # API / compile / LLM proxy
├── templates/                 # LaTeX templates (ACL/CVPR/NeurIPS/ICML)
├── data/                      # Project storage directory (default)
└── README.md
```
Issues and PRs are welcome:
- Fork the repository
- Create a new branch
- Commit your changes
- Submit a PR
Development commands:
```shell
npm run dev
npm run dev:frontend
npm run dev:backend
npm run build
```

MIT License. See LICENSE.
- Tectonic
- CodeMirror
- PDF.js
- LangChain
- React / Fastify