Reading is an intelligent reading system that collects, curates, and organizes tech articles into weekly reading collections. Features AI-powered content processing, rich markdown editing, and a clean web interface for focused reading experiences.
🌐 Demo: reading.qijun.io · 📋 RSS Sources
- RSS integration & web scraping from multiple sources
- AI-powered content classification & tagging
- Article notes and skip functionality for better organization
- Create curated article collections with custom titles and descriptions
- Rich markdown editor with AI assistant for content creation
- Draft/publish workflow with cover images
- Public reading interface for published collections
- Automatic article summarization
- Content filtering by interest and relevance
- Smart categorization and quality assessment
- AI-powered image generation for collection covers
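The RSS integration listed above boils down to fetching feeds and normalizing their entries before the AI steps run. A minimal sketch using feedparser (the feed URL and field names here are placeholders, not the project's configured sources):

```python
import feedparser

# Placeholder feed URL; the real source list is configured in the project.
FEED_URL = "https://example.com/feed.xml"

def fetch_entries(url):
    """Fetch an RSS/Atom feed and normalize its entries into plain dicts."""
    feed = feedparser.parse(url)
    articles = []
    for entry in feed.entries:
        articles.append({
            "title": entry.get("title", ""),
            "link": entry.get("link", ""),
            "published": entry.get("published", ""),
            "summary": entry.get("summary", ""),
        })
    return articles

if __name__ == "__main__":
    for article in fetch_entries(FEED_URL):
        print(article["title"], "->", article["link"])
```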
- Frontend: Next.js 15, TypeScript, Tailwind CSS, Shadcn/ui
- Backend: Python 3.8+, SQLite, RSS parser, LLM APIs
- Database: SQLite with migration system using yoyo-migrations
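Schema changes live in packages/tasks/migrations/ and are applied with yoyo-migrations. A migration is a plain Python file; a minimal sketch (the file name, table, and columns below are illustrative, not the actual schema):

```python
# e.g. packages/tasks/migrations/0001_create_articles.py (illustrative name and schema)
from yoyo import step

steps = [
    step(
        # apply
        """
        CREATE TABLE IF NOT EXISTS articles (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            title TEXT NOT NULL,
            url TEXT UNIQUE,
            published_at TEXT
        )
        """,
        # rollback
        "DROP TABLE articles",
    )
]
```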
- Node.js 18+ and pnpm
- Python 3.8+
- SQLite
git clone https://github.com/yourusername/reading.git
cd reading
pnpm install
cd packages/tasks && pip install -r requirements.txt
- Copy environment file:
cp .env.example .env
- Configure API keys and database settings in .env
- Initialize database:
yoyo apply -d sqlite:///data/reading.db packages/tasks/migrations/
Start both services:
# Terminal 1 - Web frontend
cd packages/web && pnpm dev
# Terminal 2 - Article scraping (optional)
cd packages/tasks && python scraper.py
Visit http://localhost:3000 to access the application.
Admin interface: http://localhost:3000/admin
The application uses password-based authentication for admin features:
- Generate a password hash:
cd packages/web && node generate-hash.js your-admin-password
- Add the generated hash to .env.local:
ADMIN_PASSWORD_HASH_ENCODED=<generated-hash>
- Access the admin interface via:
http://localhost:3000/auth?token=your-access-token
- Public: Browse published collections and articles (read-only)
- Authenticated: Full CRUD access to collections, articles, and admin features
./scripts/deploy.sh
- Web: Next.js frontend
- Scraper: scheduled article collection
- SQLite with volume persistence
# Apply all pending migrations
yoyo apply -d sqlite:///data/reading.db packages/tasks/migrations/
# List migration status
yoyo list -d sqlite:///data/reading.db packages/tasks/migrations/
./scripts/data-manager.sh backup # Backup database
./scripts/data-manager.sh restore # Restore from backup
./scripts/data-manager.sh export # Export SQL dump
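./scripts/data-manager.sh wraps these operations; for an ad-hoc backup outside the script, Python's standard sqlite3 online-backup API does the equivalent (the paths below are assumptions matching the default layout):

```python
import sqlite3

# Assumed paths; adjust to match your deployment.
SOURCE_DB = "data/reading.db"
BACKUP_DB = "data/reading.backup.db"

def backup_database(source, destination):
    """Copy the live SQLite database into a backup file via the online backup API."""
    src = sqlite3.connect(source)
    dst = sqlite3.connect(destination)
    try:
        src.backup(dst)
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    backup_database(SOURCE_DB, BACKUP_DB)
```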
# Run migrations first for new installations
yoyo apply -d sqlite:///data/reading.db packages/tasks/migrations/
# Preview articles to be removed
./scripts/clean-database.sh --dry-run
# Clean specific source
./scripts/clean-database.sh --source "Hacker News"
# Test on limited articles
./scripts/clean-database.sh --limit 50 --dry-run
# Check processing status
./scripts/clean-database.sh --status
# Execute cleanup (after preview)
./scripts/clean-database.sh --confirm
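The flags above follow a preview-then-confirm pattern. A rough sketch of that pattern in Python, assuming a hypothetical articles table with source and relevance columns (not the script's actual implementation):

```python
import sqlite3
from typing import Optional

DB_PATH = "data/reading.db"  # assumed default path

def clean_articles(source: Optional[str] = None, limit: Optional[int] = None, confirm: bool = False):
    """Preview (dry-run) or delete filtered-out articles, mirroring the script's flags."""
    conn = sqlite3.connect(DB_PATH)
    query = "SELECT id, title FROM articles WHERE relevant = 0"  # hypothetical relevance column
    params = []
    if source is not None:
        query += " AND source = ?"
        params.append(source)
    if limit is not None:
        query += " LIMIT ?"
        params.append(limit)

    rows = conn.execute(query, params).fetchall()
    for article_id, title in rows:
        print(f"[{'DELETE' if confirm else 'DRY-RUN'}] {article_id}: {title}")

    if confirm and rows:
        conn.executemany("DELETE FROM articles WHERE id = ?", [(r[0],) for r in rows])
        conn.commit()
    conn.close()
```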
packages/
├── web/ # Next.js frontend
│ ├── src/app/ # App router pages
│ ├── src/components/ # Reusable UI components
│ ├── src/services/ # API service layers
│ └── src/lib/ # Database and utilities
└── tasks/ # Python backend
├── migrations/ # Database schema migrations
├── scraper.py # RSS feed scraper
├── article_filter.py # AI-powered filtering
└── llm_processing.py # LLM integration
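article_filter.py and llm_processing.py carry the AI steps. A minimal sketch of what a classification call might look like, assuming an OpenAI-compatible chat-completions endpoint configured through LLM_API_ENDPOINT and LLM_API_KEY (the prompt, model name, and response handling are illustrative, not the project's actual logic):

```python
import os
import requests

# Both values come from .env; the endpoint is assumed to be OpenAI-compatible.
API_ENDPOINT = os.environ["LLM_API_ENDPOINT"]
API_KEY = os.environ["LLM_API_KEY"]

def classify_article(title, summary):
    """Ask the configured LLM whether an article is worth keeping; returns the raw verdict text."""
    response = requests.post(
        API_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "placeholder-model",  # illustrative; use the model your endpoint serves
            "messages": [
                {"role": "system", "content": "Classify tech articles as 'relevant' or 'skip'."},
                {"role": "user", "content": f"Title: {title}\nSummary: {summary}"},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```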
Web Frontend (Next.js)
cd packages/web
pnpm dev # Start development server (localhost:3000)
pnpm build # Build for production
pnpm start # Start production server
pnpm lint # Run ESLint
Python Backend
cd packages/tasks
pip install -r requirements.txt # Install dependencies
python scraper.py # Run article scraper
make lint # Run code quality checks
make format # Format code with black and isort
- LLM_API_ENDPOINT and LLM_API_KEY for AI processing
- DASHSCOPE_API_KEY for AI image generation (optional)
- ADMIN_PASSWORD_HASH_ENCODED for authentication
- Database path: ../../data/reading.db (relative to project root)
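Because the database path is resolved relative to the repository layout, backend scripts can build an absolute path instead of depending on the working directory; a small sketch, assuming the packages/tasks directory depth:

```python
import sqlite3
from pathlib import Path

# A script in packages/tasks/ sits two directories below the project root (assumed layout),
# so ../../data/reading.db resolves to <project root>/data/reading.db.
PROJECT_ROOT = Path(__file__).resolve().parents[2]
DB_PATH = PROJECT_ROOT / "data" / "reading.db"

def get_connection():
    """Open the SQLite database regardless of the current working directory."""
    return sqlite3.connect(str(DB_PATH))
```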
📄 Licensed under MIT.
✨ Issues & Feature Requests