Capture screen context and understand your activities with AI.
Features · Quick Start · Commands · Configuration · Contributing
Glimpse is a lightweight CLI tool that periodically captures your screen and uses Vision Language Models (VLMs) to understand what you're doing. It stores searchable context that helps you recall past activities.
- Features
- Quick Start
- Commands
- Configuration
- Privacy
- Development
- Roadmap
- Contributing
- License
- Acknowledgments

## Features

| Feature | Description |
|---|---|
| Screen Capture | Fast, cross-platform screenshot capture |
| VLM Analysis | Automatic understanding of screen content using GPT-4o or local Ollama |
| Full-Text Search | FTS5-powered search through your activity history |
| Daily Summaries | See what you did today at a glance |
| Privacy First | All data stored locally, sensitive content detection |

## Quick Start

```bash
# Clone the repository
git clone https://github.com/MonadWorks/glimpse.git
cd glimpse
# Install with pip
pip install -e .
# Or install with optional dependencies
pip install -e ".[full]"Set your OpenAI API key:

```bash
glimpse config set vlm.api_key sk-your-api-key-here
```

Or use an environment variable:

```bash
export OPENAI_API_KEY=sk-your-api-key-here
```

```bash
# Capture and analyze your screen
glimpse capture
# Search your history
glimpse search "python project"
# See today's activity summary
glimpse today
# View all configuration
glimpse config show
```

## Commands

### `glimpse capture`

Capture a screenshot and analyze it with VLM.

```bash
# Basic capture
glimpse capture
# Capture without VLM analysis
glimpse capture --no-analyze
# Save screenshot to disk
glimpse capture --save
# Capture specific monitor
glimpse capture -m 2
```

### `glimpse search`

Search your captured context history.

```bash
# Simple search
glimpse search "meeting notes"
# FTS5 syntax supported
glimpse search "python AND machine learning"
# Filter by date
glimpse search "code" --from 2025-01-01 --to 2025-01-31
# Limit results
glimpse search "project" -n 5Show a summary of today's activities.

```bash
# Brief summary
glimpse today
# Detailed timeline
glimpse today --detail
```

### `glimpse config`

Manage configuration.

```bash
# Show all config
glimpse config show
# Set a value
glimpse config set vlm.api_key sk-xxx
glimpse config set vlm.model gpt-4o
glimpse config set capture.interval 60
# Get a value
glimpse config get vlm.model
```

### `glimpse stats`

Show database statistics.

```bash
glimpse stats
```

### `glimpse monitors`

List available monitors for capture.

```bash
glimpse monitors
```

## Configuration

Configuration is stored in `~/.glimpse/config.yaml`:

```yaml
# Capture settings
capture:
  interval: 30           # Seconds between captures (for daemon mode)
  monitors: [1]          # Which monitors to capture
  skip_similarity: 0.95  # Skip similar screenshots
# VLM settings
vlm:
  provider: openai       # openai / ollama
  model: gpt-4o-mini     # Model to use
  api_key: null          # Or use OPENAI_API_KEY env var
# Ollama settings (when provider=ollama)
ollama:
  host: http://localhost:11434
  model: llava:13b
# Storage settings
storage:
  path: ~/.glimpse           # Data directory
  save_screenshots: false    # Save original screenshots
# Privacy settings
privacy:
  retention_days: 30   # Auto-delete after N days
  excluded_apps:       # Skip these apps
    - "1Password"
    - "Keychain Access"
```

## Privacy

Glimpse is designed with privacy in mind:

- Local Storage: All data stays on your machine in `~/.glimpse/`
- No Cloud Sync: Nothing is uploaded unless you explicitly use cloud VLM APIs
- Sensitive Detection: The VLM is prompted to skip password fields and other sensitive content
- App Exclusion: Configure apps that should never be captured
- Auto Cleanup: Old data is automatically deleted based on retention settings (see the example below)
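
For example, to keep less history you can shorten the retention window. A minimal sketch, assuming `privacy.retention_days` accepts the same dotted-key syntax as the other `glimpse config set` examples:

```bash
# Keep only the last 7 days of captured context; older entries are auto-deleted
glimpse config set privacy.retention_days 7
```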

## Development

```bash
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run linter
ruff check .
# Type checking
mypy glimpse
```

Glimpse uses SQLite with FTS5 for efficient full-text search. Data is stored in:

```
~/.glimpse/
├── config.yaml    # Configuration
├── glimpse.db     # SQLite database
└── screenshots/   # Optional screenshot storage
```
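
Since the store is a plain SQLite file, you can inspect it directly with the `sqlite3` CLI. A minimal sketch for debugging or ad-hoc exports; the `captures_fts` table name below is a placeholder rather than Glimpse's actual schema, so list the tables first:

```bash
# Discover the actual table names before querying
sqlite3 ~/.glimpse/glimpse.db ".tables"

# Example FTS5 query against a hypothetical captures_fts virtual table
sqlite3 ~/.glimpse/glimpse.db "SELECT rowid FROM captures_fts WHERE captures_fts MATCH 'python AND project' LIMIT 5;"
```

For day-to-day use, `glimpse search` remains the supported interface.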

## Roadmap

- Daemon mode for continuous capture
- Vector/semantic search
- Smart change detection (skip similar screens)
- Todo extraction from activities
- Export to Markdown/JSON
- Web UI for browsing history

## Contributing

Contributions are welcome! Here's how you can help:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Please make sure to:
- Update tests as appropriate
- Follow the existing code style
- Update documentation for any new features

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

Inspired by Rewind and MineContext, but designed to be lightweight, open-source, and privacy-focused.