Welcome to the repository behind my live trading experiment where ChatGPT manages a real-money micro-cap portfolio, now enhanced with a comprehensive Streamlit Portfolio Management Application.
Starting with just $100, this project answers a simple but powerful question:
Can large language models like ChatGPT actually generate alpha using real-time market data?
- ChatGPT receives real-time trading data on portfolio holdings
- Strict stop-loss rules and risk management apply
- Weekly deep research sessions for portfolio reevaluation
- Performance data tracked and published regularly
Check out the latest results in docs/experiment_details and follow weekly updates on Substack.
Currently outperforming the Russell 2K benchmark
This repository now includes a full-featured Streamlit web application for portfolio management and analysis, built with enterprise-grade architecture and comprehensive testing.
- 📱 Real-time Portfolio Dashboard - Live tracking (Finnhub in production, synthetic in dev)
- 📈 Performance Analytics - Historical charts, KPIs, performance metrics
- 💰 Trading Interface - Buy/sell stocks with validation
- 👁️ Watchlist Management - Track potential investments
- 📊 Data Export - Download snapshots & history
- 🗄️ SQLite Database - Persistent local data storage
- ⚡ High-Performance Caching - TTL-based market data caching with 80%+ API call reduction
- 🛡️ Comprehensive Error Handling - Standardized error recovery and logging
- 🎛️ Configurable Settings - Centralized configuration for easy customization
- 🧪 95%+ Test Coverage - Comprehensive test suite with performance benchmarks
- Modular Design - Clean separation of concerns with dedicated modules
- Error Resilience - Graceful handling of network failures and data issues
- Performance Optimized - Intelligent caching reduces external API dependencies
- Configuration Driven - Easy customization without code changes
- Test Coverage - Extensive testing including integration, performance, and edge cases
```bash
# Clone the repository
git clone https://github.com/bradnunnally/ChatGPT-Micro-Cap-Experiment.git
cd ChatGPT-Micro-Cap-Experiment

# Install dependencies
pip install -r requirements.txt

# Launch the application (synthetic data, no network)
cp .env.example .env              # ensure APP_ENV=dev_stage
streamlit run app.py              # or: APP_ENV=dev_stage streamlit run app.py
```

By default in dev_stage the app synthesizes roughly 90 calendar days (business-day sampled) of deterministic OHLCV data for any ticker you reference (seeded for reproducibility). Two extra illustrative tickers (e.g. NVDA, TSLA) are supported out of the box just like AAPL/MSFT: simply add them to your watchlist or trade them, and synthetic history is generated on demand.

To run against live data:

```bash
python app.py --env production
```

Strategy selection uses APP_ENV: dev_stage selects deterministic synthetic data (90-day history window); production selects Finnhub with per-endpoint JSON caching under data/cache.
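The dev_stage behavior described above can be approximated with a small sketch. This is illustrative only: the function name `synthetic_ohlcv` and the price model are assumptions, not the app's actual generator, but it shows the key properties (seeded per ticker, business-day sampled, reproducible across runs).

```python
import hashlib
from datetime import date, timedelta

import numpy as np
import pandas as pd


def synthetic_ohlcv(ticker: str, days: int = 90) -> pd.DataFrame:
    """Deterministic OHLCV history: the same ticker yields the same data every run."""
    # Seed the RNG from the ticker so every symbol gets stable, reproducible history.
    seed = int(hashlib.sha256(ticker.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    # Business-day index covering roughly `days` calendar days.
    idx = pd.bdate_range(start=date.today() - timedelta(days=days), end=date.today())
    base = 20 + (seed % 200)  # per-ticker base price
    close = base * np.exp(np.cumsum(rng.normal(0, 0.02, len(idx))))
    high = close * (1 + rng.uniform(0, 0.01, len(idx)))
    low = close * (1 - rng.uniform(0, 0.01, len(idx)))
    open_ = low + (high - low) * rng.uniform(0, 1, len(idx))
    volume = rng.integers(10_000, 500_000, len(idx))
    return pd.DataFrame(
        {"open": open_, "high": high, "low": low, "close": close, "volume": volume},
        index=idx,
    )
```

Because the seed is derived from the ticker, `synthetic_ohlcv("NVDA")` returns identical data on every call, which is what makes offline test runs stable.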
Legacy Yahoo Finance code has been removed. A unified provider layer now supports:
| Mode | Source | Usage |
|---|---|---|
| Production | Finnhub | Live quotes, profiles, news, earnings (API key required) |
| Development (dev_stage) | Synthetic | Deterministic OHLCV + placeholder fundamentals (offline) |
Automatic capability detection hides unsupported columns (e.g. Spread, ADV20) when the API plan lacks bid/ask or candle access. Synthetic mode guarantees offline operation and stable test runs.
Caching (production):
- Quotes: 30s TTL
- Candles: 1h TTL
- Profile / News / Earnings / Bid-Ask: 1d TTL
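The TTL-based caching works along these lines (a minimal in-memory sketch; the real implementation persists per-endpoint JSON under data/cache, and the `TTLCache` name and `TTLS` mapping here are illustrative):

```python
import time


class TTLCache:
    """Tiny TTL cache: serve a cached value while fresh, refetch once it expires."""

    def __init__(self) -> None:
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str, ttl_seconds: float, fetch):
        """Return the cached value for `key` if younger than `ttl_seconds`,
        otherwise call `fetch()` and cache the result."""
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < ttl_seconds:
            return hit[1]  # cache hit: no external API call
        value = fetch()
        self._store[key] = (now, value)
        return value


# Per-endpoint TTLs mirroring the list above, in seconds.
TTLS = {"quote": 30, "candles": 3600, "profile": 86400, "news": 86400, "earnings": 86400}
```

With a 30-second quote TTL, repeated dashboard refreshes within that window are served from cache, which is where the bulk of the API-call reduction comes from.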
The app will open at http://localhost:8501 with a clean interface ready for portfolio management.
- Frontend: Streamlit web interface with responsive design
- Backend: Python services for trading, market data, and portfolio management
- Database: SQLite for reliable local data persistence
- Market Data: Finnhub (production) or deterministic synthetic generator (dev)
- Testing: Comprehensive test suite with ~89% coverage (target ≥80%)
- Python 3.13+ - Core application runtime
- Streamlit - Modern web application framework
- Pandas + NumPy - Data manipulation and analysis
- finnhub-python - Market data SDK
- SQLite - Local database for data persistence
- Plotly - Interactive data visualizations
- Pytest - Comprehensive testing framework
```
ChatGPT-Micro-Cap-Experiment/
├── app.py # Main Streamlit application entry point
├── config/ # Configuration package (settings & providers)
├── portfolio.py # Portfolio management logic
├── requirements.txt # Python dependencies
├── pytest.ini # Pytest configuration
├── .streamlit/config.toml # Streamlit configuration
├── components/ # Reusable UI components
│ └── nav.py # Navigation component
├── data/ # Data management layer
│ ├── db.py # Database connection and operations
│ ├── portfolio.py # Portfolio data models
│ ├── watchlist.py # Watchlist data models
│ └── trading.db # SQLite database file
├── pages/ # Streamlit pages
│ ├── user_guide_page.py # User guide and help page
│ ├── performance_page.py # Portfolio performance analytics
│ └── watchlist.py # Stock watchlist management
├── services/ # Business logic layer
│ ├── logging.py # Application logging
│ ├── market.py # Market data services
│ ├── portfolio_service.py # Portfolio business logic
│ ├── session.py # Session management
│ ├── trading.py # Trading operations
│ └── watchlist_service.py # Watchlist business logic
├── ui/ # UI components and layouts
│ ├── cash.py # Cash management interface
│ ├── dashboard.py # Main dashboard interface
│ ├── forms.py # Trading forms
│ ├── summary.py # Portfolio summary views
│ └── user_guide.py # User guide content
├── tests/ # Comprehensive test suite (95%+ coverage)
│ ├── conftest.py # Pytest configuration and fixtures
│ ├── test_comprehensive_summary.py # Core summary testing (650+ lines)
│ ├── test_performance_benchmarks.py # Performance testing (300+ lines)
│ ├── test_configuration_centralization.py # Config testing
│ ├── test_error_handling.py # Error handling testing
│ ├── test_coverage_completion.py # Targeted coverage completion
│ ├── test_*.py # Additional test files
│ └── mock_streamlit.py # Streamlit mocking utilities
├── scripts/ # Development and utility scripts
│ └── run_tests_with_coverage.py # Test runner with coverage
├── archive/ # Archived legacy scripts
│ ├── generate_graph.py # Legacy data visualization
│ └── migrate_csv_to_sqlite.py # Legacy data migration
└── docs/ # Documentation and analysis
    ├── experiment_details/ # Detailed experiment documentation
    └── results-6-30-7-25.png # Performance results
```
```bash
make install   # create .venv and install deps + dev tools
make lint      # ruff + black check + mypy (scoped)
make test      # run pytest
make run       # streamlit run app.py
```

Notes:
- Python 3.13 is expected; a local .venv is used by Makefile targets.
- Ruff is configured to sort imports and ignore style in tests; run `ruff --fix` to auto-apply safe fixes.
- Mypy is run on `services/core/*` for a clean, incremental type baseline; expand the scope later as desired.
CI: A GitHub Actions workflow runs lint, type-checks, and tests on PRs to dev_stage and main.
Core validation and models: shared validators live in services/core/validation.py and are consumed by immutable dataclasses in services/core/models.py. Trading helpers delegate to these validators while keeping legacy boolean return semantics.
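The validator/model split described above can be sketched as follows. The names (`validate_ticker`, `Position`, `is_valid_trade`) and rules are illustrative assumptions, not the actual contents of services/core/validation.py or services/core/models.py, but the shape is the same: shared validators, immutable dataclasses that delegate to them, and a boolean wrapper preserving legacy semantics.

```python
from dataclasses import dataclass


def validate_ticker(ticker: str) -> str:
    """Shared validator: a 1-5 letter symbol, normalized to uppercase."""
    if not ticker or not ticker.isalpha() or len(ticker) > 5:
        raise ValueError(f"invalid ticker: {ticker!r}")
    return ticker.upper()


@dataclass(frozen=True)
class Position:
    """Immutable model that delegates input checks to the shared validator."""

    ticker: str
    shares: int
    price: float

    def __post_init__(self) -> None:
        # object.__setattr__ is the standard way to normalize a frozen dataclass field.
        object.__setattr__(self, "ticker", validate_ticker(self.ticker))
        if self.shares <= 0 or self.price <= 0:
            raise ValueError("shares and price must be positive")


def is_valid_trade(ticker: str, shares: int, price: float) -> bool:
    """Legacy-style boolean wrapper around the validators."""
    try:
        Position(ticker, shares, price)
        return True
    except ValueError:
        return False
```

Keeping the boolean wrapper lets older call sites continue to branch on True/False while new code works with validated, immutable models.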
```bash
# Run full test suite
pytest

# Run with coverage report
pytest --cov=. --cov-report=html

# Run test suite with coverage helper script
python scripts/run_tests_with_coverage.py

# Run specific test file
pytest tests/test_portfolio_manager.py
```

- 95%+ Test Coverage - Comprehensive testing across all major modules, including ui/summary.py
- Performance Benchmarks - Automated performance testing with caching effectiveness validation
- Integration Testing - End-to-end workflow validation and error scenario testing
- Type Hints - Full type annotation for better code reliability
- Modular Architecture - Clean separation of concerns with enhanced error handling
- Configuration Management - Centralized configuration system with environment-aware settings
This project emits structured JSON logs to stdout for easy ingestion and analysis.
- Format: one JSON object per line, including timestamp, level, message, logger, and correlation_id.
- Correlation ID: a stable ID is set per Streamlit session; CLI tools generate a new one per run. You can also set a temporary ID via a context manager for specific actions.
- Audit Trail: trades and domain events are recorded via an audit logger for traceability.
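A log line in this shape can be produced with a small formatter like the one below. This is a conceptual sketch, not the code in infra/logging.py: `JsonFormatter` and `get_json_logger` are assumed names, and the real helpers manage the correlation ID per Streamlit session rather than per module.

```python
import json
import logging
import sys
import uuid

_correlation_id = uuid.uuid4().hex  # one stable ID per session/run


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line with the fields listed above."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps(
            {
                "timestamp": self.formatTime(record),
                "level": record.levelname,
                "message": record.getMessage(),
                "logger": record.name,
                "correlation_id": _correlation_id,
            }
        )


def get_json_logger(name: str) -> logging.Logger:
    """Return a logger that writes JSON lines to stdout."""
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(JsonFormatter())
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```

One-object-per-line output like this is what lets standard log pipelines (jq, Loki, CloudWatch, etc.) filter on `correlation_id` without custom parsing.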
Key APIs
- Logging helpers live in `infra/logging.py`:
  - `get_logger(name)`: standard JSON logger
  - `get_correlation_id()`, `set_correlation_id(cid)`, `new_correlation_id()`
  - `audit.trade(action, *, ticker, shares, price, status="success", reason=None, **extra)`
  - `audit.event(name, **attrs)`
Domain Errors
- Centralized in `core/errors.py` and used across services/UI/CLI:
  - `ValidationError` (subclasses `ValueError`): invalid user/model input
  - `MarketDataDownloadError` (subclasses `RuntimeError`): download failures
  - `NoMarketDataError` (subclasses `ValueError`): no market data available
  - `RepositoryError` (subclasses `RuntimeError`): DB/repository failures
  - `ConfigError`, `NotFoundError`, `PermissionError`: additional categories
- Legacy shim: `services/exceptions/validation.ValidationError` aliases `core.errors.ValidationError` for backward compatibility.
Usage conventions
- Always raise domain-specific exceptions from services.
- UI/CLI layers should catch domain errors, log them (JSON), surface user-friendly messages, and avoid raw tracebacks in logs.
- The Streamlit app seeds a session-level `correlation_id` so logs from interactions can be traced end-to-end.
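These conventions look roughly like this in practice. The `ValidationError` class mirrors the hierarchy listed above; the `buy`/`handle_buy` functions are illustrative stand-ins for the app's service and UI layers, not its actual API.

```python
class ValidationError(ValueError):
    """Invalid user/model input (mirrors the core/errors.py hierarchy)."""


def buy(ticker: str, shares: int) -> str:
    """Service layer: raise domain-specific exceptions, never return error strings."""
    if shares <= 0:
        raise ValidationError("shares must be positive")
    return f"bought {shares} {ticker}"


def handle_buy(ticker: str, shares: int) -> str:
    """UI/CLI layer: catch domain errors and surface a user-friendly message."""
    try:
        return buy(ticker, shares)
    except ValidationError as exc:
        # In the real app this would also emit a JSON log entry;
        # the point is that no raw traceback reaches the user.
        return f"Could not place order: {exc}"
```

Because the service raises a typed exception rather than returning an error string, the UI can decide independently how to log it and what to show.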
Example patterns (conceptual)
- Create a logger: `logger = get_logger(__name__)`
- Emit an audit entry: `audit.trade("buy", ticker="AAPL", shares=10, price=150.0, status="success")`
- Set a scoped correlation ID in scripts: `with new_correlation_id(): ...`
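The scoped-ID pattern can be sketched with `contextvars` (an approximation, assuming the semantics described above; the real helpers live in infra/logging.py and may differ in detail):

```python
import contextlib
import contextvars
import uuid

_cid: contextvars.ContextVar = contextvars.ContextVar("correlation_id", default=None)


def get_correlation_id():
    """Return the correlation ID for the current context, if any."""
    return _cid.get()


@contextlib.contextmanager
def new_correlation_id():
    """Set a fresh correlation ID for the enclosed block, then restore the old one."""
    token = _cid.set(uuid.uuid4().hex)
    try:
        yield _cid.get()
    finally:
        _cid.reset(token)
```

Using a `ContextVar` rather than a global means the scoped ID is restored correctly even with nested scopes or concurrent tasks.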
The application uses SQLite for data storage in the data/ directory. Configuration options are available in:
- `.streamlit/config.toml`: Streamlit app configuration and theming
- `pytest.ini`: Test configuration and coverage settings
- Launch Application: Run `streamlit run app.py`
- Add Initial Cash: Use the cash management section to fund your account
- Start Trading: Buy your first stocks using the trading interface
- Track Performance: Monitor your portfolio's performance over time
- Monitor Dashboard: Check current positions and P&L
- Review Watchlist: Track potential investment opportunities
- Execute Trades: Buy/sell positions based on your strategy
- Analyze Performance: Review historical performance and metrics
- Live Market Data (production): Finnhub quotes subject to plan limits
- Synthetic Mode: Guarantees zero external calls (`APP_ENV=dev_stage`)
- Data Persistence: All portfolio data stored locally (SQLite)
- Risk Management: Always maintain appropriate position sizing and risk controls
- Educational Purpose: This application is for educational and experimental use
Timeline: June 2025 - December 2025
Starting Capital: $100
Current Status: Active trading with performance tracking
Updates: Weekly performance reports published on Substack
Feel free to:
- Report bugs or suggest improvements
- Submit pull requests for new features
- Use this as a blueprint for your own experiments
- Share feedback and results
- Email: nathanbsmith.business@gmail.com
- Blog: Substack updates
- Issues: GitHub Issues for bug reports and feature requests
Disclaimer: This is an experimental project for educational purposes. Past performance does not guarantee future results. Please invest responsibly.
Create an immutable self-contained copy (code + its own virtualenv) you can launch independently of your dev workspace.
```bash
make freeze VERSION=1.0.0
```
This produces: dist/release-1.0.0/
Contents:
- `app.py` and all source files
- `.venv/`: isolated virtual environment
- `launch.sh`: startup script
- `VERSION`: file containing the version string
```bash
./dist/release-1.0.0/launch.sh
```
The script sets APP_ENV=production by default (override when calling: APP_ENV=dev_stage ./dist/release-1.0.0/launch.sh).
```bash
make freeze VERSION=1.0.1
make freeze VERSION=1.0.2
```
Each run creates a fresh directory; older ones remain untouched for rollback/comparison.
```bash
tar -czf portfolio_release_v1.0.0.tgz -C dist release-1.0.0
rm -rf dist/release-1.0.0
```
- Zero external dependencies (no Docker)
- Stable snapshot insulated from active development
- Simple rollback (keep previous folder)
- Fast rebuild time (rsync + pip install)
If you need a single-file binary later, you can explore PyInstaller—see project notes or ask for a recipe.
If you'd like a double-clickable macOS app that starts the Streamlit UI and opens your browser, use the included Automator-friendly launcher script at macos/launch_portfolio_manager.sh.
Prerequisites:
- Ensure the repository is on the Mac you want to run from and that the launcher script is executable.
- Optional but recommended: create the local virtualenv used by development (the script will activate `.venv` if present).
Make the launcher executable (run from the repo root):

```bash
chmod +x macos/launch_portfolio_manager.sh
```

Quick manual test (run in a terminal from the repo root):

```bash
./macos/launch_portfolio_manager.sh
# then visit http://localhost:8501 if the browser doesn't open automatically
```

Create an Automator Application (double-clickable):
- Open the Automator app on macOS.
- Choose "New Document" → "Application".
- In the Actions library search for "Run Shell Script" and drag it into the workflow pane.
- Set the shell to `/bin/zsh` and paste the following script, replacing the path if your checkout is in a different location:

```bash
cd "/Users/bradnunnally/ChatGPT-Micro-Cap-Experiment"
./macos/launch_portfolio_manager.sh
```

- Save the Automator application (for example: `Portfolio Manager.app`) somewhere convenient (Applications or Desktop).
- Double-click the saved `.app` to launch the Streamlit app. The default browser should open to the UI, and logs are written to `logs/streamlit.out` in the repo.
Troubleshooting:
- If nothing happens when you double-click the Automator app, open Terminal and run the launcher manually to see the output:

```bash
cd "/Users/bradnunnally/ChatGPT-Micro-Cap-Experiment"
./macos/launch_portfolio_manager.sh
tail -n 200 logs/streamlit.out
```

- If the script can't find the virtualenv, create it with the project's Makefile helper:

```bash
make install
```

- To stop a running Streamlit instance started by the launcher, find and kill the process (example):

```bash
ps aux | grep streamlit
kill <pid>
```

Optional enhancements:
- Add the saved Automator app to the Dock for one-click access.
- Create a `launchd` plist to auto-start the Automator app at login (advanced).
