
feat: Phase 2 DuckDB data exchange + E2B Simple Adapter (ADR-0041)#67

Merged
padak merged 4 commits into main from feature/duckdb-data-exchange
Feb 19, 2026
Conversation

@padak padak commented Feb 19, 2026

Summary

  • E2B Simple Adapter (ADR-0041): A new lightweight adapter (~100 LOC) that installs osiris-pipeline from PyPI inside the E2B sandbox, replacing the ~1500 LOC ProxyWorker approach
  • Secure secret injection: Only env vars referenced in osiris_connections.yaml via ${VAR} patterns are forwarded to the sandbox (replaces naive pattern-matching that leaked unrelated secrets)
  • --stream-events CLI flag: Enables JSON Lines output of events/metrics to stdout for PyPI-based E2B execution
  • DuckDB migration fixes: Path handling, session logging, and test updates across 50+ files for Phase 2 DuckDB data exchange compatibility
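
The targeted secret injection described above can be sketched roughly as follows. This is a minimal illustration, not the adapter's actual code; the helper names (`extract_env_refs`, `build_sandbox_env`) are hypothetical:

```python
import os
import re

# Matches ${VAR} references, e.g. "password: ${MYSQL_PASSWORD}".
ENV_REF = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def extract_env_refs(connections_yaml: str) -> set[str]:
    """Collect only the env var names referenced in osiris_connections.yaml."""
    return set(ENV_REF.findall(connections_yaml))

def build_sandbox_env(connections_yaml: str) -> dict[str, str]:
    """Forward referenced vars that are set locally; everything else stays out."""
    refs = extract_env_refs(connections_yaml)
    return {name: os.environ[name] for name in refs if name in os.environ}
```

Scanning for explicit `${VAR}` references is what prevents the earlier failure mode: a pattern-match on names like `*_SECRET` forwards unrelated credentials, while this approach forwards nothing the connections file does not mention.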

Test plan

  • 14 new unit tests for E2BSimpleAdapter (init, prepare, execute, collect, stdout parsing, env var extraction)
  • 14 existing ExecutionAdapter contract tests pass (no regression)
  • Ruff lint passes on all modified files

padak added 4 commits December 2, 2025 10:33
Migrate all drivers from DataFrame-based to DuckDB table-based data
exchange as specified in ADR 0043.

## Changes

### Drivers Migrated
- MySQL extractor: streaming via SQLAlchemy yield_per to DuckDB tables
- PostHog extractor: pagination streams to DuckDB, preserves state
- GraphQL extractor: pagination streams to DuckDB tables
- DuckDB processor: reads/writes tables in shared database
- Supabase writer: reads from DuckDB tables (dual-mode for compat)

### Runtime Updates
- runner_v0: input resolution handles table references
- proxy_worker: removed spilling logic (~50 lines), simplified result caching

### New API Contract
- Extractors return: {"table": step_id, "rows": count}
- Writers accept: inputs["table"] with table name
- All data flows through shared pipeline_data.duckdb

### Benefits
- Memory: O(batch_size) constant instead of O(n)
- No spilling: eliminated Parquet save/load workaround
- Query pushdown: SQL directly on DuckDB tables
- Simpler code: one shared database per session

### Tests Updated
- test_duckdb_multi_input.py: new MockContext pattern
- test_filesystem_csv_extractor.py: expect table-based output
- test_graphql_extractor_driver.py: MockContext with DuckDB
- ADR 0043: Change status from "Proposed" to "Accepted"
- Add Phase 2 completion document with migration details
- Update CLAUDE.md driver development guidelines:
  - Add ctx.get_db_connection() to Context API
  - Replace DataFrame-based patterns with DuckDB table patterns
  - Add Extractor, Processor, Writer pattern examples
  - Remove legacy df_*/df key handling documentation
- Implement E2BSimpleAdapter for PyPI-based E2B execution (~100 lines vs ~1500 ProxyWorker)
- Add targeted secret injection: scan osiris_connections.yaml for ${VAR} refs instead of leaking all secret-like env vars
- Add --stream-events CLI flag for JSON Lines event/metric output
- Add 14 unit tests for adapter (init, prepare, execute, collect, stdout parsing, env var extraction)
- Fix path handling and session logging across codebase for DuckDB migration compatibility
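
The `--stream-events` framing amounts to JSON Lines over stdout: one JSON object per line, interleaved with ordinary log output that the host side skips. A minimal sketch (helper names are illustrative, not the adapter's actual API):

```python
import json
import sys

def emit_event(event: dict, stream=sys.stdout) -> None:
    """Write one event or metric as a single JSON line."""
    stream.write(json.dumps(event) + "\n")
    stream.flush()  # keep the host-side reader from stalling on buffering

def parse_events(stdout_text: str) -> list[dict]:
    """Recover events from captured sandbox stdout, skipping plain log lines."""
    events = []
    for line in stdout_text.splitlines():
        line = line.strip()
        if line.startswith("{"):
            try:
                events.append(json.loads(line))
            except json.JSONDecodeError:
                pass  # a log line that merely looked like JSON
    return events
```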
@padak padak merged commit 65fbbc6 into main Feb 19, 2026
9 of 12 checks passed