Releases: exaforge/extropy
v0.4.0
Extropy v0.4.0 Release Notes
Release date: 2026-02-24
This document is the full release narrative for v0.4.0. It is intentionally
more detailed than the high-level changelog entry and is designed for operators,
maintainers, and applied-research users upgrading existing studies.
Executive Overview
v0.4.0 is the first release where the platform behavior is strongly aligned with
the current study architecture:
- scenario-first pipeline ownership,
- explicit quality gates across sampling and networking,
- timeline-safe simulation semantics for evolving scenarios,
- stricter automation contracts for agentic harnesses.
The release includes broad runtime and tooling hardening from spec generation
through final simulation analysis, with emphasis on realism, determinism, and
operational clarity.
Why This Release Matters
In v0.3.0 and earlier, the platform still carried mixed-era command and data contracts.
Those contracts made it easy for automation to drift into stale command patterns, and for
long-running evolving scenarios to stop too early under static convergence heuristics.
v0.4.0 addresses these structural issues by:
- simplifying the command path to one canonical pipeline,
- giving scenario stage ownership over scenario semantics (including household focus),
- making simulation logic explicit about evolving timelines and new-information epochs,
- improving data quality gates and topological checks before expensive simulation runs.
Major Improvements by Area
1) CLI and Pipeline Architecture
The command flow is now explicitly:
extropy spec -> extropy scenario -> extropy persona -> extropy sample -> extropy network -> extropy simulate -> extropy results
What changed:
- scenario extension generation is integrated into `extropy scenario` (instead of split command chains),
- output/inspection flow is centered on `results` and `query`,
- command docs and operator docs are aligned with implemented flags and behaviors.
Operational impact:
- less command ambiguity,
- fewer stale script paths,
- clearer ownership of population vs scenario semantics.
2) Simulation Runtime Realism and Evolving Timelines
v0.4.0 brings structural changes to evolving scenario handling:
- timeline-aware stopping behavior:
- convergence/quiescence auto-stop is no longer blindly applied when future timeline events remain,
- explicit override available via scenario config and runtime flag semantics.
- provenance/epoch-aware re-reasoning:
- new information can trigger re-reasoning in a principled way,
- supports committed-agent reconsideration patterns without content keyword heuristics.
- conversation interleaving:
- conversations can be interleaved during timestep reasoning loops,
- avoids unrealistic "all talk only after all reasoning" dynamics.
Operational impact:
- improved temporal realism for multi-event scenarios,
- reduced premature stop risk in long-horizon studies,
- better alignment between information arrival and agent updates.
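The timeline-aware stopping rule above can be illustrated with a small sketch. Names such as `should_stop`, `event_steps`, and `allow_early_stop` are hypothetical, not the actual extropy internals; the point is that convergence alone no longer ends a run while timeline events remain ahead.

```python
# Hypothetical sketch of timeline-aware stopping; not the actual extropy API.

def should_stop(current_step: int, converged: bool,
                event_steps: list[int], allow_early_stop: bool = False) -> bool:
    """Return True only when the run may safely auto-stop."""
    future_events = [s for s in event_steps if s > current_step]
    if future_events and not allow_early_stop:
        # A scheduled event (e.g. a lawsuit at step 12) can still perturb
        # a converged population, so keep simulating.
        return False
    return converged

# A converged run with an event still ahead keeps going:
print(should_stop(current_step=5, converged=True, event_steps=[3, 12]))   # False
print(should_stop(current_step=15, converged=True, event_steps=[3, 12]))  # True
```

The explicit override mentioned above would correspond to setting something like `allow_early_stop` from scenario config or a runtime flag.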
3) Sampling, Household Coherence, and Scenario Ownership
Sampling and household pathways were hardened:
- stronger lifecycle normalization and household coherence behaviors,
- exact count guarantees improved in sampling loops,
- better partner/dependent naming and member consistency handling.
Also, household semantics are now clearly scenario-owned where they affect downstream
simulation behavior. This prevents base-population artifacts from leaking scenario-specific
assumptions.
Operational impact:
- higher-quality sample datasets,
- fewer contradictory household/persona states before network and sim stages,
- clearer study reproducibility.
4) Network Generation and Topology Gating
Network stage received quality and determinism upgrades:
- generated network configs now default to settings that yield meaningful topology,
- deterministic structural role handling improved,
- strict gate behavior hardened for practical acceptance criteria,
- resource and worker auto-tuning improved for large studies.
Operational impact:
- fewer low-information network outputs,
- better reproducibility across reruns,
- clearer gate outcomes before simulation spend.
5) Provider Runtime Reliability and Long-Call Stability
Provider and runtime call paths were hardened:
- long-response stability improvements (including streaming-oriented fixes),
- async client reuse and event-loop-close handling fixes,
- rate/queue and token accounting reliability improvements.
Operational impact:
- lower failure probability during high-fidelity or long-duration runs,
- better run stability under high throughput settings.
6) THINK/SAY and Reasoning Trace Fidelity
The simulation now better tracks and persists internal/external divergence:
- THINK vs SAY pathways are persisted and classifiable,
- macro context and intent accountability are strengthened in prompt flow,
- conversation turn behavior is more explicit by fidelity tier.
Operational impact:
- stronger micro-realism in agent expression and social signaling,
- better post-run diagnostics for behavior interpretation.
Breaking Changes and Required Migration Work
The following are breaking or operationally significant:
- `extend` workflow removed from the active user path.
- global/root `--json` usage removed from the CLI surface; use `cli.mode=agent` instead.
- `estimate` command is hidden/disabled pending parity work; do not depend on it in release automation.
- scenario/household semantics ownership shifted to scenario artifacts.
- scripts using older `--study-db` assumptions should migrate to study-folder + `--study` conventions.
Migration Checklist
- Set automation mode: `extropy config set cli.mode agent`
- Update pipeline scripts to the canonical flow: `spec -> scenario -> persona -> sample -> network -> simulate -> results`
- Verify scenario and persona file path assumptions: `scenario/<name>/scenario.vN.yaml` and `scenario/<name>/persona.vN.yaml`
- Remove hard dependencies on estimate in CI paths until re-enabled.
- Re-baseline long evolving scenario comparisons because stopping/re-reasoning semantics changed.
Known Operational Guidance for v0.4.0
- For expensive studies, run gated dry passes first (sample/network validation) before full sim.
- Use one small simulation smoke pass before large multi-timestep runs.
- Keep run metadata and scenario versions with seed references for comparability.
- Prefer execution through an agentic harness for deterministic triage and replay.
Versioning and Packaging
- Package version: `0.4.0`
- Version source: `extropy/__init__.py`
- Publish workflow: GitHub release-triggered PyPI publish flow remains unchanged.
Suggested GitHub Release Body (Copy/Paste)
Title:
v0.4.0 - Scenario-first pipeline hardening, timeline-safe simulation, and runtime reliability
Body:
v0.4.0 is a major architecture and operations release. It finalizes the scenario-first command path, hardens sampling/network quality gates, and upgrades simulation behavior for evolving timelines with timeline-aware stopping, epoch-based re-reasoning, and improved conversation scheduling.
This release also includes broad command/documentation alignment, provider/runtime stability improvements, and stronger reasoning trace fidelity (THINK vs SAY persistence and accountability context).
Breaking changes include removal of legacy extend workflow paths from active operation, removal of root --json command usage patterns in favor of cli.mode=agent, and temporary hiding/disablement of estimate until parity work is complete.
See CHANGELOG.md for full detail and migration checklist.
v0.3.0 — Simulation v2: Conversations, Cognition, and Fidelity
Major release bringing multi-turn agent conversations, cognitive self-awareness, and fidelity-tiered simulation.
🗣️ Phase D: Conversations & Social Posts
- Agent-agent conversations: Agents can now `talk_to` each other during simulation, with multi-turn LLM-driven dialogue
- Agent-NPC conversations: Agents talk to household dependents (kids, elderly parents) as NPCs with generated profiles
- Conversation state changes: Conversations update sentiment, conviction, and internal reactions
- Social posts: Agents can broadcast to their network via social media posts
- Priority scoring: Conversations prioritized by relationship weight × edge strength
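The priority-scoring bullet amounts to a simple product; a minimal sketch of the idea (function and variable names are illustrative, and the real scheduler may weight additional factors):

```python
def conversation_priority(relationship_weight: float, edge_strength: float) -> float:
    """Score a candidate conversation; higher scores are scheduled first."""
    return relationship_weight * edge_strength

# Rank candidate pairs by priority, highest first (data is made up).
candidates = [("alice-bob", 0.9, 0.8), ("alice-carol", 0.4, 0.6)]
ranked = sorted(candidates,
                key=lambda c: conversation_priority(c[1], c[2]),
                reverse=True)
print([name for name, *_ in ranked])  # ['alice-bob', 'alice-carol']
```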
🧠 Phase E: Cognitive Architecture
- Emotional trajectory: Agents aware of their emotional arc ("I've been getting more anxious")
- Conviction self-awareness: Agents track certainty changes ("I've been firm but my certainty is slipping")
- Repetition detection: Trigram Jaccard similarity detects stale reasoning, nudges agents to go deeper
- THINK vs SAY separation: High-fidelity mode separates internal monologue from public statements
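The repetition-detection bullet describes a standard trigram Jaccard check; a self-contained sketch of the technique (the threshold and names are illustrative, not the shipped values):

```python
def trigrams(text: str) -> set[tuple[str, str, str]]:
    """Word-level trigrams of a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a ∩ b| / |a ∪ b|; identical empties count as 1.0."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_stale(current: str, previous: str, threshold: float = 0.8) -> bool:
    """Flag reasoning that mostly repeats the previous step's trigrams."""
    return jaccard(trigrams(current), trigrams(previous)) >= threshold

print(is_stale("I am still worried about the petition and its cost",
               "I am still worried about the petition and its cost today"))  # True
```

When a step is flagged stale, the prompt can be nudged ("go deeper on one concern") instead of re-asking the same question.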
⚡ Phase F: Fidelity Tiers
- Three tiers: `low` (fast/cheap), `medium` (balanced), `high` (full cognitive features)
- Merged-pass reasoning: Single LLM call for low/medium fidelity vs two-pass for high
- Cost optimization: 3-10x cost reduction at lower fidelity with graceful degradation
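The pass-count difference between tiers is one lever behind the cost reduction; a toy sketch (the mapping below is assumed for illustration, and real costs also depend on model choice and token counts):

```python
from dataclasses import dataclass

@dataclass
class TierConfig:
    passes: int           # LLM calls per agent per timestep
    think_say_split: bool # high fidelity separates THINK from SAY

# Illustrative mapping consistent with the notes above, not literal internals.
TIERS = {
    "low": TierConfig(passes=1, think_say_split=False),
    "medium": TierConfig(passes=1, think_say_split=False),
    "high": TierConfig(passes=2, think_say_split=True),
}

def calls_per_run(tier: str, agents: int, timesteps: int) -> int:
    """Rough LLM call count for a run at a given fidelity tier."""
    return TIERS[tier].passes * agents * timesteps

print(calls_per_run("high", 100, 10) / calls_per_run("low", 100, 10))  # 2.0
```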
🔄 Phase C: Contagion & Timeline Events
- Timeline events: Inject events at specific timesteps (protests, counter-petitions, lawsuits)
- Merged-pass schema: Combined role-play + classification in single call
- Conviction-aware sharing: High-conviction agents more likely to spread information
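Conviction-aware sharing can be approximated by scaling a base share probability with conviction; a hedged sketch (the `boost` factor and function names are assumptions, not the actual model):

```python
import random

def share_probability(base: float, conviction: float, boost: float = 0.5) -> float:
    """Scale a base share probability by conviction (both in [0, 1])."""
    return min(1.0, base * (1.0 + boost * conviction))

def decides_to_share(rng: random.Random, base: float, conviction: float) -> bool:
    """Bernoulli draw at the conviction-adjusted probability."""
    return rng.random() < share_probability(base, conviction)

print(share_probability(0.5, 1.0))  # 0.75
```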
🏠 Household & Identity
- Household sampling: `agent_focus` controls who is simulated vs NPC (families, couples, individuals)
- LLM-researched configs: `NameConfig` and `HouseholdConfig` generated per-population for cultural accuracy
- Partner/dependent relationships: Structural edges and shared attributes within households
🔧 Fixes & Improvements
- Azure OpenAI compatibility: Full schema compliance (`additionalProperties: false`, complete `required` arrays)
- Persona rendering: Fixed duplicate sections, added punctuation
- `agent_focus` prompt: Clarified household sampling mode semantics
Validation
- `ruff check .` ✓
- `ruff format --check .` ✓
- `pytest -q` — 811 passed
Breaking Changes
- Simulation schemas updated for Azure compatibility (no user-facing changes)
- `agent_focus` keywords now control household sampling modes
Migration
No migration required from v0.2.x. New features are additive.
v0.2.3
Refresh PyPI project metadata/readme after package rename to extropy-run.
v0.2.2
Rename PyPI package to extropy-run and publish first extropy-run release.
v0.2.1
Publish extropy package to PyPI after repo/package rename.
v0.2.0
Highlights
- Added per-run token usage and cost tracking in simulation outputs (`meta.json`).
- Added schema-driven categorical null phrasing (`null_options`/`null_phrase`) for persona rendering.
- Added dependency auto-inference during constraint binding and related validator coverage.
- Removed checked-in study artifacts from the core package repo.
Validation
- `ruff check .`
- `ruff format --check .`
- `pytest -q` (637 passed)
v0.1.4
What's New
Simulation Dynamics
- Added propagation damping controls (`decay_per_hop`, `max_hops`) and bounded spread behavior in simulation propagation.
- Added option-level friction support for categorical outcomes to better model behavior persistence under social pressure.
- Improved state handling for public/private dynamics and stabilization behavior in simulation engine.
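A minimal sketch of how `decay_per_hop` and `max_hops` bound spread (illustrative only; the actual propagation model may differ):

```python
def hop_strength(initial: float, decay_per_hop: float,
                 hop: int, max_hops: int) -> float:
    """Signal strength after `hop` hops; zero beyond max_hops (bounded spread)."""
    if hop > max_hops:
        return 0.0
    return initial * (1.0 - decay_per_hop) ** hop

print(hop_strength(1.0, 0.5, 2, 3))  # 0.25
print(hop_strength(1.0, 0.5, 4, 3))  # 0.0
```

Geometric decay plus a hard hop cap keeps a single seed event from reaching the whole network at full strength.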
Validation & Runtime Correctness
- Fixed scenario validation result construction so errors/warnings are preserved and surfaced correctly.
- Fixed scenario file-reference validation to resolve relative paths against the scenario file location.
- Fixed validator/runtime contract for spread modifiers (`edge_weight` is now recognized in validation).
- Fixed boolean expression consistency in safe evaluation (`true`/`false` now handled consistently at runtime).
- Fixed expression syntax false-positives for valid escaped apostrophes and string literals.
Network Config Reliability
- Fixed network config generation so degree multiplier condition values are typed correctly (boolean/number/string) instead of string-only.
- Removed legacy preset network config from runtime; network behavior is now fully config-driven.
CLI
- Improved scenario detection in `entropy validate` to handle `scenario.yaml` filenames directly.
Full Changelog: v0.1.3...v0.1.4
v0.1.3
What's New
- Chat Completions API support for Azure OpenAI models (DeepSeek-V3.2, Kimi-K2.5, gpt-5-mini)
- `simulation.api_format` config key (auto-defaults: `chat_completions` for Azure, `responses` for OpenAI)
- Async reasoning timeouts (30s/20s) to prevent batch hangs
- Fix: rate limit overrides now applied to both pivotal and routine limiters
- Defensive input validation: rescale 0-1 conviction scores, clamp out-of-range sentiment
- DeepSeek-V3.2 and Kimi-K2.5 pricing added
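The defensive-validation bullet can be sketched as follows. The 0-10 conviction scale and [-1, 1] sentiment range are assumptions for illustration; the shipped ranges may differ.

```python
def normalize_conviction(value: float) -> float:
    """If a model returns conviction on a 0-1 scale instead of the assumed
    0-10 scale, rescale it; then clamp into [0, 10]. (Scales are assumed
    for this sketch, not taken from the extropy source.)"""
    if 0.0 <= value <= 1.0:
        value *= 10.0
    return min(10.0, max(0.0, value))

def clamp_sentiment(value: float) -> float:
    """Clamp sentiment into the assumed [-1, 1] range."""
    return min(1.0, max(-1.0, value))

print(normalize_conviction(0.5))  # 5.0
print(clamp_sentiment(1.8))       # 1.0
```

This kind of tolerant input handling keeps a single out-of-range model reply from poisoning batch statistics.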
v0.1.2
What's New
Azure OpenAI Support
- New provider: `azure_openai` — works in both pipeline and simulation zones
- Reuses `OpenAIProvider` by swapping in Azure SDK clients at construction time
- Configure via: `entropy config set simulation.provider azure_openai`
- Env vars: `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_DEPLOYMENT`
- Default API version: `2025-03-01-preview` (required for the Responses API)
Progress Display Fixes
- Fix stale data in progress display when 0 agents to reason
- Cap position names at 40 chars to prevent layout overflow
- Type `AgentDoneCallback` with `ReasoningResponse` instead of `Any`
- Remove redundant `avg_sentiment`/`avg_conviction` properties
Tests
- 600 tests across 15 test files
v0.1.1
What's Changed
New Features
- `entropy estimate` - Predict simulation cost (LLM calls, tokens, USD) without running it
- Adaptive network calibration - Binary search for target average degree
- Claude Code skill - Pipeline assistance integration
Bug Fixes
- Fix relative path resolution in scenario files - commands now work from any directory
- Fix `float_to_conviction` returning string instead of float
- Fix rate limiter 429 storms - staggered task launches, per-model splitting, concurrency caps
- Fix async HTTP client cleanup before event loop shutdown
- Register missing `persona` command
- Add missing simulation config keys (`pivotal_model`, `routine_model`, `rate_tier`)
Improvements
- Code cleanup: performance, reliability, tests
- CI: enable uv cache, add workflow_dispatch triggers
- 158 new simulation validation tests
Full Changelog: v0.1.0...v0.1.1