A human-governed system for producing defensible, phase-gate decisions in regulated environments.
This repository demonstrates the RGDS operating model: human-governed, evidence-linked, schema-validated, and explicitly non-agentic. It is designed to preserve decision accountability, auditability, and regulatory trust.
- Decision logs now require options enumeration (at least two).
- Evidence items must declare completeness: `complete`, `partial`, or `placeholder`.
- Residual risk is captured explicitly (what remains true after you proceed).
- Named human accountability is required (decision owner + approvers).
- AI assistance disclosure is structured and mandatory when AI is used.
A decision log is considered governance-complete only when it records:
- Decision question + decision deadline
- Options considered (at least two)
- Evidence base with completeness classification
- Risk posture + residual risk
- Outcome + (if conditional) owned conditions with verification evidence
- Named human accountability (owner + approvers)
- AI assistance disclosure (if AI was used)
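As orientation, a minimal sketch of a governance-complete record might look like the following YAML, in the spirit of `decision-log/decision-log.template.yaml`. The field names here are illustrative assumptions; the authoritative shape is `decision-log/decision-log.schema.json`.

```yaml
# Hypothetical record for illustration only.
# Field names are assumptions; the enforced shape lives in
# decision-log/decision-log.schema.json.
decision_id: rgds-dec-9999
decision_question: "Proceed to the next phase gate with partial stability data?"
decision_deadline: "2025-03-14"
options_considered:                  # at least two options required
  - proceed_with_conditions
  - defer_until_evidence_complete
evidence:
  - ref: "stability-report-draft-v3"
    completeness: partial            # complete | partial | placeholder
risk_posture: "phase-appropriate tolerance for early stability data"
residual_risk: "3-month stability results may shift shelf-life claims"
outcome: conditional_go
conditions:
  - description: "verify 3-month stability data before the publishing lock"
    owner: "CMC lead"
    verification_evidence: "signed stability addendum"
accountability:
  decision_owner: "Program Lead (named individual)"
  approvers:
    - "Quality Head"
ai_assistance:
  used: false                        # disclosure is mandatory whenever AI is used
```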
- Canonical Reference Decisions
- What Changed in v2.0.0
- Decision Log Minimum Requirements
- What Problem This Solves
- What This Repository Is (and Is Not)
- How to Read This Repository (Non-Technical Overview)
- Core Concepts
- Where AI Fits in the System
- Repository Structure
- Key Docs
- Why This Matters in Production
- Who This Is For
- Status
If you read only one place in this repository, start here.
The following canonical decision records demonstrate the RGDS operating model in concrete, reviewable form.
| File | Canonical scenario demonstrated |
|---|---|
| `examples/README.md` | How to read and compare example decisions |
| `examples/rgds-dec-0001.json` | Canonical `conditional_go` (explicit conditions, owned follow-ups) |
| `examples/rgds-dec-0002-no-go.json` | Canonical `no_go` with defensible rationale and re-entry logic |
| `examples/rgds-dec-0003-defer-required-evidence.json` | Canonical `defer` / abstain due to missing required evidence |
| `examples/rgds-dec-0004-regulatory-interaction.json` | Regulatory interaction or escalation decision logic |
| `examples/rgds-dec-0005-ind-conditional-go-author-at-risk.json` | IND-style `conditional_go` (author-at-risk drafting, reviewer triage, publishing lock points) |
| `examples/rgds-dec-0006-ai-assisted-conditional-go.json` | AI-assisted `conditional_go` (bounded AI disclosure, human authority preserved) |
These examples demonstrate the intended RGDS operating model: human-governed, evidence-linked, schema-validated, and explicitly non-agentic.
The following governance concepts were first made explicit in v1.4.0 and are now enforced and extended in v2.0.0.
Version 1.4.0 makes previously implicit governance decisions explicit, based on observed failure modes in real IND delivery and cross-functional review.
| Concept | What is now explicit | Why it matters |
|---|---|---|
| Evidence completeness | complete / partial / placeholder | prevents “false confidence” from undocumented placeholders |
| Downstream propagation | declarations when evidence or decisions change | prevents silent ripple effects across artifacts |
| Risk posture benchmarking | benchmarked rationale, not just a declaration | forces defensible rationale for tolerance/assumptions |
| Decision authority scope | scope + escalation paths | prevents unclear accountability and “who approved this?” gaps |
| Bounded AI disclosure | confidence band + human override | makes AI use reviewable without changing accountability |
These changes do not introduce automation or autonomy.
They tighten decision defensibility.
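A hedged YAML fragment showing how these concepts could surface in a record; the completeness values come from the table above, while the surrounding field names and shapes are assumptions:

```yaml
evidence:
  - ref: "tox-summary-m2.6-draft"
    completeness: placeholder              # explicit; no undocumented placeholders
    propagates_to:                         # hypothetical downstream declaration
      - "m2.4-nonclinical-overview"
risk_posture:
  declared: "tolerate a draft tox summary at this gate"
  benchmark_rationale: "rationale recorded alongside the declaration"  # illustrative
authority:
  scope: "phase-gate decisions for this program only"
  escalation_path: "Quality Head, then governance board"               # hypothetical
ai_assistance:
  used: true
  ai_risk_assessment:
    confidence_band: medium                # bounded disclosure; human override stands
```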
In regulated programs, decisions often fail after they are made, not because teams lacked expertise but because:
- scope changed late without being logged
- assumptions and risk posture were implicit
- reviewer routing decisions were undocumented
- contingency plans existed only informally
- decision context could not be reconstructed later
Traditional documentation emphasizes inputs (documents, analyses, reports).
RGDS treats the decision itself as the primary artifact.
RGDS is informed by synthesis of real delivery experience, including public IND submission discussions, regulatory strategy perspectives, and operational interviews. These sources are treated as signal inputs, not prescriptions, and are translated into explicit, auditable decision structure.
For the evidence-to-design rationale behind RGDS, see:
→ docs/why-rgds-exists.md
| Statement | Practical meaning |
|---|---|
| a decision-support operating model for phase-gated workflows | decisions become governed artifacts, not informal meeting outcomes |
| a structured method for making decisions defensible at the time they are made | captures rationale before memory decay and handoff loss |
| a human-governed system with explicit ownership and approval | named owner + reviewer(s) + approver(s), with escalation |
| compatible with regulated delivery, quality review, and audit expectations | decision records designed for audit reconstruction |
| a schema-backed decision log system that makes decision context, risk, and ownership auditable | required fields and validation discipline enforce completeness |
| Statement | What is explicitly excluded |
|---|---|
| an autonomous decision system | no autonomous gate outcomes |
| an AI agent platform | no agents, orchestration, or self-directed execution |
| a recommendation engine | no “system decides” or “system recommends” authority |
| a compliance checkbox or document dump | evidence must be linked and interpreted as decision inputs |
No component in this repository is allowed to silently decide, approve, or accept risk.
RGDS is an independent reference implementation intended to demonstrate decision-governance patterns in regulated, phase-gated environments.
It is:
- not a production system
- not regulatory advice
- not a compliance framework
- not an autonomous or agentic system
RGDS does not make decisions.
It records how decisions are made, governed, and defended.
All regulatory, quality, and approval responsibilities remain with the human decision-makers and organizations using this material.
This repository is organized around decisions, not tools, models, or automation.
RGDS is designed to be learned by example first; the documentation and schemas explain what those examples enforce.
Start with the examples/ directory.
| Step | What to read | Why |
|---|---|---|
| 1 | one canonical decision example (`0001`, `0003`, or `0005`) | see the operating model in a real decision |
| 2 | the decision-log schema (`decision-log.schema.json`) | understand what is enforced vs. optional |
| 3 | the evaluation plan (`evaluation/evaluation-plan.md`) | see how decision quality is assessed |
- AI governance reviewers: `docs/ai-assistance-policy.md` → example `0006` → AI Governance repository
Each decision record represents a single, concrete phase-gate outcome (e.g., conditional-go, defer, no-go, or escalation).
Documentation explains why the system is designed this way. Examples show how it actually works.
| Dimension | What is captured |
|---|---|
| Decision | what was decided |
| Rationale | why it was decided |
| Evidence | what evidence was used (and its quality) |
| Risk | what risks and gaps were accepted |
| Accountability | who owned, reviewed, and approved the decision |
| Controls | what conditions, follow-ups, or fallback actions exist |
| Evidence completeness | complete / partial / placeholder |
| Propagation | whether downstream artifacts must be updated if the decision changes |
Executives, quality reviewers, and auditors should be able to understand why a decision was reasonable without reading code.
This repository reflects the role of a principal-level analyst translating complex delivery realities into durable decision infrastructure.
It is intended to demonstrate how delivery experience, governance constraints, and applied AI considerations can be translated into defensible decision systems.
The Decision Log is the primary artifact of RGDS.
A Decision Log records:
- the decision question and outcome
- options considered
- evidence used with confidence ratings
- known gaps, assumptions, and scope changes
- explicit risk posture and residual risk
- named decision owner, reviewers, and approvers
- conditions, actions, and fallback plans
- a durable audit trail
The Decision Log is the system of record for governance.
RGDS formalizes execution realities observed during IND preparation:
| Execution reality observed in IND delivery | RGDS mechanism / field |
|---|---|
| phase-appropriate tolerance and trade-offs must be stated | risk_posture |
| placeholders must be governed and verified | author-at-risk drafting + evidence completeness |
| reviewer triage decisions must be explicit | review_plan |
| late discoveries and scope volatility must be auditable | scope_change_events[] |
| cross-module surprises must be prevented | dependency_map[] |
| evidence readiness is rate-limiting | data_readiness_status[] → formalized under evidence completeness in v1.4.0 |
| publishing is constrained by lock points | publishing_plan |
| decisions must tie back to program intent | tpp_links[] |
The existing decision_category + regulatory_context fields model pre-IND / FDA interaction strategy as a first-class decision.
These additions reflect real execution realities without introducing automation risk.
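As a sketch, the right-hand-column field names from the table above might combine in a record like this. The field names are the schema's per the table; the nested shapes and values are assumptions:

```yaml
decision_category: regulatory_interaction      # e.g., pre-IND / FDA interaction strategy
regulatory_context: "Type B pre-IND meeting"   # illustrative value
risk_posture: "phase-appropriate trade-offs stated explicitly"
review_plan:
  triage: "nonclinical to reviewer A; CMC to reviewer B"  # explicit reviewer routing
scope_change_events:
  - date: "2025-02-01"
    change: "late addition of a juvenile toxicology study"
dependency_map:
  - from: "m2.6-tox-summary"
    to: "m2.4-nonclinical-overview"            # makes cross-module links auditable
publishing_plan:
  lock_points: ["pre-QC lock", "final publishing lock"]
tpp_links:
  - "TPP objective: once-daily oral dosing"
```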
For a cross-role view of who owns what, see:
→ docs/role-decision-artifact-matrix.md
RGDS evaluates:
| Evaluation focus | How it is assessed |
|---|---|
| authority scope, escalation paths, and downstream propagation requirements | structured review criteria |
| AI assistance disclosure (when applicable) | explicit decision-log disclosure fields |
| evidence completeness, risk posture, propagation awareness | scorecards and rubrics |
Evaluation focuses on decision quality, not model performance in isolation.
Evaluation is performed through structured review criteria and scorecards, not automated model metrics.
Governance is encoded directly into the decision artifact:
- explicit ownership and accountability
- separation of reviewers and approvers
- support for conditional-go, defer, and no-go outcomes
- escalation and re-review rules
- bounded, disclosed AI assistance
Stopping early is treated as risk reduction, not failure.
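A brief, assumption-laden sketch of how this might look in a record; the outcome values match the canonical examples, while the other field names are hypothetical:

```yaml
outcome: defer                    # conditional_go, defer, and no_go are all first-class
accountability:
  decision_owner: "Program Lead"
  reviewers: ["Nonclinical Reviewer"]   # reviewers kept separate from approvers
  approvers: ["Quality Head"]
escalation:                       # hypothetical shape for escalation / re-review rules
  trigger: "required evidence still missing at re-review"
  re_review_date: "2025-04-01"
```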
RGDS supports optional, bounded AI assistance under explicit governance constraints.
The formal governance covenants that define:
- permitted AI assistance,
- explicit prohibitions (including non-agentic requirements), and
- human ownership and approval obligations
are maintained externally to preserve separation of concerns.
RGDS remains fully valid in the absence of AI.
For the authoritative governance definition, see:
RGDS AI Governance (Covenants)
→ https://github.com/mj3b/rgds-ai-governance
RGDS is valid with no AI at all.
When AI is used, it is used only as bounded assistance to help humans produce or review decision artifacts faster—without changing who owns the decision or what counts as evidence.
AI may be used for reviewable support tasks such as:
| Task | Example use | Constraint |
|---|---|---|
| Summarization | draft summary of a source report or meeting notes | human edits and signs off |
| Extraction | pull structured fields (dates, study IDs, endpoints, risks) | output treated as draft |
| Comparison / diffing | highlight inconsistencies (e.g., IB vs M2.6 vs Protocol) | human resolves conflicts |
| Structured drafting | draft decision-log sections (context, options, risks) | owner finalizes content |
| Checklist support | flag missing fields or schema mismatches | does not “approve” compliance |
AI must not:
- decide, approve, or reject a gate outcome
- act as an “evidence of record” source by default
- silently accept scope changes, risk posture, or reviewer routing
- execute actions (publishing, submissions, notifications) without explicit human authorization
- fabricate citations, source data, or regulatory rationale
If AI assistance is used for a decision artifact, the usage must be disclosed in the decision log using explicit, schema-defined fields.
These fields are enforced by schema, governed by policy, and demonstrated in canonical examples.
| Disclosure | Field | Meaning |
|---|---|---|
| AI used? | `ai_assistance.used` | whether AI assisted at all (transparency) |
| Tool identity | `ai_assistance.tool_name` | which AI system was used |
| Purpose | `ai_assistance.tool_purpose` | what task the AI assisted with |
| Human review | `ai_assistance.human_review[]` | review tier(s) and findings |
| Human overrides | `ai_assistance.human_override_log[]` | corrective interventions and rationale |
| AI risk assessment | `ai_assistance.ai_risk_assessment` | confidence band and documented cautions |
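As a hedged sketch, a populated disclosure block might look like this. The `ai_assistance.*` field names come from the table above; the nested shapes and values are illustrative assumptions:

```yaml
ai_assistance:
  used: true
  tool_name: "general-purpose LLM"      # which AI system was used
  tool_purpose: "draft summary of the source toxicology report"
  human_review:
    - tier: "owner review"              # tier structure is an assumption
      findings: "two factual corrections applied before sign-off"
  human_override_log:
    - intervention: "removed an unsupported efficacy claim"
      rationale: "claim not present in the source report"
  ai_risk_assessment:
    confidence_band: medium
    cautions: "summary is a drafting aid; the source report is the evidence of record"
```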
- Schema enforcement: → `decision-log/decision-log.schema.json`
- Governance policy: → `docs/ai-assistance-policy.md`
- Worked example: → `examples/rgds-dec-0006-ai-assisted-conditional-go.json`
This disclosure is informational only. It does not transfer authority, approval rights, or risk ownership.
The human decision owner remains fully responsible for final content, evidence interpretation, and decision outcome.
AI output is never treated as evidence by default.
If an AI output influences a decision, the human owner must:
- link to the underlying source artifacts used, and
- record the AI output as a drafting aid or analysis note, not as primary evidence.
Every decision must remain defensible without the AI output present.
This principle originated in v1.x and remains unchanged in v2.0.0.
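A minimal sketch of how this might be recorded, assuming a hypothetical `analysis_notes` field (the principle is RGDS's; this field name is not):

```yaml
evidence:
  - ref: "tox-report-final-v2"   # the underlying source artifact stays the evidence
    completeness: complete
analysis_notes:                   # hypothetical field: AI output logged as drafting aid
  - "AI-drafted summary of tox-report-final-v2; reviewed and corrected by the owner"
```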
RGDS intentionally contains no bundled AI models, agents, or orchestration logic; this was true in v1.x and remains true in v2.0.0.
This is deliberate:
- the core problem in regulated programs is usually governance failure, not lack of analysis
- adding automation before decision discipline increases risk (silent changes, unclear ownership, weak audit trails)
- RGDS must remain usable in environments where AI is restricted or not trusted
AI can be layered later as optional tooling around RGDS (e.g., diffing, extraction, drafting), but the governed decision record and validation discipline remain the foundation.
```
rgds/
├── decision-log/
│   ├── decision-log.schema.json
│   ├── decision-log.schema.yaml
│   └── decision-log.template.yaml
├── examples/
│   ├── rgds-dec-0001.json
│   ├── rgds-dec-0002-no-go.json
│   ├── rgds-dec-0003-defer-required-evidence.json
│   ├── rgds-dec-0004-regulatory-interaction.json
│   ├── rgds-dec-0005-ind-conditional-go-author-at-risk.json
│   ├── rgds-dec-0006-ai-assisted-conditional-go.json
│   └── README.md
├── evaluation/
│   ├── evaluation-plan.md
│   ├── evidence-quality-rubric.md
│   └── scorecard-template.csv
├── docs/
│   ├── why-rgds-exists.md
│   ├── decision-log.md
│   ├── governance.md
│   ├── change-control-log.md
│   ├── ai-assistance-policy.md
│   └── role-decision-artifact-matrix.md
├── scripts/
│   ├── validate_decision_log.py
│   └── validate_all_examples.py
├── .github/workflows/
│   └── validate.yml
├── Makefile
└── requirements.txt
```
| File | What it is |
|---|---|
| `docs/why-rgds-exists.md` | Evidence-to-design rationale (signals → RGDS mechanisms) |
| `docs/decision-log.md` | How to interpret decision logs |
| `docs/governance.md` | Governance rules and enforcement intent |
This index provides direct links to all human-readable documentation in the RGDS repository. It is intended to help reviewers quickly locate authoritative explanations, governance rules, and evaluation criteria without navigating the full directory tree.
| File | Purpose |
|---|---|
| `README.md` | Primary orientation document explaining RGDS purpose, scope, governance stance, and how to read the repository |
| File | What it explains |
|---|---|
| `docs/why-rgds-exists.md` | Evidence-to-design rationale (delivery signals → RGDS mechanisms) |
| `docs/decision-log.md` | How to read, interpret, and review RGDS decision logs |
| `docs/governance.md` | Governance rules, authority separation, and enforcement intent |
| `docs/ai-assistance-policy.md` | Bounded AI usage policy: permitted use, prohibitions, disclosure, and controls |
| `docs/change-control-log.md` | Versioned record of governance and schema changes |
| `docs/role-decision-artifact-matrix.md` | Cross-role ownership matrix for decisions and supporting artifacts |
| File | Canonical scenario demonstrated |
|---|---|
| `examples/README.md` | How to read and compare example decisions |
| `examples/rgds-dec-0001.json` | Canonical `conditional_go` (explicit conditions, owned follow-ups) |
| `examples/rgds-dec-0002-no-go.json` | Canonical `no_go` with defensible rationale and re-entry logic |
| `examples/rgds-dec-0003-defer-required-evidence.json` | Canonical `defer` / abstain due to missing required evidence |
| `examples/rgds-dec-0004-regulatory-interaction.json` | Regulatory interaction or escalation decision logic |
| `examples/rgds-dec-0005-ind-conditional-go-author-at-risk.json` | IND-style `conditional_go` (author-at-risk drafting, reviewer triage, publishing lock points) |
| `examples/rgds-dec-0006-ai-assisted-conditional-go.json` | AI-assisted `conditional_go` (bounded AI disclosure, human authority preserved) |
All example decisions are schema-validated and demonstrate the intended RGDS operating model.
| File | Evaluation role |
|---|---|
| `evaluation/evaluation-plan.md` | How RGDS decisions are evaluated for quality and defensibility |
| `evaluation/evidence-quality-rubric.md` | Criteria for assessing evidence completeness and confidence |
| `evaluation/scorecard-template.csv` | Structured scorecard for decision review (CSV; used for scoring and audit evidence) |
RGDS defers AI authority boundaries and non-agentic constraints to a separate governance repository to preserve separation of concerns.
| Resource | Purpose |
|---|---|
| RGDS AI Governance (Covenants) | Non-agentic AI contract, authority boundaries, and removability guarantees |
| `rgds-ai-governance` | Canonical source of AI governance definitions |
- Executives / Approvers: `README.md` → one canonical example (`0001` or `0005`)
- Quality / Governance reviewers: `docs/governance.md` → `docs/decision-log.md` → evaluation docs
- AI governance reviewers: `docs/ai-assistance-policy.md` → AI Governance repository
- Auditors: example decisions + evaluation artifacts (traceability and completeness)
This index is intended to make RGDS review finite, navigable, and auditable.
RGDS prevents failure modes that routinely appear in regulated delivery:
- silent risk acceptance
- undocumented scope changes and downstream ripple effects
- unclear reviewer accountability
- decisions without fallback planning
- late discovery of misalignment after a gate closes
- false confidence created by undocumented placeholders
By forcing decisions, evidence, risk, ownership, and contingency into a single governed record, RGDS enables faster decisions without sacrificing auditability.
RGDS is written for:
- Principal AI Business Analysts
- Program and delivery leaders in regulated environments
- Quality, governance, and risk stakeholders
- Executives responsible for phase-gate approvals
It assumes familiarity with regulated delivery — not machine learning research.
v2.0.0 — Whitepaper-aligned reference implementation (breaking update).
Includes:
- decision log schema with enforced governance requirements
- mandatory options analysis, evidence completeness, and residual risk capture
- structured, bounded AI assistance disclosure (non-agentic by design)
- canonical examples grounded in real IND execution
- evaluation and governance artifacts
- CI validation of schema and semantic invariants
This repository is an independent case study, not a production system.