
RGDS — Regulated Gate Decision Support


A human-governed system for producing defensible, phase-gate decisions in regulated environments.

This repository demonstrates the RGDS operating model: human-governed, evidence-linked, schema-validated, and explicitly non-agentic, designed to preserve decision accountability, auditability, and regulatory trust.


What Changed in v2.0.0

  • Decision logs now require options enumeration (at least two).
  • Evidence items must declare completeness: complete, partial, or placeholder.
  • Residual risk is captured explicitly (what remains true after you proceed).
  • Named human accountability is required (decision owner + approvers).
  • AI assistance disclosure is structured and mandatory when AI is used.

Decision Log Minimum Requirements

A decision log is considered governance-complete only when it records:

  • Decision question + decision deadline
  • Options considered (at least two)
  • Evidence base with completeness classification
  • Risk posture + residual risk
  • Outcome + (if conditional) owned conditions with verification evidence
  • Named human accountability (owner + approvers)
  • AI assistance disclosure (if AI was used)
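The requirements above can be sketched as a simple completeness check. This is an illustrative Python sketch only: the field names (`decision_question`, `options_considered`, `residual_risk`, and so on) are hypothetical, and the authoritative definition is the repository's decision-log.schema.json.

```python
# Hypothetical sketch: the real rules live in decision-log.schema.json.
# All field names below are illustrative, not the repository's actual keys.

def is_governance_complete(log: dict) -> list[str]:
    """Return a list of governance gaps; an empty list means governance-complete."""
    gaps = []
    if not log.get("decision_question") or not log.get("decision_deadline"):
        gaps.append("missing decision question or deadline")
    if len(log.get("options_considered", [])) < 2:
        gaps.append("fewer than two options enumerated")
    for item in log.get("evidence", []):
        if item.get("completeness") not in {"complete", "partial", "placeholder"}:
            gaps.append(f"evidence {item.get('id')} lacks completeness classification")
    if not log.get("residual_risk"):
        gaps.append("residual risk not captured")
    acc = log.get("accountability", {})
    if not acc.get("owner") or not acc.get("approvers"):
        gaps.append("named owner/approvers missing")
    if log.get("ai_assistance", {}).get("used") and not log["ai_assistance"].get("tool_name"):
        gaps.append("AI used but disclosure incomplete")
    return gaps

example = {
    "decision_question": "Proceed to IND submission at Gate 3?",
    "decision_deadline": "2025-06-30",
    "options_considered": ["conditional_go", "defer"],
    "evidence": [{"id": "EV-01", "completeness": "partial"}],
    "residual_risk": "Tox report is partial; final report due before submission.",
    "accountability": {"owner": "J. Doe", "approvers": ["QA Lead"]},
}
print(is_governance_complete(example))  # [] → governance-complete
```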


Canonical Reference Decisions

If you read only one place in this repository, start here.

The following canonical decision records demonstrate the RGDS operating model in concrete, reviewable form.

| File | Canonical scenario demonstrated |
| --- | --- |
| examples/README.md | How to read and compare example decisions |
| examples/rgds-dec-0001.json | Canonical conditional_go (explicit conditions, owned follow-ups) |
| examples/rgds-dec-0002-no-go.json | Canonical no_go with defensible rationale and re-entry logic |
| examples/rgds-dec-0003-defer-required-evidence.json | Canonical defer / abstain due to missing required evidence |
| examples/rgds-dec-0004-regulatory-interaction.json | Regulatory interaction or escalation decision logic |
| examples/rgds-dec-0005-ind-conditional-go-author-at-risk.json | IND-style conditional_go (author-at-risk drafting, reviewer triage, publishing lock points) |
| examples/rgds-dec-0006-ai-assisted-conditional-go.json | AI-assisted conditional_go (bounded AI disclosure, human authority preserved) |

These examples demonstrate the intended RGDS operating model: human-governed, evidence-linked, schema-validated, and explicitly non-agentic.


Governance Baseline Introduced in v1.4.0

The following governance concepts were first made explicit in v1.4.0, based on observed failure modes in real IND delivery and cross-functional review. They are now enforced and extended in v2.0.0.

Concepts first formalized in v1.4.0

| Concept | What is now explicit | Why it matters |
| --- | --- | --- |
| Evidence completeness | complete / partial / placeholder | prevents "false confidence" from undocumented placeholders |
| Downstream propagation | declarations when evidence or decisions change | prevents silent ripple effects across artifacts |
| Risk posture | benchmarking, not just declaration | forces defensible rationale for tolerance/assumptions |
| Decision authority scope | scope + escalation paths | prevents unclear accountability and "who approved this?" gaps |
| Bounded AI disclosure | confidence band + human override | makes AI use reviewable without changing accountability |

These changes do not introduce automation or autonomy.
They tighten decision defensibility.
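As a sketch of how the evidence-completeness concept above might be enforced in practice (the field names and flag labels here are illustrative assumptions, not the repository's actual schema keys):

```python
# Illustrative only: the three completeness values come from the table above.
ALLOWED = {"complete", "partial", "placeholder"}

def flag_false_confidence(evidence_items):
    """Surface placeholders and undeclared items so they cannot pass silently."""
    flags = []
    for item in evidence_items:
        completeness = item.get("completeness")
        if completeness not in ALLOWED:
            flags.append((item["id"], "undeclared"))
        elif completeness == "placeholder":
            flags.append((item["id"], "placeholder"))
    return flags

items = [
    {"id": "EV-01", "completeness": "complete"},
    {"id": "EV-02", "completeness": "placeholder"},
    {"id": "EV-03"},  # no declaration at all
]
print(flag_false_confidence(items))  # [('EV-02', 'placeholder'), ('EV-03', 'undeclared')]
```

A reviewer would then have to explicitly govern each flagged item rather than discover it after the gate closes.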


What Problem This Solves

In regulated programs, decisions often fail after they are made.

Not because teams lacked expertise, but because:

  • scope changed late without being logged
  • assumptions and risk posture were implicit
  • reviewer routing decisions were undocumented
  • contingency plans existed only informally
  • decision context could not be reconstructed later

Traditional documentation emphasizes inputs (documents, analyses, reports).
RGDS treats the decision itself as the primary artifact.

RGDS is informed by synthesis of real delivery experience, including public IND submission discussions, regulatory strategy perspectives, and operational interviews. These sources are treated as signal inputs, not prescriptions, and are translated into explicit, auditable decision structure.

For the evidence-to-design rationale behind RGDS, see:
docs/why-rgds-exists.md


What This Repository Is (and Is Not)

This is

| Statement | Practical meaning |
| --- | --- |
| a decision-support operating model for phase-gated workflows | decisions become governed artifacts, not informal meeting outcomes |
| a structured method for making decisions defensible at the time they are made | captures rationale before memory decay and handoff loss |
| a human-governed system with explicit ownership and approval | named owner + reviewer(s) + approver(s), with escalation |
| compatible with regulated delivery, quality review, and audit expectations | decision records designed for audit reconstruction |
| a schema-backed decision log system that makes decision context, risk, and ownership auditable | required fields and validation discipline enforce completeness |

This is not

| Statement | What is explicitly excluded |
| --- | --- |
| an autonomous decision system | no autonomous gate outcomes |
| an AI agent platform | no agents, orchestration, or self-directed execution |
| a recommendation engine | no "system decides" or "system recommends" authority |
| a compliance checkbox or document dump | evidence must be linked and interpreted as decision inputs |

No component in this repository is allowed to silently decide, approve, or accept risk.


Important Notice

RGDS is an independent reference implementation intended to demonstrate decision-governance patterns in regulated, phase-gated environments.

It is:

  • not a production system
  • not regulatory advice
  • not a compliance framework
  • not an autonomous or agentic system

RGDS does not make decisions.
It records how decisions are made, governed, and defended.

All regulatory, quality, and approval responsibilities remain with the human decision-makers and organizations using this material.


How to Read This Repository (Non-Technical Overview)

This repository is organized around decisions, not tools, models, or automation.

RGDS is intentionally learned by example first, with documentation and schemas serving to explain what those examples enforce.

Start with the examples/ directory.

A typical review path

| Step | What to read | Why |
| --- | --- | --- |
| 1 | one canonical decision example (0001, 0003, or 0005) | see the operating model in a real decision |
| 2 | the decision-log schema (decision-log.schema.json) | understand what is enforced vs. optional |
| 3 | the evaluation plan (evaluation/evaluation-plan.md) | see how decision quality is assessed |

Each decision record represents a single, concrete phase-gate outcome (e.g., conditional-go, defer, no-go, or escalation).

Documentation explains why the system is designed this way. Examples show how it actually works.

Each decision record shows

| Dimension | What is captured |
| --- | --- |
| Decision | what was decided |
| Rationale | why it was decided |
| Evidence | what evidence was used (and its quality) |
| Risk | what risks and gaps were accepted |
| Accountability | who owned, reviewed, and approved the decision |
| Controls | what conditions, follow-ups, or fallback actions exist |
| Evidence completeness | complete / partial / placeholder |
| Propagation | whether downstream artifacts must be updated if the decision changes |

Executives, quality reviewers, and auditors should be able to understand why a decision was reasonable without reading code.

This repository reflects the role of a principal-level analyst translating complex delivery realities into durable decision infrastructure.

It is intended to demonstrate how delivery experience, governance constraints, and applied AI considerations can be translated into defensible decision systems.


Core Concepts

Decision Log

The primary artifact of RGDS.

A Decision Log records:

  • the decision question and outcome
  • options considered
  • evidence used with confidence ratings
  • known gaps, assumptions, and scope changes
  • explicit risk posture and residual risk
  • named decision owner, reviewers, and approvers
  • conditions, actions, and fallback plans
  • a durable audit trail

The Decision Log is the system of record for governance.


IND Delivery Alignment (v1.3 → v1.4)

RGDS formalizes execution realities observed during IND preparation:

| Execution reality observed in IND delivery | RGDS mechanism / field |
| --- | --- |
| phase-appropriate tolerance and trade-offs must be stated | risk_posture |
| placeholders must be governed and verified | author-at-risk drafting + evidence completeness |
| reviewer triage decisions must be explicit | review_plan |
| late discoveries and scope volatility must be auditable | scope_change_events[] |
| cross-module surprises must be prevented | dependency_map[] |
| evidence readiness is rate-limiting | data_readiness_status[] (formalized under evidence completeness in v1.4.0) |
| publishing is constrained by lock points | publishing_plan |
| decisions must tie back to program intent | tpp_links[] |

The existing decision_category + regulatory_context fields model pre-IND / FDA interaction strategy as a first-class decision.

These additions reflect real execution realities without introducing automation risk.

For a cross-role view of who owns what, see:
docs/role-decision-artifact-matrix.md


Evaluation

RGDS evaluates:

| Evaluation focus | How it is assessed |
| --- | --- |
| authority scope, escalation paths, and downstream propagation requirements | structured review criteria |
| AI assistance disclosure (when applicable) | explicit decision-log disclosure fields |
| evidence completeness, risk posture, propagation awareness | scorecards and rubrics |

Evaluation focuses on decision quality, not model performance in isolation.

Evaluation is performed through structured review criteria and scorecards, not automated model metrics.


Governance

Governance is encoded directly into the decision artifact:

  • explicit ownership and accountability
  • separation of reviewers and approvers
  • support for conditional-go, defer, and no-go outcomes
  • escalation and re-review rules
  • bounded, disclosed AI assistance

Stopping early is treated as risk reduction, not failure.


AI Governance Reference

RGDS supports optional, bounded AI assistance under explicit governance constraints.

The formal governance covenants that define:

  • permitted AI assistance,
  • explicit prohibitions (including non-agentic requirements), and
  • human ownership and approval obligations

are maintained externally to preserve separation of concerns.

RGDS remains fully valid in the absence of AI.

For the authoritative governance definition, see:
RGDS AI Governance (Covenants)
https://github.com/mj3b/rgds-ai-governance


Where AI Fits in the System

RGDS is valid with no AI at all.

When AI is used, it is used only as bounded assistance to help humans produce or review decision artifacts faster—without changing who owns the decision or what counts as evidence.

Permitted AI-Assisted Tasks (Bounded)

AI may be used for reviewable support tasks such as:

| Task | Example use | Constraint |
| --- | --- | --- |
| Summarization | draft summary of a source report or meeting notes | human edits and signs off |
| Extraction | pull structured fields (dates, study IDs, endpoints, risks) | output treated as draft |
| Comparison / diffing | highlight inconsistencies (e.g., IB vs M2.6 vs Protocol) | human resolves conflicts |
| Structured drafting | draft decision-log sections (context, options, risks) | owner finalizes content |
| Checklist support | flag missing fields or schema mismatches | does not "approve" compliance |

Prohibited Uses (Non-Agentic Boundary)

AI must not:

  • decide, approve, or reject a gate outcome
  • act as an “evidence of record” source by default
  • silently accept scope changes, risk posture, or reviewer routing
  • execute actions (publishing, submissions, notifications) without explicit human authorization
  • fabricate citations, source data, or regulatory rationale

What Gets Logged When AI Is Used (v2.0.0)

If AI assistance is used for a decision artifact, the usage must be disclosed in the decision log using explicit, schema-defined fields.

These fields are enforced by schema, governed by policy, and demonstrated in canonical examples.

| Disclosure | Field | Meaning |
| --- | --- | --- |
| AI used? | ai_assistance.used | transparency |
| Tool identity | ai_assistance.tool_name | which AI system was used |
| Purpose | ai_assistance.tool_purpose | what task the AI assisted with |
| Human review | ai_assistance.human_review[] | review tier(s) and findings |
| Human overrides | ai_assistance.human_override_log[] | corrective interventions and rationale |
| AI risk assessment | ai_assistance.ai_risk_assessment | confidence band and documented cautions |

This disclosure is informational only. It does not transfer authority, approval rights, or risk ownership.

The human decision owner remains fully responsible for final content, evidence interpretation, and decision outcome.
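The disclosure fields above might be checked as follows. The dict shape and the required-field set are assumptions for illustration; the authoritative field definitions live in decision-log.schema.json, not here.

```python
# Hypothetical shape of the ai_assistance block, for illustration only.
disclosure = {
    "used": True,
    "tool_name": "example-llm",  # assumed value, not a real tool reference
    "tool_purpose": "draft summary of source report",
    "human_review": [{"tier": "owner", "findings": "two corrections applied"}],
    "human_override_log": [{"change": "reworded risk rationale",
                            "reason": "overstated confidence"}],
    "ai_risk_assessment": {"confidence_band": "medium",
                           "cautions": ["verify study IDs"]},
}

# Assumed subset of fields that must accompany used=True.
REQUIRED_WHEN_USED = {"tool_name", "tool_purpose", "human_review", "ai_risk_assessment"}

def disclosure_gaps(block: dict) -> set:
    """If AI was used, every required disclosure field must be present."""
    if not block.get("used"):
        return set()  # no AI, no disclosure obligation
    return REQUIRED_WHEN_USED - block.keys()

print(disclosure_gaps(disclosure))  # set() → disclosure complete
```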

Evidence Rule

AI output is never treated as evidence by default.

If an AI output influences a decision, the human owner must:

  • link to the underlying source artifacts used, and
  • record the AI output as a drafting aid or analysis note, not as primary evidence.

Every decision must remain defensible without the AI output present.
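A hedged sketch of this evidence rule, assuming each evidence item declares a hypothetical `kind` field and a `source_link` to the underlying artifact (neither name is taken from the actual schema):

```python
# Assumed convention: AI outputs are recorded as drafting aids,
# never as primary evidence. Field names are illustrative.
def defensible_without_ai(evidence_items) -> bool:
    """True if linked, non-AI evidence remains after removing AI outputs."""
    primary = [e for e in evidence_items if e.get("kind") != "ai_drafting_aid"]
    return len(primary) > 0 and all(e.get("source_link") for e in primary)

evidence = [
    {"id": "EV-10", "kind": "source_document", "source_link": "reports/tox-final.pdf"},
    {"id": "EV-11", "kind": "ai_drafting_aid"},  # noted, but not load-bearing
]
print(defensible_without_ai(evidence))  # True
```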

Why RGDS Contains No Built-In AI Components (Design Principle)

This principle originated in v1.x and remains unchanged in v2.0.0: RGDS intentionally contains no bundled AI models, agents, or orchestration logic.

This is deliberate:

  • the core problem in regulated programs is usually governance failure, not lack of analysis
  • adding automation before decision discipline increases risk (silent changes, unclear ownership, weak audit trails)
  • RGDS must remain usable in environments where AI is restricted or not trusted

AI can be layered later as optional tooling around RGDS (e.g., diffing, extraction, drafting), but the governed decision record and validation discipline remain the foundation.


Repository Structure

rgds/
├── decision-log/
│   ├── decision-log.schema.json
│   ├── decision-log.schema.yaml
│   └── decision-log.template.yaml
├── examples/
│   ├── rgds-dec-0001.json
│   ├── rgds-dec-0002-no-go.json
│   ├── rgds-dec-0003-defer-required-evidence.json
│   ├── rgds-dec-0004-regulatory-interaction.json
│   ├── rgds-dec-0005-ind-conditional-go-author-at-risk.json
│   ├── rgds-dec-0006-ai-assisted-conditional-go.json
│   └── README.md
├── evaluation/
│   ├── evaluation-plan.md
│   ├── evidence-quality-rubric.md
│   └── scorecard-template.csv
├── docs/
│   ├── why-rgds-exists.md
│   ├── decision-log.md
│   ├── governance.md
│   ├── change-control-log.md
│   ├── ai-assistance-policy.md
│   └── role-decision-artifact-matrix.md
├── scripts/
│   ├── validate_decision_log.py
│   └── validate_all_examples.py
├── .github/workflows/
│   └── validate.yml
├── Makefile
└── requirements.txt
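The validation scripts themselves are not reproduced in this README. As a rough, stdlib-only sketch of the kind of required-field check `scripts/validate_decision_log.py` might perform (the real script presumably validates against the full decision-log.schema.json; the field list below is an illustrative subset):

```python
import json

# Illustrative subset of required fields, not the actual schema.
REQUIRED_TOP_LEVEL = [
    "decision_question", "options_considered", "evidence",
    "residual_risk", "accountability",
]

def validate_text(raw: str) -> list[str]:
    """Parse a decision log and report missing required top-level fields."""
    try:
        log = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    return [field for field in REQUIRED_TOP_LEVEL if field not in log]

raw = json.dumps({"decision_question": "Go?", "options_considered": ["go", "no_go"]})
print(validate_text(raw))  # ['evidence', 'residual_risk', 'accountability']
```

A real schema validator would also check field types, enums, and nested structure; this sketch only shows why a missing field fails CI rather than passing silently.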

Key Docs

| File | What it is |
| --- | --- |
| docs/why-rgds-exists.md | Evidence-to-design rationale (signals → RGDS mechanisms) |
| docs/decision-log.md | How to interpret decision logs |
| docs/governance.md | Governance rules and enforcement intent |

Complete Documentation Index

This index provides direct links to all human-readable documentation in the RGDS repository. It is intended to help reviewers quickly locate authoritative explanations, governance rules, and evaluation criteria without navigating the full directory tree.


Root Orientation

| File | Purpose |
| --- | --- |
| README.md | Primary orientation document explaining RGDS purpose, scope, governance stance, and how to read the repository |

docs/ — Governance, Rationale, and Interpretation

| File | What it explains |
| --- | --- |
| docs/why-rgds-exists.md | Evidence-to-design rationale (delivery signals → RGDS mechanisms) |
| docs/decision-log.md | How to read, interpret, and review RGDS decision logs |
| docs/governance.md | Governance rules, authority separation, and enforcement intent |
| docs/ai-assistance-policy.md | Bounded AI usage policy: permitted use, prohibitions, disclosure, and controls |
| docs/change-control-log.md | Versioned record of governance and schema changes |
| docs/role-decision-artifact-matrix.md | Cross-role ownership matrix for decisions and supporting artifacts |

examples/ — Canonical Decision Records

| File | Canonical scenario demonstrated |
| --- | --- |
| examples/README.md | How to read and compare example decisions |
| examples/rgds-dec-0001.json | Canonical conditional_go (explicit conditions, owned follow-ups) |
| examples/rgds-dec-0002-no-go.json | Canonical no_go with defensible rationale and re-entry logic |
| examples/rgds-dec-0003-defer-required-evidence.json | Canonical defer / abstain due to missing required evidence |
| examples/rgds-dec-0004-regulatory-interaction.json | Regulatory interaction or escalation decision logic |
| examples/rgds-dec-0005-ind-conditional-go-author-at-risk.json | IND-style conditional_go (author-at-risk drafting, reviewer triage, publishing lock points) |
| examples/rgds-dec-0006-ai-assisted-conditional-go.json | AI-assisted conditional_go (bounded AI disclosure, human authority preserved) |

All example decisions are schema-validated and demonstrate the intended RGDS operating model.


evaluation/ — Decision Quality and Governance Assessment

| File | Evaluation role |
| --- | --- |
| evaluation/evaluation-plan.md | How RGDS decisions are evaluated for quality and defensibility |
| evaluation/evidence-quality-rubric.md | Criteria for assessing evidence completeness and confidence |
| evaluation/scorecard-template.csv | Structured scorecard for decision review (CSV; used for scoring and audit evidence) |

External Governance Reference (Authoritative)

RGDS defers AI authority boundaries and non-agentic constraints to a separate governance repository to preserve separation of concerns.

| Resource | Purpose |
| --- | --- |
| RGDS AI Governance (Covenants) | Non-agentic AI contract, authority boundaries, and removability guarantees |
| rgds-ai-governance | Canonical source of AI governance definitions |

Reviewer Navigation Guide

  • Executives / Approvers: README.md → one canonical example (0001 or 0005)
  • Quality / Governance reviewers: docs/governance.md → docs/decision-log.md → evaluation docs
  • AI governance reviewers: docs/ai-assistance-policy.md → AI Governance repository
  • Auditors: example decisions + evaluation artifacts (traceability and completeness)

This index is intended to make RGDS review finite, navigable, and auditable.


Why This Matters in Production

RGDS prevents failure modes that routinely appear in regulated delivery:

  • silent risk acceptance
  • undocumented scope changes and downstream ripple effects
  • unclear reviewer accountability
  • decisions without fallback planning
  • late discovery of misalignment after a gate closes
  • false confidence created by undocumented placeholders

By forcing decisions, evidence, risk, ownership, and contingency into a single governed record, RGDS enables faster decisions without sacrificing auditability.


Who This Is For

RGDS is written for:

  • Principal AI Business Analysts
  • Program and delivery leaders in regulated environments
  • Quality, governance, and risk stakeholders
  • Executives responsible for phase-gate approvals

It assumes familiarity with regulated delivery — not machine learning research.


Status

v2.0.0 — Whitepaper-aligned reference implementation (breaking update).

Includes:

  • decision log schema with enforced governance requirements
  • mandatory options analysis, evidence completeness, and residual risk capture
  • structured, bounded AI assistance disclosure (non-agentic by design)
  • canonical examples grounded in real IND execution
  • evaluation and governance artifacts
  • CI validation of schema and semantic invariants
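One such semantic invariant can be sketched as follows. The rule and field names (`outcome`, `conditions`, `owner`) are illustrative assumptions, not the repository's actual CI logic; the point is that invariants go beyond field presence to relationships between fields.

```python
# Illustrative semantic invariant: a conditional_go outcome must carry
# at least one condition, and every condition must have a named owner.
def check_conditional_go(log: dict) -> list[str]:
    errors = []
    if log.get("outcome") == "conditional_go":
        conditions = log.get("conditions", [])
        if not conditions:
            errors.append("conditional_go with no conditions")
        elif any(not c.get("owner") for c in conditions):
            errors.append("condition without a named owner")
    return errors

ok = {"outcome": "conditional_go",
      "conditions": [{"text": "final tox report filed", "owner": "Nonclinical Lead"}]}
print(check_conditional_go(ok))  # [] → invariant satisfied
```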

This repository is an independent case study, not a production system.