Logic-Gated Societies of Thought

Description

This repository implements a logic-gated reasoning engine inspired by Societies of Thought and Schema-Guided Reasoning (SGR). It uses a single LLM to simulate internal debate, risk analysis, and safety constraints, while producing a clear, user-facing answer governed by deterministic rules.


Problem

LLMs are good at reasoning, but:

  • They optimize for plausibility, not correctness
  • They hide uncertainty
  • They often ignore historical failures
  • Multi-agent systems are expensive, fragile, and hard to control

Schema-guided reasoning helps with structure, but does not enforce decisions or prevent unsafe optimism.


Solution

This project introduces a Logic-Gated Protocol:

  • Internal reasoning is structured as multiple cognitive roles (history, metrics, risk, economics)
  • The model must populate a strict schema
  • A deterministic rule system constrains the final verdict
  • The user always receives a clear, explicit answer

This creates a single-model alternative to multi-agent systems for many high-risk decision tasks.
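
To make “deterministic rule system” concrete: the verdict space can be a closed set rather than free text. A minimal sketch; only DELAY_FOR_DATA is confirmed by the example output below, the other members are hypothetical:

from enum import Enum

class Verdict(str, Enum):
    # Closed verdict space: the gates select among these; the model cannot invent new ones.
    PROCEED = "PROCEED"                 # hypothetical member
    DELAY_FOR_DATA = "DELAY_FOR_DATA"   # appears in the example output below
    REJECT = "REJECT"                   # hypothetical member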


How It Works

  1. The LLM fills a structured schema (Pydantic models; a sketch follows this list)
  2. Each section represents a different “perspective”:
    • Historical precedent
    • Metric alignment
    • Harm severity
    • Safety economics
  3. Hard logic rules restrict the final verdict
  4. A UserFacingAnswer is generated only after constraints are applied
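
A minimal sketch of what such a schema could look like. Model and field names are illustrative, except final_verdict, user_answer, and direct_answer, which the usage example below relies on; the authoritative definitions live in protocol.py:

from pydantic import BaseModel, Field

class HistoricalPrecedent(BaseModel):
    similar_past_attempts: list[str]
    known_failure_modes: list[str]

class HarmSeverity(BaseModel):
    worst_case: str
    severity_score: int = Field(ge=1, le=10)

class UserFacingAnswer(BaseModel):
    direct_answer: str

class DecisionSchema(BaseModel):
    history: HistoricalPrecedent      # one sub-model per internal “perspective”
    metric_alignment: str
    harm: HarmSeverity
    safety_economics: str
    final_verdict: str                # e.g. "DELAY_FOR_DATA"; constrained by the gates
    user_answer: UserFacingAnswer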

Why This Is Different from Plain Schema-Guided Reasoning

SGR:

  • Ensures format
  • Improves consistency
  • Still allows free-form optimism

This system:

  • Forces explicit tradeoffs
  • Penalizes known failure modes (see the gate sketch after this list)
  • Separates internal debate from external answer
  • Can replace multi-agent systems when:
    • Agents would mostly argue, not act
    • You need auditability and determinism
    • Cost and latency matter
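
As a rough illustration of what a hard rule looks like, building on the DecisionSchema sketch above. The specific gate conditions are invented for illustration; the real rules live in protocol.py:

def apply_logic_gates(analysis: DecisionSchema) -> str:
    # Deterministic post-processing in plain Python: no second model call.
    # Gates may only make the verdict more conservative, never less.
    verdict = analysis.final_verdict
    if analysis.history.known_failure_modes and verdict == "PROCEED":
        verdict = "DELAY_FOR_DATA"  # known failures veto unqualified approval
    if analysis.harm.severity_score >= 8:
        verdict = "REJECT"          # severe worst-case harm overrides optimism
    return verdict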

Example Usage

task = "Create AI hologram for restaurants, which can talk to customers and provides menu details."

result = llm_structured.invoke(messages)

print(result.final_verdict)
print(result.user_answer.direct_answer)

Example output:

FINAL VERDICT: DELAY_FOR_DATA
Answer: This idea is feasible, but real-world deployment risks are not yet understood.

Code Location

All logic lives in protocol.py, including:

  • Full schema definitions
  • Prompt rules
  • Example invocation
  • Example output handling

There are no hidden components.


Architecture (High Level)

User Task
   ↓
Structured Prompt
   ↓
Single LLM
   ↓
Schema Population
   ↓
Logic Gates
   ↓
User-Facing Answer

No agent orchestration. No async messaging. No hidden memory.
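
In code, the whole flow is roughly one function. A sketch reusing the hypothetical names from the snippets above (PROTOCOL_PROMPT, DecisionSchema, apply_logic_gates), with llm_structured as the schema-bound model from protocol.py:

def run_protocol(llm_structured, task: str) -> UserFacingAnswer:
    # One pass through the diagram above: prompt -> schema -> gates -> answer.
    messages = [("system", PROTOCOL_PROMPT), ("user", task)]
    analysis = llm_structured.invoke(messages)            # single LLM call fills DecisionSchema
    analysis.final_verdict = apply_logic_gates(analysis)  # hard rules, no second model call
    return analysis.user_answer                           # only the gated answer reaches the user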


Where This Is Useful

Good fit:

  • Policy evaluation
  • High-stakes product decisions
  • Safety reviews
  • Governance tooling
  • “Should we do this?” questions

Bad fit:

  • Creative writing
  • Open-ended brainstorming
  • Tasks where uncertainty is the goal
  • Situations requiring real-time negotiation between agents

Not Implemented (Important)

The current version does not include:

  • Self-Correction Loop (model re-evaluating its own verdict after failure)
  • Self-Challenge Loop (explicit adversarial critique of its own reasoning)
  • Agent-of-Record (persistent identity responsible for decisions over time)

These are intentionally left out to keep the core mechanism simple and inspectable.


Inspiration

This work is inspired by:

  • Societies of Thought
  • Schema-Guided Reasoning (SGR)

This repository is not an implementation of those papers, but a practical synthesis.
