
PNB: Refactored Neuro-Bit Architecture

Overview

This repository hosts the next-generation neural architecture for PNB (Project Neuro-Bit). The goal is to integrate four cutting-edge, non-mainstream neural paradigms to build an Intel XMX-accelerated, Cache-less, Bio-inspired Dynamic Neural Architecture.

The Four Pillars (Core Architecture)

This project is built upon four fundamental pillars, designed to overcome the limitations of traditional deep learning (memory bottlenecks, lack of adaptability):

1. Deep Equilibrium Models (DEQ) - The Backbone

  • Concept: An effectively infinite-depth network that reuses a single layer's parameters, computing its output as the fixed point (equilibrium) of that layer via iterative root-finding.
  • Advantage: O(1) Memory. Well suited to "Cache-less" operation, since no intermediate activations for hundreds of layers need to be stored; see the sketch below.
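
A minimal NumPy sketch of this forward pass, assuming a hypothetical tanh cell f(z, x) = tanh(Wz + Ux + b). The solver here is plain fixed-point iteration, whereas production DEQs usually use Broyden or Anderson acceleration; every name below is illustrative, not part of this repository:

```python
import numpy as np

def deq_forward(x, W, U, b, tol=1e-6, max_iter=100):
    """Solve z* = tanh(W @ z* + U @ x + b) by fixed-point iteration.

    Only the current iterate is kept, so memory is O(1) in depth:
    the same parameters (W, U, b) stand in for every implicit layer.
    """
    z = np.zeros_like(b)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
d = 8
W = rng.normal(scale=0.3 / np.sqrt(d), size=(d, d))  # small norm -> contraction
U = rng.normal(size=(d, d))
b = np.zeros(d)
z_star = deq_forward(rng.normal(size=d), W, U, b)
print(z_star.shape)  # (8,)
```

Keeping W's spectral norm small makes the cell a contraction, which is what guarantees the iteration converges to a unique equilibrium.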

2. Modern Hopfield Networks ("Hopfield Networks is All You Need") - The Memory

  • Concept: Reinterprets the Transformer attention mechanism as a massive-capacity "Associative Memory Network": stored patterns serve as keys and values, and recall is a single attention step.
  • Advantage: One-shot Retrieval. Recall typically converges in a single update, giving the network powerful memory and pattern-completion capabilities, as sketched below.
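
A toy NumPy illustration of the retrieval rule ξ ← X softmax(β Xᵀξ), which is exactly one attention step with the stored patterns X acting as both keys and values; the function names and the β value are assumptions for the demo, not this project's API:

```python
import numpy as np

def hopfield_retrieve(X, xi, beta=8.0, steps=1):
    """Modern Hopfield update: xi <- X @ softmax(beta * X.T @ xi).

    Columns of X are stored patterns; with a large beta, one step
    snaps a noisy query onto the nearest stored pattern.
    """
    for _ in range(steps):
        scores = beta * (X.T @ xi)
        a = np.exp(scores - scores.max())   # numerically stable softmax
        xi = X @ (a / a.sum())
    return xi

rng = np.random.default_rng(1)
d, n = 16, 5
X = rng.choice([-1.0, 1.0], size=(d, n))   # stored +/-1 patterns
probe = X[:, 0].copy()
probe[:4] *= -1.0                          # corrupt 4 of 16 entries
out = hopfield_retrieve(X, probe)
print(np.array_equal(np.sign(out), X[:, 0]))  # expected: True, pattern completed
```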

3. Tripartite Liquid State Machine (Neuron-Glia) - The Dynamics

  • Concept: Introduces "Astrocytes" (star-shaped glial cells) as a third component alongside neurons and synapses. They do not transmit signals themselves; they listen to neuronal activity and dynamically regulate synaptic weights (Plasticity).
  • Advantage: Adaptability. The network can self-regulate toward the "Edge of Chaos", the regime of maximal computational power; a toy version appears below.
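
A deliberately crude NumPy sketch of the idea, assuming a hypothetical homeostatic rule in which a slow astrocyte "calcium" trace integrates population activity and globally rescales the reservoir weights toward a target activity level (real neuron-glia models are far richer than this):

```python
import numpy as np

def astrocyte_step(W, activity, ca, tau=0.95, gain=0.02, target=0.15):
    """Slow glial feedback: integrate population activity into a
    calcium trace, then scale all synapses toward a target level."""
    ca = tau * ca + (1 - tau) * activity    # the astrocyte "listens"
    W = W * (1.0 + gain * (target - ca))    # global up/down regulation
    return W, ca

rng = np.random.default_rng(2)
n = 50
W = rng.normal(scale=1.5 / np.sqrt(n), size=(n, n))  # starts too excitable
state, ca = np.zeros(n), 0.0
for t in range(500):
    state = np.tanh(W @ state + 0.3 * rng.normal(size=n))
    firing = (np.abs(state) > 0.5).mean()            # crude activity readout
    W, ca = astrocyte_step(W, firing, ca)
print(round(ca, 2))  # should have settled near the 0.15 target
```

The negative-feedback loop shrinks the weights while activity is too high and grows them while it is too low, which is the homeostatic mechanism behind "self-regulating to the edge of chaos".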

4. Equilibrium Propagation (EqProp) - The Learning

  • Concept: A physics-inspired, energy-based learning rule that replaces Backpropagation.
  • Advantage: Bio-plausible & Ultra-low Memory. No massive Computation Graph is stored for differentiation; the weight update is obtained by contrasting two equilibrium states of the same network (a free phase and a weakly clamped phase), as sketched below.
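
A compact NumPy sketch of EqProp on a small continuous Hopfield-style energy network: both phases relax the same dynamics, and the update contrasts co-activations between the two equilibria. Network size, rates, and the nudge strength beta are illustrative assumptions, not project settings:

```python
import numpy as np

rho = np.tanh
def drho(s):
    return 1.0 - np.tanh(s) ** 2

def relax(s, W, b, x, x_idx, y=None, y_idx=None, beta=0.0, steps=300, dt=0.2):
    """Settle at an energy minimum: free phase if beta == 0, weakly
    clamped toward the target y on the output units if beta > 0."""
    for _ in range(steps):
        s[x_idx] = x                                 # inputs are held fixed
        force = -s + drho(s) * (W @ rho(s) + b)      # -dE/ds of the Hopfield energy
        if beta > 0.0:
            force[y_idx] += beta * (y - s[y_idx])    # gentle nudge toward the target
        s = s + dt * force
    s[x_idx] = x
    return s

def eqprop_update(W, s_free, s_clamped, beta, lr=0.1):
    """Contrastive update: Delta W ~ (1/beta) * (co-activations at the
    clamped equilibrium minus co-activations at the free equilibrium)."""
    r0, rb = rho(s_free), rho(s_clamped)
    dW = (lr / beta) * (np.outer(rb, rb) - np.outer(r0, r0))
    np.fill_diagonal(dW, 0.0)
    return W + (dW + dW.T) / 2.0                     # W stays symmetric

# one training step on a tiny 6-unit network: units 0-1 input, unit 5 output
rng = np.random.default_rng(3)
n = 6
W = rng.normal(scale=0.1, size=(n, n)); W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
b = np.zeros(n)
x_idx, y_idx = np.array([0, 1]), np.array([5])
x, y = np.array([1.0, -1.0]), np.array([1.0])
s_free = relax(np.zeros(n), W, b, x, x_idx)
s_clamped = relax(s_free.copy(), W, b, x, x_idx, y=y, y_idx=y_idx, beta=0.5)
W = eqprop_update(W, s_free, s_clamped, beta=0.5)
```

Note that the only state the learner ever needs is the two equilibria themselves, which is why EqProp pairs naturally with a cache-less DEQ backbone.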

Synthesis: PNB Architecture Strategy

  • Backbone: DEQ for infinite-depth recursive structure (XMX-accelerated dense compute).
  • Memory: Hopfield Layer replacing traditional layers for associative recall.
  • Dynamics: Astrocyte mechanism for dynamic weight regulation.
  • Learning: EqProp for training, enabling true Cache-less On-chip Learning.
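
To make the composition concrete, here is an illustrative-only NumPy skeleton (not the project's actual API): a DEQ fixed-point loop whose cell is a modern-Hopfield retrieval, with an astrocyte-style gain regulating activity. Training such a block with EqProp would contrast two of these equilibria, so no activation cache is needed:

```python
import numpy as np

class PNBBlock:
    """Hypothetical fusion of the four pillars; all names are
    placeholders chosen for this sketch."""

    def __init__(self, patterns, beta=4.0, tau=0.95, target=0.2):
        self.X = patterns        # Hopfield memory, one pattern per column
        self.beta = beta
        self.gain = 1.0          # astrocyte-regulated global gain
        self.tau, self.target, self.ca = tau, target, 0.0

    def cell(self, z, x):
        scores = self.beta * (self.X.T @ (self.gain * (z + x)))
        a = np.exp(scores - scores.max())
        return self.X @ (a / a.sum())        # one-shot associative recall

    def forward(self, x, tol=1e-5, max_iter=50):
        z = np.zeros(self.X.shape[0])
        for _ in range(max_iter):            # DEQ: O(1)-memory fixed point
            z_next = self.cell(z, x)
            if np.linalg.norm(z_next - z) < tol:
                z = z_next
                break
            z = z_next
        self.ca = self.tau * self.ca + (1 - self.tau) * np.abs(z).mean()
        self.gain *= 1.0 + 0.01 * (self.target - self.ca)  # glial feedback
        return z

rng = np.random.default_rng(4)
block = PNBBlock(rng.choice([-1.0, 1.0], size=(16, 4)))
equilibrium = block.forward(rng.normal(size=16))
```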

Development Status

  • Status: Pre-Alpha / Architectural Design Phase
  • Target Hardware: Intel Arc / Xeon (XMX Acceleration)

About

A brain-inspired computing framework based on ultra-low-bit quantization and bio-inspired dynamics.
