
This is a JavaScript implementation of the Extreme Learning Machine (ELM) network invented by Guang-Bin Huang in 2006. It allows for more creative extensibility, such as deep ELMs and chained ELMs, and provides a way to create an ecosystem of orchestratable ELM models.


AsterMind-ELM


A modular Extreme Learning Machine (ELM) library for JS/TS (browser + Node).


🚀 What you can build — and why this is groundbreaking

AsterMind brings instant, tiny, on-device ML to the web. It lets you ship models that train in milliseconds, predict with microsecond latency, and run entirely in the browser — no GPU, no server, no tracking. With Kernel ELMs, Online ELM, DeepELM, and Web Worker offloading, you can create:

  • Private, on-device classifiers (language, intent, toxicity, spam) that retrain on user feedback
  • Real-time retrieval & reranking with compact embeddings (ELM, KernelELM, Nyström whitening) for search and RAG
  • Interactive creative tools (music/drum generators, autocompletes) that respond instantly
  • Edge analytics: regressors/classifiers from data that never leaves the page
  • Deep ELM chains: stack encoders → embedders → classifiers for powerful pipelines, still tiny and transparent

Why it matters: ELMs give you closed-form training (no heavy SGD), interpretable structure, and tiny memory footprints.
AsterMind modernizes ELM with kernels, online learning, workerized training, robust preprocessing, and deep chaining — making seriously fast ML practical for every web app.
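
The closed-form solve is the heart of the speed claim. Independent of this library's API, the whole ELM recipe fits in a few lines of plain JavaScript: project inputs through a fixed nonlinear hidden layer, then solve a ridge-regularized least-squares problem for the output weights. In this sketch the hidden weights are hard-coded (standing in for the usual random initialization) so the result is deterministic:

```javascript
// Minimal ELM sketch: hidden projection + ridge solve (illustrative, not the library API).
const relu = (v) => Math.max(0, v);

// Hidden layer: h(x)[j] = relu(x * W[j] + b[j]).
// Fixed weights stand in for the usual random initialization.
const W = [1, 0.5];          // 1 input -> 2 hidden units
const b = [0, 0];
const hidden = (x) => W.map((w, j) => relu(x * w + b[j]));

// Training data: y = 2x.
const X = [1, 2, 3];
const Y = [2, 4, 6];
const H = X.map(hidden);

// Solve (H^T H + lambda*I) beta = H^T y by Gaussian elimination with partial pivoting.
function solve(A, v) {
  const n = A.length, M = A.map((row, i) => [...row, v[i]]);
  for (let i = 0; i < n; i++) {
    let p = i;
    for (let r = i + 1; r < n; r++) if (Math.abs(M[r][i]) > Math.abs(M[p][i])) p = r;
    [M[i], M[p]] = [M[p], M[i]];
    for (let r = i + 1; r < n; r++) {
      const f = M[r][i] / M[i][i];
      for (let c = i; c <= n; c++) M[r][c] -= f * M[i][c];
    }
  }
  const beta = new Array(n).fill(0);
  for (let i = n - 1; i >= 0; i--) {
    let s = M[i][n];
    for (let c = i + 1; c < n; c++) s -= M[i][c] * beta[c];
    beta[i] = s / M[i][i];
  }
  return beta;
}

const lambda = 1e-6;
const n = W.length;
const HtH = Array.from({ length: n }, (_, i) =>
  Array.from({ length: n }, (_, j) =>
    H.reduce((s, h) => s + h[i] * h[j], 0) + (i === j ? lambda : 0)));
const HtY = Array.from({ length: n }, (_, i) =>
  H.reduce((s, h, r) => s + h[i] * Y[r], 0));
const beta = solve(HtH, HtY);

// Prediction for x = 3 recovers y ≈ 6. No SGD, no epochs: one linear solve.
const pred = hidden(3).reduce((s, h, j) => s + h * beta[j], 0);
```

There is no iterative loop over epochs anywhere: training cost is one hidden projection plus one small linear solve, which is why ELMs train in milliseconds.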


🆕 New in this release

  • Kernel ELMs (KELMs) — exact and Nyström kernels (RBF/Linear/Poly/Laplacian/Custom) with ridge solve
  • Whitened Nyström — optional (K_{mm}^{-1/2}) whitening via symmetric eigendecomposition
  • Online ELM (OS-ELM) — streaming RLS updates with forgetting factor (no full retrain)
  • DeepELM — multi-layer stacked ELM with non-linear projections
  • Web Worker adapter — off-main-thread training/prediction for ELM and KELM
  • Matrix upgrades — Jacobi eigendecomp, invSqrtSym, improved Cholesky
  • EmbeddingStore 2.0 — unit-norm vectors, ring buffer capacity, metadata filters
  • ELMChain+Embeddings — safer chaining with dimension checks, JSON I/O
  • Activations — added linear and gelu; centralized registry
  • Configs — split into Numeric and Text configs; stronger typing
  • UMD exports — window.astermind exposes ELM, OnlineELM, KernelELM, DeepELM, KernelRegistry, EmbeddingStore, ELMChain, etc.
  • Robust preprocessing — safer encoder path, improved error handling

See Releases for full changelog.


📑 Table of Contents

  1. Introduction
  2. Features
  3. Kernel ELMs (KELM)
  4. Online ELM (OS-ELM)
  5. DeepELM
  6. Web Worker Adapter
  7. Installation
  8. Usage Examples
  9. Suggested Experiments
  10. Why Use AsterMind
  11. Core API Documentation
  12. Method Options Reference
  13. ELMConfig Options
  14. Prebuilt Modules
  15. Text Encoding Modules
  16. UI Binding Utility
  17. Data Augmentation Utilities
  18. IO Utilities (Experimental)
  19. Embedding Store
  20. Utilities: Matrix & Activations
  21. Adapters & Chains
  22. Workers: ELMWorker & ELMWorkerClient
  23. Example Demos and Scripts
  24. Experiments and Results
  25. Releases
  26. License

🌟 AsterMind: Decentralized ELM Framework Inspired by Nature

Welcome to AsterMind, a modular, decentralized ML framework built around cooperating Extreme Learning Machines (ELMs) that self-train, self-evaluate, and self-repair — like the nervous system of a starfish.

How This ELM Library Differs from a Traditional ELM

This library preserves the core Extreme Learning Machine idea — random hidden layer, nonlinear activation, closed-form output solve — but extends it with:

  • Multiple activations (ReLU, LeakyReLU, Sigmoid, Linear, GELU)
  • Xavier/Uniform/He initialization
  • Dropout on hidden activations
  • Sample weighting
  • Metrics gate (RMSE, MAE, Accuracy, F1, Cross-Entropy, R²)
  • JSON export/import
  • Model lifecycle management
  • UniversalEncoder for text (char/token)
  • Data augmentation utilities
  • Chaining (ELMChain) for stacked embeddings
  • Weight reuse (simulated fine-tuning)
  • Logging utilities

AsterMind is designed for:

  • Lightweight, in-browser ML pipelines
  • Transparent, interpretable predictions
  • Continuous, incremental learning
  • Resilient systems with no single point of failure

✨ Features

  • ✅ Modular Architecture
  • ✅ Closed-form training (ridge / pseudoinverse)
  • ✅ Activations: relu, leakyrelu, sigmoid, tanh, linear, gelu
  • ✅ Initializers: uniform, xavier, he
  • ✅ Numeric + Text configs
  • ✅ Kernel ELM with Nyström + whitening
  • ✅ Online ELM (RLS) with forgetting factor
  • ✅ DeepELM (stacked layers)
  • ✅ Web Worker adapter
  • ✅ Embeddings & Chains for retrieval and deep pipelines
  • ✅ JSON import/export
  • ✅ Self-governing training
  • ✅ Flexible preprocessing
  • ✅ Lightweight deployment (ESM + UMD)
  • ✅ Retrieval and classification utilities
  • ✅ Zero server/GPU — private, on-device ML

🧠 Kernel ELMs (KELM)

Supports Exact and Nyström modes with RBF/Linear/Poly/Laplacian/Custom kernels.
Includes whitened Nyström (persisted whitener for inference parity).

import { KernelELM, KernelRegistry } from '@astermind/astermind-elm';

const kelm = new KernelELM({
  outputDim: Y[0].length,
  kernel: { type: 'rbf', gamma: 1 / X[0].length },
  mode: 'nystrom',
  nystrom: { m: 256, strategy: 'kmeans++', whiten: true },
  ridgeLambda: 1e-2,
});
kelm.fit(X, Y);

🔁 Online ELM (OS-ELM)

Stream updates via Recursive Least Squares (RLS) with optional forgetting factor. Supports He/Xavier/Uniform initializers.

import { OnlineELM } from '@astermind/astermind-elm';
const ol = new OnlineELM({ inputDim: D, outputDim: K, hiddenUnits: 256 });
ol.init(X0, Y0);
ol.update(Xt, Yt);
ol.predictProbaFromVectors(Xq);

Notes

  • forgettingFactor controls how fast older observations decay (default 1.0).
  • Two natural embedding modes: hidden (activations) or logits (pre-softmax). Use with ELMAdapter (see below).
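
To see what the forgetting factor does, here is a scalar recursive-least-squares sketch (a conceptual illustration, not the library's OnlineELM internals): with a forgetting factor below 1, past observations decay geometrically, so the estimate tracks a target that changes mid-stream.

```javascript
// Scalar RLS with forgetting factor `ff` (conceptual sketch, not the library internals).
// Model: y ≈ beta * x; each update discounts all past data by `ff`.
function makeRLS(ff) {
  let beta = 0;
  let p = 1e6; // large initial covariance ≈ uninformative prior
  return (x, y) => {
    const k = (p * x) / (ff + x * x * p); // gain
    beta += k * (y - x * beta);           // correct toward the new sample
    p = (p - k * x * p) / ff;             // discounted covariance update
    return beta;
  };
}

const update = makeRLS(0.8); // ff < 1 => older samples decay as 0.8^age
let beta = 0;
for (let i = 0; i < 50; i++) beta = update(1, 2); // stream y = 2x
const betaPhase1 = beta;                          // ≈ 2
for (let i = 0; i < 50; i++) beta = update(1, 5); // target shifts to y = 5x
const betaPhase2 = beta;                          // ≈ 5: old regime forgotten
```

With ff = 1.0 (the default), no discounting happens and the estimate converges to the least-squares fit over the entire stream instead of tracking the shift.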

🌊 DeepELM

Stack multiple ELM layers for deep nonlinear embeddings and an optional top ELM classifier.

import { DeepELM } from '@astermind/astermind-elm';
const deep = new DeepELM({
  inputDim: D,
  layers: [{ hiddenUnits: 128 }, { hiddenUnits: 64 }],
  numClasses: K
});
// 1) Unsupervised layer-wise training (autoencoders Y=X)
const X_L = deep.fitAutoencoders(X);
// 2) Supervised head (ELM) on last layer features
deep.fitClassifier(X_L, Y);
// 3) Predict
const probs = deep.predictProbaFromVectors(Xq);

JSON I/O
toJSON() and fromJSON() persist the full stack (AEs + classifier).


🧵 Web Worker Adapter

Move heavy ops off the main thread. Provides ELMWorker + ELMWorkerClient for RPC-style training/prediction with progress events.

  • Initialize with initELM(config) or initOnlineELM(config)
  • Train via train / trainFromData / fit / update
  • Predict via predict, predictFromVector, or predictLogits
  • Subscribe to progress callbacks per call

See Workers for full API.


🚀 Installation

NPM (scoped package):

npm install @astermind/astermind-elm
# or
pnpm add @astermind/astermind-elm
# or
yarn add @astermind/astermind-elm

CDN / <script> (UMD global astermind):

<!-- jsDelivr -->
<script src="https://cdn.jsdelivr.net/npm/@astermind/astermind-elm/dist/astermind.umd.js"></script>

<!-- or unpkg -->
<script src="https://unpkg.com/@astermind/astermind-elm/dist/astermind.umd.js"></script>

<script>
  const { ELM, KernelELM } = window.astermind;
</script>

Repository: projectgarsot/AsterMind-ELM on GitHub.


🛠️ Usage Examples

Basic ELM Classifier

import { ELM } from "@astermind/astermind-elm";

const config = { categories: ['English', 'French'], hiddenUnits: 128 };
const elm = new ELM(config);

// Load or train logic here
const results = elm.predict("bonjour");
console.log(results);

CommonJS / Node:

const { ELM } = require("@astermind/astermind-elm");

Kernel ELM / DeepELM: see above examples.


🧪 Suggested Experiments

  • Compare retrieval performance with Sentence-BERT and TFIDF.
  • Experiment with activations and token vs char encoding.
  • Deploy in-browser retraining workflows.

🌿 Why Use AsterMind?

Because you can build AI systems that:

  • Are decentralized.
  • Self-heal and retrain independently.
  • Run in the browser.
  • Are transparent and interpretable.

📚 Core API Documentation

ELM

  • train, trainFromData, predict, predictFromVector, getEmbedding, predictLogitsFromVectors, JSON I/O, metrics
  • loadModelFromJSON, saveModelAsJSONFile
  • Evaluation: RMSE, MAE, Accuracy, F1, Cross-Entropy, R²
  • Config highlights: ridgeLambda, weightInit (uniform | xavier | he), seed

OnlineELM

  • init, update, fit, predictLogitsFromVectors, predictProbaFromVectors, embeddings (hidden/logits), JSON I/O
  • Config highlights: inputDim, outputDim, hiddenUnits, activation, ridgeLambda, forgettingFactor

KernelELM

  • fit, predictProbaFromVectors, getEmbedding, JSON I/O
  • mode: 'exact' | 'nystrom', kernels: rbf | linear | poly | laplacian | custom

DeepELM

  • fitAutoencoders(X), transform(X), fitClassifier(X_L, Y), predictProbaFromVectors(X)
  • toJSON(), fromJSON() for full-pipeline persistence

ELMChain

  • sequential embeddings through multiple encoders

TFIDFVectorizer

  • vectorize, vectorizeAll

KNN

  • find(queryVec, dataset, k, topX, metric)
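
The KNN.find signature is shown above; conceptually, a cosine-metric search over {id, vector} entries is a score-sort-slice, as in this plain-JS sketch (not the library implementation):

```javascript
// Conceptual cosine KNN (illustrative; KNN.find's actual signature is shown above).
const dot = (a, b) => a.reduce((s, v, i) => s + v * b[i], 0);
const norm = (a) => Math.sqrt(dot(a, a));
const cosine = (a, b) => dot(a, b) / (norm(a) * norm(b));

function knnFind(queryVec, dataset, k) {
  return dataset
    .map((item) => ({ ...item, score: cosine(queryVec, item.vector) }))
    .sort((a, b) => b.score - a.score) // highest similarity first
    .slice(0, k);
}

const dataset = [
  { id: 'fr', vector: [0.9, 0.1] },
  { id: 'en', vector: [0.1, 0.9] },
  { id: 'mix', vector: [0.5, 0.5] },
];
const hits = knnFind([1, 0], dataset, 2);
// hits[0].id === 'fr': closest to the query in cosine similarity
```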

📘 Method Options Reference

train(augmentationOptions?, weights?)

  • augmentationOptions: { suffixes, prefixes, includeNoise }
  • weights: sample weights

trainFromData(X, Y, options?)

  • X: Input matrix
  • Y: Label matrix or one-hot
  • options: { reuseWeights, weights }
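
trainFromData expects Y as a label matrix or one-hot rows; building one-hot rows from string labels is a one-liner (a hypothetical helper for illustration, not a library export):

```javascript
// One-hot label rows for trainFromData (illustrative helper, not a library export).
function oneHot(labels, categories) {
  return labels.map((label) =>
    categories.map((c) => (c === label ? 1 : 0)));
}

const Y = oneHot(['English', 'French', 'English'], ['English', 'French']);
// Y → [[1, 0], [0, 1], [1, 0]]
```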

predict(text, topK)

  • text: string
  • topK: number of predictions

predictFromVector(vector, topK)

  • vector: numeric
  • topK: number of predictions

saveModelAsJSONFile(filename?)

  • filename: optional file name

⚙️ ELMConfig Options Reference

| Option | Type | Description |
| --- | --- | --- |
| `categories` | `string[]` | List of labels the model should classify. (Required) |
| `hiddenUnits` | `number` | Number of hidden layer units (default: 50). |
| `maxLen` | `number` | Max length of input sequences (default: 30). |
| `activation` | `string` | Activation function (`relu`, `tanh`, etc.). |
| `encoder` | `any` | Custom UniversalEncoder instance (optional). |
| `charSet` | `string` | Character set used for encoding. |
| `useTokenizer` | `boolean` | Use token-level encoding. |
| `tokenizerDelimiter` | `RegExp` | Tokenizer regex. |
| `exportFileName` | `string` | Filename to export JSON. |
| `metrics` | `object` | Thresholds (`rmse`, `mae`, `accuracy`, etc.). |
| `log` | `object` | Logging config. |
| `dropout` | `number` | Dropout rate. |
| `weightInit` | `string` | Weight initializer: `uniform`, `xavier`, or `he`. |
| `ridgeLambda` | `number` | Ridge penalty for closed-form solve. |
| `seed` | `number` | PRNG seed for reproducibility. |

🧩 Prebuilt Modules and Custom Modules

Includes: AutoComplete, EncoderELM, CharacterLangEncoderELM, FeatureCombinerELM, ConfidenceClassifierELM, IntentClassifier, LanguageClassifier, VotingClassifierELM, RefinerELM.

Each exposes .train(), .predict(), .loadModelFromJSON(), .saveModelAsJSONFile(), .encode().

Custom modules can be built on top.


✨ Text Encoding Modules

Includes TextEncoder, Tokenizer, UniversalEncoder.
Supports char-level & token-level, normalization, n-grams.
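
For intuition, char-level n-gram extraction (one building block of these encoders) is just a sliding window; a minimal sketch, not the UniversalEncoder internals:

```javascript
// Sliding-window character n-grams (conceptual; not the UniversalEncoder internals).
function charNgrams(text, n) {
  const out = [];
  for (let i = 0; i + n <= text.length; i++) out.push(text.slice(i, i + n));
  return out;
}

charNgrams('bonjour', 2);
// → ['bo', 'on', 'nj', 'jo', 'ou', 'ur']
```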


🖥️ UI Binding Utility

bindAutocompleteUI(model, inputElement, outputElement, topK) helper.
Binds model predictions to live HTML input.


✨ Data Augmentation Utilities

Augment with prefixes, suffixes, noise.
Example: Augment.generateVariants("hello", "abc", { suffixes:["world"], includeNoise:true }).
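
Conceptually, variant generation is simple string composition; this sketch shows the prefix/suffix idea only (the library's includeNoise option additionally injects character noise, omitted here to keep the example deterministic):

```javascript
// Prefix/suffix variant generation (conceptual sketch of the augmentation idea,
// not the actual Augment.generateVariants implementation).
function generateVariants(text, { prefixes = [], suffixes = [] } = {}) {
  const variants = [text];
  for (const p of prefixes) variants.push(`${p} ${text}`);
  for (const s of suffixes) variants.push(`${text} ${s}`);
  return variants;
}

generateVariants('hello', { prefixes: ['oh'], suffixes: ['world'] });
// → ['hello', 'oh hello', 'hello world']
```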


⚠️ IO Utilities (Experimental)

JSON/CSV/TSV import/export, schema inference.
Experimental and may be unstable.


🧰 Embedding Store

Lightweight vector store with cosine/dot/euclidean KNN, unit-norm storage, ring buffer capacity.

Usage

import { EmbeddingStore } from '@astermind/astermind-elm';

const store = new EmbeddingStore({ capacity: 5000, normalize: true });
store.add({ id: 'doc1', vector: [/* ... */], meta: { title: 'Hello' } });
const hits = store.query({ vector: q, k: 10, metric: 'cosine' });

🔧 Utilities: Matrix & Activations

Matrix – internal linear algebra utilities (multiply, transpose, addRegularization, solveCholesky, etc.).
Activations – relu, leakyrelu, sigmoid, tanh, linear, gelu, plus softmax, derivatives, and helpers (get, getDerivative, getPair).
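
These follow the standard definitions. For reference, a plain-JS sketch of gelu (the common tanh approximation; the library's exact approximation may differ) and a numerically stable softmax:

```javascript
// Standard activation definitions (reference sketch; the library's exact
// gelu approximation may differ).
const relu = (x) => Math.max(0, x);
const sigmoid = (x) => 1 / (1 + Math.exp(-x));
// Tanh approximation of GELU: 0.5·x·(1 + tanh(√(2/π)·(x + 0.044715·x³)))
const gelu = (x) =>
  0.5 * x * (1 + Math.tanh(Math.sqrt(2 / Math.PI) * (x + 0.044715 * x ** 3)));
// Numerically stable softmax: subtract the max before exponentiating.
const softmax = (xs) => {
  const m = Math.max(...xs);
  const exps = xs.map((x) => Math.exp(x - m));
  const z = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / z);
};
```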


🔗 Adapters & Chains

ELMAdapter wraps an ELM or OnlineELM to behave like an encoder for ELMChain:

import { ELMAdapter, wrapELM, wrapOnlineELM } from '@astermind/astermind-elm';

const enc1 = wrapELM(elm);                          // uses elm.getEmbedding(X)
const enc2 = wrapOnlineELM(online, { mode: 'logits' }); // 'hidden' or 'logits'
const chain = new ELMChain([enc1, enc2], { normalizeFinal: true });

const Z = chain.getEmbedding(X); // stacked embeddings

🧱 Workers: ELMWorker & ELMWorkerClient

ELMWorker (inside a Web Worker) exposes a tolerant RPC surface:

  • lifecycle: initELM, initOnlineELM, dispose, getKind, setVerbose
  • training: train, fit, update, trainFromData (all routed appropriately)
  • prediction: predict, predictFromVector, predictLogits
  • progress events: { type:'progress', phase, pct } during training

ELMWorkerClient (on the main thread) is a thin promise-based RPC client:

import { ELMWorkerClient } from '@astermind/astermind-elm/worker';

const client = new ELMWorkerClient(new Worker(new URL('./ELMWorker.js', import.meta.url)));
await client.initELM({ categories:['A','B'], hiddenUnits:128 });

await client.elmTrain({}, (p) => console.log(p.phase, p.pct));
const preds = await client.elmPredict('bonjour', 5);

🧪 Example Demos and Scripts

Run with npm run dev:* (autocomplete, lang, chain, news).
Fully in-browser.


🧪 Experiments and Results

Includes dropout tuning, hybrid retrieval, ensemble distillation, multi-level pipelines.
Results reported (Recall@1, Recall@5, MRR).


📦 Releases

v2.1.0 — 2025-09-19

New features: Kernel ELM, Nyström whitening, OnlineELM, DeepELM, Worker adapter, EmbeddingStore 2.0, activations linear/gelu, config split.
Fixes: Xavier init, encoder guards, dropout scaling.
Breaking: Config now NumericConfig|TextConfig.


📄 License

MIT License


“AsterMind doesn’t just mimic a brain—it functions more like a starfish: fully decentralized, self-evaluating, and self-repairing.”
