A modular Extreme Learning Machine (ELM) library for JS/TS (browser + Node).
AsterMind brings instant, tiny, on-device ML to the web. It lets you ship models that train in milliseconds, predict with microsecond latency, and run entirely in the browser — no GPU, no server, no tracking. With Kernel ELMs, Online ELM, DeepELM, and Web Worker offloading, you can create:
- Private, on-device classifiers (language, intent, toxicity, spam) that retrain on user feedback
- Real-time retrieval & reranking with compact embeddings (ELM, KernelELM, Nyström whitening) for search and RAG
- Interactive creative tools (music/drum generators, autocompletes) that respond instantly
- Edge analytics: regressors/classifiers from data that never leaves the page
- Deep ELM chains: stack encoders → embedders → classifiers for powerful pipelines, still tiny and transparent
Why it matters: ELMs give you closed-form training (no heavy SGD), interpretable structure, and tiny memory footprints.
AsterMind modernizes ELM with kernels, online learning, workerized training, robust preprocessing, and deep chaining — making seriously fast ML practical for every web app.
- Kernel ELMs (KELMs) — exact and Nyström kernels (RBF/Linear/Poly/Laplacian/Custom) with ridge solve
- Whitened Nyström — optional (K_{mm}^{-1/2}) whitening via symmetric eigendecomposition
- Online ELM (OS-ELM) — streaming RLS updates with forgetting factor (no full retrain)
- DeepELM — multi-layer stacked ELM with non-linear projections
- Web Worker adapter — off-main-thread training/prediction for ELM and KELM
- Matrix upgrades — Jacobi eigendecomp, invSqrtSym, improved Cholesky
- EmbeddingStore 2.0 — unit-norm vectors, ring buffer capacity, metadata filters
- ELMChain+Embeddings — safer chaining with dimension checks, JSON I/O
- Activations — added linear and gelu; centralized registry
- Configs — split into Numeric and Text configs; stronger typing
- UMD exports — `window.astermind` exposes `ELM`, `OnlineELM`, `KernelELM`, `DeepELM`, `KernelRegistry`, `EmbeddingStore`, `ELMChain`, etc.
- Robust preprocessing — safer encoder path, improved error handling
See Releases for full changelog.
- Introduction
- Features
- Kernel ELMs (KELM)
- Online ELM (OS-ELM)
- DeepELM
- Web Worker Adapter
- Installation
- Usage Examples
- Suggested Experiments
- Why Use AsterMind
- Core API Documentation
- Method Options Reference
- ELMConfig Options
- Prebuilt Modules
- Text Encoding Modules
- UI Binding Utility
- Data Augmentation Utilities
- IO Utilities (Experimental)
- Embedding Store
- Utilities: Matrix & Activations
- Adapters & Chains
- Workers: ELMWorker & ELMWorkerClient
- Example Demos and Scripts
- Experiments and Results
- Releases
- License
Welcome to AsterMind, a modular, decentralized ML framework built around cooperating Extreme Learning Machines (ELMs) that self-train, self-evaluate, and self-repair — like the nervous system of a starfish.
How This ELM Library Differs from a Traditional ELM
This library preserves the core Extreme Learning Machine idea — random hidden layer, nonlinear activation, closed-form output solve — but extends it with:
- Multiple activations (ReLU, LeakyReLU, Sigmoid, Linear, GELU)
- Xavier/Uniform/He initialization
- Dropout on hidden activations
- Sample weighting
- Metrics gate (RMSE, MAE, Accuracy, F1, Cross-Entropy, R²)
- JSON export/import
- Model lifecycle management
- UniversalEncoder for text (char/token)
- Data augmentation utilities
- Chaining (ELMChain) for stacked embeddings
- Weight reuse (simulated fine-tuning)
- Logging utilities
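Most of these extensions are plain configuration. As a hedged sketch (option names follow the ELMConfig table later in this README; the category labels and threshold values are illustrative only):

```ts
import { ELM } from '@astermind/astermind-elm';

// Hedged sketch: option names follow the ELMConfig table below;
// category labels and threshold values are illustrative placeholders.
const elm = new ELM({
  categories: ['positive', 'negative'],
  hiddenUnits: 128,
  activation: 'gelu',          // relu | leakyrelu | sigmoid | tanh | linear | gelu
  weightInit: 'he',            // uniform | xavier | he
  dropout: 0.1,                // dropout on hidden activations
  ridgeLambda: 1e-2,           // ridge penalty in the closed-form solve
  seed: 42,                    // reproducible random hidden layer
  metrics: { accuracy: 0.9 },  // metrics gate threshold (assumed shape)
  exportFileName: 'sentiment-elm.json',
});
```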
AsterMind is designed for:
- Lightweight, in-browser ML pipelines
- Transparent, interpretable predictions
- Continuous, incremental learning
- Resilient systems with no single point of failure
- ✅ Modular Architecture
- ✅ Closed-form training (ridge / pseudoinverse)
- ✅ Activations: relu, leakyrelu, sigmoid, tanh, linear, gelu
- ✅ Initializers: uniform, xavier, he
- ✅ Numeric + Text configs
- ✅ Kernel ELM with Nyström + whitening
- ✅ Online ELM (RLS) with forgetting factor
- ✅ DeepELM (stacked layers)
- ✅ Web Worker adapter
- ✅ Embeddings & Chains for retrieval and deep pipelines
- ✅ JSON import/export
- ✅ Self-governing training
- ✅ Flexible preprocessing
- ✅ Lightweight deployment (ESM + UMD)
- ✅ Retrieval and classification utilities
- ✅ Zero server/GPU — private, on-device ML
Supports Exact and Nyström modes with RBF/Linear/Poly/Laplacian/Custom kernels.
Includes whitened Nyström (persisted whitener for inference parity).
```ts
import { KernelELM, KernelRegistry } from '@astermind/astermind-elm';

const kelm = new KernelELM({
  outputDim: Y[0].length,
  kernel: { type: 'rbf', gamma: 1 / X[0].length },
  mode: 'nystrom',
  nystrom: { m: 256, strategy: 'kmeans++', whiten: true },
  ridgeLambda: 1e-2,
});

kelm.fit(X, Y);
```

Stream updates via Recursive Least Squares (RLS) with optional forgetting factor. Supports He/Xavier/Uniform initializers.
```ts
import { OnlineELM } from '@astermind/astermind-elm';

const ol = new OnlineELM({ inputDim: D, outputDim: K, hiddenUnits: 256 });
ol.init(X0, Y0);
ol.update(Xt, Yt);
ol.predictProbaFromVectors(Xq);
```

Notes:
- `forgettingFactor` controls how fast older observations decay (default 1.0).
- Two natural embedding modes: hidden (activations) or logits (pre-softmax). Use with `ELMAdapter` (see below).
Stack multiple ELM layers for deep nonlinear embeddings and an optional top ELM classifier.
```ts
import { DeepELM } from '@astermind/astermind-elm';

const deep = new DeepELM({
  inputDim: D,
  layers: [{ hiddenUnits: 128 }, { hiddenUnits: 64 }],
  numClasses: K
});

// 1) Unsupervised layer-wise training (autoencoders, Y = X)
const X_L = deep.fitAutoencoders(X);

// 2) Supervised head (ELM) on last-layer features
deep.fitClassifier(X_L, Y);

// 3) Predict
const probs = deep.predictProbaFromVectors(Xq);
```

JSON I/O: toJSON() and fromJSON() persist the full stack (AEs + classifier).
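A minimal persistence sketch, assuming `fromJSON` is an instance method that accepts the object produced by `toJSON` (verify against the library's exact signature):

```ts
// Hedged sketch: assumes fromJSON is an instance method taking the
// object produced by toJSON; check the library's actual API.
const snapshot = deep.toJSON();             // serializes autoencoders + classifier
const restored = new DeepELM({
  inputDim: D,
  layers: [{ hiddenUnits: 128 }, { hiddenUnits: 64 }],
  numClasses: K
});
restored.fromJSON(snapshot);
const probs2 = restored.predictProbaFromVectors(Xq);
```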
Move heavy ops off the main thread. Provides ELMWorker + ELMWorkerClient for RPC-style training/prediction with progress events.
- Initialize with `initELM(config)` or `initOnlineELM(config)`
- Train via `train` / `trainFromData` / `fit` / `update`
- Predict via `predict`, `predictFromVector`, or `predictLogits`
- Subscribe to progress callbacks per call
See Workers for full API.
NPM (scoped package):
```bash
npm install @astermind/astermind-elm
# or
pnpm add @astermind/astermind-elm
# or
yarn add @astermind/astermind-elm
```

CDN / `<script>` (UMD global `astermind`):

```html
<!-- jsDelivr -->
<script src="https://cdn.jsdelivr.net/npm/@astermind/astermind-elm/dist/astermind.umd.js"></script>
<!-- or unpkg -->
<script src="https://unpkg.com/@astermind/astermind-elm/dist/astermind.umd.js"></script>
<script>
  const { ELM, KernelELM } = window.astermind;
</script>
```

Repository:
- GitHub: https://github.com/infiniteCrank/AsterMind-ELM
- NPM: https://www.npmjs.com/package/@astermind/astermind-elm
Basic ELM Classifier
```ts
import { ELM } from "@astermind/astermind-elm";

const config = { categories: ['English', 'French'], hiddenUnits: 128 };
const elm = new ELM(config);

// Load or train logic here
const results = elm.predict("bonjour");
console.log(results);
```

CommonJS / Node:

```js
const { ELM } = require("@astermind/astermind-elm");
```

Kernel ELM / DeepELM: see above examples.
- Compare retrieval performance with Sentence-BERT and TFIDF.
- Experiment with activations and token vs char encoding.
- Deploy in-browser retraining workflows.
Because you can build AI systems that:
- Are decentralized.
- Self-heal and retrain independently.
- Run in the browser.
- Are transparent and interpretable.
ELM: `train`, `trainFromData`, `predict`, `predictFromVector`, `getEmbedding`, `predictLogitsFromVectors`, JSON I/O, metrics, `loadModelFromJSON`, `saveModelAsJSONFile`
- Evaluation: RMSE, MAE, Accuracy, F1, Cross-Entropy, R²
- Config highlights: `ridgeLambda`, `weightInit` (uniform | xavier | he), `seed`

OnlineELM: `init`, `update`, `fit`, `predictLogitsFromVectors`, `predictProbaFromVectors`, embeddings (hidden/logits), JSON I/O
- Config highlights: `inputDim`, `outputDim`, `hiddenUnits`, `activation`, `ridgeLambda`, `forgettingFactor`

KernelELM: `fit`, `predictProbaFromVectors`, `getEmbedding`, JSON I/O; `mode: 'exact' | 'nystrom'`; kernels: `rbf | linear | poly | laplacian | custom`

DeepELM: `fitAutoencoders(X)`, `transform(X)`, `fitClassifier(X_L, Y)`, `predictProbaFromVectors(X)`; `toJSON()` / `fromJSON()` for full-pipeline persistence

ELMChain: sequential embeddings through multiple encoders

Other helpers: `vectorize`, `vectorizeAll`; `find(queryVec, dataset, k, topX, metric)`

Method options (a quick usage sketch follows this reference):
- `train(options)`: `augmentationOptions: { suffixes, prefixes, includeNoise }`; `weights`: sample weights
- `trainFromData(X, Y, options)`: `X`: input matrix; `Y`: label matrix or one-hot; `options: { reuseWeights, weights }`
- `predict(text, topK)`: `text`: string; `topK`: number of predictions
- `predictFromVector(vector, topK)`: `vector`: numeric vector; `topK`: number of predictions
- `saveModelAsJSONFile(filename)`: `filename`: optional file name
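To make the reference concrete, here is a hedged sketch of the numeric training path; the data values are placeholders, and the option shapes follow the reference above:

```ts
import { ELM } from '@astermind/astermind-elm';

// Hedged sketch: X/Y values are placeholders; option shapes follow the
// Method Options Reference above.
const elm = new ELM({ categories: ['A', 'B'], hiddenUnits: 64 });

const X = [[0.1, 0.2], [0.9, 0.8]];  // input matrix
const Y = [[1, 0], [0, 1]];          // one-hot labels

elm.trainFromData(X, Y, { reuseWeights: false });

const top = elm.predictFromVector([0.15, 0.25], 2); // topK = 2
console.log(top);

elm.saveModelAsJSONFile('elm-model.json'); // optional filename
```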
| Option | Type | Description |
|---|---|---|
| categories | string[] | List of labels the model should classify. (Required) |
| hiddenUnits | number | Number of hidden layer units (default: 50). |
| maxLen | number | Max length of input sequences (default: 30). |
| activation | string | Activation function (relu, tanh, etc.). |
| encoder | any | Custom UniversalEncoder instance (optional). |
| charSet | string | Character set used for encoding. |
| useTokenizer | boolean | Use token-level encoding. |
| tokenizerDelimiter | RegExp | Tokenizer regex. |
| exportFileName | string | Filename to export JSON. |
| metrics | object | Thresholds (rmse, mae, accuracy, etc.). |
| log | object | Logging config. |
| dropout | number | Dropout rate. |
| weightInit | string | Initializer (uniform, xavier, or he). |
| ridgeLambda | number | Ridge penalty for closed-form solve. |
| seed | number | PRNG seed for reproducibility. |
Includes: AutoComplete, EncoderELM, CharacterLangEncoderELM, FeatureCombinerELM, ConfidenceClassifierELM, IntentClassifier, LanguageClassifier, VotingClassifierELM, RefinerELM.
Each exposes .train(), .predict(), .loadModelFromJSON(), .saveModelAsJSONFile(), .encode().
Custom modules can be built on top.
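As one hedged example of that shared surface, a LanguageClassifier could be used like this; the constructor options and training-data shape shown here are assumptions modeled on the ELMConfig table, not a verified signature:

```ts
import { LanguageClassifier } from '@astermind/astermind-elm';

// Hedged sketch: constructor options and the training call are assumptions
// modeled on the ELMConfig table; check the module's actual signature.
const lang = new LanguageClassifier({ categories: ['English', 'French'], hiddenUnits: 128 });

// lang.train(/* labeled examples for each category */);

const guesses = lang.predict('bonjour', 3); // top-3 predictions
console.log(guesses);

lang.saveModelAsJSONFile('language-classifier.json');
```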
Includes TextEncoder, Tokenizer, UniversalEncoder.
Supports char-level & token-level, normalization, n-grams.
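A hedged sketch of wiring a custom encoder into an ELM; the UniversalEncoder constructor options shown (charSet, maxLen, useTokenizer) are assumed from the ELMConfig table rather than a verified signature:

```ts
import { ELM, UniversalEncoder } from '@astermind/astermind-elm';

// Hedged sketch: the option names below mirror the ELMConfig table;
// verify UniversalEncoder's actual constructor before relying on this.
const encoder = new UniversalEncoder({
  charSet: 'abcdefghijklmnopqrstuvwxyz ',
  maxLen: 30,
  useTokenizer: false, // char-level encoding
});

const elm = new ELM({ categories: ['greeting', 'other'], hiddenUnits: 64, encoder });
```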
bindAutocompleteUI(model, inputElement, outputElement, topK) helper.
Binds model predictions to live HTML input.
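A minimal usage sketch, assuming the helper is exported from the package root and a trained model is available; the element IDs and categories are illustrative:

```ts
import { ELM, bindAutocompleteUI } from '@astermind/astermind-elm';

// Hedged sketch: assumes bindAutocompleteUI is exported from the package root;
// element IDs and categories are illustrative.
const model = new ELM({ categories: ['hello world', 'hello there'], hiddenUnits: 64 });
// ...train or load the model here...

const input = document.getElementById('query') as HTMLInputElement;
const output = document.getElementById('suggestions') as HTMLElement;
bindAutocompleteUI(model, input, output, 5); // show top-5 suggestions as the user types
```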
Augment with prefixes, suffixes, noise.
Example: Augment.generateVariants("hello", "abc", { suffixes:["world"], includeNoise:true }).
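The same call as a hedged, runnable snippet (assuming Augment is exported from the package root):

```ts
import { Augment } from '@astermind/astermind-elm';

// Hedged sketch: assumes Augment is exported from the package root.
// Generates suffixed/noisy variants of "hello" over the charset "abc".
const variants = Augment.generateVariants('hello', 'abc', {
  suffixes: ['world'],
  includeNoise: true,
});
console.log(variants);
```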
JSON/CSV/TSV import/export, schema inference.
Experimental and may be unstable.
Lightweight vector store with cosine/dot/euclidean KNN, unit-norm storage, ring buffer capacity.
Usage
```ts
import { EmbeddingStore } from '@astermind/astermind-elm';

const store = new EmbeddingStore({ capacity: 5000, normalize: true });
store.add({ id: 'doc1', vector: [/* ... */], meta: { title: 'Hello' } });
const hits = store.query({ vector: q, k: 10, metric: 'cosine' });
```

Matrix – internal linear algebra utilities (multiply, transpose, addRegularization, solveCholesky, etc.).
Activations – relu, leakyrelu, sigmoid, tanh, linear, gelu, plus softmax, derivatives, and helpers (get, getDerivative, getPair).
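A hedged sketch of the registry helpers; the lookup-by-name pattern and the plain numeric return type below are assumptions based on the helper names listed above:

```ts
import { Activations } from '@astermind/astermind-elm';

// Hedged sketch: assumes get/getDerivative look activations up by name
// and return numeric functions, and that softmax accepts an array;
// verify against the library's actual API.
const gelu = Activations.get('gelu');
const geluPrime = Activations.getDerivative('gelu');
console.log(gelu(0.5), geluPrime(0.5));
console.log(Activations.softmax([1, 2, 3]));
```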
ELMAdapter wraps an ELM or OnlineELM to behave like an encoder for ELMChain:
```ts
import { ELMAdapter, ELMChain, wrapELM, wrapOnlineELM } from '@astermind/astermind-elm';

const enc1 = wrapELM(elm);                              // uses elm.getEmbedding(X)
const enc2 = wrapOnlineELM(online, { mode: 'logits' }); // 'hidden' or 'logits'
const chain = new ELMChain([enc1, enc2], { normalizeFinal: true });
const Z = chain.getEmbedding(X);                        // stacked embeddings
```

ELMWorker (inside a Web Worker) exposes a tolerant RPC surface:
- lifecycle: `initELM`, `initOnlineELM`, `dispose`, `getKind`, `setVerbose`
- training: `train`, `fit`, `update`, `trainFromData` (all routed appropriately)
- prediction: `predict`, `predictFromVector`, `predictLogits`
- progress events: `{ type: 'progress', phase, pct }` during training
ELMWorkerClient (on the main thread) is a thin promise-based RPC client:
```ts
import { ELMWorkerClient } from '@astermind/astermind-elm/worker';

const client = new ELMWorkerClient(new Worker(new URL('./ELMWorker.js', import.meta.url)));
await client.initELM({ categories: ['A', 'B'], hiddenUnits: 128 });
await client.elmTrain({}, (p) => console.log(p.phase, p.pct));
const preds = await client.elmPredict('bonjour', 5);
```

Run with `npm run dev:*` (autocomplete, lang, chain, news).
Fully in-browser.
Includes dropout tuning, hybrid retrieval, ensemble distillation, multi-level pipelines.
Results reported (Recall@1, Recall@5, MRR).
New features: Kernel ELM, Nyström whitening, OnlineELM, DeepELM, Worker adapter, EmbeddingStore 2.0, activations linear/gelu, config split.
Fixes: Xavier init, encoder guards, dropout scaling.
Breaking: Config is now split into NumericConfig | TextConfig.
MIT License
“AsterMind doesn’t just mimic a brain—it functions more like a starfish: fully decentralized, self-evaluating, and self-repairing.”