JaxARC is a JAX-based reinforcement learning environment for the Abstraction and Reasoning Corpus (ARC) challenge. It's built for researchers who want to iterate quickly: JIT compilation gives you 100x+ speedups over Python loops.
If you're working on program synthesis, meta-learning, or hierarchical RL for abstract reasoning, JaxARC gives you a solid foundation without the boilerplate.
- **Speed.** Environments compile with `jax.jit` and vectorize with `jax.vmap`. Run thousands of episodes in parallel on GPU/TPU (see the sketch after this list).
- **Flexible.** Multiple action spaces (point-based, selection masks, bounding boxes). Multiple datasets (ARC-AGI, ConceptARC, MiniARC). Observation wrappers for different input formats. Configure everything via typed dataclasses or YAML.
- **Production-ready.** Type-safe configs, comprehensive tests, and functional purity throughout. No hidden state, no surprises.
- **Extensible.** Clean parser interface for custom datasets. Wrapper system for custom observations and actions. Built with future HRL and Meta-RL experiments in mind.
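To make the jit + vmap claim concrete, here is a minimal, self-contained sketch of the pattern. The `toy_reset` and `toy_step` functions are stand-ins written for this example; they are not JaxARC's actual API.

```python
# A minimal sketch of the jit + vmap pattern described above.
# `toy_reset` and `toy_step` are stand-ins defined here for illustration;
# they are NOT JaxARC's actual API.
import jax
import jax.numpy as jnp

def toy_reset(key):
    # A 3x3 "grid" of ARC colors (0-9) as the initial state.
    return jax.random.randint(key, (3, 3), 0, 10)

def toy_step(state, action):
    # Purely functional: paint one cell and return a new state, no mutation.
    row, col, color = action[0], action[1], action[2]
    new_state = state.at[row, col].set(color)
    reward = jnp.float32(0.0)
    return new_state, reward

# Vectorize across a batch of environments, then JIT-compile.
batched_reset = jax.jit(jax.vmap(toy_reset))
batched_step = jax.jit(jax.vmap(toy_step))

keys = jax.random.split(jax.random.PRNGKey(0), 4096)  # 4096 parallel envs
states = batched_reset(keys)                          # shape (4096, 3, 3)
actions = jnp.zeros((4096, 3), dtype=jnp.int32)       # one action per env
states, rewards = batched_step(states, actions)
```

Because every function is pure, the whole batched rollout compiles to a single XLA program and runs in parallel on GPU/TPU.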
- JAX-Native: Pure functional API; every function is `jax.jit`-compatible
- 100x+ Faster: JIT compilation turns Python into XLA-optimized machine code
- Configurable: Multiple action spaces, reward functions, and observation formats (see the config sketch below)
- Four Datasets: ARC-AGI-1, ARC-AGI-2, ConceptARC, and MiniARC included
- Type-Safe: Full type hints with runtime validation
- Visual Debug: Terminal and SVG rendering for development
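As an illustration of the typed-config idea, here is a small sketch using a frozen dataclass with runtime validation. The field names and values are assumptions for this example, not JaxARC's real configuration schema.

```python
# Illustrative typed config; field names are assumptions for this sketch,
# not JaxARC's real configuration schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvConfig:
    dataset: str = "arc-agi-1"        # e.g. "concept-arc", "mini-arc"
    action_space: str = "point"       # e.g. "selection-mask", "bounding-box"
    max_episode_steps: int = 100

    def __post_init__(self):
        # Lightweight runtime validation in the spirit of the Type-Safe bullet.
        if self.max_episode_steps <= 0:
            raise ValueError("max_episode_steps must be positive")

config = EnvConfig(dataset="mini-arc", action_space="selection-mask")
```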
```bash
pip install jaxarc
```

For development:

```bash
git clone https://github.com/aadimator/JaxARC.git
cd JaxARC
pixi shell                          # Sets up the environment
pixi run -e dev pre-commit install  # Hooks for code quality
```

See the tutorials for training loops, custom wrappers, and dataset management.
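To give a flavor of the wrapper system before you reach the tutorials, here is an illustrative observation wrapper: a pure function that one-hot encodes a grid. The names are hypothetical and not JaxARC's wrapper API.

```python
# Illustrative observation wrapper: a pure function that one-hot encodes a
# grid of color indices. The names are hypothetical, not JaxARC's wrapper API.
import jax
import jax.numpy as jnp

NUM_COLORS = 10  # size of the ARC color palette

def one_hot_observation(grid: jnp.ndarray) -> jnp.ndarray:
    # (H, W) integer grid -> (H, W, NUM_COLORS) float tensor.
    return jax.nn.one_hot(grid, NUM_COLORS)

grid = jnp.array([[0, 1], [2, 3]])
obs = jax.jit(one_hot_observation)(grid)  # shape (2, 2, 10)
```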
Run tests:
```bash
pixi run -e test test
```

Lint code:

```bash
pixi run lint
```

Build docs:

```bash
pixi run docs-serve
```

Found a bug? Want a feature? Open an issue or submit a PR.
JaxARC builds on great work from the community:
- ARC Challenge by François Chollet — The original dataset and challenge
- ARCLE — Python-based ARC environment (inspiration for our design)
- Stoix by Edan Toledo — Single-agent RL in JAX (we use their Stoa API)
If you use JaxARC in your research:
```bibtex
@software{jaxarc2025,
  author = {Aadam},
  title = {JaxARC: JAX-based Reinforcement Learning for Abstract Reasoning},
  year = {2025},
  url = {https://github.com/aadimator/JaxARC}
}
```

MIT License. See LICENSE for details.
- Bugs/Features: GitHub Issues
- Discussions: GitHub Discussions
- Docs: jaxarc.readthedocs.io