A small “guess the output” training game built from single-file C++ programs and a static Next.js frontend.
This repository is a proof of concept: many problems and explanations were generated by an LLM, so always verify correctness (especially around undefined behavior, compiler differences, and subtle language rules).
Live Demo: https://gto.demo.blanpied.fr/
- Each problem is a single C++ file in `problems/src/`.
- A problem can:
  - Print some stdout (normal run),
  - Fail to compile (compilation error),
  - Crash/fail at runtime (runtime error),
  - Intentionally demonstrate undefined behavior (UB).
- Players either:
  - Type the exact expected stdout, or
  - Choose a generic outcome (Compilation error / Runtime error / Undefined behavior).
- Expected outputs (or errors) are generated automatically by compiling/running the programs and exporting JSON consumed by the web UI.
- C++ compiler (e.g., `g++`)
- Python 3
- Node.js + npm (only if you want to run the web UI locally)
```bash
python3 problems/run_all.py
```

This generates/updates:

- `web/data/problems.generated.json`
- `web/data/problems.index.json`
Build and run a single problem:

```bash
cd problems
make problem NAME=p0001
make run NAME=p0001
```

Run the web UI locally:

```bash
cd web
npm install
npm run dev
# open http://localhost:3000
```

Notes:
- The frontend is static (no backend) to keep hosting simple.
- Do not hand-edit `web/data/*.generated.json`; regenerate it via the Python script.
- problems/ — C++ problem sources and build utilities
- problems/src/ — C++ single-file problems (p0001.cpp, ...)
- problems/Makefile — build & run helpers
- problems/run_all.py — generator script (produces `web/data/*`)
- problems/problems.json — problem metadata (id, title, difficulty, explanation, concepts)
- web/ — Next.js static frontend
- One problem = one C++ program file.
- Keep programs compact; they may be tricky/non-idiomatic on purpose to test real understanding.
- Outcomes supported by the game:
  - stdout output,
  - compilation error,
  - runtime error,
  - undefined behavior (must be marked in metadata).
- Create a new file in `problems/src/` (e.g., `p0042.cpp`).
- Add a matching entry in `problems/problems.json` (see schema below).
- Run:

  ```bash
  python3 problems/run_all.py
  ```
- Open the web app and confirm it shows the new problem + expected result.
The file `problems/problems.json` is the “authoritative” list of problems and explanations. It is a JSON array where each object has:
- `id` (string): `"p0001"`, `"p0002"`, …
- `title` (string)
- `difficulty` (number): integer from 1 to 5
- `concepts` (string[]): keyword tags (used for filtering/presets)
- `explanation` (string): Markdown shown after a correct answer
- `stdin` (string, optional): fixed stdin content if the program reads input
- `UB` (bool, optional): `true` if the problem intentionally triggers undefined behavior
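An entry might look like the following sketch (the values are made up for illustration; optional fields can be omitted):

```json
{
  "id": "p0042",
  "title": "Floating-point comparison",
  "difficulty": 2,
  "concepts": ["floating-point"],
  "explanation": "`0.1 + 0.2` is not exactly `0.3` in IEEE-754 doubles.",
  "UB": false
}
```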
After running `python3 problems/run_all.py`, each problem object is enriched with generated fields (notably the code and the measured outcome), including:

- `result.errorType`: `"no-error"` | `"compilation-error"` | `"runtime-error"` | `"undefined-behavior"`
- `result.stdout`: present only when `errorType` is `"no-error"`
- `result.errorMessage`: styled tokens (present only when there is an error)
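After generation, the enriched object might carry result fields shaped roughly like this (illustrative only; the exact structure is defined by `run_all.py`):

```json
{
  "id": "p0042",
  "result": {
    "errorType": "no-error",
    "stdout": "not equal\n"
  }
}
```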
The web UI reads this generated file to display the code, the “expected” outcome, and the explanation together.
The home page offers presets / filters to build a training session. In plain terms:
- Difficulty range (`difficultyMin` → `difficultyMax`)
- Attempts per problem (`maxAttemptsPerProblem`, unlimited when `null`)
- Concept tags filter (`concepts`)
- Problems per session (`problemsPerSession`, endless when `null`)
- Timers: either per-problem (`problemTimer`) or whole-session (`sessionTimer`), mutually exclusive
- Ordering: random vs. progressive
- Optional “show output difference” to help compare guesses vs. expected output
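Put together, a session configuration built from these filters might look roughly like this (the surrounding object shape and the tag value are assumptions; only the field names come from the list above):

```json
{
  "difficultyMin": 1,
  "difficultyMax": 3,
  "maxAttemptsPerProblem": 2,
  "concepts": ["floating-point"],
  "problemsPerSession": 10,
  "problemTimer": 60,
  "sessionTimer": null
}
```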
This repository is shared under the GNU General Public License v3.0 (GPL-3.0). See LICENSE for details.
Author: Timothée Blanpied — February 2026