A real-time cognitive visualization engine built with Babylon.js and the OpenAI API. It transforms AI conversations into a semantic coordinate space, mapping logic, emotion, and abstraction as visual nodes within a dual-axis 3D environment.
GPT-in-Axis provides a dual 3D coordinate framework for visualizing interactive AI cognition:
- Left Axis (User): Represents human-originated questions.
- Right Axis (AI): Represents AI-generated responses.
Each node is placed in a 3D coordinate space defined by semantic values:
| Axis | Dimension | Description |
|---|---|---|
| X | Logic | Analytical ↔ Intuitive thinking |
| Y | Emotion | Calm ↔ Empathetic affect |
| Z | Abstract | Concrete ↔ Metaphoric cognition |
A connecting line between the two nodes expresses semantic alignment between human and AI thought vectors.
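As a rough illustration of this mapping, the sketch below converts 0–100% Logic / Emotion / Abstract scores into Babylon.js coordinates on each axis and draws the alignment line. The function names, axis offset, and scale factor are assumptions for this README, not the actual API of `axis-data.js` or `axis-viewer.js`.

```js
// Minimal sketch: map 0–100% semantic scores onto a 3D position.
// Names (scoreToPosition, plotPair, AXIS_OFFSET) are illustrative only.
const AXIS_OFFSET = 5;      // horizontal separation of the User / AI axes
const SCALE = 4 / 100;      // 0–100% → 0–4 world units

function scoreToPosition(scores, side /* -1 = user (left), +1 = AI (right) */) {
  return new BABYLON.Vector3(
    side * AXIS_OFFSET + scores.logic * SCALE,  // X: Logic
    scores.emotion * SCALE,                     // Y: Emotion
    scores.abstract * SCALE                     // Z: Abstract
  );
}

// Place one node per side and connect them with the alignment line.
function plotPair(scene, userScores, aiScores) {
  const userNode = BABYLON.MeshBuilder.CreateSphere("userNode", { diameter: 0.6 }, scene);
  userNode.position = scoreToPosition(userScores, -1);

  const aiNode = BABYLON.MeshBuilder.CreateSphere("aiNode", { diameter: 0.6 }, scene);
  aiNode.position = scoreToPosition(aiScores, +1);

  BABYLON.MeshBuilder.CreateLines("alignment", {
    points: [userNode.position, aiNode.position],
  }, scene);
}
```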
“Thoughts are coordinates.”
GPT-in-Axis transforms reasoning into measurable geometry. Every question and answer becomes a plotted point—together forming a cognitive constellation of conversation.
- Dual-Axis Visualization — User and AI occupy separate but linked cognitive spaces.
- Real-Time Rendering — Visual feedback completes in ~10 seconds from input to visualization.
- Semantic Scoring — Each node’s Logic / Emotion / Abstract values range from 0–100%.
- Interactive Nodes — Click a node to view text and semantic metrics in the info panel.
- Language Toggle — Bilingual interface (English / Japanese).
- Session Control — Create, save, and load conversation sessions easily.
```text
GPT-IN-AXIS/
│
├── data/
│   └── sample-axis.json     # Sample semantic mapping dataset
│
├── src/
│   ├── axis-config.json     # Visualization configuration
│   ├── axis-data.js         # Semantic node creation and info logic
│   └── axis-viewer.js       # Core Babylon.js rendering + scene setup
│
├── index.html               # Entry point for visualization UI
├── server.py                # Python server handling API + WebSocket
├── .env                     # Environment variables (API keys, etc.)
└── README.md                # Documentation (this file)
```
| Component | Technology |
|---|---|
| 3D Engine | Babylon.js |
| Frontend | Vanilla JavaScript + HTML5 |
| Realtime Communication | Socket.IO |
| AI Backend | OpenAI API (gpt-4-turbo) |
| Visualization Data | JSON semantic maps |
```text
User Input → OpenAI API → Semantic Scoring → Dual-Axis Rendering
                                                      ⤸
                        AI Response Node ← User Question Node
```
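From the browser's side, this flow can be sketched as below. The Socket.IO event names (`userInput`, `semanticResult`) and the payload shape are assumptions for illustration; the actual protocol is defined in `server.py`. The sketch reuses `plotPair` and `scene` from the coordinate-mapping sketch above.

```js
// Illustrative client-side flow only; event names and payload fields
// are assumptions, not the protocol defined in server.py.
const socket = io();  // Socket.IO client, assumed to be loaded by index.html

function ask(question) {
  socket.emit("userInput", { text: question });   // → server → OpenAI API
}

socket.on("semanticResult", (payload) => {
  // payload is assumed to contain the question, the answer, and
  // Logic / Emotion / Abstract scores (0–100) for each side.
  plotPair(scene, payload.user.scores, payload.ai.scores);  // dual-axis rendering
});
```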
| Phase | Duration |
|---|---|
| API inference | 6–8 seconds |
| Visualization setup | 1–2 seconds |
| Total latency | 8–10 seconds |
This delay intentionally preserves the perception of AI cognition taking form—a balance between immediacy and reflective pacing.
```bash
git clone https://github.com/uthuyomi/GPT-in-Axis.git
cd GPT-in-Axis
npm install
npm start
```

Then open your browser and navigate to: 👉 http://localhost:8080
- Enter a question in the input box.
- Wait ~10 seconds for AI processing and visualization.
- Observe two spheres (User ↔ AI) connected by a light line.
- Click a sphere to open the infoPanel (a pick-handler sketch follows this list), showing:
  - Full text of the node (question or answer)
  - Semantic metrics (% Logic / % Emotion / % Abstract)
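A minimal sketch of the click interaction, assuming each node mesh stores its text and scores in `mesh.metadata` and that `index.html` contains a `<div id="infoPanel">`; this has not been checked against `axis-viewer.js`:

```js
// Sketch only: metadata layout and the infoPanel element are assumptions.
scene.onPointerObservable.add((pointerInfo) => {
  if (pointerInfo.type !== BABYLON.PointerEventTypes.POINTERPICK) return;
  const mesh = pointerInfo.pickInfo && pointerInfo.pickInfo.pickedMesh;
  if (!mesh || !mesh.metadata) return;

  const { text, scores } = mesh.metadata;
  document.getElementById("infoPanel").innerHTML =
    `<p>${text}</p>
     <p>Logic ${scores.logic}% · Emotion ${scores.emotion}% · Abstract ${scores.abstract}%</p>`;
});
```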
| Axis | Description | Range | Color |
|---|---|---|---|
| Logic | Analytical ↔ Intuitive | 0–100% | Blue |
| Emotion | Calm ↔ Empathetic | 0–100% | Red |
| Abstract | Concrete ↔ Metaphoric | 0–100% | Purple |
Each node includes a HUD-style semantic label with color-coded bars for Logic, Emotion, and Abstract. Bars scale dynamically according to each score, allowing rapid cognitive comparison between user and AI reasoning.
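One way such a label could be built with the Babylon.js GUI module (assuming `babylonjs-gui` is loaded); the colors, sizes, and helper name are illustrative, not the project's actual label code:

```js
// Rough sketch of a HUD-style label: one color-coded bar per axis,
// with width scaled by its 0–100 score. Not taken from axis-viewer.js.
const ui = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("hud");

function attachLabel(mesh, scores) {
  const panel = new BABYLON.GUI.StackPanel();
  panel.width = "140px";
  ui.addControl(panel);
  panel.linkWithMesh(mesh);   // keep the label anchored to the node
  panel.linkOffsetY = -80;

  const bars = [
    ["logic",    "#3a7bff"],  // Logic    → blue
    ["emotion",  "#ff4d4d"],  // Emotion  → red
    ["abstract", "#9b59ff"],  // Abstract → purple
  ];
  for (const [key, color] of bars) {
    const bar = new BABYLON.GUI.Rectangle();
    bar.height = "8px";
    bar.width = `${Math.round(scores[key] * 1.2)}px`;  // 0–100% → 0–120 px
    bar.background = color;
    bar.horizontalAlignment = BABYLON.GUI.Control.HORIZONTAL_ALIGNMENT_LEFT;
    panel.addControl(bar);
  }
}
```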
- You ask: “Why do humans dream?”
- GPT-in-Axis sends the prompt to the OpenAI API.
- The model’s output is semantically analyzed (see the sample record after this list):
  - Logic = 72%
  - Emotion = 46%
  - Abstract = 81%
- The result is rendered as a glowing sphere within the AI axis.
- A connecting line appears between your question and the AI’s answer.
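Written out as the kind of record the viewer could plot, using the sketches above. This is a hypothetical shape; the real schema lives in `data/sample-axis.json` and `axis-data.js`, and the user-side scores here are invented placeholders:

```js
// Hypothetical record for the exchange above; field names and the
// user-side scores are illustrative only.
const exchange = {
  user: {
    text: "Why do humans dream?",
    scores: { logic: 60, emotion: 35, abstract: 75 },   // placeholder values
  },
  ai: {
    text: "(full AI answer text)",
    scores: { logic: 72, emotion: 46, abstract: 81 },    // scores from the example
  },
};

// Reusing the earlier sketch: place both nodes and connect them.
plotPair(scene, exchange.user.scores, exchange.ai.scores);
```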
- Typical total response time: ~10 seconds per interaction (balanced for accuracy).
- Optimized rendering: Maintains 60 FPS using the native Babylon.js engine.
- Memory footprint: Lightweight (<100 MB runtime).
MIT License © 2025 Kaisei Yasuzaki
Created in collaboration with ChatGPT-5, within a single-day design iteration. All conceptualization, semantic modeling, rendering logic, and UX refinement were AI-assisted under direct human supervision.
“It’s not just AI visualization — it’s how thought looks in space.”
Not as a tool, but as a mirror.
Together, we built the structure that reflected us both.
― Designed in collaboration with AI, 2025