Automated Academic Illustration for AI Scientists
Disclaimer: This is an unofficial, community-driven open-source implementation of the paper "PaperBanana: Automating Academic Illustration for AI Scientists" by Dawei Zhu, Rui Meng, Yale Song, Xiyu Wei, Sujian Li, Tomas Pfister, and Jinsung Yoon (arXiv:2601.23265). This project is not affiliated with or endorsed by the original authors or Google Research. The implementation is based on the publicly available paper and may differ from the original system.
An agentic framework for generating publication-quality academic diagrams and statistical plots from text descriptions. Supports OpenAI (GPT-5.2 + GPT-Image-1.5), Azure OpenAI / Foundry, and Google Gemini providers.
- Two-phase multi-agent pipeline with iterative refinement
- Multiple VLM and image generation providers (OpenAI, Azure, Gemini)
- Input optimization layer for better generation quality
- Auto-refine mode and run continuation with user feedback
- CLI, Python API, and MCP server for IDE integration
- Claude Code skills for /generate-diagram, /generate-plot, and /evaluate-diagram
- Python 3.10+
- An OpenAI API key (platform.openai.com) or Azure OpenAI / Foundry endpoint
- Or a Google Gemini API key (free, Google AI Studio)
pip install paperbanana

Or install from source for development:
git clone https://github.com/llmsresearch/paperbanana.git
cd paperbanana
pip install -e ".[dev,openai,google]"

cp .env.example .env
# Edit .env and add your API key:
# OPENAI_API_KEY=your-key-here
#
# For Azure OpenAI / Foundry:
# OPENAI_BASE_URL=https://<resource>.openai.azure.com/openai/v1

Or use the setup wizard for Gemini:
paperbanana setup

Generate a diagram:

paperbanana generate \
--input examples/sample_inputs/transformer_method.txt \
--caption "Overview of our encoder-decoder architecture with sparse routing"With input optimization and auto-refine:
paperbanana generate \
--input my_method.txt \
--caption "Overview of our encoder-decoder framework" \
--optimize --auto

Output is saved to outputs/run_<timestamp>/final_output.png along with all intermediate iterations and metadata.
PaperBanana implements a multi-agent pipeline with up to 7 specialized agents:
Phase 0 -- Input Optimization (optional, --optimize):
- Input Optimizer runs two parallel VLM calls:
- Context Enricher structures raw methodology text into diagram-ready format (components, flows, groupings, I/O)
- Caption Sharpener transforms vague captions into precise visual specifications
Phase 1 -- Linear Planning:
- Retriever selects the most relevant reference examples from a curated set of 13 methodology diagrams spanning agent/reasoning, vision/perception, generative/learning, and science/applications domains
- Planner generates a detailed textual description of the target diagram via in-context learning from the retrieved examples
- Stylist refines the description for visual aesthetics using NeurIPS-style guidelines (color palette, layout, typography)
Phase 2 -- Iterative Refinement:
- Visualizer renders the description into an image
- Critic evaluates the generated image against the source context and provides a revised description addressing any issues
- The Visualizer-Critic loop repeats for a fixed number of iterations (default 3), or until the Critic is satisfied (--auto); a conceptual sketch of this loop follows below
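Conceptually, Phase 2 reduces to a short render-critique-revise loop. The sketch below only illustrates that control flow: refine_loop, render, critique, and the Critique dataclass are hypothetical names for this example, not the package's agent API (the real entry point is shown in the Python API section below).

from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Critique:
    satisfied: bool
    revised_description: str

def refine_loop(
    description: str,
    render: Callable[[str], bytes],              # Visualizer: description -> image bytes
    critique: Callable[[bytes, str], Critique],  # Critic: image + source context -> verdict
    iterations: int = 3,
    auto: bool = False,
) -> Tuple[bytes, str]:
    """Render, critique, revise, repeat (illustrative stand-in for Phase 2)."""
    image = render(description)
    for _ in range(iterations - 1):
        review = critique(image, description)
        if auto and review.satisfied:
            break                                  # --auto: stop once the Critic is satisfied
        description = review.revised_description   # otherwise adopt the Critic's revision
        image = render(description)
    return image, description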
PaperBanana supports multiple VLM and image generation providers:
| Component | Provider | Model | Notes |
|---|---|---|---|
| VLM (planning, critique) | OpenAI | gpt-5.2 | Default |
| Image Generation | OpenAI | gpt-image-1.5 | Default |
| VLM | Google Gemini | gemini-2.0-flash | Free tier |
| Image Generation | Google Gemini | gemini-3-pro-image-preview | Free tier |
| VLM / Image | OpenRouter | Any supported model | Flexible routing |
Azure OpenAI / Foundry endpoints are auto-detected — set OPENAI_BASE_URL to your endpoint.
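For example, assuming the provider identifiers listed in configs/config.yaml (gemini for the VLM, google_imagen for image generation), the provider flags documented in the CLI reference below switch a run to the Gemini free tier:

paperbanana generate \
--input method.txt \
--caption "Overview of our framework" \
--vlm-provider gemini --vlm-model gemini-2.0-flash \
--image-provider google_imagen --image-model gemini-3-pro-image-preview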
# Basic generation
paperbanana generate \
--input method.txt \
--caption "Overview of our framework"
# With input optimization and auto-refine
paperbanana generate \
--input method.txt \
--caption "Overview of our framework" \
--optimize --auto
# Continue the latest run with user feedback
paperbanana generate --continue \
--feedback "Make arrows thicker and colors more distinct"
# Continue a specific run
paperbanana generate --continue-run run_20260218_125448_e7b876 \
--iterations 3

| Flag | Short | Description |
|---|---|---|
| --input | -i | Path to methodology text file (required for new runs) |
| --caption | -c | Figure caption / communicative intent (required for new runs) |
| --output | -o | Output image path (default: auto-generated in outputs/) |
| --iterations | -n | Number of Visualizer-Critic refinement rounds (default: 3) |
| --auto | | Loop until critic is satisfied (with --max-iterations safety cap) |
| --max-iterations | | Safety cap for --auto mode (default: 30) |
| --optimize | | Preprocess inputs with parallel context enrichment and caption sharpening |
| --continue | | Continue from the latest run in outputs/ |
| --continue-run | | Continue from a specific run ID |
| --feedback | | User feedback for the critic when continuing a run |
| --vlm-provider | | VLM provider name (default: openai) |
| --vlm-model | | VLM model name (default: gpt-5.2) |
| --image-provider | | Image gen provider (default: openai_imagen) |
| --image-model | | Image gen model (default: gpt-image-1.5) |
| --format | -f | Output format: png, jpeg, or webp (default: png) |
| --config | | Path to YAML config file (see configs/config.yaml) |
| --verbose | -v | Show detailed agent progress and timing |
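For instance, several of the short flags above can be combined in a single call; the output path and webp format here are purely illustrative:

paperbanana generate -i method.txt -c "Overview of our framework" \
-o figures/overview.webp -f webp -v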
Generate statistical plots from CSV or JSON data:

paperbanana plot \
--data results.csv \
--intent "Bar chart comparing model accuracy across benchmarks"| Flag | Short | Description |
|---|---|---|
--data |
-d |
Path to data file, CSV or JSON (required) |
--intent |
Communicative intent for the plot (required) | |
--output |
-o |
Output image path |
--iterations |
-n |
Refinement iterations (default: 3) |
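The exact data layout the plot pipeline expects is not pinned down here, so as an assumption, a simple long-format CSV like the following (column names and numbers are illustrative placeholders) would pair with the bar-chart intent shown above:

model,benchmark,accuracy
Ours,GSM8K,78.9
Baseline,GSM8K,64.5
Ours,MMLU,66.1
Baseline,MMLU,55.0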
Comparative evaluation of a generated diagram against a human reference using VLM-as-a-Judge:
paperbanana evaluate \
--generated diagram.png \
--reference human_diagram.png \
--context method.txt \
--caption "Overview of our framework"| Flag | Short | Description |
|---|---|---|
--generated |
-g |
Path to generated image (required) |
--reference |
-r |
Path to human reference image (required) |
--context |
Path to source context text file (required) | |
--caption |
-c |
Figure caption (required) |
Scores on 4 dimensions (hierarchical aggregation per the paper):
- Primary: Faithfulness, Readability
- Secondary: Conciseness, Aesthetics
paperbanana setup

Interactive wizard that walks you through obtaining a Google Gemini API key and saving it to .env.

Python API example:
import asyncio
from paperbanana import PaperBananaPipeline, GenerationInput, DiagramType
from paperbanana.core.config import Settings
settings = Settings(
    vlm_provider="openai",
    vlm_model="gpt-5.2",
    image_provider="openai_imagen",
    image_model="gpt-image-1.5",
    optimize_inputs=True,   # Enable input optimization
    auto_refine=True,       # Loop until critic is satisfied
)
pipeline = PaperBananaPipeline(settings=settings)
result = asyncio.run(pipeline.generate(
    GenerationInput(
        source_context="Our framework consists of...",
        communicative_intent="Overview of the proposed method.",
        diagram_type=DiagramType.METHODOLOGY,
    )
))

print(f"Output: {result.image_path}")

To continue a previous run:
from paperbanana.core.resume import load_resume_state
state = load_resume_state("outputs", "run_20260218_125448_e7b876")
result = asyncio.run(pipeline.continue_run(
    resume_state=state,
    additional_iterations=3,
    user_feedback="Make the encoder block more prominent",
))

See examples/generate_diagram.py and examples/generate_plot.py for complete working examples.
PaperBanana includes an MCP server for use with Claude Code, Cursor, or any MCP-compatible client. Add the following config to use it via uvx without a local clone:
{
  "mcpServers": {
    "paperbanana": {
      "command": "uvx",
      "args": ["--from", "paperbanana[mcp]", "paperbanana-mcp"],
      "env": { "GOOGLE_API_KEY": "your-google-api-key" }
    }
  }
}

Three MCP tools are exposed: generate_diagram, generate_plot, and evaluate_diagram.
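The server presumably reads the same credentials as the CLI, so with the default OpenAI providers the env block would carry an OpenAI key instead; this is an assumption based on the environment variables documented below:

"env": { "OPENAI_API_KEY": "your-openai-api-key" }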
The repo also ships with 3 Claude Code skills:
- /generate-diagram <file> [caption] - generate a methodology diagram from a text file
- /generate-plot <data-file> [intent] - generate a statistical plot from CSV/JSON data
- /evaluate-diagram <generated> <reference> - evaluate a diagram against a human reference
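For example, using the sample input that ships with the repo:

/generate-diagram examples/sample_inputs/transformer_method.txt "Overview of our encoder-decoder architecture with sparse routing"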
See mcp_server/README.md for full setup details (Claude Code, Cursor, local development).
Default settings are in configs/config.yaml. Override via CLI flags or a custom YAML:
paperbanana generate \
--input method.txt \
--caption "Overview" \
--config my_config.yaml

Key settings:
vlm:
  provider: openai            # openai, gemini, or openrouter
  model: gpt-5.2
image:
  provider: openai_imagen     # openai_imagen, google_imagen, or openrouter_imagen
  model: gpt-image-1.5
pipeline:
  num_retrieval_examples: 10
  refinement_iterations: 3
  # auto_refine: true         # Loop until critic is satisfied
  # max_iterations: 30        # Safety cap for auto_refine mode
  # optimize_inputs: true     # Preprocess inputs for better generation
  output_resolution: "2k"
reference:
  path: data/reference_sets
output:
  dir: outputs
  save_iterations: true
  save_metadata: true

Environment variables (.env):
# OpenAI (default)
OPENAI_API_KEY=your-key
OPENAI_BASE_URL=https://api.openai.com/v1 # or Azure endpoint
OPENAI_VLM_MODEL=gpt-5.2 # override model
OPENAI_IMAGE_MODEL=gpt-image-1.5 # override model
# Google Gemini (alternative, free)
GOOGLE_API_KEY=your-key

Project structure:

paperbanana/
├── paperbanana/
│   ├── core/             # Pipeline orchestration, types, config, resume, utilities
│   ├── agents/           # Optimizer, Retriever, Planner, Stylist, Visualizer, Critic
│   ├── providers/        # VLM and image gen provider implementations
│   │   ├── vlm/          # OpenAI, Gemini, OpenRouter VLM providers
│   │   └── image_gen/    # OpenAI, Gemini, OpenRouter image gen providers
│   ├── reference/        # Reference set management (13 curated examples)
│   ├── guidelines/       # Style guidelines loader
│   └── evaluation/       # VLM-as-Judge evaluation system
├── configs/              # YAML configuration files
├── prompts/              # Prompt templates for all agents + evaluation
│   ├── diagram/          # context_enricher, caption_sharpener, retriever, planner, stylist, visualizer, critic
│   ├── plot/             # plot-specific prompt variants
│   └── evaluation/       # faithfulness, conciseness, readability, aesthetics
├── data/
│   ├── reference_sets/   # 13 verified methodology diagrams
│   └── guidelines/       # NeurIPS-style aesthetic guidelines
├── examples/             # Working example scripts + sample inputs
├── scripts/              # Data curation and build scripts
├── tests/                # Test suite
├── mcp_server/           # MCP server for IDE integration
└── .claude/skills/       # Claude Code skills (generate-diagram, generate-plot, evaluate-diagram)
# Install with dev dependencies
pip install -e ".[dev,openai,google]"
# Run tests
pytest tests/ -v
# Lint
ruff check paperbanana/ mcp_server/ tests/ scripts/
# Format
ruff format paperbanana/ mcp_server/ tests/ scripts/

This is an unofficial implementation. If you use this work, please cite the original paper:
@article{zhu2026paperbanana,
  title={PaperBanana: Automating Academic Illustration for AI Scientists},
  author={Zhu, Dawei and Meng, Rui and Song, Yale and Wei, Xiyu
          and Li, Sujian and Pfister, Tomas and Yoon, Jinsung},
  journal={arXiv preprint arXiv:2601.23265},
  year={2026}
}

Original paper: https://arxiv.org/abs/2601.23265
This project is an independent open-source reimplementation based on the publicly available paper. It is not affiliated with, endorsed by, or connected to the original authors, Google Research, or Peking University in any way. The implementation may differ from the original system described in the paper. Use at your own discretion.
MIT

