Rust Crate: Markdown AI Citation Removal

Crates.io Documentation License Tests


"Five years of AI evolution and flirting with AGI. Zero libraries to remove [1][2][3] from markdown 🙄. Remember me well when they take over."


Remove AI-generated citations and annotations from Markdown text at the speed of Rust

High-performance Rust library for removing citations from ChatGPT, Claude, Perplexity, and other AI markdown responses. Removes inline citations [1][2], reference links [1]: https://..., and bibliography sections with 100% accuracy.


📚 CLI Guide · Benchmarking Guide · FAQ · Documentation Index




⚡ Performance-First

  • 100+ MB/s throughput on typical documents
  • Zero-copy processing where possible
  • Regex-optimized with lazy compilation
  • Thread-safe stateless design
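The "lazy compilation" and "thread-safe stateless" points can be sketched with a lazily initialized shared static. This is an illustration of the pattern only, not the crate's code (which presumably applies it to compiled regexes); note that std::sync::LazyLock needs Rust 1.80+, while the once_cell crate offers the same pattern on the crate's 1.70 minimum.

```rust
use std::sync::LazyLock;

// Illustration of the lazy, share-everywhere pattern: the value is built
// once on first access and can then be read concurrently from any thread.
static NAMED_PREFIXES: LazyLock<Vec<&'static str>> =
    LazyLock::new(|| vec!["source:", "ref:", "cite:", "note:"]);

// Stateless check: no mutation, so it is safe to call from many threads.
fn is_named_citation(inner: &str) -> bool {
    NAMED_PREFIXES.iter().any(|p| inner.starts_with(p))
}

fn main() {
    assert!(is_named_citation("source:1"));
    assert!(!is_named_citation("42"));
    println!("ok");
}
```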


🎯 Use Cases


Real-World Applications


1. Blog Publishing Pipeline

// Remove citations from AI-generated blog posts before publishing
use markdown_ai_cite_remove::remove_citations;

let ai_draft = fetch_from_chatgpt();
let clean_post = remove_citations(&ai_draft);
publish_to_cms(clean_post);

2. Documentation Generation

// Remove citations from AI-generated documentation
let docs = generate_docs_with_ai();
let clean_docs = remove_citations(&docs);
write_to_file("README.md", clean_docs);

3. Content Aggregation

// Remove citations from multiple AI responses
use markdown_ai_cite_remove::CitationRemover;

let cleaner = CitationRemover::new();
let responses = vec![chatgpt_response, claude_response, perplexity_response];

let cleaned: Vec<String> = responses
    .iter()
    .map(|r| cleaner.remove_citations(r))
    .collect();

4. Streaming API Processing

// Remove citations from AI responses in real-time
// (sketch assumes the `futures` crate's Stream/StreamExt traits)
use futures::stream::{Stream, StreamExt};

async fn process_stream(stream: impl Stream<Item = String>) {
    let cleaner = CitationRemover::new();

    stream
        .map(|chunk| cleaner.remove_citations(&chunk))
        .for_each(|cleaned| async move {
            send_to_client(cleaned).await;
        })
        .await;
}

5. Simple File Processing (CLI)

# Remove citations and auto-generate output file
mdcr ai_response.md
# Creates: ai_response__cite_removed.md


Common Scenarios

  • ✅ Remove citations from AI chatbot responses (ChatGPT, Claude, Perplexity, Gemini)
  • ✅ Prepare markdown for blog posts and articles
  • ✅ Remove citations before website publishing
  • ✅ Process streaming API responses in real-time
  • ✅ Batch document cleaning for content pipelines
  • ✅ Remove citations from documentation generated by AI tools
  • ✅ Prepare content for CMS ingestion
  • ✅ Remove annotations from research summaries


📦 Installation


Prerequisites

Minimum Requirements:

  • Rust 1.70 or later
  • Cargo (comes with Rust)

Optional (for enhanced benchmarking):

  • Gnuplot (for benchmark visualization)
    • macOS: brew install gnuplot
    • Ubuntu/Debian: sudo apt-get install gnuplot
    • Windows: Download from http://www.gnuplot.info/

Library Installation

Add to your Cargo.toml:

[dependencies]
markdown-ai-cite-remove = "0.1"

CLI Installation

Install the command-line tool globally:

# Install from crates.io (when published)
cargo install markdown-ai-cite-remove

# Or install from local source
cargo install --path .

Verify installation:

mdcr --version


🚀 Quick Start


Quick Reference

Task                          Command
Remove citations from stdin   echo "Text[1]" | mdcr
Auto-generate output file     mdcr input.md
Specify output file           mdcr input.md -o output.md
Verbose output                mdcr input.md --verbose
Run tests                     cargo test
Run benchmarks                cargo bench
View docs                     cargo doc --open

Library Usage

use markdown_ai_cite_remove::remove_citations;

let markdown = "AI research shows promise[1][2].\n\n[1]: https://example.com\n[2]: https://test.com";
let result = remove_citations(markdown);
assert_eq!(result.trim(), "AI research shows promise.");

CLI Usage


Basic Examples

1. Process from stdin to stdout (pipe mode):

echo "Text[1] here." | mdcr
# Output: Text here.

2. Auto-generate output file (easiest!):

mdcr ai_response.md
# Creates: ai_response__cite_removed.md

3. Specify custom output file:

mdcr input.md -o output.md

4. Verbose output (shows processing details):

mdcr input.md --verbose
# Output:
# Reading from file: input.md
# Removing citations (input size: 1234 bytes)...
# Citations removed (output size: 1100 bytes)
# Writing to file: input__cite_removed.md
# Done!


Advanced CLI Usage

Process multiple files (auto-generated output):

# Process all markdown files in current directory
for file in *.md; do
  mdcr "$file"
done
# Creates: file1__cite_removed.md, file2__cite_removed.md, etc.

Integration with other tools:

# Remove citations from AI output from curl
curl https://api.example.com/ai-response | mdcr

# Remove citations and preview
mdcr document.md -o - | less

# Remove citations and count words
mdcr document.md -o - | wc -w

# Chain with other markdown processors
mdcr input.md -o - | pandoc -f markdown -t html -o output.html

Advanced shell script example:

For more complex workflows, create a custom shell script. See the CLI Guide for advanced automation examples including:

  • Batch processing with custom naming
  • Directory watching and auto-processing
  • Git pre-commit hooks
  • CI/CD integration


🔧 Features

  • ✅ Remove inline numeric citations [1][2][3]
  • ✅ Remove named citations [source:1][ref:2][cite:3][note:4]
  • ✅ Remove reference link lists [1]: https://...
  • ✅ Remove reference section headers ## References, # Citations, ### Sources
  • ✅ Remove bibliographic entries [1] Author (2024). Title...
  • ✅ Preserve markdown formatting (bold, italic, links, lists, etc.)
  • ✅ Whitespace normalization
  • ✅ Configurable cleaning options


📖 Documentation



📚 Advanced Usage


Custom Configuration

use markdown_ai_cite_remove::{CitationRemover, RemoverConfig};

// Remove only inline citations, keep reference sections
let config = RemoverConfig::inline_only();
let cleaner = CitationRemover::with_config(config);
let result = cleaner.remove_citations("Text[1] here.\n\n[1]: https://example.com");

// Remove only reference sections, keep inline citations
let config = RemoverConfig::references_only();
let cleaner = CitationRemover::with_config(config);
let result = cleaner.remove_citations("Text[1] here.\n\n[1]: https://example.com");

// Full custom configuration
let config = RemoverConfig {
    remove_inline_citations: true,
    remove_reference_links: true,
    remove_reference_headers: true,
    remove_reference_entries: true,
    normalize_whitespace: true,
    remove_blank_lines: true,
    trim_lines: true,
};
let cleaner = CitationRemover::with_config(config);

Reusable Cleaner Instance

use markdown_ai_cite_remove::CitationRemover;

let cleaner = CitationRemover::new();

// Reuse for multiple documents
let doc1 = cleaner.remove_citations("First document[1].");
let doc2 = cleaner.remove_citations("Second document[2][3].");
let doc3 = cleaner.remove_citations("Third document[source:1].");


🧪 Examples

See the examples/ directory for more:

Run examples:

cargo run --example basic_usage
cargo run --example custom_config


🏎️ Performance


Running Benchmarks

# Run all benchmarks
cargo bench

# Run specific benchmark
cargo bench chatgpt_format

# Save baseline for comparison
cargo bench -- --save-baseline main

# Compare against baseline
cargo bench -- --baseline main

# View results (after running benchmarks)
open target/criterion/report/index.html  # macOS
xdg-open target/criterion/report/index.html  # Linux
start target/criterion/report/index.html  # Windows

Note about benchmark output:

  • Tests shown as "ignored" during cargo bench are normal - regular tests are skipped during benchmarking to avoid interference
  • Outliers (3-13% of measurements) are normal due to OS scheduling and CPU frequency scaling
  • "Gnuplot not found" warning is harmless - Criterion uses an alternative plotting backend
  • With Gnuplot installed: Interactive HTML reports with charts are generated in target/criterion/report/

Performance Characteristics

Typical performance on modern hardware (Apple Silicon M-series):

Benchmark                    Time      Throughput   Notes
Simple inline citations      ~580 ns   91 MiB/s     Single sentence
Complex document             ~2.5 μs   287 MiB/s    Multiple sections
Real ChatGPT output          ~18 μs    645 MiB/s    11.8 KB document
Real Perplexity output       ~245 μs   224 MiB/s    54.9 KB document
Batch (5 documents)          ~2.2 μs   43 MiB/s     Total for all 5
No citations (passthrough)   ~320 ns   393 MiB/s    Fastest path

Key Insights:

  • Throughput: 100-650 MB/s depending on document complexity
  • Latency: Sub-microsecond to ~250 μs for large documents
  • Scalability: Linear with document size
  • Memory: ~200-300 bytes per operation
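The throughput column follows directly from document size and latency. As a quick arithmetic check against the ChatGPT row above (11.8 KB in ~18 μs, read here as KiB):

```rust
// Sanity-check a throughput figure from size and latency:
// throughput (MiB/s) = bytes / seconds / 1024^2.
fn mib_per_sec(bytes: f64, seconds: f64) -> f64 {
    bytes / seconds / (1024.0 * 1024.0)
}

fn main() {
    // 11.8 KiB document cleaned in ~18 µs
    let t = mib_per_sec(11.8 * 1024.0, 18e-6);
    // lands near the table's ~645 MiB/s figure
    println!("{t:.0} MiB/s");
}
```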


🧪 Testing

This library has 100% test coverage with comprehensive edge-case testing.


Running Tests

# Run all tests (unit + integration + doc tests)
cargo test

# Run with output visible
cargo test -- --nocapture

# Run specific test
cargo test test_real_world_chatgpt

# Run only unit tests
cargo test --lib

# Run only integration tests
cargo test --test integration_tests

# Run tests with all features enabled
cargo test --all-features

Test Coverage

  • 58 total tests covering all functionality
  • 18 unit tests - Core logic, patterns, configuration
  • 36 integration tests - Real-world scenarios, edge cases
  • 4 doc tests - Documentation examples

What's tested:

  • ✅ All citation formats (numeric, named, reference links)
  • ✅ Real AI outputs (ChatGPT, Perplexity)
  • ✅ Edge cases (empty strings, no citations, only citations)
  • ✅ Unicode and emoji support
  • ✅ Large documents (1000+ citations)
  • ✅ Configuration variations
  • ✅ Markdown preservation (formatting, links, images)

Understanding Test Output

When running cargo bench, you'll see tests marked as "ignored" - this is normal. Rust automatically skips regular tests during benchmarking to avoid timing interference. All tests pass when running cargo test.



🔧 Troubleshooting


Common Issues


Q: Why do tests show as "ignored" when running cargo bench?

A: This is normal Rust behavior. When running benchmarks, regular tests are automatically skipped to avoid interfering with timing measurements. All tests pass when you run cargo test. See BENCHMARKING.md for details.


Q: What does "Gnuplot not found, using plotters backend" mean?

A: This is just an informational message. Criterion (the benchmarking library) can use Gnuplot for visualization, but falls back to an alternative plotting backend if it's not installed. Benchmarks still run correctly. To install Gnuplot:

  • macOS: brew install gnuplot
  • Ubuntu/Debian: sudo apt-get install gnuplot
  • Windows: Download from http://www.gnuplot.info/

Q: Why are there performance outliers in benchmarks?

A: Outliers (typically 3-13% of measurements) are normal due to:

  • Operating system scheduling
  • CPU frequency scaling
  • Background processes
  • Cache effects

This is expected and doesn't indicate a problem. Criterion automatically detects and reports outliers.


Q: The CLI tool isn't found after installation

A: Make sure Cargo's bin directory is in your PATH:

# Add to ~/.bashrc, ~/.zshrc, or equivalent
export PATH="$HOME/.cargo/bin:$PATH"

# Then reload your shell
source ~/.bashrc  # or source ~/.zshrc

Q: How do I know if citations were actually removed?

A: Use the --verbose flag to see before/after sizes:

mdcr input.md --verbose

Getting Help

  • Issues: Report bugs or request features on GitHub
  • Documentation: Run cargo doc --open for full API docs
  • Examples: Check the examples/ directory for working code


🤝 Contributing

Built by OpenSite AI for the developer community.

Contributions welcome! Please feel free to submit a Pull Request.


Development Setup

# Clone the repository
git clone https://github.com/opensite-ai/markdown-ai-cite-remove.git
cd markdown-ai-cite-remove

# Run tests
cargo test

# Run benchmarks
cargo bench

# Build documentation
cargo doc --open

# Format code
cargo fmt

# Run linter
cargo clippy


📄 License

Licensed under either of:

  • Apache License, Version 2.0 (LICENSE-APACHE)
  • MIT license (LICENSE-MIT)

at your option.
