diff --git a/assets/icons/readme-svg-banner.svg b/assets/icons/readme-svg-banner.svg
deleted file mode 100644
index 68c278ca..00000000
--- a/assets/icons/readme-svg-banner.svg
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
- README-AI
-
diff --git a/assets/images/llm-content/dalle/shellbox.png b/assets/images/llm-content/dalle/shellbox.png
new file mode 100644
index 00000000..c45fe27d
Binary files /dev/null and b/assets/images/llm-content/dalle/shellbox.png differ
diff --git a/configuration/header/index.html b/configuration/header/index.html
index 68cd481a..b1f05da9 100644
--- a/configuration/header/index.html
+++ b/configuration/header/index.html
@@ -5,7 +5,7 @@
 "name": "README-AI",
 "url": "https://docs.readme-ai.com/"
 }
-

Header Template Styles

A header template style determines how your project's header section is structured and displayed in the README file. README-AI offers several pre-designed header styles to help brand your project and create a professional appearance.

Default Options

Example

Use the --header-style option to select from the following styles:

README-AI-logo

README-AI

Where Documentation Meets Innovation!

license last-commit repo-top-language repo-language-count

Built with the tools and technologies:

precommit Ruff GNU Bash Pytest Docker Python GitHub Actions
Poetry AIOHTTP Material for MkDocs OpenAI Google Gemini Pydantic


Features:

  • Centered alignment
  • Logo above project name
  • Traditional README layout
  • Ideal for most projects

README-AI-STREAMLIT

Streamlining README creation with AI magic!

license last-commit repo-top-language repo-language-count

Built with the tools and technologies:

Streamlit precommit Ruff GNU Bash Pytest Python Poetry



Features:

  • Left-aligned layout
  • Logo and title on same line
  • Space-efficient design
  • Perfect for smaller README files

PYFLINK-POC

Streamlining data flow with PyFlink power!

license last-commit repo-top-language repo-language-count

Built with the tools and technologies:

GNU Bash Python AIOHTTP pandas Apache Kafka Apache Flink



Features:

  • Left-aligned text
  • Logo floated to the right
  • Contemporary asymmetric design
  • Great for documentation sites

README magic, AI-powered documentation made easy!

license last-commit repo-top-language repo-language-count

Built with the tools and technologies:

Streamlit precommit Ruff GNU Bash Pytest Python Poetry



Features:

  • Full-width SVG banner support
  • Centered alignment
  • Scalable vector graphics
  • Ideal for custom branding


Headers

A header template style determines how your project's header section is structured and displayed in the README file. README-AI offers several pre-designed header styles to help brand your project and create a professional appearance.

Header Style Options

Example

Use the --header-style option to select from the following markdown header templates:

README-AI-logo

README-AI

Where Documentation Meets Innovation!

license last-commit repo-top-language repo-language-count

Built with the tools and technologies:

precommit Ruff GNU Bash Pytest Docker Python GitHub Actions
Poetry AIOHTTP Material for MkDocs OpenAI Google Gemini Pydantic


Features:

  • Centered alignment
  • Logo above project name
  • Traditional README layout
  • Ideal for most projects

README-AI-STREAMLIT

Streamlining README creation with AI magic!

license last-commit repo-top-language repo-language-count

Built with the tools and technologies:

Streamlit precommit Ruff GNU Bash Pytest Python Poetry



Features:

  • Left-aligned layout
  • Logo and title on same line
  • Space-efficient design
  • Perfect for smaller README files

PYFLINK-POC

Streamlining data flow with PyFlink power!

license last-commit repo-top-language repo-language-count

Built with the tools and technologies:

GNU Bash Python AIOHTTP pandas Apache Kafka Apache Flink



Features:

  • Left-aligned text
  • Logo floated to the right
  • Contemporary asymmetric design
  • Great for documentation sites

readme-ai-banner

❯ REPLACE-ME

license last-commit repo-top-language repo-language-count

Built with the tools and technologies:

Anthropic precommit Ruff GNU Bash Pytest Docker Python
GitHub Actions Poetry AIOHTTP Material for MkDocs OpenAI Google Gemini Pydantic



Features:

  • Full-width SVG banner support
  • Centered alignment
  • Scalable vector graphics
  • Ideal for custom branding

 ██████ ██████   ██   ████   ██   ██ ██████          ██   ██████
 ██  ██ ██      ████  ██  ██ ███ ███ ██             ████    ██
 ██████ ████   ██  ██ ██  ██ ██ █ ██ ████   ██████ ██  ██   ██
diff --git a/search/search_index.json b/search/search_index.json
index 7d12089d..f2223748 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"README-AI","text":"README-AI Your AI-Powered README Generator Designed for simplicity, customization, and developer productivity. 

README-AI is a developer tool and framework that combines robust data processing modules with generative AI models. It streamlines documentation and enhances developer productivity by auto-generating comprehensive README.md files.

With README-AI, you can:

  • Automate Documentation: Synchronize data from third-party sources and generate documentation automatically.
  • Customize & Flexibly Style: Choose from dozens of options for styling, formatting, badges, header designs, and more.
  • Support Multiple Languages & Projects: Work across a wide range of programming languages and project types.
  • Leverage Multiple LLMs: Compatible with OpenAI, Ollama, Anthropic, Google Gemini and Offline Mode.
  • Follow Markdown Best Practices: Create clean, professional-looking documentation using Markdown formatting best practices.
"},{"location":"#key-features","title":"Key Features","text":"
  • Automated README generation
  • Customizable output and styling
  • Language and project agnostic
  • Multi-LLM support
  • Markdown best practices
"},{"location":"#quick-start","title":"Quick Start","text":"

If you're ready to jump right in, here's how to get started:

"},{"location":"#installation","title":"Installation","text":"
pip install readmeai\n
"},{"location":"#usage","title":"Usage","text":"
readmeai -r <repository_url> -o <output_file>\n

Otherwise, you can explore the documentation for more detailed information. Cheers!

"},{"location":"contributing/","title":"Contributing Guidelines","text":"

Thanks for your interest in contributing to readme-ai. Please review these guidelines to ensure a smooth process.

"},{"location":"contributing/#make-valuable-contributions","title":"Make Valuable Contributions","text":"

Strive to make useful, creative, and high-quality contributions. This isn't meant to be a high bar, but more of a guiding principle and philosophy. Here's what we mean by these terms:

Useful: Solve common problems, use cases, bugs, or new features.

Creative: Innovative and helping us all grow and learn new things.

High Quality: Well-written, structured, and explained.

"},{"location":"contributing/#ways-to-contribute","title":"Ways to Contribute","text":"

To improve and grow the project, we need your help! Here are some ways to get involved:

Activity Ideas \ud83d\udc4b Discussions Start a discussion by asking a question or making a suggestion. \ud83d\udc1b Open an Issue Find unhandled exceptions and bugs in the codebase. \ud83d\udcc4 Documentation Write documentation for the project. \ud83e\uddea Testing Write unit tests to increase code coverage. \ud83e\udde9 Feature Requests Brainstorm new ideas such as a CLI option to select any language. \ud83d\udee0\ufe0f Code Contributions Contribute to the codebase and submit a pull request. \ud83d\udd22 Code Readability Find ways to make code more readable and easier to understand. \ud83e\udd14 Other Anything else you can think of!

These are just a few examples, and we welcome any other ideas you may have!

"},{"location":"contributing/#submitting-changes","title":"Submitting Changes","text":"
  1. Fork the repository and clone it locally.
  2. Create a new branch with a descriptive name, e.g., feature/new-feature-name or bugfix-issue-123.
  3. Make focused changes with clear commits.
  4. Open a pull request that documents the changes you've made and why they're necessary.
  5. Respond to code reviews from maintainers.
"},{"location":"contributing/#code-quality-expectations","title":"Code Quality Expectations","text":"
  • Clear, well-documented code
  • Include tests for new code
  • Follow project style standards
  • Rebase onto latest main branch
"},{"location":"contributing/#attribution","title":"Attribution","text":"

Contributors to our project will be acknowledged in the project's README.md file.

"},{"location":"contributing/#license","title":"License","text":"

By contributing to our project, you agree to license your contributions under the project's open source license. The project's license can be found in the LICENSE file.

Thank you for your interest in contributing to readme-ai! We appreciate your help and look forward to working with you.

"},{"location":"faq/","title":"README-AI Frequently Asked Questions","text":""},{"location":"faq/#general-questions","title":"General Questions","text":""},{"location":"faq/#q-what-is-readme-ai","title":"Q: What is README-AI?","text":"

A: README-AI is a tool that automatically generates comprehensive README files for your projects using artificial intelligence.

"},{"location":"faq/#q-which-ai-models-does-readme-ai-support","title":"Q: Which AI models does README-AI support?","text":"

A: README-AI primarily uses OpenAI's GPT models, but there are ongoing efforts to add support for other models like Claude, Azure OpenAI, Cohere, and Llama2 (via Replicate).

"},{"location":"faq/#installation-and-setup","title":"Installation and Setup","text":""},{"location":"faq/#q-how-do-i-install-readme-ai","title":"Q: How do I install README-AI?","text":"

A: You can install README-AI using pip:

pip install readmeai\n
Alternatively, you can use Docker:
docker run -it -e OPENAI_API_KEY=your_key_here -v \"$(pwd)\":/app zeroxeli/readme-ai:latest\n

"},{"location":"faq/#q-im-getting-an-error-when-trying-to-install-on-ubuntu-how-can-i-fix-it","title":"Q: I'm getting an error when trying to install on Ubuntu. How can I fix it?","text":"

A: If you're encountering issues with conda environment creation, try using a virtual environment with pip instead. Ensure you have Python 3.8 or higher installed.
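For example, a standard virtual environment setup looks like this:

python3 -m venv .venv\nsource .venv/bin/activate\npip install readmeai\n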

"},{"location":"faq/#usage","title":"Usage","text":""},{"location":"faq/#q-how-do-i-generate-a-readme-for-my-project","title":"Q: How do I generate a README for my project?","text":"

A: Use the following command:

readmeai -o readme-ai.md -r https://github.com/your-username/your-repo\n
Replace the URL with your repository link.

"},{"location":"faq/#q-can-i-use-readme-ai-with-private-repositories","title":"Q: Can I use README-AI with private repositories?","text":"

A: Yes, but you may need to provide authentication. For Bitbucket, use the format:

https://username:bitbucket_apikey@bitbucket.org/username/repo\n
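Putting it together, a full command might look something like this (substituting your own username and API key):

readmeai -r https://username:bitbucket_apikey@bitbucket.org/username/repo -o readme-ai.md\n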

"},{"location":"faq/#q-does-readme-ai-work-with-gitlab-repositories","title":"Q: Does README-AI work with GitLab repositories?","text":"

A: Yes, README-AI supports GitLab repositories. Use the same command format as with GitHub repos.
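For example, a GitLab run might look like this:

readmeai -r https://gitlab.com/your-username/your-repo -o readme-ai.md\n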

"},{"location":"faq/#troubleshooting","title":"Troubleshooting","text":""},{"location":"faq/#q-im-getting-a-404-not-found-error-what-should-i-do","title":"Q: I'm getting a \"404 Not Found\" error. What should I do?","text":"

A: Ensure your OpenAI API key is correct and has sufficient permissions. Also, check if you're using the correct API endpoint.
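A quick check is to re-export the key in your shell before rerunning, assuming the key is supplied via the OPENAI_API_KEY environment variable as in the Docker example above:

export OPENAI_API_KEY=your_key_here\nreadmeai --api openai -r https://github.com/your-username/your-repo\n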

"},{"location":"faq/#q-the-script-runs-but-doesnt-generate-a-file-why","title":"Q: The script runs but doesn't generate a file. Why?","text":"

A: Check the permissions in your current directory. Ensure README-AI has write access to create the output file.

"},{"location":"faq/#q-im-seeing-a-429-too-many-requests-error-how-can-i-resolve-this","title":"Q: I'm seeing a \"429 Too Many Requests\" error. How can I resolve this?","text":"

A: This error occurs when you've exceeded the rate limit for the OpenAI API. Wait a while before trying again, or consider upgrading your API plan.
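You can also lower the request rate with the --rate-limit option listed in the CLI configuration, for example:

readmeai --rate-limit 3 -r https://github.com/your-username/your-repo\n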

"},{"location":"faq/#q-why-am-i-getting-a-notfound-object-is-not-iterable-error","title":"Q: Why am I getting a \"NotFound object is not iterable\" error?","text":"

A: This error may occur if you're using an incompatible model. Ensure you're using a supported model like \"gpt-3.5-turbo\" or \"gpt-4\".
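For example, pinning a supported model explicitly:

readmeai --api openai --model gpt-3.5-turbo -r https://github.com/your-username/your-repo\n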

"},{"location":"faq/#features-and-customization","title":"Features and Customization","text":""},{"location":"faq/#q-can-i-use-readme-ai-with-languages-other-than-english","title":"Q: Can I use README-AI with languages other than English?","text":"

A: While README-AI primarily generates content in English, there are ongoing efforts to add internationalization (i18n) support for languages like Spanish and Italian.

"},{"location":"faq/#q-is-it-possible-to-use-readme-ai-in-azure-devops","title":"Q: Is it possible to use README-AI in Azure DevOps?","text":"

A: While there isn't native integration, you could potentially use README-AI as part of your Azure DevOps pipeline by incorporating it into your build or release process.
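As a rough sketch (not an official integration), a script step might install the package and run it against the checked-out working directory, using offline mode here to avoid wiring up an API key:

pip install readmeai\nreadmeai --api offline --repository \"$(pwd)\" --output README.md\n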

"},{"location":"faq/#q-can-i-customize-the-openai-endpoint-or-model-used","title":"Q: Can I customize the OpenAI endpoint or model used?","text":"

A: There are ongoing efforts to make the configuration more extensible, including options to specify different endpoints (like Azure OpenAI) and models.

"},{"location":"faq/#contributing","title":"Contributing","text":""},{"location":"faq/#q-how-can-i-contribute-to-readme-ai","title":"Q: How can I contribute to README-AI?","text":"

A: You can contribute by submitting pull requests on GitHub. Areas of contribution include adding support for new AI models, improving documentation, adding tests, and fixing bugs.

If you have any other questions or issues, please check the GitHub repository or open a new issue for support.

"},{"location":"philosophy/","title":"Philosophy and Vision","text":""},{"location":"philosophy/#empowering-developers-enlightening-projects","title":"Empowering Developers, Enlightening Projects","text":"

Readme-ai envisions a future where every software project, regardless of size or complexity, is accompanied by clear, comprehensive, and up-to-date documentation. We strive to create an ecosystem where documentation is no longer an afterthought but an integral, effortless part of the development process.

"},{"location":"philosophy/#our-core-vision","title":"Our Core Vision","text":"
  1. Democratize Quality Documentation
     • Make professional-grade documentation accessible to all developers, from hobbyists to enterprise teams.
     • Break down language barriers by offering multilingual documentation generation.

  2. Accelerate Open Source Adoption
     • Enhance the discoverability and usability of open source projects through superior documentation.
     • Foster a more inclusive open source community by lowering the barrier to contribution.

  3. Evolve with AI Advancements
     • Continuously integrate cutting-edge AI technologies to improve documentation quality and generation speed.
     • Pioneer new ways of understanding and describing code structures and functionalities.

  4. Cultivate Documentation Best Practices
     • Establish Readme-ai as the gold standard for project documentation in the software industry.
     • Encourage a culture where well-documented projects are the norm, not the exception.

  5. Enhance Developer Productivity
     • Free developers to focus on coding by automating the documentation process.
     • Reduce the time from development to deployment by streamlining the documentation workflow.

  6. Promote Code Understanding
     • Facilitate better code comprehension across teams and organizations.
     • Bridge the gap between technical and non-technical stakeholders through clear, AI-generated explanations.

  7. Ensure Adaptability and Extensibility
     • Create a flexible platform that can easily integrate with various development workflows and tools.
     • Build a robust plugin ecosystem that allows the community to extend Readme-ai's capabilities.

  8. Champion Ethical AI Use
     • Lead by example in the responsible and transparent use of AI in developer tools.
     • Prioritize user privacy and data security in all aspects of our AI-driven processes.
"},{"location":"philosophy/#long-term-impact","title":"Long-Term Impact","text":"

We see Readme-ai as a catalyst for a paradigm shift in software development practices. By making high-quality documentation effortless and ubiquitous, we aim to:

  • Accelerate innovation by making it easier for developers to build upon each other's work.
  • Improve software quality by encouraging better-documented and more maintainable codebases.
  • Enhance collaboration within and between development teams through clearer project communication.
  • Increase the overall efficiency of the software development lifecycle.

Through Readme-ai, we aspire to create a world where every line of code is matched by a line of clear, concise, and helpful documentation, empowering developers and enlightening projects for the benefit of all.

"},{"location":"troubleshooting/","title":"Troubleshooting","text":""},{"location":"troubleshooting/#help-menus","title":"Help Menus","text":"

The --help flag can be used to view the help menu for a command, e.g., for readmeai:

\u276f readmeai --help\n
"},{"location":"troubleshooting/#viewing-the-version","title":"Viewing the Version","text":"

When seeking help, it's important to determine the version of readmeai that you're using \u2014 sometimes the problem is already solved in a newer version.

To check the installed version:

\u276f readmeai --version\n

The following are also valid:

\u276f readmeai -V\n\u276f pip show readmeai\n
"},{"location":"troubleshooting/#open-an-issue-on-github","title":"Open an issue on GitHub","text":"

The issue tracker on GitHub is a good place to report bugs and request features. Make sure to search for similar issues first, as it is common for someone else to encounter the same problem.

"},{"location":"troubleshooting/#faq","title":"FAQ","text":"

See the FAQ for answers to common questions and troubleshooting tips.

"},{"location":"why/","title":"Why use README-AI?","text":"

In the fast-paced world of software development, clear and comprehensive documentation is crucial. Yet, creating and maintaining high-quality README files can be time-consuming and often overlooked. This is where Readme-ai comes in, revolutionizing the way developers approach project documentation.

"},{"location":"why/#automated-documentation-generation","title":"Automated Documentation Generation","text":"

Readme-ai harnesses the power of artificial intelligence to automatically generate detailed, structured README files for your projects. By analyzing your codebase, Readme-ai creates documentation that is:

  • Comprehensive: Covers all essential aspects of your project, from installation instructions to usage examples.
  • Consistent: Maintains a uniform structure across all your projects, enhancing readability and professionalism.
"},{"location":"why/#time-saving-and-efficient","title":"Time-Saving and Efficient","text":"
  • Focus on Coding: Spend more time writing code and less time worrying about documentation.
  • Quick Setup: Get started with minimal configuration, allowing you to generate a README in minutes.
  • Customizable Templates: Fine-tune the output to match your project's specific needs and your personal style.
"},{"location":"why/#enhanced-project-visibility","title":"Enhanced Project Visibility","text":"
  • Professional Appearance: Engage potential users and contributors with polished, well-structured documentation.
  • Comprehensive Overview: Provide a clear, concise summary of your project, making it easier for others to understand and use.
  • SEO-Friendly: Generated READMEs are optimized for search engines, improving your project's discoverability.
"},{"location":"why/#integration-and-flexibility","title":"Integration and Flexibility","text":"
  • Extensible: Customize and extend readme-ai to fit your specific documentation needs.
  • Multiple AI Backends: Choose from various AI providers (OpenAI, Ollama, Google Gemini) or use the offline mode for sensitive projects.
  • Language Agnostic: Works with a wide range of programming languages and project types, ensuring compatibility with your existing codebase.
"},{"location":"why/#community-driven-development","title":"Community-Driven Development","text":"
  • Open Source: Benefit from and contribute to a growing ecosystem of documentation tools and best practices.

  • Continuous Improvement: Regular updates and improvements driven by community feedback and contributions.

  • Shared Knowledge: Learn from and contribute to a gallery of exemplary READMEs generated by the community.

"},{"location":"why/#key-features","title":"Key Features","text":"
  1. Flexible README Generation: Combines robust repository context extraction with generative AI to create detailed and accurate README files.

  2. Customizable Output: Offers numerous CLI options for tailoring the README to your project's needs:

    • Badge styles and colors

    • Header designs

    • Table of contents styles

    • Project logos

  3. Language Agnostic: Works with a wide range of programming languages and project types, automatically detecting and summarizing key aspects of your codebase.

  4. Project Analysis: Automatically extracts and presents important information about your project:

    • Directory structure
    • File summaries
    • Dependencies
    • Setup instructions
  5. Multi-LLM Support: Compatible with various language model APIs, including:

    • OpenAI
    • Ollama
    • Anthropic
    • Google Gemini
    • Offline Mode
  6. Offline Mode: Generate a basic README structure without requiring an internet connection or API calls.

  7. Markdown Expertise: Leverages best practices in Markdown formatting for clean, professional-looking documentation.

"},{"location":"blog/","title":"Blog","text":"

\u2728 coming soon ...

"},{"location":"configuration/","title":"Configuration","text":"

README-AI offers a wide range of configuration options to customize your README generation. This page provides a comprehensive list of all available options with detailed explanations.

"},{"location":"configuration/#cli-options","title":"CLI Options","text":"Option Description Default Impact --align Text alignment in header center Affects the visual layout of the README header --api LLM API service offline Determines which AI service is used for content generation --badge-color Badge color (name or hex) 0080ff Customizes the color of status badges in the README --badge-style Badge icon style type flat Changes the visual style of status badges --base-url Base URL for the repository v1/chat/completions Used for API requests to the chosen LLM service --context-window Max context window of LLM API 3999 Limits the amount of context provided to the LLM --emojis Add emojis to README sections False Adds visual flair to section headers --header-style Header template style classic Changes the overall look of the README header --image Project logo image blue Sets the main image displayed in the README --model Specific LLM model to use gpt-3.5-turbo Chooses the AI model for content generation --output Output filename readme-ai.md Specifies the name of the generated README file --rate-limit Max API requests per minute 5 Prevents exceeding API rate limits --repository Repository URL or local path None Specifies the project to analyze --temperature Creativity level for generation 0.9 Controls the randomness of the AI's output --toc-style Table of contents style bullet Changes the format of the table of contents --top-p Top-p sampling probability 0.9 Fine-tunes the AI's output diversity --tree-depth Max depth of directory tree 2 Controls the detail level of the project structure

Some options have a significant impact on the generated README's appearance and content. Experiment with different settings to find the best configuration for your project.
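As a starting point, here is one way several of these options might be combined into a single command (values are illustrative, not recommendations):

readmeai --repository https://github.com/your-username/your-repo \\\n         --api openai \\\n         --model gpt-3.5-turbo \\\n         --badge-style flat-square \\\n         --header-style classic \\\n         --toc-style fold \\\n         --emojis \\\n         --output readme-ai.md\n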

"},{"location":"configuration/badges/","title":"Badges","text":"

A badge is a simple embeddable icon that displays various metrics such as the number of stars or forks for a repository, languages used in the project, CI/CD build status, test coverage, the license of the project, and more. Badges are a great way to provide quick information about your project to users and visitors.

README-AI offers various badge styles to enhance your project's README. This guide explains how to use and customize these badges.

"},{"location":"configuration/badges/#badge-styles","title":"Badge Styles","text":"

Use the --badge-style option to select from the following styles:

default | flat | flat-square | for-the-badge | plastic | skills | skills-light | social

"},{"location":"configuration/badges/#how-it-works","title":"How It Works","text":"

README-AI automatically detects your project's dependencies and technologies during the repository ingestion process. It then uses these dependencies and technologies to generate a comprehensive list of relevant badges for your project.

When you provide the --badge-style option to the readmeai command, two sets of badges are generated:

  1. Default Metadata Badges: The default set is always included in the generated README file. The default badges include the project license, last commit, top language, and total languages.
  2. Project Dependency Badges: When the --badge-style argument is provided to the CLI, a second badge set is generated, representing the extracted dependencies and metadata from your codebase.

The badge sets are formatted in the README header and provide the reader with a quick overview of the project's key metrics and technologies.

"},{"location":"configuration/badges/#example-usage","title":"Example Usage","text":"

Let's generate a README with custom badge colors and styles using the --badge-color and --badge-style options:

readmeai --badge-color orange \\\n         --badge-style flat-square \\\n         --repository https://github.com/eli64s/readme-ai\n

The command above generates a README with the following badge configuration:

Example

Badge Generation

Built with the tools and technologies:

The --badge-color option only modifies the default badge set, while the --badge-style option is applied to both the default and project dependency badges

"},{"location":"configuration/badges/#tips-for-using-badges","title":"Tips for Using Badges","text":"
  • Choose a badge style that complements your project's overall design.
  • Use badges to highlight relevant information about your project, such as license, build status, and test coverage.
  • Don't overuse badges \u2013 too many can clutter your README and make it hard to read.
  • Ensure that all badge links are correct and up-to-date.
  • Consider using custom badges for project-specific information or metrics.
"},{"location":"configuration/badges/#references","title":"References","text":"

Thank you to the following projects for providing the badges used in this project:

  • Shields.io
  • Aveek-Saha/GitHub-Profile-Badges
  • Ileriayo/Markdown-Badges
  • tandpfun/skill-icons
"},{"location":"configuration/emojis/","title":"Emojis","text":"

Emojis are a fun way to add some personality to your README.md file. README-AI allows you to automatically add emojis to all headers in the generated README file by providing the --emojis option to the readmeai command.

"},{"location":"configuration/emojis/#how-it-works","title":"How It Works","text":"

When you provide the --emojis option to the readmeai command, README-AI automatically adds emojis to all headers in the generated README file.
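For example, a minimal invocation with emojis enabled might look like this:

readmeai --emojis --repository https://github.com/eli64s/readme-ai\n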

Default (emojis disabled) | Emojis Enabled

"},{"location":"configuration/emojis/#readme-ai-streamlit","title":"README-AI-STREAMLIT","text":""},{"location":"configuration/emojis/#codebase-efficiency-elevated-documentation-enhanced","title":"Codebase Efficiency Elevated, Documentation Enhanced!","text":"

Built with the tools and technologies:

"},{"location":"configuration/emojis/#table-of-contents","title":"Table of Contents","text":"
  • Overview
  • Features
  • Project Structure
  • Project Index
  • Getting Started
  • Prerequisites
  • Installation
  • Usage
  • Tests
  • Roadmap
  • Contributing
  • License
  • Acknowledgments
"},{"location":"configuration/emojis/#overview","title":"Overview","text":"

...

"},{"location":"configuration/emojis/#readme-ai-streamlit_1","title":"README-AI-STREAMLIT","text":""},{"location":"configuration/emojis/#codebase-efficiency-elevated-documentation-enhanced_1","title":"Codebase Efficiency Elevated, Documentation Enhanced!","text":"

Built with the tools and technologies:

"},{"location":"configuration/emojis/#table-of-contents_1","title":"\ud83d\udcd2 Table of Contents","text":"
  • \ud83d\udccd Overview
  • \ud83d\udc7e Features
  • \ud83d\udcc1 Project Structure
  • \ud83d\udcc2 Project Index
  • \ud83d\ude80 Getting Started
  • \ud83d\udccb Prerequisites
  • \u2699\ufe0f Installation
  • \ud83e\udd16 Usage
  • \ud83e\uddea Tests
  • \ud83d\udccc Roadmap
  • \ud83d\udd30 Contributing
  • \ud83c\udf97 License
  • \ud83d\ude4c Acknowledgments
"},{"location":"configuration/emojis/#overview_1","title":"\ud83d\udccd Overview","text":"

...

"},{"location":"configuration/header/","title":"Header Template Styles","text":"

A header template style determines how your project's header section is structured and displayed in the README file. README-AI offers several pre-designed header styles to help brand your project and create a professional appearance.

"},{"location":"configuration/header/#default-options","title":"Default OptionsREADME-AI-STREAMLIT","text":"

Example

Use the --header-style option to select from the following styles:

Classic | Compact | Modern | SVG | ASCII | ASCII_BOX

README-AI

Where Documentation Meets Innovation!

Built with the tools and technologies:

Features:

  • \u25ce Centered alignment
  • \u25ce Logo above project name
  • \u25ce Traditional README layout
  • \u25ce Ideal for most projects

Streamlining README creation with AI magic!

Built with the tools and technologies:

Features:

  • \u25ce Left-aligned layout
  • \u25ce Logo and title on same line
  • \u25ce Space-efficient design
  • \u25ce Perfect for smaller README files

PYFLINK-POC

Streamlining data flow with PyFlink power!

Built with the tools and technologies:

Features:

  • \u25ce Left-aligned text
  • \u25ce Logo floated to the right
  • \u25ce Contemporary asymmetric design
  • \u25ce Great for documentation sites

README magic, AI-powered documentation made easy!

Built with the tools and technologies:

Features:

  • \u25ce Full-width SVG banner support
  • \u25ce Centered alignment
  • \u25ce Scalable vector graphics
  • \u25ce Ideal for custom branding

\n\u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588          \u2588\u2588   \u2588\u2588\u2588\u2588\u2588\u2588\n\u2588\u2588  \u2588\u2588 \u2588\u2588      \u2588\u2588\u2588\u2588  \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588 \u2588\u2588\u2588 \u2588\u2588             \u2588\u2588\u2588\u2588    \u2588\u2588\n\u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588  \u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588 \u2588 \u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588   \u2588\u2588\n\u2588\u2588 \u2588\u2588  \u2588\u2588     \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588   \u2588\u2588 \u2588\u2588            \u2588\u2588\u2588\u2588\u2588\u2588   \u2588\u2588\n\u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588        \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588\n

\u276f REPLACE-ME

Built with the tools and technologies:

Features:

  • \u25ce Text-based art logo
  • \u25ce Minimal and retro style
  • \u25ce No image dependencies
  • \u25ce Good for terminal-focused tools

\n\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557\n\u2551                                                                    \u2551\n\u2551  \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588          \u2588\u2588   \u2588\u2588\u2588\u2588\u2588\u2588   \u2551\n\u2551  \u2588\u2588  \u2588\u2588 \u2588\u2588      \u2588\u2588\u2588\u2588  \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588 \u2588\u2588\u2588 \u2588\u2588             \u2588\u2588\u2588\u2588    \u2588\u2588     \u2551\n\u2551  \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588  \u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588 \u2588 \u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588   \u2588\u2588     \u2551\n\u2551  \u2588\u2588 \u2588\u2588  \u2588\u2588     \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588   \u2588\u2588 \u2588\u2588            \u2588\u2588\u2588\u2588\u2588\u2588   \u2588\u2588     \u2551\n\u2551  \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588        \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588   \u2551\n\u2551                                                                    \u2551\n\u2551                                                                    \u2551\n\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d\n

\u276f REPLACE-ME

Built with the tools and technologies:

Features:

  • \u25ce Text-based art logo
  • \u25ce Minimal and retro style
  • \u25ce No image dependencies
  • \u25ce Good for terminal-focused tools

"},{"location":"configuration/header/#how-it-works","title":"How It Works","text":"

README-AI provides several ways to customize your header style:

  1. Default Styles: Choose from pre-defined header layouts
  2. Alignment Options: Set text and image alignment
  3. Custom Sizing: Adjust logo and image dimensions
  4. Badge Integration: Incorporates shield badges and tech stack icons

The selected style will determine how your project's name, logo, description, and badges are arranged in the header section.

"},{"location":"configuration/header/#examples","title":"Examples","text":""},{"location":"configuration/header/#using-classic-style","title":"Using Classic Style","text":"
readmeai --header-style classic --repository https://github.com/username/project\n
"},{"location":"configuration/header/#using-modern-style-with-custom-alignment","title":"Using Modern Style with Custom Alignment","text":"
readmeai --header-style modern --align left --repository https://github.com/username/project\n
"},{"location":"configuration/header/#combining-with-other-options","title":"Combining with Other Options","text":"
readmeai --header-style compact \\\n         --badge-style flat \\\n         --image gradient \\\n         --repository https://github.com/username/project\n
"},{"location":"configuration/header/#tips-for-using-header-styles","title":"Tips for Using Header Styles","text":"
  • Classic: Best for traditional open-source projects that need a professional look
  • Modern: Great for documentation sites and projects with longer READMEs
  • Compact: Ideal for smaller projects or when space is at a premium
  • SVG: Perfect for projects that need custom branding or full-width banners
  • ASCII: Good for terminal applications or when you want a retro feel

Consider these factors when choosing a header style:
  • Your project's target audience
  • The amount of content in your README
  • Whether you have a custom logo or banner
  • The overall aesthetic of your documentation
  • How the style works with your chosen badge style

Some header styles may look different on different platforms or markdown renderers. It's a good idea to test how your chosen style looks on your target platform.

"},{"location":"configuration/project_logo/","title":"Project Logo","text":"

A project logo is a visual representation of your project that appears at the top of your README file. It helps to brand your project and make it more recognizable. README-AI offers various options for adding a logo to your project's README.

"},{"location":"configuration/project_logo/#default-options","title":"Default Options","text":"

Use the --image option to select from the following logo styles:

Blue | Gradient | Black | Cloud | Purple | Grey

"},{"location":"configuration/project_logo/#how-it-works","title":"How It Works","text":"

README-AI provides several ways to include a logo in your project's README:

  1. Default Images: Choose from a set of pre-defined logos.
  2. Custom Images: Use your own image by providing a URL or file path.
  3. LLM Images: Generate a unique logo using AI (requires an LLM API).

The selected or generated logo will be placed at the top of your README file, helping to visually identify your project.

"},{"location":"configuration/project_logo/#examples","title":"Examples","text":""},{"location":"configuration/project_logo/#selecting-a-default-image","title":"Selecting a Default Image","text":"

To use one of the default images, specify the image name with the --image option:

readmeai --image gradient --repository https://github.com/username/project\n
"},{"location":"configuration/project_logo/#providing-a-custom-image","title":"Providing a Custom Image","text":"

You can provide readme-ai with a custom image by using the --image custom option:

readmeai --image custom --repository https://github.com/username/project\n

You will be prompted to provide a path to your image on your local machine or via URL:

Provide an image file path or URL:\n
"},{"location":"configuration/project_logo/#llm-image-generation","title":"LLM Image Generation","text":"

To generate a logo using a text-to-image model from a LLM API (i.e. OpenAI), use the --image llm option:

readmeai --image llm --api openai --repository https://github.com/username/project\n

This will generate a unique logo that you can display in your project's documentation. The prompt used for text-to-image generation can be found here.

Example

The following examples show generated logos by readme-ai for various open-source projects:

1. eli64s/readme-ai  2. eli64s/readme-ai  3. eli64s/readme-ai-streamlit  4. PiyushSuthar/github-readme-quotes

README-AI

README-AI

README-AI-STREAMLIT

GITHUB-README-QUOTES

The quality and relevance of LLM-generated logos can vary. It's a good idea to review and potentially edit the generated logo to ensure it meets your project's needs.

"},{"location":"configuration/project_logo/#tips-for-using-project-logos","title":"Tips for Using Project Logos","text":"
  • Choose a logo that represents your project's purpose or theme.
  • Ensure the logo is clear and recognizable even at smaller sizes.
  • If using a custom image, make sure it's high quality and appropriately sized.
  • When using LLM-generated logos, you may want to generate several options to choose from.
  • Consider how the logo will look alongside your project's badges and other README content.
  • If your project is part of a larger organization or ecosystem, consider using a logo that aligns with that branding.
"},{"location":"configuration/table_of_contents/","title":"Table of Contents (ToC) Templates","text":"

README-AI offers flexible options for generating a Table of Contents (ToC) in your README file, directly from the command line. You can specify different styles of ToC generation using the --toc-style option when running the CLI.

"},{"location":"configuration/table_of_contents/#cli-usage-for-toc-styles","title":"CLI Usage for ToC Styles","text":"

When using the readmeai CLI, you can customize the Table of Contents by specifying one of the supported styles with the --toc-style flag.

"},{"location":"configuration/table_of_contents/#supported-toc-styles","title":"Supported ToC Styles:","text":"
  1. Bullet (bullet)
     • Displays a simple bulleted list format for the Table of Contents.

  2. Fold (fold)
     • Generates a collapsible Table of Contents. Users can click to expand the list of sections.

  3. Links (links)
     • A quick link format that groups key sections under a "Quick Links" heading.

  4. Number (number)
     • Numbers the sections in the Table of Contents for clear ordering and hierarchy.

  5. Roman (roman)
     • Uses Roman numerals to number the sections, providing a classic and formal appearance.
"},{"location":"configuration/table_of_contents/#example-cli-command","title":"Example CLI Command","text":"

Here\u2019s how to generate a README file with a Numbered Table of Contents:

readmeai --repository ./my_project --toc-style number --output README.md\n

In this example:
  • The --repository flag specifies the local project directory.
  • The --toc-style number flag sets the Table of Contents to use a numbered format.
  • The --output README.md flag specifies the output file name.

"},{"location":"configuration/table_of_contents/#another-example-with-foldable-toc","title":"Another Example with Foldable ToC","text":"

To generate a README with a Foldable Table of Contents, run:

readmeai --repository ./my_project --toc-style fold --output README.md\n

This will create a ToC that is hidden by default, allowing users to expand it if needed.

"},{"location":"configuration/table_of_contents/#available-toc-styles","title":"Available ToC Styles","text":"
-tc, --toc-style [bullet|fold|links|number|roman]\n

The --toc-style option supports the following values:

  • bullet: Simple bullet list
  • fold: Collapsible ToC
  • links: Quick links format
  • number: Numbered list of sections
  • roman: Roman numeral list of sections
"},{"location":"configuration/table_of_contents/#additional-customizations","title":"Additional Customizations","text":"

You can further customize your README file with options like header styles (--header-style), logo image (--image), and badges (--badge-style).
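For example, these options can be combined into a single command (values are illustrative):

readmeai --repository ./my_project --toc-style roman --header-style modern --image gradient --badge-style flat --output README.md\n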

"},{"location":"examples/gallery/","title":"Example Gallery","text":"

Here are some examples of README files generated by readme-ai for various projects using different languages and frameworks.

| Language/Framework | Output File | Input Repository | Description |
| --- | --- | --- | --- |
| Python | readme-python.md | readme-ai | Core readme-ai project |
| TypeScript & React | readme-typescript.md | ChatGPT App | React Native ChatGPT app |
| PostgreSQL & DuckDB | readme-postgres.md | Buenavista | Postgres proxy server |
| Kotlin & Android | readme-kotlin.md | file.io Client | Android file sharing app |
| Streamlit | readme-streamlit.md | readme-ai-streamlit | Streamlit UI for readme-ai app |
| Rust & C | readme-rust-c.md | CallMon | System call monitoring tool |
| Docker & Go | readme-go.md | docker-gs-ping | Dockerized Go app |
| Java | readme-java.md | Minimal-Todo | Minimalist todo Java app |
| FastAPI & Redis | readme-fastapi-redis.md | async-ml-inference | Async ML inference service |
| Jupyter Notebook | readme-mlops.md | mlops-course | MLOps course repository |
| Apache Flink | readme-local.md | Local Directory | Example using a local directory |

See additional README files generated by readme-ai here

"},{"location":"guides/markdown_best_practices/","title":"Markdown Best Practices","text":"

This document provides a comprehensive guide to writing technical documentation using the GitHub flavored markdown spec. This guide includes examples of how to use various markdown elements to create visually appealing and informative documentation.

"},{"location":"guides/markdown_best_practices/#table-of-contents","title":"Table of Contents","text":"
  • Markdown Horizontal Rule
  • HTML Horizontal Rule
  • Table with Alignment
    • Multi-Line Table Cells
    • Task Lists
    • Merge Cells
  • Progress Bars
  • Highlighting
  • Underlining
  • Keyboard Shortcuts
    • Navigating
    • Editing
  • Centered Images
  • Horizontally Aligned Images
  • Small Images
  • Text Boxes
  • Text Wrapping
  • Inline Links
  • Reference Links
  • Simple Contact
  • Modern Contact with Social Icons
  • Contributing Graph
"},{"location":"guides/markdown_best_practices/#line-separators","title":"Line Separators","text":""},{"location":"guides/markdown_best_practices/#markdown-horizontal-rule","title":"Markdown Horizontal Rule","text":"
section end\n\n---\n
section end\n\n***\n
"},{"location":"guides/markdown_best_practices/#html-horizontal-rule","title":"HTML Horizontal Rule","text":"
<p>section end</p>\n\n<hr>\n
"},{"location":"guides/markdown_best_practices/#lists","title":"Lists","text":"

Things I need to do today:
  1. Fix usability problem
  2. Clean up the page
     • Make the headings bigger
  3. Push my changes
  4. Create code review
     • Describe my changes
     • Assign reviewers
     • Ask for feedback

"},{"location":"guides/markdown_best_practices/#tables","title":"Tables","text":""},{"location":"guides/markdown_best_practices/#table-with-alignment","title":"Table with Alignment","text":"
| Left Aligned | Centered | Right Aligned |\n| :---         | :---:    | ---:          |\n| Cell 1       | Cell 2   | Cell 3        |\n| Cell 4       | Cell 5   | Cell 6        |\n

This will render as:

Left Aligned Centered Right Aligned Cell 1 Cell 2 Cell 3 Cell 4 Cell 5 Cell 6"},{"location":"guides/markdown_best_practices/#multi-line-table-cells","title":"Multi-Line Table Cells","text":"
| Name | Details |\n| ---  | ---     |\n| Item1 | This text is on one line |\n| Item2 | This item has:<br>- Multiple items<br>- That we want listed separately |\n

This will render as:

Name Details Item1 This text is on one line Item2 This item has:- Multiple items- That we want listed separately"},{"location":"guides/markdown_best_practices/#task-lists","title":"Task Lists","text":"
| header 1 | header 2 |\n| ---      | ---      |\n| cell 1   | cell 2   |\n| cell 3   | <ul><li> - [ ] Task one </li><li> - [ ] Task two </li></ul> |\n

This will render as:

header 1 header 2 cell 1 cell 2 cell 3
  • - [ ] Task one
  • - [ ] Task two
"},{"location":"guides/markdown_best_practices/#merge-cells","title":"Merge Cells","text":"
<table>\n  <tr>\n    <td colspan=\"2\">I take up two columns!</td>\n  </tr>\n  <tr>\n    <td>First column</td>\n    <td>Second column</td>\n  </tr>\n</table>\n

This will render as:

I take up two columns! First column Second column"},{"location":"guides/markdown_best_practices/#text-styling-formatting","title":"Text Styling & Formatting","text":"
  • strikethrough or strikethrough
  • H2O is a liquid and C6H12O6 is a sugar.
  • 19th
  • X2 + Y2 = Z2
  • z2 + c
  • \u2003Install\u2003
"},{"location":"guides/markdown_best_practices/#progress-bars","title":"Progress Bars","text":"

22%

48%

77%

"},{"location":"guides/markdown_best_practices/#highlighting","title":"Highlighting","text":"

highlighted text.

"},{"location":"guides/markdown_best_practices/#underlining","title":"Underlining","text":"

I'm Underlined!

"},{"location":"guides/markdown_best_practices/#buttons-keyboard-shortcuts","title":"Buttons & Keyboard Shortcuts","text":"Click here Click here Or here Or here

Big Fat Button

"},{"location":"guides/markdown_best_practices/#keyboard-shortcuts","title":"Keyboard Shortcuts","text":"

Press Enter to go to the next page.

"},{"location":"guides/markdown_best_practices/#navigating","title":"Navigating","text":"

You can navigate through your items or search results using the keyboard. You can use Tab to cycle through results, and Shift + Tab to go backwards. Or use the arrow keys, \u2191, \u2192, \u2193 and \u2190.

To copy the selected text, press Ctrl + C.

"},{"location":"guides/markdown_best_practices/#editing","title":"Editing","text":"

Press Ctrl + S to save your changes. Select text and press Ctrl + B to make it bold.

"},{"location":"guides/markdown_best_practices/#math-equations","title":"Math Equations","text":"\\[ \\begin{aligned} \\dot{x} & = \\sigma(y-x) \\\\ \\dot{y} & = \\rho x - y - xz \\\\ \\dot{z} & = -\\beta z + xy \\end{aligned} \\] \\[ L = \\frac{1}{2} \\rho v^2 S C_L \\]"},{"location":"guides/markdown_best_practices/#images","title":"Images","text":""},{"location":"guides/markdown_best_practices/#simple-icons","title":"Simple Icons","text":""},{"location":"guides/markdown_best_practices/#docker","title":"Docker","text":""},{"location":"guides/markdown_best_practices/#docker_1","title":"Docker","text":""},{"location":"guides/markdown_best_practices/#centered-images","title":"Centered Images","text":""},{"location":"guides/markdown_best_practices/#horizontally-aligned-images","title":"Horizontally Aligned Images","text":""},{"location":"guides/markdown_best_practices/#small-images","title":"Small Images","text":"

Code documentation - Generated directory tree structure and summaries of the key files in your codebase.

Spike documentation - Generated directory tree structure and summaries of the key files in your codebase.

Chunking documentation - Generated directory tree structure and summaries of the key files in your codebase.

"},{"location":"guides/markdown_best_practices/#text-boxes","title":"Text Boxes","text":"This is text in the box. Much wow"},{"location":"guides/markdown_best_practices/#text-wrapping","title":"Text Wrapping","text":"

At the 2019 rendition of E3, an eccentric gamer in attendance interrupted Keanu Reeves' presentation of the role-playing game (RPG) Cyberpunk 2077, loudly claiming, \u201c\"You're breathtaking,\"\u201d which was directed at the actor-cum-presenter. The image macro used to build the \"You're Breathtaking\" meme generally features a still of Keanu Reeves pointing at someone in the audience in front of him - that someone is Peter Sark, though there are no images from Keanu's point of view that have since been used as part of the \"You're Breathtaking\" meme.

"},{"location":"guides/markdown_best_practices/#mermaid-diagrams","title":"Mermaid Diagrams","text":"
  • Mermaid Live Editor
graph TD;\n  A-->B;\n  A-->C;\n  B-->D;\n  C-->D;
graph TB\n\n  SubGraph1 --> SubGraph1Flow\n  subgraph \"SubGraph 1 Flow\"\n  SubGraph1Flow(SubNode 1)\n  SubGraph1Flow -- Choice1 --> DoChoice1\n  SubGraph1Flow -- Choice2 --> DoChoice2\n  end\n\n  subgraph \"Main Graph\"\n  Node1[Node 1] --> Node2[Node 2]\n  Node2 --> SubGraph1[Jump to SubGraph1]\n  SubGraph1 --> FinalThing[Final Thing]\nend
journey\n    title My working day\n    section Go to work\n      Make tea: 5: Me\n      Go upstairs: 3: Me\n      Do work: 1: Me, Cat\n    section Go home\n      Go downstairs: 5: Me\n      Sit down: 5: Me
graph TD\nA[FastAPI Application] --> B{HTTP Request}\nB --> C{Request Validation}\nC -- Valid --> D[Route Handler]\nC -- Invalid --> E[RequestValidationError]\nD --> F{Route Handler Execution}\nF -- Success --> G[Response]\nF -- Exception --> H{Exception Handling}\nH -- RequestError --> I[request_error_handler]\nH -- HTTPException --> J[http_exception_handler]\nH -- RequestValidationError --> K[request_validation_error_handler]\nH -- Unhandled Exception --> L[Internal Server Error]\nI --> M[Custom Error Response]\nJ --> N[HTTP Exception Response]\nK --> O[Validation Error Response]\nL --> P[Internal Server Error Response]\nM --> Q[Return Response]\nN --> Q\nO --> Q\nP --> Q
"},{"location":"guides/markdown_best_practices/#return-to-top","title":"Return To Top","text":"

Return

Return

Return

"},{"location":"guides/markdown_best_practices/#html-spacing-entities","title":"HTML Spacing Entities","text":"Name HTML Entity Description En space &ensp; Half the width of an em space Em space &emsp; Width of an em space (equal to the font size) Three-per-em space &emsp13; One-third of an em space Figure space &numsp; Width of a numeral (digit) Punctuation space &puncsp; Width of a period or comma Thin space &thinsp; Thinner than a regular space Hair space &hairsp; Thinner than a thin space Narrow no-break space &#8239; Non-breaking thin space

Note: The &emsp13; and &puncsp; entities may not be supported in all browsers. For the narrow no-break space, there isn't a named HTML entity, so the numeric character reference &#8239; is used.

"},{"location":"guides/markdown_best_practices/#links","title":"Links","text":""},{"location":"guides/markdown_best_practices/#inline-links","title":"Inline Links","text":"

inline link reference link

"},{"location":"guides/markdown_best_practices/#reference-links","title":"Reference Links","text":"

reference link 2

"},{"location":"guides/markdown_best_practices/#contact","title":"Contact","text":""},{"location":"guides/markdown_best_practices/#simple-contact","title":"Simple Contact","text":"

If you have any questions or comments, feel free to reach out to me!
  • Email: your-email@example.com
  • Twitter: @YourHandle

"},{"location":"guides/markdown_best_practices/#modern-contact-with-social-icons","title":"Modern Contact with Social Icons","text":"

For readme-ai issues and feature requests please visit our issues page, or start a discussion!

"},{"location":"guides/markdown_best_practices/#contributing-guidelines","title":"Contributing Guidelines","text":""},{"location":"guides/markdown_best_practices/#contributing-graph","title":"Contributing Graph","text":""},{"location":"guides/markdown_best_practices/#references","title":"References","text":"
  • github-markdown-tricks
"},{"location":"guides/markdown_best_practices/#footnotes","title":"Footnotes","text":"

Here's a sample sentence with a footnote.[^1]

[^1]: And here's the definition of the footnote.

"},{"location":"llms/","title":"Large Language Model (LLM) Integrationss","text":"

Readme-ai integrates seamlessly with various Large Language Model (LLM) services to generate high-quality README content. This page provides an overview of the supported LLM services and links to detailed information about each.

"},{"location":"llms/#supported-llm-services","title":"Supported LLM Services","text":"
  1. OpenAI
  2. Ollama
  3. Anthropic
  4. Google Gemini
  5. Offline Mode
"},{"location":"llms/#comparing-llm-services","title":"Comparing LLM Services","text":"Service Pros Cons OpenAI High-quality output, Versatile Requires API key, Costs associated Ollama Free, Privacy-focused, Offline May be slower, Requires local setup Anthropic Privacy-focused, Offline May be slower, Requires local setup Gemini Strong performance, Google integration Requires API key Offline No internet required, Fast Basic output, Limited customization"},{"location":"llms/#tips-for-optimal-results","title":"Tips for Optimal Results","text":"
  1. Experiment with different models: Try various LLM services and models to find the best fit for your project.
  2. Provide clear context: Ensure your repository has well-organized code and documentation to help the LLM generate more accurate content.
  3. Fine-tune with CLI options: Use readme-ai's CLI options to customize the output further after choosing your LLM service.
  4. Review and edit: Always review the generated README and make necessary edits to ensure accuracy and relevance to your project.

By leveraging these LLM integrations effectively, you can generate comprehensive and accurate README files for your projects with minimal effort.

"},{"location":"llms/#api-integrations","title":"\ud83d\ude80 API Integrations","text":"

README-AI supports multiple large language model (LLM) APIs for generating README files. The following tabs explain how to configure and use each supported API.

"},{"location":"llms/#api-configuration-tabs","title":"API Configuration Tabs","text":"AnthropicGeminiOllamaOpenAIOfflineMode
```sh\nreadmeai --api anthropic --model claude-3-opus-20240229 --repository <REPO_URL_OR_PATH>\n```\n
```sh\nreadmeai --api gemini --model gemini-1.5-flash --repository <REPO_URL_OR_PATH>\n```\n
```sh\nreadmeai --api ollama --model llama3 --repository <REPO_URL_OR_PATH>\n```\n
```sh\nreadmeai --api openai --model gpt-3.5-turbo --repository <REPO_URL_OR_PATH>\n```\n
```sh\nreadmeai --api offline --repository <REPO_URL_OR_PATH>\n```\n


"},{"location":"llms/anthropic/","title":"Installation Guide for readme-ai with Ollama","text":"

Get started with readme-ai using Ollama. This guide will show you how to install and run readme-ai using Ollama in your local environment.

Ollama Requirement

Ensure you have Ollama installed and running on your system. For the latest installation guide, visit the Ollama GitHub repository.

"},{"location":"llms/anthropic/#installation-using-ollama","title":"Installation Using Ollama","text":"

To use readme-ai with Ollama, follow these steps:

  1. Install Ollama

Ensure you have installed Ollama on your system. You can find detailed installation instructions on the Ollama GitHub page.

  2. Pull the LLM Model

Pull the required LLM model to use with Ollama:

ollama pull llama3:latest\n
  3. Start the Ollama Server

Start the Ollama server locally:

export OLLAMA_HOST=127.0.0.1 && ollama serve\n
  4. Run readme-ai with Ollama

After starting the server, run readme-ai with Ollama:

readmeai --api ollama --model llama3 --repository https://github.com/eli64s/readme-ai\n

Explanation of common arguments:

Argument Description --api Specifies the LLM API service to use (in this case, Ollama). --model Specifies the model to use with Ollama (e.g., llama3). --repository Specifies the GitHub repository or local directory path to analyze."},{"location":"llms/anthropic/#optional-dependencies","title":"Optional Dependencies","text":"

To use additional LLM providers like Anthropic or Google Gemini in addition to Ollama, install the optional dependencies:

Anthropic:

pip install readmeai[anthropic]\n

Google Gemini:

pip install readmeai[gemini]\n
"},{"location":"llms/anthropic/#usage","title":"Usage","text":""},{"location":"llms/anthropic/#setting-environment-variables","title":"Setting Environment Variables","text":"

Ollama Host

Set the Ollama host to enable the local server:

export OLLAMA_HOST=127.0.0.1\n

For Windows users, use:

set OLLAMA_HOST=127.0.0.1\n
"},{"location":"llms/anthropic/#running-readme-ai-with-ollama","title":"Running readme-ai with Ollama","text":"

Run readme-ai with the Ollama model:

readmeai --api ollama --model llama3 --repository https://github.com/eli64s/readme-ai\n

For a list of all available options, run:

readmeai --help\n
"},{"location":"llms/anthropic/#troubleshooting","title":"Troubleshooting","text":"
  1. Server Connection Issues: Ensure that the Ollama server is running and accessible. Verify that the host address is set correctly.
  2. Model Not Found: Make sure that the required LLM model is properly pulled using the ollama pull command.
  3. Permission Issues: Ensure you have the necessary permissions to run commands with Ollama. You may need administrative rights on your system.

For further help, you can open an issue on GitHub or refer to the official documentation.

"},{"location":"llms/google_gemini/","title":"Google Gemini","text":"

Google's Gemini models offer strong performance across a wide range of tasks.

"},{"location":"llms/google_gemini/#configuration","title":"Configuration","text":"
readmeai --repository <REPO_URL_OR_PATH> --api gemini --model gemini-1.5-flash\n
"},{"location":"llms/google_gemini/#available-models","title":"Available Models","text":"

It is recommended to use the following models:

  • gemini-1.5-flash
  • gemini-1.5-pro
"},{"location":"llms/google_gemini/#best-practices","title":"Best Practices","text":"
  • Gemini models excel at understanding context and generating coherent text.
  • Ensure you have the necessary API credentials set up.
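As a minimal sketch, credentials are usually supplied through an environment variable before invoking the CLI. The variable name GOOGLE_API_KEY is an assumption here; confirm the expected name in your provider setup.

export GOOGLE_API_KEY=<your_api_key>\nreadmeai --api gemini --model gemini-1.5-flash --repository <REPO_URL_OR_PATH>\n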
"},{"location":"llms/offline_mode/","title":"Offline Mode","text":"

Offline mode allows you to generate a README without an internet connection. This is useful when you want to quickly generate boilerplate README files.

"},{"location":"llms/offline_mode/#configuration","title":"Configuration","text":"
readmeai --api offline -r https://github.com/username/project\n
"},{"location":"llms/offline_mode/#offline-mode-readme-example","title":"Offline Mode README Example","text":"
  • readme-offline-mode.md
Note
  • Use offline mode for quick boilerplate generation or when you don't have internet access.
  • Customize the generated README manually after generation.
"},{"location":"llms/ollama/","title":"Ollama","text":"

Ollama is a privacy-focused, open-source tool for running open-source LLMs locally such as Llama 3, Mistral, and Gemma 2. Ollama can be used with readme-ai to generate README files with a variety of models and configurations from their model library.

"},{"location":"llms/ollama/#usage","title":"Usage","text":"

Start by downloading Ollama, and then pull a model such as Llama 3 or Mistral.

ollama pull llama3\n

Once you have the model, run the ollama server.

ollama run llama3\n

Then, you can use the readmeai CLI to generate README files using the Ollama API.

readmeai --api ollama --model llama3 -r https://github.com/username/project\n
Note
  • Slower README generation times may be experienced when using Ollama compared to cloud-based services.
"},{"location":"llms/openai/","title":"OpenAI","text":"

OpenAI's GPT models are known for their versatility and high-quality text generation.

"},{"location":"llms/openai/#configuration","title":"Configuration","text":"
readmeai --repository <REPO_URL_OR_PATH> --api openai --model gpt-3.5-turbo\n
"},{"location":"llms/openai/#available-models","title":"Available Models","text":"

The following models are recommended, though others may also work: - gpt-3.5-turbo - gpt-4 - gpt-4-turbo

"},{"location":"llms/openai/#best-practices","title":"Best Practices","text":"
  • Use gpt-3.5-turbo for faster generation and lower costs.
  • Use gpt-4 or gpt-4-turbo for more complex projects or when you need higher accuracy.
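For example, to target a more capable model for a complex project (a sketch; it assumes your OpenAI key is already exported as shown in the Usage section):

export OPENAI_API_KEY=<your_api_key>\nreadmeai --repository <REPO_URL_OR_PATH> --api openai --model gpt-4-turbo\n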
"},{"location":"quickstart/installation/","title":"Installation","text":"

Install readmeai using one of the following methods:

"},{"location":"quickstart/installation/#pip","title":"Pip","text":"

Pip is the default Python package manager and recommended for installing readmeai:

pip install readmeai\n
"},{"location":"quickstart/installation/#pipx","title":"Pipx","text":"

Alternatively, use pipx to install readmeai in an isolated environment:

pipx install readmeai\n
Why use pipx?

Using pipx allows you to install and run Python command-line applications in isolated environments, which helps prevent dependency conflicts with other Python projects.
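If pipx is not yet available on your system, it can typically be installed with pip first. This is a sketch; the exact steps depend on your platform and Python setup.

python3 -m pip install --user pipx\npython3 -m pipx ensurepath\npipx install readmeai\n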

"},{"location":"quickstart/installation/#optional-dependencies","title":"Optional Dependencies","text":"

To use additional LLM providers like Anthropic or Google Gemini, install the optional dependencies:

Anthropic:

pip install \"readmeai[anthropic]\"\n

Google Gemini:

pip install \"readmeai[google-generativeai]\"\n

For usage instructions, see the Usage section.

"},{"location":"quickstart/prerequisites/","title":"Prerequisites","text":""},{"location":"quickstart/prerequisites/#system-requirements","title":"System Requirements","text":"
  • Python: 3.9 or higher.
  • Package Manager/Container: pip, pipx, or docker.
"},{"location":"quickstart/prerequisites/#repository-or-directory","title":"Repository or Directory","text":"

A Git repository or a local file system directory is required to generate a README file. Supported platforms include:

  • GitHub
  • GitLab
  • Bitbucket
  • File System

If your Git provider is not listed, open an issue or submit a pull request to add support for additional platforms.

"},{"location":"quickstart/prerequisites/#llm-api-key","title":"LLM API Key","text":"

To enable the full functionality of readmeai, an account and API key are required for one of the following providers:

  • OpenAI: OpenAI API
  • Ollama: Ollama API
  • Anthropic: Anthropic API
  • Google Gemini: Google Gemini API
  • Offline Mode: Runs readmeai without an API key or internet connection.

For more information on setting up an API key, refer to the provider's documentation.

"},{"location":"usage/cli/","title":"CLI Usage","text":"

This guide covers the basic usage of readme-ai and provides examples for different LLM services.

"},{"location":"usage/cli/#basic-usage","title":"Basic Usage","text":"

The general syntax for using readme-ai is:

readmeai --repository <REPO_URL_OR_PATH> --api <LLM_SERVICE> [OPTIONS]\n

Replace <REPO_URL_OR_PATH> with your repository URL or local path, and <LLM_SERVICE> with your chosen LLM service (openai, ollama, gemini, or offline).

"},{"location":"usage/cli/#examples","title":"Examples","text":""},{"location":"usage/cli/#using-openai","title":"Using OpenAI","text":"
readmeai --repository https://github.com/eli64s/readme-ai \\\n         --api openai \\\n         --model gpt-3.5-turbo # (1)\n
  1. Model currently defaults to gpt-3.5-turbo
"},{"location":"usage/cli/#using-ollama","title":"Using Ollama","text":"
readmeai --repository https://github.com/eli64s/readme-ai \\\n         --api ollama \\\n         --model llama3\n
"},{"location":"usage/cli/#using-google-gemini","title":"Using Google Gemini","text":"
readmeai --repository https://github.com/eli64s/readme-ai \\\n         --api gemini \\\n         --model gemini-1.5-flash\n
"},{"location":"usage/cli/#offline-mode","title":"Offline Mode","text":"
readmeai --repository https://github.com/eli64s/readme-ai \\\n         --api offline\n
"},{"location":"usage/cli/#advanced-usage","title":"Advanced Usage","text":"

You can customize the output using various options:

readmeai --repository https://github.com/eli64s/readme-ai \\\n         --output readmeai.md \\\n         --api openai \\\n         --model gpt-4-turbo \\\n         --badge-color A931EC \\\n         --badge-style flat-square \\\n         --header-style compact \\\n         --toc-style fold \\\n         --temperature 0.1 \\\n         --tree-depth 2 \\\n         --image LLM \\\n         --emojis\n

For a full list of options, run:

readmeai --help\n

See the Configuration Options documentation for detailed explanations of each option.

"},{"location":"usage/cli/#tips-for-effective-usage","title":"Tips for Effective Usage","text":"
  1. Choose the right LLM: Different LLMs may produce varying results. Experiment to find the best fit for your project.
  2. Adjust temperature: Lower values (e.g., 0.1) produce more focused output, while higher values (e.g., 0.8) increase creativity.
  3. Use custom prompts: For specialized projects, consider using custom prompts to guide the AI's output.
  4. Review and edit: Always review the generated README and make necessary adjustments to ensure accuracy and relevance.
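For instance, the temperature tip can be applied directly from the CLI. This sketch uses options documented on this page; substitute your own repository.

readmeai --repository https://github.com/username/project --api openai --temperature 0.1\n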
"},{"location":"usage/cli/#troubleshooting","title":"Troubleshooting","text":"

If you encounter any issues:

  1. Ensure you have the latest version of readme-ai installed.
  2. Check your API credentials if using OpenAI or Google Gemini.
  3. For Ollama, make sure the Ollama service is running locally.
  4. Consult the FAQ or open an issue for additional support.
"},{"location":"usage/docker/","title":"Docker","text":"

Running readme-ai in a containerized environment using Docker offers isolation of the application and its dependencies from the host system. This section details how to pull the Docker image from Docker Hub, build the Docker image from the source code, and run the Docker container.

Docker Installation

Before proceeding, ensure that Docker is installed and running on your system. If you haven't installed Docker yet, please visit the official Docker documentation for installation instructions.

"},{"location":"usage/docker/#pull-the-docker-image","title":"Pull the Docker Image","text":"

Pull the latest readme-ai image from Docker Hub:

docker pull zeroxeli/readme-ai:latest\n
"},{"location":"usage/docker/#build-the-docker-image","title":"Build the Docker Image","text":"

Alternatively, you can build the Docker image from the source code. This assumes you have cloned the readme-ai repository.

docker buildx build --platform linux/amd64,linux/arm64 -t readme-ai --push .\n
Buildx

Using docker buildx allows you to build multi-platform images, which means you can create Docker images that work on different architectures (e.g., amd64 and arm64). This is particularly useful if you want your Docker image to be compatible with a wider range of devices and environments, such as both standard servers and ARM-based devices like the Raspberry Pi.

"},{"location":"usage/docker/#run-the-docker-container","title":"Run the Docker Container","text":"

Run the readme-ai Docker container with the following command:

docker run -it --rm \\\n-e OPENAI_API_KEY=$OPENAI_API_KEY \\\n-v \"$(pwd)\":/app zeroxeli/readme-ai:latest \\\n-r https://github.com/eli64s/readme-ai \\\n--api openai\n

Explanation of the command arguments:

Argument Function -it Creates an interactive terminal. --rm Automatically removes the container when it exits. -e Passes your OpenAI API key as an environment variable. -v \"$(pwd)\":/app Mounts the current directory to the /app directory in the container, allowing access to the generated README file on your host system. -r Specifies the GitHub repository to analyze.

For Windows users, replace $(pwd) with %cd% in the command. For PowerShell, use ${PWD} instead.
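A rough PowerShell equivalent of the command above might look like the following (a sketch; it assumes the same image, API key variable, and flags):

docker run -it --rm -e OPENAI_API_KEY=$env:OPENAI_API_KEY -v \"${PWD}\":/app zeroxeli/readme-ai:latest -r https://github.com/eli64s/readme-ai --api openai\n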

"},{"location":"usage/docker/#cleanup","title":"Cleanup","text":"

If you want to remove the Docker image and container from your system, follow these steps.

1. Identify the Container

First, list all containers on your system.

docker ps -a\n

You should see output similar to the following:

CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS     NAMES\nabcdef123456   zeroxeli/readme-ai:latest   \"python main.py -r h\u2026\"   2 minutes ago    Up 2 minutes\n

Look for the container with ID abcdef123456.

2. Stop the Container

Stop the container using its ID.

docker stop abcdef123456\n

3. Remove the Container

Remove the container using its ID.

docker rm abcdef123456\n

4. Remove the Image

Remove the Docker image from your system.

docker rmi zeroxeli/readme-ai:latest\n
"},{"location":"usage/docker/#troubleshooting","title":"Troubleshooting","text":"
  1. If you encounter permission issues, ensure your user has the right permissions to run Docker commands.
  2. If the container fails to start, check that your OPENAI_API_KEY is correctly set and valid.
  3. For network-related issues, verify your internet connection and firewall settings.
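A quick way to check the second point before starting the container is to confirm the key is actually exported in your shell (a minimal sketch for Unix-like systems):

echo $OPENAI_API_KEY\n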

For more detailed troubleshooting, refer to the official Docker documentation or open an issue on GitHub.

"},{"location":"usage/pip/","title":"Pip","text":"

After installation, you can run readme-ai with:

readmeai --api openai --repository https://github.com/eli64s/readme-ai\n
"},{"location":"usage/pip/#usage","title":"Usage","text":""},{"location":"usage/pip/#setting-environment-variables","title":"Setting Environment Variables","text":"

OpenAI API Key

Generate an OpenAI API key and set it as an environment variable:

export OPENAI_API_KEY=<your_api_key>\n

For Windows users, use:

set OPENAI_API_KEY=<your_api_key>\n

Anthropic API Key

To use the Anthropic API, set your API key:

export ANTHROPIC_API_KEY=<your_api_key>\n
"},{"location":"usage/pip/#running-readme-ai","title":"Running readme-ai","text":"

Run readme-ai with OpenAI:

readmeai --api openai --repository https://github.com/eli64s/readme-ai\n

Run readme-ai with Anthropic:

readmeai --api anthropic --repository https://github.com/eli64s/readme-ai\n

For a list of all available options, run:

readmeai --help\n
"},{"location":"usage/pip/#troubleshooting","title":"Troubleshooting","text":"
  1. Permission Issues: Ensure you have the necessary permissions to install Python packages. You may need to use sudo on Unix-based systems.
  2. Pipx Not Found: Make sure pipx is properly installed and available in your PATH. You can find installation instructions here.
  3. Missing Dependencies: Some advanced features require additional Python packages. Check the official documentation for a list of optional dependencies.
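For permission issues, a user-level install often avoids the need for sudo. This is a sketch of one common workaround, not the only option.

python3 -m pip install --user readmeai\n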

For further help, you can open an issue on GitHub or refer to the official documentation.

"},{"location":"usage/streamlit/","title":"\"> Streamlit","text":"

Run readme-ai directly in your browser on Streamlit Cloud. No installation required!

Source Code

Find the source code for the Streamlit app in the readme-ai repository

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"README-AI","text":"README-AI Your AI-Powered README Generator Designed for simplicity, customization, and developer productivity.

README-AI is a developer tool and framework that combines robust data processing modules with generative AI models. It streamlines documentation and enhances developer productivity by auto-generating comprehensive README.md files.

With README-AI, you can:

  • Automate Documentation: Synchronize data from third-party sources and generate documentation automatically.
  • Customize & Flexibly Style: Choose from dozens of options for styling, formatting, badges, header designs, and more.
  • Support Multiple Languages & Projects: Work across a wide range of programming languages and project types.
  • Leverage Multiple LLMs: Compatible with OpenAI, Ollama, Anthropic, Google Gemini and Offline Mode.
  • Follow Markdown Best Practices: Create clean, professional-looking documentation using Markdown formatting best practices.
"},{"location":"#key-features","title":"Key Features","text":"
  • Automated README generation
  • Customizable output and styling
  • Language and project agnostic
  • Multi-LLM support
  • Markdown best practices
"},{"location":"#quick-start","title":"Quick Start","text":"

If you're ready to jump right in, here's how to get started:

"},{"location":"#installation","title":"Installation","text":"
pip install readmeai\n
"},{"location":"#usage","title":"Usage","text":"
readmeai -r <repository_url> -o <output_file>\n

Otherwise you can explore the documentation for more detailed information. Cheers!

"},{"location":"contributing/","title":"Contributing Guidelines","text":"

Thanks for your interest in contributing to readme-ai. Please review these guidelines to ensure a smooth process.

"},{"location":"contributing/#make-valuable-contributions","title":"Make Valuable Contributions","text":"

Strive to make useful, creative, and high quality contributions. This isn't meant to be a high bar, but more of a guiding principle and philosophy. Here's what we mean by these terms:

Useful: Solve common problems, use cases, bugs, or new features.

Creative: Innovative and helping us all grow and learn new things.

High Quality: Well-written, structured, and explained.

"},{"location":"contributing/#ways-to-contribute","title":"Ways to Contribute","text":"

To improve and grow the project, we need your help! Here are some ways to get involved:

Activity Ideas \ud83d\udc4b Discussions Start a discussion by asking a question or making a suggestion. \ud83d\udc1b Open an Issue Find unhandled exceptions and bugs in the codebase. \ud83d\udcc4 Documentation Write documentation for the project. \ud83e\uddea Testing Write unit tests to increase code coverage. \ud83e\udde9 Feature Requests Brainstorm new ideas such as a CLI option to select any language. \ud83d\udee0\ufe0f Code Contributions Contribute to the codebase and submit a pull request. \ud83d\udd22 Code Readability Find ways to make code more readable and easier to understand. \ud83e\udd14 Other Anything else you can think of!

These are just a few examples, and we welcome any other ideas you may have!

"},{"location":"contributing/#submitting-changes","title":"Submitting Changes","text":"
  1. Fork the repository and clone it locally.
  2. Create a new branch with a descriptive name, e.g. feature/new-feature-name or bugfix-issue-123.
  3. Make focused changes with clear commits.
  4. Open a pull request and document the changes you've made and why they're necessary.
  5. Respond to code reviews from maintainers.
"},{"location":"contributing/#code-quality-expectations","title":"Code Quality Expectations","text":"
  • Clear, well-documented code
  • Include tests for new code
  • Follow project style standards
  • Rebase onto latest main branch
"},{"location":"contributing/#attribution","title":"Attribution","text":"

Contributors to our project will be acknowledged in the project's README.md file.

"},{"location":"contributing/#license","title":"License","text":"

By contributing to our project, you agree to license your contributions under the project's open source license. The project's license can be found in the LICENSE file.

Thank you for your interest in contributing to readme-ai! We appreciate your help and look forward to working with you.

"},{"location":"faq/","title":"README-AI Frequently Asked Questions","text":""},{"location":"faq/#general-questions","title":"General Questions","text":""},{"location":"faq/#q-what-is-readme-ai","title":"Q: What is README-AI?","text":"

A: README-AI is a tool that automatically generates comprehensive README files for your projects using artificial intelligence.

"},{"location":"faq/#q-which-ai-models-does-readme-ai-support","title":"Q: Which AI models does README-AI support?","text":"

A: README-AI supports multiple LLM providers, including OpenAI, Ollama, Anthropic, and Google Gemini, and also offers an offline mode for generating READMEs without API calls.

"},{"location":"faq/#installation-and-setup","title":"Installation and Setup","text":""},{"location":"faq/#q-how-do-i-install-readme-ai","title":"Q: How do I install README-AI?","text":"

A: You can install README-AI using pip:

pip install readmeai\n
Alternatively, you can use Docker:
docker run -it -e OPENAI_API_KEY=your_key_here -v \"$(pwd)\":/app zeroxeli/readme-ai:latest\n

"},{"location":"faq/#q-im-getting-an-error-when-trying-to-install-on-ubuntu-how-can-i-fix-it","title":"Q: I'm getting an error when trying to install on Ubuntu. How can I fix it?","text":"

A: If you're encountering issues with conda environment creation, try using a virtual environment with pip instead. Ensure you have Python 3.9 or higher installed.

"},{"location":"faq/#usage","title":"Usage","text":""},{"location":"faq/#q-how-do-i-generate-a-readme-for-my-project","title":"Q: How do I generate a README for my project?","text":"

A: Use the following command:

readmeai -o readme-ai.md -r https://github.com/your-username/your-repo\n
Replace the URL with your repository link.

"},{"location":"faq/#q-can-i-use-readme-ai-with-private-repositories","title":"Q: Can I use README-AI with private repositories?","text":"

A: Yes, but you may need to provide authentication. For Bitbucket, use the format:

https://username:bitbucket_apikey@bitbucket.org/username/repo\n
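A full invocation with an authenticated Bitbucket URL might look like the following sketch; the username, API key, and repository name are placeholders.

readmeai --api openai -r https://username:bitbucket_apikey@bitbucket.org/username/repo\n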

"},{"location":"faq/#q-does-readme-ai-work-with-gitlab-repositories","title":"Q: Does README-AI work with GitLab repositories?","text":"

A: Yes, README-AI supports GitLab repositories. Use the same command format as with GitHub repos.
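For example (a sketch; substitute your own GitLab namespace and project):

readmeai --api openai -r https://gitlab.com/username/project\n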

"},{"location":"faq/#troubleshooting","title":"Troubleshooting","text":""},{"location":"faq/#q-im-getting-a-404-not-found-error-what-should-i-do","title":"Q: I'm getting a \"404 Not Found\" error. What should I do?","text":"

A: Ensure your OpenAI API key is correct and has sufficient permissions. Also, check if you're using the correct API endpoint.

"},{"location":"faq/#q-the-script-runs-but-doesnt-generate-a-file-why","title":"Q: The script runs but doesn't generate a file. Why?","text":"

A: Check the permissions in your current directory. Ensure README-AI has write access to create the output file.

"},{"location":"faq/#q-im-seeing-a-429-too-many-requests-error-how-can-i-resolve-this","title":"Q: I'm seeing a \"429 Too Many Requests\" error. How can I resolve this?","text":"

A: This error occurs when you've exceeded the rate limit for the OpenAI API. Wait a while before trying again, or consider upgrading your API plan.

"},{"location":"faq/#q-why-am-i-getting-a-notfound-object-is-not-iterable-error","title":"Q: Why am I getting a \"NotFound object is not iterable\" error?","text":"

A: This error may occur if you're using an incompatible model. Ensure you're using a supported model like \"gpt-3.5-turbo\" or \"gpt-4\".

"},{"location":"faq/#features-and-customization","title":"Features and Customization","text":""},{"location":"faq/#q-can-i-use-readme-ai-with-languages-other-than-english","title":"Q: Can I use README-AI with languages other than English?","text":"

A: While README-AI primarily generates content in English, there are ongoing efforts to add internationalization (i18n) support for languages like Spanish and Italian.

"},{"location":"faq/#q-is-it-possible-to-use-readme-ai-in-azure-devops","title":"Q: Is it possible to use README-AI in Azure DevOps?","text":"

A: While there isn't native integration, you could potentially use README-AI as part of your Azure DevOps pipeline by incorporating it into your build or release process.

"},{"location":"faq/#q-can-i-customize-the-openai-endpoint-or-model-used","title":"Q: Can I customize the OpenAI endpoint or model used?","text":"

A: There are ongoing efforts to make the configuration more extensible, including options to specify different endpoints (like Azure OpenAI) and models.

"},{"location":"faq/#contributing","title":"Contributing","text":""},{"location":"faq/#q-how-can-i-contribute-to-readme-ai","title":"Q: How can I contribute to README-AI?","text":"

A: You can contribute by submitting pull requests on GitHub. Areas of contribution include adding support for new AI models, improving documentation, adding tests, and fixing bugs.

If you have any other questions or issues, please check the GitHub repository or open a new issue for support.

"},{"location":"philosophy/","title":"Philosophy and Vision","text":""},{"location":"philosophy/#empowering-developers-enlightening-projects","title":"Empowering Developers, Enlightening Projects","text":"

Readme-ai envisions a future where every software project, regardless of size or complexity, is accompanied by clear, comprehensive, and up-to-date documentation. We strive to create an ecosystem where documentation is no longer an afterthought but an integral, effortless part of the development process.

"},{"location":"philosophy/#our-core-vision","title":"Our Core Vision","text":"
  1. Democratize Quality Documentation
    • Make professional-grade documentation accessible to all developers, from hobbyists to enterprise teams.
    • Break down language barriers by offering multilingual documentation generation.
  2. Accelerate Open Source Adoption
    • Enhance the discoverability and usability of open source projects through superior documentation.
    • Foster a more inclusive open source community by lowering the barrier to contribution.
  3. Evolve with AI Advancements
    • Continuously integrate cutting-edge AI technologies to improve documentation quality and generation speed.
    • Pioneer new ways of understanding and describing code structures and functionalities.
  4. Cultivate Documentation Best Practices
    • Establish Readme-ai as the gold standard for project documentation in the software industry.
    • Encourage a culture where well-documented projects are the norm, not the exception.
  5. Enhance Developer Productivity
    • Free developers to focus on coding by automating the documentation process.
    • Reduce the time from development to deployment by streamlining the documentation workflow.
  6. Promote Code Understanding
    • Facilitate better code comprehension across teams and organizations.
    • Bridge the gap between technical and non-technical stakeholders through clear, AI-generated explanations.
  7. Ensure Adaptability and Extensibility
    • Create a flexible platform that can easily integrate with various development workflows and tools.
    • Build a robust plugin ecosystem that allows the community to extend Readme-ai's capabilities.
  8. Champion Ethical AI Use
    • Lead by example in the responsible and transparent use of AI in developer tools.
    • Prioritize user privacy and data security in all aspects of our AI-driven processes.
"},{"location":"philosophy/#long-term-impact","title":"Long-Term Impact","text":"

We see Readme-ai as a catalyst for a paradigm shift in software development practices. By making high-quality documentation effortless and ubiquitous, we aim to:

  • Accelerate innovation by making it easier for developers to build upon each other's work.
  • Improve software quality by encouraging better-documented and more maintainable codebases.
  • Enhance collaboration within and between development teams through clearer project communication.
  • Increase the overall efficiency of the software development lifecycle.

Through Readme-ai, we aspire to create a world where every line of code is matched by a line of clear, concise, and helpful documentation, empowering developers and enlightening projects for the benefit of all.

"},{"location":"troubleshooting/","title":"Troubleshooting","text":""},{"location":"troubleshooting/#help-menus","title":"Help Menus","text":"

The --help flag can be used to view the help menu for a command, e.g., for readmeai:

\u276f readmeai --help\n
"},{"location":"troubleshooting/#viewing-the-version","title":"Viewing the Version","text":"

When seeking help, it's important to determine the version of readmeai that you're using \u2014 sometimes the problem is already solved in a newer version.

To check the installed version:

\u276f readmeai --version\n

The following shorthand is also valid:

\u276f readmeai -V\n
"},{"location":"troubleshooting/#open-an-issue-on-github","title":"Open an issue on GitHub","text":"

The issue tracker on GitHub is a good place to report bugs and request features. Make sure to search for similar issues first, as it is common for someone else to encounter the same problem.

"},{"location":"troubleshooting/#faq","title":"FAQ","text":"

See the FAQ for answers to common questions and troubleshooting tips.

"},{"location":"why/","title":"Why use README-AI?","text":"

In the fast-paced world of software development, clear and comprehensive documentation is crucial. Yet, creating and maintaining high-quality README files can be time-consuming and often overlooked. This is where Readme-ai comes in, revolutionizing the way developers approach project documentation.

"},{"location":"why/#automated-documentation-generation","title":"Automated Documentation Generation","text":"

Readme-ai harnesses the power of artificial intelligence to automatically generate detailed, structured README files for your projects. By analyzing your codebase, Readme-ai creates documentation that is:

  • Comprehensive: Covers all essential aspects of your project, from installation instructions to usage examples.
  • Consistent: Maintains a uniform structure across all your projects, enhancing readability and professionalism.
"},{"location":"why/#time-saving-and-efficient","title":"Time-Saving and Efficient","text":"
  • Focus on Coding: Spend more time writing code and less time worrying about documentation.
  • Quick Setup: Get started with minimal configuration, allowing you to generate a README in minutes.
  • Customizable Templates: Fine-tune the output to match your project's specific needs and your personal style.
"},{"location":"why/#enhanced-project-visibility","title":"Enhanced Project Visibility","text":"
  • Professional Appearance: Engage potential users and contributors with polished, well-structured documentation.
  • Comprehensive Overview: Provide a clear, concise summary of your project, making it easier for others to understand and use.
  • SEO-Friendly: Generated READMEs are optimized for search engines, improving your project's discoverability.
"},{"location":"why/#integration-and-flexibility","title":"Integration and Flexibility","text":"
  • Extensible: Customize and extend readme-ai to fit your specific documentation needs.
  • Multiple AI Backends: Choose from various AI providers (OpenAI, Ollama, Anthropic, Google Gemini) or use the offline mode for sensitive projects.
  • Language Agnostic: Works with a wide range of programming languages and project types, ensuring compatibility with your existing codebase.
"},{"location":"why/#community-driven-development","title":"Community-Driven Development","text":"
  • Open Source: Benefit from and contribute to a growing ecosystem of documentation tools and best practices.

  • Continuous Improvement: Regular updates and improvements driven by community feedback and contributions.

  • Shared Knowledge: Learn from and contribute to a gallery of exemplary READMEs generated by the community.

"},{"location":"why/#key-features","title":"Key Features","text":"
  1. Flexible README Generation: Combines robust repository context extraction with generative AI to create detailed and accurate README files.

  2. Customizable Output: Offers numerous CLI options for tailoring the README to your project's needs:

    • Badge styles and colors

    • Header designs

    • Table of contents styles

    • Project logos

  3. Language Agnostic: Works with a wide range of programming languages and project types, automatically detecting and summarizing key aspects of your codebase.

  4. Project Analysis: Automatically extracts and presents important information about your project:

    • Directory structure
    • File summaries
    • Dependencies
    • Setup instructions
  5. Multi-LLM Support: Compatible with various language model APIs, including:

    • OpenAI
    • Ollama
    • Anthropic
    • Google Gemini
    • Offline Mode
  6. Offline Mode: Generate a basic README structure without requiring an internet connection or API calls.

  7. Markdown Expertise: Leverages best practices in Markdown formatting for clean, professional-looking documentation.

"},{"location":"blog/","title":"Blog","text":"

\u2728 coming soon ...

"},{"location":"configuration/","title":"Configuration","text":"

README-AI offers a wide range of configuration options to customize your README generation. This page provides a comprehensive list of all available options with detailed explanations.

"},{"location":"configuration/#cli-options","title":"CLI Options","text":"Option Description Default Impact --align Text alignment in header center Affects the visual layout of the README header --api LLM API service offline Determines which AI service is used for content generation --badge-color Badge color (name or hex) 0080ff Customizes the color of status badges in the README --badge-style Badge icon style type flat Changes the visual style of status badges --base-url Base URL for the repository v1/chat/completions Used for API requests to the chosen LLM service --context-window Max context window of LLM API 3999 Limits the amount of context provided to the LLM --emojis Add emojis to README sections False Adds visual flair to section headers --header-style Header template style classic Changes the overall look of the README header --image Project logo image blue Sets the main image displayed in the README --model Specific LLM model to use gpt-3.5-turbo Chooses the AI model for content generation --output Output filename readme-ai.md Specifies the name of the generated README file --rate-limit Max API requests per minute 5 Prevents exceeding API rate limits --repository Repository URL or local path None Specifies the project to analyze --temperature Creativity level for generation 0.9 Controls the randomness of the AI's output --toc-style Table of contents style bullet Changes the format of the table of contents --top-p Top-p sampling probability 0.9 Fine-tunes the AI's output diversity --tree-depth Max depth of directory tree 2 Controls the detail level of the project structure

Some options have a significant impact on the generated README's appearance and content. Experiment with different settings to find the best configuration for your project.

"},{"location":"configuration/badges/","title":"Badges","text":"

A badge is a simple embeddable icon that displays various metrics such as the number of stars or forks for a repository, languages used in the project, CI/CD build status, test coverage, the license of the project, and more. Badges are a great way to provide quick information about your project to users and visitors.

README-AI offers various badge styles to enhance your project's README. This guide explains how to use and customize these badges.

"},{"location":"configuration/badges/#badge-styles","title":"Badge Styles","text":"

Use the --badge-style option to select from the following styles:

default flat flat-square for-the-badge plastic skills skills-light social

"},{"location":"configuration/badges/#how-it-works","title":"How It Works","text":"

README-AI automatically detects your project's dependencies and technologies during the repository ingestion process. It then uses these dependencies and technologies to generate a comprehensive list of relevant badges for your project.

When you provide the --badge-style option to the readmeai command, two sets of badges are generated:

  1. Default Metadata Badges: The default set is always included in the generated README file. The default badges include the project license, last commit, top language, and total languages.
  2. Project Dependency Badges: When the --badge-style argument is provided to the CLI, a second badge set is generated, representing the extracted dependencies and metadata from your codebase.

The badge sets are formatted in the README header and provide the reader with a quick overview of the project's key metrics and technologies.

"},{"location":"configuration/badges/#example-usage","title":"Example Usage","text":"

Let's generate a README with custom badge colors and styles using the --badge-color and --badge-style options:

readmeai --badge-color orange \\\n         --badge-style flat-square \\\n         --repository https://github.com/eli64s/readme-ai\n

The command above generates a README with the following badge configuration:

Example

Badge Generation

Built with the tools and technologies:

The --badge-color option only modifies the default badge set, while the --badge-style option is applied to both the default and project dependency badges.

"},{"location":"configuration/badges/#tips-for-using-badges","title":"Tips for Using Badges","text":"
  • Choose a badge style that complements your project's overall design.
  • Use badges to highlight relevant information about your project, such as license, build status, and test coverage.
  • Don't overuse badges \u2013 too many can clutter your README and make it hard to read.
  • Ensure that all badge links are correct and up-to-date.
  • Consider using custom badges for project-specific information or metrics.
"},{"location":"configuration/badges/#references","title":"References","text":"

Thank you to the following projects for providing the badges used in this project:

  • Shields.io
  • Aveek-Saha/GitHub-Profile-Badges
  • Ileriayo/Markdown-Badges
  • tandpfun/skill-icons
"},{"location":"configuration/emojis/","title":"Emojis","text":"

Emojis are a fun way to add some personality to your README.md file. README-AI allows you to automatically add emojis to all headers in the generated README file by providing the --emojis option to the readmeai command.

"},{"location":"configuration/emojis/#how-it-works","title":"How It Works","text":"

When you provide the --emojis option to the readmeai command, README-AI automatically adds emojis to all headers in the generated README file.
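For example, a run with emojis enabled might look like the following sketch, combining options documented elsewhere on this site:

readmeai --emojis --api openai --repository https://github.com/username/project\n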

Default (emojis disabled) | Emojis Enabled

"},{"location":"configuration/emojis/#readme-ai-streamlit","title":"README-AI-STREAMLIT","text":""},{"location":"configuration/emojis/#codebase-efficiency-elevated-documentation-enhanced","title":"Codebase Efficiency Elevated, Documentation Enhanced!","text":"

Built with the tools and technologies:

"},{"location":"configuration/emojis/#table-of-contents","title":"Table of Contents","text":"
  • Overview
  • Features
  • Project Structure
  • Project Index
  • Getting Started
  • Prerequisites
  • Installation
  • Usage
  • Tests
  • Roadmap
  • Contributing
  • License
  • Acknowledgments
"},{"location":"configuration/emojis/#overview","title":"Overview","text":"

...

"},{"location":"configuration/emojis/#readme-ai-streamlit_1","title":"README-AI-STREAMLIT","text":""},{"location":"configuration/emojis/#codebase-efficiency-elevated-documentation-enhanced_1","title":"Codebase Efficiency Elevated, Documentation Enhanced!","text":"

Built with the tools and technologies:

"},{"location":"configuration/emojis/#table-of-contents_1","title":"\ud83d\udcd2 Table of Contents","text":"
  • \ud83d\udccd Overview
  • \ud83d\udc7e Features
  • \ud83d\udcc1 Project Structure
  • \ud83d\udcc2 Project Index
  • \ud83d\ude80 Getting Started
  • \ud83d\udccb Prerequisites
  • \u2699\ufe0f Installation
  • \ud83e\udd16 Usage
  • \ud83e\uddea Tests
  • \ud83d\udccc Roadmap
  • \ud83d\udd30 Contributing
  • \ud83c\udf97 License
  • \ud83d\ude4c Acknowledgments
"},{"location":"configuration/emojis/#overview_1","title":"\ud83d\udccd Overview","text":"

...

"},{"location":"configuration/header/","title":"Headers","text":"

A header template style determines how your project's header section is structured and displayed in the README file. README-AI offers several pre-designed header styles to help brand your project and create a professional appearance.

"},{"location":"configuration/header/#header-style-options","title":"Header Style OptionsREADME-AI-STREAMLIT","text":"

Example

Use the --header-style option to select from the following markdown header templates:

CLASSIC COMPACT MODERN SVG ASCII ASCII_BOX

README-AI

Where Documentation Meets Innovation!

Built with the tools and technologies:

Features:

  • \u25ce Centered alignment
  • \u25ce Logo above project name
  • \u25ce Traditional README layout
  • \u25ce Ideal for most projects

Streamlining README creation with AI magic!

Built with the tools and technologies:

Features:

  • \u25ce Left-aligned layout
  • \u25ce Logo and title on same line
  • \u25ce Space-efficient design
  • \u25ce Perfect for smaller README files

PYFLINK-POC

Streamlining data flow with PyFlink power!

Built with the tools and technologies:

Features:

  • \u25ce Left-aligned text
  • \u25ce Logo floated to the right
  • \u25ce Contemporary asymmetric design
  • \u25ce Great for documentation sites

\u276f REPLACE-ME

Built with the tools and technologies:

Features:

  • \u25ce Full-width SVG banner support
  • \u25ce Centered alignment
  • \u25ce Scalable vector graphics
  • \u25ce Ideal for custom branding

\n\u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588          \u2588\u2588   \u2588\u2588\u2588\u2588\u2588\u2588\n\u2588\u2588  \u2588\u2588 \u2588\u2588      \u2588\u2588\u2588\u2588  \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588 \u2588\u2588\u2588 \u2588\u2588             \u2588\u2588\u2588\u2588    \u2588\u2588\n\u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588  \u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588 \u2588 \u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588   \u2588\u2588\n\u2588\u2588 \u2588\u2588  \u2588\u2588     \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588   \u2588\u2588 \u2588\u2588            \u2588\u2588\u2588\u2588\u2588\u2588   \u2588\u2588\n\u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588        \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588\n

\u276f REPLACE-ME

Built with the tools and technologies:

Features:

  • \u25ce Text-based art logo
  • \u25ce Minimal and retro style
  • \u25ce No image dependencies
  • \u25ce Good for terminal-focused tools

\n\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557\n\u2551                                                                    \u2551\n\u2551  \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588          \u2588\u2588   \u2588\u2588\u2588\u2588\u2588\u2588   \u2551\n\u2551  \u2588\u2588  \u2588\u2588 \u2588\u2588      \u2588\u2588\u2588\u2588  \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588 \u2588\u2588\u2588 \u2588\u2588             \u2588\u2588\u2588\u2588    \u2588\u2588     \u2551\n\u2551  \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588  \u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588 \u2588 \u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588   \u2588\u2588     \u2551\n\u2551  \u2588\u2588 \u2588\u2588  \u2588\u2588     \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588   \u2588\u2588 \u2588\u2588            \u2588\u2588\u2588\u2588\u2588\u2588   \u2588\u2588     \u2551\n\u2551  \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588   \u2588\u2588   \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588        \u2588\u2588  \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588   \u2551\n\u2551                                                                    \u2551\n\u2551                                                                    \u2551\n\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d\n

\u276f REPLACE-ME

Built with the tools and technologies:

Features:

  • \u25ce Text-based art logo
  • \u25ce Minimal and retro style
  • \u25ce No image dependencies
  • \u25ce Good for terminal-focused tools

"},{"location":"configuration/header/#how-it-works","title":"How It Works","text":"

README-AI provides several ways to customize your header style:

  1. Default Styles: Choose from pre-defined header layouts
  2. Alignment Options: Set text and image alignment
  3. Custom Sizing: Adjust logo and image dimensions
  4. Badge Integration: Incorporates shield badges and tech stack icons

The selected style will determine how your project's name, logo, description, and badges are arranged in the header section.

"},{"location":"configuration/header/#examples","title":"Examples","text":""},{"location":"configuration/header/#using-classic-style","title":"Using Classic Style","text":"
readmeai --header-style classic --repository https://github.com/username/project\n
"},{"location":"configuration/header/#using-modern-style-with-custom-alignment","title":"Using Modern Style with Custom Alignment","text":"
readmeai --header-style modern --align left --repository https://github.com/username/project\n
"},{"location":"configuration/header/#combining-with-other-options","title":"Combining with Other Options","text":"
readmeai --header-style compact \\\n         --badge-style flat \\\n         --image gradient \\\n         --repository https://github.com/username/project\n
"},{"location":"configuration/header/#tips-for-using-header-styles","title":"Tips for Using Header Styles","text":"
  • Classic: Best for traditional open-source projects that need a professional look
  • Modern: Great for documentation sites and projects with longer READMEs
  • Compact: Ideal for smaller projects or when space is at a premium
  • SVG: Perfect for projects that need custom branding or full-width banners
  • ASCII: Good for terminal applications or when you want a retro feel

Consider these factors when choosing a header style: - Your project's target audience - The amount of content in your README - Whether you have a custom logo or banner - The overall aesthetic of your documentation - How the style works with your chosen badge style

Some header styles may look different on different platforms or markdown renderers. It's a good idea to test how your chosen style looks on your target platform.
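One way to compare styles locally is to generate several variants of the README and preview each one. This is a sketch; the output file names are arbitrary and the repository path is a placeholder.

readmeai --header-style classic --output readme-classic.md --repository ./my_project\nreadmeai --header-style modern --output readme-modern.md --repository ./my_project\n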

"},{"location":"configuration/project_logo/","title":"Project Logo","text":"

A project logo is a visual representation of your project that appears at the top of your README file. It helps to brand your project and make it more recognizable. README-AI offers various options for adding a logo to your project's README.

"},{"location":"configuration/project_logo/#default-options","title":"Default Options","text":"

Use the --image option to select from the following logo styles:

Blue Gradient Black Cloud Purple Grey

"},{"location":"configuration/project_logo/#how-it-works","title":"How It Works","text":"

README-AI provides several ways to include a logo in your project's README:

  1. Default Images: Choose from a set of pre-defined logos.
  2. Custom Images: Use your own image by providing a URL or file path.
  3. LLM Images: Generate a unique logo using AI (requires an LLM API).

The selected or generated logo will be placed at the top of your README file, helping to visually identify your project.

"},{"location":"configuration/project_logo/#examples","title":"Examples","text":""},{"location":"configuration/project_logo/#selecting-a-default-image","title":"Selecting a Default Image","text":"

To use one of the default images, specify the image name with the --image option:

readmeai --image gradient --repository https://github.com/username/project\n
"},{"location":"configuration/project_logo/#providing-a-custom-image","title":"Providing a Custom Image","text":"

You can provide readme-ai with a custom image by using the --image custom option:

readmeai --image custom --repository https://github.com/username/project\n

You will be prompted to provide a path to your image on your local machine or via URL:

Provide an image file path or URL:\n
"},{"location":"configuration/project_logo/#llm-image-generation","title":"LLM Image Generation","text":"

To generate a logo using a text-to-image model from an LLM API (e.g., OpenAI), use the --image llm option:

readmeai --image llm --api openai --repository https://github.com/username/project\n

This will generate a unique logo that you can display in your project's documentation. The prompt used for text-to-image generation can be found here.

Example

The following examples show generated logos by readme-ai for various open-source projects:

1. eli64s/readme-ai 2. eli64s/readme-ai 3. eli64s/readme-ai-streamlit 4. PiyushSuthar/github-readme-quotes

README-AI

README-AI

README-AI-STREAMLIT

GITHUB-README-QUOTES

The quality and relevance of LLM-generated logos can vary. It's a good idea to review and potentially edit the generated logo to ensure it meets your project's needs.

"},{"location":"configuration/project_logo/#tips-for-using-project-logos","title":"Tips for Using Project Logos","text":"
  • Choose a logo that represents your project's purpose or theme.
  • Ensure the logo is clear and recognizable even at smaller sizes.
  • If using a custom image, make sure it's high quality and appropriately sized.
  • When using LLM-generated logos, you may want to generate several options to choose from.
  • Consider how the logo will look alongside your project's badges and other README content.
  • If your project is part of a larger organization or ecosystem, consider using a logo that aligns with that branding.
"},{"location":"configuration/table_of_contents/","title":"Table of Contents (ToC) Templates","text":"

README-AI offers flexible options for generating a Table of Contents (ToC) in your README file, directly from the command line. You can specify different styles of ToC generation using the --toc-style option when running the CLI.

"},{"location":"configuration/table_of_contents/#cli-usage-for-toc-styles","title":"CLI Usage for ToC Styles","text":"

When using the readmeai CLI, you can customize the Table of Contents by specifying one of the supported styles with the --toc-style flag.

"},{"location":"configuration/table_of_contents/#supported-toc-styles","title":"Supported ToC Styles:","text":"
  1. Bullet (bullet): Displays a simple bulleted list format for the Table of Contents.
  2. Fold (fold): Generates a collapsible Table of Contents. Users can click to expand the list of sections.
  3. Links (links): A quick link format that groups key sections under a "Quick Links" heading.
  4. Number (number): Numbers the sections in the Table of Contents for clear ordering and hierarchy.
  5. Roman (roman): Uses Roman numerals to number the sections, providing a classic and formal appearance.
"},{"location":"configuration/table_of_contents/#example-cli-command","title":"Example CLI Command","text":"

Here\u2019s how to generate a README file with a Numbered Table of Contents:

readmeai --repository ./my_project --toc-style number --output README.md\n

In this example: - The --repository flag specifies the local project directory. - The --toc-style number flag sets the Table of Contents to use a numbered format. - The --output README.md flag specifies the output file name.

"},{"location":"configuration/table_of_contents/#another-example-with-foldable-toc","title":"Another Example with Foldable ToC","text":"

To generate a README with a Foldable Table of Contents, run:

readmeai --repository ./my_project --toc-style fold --output README.md\n

This will create a ToC that is hidden by default, allowing users to expand it if needed.

"},{"location":"configuration/table_of_contents/#available-toc-styles","title":"Available ToC Styles","text":"
-tc, --toc-style [bullet|fold|links|number|roman]\n

The --toc-style option supports the following values:

  • bullet: Simple bullet list
  • fold: Collapsible ToC
  • links: Quick links format
  • number: Numbered list of sections
  • roman: Roman numeral list of sections
"},{"location":"configuration/table_of_contents/#additional-customizations","title":"Additional Customizations","text":"

You can further customize your README file with options like header styles (--header-style), logo image (--image), and badges (--badge-style).
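As a sketch, these options can be combined in a single command; the values here are only illustrative.

readmeai --repository ./my_project --toc-style roman --header-style compact --image gradient --badge-style flat --output README.md\n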

"},{"location":"examples/gallery/","title":"Example Gallery","text":"

Here are some examples of README files generated by readme-ai for various projects using different languages and frameworks.

Language/Framework Output File Input Repository Description Python readme-python.md readme-ai Core readme-ai project TypeScript & React readme-typescript.md ChatGPT App React Native ChatGPT app PostgreSQL & DuckDB readme-postgres.md Buenavista Postgres proxy server Kotlin & Android readme-kotlin.md file.io Client Android file sharing app Streamlit readme-streamlit.md readme-ai-streamlit Streamlit UI for readme-ai app Rust & C readme-rust-c.md CallMon System call monitoring tool Docker & Go readme-go.md docker-gs-ping Dockerized Go app Java readme-java.md Minimal-Todo Minimalist todo Java app FastAPI & Redis readme-fastapi-redis.md async-ml-inference Async ML inference service Jupyter Notebook readme-mlops.md mlops-course MLOps course repository Apache Flink readme-local.md Local Directory Example using a local directory

See additional README files generated by readme-ai here

"},{"location":"guides/markdown_best_practices/","title":"Markdown Best Practices","text":"

This document provides a comprehensive guide to writing technical documentation using the GitHub flavored markdown spec. This guide includes examples of how to use various markdown elements to create visually appealing and informative documentation.

"},{"location":"guides/markdown_best_practices/#table-of-contents","title":"Table of Contents","text":"
  • Markdown Horizontal Rule
  • HTML Horizontal Rule
  • Table with Alignment
    • Multi-Line Table Cells
    • Task Lists
    • Merge Cells
  • Progress Bars
  • Highlighting
  • Underlining
  • Keyboard Shortcuts
    • Navigating
    • Editing
  • Centered Images
  • Horizontally Aligned Images
  • Small Images
  • Text Boxes
  • Text Wrapping
  • Inline Links
  • Reference Links
  • Simple Contact
  • Modern Contact with Social Icons
  • Contributing Graph
"},{"location":"guides/markdown_best_practices/#line-separators","title":"Line Separators","text":""},{"location":"guides/markdown_best_practices/#markdown-horizontal-rule","title":"Markdown Horizontal Rule","text":"
section end\n\n---\n
section end\n\n***\n
"},{"location":"guides/markdown_best_practices/#html-horizontal-rule","title":"HTML Horizontal Rule","text":"
<p>section end</p>\n\n<hr>\n
"},{"location":"guides/markdown_best_practices/#lists","title":"Lists","text":"

Things I need to do today:

  1. Fix usability problem
  2. Clean up the page
    • Make the headings bigger
  3. Push my changes
  4. Create code review
    • Describe my changes
    • Assign reviewers
    • Ask for feedback

"},{"location":"guides/markdown_best_practices/#tables","title":"Tables","text":""},{"location":"guides/markdown_best_practices/#table-with-alignment","title":"Table with Alignment","text":"
| Left Aligned | Centered | Right Aligned |\n| :---         | :---:    | ---:          |\n| Cell 1       | Cell 2   | Cell 3        |\n| Cell 4       | Cell 5   | Cell 6        |\n

This will render as:

| Left Aligned | Centered | Right Aligned |
| --- | --- | --- |
| Cell 1 | Cell 2 | Cell 3 |
| Cell 4 | Cell 5 | Cell 6 |
"},{"location":"guides/markdown_best_practices/#multi-line-table-cells","title":"Multi-Line Table Cells","text":"
| Name | Details |\n| ---  | ---     |\n| Item1 | This text is on one line |\n| Item2 | This item has:<br>- Multiple items<br>- That we want listed separately |\n

This will render as:

| Name | Details |
| --- | --- |
| Item1 | This text is on one line |
| Item2 | This item has:<br>- Multiple items<br>- That we want listed separately |
"},{"location":"guides/markdown_best_practices/#task-lists","title":"Task Lists","text":"
| header 1 | header 2 |\n| ---      | ---      |\n| cell 1   | cell 2   |\n| cell 3   | <ul><li> - [ ] Task one </li><li> - [ ] Task two </li></ul> |\n

This will render as:

| header 1 | header 2 |
| --- | --- |
| cell 1 | cell 2 |
| cell 3 | - [ ] Task one<br>- [ ] Task two |
"},{"location":"guides/markdown_best_practices/#merge-cells","title":"Merge Cells","text":"
<table>\n  <tr>\n    <td colspan=\"2\">I take up two columns!</td>\n  </tr>\n  <tr>\n    <td>First column</td>\n    <td>Second column</td>\n  </tr>\n</table>\n

This will render as:

I take up two columns! First column Second column"},{"location":"guides/markdown_best_practices/#text-styling-formatting","title":"Text Styling & Formatting","text":"
  • strikethrough or strikethrough
  • H2O is a liquid and C6H12O6 is a sugar.
  • 19th
  • X2 + Y2 = Z2
  • z2 + c
  • \u2003Install\u2003
"},{"location":"guides/markdown_best_practices/#progress-bars","title":"Progress Bars","text":"

22%

48%

77%

"},{"location":"guides/markdown_best_practices/#highlighting","title":"Highlighting","text":"

highlighted text.

"},{"location":"guides/markdown_best_practices/#underlining","title":"Underlining","text":"

I'm Underlined!

"},{"location":"guides/markdown_best_practices/#buttons-keyboard-shortcuts","title":"Buttons & Keyboard Shortcuts","text":"Click here Click here Or here Or here

Big Fat Button

"},{"location":"guides/markdown_best_practices/#keyboard-shortcuts","title":"Keyboard Shortcuts","text":"

Press Enter to go to the next page.

"},{"location":"guides/markdown_best_practices/#navigating","title":"Navigating","text":"

You can navigate through your items or search results using the keyboard. You can use Tab to cycle through results, and Shift + Tab to go backwards. Or use the arrow keys, \u2191, \u2192, \u2193 and \u2190.

To copy the selected text, press Ctrl + C.

"},{"location":"guides/markdown_best_practices/#editing","title":"Editing","text":"

Press Ctrl + S to save your changes. Select text and press Ctrl + B to make it bold.

"},{"location":"guides/markdown_best_practices/#math-equations","title":"Math Equations","text":"\\[ \\begin{aligned} \\dot{x} & = \\sigma(y-x) \\\\ \\dot{y} & = \\rho x - y - xz \\\\ \\dot{z} & = -\\beta z + xy \\end{aligned} \\] \\[ L = \\frac{1}{2} \\rho v^2 S C_L \\]"},{"location":"guides/markdown_best_practices/#images","title":"Images","text":""},{"location":"guides/markdown_best_practices/#simple-icons","title":"Simple Icons","text":""},{"location":"guides/markdown_best_practices/#docker","title":"Docker","text":""},{"location":"guides/markdown_best_practices/#docker_1","title":"Docker","text":""},{"location":"guides/markdown_best_practices/#centered-images","title":"Centered Images","text":""},{"location":"guides/markdown_best_practices/#horizontally-aligned-images","title":"Horizontally Aligned Images","text":""},{"location":"guides/markdown_best_practices/#small-images","title":"Small Images","text":"

Code documentation - Generated directory tree structure and summaries of the key files in your codebase.

Spike documentation - Generated directory tree structure and summaries of the key files in your codebase.

Chunking documentation - Generated directory tree structure and summaries of the key files in your codebase.

"},{"location":"guides/markdown_best_practices/#text-boxes","title":"Text Boxes","text":"This is text in the box. Much wow"},{"location":"guides/markdown_best_practices/#text-wrapping","title":"Text Wrapping","text":"

At the 2019 rendition of E3, an eccentric gamer in attendance interrupted Keanu Reeves' presentation of the role-playing game (RPG) Cyberpunk 2077, loudly exclaiming, \u201cYou're breathtaking,\u201d which was directed at the actor-cum-presenter. The image macro used to build the \"You're Breathtaking\" meme generally features a still of Keanu Reeves pointing at someone in the audience in front of him - that someone is Peter Sark, though no images from Keanu's point of view have since been used as part of the \"You're Breathtaking\" meme.

"},{"location":"guides/markdown_best_practices/#mermaid-diagrams","title":"Mermaid Diagrams","text":"
  • Mermaid Live Editor
graph TD;\n  A-->B;\n  A-->C;\n  B-->D;\n  C-->D;
graph TB\n\n  SubGraph1 --> SubGraph1Flow\n  subgraph \"SubGraph 1 Flow\"\n  SubGraph1Flow(SubNode 1)\n  SubGraph1Flow -- Choice1 --> DoChoice1\n  SubGraph1Flow -- Choice2 --> DoChoice2\n  end\n\n  subgraph \"Main Graph\"\n  Node1[Node 1] --> Node2[Node 2]\n  Node2 --> SubGraph1[Jump to SubGraph1]\n  SubGraph1 --> FinalThing[Final Thing]\nend
journey\n    title My working day\n    section Go to work\n      Make tea: 5: Me\n      Go upstairs: 3: Me\n      Do work: 1: Me, Cat\n    section Go home\n      Go downstairs: 5: Me\n      Sit down: 5: Me
graph TD\nA[FastAPI Application] --> B{HTTP Request}\nB --> C{Request Validation}\nC -- Valid --> D[Route Handler]\nC -- Invalid --> E[RequestValidationError]\nD --> F{Route Handler Execution}\nF -- Success --> G[Response]\nF -- Exception --> H{Exception Handling}\nH -- RequestError --> I[request_error_handler]\nH -- HTTPException --> J[http_exception_handler]\nH -- RequestValidationError --> K[request_validation_error_handler]\nH -- Unhandled Exception --> L[Internal Server Error]\nI --> M[Custom Error Response]\nJ --> N[HTTP Exception Response]\nK --> O[Validation Error Response]\nL --> P[Internal Server Error Response]\nM --> Q[Return Response]\nN --> Q\nO --> Q\nP --> Q
"},{"location":"guides/markdown_best_practices/#return-to-top","title":"Return To Top","text":"

Return

Return

Return

"},{"location":"guides/markdown_best_practices/#html-spacing-entities","title":"HTML Spacing Entities","text":"Name HTML Entity Description En space &ensp; Half the width of an em space Em space &emsp; Width of an em space (equal to the font size) Three-per-em space &emsp13; One-third of an em space Figure space &numsp; Width of a numeral (digit) Punctuation space &puncsp; Width of a period or comma Thin space &thinsp; Thinner than a regular space Hair space &hairsp; Thinner than a thin space Narrow no-break space &#8239; Non-breaking thin space

Note: The &emsp13; and &puncsp; entities may not be supported in all browsers. For the narrow no-break space, there isn't a named HTML entity, so the numeric character reference &#8239; is used.

"},{"location":"guides/markdown_best_practices/#links","title":"Links","text":""},{"location":"guides/markdown_best_practices/#inline-links","title":"Inline Links","text":"

inline link reference link

"},{"location":"guides/markdown_best_practices/#reference-links","title":"Reference Links","text":"

reference link 2

"},{"location":"guides/markdown_best_practices/#contact","title":"Contact","text":""},{"location":"guides/markdown_best_practices/#simple-contact","title":"Simple Contact","text":"

If you have any questions or comments, feel free to reach out to me! - Email: your-email@example.com - Twitter: @YourHandle

"},{"location":"guides/markdown_best_practices/#modern-contact-with-social-icons","title":"Modern Contact with Social Icons","text":"

For readme-ai issues and feature requests please visit our issues page, or start a discussion!

"},{"location":"guides/markdown_best_practices/#contributing-guidelines","title":"Contributing Guidelines","text":""},{"location":"guides/markdown_best_practices/#contributing-graph","title":"Contributing Graph","text":""},{"location":"guides/markdown_best_practices/#references","title":"References","text":"
  • github-markdown-tricks
"},{"location":"guides/markdown_best_practices/#footnotes","title":"Footnotes","text":"

Here's a sample sentence with a footnote.[^1]

[^1]: And here's the definition of the footnote.

"},{"location":"llms/","title":"Large Language Model (LLM) Integrationss","text":"

Readme-ai integrates seamlessly with various Large Language Model (LLM) services to generate high-quality README content. This page provides an overview of the supported LLM services and links to detailed information about each.

"},{"location":"llms/#supported-llm-services","title":"Supported LLM Services","text":"
  1. OpenAI
  2. Ollama
  3. Anthropic
  4. Google Gemini
  5. Offline Mode
"},{"location":"llms/#comparing-llm-services","title":"Comparing LLM Services","text":"Service Pros Cons OpenAI High-quality output, Versatile Requires API key, Costs associated Ollama Free, Privacy-focused, Offline May be slower, Requires local setup Anthropic Privacy-focused, Offline May be slower, Requires local setup Gemini Strong performance, Google integration Requires API key Offline No internet required, Fast Basic output, Limited customization"},{"location":"llms/#tips-for-optimal-results","title":"Tips for Optimal Results","text":"
  1. Experiment with different models: Try various LLM services and models to find the best fit for your project.
  2. Provide clear context: Ensure your repository has well-organized code and documentation to help the LLM generate more accurate content.
  3. Fine-tune with CLI options: Use readme-ai's CLI options to customize the output further after choosing your LLM service (an example is sketched after this list).
  4. Review and edit: Always review the generated README and make necessary edits to ensure accuracy and relevance to your project.
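For example, a provider choice can be combined with output-tuning flags in a single command (a minimal sketch; the flag values are illustrative and mirror options shown in the CLI usage guide):

readmeai --repository ./my_project \\\n         --api openai \\\n         --model gpt-4-turbo \\\n         --temperature 0.1 \\\n         --emojis\n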

By leveraging these LLM integrations effectively, you can generate comprehensive and accurate README files for your projects with minimal effort.

"},{"location":"llms/#api-integrations","title":"\ud83d\ude80 API Integrations","text":"

README-AI supports multiple large language model (LLM) APIs for generating README files. The following tabs explain how to configure and use each supported API.

"},{"location":"llms/#api-configuration-tabs","title":"API Configuration Tabs","text":"AnthropicGeminiOllamaOpenAIOfflineMode
```sh\nreadmeai --api anthropic --model claude-3-opus-20240229 --repository <REPO_URL_OR_PATH>\n```\n
```sh\nreadmeai --api gemini --model gemini-1.5-flash --repository <REPO_URL_OR_PATH>\n```\n
```sh\nreadmeai --api ollama --model llama3 --repository <REPO_URL_OR_PATH>\n```\n
```sh\nreadmeai --api openai --model gpt-3.5-turbo --repository <REPO_URL_OR_PATH>\n```\n
```sh\nreadmeai --api offline --repository <REPO_URL_OR_PATH>\n```\n


"},{"location":"llms/anthropic/","title":"Installation Guide for readme-ai with Ollama","text":"

Get started with readme-ai using Ollama. This guide shows you how to install and run readme-ai with Ollama in your local environment.

Ollama Requirement

Ensure you have Ollama installed and running on your system. For the latest installation guide, visit the Ollama GitHub repository.

"},{"location":"llms/anthropic/#installation-using-ollama","title":"Installation Using Ollama","text":"

To use readme-ai with Ollama, follow these steps:

  1. Install Ollama

Ensure you have installed Ollama on your system. You can find detailed installation instructions on the Ollama GitHub page.

  2. Pull the LLM Model

Pull the required LLM model to use with Ollama:

ollama pull llama3:latest\n
  3. Start the Ollama Server

Start the Ollama server locally:

export OLLAMA_HOST=127.0.0.1 && ollama serve\n
  4. Run readme-ai with Ollama

After starting the server, run readme-ai with Ollama:

readmeai --api ollama --model llama3 --repository https://github.com/eli64s/readme-ai\n

Explanation of common arguments:

| Argument | Description |
| --- | --- |
| --api | Specifies the LLM API service to use (in this case, Ollama). |
| --model | Specifies the model to use with Ollama (e.g., llama3). |
| --repository | Specifies the GitHub repository or local directory path to analyze. |
"},{"location":"llms/anthropic/#optional-dependencies","title":"Optional Dependencies","text":"

To use additional LLM providers like Anthropic or Google Gemini in addition to Ollama, install the optional dependencies:

Anthropic:

pip install readmeai[anthropic]\n

Google Gemini:

pip install readmeai[gemini]\n
"},{"location":"llms/anthropic/#usage","title":"Usage","text":""},{"location":"llms/anthropic/#setting-environment-variables","title":"Setting Environment Variables","text":"

Ollama Host

Set the Ollama host to enable the local server:

export OLLAMA_HOST=127.0.0.1\n

For Windows users, use:

set OLLAMA_HOST=127.0.0.1\n
"},{"location":"llms/anthropic/#running-readme-ai-with-ollama","title":"Running readme-ai with Ollama","text":"

Run readme-ai with the Ollama model:

readmeai --api ollama --model llama3 --repository https://github.com/eli64s/readme-ai\n

For a list of all available options, run:

readmeai --help\n
"},{"location":"llms/anthropic/#troubleshooting","title":"Troubleshooting","text":"
  1. Server Connection Issues: Ensure that the Ollama server is running and accessible. Verify that the host address is set correctly (a quick check is sketched after this list).
  2. Model Not Found: Make sure that the required LLM model is properly pulled using the ollama pull command.
  3. Permission Issues: Ensure you have the necessary permissions to run commands with Ollama. You may need administrative rights on your system.
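A quick way to confirm both that the server is reachable and that your model is pulled is to query the local Ollama API (a sketch, assuming Ollama's default port 11434):

curl http://127.0.0.1:11434/api/tags  # lists locally available models when the server is running\n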

For further help, you can open an issue on GitHub or refer to the official documentation.

"},{"location":"llms/google_gemini/","title":"Google Gemini","text":"

Google's Gemini models offer strong performance across a wide range of tasks.

"},{"location":"llms/google_gemini/#configuration","title":"Configuration","text":"
readmeai --repository <REPO_URL_OR_PATH> --api gemini --model gemini-1.5-flash\n
"},{"location":"llms/google_gemini/#available-models","title":"Available Models","text":"

It is recommended to use the following models:

  • gemini-1.5-flash
  • gemini-1.5-pro
"},{"location":"llms/google_gemini/#best-practices","title":"Best Practices","text":"
  • Gemini models excel at understanding context and generating coherent text.
  • Ensure you have the necessary API credentials set up (a sketch of the expected environment variable follows).
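As a minimal sketch, the credentials are exported before running readme-ai (the variable name GOOGLE_API_KEY is an assumption here; confirm the expected name in the Google Gemini documentation):

export GOOGLE_API_KEY=<your_api_key>  # assumed variable name\nreadmeai --repository <REPO_URL_OR_PATH> --api gemini --model gemini-1.5-flash\n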
"},{"location":"llms/offline_mode/","title":"Offline Mode","text":"

Offline mode allows you to generate a README without an internet connection. This is useful when you want to quickly generate boilerplate README files.

"},{"location":"llms/offline_mode/#configuration","title":"Configuration","text":"
readmeai --api offline -r https://github.com/username/project\n
"},{"location":"llms/offline_mode/#offline-mode-readme-example","title":"Offline Mode README Example","text":"
  • readme-offline-mode.md
Note
  • Use offline mode for quick boilerplate generation or when you don't have internet access.
  • Customize the generated README manually after generation.
"},{"location":"llms/ollama/","title":"Ollama","text":"

Ollama is a privacy-focused, open-source tool for running open-source LLMs locally, such as Llama 3, Mistral, and Gemma 2. Ollama can be used with readme-ai to generate README files with a variety of models and configurations from its model library.

"},{"location":"llms/ollama/#usage","title":"Usage","text":"

Start by downloading Ollama, and then pull a model such as Llama 3 or Mistral.

ollama pull llama3\n

Once you have pulled the model, run it with the Ollama server.

ollama run llama3\n

Then, you can use the readmeai CLI to generate README files using the Ollama API.

readmeai --api ollama --model llama3 -r https://github.com/username/project\n
Note
  • Slower README generation times may be experienced when using Ollama compared to cloud-based services.
"},{"location":"llms/openai/","title":"OpenAI","text":"

OpenAI's GPT models are known for their versatility and high-quality text generation.

"},{"location":"llms/openai/#configuration","title":"Configuration","text":"
readmeai --repository <REPO_URL_OR_PATH> --api openai --model gpt-3.5-turbo\n
"},{"location":"llms/openai/#available-models","title":"Available Models","text":"

While other models are supported, the following are recommended:

  • gpt-3.5-turbo
  • gpt-4
  • gpt-4-turbo

"},{"location":"llms/openai/#best-practices","title":"Best Practices","text":"
  • Use gpt-3.5-turbo for faster generation and lower costs.
  • Use gpt-4 or gpt-4-turbo for more complex projects or when you need higher accuracy.
"},{"location":"quickstart/installation/","title":"Installation","text":"

Install readmeai using one of the following methods:

"},{"location":"quickstart/installation/#pip","title":"Pip","text":"

Pip is the default Python package manager and the recommended way to install readmeai:

pip install readmeai\n
"},{"location":"quickstart/installation/#pipx","title":"Pipx","text":"

Alternatively, use pipx to install readmeai in an isolated environment:

pipx install readmeai\n
Why use pipx?

Using pipx allows you to install and run Python command-line applications in isolated environments, which helps prevent dependency conflicts with other Python projects.
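If you prefer not to install readmeai at all, pipx can also run it on demand in a temporary environment (a minimal sketch; assumes the readmeai console script shares the package name):

pipx run readmeai --api offline --repository ./my_project\n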

"},{"location":"quickstart/installation/#optional-dependencies","title":"Optional Dependencies","text":"

To use additional LLM providers like Anthropic or Google Gemini, install the optional dependencies:

Anthropic:

pip install \"readmeai[anthropic]\"\n

Google Gemini:

pip install \"readmeai[google-generativeai]\"\n

For usage instructions, see the Usage section.

"},{"location":"quickstart/prerequisites/","title":"Prerequisites","text":""},{"location":"quickstart/prerequisites/#system-requirements","title":"System Requirements","text":"
  • Python: 3.9 or higher.
  • Package Manager/Container: pip, pipx, or docker.
"},{"location":"quickstart/prerequisites/#repository-or-directory","title":"Repository or Directory","text":"

A Git repository or a local file system directory is required to generate a README file. Supported platforms include:

  • GitHub
  • GitLab
  • Bitbucket
  • File System

If your Git provider is not listed, open an issue or submit a pull request to add support for additional platforms.

"},{"location":"quickstart/prerequisites/#llm-api-key","title":"LLM API Key","text":"

To enable the full functionality of readmeai, an account and API key are required for one of the following providers:

  • OpenAI: OpenAI API
  • Ollama: Ollama API
  • Anthropic: Anthropic API
  • Google Gemini: Google Gemini API
  • Offline Mode: Runs readmeai without an API key or internet connection.

For more information on setting up an API key, refer to the provider's documentation.
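Once you have a key, it is typically exposed to readmeai as an environment variable, for example (shown here for OpenAI; see the Usage pages for other providers):

export OPENAI_API_KEY=<your_api_key>\n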

"},{"location":"usage/cli/","title":"CLI Usage","text":"

This guide covers the basic usage of readme-ai and provides examples for different LLM services.

"},{"location":"usage/cli/#basic-usage","title":"Basic Usage","text":"

The general syntax for using readme-ai is:

readmeai --repository <REPO_URL_OR_PATH> --api <LLM_SERVICE> [OPTIONS]\n

Replace <REPO_URL_OR_PATH> with your repository URL or local path, and <LLM_SERVICE> with your chosen LLM service (openai, ollama, anthropic, gemini, or offline).

"},{"location":"usage/cli/#examples","title":"Examples","text":""},{"location":"usage/cli/#using-openai","title":"Using OpenAI","text":"
readmeai --repository https://github.com/eli64s/readme-ai \\\n         --api openai \\\n         --model gpt-3.5-turbo # (1)\n
  1. Model currently defaults to gpt-3.5-turbo
"},{"location":"usage/cli/#using-ollama","title":"Using Ollama","text":"
readmeai --repository https://github.com/eli64s/readme-ai \\\n         --api ollama \\\n         --model llama3\n
"},{"location":"usage/cli/#using-google-gemini","title":"Using Google Gemini","text":"
readmeai --repository https://github.com/eli64s/readme-ai \\\n         --api gemini \\\n         --model gemini-1.5-flash\n
"},{"location":"usage/cli/#offline-mode","title":"Offline Mode","text":"
readmeai --repository https://github.com/eli64s/readme-ai \\\n         --api offline\n
"},{"location":"usage/cli/#advanced-usage","title":"Advanced Usage","text":"

You can customize the output using various options:

readmeai --repository https://github.com/eli64s/readme-ai \\\n         --output readmeai.md \\\n         --api openai \\\n         --model gpt-4-turbo \\\n         --badge-color A931EC \\\n         --badge-style flat-square \\\n         --header-style compact \\\n         --toc-style fold \\\n         --temperature 0.1 \\\n         --tree-depth 2 \\\n         --image LLM \\\n         --emojis\n

For a full list of options, run:

readmeai --help\n

See the Configuration Options documentation for detailed explanations of each option.

"},{"location":"usage/cli/#tips-for-effective-usage","title":"Tips for Effective Usage","text":"
  1. Choose the right LLM: Different LLMs may produce varying results. Experiment to find the best fit for your project.
  2. Adjust temperature: Lower values (e.g., 0.1) produce more focused output, while higher values (e.g., 0.8) increase creativity.
  3. Use custom prompts: For specialized projects, consider using custom prompts to guide the AI's output.
  4. Review and edit: Always review the generated README and make necessary adjustments to ensure accuracy and relevance.
"},{"location":"usage/cli/#troubleshooting","title":"Troubleshooting","text":"

If you encounter any issues:

  1. Ensure you have the latest version of readme-ai installed (an upgrade command is sketched after this list).
  2. Check your API credentials if using OpenAI or Google Gemini.
  3. For Ollama, make sure the Ollama service is running locally.
  4. Consult the FAQ or open an issue for additional support.
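For the first point, upgrading through pip is usually sufficient (a minimal sketch; use the equivalent pipx or Docker command if you installed readme-ai another way):

pip install --upgrade readmeai\n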
"},{"location":"usage/docker/","title":"Docker","text":"

Running readme-ai in a containerized environment using Docker offers isolation of the application and its dependencies from the host system. This section details how to pull the Docker image from Docker Hub, build the Docker image from the source code, and run the Docker container.

Docker Installation

Before proceeding, ensure that Docker is installed and running on your system. If you haven't installed Docker yet, please visit the official Docker documentation for installation instructions.

"},{"location":"usage/docker/#pull-the-docker-image","title":"Pull the Docker Image","text":"

Pull the latest readme-ai image from Docker Hub:

docker pull zeroxeli/readme-ai:latest\n
"},{"location":"usage/docker/#build-the-docker-image","title":"Build the Docker Image","text":"

Alternatively, you can build the Docker image from the source code. This assumes you have cloned the readme-ai repository.

docker buildx build --platform linux/amd64,linux/arm64 -t readme-ai --push .\n
Buildx

Using docker buildx allows you to build multi-platform images, which means you can create Docker images that work on different architectures (e.g., amd64 and arm64). This is particularly useful if you want your Docker image to be compatible with a wider range of devices and environments, such as both standard servers and ARM-based devices like the Raspberry Pi.

"},{"location":"usage/docker/#run-the-docker-container","title":"Run the Docker Container","text":"

Run the readme-ai Docker container with the following command:

docker run -it --rm \\\n-e OPENAI_API_KEY=$OPENAI_API_KEY \\\n-v \"$(pwd)\":/app zeroxeli/readme-ai:latest \\\n-r https://github.com/eli64s/readme-ai \\\n--api openai\n

Explanation of the command arguments:

| Argument | Function |
| --- | --- |
| -it | Creates an interactive terminal. |
| --rm | Automatically removes the container when it exits. |
| -e | Passes your OpenAI API key as an environment variable. |
| -v \"$(pwd)\":/app | Mounts the current directory to the /app directory in the container, allowing access to the generated README file on your host system. |
| -r | Specifies the GitHub repository to analyze. |

For Windows users, replace $(pwd) with %cd% in the command. For PowerShell, use ${PWD} instead.
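For reference, here is a sketch of the same command adapted for the Windows Command Prompt (cmd syntax for the environment variable and current directory; the image and flags are unchanged):

docker run -it --rm -e OPENAI_API_KEY=%OPENAI_API_KEY% -v \"%cd%\":/app zeroxeli/readme-ai:latest -r https://github.com/eli64s/readme-ai --api openai\n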

"},{"location":"usage/docker/#cleanup","title":"Cleanup","text":"

If you want to remove the Docker image and container from your system, follow these steps.

1. Identify the Container

First, list all containers on your system.

docker ps -a\n

You should see output similar to the following:

CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS     NAMES\nabcdef123456   zeroxeli/readme-ai:latest   \"python main.py -r h\u2026\"   2 minutes ago    Up 2 minutes\n

Look for the container with ID abcdef123456.

2. Stop the Container

Stop the container using its ID.

docker stop abcdef123456\n

3. Remove the Container

Remove the container using its ID.

docker rm abcdef123456\n

4. Remove the Image

Remove the Docker image from your system.

docker rmi zeroxeli/readme-ai:latest\n
"},{"location":"usage/docker/#troubleshooting","title":"Troubleshooting","text":"
  1. If you encounter permission issues, ensure your user has the right permissions to run Docker commands.
  2. If the container fails to start, check that your OPENAI_API_KEY is correctly set and valid (a quick check is shown after this list).
  3. For network-related issues, verify your internet connection and firewall settings.
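For the API key check in particular, printing the variable before launching the container quickly confirms it is set (a minimal sketch for bash; use echo %OPENAI_API_KEY% in Command Prompt):

printenv OPENAI_API_KEY  # prints your key if it is set, nothing otherwise\n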

For more detailed troubleshooting, refer to the official Docker documentation or open an issue on GitHub.

"},{"location":"usage/pip/","title":"Pip","text":"

After installation, you can run readme-ai with:

readmeai --api openai --repository https://github.com/eli64s/readme-ai\n
"},{"location":"usage/pip/#usage","title":"Usage","text":""},{"location":"usage/pip/#setting-environment-variables","title":"Setting Environment Variables","text":"

OpenAI API Key

Generate an OpenAI API key and set it as an environment variable:

export OPENAI_API_KEY=<your_api_key>\n

For Windows users, use:

set OPENAI_API_KEY=<your_api_key>\n

Anthropic API Key

To use the Anthropic API, set your API key:

export ANTHROPIC_API_KEY=<your_api_key>\n
"},{"location":"usage/pip/#running-readme-ai","title":"Running readme-ai","text":"

Run readme-ai with OpenAI:

readmeai --api openai --repository https://github.com/eli64s/readme-ai\n

Run readme-ai with Anthropic:

readmeai --api anthropic --repository https://github.com/eli64s/readme-ai\n

For a list of all available options, run:

readmeai --help\n
"},{"location":"usage/pip/#troubleshooting","title":"Troubleshooting","text":"
  1. Permission Issues: Ensure you have the necessary permissions to install Python packages. You may need to use sudo on Unix-based systems.
  2. Pipx Not Found: Make sure pipx is properly installed and available in your PATH. You can find installation instructions here (one common install method is sketched after this list).
  3. Missing Dependencies: Some advanced features require additional Python packages. Check the official documentation for a list of optional dependencies.
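For the second point, one common way to install pipx and add it to your PATH is through pip itself (a sketch based on the pipx documentation; restart your shell afterwards):

python -m pip install --user pipx\npython -m pipx ensurepath\n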

For further help, you can open an issue on GitHub or refer to the official documentation.

"},{"location":"usage/streamlit/","title":"\"> Streamlit","text":"

Run readme-ai directly in your browser on Streamlit Cloud. No installation required!

Source Code

Find the source code for the Streamlit app in the readme-ai repository.

"}]} \ No newline at end of file