A header template style determines how your project's header section is structured and displayed in the README file. README-AI offers several pre-designed header styles to help brand your project and create a professional appearance.
README-AI is a developer tool and framework that combines robust data processing modules with generative AI models. It streamlines documentation and enhances developer productivity by auto-generating comprehensive README.md files.
With README-AI, you can:
Automate Documentation: Synchronize data from third-party sources and generate documentation automatically.
Customize & Flexibly Style: Choose from dozens of options for styling, formatting, badges, header designs, and more.
Support Multiple Languages & Projects: Work across a wide range of programming languages and project types.
Leverage Multiple LLMs: Compatible with OpenAI, Ollama, Anthropic, Google Gemini and Offline Mode.
Follow Markdown Best Practices: Create clean, professional-looking documentation using Markdown formatting best practices.
Strive to make useful, creative, and high quality contributions. This isn't meant to be a high bar, but more of a guiding principle and philosophy. Here's what we mean by these terms:
Useful: Solve common problems, use cases, bugs, or new features.
Creative: Innovative and helping us all grow and learn new things.
High Quality: Well-written, structured, and explained.
"},{"location":"contributing/#ways-to-contribute","title":"Ways to Contribute","text":"
To improve and grow the project, we need your help! Here are some ways to get involved:
| Activity | Ideas |
| :--- | :--- |
| 👋 Discussions | Start a discussion by asking a question or making a suggestion. |
| 🐛 Open an Issue | Find unhandled exceptions and bugs in the codebase. |
| 📄 Documentation | Write documentation for the project. |
| 🧪 Testing | Write unit tests to increase code coverage. |
| 🧩 Feature Requests | Brainstorm new ideas such as a CLI option to select any language. |
| 🛠️ Code Contributions | Contribute to the codebase and submit a pull request. |
| 🔢 Code Readability | Find ways to make code more readable and easier to understand. |
| 🤔 Other | Anything else you can think of! |
These are just a few examples, and we welcome any other ideas you may have!
By contributing to our project, you agree to license your contributions under the project's open source license. The project's license can be found in the LICENSE file.
Thank you for your interest in contributing to readme-ai! We appreciate your help and look forward to working with you.
"},{"location":"faq/","title":"README-AI Frequently Asked Questions","text":""},{"location":"faq/#general-questions","title":"General Questions","text":""},{"location":"faq/#q-what-is-readme-ai","title":"Q: What is README-AI?","text":"
A: README-AI is a tool that automatically generates comprehensive README files for your projects using artificial intelligence.
"},{"location":"faq/#q-which-ai-models-does-readme-ai-support","title":"Q: Which AI models does README-AI support?","text":"
A: README-AI supports multiple LLM providers: OpenAI (GPT models), Ollama, Anthropic, Google Gemini, and an offline mode. Earlier releases focused on OpenAI's GPT models, with support for the other providers added over time.
"},{"location":"faq/#installation-and-setup","title":"Installation and Setup","text":""},{"location":"faq/#q-how-do-i-install-readme-ai","title":"Q: How do I install README-AI?","text":"
A: You can install README-AI using pip:
```shell
pip install readmeai
```
Alternatively, you can use Docker:
```shell
docker run -it -e OPENAI_API_KEY=your_key_here -v "$(pwd)":/app zeroxeli/readme-ai:latest
```
"},{"location":"faq/#q-im-getting-an-error-when-trying-to-install-on-ubuntu-how-can-i-fix-it","title":"Q: I'm getting an error when trying to install on Ubuntu. How can I fix it?","text":"
A: If you're encountering issues with conda environment creation, try using a virtual environment with pip instead. Ensure you have Python 3.8 or higher installed.
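A minimal virtual-environment setup sketch (directory and environment names are arbitrary; the final step requires network access to PyPI):

```shell
# Create and activate an isolated environment, then install readme-ai
python3 -m venv .venv
source .venv/bin/activate
pip install readmeai
```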
"},{"location":"faq/#usage","title":"Usage","text":""},{"location":"faq/#q-how-do-i-generate-a-readme-for-my-project","title":"Q: How do I generate a README for my project?","text":"
Replace the URL with your repository link.

#### Q: Can I use README-AI with private repositories?
A: Yes, but you may need to provide authentication. For Bitbucket, use the format:
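The exact URL format is omitted from this page. As a sketch, assuming Bitbucket HTTPS authentication with an app password (every bracketed value is a placeholder, not a documented format):

```shell
readmeai --repository https://<username>:<app-password>@bitbucket.org/<workspace>/<repo>
```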
"},{"location":"faq/#q-does-readme-ai-work-with-gitlab-repositories","title":"Q: Does README-AI work with GitLab repositories?","text":"
A: Yes, README-AI supports GitLab repositories. Use the same command format as with GitHub repos.
"},{"location":"faq/#troubleshooting","title":"Troubleshooting","text":""},{"location":"faq/#q-im-getting-a-404-not-found-error-what-should-i-do","title":"Q: I'm getting a \"404 Not Found\" error. What should I do?","text":"
A: Ensure your OpenAI API key is correct and has sufficient permissions. Also, check if you're using the correct API endpoint.
"},{"location":"faq/#q-the-script-runs-but-doesnt-generate-a-file-why","title":"Q: The script runs but doesn't generate a file. Why?","text":"
A: Check the permissions in your current directory. Ensure README-AI has write access to create the output file.
"},{"location":"faq/#q-im-seeing-a-429-too-many-requests-error-how-can-i-resolve-this","title":"Q: I'm seeing a \"429 Too Many Requests\" error. How can I resolve this?","text":"
A: This error occurs when you've exceeded the rate limit for the OpenAI API. Wait a while before trying again, or consider upgrading your API plan.
"},{"location":"faq/#q-why-am-i-getting-a-notfound-object-is-not-iterable-error","title":"Q: Why am I getting a \"NotFound object is not iterable\" error?","text":"
A: This error may occur if you're using an incompatible model. Ensure you're using a supported model like \"gpt-3.5-turbo\" or \"gpt-4\".
"},{"location":"faq/#features-and-customization","title":"Features and Customization","text":""},{"location":"faq/#q-can-i-use-readme-ai-with-languages-other-than-english","title":"Q: Can I use README-AI with languages other than English?","text":"
A: While README-AI primarily generates content in English, there are ongoing efforts to add internationalization (i18n) support for languages like Spanish and Italian.
"},{"location":"faq/#q-is-it-possible-to-use-readme-ai-in-azure-devops","title":"Q: Is it possible to use README-AI in Azure DevOps?","text":"
A: While there isn't native integration, you could potentially use README-AI as part of your Azure DevOps pipeline by incorporating it into your build or release process.
"},{"location":"faq/#q-can-i-customize-the-openai-endpoint-or-model-used","title":"Q: Can I customize the OpenAI endpoint or model used?","text":"
A: There are ongoing efforts to make the configuration more extensible, including options to specify different endpoints (like Azure OpenAI) and models.
"},{"location":"faq/#contributing","title":"Contributing","text":""},{"location":"faq/#q-how-can-i-contribute-to-readme-ai","title":"Q: How can I contribute to README-AI?","text":"
A: You can contribute by submitting pull requests on GitHub. Areas of contribution include adding support for new AI models, improving documentation, adding tests, and fixing bugs.
If you have any other questions or issues, please check the GitHub repository or open a new issue for support.
"},{"location":"philosophy/","title":"Philosophy and Vision","text":""},{"location":"philosophy/#empowering-developers-enlightening-projects","title":"Empowering Developers, Enlightening Projects","text":"
Readme-ai envisions a future where every software project, regardless of size or complexity, is accompanied by clear, comprehensive, and up-to-date documentation. We strive to create an ecosystem where documentation is no longer an afterthought but an integral, effortless part of the development process.
We see Readme-ai as a catalyst for a paradigm shift in software development practices. By making high-quality documentation effortless and ubiquitous, we aim to:
Accelerate innovation by making it easier for developers to build upon each other's work.
Improve software quality by encouraging better-documented and more maintainable codebases.
Enhance collaboration within and between development teams through clearer project communication.
Increase the overall efficiency of the software development lifecycle.
Through Readme-ai, we aspire to create a world where every line of code is matched by a line of clear, concise, and helpful documentation, empowering developers and enlightening projects for the benefit of all.
The --help flag can be used to view the help menu for a command, e.g., for readmeai:
```shell
❯ readmeai --help
```
"},{"location":"troubleshooting/#viewing-the-version","title":"Viewing the Version","text":"
When seeking help, it's important to determine the version of readmeai that you're using, since the problem may already be solved in a newer version.
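Assuming the CLI follows the common `--version` convention (check `readmeai --help` to confirm the exact flag):

```shell
readmeai --version
```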
"},{"location":"troubleshooting/#open-an-issue-on-github","title":"Open an issue on GitHub","text":"
The issue tracker on GitHub is a good place to report bugs and request features. Make sure to search for similar issues first, since someone else may have already reported the same problem.
See the FAQ for answers to common questions and troubleshooting tips.
"},{"location":"why/","title":"Why use README-AI?","text":"
In the fast-paced world of software development, clear and comprehensive documentation is crucial. Yet, creating and maintaining high-quality README files can be time-consuming and often overlooked. This is where Readme-ai comes in, revolutionizing the way developers approach project documentation.
Readme-ai harnesses the power of artificial intelligence to automatically generate detailed, structured README files for your projects. By analyzing your codebase, Readme-ai creates documentation that is:
Comprehensive: Covers all essential aspects of your project, from installation instructions to usage examples.
Consistent: Maintains a uniform structure across all your projects, enhancing readability and professionalism.
"},{"location":"why/#time-saving-and-efficient","title":"Time-Saving and Efficient","text":"
Focus on Coding: Spend more time writing code and less time worrying about documentation.
Quick Setup: Get started with minimal configuration, allowing you to generate a README in minutes.
Customizable Templates: Fine-tune the output to match your project's specific needs and your personal style.
Flexible README Generation: Combines robust repository context extraction with generative AI to create detailed and accurate README files.
Customizable Output: Offers numerous CLI options for tailoring the README to your project's needs:
Badge styles and colors
Header designs
Table of contents styles
Project logos
Language Agnostic: Works with a wide range of programming languages and project types, automatically detecting and summarizing key aspects of your codebase.
Project Analysis: Automatically extracts and presents important information about your project:
Directory structure
File summaries
Dependencies
Setup instructions
Multi-LLM Support: Compatible with various language model APIs, including:
OpenAI
Ollama
Anthropic
Google Gemini
Offline Mode
Offline Mode: Generate a basic README structure without requiring an internet connection or API calls.
Markdown Expertise: Leverages best practices in Markdown formatting for clean, professional-looking documentation.
README-AI offers a wide range of configuration options to customize your README generation. This page provides a comprehensive list of all available options with detailed explanations.
"},{"location":"configuration/#cli-options","title":"CLI Options","text":"Option Description Default Impact --align Text alignment in header center Affects the visual layout of the README header --api LLM API service offline Determines which AI service is used for content generation --badge-color Badge color (name or hex) 0080ff Customizes the color of status badges in the README --badge-style Badge icon style type flat Changes the visual style of status badges --base-url Base URL for the repository v1/chat/completions Used for API requests to the chosen LLM service --context-window Max context window of LLM API 3999 Limits the amount of context provided to the LLM --emojis Add emojis to README sections False Adds visual flair to section headers --header-style Header template style classic Changes the overall look of the README header --image Project logo image blue Sets the main image displayed in the README --model Specific LLM model to use gpt-3.5-turbo Chooses the AI model for content generation --output Output filename readme-ai.md Specifies the name of the generated README file --rate-limit Max API requests per minute 5 Prevents exceeding API rate limits --repository Repository URL or local path None Specifies the project to analyze --temperature Creativity level for generation 0.9 Controls the randomness of the AI's output --toc-style Table of contents style bullet Changes the format of the table of contents --top-p Top-p sampling probability 0.9 Fine-tunes the AI's output diversity --tree-depth Max depth of directory tree 2 Controls the detail level of the project structure
Some options have a significant impact on the generated README's appearance and content. Experiment with different settings to find the best configuration for your project.
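For example, several of the options above can be combined in a single invocation (the repository URL is a placeholder; all flags and most values are taken from the table above):

```shell
readmeai --repository https://github.com/username/project \
    --badge-style flat \
    --header-style classic \
    --toc-style bullet \
    --tree-depth 2 \
    --temperature 0.7
```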
A badge is a simple embeddable icon that displays various metrics such as the number of stars or forks for a repository, languages used in the project, CI/CD build status, test coverage, the license of the project, and more. Badges are a great way to provide quick information about your project to users and visitors.
README-AI offers various badge styles to enhance your project's README. This guide explains how to use and customize these badges.
"},{"location":"configuration/badges/#how-it-works","title":"How It Works","text":"
README-AI automatically detects your project's dependencies and technologies during the repository ingestion process. It then uses these dependencies and technologies to generate a comprehensive list of relevant badges for your project.
When you provide the --badge-style option to the readmeai command, two sets of badges are generated:
Default Metadata Badges: The default set is always included in the generated README file. The default badges include the project license, last commit, top language, and total languages.
Project Dependency Badges: When the --badge-style argument is provided to the CLI, a second badge set is generated, representing the extracted dependencies and metadata from your codebase.
The badge sets are formatted in the README header and provide the reader with a quick overview of the project's key metrics and technologies.
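For example (the repository URL is a placeholder; `--badge-style` and `--badge-color` accept the values documented in the CLI options table):

```shell
readmeai --repository https://github.com/username/project --badge-style flat --badge-color teal
```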
The command above generates a README with the following badge configuration:
Example
Badge Generation
Built with the tools and technologies:
The `--badge-color` option only modifies the default badge set, while the `--badge-style` option is applied to both the default and project dependency badge sets.
"},{"location":"configuration/badges/#tips-for-using-badges","title":"Tips for Using Badges","text":"
Choose a badge style that complements your project's overall design.
Use badges to highlight relevant information about your project, such as license, build status, and test coverage.
Don't overuse badges – too many can clutter your README and make it hard to read.
Ensure that all badge links are correct and up-to-date.
Consider using custom badges for project-specific information or metrics.
Emojis are a fun way to add some personality to your README.md file. README-AI allows you to automatically add emojis to all headers in the generated README file by providing the --emojis option to the readmeai command.
"},{"location":"configuration/emojis/#how-it-works","title":"How It Works","text":"
When you provide the --emojis option to the readmeai command, README-AI automatically adds emojis to all headers in the generated README file.
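For example (the repository path is a placeholder, and this assumes `--emojis` is a simple on/off flag, consistent with its `False` default in the CLI options table):

```shell
readmeai --repository ./my_project --emojis
```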
A header template style determines how your project's header section is structured and displayed in the README file. README-AI offers several pre-designed header styles to help brand your project and create a professional appearance.
"},{"location":"configuration/header/#tips-for-using-header-styles","title":"Tips for Using Header Styles","text":"
- **Classic**: Best for traditional open-source projects that need a professional look.
- **Modern**: Great for documentation sites and projects with longer READMEs.
- **Compact**: Ideal for smaller projects or when space is at a premium.
- **SVG**: Perfect for projects that need custom branding or full-width banners.
- **ASCII**: Good for terminal applications or when you want a retro feel.
Consider these factors when choosing a header style:

- Your project's target audience
- The amount of content in your README
- Whether you have a custom logo or banner
- The overall aesthetic of your documentation
- How the style works with your chosen badge style
Some header styles may look different on different platforms or markdown renderers. It's a good idea to test how your chosen style looks on your target platform.
A project logo is a visual representation of your project that appears at the top of your README file. It helps to brand your project and make it more recognizable. README-AI offers various options for adding a logo to your project's README.
Use the --image option to select from the following logo styles:
- Blue
- Gradient
- Black
- Cloud
- Purple
- Grey

### How It Works
README-AI provides several ways to include a logo in your project's README:
Default Images: Choose from a set of pre-defined logos.
Custom Images: Use your own image by providing a URL or file path.
LLM Images: Generate a unique logo using AI (requires an LLM API).
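A sketch of generating a logo with an LLM (the repository path is a placeholder, and the `llm` value for `--image` is an assumption about the CLI's naming; confirm the exact value with `readmeai --help`):

```shell
readmeai --repository ./my_project --image llm --api openai
```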
The selected or generated logo will be placed at the top of your README file, helping to visually identify your project.
"},{"location":"configuration/project_logo/#examples","title":"Examples","text":""},{"location":"configuration/project_logo/#selecting-a-default-image","title":"Selecting a Default Image","text":"
To use one of the default images, specify the image name with the --image option:
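For example, using the Cloud logo from the default set listed above (the repository path is a placeholder, and lowercase spelling of the image name is an assumption):

```shell
readmeai --repository ./my_project --image cloud
```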
The quality and relevance of LLM-generated logos can vary. It's a good idea to review and potentially edit the generated logo to ensure it meets your project's needs.
"},{"location":"configuration/project_logo/#tips-for-using-project-logos","title":"Tips for Using Project Logos","text":"
Choose a logo that represents your project's purpose or theme.
Ensure the logo is clear and recognizable even at smaller sizes.
If using a custom image, make sure it's high quality and appropriately sized.
When using LLM-generated logos, you may want to generate several options to choose from.
Consider how the logo will look alongside your project's badges and other README content.
If your project is part of a larger organization or ecosystem, consider using a logo that aligns with that branding.
"},{"location":"configuration/table_of_contents/","title":"Table of Contents (ToC) Templates","text":"
README-AI offers flexible options for generating a Table of Contents (ToC) in your README file, directly from the command line. You can specify different styles of ToC generation using the --toc-style option when running the CLI.
"},{"location":"configuration/table_of_contents/#cli-usage-for-toc-styles","title":"CLI Usage for ToC Styles","text":"
When using the readmeai CLI, you can customize the Table of Contents by specifying one of the supported styles with the --toc-style flag.
Here's how to generate a README file with a Numbered Table of Contents:
```shell
readmeai --repository ./my_project --toc-style number --output README.md
```
In this example:

- The `--repository` flag specifies the local project directory.
- The `--toc-style number` flag sets the Table of Contents to use a numbered format.
- The `--output README.md` flag specifies the output file name.
"},{"location":"configuration/table_of_contents/#another-example-with-foldable-toc","title":"Another Example with Foldable ToC","text":"
To generate a README with a Foldable Table of Contents, run:
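A sketch of the command (the repository path is a placeholder, and `fold` as the style name is an assumption; confirm the exact value with `readmeai --help`):

```shell
readmeai --repository ./my_project --toc-style fold --output README.md
```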
Here are some examples of README files generated by readme-ai for various projects using different languages and frameworks.
| Language/Framework | Output File | Input Repository | Description |
| --- | --- | --- | --- |
| Python | readme-python.md | readme-ai | Core readme-ai project |
| TypeScript & React | readme-typescript.md | ChatGPT App | React Native ChatGPT app |
| PostgreSQL & DuckDB | readme-postgres.md | Buenavista | Postgres proxy server |
| Kotlin & Android | readme-kotlin.md | file.io Client | Android file sharing app |
| Streamlit | readme-streamlit.md | readme-ai-streamlit | Streamlit UI for readme-ai app |
| Rust & C | readme-rust-c.md | CallMon | System call monitoring tool |
| Docker & Go | readme-go.md | docker-gs-ping | Dockerized Go app |
| Java | readme-java.md | Minimal-Todo | Minimalist todo Java app |
| FastAPI & Redis | readme-fastapi-redis.md | async-ml-inference | Async ML inference service |
| Jupyter Notebook | readme-mlops.md | mlops-course | MLOps course repository |
| Apache Flink | readme-local.md | Local Directory | Example using a local directory |
See additional README files generated by readme-ai here
"},{"location":"guides/markdown_best_practices/","title":"Markdown Best Practices","text":"
This document provides a comprehensive guide to writing technical documentation using the GitHub flavored markdown spec. This guide includes examples of how to use various markdown elements to create visually appealing and informative documentation.
"},{"location":"guides/markdown_best_practices/#table-of-contents","title":"Table of Contents","text":"
Things I need to do today:

1. Fix usability problem
2. Clean up the page
    * Make the headings bigger
3. Push my changes
4. Create code review
    * Describe my changes
    * Assign reviewers
    * Ask for feedback
"},{"location":"guides/markdown_best_practices/#tables","title":"Tables","text":""},{"location":"guides/markdown_best_practices/#table-with-alignment","title":"Table with Alignment","text":"
| Left Aligned | Centered | Right Aligned |
| :--- | :---: | ---: |
| Cell 1 | Cell 2 | Cell 3 |
| Cell 4 | Cell 5 | Cell 6 |

#### Multi-Line Table Cells
```markdown
| Name | Details |
| --- | --- |
| Item1 | This text is on one line |
| Item2 | This item has:<br>- Multiple items<br>- That we want listed separately |
```
This will render as:
| Name | Details |
| --- | --- |
| Item1 | This text is on one line |
| Item2 | This item has:<br>- Multiple items<br>- That we want listed separately |

### Task Lists
```html
<table>
  <tr>
    <td colspan="2">I take up two columns!</td>
  </tr>
  <tr>
    <td>First column</td>
    <td>Second column</td>
  </tr>
</table>
```
This will render as:
I take up two columns! First column Second column

### Text Styling & Formatting
"},{"location":"guides/markdown_best_practices/#buttons-keyboard-shortcuts","title":"Buttons & Keyboard Shortcuts","text":"Click here Click here Or here Or here
You can navigate through your items or search results using the keyboard. You can use Tab to cycle through results, and Shift + Tab to go backwards. Or use the arrow keys ↑, →, ↓ and ←.
Press Ctrl + S to save your changes. Select text and press Ctrl + B to make it bold.
"},{"location":"guides/markdown_best_practices/#math-equations","title":"Math Equations","text":"\\[ \\begin{aligned} \\dot{x} & = \\sigma(y-x) \\\\ \\dot{y} & = \\rho x - y - xz \\\\ \\dot{z} & = -\\beta z + xy \\end{aligned} \\] \\[ L = \\frac{1}{2} \\rho v^2 S C_L \\]"},{"location":"guides/markdown_best_practices/#images","title":"Images","text":""},{"location":"guides/markdown_best_practices/#simple-icons","title":"Simple Icons","text":""},{"location":"guides/markdown_best_practices/#docker","title":"Docker","text":""},{"location":"guides/markdown_best_practices/#docker_1","title":"Docker","text":""},{"location":"guides/markdown_best_practices/#centered-images","title":"Centered Images","text":""},{"location":"guides/markdown_best_practices/#horizontally-aligned-images","title":"Horizontally Aligned Images","text":""},{"location":"guides/markdown_best_practices/#small-images","title":"Small Images","text":"
Code documentation - Generated directory tree structure and summaries of the key files in your codebase.
Spike documentation - Generated directory tree structure and summaries of the key files in your codebase.
Chunking documentation - Generated directory tree structure and summaries of the key files in your codebase.
"},{"location":"guides/markdown_best_practices/#text-boxes","title":"Text Boxes","text":"This is text in the box. Much wow"},{"location":"guides/markdown_best_practices/#text-wrapping","title":"Text Wrapping","text":"
At the 2019 rendition of E3, an eccentric gamer in attendance interrupted Keanu Reeves' presentation of the role-playing game (RPG) Cyberpunk 2077, loudly claiming, "You're breathtaking," which was directed at the actor-cum-presenter. The image macro used to build the "You're Breathtaking" meme generally features a still of Keanu Reeves pointing at someone in the audience in front of him - that someone is Peter Sark, though there are no images from Keanu's point of view that have since been used as part of the "You're Breathtaking" meme.
```mermaid
journey
    title My working day
    section Go to work
      Make tea: 5: Me
      Go upstairs: 3: Me
      Do work: 1: Me, Cat
    section Go home
      Go downstairs: 5: Me
      Sit down: 5: Me
```
"},{"location":"guides/markdown_best_practices/#return-to-top","title":"Return To Top","text":"
Return
Return
Return
"},{"location":"guides/markdown_best_practices/#html-spacing-entities","title":"HTML Spacing Entities","text":"Name HTML Entity Description En space   Half the width of an em space Em space   Width of an em space (equal to the font size) Three-per-em space   One-third of an em space Figure space   Width of a numeral (digit) Punctuation space   Width of a period or comma Thin space   Thinner than a regular space Hair space   Thinner than a thin space Narrow no-break space   Non-breaking thin space
Note: some of these named entities may not be supported in all browsers. For the narrow no-break space, there isn't a named HTML entity, so the numeric character reference `&#8239;` is used.
"},{"location":"llms/","title":"Large Language Model (LLM) Integrationss","text":"
Readme-ai integrates seamlessly with various Large Language Model (LLM) services to generate high-quality README content. This page provides an overview of the supported LLM services and links to detailed information about each.
"},{"location":"llms/#comparing-llm-services","title":"Comparing LLM Services","text":"Service Pros Cons OpenAI High-quality output, Versatile Requires API key, Costs associated Ollama Free, Privacy-focused, Offline May be slower, Requires local setup Anthropic Privacy-focused, Offline May be slower, Requires local setup Gemini Strong performance, Google integration Requires API key Offline No internet required, Fast Basic output, Limited customization"},{"location":"llms/#tips-for-optimal-results","title":"Tips for Optimal Results","text":"
Experiment with different models: Try various LLM services and models to find the best fit for your project.
Provide clear context: Ensure your repository has well-organized code and documentation to help the LLM generate more accurate content.
Fine-tune with CLI options: Use readme-ai's CLI options to customize the output further after choosing your LLM service.
Review and edit: Always review the generated README and make necessary edits to ensure accuracy and relevance to your project.
By leveraging these LLM integrations effectively, you can generate comprehensive and accurate README files for your projects with minimal effort.
"},{"location":"llms/#api-integrations","title":"\ud83d\ude80 API Integrations","text":"
README-AI supports multiple large language model (LLM) APIs for generating README files. The following tabs explain how to configure and use each supported API.
| Argument | Description |
| --- | --- |
| `--api` | Specifies the LLM API service to use (in this case, Ollama). |
| `--model` | Specifies the model to use with Ollama (e.g., `llama3`). |
| `--repository` | Specifies the GitHub repository or local directory path to analyze. |

### Optional Dependencies
To use additional LLM providers like Anthropic or Google Gemini in addition to Ollama, install the optional dependencies:
Offline mode allows you to generate a README without an internet connection. This is useful when you want to quickly generate boilerplate README files.
Ollama is a privacy-focused, open-source tool for running LLMs such as Llama 3, Mistral, and Gemma 2 locally. Ollama can be used with readme-ai to generate README files with a variety of models and configurations from their model library.
Alternatively, use pipx to install readmeai in an isolated environment:
```shell
pipx install readmeai
```
Why use pipx?
Using pipx allows you to install and run Python command-line applications in isolated environments, which helps prevent dependency conflicts with other Python projects.
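A minimal invocation sketch, using the documented `--repository` and `--api` options with the placeholders left as-is:

```shell
readmeai --repository <REPO_URL_OR_PATH> --api <LLM_SERVICE>
```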
Replace <REPO_URL_OR_PATH> with your repository URL or local path, and <LLM_SERVICE> with your chosen LLM service (openai, ollama, gemini, or offline).
Running readme-ai in a containerized environment using Docker offers isolation of the application and its dependencies from the host system. This section details how to pull the Docker image from Docker Hub, build the Docker image from the source code, and run the Docker container.
Docker Installation
Before proceeding, ensure that Docker is installed and running on your system. If you haven't installed Docker yet, please visit the official Docker documentation for installation instructions.
"},{"location":"usage/docker/#pull-the-docker-image","title":"Pull the Docker Image","text":"
Pull the latest readme-ai image from Docker Hub:
```shell
docker pull zeroxeli/readme-ai:latest
```
"},{"location":"usage/docker/#build-the-docker-image","title":"Build the Docker Image","text":"
Alternatively, you can build the Docker image from the source code. This assumes you have cloned the readme-ai repository.
Using docker buildx allows you to build multi-platform images, which means you can create Docker images that work on different architectures (e.g., amd64 and arm64). This is particularly useful if you want your Docker image to be compatible with a wider range of devices and environments, such as both standard servers and ARM-based devices like the Raspberry Pi.
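A build sketch, assuming you are in the root of the cloned repository and have a buildx builder configured (the image tag mirrors the Docker Hub image above; adjust it for your own registry):

```shell
# Build a multi-platform image for amd64 and arm64
docker buildx build --platform linux/amd64,linux/arm64 -t zeroxeli/readme-ai:latest .
```

Note that multi-platform builds typically need `--push` (to a registry) or a single-platform `--load` to make the result usable locally.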
"},{"location":"usage/docker/#run-the-docker-container","title":"Run the Docker Container","text":"
Run the readme-ai Docker container with the following command:
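A run sketch combining the arguments described below (the repository URL is a placeholder, and this assumes `OPENAI_API_KEY` is already set in your shell):

```shell
docker run -it --rm \
    -e OPENAI_API_KEY=$OPENAI_API_KEY \
    -v "$(pwd)":/app \
    zeroxeli/readme-ai:latest \
    -r https://github.com/username/project
```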
| Argument | Function |
| --- | --- |
| `-it` | Creates an interactive terminal. |
| `--rm` | Automatically removes the container when it exits. |
| `-e` | Passes your OpenAI API key as an environment variable. |
| `-v "$(pwd)":/app` | Mounts the current directory to the `/app` directory in the container, allowing access to the generated README file on your host system. |
| `-r` | Specifies the GitHub repository to analyze. |
For Windows users, replace $(pwd) with %cd% in the command. For PowerShell, use ${PWD} instead.
If you want to remove the Docker image and container from your system, follow these steps.
1. Identify the Container
First, list all containers on your system.
```shell
docker ps -a
```
You should see output similar to the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nabcdef123456 zeroxeli/readme-ai:latest \"python main.py -r h\u2026\" 2 minutes ago Up 2 minutes\n
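2. Stop and Remove the Container and Image

Using the container ID from the listing above (abcdef123456 in this example), stop and remove the container, then remove the image:

```shell
docker stop abcdef123456                 # stop the running container
docker rm abcdef123456                   # remove the stopped container
docker rmi zeroxeli/readme-ai:latest     # remove the image
```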
Run readme-ai directly in your browser on Streamlit Cloud. No installation required!
Source Code
Find the source code for the Streamlit app in the readme-ai repository.
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"README-AI","text":"README-AI Your AI-Powered README Generator Designed for simplicity, customization, and developer productivity.
These are just a few examples, and we welcome any other ideas you may have!
By contributing to our project, you agree to license your contributions under the project's open source license. The project's license can be found in the LICENSE file.
Thank you for your interest in contributing to readme-ai! We appreciate your help and look forward to working with you.
"},{"location":"faq/","title":"README-AI Frequently Asked Questions","text":""},{"location":"faq/#general-questions","title":"General Questions","text":""},{"location":"faq/#q-what-is-readme-ai","title":"Q: What is README-AI?","text":"
A: README-AI is a tool that automatically generates comprehensive README files for your projects using artificial intelligence.
"},{"location":"faq/#q-which-ai-models-does-readme-ai-support","title":"Q: Which AI models does README-AI support?","text":"
A: README-AI primarily uses OpenAI's GPT models, but there are ongoing efforts to add support for other models like Claude, Azure OpenAI, Cohere, and Llama2 (via Replicate).
"},{"location":"faq/#installation-and-setup","title":"Installation and Setup","text":""},{"location":"faq/#q-how-do-i-install-readme-ai","title":"Q: How do I install README-AI?","text":"
A: You can install README-AI using pip:
pip install readmeai\n
Alternatively, you can use Docker:
docker run -it -e OPENAI_API_KEY=your_key_here -v \"$(pwd)\":/app zeroxeli/readme-ai:latest\n
"},{"location":"faq/#q-im-getting-an-error-when-trying-to-install-on-ubuntu-how-can-i-fix-it","title":"Q: I'm getting an error when trying to install on Ubuntu. How can I fix it?","text":"
A: If you're encountering issues with conda environment creation, try using a virtual environment with pip instead. Ensure you have Python 3.8 or higher installed.
"},{"location":"faq/#usage","title":"Usage","text":""},{"location":"faq/#q-how-do-i-generate-a-readme-for-my-project","title":"Q: How do I generate a README for my project?","text":"
Replace the URL with your repository link."},{"location":"faq/#q-can-i-use-readme-ai-with-private-repositories","title":"Q: Can I use README-AI with private repositories?","text":"
A: Yes, but you may need to provide authentication. For Bitbucket, use the format:
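A sketch of the authenticated URL format — the exact scheme is an assumption, and Bitbucket app passwords are recommended over account passwords:

```shell
# <username>, <app_password>, <workspace>, and <repo> are placeholders.
readmeai --repository https://<username>:<app_password>@bitbucket.org/<workspace>/<repo>
```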
"},{"location":"faq/#q-does-readme-ai-work-with-gitlab-repositories","title":"Q: Does README-AI work with GitLab repositories?","text":"
A: Yes, README-AI supports GitLab repositories. Use the same command format as with GitHub repos.
"},{"location":"faq/#troubleshooting","title":"Troubleshooting","text":""},{"location":"faq/#q-im-getting-a-404-not-found-error-what-should-i-do","title":"Q: I'm getting a \"404 Not Found\" error. What should I do?","text":"
A: Ensure your OpenAI API key is correct and has sufficient permissions. Also, check if you're using the correct API endpoint.
"},{"location":"faq/#q-the-script-runs-but-doesnt-generate-a-file-why","title":"Q: The script runs but doesn't generate a file. Why?","text":"
A: Check the permissions in your current directory. Ensure README-AI has write access to create the output file.
"},{"location":"faq/#q-im-seeing-a-429-too-many-requests-error-how-can-i-resolve-this","title":"Q: I'm seeing a \"429 Too Many Requests\" error. How can I resolve this?","text":"
A: This error occurs when you've exceeded the rate limit for the OpenAI API. Wait a while before trying again, or consider upgrading your API plan.
"},{"location":"faq/#q-why-am-i-getting-a-notfound-object-is-not-iterable-error","title":"Q: Why am I getting a \"NotFound object is not iterable\" error?","text":"
A: This error may occur if you're using an incompatible model. Ensure you're using a supported model like \"gpt-3.5-turbo\" or \"gpt-4\".
"},{"location":"faq/#features-and-customization","title":"Features and Customization","text":""},{"location":"faq/#q-can-i-use-readme-ai-with-languages-other-than-english","title":"Q: Can I use README-AI with languages other than English?","text":"
A: While README-AI primarily generates content in English, there are ongoing efforts to add internationalization (i18n) support for languages like Spanish and Italian.
"},{"location":"faq/#q-is-it-possible-to-use-readme-ai-in-azure-devops","title":"Q: Is it possible to use README-AI in Azure DevOps?","text":"
A: While there isn't native integration, you could potentially use README-AI as part of your Azure DevOps pipeline by incorporating it into your build or release process.
"},{"location":"faq/#q-can-i-customize-the-openai-endpoint-or-model-used","title":"Q: Can I customize the OpenAI endpoint or model used?","text":"
A: There are ongoing efforts to make the configuration more extensible, including options to specify different endpoints (like Azure OpenAI) and models.
"},{"location":"faq/#contributing","title":"Contributing","text":""},{"location":"faq/#q-how-can-i-contribute-to-readme-ai","title":"Q: How can I contribute to README-AI?","text":"
A: You can contribute by submitting pull requests on GitHub. Areas of contribution include adding support for new AI models, improving documentation, adding tests, and fixing bugs.
If you have any other questions or issues, please check the GitHub repository or open a new issue for support.
"},{"location":"philosophy/","title":"Philosophy and Vision","text":""},{"location":"philosophy/#empowering-developers-enlightening-projects","title":"Empowering Developers, Enlightening Projects","text":"
Readme-ai envisions a future where every software project, regardless of size or complexity, is accompanied by clear, comprehensive, and up-to-date documentation. We strive to create an ecosystem where documentation is no longer an afterthought but an integral, effortless part of the development process.
We see Readme-ai as a catalyst for a paradigm shift in software development practices. By making high-quality documentation effortless and ubiquitous, we aim to:
Accelerate innovation by making it easier for developers to build upon each other's work.
Improve software quality by encouraging better-documented and more maintainable codebases.
Enhance collaboration within and between development teams through clearer project communication.
Increase the overall efficiency of the software development lifecycle.
Through Readme-ai, we aspire to create a world where every line of code is matched by a line of clear, concise, and helpful documentation, empowering developers and enlightening projects for the benefit of all.
The --help flag can be used to view the help menu for a command, e.g., for readmeai:
\u276f readmeai --help\n
"},{"location":"troubleshooting/#viewing-the-version","title":"Viewing the Version","text":"
When seeking help, it's important to determine the version of readmeai that you're using \u2014 sometimes the problem is already solved in a newer version.
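Assuming the CLI follows the common convention (not confirmed in this document), you can print the installed version with:

```shell
readmeai --version
```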
"},{"location":"troubleshooting/#open-an-issue-on-github","title":"Open an issue on GitHub","text":"
The issue tracker on GitHub is a good place to report bugs and request features. Make sure to search for similar issues first, as it is common for someone else to encounter the same problem.
See the FAQ for answers to common questions and troubleshooting tips.
"},{"location":"why/","title":"Why use README-AI?","text":"
In the fast-paced world of software development, clear and comprehensive documentation is crucial. Yet, creating and maintaining high-quality README files can be time-consuming and often overlooked. This is where Readme-ai comes in, revolutionizing the way developers approach project documentation.
Readme-ai harnesses the power of artificial intelligence to automatically generate detailed, structured README files for your projects. By analyzing your codebase, Readme-ai creates documentation that is:
Comprehensive: Covers all essential aspects of your project, from installation instructions to usage examples.
Consistent: Maintains a uniform structure across all your projects, enhancing readability and professionalism.
"},{"location":"why/#time-saving-and-efficient","title":"Time-Saving and Efficient","text":"
Focus on Coding: Spend more time writing code and less time worrying about documentation.
Quick Setup: Get started with minimal configuration, allowing you to generate a README in minutes.
Customizable Templates: Fine-tune the output to match your project's specific needs and your personal style.
Flexible README Generation: Combines robust repository context extraction with generative AI to create detailed and accurate README files.
Customizable Output: Offers numerous CLI options for tailoring the README to your project's needs:
Badge styles and colors
Header designs
Table of contents styles
Project logos
Language Agnostic: Works with a wide range of programming languages and project types, automatically detecting and summarizing key aspects of your codebase.
Project Analysis: Automatically extracts and presents important information about your project:
Directory structure
File summaries
Dependencies
Setup instructions
Multi-LLM Support: Compatible with various language model APIs, including:
OpenAI
Ollama
Anthropic
Google Gemini
Offline Mode
Offline Mode: Generate a basic README structure without requiring an internet connection or API calls.
Markdown Expertise: Leverages best practices in Markdown formatting for clean, professional-looking documentation.
README-AI offers a wide range of configuration options to customize your README generation. This page provides a comprehensive list of all available options with detailed explanations.
"},{"location":"configuration/#cli-options","title":"CLI Options","text":"Option Description Default Impact --align Text alignment in header center Affects the visual layout of the README header --api LLM API service offline Determines which AI service is used for content generation --badge-color Badge color (name or hex) 0080ff Customizes the color of status badges in the README --badge-style Badge icon style type flat Changes the visual style of status badges --base-url Base URL for the repository v1/chat/completions Used for API requests to the chosen LLM service --context-window Max context window of LLM API 3999 Limits the amount of context provided to the LLM --emojis Add emojis to README sections False Adds visual flair to section headers --header-style Header template style classic Changes the overall look of the README header --image Project logo image blue Sets the main image displayed in the README --model Specific LLM model to use gpt-3.5-turbo Chooses the AI model for content generation --output Output filename readme-ai.md Specifies the name of the generated README file --rate-limit Max API requests per minute 5 Prevents exceeding API rate limits --repository Repository URL or local path None Specifies the project to analyze --temperature Creativity level for generation 0.9 Controls the randomness of the AI's output --toc-style Table of contents style bullet Changes the format of the table of contents --top-p Top-p sampling probability 0.9 Fine-tunes the AI's output diversity --tree-depth Max depth of directory tree 2 Controls the detail level of the project structure
Some options have a significant impact on the generated README's appearance and content. Experiment with different settings to find the best configuration for your project.
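Several options can be combined in a single run. The values below are illustrative, drawn from the defaults listed above:

```shell
readmeai --repository https://github.com/user/project \
  --badge-style flat \
  --header-style classic \
  --toc-style bullet \
  --emojis \
  --output readme-ai.md
```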
A badge is a simple embeddable icon that displays various metrics such as the number of stars or forks for a repository, languages used in the project, CI/CD build status, test coverage, the license of the project, and more. Badges are a great way to provide quick information about your project to users and visitors.
README-AI offers various badge styles to enhance your project's README. This guide explains how to use and customize these badges.
"},{"location":"configuration/badges/#how-it-works","title":"How It Works","text":"
README-AI automatically detects your project's dependencies and technologies during the repository ingestion process. It then uses these dependencies and technologies to generate a comprehensive list of relevant badges for your project.
When you provide the --badge-style option to the readmeai command, two sets of badges are generated:
Default Metadata Badges: The default set is always included in the generated README file. The default badges include the project license, last commit, top language, and total languages.
Project Dependency Badges: When the --badge-style argument is provided to the CLI, a second badge set is generated, representing the extracted dependencies and metadata from your codebase.
The badge sets are formatted in the README header and provide the reader with a quick overview of the project's key metrics and technologies.
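For instance, a run such as the following (the repository URL is a placeholder) produces both badge sets:

```shell
readmeai --repository https://github.com/user/project \
  --badge-style flat \
  --badge-color 0080ff
```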
The command above generates a README with the following badge configuration:
Example
Badge Generation
Built with the tools and technologies:
The --badge-color option only modifies the default badge set, while the --badge-style option is applied to both the default and project dependency badges.
"},{"location":"configuration/badges/#tips-for-using-badges","title":"Tips for Using Badges","text":"
Choose a badge style that complements your project's overall design.
Use badges to highlight relevant information about your project, such as license, build status, and test coverage.
Don't overuse badges \u2013 too many can clutter your README and make it hard to read.
Ensure that all badge links are correct and up-to-date.
Consider using custom badges for project-specific information or metrics.
Emojis are a fun way to add some personality to your README.md file. README-AI allows you to automatically add emojis to all headers in the generated README file by providing the --emojis option to the readmeai command.
"},{"location":"configuration/emojis/#how-it-works","title":"How It Works","text":"
When you provide the --emojis option to the readmeai command, README-AI automatically adds emojis to all headers in the generated README file.
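For example (the repository path is illustrative):

```shell
readmeai --repository ./my_project --emojis
```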
A header template style determines how your project's header section is structured and displayed in the README file. README-AI offers several pre-designed header styles to help brand your project and create a professional appearance.
"},{"location":"configuration/header/#tips-for-using-header-styles","title":"Tips for Using Header Styles","text":"
Classic: Best for traditional open-source projects that need a professional look
Modern: Great for documentation sites and projects with longer READMEs
Compact: Ideal for smaller projects or when space is at a premium
SVG: Perfect for projects that need custom branding or full-width banners
ASCII: Good for terminal applications or when you want a retro feel
Consider these factors when choosing a header style:
- Your project's target audience
- The amount of content in your README
- Whether you have a custom logo or banner
- The overall aesthetic of your documentation
- How the style works with your chosen badge style
Some header styles may look different on different platforms or markdown renderers. It's a good idea to test how your chosen style looks on your target platform.
A project logo is a visual representation of your project that appears at the top of your README file. It helps to brand your project and make it more recognizable. README-AI offers various options for adding a logo to your project's README.
Use the --image option to select from the following logo styles:
Blue, Gradient, Black, Cloud, Purple, Grey "},{"location":"configuration/project_logo/#how-it-works","title":"How It Works","text":"
README-AI provides several ways to include a logo in your project's README:
Default Images: Choose from a set of pre-defined logos.
Custom Images: Use your own image by providing a URL or file path.
LLM Images: Generate a unique logo using AI (requires an LLM API).
The selected or generated logo will be placed at the top of your README file, helping to visually identify your project.
"},{"location":"configuration/project_logo/#examples","title":"Examples","text":""},{"location":"configuration/project_logo/#selecting-a-default-image","title":"Selecting a Default Image","text":"
To use one of the default images, specify the image name with the --image option:
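For example, to use the Cloud logo — the lowercase value is an assumption based on the CLI's other option values:

```shell
readmeai --repository https://github.com/user/project --image cloud
```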
The quality and relevance of LLM-generated logos can vary. It's a good idea to review and potentially edit the generated logo to ensure it meets your project's needs.
"},{"location":"configuration/project_logo/#tips-for-using-project-logos","title":"Tips for Using Project Logos","text":"
Choose a logo that represents your project's purpose or theme.
Ensure the logo is clear and recognizable even at smaller sizes.
If using a custom image, make sure it's high quality and appropriately sized.
When using LLM-generated logos, you may want to generate several options to choose from.
Consider how the logo will look alongside your project's badges and other README content.
If your project is part of a larger organization or ecosystem, consider using a logo that aligns with that branding.
"},{"location":"configuration/table_of_contents/","title":"Table of Contents (ToC) Templates","text":"
README-AI offers flexible options for generating a Table of Contents (ToC) in your README file, directly from the command line. You can specify different styles of ToC generation using the --toc-style option when running the CLI.
"},{"location":"configuration/table_of_contents/#cli-usage-for-toc-styles","title":"CLI Usage for ToC Styles","text":"
When using the readmeai CLI, you can customize the Table of Contents by specifying one of the supported styles with the --toc-style flag.
Here\u2019s how to generate a README file with a Numbered Table of Contents:
readmeai --repository ./my_project --toc-style number --output README.md\n
In this example:
- The --repository flag specifies the local project directory.
- The --toc-style number flag sets the Table of Contents to use a numbered format.
- The --output README.md flag specifies the output file name.
"},{"location":"configuration/table_of_contents/#another-example-with-foldable-toc","title":"Another Example with Foldable ToC","text":"
To generate a README with a Foldable Table of Contents, run:
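A sketch of the command, assuming the foldable style is named fold (the exact style name is an assumption):

```shell
readmeai --repository ./my_project --toc-style fold --output README.md
```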
Here are some examples of README files generated by readme-ai for various projects using different languages and frameworks.
| Language/Framework | Output File | Input Repository | Description |
| --- | --- | --- | --- |
| Python | readme-python.md | readme-ai | Core readme-ai project |
| TypeScript & React | readme-typescript.md | ChatGPT App | React Native ChatGPT app |
| PostgreSQL & DuckDB | readme-postgres.md | Buenavista | Postgres proxy server |
| Kotlin & Android | readme-kotlin.md | file.io Client | Android file sharing app |
| Streamlit | readme-streamlit.md | readme-ai-streamlit | Streamlit UI for readme-ai app |
| Rust & C | readme-rust-c.md | CallMon | System call monitoring tool |
| Docker & Go | readme-go.md | docker-gs-ping | Dockerized Go app |
| Java | readme-java.md | Minimal-Todo | Minimalist todo Java app |
| FastAPI & Redis | readme-fastapi-redis.md | async-ml-inference | Async ML inference service |
| Jupyter Notebook | readme-mlops.md | mlops-course | MLOps course repository |
| Apache Flink | readme-local.md | Local Directory | Example using a local directory |
See additional README files generated by readme-ai here
"},{"location":"guides/markdown_best_practices/","title":"Markdown Best Practices","text":"
This document provides a comprehensive guide to writing technical documentation using the GitHub flavored markdown spec. This guide includes examples of how to use various markdown elements to create visually appealing and informative documentation.
"},{"location":"guides/markdown_best_practices/#table-of-contents","title":"Table of Contents","text":"
Things I need to do today:
1. Fix usability problem
2. Clean up the page
    * Make the headings bigger
3. Push my changes
4. Create code review
    * Describe my changes
    * Assign reviewers
    * Ask for feedback
"},{"location":"guides/markdown_best_practices/#tables","title":"Tables","text":""},{"location":"guides/markdown_best_practices/#table-with-alignment","title":"Table with Alignment","text":"
| Left Aligned | Centered | Right Aligned |
| :--- | :---: | ---: |
| Cell 1 | Cell 2 | Cell 3 |
| Cell 4 | Cell 5 | Cell 6 |
"},{"location":"guides/markdown_best_practices/#multi-line-table-cells","title":"Multi-Line Table Cells","text":"
| Name | Details |\n| --- | --- |\n| Item1 | This text is on one line |\n| Item2 | This item has:<br>- Multiple items<br>- That we want listed separately |\n
This will render as:
| Name | Details |
| --- | --- |
| Item1 | This text is on one line |
| Item2 | This item has:<br>- Multiple items<br>- That we want listed separately |
"},{"location":"guides/markdown_best_practices/#task-lists","title":"Task Lists","text":"
<table>\n <tr>\n <td colspan=\"2\">I take up two columns!</td>\n </tr>\n <tr>\n <td>First column</td>\n <td>Second column</td>\n </tr>\n</table>\n
This will render as:
I take up two columns! First column Second column"},{"location":"guides/markdown_best_practices/#text-styling-formatting","title":"Text Styling & Formatting","text":"
"},{"location":"guides/markdown_best_practices/#buttons-keyboard-shortcuts","title":"Buttons & Keyboard Shortcuts","text":"Click here Click here Or here Or here
You can navigate through your items or search results using the keyboard. You can use Tab to cycle through results, and Shift + Tab to go backwards. Or use the arrow keys, \u2191, \u2192, \u2193 and \u2190.
Press Ctrl + S to save your changes. Select text and press Ctrl + B to make it bold.
"},{"location":"guides/markdown_best_practices/#math-equations","title":"Math Equations","text":"\\[ \\begin{aligned} \\dot{x} & = \\sigma(y-x) \\\\ \\dot{y} & = \\rho x - y - xz \\\\ \\dot{z} & = -\\beta z + xy \\end{aligned} \\] \\[ L = \\frac{1}{2} \\rho v^2 S C_L \\]"},{"location":"guides/markdown_best_practices/#images","title":"Images","text":""},{"location":"guides/markdown_best_practices/#simple-icons","title":"Simple Icons","text":""},{"location":"guides/markdown_best_practices/#docker","title":"Docker","text":""},{"location":"guides/markdown_best_practices/#docker_1","title":"Docker","text":""},{"location":"guides/markdown_best_practices/#centered-images","title":"Centered Images","text":""},{"location":"guides/markdown_best_practices/#horizontally-aligned-images","title":"Horizontally Aligned Images","text":""},{"location":"guides/markdown_best_practices/#small-images","title":"Small Images","text":"
Code documentation - Generated directory tree structure and summaries of the key files in your codebase.
Spike documentation - Generated directory tree structure and summaries of the key files in your codebase.
Chunking documentation - Generated directory tree structure and summaries of the key files in your codebase.
"},{"location":"guides/markdown_best_practices/#text-boxes","title":"Text Boxes","text":"This is text in the box. Much wow"},{"location":"guides/markdown_best_practices/#text-wrapping","title":"Text Wrapping","text":"
At the 2019 rendition of E3, an eccentric gamer in attendance interrupted Keanu Reeves' presentation of the role-playing game (RPG) Cyberpunk 2077, loudly exclaiming "You're breathtaking!" at the actor-cum-presenter. The image macro used to build the "You're Breathtaking" meme generally features a still of Keanu Reeves pointing at someone in the audience in front of him; that someone is Peter Sark, though no images from Keanu's point of view have since been used as part of the meme.
journey\n title My working day\n section Go to work\n Make tea: 5: Me\n Go upstairs: 3: Me\n Do work: 1: Me, Cat\n section Go home\n Go downstairs: 5: Me\n Sit down: 5: Me
"},{"location":"guides/markdown_best_practices/#return-to-top","title":"Return To Top","text":"
Return
Return
Return
"},{"location":"guides/markdown_best_practices/#html-spacing-entities","title":"HTML Spacing Entities","text":"Name HTML Entity Description En space   Half the width of an em space Em space   Width of an em space (equal to the font size) Three-per-em space   One-third of an em space Figure space   Width of a numeral (digit) Punctuation space   Width of a period or comma Thin space   Thinner than a regular space Hair space   Thinner than a thin space Narrow no-break space   Non-breaking thin space
Note: Some of these space entities may not be supported in all browsers. For the narrow no-break space, there isn't a named HTML entity, so the numeric character reference &#8239; is used.
"},{"location":"llms/","title":"Large Language Model (LLM) Integrationss","text":"
Readme-ai integrates seamlessly with various Large Language Model (LLM) services to generate high-quality README content. This page provides an overview of the supported LLM services and links to detailed information about each.
"},{"location":"llms/#comparing-llm-services","title":"Comparing LLM Services","text":"Service Pros Cons OpenAI High-quality output, Versatile Requires API key, Costs associated Ollama Free, Privacy-focused, Offline May be slower, Requires local setup Anthropic Privacy-focused, Offline May be slower, Requires local setup Gemini Strong performance, Google integration Requires API key Offline No internet required, Fast Basic output, Limited customization"},{"location":"llms/#tips-for-optimal-results","title":"Tips for Optimal Results","text":"
Experiment with different models: Try various LLM services and models to find the best fit for your project.
Provide clear context: Ensure your repository has well-organized code and documentation to help the LLM generate more accurate content.
Fine-tune with CLI options: Use readme-ai's CLI options to customize the output further after choosing your LLM service.
Review and edit: Always review the generated README and make necessary edits to ensure accuracy and relevance to your project.
By leveraging these LLM integrations effectively, you can generate comprehensive and accurate README files for your projects with minimal effort.
"},{"location":"llms/#api-integrations","title":"\ud83d\ude80 API Integrations","text":"
README-AI supports multiple large language model (LLM) APIs for generating README files. The following tabs explain how to configure and use each supported API.
| Argument | Description |
| --- | --- |
| --api | Specifies the LLM API service to use (in this case, Ollama). |
| --model | Specifies the model to use with Ollama (e.g., llama3). |
| --repository | Specifies the GitHub repository or local directory path to analyze. |
"},{"location":"llms/anthropic/#optional-dependencies","title":"Optional Dependencies","text":"
To use additional LLM providers like Anthropic or Google Gemini in addition to Ollama, install the optional dependencies:
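A sketch of the install command — the exact extras names are assumptions and may differ in your version of readmeai:

```shell
# Extras names "anthropic" and "gemini" are assumed for illustration.
pip install "readmeai[anthropic,gemini]"
```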
Offline mode allows you to generate a README without an internet connection. This is useful when you want to quickly generate boilerplate README files.
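For example, to run entirely offline (the repository path is illustrative):

```shell
readmeai --api offline --repository ./my_project
```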
Ollama is a privacy-focused, open-source tool for running open-source LLMs locally such as Llama 3, Mistral, and Gemma 2. Ollama can be used with readme-ai to generate README files with a variety of models and configurations from their model library.
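Combining the arguments described above, a typical Ollama run looks like this (the model name and repository URL are illustrative):

```shell
readmeai --api ollama --model llama3 --repository https://github.com/user/project
```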
Alternatively, use pipx to install readmeai in an isolated environment:
pipx install readmeai\n
Why use pipx?
Using pipx allows you to install and run Python command-line applications in isolated environments, which helps prevent dependency conflicts with other Python projects.
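Once installed, generate a README with:

```shell
readmeai --repository <REPO_URL_OR_PATH> --api <LLM_SERVICE>
```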
Replace <REPO_URL_OR_PATH> with your repository URL or local path, and <LLM_SERVICE> with your chosen LLM service (openai, ollama, gemini, or offline).