
Commit

contributing guidelines
mcharytoniuk committed Jul 6, 2024
1 parent 1b1bbc4 commit 780e22f
Showing 6 changed files with 43 additions and 2 deletions.
34 changes: 34 additions & 0 deletions CONTRIBUTING
@@ -0,0 +1,34 @@
# Contributing

First of all, every contribution is welcome.

You do not have to add or improve articles to contribute; suggestions and general ideas are just as valuable if you want to join in.

To discuss the handbook contents, use [GitHub discussions](https://github.com/distantmagic/llmops-handbook/discussions).

## What are we looking for?

This handbook is intended to be a living document that evolves with the community. It is aimed at more advanced LLM users who want to deploy scalable setups and architect applications around them.

It focuses primarily on runners aimed at production usage, such as `llama.cpp` or `vLLM`. However, if you find an interesting use case for `aphrodite`, `tabby`, or any other runner, that is also welcome.

Those are just general ideas. Anything related to infrastructure, the application layer, or tutorials is welcome. If you have an interesting approach to using LLMs, feel free to contribute that as well.

## How to contribute?

We are using [GitHub issues](https://github.com/distantmagic/llmops-handbook/issues) and [pull requests](https://github.com/distantmagic/llmops-handbook/pulls) to organize the work.

### Submitting a new article

If you want to submit an article:
1. Start a GitHub issue with an outline (the general points you want to cover) so we can decide together whether it fits the handbook.
2. Add an `outline` label to that issue.
3. If the article fits the handbook, add a new page and create a pull request with a finished article.

### Updating an article

If you want to improve an existing article, start an issue to let us know your thoughts, or create a pull request if your changes are ready. Add an `improvement` label to such an issue or pull request.

### Scrutiny

If you think something in the handbook is incorrect, open a new issue with a `scrutiny` label and point out the problem.
Empty file removed CONTRIBUTING.md
Empty file.
4 changes: 4 additions & 0 deletions README.md
@@ -12,3 +12,7 @@ For the rendered version, visit our website: https://llmops-handbook.distantmagi
## License

Creative Commons Attribution Share Alike 4.0 International

## Community

Discord: https://discord.gg/kysUzFqSCK
4 changes: 3 additions & 1 deletion src/README.md
@@ -2,7 +2,9 @@

This handbook is a practical and advanced guide to LLMOps. It provides a solid understanding of large language models' general concepts, deployment techniques, and software engineering practices. With this knowledge, you will be prepared to maintain the entire stack confidently.

It will teach you how to use Large Language Models and self-host Open Source models and build applications around them. It goes beyond just [Retrieval Augmented Generation](/customization/retrieval-augmented-generation) and [Fine Tuning](/customization/fine-tuning).
This handbook focuses on LLM runners like `llama.cpp` or `vLLM`, which can scale and behave predictably in production infrastructure, rather than on runners aimed at casual use cases.

It will teach you how to use large language models in professional applications, self-host open-source models, and build software around them. It goes beyond just [Retrieval Augmented Generation](/customization/retrieval-augmented-generation) and [Fine Tuning](/customization/fine-tuning).

It assumes you are interested in self-hosting open source [Large Language Models](/general-concepts/large-language-model). If you only want to use them through HTTP APIs, you can jump straight to the [application layer](/application-layer) best practices.

1 change: 1 addition & 0 deletions src/SUMMARY.md
@@ -22,6 +22,7 @@
- [Kubernetes]()
- [Ollama](./deployments/ollama/README.md)
- [Paddler](./deployments/paddler/README.md)
- [VLLM]()
- [Customization]()
- [Fine-tuning](./customization/fine-tuning/README.md)
- [Retrieval Augmented Generation](./customization/retrieval-augmented-generation/README.md)
2 changes: 1 addition & 1 deletion src/introduction/contributing.md
@@ -1 +1 @@
# Contributing
{{#include ../../CONTRIBUTING}}
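
For context on the line above: mdBook's include preprocessor splices the referenced file into the chapter at build time, which is how the root-level `CONTRIBUTING` file becomes the rendered "Contributing" page. A minimal sketch of the syntax (the line-range variant is shown only as an illustration and is not part of this commit):

```markdown
<!-- src/introduction/contributing.md -->

<!-- Include the whole CONTRIBUTING file from the repository root. -->
{{#include ../../CONTRIBUTING}}

<!-- mdBook can also include only a slice of a file, e.g. lines 1 through 5. -->
{{#include ../../CONTRIBUTING:1:5}}
```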
