forked from zylon-ai/private-gpt
Upstream tag v0.6.2 (revision 22904ca) #6
Open
jordyantunes wants to merge 41 commits into apolo from upstream-to-pr/rev-22904ca (base: apolo)
Conversation
* fix: mistral ignoring assistant messages
* fix: typing
* fix: tests
* Support for Google Gemini LLMs and Embeddings

  Initial support for Gemini; enables usage of Google LLMs and embedding models (see settings-gemini.yaml). Install via `poetry install --extras "llms-gemini embeddings-gemini"`.

  Notes:
  * had to bump llama-index-core to a later version that supports Gemini
  * `poetry --no-update` did not work: Gemini/llama_index seem to require more (transitive) updates to make it work
* fix: crash when gemini is not selected
* docs: add gemini llm

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
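The Gemini support above is driven by a settings profile. The sketch below shows what a minimal settings-gemini.yaml might look like; the key names (`llm.mode`, `embedding.mode`, the `gemini` block, `api_key`, and the model names) are assumptions modelled on the other provider profiles, not copied from the actual file:

```yaml
# Hypothetical sketch of settings-gemini.yaml -- key names and model
# identifiers are assumptions, not verified against the repository.
llm:
  mode: gemini

embedding:
  mode: gemini

gemini:
  api_key: ${GOOGLE_API_KEY:}
  model: models/gemini-pro
  embedding_model: models/embedding-001

# Then run with the profile selected, e.g.:
#   PGPT_PROFILES=gemini make run
```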
…1883)
* Added ClickHouse vector store support
* port fix
* updated lock file
* fix: mypy
* fix: mypy

Co-authored-by: Valery Denisov <valerydenisov@double.cloud>
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
* Update settings.mdx
* docs: add cmd

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
…i#1779)
* Fix/update concepts.mdx referencing the installation page

  The link to `/installation` is broken on the "Main Concepts" page. The correct path would be `./installation` or maybe `/installation/getting-started/installation`.
* fix: docs

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
Co-authored-by: chdeskur <chdeskur@gmail.com>
* docs: update project links ...
* docs: update citation
zylon-ai#1998)
* docs: add troubleshooting
* fix: pass HF token to setup script and prevent downloading the tokenizer when it is empty
* fix: improve log and disable specific tokenizer by default
* chore: change HF_TOKEN environment variable to be aligned with default config
* fix: mypy
* docs: add missing configurations
* docs: replace HF embeddings with ollama
* docs: add disclaimer about Gradio UI
* docs: improve readability in concepts
* docs: reorder `Fully Local Setups`
* docs: improve setup instructions
* docs: prevent duplicate documentation and use a table to show the different options
* docs: rename privateGpt to PrivateGPT
* docs: update ui image
* docs: remove useless header
* docs: convert ingestion disclaimers to alerts
* docs: add UI alternatives
* docs: reference UI alternatives in disclaimers
* docs: fix table
* chore: update doc preview version
* chore: add permissions
* chore: remove useless line
* docs: fixes ...
* integrate Milvus into Private GPT
* adjust milvus settings
* update doc info and reformat
* adjust milvus initialization
* adjust import error
* minor update
* adjust format
* adjust the db storage path
* update doc
* Update README.md: remove the outdated contact form and point to the Zylon website for those looking for a ready-to-use enterprise solution built on top of PrivateGPT
* Update README.md: update text to address the comments
* Update README.md: improve text
* chore: add pull request template
* chore: add issue templates
* chore: require more information in bugs
* fix: ffmpy dependency
* fix: pin ffmpy to a commit sha
* chore: update ollama (llm)
* feat: allow autopulling ollama models
* fix: mypy
* chore: always install ollama client
* refactor: move connection check and ollama pull method to utils
* docs: update ollama config with autopulling info
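The autopull flow above boils down to: list the locally installed models and pull any configured model that is missing. A minimal sketch of that decision logic, independent of Private GPT's actual utils module (the function name and signature here are hypothetical):

```python
def models_to_pull(installed: list[str], required: list[str]) -> list[str]:
    """Return the required model names that are not yet present locally.

    Ollama reports installed models as tags like 'llama3.1:latest', so a
    required name counts as installed if any local tag equals it or starts
    with it followed by a colon.
    """
    missing = []
    for name in required:
        if not any(tag == name or tag.startswith(name + ":") for tag in installed):
            missing.append(name)
    return missing

# Only 'nomic-embed-text' would need pulling here:
print(models_to_pull(["llama3.1:latest"], ["llama3.1", "nomic-embed-text"]))
# → ['nomic-embed-text']
```

In the real client the `installed` list would come from the Ollama API's model listing, and each missing name would be passed to a pull call before the server starts answering requests.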
* fix: when two user messages were sent
* fix: add source divider
* fix: add favicon
* fix: add zylon link
* refactor: update label
* added llama3 prompt
* more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14
* fix: new llama3 prompt

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
* `UID` and `GID` build arguments for the `worker` user
* `POETRY_EXTRAS` build argument with default values
* Copy `Makefile` for the `make ingest` command
* Do NOT copy markdown files (I doubt anyone reads a markdown file within a Docker image)
* Fix PYTHONPATH value
* Set home directory to `/home/worker` when creating the user
* Combine `ENV` instructions together
* Define environment variables with their defaults (for documentation purposes; reflects defaults set in settings-docker.yml)
* `PGPT_EMBEDDING_MODE` to define embedding mode
* Remove ineffective `python3 -m pipx ensurepath`
* Use `&&` instead of `;` to chain commands, to detect failures better
* Add `--no-root` flag to poetry install commands
* Set PGPT_PROFILES to docker
* chore: remove envs
* chore: update to use ollama in docker-compose
* chore: don't copy makefile
* chore: don't copy fern
* fix: tiktoken cache
* fix: docker compose port
* fix: ffmpy dependency (zylon-ai#2020)
* feat(llm): autopull ollama models (zylon-ai#2019)
* chore: autopull ollama models
* chore: add GID/UID comment ...

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
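The `UID`/`GID` and `POETRY_EXTRAS` build arguments above would typically be passed at build time so the in-container `worker` user matches the host user. An illustrative invocation; the image tag and the extras list are examples, not taken from the project's CI:

```
# Sketch only: build so 'worker' maps to the host UID/GID, avoiding
# permission problems on bind-mounted volumes. Extras list is an example.
docker build \
  --build-arg UID="$(id -u)" \
  --build-arg GID="$(id -g)" \
  --build-arg POETRY_EXTRAS="ui llms-ollama embeddings-ollama vector-stores-qdrant" \
  -t private-gpt:local .
```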
* feat: prevent local ingestion (by default) and add white-list
* docs: add local ingestion warning
* docs: add missing comment
* fix: update exception error
* fix: black
* feat: change ollama default model to llama3.1
* chore: bump versions
* feat: change default model in local mode to llama3.1
* chore: make sure the latest poetry version is used
* fix: mypy
* fix: do not add BOS (with latest llamacpp-python version)
* feat: unify embedding model to nomic
* docs: add embedding dimensions mismatch
* docs: fix fern
* feat: add summary recipe
* test: add summary tests
* docs: move all recipes docs
* docs: add recipes and summarize doc
* docs: update openapi reference
* refactor: split method in two (summary)
* feat: add initial summarize ui
* feat: add mode explanation
* fix: mypy
* feat: allow configuring async property in summarize
* refactor: move modes to enum and update mode explanations
* docs: fix url
* docs: remove list-llm pages
* docs: remove double header
* fix: summary description
* fix: allow configuring trust_remote_code, based on zylon-ai#1893 (comment)
* fix: nomic hf embeddings
* docs: update Readme
* style: refactor image
* docs: change important to tip
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…i#2037)
* chore: update docker-compose with profiles
* docs: add quick start doc
Fixing the error I encountered while using the azopenai mode
* chore: update docker-compose with profiles
* docs: add quick start doc
* chore: generate docker release when a new version is released
* chore: add dockerhub image in docker-compose
* docs: update quickstart with local/remote images
* chore: update docker tag
* chore: refactor dockerfile names
* chore: update docker-compose names
* docs: update llamacpp naming
* fix: naming
* docs: fix llamacpp command
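With compose profiles, services are grouped so only the requested stack starts. The commands below are illustrative; the profile name `ollama-cpu` is an assumption about what the compose file defines, not a verified name:

```
# Start the API plus a bundled Ollama service (profile name assumed):
docker compose --profile ollama-cpu up -d

# Or start only the default services and point at an external Ollama:
docker compose up -d
```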
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* chore: pin matplotlib to fix installation on Windows machines
* chore: remove workaround, just update poetry.lock
* fix: update matplotlib to latest version
* docs: add numpy issue to troubleshooting
* fix: troubleshooting link ...
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Integrating latest changes from zylon-ai/private-gpt tag v0.6.2
22904ca chore(main): release 0.6.2 (zylon-ai#2049)
7fefe40 fix: auto-update version (zylon-ai#2052)
b1acf9d fix: publish image name (zylon-ai#2043)
4ca6d0c fix: add numpy issue to troubleshooting (zylon-ai#2048)
b16abbe fix: update matplotlib to 3.9.1-post1 to fix win install
ca2b8da chore(main): release 0.6.1 (zylon-ai#2041)
f09f6dd fix: add built image from DockerHub (zylon-ai#2042)
1c665f7 fix: Adding azopenai to model list (zylon-ai#2035)
1d4c14d fix(deploy): generate docker release when new version is released (zylon-ai#2038)
dae0727 fix(deploy): improve Docker-Compose and quickstart on Docker (zylon-ai#2037)
6674b46 chore(main): release 0.6.0 (zylon-ai#1834)
e44a7f5 chore: bump version (zylon-ai#2033)
cf61bf7 feat(llm): add progress bar when ollama is pulling models (zylon-ai#2031)
50b3027 docs: update docs and capture (zylon-ai#2029)
5465958 fix: nomic embeddings (zylon-ai#2030)
8119842 feat(recipe): add our first recipe `Summarize` (zylon-ai#2028)
40638a1 fix: unify embedding models (zylon-ai#2027)
9027d69 feat: make llama3.1 as default (zylon-ai#2022)
e54a8fe fix: prevent to ingest local files (by default) (zylon-ai#2010)
1020cd5 fix: light mode (zylon-ai#2025)
65c5a17 chore(docker): dockerfiles improvements and fixes (zylon-ai#1792)
d080969 added llama3 prompt (zylon-ai#1962)
d4375d0 fix(ui): gradio bug fixes (zylon-ai#2021)
20bad17 feat(llm): autopull ollama models (zylon-ai#2019)
dabf556 fix: ffmpy dependency (zylon-ai#2020)
05a9862 Add proper param to demo urls (zylon-ai#2007)
b626697 docs: update welcome page (zylon-ai#2004)
2c78bb2 docs: add PR and issue templates (zylon-ai#2002)
90d211c Update README.md (zylon-ai#2003)
43cc31f feat(vectordb): Milvus vector db Integration (zylon-ai#1996)
4523a30 feat(docs): update documentation and fix preview-docs (zylon-ai#2000)
01b7ccd fix(config): make tokenizer optional and include a troubleshooting doc (zylon-ai#1998)
15f73db docs: update repo links, citations (zylon-ai#1990)
187bc93 (feat): add github button (zylon-ai#1989)
dde0224 fix(docs): Fix concepts.mdx referencing to installation page (zylon-ai#1779)
067a5f1 feat(docs): Fix setup docu (zylon-ai#1926)
2612928 feat(vectorstore): Add clickhouse support as vectore store (zylon-ai#1883)
fc13368 feat(llm): Support for Google Gemini LLMs and Embeddings (zylon-ai#1965)
19a7c06 feat(docs): update doc for ipex-llm (zylon-ai#1968)
b687dc8 feat: bump dependencies (zylon-ai#1987)
c7212ac fix(LLM): mistral ignoring assistant messages (zylon-ai#1954)