
REF: Remove chatglm-cpp and Fix latest llama-cpp-python issue #1844

Merged
8 commits merged on Jul 12, 2024

Conversation

@ChengjieLi28 (Contributor) commented on Jul 11, 2024:

  1. Remove chatglm-cpp support.
  2. Remove the chatglm, chatglm2, and chatglm3 ggmlv3 models.
  3. Add some glm4-chat GGUF models.
  4. Remove the create_embedding task for LLM models.
  5. Make the code compatible with the latest llama-cpp-python:
  • The GPU docker image can use the latest version.
  • The CPU docker image should stay on v0.2.77, since the latest version has an import issue in the CPU-only build.
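The GPU/CPU version split in item 5 could be expressed in the two Dockerfiles roughly as follows (an illustrative sketch only; the actual install commands and file layout in the xinference repository may differ):

```dockerfile
# GPU image: the latest llama-cpp-python imports fine, so no pin is needed.
RUN pip install --upgrade llama-cpp-python

# CPU image: pin to v0.2.77, since newer releases fail to import
# in the CPU-only environment.
RUN pip install "llama-cpp-python==0.2.77"
```

Pinning with an exact `==` specifier in the CPU image keeps builds reproducible until the upstream import issue is resolved, at which point the pin can be lifted.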

@XprobeBot XprobeBot added this to the v0.13.1 milestone Jul 11, 2024
@XprobeBot XprobeBot added the gpu label Jul 11, 2024
@ChengjieLi28 ChengjieLi28 changed the title REF: Remove chatglm-cpp support and Fix latest llama-cpp-python issue REF: Remove chatglm-cpp and Fix latest llama-cpp-python issue Jul 11, 2024
@ChengjieLi28 ChengjieLi28 marked this pull request as ready for review July 12, 2024 03:34
@qinxuye (Contributor) left a comment:
LGTM

@qinxuye qinxuye merged commit 0f9c942 into xorbitsai:main Jul 12, 2024
12 of 13 checks passed
3 participants