[Feature implementation] Add support for FP8 in VLLM in Xeon #40

Open
aalbersk opened this issue Feb 27, 2025 · 0 comments
Labels
EnterpriseRAG Hackathon (Issue created for OSS Hackathon)

Comments

@aalbersk
Collaborator

Implement support for FP8 quantization in vLLM on CPU (Xeon), based on TGI Gaudi's implementation.

  • Create a docker-compose.yaml for easy one-click execution
  • Prepare the most performant quantization configuration for https://huggingface.co/Intel/neural-chat-7b-v3-1
  • Prepare a README showcasing how to quantize the model and use it in the vLLM pipeline (see the sketch after this list).
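
The end-user interface is still open; the following is a minimal Python sketch of what serving the quantized model on CPU could look like, assuming the CPU backend reuses vLLM's existing `quantization="fp8"` option from the GPU path. The model name comes from the bullet above; everything else is illustrative, not a confirmed API for this feature.

```python
from vllm import LLM, SamplingParams

# Hypothetical target usage once FP8 support lands in the vLLM CPU backend.
# Assumption: the CPU backend accepts the same quantization value as CUDA.
llm = LLM(
    model="Intel/neural-chat-7b-v3-1",
    quantization="fp8",   # assumed to be honored on CPU after this feature
    dtype="bfloat16",     # bf16 activations are a common choice on Xeon
)

sampling_params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What is retrieval-augmented generation?"], sampling_params)
print(outputs[0].outputs[0].text)
```

In the docker-compose.yaml, the same model and quantization options would presumably be passed to vLLM's OpenAI-compatible server entrypoint, so the whole pipeline can be started with a single command.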
aalbersk added the EnterpriseRAG Hackathon label on Feb 27, 2025