
Why is the vLLM backend commented out? #113

@myym0

Description

When vLLM is used as the inference backend, QA-pair generation gets stuck: the request reaches the vLLM backend, but generation runs for a very long time and never completes.
The WeChat group link also seems to have expired. Could you please share the contact information for another communication channel?
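
To narrow down whether the hang is in the QA-pair pipeline or in vLLM itself, a minimal sketch like the following can probe the backend directly, assuming vLLM is serving its OpenAI-compatible API on the default port; the URL and model name are placeholders, not values from this project:

```python
# Minimal probe against a locally running vLLM OpenAI-compatible server.
# The host/port and model name are assumptions; substitute your deployment's values.
import requests

VLLM_URL = "http://localhost:8000/v1/chat/completions"  # assumed default vLLM port
MODEL = "Qwen/Qwen2.5-7B-Instruct"  # placeholder; use the model the server was started with

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Reply with a single word: ok"}],
    "max_tokens": 16,
}

try:
    # A short timeout makes a hung backend fail fast instead of blocking indefinitely.
    resp = requests.post(VLLM_URL, json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
except requests.exceptions.Timeout:
    print("vLLM did not respond within 60s -- the backend itself appears stuck.")
```

If this probe returns promptly, the slowdown is more likely in how the pipeline calls the backend (e.g. request size or concurrency) than in vLLM itself.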
