Describe the bug
While the Rust code for this API is technically correct, the build pipeline is a disaster of "goldfish memory." The AI agent implemented CUDA support in the main application but completely forgot to enable those features in the CI workflow, rendering the official binaries useless for hardware acceleration.
The Receipts
The AI agent successfully wrote the CUDA detection logic behind a feature flag:
rust/src/main.rs:30 - #[cfg(feature = "tch-backend")]
rust/src/main.rs:32 - if tch::Cuda::is_available() {
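For context, the gating works roughly like this. This is a minimal sketch of the pattern, not the project's actual code; the function name `select_backend` is hypothetical, and only the `tch::Cuda::is_available()` call mirrors the cited line:

```rust
// Sketch of the feature-gating pattern cited above. When built without
// `--features tch-backend`, the cfg'd item is removed before compilation,
// so the GPU probe (and the `tch` dependency) vanish entirely.
#[cfg(feature = "tch-backend")]
fn select_backend() -> &'static str {
    // Mirrors rust/src/main.rs:32 - probe for a usable CUDA device.
    if tch::Cuda::is_available() { "cuda" } else { "cpu" }
}

#[cfg(not(feature = "tch-backend"))]
fn select_backend() -> &'static str {
    "cpu" // fallback: feature disabled, GPU logic compiled out
}

fn main() {
    println!("backend: {}", select_backend());
}
```

Because the release job never passes the feature flag, only the `cfg(not(...))` branch ever reaches the shipped binary.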
The "detrimental" failure: The release pipeline strictly downloads CPU libraries and compiles without the tch-backend feature, stripping all that GPU logic out as dead code:
.github/workflows/release-rust.yml:26 - wget -q "https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-${LIBTORCH_VERSION}%2Bcpu.zip" -O libtorch.zip
.github/workflows/release-rust.yml:38 - run: cargo build --release
The agent's "vibe coding" rules are left in the repo root, instructing it on how to sign its own commits:
python/CLAUDE.md:39 - git commit --trailer "Co-Authored-By:Claude <noreply@anthropic.com>"
Expected behavior
Update .github/workflows/release-rust.yml to download the cu128 LibTorch build instead of the CPU one, and pass --features tch-backend during compilation so the GPU logic actually ships.
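A rough sketch of the fix, derived by substituting cu128 for cpu in the URL pattern cited above. The step names are made up, and the exact cu128 archive name should be verified against download.pytorch.org before merging:

```yaml
# .github/workflows/release-rust.yml (sketch; step names are illustrative)
- name: Download LibTorch (CUDA 12.8)
  run: |
    wget -q "https://download.pytorch.org/libtorch/cu128/libtorch-cxx11-abi-shared-with-deps-${LIBTORCH_VERSION}%2Bcu128.zip" -O libtorch.zip
    unzip -q libtorch.zip

- name: Build with GPU backend
  run: cargo build --release --features tch-backend
```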