(Oscg26) Multiple obstacles to running inference: broken paths and missing dependencies #26

@GVN2307

Description

The current state of the repository prevents users from successfully running the provided inference script. There are several structural and configuration issues that lead to immediate failures during setup and execution.

1. Broken Adapter Path

   The script `inference.py` is configured to look for the LoRA adapter in a specific subfolder:

   ```python
   ADAPTER_PATH = "./openmath-lora"
   ```

   However, the adapter files (`adapter_model.safetensors` and `adapter_config.json`) are located in the root directory. This causes the script to crash with a "Directory not found" error immediately upon execution.
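   Until the path is fixed, a small guard could make the failure mode explicit. This is a sketch; the helper name and the fall-back-to-root behavior are assumptions, not existing code in the repository:

   ```python
   from pathlib import Path

   # Hypothetical helper: try the configured subfolder first, then fall
   # back to the repository root, and fail with a clear message otherwise.
   def resolve_adapter_path(preferred="./openmath-lora", fallback="."):
       for candidate in (Path(preferred), Path(fallback)):
           # adapter_config.json sits next to adapter_model.safetensors
           if (candidate / "adapter_config.json").is_file():
               return str(candidate)
       raise FileNotFoundError(
           f"LoRA adapter not found in {preferred!r} or {fallback!r}; "
           "expected adapter_config.json and adapter_model.safetensors"
       )
   ```

   The simpler fix is of course to either move the adapter files into `./openmath-lora` or update `ADAPTER_PATH` to point at the root.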

2. Missing Dependency Documentation

   There is no `requirements.txt` or `environment.yml` file in the repository. The project relies on non-standard libraries such as `peft`, `bitsandbytes`, and `accelerate` (required for 4-bit quantization). New users have no way of knowing which packages and versions are needed to create a functional environment.
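   A starting point for a `requirements.txt` could look like the following. The package list comes from the issue text; it is deliberately unpinned, since only the maintainer knows which versions the adapter was trained against:

   ```
   torch
   transformers
   peft
   bitsandbytes
   accelerate
   ```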

3. Hardware Assumptions and Silent Failures

   The code is configured for 4-bit quantization (`load_in_4bit=True`), which strictly requires a CUDA-compatible GPU. Currently:

   - There is no check for CUDA availability.
   - Users on CPU-only machines receive cryptic errors from `bitsandbytes` or the model loader rather than a clear explanation of the hardware requirements.
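   A fail-fast check along these lines would surface the requirement before model loading. The function name and error wording are assumptions; `torch.cuda.is_available()` is the standard PyTorch check:

   ```python
   def require_cuda_for_4bit(load_in_4bit=True, cuda_available=None):
       """Raise a clear error up front if 4-bit quantization needs a GPU."""
       if cuda_available is None:
           import torch  # deferred import so the check itself is cheap to test
           cuda_available = torch.cuda.is_available()
       if load_in_4bit and not cuda_available:
           raise RuntimeError(
               "load_in_4bit=True requires a CUDA-compatible GPU (bitsandbytes). "
               "Run on a GPU machine or set load_in_4bit=False."
           )
   ```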
4. Hardcoded Inference Logic

   The `inference.py` script is written as a flat procedural script that executes a single hardcoded math problem. Because the logic is not modularized into functions, it is difficult to reuse for different inputs or to integrate into other workflows without manually editing the source code every time.
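   One possible refactoring is sketched below. The prompt template and function names are assumptions for illustration and would need to match the format the LoRA was fine-tuned on:

   ```python
   def build_prompt(problem: str) -> str:
       # Template assumed for illustration; match the format used in training.
       return f"Problem: {problem}\nSolution:"

   def run_inference(model, tokenizer, problem: str, max_new_tokens: int = 256) -> str:
       # Standard transformers tokenize/generate/decode flow, parameterized
       # by the input problem instead of a hardcoded string.
       inputs = tokenizer(build_prompt(problem), return_tensors="pt").to(model.device)
       output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
       return tokenizer.decode(output_ids[0], skip_special_tokens=True)
   ```

   With this structure, callers load the model and tokenizer once and call `run_inference` per problem, rather than editing the script for every new input.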

@AshChadha-iitg, if you assign me, I can help resolve this issue.
