A powerful VS Code extension that detects and fixes code bugs using machine learning. This extension integrates with a bug detection model and a bug-fixing model hosted on Hugging Face, allowing developers to improve code quality efficiently.
- Detect Bugs: Classifies code as "buggy" or "bug-free."
- Fix Bugs: Automatically suggests fixes for detected issues.
- Manual Control: Users decide when to run the detection and fixing functions.
- Fast & Local Processing: Uses Hugging Face models locally, avoiding API calls.
- Cross-Platform: Works on Windows, Linux, and macOS from version 0.0.2 onwards.
- Download and install the extension from the VS Code Marketplace.
- Ensure you have Node.js and VS Code installed.
- Open VS Code and enable the extension.
## Project structure

```
project/
|_ your_code.py
|_ bug_detector_model    # huggingface-cli download felixoder/bug_detector_model --local-dir ./bug_detector_model
|_ bug_fixer_model       # huggingface-cli download felixoder/bug_fixer_model --local-dir ./bug_fixer_model
|_ run_model.py          # see the run_model.py section below
```
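If you prefer to fetch the models from Python rather than the CLI, a minimal sketch using `huggingface_hub` works too (an assumption: install it first with `pip install huggingface_hub`):

```python
from huggingface_hub import snapshot_download

# Fetch both models into the local directories the extension expects.
snapshot_download("felixoder/bug_detector_model", local_dir="./bug_detector_model")
snapshot_download("felixoder/bug_fixer_model", local_dir="./bug_fixer_model")
```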
Install the Python dependencies:

```bash
pip install torch
pip install transformers
```
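To confirm the environment is set up, a quick sanity check (nothing extension-specific, just the libraries the script below relies on):

```python
# Verify that torch and transformers import, and report whether a GPU is visible.
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```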
## run_model.py

```python
import sys
import torch
from transformers import (
AutoModelForCausalLM,
AutoModelForSequenceClassification,
AutoTokenizer,
)
detector_name = "./bug_detector_model"
fixer_name = "./bug_fixer_model"
# Automatically select the best available device (GPU > MPS > CPU)
device = (
torch.device("cuda")
if torch.cuda.is_available()
else torch.device("mps")
if torch.backends.mps.is_available()
else torch.device("cpu")
)
# Use FP16 if on GPU, else FP32
torch_dtype = torch.float16 if device.type == "cuda" else torch.float32
tokenizer = AutoTokenizer.from_pretrained(detector_name)
model = AutoModelForSequenceClassification.from_pretrained(
detector_name, torch_dtype=torch_dtype
).to(device)
fixer_tokenizer = AutoTokenizer.from_pretrained(fixer_name)
fixer_model = AutoModelForCausalLM.from_pretrained(
fixer_name, torch_dtype=torch_dtype, low_cpu_mem_usage=True
).to(device)
def classify_code(code):
inputs = tokenizer(
code, return_tensors="pt", padding=True, truncation=True, max_length=512
).to(device)
with torch.no_grad():
outputs = model(**inputs)
predicted_label = torch.argmax(outputs.logits, dim=1).item()
return "bug-free" if predicted_label == 0 else "buggy"
def fix_buggy_code(code):
prompt = f"### Fix this buggy Python code:\n{code}\n### Fixed Python code:\n"
inputs = fixer_tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
        outputs = fixer_model.generate(
            **inputs,
            max_new_tokens=256,  # cap new tokens, so long prompts aren't cut off by a total-length limit
            do_sample=False,
            num_return_sequences=1,
        )
fixed_code = fixer_tokenizer.decode(outputs[0], skip_special_tokens=True)
return (
fixed_code.split("### Fixed Python code:")[1].strip()
if "### Fixed Python code:" in fixed_code
else fixed_code
)
if __name__ == "__main__":
    if len(sys.argv) < 3:
        print("usage: python run_model.py [classify|fix] <code>")
        sys.exit(1)
    command = sys.argv[1]
    code = sys.argv[2]
if command == "classify":
print(classify_code(code))
elif command == "fix":
        print(fix_buggy_code(code))
```
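To try the script outside the extension, call it the same way the extension does, passing a command (`classify` or `fix`) and the code as arguments. The snippet below is illustrative; the buggy sample is made up:

```python
import subprocess

# A deliberately buggy snippet: subtracts instead of adding.
buggy = "def add(a, b):\n    return a - b"

# Ask the script to classify the snippet.
label = subprocess.run(
    ["python", "run_model.py", "classify", buggy],
    capture_output=True, text=True,
).stdout.strip()
print("label:", label)

# If it is flagged as buggy, ask for a suggested fix.
if label == "buggy":
    result = subprocess.run(
        ["python", "run_model.py", "fix", buggy],
        capture_output=True, text=True,
    )
    print(result.stdout)
```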
- Detect Bugs:

  - Open a Python code file.
  - Run the command: `Detect Bugs`
  - The extension highlights buggy code sections.

- Use a build template:

  - Paste this in your terminal:

    ```bash
    wget -O setup_and_run.sh https://raw.githubusercontent.com/felixoder/felix-detect-fix/master/setup_and_run.sh
    chmod +x setup_and_run.sh
    ./setup_and_run.sh
    ```

- Fix Bugs:

  - After detecting bugs, run the command: `Fix Bugs`
  - The model suggests code fixes.
NEW RELEASE (1.0.4 onwards)

- The models no longer need to be downloaded manually with wget; installation is now handled for you.
- When you run the extension or the CLI, it checks whether the detector and fixer models are already present under the `model/` directory. If they are, it uses them; otherwise it downloads them first and then uses them.
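Conceptually, the bootstrap behaves like the sketch below (illustrative only; `ensure_model` and the exact paths are assumptions, not the extension's actual code):

```python
import os
from huggingface_hub import snapshot_download

def ensure_model(repo_id: str, local_dir: str) -> str:
    """Reuse a local copy of the model if present; otherwise download it first."""
    if not os.path.isdir(local_dir):  # assumed check: directory presence means model is installed
        snapshot_download(repo_id, local_dir=local_dir)
    return local_dir

# The extension looks for both models under the model/ directory.
detector_dir = ensure_model("felixoder/bug_detector_model", "model/bug_detector_model")
fixer_dir = ensure_model("felixoder/bug_fixer_model", "model/bug_fixer_model")
```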
- VS Code 1.70+
- Node.js 18+
- Hugging Face models: `felixoder/bug_detector_model` and `felixoder/bug_fixer_model`
- Fork the repo & create a new branch.
- Make your changes & commit.
- Open a Pull Request!
This project is licensed under the MIT License.
Made with ❤️ by Debayan Ghosh.