Autodistill is an ecosystem for using large, slow foundation models to train small, fast supervised models. Using autodistill
and its associated packages, you can go from unlabeled images to inference on a custom model running at the edge, with no human intervention in between.
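The workflow above (unlabeled images in, a trained supervised model out) can be sketched in a few lines. This is a minimal, self-contained illustration: `CaptionOntology` mirrors the concept from the autodistill docs of mapping free-text prompts to class labels, while `FakeFoundationModel` is a hypothetical stand-in for a real base model such as Grounded SAM.

```python
from dataclasses import dataclass


@dataclass
class CaptionOntology:
    """Maps free-text prompts (sent to the foundation model) to the
    class labels the distilled target model should learn."""
    prompt_to_class: dict

    def prompts(self):
        return list(self.prompt_to_class.keys())

    def class_for(self, prompt):
        return self.prompt_to_class[prompt]


class FakeFoundationModel:
    """Hypothetical stand-in for a big, slow base model: for the sketch,
    it simply 'detects' every prompt in the ontology in every image."""

    def __init__(self, ontology):
        self.ontology = ontology

    def label(self, images):
        # Auto-label: produce a supervised dataset of (image, class) pairs
        # with no human annotation involved.
        return [
            (img, self.ontology.class_for(p))
            for img in images
            for p in self.ontology.prompts()
        ]


ontology = CaptionOntology({"shipping container": "container"})
base = FakeFoundationModel(ontology)
dataset = base.label(["frame_001.jpg", "frame_002.jpg"])
print(dataset)
# -> [('frame_001.jpg', 'container'), ('frame_002.jpg', 'container')]
# A real target model (e.g. a YOLO variant) would now be trained on `dataset`.
```

In the real packages, the base model also localizes objects (boxes or masks) rather than emitting one label per prompt; the sketch only shows the ontology-driven labeling pattern.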
Repositories
A selection of the organization's 62 repositories:
- autodistill: Images to inference with no labeling (use foundation models to train supervised models).
- autodistill-yolov11 (forked from autodistill/autodistill-yolov8): YOLOv11 Target Model plugin for Autodistill.
- autodistill-florence-2: Use Florence-2 to auto-label data for use in training fine-tuned object detection models.
- autodistill-grounded-sam-2: Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models.
- autodistill-target-model-template (template): A template for use in creating Autodistill Target Model packages.
- autodistill-paligemma: Use PaliGemma to auto-label data for use in training fine-tuned vision models.
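The packages above follow a common shape: a Base Model plugin (e.g. Grounded SAM, Florence-2, PaliGemma) auto-labels images from a prompt ontology, and a Target Model plugin (e.g. a YOLO variant) trains on the result. A hedged sketch following the quickstart pattern in the autodistill documentation, assuming the autodistill, autodistill-grounded-sam, and autodistill-yolov8 packages are installed; the prompt, folder paths, and epoch count are illustrative:

```python
# Sketch of distilling a foundation model into a small edge model,
# adapted from the documented autodistill quickstart pattern.
from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM
from autodistill_yolov8 import YOLOv8

# Base Model: a large foundation model, prompted via an ontology that maps
# free-text prompts to the class names the target model should learn.
base_model = GroundedSAM(
    ontology=CaptionOntology({"shipping container": "container"})
)

# Auto-label a folder of unlabeled images (no human annotation).
base_model.label("./images", extension=".jpeg")

# Target Model: a small, fast supervised model trained on the auto-labels.
# The labeled-dataset path below is illustrative.
target_model = YOLOv8("yolov8n.pt")
target_model.train("./images_labeled/data.yaml", epochs=200)
```

Swapping in another Base Model package from the list (autodistill-florence-2, autodistill-grounded-sam-2, autodistill-paligemma) changes only the import and constructor; the label-then-train flow stays the same.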