diff --git a/topics/imaging/tutorials/yolo_prediction/tutorial.md b/topics/imaging/tutorials/yolo_prediction/tutorial.md index e4e5fb96aaed1b..55de73b3eb0dac 100644 --- a/topics/imaging/tutorials/yolo_prediction/tutorial.md +++ b/topics/imaging/tutorials/yolo_prediction/tutorial.md @@ -96,63 +96,50 @@ This dataset provides two pretrained YOLOv8 detection models tailored for the ma > > 6. Unhide images files and create a dedicated data collection > -> History search `extension:jpg deleted:false visible:any` then click on "select all" and "build dataset list", select 100 files and give a name of the data collection, "DeepSeaSpy images" for example. Tips: To select only last 100 files, you can use the history search function and specify `extension:jpg deleted:false hid>XXXX visible:any` in the serach bar where XXXX is the id of the last image dataset minus 100 (for example `extension:jpg deleted:false hid>47659 visible:any` if you have images until the history dataset ID 47759. +> In the history search bar, enter `extension:jpg deleted:false visible:any`, then click on "select all" and "build dataset list", select 100 files, and give the data collection a name, for example "DeepSeaSpy 100 images sample". Tip: To select only the last 100 files, you can use the history search function and specify `extension:jpg deleted:false hid>XXXX visible:any` in the search bar, where XXXX is the ID of the last image dataset minus 100 (for example `extension:jpg deleted:false hid>47659 visible:any` if your images go up to history dataset ID 47759). > > {: .hands_on} ## ⚙️ Run YOLOv8 in detect mode - - To perform object detection in Galaxy, use the tool "**Perform YOLO image labeling** with ultralytics (Galaxy Version 8.3.0+galaxy2)". Here’s how to set it up: - -- `Input images:` Select the "DeepSeaSpy images" data collection you just created with 100 .jpg underwater images. - -- `Class names file:` This is a plain text file (.txt) that lists the names of the classes the model can detect.
-For example, for detecting Bythograeidae species, the file should look like this: - -``` -Bythograeid crab -Buccinid snail - -``` -and for Buccinide species, class fil names could be like: - -``` -Autre poisson -Couverture de moules -Couverture microbienne -Couverture vers tubicole -Crabe araignée -Crabe bythograeidé -Crevette alvinocarididae -Escargot buccinidé -Ophiure -Poisson Cataetyx -Poisson chimère -Poisson zoarcidé -Pycnogonide -Ver polynoidé -Vers polynoidés -``` - -Each class name must be on its own line, in the same order used during model training. So the class ID 0 corresponds to Buccinidae, and 1 to Bythograeidae. - -- `Model`: Upload and choose from the dataset either `YOLOv8-weights-for-Buccinidae-detection.pt` or `YOLOv8-weights-for-Bythograeidae-detection.pt`, or test both on different runs. - -- `Prediction mode`: Select detect. This tells YOLO to output bounding boxes around detected objects. - -- `Image size`: Use 1000 (or a smaller number like 640 if processing speed is important). This controls how much the image is resized before prediction. Smaller values = faster but possibly less accurate. - -- `Confidence threshold`: Set to 0.25 (25%). This controls how confident the model must be to report a detection. If you increase this value (e.g., 0.5), you’ll get fewer detections, but they’ll be more confident. If you lower it (e.g., 0.1), you may get more results, but possibly more false positives. - -- `IoU threshold`: Set to 0.45. This is used for Non-Maximum Suppression (NMS), which removes overlapping detections. A higher IoU value (e.g., 0.7) keeps more overlapping boxes. A lower IoU (e.g., 0.3) removes more overlaps, which may help clean up crowded images. - -- `Max detections`: Set a reasonable cap like 300. This limits the number of objects detected per image. - -💡 Tip: Try changing the confidence and IoU thresholds to see how detection results vary. It helps you find a good balance between sensitivity and accuracy. 
- -> 💡 **Note**: These models are trained only for detection, not segmentation. +> Detect Buccinid snails on images +> +> 1. {% tool [Perform YOLO image labeling](toolshed.g2.bx.psu.edu/repos/bgruening/yolo_predict/yolo_predict/8.3.0+galaxy2) %} with the following parameters: +> - {% icon param-file %} *"Input images"*: `DeepSeaSpy 100 images sample` (Input images dataset collection) +> - *"Class names file"*: `Buccinide` (Input plain text file (.txt) that lists the names of the classes the model can detect) +> - *"Model"*: `dataset_seanoe_101899_YOLOv8-weights-for-Buccinidae-detection` (Input pt file) +> - *"Prediction mode"*: `Detect` +> - *"Image size"*: `1000` +> - *"Confidence"*: `0.25` +> - *"IoU"*: `0.45` +> - *"Max detections"*: `300` +> +> > Model type +> > +> > These models are trained only for detection, not segmentation. +> > +> {: .warning} +> +> > Confidence and IoU threshold parameters +> > +> > Try changing the confidence and IoU thresholds to see how detection results vary. This helps you find a good balance between sensitivity and accuracy. +> > +> {: .tip} +> +> > Additional information on the class names file and parameters +> > +> > Concerning the class names file: Each class name must be on its own line, in the same order used during model training. So class ID 0 corresponds to Buccinidae, and 1 to Bythograeidae. +> > Concerning tool parameters: +> > - `Image size`: Use 1000 (or a smaller number like 640 if processing speed is important). This controls how much the image is resized before prediction. Smaller values = faster but possibly less accurate. +> > - `Confidence threshold`: Set to 0.25 (25%). This controls how confident the model must be to report a detection. If you increase this value (e.g., 0.5), you’ll get fewer detections, but they’ll be more confident. If you lower it (e.g., 0.1), you may get more results, but possibly more false positives. +> > - `IoU threshold`: Set to 0.45.
This is used for Non-Maximum Suppression (NMS), which removes overlapping detections. A higher IoU value (e.g., 0.7) keeps more overlapping boxes. A lower IoU (e.g., 0.3) removes more overlaps, which may help clean up crowded images. +> > - `Max detections`: Set a reasonable cap like 300. This limits the number of objects detected per image. +> > +> {: .comment} +> +{: .hands_on} ## 🧾 Explore the Outputs
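Before digging into the outputs, it can help to see how the confidence and IoU parameters above interact during Non-Maximum Suppression. The following is a minimal, illustrative Python sketch of greedy NMS — not the tool's actual implementation (Ultralytics uses an optimized version internally). The `(x1, y1, x2, y2)` box format and the helper names `iou` and `nms` are assumptions for this example only:

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, conf_thresh=0.25, iou_thresh=0.45, max_det=300):
    # Keep detections above the confidence cut, then greedily suppress
    # any box that overlaps a higher-scoring kept box by >= iou_thresh.
    order = sorted(
        (i for i, s in enumerate(scores) if s >= conf_thresh),
        key=lambda i: scores[i],
        reverse=True,
    )
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
        if len(kept) == max_det:
            break
    return kept
```

With two heavily overlapping boxes and one distant box, the default `iou_thresh=0.45` suppresses the lower-scoring overlap, while raising it to 0.7 keeps both — the same effect described for the tool's `IoU` parameter above.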