Add SAM2 models and AI Batch Mode #1493

Open · wants to merge 14 commits into main

Conversation


@jakep72 commented on Sep 11, 2024

  • Support for Segment Anything 2 tiny and large models. Models are currently located in the fork release (see below):

SAM2 Large Models
https://github.com/jakep72/labelme/releases/download/SAM2/sam2_large.encoder.onnx
https://github.com/jakep72/labelme/releases/download/SAM2/sam2_large.decoder.onnx

SAM2 Tiny Models
https://github.com/jakep72/labelme/releases/download/SAM2/sam2_hiera_tiny.encoder.onnx
https://github.com/jakep72/labelme/releases/download/SAM2/sam2_hiera_tiny.decoder.onnx

  • Implement "AI Batch" mode -- adds the ability to label multiple similar polygons without clicking on each object. Accept or reject each polygon using the label dialog pop-up widget.
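A quick way to sanity-check the ONNX files listed in the first bullet is to download one and open it with onnxruntime. This is a hedged sketch, not part of the PR; the local filename and provider choice are arbitrary.

import urllib.request

import onnxruntime

ENCODER_URL = (
    "https://github.com/jakep72/labelme/releases/download/SAM2/"
    "sam2_hiera_tiny.encoder.onnx"
)

# Download the release asset and confirm it loads as a valid ONNX graph.
local_path, _ = urllib.request.urlretrieve(ENCODER_URL, "sam2_hiera_tiny.encoder.onnx")
session = onnxruntime.InferenceSession(local_path, providers=["CPUExecutionProvider"])
print([inp.name for inp in session.get_inputs()])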

Summary by CodeRabbit

  • New Features

    • Introduced new segmentation model options: one tuned for accuracy and one for speed.
    • Added AI-Batch mode to the interface, enabling batch processing for segmentation tasks.
    • Implemented a new configuration setting to toggle AI-Batch functionality.
    • Enhanced drawing tools for a smoother, AI-powered shape creation experience.
  • Chores

    • Conducted internal refinements for improved code efficiency.

@ryouchinsa

sam-cpp-macos is a Segment Anything Model 2 C++ wrapper for macOS and Ubuntu (CPU/GPU).
It runs a SAM2 ONNX model from C++ and is used in the macOS app RectLabel.
It currently supports only image prediction, not video prediction.
We hope it is helpful to some users.

Preprocessing times on macOS (CPU):

  • SAM2 Tiny takes 1s for preprocessing.
  • SAM2 Small takes 2s for preprocessing.
  • SAM2 BasePlus takes 4s for preprocessing.
  • SAM2 Large takes 10s for preprocessing.

sam2_polygon.mp4

coderabbitai bot commented Feb 26, 2025

Walkthrough

This update expands segmentation capabilities by introducing new model classes and a complete segmentation model implementation using the SAM2 architecture. Two new classes, one optimized for accuracy and one for speed, extend the existing model. A new file implements segmentation functionality with image encoding, decoding, and embedding caching. Additionally, a new "ai_batch" mode is integrated into the GUI, with corresponding UI actions, configurations, and canvas event handling. Minor formatting cleanups in utility functions are also included.

Changes

File(s) Change Summary
labelme/ai/__init__.py Added SAM2HieraL and SAM2HieraT classes extending SegmentAnything2Model and updated the MODELS list.
labelme/ai/segment_anything2_model.py New file implementing the segmentation model with classes (SegmentAnything2Model, SegmentAnything2ONNX, SAM2ImageEncoder, SAM2ImageDecoder) and methods for image processing, embedding caching, and inference.
labelme/app.py, labelme/config/…/default_config.yaml, labelme/widgets/canvas.py Integrated a new "ai_batch" mode by adding a UI action (createAiBatchMode), a corresponding configuration option (ai_batch: false), and updates to event handlers (mouse events, finalisation, painting) for batch processing.
labelme/ai/_utils.py Removed extraneous blank lines in the compute_polygon_from_mask function.
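To make the labelme/ai/__init__.py row above concrete, here is a rough sketch of what the two new classes might look like. The constructor keywords (encoder_path, decoder_path) and the name strings are assumptions modelled on labelme's existing AI model classes, not the exact code in this PR.

from .segment_anything2_model import SegmentAnything2Model


class SAM2HieraL(SegmentAnything2Model):
    name = "SegmentAnything2 (accuracy)"  # assumed display name

    def __init__(self):
        super().__init__(
            encoder_path="https://github.com/jakep72/labelme/releases/download/SAM2/sam2_large.encoder.onnx",
            decoder_path="https://github.com/jakep72/labelme/releases/download/SAM2/sam2_large.decoder.onnx",
        )


class SAM2HieraT(SegmentAnything2Model):
    name = "SegmentAnything2 (speed)"  # assumed display name

    def __init__(self):
        super().__init__(
            encoder_path="https://github.com/jakep72/labelme/releases/download/SAM2/sam2_hiera_tiny.encoder.onnx",
            decoder_path="https://github.com/jakep72/labelme/releases/download/SAM2/sam2_hiera_tiny.decoder.onnx",
        )


# The MODELS list is then extended with SAM2HieraT and SAM2HieraL so the UI can offer them.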

Sequence Diagram(s)

sequenceDiagram
    participant U as User
    participant MW as MainWindow
    participant C as Canvas
    participant AM as AI Model
    U->>MW: Select "AI Batch Mode"
    MW->>C: Activate AI Batch Mode
    U->>C: Draw shape (mouse events)
    C->>AM: Finalise drawing and send points
    AM-->>C: Return predicted polygon/mask
    C->>MW: Update display with new shape
    MW-->>U: Render updated canvas
sequenceDiagram
    participant C as Canvas
    participant SAM as SegmentAnything2Model
    participant EN as SAM2ImageEncoder
    participant DE as SAM2ImageDecoder
    C->>SAM: set_image(image)
    SAM->>EN: Compute image embedding
    EN-->>SAM: Return embedding
    SAM->>DE: Predict mask/polygon from points
    DE-->>SAM: Return segmentation result
    SAM-->>C: Provide final segmentation output

Poem

I'm a rabbit of code, hopping through the night,
New models and modes make my heart so light.
With accuracy and speed in every new line,
"Ai_batch" mode makes the canvas shine.
Leaping over bugs with a digital cheer,
I celebrate these changes—happy code, my dear!

@coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (7)
labelme/ai/__init__.py (2)

90-102: Definition of SAM2HieraL.
Good introduction of a new class extending SegmentAnything2Model with specialized encoder/decoder paths for the large model. Consider verifying file checksums or versioning if you anticipate frequent updates to these ONNX files.


103-114: Definition of SAM2HieraT.
Similar to SAM2HieraL, but optimized for speed using tiny model variants. Overall structure is consistent. Also consider logging or verifying model downloads in case of updates or potential network issues.
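A minimal sketch of the checksum idea raised in the two nitpicks above; the expected digest below is a placeholder, not the real hash of the release assets.

import hashlib
import pathlib


def md5_matches(path: str, expected_md5: str) -> bool:
    # Hash the downloaded ONNX file and compare it against the pinned digest.
    digest = hashlib.md5(pathlib.Path(path).read_bytes()).hexdigest()
    return digest == expected_md5


if not md5_matches("sam2_large.encoder.onnx", "0123456789abcdef0123456789abcdef"):  # placeholder digest
    raise RuntimeError("Model file is corrupted or outdated; re-download it.")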

labelme/widgets/canvas.py (1)

785-814: Painting logic for ai_batch.
Uses skimage’s find_contours and approximate_polygon to outline multiple polygons, painting each contour. This is an efficient approach for batch previews, though be mindful of performance with large images.
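For reference, a small self-contained sketch of the find_contours / approximate_polygon flow the comment describes, run on a toy mask; the threshold and tolerance values are illustrative.

import numpy as np
import skimage.measure

mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 30:70] = True  # toy mask standing in for a SAM2 prediction

polygons = []
for contour in skimage.measure.find_contours(mask.astype(float), 0.5):
    # Thin the dense contour down to a few vertices for the canvas preview.
    approx = skimage.measure.approximate_polygon(contour, tolerance=2.0)
    polygons.append(approx[:, ::-1])  # (row, col) -> (x, y)

print(len(polygons), polygons[0].shape)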

labelme/ai/segment_anything2_model.py (4)

17-22: Consider thread-safety in multi-threaded usage.
This class holds a _thread reference and uses a lock, but as soon as a new set_image call happens, the _thread reference might be overwritten. If multiple threads call set_image in parallel, the logic may become interleaved. Clarify whether this class is intended for single-threaded or truly concurrent use, and add additional synchronization if necessary.
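One possible shape for the synchronization this comment asks about, sketched under the assumption that a single lock can guard the worker-thread handle; the class and attribute names are illustrative, not the PR's.

import threading


class EmbeddingWorkerGuard:
    """Ensure only one embedding computation runs at a time."""

    def __init__(self):
        self._lock = threading.Lock()
        self._thread = None

    def set_image(self, image, compute_fn):
        with self._lock:
            # Wait for any in-flight encoder run before starting a new one.
            if self._thread is not None and self._thread.is_alive():
                self._thread.join()
            self._thread = threading.Thread(target=compute_fn, args=(image,))
            self._thread.start()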


36-44: Use LRU logic or a hashed key for memory efficiency.
Storing entire raw images as the cache key (self._image.tobytes()) can significantly impact memory usage for large images. Also, using popitem(last=False) in OrderedDict removes items in insertion order, not necessarily least recently used. Consider using an LRU approach or a hashed key to optimize memory usage and cache management.
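A hedged sketch of the hashed-key LRU alternative: hashlib keeps the keys small, and move_to_end() gives genuine least-recently-used eviction. The cache size and function names are illustrative.

import collections
import hashlib

import numpy as np

MAX_CACHE_SIZE = 3
_cache = collections.OrderedDict()  # md5 hex digest -> embedding


def get_or_compute_embedding(image: np.ndarray, compute_embedding):
    key = hashlib.md5(image.tobytes()).hexdigest()
    if key in _cache:
        _cache.move_to_end(key)  # mark as most recently used
        return _cache[key]
    embedding = compute_embedding(image)
    _cache[key] = embedding
    if len(_cache) > MAX_CACHE_SIZE:
        _cache.popitem(last=False)  # evict the least recently used entry
    return embedding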


52-62: Simplify unnecessary array reshaping.
best_mask is wrapped in np.array([[best_mask]]) only to be indexed back at [0,0]. To improve clarity and reduce extra operations, remove these redundant steps and work directly with the mask array.

- best_mask = np.array([[best_mask]])
- best_mask = best_mask[0,0]

151-156: Consider adding error handling for inference.
Currently, if self.session.run() fails for any reason, no exception handling occurs. This may leave the session in an inconsistent state or raise unhandled errors. Add a try-except block or logging to handle inference failures gracefully.
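A short sketch of the try/except wrapping suggested here; the feed dictionary and the choice to re-raise are placeholders for whatever the decoder actually needs.

import logging

import onnxruntime

logger = logging.getLogger(__name__)


def run_decoder(session: onnxruntime.InferenceSession, feeds: dict):
    try:
        return session.run(None, feeds)
    except Exception:
        # Log and re-raise so the caller is not left with a half-updated state.
        logger.exception("SAM2 decoder inference failed")
        raise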

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 449590f and c840a55.

📒 Files selected for processing (6)
  • labelme/ai/__init__.py (2 hunks)
  • labelme/ai/_utils.py (1 hunks)
  • labelme/ai/segment_anything2_model.py (1 hunks)
  • labelme/app.py (8 hunks)
  • labelme/config/default_config.yaml (1 hunks)
  • labelme/widgets/canvas.py (12 hunks)
✅ Files skipped from review due to trivial changes (1)
  • labelme/ai/_utils.py
🔇 Additional comments (15)
labelme/config/default_config.yaml (1)

84-84: New ai_batch configuration option introduced.
This new boolean toggle integrates seamlessly with the surrounding AI-related settings, and no immediate issues are observed.

labelme/ai/__init__.py (2)

4-4: Added import for SegmentAnything2Model.
This ensures the base class is accessible for the newly introduced SAM2 classes. No concerns here.


122-123: Added new classes to the MODELS list.
Makes the new SAM2 variants discoverable by the rest of the system. No concerns.

labelme/app.py (5)

398-413: New "Create AI-Batch" mode action.
This action properly parallels the existing AI creation modes. The callback to initialize the AI model is consistent with other AI-based modes.
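For orientation, a bare-bones version of this wiring using a plain QAction rather than labelme's action helper; it assumes the canvas accepts "ai_batch" as a createMode value, as this PR adds, and all names here are illustrative.

from qtpy import QtWidgets


def add_ai_batch_action(window: QtWidgets.QMainWindow, canvas) -> QtWidgets.QAction:
    act = QtWidgets.QAction("Create AI-Batch Polygons", window)
    act.setCheckable(True)

    def enter_ai_batch_mode():
        # Switch the canvas out of edit mode and into the new drawing mode.
        canvas.setEditing(False)
        canvas.createMode = "ai_batch"

    act.triggered.connect(enter_ai_batch_mode)
    window.addAction(act)
    return act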


667-667: Inclusion of createAiBatchMode in the actions structure.
This ensures the new mode's action is exposed to the application’s toolset.


706-706: Extended context menu with createAiBatchMode.
Properly follows the pattern used by other creation actions.


727-727: Enabled createAiBatchMode on shape load.
Consistent with other AI-based modes, ensuring it is available when images are loaded.


998-998: Enabling the createAiBatchMode on file load.
This completes the wire-up by allowing the mode to be active once the user opens a file.

labelme/widgets/canvas.py (5)

66-66: "ai_batch": false crosshair setting.
Adds crosshair handling for the new mode. No issues; consistent with other mode flags.


261-261: Added "ai_batch" to the creation mode check.
Allows the code path for AI-based shape logic to properly trigger.


289-289: Extends conditional for AI-based drawings to include ai_batch.
Ensures the line logic includes the new mode. No immediate issues.


447-447: Extended multi-mode check for ai_batch in mousePressEvent.
Preserves the existing code style and ensures the new mode’s logic remains consistent with other AI modes.


863-905: Finalizing shapes for ai_batch.

  1. Iterates through each contour to create multiple polygons in batch.
  2. Re-initializes self.current to a new shape after appending each one, ensuring each contour is stored.

Implementation looks correct for multi-polygon batch mode. Keep an eye on large or noisy masks, as excessive contours could degrade performance.
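A compact sketch of the per-contour Shape creation described above, taking already-approximated (x, y) polygons as input; the Shape/addPoint calls follow labelme's public Shape class, while the surrounding wiring is assumed.

from qtpy import QtCore

from labelme.shape import Shape


def polygons_to_shapes(polygons):
    """Create one labelme Shape per (N, 2) array of (x, y) vertices."""
    shapes = []
    for polygon in polygons:
        shape = Shape(shape_type="polygon")
        for x, y in polygon:
            shape.addPoint(QtCore.QPointF(float(x), float(y)))
        shape.close()
        shapes.append(shape)
    return shapes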

labelme/ai/segment_anything2_model.py (2)

23-35: Verify that an old worker thread is not silently replaced.
When set_image is called repeatedly in quick succession, a previous worker thread may still be running unless _get_image_embedding has been called in between. This could lead to race conditions or to one computation's result being discarded. Confirm that the design prevents simultaneous embedding computation for multiple images.


64-65: Confirm that remove_small_objects accepts float values for min_size.
skimage.morphology.remove_small_objects typically expects an integer min_size. Though it may accept float inputs, confirm correctness to avoid future compatibility problems.
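A tiny check of the point raised above, casting min_size to int before the call; the values are illustrative.

import numpy as np
import skimage.morphology

mask = np.zeros((64, 64), dtype=bool)
mask[10:12, 10:12] = True  # a 4-pixel speck that should be removed

min_size = 10.0  # e.g. derived from a fraction of the image area
cleaned = skimage.morphology.remove_small_objects(mask, min_size=int(min_size))
print(cleaned.any())  # False: the speck was below the threshold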

Comment on lines +13 to +15
import numpy as np
import skimage

💡 Verification agent

❓ Verification inconclusive

New imports for numpy, skimage, and related modules.
These are essential for mask handling, contour detection, and polygon approximations. Ensure the project dependencies reflect these libraries.

Would you like to run a quick script to confirm that these dependencies are listed in your environment or requirements?


🏁 Script executed:

#!/bin/bash
# Check if numpy and scikit-image are installed in the environment
pip show numpy
pip show scikit-image

Length of output: 121


Dependency Update Required: Add numpy and scikit-image to the Project Dependencies
We verified via the pip commands that the required packages (numpy and scikit-image) are not currently installed in the environment. Since these libraries are essential for mask handling, contour detection, and polygon approximations—and they are imported in labelme/widgets/canvas.py (lines 13-15)—please make sure that the project’s dependency configuration (for example, in your requirements.txt or setup.py) includes these packages.

  • Location: labelme/widgets/canvas.py (lines 13-15)
  • Action Required:
    • Add numpy and scikit-image (the latter is imported as skimage) to your dependency file.
    • Verify that any environments used (development, CI, production) install these dependencies correctly.
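A hedged sketch of the dependency fix, assuming the project declares runtime dependencies via install_requires in setup.py; the surrounding entries are illustrative.

# setup.py (excerpt)
install_requires = [
    "numpy",
    "scikit-image",  # imported as `skimage` in labelme/widgets/canvas.py
    "onnxruntime",   # needed for the SAM2 encoder/decoder sessions
    # ...existing entries...
]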

Comment on lines +82 to +85
def encode(self, cv_image: np.ndarray) -> list[np.ndarray]:
original_size = cv_image.shape[:2]
high_res_feats_0, high_res_feats_1, image_embed = self.encoder(cv_image)
return {
⚠️ Potential issue

Fix return type annotation.
The method signature says encode(self, cv_image: np.ndarray) -> list[np.ndarray], but it returns a dictionary. Update either the signature or the return structure to maintain consistency.

- def encode(self, cv_image: np.ndarray) -> list[np.ndarray]:
+ def encode(self, cv_image: np.ndarray) -> dict[str, np.ndarray | tuple[int, int]]:
📝 Committable suggestion


Suggested change
def encode(self, cv_image: np.ndarray) -> list[np.ndarray]:
original_size = cv_image.shape[:2]
high_res_feats_0, high_res_feats_1, image_embed = self.encoder(cv_image)
return {
def encode(self, cv_image: np.ndarray) -> dict[str, np.ndarray | tuple[int, int]]:
original_size = cv_image.shape[:2]
high_res_feats_0, high_res_feats_1, image_embed = self.encoder(cv_image)
return {

Comment on lines +327 to +338
def process_output(
self, outputs: list[np.ndarray]
) -> tuple[list[ndarray | Any], ndarray[Any, Any]]:

scores = outputs[1].squeeze()
masks = outputs[0][0]

return (masks,
scores,
self.orig_im_size
)

⚠️ Potential issue

Correct mismatch in tuple size vs. type annotation.
process_output indicates -> tuple[list[ndarray | Any], ndarray[Any, Any]], yet returns three elements (masks, scores, self.orig_im_size). Update the annotation or split the functionality to be consistent with the declared return type.

- ) -> tuple[list[ndarray | Any], ndarray[Any, Any]]:
+ ) -> tuple[list[np.ndarray], np.ndarray, tuple[int, int]]:
📝 Committable suggestion


Suggested change
def process_output(
self, outputs: list[np.ndarray]
) -> tuple[list[ndarray | Any], ndarray[Any, Any]]:
scores = outputs[1].squeeze()
masks = outputs[0][0]
return (masks,
scores,
self.orig_im_size
)
def process_output(
self, outputs: list[np.ndarray]
) -> tuple[list[np.ndarray], np.ndarray, tuple[int, int]]:
scores = outputs[1].squeeze()
masks = outputs[0][0]
return (masks,
scores,
self.orig_im_size
)

Comment on lines +92 to +109
def predict_masks(self, embedding, points, labels) -> list[np.ndarray]:
points, labels = np.array(points), np.array(labels)

image_embedding = embedding["image_embedding"]
high_res_feats_0 = embedding["high_res_feats_0"]
high_res_feats_1 = embedding["high_res_feats_1"]
original_size = embedding["original_size"]
self.decoder.set_image_size(original_size)
masks, scores, orig_im_size = self.decoder(
image_embedding,
high_res_feats_0,
high_res_feats_1,
points,
labels,
)

return masks, scores, orig_im_size

⚠️ Potential issue

Align declared return type with actual triple.
predict_masks declares -> list[np.ndarray], but it actually returns (masks, scores, orig_im_size). Adjust the type hint to reflect the three-element tuple being returned.

- def predict_masks(self, embedding, points, labels) -> list[np.ndarray]:
+ def predict_masks(self, embedding, points, labels) -> tuple[list[np.ndarray], np.ndarray, tuple[int, int]]:
📝 Committable suggestion


Suggested change
def predict_masks(self, embedding, points, labels) -> list[np.ndarray]:
points, labels = np.array(points), np.array(labels)
image_embedding = embedding["image_embedding"]
high_res_feats_0 = embedding["high_res_feats_0"]
high_res_feats_1 = embedding["high_res_feats_1"]
original_size = embedding["original_size"]
self.decoder.set_image_size(original_size)
masks, scores, orig_im_size = self.decoder(
image_embedding,
high_res_feats_0,
high_res_feats_1,
points,
labels,
)
return masks, scores, orig_im_size
def predict_masks(self, embedding, points, labels) -> tuple[list[np.ndarray], np.ndarray, tuple[int, int]]:
points, labels = np.array(points), np.array(labels)
image_embedding = embedding["image_embedding"]
high_res_feats_0 = embedding["high_res_feats_0"]
high_res_feats_1 = embedding["high_res_feats_1"]
original_size = embedding["original_size"]
self.decoder.set_image_size(original_size)
masks, scores, orig_im_size = self.decoder(
image_embedding,
high_res_feats_0,
high_res_feats_1,
points,
labels,
)
return masks, scores, orig_im_size
