
Crop & Mask updates #6876

Open · wants to merge 8 commits into base: main
Conversation

hipsterusername (Member)

Summary

Adds a new node for cropping to an object identified by automated masking nodes (e.g., Segment Anything), with the ability to configure the mask type and to leverage the resulting crop offsets later in the workflow (see the sketch below).
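
For intuition, the core operation is finding the object's bounding box in the mask and cropping with a margin while reporting the offsets. A minimal sketch using Pillow and numpy (the function name and signature are illustrative, not the actual invocation API):

```python
import numpy as np
from PIL import Image

def crop_to_object(image: Image.Image, mask: Image.Image, margin: int = 0,
                   object_color: str = "white") -> tuple[Image.Image, int, int]:
    """Crop `image` to the bounding box of the object in `mask`.

    Returns the cropped image plus the (x, y) offset of the crop, so
    downstream nodes can paste results back into the original image.
    """
    mask_arr = np.array(mask.convert("L"))
    # The object is either the white or the black region of the mask.
    object_pixels = mask_arr > 127 if object_color == "white" else mask_arr < 128

    ys, xs = np.nonzero(object_pixels)
    if len(xs) == 0:
        raise ValueError("Mask contains no object pixels")

    # Bounding box expanded by the margin, clamped to the image bounds.
    left = max(int(xs.min()) - margin, 0)
    top = max(int(ys.min()) - margin, 0)
    right = min(int(xs.max()) + 1 + margin, image.width)
    bottom = min(int(ys.max()) + 1 + margin, image.height)

    return image.crop((left, top, right, bottom)), left, top
```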

Secondly, updates the Image Mask node to handle multiple image formats for masks, along the lines of the sketch below. (May be unnecessary - check me.)
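
By "multiple formats", the idea is presumably to normalize whatever mode the mask image arrives in (L, LA, RGB, RGBA, ...) to a single channel before thresholding. A hedged sketch of that normalization (not the node's actual code):

```python
import numpy as np
from PIL import Image

def normalize_mask(mask: Image.Image) -> np.ndarray:
    """Convert a mask image of any common mode into a single-channel uint8 array."""
    if mask.mode in ("RGBA", "LA"):
        # Prefer the alpha channel when one is present.
        channel = mask.getchannel("A")
    else:
        # Otherwise collapse to grayscale (handles L, RGB, P, etc.).
        channel = mask.convert("L")
    return np.array(channel, dtype=np.uint8)
```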

Related Issues / Discussions

#6805

QA Instructions

  • Generate a Segment Anything mask, convert it to an image, and then use it in a workflow utilizing these new nodes.

Merge Plan

N/A

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)

@github-actions bot added the python (PRs that change python files) and invocations (PRs that change invocations) labels on Sep 18, 2024
hipsterusername (Member, Author)

As I consider this - perhaps it'd be better to have this node just output the crop offsets for the identified object and let the existing crop nodes do the cropping (roughly the sketch below).
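
In that alternative design, the node would run the same bounding-box computation as in the earlier sketch but emit only the offsets and size for an existing crop node to consume. Roughly (the output field names are assumptions, not the actual node outputs):

```python
import numpy as np

def object_bbox(mask_arr: np.ndarray, margin: int = 0) -> dict[str, int]:
    """Compute crop offsets and size for the white object in a binary mask array."""
    ys, xs = np.nonzero(mask_arr > 127)
    if len(xs) == 0:
        raise ValueError("Mask contains no object pixels")
    x = max(int(xs.min()) - margin, 0)
    y = max(int(ys.min()) - margin, 0)
    width = min(int(xs.max()) + 1 + margin, mask_arr.shape[1]) - x
    height = min(int(ys.max()) + 1 + margin, mask_arr.shape[0]) - y
    return {"x": x, "y": y, "width": width, "height": height}
```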


```python
image: ImageField = InputField(description="An input mask image with black and white content")
margin: int = InputField(default=0, ge=0, description="The desired margin around the object, as measured in pixels")
object_color: Literal["white", "black"] = InputField(
```
Collaborator

SAM can output multi-colored masks, right? Maybe we want this to be a ColorField instead, and update the mask extraction accordingly.

A future UI component could let the user click the specific mask they want, sampling its color, and then pass that into this node. So it'd be like a two-stage filter - segment, then choose the mask.
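
If the field became a ColorField, the extraction step would select pixels matching the chosen color instead of plain white or black. A possible sketch of that color matching, with a tolerance (purely illustrative, not the PR's implementation):

```python
import numpy as np
from PIL import Image

def extract_mask_by_color(mask: Image.Image, color: tuple[int, int, int],
                          tolerance: int = 0) -> Image.Image:
    """Return a black-and-white mask that is white where `mask` matches `color`."""
    arr = np.array(mask.convert("RGB")).astype(np.int16)
    # Per-pixel maximum channel difference from the target color.
    diff = np.abs(arr - np.array(color, dtype=np.int16)).max(axis=-1)
    binary = (diff <= tolerance).astype(np.uint8) * 255
    return Image.fromarray(binary, mode="L")
```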
