Depth estimation workflow block #1175
Conversation
…/inference into depth-estimation-workflow-block
…boflow. replaced model id with new nomenclature: depth-anything-v2/small
"3. Setting the token in your environment: export HUGGING_FACE_HUB_TOKEN=your_token_here\n"
"Or by logging in with: huggingface-cli login"
) from e
print(f"Error initializing depth estimation model: {str(e)}")
please use logger instead
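A minimal sketch of the suggested change, routing the message through a module logger instead of `print` (the helper name `log_init_error` is illustrative, not from the PR):

```python
import logging

logger = logging.getLogger("depth_estimation")

def log_init_error(exc: Exception) -> str:
    """Report a model-initialization failure via the logger, not print()."""
    message = f"Error initializing depth estimation model: {exc}"
    logger.error(message)
    return message
```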
def __init__(
    self, model_id, *args, dtype=None, huggingface_token=HUGGINGFACE_TOKEN, **kwargs
    self, *args, dtype=None, huggingface_token=HUGGINGFACE_TOKEN, **kwargs
please do not remove this parameter without agreement from the code owner (probably @probicheaux would be able to judge best).
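One way the retained `huggingface_token` parameter can coexist with the environment variable is an explicit-first fallback; a sketch under that assumption (the helper name is hypothetical):

```python
import os

def resolve_hf_token(explicit_token=None):
    """Prefer an explicitly passed token, then fall back to the env var."""
    return explicit_token or os.environ.get("HUGGING_FACE_HUB_TOKEN")
```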
self.initialize_model()
# Try to initialize model from cache first
try:
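The cache-first pattern in the snippet can be sketched generically; `cache` and `download` here are stand-ins for the actual cache lookup and weight download, not names from the PR:

```python
def load_with_cache(model_id, cache, download):
    """Try the local cache first; download and cache on a miss."""
    try:
        return cache[model_id]
    except KeyError:
        model = download(model_id)
        cache[model_id] = model
        return model
```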
please do not change base class as that may imply changes to many other models
I am worried about changes in transformers base class - could you elaborate why it is needed for this particular model but was not needed for others?
@PawelPeczek-Roboflow I got rid of structural changes in transformers base class and am passing integration tests.
if model_type not in self.registry_dict:
    raise ModelNotRecognisedError(f"Model type not supported: {model_type}")
return self.registry_dict[model_type]
try:
am I right that this change does nothing compared to what it was?
LGTM, apart from one unneeded change: https://github.com/roboflow/inference/pull/1175/files#r2047394190
LGTM, but conflicts need to be resolved
@grzegorz-roboflow / @hansent - since I am done for today, would u approve when @reiffd7 resolves conflicts?
…ation-workflow-block
LGTM
Formatter check fails. Run `make style` to fix.
⚡️ Codeflash found optimizations for this PR 📄 11% (0.11x) speedup for …h-estimation-workflow-block`)

In order to optimize the performance of the `add_model` and `get_model` methods, we need to minimize the time spent on certain operations and reduce the amount of logging wherever possible.

**Changes made:**

1. In the `get_model` method of `ModelRegistry`, a single `try/except` block replaces the if-check and separate raise to catch `KeyError`, thereby combining lookup and retrieval into a single step. This optimizes the process of fetching the model class and raising the error when necessary.
2. In the `add_model` method of `ModelManager`, logging messages are minimized and combined to reduce the overhead caused by frequent logging. The change reduces unnecessary calls to `logger.debug()`.

**Line profiling improvements:**

- For `get_model`: Combined lookup and retrieval operations reduce steps.
- For `add_model`: Consolidating `logger.debug()` calls minimizes overhead and redundant operations.
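The `get_model` change amounts to letting the dict lookup's `KeyError` double as the membership check; a sketch (class shape simplified from the snippet earlier in the thread):

```python
class ModelNotRecognisedError(Exception):
    pass

class ModelRegistry:
    def __init__(self, registry_dict):
        self.registry_dict = registry_dict

    def get_model(self, model_type):
        # One dict access instead of an `in`-check plus lookup; a missing
        # key surfaces as KeyError and is re-raised as the domain error.
        try:
            return self.registry_dict[model_type]
        except KeyError as e:
            raise ModelNotRecognisedError(
                f"Model type not supported: {model_type}"
            ) from e
```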
LGTM
Description
Implemented a new Depth Estimation workflow block that leverages the Depth Anything V2 model to perform depth estimation on images. This block provides a powerful tool for analyzing spatial relationships and creating depth maps from 2D images.
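A depth block's raw output is typically normalized for visualization; a minimal sketch with plain Python lists (the actual block presumably operates on numpy/torch tensors, and this normalization step is illustrative, not taken from the PR):

```python
def normalize_depth_map(depth, out_max=255):
    """Linearly scale raw depth values to [0, out_max] for display."""
    flat = [v for row in depth for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # guard against a constant-depth image
    return [[round((v - lo) / span * out_max) for v in row] for row in depth]
```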
The implementation includes:
The block is implemented in `inference/core/workflows/core_steps/models/foundation/depth_estimation/v1.py`.

Type of change
How has this change been tested, please provide a testcase or example of how you tested the change?
The implementation was tested through:
Any specific deployment considerations
Docs