
Conversation


@codeflash-ai codeflash-ai bot commented Apr 16, 2025

⚡️ This pull request contains optimizations for PR #1175

If you approve this dependent PR, these changes will be merged into the original PR branch depth-estimation-workflow-block.

This PR will be automatically closed if the original PR is merged.


📄 11% (0.11x) speedup for ModelManager.add_model in inference/core/managers/base.py

⏱️ Runtime : 2.13 milliseconds → 1.91 milliseconds (best of 118 runs)

📝 Explanation and details

To optimize the performance of the add_model and get_model methods, we minimize the time spent on dictionary lookups and error handling, and reduce the amount of logging wherever possible.

Changes made:

  1. In the get_model method of ModelRegistry, a single try/except block replaces the separate membership check and raise: the dictionary lookup catches KeyError directly, so fetching the model class and raising the error when necessary happen in one step (a minimal sketch of this change follows the list).

  2. In the add_model method of ModelManager, logging messages are consolidated to cut the overhead of frequent logging, removing unnecessary logger.debug() calls (a sketch follows the profiling notes below).
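
A minimal sketch of the change described in item 1, assuming the base ModelRegistry simply wraps a dict mapping identifiers to model classes; the method signature and error message here are illustrative, not the exact repository code:

```python
from inference.core.exceptions import ModelNotRecognisedError


class ModelRegistry:
    """Illustrative registry wrapping a plain dict of model classes."""

    def __init__(self, registry_dict):
        self.registry_dict = registry_dict

    def get_model(self, model_type):
        # Single dict access: a missing key raises KeyError, which is turned
        # into the domain-specific error directly, instead of an `in` check
        # followed by a second lookup of the same key.
        try:
            return self.registry_dict[model_type]
        except KeyError:
            raise ModelNotRecognisedError(
                f"Could not find model of type: {model_type}"
            )
```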

Line profiling improvements:

  • For get_model: Combined lookup and retrieval operations reduce steps.
  • For add_model: Consolidating logger.debug() calls minimizes overhead and redundant operations.
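
A corresponding sketch of the logging consolidation in ModelManager.add_model from item 2; self._models, the parameter list, and the message text are assumptions inferred from the tests below, not the repository's actual code:

```python
from inference.core.logger import logger


class ModelManager:
    """Illustrative manager keeping loaded models in a plain dict."""

    def __init__(self, model_registry):
        self.model_registry = model_registry
        self._models = {}

    def add_model(self, model_id, api_key, model_id_alias=None):
        # Resolve the identifier once; the alias, when given, wins.
        resolved_id = model_id if model_id_alias is None else model_id_alias
        if resolved_id in self._models:
            # Duplicate adds return early: no re-initialization, no extra logging.
            return
        # One consolidated debug call instead of several along the way.
        logger.debug(f"ModelManager - adding model with resolved id: {resolved_id}")
        model_class = self.model_registry.get_model(resolved_id)
        self._models[resolved_id] = model_class(model_id=model_id, api_key=api_key)
```

The point of the sketch is the single logger.debug() call on the success path; the early return for a duplicate identifier mirrors the behavior exercised by test_add_duplicate_model in the generated tests below.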

Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 1069 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
📊 Tests Coverage
🌀 Generated Regression Tests Details
from typing import Dict, Optional
from unittest.mock import MagicMock

# imports
import pytest  # used for our unit tests
# function to test
from inference.core.exceptions import ModelNotRecognisedError
from inference.core.logger import logger
from inference.core.managers.base import ModelManager
from inference.core.models.base import Model
from inference.core.registries.base import ModelRegistry

# unit tests

@pytest.fixture
def model_registry():
    # Create a mock Model class
    class MockModel(Model):
        def __init__(self, model_id, api_key):
            self.model_id = model_id
            self.api_key = api_key

    # Create a mock registry dict
    registry_dict = {
        "model1": MockModel,
        "model2": MockModel,
    }
    return ModelRegistry(registry_dict)

@pytest.fixture
def model_manager(model_registry):
    return ModelManager(model_registry)

def test_add_model_success(model_manager):
    # Test adding a new model successfully
    model_manager.add_model("model1", "key1")


def test_add_duplicate_model(model_manager):
    # Test adding a duplicate model
    model_manager.add_model("model1", "key1")
    model_manager.add_model("model1", "key1")


def test_add_model_not_recognised(model_manager):
    # Test adding a model that is not recognised
    with pytest.raises(ModelNotRecognisedError):
        model_manager.add_model("unknown_model", "key1")


def test_add_model_empty_model_id(model_manager):
    # Test adding a model with an empty model_id
    with pytest.raises(ModelNotRecognisedError):
        model_manager.add_model("", "key1")

def test_add_model_special_characters(model_manager):
    # Test adding a model with special characters in model_id
    with pytest.raises(ModelNotRecognisedError):
        model_manager.add_model("!@#$%^&*()", "key1")


def test_logging_behavior(model_manager, caplog):
    # Test logging behavior
    with caplog.at_level("DEBUG"):
        model_manager.add_model("model1", "key1")

def test_logging_error_behavior(model_manager, caplog):
    # Test logging error behavior
    with caplog.at_level("DEBUG"):
        with pytest.raises(ModelNotRecognisedError):
            model_manager.add_model("unknown_model", "key1")
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

from typing import Dict, Optional
from unittest.mock import MagicMock

# imports
import pytest  # used for our unit tests
# function to test
from inference.core.exceptions import ModelNotRecognisedError
from inference.core.logger import logger
from inference.core.managers.base import ModelManager
from inference.core.models.base import Model
from inference.core.registries.base import ModelRegistry


# Mock Model class for testing
class MockModel:
    def __init__(self, model_id, api_key):
        self.model_id = model_id
        self.api_key = api_key

# unit tests
def test_add_model_success():
    # Basic functionality: Adding a new model successfully
    registry_dict = {"model1": MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model("model1", "key1")

def test_add_model_with_alias():
    # Basic functionality: Adding a new model with an alias
    registry_dict = {"alias2": MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model("model2", "key2", "alias2")

def test_add_duplicate_model():
    # Duplicate models: Adding a model with an existing model_id
    registry_dict = {"model1": MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model("model1", "key1")
    model_manager.add_model("model1", "key1")

def test_add_model_with_duplicate_alias():
    # Duplicate models: Adding a model with a new model_id but existing alias
    registry_dict = {"model1": MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model("model1", "key1")
    model_manager.add_model("model2", "key2", "model1")

def test_add_model_with_empty_id():
    # Edge case: Adding a model with an empty model_id
    registry_dict = {"": MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model("", "key3")

def test_add_model_with_none_id():
    # Edge case: Adding a model with None as model_id
    registry_dict = {None: MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model(None, "key4")


def test_add_model_with_none_alias():
    # Edge case: Adding a model with None as model_id_alias
    registry_dict = {"model5": MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model("model5", "key6", None)

def test_add_model_not_recognised():
    # Invalid model type: Model type not recognized
    registry_dict = {}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    with pytest.raises(ModelNotRecognisedError):
        model_manager.add_model("unknown_model", "key7")


def test_add_model_exception_handling():
    # Exception handling: Simulating an exception during model initialization
    class FaultyModel:
        def __init__(self, model_id, api_key):
            raise ValueError("Initialization failed")
    
    registry_dict = {"faulty_model": FaultyModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    with pytest.raises(ValueError, match="Initialization failed"):
        model_manager.add_model("faulty_model", "key8")

def test_add_model_logging(caplog):
    # Logging verification: Checking log messages
    registry_dict = {"model7": MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    with caplog.at_level("DEBUG"):
        model_manager.add_model("model7", "key9")


def test_add_model_with_long_id():
    # Boundary case: Adding a model with a very long model_id
    long_id = "a" * 1000
    registry_dict = {long_id: MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model(long_id, "key10")

def test_add_model_with_special_characters():
    # Special characters: Adding a model with special characters in model_id
    special_id = "model@#1"
    registry_dict = {special_id: MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model(special_id, "key11")

def test_add_model_case_sensitivity():
    # Mixed case identifiers: Adding a model with uppercase model_id
    registry_dict = {"MODEL9": MockModel, "model9": MockModel}
    model_registry = ModelRegistry(registry_dict)
    model_manager = ModelManager(model_registry)
    
    model_manager.add_model("MODEL9", "key12")
    model_manager.add_model("model9", "key13")
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, `git checkout codeflash/optimize-pr1175-2025-04-16T21.35.26` and push.

Codeflash

@codeflash-ai codeflash-ai bot added the ⚡️ codeflash Optimization PR opened by Codeflash AI label Apr 16, 2025
@codeflash-ai codeflash-ai bot mentioned this pull request Apr 16, 2025
@misrasaurabh1
Contributor

not so useful...

Base automatically changed from depth-estimation-workflow-block to main April 17, 2025 09:13
@grzegorz-roboflow
Contributor

Closing, since the change does not introduce significant improvements

@codeflash-ai codeflash-ai bot deleted the codeflash/optimize-pr1175-2025-04-16T21.35.26 branch April 18, 2025 09:55