Issue description
The benchmarking command benchmarks all models when max_depth is used. This happens regardless of the value of max_depth, as long as max_depth > 0.
Reproducing the issue
When running the script below with benchit model.py --analyze-only --max-depth 1, three models are discovered. However, when running benchit model.py --max-depth 1, more than 10 models are benchmarked.
import torch
import timm
from mlagility.parser import parse

# Create the model and set it to evaluation mode
model = timm.create_model("mobilenetv2_035", pretrained=False)
model.eval()

# Create the inputs
inputs1 = torch.rand((1, 3, 28, 28))

# Call the model
model(inputs1)
@jeremyfowers I haven't spent time analyzing this issue yet. My assumption is that those extra models correspond to all of the models that would be identified if --max-depth were set to 999.
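For reference, the expected behavior of a depth limit can be sketched as a recursion over a module's children that stops once the limit is exceeded. This is a hypothetical illustration, not mlagility's actual discovery code; the function name discover_models and the toy model are assumptions made for the example:

```python
import torch.nn as nn

def discover_models(module, max_depth, depth=0):
    # Hypothetical sketch of depth-limited discovery: collect the module
    # itself and recurse into its children only while depth <= max_depth.
    if depth > max_depth:
        return []
    found = [module]
    for child in module.children():
        found.extend(discover_models(child, max_depth, depth + 1))
    return found

# Toy two-level hierarchy: the outer Sequential (depth 0), its two direct
# children (depth 1), and the Linear nested inside the inner Sequential (depth 2).
model = nn.Sequential(nn.Linear(4, 4), nn.Sequential(nn.Linear(4, 2)))

shallow = discover_models(model, max_depth=1)  # root + 2 direct children
deep = discover_models(model, max_depth=999)   # the full hierarchy
print(len(shallow), len(deep))
```

With a correct depth limit, max_depth=1 should yield strictly fewer modules than max_depth=999; the bug described above behaves as if the limit were effectively ignored.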