
[SD-126] Save model summary during benchmarking #53

Merged
merged 30 commits into from
Jan 24, 2025
Commits (30)
2d1fdc3
WIP PR 126
OCarrollM Jan 20, 2025
6d9c089
SD-126 Console print out of JSON material asked in ticket. Needs JSON…
OCarrollM Jan 21, 2025
129f045
Update loop to use named_children
osw282 Jan 21, 2025
2623a30
SD-126: Code cleaned and contents saved to JSON file
OCarrollM Jan 21, 2025
3d26770
SD-126 Code cleaned, added load_model and get_layers to common module…
OCarrollM Jan 21, 2025
f7a836d
SD-126: Changed to old datacollection function, removed imports from …
OCarrollM Jan 22, 2025
0827c54
Updated script with dvc added
OCarrollM Jan 22, 2025
2596baf
Try fixing corrupted DVC cache
d-lowl Jan 22, 2025
62320fa
uv lock
d-lowl Jan 22, 2025
b1827ee
Fix imports
d-lowl Jan 23, 2025
e7b1d38
Add partial data to DVC
d-lowl Jan 23, 2025
cebcb44
Specify weights when loading
d-lowl Jan 23, 2025
0779eb1
Remove prebuilt models measurements from this branch
d-lowl Jan 23, 2025
60cb54a
Regenerate data with a different indentation, to avoid the corrupted …
d-lowl Jan 23, 2025
c75d7b0
Regenerate data with a different indentation, to avoid the corrupted …
d-lowl Jan 23, 2025
c69ba76
Test model loading
osw282 Jan 23, 2025
caae6e6
Test
osw282 Jan 23, 2025
668505b
Test torch hub endpoints
osw282 Jan 23, 2025
65459ba
Test with version
osw282 Jan 23, 2025
313099d
Update working torch model repo version
osw282 Jan 23, 2025
e173da3
Add test file
d-lowl Jan 23, 2025
c79ea8b
Another test
osw282 Jan 23, 2025
b164482
Fix lenet input dimensions
osw282 Jan 23, 2025
a33d191
Add model summaries collected on the jetson
osw282 Jan 23, 2025
773e00b
Remove test files
d-lowl Jan 23, 2025
0a10fe6
Fixed data collected on jetson
osw282 Jan 23, 2025
3d47471
Revert some unnecessary changes
d-lowl Jan 23, 2025
67436b1
Address comments
d-lowl Jan 24, 2025
05554c2
Address comments
d-lowl Jan 24, 2025
ddfe693
Merge branch 'develop' into SD-126-JSON-Save
d-lowl Jan 24, 2025
55 changes: 1 addition & 54 deletions jetson/power_logging/model/benchmark.py
@@ -18,6 +18,7 @@

from model.lenet import LeNet
from functools import partial
from model_utils import load_model, get_layers

"""
Wrapper class for Torch.cuda.event for non-CUDA supported devices
@@ -68,60 +69,6 @@ class BenchmarkMetrics(BaseModel):
avg_latency: float # in seconds
avg_throughput: float


def load_model(model_name: str, model_repo: str) -> Any:
    """Load model from Pytorch Hub.

    Args:
        model_name: Name of model.
            It should be same as that in Pytorch Hub.

    Raises:
        ValueError: If loading model fails from PyTorch Hub

    Returns:
        PyTorch model
    """
    if model_name == "lenet":
        return LeNet()
    if model_name == "fcn_resnet50":
        return torch.hub.load(model_repo, model_name, pretrained=True)
    try:
        return torch.hub.load(model_repo, model_name)
    except:
        raise ValueError(
            f"Model name: {model_name} is most likely incorrect. "
            "Please refer https://pytorch.org/hub/ to get model name."
        )


def get_layers(model: torch.nn.Module, name_prefix: str = "") -> list[tuple[str, torch.nn.Module]]:
    """Recursively get all layers in a pytorch model.

    Args:
        model: the pytorch model to look for layers.
        name_prefix: Use to identify the parents layer. Defaults to "".

    Returns:
        a list of tuple containing the layer name and the layer.
    """
    children = list(model.named_children())

    if len(children) == 0:  # No child
        result = [(name_prefix, model)]
    else:
        # If have children, iterate over each child.
        result = []
        for child_name, child in children:
            # Recursively call get_layers on the child, appending the current
            # child's name to the name_prefix.
            layers = get_layers(child, name_prefix + "_" + child_name)
            result.extend(layers)

    return result


def define_and_register_hooks(model, device) -> dict:
"""
Define and register hooks with CUDA or CPU timing.
26 changes: 26 additions & 0 deletions jetson/power_logging/model/generate_model_summaries.sh
@@ -0,0 +1,26 @@
#!/bin/bash

# Directory to store results
RESULT_DIR="raw_data/prebuilt_models"

# List of models to process
MODELS=("alexnet" "vgg11" "vgg13" "vgg16" "vgg19" "mobilenet_v2" "mobilenet_v3_small" "mobilenet_v3_large" "resnet18" "resnet34" "resnet50" "resnet101" "resnet152" "lenet" "resnext50_32x4d" "resnext101_32x8d" "resnext101_64x4d" "convnext_tiny" "convnext_small" "convnext_base")

PYTHON_SCRIPT="save_model_summary.py"

for MODEL in "${MODELS[@]}"
do
    MODEL_OUTPUT_DIR="${RESULT_DIR}/${MODEL}"
    OUTPUT_FILE="${MODEL_OUTPUT_DIR}/model_summary.json"

    mkdir -p "$MODEL_OUTPUT_DIR"

    echo "Generating summary for model: $MODEL"
    python "$PYTHON_SCRIPT" --model "$MODEL" --output-file "$OUTPUT_FILE"

    if [ $? -eq 0 ]; then
        echo "Summary saved to: $OUTPUT_FILE"
    else
        echo "Failed to generate summary for model: $MODEL"
    fi
done
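Note that the shell script passes `--output-file`, but the Python entry point in this snapshot of the PR does not yet define that option (it hardcodes `model_summary.json`). A hypothetical sketch of the matching argparse addition, under the assumption that the flag should control the output path:

```python
import argparse

# Hypothetical: the option save_model_summary.py would need for the shell
# script's --output-file flag to work; not present in this PR snapshot.
parser = argparse.ArgumentParser(prog="Save Model Summary")
parser.add_argument(
    "--output-file",
    type=str,
    default="model_summary.json",
    help="Path of the JSON file to write the model summary to.",
)

# argparse derives the attribute name from the option string: --output-file -> args.output_file
args = parser.parse_args(
    ["--output-file", "raw_data/prebuilt_models/resnet18/model_summary.json"]
)
print(args.output_file)  # raw_data/prebuilt_models/resnet18/model_summary.json
```

`run()` would then write to `args.output_file` instead of the hardcoded name.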
51 changes: 51 additions & 0 deletions jetson/power_logging/model/model_utils.py
@@ -0,0 +1,51 @@
import torch
from typing import Any

from model.lenet import LeNet


def load_model(model_name: str, model_repo: str) -> Any:
    """Load a model from PyTorch Hub.

    Args:
        model_name: Name of the model.
            It should be the same as in PyTorch Hub.
        model_repo: PyTorch Hub repository (and version) to load from.

    Raises:
        ValueError: If loading the model from PyTorch Hub fails.

    Returns:
        A PyTorch model.
    """
    if model_name == "lenet":
        return LeNet()
    if model_name == "fcn_resnet50":
        return torch.hub.load(model_repo, model_name, pretrained=True)
    try:
        return torch.hub.load(model_repo, model_name)
    except Exception as err:
        raise ValueError(
            f"Model name: {model_name} is most likely incorrect. "
            "Please refer to https://pytorch.org/hub/ to get the model name."
        ) from err


def get_layers(
    model: torch.nn.Module, name_prefix: str = ""
) -> list[tuple[str, torch.nn.Module]]:
    """Recursively get all layers in a PyTorch model.

    Args:
        model: The PyTorch model to collect layers from.
        name_prefix: Used to identify the parent layer. Defaults to "".

    Returns:
        A list of tuples containing the layer name and the layer.
    """
    children = list(model.named_children())

    if len(children) == 0:  # A leaf module: no children.
        result = [(name_prefix, model)]
    else:
        # Recurse into each child, appending the child's name to the prefix.
        result = []
        for child_name, child in children:
            layers = get_layers(child, name_prefix + "_" + child_name)
            result.extend(layers)

    return result
85 changes: 85 additions & 0 deletions jetson/power_logging/model/save_model_summary.py
Member comment:

Top level comment: I'd split this file into a script under power_logging and the utility functions left here.
Just so that it can be properly called from the top level package (and the instructions for it included in the README)

Member comment:

On top of that, we should have a bash script that would run the said script for all the models and save the results to raw_data/prebuilt_models/{model_name}/model_summary.json
@@ -0,0 +1,85 @@
import torch
import json
import argparse

from model_utils import load_model, get_layers


def get_layer_info(model, input_shape):
    """Get key information of all layers within a model.

    Args:
        model: The PyTorch model.
        input_shape: Input size of the model.

    Returns:
        Information about each layer of the model.
    """
    model_info = {}
    test_input = torch.randn(*input_shape)
    hooks = []

    def register_hook(layer_name):
        def hook(module, input, output):
            model_info[layer_name] = {
                "input_shape": tuple(input[0].size()) if input else None,
                "output_shape": tuple(output.size()) if output is not None else None,
                "kernel_size": getattr(module, "kernel_size", None),
                "stride": getattr(module, "stride", None),
                "padding": getattr(module, "padding", None),
                "type": module.__class__.__name__,
            }
        return hook

    for layer_name, layer in get_layers(model):
        hooks.append(layer.register_forward_hook(register_hook(layer_name)))

    model.eval()
    with torch.no_grad():
        _ = model(test_input)

    for hook in hooks:
        hook.remove()

    return model_info


def run(args):
    model = load_model(args.model, args.model_repo)
    layer_info = get_layer_info(model, args.input_shape)

    print(json.dumps(layer_info, indent=4, separators=(",", ": "), ensure_ascii=False))

    output = "model_summary.json"
    with open(output, "w") as file:
        json.dump(layer_info, file, indent=4, separators=(",", ": "), ensure_ascii=False)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        prog="Save Model Summary",
        description="Save a summary of the model after benchmarking.",
    )
    parser.add_argument(
        "--model",
        type=str,
        default="resnet18",
        help="Specify the name of a pretrained CNN model from PyTorch Hub. "
        "For more information on PyTorch Hub visit: "
        "https://pytorch.org/hub/research-models",
    )
    parser.add_argument(
        "--model-repo",
        type=str,
        default="pytorch/vision:v0.10.0",
        help="Specify the path and version of the model repository from PyTorch Hub.",
    )
    parser.add_argument(
        "--input-shape",
        type=int,
        nargs="+",
        default=[1, 3, 224, 224],
        help="Input shape in BCHW order.",
    )

    args = parser.parse_args()
    run(args)
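The core trick in `get_layer_info` is the closure passed to `register_forward_hook`: each hook captures its layer's name and records shapes and attributes when the forward pass reaches that layer, and the returned handles let the hooks be removed afterwards. A torch-free sketch of that bookkeeping, where `FakeLayer` and `Handle` are hypothetical stand-ins for PyTorch's module/hook interface:

```python
class FakeLayer:
    """Hypothetical stand-in for a torch module's forward-hook interface."""

    def __init__(self):
        self._hooks = []

    def register_forward_hook(self, fn):
        self._hooks.append(fn)
        layer = self

        class Handle:
            # Mimics torch's RemovableHandle: detaches the hook on remove().
            def remove(self):
                layer._hooks.remove(fn)

        return Handle()

    def __call__(self, x):
        out = x * 2  # dummy "forward" computation
        for fn in self._hooks:
            fn(self, (x,), out)  # hook(module, input, output), as in torch
        return out


model_info = {}


def register_hook(layer_name):
    # Same closure pattern as get_layer_info: capture the layer name,
    # record details about the module when the hook fires.
    def hook(module, input, output):
        model_info[layer_name] = {
            "input": input[0],
            "output": output,
            "type": module.__class__.__name__,
        }
    return hook


layer = FakeLayer()
handle = layer.register_forward_hook(register_hook("layer_0"))
layer(3)
handle.remove()  # hooks are removed once the forward pass has run
print(model_info["layer_0"])  # {'input': 3, 'output': 6, 'type': 'FakeLayer'}
```

Removing the hooks after the single forward pass matters in the real script: otherwise every subsequent inference would keep rewriting `model_info`.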
25 changes: 24 additions & 1 deletion jetson/power_logging/uv.lock
Member comment:

uv needs locking dependencies again, after removing torchsummary from the dependencies list