[HWORKS-1113] Add docs for attaching model evaluation images to a model version
docs/user_guides/mlops/registry/model_evaluation_images.md
# How To Save Model Evaluation Images

## Introduction

In this guide, you will learn how to attach ==model evaluation images== to a model. Model evaluation images contain **confusion matrices**, **ROC curves**, or other graphs that help visualize model evaluation metrics. By attaching model evaluation images to your model version, other users can better understand the experiment results obtained during model training.

## Code

### Step 1: Connect to Hopsworks

```python
import hopsworks

project = hopsworks.login()

# Get the Hopsworks Model Registry handle
mr = project.get_model_registry()
```

### Step 2: Generate model evaluation figures

Generate a figure that visualizes a model metric.

```python
import pandas as pd
import seaborn
from sklearn.metrics import confusion_matrix

# Predict the training data using the trained model
y_pred_train = model.predict(X_train)

# Predict the test data using the trained model
y_pred_test = model.predict(X_test)

# Compute the confusion matrix for the test predictions
results = confusion_matrix(y_test, y_pred_test)

# Create a DataFrame for the confusion matrix results
df_confusion_matrix = pd.DataFrame(
    results,
    index=['True Normal', 'True Fraud'],
    columns=['Pred Normal', 'Pred Fraud'],
)

# Create a heatmap using seaborn with annotations
heatmap = seaborn.heatmap(df_confusion_matrix, annot=True)

# Get the figure and display it
fig = heatmap.get_figure()
fig.show()
```

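The snippet above assumes that `model`, `X_train`, `X_test`, and `y_test` already exist from a prior training step. As a minimal self-contained sketch (synthetic data and illustrative names, not part of the guide), the confusion-matrix construction can be tried end-to-end like this; the `seaborn.heatmap` call from the snippet then applies unchanged:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic binary "fraud" data; a stand-in for your real training set
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a simple classifier and predict the held-out test set
model = LogisticRegression().fit(X_train, y_train)
y_pred_test = model.predict(X_test)

# Build the labeled confusion-matrix DataFrame, as in the guide
results = confusion_matrix(y_test, y_pred_test)
df_confusion_matrix = pd.DataFrame(
    results,
    index=["True Normal", "True Fraud"],
    columns=["Pred Normal", "Pred Fraud"],
)
print(df_confusion_matrix)
```

Naming the `index` and `columns` arguments explicitly makes it clear which axis holds the true labels and which holds the predictions.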
### Step 3: Save the figures as images inside the model directory

Save the figure to an image file, and place it in a directory named `images` inside the model directory to be exported.

```python
import os

# Specify the directory name for saving the model and related artifacts
model_dir = "./model"

# Create a directory named 'images' for saving the model evaluation images
model_images_dir = model_dir + "/images"
os.makedirs(model_images_dir, exist_ok=True)

# Save the figure to an image file in the images directory
fig.savefig(model_images_dir + "/confusion_matrix.png")

# Register the model, exporting the contents of the model directory
py_model = mr.python.create_model(name="py_model")
py_model.save(model_dir)
```

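Confusion matrices are not the only evaluation image: the introduction also mentions ROC curves. A self-contained sketch (synthetic data and illustrative names, not part of the guide) of producing a ROC figure and saving it into the same `images` directory might look like this:

```python
import os

import matplotlib
matplotlib.use("Agg")  # headless backend; safe where no display is attached
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve

# Synthetic data; a stand-in for your real test set and trained model
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)
y_scores = model.predict_proba(X)[:, 1]

# Compute the ROC curve and the area under it
fpr, tpr, _ = roc_curve(y, y_scores)
roc_auc = auc(fpr, tpr)

# Plot the curve with a chance-level diagonal for reference
fig, ax = plt.subplots()
ax.plot(fpr, tpr, label=f"AUC = {roc_auc:.3f}")
ax.plot([0, 1], [0, 1], linestyle="--")
ax.set_xlabel("False positive rate")
ax.set_ylabel("True positive rate")
ax.legend()

# Save alongside the confusion matrix inside the model directory
model_images_dir = "./model/images"
os.makedirs(model_images_dir, exist_ok=True)
fig.savefig(os.path.join(model_images_dir, "roc_curve.png"))
```

Any figure saved under `images` before calling `py_model.save(...)` is exported with the model version in the same way.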
## Conclusion

In this guide you learned how to attach model evaluation images to a model, helping other users better understand the experiment results obtained during model training.