mediapipe Object detection notebook bug fix and re-formatting
PiperOrigin-RevId: 705849543
vertex-mg-bot authored and copybara-github committed Dec 16, 2024
1 parent 20138d9 commit ef3f725
Showing 1 changed file with 53 additions and 123 deletions.
@@ -114,16 +114,15 @@
"\n",
"REGION = \"\" # @param {type:\"string\"}\n",
"\n",
"! pip3 install --upgrade pip\n",
"! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n",
"\n",
"import datetime\n",
"import importlib\n",
"import json\n",
"import os\n",
"import subprocess\n",
"import uuid\n",
"\n",
"import tensorflow\n",
"from google.cloud import aiplatform\n",
"\n",
"common_util = importlib.import_module(\n",
@@ -195,62 +194,6 @@
"), f'{REGION} is not supported. It must be prefixed by \"us\", \"asia\", or \"europe\".'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "init_aip:mbsdk,all"
},
"source": [
"### Initialize Vertex AI SDK for Python\n",
"\n",
"Initialize the Vertex AI SDK for Python for your project."
]
},
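The code cell that actually performs this initialization is collapsed in this diff. As a point of reference, a minimal sketch of a typical initialization cell, assuming PROJECT_ID, REGION, and STAGING_BUCKET were set in the earlier setup cells:

```python
# Sketch only: the notebook's real initialization cell is not shown in this diff.
# Assumes PROJECT_ID, REGION, and STAGING_BUCKET are defined by the setup cells above.
from google.cloud import aiplatform

aiplatform.init(
    project=PROJECT_ID,              # Google Cloud project that owns the resources
    location=REGION,                 # Vertex AI region, e.g. "us-central1"
    staging_bucket=STAGING_BUCKET,   # Cloud Storage bucket used for job artifacts
)
```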
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "9wExiMUxFk91"
},
"outputs": [],
"source": [
"EVALUATION_RESULT_OUTPUT_DIRECTORY = os.path.join(STAGING_BUCKET, \"evaluation\")\n",
"EVALUATION_RESULT_OUTPUT_FILE = os.path.join(\n",
" EVALUATION_RESULT_OUTPUT_DIRECTORY, \"evaluation.json\"\n",
")\n",
"\n",
"EXPORTED_MODEL_OUTPUT_DIRECTORY = os.path.join(STAGING_BUCKET, \"model\")\n",
"EXPORTED_MODEL_OUTPUT_FILE = os.path.join(\n",
" EXPORTED_MODEL_OUTPUT_DIRECTORY, \"model.tflite\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "n6IFz75WGCam"
},
"source": [
"### Define training machine specs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "riG_qUokg0XZ"
},
"outputs": [],
"source": [
"TRAINING_JOB_DISPLAY_NAME = \"mediapipe_object_detector_%s\" % now\n",
"TRAINING_CONTAINER = f\"{REGION_PREFIX}-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/mediapipe-train\"\n",
"TRAINING_MACHINE_TYPE = \"n1-highmem-16\"\n",
"TRAINING_ACCELERATOR_TYPE = \"NVIDIA_TESLA_V100\"\n",
"TRAINING_ACCELERATOR_COUNT = 2"
]
},
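This commit moves these constants into the run-training cell further down, where they feed the worker_pool_specs of a Vertex AI CustomJob; most of that assembly is collapsed in the diff. A hedged sketch of the usual shape, with the container arguments left as a placeholder:

```python
# Hedged sketch of how the machine-spec constants typically end up in a CustomJob.
# The notebook's real worker_pool_specs (including the training args) are partly
# collapsed in this diff, so the container args below are only a placeholder.
from google.cloud import aiplatform

worker_pool_specs = [
    {
        "machine_spec": {
            "machine_type": TRAINING_MACHINE_TYPE,
            "accelerator_type": TRAINING_ACCELERATOR_TYPE,
            "accelerator_count": TRAINING_ACCELERATOR_COUNT,
        },
        "replica_count": 1,
        "container_spec": {
            "image_uri": TRAINING_CONTAINER,
            "args": [],  # the real args are built in the collapsed portion of the cell
        },
    }
]

job = aiplatform.CustomJob(
    display_name=TRAINING_JOB_DISPLAY_NAME,
    worker_pool_specs=worker_pool_specs,
    staging_bucket=STAGING_BUCKET,
)
# job.run()  # launching is handled by the actual run-training cell
```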
{
"cell_type": "markdown",
"metadata": {
@@ -337,19 +280,6 @@
"```"
]
},
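The collapsed markdown cell above ends with a fenced example of the expected annotation format. For orientation only, a rough COCO-style labels.json sketch written as a Python dict; the category and file names are placeholders, and the exact schema should be checked against the MediaPipe object detector documentation:

```python
# Rough sketch of a COCO-style labels.json, expressed as a Python dict for readability.
# Category and file names are placeholders; verify the exact schema the training
# container expects against the MediaPipe object detector documentation.
labels = {
    "categories": [
        {"id": 1, "name": "figurine_a"},   # placeholder class name
        {"id": 2, "name": "figurine_b"},   # placeholder class name
    ],
    "images": [
        {"id": 1, "file_name": "IMG_0001.jpg"},
    ],
    "annotations": [
        # bbox follows the COCO convention: [x_min, y_min, width, height] in pixels
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [100, 120, 180, 200]},
    ],
}
```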
{
"cell_type": "markdown",
"metadata": {
"id": "O32DU5RRGhdV"
},
"source": [
"### Configure training dataset\n",
"\n",
"Once you have completed preparing your data, you can begin fine-tuning a model to recognize the new objects, or classes, defined by your training data. The instructions below use the data prepared in the previous section to finetune an object detection model to recognize the two types of android figurines.\n",
"\n",
"You can leave the path to the test data empty if you do not have a separate test data set."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -359,28 +289,18 @@
},
"outputs": [],
"source": [
"# @title Configure training dataset\n",
"\n",
"# @markdown Once you have completed preparing your data, you can begin fine-tuning a model to recognize the new objects, or classes, defined by your training data. The instructions below use the data prepared in the previous section to finetune an object detection model to recognize the two types of android figurines.\n",
"\n",
"# @markdown You can leave the path to the test data empty if you do not have a separate test data set.\n",
"\n",
"training_data_path = \"gs://mediapipe-tasks/object_detector/android_figurine/train\" # @param {type:\"string\"}\n",
"validation_data_path = \"gs://mediapipe-tasks/object_detector/android_figurine/validation\" # @param {type:\"string\"}\n",
"test_data_path = \"\" # @param {type:\"string\"}\n",
"data_format = \"coco\" # @param [\"coco\", \"pascal_voc\"]"
]
},
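Before launching training, it can be worth confirming that the configured Cloud Storage paths actually contain data. An optional sanity check, assuming the gsutil CLI is available in the notebook runtime (it is on Colab and Vertex AI Workbench):

```python
# Optional sanity check: list a few objects under each configured dataset path.
# Assumes the gsutil CLI is available in the runtime; paths left empty are skipped.
for name, path in [
    ("training", training_data_path),
    ("validation", validation_data_path),
    ("test", test_data_path),
]:
    if path:
        print(f"--- {name} data at {path} ---")
        ! gsutil ls {path} | head -n 5
```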
{
"cell_type": "markdown",
"metadata": {
"id": "aaff6f5be7f6"
},
"source": [
"### Set fine-tuning options\n",
"\n",
"You can pick between different model architectures to further customize your training:\n",
"\n",
"* MobileNet-V2\n",
"* MobileNet-MultiHW-AVG\n",
"\n",
"To set the model architecture and other training parameters, adjust the following values:"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -390,6 +310,14 @@
},
"outputs": [],
"source": [
"# @title Set fine-tuning options\n",
"\n",
"# @markdown You can pick between different model architectures to further customize your training:\n",
"# @markdown * MobileNet-V2\n",
"# @markdown * MobileNet-MultiHW-AVG\n",
"\n",
"# @markdown To set the model architecture and other training parameters, adjust the following values:\n",
"\n",
"model_architecture = \"mobilenet_v2\" # @param [\"mobilenet_v2\", \"mobilenet_multihw_avg\"]\n",
"\n",
"# The learning rate to use for gradient descent training.\n",
@@ -427,7 +355,7 @@
"id": "HwcCjwlBTQIz"
},
"source": [
"### Run fine-tuning\n",
"### Run training\n",
"With your training dataset and fine-tuning options prepared, you are ready to start the fine-tuning process. This process is resource intensive and can take a few minutes to a few hours depending on your available compute resources. This process is resource intensive and can take a few minutes to a few hours depending on your available compute resources. On Vertex AI with GPU processing, the example fine-tuning below takes about 3 to 4 minutes.\n",
"\n",
"To begin the fine-tuning process, use the following code:\n"
@@ -442,9 +370,28 @@
},
"outputs": [],
"source": [
"# @title Run training job\n",
"\n",
"EVALUATION_RESULT_OUTPUT_DIRECTORY = os.path.join(STAGING_BUCKET, \"evaluation\")\n",
"EVALUATION_RESULT_OUTPUT_FILE = os.path.join(\n",
" EVALUATION_RESULT_OUTPUT_DIRECTORY, \"evaluation.json\"\n",
")\n",
"\n",
"EXPORTED_MODEL_OUTPUT_DIRECTORY = os.path.join(STAGING_BUCKET, \"model\")\n",
"EXPORTED_MODEL_OUTPUT_FILE = os.path.join(\n",
" EXPORTED_MODEL_OUTPUT_DIRECTORY, \"model.tflite\"\n",
")\n",
"\n",
"model_export_path = EXPORTED_MODEL_OUTPUT_DIRECTORY\n",
"evaluation_result_path = EVALUATION_RESULT_OUTPUT_DIRECTORY\n",
"\n",
"\n",
"TRAINING_JOB_DISPLAY_NAME = \"mediapipe_object_detector_%s\" % now\n",
"TRAINING_CONTAINER = f\"{REGION_PREFIX}-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/mediapipe-train\"\n",
"TRAINING_MACHINE_TYPE = \"n1-highmem-16\"\n",
"TRAINING_ACCELERATOR_TYPE = \"NVIDIA_TESLA_V100\"\n",
"TRAINING_ACCELERATOR_COUNT = 2\n",
"\n",
"worker_pool_specs = [\n",
" {\n",
" \"machine_spec\": {\n",
@@ -490,7 +437,7 @@
"common_util.check_quota(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" accelerator_type=TRAIN_ACCELERATOR_TYPE,\n",
" accelerator_type=TRAINING_ACCELERATOR_TYPE,\n",
" accelerator_count=1,\n",
" is_for_training=True,\n",
")\n",
Expand All @@ -514,17 +461,6 @@
"## Evaluate and export model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mV-Djz-frBni"
},
"source": [
"### Evaluate performance\n",
"\n",
"If you have specified test data, you can evaluate it on the test dataset and print the loss and coco metrics. The most important metric for evaluating the model performance is typically the \"AP\" coco metric for Average Precision.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -534,13 +470,25 @@
},
"outputs": [],
"source": [
"# @title Evaluate performance\n",
"\n",
"# @markdown If you have specified test data, you can evaluate it on the test dataset and print the loss and coco metrics. The most important metric for evaluating the model performance is typically the \"AP\" coco metric for Average Precision.\n",
"\n",
"\n",
"def get_evaluation_result(evaluation_result_path):\n",
" try:\n",
" with tensorflow.io.gfile.GFile(evaluation_result_path, \"r\") as input_file:\n",
" eval_result_filename = os.path.basename(evaluation_result_path)\n",
" subprocess.check_output(\n",
" [\"gsutil\", \"cp\", evaluation_result_path, eval_result_filename],\n",
" stderr=subprocess.STDOUT,\n",
" )\n",
" with open(eval_result_filename, \"r\") as input_file:\n",
" evalutation_result = json.loads(input_file.read())\n",
" return evalutation_result[\"loss\"], evalutation_result[\"coco_metrics\"]\n",
" except:\n",
" print(\"Evaluation result not found. Did you provide a test dataset?\")\n",
" print(\n",
" \"Evaluation result not found. Verify that the test dataset has been provided.\"\n",
" )\n",
" return None\n",
"\n",
"\n",
@@ -551,16 +499,6 @@
" print(f\"Validation coco metrics: {evaluation_result[1]}\")"
]
},
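As the cell notes, the headline number is usually the COCO "AP" value. A small follow-up sketch that pulls it out of the metrics returned above; treat the exact "AP" key name as an assumption about the evaluation.json schema written by the training container:

```python
# Follow-up sketch: surface Average Precision from the metrics returned above.
# Assumes get_evaluation_result() succeeded; the "AP" key name is an assumption
# about the schema of the evaluation.json produced by the training container.
if evaluation_result is not None:
    loss, coco_metrics = evaluation_result
    print(f"Loss: {loss}")
    print(f"Average Precision (AP): {coco_metrics.get('AP')}")
```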
{
"cell_type": "markdown",
"metadata": {
"id": "g0BGaofgsMsy"
},
"source": [
"### Export model\n",
"After fine-tuning and evaluating the model, you can save it as Tensorflow Lite model, try it out in the [Object Detector](https://mediapipe-studio.webapps.google.com/demo/object_detector) demo in MediaPipe Studio or integrate it with your application by following the [Object detection task guide](https://developers.google.com/mediapipe/solutions/vision/object_detector). The exported model also includes metadata and the label map."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -570,19 +508,11 @@
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"\n",
"def copy_model(model_source, model_dest):\n",
" ! gsutil cp {model_source} {model_dest}\n",
"\n",
"\n",
"copy_model(EXPORTED_MODEL_OUTPUT_FILE, \"object_detection_model.tflite\")\n",
"# @title Export model\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
" from google.colab import files\n",
"# @markdown After fine-tuning and evaluating the model, you can save it as Tensorflow Lite model, try it out in the [Object Detector](https://mediapipe-studio.webapps.google.com/demo/object_detector) demo in MediaPipe Studio or integrate it with your application by following the [Object detection task guide](https://developers.google.com/mediapipe/solutions/vision/object_detector). The exported model also includes metadata and the label map.\n",
"\n",
" files.download(\"object_detection_model.tflite\")"
"! gsutil cp $EXPORTED_MODEL_OUTPUT_FILE \"object_detection_model.tflite\""
]
},
{
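Beyond the scope of this commit, a quick way to smoke-test the exported object_detection_model.tflite locally is the MediaPipe Tasks Python API. A hedged sketch, assuming the mediapipe package is installed and a sample image named test.jpg is available:

```python
# Hedged local smoke test for the exported model; not part of this commit.
# Assumes `pip install mediapipe` has been run and "test.jpg" exists locally.
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

options = vision.ObjectDetectorOptions(
    base_options=mp_python.BaseOptions(model_asset_path="object_detection_model.tflite"),
    score_threshold=0.3,  # keep only reasonably confident detections
)
detector = vision.ObjectDetector.create_from_options(options)

image = mp.Image.create_from_file("test.jpg")
result = detector.detect(image)
for detection in result.detections:
    category = detection.categories[0]
    print(category.category_name, category.score, detection.bounding_box)
```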
