diff --git a/docs/source/booklets/advanced.rst b/docs/source/booklets/advanced.rst index 0acd5659..536552f5 100644 --- a/docs/source/booklets/advanced.rst +++ b/docs/source/booklets/advanced.rst @@ -13,10 +13,6 @@ :titlesonly: /programming_resources/vision/vision_overview/vision-overview - /programming_resources/vision/tensorflow_pp_2022/tensorflow_pp_2022 - /programming_resources/vision/blocks_tfod_opmode/blocks-tfod-opmode - /programming_resources/vision/java_tfod_opmode/java-tfod-opmode - /programming_resources/vision/tensorflow_ff_2021/tensorflow-ff-2021 /programming_resources/vision/webcam_controls/index Camera Calibration diff --git a/docs/source/programming_resources/index.rst b/docs/source/programming_resources/index.rst index f1a3badd..4e41615b 100644 --- a/docs/source/programming_resources/index.rst +++ b/docs/source/programming_resources/index.rst @@ -1,7 +1,7 @@ .. meta:: :title: Programming Resources, FTC Docs :description: Official Programming Resources for FIRST Tech Challenge - :keywords: Blocks, FTC, FIRST Tech Challenge, On Bot Java, Android Studios, Control Hub, Robot Controller, Driver Station, FTC Control System, Programming Resources + :keywords: Blocks, FTC, FIRST Tech Challenge, On Bot Java, Android Studio, Control Hub, Robot Controller, Driver Station, FTC Control System, Programming Resources Programming Resources ===================== @@ -83,23 +83,6 @@ Topics for programming with AprilTags AprilTag Test Images <../apriltag/opmode_test_images/opmode-test-images> ../apriltag/apriltag_tips/decode_apriltag/decode-apriltag -TensorFlow Programming -~~~~~~~~~~~~~~~~~~~~~~ - -Topics for programming with TensorFlow Object Detection (TFOD) - -.. toctree:: - :maxdepth: 1 - :titlesonly: - - vision/tensorflow_cs_2023/tensorflow-cs-2023 - vision/tensorflow_pp_2022/tensorflow_pp_2022 - vision/tensorflow_ff_2021/tensorflow-ff-2021 - vision/blocks_tfod_opmode/blocks-tfod-opmode - vision/blocks_tfod_opmode_custom/blocks-tfod-opmode-custom - vision/java_tfod_opmode/java-tfod-opmode - vision/java_tfod_opmode_custom/java-tfod-opmode-custom - Vision Programming ~~~~~~~~~~~~~~~~~~~ @@ -152,5 +135,5 @@ Advanced Topics for Programmers Additional *FIRST* Website Resources ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- `FIRST Website Programming Resources Link `__ +- `FIRST Website Programming Resources Link `__ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/blocks-tfod-opmode.rst b/docs/source/programming_resources/vision/blocks_tfod_opmode/blocks-tfod-opmode.rst deleted file mode 100644 index 132d5602..00000000 --- a/docs/source/programming_resources/vision/blocks_tfod_opmode/blocks-tfod-opmode.rst +++ /dev/null @@ -1,289 +0,0 @@ -Blocks Sample OpMode for TFOD -============================= - -Introduction ------------- - -This tutorial describes the FTC Blocks Sample OpMode for TensorFlow -Object Detection (TFOD). This Sample, called -“ConceptTensorFlowObjectDetection”, can recognize one or more official -game elements and provide their visible size and position. - -For the 2023-2024 game CENTERSTAGE, the game element is a hexagonal -white **Pixel**. The FTC SDK software contains a TFOD model of this -object, ready for recognition. - -For extra points, teams may instead use their own custom TFOD models of -**Team Props**. 
That option is described here: - -- :doc:`Blocks Custom Model Sample OpMode for TFOD <../blocks_tfod_opmode_custom/blocks-tfod-opmode-custom>` - -Creating the OpMode -------------------- - -At the FTC Blocks browser interface, click on the “Create New OpMode” -button to display the Create New OpMode dialog box. - -Specify a name for your new OpMode. Select -“ConceptTensorFlowObjectDetection” as the Sample OpMode that will be the -template for your new OpMode. - -If no webcam is configured for your REV Control Hub, the dialog box will -display a warning message (shown here). You can ignore this warning -message if you will use the built-in camera of an Android RC phone. -Click “OK” to create your new OpMode. - -.. figure:: images/030-Create-New-OpMode.png - :align: center - :width: 75% - :alt: Creating a new OpMode - - Creating a New OpMode - -The new OpMode should appear in edit mode in your browser. - -.. figure:: images/040-Sample-OpMode.png - :align: center - :width: 75% - :alt: Sample OpMode - - Sample OpMode - -By default, the Sample OpMode assumes you are using a webcam, configured -as “Webcam 1”. If you are using the built-in camera on your Android RC -phone, change the USE_WEBCAM Boolean from ``true`` to ``false`` (green -arrow above). - -Adjusting the Zoom Factor -------------------------- - -If the object to be recognized will be more than roughly 2 feet (61 cm) -from the camera, you might want to set the digital zoom factor to a -value greater than 1. This tells TensorFlow to use an artificially -magnified portion of the image, which may offer more accurate -recognitions at greater distances. - -.. figure:: images/150-setZoom.png - :align: center - :width: 75% - :alt: Setting Zoom - - Setting the Zoom Factor - -Pull out the **``setZoom``** Block, found in the toolbox or palette -called “Vision”, under “TensorFlow” and “TfodProcessor” (see green oval -above). Change the magnification value as desired (green arrow). - -On REV Control Hub, the “Vision” menu appears only when the active robot -configuration contains a webcam, even if not plugged in. - -This ``setZoom`` Block can be placed in the INIT section of your OpMode, - -- immediately after the call to the ``initTfod`` Function, or -- as the very last Block inside the ``initTfod`` Function. - -This Block is **not** part of the Processor Builder pattern, so the Zoom -factor can be set to other values during the OpMode, if desired. - -The “zoomed” region can be observed in the DS preview (Camera Stream) -and the RC preview (LiveView), surrounded by a greyed-out area that is -**not evaluated** by the TFOD Processor. - -Other Adjustments ------------------ - -The Sample OpMode uses a default **minimum confidence** level of 75%. -The TensorFlow Processor needs to have a confidence level of 75% or -higher, to consider an object as “recognized” in its field of view. - -You can see the object name and actual confidence (as a **decimal**, -e.g. 0.75) near the Bounding Box, in the Driver Station preview (Camera -Stream) and Robot Controller preview (Liveview). - -.. figure:: images/160-min-confidence.png - :align: center - :width: 75% - :alt: Setting Minimum Confidence - - Setting the Minimum Confidence - -Pull out the **``setMinResultConfidence``** Block, found in the toolbox -or palette called “Vision”, under “TensorFlow” and “TfodProcessor”. -Adjust this parameter to a higher value if you would like the processor -to be more selective in identifying an object. 
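For teams cross-referencing the Java samples, the same adjustment is a single method call on the TFOD Processor. Here is a minimal sketch, assuming a ``TfodProcessor`` variable named ``tfod`` as created in the SDK's Java Sample OpModes; the 0.80 value is illustrative:

.. code:: java

   // Keep only recognitions reported with at least 80% confidence.
   tfod.setMinResultConfidence((float) 0.80);
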
- -Another option is to define, or clip, a **custom area for TFOD -evaluation**, unlike ``setZoom`` which is always centered. - -.. figure:: images/170-clipping-margins.png - :align: center - :width: 75% - :alt: Setting Clipping Margins - - Setting Clipping Margins - -From the same Blocks palette, pull out the **``setClippingMargins``** -Block. Adjust the four margins as desired, in units of pixels. - -These Blocks can be placed in the INIT section of your OpMode, - -- immediately after the call to the ``initTfod`` Function, or -- as the very last Blocks inside the ``initTfod`` Function. - -As with ``setZoom``, these Blocks are **not** part of the Processor -Builder pattern, so they can be set to other values during the OpMode, -if desired. - -Command Flow in this Sample ---------------------------- - -After the ``waitForStart`` Block, this OpMode contains the main program -loop: - -.. figure:: images/180-main-loop.png - :align: center - :width: 75% - :alt: Main Loop - - OpMode Main Loop - -This loop repeatedly calls a Blocks Function called -**``telemetryTfod``**. That Function is the heart of the OpMode, seeking -and evaluating recognized TFOD objects, and displaying DS Telemetry -about those objects. It will be discussed below, in the next section. - -The main loop also allows the user to press the ``Dpad Down`` button on -the gamepad, to temporarily stop the streaming session. This -``.stopStreaming`` Block pauses the flow and processing of camera -frames, thus **conserving CPU resources**. - -Pressing the ``Dpad Up`` button (``.resumeStreaming``) allows the -processing to continue. The on-and-off actions can be observed in the RC -preview (LiveView), described further below. - -These two commands appear here in this Sample OpMode, to spread -awareness of one tool for managing CPU and bandwidth resources. The FTC -VisionPortal offers over 10 such controls, :ref:`described here -`. - -Processing TFOD Recognitions ----------------------------- - -The Function called **``telemetryTfod``** is the heart of the OpMode, -seeking and evaluating recognized TFOD objects, and displaying DS -Telemetry about those objects. - -.. figure:: images/190-telemetryTfod.png - :align: center - :width: 75% - :alt: Telemetry TFOD - - Telemetry TFOD - -The first Block uses the TFOD Processor to gather and store all -recognitions in a List, called ``myTfodRecognitions``. - -The green “FOR Loop” iterates through that List, handling each item, one -at a time. Here the “handling” is simply displaying certain TFOD fields -to DS Telemetry. - -For competition, you want to do more than display Telemetry, and you -want to exit the main loop at some point. These code modifications are -discussed in another section below. - -Testing the OpMode ------------------- - -Click the “Save OpMode” button, then run the OpMode from the Driver -Station. The Robot Controller should use the CENTERSTAGE TFOD model to -recognize and track the white Pixel. - -For a preview during the INIT phase, touch the Driver Station’s 3-dot -menu and select **Camera Stream**. - -.. figure:: images/200-Sample-DS-Camera-Stream.png - :align: center - :width: 75% - :alt: Sample DS Camera Stream - - Sample DS Camera Stream - -Camera Stream is not live video; tap to refresh the image. Use the small -white arrows at lower right to expand or revert the preview size. To -close the preview, choose 3-dots and Camera Stream again. - -After touching the DS START button, the OpMode displays Telemetry for -any recognized Pixel(s): - -.. 
figure:: images/210-Sample-DS-Telemetry.png - :align: center - :width: 75% - :alt: Sample DS Telemetry - - Sample DS Telemetry - -The above Telemetry shows the label name, and TFOD confidence level. It -also gives the **center location** and **size** (in pixels) of the -Bounding Box, which is the colored rectangle surrounding the recognized -object. - -The pixel origin (0, 0) is at the top left corner of the image. - -Before and after touching DS START, the Robot Controller provides a -video preview called **LiveView**. - -.. figure:: images/240-Sample-RC-LiveView.png - :align: center - :width: 75% - :alt: Sample RC LiveView - - Sample RC LiveView - -For Control Hub (with no built-in screen), plug in an HDMI monitor or -learn about ``scrcpy`` (https://github.com/Genymobile/scrcpy). The -above image is a LiveView screenshot via ``scrcpy``. - -If you don’t have a physical Pixel on hand, try pointing the camera at -this image: - -.. figure:: images/300-Sample-Pixel.png - :align: center - :width: 75% - :alt: Sample Pixel - - Sample Pixel - -Modifying the Sample --------------------- - -In this Sample OpMode, the main loop ends only upon touching the DS Stop -button. For competition, teams should **modify this code** in at least -two ways: - -- for a significant recognition, take action or store key information – - inside the FOR loop - -- end the main loop based on your criteria, to continue the OpMode - -As an example, you might set a Boolean variable ``isPixelDetected`` to -``true``, if a significant recognition has occurred. - -You might also evaluate and store which randomized Spike Mark (red or -blue tape stripe) holds the white Pixel. - -Regarding the main loop, it could end after the camera views all three -Spike Marks, or after your code provides a high-confidence result. If -the camera’s view includes more than one Spike Mark position, perhaps -the white Pixel’s **Bounding Box** size and location could be useful. -Teams should consider how long to seek an acceptable recognition, and -what to do otherwise. - -In any case, the OpMode should exit the main loop and continue running, -using any stored information. - -Best of luck this season! 
- -============ - -Questions, comments and corrections to westsiderobotics@verizon.net - diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030-Create-New-OpMode.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030-Create-New-OpMode.png deleted file mode 100644 index 1b543afd..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/030-Create-New-OpMode.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040-Sample-OpMode.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040-Sample-OpMode.png deleted file mode 100644 index ac557b83..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/040-Sample-OpMode.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/150-setZoom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/150-setZoom.png deleted file mode 100644 index 6ed836b8..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/150-setZoom.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/160-min-confidence.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/160-min-confidence.png deleted file mode 100644 index f05d34af..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/160-min-confidence.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/170-clipping-margins.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/170-clipping-margins.png deleted file mode 100644 index 2fec67b4..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/170-clipping-margins.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/180-main-loop.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/180-main-loop.png deleted file mode 100644 index 10e18f77..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/180-main-loop.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/190-telemetryTfod.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/190-telemetryTfod.png deleted file mode 100644 index cd07d688..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/190-telemetryTfod.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/200-Sample-DS-Camera-Stream.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/200-Sample-DS-Camera-Stream.png deleted file mode 100644 index 5b26217c..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/200-Sample-DS-Camera-Stream.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/210-Sample-DS-Telemetry.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/210-Sample-DS-Telemetry.png deleted file mode 100644 index a920c3bc..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/210-Sample-DS-Telemetry.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/240-Sample-RC-LiveView.png 
b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/240-Sample-RC-LiveView.png deleted file mode 100644 index 731ca5ea..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/240-Sample-RC-LiveView.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/300-Sample-Pixel.png b/docs/source/programming_resources/vision/blocks_tfod_opmode/images/300-Sample-Pixel.png deleted file mode 100644 index 972274e5..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode/images/300-Sample-Pixel.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/blocks-tfod-opmode-custom.rst b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/blocks-tfod-opmode-custom.rst deleted file mode 100644 index 50f283e4..00000000 --- a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/blocks-tfod-opmode-custom.rst +++ /dev/null @@ -1,311 +0,0 @@ -Blocks Custom Model Sample OpMode for TFOD -========================================== - -Introduction ------------- - -This tutorial uses an FTC Blocks Sample OpMode to load and recognize a -**custom TensorFlow inference model**. - -- In this example, the “custom model” is actually the standard trained - model of the 2023-2024 CENTERSTAGE game element called a **Pixel**. This - does not affect the process described for a custom model. - -Downloading the Model ---------------------- - -The Robot Controller allows you to load a trained inference model in the -form of a TensorFlow Lite (``.tflite``) file. - -Here we use the standard FTC ``.tflite`` file from CENTERSTAGE -(2023-2024), available on GitHub at the following link: - -- `CENTERSTAGE TFLite File `__ - - -.. note:: - Very advanced teams could use Google's TensorFlow Object Detection API - (https://github.com/tensorflow/models/tree/master/research/object_detection) - to create their own custom inference model. - -Click the “Download Raw File” button to download the -``CenterStage.tflite`` file from GitHub to your local device -(e.g. laptop). See the green arrow. - -.. figure:: images/012-Centerstage-public-repo.png - :align: center - :width: 85% - :alt: Public Repo for CenterStage file - - Public repo for CenterStage tflite file - -Uploading to the Robot Controller ---------------------------------- - -After downloading the file to your laptop, you need to upload it to the -Robot Controller. Connect your laptop to your Robot Controller’s -wireless network and navigate to the FTC “Manage” page: - -.. figure:: images/020-Manage-page.png - :align: center - :width: 85% - :alt: Manage Page - - Example of the Manage Page - -Scroll down and click on “Manage TensorFlow Lite Models”. - -.. figure:: images/030-Manage-TFLite-Models.png - :align: center - :width: 85% - :alt: Managing TFLITE Models - - Manage TFLITE Models Link - -Now click the “Upload Models” button. - -.. figure:: images/040-Upload-Models.png - :align: center - :width: 85% - :alt: Upload TFLITE Model - - Upload TFLITE Models Button - -Click “Choose Files”, and use the dialog box to find and select the -downloaded ``CenterStage.tflite`` file. - -.. figure:: images/050-Choose-Files.png - :align: center - :width: 85% - :alt: Upload TFLITE Model - - Upload TFLITE Models Button - -Now the file will upload to the Robot Controller. The file will appear -in the list of TensorFlow models available for use in OpModes. - -.. 
figure:: images/060-Centerstage-tflite.png - :align: center - :width: 85% - :alt: Model Listed - - TFLITE Model Listed - -Creating the OpMode -------------------- - -Click on the “Blocks” tab at the top of the screen to navigate to the -Blocks Programming page. Click on the “Create New OpMode” button to -display the Create New OpMode dialog box. - -Specify a name for your new OpMode. Select -“ConceptTensorFlowObjectDetectionCustomModel” as the Sample OpMode that -will be the template for your new OpMode. - -If no webcam is configured for your REV Control Hub, the dialog box will -display a warning message (shown here). You can ignore this warning -message if you will use the built-in camera of an Android RC phone. -Click “OK” to create your new OpMode. - -.. figure:: images/createNewOpMode.png - :align: center - :width: 85% - :alt: Create OpMode - - Create New OpMode - -The new OpMode should appear in edit mode in your browser. - -.. figure:: images/100-Sample-OpMode-header.png - :align: center - :width: 85% - :alt: Sample OpMode - - Sample OpMode - -By default, the Sample OpMode assumes you are using a webcam, configured -as “Webcam 1”. If you are using the built-in camera on your Android RC -phone, change the USE_WEBCAM Boolean from ``true`` to ``false`` (green -arrow above). - -Loading the Custom Model ------------------------- - -Scroll down in the OpMode, to the Blocks Function called “initTfod”. - -In the Block with “.setModelFileName”, change the filename from -“MyCustomModel.tflite” to ``CenterStage.tflite`` – or other filename -that you uploaded to the Robot Controller. The filename must be an exact -match. See green oval below. - -.. figure:: images/120-Init-Tfod.png - :align: center - :width: 85% - :alt: Init TFOD Function - - Init TFOD Function - -When loading an inference model, you must specify a list of **labels** -that describe the known objects in the model. This is done in the next -Block, with “.setModelLabels”. - -This Sample OpMode assumes a default model with two known objects, -labeled “ball” and “cube”. The CENTERSTAGE model contains only one -object, labeled “Pixel”. - -For competition, the **Team Prop** label names might be -``myTeamProp_Red`` and/or ``myTeamProp_Blue``. - -The number of labels can be changed by clicking the small blue gear icon -for the “create list with” Block (see yellow arrow). - -.. figure:: images/145-blue-gear-delete.png - :align: center - :width: 85% - :alt: Blue Gear Delete - - Blue Gear Delete - -In the pop-up layout balloon, click on one of the list items to select -it (green arrow above). Then remove it, by pressing Delete (on -keyboard), or by dragging it to the balloon’s left-side grey zone. - -After editing that purple “list” structure, click the blue gear icon -again to close the layout balloon. Edit the remaining label to “Pixel”. - -When complete, the edited Blocks should look like this: - -.. figure:: images/147-Centerstage-Blocks.png - :align: center - :width: 85% - :alt: Adding Pixel Label - - Adding Pixel Label - -Adjusting the Zoom Factor -------------------------- - -If the object to be recognized will be more than roughly 2 feet (61 cm) -from the camera, you might want to set the digital zoom factor to a -value greater than 1. This tells TensorFlow to use an artificially -magnified portion of the image, which may offer more accurate -recognitions at greater distances. - -.. 
figure:: images/150-setZoom.png - :align: center - :width: 85% - :alt: Set Zoom - - Set Zoom - -Pull out the **“setZoom” Block**, found in the toolbox or palette called -“Vision”, under “TensorFlow” and “TfodProcessor” (see green oval above). -Change the magnification value as desired (green arrow). - -On REV Control Hub, the “Vision” menu appears only when the active robot -configuration contains a webcam, even if not plugged in. - -Place this Block immediately after the Block -``set myTfodProcessor to call myTfodProcessorBuilder.build``. This Block -is **not** part of the Processor Builder pattern, so the Zoom factor can -be set to other values during the OpMode, if desired. - -The “zoomed” region can be observed in the DS preview (Camera Stream) -and the RC preview (LiveView), surrounded by a greyed-out area that is -**not evaluated** by the TFOD Processor. - -Testing the OpMode ------------------- - -Click the “Save OpMode” button, then run the OpMode from the Driver -Station. The Robot Controller should use the new CENTERSTAGE inference -model to recognize and track the Pixel game element. - -For a preview during the INIT phase, touch the Driver Station’s 3-dot -menu and select **Camera Stream**. - -.. figure:: images/200-DS-Camera-Stream-Centerstage.png - :align: center - :width: 85% - :alt: DS Camera Stream - - DS Camera Stream - -Camera Stream is not live video; tap to refresh the image. Use the small -white arrows at lower right to expand or revert the preview size. To -close the preview, choose 3-dots and Camera Stream again. - -After touching the DS START button, the OpMode displays Telemetry for -any recognized Pixel(s): - -.. figure:: images/210-DS-Telemetry-Centerstage.png - :align: center - :width: 85% - :alt: DS Telemetry - - DS Telemetry - -The above Telemetry shows the label name, and TFOD confidence level. It -also gives the **center location** and **size** (in pixels) of the -Bounding Box, which is the colored rectangle surrounding the recognized -object. - -The pixel origin (0, 0) is at the top left corner of the image. - -Before and after touching DS START, the Robot Controller provides a -video preview called **LiveView**. - -.. figure:: images/240-RC-LiveView-Centerstage.png - :align: center - :width: 85% - :alt: RC LiveView - - RC LiveView - -For Control Hub (with no built-in screen), plug in an HDMI monitor or -learn about ``scrcpy`` (https://github.com/Genymobile/scrcpy). The -above image is a LiveView screenshot via ``scrcpy``. - -If you don’t have a physical Pixel on hand, try pointing the camera at -this image: - -.. figure:: images/300-Pixel.png - :align: center - :width: 85% - :alt: Sample Pixel - - Sample Pixel - -Modifying the Sample --------------------- - -In this Sample OpMode, the main loop ends only upon touching the DS Stop -button. For competition, teams should **modify this code** in at least -two ways: - -- for a significant recognition, take action or store key information – - inside the FOR loop - -- end the main loop based on your criteria, to continue the OpMode - -As an example, you might set a Boolean variable ``isTeamPropDetected`` -to ``true``, if a significant recognition has occurred. - -You might also evaluate and store which randomized Spike Mark (red or -blue tape stripe) holds the Team Prop. - -Regarding the main loop, it could end after the camera views all three -Spike Marks, or after your code provides a high-confidence result. 
If -the camera’s view includes more than one Spike Mark position, perhaps -the Team Prop’s **Bounding Box** size and location could be useful. -Teams should consider how long to seek an acceptable recognition, and -what to do otherwise. - -In any case, the OpMode should exit the main loop and continue running, -using any stored information. - -Best of luck this season! - -============ - -Questions, comments and corrections to westsiderobotics@verizon.net diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/010-Centerstage-repo.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/010-Centerstage-repo.png deleted file mode 100644 index bb1d131e..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/010-Centerstage-repo.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/012-Centerstage-public-repo.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/012-Centerstage-public-repo.png deleted file mode 100644 index 61db2e13..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/012-Centerstage-public-repo.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/020-Manage-page.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/020-Manage-page.png deleted file mode 100644 index 533b5a47..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/020-Manage-page.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/030-Manage-TFLite-Models.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/030-Manage-TFLite-Models.png deleted file mode 100644 index fd45aa42..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/030-Manage-TFLite-Models.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/040-Upload-Models.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/040-Upload-Models.png deleted file mode 100644 index 88f4a8ee..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/040-Upload-Models.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/050-Choose-Files.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/050-Choose-Files.png deleted file mode 100644 index 46c3abe3..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/050-Choose-Files.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/060-Centerstage-tflite.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/060-Centerstage-tflite.png deleted file mode 100644 index 397c17c1..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/060-Centerstage-tflite.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/100-Sample-OpMode-header.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/100-Sample-OpMode-header.png deleted file mode 100644 index be40e1e9..00000000 Binary files 
a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/100-Sample-OpMode-header.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/120-Init-Tfod.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/120-Init-Tfod.png deleted file mode 100644 index 9a0e2c6d..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/120-Init-Tfod.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/140-blue-gear.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/140-blue-gear.png deleted file mode 100644 index 50a0b9f3..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/140-blue-gear.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/145-blue-gear-delete.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/145-blue-gear-delete.png deleted file mode 100644 index 2139d64c..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/145-blue-gear-delete.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/147-Centerstage-Blocks.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/147-Centerstage-Blocks.png deleted file mode 100644 index 258723a6..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/147-Centerstage-Blocks.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/150-setZoom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/150-setZoom.png deleted file mode 100644 index 6ed836b8..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/150-setZoom.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/200-DS-Camera-Stream-Centerstage.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/200-DS-Camera-Stream-Centerstage.png deleted file mode 100644 index 8d725a6e..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/200-DS-Camera-Stream-Centerstage.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/210-DS-Telemetry-Centerstage.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/210-DS-Telemetry-Centerstage.png deleted file mode 100644 index 29b35d24..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/210-DS-Telemetry-Centerstage.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/240-RC-LiveView-Centerstage.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/240-RC-LiveView-Centerstage.png deleted file mode 100644 index e1500e66..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/240-RC-LiveView-Centerstage.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/300-Pixel.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/300-Pixel.png deleted file mode 100644 index 
972274e5..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/300-Pixel.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/changeLabels.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/changeLabels.png deleted file mode 100644 index 98048f7c..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/changeLabels.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/clickOnGear.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/clickOnGear.png deleted file mode 100644 index 9f653e55..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/clickOnGear.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/createNewOpMode.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/createNewOpMode.png deleted file mode 100644 index e3a10138..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/createNewOpMode.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/dialogBox.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/dialogBox.png deleted file mode 100644 index 8590eb99..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/dialogBox.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/downloadTflite.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/downloadTflite.png deleted file mode 100644 index 9f31b4c5..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/downloadTflite.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/manage.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/manage.png deleted file mode 100644 index ff1bd386..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/manage.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/selectFile.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/selectFile.png deleted file mode 100644 index 9d282eea..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/selectFile.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/setZoom.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/setZoom.png deleted file mode 100644 index 29fe2f11..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/setZoom.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/upload.png b/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/upload.png deleted file mode 100644 index ef278387..00000000 Binary files a/docs/source/programming_resources/vision/blocks_tfod_opmode_custom/images/upload.png and /dev/null differ diff --git 
a/docs/source/programming_resources/vision/java_tfod_opmode/images/010-TFOD-recognition.png b/docs/source/programming_resources/vision/java_tfod_opmode/images/010-TFOD-recognition.png deleted file mode 100644 index 61389e56..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode/images/010-TFOD-recognition.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode/images/020-New-File.png b/docs/source/programming_resources/vision/java_tfod_opmode/images/020-New-File.png deleted file mode 100644 index f9d4b533..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode/images/020-New-File.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode/images/040-Sample-Open.png b/docs/source/programming_resources/vision/java_tfod_opmode/images/040-Sample-Open.png deleted file mode 100644 index f1b4479f..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode/images/040-Sample-Open.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode/images/200-Sample-DS-Camera-Stream.png b/docs/source/programming_resources/vision/java_tfod_opmode/images/200-Sample-DS-Camera-Stream.png deleted file mode 100644 index 5b26217c..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode/images/200-Sample-DS-Camera-Stream.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode/images/210-Sample-DS-Telemetry.png b/docs/source/programming_resources/vision/java_tfod_opmode/images/210-Sample-DS-Telemetry.png deleted file mode 100644 index a920c3bc..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode/images/210-Sample-DS-Telemetry.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode/images/240-Sample-RC-LiveView.png b/docs/source/programming_resources/vision/java_tfod_opmode/images/240-Sample-RC-LiveView.png deleted file mode 100644 index 0171f456..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode/images/240-Sample-RC-LiveView.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode/images/300-Sample-Pixel.png b/docs/source/programming_resources/vision/java_tfod_opmode/images/300-Sample-Pixel.png deleted file mode 100644 index 972274e5..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode/images/300-Sample-Pixel.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode/java-tfod-opmode.rst b/docs/source/programming_resources/vision/java_tfod_opmode/java-tfod-opmode.rst deleted file mode 100644 index 8e5b6373..00000000 --- a/docs/source/programming_resources/vision/java_tfod_opmode/java-tfod-opmode.rst +++ /dev/null @@ -1,409 +0,0 @@ -Java Easy Sample OpMode for TFOD -================================ - -Introduction ------------- - -This tutorial describes the “Easy” version of the FTC Java Sample OpMode -for TensorFlow Object Detection (TFOD). - -This Sample, called “ConceptTensorFlowObjectDetectionEasy.java”, can -recognize official FTC game elements and provide their visible size and -position. It uses standard/default TFOD settings. - -For the 2023-2024 game CENTERSTAGE, the game element is a hexagonal -white **Pixel**. The FTC SDK software contains a TFOD model of this -object, ready for recognition. - -.. 
figure:: images/010-TFOD-recognition.png - :align: center - :width: 85% - :alt: TFOD Recognition - - Sample TFOD Recognition - -For extra points, teams may instead use their own custom TFOD models of -**Team Props**. That option is described here: - -- :doc:`Java Custom Model Sample OpMode for TFOD <../java_tfod_opmode_custom/java-tfod-opmode-custom>` - -This tutorial shows **OnBot Java** screens. Users of **Android Studio** -can follow along, since the Sample OpMode is exactly the same. - -A different Sample OpMode shows how to set **TFOD options**, unlike the -“Easy” version which uses only standard/default TFOD settings. That -version, called “ConceptTensorFlowObjectDetection.java” has good -commenting to guide users in the Java **Builder pattern** for custom -settings. - -The “Easy” OpMode covered here does not require the user to work with -the Builder pattern, although the SDK does use it internally. - -Creating the OpMode -------------------- - -At the FTC OnBot Java browser interface, click on the large black -**plus-sign icon** “Add File”, to open the New File dialog box. - -.. figure:: images/020-New-File.png - :align: center - :width: 85% - :alt: New File - - New File Dialog - -Specify a name for your new OpMode. Select -“ConceptTensorFlowObjectDetectionEasy” as the Sample OpMode that will be -the template for your new OpMode. - -This Sample has optional gamepad inputs, so it could be designated as a -**TeleOp** OpMode (see above). - -Click “OK” to create your new OpMode. - -Android Studio users should follow the commented instructions to copy -this class from the Samples folder to the Teamcode folder, with a new -name. Also remove the ``@Disabled`` annotation, to make the OpMode -visible in the Driver Station list. - -The new OpMode should appear in edit mode in your browser. - -.. figure:: images/040-Sample-Open.png - :align: center - :width: 85% - :alt: Open Sample - - Opening New Sample - -By default, the Sample OpMode assumes you are using a webcam, configured -as “Webcam 1”. If you are using the built-in camera on your Android RC -phone, change the USE_WEBCAM Boolean from ``true`` to ``false`` (orange -oval above). - -Preliminary Testing -------------------- - -This OpMode is ready to use – it’s the “Easy” version! - -Click the “Build Everything” button (wrench icon at lower right), and -wait for confirmation “BUILD SUCCESSFUL”. - -If Build is prevented by some other OpMode having errors/issues, they -must be fixed before your new OpMode can run. For a quick fix, you could -right-click on that filename and choose “Disable/Comment”. This -“comments out” all lines of code, effectively removing that file from -the Build. That file can be re-activated later with “Enable/Uncomment”. - -In Android Studio (or OnBot Java), you can open a problem class/OpMode -and type **CTRL-A** and **CTRL-/** to select and “comment out” all lines -of code. This is reversible with **CTRL-A** and **CTRL-/** again. - -Now run your new OpMode from the Driver Station (on the TeleOp list, if -so designated). The OpMode should recognize any CENTERSTAGE white Pixel -within the camera’s view, based on the trained TFOD model in the SDK. - -For a **preview** during the INIT phase, touch the Driver Station’s -3-dot menu and select **Camera Stream**. - -.. figure:: images/200-Sample-DS-Camera-Stream.png - :align: center - :width: 85% - :alt: DS Camera Stream - - DS Camera Stream - -Camera Stream is not live video; tap to refresh the image. 
Use the small -white arrows at lower right to expand or revert the preview size. To -close the preview, choose 3-dots and Camera Stream again. - -After the DS START button is touched, the OpMode displays Telemetry for -any recognized Pixel(s): - -.. figure:: images/210-Sample-DS-Telemetry.png - :align: center - :width: 85% - :alt: DS Telemetry - - DS Telemetry Display - -The above Telemetry shows the Label name, and TFOD recognition -confidence level. It also gives the **center location** and **size** (in -pixels) of the Bounding Box, which is the colored rectangle surrounding -the recognized object. - -The pixel origin (0, 0) is at the top left corner of the image. - -Before and after DS START is touched, the Robot Controller provides a -video preview called **LiveView**. - -.. figure:: images/240-Sample-RC-LiveView.png - :align: center - :width: 85% - :alt: Sample RC LiveView - - Sample RC LiveView - -For Control Hub (with no built-in screen), plug in an HDMI monitor or -learn about ``scrcpy`` (https://github.com/Genymobile/scrcpy). The -above image is a LiveView screenshot via ``scrcpy``. - -If you don’t have a physical Pixel on hand, try pointing the camera at -this image: - -.. figure:: images/300-Sample-Pixel.png - :align: center - :width: 85% - :alt: A Pixel - - Example of a Pixel - - -Program Logic and Initialization --------------------------------- - -During the INIT stage (before DS START is touched), this OpMode calls a -**method to initialize** the TFOD Processor and the FTC VisionPortal. -After DS START is touched, the OpMode runs a continuous loop, calling a -**method to display telemetry** about any TFOD recognitions. The OpMode -also contains two optional features to remind teams about **CPU resource -management**, useful in vision processing. - -Here’s the first method, to initialize the TFOD Processor and the FTC -VisionPortal. - -.. code:: java - - /** - * Initialize the TensorFlow Object Detection processor. - */ - private void initTfod() { - - // Create the TensorFlow processor the easy way. - tfod = TfodProcessor.easyCreateWithDefaults(); - - // Create the vision portal the easy way. - if (USE_WEBCAM) { - visionPortal = VisionPortal.easyCreateWithDefaults( - hardwareMap.get(WebcamName.class, "Webcam 1"), tfod); - } else { - visionPortal = VisionPortal.easyCreateWithDefaults( - BuiltinCameraDirection.BACK, tfod); - } - - } // end method initTfod() - -For the **TFOD Processor**, the method ``easyCreateWithDefaults()`` uses -standard default settings. Most teams don’t need to modify these, -especially for the built-in TFOD model (white Pixel). - -For the **VisionPortal**, the method ``easyCreateWithDefaults()`` -requires parameters for camera name and processor(s) used, but otherwise -uses standard default settings such as: - -- camera resolution 640 x 480 - -- non-compressed streaming format YUY2 - -- enable RC preview (called LiveView) - -- if TFOD and AprilTag processors are disabled, still display LiveView - (without annotations) - -These are good starting values for most teams. - -Telemetry Method ----------------- - -After DS START is touched, the OpMode continuously calls this method to -display telemetry about any TFOD recognitions: - -.. code:: java - - /** - * Add telemetry about TensorFlow Object Detection (TFOD) recognitions. 
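 * Assumes initTfod() has already created the TFOD Processor, "tfod".
 * Called repeatedly from the OpMode's main loop while the OpMode is active.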
- */ - private void telemetryTfod() { - - List currentRecognitions = tfod.getRecognitions(); - telemetry.addData("# Objects Detected", currentRecognitions.size()); - - // Step through the list of recognitions and display info for each one. - for (Recognition recognition : currentRecognitions) { - double x = (recognition.getLeft() + recognition.getRight()) / 2 ; - double y = (recognition.getTop() + recognition.getBottom()) / 2 ; - - telemetry.addData(""," "); - telemetry.addData("Image", "%s (%.0f %% Conf.)", recognition.getLabel(), recognition.getConfidence() * 100); - telemetry.addData("- Position", "%.0f / %.0f", x, y); - telemetry.addData("- Size", "%.0f x %.0f", recognition.getWidth(), recognition.getHeight()); - } // end for() loop - - } // end method telemetryTfod() - -In the first line of code, **all TFOD recognitions** are collected and -stored in a List variable. The camera might “see” more than one game -element in its field of view, even if not intended (i.e. for CENTERSTAGE -with 1 game element). - -The ``for() loop`` then iterates through that List, handling each item, -one at a time. Here the “handling” is simply processing certain TFOD -fields for DS Telemetry. - -The ``for() loop`` calculates the pixel coordinates of the **center** of -each Bounding Box (the preview’s colored rectangle around a recognized -object). - -Telemetry is created for the Driver Station, with the object’s name -(Label), recognition confidence level (percentage), and the Bounding -Box’s location and size (in pixels). - -For competition, you want to do more than display Telemetry, and you -want to exit the main OpMode loop at some point. These code -modifications are discussed in another section below. - -Resource Management -------------------- - -Vision processing is “expensive”, using much **CPU capacity and USB -bandwidth** to process millions of pixels streaming in from the camera. - -This Sample OpMode contains two optional features to remind teams about -resource management. Overall, the SDK provides :ref:`over 10 -tools ` -to manage these resources, allowing your OpMode to run effectively. - -As the first example, streaming images from the camera can be paused and -resumed. This is a very fast transition, freeing CPU resources (and -potentially USB bandwidth). - -.. code:: java - - - // Save CPU resources; can resume streaming when needed. - if (gamepad1.dpad_down) { - visionPortal.stopStreaming(); - } else if (gamepad1.dpad_up) { - visionPortal.resumeStreaming(); - } - -Pressing the Dpad buttons, you can observe the off-and-on actions in the -RC preview (LiveView), described above. In your competition OpMode, -these streaming actions would be programmed, not manually controlled. - -The second example: after exiting the main loop, the VisionPortal is -closed. - -.. code:: java - - // Save more CPU resources when camera is no longer needed. - visionPortal.close(); - -Teams may consider this at any point when the VisionPortal is no longer -needed by the OpMode, freeing valuable CPU resources for other tasks. - -Adjusting the Zoom Factor -------------------------- - -If the object to be recognized will be more than roughly 2 feet (61 cm) -from the camera, you might want to set the digital Zoom factor to a -value greater than 1. This tells TensorFlow to use an artificially -magnified portion of the image, which may offer more accurate -recognitions at greater distances. - -.. 
code:: java - - // Indicate that only the zoomed center area of each - // image will be passed to the TensorFlow object - // detector. For no zooming, set magnification to 1.0. - tfod.setZoom(2.0); - -This ``setZoom()`` method can be placed in the INIT section of your -OpMode, - -- immediately after the call to the ``initTfod()`` method, or - -- as the very last command inside the ``initTfod()`` method. - -This method is **not** part of the Processor Builder pattern (used in -other TFOD Sample OpModes), so the Zoom factor can be set to other -values during the OpMode, if desired. - -The “zoomed” region can be observed in the DS preview (Camera Stream) -and the RC preview (LiveView), surrounded by a greyed-out area that is -**not evaluated** by the TFOD Processor. - -Other Adjustments ------------------ - -The Sample OpMode uses a default **minimum confidence** level of 75%. -This means the TensorFlow Processor needs a confidence level of 75% or -higher, to consider an object as “recognized” in its field of view. - -You can see the object name and actual confidence (as a **decimal**, -e.g. 0.96) near the Bounding Box, in the Driver Station preview (Camera -Stream) and Robot Controller preview (Liveview). - -.. code:: java - - // Set the minimum confidence at which to keep recognitions. - tfod.setMinResultConfidence((float) 0.75); - -Adjust this parameter to a higher value if you would like the processor -to be more selective in identifying an object. - -Another option is to define, or clip, a **custom area for TFOD -evaluation**, unlike ``setZoom`` which is always centered. - -.. code:: java - - // Set the number of pixels to obscure on the left, top, - // right, and bottom edges of each image passed to the - // TensorFlow object detector. The size of the images are not - // changed, but the pixels in the margins are colored black. - tfod.setClippingMargins(0, 200, 0, 0); - -Adjust the four margins as desired, in units of pixels. - -These methods can be placed in the INIT section of your OpMode, - -- immediately after the call to the ``initTfod()`` method, or - -- as the very last commands inside the ``initTfod()`` method. - -As with ``setZoom``, these methods are **not** part of the Processor -Builder pattern (used in other TFOD Sample OpModes), so they can be set -to other values during the OpMode, if desired. - -Modifying the Sample --------------------- - -In this Sample OpMode, the main loop ends only when the DS STOP button -is touched. For competition, teams should **modify this code** in at -least two ways: - -- for a significant recognition, take action or store key information – - inside the ``for() loop`` - -- end the main loop based on your criteria, to continue the OpMode - -As an example, you might set a Boolean variable ``isPixelDetected`` to -``true``, if a significant recognition has occurred. - -You might also evaluate and store which randomized Spike Mark (red or -blue tape stripe) holds the white Pixel. - -Regarding the main loop, it could end after the camera views all three -Spike Marks, or after your code provides a high-confidence result. If -the camera’s view includes more than one Spike Mark position, perhaps -the white Pixel’s **Bounding Box** size and location could be useful. -Teams should consider how long to seek an acceptable recognition, and -what to do otherwise. - -In any case, the OpMode should exit the main loop and continue running, -using any stored information. - -Best of luck this season! 
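As a concrete starting point for these modifications, here is a minimal sketch of a revised main loop. It is not part of the official Sample; the ``isPixelDetected`` flag, the 85% threshold, and the 20 ms sleep are illustrative assumptions:

.. code:: java

   boolean isPixelDetected = false;

   while (opModeIsActive() && !isPixelDetected) {

       // Step through the current recognitions, as telemetryTfod() does.
       for (Recognition recognition : tfod.getRecognitions()) {
           if (recognition.getLabel().equals("Pixel")
                   && recognition.getConfidence() > 0.85) {
               // Significant recognition: store key information here.
               isPixelDetected = true;
           }
       }

       telemetry.addData("Pixel detected", isPixelDetected);
       telemetry.update();
       sleep(20);   // share the CPU with other processes
   }

   // Camera no longer needed; free CPU resources, then continue the OpMode.
   visionPortal.close();

A competition version would also limit how long the loop seeks a recognition, for example with an ``ElapsedTime`` timer, and decide what to do if nothing is found.
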
- -============ - -Questions, comments and corrections to westsiderobotics@verizon.net diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/010-TFOD-recognition.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/010-TFOD-recognition.png deleted file mode 100644 index 61389e56..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/010-TFOD-recognition.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/020-team-props.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/020-team-props.png deleted file mode 100644 index 08589855..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/020-team-props.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/030-Centerstage-public-repo.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/030-Centerstage-public-repo.png deleted file mode 100644 index 61db2e13..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/030-Centerstage-public-repo.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/040-Manage-page.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/040-Manage-page.png deleted file mode 100644 index 533b5a47..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/040-Manage-page.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/050-Manage-TFLite-Models.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/050-Manage-TFLite-Models.png deleted file mode 100644 index fd45aa42..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/050-Manage-TFLite-Models.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/060-Upload-Models.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/060-Upload-Models.png deleted file mode 100644 index 88f4a8ee..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/060-Upload-Models.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/070-Choose-Files.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/070-Choose-Files.png deleted file mode 100644 index 46c3abe3..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/070-Choose-Files.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/080-Centerstage-tflite.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/080-Centerstage-tflite.png deleted file mode 100644 index 397c17c1..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/080-Centerstage-tflite.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/100-New-File.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/100-New-File.png deleted file mode 100644 index 0c7d75a0..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/100-New-File.png 
and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/110-Sample-Open.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/110-Sample-Open.png deleted file mode 100644 index eb7cb021..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/110-Sample-Open.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/140-Builder-settings.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/140-Builder-settings.png deleted file mode 100644 index 2feb0b02..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/140-Builder-settings.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/200-Sample-DS-Camera-Stream.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/200-Sample-DS-Camera-Stream.png deleted file mode 100644 index 5b26217c..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/200-Sample-DS-Camera-Stream.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/210-Sample-DS-Telemetry.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/210-Sample-DS-Telemetry.png deleted file mode 100644 index a920c3bc..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/210-Sample-DS-Telemetry.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/240-Sample-RC-LiveView.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/240-Sample-RC-LiveView.png deleted file mode 100644 index ab53b15d..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/240-Sample-RC-LiveView.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/300-Sample-Pixel.png b/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/300-Sample-Pixel.png deleted file mode 100644 index 972274e5..00000000 Binary files a/docs/source/programming_resources/vision/java_tfod_opmode_custom/images/300-Sample-Pixel.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/java_tfod_opmode_custom/java-tfod-opmode-custom.rst b/docs/source/programming_resources/vision/java_tfod_opmode_custom/java-tfod-opmode-custom.rst deleted file mode 100644 index c978298a..00000000 --- a/docs/source/programming_resources/vision/java_tfod_opmode_custom/java-tfod-opmode-custom.rst +++ /dev/null @@ -1,705 +0,0 @@ -Java Custom Model Sample OpMode for TFOD -======================================== - -Introduction ------------- - -This tutorial describes the regular, or **Builder**, version of the FTC -Java **Sample OpMode** for TensorFlow Object Detection (TFOD). - -This Sample, called **“ConceptTensorFlowObjectDetection.java”**, can -recognize **official or custom** FTC game elements and provide their -visible size and position. It uses the Java **Builder pattern** to -customize standard/default TFOD settings. 
- -This is **not the same** as the “Easy” version, which uses only default -settings and official/built-in TFOD model(s), described here: - -- :doc:`Java Easy Sample OpMode for TFOD <../java_tfod_opmode/java-tfod-opmode>` - -For the 2023-2024 game CENTERSTAGE, the official game element is a -hexagonal white **Pixel**. The FTC SDK software contains a TFOD model of -this object, ready for recognition. - -.. figure:: images/010-TFOD-recognition.png - :align: center - :width: 85% - :alt: TFOD Recognition - - Example Pixel Recognition using TFOD - -For extra points, FTC teams may instead use their own custom TFOD models -of game elements, called **Team Props** in CENTERSTAGE. - -.. figure:: images/020-team-props.png - :align: center - :width: 85% - :alt: Team Props - - Example Team Props - -This tutorial shows **OnBot Java** screens. Users of **Android Studio** -can follow along with a few noted exceptions, since the Sample OpMode is -exactly the same. - -Creating the OpMode -------------------- - -At the FTC **OnBot Java** browser interface, click on the large black -**plus-sign icon** “Add File”, to open the New File dialog box. - -.. figure:: images/100-New-File.png - :align: center - :width: 85% - :alt: New File Dialog - - Example New File Dialog - -Specify a name for your new OpMode. Select -**“ConceptTensorFlowObjectDetection”** as the Sample OpMode to be the -template for your new OpMode. - -This Sample has optional gamepad inputs, so it could be designated as a -**TeleOp** OpMode (see green oval above). - -Click “OK” to create your new OpMode. - -\ **Android Studio** users should follow the commented instructions to -copy this class from the Samples folder to the Teamcode folder, with a -new name. Also remove the ``@Disabled`` annotation, to make the OpMode -visible in the Driver Station list. - -The new OpMode should appear in the editing window of OnBot Java. - -.. figure:: images/110-Sample-Open.png - :align: center - :width: 85% - :alt: Sample Open Dialog - - Sample Open Dialog - -By default, the Sample OpMode assumes you are using a webcam, configured -as “Webcam 1”. If instead you are using the built-in camera on your -Android RC phone, change the USE_WEBCAM Boolean from ``true`` to -``false`` (orange oval above). - -Preliminary Testing -------------------- - -This Sample OpMode is **ready to use**, for detecting the -default/built-in model (white Pixel for CENTERSTAGE). - -If **Android Studio** users get a DS error message “Loading model from -asset failed”, skip to the next section “Downloading the Model”. - -Click the “Build Everything” button (wrench icon at lower right), and -wait for confirmation “BUILD SUCCESSFUL”. - -If Build is prevented by some other OpMode having errors/issues, they -must be fixed before your new OpMode can run. For a quick fix, you could -right-click on that filename and choose “Disable/Comment”. This -“comments out” all lines of code, effectively removing that file from -the Build. That file can be re-activated later with “Enable/Uncomment”. - -In Android Studio (or OnBot Java), you can open a problem class/OpMode -and type **CTRL-A** and **CTRL-/** to select and “comment out” all lines -of code. This is reversible with **CTRL-A** and **CTRL-/** again. - -Now run your new OpMode from the Driver Station (in the TeleOp list, if -so designated). The OpMode should recognize any CENTERSTAGE white Pixel -within the camera’s view, based on the trained TFOD model. 
- -For a **preview** during the INIT phase, touch the Driver Station’s -3-dot menu and select **Camera Stream**. - -.. figure:: images/200-Sample-DS-Camera-Stream.png - :align: center - :width: 85% - :alt: Sample DS Camera Stream - - Sample DS Camera Stream - -Camera Stream is not live video; tap to refresh the image. Use the small -white arrows at bottom right to expand or revert the preview size. To -close the preview, choose 3-dots and Camera Stream again. - -After the DS START button is touched, the OpMode displays Telemetry for -any recognized Pixel(s): - -.. figure:: images/210-Sample-DS-Telemetry.png - :align: center - :width: 85% - :alt: Sample DS Telemetry - - Sample DS Telemetry - -The above Telemetry shows the Label name, and TFOD recognition -confidence level. It also gives the **center location** and **size** (in -pixels) of the Bounding Box, which is the colored rectangle surrounding -the recognized object. - -The pixel origin (0, 0) is at the top left corner of the image. - -Before and after DS START is touched, the Robot Controller provides a -video preview called **LiveView**. - -.. figure:: images/240-Sample-RC-LiveView.png - :align: center - :width: 85% - :alt: Sample RC LiveView - - Sample RC LiveView - -For Control Hub (with no built-in screen), plug in an HDMI monitor or -learn about ``scrcpy`` (https://github.com/Genymobile/scrcpy). The -above image is a LiveView screenshot via ``scrcpy``. - -If you don’t have a physical Pixel on hand, try pointing the camera at -this image: - -.. figure:: images/300-Sample-Pixel.png - :align: center - :width: 85% - :alt: Sample Pixel - - Sample Pixel - -**Congratulations!** At this point the Sample OpMode and your camera -are working properly. Ready for a custom model? - -Downloading the Model ---------------------- - -Now we describe how to load a trained inference model in the form of a -TensorFlow Lite (``.tflite``) file. - -Instead of an **actual custom model**, here we use the standard FTC -model of the white Pixel from CENTERSTAGE (2023-2024). Later, your team -will follow this **same process** with your custom TFOD model, -specifying its filename and labels (objects to recognize). - -The standard ``.tflite`` file (white Pixel) is available on GitHub at -the following link: - -- CENTERSTAGE TFLite File (https://github.com/FIRST-Tech-Challenge/WikiSupport/blob/master/tensorflow/CenterStage.tflite) - -.. note:: - Very advanced teams could use Google's TensorFlow Object Detection - API (https://github.com/tensorflow/models/tree/master/research/object_detection) - to create their own custom inference model. - -Click the “Download Raw File” button to download the -``CenterStage.tflite`` file from GitHub to your local device -(e.g. laptop). See the green arrow. - -.. figure:: images/030-Centerstage-public-repo.png - :align: center - :width: 85% - :alt: Public Repo - - Public Repo - -Uploading to the Robot Controller ---------------------------------- - -Next, OnBot Java users will upload the TFOD model to the Robot -Controller. Connect your laptop to your Robot Controller’s wireless -network, open the Chrome browser, and navigate to the FTC “Manage” page: - -.. figure:: images/040-Manage-page.png - :align: center - :width: 85% - :alt: RC Manage Page - - Robot Controller Manage Page - -\ **Android Studio** users should instead skip to the instructions at -the bottom of this section. - -Scroll down and click on “Manage TensorFlow Lite Models”. - -.. 
figure:: images/050-Manage-TFLite-Models.png - :align: center - :width: 85% - :alt: TensorFlow Lite Model Management - - TensorFlow Lite Model Management - -Now click the “Upload Models” button. - -.. figure:: images/060-Upload-Models.png - :align: center - :width: 85% - :alt: Uploading Models - - Upload Models - -Click “Choose Files”, and use the dialog box to find and select the -downloaded ``CenterStage.tflite`` file. - -.. figure:: images/070-Choose-Files.png - :align: center - :width: 85% - :alt: Choose Files - - Choose Files - -Now the file will upload to the Robot Controller. The file will appear -in the list of TensorFlow models available for use in OpModes. - -.. figure:: images/080-Centerstage-tflite.png - :align: center - :width: 85% - :alt: CenterStage TFLITE Uploaded - - CENTERSTAGE TFLITE File Uploaded - -\ **Android Studio** users should instead store the TFOD model in the -project **assets** folder. At the left side, look under -``FtcRobotController`` for the folder ``assets``. If it’s missing, -right-click ``FtcRobotController``, choose ``New``, ``Directory`` and -``src\main\assets``. Right-click ``assets``, choose ``Open In`` and -``Explorer``, then copy/paste your ``.tflite`` file into that assets -folder. - -Basic OpMode Settings --------------------- - -This Sample OpMode can now be modified, to detect the uploaded TFOD -model. - -Again, this tutorial uploaded the standard TFOD model (white Pixel for -CENTERSTAGE), just to demonstrate the process. Use the same steps for -your custom TFOD model. - -First, change the filename here: - -.. code:: java - - private static final String TFOD_MODEL_FILE = "/sdcard/FIRST/tflitemodels/myCustomModel.tflite"; - -to this: - -.. code:: java - - private static final String TFOD_MODEL_FILE = "/sdcard/FIRST/tflitemodels/CenterStage.tflite"; - -Later, you can change this filename back to the actual name of your -custom TFOD model. Here we are using the default (white Pixel) model -just downloaded. - -========= - -**Android Studio** users should instead verify or store the TFOD model -in the project **assets** folder as noted above, and use: - -.. code:: java - - private static final String TFOD_MODEL_ASSET = "CenterStage.tflite"; - -OR (for a custom model) - -.. code:: java - - private static final String TFOD_MODEL_ASSET = "MyModelStoredAsAsset.tflite"; - -========= - -For this example, the following line **does not** need to be changed: - -.. code:: java - - // Define the labels recognized in the model for TFOD (must be in training order!) - private static final String[] LABELS = { - "Pixel", - }; - -… because “Pixel” is the correct and only TFOD Label in the standard -model file. - -Later, you might have custom Labels like “myRedProp” and “myBlueProp” -(for CENTERSTAGE). The list must contain the labels in the model’s -training order; for models made with the *FIRST* Machine Learning -Toolchain, that order is alphabetical. - -========== - -Next, scroll down to the Java method ``initTfod()``. - -Here is the Java **Builder pattern**, used to specify various settings -for the TFOD Processor. - -.. figure:: images/140-Builder-settings.png - :align: center - :width: 85% - :alt: Builder Pattern Settings - - Builder Pattern Settings - -The **yellow ovals** indicate its distinctive features: **create** the -Processor object with ``new Builder()``, and **close/finalize** with the -``.build()`` method. - -This is the streamlined version of the Builder pattern. Notice all the -``.set`` methods are “chained” to form a single Java expression, ending -with a semicolon after ``.build()``.
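For reference, here is a condensed sketch of that chained form. It shows only the two model-related settings (the actual Sample includes several other ``.set`` methods, mostly commented out) and assumes the Sample's ``TFOD_MODEL_FILE`` and ``LABELS`` constants shown above:

.. code:: java

   // Create the TFOD Processor with the chained Builder pattern:
   // one Java expression, opened with new Builder() and closed with .build().
   TfodProcessor tfod = new TfodProcessor.Builder()
       .setModelFileName(TFOD_MODEL_FILE)   // custom model file on the RC device
       .setModelLabels(LABELS)              // labels, in training order
       .build();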
- -Uncomment two Builder lines, circled above in green: - -.. code:: java - - .setModelFileName(TFOD_MODEL_FILE) - .setModelLabels(LABELS) - -\ **Android Studio** users should instead uncomment the lines -``.setModelAssetName(TFOD_MODEL_ASSET)`` and -``.setModelLabels(LABELS)``. - -These Builder settings tell the TFOD Processor which model and labels to -use for evaluating camera frames. - -\ **That’s it!** You are ready to test this Sample OpMode again, this -time using a “custom” (uploaded) TFOD model. - -Testing with Custom Model ------------------------- - -In OnBot Java, click the “Build Everything” button (wrench icon at lower -right), and wait for confirmation “BUILD SUCCESSFUL”. - -Now run your updated OpMode from the Driver Station. The OpMode should -recognize objects within the camera’s view, based on the trained TFOD -model. - -Test the **Camera Stream** preview during the INIT phase. - -.. figure:: images/200-Sample-DS-Camera-Stream.png - :align: center - :width: 85% - :alt: Sample DS Camera Stream - - Sample DS Camera Stream - -Tap to refresh the image. Expand or revert the preview size as needed. -Close the preview with 3-dots and Camera Stream again. - -After the DS START button is touched, the OpMode displays Telemetry for -any recognized object(s): - -.. figure:: images/210-Sample-DS-Telemetry.png - :align: center - :width: 85% - :alt: Sample DS Telemetry - - Sample DS Telemetry - -The above Telemetry shows the Label name, and TFOD recognition -confidence level. It also gives the **center location** and **size** (in -pixels) of the Bounding Box, which is the colored rectangle surrounding -the recognized object. - -Also test the RC’s video **LiveView**, using HDMI or -``scrcpy`` (https://github.com/Genymobile/scrcpy): - -.. figure:: images/240-Sample-RC-LiveView.png - :align: center - :width: 85% - :alt: Sample RC LiveView - - Sample RC LiveView - -For a large view of this standard model, right-click the image to open -in a new browser tab: - -.. figure:: images/300-Sample-Pixel.png - :align: center - :width: 85% - :alt: Sample Pixel - - Sample Pixel - -When your team creates, uploads and specifies a custom model containing -**red and blue Team Props**, the OpMode will recognize and process those -– instead of the standard model shown here. - -Program Logic and Initialization -------------------------------- - -How does this simple OpMode work? - -- During the INIT stage (before DS START is touched), this OpMode calls - a **method to initialize** the TFOD Processor and the FTC - VisionPortal. - -- After DS START is touched, the OpMode runs a continuous loop, calling - a **method to display telemetry** about any TFOD recognitions. - -- The OpMode also contains optional features to remind teams about - **CPU resource management**, useful in vision processing. - -You’ve already seen the first part of the method ``initTfod()``, which -uses a streamlined, or “chained”, sequence of Builder commands to create -the TFOD Processor. - -The second part of that method uses regular, non-chained, Builder -commands to create the VisionPortal. - -.. code:: java - - // Create the vision portal by using a builder. - VisionPortal.Builder builder = new VisionPortal.Builder(); - - // Set the camera (webcam vs. built-in RC phone camera). - if (USE_WEBCAM) { - builder.setCamera(hardwareMap.get(WebcamName.class, "Webcam 1")); - } else { - builder.setCamera(BuiltinCameraDirection.BACK); - } - - // Choose a camera resolution. Not all cameras support all resolutions.
- builder.setCameraResolution(new Size(640, 480)); - - // Enable the RC preview (LiveView). Set "false" to omit camera monitoring. - builder.enableLiveView(true); - - // Set the stream format; MJPEG uses less bandwidth than the default YUY2 (shown here). - builder.setStreamFormat(VisionPortal.StreamFormat.YUY2); - - // Choose whether or not LiveView stops if no processors are enabled. - // If set "true", monitor shows solid orange screen if no processors enabled. - // If set "false", monitor shows camera view without annotations. - builder.setAutoStopLiveView(false); - - // Set and enable the processor. - builder.addProcessor(tfod); - - // Build the Vision Portal, using the above settings. - visionPortal = builder.build(); - -All settings have been uncommented here, to show them more easily. - -Here the ``new Builder()`` creates a separate ``VisionPortal.Builder`` -object called ``builder``, allowing traditional/individual Java method -calls for each setting. For the streamlined “chained” TFOD process, the -``new Builder()`` operated directly on the TFOD Processor called -``tfod``, without creating a ``TfodProcessor.Builder`` object. Both -approaches are valid. - -Notice the process again **closes** with a call to the ``.build()`` -method. - -Telemetry Method ---------------- - -After DS START is touched, the OpMode continuously calls this method to -display telemetry about any TFOD recognitions: - -.. code:: java - - /** - * Add telemetry about TensorFlow Object Detection (TFOD) recognitions. - */ - private void telemetryTfod() { - - List<Recognition> currentRecognitions = tfod.getRecognitions(); - telemetry.addData("# Objects Detected", currentRecognitions.size()); - - // Step through the list of recognitions and display info for each one. - for (Recognition recognition : currentRecognitions) { - double x = (recognition.getLeft() + recognition.getRight()) / 2 ; - double y = (recognition.getTop() + recognition.getBottom()) / 2 ; - - telemetry.addData(""," "); - telemetry.addData("Image", "%s (%.0f %% Conf.)", recognition.getLabel(), recognition.getConfidence() * 100); - telemetry.addData("- Position", "%.0f / %.0f", x, y); - telemetry.addData("- Size", "%.0f x %.0f", recognition.getWidth(), recognition.getHeight()); - } // end for() loop - - } // end method telemetryTfod() - -In the first line of code, **all TFOD recognitions** are collected and -stored in a List variable. The camera might “see” more than one game -element in its field of view, even if not intended (e.g. for CENTERSTAGE -with 1 game element). - -The ``for() loop`` then iterates through that List, handling each item, -one at a time. Here the “handling” is simply processing certain TFOD -fields for DS Telemetry. - -The ``for() loop`` calculates the pixel coordinates of the **center** of -each Bounding Box (the preview’s colored rectangle around a recognized -object). - -Telemetry is created for the Driver Station, with the object’s name -(Label), recognition confidence level (percentage), and the Bounding -Box’s location and size (in pixels). - -For competition, you want to do more than display Telemetry, and you -want to exit the main OpMode loop at some point. These code -modifications are discussed in another section below. - -Resource Management ------------------- - -Vision processing is “expensive”, using much **CPU capacity and USB -bandwidth** to process millions of pixels streaming in from the camera. - -This Sample OpMode contains three optional features to remind teams -about resource management.
Overall, the SDK provides -:ref:`over 10 tools ` -to manage these resources, allowing your OpMode to run effectively. - -As the first example, **streaming images** from the camera can be paused -and resumed. This is a very fast transition, freeing CPU resources (and -potentially USB bandwidth). - -.. code:: java - - // Save CPU resources; can resume streaming when needed. - if (gamepad1.dpad_down) { - visionPortal.stopStreaming(); - } else if (gamepad1.dpad_up) { - visionPortal.resumeStreaming(); - } - -Pressing the Dpad buttons, you can observe the off-and-on actions in the -RC preview (LiveView), described above. In your competition OpMode, -these streaming actions would be programmed, not manually controlled. - -=========== - -The second example, commented out, similarly allows a vision processor -(TFOD and/or AprilTag) to be disabled and re-enabled: - -.. code:: java - - // Disable or re-enable the TFOD processor at any time. - visionPortal.setProcessorEnabled(tfod, true); - -Simply set the Boolean to ``false`` (to disable), or ``true`` (to -re-enable). - -=========== - -The third example: after exiting the main loop, the VisionPortal is -closed. - -.. code:: java - - // Save more CPU resources when camera is no longer needed. - visionPortal.close(); - -Teams may consider this at any point when the VisionPortal is no longer -needed by the OpMode, freeing valuable CPU resources for other tasks. - -Adjusting the Zoom Factor ------------------------- - -If the object to be recognized will be more than roughly 2 feet (61 cm) -from the camera, you might want to set the digital Zoom factor to a -value greater than 1. This tells TensorFlow to use an artificially -magnified portion of the image, which may offer more accurate -recognitions at greater distances. - -.. code:: java - - // Indicate that only the zoomed center area of each - // image will be passed to the TensorFlow object - // detector. For no zooming, set magnification to 1.0. - tfod.setZoom(2.0); - -This ``setZoom()`` method can be placed in the INIT section of your -OpMode, - -- immediately after the call to the ``initTfod()`` method, or - -- as the very last command inside the ``initTfod()`` method. - -This method is **not** part of the TFOD Processor Builder pattern, so -the Zoom factor can be set to other values during the OpMode, if -desired. - -The “zoomed” region can be observed in the DS preview (Camera Stream) -and the RC preview (LiveView), surrounded by a greyed-out area that is -**not evaluated** by the TFOD Processor. - -Other Adjustments ----------------- - -This Sample OpMode contains another adjustment, commented out: - -.. code:: java - - // Set confidence threshold for TFOD recognitions, at any time. - tfod.setMinResultConfidence(0.75f); - -The SDK uses a default **minimum confidence** level of 75%. This means -the TensorFlow Processor needs a confidence level of 75% or higher, to -consider an object as “recognized” in its field of view. - -You can see the object name and actual confidence (as a **decimal**, -e.g. 0.96) near the Bounding Box, in the Driver Station preview (Camera -Stream) and Robot Controller preview (LiveView). - -Adjust this parameter to a higher value if you want the processor to be -more selective in identifying an object. - -=========== - -Another option is to define, or clip, a **custom area for TFOD -evaluation**, unlike ``setZoom`` which is always centered. - -..
code:: java - - // Set the number of pixels to obscure on the left, top, - // right, and bottom edges of each image passed to the - // TensorFlow object detector. The size of the images is not - // changed, but the pixels in the margins are colored black. - tfod.setClippingMargins(0, 200, 0, 0); - -Adjust the four margins as desired, in units of pixels. - -These method calls can be placed in the INIT section of your OpMode, - -- immediately after the call to the ``initTfod()`` method, or - -- as the very last commands inside the ``initTfod()`` method. - -As with ``setProcessorEnabled()`` and ``setZoom()``, these methods are -**not** part of the Processor or VisionPortal Builder patterns, so they -can be set to other values during the OpMode, if desired. - -Modifying the Sample -------------------- - -In this Sample OpMode, the main loop ends only when the DS STOP button -is touched. For CENTERSTAGE competition, teams should **modify this -code** in at least two ways: - -- for a significant recognition, take action or store key information – - inside the ``for() loop`` - -- end the main loop based on your own criteria, so the OpMode can - continue - -As an example, you might set a Boolean variable ``isPixelDetected`` (or -``isPropDetected``) to ``true``, if a significant recognition has -occurred. - -You might also evaluate and store which randomized Spike Mark (red or -blue tape stripe) holds the white Pixel or Team Prop. - -Regarding the main loop, it could end after the camera views all three -Spike Marks, or after your code provides a high-confidence result. If -the camera’s view includes more than one Spike Mark position, perhaps -the Pixel/Prop’s **Bounding Box** size and location could be useful. -Teams should consider how long to seek an acceptable recognition, and -what to do otherwise. - -In any case, the OpMode should exit the main loop and continue running, -using any stored information, as in the sketch below.
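Here is a minimal sketch of those modifications, not the official Sample code. It assumes the Sample's ``tfod`` processor and ``telemetryTfod()`` method inside a LinearOpMode; the variable names, confidence threshold, and pixel boundaries are hypothetical placeholders for your own logic:

.. code:: java

   // Hypothetical exit logic for the Sample's main loop.
   boolean isPixelDetected = false;
   int spikeMark = 0;   // 1 = Left, 2 = Center, 3 = Right (your own convention)

   while (opModeIsActive() && !isPixelDetected) {
       for (Recognition recognition : tfod.getRecognitions()) {
           if (recognition.getConfidence() > 0.80) {   // placeholder threshold
               isPixelDetected = true;
               // Classify the Spike Mark from the Bounding Box center (placeholder boundaries).
               double x = (recognition.getLeft() + recognition.getRight()) / 2;
               spikeMark = (x < 200) ? 1 : (x < 440) ? 2 : 3;
           }
       }
       telemetryTfod();
       telemetry.update();
       sleep(20);   // share the CPU
   }
   // The OpMode continues here, navigating and scoring based on spikeMark.

Best of luck this season!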
- -============ - -Questions, comments and corrections to westsiderobotics@verizon.net diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/TrainingBlownOut.psd b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/TrainingBlownOut.psd deleted file mode 100644 index 35dcb8c0..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/TrainingBlownOut.psd and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/angled_pixel_detection.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/angled_pixel_detection.png deleted file mode 100644 index 68c5d9ed..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/angled_pixel_detection.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/easypixeldetect.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/easypixeldetect.png deleted file mode 100644 index 3ce36736..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/easypixeldetect.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/lowanglepixel.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/lowanglepixel.png deleted file mode 100644 index f198bb39..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/lowanglepixel.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/negatives.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/negatives.png deleted file mode 100644 index 29344769..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/negatives.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.png deleted file mode 100644 index 8d8d9fde..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.psd b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.psd deleted file mode 100644 index 7b4eb93a..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixel.psd and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect1.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect1.png deleted file mode 100644 index 74d083b5..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect1.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect2.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect2.png deleted file mode 100644 index a8e8ae3c..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect2.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect3.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect3.png deleted file mode 100644 index ce416f07..00000000 Binary files 
a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect3.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect4.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect4.png deleted file mode 100644 index 146fd300..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixeldetect4.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect1.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect1.png deleted file mode 100644 index cc6a1aed..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect1.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect2.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect2.png deleted file mode 100644 index 9b81d145..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/pixelnodetect2.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/ribsexposed.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/ribsexposed.png deleted file mode 100644 index 2493c0e0..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/ribsexposed.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/trainingblownout.png b/docs/source/programming_resources/vision/tensorflow_cs_2023/images/trainingblownout.png deleted file mode 100644 index c649d3d6..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_cs_2023/images/trainingblownout.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_cs_2023/tensorflow-cs-2023.rst b/docs/source/programming_resources/vision/tensorflow_cs_2023/tensorflow-cs-2023.rst deleted file mode 100644 index 5aec3bde..00000000 --- a/docs/source/programming_resources/vision/tensorflow_cs_2023/tensorflow-cs-2023.rst +++ /dev/null @@ -1,443 +0,0 @@ -TensorFlow for CENTERSTAGE presented by RTX -=========================================== - -What is TensorFlow? -~~~~~~~~~~~~~~~~~~~ - -*FIRST* Tech Challenge teams can use `TensorFlow Lite -`__, a lightweight version of Google’s -`TensorFlow `__ machine learning technology that -is designed to run on mobile devices such as an Android smartphone or the `REV -Control Hub `__. A *trained -TensorFlow model* was developed to recognize the white ``Pixel`` game piece used in -the **2023-2024 CENTERSTAGE presented by RTX** challenge. - -.. figure:: images/pixel.png - :align: center - :alt: CENTERSTAGE Pixel - :height: 400px - - This season’s TFOD model can recognize a white Pixel - -TensorFlow Object Detection (TFOD) has been integrated into the control system -software to identify a white ``Pixel`` during a match. The SDK (SDK -version 9.0) contains TFOD Sample OpModes and Detection Models that can -recognize the white ``Pixel`` at various poses (but not all). - -How Might a Team Use TensorFlow this season? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -For this season’s challenge, the field is randomized during the Pre-Match stage. -This randomization causes the white ``Pixel`` to be placed on -either the Left, Center, or Right Spike Mark.
During Autonomous, Robots must -independently determine which of the three Spike Marks (Left, Center, Right) -the white ``Pixel`` was placed on. To do this, robots using a Webcam or a camera on -a Robot Controller Smartphone can inspect Spike Mark locations to determine if -a white ``Pixel`` is present. Once the robot has correctly identified which Spike -Mark the white ``Pixel`` is present on, the robot can then perform additional -actions based on that position to earn additional points. - -Teams also have the opportunity to replace the white ``Pixel`` with an object -of their own creation, within a few guidelines specified in the Game -Manual. This object, or Team Game Element, can be optimized to help the -team identify it more easily, and custom TensorFlow inference models can be -created to facilitate recognition. As the field is randomized, the team's Team -Game Element will be placed on the Spike Marks as the white ``Pixel`` would -have been, and the team must identify and use the Team Game Element the same as if -it were a white ``Pixel`` on a Spike Mark. - -Sample OpModes -~~~~~~~~~~~~~~ - -Teams have the option of using a custom inference model with the *FIRST* Tech -Challenge software or using the game-specific default model provided. As noted -above, the *FIRST* Machine Learning Toolchain is a streamlined tool for training -your own TFOD models. - -The FIRST Tech Challenge software (Robot Controller App and Android Studio -Project) includes sample OpModes (Blocks and Java versions) that demonstrate -how to use **the default inference model**. These tutorials show how to use -the sample OpModes, using examples from previous *FIRST* Tech Challenge -seasons, but demonstrate the process for use in any season. - -- :doc:`Blocks Sample OpMode for TensorFlow Object Detection <../blocks_tfod_opmode/blocks-tfod-opmode>` -- :doc:`Java Sample OpMode for TFOD <../java_tfod_opmode/java-tfod-opmode>` - -Using the sample OpModes, teams can practice identifying white ``Pixels`` placed -on Spike Marks. The sample OpMode ``ConceptTensorFlowObjectDetectionEasy`` is -a very basic OpMode, simplified so that beginner teams can perform basic -``Pixel`` detection. - -.. figure:: images/easypixeldetect.png - :align: center - :alt: Pixel Detection - :width: 75% - - Example Detection of a Pixel - -Note that if a detection falls below the minimum confidence threshold, it -will not be shown; it is therefore important to set the minimum confidence -threshold appropriately. - -.. note:: - The default minimum confidence threshold provided in the Sample OpMode (75%) - is only provided as an example; depending on local conditions (lighting, - image wear, etc...) it may be necessary to lower the minimum confidence in - order to increase TensorFlow's likelihood to see all possible image - detections. However, due to its simplified nature it is not possible to - change the minimum confidence using the ``Easy`` OpMode. Instead, you will - have to use the normal OpMode. - -Notes on Training the CENTERSTAGE Model -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``Pixel`` game piece posed an interesting challenge for TensorFlow Object -Detection (TFOD). As is warned in the Machine Learning Toolkit documentation, -TFOD is not very good at recognizing and differentiating simple geometric -shapes, nor distinguishing between specific colors; instead, TFOD is good at -detecting *patterns*.
TFOD needs to be able to recognize a unique *pattern*, -and while there is a small amount of patterning in the ribbing of the -``Pixel``, it is doubtful how much of that ribbing can be seen in varied -lighting conditions. Even in the image at the top of this document, the -ribbing can only be seen because of the specific shadows cast on the game -piece. Even in optimal testing environments, it was difficult to -capture video of the object that highlighted the ribbing well enough for -TensorFlow to use for pattern recognition. This underscored that optimal -``Pixel`` characteristics cannot be guaranteed for TFOD in unknown lighting -environments. - -Another challenge with training the model had to do with how the ``Pixel`` -looks at different pose angles. When the camera is merely a scant few inches -from the floor, the ``Pixel`` can almost look like a solid object; at times -there may be sufficient shadows to see that there is a hole in the center of -the object, but not always. However, if the camera was several inches off the -floor, the ``Pixel`` looked different, as the mat or colored tape could be -seen through the hole in the middle of the object. This confused the neural -network and made it extremely difficult to train, and the resulting models -eventually recognized any "sufficiently light colored blob" as a ``Pixel``. -This was not exactly ideal. - -Even with the best of images, the Machine Learning algorithms had a difficult -time determining what *was* a ``Pixel`` and what wasn't. What ended up working -was providing NOT ONLY images of the ``Pixel`` in different poses, but also -several white objects that WERE NOT a ``Pixel``. This was fundamental to -helping TensorFlow train itself to understand that "All ``Pixels`` are White -Objects, but not all White Objects are ``Pixels``." - -To provide some additional context on this, here are a few examples of labeled -frames that illustrate the challenges and techniques in dealing with the -``Pixel`` game piece. - -.. only:: html - - .. grid:: 1 2 2 2 - :gutter: 2 - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Training Frame 1 - - ^^^ - - .. figure:: images/trainingblownout.png - :align: center - :alt: Pixel that's saturated - :width: 100 % - - +++ - - Pixel Saturation (No Ribs) - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - (Rejected) Training Frame 2 - - ^^^ - - .. figure:: images/lowanglepixel.png - :align: center - :alt: Pixel at low angle - :width: 100 % - - +++ - - Camera Too Low (White Blob) - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Training Frame 3 - - ^^^ - - .. figure:: images/ribsexposed.png - :align: center - :alt: Rare good image - :width: 100 % - - +++ - - Actual Good Image with Ribbing (Rare) - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Training Frame 4 - - ^^^ - - .. figure:: images/negatives.png - :align: center - :alt: Pixel with non-pixel objects - :width: 100 % - - +++ - - Pixel with non-Pixel Objects - -.. only:: latex - - .. list-table:: Examples of Challenging Scenarios - :class: borderless - - * - .. image:: images/trainingblownout.png - - .. image:: images/lowanglepixel.png - * - .. image:: images/ribsexposed.png - - ..
image:: images/negatives.png - - -Using the Default CENTERSTAGE Model -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The previous section described how the height of the camera above the floor -has a huge effect on how the ``Pixel`` is seen; too low and the object can look -like a single "blob" of color, and too high and the object will look similar to -a white donut. When training the model, it was decided that the Donut approach was -the best - train the model to recognize the ``Pixel`` from above to provide a -clear and consistent view of the ``Pixel``. Toss in some angled shots as well, along -with some extra objects just to give TensorFlow some perspective, and -a model is born. **But wait, how does that affect detection of the Pixel from the -robot's starting configuration?** - -In CENTERSTAGE, using the default CENTERSTAGE model, it is unlikely that a -robot will be able to get a consistent detection of a white ``Pixel`` from the -starting location. In order to get a good detection, the robot's camera needs -to be placed fairly high up, and angled down to be able to see the gray tile, -blue tape, or red tape peeking out of the center of the ``Pixel``. Thanks to -the center structure on the field this season, it's doubtful that a team will -want to have an exceptionally tall robot - likely no more than 14 inches tall, -but most will want to be under 12 inches to be safe (depending on your strategy -- please don't let this article define your game strategy!). The angle that -your robot's camera will have with the Pixel from the starting configuration -makes a consistent detection seem unlikely. - -Here are several images of detected and non-detected ``Pixels``. Notice that -the camera must be able to see through the center of the object to what's under -the ``Pixel`` in order for the object to be detected as a ``Pixel``. - -.. only:: html - - .. grid:: 1 2 2 2 - :gutter: 2 - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Non-Detected Pixel #1 - - ^^^ - - .. figure:: images/pixelnodetect1.png - :align: center - :alt: Pixel Not Detected 1 - :width: 100 % - - +++ - - Pixel Not Detected, Angle Too Low - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Non-Detected Pixel #2 - - ^^^ - - .. figure:: images/pixelnodetect2.png - :align: center - :alt: Pixel Not Detected 2 - :width: 100 % - - +++ - - Pixel Not Detected, Angle Too Low - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Detected Pixel #1 - - ^^^ - - .. figure:: images/pixeldetect1.png - :align: center - :alt: Pixel Detected 1 - :width: 100 % - - +++ - - Pixel Detected, Min Angle - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Detected Pixel #2 - - ^^^ - - .. figure:: images/pixeldetect2.png - :align: center - :alt: Pixel Detected 2 - :width: 100 % - - +++ - - Pixel Detected, Better Angle - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Detected Pixel #3 - - ^^^ - - .. figure:: images/pixeldetect3.png - :align: center - :alt: Pixel Detected 3 - :width: 100 % - - +++ - - Pixel Detected, Min Angle on Tape - - .. grid-item-card:: - :class-header: sd-bg-dark font-weight-bold sd-text-white - :class-body: sd-text-left body - - Detected Pixel #4 - - ^^^ - - ..
figure:: images/pixeldetect4.png - :align: center - :alt: Pixel Detected 4 - :width: 100 % - - +++ - - Pixel Detected, Top-Down View - -.. only:: latex - - .. list-table:: Examples of Detected and Non-Detected Pixels - :class: borderless - - * - .. image:: images/pixelnodetect1.png - - .. image:: images/pixelnodetect2.png - * - .. image:: images/pixeldetect1.png - - .. image:: images/pixeldetect2.png - * - .. image:: images/pixeldetect3.png - - .. image:: images/pixeldetect4.png - -Therefore, there are two options for detecting the ``Pixel``: - -1. The camera can be on a retractable/moving system, so that the camera is elevated to - a desirable height during the start of Autonomous, and then retracts before moving - around. - -2. The robot will have to drive closer to the Spike Marks in order to be able to - properly detect the ``Pixels``. - -For the second option (driving closer), the camera's field of view might pose a -challenge if it's desirable for all three Spike Marks to be always in view. If -using a Logitech C270 camera, switching to a Logitech C920 with its wider field -of view might help to some degree. This completely depends on the height of the -camera and how far the robot must be driven in order to properly recognize a -``Pixel``. Teams can also simply choose to point their webcam to the CENTER and -LEFT Spike Marks, for example, and drive closer to those targets, and if a -``Pixel`` is not detected then by process of elimination it must be on the -RIGHT Spike Mark. - -Selecting objects for the Team Prop -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Selecting objects to use for your custom Team Prop can seem daunting. Questions -swirl like "What shapes are going to be recognized best?", "If I cannot have -multiple colors, how do I make patterns?", and "How do I make this easier on myself?". -Hopefully this section will help you understand a little more about TensorFlow -and how to get the most out of it. - -First, it's important to note that TensorFlow has the following quirks/behaviors: - -- In order to run TensorFlow on mobile phones, *FIRST* Tech Challenge uses a very small core - model resolution. This means the image is downscaled from the high definition - webcam image to one that is only 300x300 pixels. As a result, medium and - small objects within the webcam images may be reduced to very small - indistinguishable clusters of pixels in the target image. Keep the objects in - the view of the camera large, and train for a wide range of image sizes. -- TensorFlow is not really good at differentiating simple geometric shapes. TensorFlow - Object Detection is an object classifier, and similar geometric shapes will - classify similarly. At present, humans are much better at differentiating - geometric shapes than neural net algorithms like TensorFlow. -- TensorFlow is great at pattern detection, but that means that within the footprint - of the object you need one or more repeating or unique patterns. The larger the - pattern the easier it will be for TensorFlow to detect the pattern at a - distance. - -So what kinds of patterns are good for TensorFlow? Let's explore a few examples: - -1. Consider the shape of a `chess board Rook - `__. - The Rook itself is mostly uniform all around; no matter how you rotate the - object, it more or less looks the same. Not much patterning there. However, - the top of the Rook is very unique and patterned.
Exaggerating the - "battlements", the square-shaped parts of the top of the Rook, can provide - unique patterning that TensorFlow can distinguish. - -2. Consider the outline of a `chess Knight - `__, - as the "head" of the Knight is facing to the right or to the left. That - profile is very distinguishable as the head of a horse. That specific animal - is one that `model zoos - `__ - have been optimized for, so it's definitely a shape that TensorFlow can be - trained to recognize. - -3. Consider the patterning in a fancy `wrought-iron fence - `__. If made - thick enough, those repeating patterns can be recognized by a TensorFlow - model. Like the chess board Rook, it might be wise to make the object round - so that the pattern is similar and repeats no matter how the object is - rotated. If allowed, having multiple shades of color can also help make a - more-unique patterning on the object (e.g. multiple shades of red; you likely - must consult the `Q&A `__). - -4. TensorFlow can be used to - `Detect Plants `__ - even when all of the plants are a single color. Similar techniques can be reverse-engineered - (make objects of different "patterns" similar to plants) to create an object that - can be detected and differentiated from other objects on the game field. - -Hopefully this gives you quite a few ideas for how to approach this challenge! - diff --git a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/010-TFOD-Cube-Duck-crop-2 b/docs/source/programming_resources/vision/tensorflow_ff_2021/images/010-TFOD-Cube-Duck-crop-2 deleted file mode 100644 index f750e00f..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/010-TFOD-Cube-Duck-crop-2 and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/010-TFOD-Cube-Duck-crop-2.png b/docs/source/programming_resources/vision/tensorflow_ff_2021/images/010-TFOD-Cube-Duck-crop-2.png deleted file mode 100644 index c5e7dfd0..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/010-TFOD-Cube-Duck-crop-2.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/020-TFOD-Barcode.png b/docs/source/programming_resources/vision/tensorflow_ff_2021/images/020-TFOD-Barcode.png deleted file mode 100644 index 78195d76..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/020-TFOD-Barcode.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/030-TFOD-levels.png b/docs/source/programming_resources/vision/tensorflow_ff_2021/images/030-TFOD-levels.png deleted file mode 100644 index 89da5dba..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/030-TFOD-levels.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/tfliteDemo.png b/docs/source/programming_resources/vision/tensorflow_ff_2021/images/tfliteDemo.png deleted file mode 100644 index 7f983456..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_ff_2021/images/tfliteDemo.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_ff_2021/tensorflow-ff-2021.rst b/docs/source/programming_resources/vision/tensorflow_ff_2021/tensorflow-ff-2021.rst deleted file mode 100644 index 5239c27c..00000000 --- a/docs/source/programming_resources/vision/tensorflow_ff_2021/tensorflow-ff-2021.rst +++ /dev/null @@ -1,125 +0,0 @@
-TensorFlow for FREIGHT FRENZY presented by Raytheon Technologies -================================================================ - -What is TensorFlow? -~~~~~~~~~~~~~~~~~~~ - -*FIRST* Tech Challenge teams can use `TensorFlow -Lite `__, a lightweight version of -Google’s `TensorFlow `__ machine learning -technology that is designed to run on mobile devices such as an Android -smartphone. A *trained TensorFlow model* was developed to recognize game -elements for the 2021-2022 Freight Frenzy challenge. - -.. figure:: images/010-TFOD-Cube-Duck-crop-2.png - :align: center - :alt: TFOD Cube Duck - :height: 200px - - This season’s TFOD model can recognize Freight elements - -TensorFlow Object Detection (TFOD) has been integrated into the -control system software to identify and track these game pieces during -a match. The software (SDK version 7.0) contains TFOD Sample Op -Modes that can recognize the Freight elements Duck, Box (or Cube), and -Cargo (or Ball). - -How Might a Team Use TensorFlow in Freight Frenzy? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -For this season’s challenge, during the pre-Match stage a single die is -rolled and the field is randomized. - -.. figure:: images/020-TFOD-Barcode.png - :align: center - :alt: Barcode - - Randomization - - -At the beginning of the match’s Autonomous period, a robot can use -TensorFlow to “look” at the **Barcode** area and determine whether the -Duck or optional Team Shipping Element (TSE) is in position 1, 2 or 3. -This indicates the preferred scoring level on the **Alliance Shipping -Hub**. A bonus is available for using the TSE instead of a Duck. - - -.. figure:: images/030-TFOD-levels.png - :align: center - :alt: Levels - - Alliance Shipping Hub - -Important Note on Phone Compatibility -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -TensorFlow Lite runs on Android 6.0 (Marshmallow) or higher, a -requirement met by all currently allowed devices. If you are a -Blocks programmer using an older/disallowed Android device that is not -running Marshmallow or higher, TFOD Blocks will automatically be missing -from the Blocks toolbox or design palette. - -Sample Op Modes -~~~~~~~~~~~~~~~ - -The software (SDK version 7.0 and higher) contains sample Blocks and -Java op modes that demonstrate TensorFlow **recognition** of Freight -elements Duck, Box (cube) and Cargo (ball). The sample op modes also -show **where** in the camera’s field of view a detected object is -located. - -Click on the following links to learn more about these sample Op Modes. - -- :ref:`Blocks TensorFlow Object Detection - Example ` -- :ref:`Java TensorFlow Object Detection - Example ` - -Using a Custom Inference Model -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Teams have the option of using a custom inference model with the FIRST -Tech Challenge software. As noted above, the **Machine Learning -toolchain** is a streamlined tool for training your own TFOD models. An -alternative would be to use the `TensorFlow Object Detection -API `__ -to create an enhanced model of the Freight elements or TSE, or to create -a custom model to detect entirely different objects. Other teams -might also want to use an available pre-trained model to build a robot -that can detect common everyday objects (for demo or outreach purposes, -for example).
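As a rough, hedged sketch of what loading a custom model looked like in that era's SDK (7.x): this assumes a ``VuforiaLocalizer`` named ``vuforia`` has already been created as in the sample op modes, and the model path and label names are placeholders, not real assets:

.. code:: java

   // Sketch only: SDK 7.x-era TFOD initialization with a custom model.
   // Assumes "vuforia" (a VuforiaLocalizer) was created earlier, as in the samples.
   int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
           "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
   TFObjectDetector.Parameters tfodParameters =
           new TFObjectDetector.Parameters(tfodMonitorViewId);
   tfodParameters.minResultConfidence = 0.8f;   // placeholder threshold

   TFObjectDetector tfod = ClassFactory.getInstance()
           .createTFObjectDetector(tfodParameters, vuforia);

   // Load a custom model file, listing its labels (placeholders shown).
   tfod.loadModelFromFile("/sdcard/FIRST/tflitemodels/myCustomModel.tflite",
           "MyLabel1", "MyLabel2");
   tfod.activate();

The linked tutorials below walk through this same process in detail for Blocks and Java.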
- -The software includes sample op modes (Blocks and Java versions) -that demonstrate how to use a **custom inference model**: - -- `Using a Custom TensorFlow Model with - Blocks `__ -- `Using a Custom TensorFlow Model with - Java `__ - -These tutorials use examples from a previous season (Skystone), but -the process remains generally valid for Freight Frenzy. - -Detecting Everyday Objects -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You can use a pretrained TensorFlow Lite model to detect **everyday -objects**, such as a clock, person, computer mouse, or cell phone. The -following advanced tutorial shows how you can use a free, pretrained -model to recognize numerous everyday objects. - -- `Using a TensorFlow Pretrained Model to Detect Everyday - Objects `__ - - -.. figure:: images/tfliteDemo.png - :align: center - :alt: TensorFlow Lite Demo - - TensorFlow can recognize everyday objects - - - -============================ - -Updated 11/19/21 diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/Panel_Training.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/Panel_Training.png deleted file mode 100644 index 41deb2f2..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/Panel_Training.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bolt.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bolt.png deleted file mode 100644 index 4ba58986..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bolt.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bolt_label.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bolt_label.png deleted file mode 100644 index 77043ab5..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bolt_label.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bulb.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bulb.png deleted file mode 100644 index 424e9bcb..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bulb.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bulb_label.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bulb_label.png deleted file mode 100644 index 93a1941c..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/bulb_label.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/panel.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/panel.png deleted file mode 100644 index 397b023b..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/panel.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/panel_label.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/panel_label.png deleted file mode 100644 index 59ffc52e..00000000 Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/panel_label.png and /dev/null differ diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/signal.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/signal.png deleted file mode 100644 index 
2d3f60c2..00000000
Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/signal.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb1.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb1.png
deleted file mode 100644
index 1d4b10ee..00000000
Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb1.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb2.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb2.png
deleted file mode 100644
index 5f1a437a..00000000
Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb2.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb3.png b/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb3.png
deleted file mode 100644
index 0d023c2f..00000000
Binary files a/docs/source/programming_resources/vision/tensorflow_pp_2022/images/wb3.png and /dev/null differ
diff --git a/docs/source/programming_resources/vision/tensorflow_pp_2022/tensorflow_pp_2022.rst b/docs/source/programming_resources/vision/tensorflow_pp_2022/tensorflow_pp_2022.rst
deleted file mode 100644
index 6e9b0a96..00000000
--- a/docs/source/programming_resources/vision/tensorflow_pp_2022/tensorflow_pp_2022.rst
+++ /dev/null
@@ -1,373 +0,0 @@
-TensorFlow for POWERPLAY presented by Raytheon Technologies
-===========================================================
-
-What is TensorFlow?
-~~~~~~~~~~~~~~~~~~~
-
-*FIRST* Tech Challenge teams can use `TensorFlow Lite `__,
-a lightweight version of
-Google’s `TensorFlow `__ machine learning
-technology that is designed to run on mobile devices such as an Android
-smartphone. A *trained TensorFlow model* was developed to recognize the
-three game-defined images on the Signal element used in the **2022-2023
-POWERPLAY presented by Raytheon Technologies** challenge.
-
-.. figure:: images/signal.png
-   :align: center
-   :alt: POWERPLAY Signal
-   :height: 400px
-
-   This season’s TFOD model can recognize Signal image elements
-
-TensorFlow Object Detection (TFOD) has been integrated into the control system
-software to identify these Signal images during a match. The SDK (version
-8.0) contains TFOD Sample Op Modes and Detection Models that can
-recognize and differentiate between the Signal images: Bolt (green lightning
-bolt), Bulb (4 yellow light bulbs), and Panel (purple solar panels).
-
-.. note::
-   TensorFlow Lite runs on Android 6.0 (Marshmallow) or higher, a requirement met
-   by all currently allowed devices. If you are a Blocks programmer using an
-   older/disallowed Android device that is not running Marshmallow or higher, TFOD
-   Blocks will automatically be missing from the Blocks toolbox or design palette.
-
-How Might a Team Use TensorFlow This Season?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-For this season’s challenge, during the pre-Match stage a single die is rolled
-and the field is randomized. The random value of the die determines how field
-reset staff rotate the Signal to show the robot one of its three images; the
-images are offset 120 degrees around the Signal so that only the chosen image
-is visible. Robots must independently determine which of the three images
-(Image 1, Image 2, or Image 3, indicated by the number of dots above the
-image, either on the Signal stickers or on the Team-Specific Signal Sleeve) is
-showing. Once the robot has correctly identified the image being shown, it
-knows in which zone to end the Autonomous Period for additional points.
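-
-As a rough illustration, a Java op mode might map the recognized image to a
-zone like this. This is a minimal sketch, assuming an SDK 8.0-era
-``TFObjectDetector`` named ``tfod`` has already been initialized and activated
-as in the sample op modes; the exact label strings are assumptions for
-illustration, so check the labels your own model reports.
-
-.. code-block:: java
-
-   import java.util.List;
-   import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
-
-   // ...inside the op mode loop, after waitForStart()...
-   int signalZone = 0;                                   // 0 = nothing seen yet
-   List<Recognition> recognitions = tfod.getUpdatedRecognitions();
-   if (recognitions != null) {                           // null until a new frame arrives
-       for (Recognition recognition : recognitions) {
-           String label = recognition.getLabel();
-           if (label.contains("Bolt"))  signalZone = 1;  // assumed label names
-           if (label.contains("Bulb"))  signalZone = 2;
-           if (label.contains("Panel")) signalZone = 3;
-           telemetry.addData("Detected", "%s (%.0f%% confidence)",
-                   label, recognition.getConfidence() * 100);
-       }
-   }
-   telemetry.addData("Signal zone", signalZone);
-   telemetry.update();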
-
-Sample Op Modes
-~~~~~~~~~~~~~~~
-
-Teams have the option of using a custom inference model with the *FIRST* Tech
-Challenge software or using the game-specific default model provided. The
-*FIRST* Machine Learning Toolchain is a streamlined tool for training your
-own TFOD models.
-
-The *FIRST* Tech Challenge software (Robot Controller App and Android Studio
-Project) includes sample op modes (Blocks and Java versions) that demonstrate
-how to use **the default inference model**. These tutorials use examples from
-previous *FIRST* Tech Challenge seasons, but the process they demonstrate
-applies in any season.
-
-- :doc:`Blocks Sample Op Mode for TensorFlow Object Detection <../blocks_tfod_opmode/blocks-tfod-opmode>`
-- :doc:`Java Sample Op Mode for TFOD <../java_tfod_opmode/java-tfod-opmode>`
-
-Using the sample op modes, each of the Signal images can be detected. Here are
-a few examples of detecting the images.
-
-.. grid:: 1 2 2 3
-   :gutter: 2
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Example Image 1
-
-      ^^^
-
-      .. figure:: images/bolt.png
-         :align: center
-         :alt: BoltDetection
-         :width: 100%
-
-      +++
-
-      Example Detection of a Bolt
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Example Image 2
-
-      ^^^
-
-      .. figure:: images/bulb.png
-         :align: center
-         :alt: BulbDetection
-         :width: 100%
-
-      +++
-
-      Example Detection of a Bulb
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Example Image 3
-
-      ^^^
-
-      .. figure:: images/panel.png
-         :align: center
-         :alt: PanelDetection
-         :width: 100%
-
-      +++
-
-      Example Detection of a Panel
-
-Note that if a detection falls below the minimum confidence threshold, it
-will not be shown, so it is important to set the minimum confidence threshold
-appropriately.
-
-.. note::
-   The default minimum confidence threshold provided in the Sample Op Mode is
-   only an example; depending on local conditions (lighting, image wear,
-   etc.) it may be necessary to lower the minimum confidence to increase the
-   likelihood that TensorFlow reports all possible detections.
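-
-For Java programmers, a minimal sketch of lowering the threshold at
-initialization time follows. It assumes the same Vuforia/TFOD setup as the
-Java sample op mode (``vuforia`` created beforehand), and the ``0.60f`` value
-is only an illustrative choice, not a recommendation.
-
-.. code-block:: java
-
-   import org.firstinspires.ftc.robotcore.external.ClassFactory;
-   import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;
-
-   // ...during op mode initialization, after creating `vuforia`...
-   TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters();
-   tfodParameters.minResultConfidence = 0.60f;  // lowered from the sample's default
-   TFObjectDetector tfod =
-           ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);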
-
-Default POWERPLAY Model Detection Notes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As shown in the previous examples, with the default POWERPLAY TensorFlow model
-it is sometimes more common for TensorFlow to recognize and label partial
-image areas (upper or lower portions of the images) than whole images. This
-is likely due to how the training set was developed.
-
-To ensure as many detections as possible for a given set of images, the
-training set included frames containing both complete and partial images. As
-it happened, the frames included more upper and lower partial images than
-whole images, and TensorFlow's neural network appears to "prefer" recognizing
-partial images over whole images. Such biases are common.
-
-For additional context, here are a few examples of labeled frames that were
-used to train the default TensorFlow model.
-
-.. grid:: 1 2 2 3
-   :gutter: 2
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Example Training Frame 1
-
-      ^^^
-
-      .. figure:: images/bolt_label.png
-         :align: center
-         :alt: BoltLabel
-         :width: 100%
-
-      +++
-
-      Example Training for a Bolt
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Example Training Frame 2
-
-      ^^^
-
-      .. figure:: images/bulb_label.png
-         :align: center
-         :alt: BulbLabel
-         :width: 100%
-
-      +++
-
-      Example Training for a Bulb
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Example Training Frame 3
-
-      ^^^
-
-      .. figure:: images/panel_label.png
-         :align: center
-         :alt: PanelLabel
-         :width: 100%
-
-      +++
-
-      Example Training for a Panel
-
-Understanding Backgrounds For Signal Sleeves
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-When thinking about how to develop a custom Signal Sleeve, it's easy to
-overlook one of the most important elements that can make or break your
-ability to detect objects - your image background. TensorFlow attempts to
-identify common background material and "ignore" it when detecting labeled
-objects; a great example of this is the white background on the sticker. That
-white background posed quite a challenge, one that teams should be aware of
-if attempting to develop their own images for their Signal Sleeves.
-
-If the same background is always present, and always has similar
-characteristics in the training data, TensorFlow may assume the background
-isn't actually a background and is really part of the image. TensorFlow may
-then expect to always see that specific background with the objects. If the
-background of the image then varies for whatever reason, TensorFlow may not
-recognize the image against the new background.
-
-A great example of this occurred in 2021 Freight Frenzy: the duck model was
-trained to recognize a rubber duck, and the rubber duck happened to always be
-present on a gray mat tile within the training frames. The model came to
-"expect" a gray mat tile in the background, and rubber ducks seen without the
-gray mat tile had a significantly reduced detection rate.
-
-In POWERPLAY, the white sticker background is always present, but the white
-color of the background can be unintentionally altered by the lighting in the
-room; warmer lights cause the white to turn yellow or orange, cooler lights
-cause the white to turn more blue, and glare causes a gradient of colors to
-appear across the white background. Algorithms can sometimes apply a "white
-balance" correction to restore the colors, but requiring such tools and
-adjustments may be beyond the grasp of the average user.
-(See :doc:`White Balance Control
-`
-and :doc:`White Balance Control Mode
-`
-for more information about adjusting white balance
-programmatically within the SDK's Java language libraries.)
-
-To make TensorFlow less sensitive to white balance within the frame, and to
-ignore the white altogether, a suite of different lighting scenarios was
-replicated and used to train the model, in the hope that TensorFlow would
-come to treat the "areas of changing colors" produced by the different
-lighting as background and focus on the images themselves. This is ultimately
-what made the default model successful. Below are some examples of the
-lighting conditions used to train the model.
-
-.. grid:: 1 2 2 3
-   :gutter: 2
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Lighting Scenario 1
-
-      ^^^
-
-      .. figure:: images/wb1.png
-         :align: center
-         :alt: White Balancing 1
-         :width: 100%
-
-      +++
-
-      Example Lighting Scenario #1
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Lighting Scenario 2
-
-      ^^^
-
-      .. figure:: images/wb2.png
-         :align: center
-         :alt: White Balancing 2
-         :width: 100%
-
-      +++
-
-      Example Lighting Scenario #2
-
-   .. grid-item-card::
-      :class-header: sd-bg-dark font-weight-bold sd-text-white
-      :class-body: sd-text-left body
-
-      Lighting Scenario 3
-
-      ^^^
-
-      .. figure:: images/wb3.png
-         :align: center
-         :alt: White Balancing 3
-         :width: 100%
-
-      +++
-
-      Example Lighting Scenario #3
-
-It is recommended that teams choose a background that is resistant to being
-"altered" by lighting conditions and doesn't appear anywhere else on the game
-field, or, if they are Java programmers, adjust the :doc:`White Balance Control
-`
-programmatically.
-
-Selecting Images For Signal Sleeves
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Selecting images to use for your custom Signal Sleeve can seem daunting.
-Questions swirl like "What images are going to be recognized best?", "Why
-were the images used in the Default Model chosen?", and "How do I make this
-easier on myself?". Hopefully this section will help you understand the image
-selection used for the Default Model and inform your own decisions for your
-Signal Sleeve.
-
-First, it's important to note that TensorFlow has the following
-quirks/behaviors (a rough sketch of the first point follows this list):
-
-- To run TensorFlow on mobile phones, *FIRST* Tech Challenge uses a very
-  small core model resolution: the high-definition webcam image is downscaled
-  to one that is only 300x300 pixels, so medium and small objects within the
-  webcam image may be reduced to very small, indistinguishable clusters of
-  pixels in the target image. Keep the objects in the camera's view large,
-  and train for a wide range of image sizes.
-- TensorFlow is not very good at differentiating geometric shapes. TensorFlow
-  Object Detection is an object classifier, and similar geometric shapes will
-  classify similarly. Humans are, at present, much better at differentiating
-  geometric shapes than neural-net algorithms like TensorFlow.
-- TensorFlow is great at pattern detection, color differentiation, and image
-  textures. For instance, TensorFlow can easily be trained to recognize the
-  difference between zebras and horses, but it would not be able to
-  differentiate between specific zebras' stripe patterns to identify, for
-  example, "Carl the Zebra."
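-
-As a back-of-the-envelope illustration of the first quirk, the sketch below
-estimates how much an object shrinks when the frame is downscaled to the
-model's input. The 640x480 webcam resolution and the object size are
-assumptions chosen only to show the scale of the effect.
-
-.. code-block:: java
-
-   public class DownscaleEstimate {
-       public static void main(String[] args) {
-           double frameWidthPx  = 640.0;  // assumed webcam frame width
-           double objectWidthPx = 100.0;  // object width as seen in that frame
-           double modelInputPx  = 300.0;  // TFOD core model input resolution
-
-           double scaledWidth = objectWidthPx * (modelInputPx / frameWidthPx);
-           System.out.printf("Object shrinks from %.0f px to roughly %.0f px "
-                   + "in the model's input.%n", objectWidthPx, scaledWidth);
-           // ~47 px of a 300 px frame: fine detail inside the object is largely
-           // lost, which is why large, high-contrast patterns survive better
-           // than small geometric shapes.
-       }
-   }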
-
-The default images were chosen for several design factors:
-
-- Images needed to be vertically short and horizontally long. When setting
-  the TensorFlow zoom factor above 1.0, the aspect ratio causes the zoom
-  window to be wider horizontally than vertically; even at modest zoom
-  factors the zoom window shrinks to be vertically smaller than the sticker
-  itself, even at the minimum distance from the robot (18 inches). To allow
-  more than one detection within the window, and to provide wide margins for
-  adjusting the camera during robot setup, images that are horizontally wide
-  and vertically short were desired. Thanks to the season theme, the green
-  lightning bolt from the *FIRST* Energize season logo was chosen first. The
-  green color and the zig-zag pattern on the top and bottom of the bolt were
-  desirable elements for TensorFlow.
-- TensorFlow's ability to detect patterns better than shapes was used in two
-  ways in the "Bulb" image: the repeated bulb created a repeating pattern
-  that TensorFlow could recognize, and the image was colored differently from
-  anything else TensorFlow might see on the sticker background, the cones
-  themselves, or the green lightning bolt. Yellow was selected as the color
-  of the repeating light bulbs. It helped that the light bulb had an art
-  style similar to the lightning bolt and even fit the theme, though that
-  wasn't a hard requirement.
-- Finally, the solar panels were selected similarly to the bulbs. The grid
-  pattern within the solar panels made for a unique pattern element not
-  present in the other images, and the purple color helped set it apart as
-  well.
-
-Once the images were selected, only basic tweaks were made to them for use in
-POWERPLAY. For example, the images were modified to have relatively similar
-aspect ratios and sizes to aid in uniformity of setup, and it was determined
-that TensorFlow could be trained to recognize elements of each image fairly
-well.
-
-When selecting images for use with TensorFlow, keep in mind the elements of
-pattern, color, and size. For example, a donut can be a great image for use
-by TensorFlow; not because of the circular shape, but because of the frosting
-and the sprinkles on top, which create a highly distinctive pattern for
-TensorFlow to recognize. Be creative!
\ No newline at end of file