Pokemon Demo Ease of Use #1

Open · wants to merge 2 commits into main
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "scraper"]
path = scraper
url = git@github.com:ben-hawks/pokedex_scraper.git
357 changes: 357 additions & 0 deletions PYNQ/Demo_Camera_v3.ipynb
@@ -0,0 +1,357 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Real-Time Edge AI on FPGAs - Live Pokemon Card Recognition!\n",
"\n",
"## Setup\n",
"\n",
"\n",
"### Make sure the USB Camera is plugged into **the PYNQ FPGA Dev Board, not the normal computer (Intel NUC)!!!!!**\n",
"\n",
"### Cable Setup (Sanity Check) \n",
"If you're reading this, the board is likely plugged in and setup correctly for the most part - but make sure:\n",
"* The ethernet cable is plugged straight into the intel NUC, and the intel NUC has a static IP set to `192.168.2.XX`, where `XX` is any number between 0-255 **that is not 99**, as the dev board itself is set to `192.168.2.99`\n",
" * If we're at DEFCON, we don't need to take any chances plugging this into any real network! We just need one computer to access and run this notebook.\n",
"* the HDMI cable is plugged into, and the monitor powered on, **before running the notebook**\n",
"* the USB camera is plugged into the Pynq Dev board itself, not the host computer! \n",
"\n",
"\n",
"### Camera setup & tweaks\n",
"Once all cables are verified as connected, press the \"Restart and Run all cells\" button at the top of the notebook (the fast forward icon/two arrows connected to eachother). This will start the actual video output of the USB webcam, which you will need to use to position/set the zoom, focus, etc. \n",
" * The goal is to position the camera so that the art of the card, when placed on the table in the printed box under the camera, will take up the entire frame and be in focus. \n",
" * Use the three adjustable rings on the camera lens itself to adjust zoom, aperature (brightness), and focus - in that order. You can use the little screws on each ring to lock it in place once properly set. \n",
" * If desired, you can set the script to display a \"camera calibration mode\" below via setting `calibrate_camera_mode = True`, but remember to set it back to `False` once your done, then Restart and Run All cells to output the actual display image\n",
" \n",
"Additionally, depending on the monitor/setup used, you might have to change the color mode. By default, BGR is used and assumed, but if the colors look weird, set `color_mode_bgr` to `False` (or `True` to switch back to BGR mode), then Restart & Re-run all cells to switch to RGB mode. \n",
"\n",
"## Running the Demo\n",
"\n",
"Once setup is complete, the demo hardware itself should be pretty hands off! It's up to you to talk to folks and walk them through the concept and idea of what's going on. More material for that is provided down below/along side this info!\n",
"\n",
"Folks are free to play with, rotate, etc. the fake pokemon cards to see what happens with the demo. Despite being just printed on some cardstock, they're not intended to be taken! We have a decent number of extras, but not enough to hand out. If some do disappear, there should be a stock of replacements nearby. Try to keep a decent balance of the pokemon that are out on the table! \n",
"\n",
"### Troubleshooting\n",
"\n",
"If you run into issues, try the following. If all else fails, restart and run all cells, and if that doesn't work, restart the whole dev board and/or call Ben Hawks (contact info left with AI Village staff) \n",
"\n",
"* The output froze!\n",
" * Try restart & run all cells. If that doesn't work, stop the cells and the kernel, unplug and power-cycle the display (if possible), plug it back in, then restart and run all cells. \n",
"* After multiple iterations/restarts of the notebook, the display output image is shifting and wrapping!\n",
" * This is a known issue with an unknown cause. Restarting the whole dev board is the only reliable way of fixing this. HDMI is weird. \n",
"* the colors are weird!\n",
" * the display probably expects a different mode. If this happens randomly, try restarting the display and/or restart&run all cells in the notebook. If that fails restart the dev board. \n",
"* The USB Camera isn't displaying/opening/connecting properly!\n",
" * Double check the normal usb connection, but also the weird barrel jack connector in the middle of the cable. also make sure it's plugged into the USB on the Pynq Dev board, not the host computer/Intel NUC. \n",
"* One of the cells is repeatedly giving an error!\n",
" * If all your connections are okay, and all the cells above that one have been run (and without any errors), call Ben Hawks. This shouldn't happen, probably?!\n",
"* It's not predicting the right pokemon!\n",
" * yeah it does that sometimes. The model is very small, and trained with a somewhat bad dataset. It should be able to get most cards after some rotating and movement, but if it seems like it's _never_ getting anything right, even with perfect framing and trying multiple cards of the same pokemon (Onix is pretty reliable), call Ben Hawks. \n",
"* I can't access the Jupyter server/webpage to start the notebook!\n",
" * The CPU itself is pretty small (dual core ARM) so it might take a minute to load pages, but if the connection is timing out, try restarting the dev board, but plug the ethernet cable in *before* powering it on. If the ethernet cable isn't connected, it gets a bit weird and doesn't automatically assign itself an ipv4 address without restarting the `networking` service (`sudo systemctl restart networking`). You can do this over the USB Serial connection (115200 baud, via the micro USB port next to the ethernet port) if you'd like, but the simplest way is to just restart the whole dev board.\n",
" \n",
" \n",
"**TL;DR - Try restarting the notebook, if that fails the dev board, and if that fails, call Ben Hawks (AIV Staff have contact info)**"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# Set to false if colors seem weird on display - likely an issue with display using BGR or RGB mode... \n",
"color_mode_bgr = True\n",
"\n",
"# Set to true if you need to set/calibrate the USB webcam position, zoom, focus, etc. \n",
"# outputs just a simplified view of the whole webcam frame if true. \n",
"# !!! Must set to True then Restart & Run all cells to go back to normal operation! !!!\n",
"calibrate_camera_mode = False\n",
"\n",
"# Count of how many frames to average the prediction over \n",
"# Higher num will likely give more \"accurate\" results, but have a noticable lag and make it harder to demonstrate some issues\n",
"# Lower num will have less of a lag when changing/placing new cards,but be less \"accurate\" overall\n",
"# 10 seems to be a good \"sweet spot\"\n",
"rolling_predict_frames = 15\n"
]
},
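{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal, standalone sketch of the rolling prediction described above (not used by the demo loop itself): keep the last `rolling_predict_frames` per-frame argmax results in a fixed-length array, then report the most common class via `np.bincount(...).argmax()`. The class indices fed in below are made up purely for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"window = np.zeros(rolling_predict_frames, dtype=np.int32) # rolling buffer of class IDs\n",
"for last_predict in [6]*12 + [3]*3: # pretend per-frame argmax results (12x class 6, 3x class 3)\n",
"    window = np.append(window[1:], last_predict) # drop the oldest vote, append the newest\n",
"print(np.bincount(window).argmax()) # -> 6, the most common class in the window"
]
},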
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Capture device is open: True\n"
]
}
],
"source": [
"# initialize camera from OpenCV\n",
"import cv2 as cv\n",
"\n",
"videoIn = cv.VideoCapture(cv.CAP_V4L)\n",
"videoIn = cv.VideoCapture(-1)\n",
"#videoIn = cv.VideoCapture('/dev/v4l/by-id/usb-HD_Web_Camera_HD_Web_Camera_Ucamera001-video-index0')\n",
"while(videoIn.isOpened() == False):\n",
" #videoIn = cv.VideoCapture(cv.CAP_V4L)\n",
" videoIn = cv.VideoCapture(-1)\n",
" #videoIn = cv.VideoCapture('/dev/v4l/by-id/usb-HD_Web_Camera_HD_Web_Camera_Ucamera001-video-index0')\n",
" videoIn.set(cv.CAP_PROP_FRAME_WIDTH, 640)\n",
" videoIn.set(cv.CAP_PROP_FRAME_HEIGHT, 480)\n",
"\n",
"\n",
"print(\"Capture device is open: \" + str(videoIn.isOpened()))"
]
},
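{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional sanity check, assuming the capture device opened above: grab a single frame and confirm it has the expected 640x480, 3-channel (BGR) shape before wiring it into the HDMI pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Grab one frame and print its shape - expect: True (480, 640, 3)\n",
"ret, frame = videoIn.read()\n",
"print(ret, frame.shape if ret else None)"
]
},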
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"application/javascript": [
"\n",
"require(['notebook/js/codecell'], function(codecell) {\n",
" codecell.CodeCell.options_default.highlight_modes[\n",
" 'magic_text/x-csrc'] = {'reg':[/^%%microblaze/]};\n",
" Jupyter.notebook.events.one('kernel_ready.Kernel', function(){\n",
" Jupyter.notebook.get_cells().map(function(cell){\n",
" if (cell.cell_type == 'code'){ cell.auto_highlight(); } }) ;\n",
" });\n",
"});\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Load the Neural network FPGA firmware\n",
"import numpy as np\n",
"from axi_stream_driver import NeuralNetworkOverlay\n",
"\n",
"X_shape = (32, 32, 3)\n",
"y_shape =(10)\n",
"\n",
"nn = NeuralNetworkOverlay('../Video_v3.bit', X_shape, y_shape)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# Load the background image for HDMI output\n",
"bg_img = cv.imread(\"hls4ml_pokemon_demo_bg.png\") #defaults to BGR, uncomment below if needed?\n",
"# bg_img = cv2.cvtColor(bg_img, cv2.COLOR_BGR2RGB)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<contextlib._GeneratorContextManager at 0xa5372930>"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Steup and start HDMI output, make sure it's connected before running the cell!\n",
"from pynq.lib.video import *\n",
"\n",
"# monitor configuration: 1280*720 @ 60Hz\n",
"Mode = VideoMode(1280,720,24)\n",
"hdmi_out = nn.video.hdmi_out\n",
"\n",
"if color_mode_bgr:\n",
" hdmi_out.configure(Mode,PIXEL_BGR)\n",
"else:\n",
" hdmi_out.configure(Mode,PIXEL_RGB)\n",
" \n",
"hdmi_out.start()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Frames per second: 25.90793503645379\n"
]
}
],
"source": [
"import time\n",
"start = time.time()\n",
"for NumOfFrames in range (50): \n",
" ret, frame = videoIn.read()\n",
" if (ret):\n",
" outframe = hdmi_out.newframe()\n",
" outframe[0:480,0:640,:] = frame[0:480,0:640,:] \n",
" hdmi_out.writeframe(outframe)\n",
" else:\n",
" raise RuntimeError(\"Failed to read from camera.\")\n",
"end = time.time()\n",
"print(\"Frames per second: \" + str(50 / (end - start)))"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"## used to calibrate camera! Stop the cell, then set \"calibrate_camera_mode\" to false, then restart & run all when done!\n",
"while calibrate_camera_mode: \n",
" ret, frame = videoIn.read()\n",
" if (ret):\n",
" outframe = hdmi_out.newframe()\n",
" outframe[0:480,0:640,:] = frame[0:480,0:640,:] \n",
" hdmi_out.writeframe(outframe)\n",
" else:\n",
" raise RuntimeError(\"Failed to read from camera.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Actual Running mode!\n",
"\n",
"# Capture webcam video and display to HDMI Output\n",
"%matplotlib inline \n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"rolling_predict = np.zeros(rolling_predict_frames, dtype=np.int32)\n",
"#Prediction Text Location - Center Bottom of Screen (720p)\n",
"x,y,w,h = 550,660,150,40\n",
"start_time = time.time()\n",
"# FPS update time in seconds\n",
"display_time = 1\n",
"fc = 0\n",
"FPS = 0\n",
"fps_disp = \"\"\n",
"while (1):\n",
" # Read in image from webcam\n",
" ret, frame = videoIn.read()\n",
" if (ret):\n",
" \n",
" # Calculate FPS \n",
" fc+=1\n",
" TIME = time.time() - start_time\n",
" if (TIME) >= display_time:\n",
" FPS = fc / (TIME)\n",
" fc = 0\n",
" start_time = time.time()\n",
" fps_disp = \"FPS: \"+str(FPS)[:5]\n",
"\n",
" #preprocess image before passing to neural network, - Crop to square and resize to 32*32px\n",
" outframe = hdmi_out.newframe()\n",
" cropped_frame = frame[0:480, 0:480]\n",
" if color_mode_bgr:\n",
" RGB_img = cropped_frame #Desired input color format depends on monitor, USB Webcam returns RGB\n",
" else:\n",
" RGB_img = cv.cvtColor(cropped_frame, cv.COLOR_RGB2BGR)\n",
" resized = cv.resize(RGB_img, (32, 32))\n",
" resized_scaled = resized/255.\n",
" \n",
" # Send pre-processed image to FPGA Neural Network!\n",
" #y_hw, latency, throughput = nn.predict(resized_scaled, debug=False, profile=True)\n",
" y_hw = nn.predict(resized_scaled, debug=False, profile=False)\n",
" \n",
" # Get prediction back from FPGA, determine which pokemon to display as our prediction (rolling prediction)\n",
" percentage = np.array(y_hw)\n",
" last_predict = np.argmax(percentage)\n",
" if rolling_predict_frames > 1:\n",
" rolling_predict = rolling_predict[1:] # pop oldest element from our predictions\n",
" rolling_predict = np.append(rolling_predict, last_predict) # add newest prediction\n",
" percentage_max = np.bincount(rolling_predict).argmax() # get the most common prediction over the last N frames\n",
" else: # skip the whole array operations if set rolling pred is = 1, just for the sake of speed (+ ~0.25-0.5 FPS)\n",
" percentage_max = last_predict\n",
" if percentage_max == 0:\n",
" text = \"Bulbasaur\"\n",
" elif percentage_max == 1: \n",
" text = \"Charmander\"\n",
" elif percentage_max == 2:\n",
" text = \"Eevee\"\n",
" elif percentage_max == 3: \n",
" text = \"Gengar\"\n",
" elif percentage_max == 4: \n",
" text = \"Jigglypuff\"\n",
" elif percentage_max == 5: \n",
" text = \"Mewtwo\"\n",
" elif percentage_max == 6: \n",
" text = \"Onix\"\n",
" elif percentage_max == 7: \n",
" text = \"Pikachu\"\n",
" elif percentage_max == 8: \n",
" text = \"Snorlax\"\n",
" elif percentage_max == 9: \n",
" text = \"Squirtle\"\n",
" else: \n",
" text = \"NA\"\n",
" \n",
" # Construct our output HDMI video frame\n",
" scaled_input = cv.resize(resized, (480, 480),interpolation=cv.INTER_AREA) #resize our 32px image to 480px size, but at the same resolution\n",
" outframe[0:720,0:1280,:] = bg_img #Set background image\n",
" outframe[0:480,0:480,:] = RGB_img #add full resolution input image\n",
" outframe[0:480,800:1280,:] = scaled_input # add actual model input resolution image \n",
" cv.rectangle(outframe, (x,y), (x + w, y + h), (255,255,255), -1) # Add rectange & prediction text\n",
" cv.putText(img=outframe, text=text,org=(x+int(w/10),y+int(4*h/5)), fontFace=cv.FONT_HERSHEY_DUPLEX, fontScale=1, color=(255,0,0), thickness=1)\n",
" cv.putText(outframe, fps_disp, (10, 25), cv.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)\n",
" hdmi_out.writeframe(outframe) # Output the final constructed frame\n",
" else:\n",
" print(\"Failed to read from camera.\")\n",
" break"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### hdmi_out.close()"
]
}
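,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal cleanup sketch, assuming the `videoIn` and `hdmi_out` objects from the cells above. The `demo_finished` flag is just a hypothetical guard so \"Restart & Run All\" doesn't tear the demo down mid-run; flip it to `True` and run the cell when packing up."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Release the camera and HDMI output so the next Restart & Run All can re-open them cleanly\n",
"demo_finished = False # hypothetical guard flag; set to True when done\n",
"if demo_finished:\n",
"    videoIn.release() # release the V4L capture device\n",
"    hdmi_out.close()  # stop and release the HDMI output"
]
}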
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}