Most seasons we use a camera on our robot to help us do things like shoot at a goal or locate objectives on the field. Doing this quickly and accurately without human involvement (or autonomously) is a key part of our game plan.
The objective of this Stryke Force Olympics event is to learn how to use computer vision to determine the position of a simulated robot on the field.
Table of Contents
- Challenges
- Scoring
- Team Information
There are three challenges in this event, each incrementally more difficult than the previous. Each objective earns points towards winning the computer vision event.
This repository contains a template robot project to get you started as well as sample target images you will use while developing and testing your solutions.
The sample images were taken of a 2020-21 Infinite Recharge target with the Deadeye vision system and also come with data regarding the field position they were taken from. You can use these sample images to check the accuracy of your programs; just be aware—new photos will be provided for the actual event!
Given an image, how well can you screen out false targets and isolate the real target? Points are awarded for obtaining accurate data returned by the Deadeye system.
This first challenge will get you familiar with using the Deadeye dashboard and getting target data to your robot program. You will simply need to complete Deadeye's Quickstart walkthrough to get two points!
This challenge will use the target data returned from Deadeye in a simulated robot shooting command.
- For this challenge, configure the Deadeye dashboard to return target data from the large U-shaped target in the provided test image.
- Create or reuse a robot program with a configured `DeadeyeX0` class (where `X` is the Deadeye unit you are using).
- Create a `ShooterSubsystem` and a `ShootCommand` in your robot program.
- The `ShootCommand` calls `ShooterSubsystem.shoot()` when a controller button is pressed.
- When the `ShooterSubsystem.shoot()` method is called, it will use the `DeadeyeX0` object to retrieve the U-shaped target coordinates and print them in the log.
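A minimal sketch of how these pieces could fit together, using the WPILib command framework from the 2020-21 season. The `TargetSource` interface below is only a stand-in for whatever accessor the template's `DeadeyeX0` class actually provides; check the template project for the real class and method names.

```java
import edu.wpi.first.wpilibj2.command.CommandBase;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

// Stand-in for the template's DeadeyeX0 class -- replace with the real accessor names.
interface TargetSource {
  boolean isTargetValid();
  double getTargetX(); // horizontal center of the target, pixels
  double getTargetY(); // vertical center of the target, pixels
}

class ShooterSubsystem extends SubsystemBase {
  private final TargetSource deadeye;

  public ShooterSubsystem(TargetSource deadeye) {
    this.deadeye = deadeye;
  }

  /** Looks up the current target coordinates and prints them in the log. */
  public void shoot() {
    if (!deadeye.isTargetValid()) {
      System.out.println("shoot: no valid target");
      return;
    }
    System.out.printf("shoot: target at x=%.1f px, y=%.1f px%n",
        deadeye.getTargetX(), deadeye.getTargetY());
  }
}

class ShootCommand extends CommandBase {
  private final ShooterSubsystem shooter;

  public ShootCommand(ShooterSubsystem shooter) {
    this.shooter = shooter;
    addRequirements(shooter);
  }

  @Override
  public void initialize() {
    shooter.shoot();
  }

  @Override
  public boolean isFinished() {
    return true; // one-shot command: print once per button press
  }
}
```

In `RobotContainer` the command would then be bound to a controller button, for example `new JoystickButton(driverController, 1).whenPressed(new ShootCommand(shooter))` (the controller name and button number here are just examples).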
Repeat challenge two, this time returning target data from the large vertical line target in the provided test image.
Use the data returned by the Deadeye system to calculate the angle and range to the target with respect to the robot.
- Select and configure a Deadeye vision pipeline like you did in challenge 1. Sample images and the field position data they were taken from are in the samples folder. Note the differences in target data returned by each of the different pipeline types.
- Create or reuse a robot program with a configured `DeadeyeX0` class (where `X` is the Deadeye unit you are using).
- Create a `ShooterSubsystem` and a `ShootCommand` in your robot program.
- The `ShootCommand` calls `ShooterSubsystem.shoot()` when a controller button is pressed.
- When the `ShooterSubsystem.shoot()` method is called, it will use the `DeadeyeX0` object to retrieve the U-shaped target coordinates and print the range to the target and the angle to the target centerline from the camera centerline.
Points are awarded based on the least angle and range error.
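One common way to turn the target's pixel coordinates into an angle and range is fixed-camera geometry: a linear degrees-per-pixel approximation for the angles, plus the known camera and target heights for the range. The sketch below assumes that approach; every constant in it is a placeholder you would replace with measurements of your own camera and the actual target.

```java
/** Converts target pixel coordinates into an angle and range using fixed camera geometry. */
final class TargetGeometry {
  // Placeholder camera and field constants -- measure these for the actual setup.
  static final double IMAGE_WIDTH_PX = 640.0;
  static final double IMAGE_HEIGHT_PX = 480.0;
  static final double HORIZ_FOV_DEG = 60.0;     // camera horizontal field of view
  static final double VERT_FOV_DEG = 45.0;      // camera vertical field of view
  static final double CAMERA_HEIGHT_CM = 60.0;  // lens height above the floor
  static final double CAMERA_PITCH_DEG = 25.0;  // upward tilt of the camera
  static final double TARGET_HEIGHT_CM = 249.0; // height of the target center above the floor

  /** Angle from the camera centerline to the target, degrees (positive to the right). */
  static double yawDegrees(double targetXPx) {
    return (targetXPx - IMAGE_WIDTH_PX / 2.0) * (HORIZ_FOV_DEG / IMAGE_WIDTH_PX);
  }

  /** Floor distance from the camera to the target, centimeters. */
  static double rangeCm(double targetYPx) {
    // Angle above the camera centerline at which the target center appears.
    double pitchOffsetDeg = (IMAGE_HEIGHT_PX / 2.0 - targetYPx) * (VERT_FOV_DEG / IMAGE_HEIGHT_PX);
    double totalPitchRad = Math.toRadians(CAMERA_PITCH_DEG + pitchOffsetDeg);
    // Right triangle: the known height difference and the total pitch angle give the range.
    return (TARGET_HEIGHT_CM - CAMERA_HEIGHT_CM) / Math.tan(totalPitchRad);
  }
}
```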
Having calculated angle and range to the target, can you calculate the position of the robot on the field?
Given the following:
- range to center of target (cm)
- target angle from camera centerline (deg)
- robot camera angle relative to the field (deg)
Calculate the position of the robot on the field using Excel, Google Sheets or Stryke Force Jupyterhub. Show your work.
Points are awarded based on the least position error and how well you can explain your solution.
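One way to set the calculation up, whether in a spreadsheet or in code: the camera-to-target vector has length equal to the range and points along the sum of the camera's field angle and the target angle, so subtracting that vector from the target's known field coordinates gives the camera position. The sketch below assumes that convention; the target coordinates are placeholders, and the exact signs depend on the angle conventions used in the sample data.

```java
/** Field position of the camera from range/angle data and the target's known location. */
final class FieldPosition {
  // Placeholder target location, cm -- use the coordinates supplied with the sample images.
  static final double TARGET_X_CM = 0.0;
  static final double TARGET_Y_CM = 171.0;

  /**
   * @param rangeCm        range to center of target (cm)
   * @param targetAngleDeg target angle from camera centerline (deg)
   * @param cameraAngleDeg robot camera angle relative to the field (deg)
   * @return {x, y} position of the camera on the field (cm)
   */
  static double[] cameraPosition(double rangeCm, double targetAngleDeg, double cameraAngleDeg) {
    // Absolute bearing from camera to target in field coordinates.
    double bearingRad = Math.toRadians(cameraAngleDeg + targetAngleDeg);
    // Step backwards along that bearing from the target to find the camera.
    double x = TARGET_X_CM - rangeCm * Math.cos(bearingRad);
    double y = TARGET_Y_CM - rangeCm * Math.sin(bearingRad);
    return new double[] {x, y};
  }
}
```

In a spreadsheet the same two formulas look like, e.g., `=Tx - range * COS(RADIANS(cameraAngle + targetAngle))` for x, with the matching `SIN` form for y.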
Implement your solution to objective one as robot code using a `Command` and `Subsystem`.
Given a sample target image, write a robot program that calculates the robot's position.
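For the robot-code version, the same math can live in a subsystem method that reads the Deadeye target data and prints the computed position. The sketch below reuses the `TargetSource`, `TargetGeometry`, and `FieldPosition` placeholders from the earlier sketches, so the same caveats about hypothetical names apply.

```java
import edu.wpi.first.wpilibj2.command.SubsystemBase;

/** Prints the robot's field position computed from the latest Deadeye target data. */
class VisionSubsystem extends SubsystemBase {
  private final TargetSource deadeye;       // placeholder Deadeye accessor from the earlier sketch
  private final double cameraFieldAngleDeg; // camera angle relative to the field, deg

  public VisionSubsystem(TargetSource deadeye, double cameraFieldAngleDeg) {
    this.deadeye = deadeye;
    this.cameraFieldAngleDeg = cameraFieldAngleDeg;
  }

  /** Computes and logs the camera's field position; call this from a Command. */
  public void printFieldPosition() {
    if (!deadeye.isTargetValid()) {
      System.out.println("position: no valid target");
      return;
    }
    double yawDeg = TargetGeometry.yawDegrees(deadeye.getTargetX());
    double rangeCm = TargetGeometry.rangeCm(deadeye.getTargetY());
    double[] pos = FieldPosition.cameraPosition(rangeCm, yawDeg, cameraFieldAngleDeg);
    System.out.printf("position: x=%.0f cm, y=%.0f cm%n", pos[0], pos[1]);
  }
}
```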
The following points are awarded to all participating teams in order of finish and applied towards the overall Olympics score.
- first place - 20 points
- second place - 15 points
- third place - 10 points
- fourth place - 5 points
In the case of a tie, the average of the available points will be given to each team. For example, a tie for first place will award the average of the first and second place points to each team (17.5 points each).
You will use the Deadeye vision system's web dashboard to configure the camera to detect the target and filter out false targets.
Each team will have access to its own Deadeye unit.
There are three cameras available in each Deadeye unit, each configured with one of the standard vision processing pipelines. For example, Deadeye unit H has:
- `H0` - UprightRectPipeline
- `H1` - MinAreaRectPipeline
- `H2` - TargetListPipeline