
CVAntennaPattern

Contributors: Patrick Walton

Additional Contributors Welcome

Language: Python

Package Dependencies: OpenCV 3.4, NumPy, SciPy

Vision

Custom antennas are important to scientific, experimental, and hobby work when commercially available antennas lack the required band or performance. ANSYS High Frequency Structure Simulator (HFSS) is a popular tool for modeling the performance of antenna designs, but most custom antenna developers stop there. Validating actual antenna performance, including the antenna pattern, typically requires a bulky patchwork of tools, which discourages developers from doing more than relying on their HFSS results. The vision for CVAntennaPattern is to enable antenna pattern measurement using only a smartphone camera paired with a software-defined radio (SDR). It is hoped that future smartphones will include SDRs, further simplifying the measurement process.

Summary

This repository extracts pose information from a video and integrates it with a sequence of radio signal power values. It outputs the input video with tracked points overlaid and an image plot of the antenna pattern. As the program runs, it also animates the variation in pose and the corresponding plot points of the antenna pattern. To date, testing on the sample videos yields mixed results, suggesting an improved approach to recovering pose from motion is required.

This wiki describes the current methods for pose recovery, signal power measurement, and their integration into the antenna pattern plot. The results and current limitations of CVAntennaPattern are presented, along with alternate methods that were attempted and others that could be tried in the future. Pose recovery faults and minor issues are further logged in GitHub's issues section for clarity to interested collaborators.

Implementation

This repository uses OpenCV's calib3d module for pose recovery, rtl-sdr for measuring signal power, NumPy for data management and integration, and OpenCV's drawing and display features for plotting. The main file is project.py, which manages the input video file, updates the Tracker object, and, optionally, produces the camera matrix from a second video of motion over a checkerboard. Pose from motion is primarily handled by the Tracker class, which uses the Point class to organize points and the Pose class to organize and update pose. Power collection is currently done separately, but the values are read from CSV and managed in the Power class. Incidental to development, plotting is handled by the Pose class.

Pose Recovery

The test videos were captured using a OnePlus 2 in video capture mode. If no camera matrix is provided, project.py first calibrates using an additional calibration video of motion over a checkerboard.
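A rough sketch of this calibration step follows; the board dimensions, parameter values, and helper name are illustrative assumptions, not necessarily what project.py uses.

import cv2
import numpy as np

def calibrate_from_video(path, board_size=(9, 6)):
    # 3D positions of the board's inner corners on its plane (z = 0).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj_points, img_points = [], []
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    cap.release()
    # Solve for the camera matrix K and distortion coefficients.
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    return K, dist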

project.py manages the input video, initializing a Tracker object and updating it at each step. The Tracker object returns the input video frame with the tracked points and their optic flow added. On initialization, the Tracker object finds tracking points using OpenCV's corner detector, cv2.goodFeaturesToTrack, and creates a corresponding list of Point objects. Due to occlusion while circling the antenna, points selected on the antenna tended to stick at its edge instead of traveling with it. To mitigate this, a mask is passed to cv2.goodFeaturesToTrack to prevent points from being selected at the center of the image. At each step, the Tracker object finds these points in the next frame using cv2.calcOpticalFlowPyrLK. Points that leave the frame or enter the center are replaced, again using cv2.goodFeaturesToTrack.
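A minimal sketch of this selection-and-tracking loop, assuming illustrative values for the corner count, mask radius, and Lucas-Kanade window size (the Tracker class's actual parameters may differ):

import cv2
import numpy as np

def init_points(gray, center_radius=100):
    # Mask out the image center so no corners land on the antenna itself.
    mask = np.full(gray.shape, 255, np.uint8)
    h, w = gray.shape
    cv2.circle(mask, (w // 2, h // 2), center_radius, 0, -1)
    return cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                   minDistance=10, mask=mask)

def track(prev_gray, gray, pts):
    # Pyramidal Lucas-Kanade optic flow from the prior frame to this one.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
    keep = status.ravel() == 1
    return pts[keep], new_pts[keep]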

The Tracker object also initializes a Pose object and updates it at each time step. On initialization, the Pose object sets a starting position and rotation. The starting position is assumed to be 20 "pixels" from the object. The starting rotation is the transformation between the world coordinate frame, with x and y horizontal and z vertical, and the camera coordinate frame, where z points out of the camera, x is to the right, and y is down. At each step, the Pose object uses the tracked points to find the rotation and translation between the current image and the prior image using cv2.findEssentialMat and cv2.recoverPose. These are used to update the total rotation and translation with respect to the world frame. The rotation and translation with respect to the world frame are then combined into a homogeneous transformation matrix as [R | t] with [0, 0, 0, 1] appended as a fourth row.
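The per-step update might be sketched as follows, assuming the camera matrix K from calibration; the accumulation scheme is one plausible reading of the description above, not a verbatim copy of the Pose class.

import cv2
import numpy as np

def step_pose(prev_pts, cur_pts, K, R_total, t_total):
    # Relative rotation and (unit-scale) translation between frames.
    E, inliers = cv2.findEssentialMat(prev_pts, cur_pts, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, prev_pts, cur_pts, K, mask=inliers)
    # Accumulate the relative motion into the world-frame pose.
    t_total = t_total + R_total @ t
    R_total = R_total @ R
    # Homogeneous transformation matrix [R | t] with [0, 0, 0, 1] appended.
    T = np.vstack([np.hstack([R_total, t_total]), [[0, 0, 0, 1]]])
    return R_total, t_total, T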

Signal Power Measurement

Currently, the Power object reads in a CSV of signal power values. To collect these values, plug an RTL-SDR into a USB port on your device, install rtl-sdr, and run the following at a Linux command line:

rtl_power -f 914.5M:915.5M:10k -g -10 -i 1s -e 30s antenna.csv

Replace 914.5M:915.5M:10k with the frequency range and bin size matching your antenna. This command integrates power over 1-second intervals (-i 1s) for 30 seconds (-e 30s) at low gain (-g -10).

On initialization, the Power object reads the CSV of power values, converts them to be relative (dB down from the maximum value), and stores them in a NumPy array.
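A rough sketch of that step, assuming the standard rtl_power CSV layout (date, time, frequency bounds, step, and sample count in the first six columns, followed by dB bins):

import csv
import numpy as np

def load_power(path):
    rows = []
    with open(path) as f:
        for row in csv.reader(f):
            # Average the dB bins within this integration interval.
            rows.append(np.mean([float(v) for v in row[6:]]))
    power = np.array(rows)
    # Rescale so every value is dB down from the strongest measurement.
    return power - power.max()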

Data Integration

Since plotting is currently handled by the Pose object, this object handles the integration of the power and pose values internally. Pose.draw() creates a plot frame with the R-G-B lines corresponding to the X-Y-Z axes of the camera frame, transformed into the world frame. World frame X-Y axes and signal power dB scales are added to this plot. At each step, the camera frame axes are multiplied by the homogeneous transformation matrix resulting from the pose recovery at that step to help the user visualize the estimated pose.

Due to issues with position estimation, the position at each step is assumed to be [[0, 0, 0]].T. Thus, only the rotation is applied to the camera frame axes. To integrate the power, at each step, a dot is drawn, projected in the direction of the z-axis of the camera-frame, scaled according to the power level.
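A simplified sketch of the drawing step; the plot center, axis scale, and 40 dB plot range are assumptions for illustration, not Pose.draw()'s actual values.

import cv2
import numpy as np

def draw_step(plot, R_total, power_db, center=(300, 300), scale=100):
    axes = np.eye(3) * scale                 # camera-frame x, y, z axes
    world_axes = R_total @ axes              # position assumed to be zero
    colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # BGR: x=R, y=G, z=B
    for axis, color in zip(world_axes.T, colors):
        tip = (int(center[0] + axis[0]), int(center[1] + axis[1]))
        cv2.line(plot, center, tip, color, 2)
    # Power dot along the world-frame projection of the camera z-axis,
    # with radius scaled by dB relative to the maximum (power_db <= 0).
    z = world_axes[:2, 2]
    r = scale * (1 + power_db / 40.0)
    norm = np.linalg.norm(z)
    if norm > 0:
        dot = (int(center[0] + r * z[0] / norm),
               int(center[1] + r * z[1] / norm))
        cv2.circle(plot, dot, 3, (255, 255, 255), -1)
    return plot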

Results and Limitations

Pose Recovery

As mentioned above, the current implementation has issues with pose recovery. When testing on the Sundial.mp4 sample video, the camera-frame y-axis stays mostly down (into the plot frame), but rotation proceeds at more than twice the actual rate, causing the measurements to loop back over themselves, as shown in the plot below:

Antenna Pattern Plot. Signal power data paired with sample video pose.

The point tracking video corresponding to this plot can be found here.

When testing on the sample videos captured in conjunction with antenna signal power measurement, the performance was further degraded, apparently by the introduction of some off-axis rotation near the beginning. After that, rotation apparently proceeds at more than twice the actual rate, but around a new axis. These results are shown below:

Antenna Pattern Plot. Signal power paired with corresponding antenna video pose.

The point tracking video corresponding to this plot can be found here.

The cause of these issues is unknown. The most likely explanation is that there are issues with point selection and tracking. These steps are suspect because varying the number of tracked points produced widely varying results. A couple of possible point tracking issues follow.

First, occlusion of points on the target object was mentioned above. Removing these points reduced the oscillations caused by the point set gradually settling on the target object, but also changed the rotation rate from roughly accurate on average to much faster than actual.

Second, barring access to a large anechoic chamber, antenna pattern measurement is best done on a large asphalt blacktop. Under some lighting conditions, the texture of the blacktop appears to provide a large number of good corners, causing the majority of points to settle on it (see the videos linked above). This may be causing some issues, but removing these points consistently degraded performance. Further, Nister's 5-point algorithm, on which cv2.recoverPose is based, was originally tested in similar scenarios, such as tracking pose around a group of people standing on grass. It is possible that the OpenCV implementation of Nister's 5-point algorithm, or its Python wrapper, reduces the algorithm's robustness.

Third, handheld video capture contains a large amount of jitter. While the points appear to track with the jitter well, this may still be causing issues with the pose estimation.

Signal Power Measurement

Signal power measurement functions properly, but the current integration method introduces a few inaccuracies.

First, unless the user walks in a perfect circle around the antenna, errors will be introduced into the pattern by differences in free-space path loss between measurements. Put differently, some measurements will be taken at a greater distance, and thus with reduced signal power, than others, leading to an inverse relationship between deformities in the walking path and deformities in the antenna pattern. This could easily be corrected with functional position estimation, since free-space path loss is proportional to the square of the distance from the antenna. Thus, the correction factor F_c = r_estimated**2 / r_nominal**2 could be applied to adjust for walking path inaccuracies.
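Since the stored power values are already in dB relative to the maximum, a hypothetical helper could apply the same correction as an additive dB offset (10*log10(F_c) = 20*log10(r_estimated / r_nominal)):

import numpy as np

def correct_path_loss(power_db, r_estimated, r_nominal):
    # Free-space path loss grows with distance squared, so a measurement
    # taken at r_estimated is adjusted back to its value at r_nominal.
    return power_db + 20.0 * np.log10(r_estimated / r_nominal)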

Second, the handheld video feed may not be held perfectly horizontal. This shortens the projection of the x and z axes onto the world-frame plot. Since the z-axis is used to integrate the power measurement, this leads to roughly -1 dB of error in the plotted measurement, although the actual measurement is not affected.

Alternate Methods for Pose Recovery

Several alternate methods of pose recovery have been tested, but none offered significant improvements. These, as well as potential future methods to try, are described below.

It may be that the issue is with the points being tracked. Kalman filtering was applied but degraded performance, since point motion due to jitter is fairly non-deterministic. It may be more effective to improve point selection and tracking with a better algorithm, such as ORB.
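For reference, an ORB-based replacement for corner tracking might look like this sketch; the feature count and matcher settings are illustrative, and the matched pairs could feed cv2.findEssentialMat directly.

import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_orb(prev_gray, gray):
    # Detect and describe features in both frames, then match descriptors.
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(gray, None)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    prev_pts = [kp1[m.queryIdx].pt for m in matches]
    cur_pts = [kp2[m.trainIdx].pt for m in matches]
    return prev_pts, cur_pts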

Rotation could be extracted from cv2.findHomography and cv2.decomposeHomographyMat by masking point selection to include only the near-planar background. This was attempted, but it yields up to four candidate rotations, and attempts at down-selecting to a single rotation estimate failed. This may yet be a viable solution, but there was not enough time left in the project to resolve the selection issues.
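A sketch of that route follows; the final filter, keeping solutions whose plane normal faces the camera, is only one common down-selection heuristic, not necessarily the one that was attempted.

import cv2

def homography_rotations(prev_pts, cur_pts, K):
    H, _ = cv2.findHomography(prev_pts, cur_pts, cv2.RANSAC, 3.0)
    # num is the number of candidate decompositions (up to four).
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Down-selection is the unresolved step; this heuristic keeps
    # solutions whose plane normal points toward the camera.
    return [Rs[i] for i in range(num) if normals[i][2, 0] > 0]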

Given the success reported by Nister with the 5-point algorithm, it is likely there is an issue with OpenCV's implementation. A simpler pose recovery approach was attempted using cv2.decomposeEssentialMat and selecting between the two candidate rotations by testing the trace against a threshold of 2.5 (the correct, small rotation has a trace near 3, while the spurious near-180-degree solution has a much smaller trace). This did not appear to improve performance. Reimplementing Nister's 5-point algorithm may provide better results.
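That simpler recovery might be sketched as follows; here the trace test is phrased as keeping the rotation whose trace is nearer 3, consistent with the threshold described above.

import cv2
import numpy as np

def simple_recover(E):
    # decomposeEssentialMat returns two candidate rotations and a translation.
    R1, R2, t = cv2.decomposeEssentialMat(E)
    # Keep whichever rotation has the larger trace (the smaller rotation).
    return (R1 if np.trace(R1) > np.trace(R2) else R2), t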