This repository has been archived by the owner on Dec 2, 2021. It is now read-only.

commits after update to notebook #16

Open
wants to merge 24 commits into base: master

Changes from all commits — 24 commits
- 547e6bd: interim commit (boldbinarywisdom, Nov 12, 2017)
- 46f8284: working set of scripts (boldbinarywisdom, Nov 18, 2017)
- a1a20e8: Update writeup_template.md (boldbinarywisdom, Nov 18, 2017)
- e1e5a1d: Rename writeup_template.md to Project_Report.md (boldbinarywisdom, Nov 18, 2017)
- fedb7bf: Update Project_Report.md (boldbinarywisdom, Nov 19, 2017)
- 69d9711: Update Project_Report.md (boldbinarywisdom, Nov 19, 2017)
- 99f651e: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- fa8b30f: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- 60bedfb: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- c5aabb6: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- e01fb76: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- ef6d4ea: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- 2d77ba0: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- 6f19895: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- 6166b4b: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- 6144177: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- 93547cc: Update Project_Report.md (boldbinarywisdom, Nov 20, 2017)
- 4bc2f25: final changes (boldbinarywisdom, Nov 20, 2017)
- 8c9874e: Added output images and video (boldbinarywisdom, Nov 21, 2017)
- 4418b3f: Updates with images (boldbinarywisdom, Nov 21, 2017)
- ad3b94a: Update and rename Project_Report.md to ProjectReport.md (boldbinarywisdom, Nov 21, 2017)
- c35b8fe: Update ProjectReport.md (boldbinarywisdom, Nov 21, 2017)
- 140145f: Update ProjectReport.md (boldbinarywisdom, Nov 21, 2017)
- 7098d31: Update ProjectReport.md (boldbinarywisdom, Nov 21, 2017)
177 changes: 177 additions & 0 deletions ProjectReport.md
@@ -0,0 +1,177 @@
## Project: Search and Sample Return

Source code: https://github.com/boldbinarywisdom/RoboND-Rover-Project

The purpose of this project is to write code that drives the rover autonomously through a simulated environment containing objects of interest and terrain the rover cannot traverse, applying the techniques learned in the lessons and building upon them.


## Steps

**Training / Calibration**

* Download the simulator and take data in "Training Mode"
* Test out the functions in the Jupyter Notebook provided
* Add functions to detect obstacles and samples of interest (golden rocks)
* Fill in the `process_image()` function with the appropriate image processing steps (perspective transform, color threshold, etc.) to get from raw images to a map. The `output_image` you create in this step should demonstrate that your mapping pipeline works (a minimal sketch of such a pipeline is given after this list).
* Use `moviepy` to process the images in your saved dataset with the `process_image()` function. Include the video you produce as part of your submission.
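
A minimal sketch of what such a `process_image()` pipeline could look like. It assumes the notebook's helper functions `perspect_transform()`, `color_thresh()`, `rover_coords()` and `pix_to_world()`, plus a `Databucket`-style `data` object with `xpos`, `ypos`, `yaw` arrays and a frame `count`; names and thresholds here are illustrative, not the exact submitted code:

```python
import numpy as np

def process_image(img, data):
    # 1) Warp the camera image to a top-down view
    warped = perspect_transform(img, source, destination)
    # 2) Threshold to find navigable terrain; invert to approximate obstacles
    navigable = color_thresh(warped, rgb_thresh=(160, 160, 160))
    obstacles = np.absolute(np.float32(navigable) - 1)
    # 3) Convert navigable pixels to rover-centric, then world coordinates
    xpix, ypix = rover_coords(navigable)
    x_world, y_world = pix_to_world(xpix, ypix, data.xpos[data.count],
                                    data.ypos[data.count], data.yaw[data.count],
                                    data.worldmap.shape[0], scale=10)
    obs_x, obs_y = rover_coords(obstacles)
    obs_xw, obs_yw = pix_to_world(obs_x, obs_y, data.xpos[data.count],
                                  data.ypos[data.count], data.yaw[data.count],
                                  data.worldmap.shape[0], scale=10)
    # 4) Accumulate results into the world map (blue = navigable, red = obstacle)
    data.worldmap[y_world, x_world, 2] += 1
    data.worldmap[obs_yw, obs_xw, 0] += 1
    # 5) Compose an output mosaic: camera image and warped view on top, world map below
    output_image = np.zeros((img.shape[0] + data.worldmap.shape[0],
                             img.shape[1] * 2, 3))
    output_image[0:img.shape[0], 0:img.shape[1]] = img
    output_image[0:img.shape[0], img.shape[1]:] = warped
    output_image[img.shape[0]:, 0:data.worldmap.shape[1]] = np.flipud(data.worldmap)
    data.count += 1
    return output_image
```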

**Autonomous Navigation / Mapping**

* The `perception_step()` function in `perception.py` applies the image processing functions to build a map and update the `Rover()` object.
* The `decision_step()` function in `decision.py` contains conditional statements that take the outputs of `perception_step()` into account when issuing throttle, brake and steering commands to the Rover object.
* `drive_rover.py` iterates on the perception and decision functions until the rover does a reasonable job of navigating and mapping.
### More detailed steps can be found in Readme.md

[//]: # (Image References)

[image1]: ./output/terrain2direction.jpg
[image2]: ./output/terrain2drivablemap.jpg
[image3]: ./calibration_images/example_rock1.jpg
[image4]: ./output/foundRocks.jpg



## Submissions:


### Notebook Analysis

![Terrain to Direction of Robot][image1]


1. The `process_image()` function was modified with the appropriate analysis steps to map pixels identifying navigable terrain, obstacles and rock samples into a worldmap.

2. Running `process_image()` on the test data using the `moviepy` functions produces a movie, which can be found in the `./output` folder (a sketch of the `moviepy` call is shown below).
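
A minimal sketch of that `moviepy` step, assuming the notebook's `process_image()` and a `data.images` list of recorded frames; the output path and frame rate here are illustrative:

```python
from moviepy.editor import ImageSequenceClip

# Build a clip from the recorded images and run process_image() on every frame
clip = ImageSequenceClip(data.images, fps=60)
new_clip = clip.fl_image(process_image)  # process_image must return a color image
new_clip.write_videofile('./output/test_mapping.mp4', audio=False)
```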


And another!

![Terrain to Driveable map][image2]

### Autonomous Navigation and Mapping

The `telemetry()` function in `drive_rover.py` receives images and calls two key functions:

1. `perception_step()`, which applies the vision steps, finds the navigation angles and updates the Rover state.

The modified perception code is detailed below; a summary of the steps added to the function is as follows:

1. Applying the perspective transform
2. Applying the color threshold
3. Converting to rover-centric coordinates and updating the world map
4. Obstacles are added to the red channel of the world map, as can be seen in image2
5. Rocks are added to the green channel of the world map
6. Navigable terrain is added to the blue channel of the world map
7. A mosaic is created from various images: the original image, the warped image, the thresholded image and the navigable map
8. Rover pixels are converted to angles, which are stored in the Rover object for the subsequent steps that actually drive the rover
9. Finding rocks
10. Rocks are brighter than their surroundings, so an RGB filter is applied to isolate their area in the map
11. This location, after the perspective transform, gives the rock's position in the terrain

2. `decision_step()`, which decides whether to steer, throttle or brake (a sketch of how the two steps fit together is given below).
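
As a rough sketch of that flow (hypothetical helper name; the real `telemetry()` in `drive_rover.py` also handles image decoding, FPS tracking and sample pickup):

```python
def telemetry_core(Rover, camera_image):
    # Perception: build the vision image / world map and compute nav angles
    Rover.img = camera_image
    Rover = perception_step(Rover)
    # Decision: turn the perception output into throttle, brake and steer commands
    Rover = decision_step(Rover)
    return Rover.throttle, Rover.brake, Rover.steer
```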

#### No modifications were made to drive_rover.py

Updated code in the `perception_step()` function. It relies on the helper functions `perspect_transform()`, `color_thresh()`, `rover_coords()`, `pix_to_world()`, `to_polar_coords()` and `find_rocks()` defined earlier in `perception.py`, and on `numpy` imported as `np`:

```python
# Apply the above functions in succession and update the Rover state accordingly
def perception_step(Rover):
    # Perform perception steps to update Rover()
    # NOTE: the camera image is coming in as Rover.img
    # 1) Define source and destination points for the perspective transform
    # 2) Apply the perspective transform
    # These source and destination points are defined to warp the image
    # to a grid where each 10x10 pixel square represents 1 square meter.
    # The destination box will be 2*dst_size on each side.
    dst_size = 5
    # Set a bottom offset to account for the fact that the bottom of the image
    # is not the position of the rover but a bit in front of it
    # (this is just a rough guess, feel free to change it!)
    bottom_offset = 6
    image = Rover.img
    source = np.float32([[14, 140], [301, 140], [200, 96], [118, 96]])
    destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
                              [image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
                              [image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
                              [image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
                              ])
    warped, mask = perspect_transform(Rover.img, source, destination)

    # 3) Apply the color threshold to identify navigable terrain/obstacles/rock samples
    threshed = color_thresh(warped)
    obs_map = np.absolute(np.float32(threshed) - 1) * mask

    # 4) Update Rover.vision_image (displayed on the left side of the screen):
    #    channel 0 = obstacles, channel 1 = rock samples, channel 2 = navigable terrain
    Rover.vision_image[:, :, 2] = threshed * 255
    Rover.vision_image[:, :, 0] = obs_map * 255

    # 5) Convert map image pixel values to rover-centric coords
    xpix, ypix = rover_coords(threshed)

    # 6) Convert rover-centric pixel values to world coordinates
    world_size = Rover.worldmap.shape[0]
    scale = 2 * dst_size
    x_world, y_world = pix_to_world(xpix, ypix, Rover.pos[0], Rover.pos[1],
                                    Rover.yaw, world_size, scale)
    obsxpix, obsypix = rover_coords(obs_map)
    obs_x_world, obs_y_world = pix_to_world(obsxpix, obsypix, Rover.pos[0], Rover.pos[1],
                                            Rover.yaw, world_size, scale)

    # 7) Update the Rover worldmap (displayed on the right side of the screen);
    #    navigable terrain is weighted more heavily than obstacles
    Rover.worldmap[y_world, x_world, 2] += 10
    Rover.worldmap[obs_y_world, obs_x_world, 0] += 1

    # 8) Convert rover-centric pixel positions to polar coordinates
    dist, angles = to_polar_coords(xpix, ypix)
    # Update the Rover navigation angles used by the decision step
    Rover.nav_angles = angles

    # See if we can find some rocks
    rock_map = find_rocks(warped, levels=(110, 110, 70))

    if rock_map.any():
        rock_x, rock_y = rover_coords(rock_map)
        rock_x_world, rock_y_world = pix_to_world(rock_x, rock_y, Rover.pos[0],
                                                  Rover.pos[1], Rover.yaw, world_size, scale)
        rock_dist, rock_ang = to_polar_coords(rock_x, rock_y)
        # Mark only the closest rock pixel on the worldmap
        rock_idx = np.argmin(rock_dist)
        rock_xcen = rock_x_world[rock_idx]
        rock_ycen = rock_y_world[rock_idx]

        Rover.worldmap[rock_ycen, rock_xcen, 1] = 255
        Rover.vision_image[:, :, 1] = rock_map * 255
    else:
        Rover.vision_image[:, :, 1] = 0

    return Rover
```
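
For reference, minimal sketches of the helper functions the code above assumes. These follow the lesson's approach; the exact thresholds and the mask returned by `perspect_transform()` may differ from the submitted `perception.py`:

```python
import cv2
import numpy as np

def color_thresh(img, rgb_thresh=(160, 160, 160)):
    # Pixels brighter than the threshold in all three channels count as navigable terrain
    select = np.zeros_like(img[:, :, 0])
    above = ((img[:, :, 0] > rgb_thresh[0]) &
             (img[:, :, 1] > rgb_thresh[1]) &
             (img[:, :, 2] > rgb_thresh[2]))
    select[above] = 1
    return select

def find_rocks(img, levels=(110, 110, 70)):
    # Rock samples are yellow: bright in red and green, dark in blue
    rockpix = ((img[:, :, 0] > levels[0]) &
               (img[:, :, 1] > levels[1]) &
               (img[:, :, 2] < levels[2]))
    rock_map = np.zeros_like(img[:, :, 0])
    rock_map[rockpix] = 1
    return rock_map

def perspect_transform(img, src, dst):
    # Warp to a top-down view; the mask marks the camera's field of view
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))
    mask = cv2.warpPerspective(np.ones_like(img[:, :, 0]), M,
                               (img.shape[1], img.shape[0]))
    return warped, mask
```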



#### The rover's navigation is controlled by the parameters below

```python
self.stop_forward = 50   # Threshold to initiate stopping: fewer than 50 navigable pixels means stop going forward
self.go_forward = 500    # Threshold to go forward again: more than 500 navigable pixels indicates a clear path
self.max_vel = 2         # Maximum velocity (meters/second); a value that can be tuned
```
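
A condensed sketch of how these thresholds gate the mode transitions in `decision_step()` (adapted from the forward/stop logic in `decision.py`; details such as steering clamps and the turn-in-place behavior are omitted):

```python
def mode_transition(Rover):
    # Count how many navigable-terrain angles the perception step found
    nav_extent = len(Rover.nav_angles)
    if Rover.mode == 'forward' and nav_extent < Rover.stop_forward:
        # Too little navigable terrain ahead: brake and switch to 'stop'
        Rover.throttle = 0
        Rover.brake = Rover.brake_set
        Rover.mode = 'stop'
    elif Rover.mode == 'stop' and nav_extent >= Rover.go_forward:
        # Enough clear terrain again: release the brake and drive forward
        Rover.brake = 0
        Rover.throttle = Rover.throttle_set
        Rover.mode = 'forward'
    return Rover
```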

### Improvements

Launching autonomous mode with vision processing greatly improves the rover's navigation. When the world map is displayed, the red areas indicate obstacle regions. The rover's motion can be further tuned through the parameters listed above.


![And Found a Rock][image4]

### Summary

The rover had mapped over 75% of the terrain when image4 was captured.
799 changes: 799 additions & 0 deletions Rover_Project_Test_Notebook.ipynb

Large diffs are not rendered by default.

350 changes: 298 additions & 52 deletions code/Rover_Project_Test_Notebook.ipynb

Large diffs are not rendered by default.

17 changes: 8 additions & 9 deletions code/decision.py
@@ -1,7 +1,7 @@
import numpy as np


# This is where you can build a decision tree for determining throttle, brake and steer
# This is where you can build a decision tree for determining throttle, brake and steer
# commands based on the output of the perception_step() function
def decision_step(Rover):

@@ -13,11 +13,11 @@ def decision_step(Rover):
# Check if we have vision data to make decisions with
if Rover.nav_angles is not None:
# Check for Rover.mode status
if Rover.mode == 'forward':
if Rover.mode == 'forward':
# Check the extent of navigable terrain
if len(Rover.nav_angles) >= Rover.stop_forward:
# If mode is forward, navigable terrain looks good
# and velocity is below max, then throttle
if len(Rover.nav_angles) >= Rover.stop_forward:
# If mode is forward, navigable terrain looks good
# and velocity is below max, then throttle
if Rover.vel < Rover.max_vel:
# Set throttle value to throttle setting
Rover.throttle = Rover.throttle_set
@@ -60,16 +60,15 @@ def decision_step(Rover):
# Set steer to mean angle
Rover.steer = np.clip(np.mean(Rover.nav_angles * 180/np.pi), -15, 15)
Rover.mode = 'forward'
# Just to make the rover do something
# Just to make the rover do something
# even if no modifications have been made to the code
else:
Rover.throttle = Rover.throttle_set
Rover.steer = 0
Rover.brake = 0

# If in a state where want to pickup a rock send pickup command
if Rover.near_sample and Rover.vel == 0 and not Rover.picking_up:
Rover.send_pickup = True

return Rover

return Rover
19 changes: 10 additions & 9 deletions code/drive_rover.py
@@ -21,7 +21,7 @@
from perception import perception_step
from decision import decision_step
from supporting_functions import update_rover, create_output_images
# Initialize socketio server and Flask application
# Initialize socketio server and Flask application
# (learn more at: https://python-socketio.readthedocs.io/en/latest/)
sio = socketio.Server()
app = Flask(__name__)
@@ -31,7 +31,7 @@
# and y-axis increasing downward.
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
# This next line creates arrays of zeros in the red and blue channels
# and puts the map into the green channel. This is why the underlying
# and puts the map into the green channel. This is why the underlying
# map output looks green in the display image
ground_truth_3d = np.dstack((ground_truth*0, ground_truth*255, ground_truth*0)).astype(np.float)

@@ -65,19 +65,19 @@ def __init__(self):
# Image output from perception step
# Update this image to display your intermediate analysis steps
# on screen in autonomous mode
self.vision_image = np.zeros((160, 320, 3), dtype=np.float)
self.vision_image = np.zeros((160, 320, 3), dtype=np.float)
# Worldmap
# Update this image with the positions of navigable terrain
# obstacles and rock samples
self.worldmap = np.zeros((200, 200, 3), dtype=np.float)
self.worldmap = np.zeros((200, 200, 3), dtype=np.float)
self.samples_pos = None # To store the actual sample positions
self.samples_to_find = 0 # To store the initial count of samples
self.samples_located = 0 # To store number of samples located on map
self.samples_collected = 0 # To count the number of samples collected
self.near_sample = 0 # Will be set to telemetry value data["near_sample"]
self.picking_up = 0 # Will be set to telemetry value data["picking_up"]
self.send_pickup = False # Set to True to trigger rock pickup
# Initialize our rover
# Initialize our rover
Rover = RoverState()

# Variables to track frames per second (FPS)
@@ -116,7 +116,7 @@ def telemetry(sid, data):
out_image_string1, out_image_string2 = create_output_images(Rover)

# The action step! Send commands to the rover!

# Don't send both of these, they both trigger the simulator
# to send back new telemetry so we must only send one
# back in respose to the current telemetry data.
@@ -173,7 +173,7 @@ def send_control(commands, image_string1, image_string2):
data,
skip_sid=True)
eventlet.sleep(0)
# Define a function to send the "pickup" command
# Define a function to send the "pickup" command
def send_pickup():
print("Picking up")
pickup = {}
@@ -192,7 +192,7 @@ def send_pickup():
help='Path to image folder. This is where the images from the run will be saved.'
)
args = parser.parse_args()

#os.system('rm -rf IMG_stream/*')
if args.image_folder != '':
print("Creating image folder at {}".format(args.image_folder))
@@ -204,9 +204,10 @@ def send_pickup():
print("Recording this run ...")
else:
print("NOT recording this run ...")

# wrap Flask application with socketio's middleware
app = socketio.Middleware(sio, app)

# deploy as an eventlet WSGI server
eventlet.wsgi.server(eventlet.listen(('', 4567)), app)
#eventlet.wsgi.server(eventlet.listen(('', 4568)), app)