# Weekly Report 04 (05.07. ~ 05.10.)
- LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image
- End-to-End Navigation with Branch Turning Support Using Convolutional Neural Network
- Differences Between Our Project and Similar Projects
- Plans for Week 05
## LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image
- Read through the paper
- Relevance for the project:
  - 3D layout reconstruction is relevant for mapping the mobile robot's surroundings
  - LayoutNet only reconstructs the room layout; "smaller" obstacles (e.g. desks, chairs, persons) still have to be handled separately (a sketch of how both could be fused into an occupancy grid follows below)
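To make the last point concrete, here is a minimal sketch of how a layout-derived free-space estimate and a separate obstacle mask could be fused into a single 2D occupancy grid. The inputs (`floor_mask`, `obstacle_mask`, both already projected into the same bird's-eye-view grid) and the fusion rule are assumptions for illustration; LayoutNet itself only provides the room layout.

```python
import numpy as np

def build_occupancy_grid(floor_mask: np.ndarray,
                         obstacle_mask: np.ndarray) -> np.ndarray:
    """Fuse a layout-derived floor mask with a separate obstacle mask.

    Both inputs are assumed to be boolean arrays already projected into the
    same bird's-eye-view grid (H x W cells). Returned values: 0 = free,
    1 = occupied, -1 = unknown.
    """
    grid = np.full(floor_mask.shape, -1, dtype=np.int8)  # unknown by default
    grid[floor_mask] = 0        # layout says floor -> free space
    grid[obstacle_mask] = 1     # detected obstacle -> occupied, overrides floor
    return grid

# Hypothetical 10 x 10 grid: floor everywhere except the top rows,
# plus a small obstacle (e.g. a chair) in the middle of the room.
floor = np.zeros((10, 10), dtype=bool)
floor[3:, :] = True
obstacle = np.zeros((10, 10), dtype=bool)
obstacle[6:8, 4:6] = True
print(build_occupancy_grid(floor, obstacle))
```

Letting the obstacle mask override the floor mask encodes exactly the limitation noted above: the room layout alone is not enough for safe navigation.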
## End-to-End Navigation with Branch Turning Support Using Convolutional Neural Network
- Read through the paper (an illustrative command-conditional network is sketched below)
- Relevance for the project:
  - the project is very similar to ours
  - uses LiDAR + a front camera instead of a surround-view camera
  - only trajectory following; no reaction to moving obstacles
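One common way to realise turning support at route branches is a command-conditional network: a shared CNN encoder processes the camera image, and a discrete turn command (left / straight / right) selects one of several output heads that predicts the control values. The sketch below is built on that assumption and is not the architecture from the paper; the input size, layer widths, and the [steering, throttle] output are made up for the example.

```python
import torch
import torch.nn as nn

class BranchedDrivingNet(nn.Module):
    """Illustrative command-conditional driving network (not the paper's model).

    The camera image goes through a shared CNN encoder; a discrete turn
    command (0 = left, 1 = straight, 2 = right) picks one output branch,
    and each branch predicts [steering, throttle].
    """

    def __init__(self, num_commands: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
            for _ in range(num_commands)
        )

    def forward(self, image: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        features = self.encoder(image)                                       # (B, 64)
        outputs = torch.stack([b(features) for b in self.branches], dim=1)   # (B, C, 2)
        # Select the branch matching each sample's command.
        idx = command.view(-1, 1, 1).expand(-1, 1, outputs.size(-1))
        return outputs.gather(1, idx).squeeze(1)                             # (B, 2)

# Hypothetical usage: a batch of 4 camera frames with one command per sample.
net = BranchedDrivingNet()
frames = torch.randn(4, 3, 120, 160)
commands = torch.tensor([0, 1, 2, 1])
print(net(frames, commands).shape)  # torch.Size([4, 2])
```

A single-branch network would have to infer the turn from the image alone, which is what makes intersections ambiguous; the explicit command removes that ambiguity.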
## Differences Between Our Project and Similar Projects

| Feature | HCU Project | End-to-End Navigation | End-to-End Learning of Driving Models |
| --- | --- | --- | --- |
| Single Sensor System | O | X | X |
| Floor Segmentation / Mapping | O | O | - |
| Route Planning | O | O | O |
| Automatic Target Detection | O | X | X |
| Obstacle Detection | O | X | O |
## Plans for Week 05
- Create a first draft of the mission statement (survey results)
- Start first practical tests with floor detection and segmentation (a simple colour-based baseline is sketched after this list)
- Read Additional Material (optional)
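As a possible starting point for the floor segmentation tests, here is a minimal colour-based baseline: it assumes the floor dominates the bottom strip of the camera frame, samples that strip as a seed region, and marks pixels with similar HSV statistics as floor. The function name, thresholds, and the synthetic test frame are all assumptions for illustration, not part of the project code.

```python
import cv2
import numpy as np

def segment_floor(bgr: np.ndarray, seed_rows: float = 0.15,
                  tol: float = 2.5) -> np.ndarray:
    """Naive colour-based floor segmentation (illustrative baseline).

    Assumes the bottom `seed_rows` fraction of the frame shows mostly floor.
    Pixels whose HSV values lie within `tol` standard deviations of the
    seed region's mean are marked as floor. Returns a uint8 mask (255 = floor).
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    height = bgr.shape[0]
    seed = hsv[int(height * (1.0 - seed_rows)):].reshape(-1, 3)

    mean = seed.mean(axis=0)
    std = seed.std(axis=0) + 1e-6
    lower, upper = mean - tol * std, mean + tol * std

    inside = np.all((hsv >= lower) & (hsv <= upper), axis=2)
    mask = inside.astype(np.uint8) * 255

    # Remove speckles and fill small holes with morphological filtering.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Hypothetical test frame: dark "wall" in the upper half, lighter "floor" below.
frame = np.full((240, 320, 3), 60, dtype=np.uint8)
frame[120:, :] = 140
floor_mask = segment_floor(frame)
print(floor_mask[200, 160], floor_mask[50, 160])  # 255 (floor), 0 (wall)
```

A learned segmentation model would be the obvious next step once enough labelled floor data from the target environment is available.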