Project for the Fundamentals of Robotics course at the University of Trento, A.Y. 2022/2023
Developed by:
De Martini Davide
Duong Anh Tu
Zamberlan Giulio
Table of Contents
- Project Description
- Project Structure
- Installation
- How to run the project
- Known Issues
Project Description

The goal of this project was to develop an autonomous robot that can perform pick and place tasks. The manipulator is a UR5 that uses a ZED camera for perception. The pick and place task consists of picking different types of "lego" blocks and placing them in their corresponding positions. The robot is able to detect the blocks and perform the pick and place task autonomously.
Project Structure

The project is structured as follows:

- motion -> contains the catkin project for the motion planner and the task manager
  - include -> has the header files
  - msg -> has the custom messages (a usage sketch follows this list)
  - src -> has the source files
  - CMakeLists.txt -> the CMake file for the project
  - package.xml -> the package file for the project
- vision -> contains the vision scripts and weights
  - dataset -> contains the dataset
  - scripts -> contains the vision scripts
  - weights -> contains the weights
- models -> contains the lego models
- lego.world -> the .world template
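The custom messages in motion/msg are how the vision side hands detected block poses to the task manager. As a purely illustrative sketch of that flow (the topic name, message type, and fields below are assumptions, not the project's real API; the actual definitions live in motion/msg and vision/scripts), a vision script could publish a detection like this:

#!/usr/bin/env python3
# Hypothetical sketch of the vision -> motion hand-off; not the project's real API.
import rospy
from geometry_msgs.msg import Pose

def publish_block_pose():
    rospy.init_node('vision_sketch')
    # The real project uses a custom message from motion/msg that also carries
    # the detected block class; a plain Pose is used here only for illustration.
    pub = rospy.Publisher('/vision/block_pose', Pose, queue_size=10)
    rospy.sleep(1.0)  # give subscribers time to connect

    pose = Pose()
    pose.position.x, pose.position.y, pose.position.z = 0.5, 0.2, 0.87
    pose.orientation.w = 1.0
    pub.publish(pose)

if __name__ == '__main__':
    publish_block_pose()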
Installation

The project has been developed and tested on Ubuntu 20.04 with ROS Noetic; the locosim repository is used for the UR5 simulation. To install the project:
- Clone the locosim repository and follow its instructions to install it
- Clone this repository in the ros_ws/src folder of the catkin workspace
- Install the vision dependencies with the following commands (a quick sanity check is sketched after the build step below):
- Install YOLOv5 dependencies
cd ~
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip3 install -r requirements.txt
- Install the other dependencies
pip install torchvision==0.13.0
- Compile the project with the following commands:
cd ~/ros_ws
catkin_make install
source install/setup.bash
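To verify that the vision dependencies installed above work, one option is to load a pretrained YOLOv5 model through torch.hub. This is only a sanity check (it downloads the small yolov5s weights on first use) and is not part of the project itself:

import torch
# Loads the pretrained yolov5s model from the Ultralytics hub as an installation check.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
print(model)  # prints the network summary if torch and the YOLOv5 dependencies are working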
First, we have to modify a file in the locosim project in order to be able to use the gripper. Edit the file ~/ros_ws/src/locosim/robot_control/lab_exercises/lab_palopoli/params.py
and change line 32 to:
'gripper_sim': True,
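For reference, the edited line sits inside locosim's robot parameter dictionary. The sketch below assumes the dictionary name and layout, so look for the gripper_sim key rather than relying on the exact line number if your locosim version differs:

# Hypothetical context inside params.py; only the gripper_sim value needs to change.
robot_params = {
    'ur5': {
        'gripper_sim': True,  # was False: enables the simulated gripper in Gazebo
    },
}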
Then, to have the lego models available in the world, add them:
cd ~/ros_ws/src/robotic_project
cp -r models ~/ros_ws/src/locosim/ros_impedance_controller/worlds/models
And add the custom world file:
cp lego.world ~/ros_ws/src/locosim/ros_impedance_controller/worlds
Finally, modify the ur5_generic.py file in the locosim project by adding the following line at line 72:
self.world_name = 'lego.world'
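As a rough orientation (the class name and surrounding code below are assumptions, not copied from locosim), the line belongs where the controller selects the simulation world, e.g. in the class initializer:

# Hypothetical context inside ur5_generic.py; only the self.world_name line is the required change.
class Ur5Generic:
    def __init__(self):
        self.world_name = 'lego.world'  # tell locosim/Gazebo to load the custom world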
Feel free to modify the world file in order to add more lego blocks and test it. Now we are able to run the project.
As a last thing, check that the REAL_ROBOT flag is set to 0 in motionPlanner.cpp, taskManager.cpp and Vision.py, then compile the project:
cd ~/ros_ws
catkin_make install
source install/setup.bash
How to run the project

To run the project, run the following commands:
- Run the locosim simulation in one window with the following command:
python3 ~/ros_ws/src/locosim/robot_control/lab_exercises/lab_palopoli/ur5_generic.py
- Run the task manager in another window with the following command:
rosrun motion taskManager
- Run the motion planner in another window with the following command:
rosrun motion motionPlanner
- Run the vision node in another window with the following commands:
cd ~/ros_ws/src/robotics_project/vision/scripts
python3 vision.py
Known Issues

If your PC has an old graphics card without much VRAM, errors can occur when running the vision node. To solve this, add these lines to the LegoDetect.py file (after the imports):
torch.cuda.empty_cache()
torch.backends.cudnn.enabled = False
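With that change, the top of LegoDetect.py would look roughly like the sketch below (the surrounding import is an assumption; only the two added lines matter). Emptying the CUDA cache and disabling cuDNN trade some speed for a lower memory footprint:

import torch
# Work around CUDA out-of-memory errors on GPUs with little VRAM:
torch.cuda.empty_cache()               # release GPU memory cached by PyTorch's allocator
torch.backends.cudnn.enabled = False   # fall back to native kernels, which can use less memory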