- We have two NICO robots: A and B.
- Each robot can have a LEFT or RIGHT position. We use the robot's perspective, not the human's: the LEFT position means that the robot stands on the right side of the commander.
- See the markings on the floor for placing the robots and the table correctly.
- Each robot runs on its own computer. You need sudo access.
- The LEFT robot runs on the LEFT computer (operator's perspective).
- Make sure that both robots are connected to a power source. Motors in a relaxed state produce an audible noise. One NICO needs two power sockets and the other only one.
- Connect all USB and audio cables to the PCs.
- Check that the faces are lit up.
- Check that the speakers are on and check the sound settings on each PC (a quick check is sketched below).
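If you want to sanity-check sudo access and the audio output before starting, the two commands below are a minimal sketch and not part of the official setup; speaker-test comes with alsa-utils and may not be installed on every PC:
sudo -v && echo "sudo OK"        # asks for your password and confirms that you have sudo access
speaker-test -t wav -c 2 -l 1    # plays a short test sound once on the default audio output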
git clone https://git.informatik.uni-hamburg.de/wtm-robots-and-equipment/NICO-software.git
cd NICO-software/api
source NICO-setup.bash
cd src
git clone https://git.informatik.uni-hamburg.de/wtm-teaching-projects/phri1920_dev.git
cd ..
catkin_make
source devel/setup.bash
sudo chmod 777 /dev/ttyACM*
sudo chmod 777 /dev/ttyUSB*
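Note that the chmod calls have to be repeated after every reboot or re-plug. To verify the permissions, and as an optional permanent alternative (a sketch that assumes the devices are owned by the usual dialout group; check the ls output first):
ls -l /dev/ttyACM* /dev/ttyUSB*   # check owner, group and permissions of the serial devices
sudo usermod -aG dialout $USER    # add yourself to the owning group; log out and back in afterwards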
- On the LEFT computer (robot A) run four services and one subscriber:
- pose service
- face expression subscriber
- speech synthesis service
- speech recognition service
- vision service for counting cubes
- Each service/subscriber needs its own terminal. Set up each terminal as follows:
export ROS_MASTER_URI=http://wtmpc23:11311/
source ~/.NICO/bin/activate
cd NICO-software/api
source devel/setup.bash
- Start services:
rosrun nicopose poseService.py --label A --position LEFT
rosrun nicopose fexSub.py --label A --position LEFT
rosrun speech SpeechSynthesisStub.py A
rosrun speech SpeechRecognitionStub.py
rosrun vision count_resources.py
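To verify that everything registered with the master before moving on, you can run the standard ROS introspection tools from any of the terminals; the grep pattern is only a rough filter, since the exact node and service names depend on the scripts:
rosnode list                        # nodes currently registered with wtmpc23
rosservice list | grep -i speech    # speech-related services, if announced under that name
rostopic list                       # topics, e.g. from the pose and vision nodes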
- On the RIGHT computer (robot B) run two services, one subscriber, and the state machine:
- pose service
- face expression subscriber
- speech synthesis service
- state machine
- Each service/subscriber needs its own terminal. Set up each terminal as follows (a helper-script shortcut is sketched after this block):
export ROS_MASTER_URI=http://wtmpc23:11311/
source ~/.NICO/bin/activate
cd NICO-software/api
source devel/setup.bash
- Start services:
rosrun nicopose poseService.py --label B --position RIGHT
rosrun nicopose fexSub.py --label B --position RIGHT
rosrun speech SpeechSynthesisStub.py B
rosrun state_machine StateMachine.py --scene 0 --entry 0
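Since the same four set-up commands are needed in every terminal on both computers, it can help to put them into a small helper script and source it in each new terminal. The file name nico_env.sh is only a suggestion, not part of the repository:
# ~/nico_env.sh -- source this in every new terminal instead of retyping the set-up commands
export ROS_MASTER_URI=http://wtmpc23:11311/
source ~/.NICO/bin/activate
cd ~/NICO-software/api
source devel/setup.bash
Then each terminal only needs: source ~/nico_env.sh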
You can use launch scripts to avoid a lot of the typing described in the previous section.
Note that you need to change the lnvc parameter according to your setup. Open a new terminal for each command. In each terminal session execute the required commands:
- Computer LEFT:
- Launch nicopose
roslaunch nicopose pose.launch llabel:=A lnvc:=M
- Launch logging:
rostopic echo /rosout | grep msg
- Execute
rosrun vision count_resources.py
- Computer RIGHT:
- Launch nicopose
roslaunch nicopose pose.launch llabel:=B lnvc:=S
- Execute
rosrun speech SpeechRecognitionStub.py
- Execute
rosservice call speech_recognition calibrate
- Computer iCub. Note that in each session you must first run
cd /data/home/hri/phri1920
source setup.bash
- Execute
roscore
- Execute (before the participant enters)
roslaunch state_machine lab_pc.launch
- Execute (when the robots are supposed to start talking)
rosrun state_machine StateMachine.py
- Optional parameters are e.g.: --scene=0 --entry=0
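Before the participant enters, it is worth a quick sanity check from the iCub PC that the nodes and services from all three computers are registered with the master; roswtf reports common ROS graph problems, and the grep pattern is only a rough filter:
rosnode list                                  # nodes from the LEFT PC, the RIGHT PC and this machine
rosservice list | grep -i -E "speech|light"   # e.g. the speech_recognition and /light_control services
roswtf                                        # general diagnosis of the ROS graph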
- You can set the lights manually, if needed:
rosservice call /light_control "setting: 'cyan'"
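If you are unsure which settings the light service accepts, you can inspect it with the standard ROS service tools; the available color names depend on the service implementation:
rosservice info /light_control                 # which node provides the service and its type
rosservice type /light_control | rossrv show   # request and response fields of the service type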
NB: If the face expression is not recognized for B (right PC), open a terminal:
- check which ttyACM device is present: type "ls /dev/ttyACM" and press Tab to complete
- open fexSub.py and change line 37 to the respective ttyACM device
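To figure out which ttyACM device belongs to which board, the udev properties can help; the device index below is only an example, so check what ls actually shows:
ls -l /dev/ttyACM*                                                        # which ACM devices exist right now
udevadm info -q property -n /dev/ttyACM0 | grep -E "ID_MODEL|ID_SERIAL"   # identify the hardware behind ttyACM0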