NAO Basics

williamross165 edited this page Apr 23, 2021 · 1 revision

Getting Started with the NAO:
This link points to the NAO User Guide from SoftBank, which covers the basics of getting started with the NAO as a general user.

NAOqi Developer Guide:
Information and documentation for programming the NAO.

General Information:
NAO comes with a lot of functionality pre-built, including speech and facial recognition. Typically, in order to engage speech recognition, the robot must first recognize a face. While in autonomous life mode (standing with slight motion back and forth), it tries to find people in its environment: it reacts to sounds and touch and attempts to identify a human. Once a face is identified, it makes a sound and its eyes turn blue. At this point, the person can give verbal commands to NAO. If NAO understands a command, it gives verbal feedback; if it hears you but does not understand the command, it simply nods its head.
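The listen-for-commands part of this interaction can also be scripted against the NAOqi API. The following is a minimal sketch, not a definitive implementation: the robot address `nao.local`, port 9559, the subscriber name, and the confidence threshold are all placeholder assumptions, and the NAOqi Python SDK must be on your `PYTHONPATH`.

```python
ROBOT_IP = "nao.local"  # placeholder: replace with your robot's hostname/IP
PORT = 9559             # default NAOqi port

def listen_for_command(vocabulary, timeout_s=30):
    """Subscribe to speech recognition and poll ALMemory for a result.

    Returns the recognized word, or None if nothing confident was heard
    before the timeout.
    """
    import time
    from naoqi import ALProxy  # requires the NAOqi Python SDK

    asr = ALProxy("ALSpeechRecognition", ROBOT_IP, PORT)
    memory = ALProxy("ALMemory", ROBOT_IP, PORT)

    asr.setLanguage("English")
    asr.setVocabulary(vocabulary, False)  # False: exact words, no wildcards
    asr.subscribe("CommandListener")      # starts the recognition engine
    try:
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            # "WordRecognized" holds alternating [word, confidence, ...] pairs
            data = memory.getData("WordRecognized")
            if data and len(data) >= 2 and data[1] > 0.4:  # threshold is a tuning choice
                return data[0]
            time.sleep(0.2)
    finally:
        asr.unsubscribe("CommandListener")
    return None
```

A typical call would be `listen_for_command(["hello", "sit down", "stand up"])`, run while the robot's eyes are blue and it is attending to a face.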

The simplest interface for programming the NAO is Choregraphe. This program is provided by the NAO developers and allows simple drag-and-drop programming. It also lets the user write blocks of code that can be dropped in. More information is provided here.

To engage with more complex functionality, it is more useful to write programs for the robot directly. In this way, you can capture its camera feed or joint positions and write your own algorithms to enable more complex behaviors.
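As a sketch of what reading joint positions and a camera frame might look like with NAOqi proxies (again assuming the placeholder address `nao.local:9559` and an installed NAOqi Python SDK; the subscriber name and camera parameters are illustrative choices):

```python
ROBOT_IP = "nao.local"  # placeholder: replace with your robot's hostname/IP
PORT = 9559

def get_joint_angles(names="Body", use_sensors=True):
    """Read current joint angles (in radians) via the ALMotion service."""
    from naoqi import ALProxy  # requires the NAOqi Python SDK
    motion = ALProxy("ALMotion", ROBOT_IP, PORT)
    # use_sensors=True returns measured positions rather than commanded ones
    return motion.getAngles(names, use_sensors)

def grab_camera_frame():
    """Fetch a single frame from the top camera via ALVideoDevice."""
    from naoqi import ALProxy
    video = ALProxy("ALVideoDevice", ROBOT_IP, PORT)
    # camera 0 (top), resolution 2 (640x480), color space 11 (RGB), 5 fps
    handle = video.subscribeCamera("frame_grabber", 0, 2, 11, 5)
    try:
        image = video.getImageRemote(handle)
        width, height, raw = image[0], image[1], image[6]
        return width, height, raw  # raw is the binary pixel buffer
    finally:
        video.unsubscribe(handle)
```

From here, the returned pixel buffer or joint vector can be fed into your own vision or control algorithms.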

Programming can be done in Python, and the basic API for interacting with the robot is NAOqi. NAOqi is great for getting started, but the qi framework is highly recommended for building a larger codebase.
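For comparison with the proxy-based NAOqi style, a minimal qi framework sketch looks like the following (the address is a placeholder, and the libqi Python bindings shipped with the SDK must be installed):

```python
def say_hello(address="tcp://nao.local:9559"):
    """Connect a qi session and call a method on a named service."""
    import qi  # libqi Python bindings, shipped with the NAOqi SDK
    session = qi.Session()
    session.connect(address)  # raises RuntimeError if the robot is unreachable
    tts = session.service("ALTextToSpeech")
    tts.say("Hello from the qi framework")
```

The main design difference is that one `qi.Session` is connected once and then hands out any service (`ALMotion`, `ALMemory`, and so on), which scales better across a large codebase than creating a separate `ALProxy` per module.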
