
Initial Setup

Follow this step-by-step guide to perform the initial setup of the project on a development machine running Ubuntu.

Content:

[[TOC]]

Note 1: The development environment of the o2ac-ur project is containerized/virtualized using Docker. This ensures that all the contributors work in the exact same software environment.

Note 2: The robot and simulator interfaces are primarily implemented with ROS.

Note 3: The project relies heavily on both hardware 3D graphics acceleration and Nvidia CUDA, so a discrete Nvidia GPU is highly recommended. Machines without an Nvidia GPU (including virtual machines) are only partially supported: 3D accelerated tools such as Rviz and Gazebo will most likely crash at runtime.

Note 4: Machines running a real-time kernel (such as the robot control PC in a multi-machine setup) do not need an Nvidia GPU.

Note 5: Although this initial setup is meant to be performed only once, you can run it again should you want to reset the development environment as the project evolves. However, make sure to back up your latest changes beforehand.

Step 0: Verify the Prerequisites

Mandatory:

  • A machine running Ubuntu 18.04 LTS (Bionic Beaver) or higher, on an AMD64 architecture.
  • Access to administrator privileges (sudo) on the Ubuntu machine.

Recommended:

  • UR robots for full operability. Without them, the simulator still supports basic operations.
  • An Nvidia GPU capable of running CUDA 9.0 or newer (compute capability >= 3.0). Without one, 3D accelerated tools such as Rviz and Gazebo will most likely crash.
  • If your machine has an Nvidia GPU, make sure that the proprietary Nvidia drivers are active, not the open-source "Nouveau" drivers. You can check this in "System Settings/Software and Drivers/Additional Drivers", or from a terminal as sketched below.
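
If you prefer the terminal, the following quick checks also work on a standard Ubuntu installation (a convenience sketch only; output details vary by machine):

    # List any Nvidia GPU detected on the PCI bus.
    lspci | grep -i nvidia
    # Report the active Nvidia driver version (fails if the Nouveau driver is in use).
    nvidia-smi
    # An empty result here means the Nouveau kernel module is not loaded.
    lsmod | grep -i nouveau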

Step 1: Setup the Development Environment

Set up the environment of the development machine with the following instructions.

  1. Install git if necessary:

    sudo apt-get update && sudo apt-get install -y git
  2. Clone the o2ac-ur project repository into your home folder:

    cd ~/ && git clone https://github.com/o2ac/o2ac-ur.git && cd ~/o2ac-ur/ && git submodule update --init

    Enter your GitHub developer credentials if prompted.

  3. Configure the system environment:

    cd ~/o2ac-ur/ && ./SETUP-DEVEL-MACHINE.sh

    The execution of SETUP-DEVEL-MACHINE.sh requires sudo permissions to install the tools that allow virtualization, i.e. Docker, Docker Compose, and Nvidia Docker 2. System changes made with sudo are kept to a strict minimum.

  4. Reboot the system (or log out and back in) for the changes to users and groups to take effect:

    sudo reboot

Note 6: The SETUP-DEVEL-MACHINE.sh script is divided into INCL-SUDO-ENV.sh and INCL-USER-ENV.sh. The execution of INCL-SUDO-ENV.sh makes system-wide changes and thus requires sudo permissions. However, if your system already has all the necessary tools installed, you can directly set up your local user environment with cd ~/o2ac-ur/ && ./INCL-USER-ENV.sh, which does not require sudo permissions.

Note 7: You do not need to reboot if your local user has already been added to the docker group. If so, executing docker --version should not ask for sudo. In principle, you only need to reboot after the very first time you run SETUP-DEVEL-MACHINE.sh.
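
As a quick sanity check after rebooting, the commands below should all succeed without sudo (version numbers will differ; the Compose invocation depends on which variant the setup script installed):

    # Confirm that your user is a member of the docker group.
    groups "$USER" | grep -w docker
    # Docker and Docker Compose should answer without sudo once the group membership is active.
    docker --version
    docker-compose --version   # or "docker compose version", depending on the installed variant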

Step 2: Build the Docker Image

Create a virtual environment using Docker (a Docker image) on the development machine with the following instructions.

  1. Build the Docker image:

    cd ~/o2ac-ur/docker/o2ac-dev/ && python3 update_dockerfile.py
    cd ~/o2ac-ur/ && ./BUILD-DOCKER-IMAGE.sh

    The update_dockerfile.py script updates the versions in the Dockerfile, and BUILD-DOCKER-IMAGE.sh then builds the image according to the instructions found in ~/o2ac-ur/docker/Dockerfile.
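
Once the build finishes, you can confirm that the image exists; the filter below is only an assumption about the image name, and plain docker images lists everything:

    # List Docker images whose repository name contains "o2ac" (the exact name/tag may differ).
    docker images | grep -i o2ac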

Note 8: You may want to use a wired connection to speed up the large downloads.

Step 3: Run the Docker Container

Enter a virtual instance of the Docker image (= Docker container) on the development machine with the following instructions.

  1. Run the Docker container:

    cd ~/o2ac-ur/ && ./RUN-DOCKER-CONTAINER.sh

    This script creates or updates the container following the instructions found in ~/o2ac-ur/docker/docker-compose.yml. It allows the container to share system resources, such as volumes and devices, with the host machine.

  2. If you are using a multi-machine setup (one PC for computer vision and a second PC with a real-time kernel for controlling the robots), enter the Docker container with cd ~/o2ac-ur/ && ./VISION-DOCKER-CONTAINER.sh or cd ~/o2ac-ur/ && ./REALTIME-DOCKER-CONTAINER.sh respectively. They use the configuration files docker-compose-vision.yml and docker-compose-realtime.yml.

  3. Use Ctrl + D to exit the container at any time.

Note 9: If no Nvidia drivers are present, the Docker runtime is set to runc instead of nvidia, bypassing nvidia-docker2 when entering the container. However, 3D accelerated tools, including Rviz, will most likely not work. You can modify the default runtime in ~/.bashrc.
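
To check which runtimes your Docker daemon actually knows about (a hedged check; the nvidia runtime appears only if nvidia-docker2 installed correctly):

    # "nvidia" should be listed among the runtimes on GPU machines; otherwise only runc is available.
    docker info | grep -i runtime
    # nvidia-docker2 normally registers its runtime in this file.
    cat /etc/docker/daemon.json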

Note 10: Be careful if you need to modify docker-compose.yml, as the container will be recreated from scratch the next time you run RUN-DOCKER-CONTAINER.sh.
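
If you have unsaved work inside the running container, you can snapshot it as an image before changing docker-compose.yml (a sketch only; the container name is an assumption, check docker ps for the actual one):

    # Find the name of the running o2ac container.
    docker ps
    # Save its current state as a backup image before the container is recreated from scratch.
    docker commit <container-name> o2ac-backup:pre-compose-change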

Step 4: Test the Catkin Workspace

Build and test the ROS environment (= Catkin workspace) inside the Docker container with the following instructions.

  1. Enter the Docker container if necessary:

    cd ~/o2ac-ur/ && ./RUN-DOCKER-CONTAINER.sh
  2. Set up the Catkin and underlay workspaces and install all dependencies (press Tab to auto-complete the command):

    o2ac-initialize-catkin-workspace

    This script removes any existing Catkin workspace configuration and builds new workspaces inside /root/o2ac-ur/catkin_ws/ and /root/o2ac-ur/underlay_ws/. The files in the src/ folders are not touched.

  3. Make sure that the new Catkin workspace is sourced:

    cd ~/o2ac-ur/ && source ./catkin_ws/devel/setup.bash

    You can also simply enter s instead.

  4. Launch a couple of publisher and subscriber test nodes from the em_chatter package:

    roslaunch em_chatter em_chatter_rqt.launch

    If everything is installed correctly, an Rqt GUI should show a node em_listener subscribing to a publisher node called em_talker. You can also check the nodes from the command line as sketched below.
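
For reference, the nodes can also be inspected from another terminal inside the container (remember to source the workspace with s first; the topic name below is only a guess based on the standard ROS chatter example, and rostopic list shows the actual one):

    # The em_talker and em_listener nodes should appear in the list of running nodes.
    rosnode list
    # Show the active topics, then print the messages exchanged between the two nodes.
    rostopic list
    rostopic echo /chatter    # topic name is an assumption; replace it with the one reported above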

Note 11: The command o2ac-build-catkin-workspace builds both the Catkin and underlay workspaces using catkin build. The command o2ac-reset-catkin-workspace cleans and then rebuilds both workspaces; this takes a long time, but usually takes care of most problems.

Step 5: Develop on the Robot or in Simulation

From here on, you can develop in simulation or on the real robot, if one is available on your network.

Step 6: Learn the Advanced Functions

The development environment inside the Docker container offers several useful functions that you should be aware of. These advanced functions will help you increase both the convenience and the quality of your work for the project.

Multiple Terminal Operation

You can start up multiple terminals in a predefined configuration by using LAUNCH-TERMINATOR-TERMINAL.sh in a non-Terminator terminal. This script opens Terminator with the layout stored in ~/o2ac-ur/terminator/config. Each sub-terminal automatically executes RUN-DOCKER-CONTAINER.sh (some with a predefined ROS launch file) for convenience.

If you are using a multi-machine setup, use LAUNCH-TERMINATOR-TERMINAL.sh vision on the vision PC and LAUNCH-TERMINATOR-TERMINAL.sh rt on the real-time PC.

Note 12: This script overwrites the Terminator configuration file.
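
If you have a customized Terminator layout, you may want to back it up beforehand (the path below is the standard Terminator configuration location on Ubuntu):

    # Keep a copy of your personal Terminator layout before the script overwrites it.
    cp ~/.config/terminator/config ~/.config/terminator/config.backup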

Custom Bash Functions

The development environment contains several useful Bash functions, most of them starting with the prefix o2ac-, to make your work more convenient. These include, but are not limited to:

  • s: Sources the workspace (source ~/catkin_ws/devel/setup.bash). Forgetting to source is a frequent source of errors.
  • o2ac-build-catkin-workspace: Builds and sources the Catkin and underlay workspaces on top of the system ROS environment. This reuses existing build artifacts, so if you encounter persistent errors, try the next command.
  • o2ac-reset-catkin-workspace: Removes the build artifacts, then cleanly rebuilds and sources the Catkin and underlay workspaces on top of the system ROS environment (useful after submodule updates or after switching branches).
  • o2ac-fix-permission-issues: Assigns ownership of all files to the host PC's user. This fixes the various permission issues that may appear when manipulating, on the host machine, files generated by the root user of the Docker container.
  • o2ac-run-vscode-editor: Opens Microsoft VS Code to work efficiently on the project with a fully preconfigured IDE (see Custom IDE Integration below).
  • o2ac-magic-rosdep-command: Installs the dependencies of newly added packages; run it in catkin_ws or underlay_ws after adding new packages.
  • o2ac-get-fully-started: Executes several of the aforementioned functions to quickly get started when entering a freshly built Docker container.
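
For example, a typical sequence after pulling submodule updates or switching branches might look like this (illustrative only, combining the commands listed above):

    # Enter the container.
    cd ~/o2ac-ur/ && ./RUN-DOCKER-CONTAINER.sh
    # Cleanly rebuild and source both workspaces (slow, but resolves most build problems).
    o2ac-reset-catkin-workspace
    # Assign file ownership back to the host user so the files remain editable on the host.
    o2ac-fix-permission-issues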

Note 13: These Bash functions are based on helper scripts that can be found in /root/o2ac-ur/docker/o2ac-dev/scripts/ inside the Docker container, or in ~/o2ac-ur/docker/o2ac-dev/scripts/ on the host machine. You can see their definitions in ~/.bashrc inside the container.
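
Inside the container, you can also inspect how any of these helpers is defined with the standard Bash type builtin, whether it is an alias or a function:

    # Print the definition of a given o2ac- helper as loaded from ~/.bashrc.
    type o2ac-build-catkin-workspace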

Custom IDE Integration

The Docker container provides a modern and fully preconfigured IDE, Microsoft VS Code, to help you explore and debug the source code of the project. Run o2ac-run-vscode-editor to launch the GUI, which includes:

  • Extensive Git integration to push and pull code, as well as tools to highlight changes between commits.
  • Python and C++ debuggers, with breakpoints, that are linked to the ROS environment inside the container.
  • Python and C++ code auto-completion with on-the-fly reference and property display.
  • Python and C++ linters to automatically format your code using predefined standard styles.
  • Syntax highlighting for various languages, including the commonly used Python, C++, Bash, YAML, JSON, XML, and Docker.

Configuration of the Docker Container

When executing RUN-DOCKER-CONTAINER.sh, a series of scripts performs the initialization of the Docker container. Each one has a specific function, and future implementations/revisions should keep these functions separated as much as possible. They can be found on the host machine at:

  • ~/o2ac-ur/docker/docker-compose.yml: Configuration that comes from outside the container (such as host names or networking) and the interface between the host and the container.
  • ~/o2ac-ur/docker/Dockerfile: Internal configuration of the container.
  • ~/o2ac-ur/docker/scripts/initialize-bash-shell.sh: A template script used in the Dockerfile. In theory, its contents could be written directly in the Dockerfile, but this would result in complex lines of code, because helpful syntax for writing long scripts, such as here-documents, is not supported in a Dockerfile (see the illustrative sketch after this list).
  • ~/o2ac-ur/docker/scripts/keep-container-running.sh: A start-up script invoked when starting the container and referenced in docker-compose.yml.
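
For illustration, a here-document such as the one below is straightforward in a standalone shell script but cannot be written directly inside a Dockerfile RUN instruction (a minimal sketch; the actual contents of initialize-bash-shell.sh differ):

    # Append a long, readable block of shell configuration to the container's .bashrc in one step.
    cat <<'EOF' >> /root/.bashrc
    # (illustrative content only)
    export O2AC_EXAMPLE_VARIABLE=1
    EOF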