ROS2 Humble GUI-Docker Container: A Step-by-Step Guide

Shashank Goyal
19 min read · Oct 19, 2024


Developing with ROS2 (Robot Operating System 2) can often be simplified by using Docker to create an isolated and reproducible environment. By using Docker, you can avoid version conflicts, ensure consistency across machines, and facilitate easy collaboration. With Docker, the development and deployment processes become much more streamlined, especially when working with complex robotic software like ROS2, which requires precise versioning and specific dependencies. In this blog post, we will go through the process of creating a custom ROS2 Docker container step-by-step, using a set of files that define the entire setup.

ROS2 is a powerful framework for developing robotic systems, but it comes with numerous dependencies and a steep setup curve. This complexity makes Docker an excellent choice for managing ROS2 environments. Using Docker ensures that everyone on your development team is running the same configuration, which reduces debugging time and avoids unexpected behaviors that can arise due to environment discrepancies. By the end of this guide, you will have a robust and repeatable ROS2 development environment that you can easily share and reuse.

Prerequisites

Before we dive into the setup, make sure you have Docker installed on your system. You can follow the installation guide on Docker’s official website for your respective OS. For Ubuntu users, follow these steps to install Docker:

1. Update existing package list: Run the following command to update your system's package list:
sudo apt update

2. Install prerequisites: Install the necessary prerequisites to allow apt to use a repository over HTTPS:

sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

3. Add Docker’s official GPG key: Run the command below to add Docker’s official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

4. Set up the stable repository: Add the Docker apt repository to your sources list:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

5. Update package list again: Run the update command again to include the Docker repository:

sudo apt update

6. Install Docker: Install Docker using the following command:

sudo apt install -y docker-ce

If you want to use Docker without needing sudo privileges, you can follow these steps after installing Docker:

1. Create Docker Group: Create a Docker group to manage access. Run:

sudo groupadd docker

2. Add User to Docker Group: Add your user to the Docker group to avoid using `sudo` with Docker commands. Replace `<username>` with your actual username:

sudo usermod -aG docker <username>

3. Restart: Log out and back in, or restart your system to apply the group changes.

4. Test Docker: Verify that you can run Docker commands without `sudo` by running:

docker run hello-world

It’s also recommended to have some familiarity with basic Docker commands, as this will make it easier to follow along. Having a fundamental understanding of ROS2 concepts will also be helpful but is not strictly necessary.

Breakdown of Docker Configuration Files

To create a custom ROS2 Docker container, we will use the following files, each serving a specific purpose in the Docker-based development environment setup:

  • Dockerfile: Defines the Docker image, including the base image, installed packages, and environment configurations.
  • entrypoint.sh: Sets up the environment inside the container when it starts, ensuring ROS2 is ready for use.
  • docker_build.sh: A script that automates the Docker image build process.
  • docker_run.sh: A script to run the Docker container with the appropriate settings.
  • .bashrc: Customizes the bash environment inside the container, making it user-friendly for development.

Each of these files plays a crucial role in building and managing your ROS2 Docker environment. Let’s go through each one in detail.

Dockerfile

The Dockerfile is the blueprint for our Docker image. It defines the base image, installs the necessary dependencies, and configures the environment. Below is a breakdown of the main components of the Dockerfile:

  1. Base Image: We start from the official ROS2 Humble base image, osrf/ros:humble-desktop-full. This image includes all the essential ROS2 components for development and is a great starting point for our custom container. It also ensures compatibility with other ROS2 tools and libraries, making it easier to integrate additional ROS2 packages and extensions, and it saves time since many of the necessary ROS2 dependencies are already included.
  2. Set Default Username: The Dockerfile uses a <username> placeholder so you can customize the name of the non-root user when building the image. This is particularly useful when multiple developers work on the same project, allowing each developer to build a container with their preferred username. It also helps when deploying the container on different systems, since you can match the container user to the local system user for better integration. The user also has superuser privileges via sudo without requiring a password, making it easier to manage system-level tasks during development.
  3. Install Additional Tools: The Dockerfile installs basic development tools such as git, wget, curl, and python3-pip. These tools are useful for developing and managing ROS2 packages, as well as accessing repositories and installing additional dependencies. git is essential for version control, allowing you to clone and manage code repositories. wget and curl are useful for downloading files and interacting with web APIs. python3-pip is used to install Python packages, which are often needed for ROS2 development. software-properties-common helps manage software repositories, making it easier to add new sources for package installation. Finally, ros-dev-tools provides additional utilities that enhance the ROS2 development experience.
  4. Install Python Packages: Python packages are installed using pip to support ROS2 development, in particular building ROS2 workspaces with colcon. setuptools is a package development tool that simplifies packaging and distributing Python code. colcon is the build tool used for ROS2, and colcon-common-extensions adds useful features and plugins to colcon, making it easier to build and manage complex workspaces. By installing these packages, we ensure the container is fully equipped to handle ROS2 package builds, allowing developers to easily compile and run their ROS2 nodes.
  5. Install ROS2 Packages: The Dockerfile installs key ROS2 packages such as ros-humble-desktop, ros-humble-moveit*, and others, providing control tools, simulation capabilities, and visualization tools. ros-humble-desktop includes the core ROS2 components along with visualization tools like Rviz, which are crucial for developing and debugging robotic systems. ros-humble-control* and ros-humble-moveit* provide advanced capabilities for controlling robotic arms and other hardware, enabling motion planning and manipulation tasks. ignition-fortress is a powerful simulation tool that allows developers to create realistic environments for testing their robotic applications. Together, these packages create a versatile environment capable of handling a wide range of robotic development tasks.
  6. Create Non-Root User: For security purposes, the Docker container should not run as the root user, so we create a non-root user for all operations inside the container. Running as a non-root user prevents accidental system modifications and minimizes the impact of any security vulnerabilities that might be exploited within the container. The useradd command creates a new user with a home directory and bash as the default shell, and an entry in the sudoers file allows the user to execute commands with superuser privileges without requiring a password. This is particularly useful for development tasks that require elevated privileges, such as installing new software or modifying system settings.
  7. Set Up User Environment: We copy a custom .bashrc file into the container to customize the shell environment. The .bashrc file is executed whenever a new terminal session starts, allowing us to define environment variables, aliases, and functions that enhance productivity. For example, we can add aliases for commonly used commands like colcon build or ros2 launch, reducing the amount of typing required. We also source the ROS2 setup files, ensuring that all necessary environment variables are set up correctly for ROS2 development.
  8. Entrypoint Script: We use an entrypoint script to ensure the ROS2 environment is properly initialized every time the container starts. The ENTRYPOINT instruction in the Dockerfile specifies the entrypoint.sh script, which sources the ROS2 setup files and executes any provided command, so the environment is always ready regardless of how the container is run. The CMD instruction provides a default command (bash) to be executed if no other command is specified. This guarantees the container is always in a ready-to-use state, letting developers focus on writing and testing code instead of environment setup.

Full Dockerfile

FROM osrf/ros:humble-desktop-full

# Set environment variables
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=UTC

# Install dependencies and tools
RUN apt-get update && apt-get install -y \
git wget curl python3-pip software-properties-common ros-dev-tools \
&& rm -rf /var/lib/apt/lists/*

# Install Python packages
RUN pip3 install --no-cache-dir setuptools==58.2.0 colcon-common-extensions

# Add Gazebo repository
RUN wget https://packages.osrfoundation.org/gazebo.gpg -O /usr/share/keyrings/pkgs-osrf-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/pkgs-osrf-archive-keyring.gpg] http://packages.osrfoundation.org/gazebo/ubuntu-stable $(lsb_release -cs) main" | tee /etc/apt/sources.list.d/gazebo-stable.list > /dev/null

# Install ROS2 packages
RUN apt-get update && apt-get install -y \
ros-humble-desktop \
ros-humble-control* \
ros-humble-ros2-control* \
ros-humble-moveit* \
ros-humble-ros-ign* \
ros-humble-joint-state-publisher-gui \
ros-humble-kinematics-interface-kdl \
ros-humble-rqt-joint-trajectory-controller \
~nros-humble-rqt* \
ignition-fortress \
&& rm -rf /var/lib/apt/lists/*

RUN apt-get update && apt-get upgrade -y

# Create a non-root user
RUN useradd -m -s /bin/bash <username> && \
echo "<username> ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/<username>

# Set up the user's environment
COPY ./.bashrc /home/<username>/.bashrc
RUN chown <username>:<username> /home/<username>/.bashrc

# Set the working directory
WORKDIR /home/<username>

# Switch to the non-root user
USER <username>

RUN rosdep update

# Entrypoint
COPY ./entrypoint.sh /entrypoint.sh
RUN sudo chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]

Note: Replace <username> throughout the Dockerfile with the username you want for the user inside the container shell.
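Rather than editing the <username> placeholder by hand, the user-related lines can take the name as a build-time argument. A sketch of the relevant fragment, assuming a build argument named USERNAME (the ARG name and default are illustrative, not from the original file):

```dockerfile
# Default can be overridden at build time with --build-arg USERNAME=<name>
ARG USERNAME=ros2_dev

# Create the non-root user and grant passwordless sudo, as in the steps above
RUN useradd -m -s /bin/bash $USERNAME && \
    echo "$USERNAME ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/$USERNAME

COPY ./.bashrc /home/$USERNAME/.bashrc
RUN chown $USERNAME:$USERNAME /home/$USERNAME/.bashrc

WORKDIR /home/$USERNAME
USER $USERNAME
```

Building with docker build --build-arg USERNAME=$(whoami) -t my_ros2_image . then matches the container user to the host user without touching the Dockerfile.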

Entrypoint Script

The entrypoint.sh script plays a crucial role in ensuring that the ROS2 environment is properly set up each time the Docker container starts. Below is the breakdown of its functionality:

Explanation

  • Source ROS Installation: The script begins by sourcing the base ROS2 setup file, which is located at /opt/ros/humble/setup.bash. This step is essential as it ensures that all the ROS2 environment variables are properly configured. This includes adding ROS2 binaries to the system path and setting up any other environment requirements.
  • Source the Workspace: The script then checks if there is an existing workspace located at /home/<username>/<ros2_workspace_directory>/install/setup.bash and sources it. This step allows the workspace's setup file to be loaded, making any compiled packages in the workspace available for use in the current environment. By checking the file's existence, the script ensures that it only sources the workspace if it has been built, preventing errors during container initialization.
  • Execute Passed Command: Finally, the script executes any command passed to it (exec "$@"). This is an important feature that allows the container to behave flexibly depending on what is provided during its run time. For example, you could start a shell, run a script, or launch a specific ROS2 node. By using exec, it replaces the shell with the given command, which helps with better signal handling, especially in Docker.

The entrypoint.sh script ensures that every time the Docker container starts, it is ready to work with ROS2, either by launching a node, testing, or running any development script. This makes the development process more efficient, as it minimizes the time spent on setup.

Full entrypoint.sh

#!/bin/bash

# Source ROS installation
source /opt/ros/humble/setup.bash

# Directory path to the ROS2 Workspace
ROS_WS="ros2_workspace"

# Source the workspace if it exists
if [ -f "/home/$USER/$ROS_WS/install/setup.bash" ]; then
source "/home/$USER/$ROS_WS/install/setup.bash"
fi

# Execute the command passed to the script
exec "$@"

Note: Replace the value of ROS_WS in the above script with the path to your ROS2 workspace directory, relative to the container user's home.
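The exec "$@" pattern can be seen in isolation with a small self-contained sketch (hypothetical file names, no ROS required): an entrypoint that sources a setup file, then replaces itself with whatever command it was given.

```shell
# Build a miniature entrypoint in a scratch directory
workdir=$(mktemp -d)

cat > "$workdir/setup.bash" <<'EOF'
export DEMO_ENV="sourced"
EOF

cat > "$workdir/entrypoint.sh" <<'EOF'
#!/bin/bash
# Source the setup file when it exists, mirroring the workspace check above
if [ -f "$(dirname "$0")/setup.bash" ]; then
    source "$(dirname "$0")/setup.bash"
fi
# Replace the shell with the passed command (better signal handling)
exec "$@"
EOF
chmod +x "$workdir/entrypoint.sh"

# Run it the way Docker would: entrypoint plus a CMD
result=$("$workdir/entrypoint.sh" bash -c 'echo "DEMO_ENV=$DEMO_ENV"')
echo "$result"   # → DEMO_ENV=sourced
```

The command passed after the entrypoint sees the environment set up by the sourced file, which is exactly what lets ros2 commands work immediately inside the container.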

Docker Build Script

The docker_build.sh script automates the process of building the Docker image for the ROS2 environment. This script simplifies the image creation process by encapsulating all the necessary build commands, ensuring that developers can quickly generate a consistent image without manually executing multiple commands.

Explanation

  • Set the Image Name: The script defines a variable, CONTAINER_NAME, used to tag the Docker image being built. This makes it easy to modify the image name if needed.
  • Build the Docker Image: The docker build command is used to create the Docker image using the Dockerfile in the current directory. The -t flag tags the image with the specified name.
  • Output Success Message: After building the image, the script outputs a success message to indicate that the build process has completed.

This script makes the image-building process straightforward and reproducible, ensuring that all team members can generate the same development environment with ease.

Full Docker Build Script

#!/usr/bin/bash

# Set Container Name
CONTAINER_NAME="<CUSTOM_ROS2_CONTAINER>"

echo "Building ROS2-Humble Container"
docker build --rm -t $CONTAINER_NAME:latest .

echo "Docker Build Completed"

Note: Replace the <CUSTOM_ROS2_CONTAINER> in the above script with the name you want for your ROS2 image.

Docker Run Script

The docker_run.sh script is used to run the Docker container with the correct settings. This script ensures that the container is executed with the appropriate configuration and makes it easy to start development without manually specifying Docker options each time.

#!/usr/bin/env bash

BASH_HISTORY_FILE=${PWD%/*}/.bash_history
BASH_RC_FILE=${PWD%/*}/docker/.bashrc

CONTAINER_NAME=<CUSTOM_CONTAINER_NAME>
DOCKER_USER="<USERNAME>"

docker_count=$(docker ps -a | grep "$CONTAINER_NAME" | wc -l)
((docker_count=docker_count+1))

XAUTH=/tmp/.docker.xauth_$docker_count
sleep 0.1
touch $XAUTH
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -

# Create a string with all --device options for each device in /dev/
device_options=""
for device in /dev/*; do
if [ -e "$device" ]; then
device_options+="--device=$device "
fi
done

docker run -it --rm \
--name "$CONTAINER_NAME-$docker_count" \
--user $(id -u):$(id -g) \
--volume="${PWD%/*}:/home/$DOCKER_USER" \
--volume="$BASH_HISTORY_FILE:/home/$DOCKER_USER/.bash_history" \
--volume="$BASH_RC_FILE:/home/$DOCKER_USER/.bashrc" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--volume="$XAUTH:$XAUTH" \
--env="XAUTHORITY=$XAUTH" \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--workdir="/home/$DOCKER_USER" \
$device_options \
--net=host \
--privileged \
$CONTAINER_NAME:latest

echo "Docker container exited."

Script Breakdown:

  1. Bash History and .bashrc Paths:
  • BASH_HISTORY_FILE=${PWD%/*}/.bash_history: This line sets the path to the .bash_history file, which will be mounted inside the container. It helps in preserving the bash history across container runs.
  • BASH_RC_FILE=${PWD%/*}/docker/.bashrc: This sets the path to the .bashrc file that will be used in the container. Customizing the bash environment inside the container improves usability.

2. Dynamic Container Naming:

  • docker_count=$(docker ps -a | grep "$CONTAINER_NAME" | wc -l): This command counts the number of existing containers whose names match $CONTAINER_NAME. It ensures that each new container has a unique name.
  • ((docker_count=docker_count+1)): The count is incremented by one to assign a new number to the container name for each new instance.
  • This dynamic naming helps avoid naming conflicts when launching multiple containers.
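The counting-and-increment logic above can be exercised on its own by substituting a canned `docker ps -a` listing (the container name ros2_dev and the listing contents are illustrative):

```shell
# Count matching containers in a simulated `docker ps -a` output, then
# increment to get the next free suffix, mirroring docker_run.sh.
CONTAINER_NAME="ros2_dev"

ps_output='abc123  ros2_dev-1   Exited (0) 2 days ago
def456  ros2_dev-2   Up 2 hours
ghi789  other_app-1  Up 5 minutes'

docker_count=$(printf '%s\n' "$ps_output" | grep "$CONTAINER_NAME" | wc -l)
docker_count=$((docker_count + 1))

echo "next container: $CONTAINER_NAME-$docker_count"   # → next container: ros2_dev-3
```

With two existing ros2_dev containers in the listing, the next launch would be named ros2_dev-3.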

3. X11 Authentication Setup:

  • XAUTH=/tmp/.docker.xauth_$docker_count: A unique Xauthority file is created for each container to handle X11 forwarding.
  • xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -: This command fetches the X11 authentication token from the host’s display and prepares it for use inside the Docker container. It allows graphical applications inside the container to access the display server on the host.

4. Device Mounting:

  • The loop for device in /dev/* collects all devices from the /dev/ directory and constructs a string of --device options. This ensures that all connected devices (e.g., sensors, cameras, etc.) are accessible inside the Docker container.
  • device_options+="--device=$device ": This line adds each device found under /dev/ to the device options passed to the docker run command. It’s useful for hardware access, such as when interfacing with robotic components or USB devices.
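The same discovery loop can be tried against a scratch directory instead of /dev/ (the device names ttyUSB0 and video0 are hypothetical), which makes the resulting flag string easy to inspect:

```shell
# Build the --device flag string over a scratch directory, exactly as
# docker_run.sh does over /dev/.
dev_dir=$(mktemp -d)
touch "$dev_dir/ttyUSB0" "$dev_dir/video0"

device_options=""
for device in "$dev_dir"/*; do
    if [ -e "$device" ]; then
        # Each existing entry becomes one --device flag for docker run
        device_options+="--device=$device "
    fi
done

echo "$device_options"
```

The result is one --device flag per entry, ready to splice into the docker run command line.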

5. Running the Docker Container: The docker run command is structured with several flags and options:

  • -it: This flag allows the container to run in interactive mode with a terminal attached.
  • --rm: This flag ensures the container is removed once it exits, keeping your Docker environment clean.
  • --name "$CONTAINER_NAME-$docker_count": The container is named dynamically, ensuring no name conflicts when launching multiple containers.
  • --user $(id -u):$(id -g): This ensures that the container runs as the current user, preventing permission issues when accessing files created inside the container.
  • --volume="${PWD%/*}:/home/$DOCKER_USER": Mounts the parent directory of the current working directory into the container at /home/$DOCKER_USER, allowing you to access files from your local environment inside the container.
  • --volume="$BASH_HISTORY_FILE:/home/$DOCKER_USER/.bash_history": Preserves the bash history between container sessions by mounting the host machine’s .bash_history file into the container.
  • --volume="$BASH_RC_FILE:/home/$DOCKER_USER/.bashrc": Mounts the .bashrc file from the host machine into the container, ensuring that any custom shell configurations are applied when using the container.
  • --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw": This mounts the X11 socket into the container, allowing GUI applications to interface with the host's display.
  • --volume="$XAUTH:$XAUTH" and --env="XAUTHORITY=$XAUTH": These lines ensure the container can authenticate with the host’s X server, enabling graphical applications to display on the host.
  • --env="DISPLAY" and --env="QT_X11_NO_MITSHM=1": These environment variables set up the display environment, ensuring that graphical applications can be run from within the container.
  • --workdir="/home/$DOCKER_USER": This sets the working directory inside the container to /home/$DOCKER_USER, where mounted files and environment variables are located.
  • $device_options: The collected --device options are passed here to allow the container to access all hardware devices available in /dev/.
  • --net=host: This option makes the container use the host’s network stack, which is necessary for ROS2 networking and communication, especially in multi-machine setups.
  • --privileged: This flag grants the container extended privileges, enabling access to hardware devices such as USB ports, which is crucial for robotic systems.
  • Image to Run: $CONTAINER_NAME:latest: This specifies the Docker image to run, namely the image built earlier by docker_build.sh, tagged latest.
  • Exit Message: echo "Docker container exited.": After the container finishes its run, this message is printed to indicate that the container has exited.

This custom docker_run.sh script allows you to easily start a ROS2 development environment with access to graphical applications and hardware devices. The script handles various important tasks:

  • Dynamic container naming: To avoid conflicts and allow multiple container instances.
  • Graphical application support: By forwarding X11 credentials and mounting the necessary sockets, GUI applications like rviz or gazebo can run inside the container and display on the host machine.
  • Hardware device access: The script automatically discovers and mounts hardware devices, making it easy to interface with robots, sensors, and other peripherals.
  • User-friendly: The mounted .bashrc and .bash_history files ensure that your shell environment inside the container is familiar and customizable.

This script streamlines the process of launching a fully-featured ROS2 container with all necessary configurations, perfect for robotic development and testing environments.

Custom BASHRC Script

The provided .bashrc file plays an essential role in customizing the ROS2 development environment inside the Docker container. It sets up the necessary ROS2 configurations, enables useful aliases, and ensures a more efficient shell experience. The custom .bashrc file enhances the ROS2 development workflow by:

  • Sourcing ROS2 environments automatically: Both the main ROS2 environment and your specific workspace are set up every time a terminal is opened, eliminating repetitive manual commands.
  • Improving usability: The custom prompt makes it easier to distinguish between Docker and host environments. Color support and handy aliases for ls and grep commands make the terminal output more readable.
  • Streamlining ROS2 development: The provided ROS2 aliases simplify common tasks like building, running nodes, launching simulations, and interacting with topics, nodes, and parameters.
  • Effortless navigation: Automatically navigating to the ROS2 workspace on session start reduces friction when switching between tasks, ensuring that you are ready to build, test, or launch nodes without extra steps.

# .bashrc

# Source ROS setup files
source /opt/ros/humble/setup.bash

# Path to the Default ROS2 Workspace
ROS2_WS="<DEFAULT_ROS2_WORKSPACE_PATH>"

if [ -f "/home/$USER/$ROS2_WS/install/setup.bash" ]; then
source "/home/$USER/$ROS2_WS/install/setup.bash"
fi
# Set the prompt
PS1='\[\033[01;32m\]\u@docker:\[\033[01;34m\]\w\[\033[00m\]\$ '

# Enable color support for ls and add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi

# Some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'

# ROS2 aliases
alias cb='colcon build && source install/setup.bash'
alias cbp='colcon build --symlink-install --packages-select'
alias cbt='colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo --packages-select'
alias run='ros2 run'
alias launch='ros2 launch'
alias topic='ros2 topic'
alias node='ros2 node'
alias param='ros2 param'

# CD to the workspace
cd /home/$USER/$ROS2_WS

Script Breakdown:

1. Sourcing ROS2 Setup Files:

  • source /opt/ros/humble/setup.bash: This command sources the ROS2 setup file for the Humble release, initializing the ROS2 environment by setting the necessary environment variables.
  • The next if block checks for the existence of the workspace’s setup file (/home/$USER/$ROS2_WS/install/setup.bash). If it exists, it is sourced, making the ROS2 packages in the workspace available. This ensures that every time a new terminal is opened, the workspace is automatically ready for development without requiring manual setup.

2. Custom Prompt:
PS1='\[\033[01;32m\]\u@docker:\[\033[01;34m\]\w\[\033[00m\]\$ ': This line customizes the command prompt to display the username, the current working directory, and the identifier @docker in different colors. This visual change makes it clear when you're working inside the Docker container, reducing confusion between host and container environments.

3. Color Support and Aliases for ls: This block enables color support for commands like ls, grep, and their variants (fgrep, egrep). This makes output easier to read, particularly when dealing with long lists of files or search results. The additional aliases for ls provide shortcuts for common listing tasks:

  • ll: Lists all files in long format, including hidden files (-alF).
  • la: Lists all hidden files (-A).
  • l: Lists files in columns (-CF).

4. ROS2 Aliases: This section provides shortcuts for common ROS2 commands to streamline development:

  • cb: Runs colcon build and then sources the generated setup.bash file, making the build results available in the environment.
  • cbp: Builds specific ROS2 packages by name using --packages-select.
  • cbt: Builds selected packages with additional CMake arguments for RelWithDebInfo (optimized with debugging information).
  • run: A shortcut for running ROS2 nodes (ros2 run).
  • launch: A shortcut for launching ROS2 launch files (ros2 launch).
  • topic, node, param: Shortcuts for inspecting and managing ROS2 topics, nodes, and parameters, respectively. This reduces the need for typing the full command names.
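A quick way to confirm an alias landed as intended is to query it with the alias builtin, shown here for the run shortcut (in a non-interactive bash script, expand_aliases must be enabled first; interactive shells have it on by default):

```shell
# Aliases are off in non-interactive bash, so enable expansion first
shopt -s expand_aliases

# Define the shortcut exactly as the .bashrc above does
alias run='ros2 run'

# `alias NAME` prints the stored expansion without executing anything
alias run   # → alias run='ros2 run'
```

This is handy when debugging a .bashrc change inside the container: the printed expansion shows exactly what the shell will substitute.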

5. Automatic Workspace Navigation: cd /home/$USER/$ROS2_WS: This command automatically navigates to the ROS2 workspace ($ROS2_WS) when a new terminal session is started. This ensures that you’re always in the correct directory to start development without manually navigating to it each time.

This configuration optimizes the development experience inside the Docker container, ensuring a faster and smoother workflow when working with ROS2 applications.

General Architecture of the Project Workspace

This project workspace is organized to support ROS2 development using Docker, ensuring that all necessary files and scripts are contained within a structured hierarchy. Below is an elaboration of the key components and how each part fits into the overall workflow.

ROS-FOLDER
├── docker
│   ├── docker_build.sh
│   ├── Dockerfile
│   ├── docker_run.sh
│   ├── entrypoint.sh
│   └── .bashrc
├── .bashrc
├── .bash_history
└── ros_workspace
    ├── build
    ├── install
    │   ├── setup.bash
    │   ├── setup.ps1
    │   ├── setup.sh
    │   └── setup.zsh
    ├── log
    └── src
        ├── pkg_1
        └── pkg_2

1. ROS-FOLDER:

The top-level directory serves as the root of your project workspace. It contains two main sections:

  • docker/: A folder for Docker-related files and scripts that set up and manage the ROS2 container.
  • ros_workspace/: A standard ROS2 workspace that includes source code, build files, installation scripts, and logs.

2. docker/:

This folder contains all the files required for setting up, building, and running your Docker-based ROS2 environment.

  • docker_build.sh: This script automates the process of building the Docker image for the ROS2 development environment. Running this script ensures that the Docker container is created with all necessary dependencies, libraries, and tools.
  • Dockerfile: Defines the Docker image that encapsulates your ROS2 environment. It specifies the base image, installs ROS2 and additional tools, sets up environment variables, and configures the system to create a development environment that is isolated from the host system.
  • docker_run.sh: A script that automates launching the Docker container. It handles mounting volumes (such as your workspace and configuration files), passing device access, setting up environment variables, and allowing GUI applications to run inside the container. This script dynamically names containers and manages hardware access, making it easier to run your ROS2 nodes and work with the environment.
  • entrypoint.sh: This script is executed whenever the container starts, ensuring that the ROS2 environment is properly initialized by sourcing the necessary setup files. It also allows for passing commands dynamically to be executed when the container starts.
  • .bashrc: A container-specific .bashrc file, which configures the shell environment for the container. It sources ROS2 setup files, defines useful aliases, and customizes the command prompt to make development inside the container more efficient.

3. .bashrc (outside docker folder):

This is the .bashrc file located in the root of the ROS-FOLDER, and it is separate from the .bashrc used within the container. This file likely contains shell customizations for the user on the host system. When the Docker container is run, the relevant .bashrc from the docker/ folder is used inside the container instead.

4. .bash_history:

This file contains the command history for your shell sessions. It is shared between the host and the container, allowing you to retain your command history across different Docker sessions, which improves productivity by letting you easily re-run commands without needing to type them again.

5. ros_workspace/:

This directory is your ROS2 workspace, where the actual development and execution of ROS2 nodes happen. It follows the standard ROS2 workspace structure:

  • build/: This folder contains the compiled binary files after running the colcon build command. It includes all build artifacts needed to run your ROS2 packages.
  • install/: After building your ROS2 workspace, the install/ directory contains the final installation of the packages, including the necessary setup files for various shells. These setup files are automatically generated when you build the ROS2 workspace and are used to ensure that all ROS2-related environment variables are correctly set up when running ROS2 nodes or packages.
  • log/: This folder stores the logs from running and building the ROS2 packages. It is useful for debugging purposes, as it provides detailed information on what occurred during build or execution, helping you identify any errors or issues.
  • src/: The src directory is where your ROS2 packages (or nodes) are stored. This is where you write your ROS2 code.
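The layout above can be scaffolded in one command. A sketch, run in a scratch directory so it is repeatable; build/, install/, and log/ are generated later by colcon build, so only docker/ and src/ need to exist up front:

```shell
# Create the skeleton of the project workspace described above
root=$(mktemp -d)
mkdir -p "$root/ROS-FOLDER/docker" \
         "$root/ROS-FOLDER/ros_workspace/src/pkg_1" \
         "$root/ROS-FOLDER/ros_workspace/src/pkg_2"

# Host-side shell files shared with the container via bind mounts
touch "$root/ROS-FOLDER/.bashrc" "$root/ROS-FOLDER/.bash_history"

ls "$root/ROS-FOLDER"   # lists docker and ros_workspace
```

From there, the Docker files go into docker/ and each ROS2 package into ros_workspace/src/.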

Overall Structure Benefits:

This workspace structure is designed to separate concerns:

  • Docker management is handled in the docker/ folder, ensuring a consistent ROS2 environment across different machines.
  • ROS2 workspace development follows the standard ROS2 workspace structure, enabling smooth development, build, and execution processes.

By adhering to this structure, you ensure that both Docker and ROS2 are properly configured and can easily interact, providing a clean and modular development workflow. This organization helps in maintaining scalability and manageability, especially when dealing with larger ROS2 projects with multiple packages and dependencies.

Overall Summary

This blog provides a comprehensive guide for setting up a ROS2 Humble environment using Docker, emphasizing the importance of containerization in robotics development. Docker offers a reproducible and isolated development environment, avoiding common issues such as dependency conflicts and version mismatches. This guide walks through the process of configuring and launching a custom ROS2 Docker container, highlighting key components such as the Dockerfile, docker_build.sh, docker_run.sh, entrypoint.sh, and .bashrc, all of which streamline the setup and development process.

Key takeaways include:

  • Docker as a Tool for Consistency: Docker ensures that your ROS2 environment is identical across all machines, facilitating team collaboration and reducing the risk of environment discrepancies.
  • ROS2 in Docker: By containerizing ROS2, you can manage complex dependencies more easily, ensuring your development process is isolated from the host machine while retaining access to necessary hardware devices and graphical interfaces through X11 forwarding.
  • Customizable Environment: The use of custom .bashrc files and ROS2 workspace configuration ensures that the development environment is tailored for both ease of use and efficiency, with handy aliases and automatic sourcing of setup files.
  • Effortless Automation: The provided scripts, including docker_build.sh and docker_run.sh, automate tedious setup processes, making it easy to build and run the Docker container with minimal effort. The scripts dynamically configure container names, access devices, and set up necessary volumes and environment variables.
  • Standard ROS2 Workspace Structure: A well-organized workspace structure, adhering to ROS2 standards, ensures that the development process is clean, maintainable, and scalable. This layout allows seamless compilation and execution of ROS2 nodes.

Overall, this guide demonstrates how to effectively leverage Docker for ROS2 development, providing a repeatable, secure, and efficient setup for robotic system development and testing.

If you found this article helpful, please follow me on Medium and click the clap 👏 button below a few times to show your support.


Written by Shashank Goyal

I'm Shashank Goyal, a passionate Dual Master's student at Johns Hopkins University, pursuing degrees in Computer Science and Robotics.