Linux containers allow for easy isolation of developer environments. If you’re often working with a bunch of different ROS versions, it’s a lot easier to do your work entirely in containers.
You’ll first need to install LXD using snap.
ubuntu@lxhost:~$ sudo snap install lxd
lxd 3.21 from Canonical✓ installed
Throughout this guide I will be using the hostname to distinguish which machine I'm running commands on:

- lxhost is the bare-metal machine you'll be creating containers on
- ros1-live is the container we'll be creating later
- remote-pc is a different machine on the same LAN as lxhost

Pay attention to the value after the @ in the shell prompts to make sure you run commands on the right machine.
If you haven't added /snap/bin to your PATH yet, you'll want to do so in order to run the programs snap installs.
ubuntu@lxhost:~$ export PATH=/snap/bin:$PATH
You'll need to initialize LXD before your first run. I'll be using the default settings for this example, but you should look into the settings to make sure you know what you're getting into. The exact printout may vary depending on your system and the version of LXD you're using.
ubuntu@lxhost:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
This handled setting a number of configuration options for us, as well as creating a storage pool for any containers we make and a network bridge (named lxdbr0 above) that will connect our containers to the network.
LXD runs as a daemon that is accessible to root and to members of the lxd group. Users with access to LXD can attach host devices and filesystems, presenting a security risk. Only add users you'd trust with root access to the lxd group.
If you'd like to access the container from a remote PC, the default bridged network setup makes things tricky, as the containers are behind a NAT on the host. The easiest way to ssh in is to use the host as a jump host.
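For example, assuming your username is ubuntu on both machines and you've looked up the container's IP with lxc list on the host, something like this should work from remote-pc:

ubuntu@remote-pc:~$ ssh -J ubuntu@lxhost ubuntu@<container-ip>

You can make this persistent with a matching ProxyJump entry for the container in your ~/.ssh/config.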
If your remote PC is running Windows, you're likely using a version of ssh without the ProxyJump option. I recommend downloading the latest release of OpenSSH and putting it at the front of your Windows PATH variable, so VSCode and other tools can use it.
If you need more direct access, you can add an lxc proxy device to the container to forward a port into it. A command along these lines should do the trick (the device name sshproxy is arbitrary):
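ubuntu@lxhost:~$ lxc config device add ros1-live sshproxy proxy listen=tcp:0.0.0.0:2222 connect=tcp:127.0.0.1:22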
This will listen on the host port 2222 and forward connections to the container’s port 22. You can learn more about proxy devices here.
If you don't want the container behind a NAT on the host, you can specify a different bridge configuration or add an ipvlan or macvlan nic device to the container. You can read more about nic devices here.
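For instance, a macvlan nic that puts the container directly on your LAN might look like the following; the parent must be your host's physical interface (eth0 here is a placeholder, check ip link), and note that macvlan generally prevents the host and the container from talking to each other directly:

ubuntu@lxhost:~$ lxc config device add ros1-live eth1 nic nictype=macvlan parent=eth0 name=eth1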
Sharing Files Transparently
You can neatly share directories between the host and the container using disk devices. Disk devices can be a bind-mount of an existing file or directory, a regular mount of a block device, or one of several other source types. You can read about disk devices here.
Let's create a directory and share it with the container:
ubuntu@lxhost:~$ mkdir share && cd share
ubuntu@lxhost:~/share$ mkdir ros1-live
ubuntu@lxhost:~/share$ lxc config device add ros1-live share disk source=~/share/ros1-live path=/home/ubuntu/share
In order to truly share access, we'll want to use the raw.idmap config option for the container to map your UID and GID. Assuming your UID and GID on the host are both 1000 (the default for a single-user Ubuntu installation), you'll use the following command to set the option:
ubuntu@lxhost:~$ lxc config set ros1-live raw.idmap "both 1000 1000"
Note that idmap changes only take effect when the container starts, so restart it (lxc restart ros1-live) if it's already running. After that, any files you make and modify on the host and in the container will be indistinguishable, permission-wise.
To verify the directory is shared:
ubuntu@lxhost:~$ ssh ros1-live
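Once you're in, a quick check (using the paths from above) is to create a file on one side and inspect it from the other; with the idmap applied it should show up owned by ubuntu on both ends:

ubuntu@ros1-live:~$ touch ~/share/hello
ubuntu@ros1-live:~$ exit
ubuntu@lxhost:~$ ls -l ~/share/ros1-live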
Set up ROS
For once, setting up ROS will be the easy part. Just follow the instructions at the ROS wiki like normal.
Developing in the Container
Create a workspace directory in the share directory, and use it as normal to develop.
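For ROS 1 this is just the standard catkin workspace setup, rooted in the share; a sketch, assuming you've already sourced your ROS distro's setup.bash:

ubuntu@ros1-live:~$ mkdir -p ~/share/catkin_ws/src
ubuntu@ros1-live:~$ cd ~/share/catkin_ws
ubuntu@ros1-live:~/share/catkin_ws$ catkin_make
ubuntu@ros1-live:~/share/catkin_ws$ source devel/setup.bash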
If you intend to interface with actual hardware, you’ll need to attach devices to your container. I’ve linked to the device configuration option several times before, but you’ll want to look at it in depth for configuring your hardware.
For example, if you're controlling an OpenManipulatorX via a U2D2, you'll need to communicate over serial. This is done via a Unix character device at /dev/ttyUSB0 or similar. To add it to the container as a unix-char device, a command like this should work (the device name ttyUSB0 is arbitrary):
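ubuntu@lxhost:~$ lxc config device add ros1-live ttyUSB0 unix-char path=/dev/ttyUSB0

If your container user can't open the device, unix-char devices also accept uid, gid, and mode properties to adjust its ownership and permissions inside the container.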
You can also forward the entire usb device using the vendorid and productid of the device as you would for a udev rule. See the entry on usb devices.
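For example (the IDs below are placeholders; get your device's actual vendor and product IDs from lsusb):

ubuntu@lxhost:~$ lxc config device add ros1-live u2d2 usb vendorid=0403 productid=6014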
You may notice GUI programs don't work well, or at all, in containers. Simos Xenitellis wrote a wonderful guide on how to fix this, and with it you'll be able to run Gazebo, RViz, et al. in your containers.
Create a new profile for your containers as follows (note: this uses vim; if you do not know vim, use the alternative further below):
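First create the profile and open it for editing:

ubuntu@lxhost:~$ lxc profile create gui
ubuntu@lxhost:~$ lxc profile edit gui

Then a profile along these lines should work. This is essentially the one from Simos's guide, and it assumes UID/GID 1000 on the host, X display :0, and the host PulseAudio socket at /run/user/1000/pulse/native, so adjust those to your setup:

config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/tmp/.pulse_native
  raw.idmap: both 1000 1000
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI-capable LXD profile
devices:
  PASocket:
    path: /tmp/.pulse_native
    source: /run/user/1000/pulse/native
    type: disk
  X0:
    path: /tmp/.X11-unix/X0
    source: /tmp/.X11-unix/X0
    type: disk
  mygpu:
    type: gpu
name: gui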
To summarize the above, we tell lxc that every container with the gui profile should:
- Apply the raw.idmap config from earlier
- Via cloud-init, disable shm in /etc/pulse/client.conf
- Set the PulseAudio server to a socket at /tmp/.pulse_native
- Install x11-apps, mesa-utils, and pulseaudio
- Mount the PulseAudio socket from the host to /tmp/.pulse_native
- Mount the X11 socket
- Mount your GPU
We apply this profile to the existing container using:
ubuntu@lxhost:~$ lxc profile add ros1-live gui
Which adds the gui profile alongside the already applied default profile to the container.
Restart the container using:
ubuntu@lxhost:~$ lxc restart ros1-live
And after ~30 seconds of downloading and installing the packages, the container should be up and ready to use. You can run GUI apps over ssh with X11 forwarding, or straight from a shell opened with lxc exec.
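For example, a quick way to test the GUI setup is to open a shell with lxc exec and run glxgears (installed above as part of mesa-utils):

ubuntu@lxhost:~$ lxc exec ros1-live -- sudo --login --user ubuntu
ubuntu@ros1-live:~$ glxgears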