LXC and LXD

May 7, 2020

Continuing the container series naturally brings us to LXC and LXD. For a background on Linux Containers and an introduction to Docker, see Linux Containers and Docker. The rest of the container series posts can be found here.


LXC

containers

Engineers from IBM developed LXC in 2008. LXC layered userspace tooling on top of the already existing cgroups and namespace technologies. While LXC improved the user experience of deploying containers, security remained an obstacle: users could potentially break out of containers and attack the host system.

In version 1.0 of LXC, released in 2014, security was enhanced with an update to cgroups and namespaces, along with support for additional security features including SELinux and Seccomp.

LXC 1.0 also introduced the concept of unprivileged containers. Up to this point all containers were privileged, with the container's uid 0 mapped to the host's uid 0; protecting the host and preventing escape relied on MAC systems, seccomp filters, and the dropping of capabilities and namespaces. LXC considers privileged containers root-unsafe. Unprivileged containers, on the other hand, are safe by design: the container's uid 0 is mapped to an unprivileged user outside of the container and only has extra rights on resources that it owns itself.
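The uid mapping behind unprivileged containers can be inspected from userspace. A minimal sketch (the lxd and root entries in /etc/subuid are assumptions of a stock LXD install; the files may not exist on every system):

```shell
# Show the subordinate uid/gid ranges available for unprivileged
# mapping (the "lxd" and "root" entries assume a stock LXD install):
grep -E 'lxd|root' /etc/subuid /etc/subgid || true

# Every process exposes its uid mapping via procfs. On the host this
# is typically the identity mapping over the full 32-bit uid range;
# inside an unprivileged container, uid 0 maps to a high host uid:
cat /proc/self/uid_map
```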


LXD

LXD is a system container manager written in Go, developed by Canonical, and first released in 2014. LXD builds on top of LXC to provide its own, higher-level user experience for managing containers.

LXD is image based, with a wide number of Linux distributions available. It offers a full Linux system running inside containers, resulting in a user experience somewhere in between virtual machines and containers. Unlike virtual machines, containers implemented with LXD use the same kernel as the host system and do not simulate hardware. Unlike Docker containers, LXD containers act more like a full Linux system and are not treated as ephemeral application containers.


Installation

I’ll be starting with a fresh Ubuntu 20.04 Server install. LXD comes preinstalled on Ubuntu Server, but will need to be installed on Ubuntu Desktop. Installing the snap will give you the most up-to-date version of LXD:

# snap install lxd

LXD can alternatively be installed with apt:

# apt install -y lxd

Configuration

By default, lxc commands require root privileges. Add your current user to the lxd group and run newgrp to apply the new group membership straight away:

# usermod -aG lxd $USER
$ newgrp lxd

Configure LXD by running the init script. In most cases the default options will be just fine; go ahead and press ENTER at each prompt to accept them. Advanced storage, network, and remote options can also be specified at this time. I suggest using the default ZFS storage pool, which allows LXD to take advantage of ZFS features such as cloning and snapshots:

$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, lvm, zfs, ceph, btrfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=15GB]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
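That last prompt hints that the whole configuration can also be applied non-interactively by piping a preseed into lxd init. A minimal sketch mirroring the defaults chosen above (the pool size, bridge name, and device entries are assumptions matching the interactive answers):

```yaml
# Apply with: cat preseed.yaml | lxd init --preseed
config: {}
storage_pools:
- name: default
  driver: zfs
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
```

This is handy for provisioning hosts with configuration management, where answering interactive prompts isn't an option.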

LXD CLI

Launching your first container is easy. Run lxc launch, followed by an image name, and finally a container name of your choosing:

$ lxc launch ubuntu:20.04 test

List all containers on the system with:

$ lxc list
+-------+---------+----------------------+---------------------------------------------+-----------+-----------+
| NAME  |  STATE  |         IPV4         |                    IPV6                     |   TYPE    | SNAPSHOTS |
+-------+---------+----------------------+---------------------------------------------+-----------+-----------+
| test  | RUNNING | 10.47.243.213 (eth0) | fd42:9ea0:5534:5192:216:3eff:fe9f:bf (eth0) | CONTAINER | 0         |
+-------+---------+----------------------+---------------------------------------------+-----------+-----------+

Access your container with a shell:

$ lxc exec test -- /bin/bash

Or run commands directly:

$ lxc exec test -- apt update

Containers can be easily stopped and deleted:

$ lxc stop test
$ lxc delete test

Images

Besides the many versions of Ubuntu available by default, LXD also includes a wide variety of distros from its public image server.

Note: The images found on this image server are unofficial images generated from community-supported, upstream LXC image templates.

The available images range from CentOS to Kali Linux and everything in between. You can list all available images with:

$ lxc image list images:

Find the image you’re looking for and launch:

$ lxc image list images: | grep centos
$ lxc launch images:centos/8 centos-8

Storage

List all LXD storage pools:

$ lxc storage list
+---------+-------------+--------+--------------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |                   SOURCE                   | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| default |             | zfs    | /var/snap/lxd/common/lxd/disks/default.img | 2       |
+---------+-------------+--------+--------------------------------------------+---------+

Get info on a specific storage pool:

$ lxc storage info default
info:
  description: ""
  driver: zfs
  name: default
  space used: 545.91MB
  total space: 14.04GB
used by:
  images:
  - 647a85725003d873f8bb9a5bd1a09bdc7fd4bcb393b2cf629f7e0edaa58f5637
  profiles:
  - default

Expand a default loop backed ZFS storage pool (use the pool name and image location specified in the lxc storage info output):

# zpool set autoexpand=on default
# zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
# zpool set autoexpand=off default
$ lxc storage info default
info:
  description: ""
  driver: zfs
  name: default
  space used: 545.92MB
  total space: 19.24GB
used by:
  images:
  - 647a85725003d873f8bb9a5bd1a09bdc7fd4bcb393b2cf629f7e0edaa58f5637
  profiles:
  - default

Copy and Snapshots

When utilizing ZFS as a storage back-end, LXD can take almost instantaneous snapshots. A snapshot initially consumes no additional disk space, but grows over time as it continues to reference old data that the live container has since changed or deleted.
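This copy-on-write behavior can be observed directly at the ZFS layer. A sketch assuming the stock snap install, where the pool is named default and container datasets live under default/containers (adjust the dataset path for your setup):

```shell
# LXD stores each lxc snapshot as a ZFS snapshot named
# snapshot-<name> on the container's dataset. The USED column shows
# space held exclusively by each snapshot, i.e. blocks no longer
# shared with the live container:
sudo zfs list -t snapshot -r default/containers/test
```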

Take a few snapshots:

$ lxc snapshot test snap0
$ lxc snapshot test snap1

Snapshot info is included with the general info of a container:

$ lxc info test
Name: test
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/05/07 16:59 UTC
Status: Stopped
Type: container
Profiles: default
Snapshots:
  snap0 (taken at 2020/05/07 17:02 UTC) (stateless)
  snap1 (taken at 2020/05/07 17:57 UTC) (stateless)

Containers can be restored from snapshots, although with the ZFS back-end only the most recent snapshot can be restored directly:

$ lxc restore test snap0
Error: Snapshot 'snap0' cannot be restored due to subsequent snapshot(s). Set zfs.remove_snapshots to override
$ lxc restore test snap1

A simple workaround for this behavior (a consequence of the ZFS storage back-end) is to make a copy of the container from the earlier snapshot:

$ lxc copy test/snap0 test0

Containers can also be easily copied:

$ lxc copy test test1

Delete specific snapshots:

$ lxc delete test/snap0

That should pretty much cover the basic usage of LXC and LXD. These two systems have a long history and have gained important features over the years. LXD definitely stands out from both Docker and virtual machines in its own unique niche. Next time we’ll be covering networking and reverse proxies with LXD using a practical example.