Getting started with LXC in RedHat 7


LXC vs. Docker

As stated in Understanding the key differences between LXC and Docker, the main difference between Docker and LXC is:

LXC containers have an init and can thus run multiple processes; and Docker containers do not have an init and can only run single processes.

And the container use case is fairly simple:

  • To run a stack of services (e.g. Zookeeper, Storm, Cassandra, etc.) in the container; notably, some services depend on others.
  • To have both staging and production containers running on the same metal box, so isolation is important (down to the point where network interfaces are bonded to different containers).

The decision in favour of LXC is simply based on my current observation:

  1. Having all services in one single LXC container is more intuitive and straightforward than running each service in separate but interdependent Docker containers.
  2. Docker may also introduce extra complexity when a single service relies on more than one process to run.
  3. From the security perspective, Docker requires root privileges to run, while newer LXC supports unprivileged containers (although that is not feasible in the current RedHat 7.x).
  4. With the new LXD, which is built atop LXC, live migration is possible, though it is also not feasible in the current RedHat 7.x.

In the future, if container reusability outweighs other factors, then adopting Docker is always possible.

Privileged container vs. unprivileged container

Unfortunately, the current RedHat 7.x does NOT support unprivileged containers, and they are not likely to receive native support within the RedHat 7 life cycle, mostly because the Linux kernel versions in RedHat 7 are too old.

  • Unprivileged containers depend on a list of newer kernel features and packages (shadow-utils >= 4.2.1; linux-kernel >= 3.12; cgmanager, etc.).
  • Current RedHat 7.2 and all future 7.x updates are based on linux-kernel 3.10.0-xxx (a quick check is sketched below).
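A minimal way to see what a given box ships with, assuming uname and rpm are available:

# check the running kernel version (unprivileged containers need >= 3.12)
uname -r
# check the shadow-utils version (unprivileged containers need >= 4.2.1)
rpm -q shadow-utils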

Installing LXC in AWS RedHat 7.2

The installation is pretty straightforward. The objective is to use official packages as much as possible, thus excluding self-compiled options. The following are the minimal steps after checking a few documents 1, 2, 3.

Install EPEL

Run the following two commands to enable EPEL on your AWS EC2 instances. Check this article if you are running other RedHat variants.

curl -O http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
sudo rpm -ivh epel-release-7-5.noarch.rpm

Install LXC

# lxc-extra is included in Fedora but DOES NOT exist in EPEL.
sudo yum -y --enablerepo=epel install lxc lxc-templates wget libvirt
# Optional, used if no virbr0 is initialised
sudo systemctl start libvirtd
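After installation, lxc-checkconfig (shipped with the lxc package) gives a quick sanity check that the running kernel has the namespace and cgroup features LXC needs:

# verify kernel support for namespaces, cgroups, etc.
lxc-checkconfig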

Playing with privileged containers

Creating/deleting privileged containers

# a privileged container is essentially a container created by root,
# so we will start everything with su
sudo su
# this will download "CentOS 7 64-bit" and create a container named "prod"
lxc-create -t download -n prod -- -d centos -r 7 -a amd64
# destroy the "prod" container
lxc-destroy -n prod 

Listing containers

# list ALL containers
lxc-ls
# list RUNNING containers
lxc-ls --active
# --fancy flag NOT supported by the current lxc in RedHat 7.x
# lxc-ls --fancy

Starting and stopping containers

# -d launches the container in the background; removing the flag brings the container's tty to the current tty
lxc-start -d -n prod
# stop the "prod" container
lxc-stop -n prod

Accessing containers

The default CentOS 7 container:

  • is well isolated from its host.
  • has no sudo access via lxc-start and lxc-console.
  • has no default user.

To access the containers, the following steps have to be executed:

  • Create your default user.
# start the "prod" container in the background
lxc-start -d -n prod
# use lxc-attach to gain root access
lxc-attach -n prod
# create "lxcuser" user and assign a password to the user
adduser lxcuser
passwd lxcuser
# exit root mode in lxc-attach
exit 
  • Log in to the container as the normal user.
# use lxc-console to login
lxc-console -n prod
# lxc-console might get stuck before the login prompt shows up
# in that case, use lxc-start without -d flag as an alternative
# stop the container
lxc-stop -n prod
# restart the container and bring the container's tty to current tty
lxc-start -n prod

Notes on infrastructure architecture

Since we are going to run both prod and staging on every host node (physical and virtual), and resetting a physical node is considerably complicated, the best practice is:

  • The physical and virtual nodes should have minimum packages installed and configured.
    • e.g. lxc packages and security patches.
    • e.g. network bridging configuration between network interfaces and containers.
  • Run prod and staging in two separate containers.
    • Prod and staging are isolated from each other.
    • Prod and staging are isolated from their underlying host.
    • If any prod or staging server goes wrong, we have the option to either revert to a previous snapshot or destroy and rebuild the container (see the snapshot sketch after this list).
  • A separate script is needed for the container server that runs Puppet Master.
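As a sketch of that snapshot workflow, lxc-snapshot can capture and restore a stopped container; whether this works as shown depends on the container's backing store and the lxc version shipped in EPEL:

# the container should be stopped before snapshotting
lxc-stop -n prod
# create a snapshot (automatically named snap0, snap1, ...)
lxc-snapshot -n prod
# list existing snapshots
lxc-snapshot -n prod -L
# restore the container from snapshot snap0
lxc-snapshot -n prod -r snap0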

Mounting external disks to a container

Mount via shared directory option

With this option, you can use the UUID of the partition without specifying the external disk device name. This is very handy in an AWS environment, as the disk device name may change over time.

  • Make sure the external disk has been fdisked, mkfsed, and fstabed.
    • The external disk /dev/xvdb1 is mounted at /mnt/prod-volume in /etc/fstab.
    • i.e. UUID=2ae1a7e4-7cb0-43ba-b524-5fae73ca211e /mnt/prod-volume ext4 defaults 0 0.
  • If /etc/fstab is newly configured, then mount everything again with sudo mount -a.
  • Edit the container’s config file /var/lib/lxc/prod/config by adding the following line at the end of the file: lxc.mount.entry = /mnt/prod-volume mnt/volume none bind,create=dir 0 0 (see the sketch below).
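Putting the two pieces together, a minimal sketch using the UUID and paths from above (they will differ on your system):

# /etc/fstab on the host: mount the volume by UUID
UUID=2ae1a7e4-7cb0-43ba-b524-5fae73ca211e /mnt/prod-volume ext4 defaults 0 0

# /var/lib/lxc/prod/config: bind-mount the host directory into the container
lxc.mount.entry = /mnt/prod-volume mnt/volume none bind,create=dir 0 0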

In the line above, create=dir (or create=file) is a container-specific mount option that creates the directory (or file) if the mount point does not yet exist.

The second path (the mount target) can be either relative or absolute, but the two mean different things:

  • An absolute path is resolved from the host's point of view, e.g. /var/lib/lxc/mycontainer/rootfs/mount_point. It may not work on some Debian-derived systems.
  • A relative path is resolved from / inside the container (see the example below).
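For illustration, the same bind mount written both ways; the container name mycontainer is hypothetical:

# relative target: resolved from / inside the container
lxc.mount.entry = /mnt/prod-volume mnt/volume none bind,create=dir 0 0
# absolute target: resolved from the host's view of the container rootfs
lxc.mount.entry = /mnt/prod-volume /var/lib/lxc/mycontainer/rootfs/mnt/volume none bind,create=dir 0 0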

Mount via fstab option

With this option, you have to use the device name instead of the UUID, which is a bit inconvenient if the device name changes over time. The upside is that mounting inside the container doesn’t require the volume to be mounted on the host in the first place.

  • Create a fstab file under /var/lib/lxc/prod.
    • This is not the fstab file loaded inside the container (i.e. /etc/fstab); in fact, this file is loaded before /etc/fstab.
    • Add lines like /dev/xvdb1 mnt/volume ext4 noatime 0 0.
  • Edit the container’s config file /var/lib/lxc/prod/config by adding the following line at the end of the file: lxc.mount = /var/lib/lxc/prod/fstab (see the sketch below).
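A minimal sketch of the two files, reusing the device name and mount point from above:

# /var/lib/lxc/prod/fstab: loaded before the container's own /etc/fstab
/dev/xvdb1 mnt/volume ext4 noatime 0 0

# /var/lib/lxc/prod/config
lxc.mount = /var/lib/lxc/prod/fstab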

Other options

More options can be found in this thread.

Bridging container network to host’s interface

There are 5 different LXC network configurations described in this blog. The current case requires bridging one physical network interface to a container, thus phys is the right type to go with.

By adding the following lines to /var/lib/lxc/prod/config, the physical network interface will disappear from the host and appear in the container.

lxc.network.type = phys
# hwaddr should match the hwaddr of the interface
lxc.network.hwaddr = 08:00:27:f9:8f:2e
lxc.network.flags = up
lxc.network.link = eth1
lxc.network.ipv4 = 192.168.1.135/24
# Gateway should be set to the correct gateway for your network
lxc.network.ipv4.gateway = 192.168.1.254
# Rename host interface to a generic name in container
lxc.network.name = eth0
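A quick way to verify the interface has moved after restarting the container; the interface names follow the config above:

# on the host, eth1 should no longer be listed
ip link show eth1
# inside the container, the same interface now appears as eth0
lxc-attach -n prod -- ip addr show eth0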