Education and the Cloud

May 12, 2015

Proof-of-Concept – Using Mesh VPN to interconnect LXC containers on Multiple Hosts on Multiple Clouds

Filed under: LXC, mesh VPN, ubuntu, Uncategorized, VPN — bmullan @ 4:34 pm


Secure Mesh VPN Network Interconnect for

LXC containers in Multiple IaaS Clouds

by Brian Mullan

April 2015


I’ll start off this blog post by saying LXC containers are great!

LXC provides pre-built OS containers for CentOS 6, Debian (Jessie, Sid, Squeeze & Wheezy), Oracle Linux 6.5, Plamo Linux, and multiple releases of Ubuntu from Precise up to and including Wily.

It’s important to understand that the host of LXC containers can be one distro while the containers can be any of the other supported distros. The only requirement for a container OS is that it run on the same Linux kernel as the host OS.

This post, though, is about LXC, so let’s begin.

The rest of this post is about my proof-of-concept testing of using a full mesh VPN to provide LXC container connectivity between any remote host whether on an IaaS Cloud like AWS, Digital Ocean or Rackspace or your own servers.

This document should be considered a Work-in-Progress Draft as I have been receiving a lot of good input
from others & will continue to edit for additional information, improvements and/or corrections.

Problem Statement

On any of the existing IaaS Cloud providers like AWS, Digital Ocean etc you can easily create virtual machine “instances” of running Linux servers.

Note that some IaaS clouds (Azure and AWS, for example) also let you create Windows virtual machines, but again this post is about Linux and LXC, not Windows.

Although you can create and run a Linux server in those clouds, you cannot “nest” other Linux servers inside of those cloud server “instances”. By “nesting” I am referring to using KVM, VirtualBox, etc. inside of, say, an AWS Ubuntu server instance to create other virtual machines (a VM inside a VM). There may be some IaaS providers that permit nested VMs, but I am not aware of any; AWS, for instance, does not allow it.

The reason is that those clouds do not permit nested hardware virtualization.

NOTE: On your home Linux PC/server you can nest KVM hardware-virtualized instances.

LXC containers are:

  • much more lightweight than full HW virtualization like KVM, VMware, VirtualBox, etc. This means LXC is faster and uses fewer “host” server resources (memory, CPU, etc.). Canonical (Ubuntu, LXC, Juju, etc.) just published performance test results of LXD (LXD utilizes LXC!) versus KVM instances; LXD/LXC far surpasses KVM in both scalability and performance. On a server where you may be limited to running ~20 HW-virtualized VMs, you may be able to run 80-100+ LXC containers.
  • able to be “nested” within a HW-virtualized Linux instance on AWS, Digital Ocean, etc.
  • sharing the same kernel as the host machine, so they are able to take advantage of the “host” security, networking, file-system management, etc.
  • extremely fast to start up & shut down… almost instantaneous.
  • flexible, because you can use, say, an Ubuntu host and have LXC containers that are other Linux distros such as Debian or CentOS.

A benefit of LXC is that you can use it to create full container-based servers in IaaS clouds like AWS, Digital Ocean, etc., and you can also “nest” LXC containers (containers inside a container) on those cloud “instances”.

LXC has some default characteristics. These can be modified/changed, but I will not cover that in this document.

LXC containers are by default created/started/running behind a NAT’d bridge interface called “lxcbr0” which is created when you install LXC on a server.

lxcbr0 is by default given a 10.0.3.x network/subnet

NOTE: you can change this if you want/need to 

Each LXC container you create on the “host” will be assigned an IP address in that 10.0.3.x subnet (for example,,, etc.).

NOTE: Your cloud “instance” (i.e. VM) will be assigned an IP address by the IaaS provider at the time you create it. Actually, there are usually two IP addresses assigned: one private to the cloud and one “public”, so the cloud instance can be reached from the Internet.

The LXC containers you create & run on any Cloud instance (the “instance” will from now on be referred to as the LXC “host”) can by default reach anything on the Internet which the “host” can reach.   Again, that is configurable.

By default, all LXC containers running on any one “host” can also reach each other.

But what if you wanted LXC containers running on a host on, say, AWS to interact with LXC containers running on a host on Digital Ocean’s cloud? You can’t… not without some network configuration magic. The LXC containers running on one host cannot talk to containers running on another host, because each set runs behind its own host’s NAT’d lxcbr0 interface.

Also, LXC containers running on one AWS host cannot reach LXC containers running on another AWS host (ditto for other clouds).

So the problem becomes: what if you wanted to do this?

What if you wanted your LXC containers on a host somewhere (cloud or elsewhere) to be able to reach and interact with LXC containers running on any other host anywhere (assuming firewalls etc. don’t prevent it)?

Also, how could you make this secure so not just anyone could do this?

A Solution Approach I Utilized

Virtual Private Networks (VPNs) are commonly used in the normal networking world to securely interconnect remote sites & servers.  Think of a VPN as a “tunnel”.

VPNs encrypt the data links utilized for this interconnect to keep the VPN and any data traversing it  “private”.   So a VPN is an encrypted “tunnel”.

Most common VPNs are peer-to-peer (P2P). A P2P VPN usually requires configuration of each server you want to connect. If you have 100 servers or sites, that means configuring each individual site for 99 different connections (one for each “peer” site/server).

That solution if used beyond a few servers can be both complicated & messy to maintain.
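To put a number on that: a full P2P setup among n sites needs n*(n-1) one-way peer configurations, which is why it gets messy fast. A quick sanity check in shell:

```shell
# one-way peer configurations needed for a full P2P mesh of n sites
n=100
echo $(( n * (n - 1) ))   # each of the 100 sites configures 99 peers -> 9900 entries
```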

The solution to this is to use what is called a mesh VPN. A “mesh” VPN means that every host configured as part of the VPN can connect to every other host on that VPN without being specifically configured to do so.


[Diagram: mesh VPN with LXC]
In open source there are quite a few mesh VPN choices; they vary in the features they offer and in how complicated they are to set up.

Some Mesh VPN utilize a concept of a “super-node” which is used to keep a stateful database of all “member” servers/hosts that are part of the VPN.

Other mesh VPNs have been designed not to require a “super-node” at all! This eliminates overhead traffic to/from the “super-node” and any delays that traffic can cause.

In this blog post I am describing how to utilize one such open-source mesh VPN, named PeerVPN, which is the work of Tobias Volk.

Key PeerVPN Features include:

  • Ethernet tunneling support using TAP devices.
  • IPv6 support.
  • Full mesh network topology.
  • Automatically builds tunnels through firewalls and NATs without any further setup (for example, port forwarding).
  • Shared key encryption and authentication support.
  • Open Source (GPLv3)

PeerVPN uses UDP exclusively and can send UDP packets that are larger than the MTU.
Tobias Volk, the author of PeerVPN, has indicated that PeerVPN fragments and reassembles packets itself to enable this MTU capability.

PeerVPN is simple to set up, creates a full mesh VPN, and does not require a “super-node”.

You can define multiple separate VPNs on each host! To define additional VPN networks, just create additional copies of your peervpn.conf, using a new unique name for each.

  • edit each new configuration file (call the new config file anything you want)
  • change the networkname variable to a unique name for the additional VPN
  • change the port variable to be unique for each new VPN
  • generate a different PSK encryption/authentication key for each additional VPN and add that PSK key after the psk variable in the appropriate VPN’s .conf file

NOTE:   All servers that you want to be part of the same VPN must use the same config file values (exceptions:  “interface” & “ifconfig4/ifconfig6” values)
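A sketch of those multi-VPN steps (the file name vpn-network-B.conf, port 7001, network name VPNnet2, and the replacement seed are all made-up example values): clone an existing config and adjust the three variables with sed.

```shell
# derive a second, independent VPN config from an existing one
cp peervpn.conf vpn-network-B.conf
sed -i -e 's/^port 7000/port 7001/' \
       -e 's/^networkname VPNnet1/networkname VPNnet2/' \
       -e 's/^psk .*/psk ADifferentSeedPassword/' vpn-network-B.conf
```

Each VPN then gets its own peervpn process started against its own .conf file.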

The minimum PeerVPN configuration file requires only 9-11 items to be configured, depending on whether you are using IPv4, IPv6, or both:

port 7000                      # port used by this particular PeerVPN (other VPNs require a different port number)
networkname VPNnet1            # your name for each unique VPN network deployed (other VPNs require a different networkname)
psk MyCryptoSeedPassword       # an encryption/authentication “password” up to 512 characters (other VPNs’ PSKs should be unique). For PeerVPN, the PSK you enter in the config is just a seed password used to generate the “real” crypto keys, which are always 256-bit AES keys generated individually for each VPN link.
enabletunneling <yes|no>       # default is yes; enables the tunneling interface (refer to the configuration documentation link below)
enableipv4 <yes|no>            # default is yes
enableipv6 <yes|no>            # default is yes
interface peervpn0             # name you want to give the local VPN Tunnel End Point (TEP) on a “host” (name it whatever you like)
ifconfig4 <IPv4 addr/mask>     # IP address of “this” host’s TEP; each host’s TEP gets a different address
ifconfig6 <IPv6 addr>          # the node’s IPv6 address that should be assigned to the tunneled interface (i.e. the encrypted tunnel)
initpeers <peer public IPv4> 7000   # for HA, list at least several peer nodes’ public IPv4 addresses that “this” node should try to initially connect (or reconnect) to
initpeers 2001:DB8:1337::1 7000     # for HA, list at least several peer nodes’ public IPv6 addresses that “this” node should try to initially connect (or reconnect) to
enablendpcache <yes|no>        # default is no; if using IPv6, set to yes. Caches tunneled IPv6 NDP messages to improve performance by reducing the NDP multicast messages sent between peers.

For a basic PeerVPN configuration file that’s it!    Pretty simple to implement I think compared to other mesh VPN solutions I have seen!
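Putting the items together, here is what a minimal IPv4-only peervpn.conf might look like (the TEP address 10.8.0.1/24 and the peer address are illustrative placeholders; use your own values):

```shell
# write a minimal IPv4-only PeerVPN config (all values are examples)
cat > peervpn.conf <<'EOF'
port 7000
networkname VPNnet1
psk MyCryptoSeedPassword
enabletunneling yes
enableipv4 yes
enableipv6 no
interface peervpn0
ifconfig4
initpeers 7000
EOF
```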

To start peervpn use the following command:

usage:  ./peervpn <path to peervpn config file>

IMPORTANT NOTE: For complete PeerVPN configuration options and descriptions, see the configuration documentation on the PeerVPN website.

The 10,000 ft view of the overall process to set up and use PeerVPN is:

  • Create an Ubuntu server instance (i.e. host) on each cloud.
  • On each cloud instance/host, open UDP port 7000, which is used by PeerVPN.
  • Install PeerVPN on each cloud instance: copy the .zip file and unzip it in a subdirectory of your choosing.
  • Create a peervpn.conf configuration file (refer to the PeerVPN tutorial).
  • Generate a PSK encryption password “seed” (I used “psktool”) and set the psk variable in your peervpn.conf file to that key.
    • note: use the same PSK on all VPN “member” hosts within the same VPN
  • Follow the instructions in the PeerVPN tutorial for adding more servers/hosts to the VPN. You can add as many as you can support from a traffic perspective.

To run additional VPNs, start a new instance of peervpn pointing at each additional .conf configuration file:


  • ./peervpn ./vpn-network-A.conf
  • ./peervpn ./vpn-network-B.conf
  • etc

If you do this, each VPN will be separate & isolated from every other VPN not of the same “networkname”.

How to Install & Use PSKTOOL to generate your PSK encryption password

An important part of any VPN is the encryption of the data traversing the VPN tunnel. This is especially true for data crossing the Internet. To ensure the security of the data you send through your VPN tunnel, PeerVPN’s configuration file (peervpn.conf) allows you to specify a PSK encryption password. The PSK you enter into the peervpn.conf file is used as a “seed” to generate the actual 256-bit AES keys used to encrypt the VPN link.

Pre-shared keys (PSKs) can be used to provide both authentication and encryption, and are the most common authentication method used today.

I used psktool for my experiment and it is included in the gnutls package(s).

On Ubuntu the following will install what is required for you to use psktool:

$ sudo apt-fast install gnutls-bin gnutls26-doc guile-gnutls -y

Usage: psktool [options]
 -u, --username USERNAME    specify username (the username is not important for our PeerVPN use-case, but the tool requires one)
 -p, --passwd FILE          specify a password file
 -s, --keysize SIZE         specify the key size in bytes (NOTE: the max keysize is 64 bytes, i.e. 512 bits)
 -v, --version              print the program’s version number
 -h, --help                 show this help text

then… to generate a 512-bit PSK for “any” username and save it to a file (example: ./mypsk):

example: $ psktool -u bmullan -p ./mypsk -s 64

Edit the mypsk file, copy everything after the username you used (the username will be the only readable text in that file), and add that copied PSK password key to your peervpn.conf file after the psk variable.
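psktool writes its output as username:key pairs, one per line. Assuming that format, you can pull the key out non-interactively instead of hand-editing (the username and key below are made-up samples):

```shell
# create a sample of psktool's "username:hexkey" output format
printf 'bmullan:deadbeefcafef00d\n' > ./mypsk   # a shortened, made-up key for illustration

# grab everything after the first ":" -- that is the PSK seed to paste into peervpn.conf
psk_key=$(cut -d: -f2 ./mypsk)
echo "psk $psk_key"   # -> psk deadbeefcafef00d
```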

How to use the PeerVPN “mesh” VPN with LXC

The big question is… how does this help interconnect LXC containers running on possibly many remote and independent servers/hosts?

All it takes is a small networking change…

If you configured and started PeerVPN properly on each host, then executing “ifconfig” on a host will show one or more VPN Tunnel End Point (TEP) “interfaces” created by PeerVPN.

NOTE: The TEP will have the same name you entered in the PeerVPN configuration for the variable called “interface”… refer to the above PeerVPN tutorial.

To connect LXC containers running on any PeerVPN configured host you attach the “peervpn0” interface to the lxcbr0 bridge that lxc uses on that host.

NOTE:   Depending on your peervpn.conf file configuration you are the one that defines the PeerVPN TEP interface IP address.   In the PeerVPN Tutorial example the peervpn0 interface is given a 10.8.x.x address

When you installed LXC on a host (sudo apt-get install lxc), a default LXC bridge was created and given a 10.0.3.x IP address. Also, any LXC containers created using the lxc-create command on that host will by default get a 10.0.3.x IP address.

While logged into each of your servers you should now be able to ping the 10.8.0.x IP address of the other PeerVPN member servers.

Our next step is to connect our TEP to the lxcbr0 bridge, enabling containers attached to that bridge to pass data over the VPN tunnel.

Since the PeerVPN TEP interface (“peervpn0” in the Tutorial example) is just like any other Linux ethernet interface we can use the “ip link” command to connect the peervpn0 interface to the LXC lxcbr0 bridge.

$ sudo ip link set dev peervpn0 master lxcbr0

NOTE: After executing this command on EACH host, you will find that you can no longer ping the 10.8.0.x IP addresses of the other PeerVPN member servers!

This is expected and is OK: if you still have the terminal up where you started PeerVPN (i.e. sudo ./peervpn …), you should still see your “peers connected”!

Next create an LXC container on each “host”


$ sudo lxc-create -t download -n my-container -- -d ubuntu -r trusty -a amd64

Note: this will create a new LXC container named “my-container” using the Ubuntu Trusty release (i.e. 14.04), with a 64-bit OS in the container.

Next… start the container you created on each host, then get access into the LXC container “my-container”:

$ sudo lxc-start -n my-container -d

$ sudo lxc-attach -n my-container

If you look closely at the Terminal window you are using you will see that the “prompt” has now changed to show that you are logged into the container “my-container” and that you are logged in as root.

Note:   root in a container is NOT the same as root in the “host”

On each host, get the IP address of the container you created and write it down.

You can get those IP addresses using the following LXC command on both Host A and Host B

$ sudo lxc-ls -f

Or if you are logged into the Container on each host just do:

$ ifconfig

NOTE: your container IP addresses will be different, but for our example here let’s say:

  • eth0 of Host A’s container has IP address
  • eth0 of Host B’s container has IP address


[Diagram: PeerVPN with LXC containers]
While logged into the Container on Host A, try to ping the Container IP address on Host B

Using our example IP addresses from above (again, your own container IP addresses will be different):

$ ping

This should now work and Containers on Host A can reach Containers on Host B via the PeerVPN Tunnel you created.

Important Note

For our proof-of-concept trial here, you need to understand that we have left LXC on each host node with the default LXC configuration. So each host will have its own lxcbr0 bridge, and the lxcbr0 bridge on each host will have the same 10.0.3.x subnet and an IP address from that subnet.

Furthermore, the LXC containers created and running on the individual “hosts” will also have been assigned a 10.0.3.x IP address by the local lxcbr0 dnsmasq.

Even though LXC “by default” creates & assigns “unique” IP addresses to each LXC container created inside a particular “host”…    LXC running on separate “hosts” is NOT by default aware of IP addresses used by LXC on any other host.

For our “proof-of-concept” here, that means there is the potential for a “duplicate” 10.0.3.x IP address to be assigned to a container on one or more “hosts”.

For a small proof-of-concept this is probably unlikely to occur and so for this blog write-up we will ignore that fact.   But for a production environment you will want to look into using a centralized IPAM (ip address management) solution which will probably involve other linux tools such as DNSMASQ, DHCP, DNS.    However, that is beyond the purpose of this proof-of-concept article/blog post.
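How unlikely? Treating each host's dnsmasq as drawing container addresses uniformly at random from roughly 250 candidates (a simplifying assumption; n=250 and k=10 below are made-up numbers), the birthday problem gives the chance of at least one duplicate:

```shell
# birthday-problem estimate: probability of at least one duplicate address
# among k containers drawing from a pool of n addresses (assumed values)
awk 'BEGIN { n = 250; k = 10; p = 1
             for (i = 0; i < k; i++) p *= (n - i) / n
             printf "%.2f\n", 1 - p }'   # -> 0.17
```

Even with only 10 containers the chance is already around one in six, which is why a central IPAM matters beyond a proof-of-concept.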

Final step: repeat this process for each cloud instance/host if you’d like to test beyond just a couple of servers. Remember, though, that there is a “remote” possibility of some LXC container getting a duplicate IP address in your own proof-of-concept trial. It is remote, but it is possible.

NOTE: you can configure LXC on each host to use a different bridge you create (say br0), and then on one host create a dnsmasq instance and attach it to the br0 bridge. After doing so, all LXC containers on any host that is part of the same PeerVPN tunnel will get their IPs assigned by a single dnsmasq, and you will not have to worry about IP duplication.

Now each LXC container on each cloud instance should be able to ping the 10.8.x.x TEP address of any other PeerVPN host you have set up anywhere, as well as ping any other LXC container on any of those hosts.

Also, for any production use it might be advantageous to utilize unprivileged LXC containers; this blog post has covered only “privileged” LXC containers.

Use & Implementation of IPv6 as a Production Solution

The introduction and increasing use of IPv6 instead of IPv4 will greatly simplify the IPAM side of this overall PeerVPN solution, because IPv6 was designed to allow local address assignments that are guaranteed to be unique even between separate, remote hosts and containers. Google “ipv6” and read up to become more familiar with it: the “Internet of Things” (IoT), as it’s popularly called, will require the vast number of available IPv6 addresses in order to connect the future world’s billions of inter-connected devices (phones, TVs, cars, tablets, laptops, etc.).

ARIN announced in June 2015 that it had exhausted ALL of its IPv4 addresses!

So no more new IPv4 addresses are available. For this reason, it’s important to start learning, testing, and deploying IPv6 where you can. In the U.S., almost all ISPs (cable, AT&T, mobile, etc.) now support IPv6!

NOTE: The main advantage of IPv6 over IPv4 is its larger address space. An IPv6 address is 128 bits long, compared with 32 bits for IPv4, so the address space has 2^128, or approximately 3.4×10^38, addresses.
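That figure is easy to sanity-check with awk (floating point, so it is approximate):

```shell
# 2^128 addresses, printed in scientific notation
awk 'BEGIN { printf "%.1e\n", 2^128 }'   # -> 3.4e+38
```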

General IPv6 Configuration for LXC

Searching the web I found a good write-up describing the configuration of IPv6 for LXC container use.

Although this article does not address anything about VPNs I think it provides a great background to understand the critical steps & considerations to configure IPv6 for LXC and the LXC Host machine.

Refer to:     LXC Host featuring IPv6 connectivity

Unique Local IPv6 Generator

There is a great online tool to help you generate a unique “local” IPv6 address to utilize with your mesh network, or simply to use IPv6 with LXC or Linux configurations.

Suggested Readings

To really start understanding LXC, be sure to read through the terrific 10-part series on LXC by one of the principal LXC developers, Stephane Graber.

To gain a good understanding of IPv6 configuration in Linux, one web site that is fairly comprehensive in its description of the terms, configuration options, and usage is: IPv6 – Set Up An IPv6 LAN with Linux.

Also just for a good reference I have found the iproute2 cheat sheet web page extremely valuable.

Last words… as I am not any kind of expert in IPv6, LXC, or Linux, feel free to suggest improvements, changes, and/or configuration examples to this approach in any of the related areas!

Have fun…!


January 1, 2015

Using Rundeck on Ubuntu to automate server deployments into LXC (local or remote) containers

Filed under: Cloud Management, LXC, ubuntu, Virtualization Tools — bmullan @ 11:59 am

Continuing my last posts regarding LXC (Linux containers): I realize that managing them from the command line might be a bit tedious when there can be hundreds or thousands of containers spread between your local PC/laptop and any “remote” (i.e. cloud) servers and LXC containers you utilize/manage.

I just recently found out about Rundeck while searching for orchestration/mgmt tools.

My use-case was that I was looking for something that could help in managing LXC (linux containers) whether remote or local.

Note:  many people confuse LXC and other container technologies like Docker, LMCTFY, etc.   They are all different solutions that underneath utilize Linux Namespaces.   Here is a good multi-part series describing Linux Namespaces.

LXC is an incredible technology.

With the release of 1.x this past year it now supports nested containers, unprivileged containers and much more.

Anyway, I decided to see if I could get Rundeck to work in an LXC container and also be able to create workflows/jobs etc to work with LXC containers.

LXC has a rich set of CLI commands:

  • lxc-create
  • lxc-start
  • lxc-attach
  • lxc-stop
  • lxc-clone
  • lxc-destroy
  • etc

There is also an API that supports Python, Go, Ruby etc.

Stephane Graber (one of the LXC core developers) has a great 10 part Blog series that tells you all about LXC.

For me,  I just wanted to get Rundeck to issue the above lxc-xxxxx commands.

Turns out it only took a couple configuration changes so I thought I’d share my notes here.

Note: all of this was done on Ubuntu 14.04

Steps I took to install Rundeck in an LXC container.

create a new container on the Host.   I called mine “rundeck”

$ sudo lxc-create -t download -n rundeck

start the container, which will run detached from the terminal you started it on:

$ sudo lxc-start -n rundeck -d

attach (i.e. get a console into the container):

$ sudo lxc-attach -n rundeck

Note: at this point your console prompt should change to show you are logged in as Root in the Container whose hostname is “rundeck”.

At this point you can do whatever you would do with any ubuntu server but here were my steps

root@rundeck#  apt-get update && apt-get upgrade -y
root@rundeck# apt-get install wget nano default-jre

then I used wget to download the latest Rundeck .deb file:

root@rundeck# wget <URL of the Rundeck .deb package>

Note:  check on their website for the rundeck version number as it may change often

install the Rundeck .deb (install gdebi first, via apt-get install gdebi-core, if it is not already present):

root@rundeck# gdebi ./rundeck-2.4.0-1-GA.deb

When the Rundeck installation is done I needed to do a couple of things.

LXC containers in Ubuntu by default are started in their own 10.0.3.x network.   By default applications in the container have internet access and as I’d mentioned before are like being logged into any other ubuntu server in regards to what you can do.

Because it’s possible that each time you stop/restart an LXC container it may get a different 10.0.3.x address, I wanted a solution where the Rundeck webapp would acquire the “current” IP address of the container Rundeck is running inside of, each time that container (and Rundeck) starts.

My script looks like the following. I saved it into the container’s /usr/bin directory and made it executable (chmod +x); name the script whatever you like.

#!/bin/bash
# purpose:
#   get the container's eth0 IP address while running inside the container,
#   then stream-edit (sed) /etc/rundeck/rundeck-config.properties so that
#   "localhost" is replaced with that IP, and restart rundeckd
# assumptions: the container uses eth0 for its network connection and its primary address
# This script is called from /etc/rc.local during system boot, after the network IP is set

FILE=/etc/rundeck/rundeck-config.properties

# first restore the original file (the pristine copy still contains "localhost")
cp /etc/rundeck/rundeck-config.properties.orig "$FILE"

# get the eth0 IP address (we assume that's what the container is using)
my_ip=$(ifconfig eth0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}')

# swap the term "localhost" with the real IP of the container in the file
sed -i -e "s|localhost|$my_ip|" "$FILE"

# restart the rundeckd service with the new & now actual IP address
/etc/init.d/rundeckd restart


Run this script by adding it into the /etc/rc.local file inside of the LXC container in which you installed Rundeck (my container is called “rundeck”).

In /etc/rc.local, just add the full path of your script at the end of what’s already there (before any final “exit 0” line).
Next, as you may note above, I am simply searching for the word “localhost” and substituting the current IP address of eth0 of the LXC container Rundeck is running in; “by default” an LXC container will use that interface, and I am assuming defaults here.

Secondly, to keep this simple, before ever restarting the system for the first time I copied rundeck-config.properties to rundeck-config.properties.orig, so I had a virgin copy of the original file with “localhost” still in it.

The first step of the script restores the original file before doing the sed substitution; that way it can always find and substitute the actual IP of the container.

root@rundeck# cp /etc/rundeck/rundeck-config.properties /etc/rundeck/rundeck-config.properties.orig
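To see the substitution step in isolation, here is the same sed run against a one-line sample (grails.serverURL is the key in rundeck-config.properties that matters here; the IP is a made-up example):

```shell
# demonstrate the localhost -> container-IP swap on a sample properties file
printf 'grails.serverURL=http://localhost:4440\n' > demo.properties
my_ip=10.0.3.155                     # example container IP
sed -i -e "s|localhost|$my_ip|" demo.properties
cat demo.properties                  # -> grails.serverURL=http://10.0.3.155:4440
```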

My next step was to enable use of sudo in job commands, so I could have Rundeck work with privileged LXC containers.

Remember: to create/start/stop etc. privileged containers you have to have sudo privileges on the host.

I searched the Rundeck forum and found others were grappling with this problem too.

For me, my solution worked (whether or not it’s the best approach).

I used visudo to edit the sudoers file so that the user “rundeck” does NOT require a password to execute a sudo command.

Note:  Again, you are doing this WHILE LOGGED INTO the “rundeck” container – NOT – the Host !

This will enable the rundeck web app to execute commands that require “sudo” in them.

in the rundeck container…

$ sudo visudo

Add the following at the end of the sudoers file:

rundeck ALL=(ALL) NOPASSWD: ALL
Ctrl-X to leave, save your changes, and you’re done!

Now while logged INTO the rundeck container reboot it.

root@rundeck# shutdown -r now

Note that this will log you out of the container and return you to the original terminal prompt on your Host OS.

If you want to log back into the container “rundeck” you should be able to almost immediately log back in using the lxc-attach command again

$ sudo lxc-attach -n rundeck

At this point you should be able to log into Rundeck, which is running in the separate and isolated LXC container we also called “rundeck”, by pointing your browser to the IP address of the container.

You can find out the container’s IP address using the following LXC command in a terminal on the host OS:

$ sudo lxc-ls -f

NAME     STATE    IPV4      IPV6  GROUPS  AUTOSTART
base_cn  STOPPED  -         -     -       NO
rundeck  RUNNING  10.0.3.x  -     -       NO
wings    STOPPED  -         -     -       NO

so in the above case I point my browser to the rundeck container’s 10.0.3.x address, on Rundeck’s default port 4440 (e.g. http://10.0.3.x:4440)

and log into Rundeck as normal (admin/admin  -or- user/user)

However, now when I create a “job” for localhost, that job executes inside of the LXC container “rundeck” and NOT on the host OS!

If you read the LXC website, you will also have noticed a new capability/extension to LXC, called LXD (pronounced “lex-dee”), that is now available.

LXD introduces a whole new exciting capability to LXC: the ability to easily create/run/manage LXC containers anywhere, on any LXC-capable host (LXC is part of the Linux kernel), whether that host is remote (i.e. cloud) or local.

This means that even on your laptop you can have dozens or many dozens (depending on memory, applications, etc) of containers all isolated as much/little as you want from each other, from the Host or from the internet.

So now you can use Rundeck to manage/orchestrate all your local PC LXC containers BUT… you should also be able to use LXC & LXD to do the same with remote (re Cloud) servers/LXC containers.

As I am no expert in Rundeck, LXC or Linux feel free to suggest improvements, changes etc where you think this post requires it as I am sure I probably have made some incorrect assumptions w/Rundeck and/or LXC here.



November 20, 2013

Configure x2go remote desktop capability into LXC Containers

Filed under: LXC, Remote Desktop, ubuntu, x2go — bmullan @ 8:32 am

I’ve long used x2go for remote desktop access to Linux machines. So far I’ve found x2go to be by far the fastest/best remote desktop application for Linux, letting a Linux, Windows, or Mac user access a Linux desktop “server”.

The following will show you how to create an LXC container and configure it as an x2go remote desktop “server”, so you can access the LXC container’s desktop using any of the native x2go clients (Windows, Linux, Mac) or even the x2go web browser plugin (Ubuntu only at this time).

Note 1:

  • the following assumes an Ubuntu host OS. LXC is implemented in the Linux kernel and should be available on ANY distro, but usage may differ in some ways not documented here.

First let’s create a test LXC container:

$ sudo lxc-create -t ubuntu -n test

Note 2: -t specifies “what” LXC “template” to use in creating the LXC container. In Ubuntu, templates exist for:

  • lxc-alpine
  • lxc-busybox
  • lxc-fedora
  • lxc-sshd
  • lxc-altlinux
  • lxc-cirros
  • lxc-opensuse
  • lxc-ubuntu
  • lxc-archlinux
  • lxc-debian
  • lxc-oracle
  • lxc-ubuntu-cloud

So although I use Ubuntu, I could create an LXC container running openSUSE, Debian, Arch Linux, etc.… a very cool capability.

The ONLY caveat is that all container OSes will have to run the host OS’s kernel. This normally is not a problem for most use-cases, though.

Next we have to “start” the LXC container we called “test”

$ sudo lxc-start -n test

As part of executing the above command you will be presented with a login prompt for the LXC container.   The default LoginID = ubuntu and the password = ubuntu

So login to the LXC container called “test”

Next I started adding some of the applications I would be using to do the test.

First I make sure the test container is updated

test:~$ sudo apt-get update && sudo apt-get upgrade -y

Next I install either an XFCE or LXDE desktop. Note: I use one of these because no remote desktop software I am aware of, including x2go, supports the 3D graphics of either Unity or Gnome3. x2go does, however, support XFCE, LXDE, MATE and a couple of others.

So lets install xfce desktop in the container.

test:~$ sudo apt-get install xubuntu-desktop -y

In order to add the x2go PPA in the container I first have to install "add-apt-repository" (it is not there by default):

test:~$ sudo apt-get install software-properties-common -y

Now I can add the x2go PPA:

test:~$ sudo add-apt-repository ppa:x2go/stable

Next, install the x2goserver to which I will connect from my Host by using the x2goclient I will install there later.

test:~$ sudo apt-get install x2goserver x2goserver-xsession -y

x2goclient uses SSH to login to an x2goserver.

There are various advanced x2go configs you can do for login but to keep it simple I am going to just be using login/password combo.

However, to be able to do that the default Ubuntu /etc/ssh/sshd_config file needs 2 changes to allow logging in with login/password.

Use whatever editor you use to edit (I use nano – which you would have to also install with apt-get into the container)

test:~$ sudo nano /etc/ssh/sshd_config

Change the following from NO to YES to enable challenge-response passwords

ChallengeResponseAuthentication no

Uncomment (i.e., remove the #) the following to enable Password Authentication

#PasswordAuthentication yes 

Save your 2 changes and exit your editor.
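If you prefer to make both sshd_config changes non-interactively, a sed one-liner works. This is a sketch: it assumes the stock Ubuntu wording of those two lines shown above.

```shell
# flip ChallengeResponseAuthentication to yes and uncomment PasswordAuthentication
sudo sed -i \
  -e 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' \
  -e 's/^#PasswordAuthentication yes/PasswordAuthentication yes/' \
  /etc/ssh/sshd_config
```

Check the result with `grep -E 'ChallengeResponse|PasswordAuth' /etc/ssh/sshd_config` before restarting SSH.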

Now, restart SSH so the changes take effect

 test:~$ sudo service ssh restart

At this point the x2goserver is all set up in the LXC container, so you can access it with an x2goclient from your Host OS or from anywhere else that can reach your LXC container's IP address.

You can shutdown (or reboot) the LXC container while logged into it just as you would in any Ubuntu by:

test:~$ sudo shutdown -r now   -or-   test:~$ sudo shutdown -h now

What is nice about LXC is that once you have shutdown the LXC container you can “clone” that entire container very quickly by issuing the following command on your Host OS

hostOS:~$  sudo lxc-clone -o test -n new_container

Each new LXC container will get a new IP address (default will be in the 10.x.x.x address range).

After you “start” your new cloned LXC container:

hostOS:~$  sudo lxc-start -n new_container

To access the NEW LXC container you can find out the new LXC container’s IP address using the following command after the LXC container has been started:

hostOS:~$ sudo lxc-ls --fancy

 You can then use that IP address in creating a new x2go “session profile”.
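Putting the clone steps together, a small script can clone, start and report the new container's IP. This is a sketch: it assumes the "test" container is already shut down, and the 5-second wait for a DHCP lease is a guess you may need to adjust.

```shell
# clone the stopped 'test' container, start the copy in the background,
# then show its row (including its new IP) from the lxc-ls listing
sudo lxc-clone -o test -n new_container
sudo lxc-start -n new_container -d
sleep 5   # give the container time to get a DHCP lease on lxcbr0
sudo lxc-ls --fancy | grep new_container
```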

Again, remember that each container “could” be configured with a different Desktop Environment so one user could have xfce another lxde another Mate etc.

Hope this is useful and fun for you to experiment with.


How to Enable Sound in LXC (Linux Containers)

Filed under: LXC, pulseaudio, ubuntu, x2go — Tags: , , , — bmullan @ 7:26 am

An Approach to Enable Sound in an LXC container


LXC Containers are usually used for “server” type applications where utilizing sound is not required.

My personal "use-case" is that I want to use LXC containers to provide a remote-desktop "server" to remote users.    In my use-case I use both the awesome x2go remote desktop application and also my own spin of the great Guacamole HTML5 remote desktop proxy gateway.

I will not go into anything x2go or Guacamole related here regarding how to set them up for use with LXC.

The following is how I enabled Sound in my LXC containers on my Ubuntu 15.10 amd64 host/server.

Before you do anything with a container you need to make one change on whichever Host/Server you want the LXC containers' sound played on, whether that Host/Server is local, truly remote, or the same Host/Server the LXC containers are running on.

$ echo "load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1;10.0.3.0/24" | sudo tee -a /etc/pulse/default.pa

$ echo "load-module module-zeroconf-publish" | sudo tee -a /etc/pulse/default.pa

The above will add the following 2 lines to the end of your Host's /etc/pulse/default.pa file:

load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1;10.0.3.0/24

load-module module-zeroconf-publish

The 1st statement says to allow sound from “remote” systems whose IP addresses are part of 10.0.3.x … in essence from any LXC container running on that Host/Server.

Once you have done the above you will need to either reboot the Host or just "kill" the PulseAudio daemon running on the Host; it will auto-restart itself, picking up the 2 new lines you created!

to restart pulseaudio

ps -ax | grep pulseaudio

then use the kill -9 command & the PID of the above pulseaudio output.   As an example lets assume pulseaudio is running on PID 2189

$ sudo kill -9 2189

You can check that pulseaudio daemon restarted by doing the “ps -ax | grep pulseaudio” command again.
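Rather than reading the PID out of the ps listing by hand, the same restart can be scripted. This is a sketch; `pulseaudio -k` asks the daemon to exit via its own protocol and is the gentler option.

```shell
# find the first PulseAudio PID, if any, and kill it; it auto-restarts
# (prefix kill with sudo if the daemon runs as another user)
pid=$(pgrep -x pulseaudio | head -n 1)
[ -n "$pid" ] && kill -9 "$pid"
# alternatively, ask the daemon to terminate cleanly:
pulseaudio -k
```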


Step 1 – Create a Test container

Create a test container (the following creates a "privileged" LXC container, but an un-privileged container works as well):

$ sudo lxc-create -t download -n test

Start the test container:

$ sudo lxc-start -n test


Step 2 – Add PulseAudio and an audio player (mpg321) into the Test container

$ sudo lxc-attach -n test

Then, inside the running container:

$ sudo apt-get install  pulseaudio  mpg321  -y

Create your new Ubuntu UserID

$ sudo adduser YourID


Step 3 – Configure your LXC Test Container’s PulseAudio to redirect any Sound over the Network

PulseAudio is really a very powerful audio/sound management application and there are many ways to utilize it.

One such way lets you configure a "remote system" (here, "remote" means the Test LXC container, which is on a different IP network than your Host OS) so that it plays any sound/audio on the Host/Server (or on a truly remote Host/Server):


  1. The “target” PulseAudio Host PC that will “play” the sound … (if on a home network) is usually a 192.168.x.x IP network.
  2. An LXC container on your Host PC is usually on a 10.x.x.x IP Network
  3. The LXC “Host PC” and any LXC Containers are usually bridged together via the lxcbr0 (lxcbr”zero”) bridge so they can communicate and so your LXC container can communicate with the Internet.

Make sure you are logged into your Test LXC container as "YourID" with "YourPassword".    If you just created YourID in the container and are still logged in as ubuntu or root, then su to YourID   ( $ su YourID ).

Next is the important step regarding PulseAudio configuration in your LXC Test Container.   The following command adds a new environment variable when you login to the Container in the future.

$ echo "export PULSE_SERVER=10.0.3.1" | tee -a ~/.bashrc

The above will add the following line to the end of your .bashrc file:

export PULSE_SERVER=10.0.3.1

In the above, 10.0.3.1 is the IP of the Host OS on the lxcbr0 bridge that LXC installs for you by default when you install LXC.

Note:  if the actual Host/Server you want to play the sound on is truly remote (i.e., not the Host of the LXC container), then use the IP address of that remote Host/Server in the above.
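Before trying actual audio, you can check that the container can reach the Host's PulseAudio daemon with pactl, which ships in the pulseaudio-utils package. The 10.0.3.1 address here is an assumption: the usual default lxcbr0 address of the Host.

```shell
# from inside the container: ask the Host's PulseAudio server to identify itself
PULSE_SERVER=10.0.3.1 pactl info | grep -i '^Server'
```

If the two load-module lines on the Host are in place you should see the server's name and version; a timeout means the TCP module is not loaded or the address is wrong.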


  1. PulseAudio by default uses port 4713, both on your target Host OS and in any LXC container you might create, unless configured otherwise.
  2. If you have any problems using sound in a future container, make sure that port 4713 is open in any firewalls if you plan to send sound to your local workstation over a network or the Internet itself.
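If a firewall sits between the container and the machine that should play the sound, open TCP 4713 and probe it. The commands below are a sketch: ufw on the playing Host, netcat from the sending side, and the target IP is just an example.

```shell
# on the Host that plays the sound (skip if you don't run ufw):
sudo ufw allow 4713/tcp
# from the container or remote machine: verify the port answers
nc -zv 10.0.3.1 4713
```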


Step 4 – Finally Check to see if Sound works from your LXC Test Container

To test that sound works in your container use SCP to copy some mp3 file from your Host to the LXC container (assume the mp3 is called test.mp3).

$  scp /path-to-mp3/test.mp3  yourID@container_ip:/home/yourID

Next log back into your container as yourID.  You can ssh into it or lxc-attach to it.  In either case make sure you are logged in as yourID not root or ubuntu user.

Now you can use the application "mpg321" (installed in Step 2) to see if sound works.

If you did everything correctly and if you have your speaker On and Volume turned up on your Host PC you should hear the .mp3 file playing when you execute the following:

$ mpg321 ~/test.mp3


The PulseAudio configuration I described here for the “test” LXC container allows PulseAudio to redirect sound to ANY other Linux system running PulseAudio on the network -or- the Internet.


This PulseAudio setup does allow concurrent, simultaneous use of sound by BOTH the Host and the container.    For a single-user case this may not be what you want, but if you want the audio to play on some remote Linux machine, a Raspberry Pi out on your deck, etc., this is really useful.

However, remember “my use-case” was for remote desktop access to LXC container based Ubuntu desktop systems. In “my use-case” … each container will eventually be configured so that any container will redirect PulseAudio TO the remote desktop “user” PC wherever that is on the “internet”.

Remember that the PulseAudio port 4713 cannot be blocked by any firewalls along the path.


This configuration of course was simply to test that Sound would work.

I do think LXC could become a great User Desktop virtualization approach as it works great now with x2go (in my case) but there are other remote desktop access applications that others may utilize also.

Finally, the PulseAudio project documentation has a lot of other detailed information regarding advanced PulseAudio configuration and use. I'm still learning myself.

Hope this helps others trying to do similar things.
