Education and the Cloud

September 7, 2016

Using Guacamole for a browser-only HTML5 Remote Desktop capability that will work with both Ubuntu and Windows servers!

Cloud in a Box (CIAB) Remote Desktop system v2.2

Connect using only a Web Browser  to both Linux and Windows Servers to access and use a Remote Desktop

January 2019, by Brian Mullan


CIAB is a clientless remote desktop system. It’s called “clientless” because no plugins or client software are required!

Thanks to HTML5, once the CIAB Remote Desktop System is installed on a Server/VM/Cloud instance, all you need to access your desktop(s) is an HTML5-capable web browser!

PLEASE REVIEW the GitHub CIAB repository’s Issues section for important CIAB bug-fix info and usage tips!


The CIAB Remote Desktop System

CIAB Remote Desktop System current v2.2 Release Notes

CIAB has had a redesign of how the CIAB Web App Containers are created.

Previously, in v2.1, they were created as “nested” LXD containers inside the ciab-guac container.

Due to a couple of bugs in AppArmor related to “nested” AppArmor profiles, which may take quite a while to be resolved, I have redesigned how the CIAB Web Apps are deployed. They are now created as normal (i.e., non-nested) LXD containers at the same container level as the ciab-guac and cn1 containers.

Note: the Web App containers and their applications are still isolated to the private 10.x.x.x network and still accessible by Remote Desktop users of the CN1 environment. Those Web App containers also still retain access to the Internet but are isolated from direct access “from” the Internet.

CIAB version 2.2 introduces the following improvements.

Any CIAB Web Apps installed by the CIAB Admin, from the menu available on their CIAB-GUAC container’s MATE desktop, now get installed in LXD containers as “peer containers” on the same 10.x.x.x private network that the CIAB-GUAC and CN1 containers are on.

This is a change from CIAB’s previous use of “nested” containers for the CIAB Web Apps, driven by a bug in upstream AppArmor concerning “nested” AppArmor profiles. Since that bug may take a while to be fixed, a decision was made to make this change.

Now, as Admin, you can find all installed applications, the CIAB-GUAC and CN1 containers, and their IP addresses from the Host/Server by executing:

$ lxc list

Also, in CIAB v2.2, TOTP (Time-based One-Time Password) two-factor authentication (2FA) is now an optional capability that the CIAB Admin can easily install; it will automatically activate for CIAB users logging in afterwards.

NOTE: users will need a TOTP-compatible application on their smartphones in order to use TOTP. Google Authenticator works well for this and is available on Android and iPhone.

This Installation Guide now has an added section describing the four steps required to create an IPP (Internet Printing Protocol) printer in the CN1 CUPS (Common Unix Printing System) so users can print directly to their local printers (if those printers support IPP).
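The guide’s four steps aren’t reproduced here, but a typical driverless IPP queue setup in CUPS looks roughly like the sketch below. The printer name and its address are hypothetical placeholders, and these commands assume a live CUPS daemon in the cn1 container:

```shell
# Hedged sketch (not the guide's exact steps): create an IPP Everywhere
# (driverless) print queue in CUPS. "home-printer" and the printer's
# address are hypothetical placeholders.
sudo apt install -y cups                        # ensure CUPS is present
sudo lpadmin -p home-printer -E \
  -v ipp:// \
  -m everywhere                                 # IPP Everywhere, no driver needed
sudo lpoptions -d home-printer                  # make it the default queue
lpstat -p home-printer                          # verify the queue exists
```

The `-m everywhere` model works only for printers that advertise IPP Everywhere; older IPP printers may need a vendor PPD instead.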

Lastly, the CIAB-README.PDF document now also contains a section titled “Steps to Implement a real/valid HTTPS/TLS Certificate for CIAB”. This section provides information and a guide on how to obtain and install a valid certificate from a Certificate Authority (Let’s Encrypt) for HTTPS/TLS access to your CIAB installation.
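As a hedged sketch of what that section covers (the domain name below is a placeholder, and the README’s own steps should be preferred), obtaining a Let’s Encrypt certificate with certbot generally looks like:

```shell
# Hedged sketch: obtain a Let's Encrypt certificate with certbot.
# "ciab.example.com" is a placeholder for your real DNS name.
sudo apt install -y certbot
# Stop anything bound to port 80/443 first, then run certbot standalone:
sudo certbot certonly --standalone -d ciab.example.com
# Issued certificates land under /etc/letsencrypt/live/<domain>/
ls /etc/letsencrypt/live/ciab.example.com/
```

The resulting fullchain and private key files are what NGINX (in the ciab-guac container) would be pointed at for HTTPS.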

The CIAB Remote Desktop System

CIAB Remote Desktop System current v2.1 Release Notes

This update introduces the following improvements and new features.

Both file uploads and downloads now work between a user’s local PC and their CIAB Remote Desktop system.

Entry of a login ID and password is now required only once for both CIAB’s Guacamole and the CIAB Remote Desktop connections.

If a CIAB user has only a single Remote Desktop connection configured, they will be connected to that desktop immediately after entering their user ID and password.


Please review the CIAB-README.pdf!

The PDF has been significantly updated and improved, covering not only the new features but also the overall CIAB installation and configuration process!

Some of the new capabilities (such as file download, and requiring the login ID and password only once) will require some minor editing of the existing Guacamole “connection” configurations for CIAB-GUAC and CN1 in order to work correctly!

Requirements:

1 – A fresh Ubuntu 18.04 server (cloud or local) or VM
2 – The CIAB Remote Desktop system files contained in this GitHub repository, which include all of the installation scripts
3 – sudo privileges on that server

Also, note that I have started adding some short Tips into this Repository’s Issues section to help with several topics.

CIAB Remote Desktop System v2.0 release notes

NOTE: With CIAB Remote Desktop System v2.0 we introduced several major enhancements!

CIAB has been updated with Guacamole v1.0.0 which was released January 2019. This version of Guacamole now supports many new capabilities such as:

  • Support for user groups
  • Multi-factor authentication with Google Authenticator / TOTP
  • Support for RADIUS authentication
  • Support for creating ad-hoc connections
  • Support for renaming RDP drive and printer
  • Cut & Paste for text-only (no pictures) now works as it normally would on a desktop
  • Configurable terminal color schemes
  • Optional recording of input events
  • SSH host key verification
  • Automatic detection of network issues
  • Support for systemd
  • Corrected status reporting for sessions closed by the RDP server
  • Automatic connection behavior which means Guacamole will automatically connect upon login for users that have access to only a single connection, skipping the home screen.

A recent addition to the CIAB Remote Desktop is the CIAB Web Applications! Previously, these had to be installed separately after the CIAB Remote Desktop system had been installed.

With CIAB Remote Desktop System v2.0 this has been integrated, so the CIAB Admin will see an icon on their MATE desktop when they log into the CIAB-GUAC LXD container’s desktop.

To install one or more of those CIAB Web Apps, just click on that icon and, when prompted, select “RUN IN A TERMINAL”. You will then be presented with a GUI menu where you can check the boxes for the Web Applications you’d like to install.

NOTE: These applications will be installed as peer LXD containers to the CIAB-GUAC and CN1 containers, all attached to the same 10.x.x.x private network. This use of the private 10.x.x.x network greatly enhances security: only validated users with CIAB accounts and a login on the CN1 MATE desktop container have access to those web applications, unless the CIAB Admin explicitly allows access from the Internet via a separate configuration step.

Read more about the CIAB Web Applications later, in the section of this README titled “CIAB Web Applications”.

These are a large group of Web based applications. Each selected application is installed in its own LXD container.

There are two YouTube videos regarding CIAB:

CIAB Remote Desktop Part 1 – Installation


CIAB Remote Desktop Part 2 – Configuration and Use


How to get the CIAB Source Code

The CIAB Remote Desktop System source code and documentation can be found on my GitHub repository:

Important Note:

In this repository is a file called CIAB System Architecture Mindmap.pdf. If you download that PDF you can click on any bubble that has a small link icon in it to drill down/up in the map for further information. At the CIAB Applications level, each application bubble has a link icon; if you click on that bubble you will be taken to the documentation webpage for that particular Web Application.

With this update, all of the CIAB Remote Desktop components run in LXD containers: Guacamole, MySQL, NGINX, Tomcat 8, XRDP, XFreeRDP and the Ubuntu MATE desktop environment.

Installation time depends on the chosen Host/Server’s number and type of CPUs, amount of RAM, and type of disk (SSD or spinning).

NOTE: As an example, on a server with 4 cores, 8 GB RAM and an SSD drive, the installation takes between 30 and 45 minutes.

After installation you can very easily add more remote desktop server containers, either on the same LXD Host/Server or on another LXD Host/Server, just by copying the existing cn1 container, which takes only 1-2 minutes:

$ lxc copy cn1 cn2

This 2.0 version also utilizes LXD’s recently added proxy device capability, which maps port 443 on your Host Server (cloud or VM) to the LXD container called ciab-guac (where Guacamole etc. gets installed). This means that after installation, any CIAB desktop user who points their browser at the Host/Server will be redirected to the ciab-guac LXD container.

Since ciab-guac resides on the same private/internal 10.x.x.x subnet as the cn1 container (and any additional containers you clone from the original cn1), they can all communicate with one another. Any CIAB Web Applications the Admin installs will also be attached to this same 10.x.x.x network, allowing validated CIAB users logged into the MATE desktop on cn1 to use their browser to access those applications.

Depending on the Host Server’s number of CPU cores, memory capacity and storage, you could potentially have dozens or hundreds of cnX containers, each with its own Ubuntu MATE desktop. The Guacamole admin can then configure remote users to access and use any of those cnX Ubuntu MATE desktops.

For example, on AWS EC2 one of the larger Virtual Machine Instances you can spin up today approximates this:

Instance Type      vCPU    Memory (GB)    Storage             Network Performance
m5d.24xlarge       96      384            4 x 900 GB SSD      25 Gbps

As you can infer from the above, configuring the CIAB desktop on such a server could potentially support many dozens of cnX Ubuntu MATE desktop containers, each serving perhaps dozens of users configured by the Guacamole/CIAB admin of the Server/Host.

If you download the installation script source files and the CIAB-README.pdf documentation file using GitHub’s ZIP file format, the resulting archive will be called – “

NOTE: The installation scripts contain commented-out sections that show what you need to do if you’d prefer a desktop environment (DE) other than Ubuntu MATE. Included are the Xubuntu (Xfce4) and Budgie DEs.

Please refer to the CIAB-README.pdf file for more complete documentation on installation and use.

CIAB Web Applications





Installable Web Applications for use by CIAB Remote Desktop Users.

This repository contains scripts and a large group of Web based applications that can be installed in LXD containers on the same Host/Server as the CIAB Remote Desktop itself.

These applications can only be installed by the CIAB Remote Desktop Administrator.


Each application selected by the Administrator will be installed in its own LXD container, attached to the same 10.x.x.x private network as the CIAB Remote Desktop container(s), and thus will only be accessible to validated users of the installed CIAB Remote Desktop system, not from the Internet!

Since CIAB Remote Desktop users must first be configured with valid Guacamole accounts, and can only access the CIAB LXD desktop servers the Administrator has given them access to, the security exposure footprint from any Internet intrusion is significantly reduced.

Remember, nothing is running on the Host/Server on which the CIAB Remote Desktop system has been installed except LXD. Everything else is running in unprivileged LXD Containers on that Host/Server.

Only the CIAB Administrator who initially installed the CIAB system has a user account on the Host/Server, and that Administrator has total control over any open ports. Usually only ports 22 (SSH) and 443 (HTTPS) are open on the Host/Server.

Direct Internet access to these applications & the LXD containers they run in is prohibited by design.

If access to these Web Applications from the Internet is desired, there is a relatively easy configuration change that enables it, so that users on the Internet can also access the Web Applications while still remaining under the control/administration of the CIAB Administrator.

To enable Internet access to any installed CIAB Web Application, the administrator has to issue two commands per application. The two commands set up a “chain” of port forwarding: first from the Internet into the ciab-guac container, and then from that port into the LXD container of the target CIAB Web Application.

Example: for the Drupal CMS application, let’s say we want to use port 8000 from the Internet to access it. From the Host server we would first issue the following:

$ lxc config device add ciab-guac proxyport8000 proxy listen=tcp: connect=tcp:

then, from inside the ciab-guac container:

$ lxc config device add drupal proxyport8000 proxy listen=tcp: connect=tcp:

NOTE: the label “proxyport8000” is arbitrary and is just an identifier. Port 8000 is also somewhat arbitrary, in that you can choose any port that is not a “well-known”, IANA-reserved port (i.e., 0-1023).

However, the various applications do have outbound access to the Internet, so their functionality is not restricted.

But again, only CIAB Remote Desktop users logged into one of the Remote Desktop LXD containers, using that container’s web browser, can access and log into these web applications.

Another big benefit involves Backups.

Because all of the CIAB Web Applications are installed in nested LXD containers inside the ciab-guac container, if you back up or copy the ciab-guac container you automatically back up/copy all of your installed applications too!

The same applies to restores, if ever required!

Keep in mind that if you have installed many of the CIAB Web Applications and users have input a lot of data into them, the backup may take a while to complete, and you need to ensure that the remote backup LXD server has sufficient free disk space.


A good thread on LXD backups, including comments by Stephane Graber, the LXD project lead, can be found here

Important (PLEASE READ)

  1. These applications are being provided for installation only. The installer is designed to install any selected Application into an existing CIAB Remote Desktop environment!
  2. NO SUPPORT for the Applications is provided except through the Application’s original Author or Organization’s website and any support mechanisms they have (forum, for-fee, mailing list, etc.)!
  3. ANY & ALL future upgrades to these Applications are the SOLE responsibility of the Installer, the Organization the applications are being installed for, and/or any Contracted Support arranged with the Application Authors/Creators or 3rd Parties. So if you believe you will eventually need or want to upgrade an application and migrate its existing data, please research how this needs to be done sooner rather than later!

The current list of applications available to install falls into several categories:

Content Management Systems (CMS)

Drupal – Both Drupal and Joomla are widely used worldwide, and both have dozens of add-ons you can easily install.

Joomla – see above

WordPress – One of the most popular blogging & content management platforms worldwide.

Enterprise Resource Management (ERP)

ERPNext – ERPNext supports manufacturing, distribution, retail, trading, services, education and nonprofits, and includes Accounting, Inventory, Manufacturing, CRM, Sales, Purchase, Project Management, and a Human Resource Management System (HRMS).

Learning Management Systems (LMS)

Moodle – Moodle is a learning platform designed to provide educators, administrators and learners with a single robust, secure and integrated system to create personalised learning environments. Moodle can also be used by Businesses to help plan/manage Training for Employees.

Human Resource Management (HRM)

OrangeHRM – a complete open source HR management system.

Business & eCommerce

PrestaShop – PrestaShop is an efficient and innovative e-commerce solution with all the features you need to create an online store and grow your business.

Mahara – Mahara is a fully featured electronic portfolio, weblog, resume builder and social networking system, connecting users and creating online communities. Mahara provides you with the tools to set up a personal learning and development environment.

OSClass – online classified Ads

Mautic – Mautic provides free and open source marketing automation software available to everyone: free email marketing and lead management software.

Odoo – Odoo is a suite of web based open source business apps. The main Odoo Apps include an Open Source CRM, Website Builder, eCommerce, Warehouse Management, Project Management, Billing & Accounting, Point of Sale, Human Resources

iTop – iTop stands for IT Operations Portal. It is a complete open source, ITIL, web-based service management tool including a fully customizable Configuration Management Database (CMDB), a helpdesk system and a document management tool. iTop also offers mass import tools and web services to integrate with your IT.

Project Management

Open Atrium – BOTH Open Atrium and Open Project are widely used worldwide, and either can provide comprehensive project management capabilities.

Open Project – see above

Social Media Systems

Discourse – Discourse is the next-next-generation community forum platform which allows you to create categories, tag posts, manage notifications, create user profiles, and includes features to let communities govern themselves by voting out trolls and spammers. Discourse is built for mobile from the ground up and supports high-res devices.

MediaWiki – Mediawiki is the core of Wikipedia. A wiki enables communities of editors and contributors to write documents collaboratively.

Ghost – blogging platform


NextCloud – Nextcloud is an open source, self-hosted file share and communication platform. Access & sync your files, contacts, calendars & communicate and collaborate.

RStudio – RStudio is an integrated development environment for R, a programming language for statistical computing and graphics.

nuBuilder4 – A browser-based tool for developing web-based database applications accessible from the CIAB desktop(s). Using MySQL databases, it gives users the ability to easily perform database operations such as search, create, insert, read, update and delete. It includes low-code tools to drag and drop objects, create database queries with the SQL Builder, create customized date and number formats with the Format Builder, create calculated fields with the Formula Builder, and create fast forms and reports.

CiviCRM – CiviCRM is web-based software used by a diverse range of organisations, particularly not-for-profit organizations (nonprofits and civic sector organizations).

Lime Survey – Create and run online surveys. NOTE: to log in as the LimeSurvey admin, go to http://<ip_addr>/admin

Mantis – Mantis Defect/Problem Tracker is a free and open source, web-based tracking system.

How to install CIAB Web Applications

An existing CIAB Remote Desktop System Installation on an Ubuntu 18.04 VM/Physical Server/Cloud instance Host is required for installation of the CIAB Web Applications.

When the CIAB Admin (the person who installed the CIAB Remote Desktop System) uses Guacamole to log into the CIAB-GUAC container’s MATE desktop, they will find a new icon on their desktop named:

CIAB Web Applications Installer

To install one or more of the CIAB Web Applications the Admin needs to click on the CIAB Web Applications Installer icon.

Each web application’s installation can take up to 5 minutes, so be patient; there may be times where there seems to be no activity for up to 60 seconds or so.

Currently, all applications are installed using Bitnami .RUN files, except for iTop, NextCloud and nuBuilder, which are not Bitnami applications and thus require their own installation scripts.

After the installation of any CIAB Web Application completes, important login/access information may be displayed (specific default admin login IDs or passwords, specific port numbers that must be used/open, etc.).

So upon completion of the installation, the CIAB Admin should record any such information and, if necessary, perform any additional configuration steps, such as opening a port in that application’s LXD container firewall.

Additional CIAB applications will be added in the future.

Each selected application will be installed into its own LXD container nested in the ciab-guac container.

NOTE: NextCloud is an exception to this. Due to a bug with AppArmor and “nested” AppArmor profiles, we cannot install the SNAP version of NextCloud in a “nested” container as with the other applications. So NextCloud is installed in an LXD container called “nextcloud” on the Host/Server and is attached to the same private 10.x.x.x network as all the other containers. If you (the CIAB Admin) need to delete/copy/start etc. the NextCloud container, you will have to ssh into the Host and execute the appropriate LXC commands there (e.g., lxc list nextcloud, lxc stop nextcloud, etc.).

Each of those Web Application containers will be attached to the same lxdbr0 bridge via the eth0 interface of the ciab-guac container, and thus will be allocated a 10.x.x.x IP address on the same subnet as ciab-guac and the initial cn1 Ubuntu MATE desktop container.

After applications have been installed, you can get a full list of installed applications and their LXD container IP addresses by opening a terminal while logged into the ciab-guac container and executing:

lxc list

Next, you should log into the CIAB Remote Desktop using your local web browser to access Guacamole, which is running in the ciab-guac LXD container.

When you are logged into a CIAB Ubuntu-Mate desktop, start a web browser on that Desktop and point it to the 10.x.x.x IP address of any of the applications you installed.

NOTE: Most of the applications are reachable with a URL like “10.x.x.x/app_name”.


Let’s say the WordPress application was installed in an LXD container.

If you want to log in as admin and add/edit content on the WordPress blog, you would point the CIAB Remote Desktop browser at that container’s IP address.

If a CIAB user just wanted to read the WordPress blog, they would point the CIAB Remote Desktop browser at the same address.

As an Admin you could make life easier for users by modifying the /etc/hosts file in each CIAB Remote Desktop container (e.g., cn1, cn2, etc.) and adding entries for each CIAB application you installed.

Example /etc/hosts in the CN1 container (the 10.x.x.x addresses are placeholders for the real container IPs shown by lxc list):

$ more /etc/hosts   localhost
10.x.x.x    cn1
10.x.x.x    wordpress
10.x.x.x    erpnext
10.x.x.x    nextcloud

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

After completing this, a CIAB Remote Desktop user can access any of the installed apps by hostname (e.g., http://wordpress), which is simpler to remember than the 10.x.x.x IP address of each application container.
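The hosts-file idea above can be tried as a small runnable sketch. A temporary file stands in for the cn1 container’s /etc/hosts, and the addresses and app names are hypothetical placeholders for the values `lxc list` reports:

```shell
# Sketch: append name -> IP entries for installed CIAB web apps.
# A temp file stands in for cn1's /etc/hosts; addresses are placeholders.
HOSTS_FILE="$(mktemp)"
cat >> "$HOSTS_FILE" <<'EOF'
10.0.4.101   wordpress
10.0.4.102   erpnext
10.0.4.103   nextcloud
EOF
cat "$HOSTS_FILE"
```

Inside the real cn1 container you would append the same style of lines to /etc/hosts itself (as root), substituting each app container’s actual 10.x.x.x address.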

If you screw up any of the CIAB application installations (entered something wrong during installation), you, the CIAB Admin, can simply stop and then delete that application’s LXD container and reinstall it!

Just open a terminal and execute:

$ lxc stop <container/application name>
$ lxc delete <container/application name>

Example – you entered something wrong during WordPress’s installation:

Log into the ciab-guac container desktop and open a terminal, then:
$ lxc stop wordpress
$ lxc delete wordpress

Then reinstall WordPress by following the CIAB Apps installation process above: run ciab-apps-install.sh again and select wordpress to reinstall it. (Note: it may get a different IP address upon reinstallation.)

Again… there will be more Web Based Applications added in the future.





May 12, 2015

Proof-of-Concept – Using Mesh VPN to interconnect LXC containers on Multiple Hosts on Multiple Clouds

Filed under: LXC, mesh VPN, ubuntu, Uncategorized, VPN — bmullan @ 4:34 pm


Secure Mesh VPN Network Interconnect for

LXC containers in Multiple IaaS Clouds

by Brian Mullan

April 2015


I’ll start off this blog post by saying LXC containers are great!

LXC provides pre-built OS containers for CentOS 6, Debian (Jessie, Sid, Squeeze and Wheezy), Oracle Linux 6.5, Plamo Linux, and multiple releases of Ubuntu from Precise up to and including Wily.

It’s important to understand that the host of LXC containers can be one distro while the containers can be any of the other supported distros. The only requirement for a container OS is that it utilize the same Linux kernel as the host OS.

This post, though, is about LXC, so let’s begin.

The rest of this post is about my proof-of-concept testing of using a full mesh VPN to provide LXC container connectivity between any remote hosts, whether on an IaaS cloud like AWS, Digital Ocean or Rackspace, or on your own servers.

This document should be considered a work-in-progress draft, as I have been receiving a lot of good input from others and will continue to edit it for additional information, improvements and/or corrections.

Problem Statement

On any of the existing IaaS Cloud providers like AWS, Digital Ocean etc you can easily create virtual machine “instances” of running Linux servers.

Note that some IaaS clouds (Azure and AWS, as examples) also let you create Windows virtual machines. But again, this is about Linux and LXC, not Windows.

Although you can create and run a Linux server in those clouds, you cannot “nest” other Linux servers inside of those cloud server “instances”. By “nesting” I am referring to using KVM or VirtualBox etc. inside of, say, an AWS Ubuntu server instance to create other virtual machines (a VM inside a VM). There may be some IaaS providers that permit nested VMs, but I am not aware of any; AWS, for instance, does not allow this.

The reason is that those clouds do not permit nested hardware virtualization.

NOTE:  On your home linux pc/server you can nest KVM hw virtualized instances.

LXC containers are:

  • much more lightweight than full HW virtualization like KVM, VMware, VirtualBox etc. This means LXC is faster and uses fewer “host” server resources (memory, CPU etc.). Canonical (Ubuntu, LXC, Juju etc.) just published performance test results of LXD (LXD utilizes LXC!) versus KVM instances; in terms of both scalability and performance, LXD/LXC far surpasses KVM. On a server where you may be limited to running ~20 HW-virtualized VMs, you may be able to run 80-100+ LXC containers.
  • LXC containers can be “nested” within a HW-virtualized Linux instance on AWS, Digital Ocean etc.
  • LXC containers all share the same kernel as the host machine, so they are able to take advantage of the “host” security, networking, file system management etc.
  • extremely fast to start up and shut down… almost instantaneous.
  • flexible, because you can use, say, an Ubuntu host and have LXC containers that are other Linux distros such as Debian or CentOS.

A benefit of LXC is that you can use it to create full container-based servers in IaaS clouds like AWS, Digital Ocean etc., and you can also “nest” LXC containers (containers inside a container) on those cloud “instances”.

LXC has some default characteristics. These can be modified/changed, but I am not going to cover that in this document.

LXC containers are by default created, started and run behind a NAT’d bridge interface called “lxcbr0”, which is created when you install LXC on a server.

lxcbr0 is by default given a 10.0.3.x network/subnet

NOTE: you can change this if you want/need to 

Each LXC container you create on the “host” will be assigned an IP address in that 10.0.3.x subnet.

NOTE: Your cloud “instance” (i.e., VM) will be assigned an IP address by the cloud IaaS provider at the time you create it. Actually, there are usually two IP addresses assigned: one private to the cloud and one “public”, so the cloud instance can be reached from the Internet.

The LXC containers you create and run on any cloud instance (the “instance” will from now on be referred to as the LXC “host”) can by default reach anything on the Internet that the “host” can reach. Again, that is configurable.

By default, all LXC containers running on any one “host” can also reach each other.

But what if you wanted LXC containers running on a host on, say, AWS to interact with LXC containers running on a host on Digital Ocean’s cloud? Without some network configuration magic, you can’t: the LXC containers running on one host cannot talk to containers running on another host, because all of them run behind their own host’s NAT’d lxcbr0 interface.

Likewise, LXC containers running on one AWS host cannot reach LXC containers running on another AWS host (ditto for other clouds).

So the problem becomes: what if you wanted to do this?

What if you wanted your LXC containers on a host somewhere (cloud or elsewhere) to be able to reach & interact with LXC containers running on any other host anywhere (assuming firewalls etc don’t prevent it).

Also, how could you make this secure so not just anyone could do this?

A Solution Approach I Utilized

Virtual Private Networks (VPNs) are commonly used in the normal networking world to securely interconnect remote sites & servers.  Think of a VPN as a “tunnel”.

VPNs encrypt the data links utilized for this interconnect to keep the VPN and any data traversing it  “private”.   So a VPN is an encrypted “tunnel”.

Most common VPNs are peer-to-peer (P2P). A P2P VPN usually requires configuration of each server you want to connect. If you have 100 servers or sites, that means configuring each individual site with 99 different connections (one for each “peer” site/server).

That solution, if used beyond a few servers, can be both complicated and messy to maintain.

The solution is to use what is called a mesh VPN. A “mesh” VPN means that every host configured as part of the VPN can connect to every other host on that VPN without being specifically configured to do so.


(Figure: mesh VPN with LXC containers)

In open source there are quite a few mesh VPN choices; some offer more or fewer features, and some are more complicated to set up than others.

Some mesh VPNs utilize the concept of a “super-node”, which keeps a stateful database of all “member” servers/hosts that are part of the VPN.

Other mesh VPNs have been designed so as to not require a “super-node” at all! This eliminates overhead traffic to/from the “super-node” and any delays that traffic can cause.

In this blog post I am describing how to utilize one such open source mesh VPN named PeerVPN, which is the work of Tobias Volk.

Key PeerVPN Features include:

  • Ethernet tunneling support using TAP devices.
  • IPv6 support.
  • Full mesh network topology.
  • Automatically builds tunnels through firewalls and NATs without any further setup (for example, port forwarding).
  • Shared key encryption and authentication support.
  • Open Source (GPLv3)

PeerVPN uses UDP exclusively and PeerVPN sends UDP packets that are larger than the MTU.
Tobias Volk, the author of PeerVPN, has indicated that PeerVPN fragments/reassembles packets itself to enable this MTU capability.

PeerVPN is simple to set up, creates a full mesh VPN, and does not require a “super-node”.

You can define multiple separate VPNs on each host! To define additional VPN networks, just create additional copies of your peervpn.conf, using a new unique name for each.

  • edit each new configuration file (call the new config file anything you want)
  • change the networkname variable to a unique name for the additional VPN
  • change the port variable to be unique for each new VPN
  • generate a different PSK encryption/authentication key for each additional VPN and add that PSK key after the psk variable in the appropriate VPN’s .conf file.

NOTE:   All servers that you want to be part of the same VPN must use the same config file values (exceptions: the “interface” and “ifconfig4/ifconfig6” values).
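As a concrete sketch of those steps, the fragment below copies a hypothetical peervpn.conf and gives the copy its own networkname, port, and PSK seed. All filenames, network names, ports, and seed passwords here are made-up examples.

```shell
# Sketch: derive a second VPN's config from an existing one.
# All filenames, names, ports, and PSK seeds below are hypothetical.
set -e
mkdir -p /tmp/peervpn-demo
cd /tmp/peervpn-demo

# a minimal stand-in for your real peervpn.conf
cat > peervpn.conf <<'EOF'
port 7000
networkname VPNnet1
psk FirstSeedPassword
interface peervpn0
EOF

# copy it, then make the copy's networkname, port, and psk unique
cp peervpn.conf vpn-network-B.conf
sed -i \
  -e 's/^networkname .*/networkname VPNnet2/' \
  -e 's/^port .*/port 7001/' \
  -e 's/^psk .*/psk SecondSeedPassword/' \
  vpn-network-B.conf

cat vpn-network-B.conf
```

The second VPN is then started exactly like the first, pointing peervpn at the new .conf file.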

The minimum PeerVPN configuration file requires only 9-11 settings, depending on whether you are using IPv4, IPv6, or both:

port 7000                      # port used by this VPN (each additional VPN requires a different port)
networkname VPNnet1            # your name for this VPN network (each additional VPN requires a different networkname)
psk MyCryptoSeedPassword       # encryption/authentication “password”, up to 512 characters (other VPNs’ PSKs should be unique). The PSK in the config is just a seed; the real crypto keys are always 256-bit AES keys, generated individually for each VPN link.
enabletunneling <yes|no>       # default YES. Enables the tunneling interface (refer to the config documentation link below)
enableipv4 <yes|no>            # default YES
enableipv6 <yes|no>            # default YES
interface peervpn0             # name for the local VPN Tunnel End Point (TEP) on “this” host (name it whatever you like)
ifconfig4 <ip/prefix>          # IPv4 address of “this” host’s TEP (each host’s TEP gets its own address)
ifconfig6 <ip/prefix>          # the node’s IPv6 address to assign to the tunneled interface (i.e. the encrypted tunnel)
initpeers <peer-ip> 7000 <peer-ip> 7000   # for HA, list several peer-node public IPv4 addresses this node should try to initially connect (or reconnect) to
initpeers 2001:DB8:1337::1 7000           # the same, for peer-node public IPv6 addresses
enablendpcache <yes|no>        # default NO. If using IPv6 set to YES; caches tunneled IPv6 NDP messages to reduce the NDP multicast traffic sent between peers

For a basic PeerVPN configuration file, that’s it!  Pretty simple to implement compared to other mesh VPN solutions I have seen!

To start peervpn use the following command:

usage:  ./peervpn <path to peervpn config file>

IMPORTANT NOTE:   For complete PeerVPN configuration options and descriptions see:

The 10,000 ft view of the overall process to set up and use PeerVPN is:

  • Create an Ubuntu server instance (i.e. a host) on each cloud.
  • On each cloud instance/host, open UDP port 7000, which is used by PeerVPN.
  • Install PeerVPN on each cloud instance: copy the .zip file and unzip it in a subdirectory of your choosing.
  • Create a peervpn.conf configuration file.   Refer to:
  • Generate a PSK encryption password “seed” (I used “psktool”) and set the “psk” variable in your peervpn.conf file to that key.
    • note:  use the same PSK on all VPN “member” hosts within the same VPN
  • Follow the instructions at the above PeerVPN link for adding more servers/hosts to the VPN.  You can add as many as you can support from a traffic perspective.

To run an additional VPN, start a new instance of peervpn and point it at the additional .conf configuration file:


  • ./peervpn ./vpn-network-A.conf
  • ./peervpn ./vpn-network-B.conf
  • etc

If you do this, each VPN will be separate and isolated from every other VPN that does not share the same “networkname”.

How to Install & Use PSKTOOL to generate your PSK encryption password

An important part of any VPN is the encryption of the data traversing the VPN tunnel, especially data crossing the Internet. To ensure the security of the data you send through your VPN tunnel, PeerVPN’s configuration file (peervpn.conf) lets you specify a PSK encryption password.  The PSK you enter into peervpn.conf is used as a “seed” to generate the actual 256-bit AES keys that encrypt each VPN link.

Pre-Shared Keys (PSK) can provide both authentication and encryption, and are the most common authentication method in use today.

I used psktool for my experiment and it is included in the gnutls package(s).

On Ubuntu the following will install what is required for you to use psktool:

$ sudo apt-fast install gnutls-bin gnutls26-doc guile-gnutls -y

Usage: psktool [options]
 -u, --username USERNAME   specify username (the username is not important for our PeerVPN use-case, but the tool requires one)
 -p, --passwd FILE         specify a password file
 -s, --keysize SIZE        specify the key size in bytes (NOTE: the max keysize is 64 bytes, i.e. 512 bits)
 -v, --version             print the program’s version number
 -h, --help                show this help text

Then, to generate a 512-bit PSK for “any” username and save it to some file (example = ./mypsk):

example: $ psktool -u bmullan -p ./mypsk -s 64

Edit the mypsk file and copy everything after the username you used (the username will be the only readable text in that file), then paste that copied PSK key into your peervpn.conf file after the “psk” variable.
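That copy-and-paste can also be scripted with awk and sed. The sketch below uses a made-up sample key and scratch paths; psktool writes lines of the form username:hexkey, which is what the awk split relies on.

```shell
# Sketch: splice a psktool-generated key into peervpn.conf.
# The key, username, and paths below are hypothetical examples.
set -e
mkdir -p /tmp/psk-demo
cd /tmp/psk-demo

# stand-in for the file "psktool -u bmullan -p ./mypsk -s 64" would write;
# psktool stores one "username:hexkey" line per user
echo 'bmullan:9f2c41d8a7b3e6f0' > mypsk

cat > peervpn.conf <<'EOF'
networkname VPNnet1
psk placeholder
EOF

# everything after the first ":" is the key itself
key=$(awk -F: '{print $2}' mypsk)
sed -i -e "s/^psk .*/psk $key/" peervpn.conf

grep '^psk' peervpn.conf
```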

How to use the PeerVPN “mesh” VPN with LXC

The big question is…  how does this help interconnect LXC containers running on possibly many remote and independent servers/hosts?

All it takes is a small networking change…

If you configured and started PeerVPN properly on each host, then executing “ifconfig” on each host will show one or more VPN Tunnel End Point (TEP) “interfaces” created by PeerVPN.

NOTE:  The TEP will be named the same “name” as you entered in the PeerVPN configuration for the variable called “interface” … refer to the above PeerVPN tutorial.

To connect LXC containers running on any PeerVPN-configured host, you attach the “peervpn0” interface to the lxcbr0 bridge that LXC uses on that host.

NOTE:   You are the one who defines the PeerVPN TEP interface IP address, via your peervpn.conf file.  In the PeerVPN tutorial example the peervpn0 interface is given a 10.8.x.x address.

When you install LXC on a host (sudo apt-get install lxc), a default LXC bridge is created and given a 10.0.3.x IP address.   Any LXC containers created with the lxc-create command on that host will also, by default, get a 10.0.3.x IP address.

While logged into each of your servers you should now be able to ping the 10.8.0.x IP address of the other PeerVPN member servers.

Our next step is to connect our TEP to the LXCBR0 bridge to enable containers attached to that bridge to pass data over the VPN tunnel.

Since the PeerVPN TEP interface (“peervpn0” in the Tutorial example) is just like any other Linux ethernet interface we can use the “ip link” command to connect the peervpn0 interface to the LXC lxcbr0 bridge.

$ sudo ip link set dev peervpn0 master lxcbr0

NOTE:   After executing this command on EACH host, you will find that you can no longer ping the 10.8.0.x IP addresses of the other PeerVPN member servers!

This is expected and is OK: if you still have the terminal open where you started PeerVPN (i.e. sudo peervpn …), you should still see your “peers connected”!

Next create an LXC container on each “host”


$ sudo lxc-create -t download -n my-container -- -d ubuntu -r trusty -a amd64

Note:  this will create a new LXC container named “my-container” running Ubuntu Trusty (i.e. v14.04) as a 64-bit OS.

Next… start the container you created on each host, and then get access into the LXC container “my-container”:

$ sudo lxc-start -n my-container -d

$ sudo lxc-attach -n my-container

If you look closely at the Terminal window you are using you will see that the “prompt” has now changed to show that you are logged into the container “my-container” and that you are logged in as root.

Note:   root in a container is NOT the same as root in the “host”

On each host get the IP address of each host’s container that you created and write it down.

You can get those IP addresses using the following LXC command on both Host A and Host B

$ sudo lxc-ls -f

Or if you are logged into the Container on each host just do:

$ ifconfig

NOTE:   your container IP addresses will be different, but for our example here let’s say:

  • eth0 of Host A’s container has IP address <container-A-IP>
  • eth0 of Host B’s container has IP address <container-B-IP>


[Diagram: PeerVPN with LXC]
While logged into the Container on Host A, try to ping the Container IP address on Host B

Using our example IP addresses from above (again, your own container IP addresses will be different):

$ ping <container-B-IP>

This should now work and Containers on Host A can reach Containers on Host B via the PeerVPN Tunnel you created.

Important Note

For our proof-of-concept trial you need to understand that we have left LXC on each host node with the default LXC configuration.   So each host will have its own LXC lxcbr0 bridge, and the lxcbr0 bridge on each host will use the same 10.0.3.x subnet, with an IP address from that subnet assigned to lxcbr0.

Furthermore, the LXC containers created and running on the individual “hosts” will all also have been assigned a 10.0.3.x ip address by the local lxcbr0 dnsmasq.

Even though LXC “by default” creates & assigns “unique” IP addresses to each LXC container created inside a particular “host”…    LXC running on separate “hosts” is NOT by default aware of IP addresses used by LXC on any other host.

For our “proof-of-concept” here, that means there is the potential for a “duplicate” 10.0.3.x IP address to be assigned to a container on one or more “hosts”.

For a small proof-of-concept this is probably unlikely to occur and so for this blog write-up we will ignore that fact.   But for a production environment you will want to look into using a centralized IPAM (ip address management) solution which will probably involve other linux tools such as DNSMASQ, DHCP, DNS.    However, that is beyond the purpose of this proof-of-concept article/blog post.

Final step: repeat this process for each cloud instance/host if you’d like to test beyond just a couple of servers.  However, remember there is a “remote” possibility of some LXC container getting a duplicate IP address in your own proof-of-concept trial.   It is remote, but it is possible.

NOTE:   you can configure LXC on each host to use a different bridge you create (say br0), and then on one host create a DNSMASQ instance and attach it to the br0 bridge.   After doing so, all LXC containers on any host that is part of the same PeerVPN tunnel will get their IPs assigned by a single dnsmasq, and you will not have to worry about IP duplication.

Now each LXC container on each cloud instance should be able to reach any other LXC container on any other PeerVPN host you have set up, anywhere.

Also, for any production use it might be advantageous to use non-privileged LXC containers.    This blog post has only talked about “privileged” LXC containers.

Use & Implementation of IPv6 as a Production Solution

The introduction and increasing use of IPv6 instead of IPv4 will greatly simplify this overall PeerVPN solution with regard to IPAM, because IPv6 was designed to allow local IPv6 address assignments that are guaranteed to be unique, even between separate and remote hosts/containers.   Google “ipv6” and read up to become more familiar with it: the “Internet of Things” (IoT), as it’s popularly called, will require the vast number of available IPv6 addresses in order to connect the future world’s billions of inter-connected devices (phones, TVs, cars, tablets, laptops, etc.).

ARIN announced in 2015 that its free pool of IPv4 addresses was exhausted!

So no more new IPv4 addresses are available.   For this reason, it’s important to start learning, testing, and deploying IPv6 where you can.   In the U.S. almost all ISPs (cable, AT&T, mobile, etc.) now support IPv6!

NOTE:  The main advantage of IPv6 over IPv4 is its larger address space. The length of an IPv6 address is 128 bits, compared with 32 bits in IPv4. The address space therefore has 2^128, or approximately 3.4×10^38, addresses.

General IPv6 Configuration for LXC

Searching the web I found a good write-up describing the configuration of IPv6 for LXC container use.

Although this article does not address anything about VPNs I think it provides a great background to understand the critical steps & considerations to configure IPv6 for LXC and the LXC Host machine.

Refer to:     LXC Host featuring IPv6 connectivity

Unique Local IPv6 Generator

There is a great online tool to help you generate a unique “local” IPv6 address to utilize with your mesh network or simply to use IPv6 with LXC or Linux configurations.   See:

Suggested Readings

To really start understanding LXC, be sure to read the terrific 10-part series on LXC by one of the principal LXC developers, Stephane Graber.   Refer to:

To gain a good understanding of IPv6 configuration in Linux, one web site that is fairly comprehensive in its description of the terms, configuration options, and usage is:     IPv6 – Set Up An IPv6 LAN with Linux

Also just for a good reference I have found the iproute2 cheat sheet web page extremely valuable.

Last words…   As I am not any kind of expert in IPv6, LXC, or Linux, feel free to suggest improvements, changes, and/or configuration examples to this approach in any of the related areas!

Have fun…!

January 1, 2015

Using Rundeck on Ubuntu to automate server deployments into LXC (local or remote) containers

Filed under: Cloud Management, LXC, ubuntu, Virtualization Tools — bmullan @ 11:59 am

Continuing my recent posts on LXC (Linux containers), I realized that managing them from the command line might be a bit tedious when there can be hundreds or thousands of containers spread between your local PC/laptop and any “remote” (i.e. cloud) servers and LXC containers you utilize/manage.

I just recently found out about Rundeck while searching for orchestration/mgmt tools.

My use-case was that I was looking for something that could help in managing LXC (linux containers) whether remote or local.

Note:  many people confuse LXC with other container technologies like Docker, LMCTFY, etc.   They are all different solutions that, underneath, utilize Linux namespaces.   Here is a good multi-part series describing Linux namespaces.

LXC is an incredible technology.

With the release of 1.x this past year it now supports nested containers, unprivileged containers and much more.

Anyway, I decided to see if I could get Rundeck to work in an LXC container and also be able to create workflows/jobs etc to work with LXC containers.

LXC has a rich set of CLI commands:

  • lxc-create
  • lxc-start
  • lxc-attach
  • lxc-stop
  • lxc-clone
  • lxc-destroy
  • etc

There is also an API that supports Python, Go, Ruby etc.

Stephane Graber (one of the LXC core developers) has a great 10 part Blog series that tells you all about LXC.

For me,  I just wanted to get Rundeck to issue the above lxc-xxxxx commands.

Turns out it only took a couple configuration changes so I thought I’d share my notes here.

Note: all of this was done on Ubuntu 14.04

Steps I took to install Rundeck in an LXC container.

create a new container on the Host.   I called mine “rundeck”

$ sudo lxc-create -t download -n rundeck

start the container which will run detached from the terminal you started it on.

$ sudo lxc-start -n rundeck

attach (re get a console into the container)

$ sudo lxc-attach -n rundeck

Note: at this point your console prompt should change to show you are logged in as Root in the Container whose hostname is “rundeck”.

At this point you can do whatever you would do with any ubuntu server but here were my steps

root@rundeck#  apt-get update && apt-get upgrade -y
root@rundeck# apt-get install wget nano default-jre

then I used wget to download the latest Rundeck .deb file:

root@rundeck# wget

Note:  check on their website for the rundeck version number as it may change often

Install the Rundeck .deb (gdebi resolves its dependencies; if it isn’t installed, first run apt-get install gdebi-core):

root@rundeck# gdebi ./rundeck-2.4.0-1-GA.deb

When the Rundeck installation is done I needed to do a couple of things.

LXC containers in Ubuntu by default are started in their own 10.0.3.x network.   By default applications in the container have internet access and as I’d mentioned before are like being logged into any other ubuntu server in regards to what you can do.

Because it’s possible that each time you stop/restart an LXC container it may get a different 10.0.3.x address, I wanted a solution where the Rundeck webapp would acquire the “current” IP address of the container it runs inside of, each time that container starts and Rundeck starts.

My script looks like this. I saved it into the container’s /usr/bin directory and made it executable (chmod +x). I called mine (a name I made up; call yours whatever you like).

#!/bin/bash
# purpose:  while running inside the container, get the container's current
#           IP address, then stream-edit (sed) the /etc/rundeck/
#           file so that "localhost" is replaced with
#           that IP, and restart the rundeckd service.
# assumptions:  the container uses eth0 as its primary network interface, and
#           a pristine copy of the config (with "localhost" still in it) was
#           saved beforehand; the backup name below is just my example name.
# This script is called from /etc/rc.local during system boot, after the
# network IP is set and rundeckd is started.


# first restore the original (pristine) file
cp "" "$FILE"

# get the eth0 IP address (we assume that's what the container is using)
my_ip=$(ifconfig eth0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}')

# swap the term "localhost" with the real IP of the container in the file
sed -i -e "s|localhost|$my_ip|" "$FILE"

# restart the rundeckd service with the new & now actual IP address
/etc/init.d/rundeckd restart


Run this script by adding it to the rc.local file inside the LXC container in which you installed Rundeck (my container is called rundeck).

In /etc/rc.local, just add the following line at the end of what’s already there (using whatever path/name you gave the script):

/usr/bin/
Next, as you may note above, the script simply searches for the word “localhost” and substitutes the current IP address of the eth0 of the LXC container Rundeck is running in, since “by default” an LXC container will use that interface; I am assuming defaults here.

Secondly, to keep this simple, before ever restarting the system for the first time I copied to a pristine backup (mine is named; pick any name) so I always had a virgin copy of the original file with “localhost” still in it.

The first step of the script restores that original file and then does the sed substitution, so the script can always find and substitute the actual IP of the container.

root@rundeck# cp /etc/rundeck/  /etc/rundeck/
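The restore-then-substitute pattern can be tried on scratch files, without a real Rundeck install. In the sketch below, the file names, the grails.serverURL config line, and the IP address are all stand-ins for what a real container would have.

```shell
# Sketch of the restore-then-substitute idea on scratch files.
# File names, the config line, and the IP are hypothetical stand-ins.
set -e
mkdir -p /tmp/rundeck-demo
cd /tmp/rundeck-demo

# pristine copy, saved before the first boot, still says "localhost"
echo 'grails.serverURL=http://localhost:4440' >

# stand-in for the address "ifconfig eth0" would report inside the container

# restore the pristine file, then substitute the current IP
sed -i -e "s|localhost|$my_ip|"

</imports>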

My next step was to enable the use of sudo in job commands so I could have Rundeck work with privileged LXC containers.

Remember, to create/start/stop etc. containers you have to have sudo privileges.

I searched the Rundeck forum and found others were grappling with this problem too.

For me my solution (whether its best or not) worked.

I used visudo to edit the SUDOERS file and set the user “rundeck” so “rundeck” user does NOT require a password to execute a sudo command.

Note:  Again, you are doing this WHILE LOGGED INTO the “rundeck” container – NOT – the Host !

This will enable the rundeck web app to execute commands that require “sudo” in them.

in the rundeck container…

$ sudo visudo

Add the following at the end of the sudoers file:

rundeck ALL=(ALL) NOPASSWD: ALL
Ctrl-X to leave, save your changes, and you’re done!

Now while logged INTO the rundeck container reboot it.

root@rundeck# shutdown -r now

Note that this will log you out of the container and return you to the original terminal prompt on your Host OS.

If you want to log back into the container “rundeck” you should be able to almost immediately log back in using the lxc-attach command again

$ sudo lxc-attach -n rundeck

But at this point you should be able to log into Rundeck which is running in the separate and isolated LXC container we also called rundeck by pointing your browser to the IP address of the container.

You can find out the containers IP address using the following LXC command while in a terminal on the Host OS:

$ sudo lxc-ls -f

NAME     STATE    IPV4                  IPV6  GROUPS  AUTOSTART
base_cn  STOPPED  -                     -     -       NO
rundeck  RUNNING  <container-ip>,  -     -       NO
wings    STOPPED  -                     -     -       NO

So in the above case I point my browser to http://<container-ip>:4440 (Rundeck’s default web port) and log into Rundeck as normal (admin/admin or user/user).

However, now when I create a “job” for the localhost… that job executes inside of the LXC container “rundeck” and NOT on the Host OS …!

If you read the LXC project website you will also have noticed a new capability/extension to LXC that is now available called LXD (lex-dee).

LXD is introducing a whole new exciting  capability to LXC that includes the ability to easily create/run/manage LXC containers anywhere on any LXC capable host (LXC is part of the linux kernel) whether that host is remote (re Cloud) or local.

This means that even on your laptop you can have dozens or many dozens (depending on memory, applications, etc) of containers all isolated as much/little as you want from each other, from the Host or from the internet.

So now you can use Rundeck to manage/orchestrate all your local PC LXC containers BUT… you should also be able to use LXC & LXD to do the same with remote (re Cloud) servers/LXC containers.

As I am no expert in Rundeck, LXC or Linux feel free to suggest improvements, changes etc where you think this post requires it as I am sure I probably have made some incorrect assumptions w/Rundeck and/or LXC here.



November 20, 2013

Configure x2go remote desktop capability into LXC Containers

Filed under: LXC, Remote Desktop, ubuntu, x2go — bmullan @ 8:32 am

I’ve long used x2go for remote desktop access to Linux machines.   So far I’ve found x2go to be by far the fastest/best remote desktop application for Linux, whereby a Linux, Windows, or Mac user can access a Linux desktop “server”.

The following will show you how to create an LXC container and configure it with the x2go remote desktop “server”, so you can access the LXC container’s desktop using any native x2go client (Windows, Linux, Mac) or even the x2go web browser plugin (Ubuntu only at this time).

Note 1:

  • the following assumes an Ubuntu Host OS.   LXC is implemented in the Linux kernel and should be available on ANY distro, but usage may differ in ways not documented here.

First lets create a test LXC container

$ sudo lxc-create -t ubuntu -n test

Note 2:    -t specifies “what” linux LXC “template” to use in creation of the LXC container.   In ubuntu templates exist for:

  • lxc-alpine
  • lxc-busybox
  • lxc-fedora
  • lxc-sshd
  • lxc-altlinux
  • lxc-cirros
  • lxc-opensuse
  • lxc-ubuntu
  • lxc-archlinux
  • lxc-debian
  • lxc-oracle
  • lxc-ubuntu-cloud

So although I use Ubuntu, I could create an LXC container running OpenSuse, Debian, Arch Linux, etc.  Very cool capability.

The ONLY caveat is that all container OSes have to run the Host OS’s “kernel”.    This normally is not a problem for most use-cases, though.

Next we have to “start” the LXC container we called “test”

$ sudo lxc-start -n test

As part of executing the above command you will be presented with a login prompt for the LXC container.   The default LoginID = ubuntu and the password = ubuntu

So login to the LXC container called “test”

Next I started adding some of the applications I would be using to do the test.

First I make sure the test container is updated

test:~$ sudo apt-get update && sudo apt-get upgrade -y

Next I install either an XFCE or LXDE desktop.  Note: I use one of these because no remote desktop software I am aware of (including x2go) supports the 3D graphics of either Unity or Gnome3.  But x2go does support xfce, lxde, mate, and a couple of others.

So lets install xfce desktop in the container.

test:~$ sudo apt-get install xubuntu-desktop -y

In order to add the x2go PPA in the container I have to get “add-apt-repository” (it’s not installed by default):

test:~$ sudo apt-get install software-properties-common -y

Now I can add the x2go PPA:

test:~$ sudo add-apt-repository ppa:x2go/stable

Next, install the x2goserver to which I will connect from my Host by using the x2goclient I will install there later.

test:~$ sudo apt-get install x2goserver x2goserver-xsession -y

x2goclient uses SSH to login to an x2goserver.

There are various advanced x2go configs you can do for login but to keep it simple I am going to just be using login/password combo.

However, to be able to do that the default Ubuntu /etc/ssh/sshd_config file needs 2 changes to allow logging in with login/password.

Use whatever editor you prefer (I use nano, which you would also have to install with apt-get into the container):

test:~$ sudo nano /etc/ssh/sshd_config

Change the following from NO to YES to enable challenge-response passwords

ChallengeResponseAuthentication no

Uncomment (i.e. remove the #) the following to enable password authentication:

#PasswordAuthentication yes 

Save your 2 changes and exit your editor.

Now, restart SSH so the changes take effect

 test:~$ sudo service ssh restart
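If you prefer not to edit by hand, the same two sshd_config changes can be made with sed. The block below runs against a scratch copy so it can be tried safely; on the real container the file is /etc/ssh/sshd_config and you would run the sed command with sudo.

```shell
# Sketch: apply the two sshd_config edits non-interactively with sed.
# A scratch copy is used here; on the container, point the sed command
# at /etc/ssh/sshd_config and run it with sudo.
set -e
mkdir -p /tmp/sshd-demo
cat > /tmp/sshd-demo/sshd_config <<'EOF'
ChallengeResponseAuthentication no
#PasswordAuthentication yes
EOF

sed -i \
  -e 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' \
  -e 's/^#PasswordAuthentication yes/PasswordAuthentication yes/' \
  /tmp/sshd-demo/sshd_config

cat /tmp/sshd-demo/sshd_config
```

As in the manual edit, remember to restart ssh afterwards so the changes take effect.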

At this point the x2goserver is all set up in the LXC container, so you can access it with your x2goclient from your Host OS, or from anywhere that can reach your LXC container’s IP address.

You can shutdown (or reboot) the LXC container while logged into it just as you would in any Ubuntu by:

test:~$ sudo shutdown -r now  -or- $ sudo shutdown -h now

What is nice about LXC is that once you have shutdown the LXC container you can “clone” that entire container very quickly by issuing the following command on your Host OS

hostOS:~$  sudo lxc-clone -o test -n new_container

Each new LXC container will get a new IP address (default will be in the 10.x.x.x address range).

After you “start” your new cloned LXC container:

hostOS:~$  sudo lxc-start -n new_container

To access the NEW LXC container you can find out the new LXC container’s IP address using the following command after the LXC container has been started:

hostOS:~$ sudo lxc-ls --fancy

 You can then use that IP address in creating a new x2go “session profile”.

Again, remember that each container “could” be configured with a different Desktop Environment so one user could have xfce another lxde another Mate etc.

Hope this is useful and fun for you to experiment with.


How to Enable Sound in LXC (Linux Containers)

Filed under: LXC, pulseaudio, ubuntu, x2go — bmullan @ 7:26 am

An Approach to Enable Sound in an LXC container


LXC Containers are usually used for “server” type applications where utilizing sound is not required.

My personal “use-case” is that I want to use LXC containers to provide a remote-desktop “server” to remote users.    In my use-case I use both the awesome x2go remote desktop application and also my own spin of the great Guacamole HTML5 remote desktop proxy gateway.

I will not go into anything x2go- or Guacamole-related here regarding how to set it up for use with LXC.

The following is how I enabled Sound in my LXC containers on my Ubuntu 15.10 amd64 host/server.

Before you do anything with a container, you need to make one change on whatever Host/Server is to play the sound from LXC containers, whether that Host/Server is local, remote, or the same Host/Server that the LXC containers are running on.

$ echo "load-module module-native-protocol-tcp auth-ip-acl=;" | sudo tee -a /etc/pulse/

$ echo "load-module module-zeroconf-publish" | sudo tee -a /etc/pulse/

The above will add the following 2 lines to the end of your Host’s /etc/pulse/ file:

load-module module-native-protocol-tcp auth-ip-acl=;

load-module module-zeroconf-publish

The 1st statement says to allow sound from “remote” systems whose IP addresses are part of 10.0.3.x … in essence from any LXC container running on that Host/Server.

Once you have done the above you will need to either reboot the Host or just “kill” the PulseAudio daemon running on the Host; it will auto-restart itself and pick up the 2 new lines you added!

to restart pulseaudio

ps -ax | grep pulseaudio

Then use the kill -9 command with the PID from the pulseaudio output above.   As an example, let’s assume pulseaudio is running as PID 2189:

$ sudo kill -9 2189

You can check that pulseaudio daemon restarted by doing the “ps -ax | grep pulseaudio” command again.


Step 1 – Create a Test container

Create a test container (the following creates a “privileged” LXC container, but un-privileged works as well):

$ sudo lxc-create -t download -n test

Start the test container:

$ sudo lxc-start -n test


Step 2 – Add PulseAudio and an audio player (mpg321) into the Test container

$ sudo apt-get install  pulseaudio  mpg321  -y

Create your new Ubuntu UserID

$ sudo adduser YourID


Step 3 – Configure your LXC Test Container’s PulseAudio to redirect any Sound over the Network

PulseAudio is really a very powerful audio/sound management application and there are many ways to utilize it.

One such way lets you configure a “remote” system (in this case the Test LXC container, which is on a different IP network than your Host OS) so that it plays its sound/audio on the Host/Server, or on a truly remote Host/Server:


  1. The “target” PulseAudio Host PC that will “play” the sound … (if on a home network) is usually a 192.168.x.x IP network.
  2. An LXC container on your Host PC is usually on a 10.x.x.x IP Network
  3. The LXC “Host PC” and any LXC Containers are usually bridged together via the lxcbr0 (lxcbr”zero”) bridge so they can communicate and so your LXC container can communicate with the Internet.

Make sure you are logged into your Test LXC container as “YourID” with “YourPassword”.    If you just created YourID in the container and are still logged in as ubuntu/root, then su to YourID ($ su yourID).

Next is the important step regarding PulseAudio configuration in your LXC Test Container.   The following command adds a new environment variable when you login to the Container in the future.

$ echo "export PULSE_SERVER=<host-ip>" | tee -a ~/.bashrc

The above will add the following line to the end of your .bashrc file:

export PULSE_SERVER=<host-ip>

Here <host-ip> is the IP address of the Host OS on the lxcbr0 bridge that LXC installs for you by default when you install LXC.

Note:  if the actual Host/Server you want to play the sound on is truly remote (i.e. not the Host of the LXC container), then use the IP address of that remote Host/Server instead.


  1. PulseAudio by default uses port 4713, both on your target Host OS and in any LXC container you might create, unless configured otherwise.
  2. If you have any problems using sound in a future container, make sure that port 4713 is open in any firewalls, if you plan to send sound to your local workstation over a network or the Internet itself.


Step 4 – Finally Check to see if Sound works from your LXC Test Container

To test that sound works in your container use SCP to copy some mp3 file from your Host to the LXC container (assume the mp3 is called test.mp3).

$  scp /path-to-mp3/test.mp3  yourID@container_ip:/home/yourID

Next log back into your container as yourID.  You can ssh into it or lxc-attach to it.  In either case make sure you are logged in as yourID not root or ubuntu user.

Now you can use the application “mpg123” to see if sound works.

If you did everything correctly and if you have your speaker On and Volume turned up on your Host PC you should hear the .mp3 file playing when you execute the following:

$ mpg123 ~/test.mp3


The PulseAudio configuration I described here for the “test” LXC container allows PulseAudio to redirect sound to ANY other Linux system running PulseAudio on the network -or- the Internet.


This PulseAudio setup does allow simultaneous use of sound by BOTH the Host and the Container.    For a single-user case this may not be what you want, but if you want the audio to play on some remote Linux machine, a Raspberry Pi out on your deck, etc., this is really useful.

However, remember “my use-case” was for remote desktop access to LXC container based Ubuntu desktop systems. In “my use-case”, each container will eventually be configured to redirect PulseAudio TO the remote desktop “user” PC, wherever that is on the “internet”.

Remember that the PulseAudio port 4713 must not be blocked by any firewalls.


This configuration of course was simply to test that Sound would work.

I do think LXC could become a great user desktop virtualization approach, as it works great now with x2go (in my case), but there are other remote desktop access applications that others may utilize as well.

Finally, the PulseAudio documentation has a lot of other detailed information regarding advanced PulseAudio configuration and use. I’m still learning myself.

Hope this helps others trying to do similar things.

September 17, 2012

HowTo – Integrate Windows Apps into the Ubuntu Linux Desktop using Windows RemoteApp

HowTo – Ubuntu and Windows RemoteApp Use Guide

By: Brian Mullan

August 2013

Note:  update from the WinConn app developer…  good news !

I had sent Alex Stanev, the developer of WinConn, an email some time ago asking about the possibility of getting WinConn updated for the release of Ubuntu 16.04 LTS in April 2016.   Alex has done the upgrade/update, and you can read the email below to find out where/how to install the new WinConn so it will work with the newer Ubuntu versions.

                                          = = = = = = = = = = = = = = = = =

I’ve moved winconn to github:

With this version, I’ve moved to new xfreerdp commandline options and fixed dependencies.

Generally, you can build it with:

git clone


cd winconn

If you have the dependencies needed (it should cry for them if not), you’ll have the .deb packages built successfully and installable. This works with 15.10, but should work with other versions also.

The problem here is with current freerdp (2.0.0-dev) they have RemoteAPP regressions – partial window shown, window froze and etc.

Please let me know If you have stable RemoteAPP functionality with concrete freerdp version. There are bugs submitted against freerdp, I’ll keep checking their status.
Any dev  help with winconn is welcome.



= = = = = = = = = = = = = = = = =

Note:  updated April 2015 – RemoteAppTool now supports  Windows 7 Enterprise -or- Ultimate, Windows 8 Enterprise, Windows XP SP3, Windows Server 2008 and newer!


A “How-To Guide” about going beyond WINE’s capabilities to enable a clean integrated Linux Desktop with all of the “necessary” Windows applications you still require or can’t live without.

Note:   Because I use Ubuntu, in this guide I reference Ubuntu as my Linux system.   However, the same approach should work in any Linux distro!

What is the Problem we are trying to Solve

I, like many Ubuntu users, am still saddled at times with the need to run one or two critical Windows-only applications that just cannot be made to run correctly in WINE.

Of course we all know we can use virtualization like KVM or VirtualBox to install a Windows operating system and then install the needed Windows application(s) there.

But that only presents us with another set of problems:

  1. You are running the Windows OS as a VM and thus see the whole Windows Desktop presented to you, which in my mind at least clutters up my desktop… just for access to your needed Windows applications.
  2. Without resorting to installing/configuring something like CIFS/NFS/SAMBA, there is no convenient way to share/exchange data/files created in the Windows application with your Ubuntu applications, or vice-versa.

This article is being written to describe what I think is a very nice working environment that addresses this problem and may introduce you to several technologies that you will find interesting in other ways.

My own problem Windows Application happened to be a great MindMapping tool called ConceptDraw  which is part of an integrated suite called ConceptDraw Office.

ConceptDraw Office is only available for Windows and Mac OS.

I’ve purchased and use the great CodeWeaver’s Crossover Ubuntu WINE environment.   CrossOver allows you to install many popular Windows applications and PC Games on your Ubuntu PC.

While CrossOver let me easily install Microsoft Office into my Ubuntu system, there are some applications that it still cannot run fully/correctly in Ubuntu.

For me, one of those was ConceptDraw Office.    Using CodeWeaver’s CrossOver application I could successfully install ConceptDraw Office and 2 of the 3 applications in the Suite worked flawlessly (Project Manager and the Visio-like Designer).

However, while the ConceptDraw MindMap application installs okay and all the menus are correct, the mindmap drawing surface just would not render the mindmap images correctly no matter what I tried, so the MindMap application was unusable and useless to me.

As I really like using that MindMap tool for brainstorming new projects, like integration of applications into cloud environments (AWS or OpenStack), I wanted MindMap available to me on Ubuntu without resorting to booting Windows or being forced to see the entire Windows desktop in a VM or on a separate PC… just to use the one application.

So I came up with a very usable solution which I’d like to share.

This guide explains what I did and how it was done so others might benefit from it as well.

Where to Start

This approach does not eliminate the need for a VM but it will make your Ubuntu desktop and working environment much more nicely integrated with the Windows Applications you need.

In the solution I will present, I will be using several technologies:

  • KVM (VirtualBox is certainly an alternative)
  • FreeRDP (opensource Ubuntu tool that supports Windows RemoteApp and RemoteFX)
  • Windows 7 Enterprise -or- Ultimate, Windows 8 Enterprise, Windows XP SP3, Windows Server 2008 and newer.
  • Microsoft’s RemoteApp capability

It is assumed you have a working Ubuntu desktop environment and KVM installed.

Using your (licensed) CD or .ISO file copy of one of the above required versions of Windows, create a new KVM virtual machine and install Windows into it.

When you create the VM you should probably size it for:

  • a minimum of 30-35GB disk space
  • initially 2 CPUs (if you can), which after installation you can reduce to 1 CPU
  • I’d recommend giving that VM an initial 3072MB of RAM (again, if you can)

All of the above is simply to make the Windows installation go quicker.
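If you prefer the command line over the virt-manager GUI, a virt-install invocation matching the sizing above might look like the following sketch. The VM name, ISO path and os-variant here are examples only, and exact option names vary a little between virt-install releases (newer versions spell --ram as --memory).

```
virt-install \
  --name win7-remoteapp \
  --ram 3072 \
  --vcpus 2 \
  --disk size=35 \
  --cdrom /path/to/your/windows7.iso \
  --os-variant win7
```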

Note:  some of the Windows specific steps below are assumed to be known by you already.   You will also either need to make your account Admin capable or have a separate Admin account you can access.

After you have Windows installed in the VM the fun part of this begins.

Steps to do in Windows  (screen shots are from Windows 7)

Create a User account for yourself in Windows

  1. Click on Start
  2. Right-Click on Computer
  3. Select & click on Properties

After clicking on Properties you will see the following menu on which you need to select/click-on “Remote settings” in the upper-left.

On the next screen that is presented click on the Tab labeled Remote.

Then select the option:

 “Allow connections from computers running any version of Remote Desktop (less secure)”

Click on the “Select Users” button and you will see this menu screen.

Next, click the Add button and in this menu enter your Windows 7 UserID.

Click Check Names.

Click OK to return to the previous menu and you should see your Windows UserID now listed as authorized for Remote Desktop.

Click OK to save this UserID as a user allowed to use Remote Desktop.

Now we can start some of the interesting configuration for Windows.

Note:  The reason why it must be a Windows 7 Ultimate or Enterprise version is that both of those are “capable” of supporting Microsoft’s RemoteApp as a “RemoteApp Server”, but unfortunately Microsoft made the decision to not make that capability readily usable.

Some very smart Windows users/programmers figured out how to turn on the RemoteApp server capability in Windows.

They have also made it almost painless by creating a very nice GUI interface to “Publish” RemoteApps from Windows.

Enabling RemoteApp Publishing on Windows

You must make sure that you have Microsoft’s .NET installed on your Windows VM, if it is not already.

A Google search for “Microsoft .NET” should give you multiple hits where you can download and install .NET into your Windows VM.

Do this now !

Next we need to download the GUI based tool that will not only enable RemoteApp publishing on the Windows VM but will also let you Publish … ANY…  Windows Application you install onto that Windows VM as a “RemoteApp”.

A person named Kim Knight built a GUI based application called RemoteAppTool that you can download from here:

Download and Install the RemoteAppTool into your Windows Virtual Machine (VM).

Note:  The RemoteAppTool requires .NET which is why we did that step first.

After installation of the RemoteAppTool you need to start installing your Windows Applications that you will want available on your Ubuntu system.

Do those Application Installations now !!

Publishing a Windows Application as a RemoteApp using RemoteAppTool

Now that you have installed all of the Windows Applications you want access to the next step is to “publish” them as RemoteApps.

Right-Click on the Windows Icon for Kim Knight’s RemoteAppTool program and select to start it as an Administrator.

Note:  this is why you need to be an Admin or able to log-in as an Admin on the Windows VM

After the RemoteAppTool starts you will see its GUI Menu.

Click on the “New” button

Enter any meaningful name for your RemoteApp

At the next screen (RemoteApp Properties Entry screen)   click on the 3 dots (…) to the right of the PATH entry which will bring up Windows Explorer.   Use Explorer to search your system for ANY application you want to associate with this RemoteApp “name”.

After you double-click on the .exe name of the Windows program you want to make a RemoteApp… all the rest of the fields in the RemoteAppTool menu should be filled in automatically.

When the form is complete REMEMBER to Click SAVE !

NOTE:  Once you click  SAVE your application is available from the Windows 7 VM as a Windows RemoteApp !

Now we need to go back to your Ubuntu desktop, so leave your Windows 7 VM running but just minimize it off of your Ubuntu desktop.

Note:  As I’d mentioned earlier I have tested and used RemoteAppTool on Ubuntu 12.04 LTS and 12.10, Ubuntu 13.04, Ubuntu 14.04 LTS.

Install the Applications enabling use of Windows RemoteApps from the Ubuntu desktop

FreeRDP is a free implementation of the Remote Desktop Protocol (RDP), released under the Apache license.   FreeRDP is primarily the work of Marc-André Moreau, who on January 16, 2012 announced the stable release of FreeRDP 1.0 for Ubuntu, but FreeRDP may have also released newer versions.

Note:  FreeRDP can also be used on Mac OS and Windows clients to connect to Windows Servers!

FreeRDP v1.x can be downloaded here.

You will only need to do this if your Ubuntu Distro does not have it available or you want the latest version of FreeRDP.

FreeRDP’s Key Features:

  • RemoteFX
    • Both encoder and decoder
    • SSE2 and NEON optimization
  • NSCodec
  • RemoteApp
  • Multimedia Redirection
    • ffmpeg support
  • Network Level Authentication (NLA)
    • NTLMv2
    • Certificate validation
    • FIPS-compliant RDP security

Note:  the 2 key features (my opinion) are the support on Ubuntu for RemoteFX and RemoteApp.

If you are unaware of what RemoteFX or RemoteApp do in a Windows architecture, see these references:

For RemoteApp:  

For RemoteFX:    

So check your distro’s repository NOW to see if FreeRDP is there and that it is at least v1.0!

Note:   For this Guide to work the repository must have at least version 1.0 of FreeRDP.   

If the repository does not yet have at least v1.0 then you may have to download the source and build and install FreeRDP yourself.

NOTE:   In Ubuntu 14.04 (which I use) the FreeRDP in the repository is v1.0.2, which works with this process!

Install FreeRDP now!

Now that you have FreeRDP installed you are ready to run one of the Windows RemoteApp programs you previously configured.

FreeRDP itself is a command line tool and obviously you “could” run one of your Windows RemoteApps using a command line such as the examples on the FreeRDP Wiki:
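For example, a RemoteApp invocation using the newer “/option” style flags might be built like the following sketch. The exact flags differ between FreeRDP versions (v1.0 used “--plugin rail” style options instead), and HOST, WINUSER and the “||” application alias are placeholders for your Windows VM’s IP, your Windows UserID and the RemoteApp name you published with RemoteAppTool.

```shell
# Build (but do not yet run) an xfreerdp RemoteApp command line.
HOST=192.168.122.10     # placeholder: your Windows VM's IP address
WINUSER=YourID          # placeholder: your Windows UserID
APP='||explorer'        # placeholder: the published RemoteApp alias

CMD="xfreerdp /v:${HOST} /u:${WINUSER} /app:\"${APP}\""
echo "$CMD"             # inspect the command line before running it for real
```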

As those TV ads we all love late at night say:  “But wait… there’s more”.

An open source GUI application was recently released that makes running RemoteApps via FreeRDP almost too simple.

This application is called WinConn.

WinConn simplifies creation, management and desktop integration of remote Windows applications in Ubuntu.  WinConn uses RemoteApp technology, implemented by the FreeRDP project, to provide a seamless user experience with Windows applications on your Ubuntu system.

Each RemoteApp application runs in its own “window” on your Ubuntu Desktop.

This means the RemoteApp application can be used like any other locally installed Ubuntu application, without bringing the full windows desktop to the user.

You can download WinConn from the website:

NOTE:  WinConn’s Launchpad PPA has not been updated beyond Ubuntu 12.10 (i.e. the Quantal release)!  So to use the PPA on a newer version of Ubuntu (13.04, 13.10, 14.04 etc.) you will have to edit your /etc/apt/sources.list file and manually add the following 2 lines of text:

      deb <your ubuntu version here – example = trusty> main
      deb-src <your ubuntu version here> main

Save the edit of the /etc/apt/sources.list file and then do:

      sudo apt-get update && sudo apt-get install winconn -y

WinConn simplifies use of FreeRDP without resorting to the Command Line Interface.   It makes it easy to not only run your RemoteApps but also to specify a local “shared” directory where your Windows application can push/pull documents or files to/from your Ubuntu environment and those Windows applications.

So Let’s see a Movie and All of this in Action

Although I “could have” done a short video from my own Ubuntu desktop I’m basically lazy.

So I’m just going to show you what this Guide is all about… via an existing video produced by Alex Stanev (WinConn) and posted on Vimeo.


The following video is an Ubuntu 12.04 Desktop PC running WinConn to present RemoteApps published from a WINDOWS 2008 server.

What this guide has been all about is accomplishing that very same function, but instead of a Windows 2008 server publishing the RemoteApps, this Guide shows you how to use a Windows VM to do the very same thing, with the same if not better performance!!

I did this because it’s more likely that people have a Windows 7 or WinXP license than a Windows 2008 server license handy.

Watch the following Video from  Alex Stanev demoing the WinConn RemoteApp manager on an Ubuntu 12.04 desktop:

Ways to make this even more Productive

Up until now we’ve only discussed how you can make any Windows Application a RemoteApp and then run those apps in their own “window” on your Ubuntu desktop without seeing the whole Windows desktop.

There is a way to make this even more productive for yourself and actually avoid having to setup each and every Windows Application as a RemoteApp.

How do you do that?    Well, a unique capability of this approach of using RemoteApp is that if you set up the Windows Explorer program itself as a RemoteApp and publish it, then when you run it (using FreeRDP or WinConn) you will see Windows Explorer appear in its own window on your Ubuntu Desktop.

One capability that Windows Explorer brings to the table is that in Windows itself it allows you to find an executable .EXE program, .BAT batch file or .COM file and just click on it to run that program.

So as I’d said earlier … I’m basically a lazy kind of guy so if I can keep from doing extra work by doing something smarter, all the better.

So let’s set up Windows Explorer, publish it as a RemoteApp and then run it.

RemoteAppTool First Use Screen with No RemoteApps

First we log back into our Windows 7 VM and again start the RemoteAppTool application (as Administrator).

Next click on the Create New button and enter the form’s fields to begin the process of specifying Windows Explorer (explorer.exe) as a RemoteApp.   We’ll explain why we pick Explorer.exe later.

remoteapptool new entry screen

After entering a name for our RemoteApp (note: this “name” can be anything that is meaningful to you)… click OK.

RemoteAppTool New App Properties Entry Screen

Now to the RIGHT of the “PATH” entry there are 3 dots (…) – Click on those 3 dots.

This will bring up explorer and allow you to search on your system for the program you want to make a RemoteApp.

In our case we want to actually make Explorer.exe itself a RemoteApp, so click on Computer, click on the C: drive, click on Windows, then scroll down until you see explorer.exe; double-click on it and it will be added to the RemoteAppTool screen entries for you.

-OR-   you should be able to enter exactly what I have in this picture, as all Windows systems use the same %windir% variable to specify the location where Windows system applications live (%windir% = the c:\Windows directory), which is where explorer.exe is located.

When you are done completing this form…  again, remember to hit “Save”.

Now you have published Windows Explorer (explorer.exe) as a RemoteApp!!!   It’s that simple with RemoteAppTool.

Let’s go back to the Ubuntu desktop.

Now let’s create a directory in your Linux system that you can use for any and all exchange of files between Ubuntu and Windows 7.    If it is not in your home directory or “Documents” directory, make sure you have READ/WRITE privileges to wherever you create it.

Let’s use /opt, and so we remember what this directory is for, let’s just call it “win-share”.   Since we are using /opt you will have to use “sudo” to give you the permissions to create the new directory and to change its permissions for access.

$ sudo mkdir /opt/win-share

$ sudo chmod 777 /opt/win-share

and then start WinConn up again.

This time let’s configure just a single entry, for the explorer.exe RemoteApp we just published.

!! Remember to click the Save button !!

Now before you exit WinConn lets do one more thing.

Click on the little menu Icon

If you just hover your mouse over it on the WinConn menu you will see that it lets you create an Ubuntu Desktop Launcher.

Let’s do that now !

After you’ve done this you will see a new Launcher Icon on your Ubuntu Desktop which is labeled appropriately enough… Windows Explorer.

If you’ve followed all these steps so far all you have to do now is click on that Icon to bring the Windows 7 – Windows Explorer (explorer.exe) onto your Ubuntu desktop in its own window which you again can resize, minimize etc.

I am going to assume everyone has used Windows Explorer so after it appears on your Ubuntu Desktop use Explorer to search for some other Applications you’ve installed on Windows 7 and click on any of them just as if you were in Windows itself.


You will see the application you clicked on also appear on your Ubuntu Desktop and because you previously configured /opt/win-share and made it accessible you can use any application now and save or open files in the /opt/win-share directory.

NOTE:   if your application does not appear, the Windows Ultimate or Enterprise edition installation “may” need an extra entry put into the Windows Registry using the “regedit” tool.   You may also see an error if you use FreeRDP from the command line that says:      error: RAIL exec error: execResult=RAIL_EXEC_E_NOT_IN_ALLOWLIST NtError=0x15

To fix this problem do the following simple steps MAKING SURE to follow them all.

  1. On your Windows 7 virtual machine, run Regedit from the search box on the lower left side of the Start menu.
  2. When Regedit pops up, start clicking down the following path: HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT (i.e. click HKEY_LOCAL_MACHINE, then click SOFTWARE, then Policies, etc.)
  3. Once you are at Windows NT, check to see if there is a KEY entry already named “Terminal Services” beneath it. If there is NOT, then do steps 4 & 5; if it is already there, skip to step 6.
  4. Right-click on Windows NT, select/click on NEW and then select KEY (note: this will create a new “key” entry box underneath Windows NT).
  5. Change the name of that new KEY to “Terminal Services”.
  6. Click on Terminal Services.
  7. In the right-hand window of Regedit, right-click and add a new DWORD (32-bit) value.
  8. Name that value “fAllowUnlistedRemotePrograms” and set its value to 1.
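If you are comfortable with the Windows command line, the same registry change can be sketched as a single command run from an elevated Command Prompt on the Windows VM (this writes the same key and DWORD value as the Regedit steps above):

```bat
rem Run as Administrator; /f suppresses the overwrite prompt.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fAllowUnlistedRemotePrograms /t REG_DWORD /d 1 /f
```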

When you are done your Regedit screen should look like the following:

Regedit entry to fix problem with FreeRDP execution of RemoteApp ending in error15

Now you are ready to try all of this out.     In WinConn double-click on the entry for Explorer and you should see your Windows Explorer pop up in a Linux window all by itself (i.e. without the rest of the Windows Desktop).

If you use Windows Explorer to find and launch the Microsoft Paint program (mspaint.exe), you will see something like this… with my great artwork.

If you click on the Paint program button that looks like a floppy disk you will see the “Save As” menu appear.

Click on the “file sharing” directory we created earlier in /opt.

Note: Because we are using WinConn it will appear as a folder called something like:


In my case, the shared directory is labeled in the Windows Save As menu’s left panel as:

“winconn on ubuntu-2tb”

Why… because my Ubuntu system’s hostname is “ubuntu-2tb”.

Enter the File Name (My Cool Beans drawing) and Save as type (I chose JPEG) then click on the Save button.

Now go back to your Ubuntu desktop and click on Nautilus (in Ubuntu) or whatever file manager tool you use, go to the /opt/win-share directory, and you will now see:

Since we are on our Ubuntu system, just click on the “My Cool Beans drawing.jpg” and open it with an appropriate application, and you will see:

This is…  Cool Beans … isn’t it !!!

Some Parting Thoughts

I hope some of you find this a useful approach to your own “required windows app” problem.

You now know how to:

  • run any Windows App as a Windows RemoteApp.
  • Share files between your Ubuntu and Windows VM

Now… WHY did we make Explorer.exe  itself a RemoteApp program??

Because Simplicity is our Friend

Go back and log into your Windows 7 VM again.

Now create a directory in Windows (let’s call it “WinApps”), then use Windows’ own Explorer to find every application you want to use; either copy the application or create a “shortcut” for each, and put them in our “WinApps” directory folder.

Now log out of Windows 7 and return to your Ubuntu system.   Click on the Windows Explorer Launcher that we created earlier using WinConn, then using Explorer change to our new “WinApps” directory and you will see:

From now on to launch ANY of your Windows applications as a RemoteApp click on your Windows Explorer Launcher that we created earlier using WinConn.

Then in Windows 7, using Explorer, change to our new “WinApps” directory and click on ANY of those applications and it will appear on your Ubuntu desktop.

How simple can it be?   From now on at most you only have 3 steps to do.

Actually, once Windows Explorer has appeared on your Ubuntu Desktop there is only 1 step… clicking on the Application(s) you want.     You can launch multiple apps and    they will all appear in separate windows on your Ubuntu Desktop.

So to review….  On your Ubuntu desktop:

  1. Click on the Windows Explorer Icon to run explorer.exe as a RemoteApp.
  2. When Explorer appears, change to your own WinApps directory on your Windows Virtual Machine.
  3. Click on any application shortcut you’ve placed in the WinApps directory and it will be launched as a separate RemoteApp and appear in its own Ubuntu Desktop window!

Best of all, you only had to setup one RemoteApp while in Windows using the RemoteAppTool and now any Windows installed application is available to you from your Ubuntu Desktop !

Also, every Windows application will be able to open/save files to our Ubuntu systems /opt/win-share directory and so will any of your Ubuntu applications.

Note:  All of this works because any Windows application launched by Windows Explorer “inherits” the Windows environment of Windows Explorer.    In our case, any program started by our “RemoteApp” Windows Explorer will “inherit” being a RemoteApp itself!

Every Windows application you launch  will appear on your Ubuntu Desktop in its own “window”.   

All because … Windows Explorer was setup as a RemoteApp … so any/all applications it launches will also be RemoteApp enabled.

Advanced Windows 7 Configuration Setup

Windows 7 has the ability to be extremely customizable by anyone with Administrator privileges.

To further customize our Windows 7 VM let’s go through some of these steps.

NOTE:   BEFORE we start this section of the Guide, let’s make a KVM clone of our existing Windows 7 VM.

Log into the Windows VM and do a SHUTDOWN.

When the Windows VM has terminated use the Ubuntu  KVM Virt-Manager to create a “clone” of your Windows 7 VM.   That clone will be a snapshot of the configurations we have done so far.
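From the Ubuntu host, the same clone can also be made from the command line with virt-clone, as in the following sketch. “win7” and “win7-backup” are placeholder VM names; --auto-clone lets virt-clone pick the new disk image path for you.

```
# Clone the shut-down Windows VM as a backup snapshot.
virt-clone --original win7 --name win7-backup --auto-clone
```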

Note:  This clone will be our backup in case we make any mistakes in this advanced configuration section of the Guide and we don’t know how to reverse what we’ve done.

Next, restart your original Windows VM and log in again.

Once you are at the Windows Desktop, click the Start button in the lower left and in the search box enter the name of the Windows Group Policy Editor tool:  gpedit.msc  then press Enter.

gpedit.msc is a very powerful tool as it will let you literally change any setting in Windows.

Note:   this is why we made a backup clone of all our previous work !!

A nice feature of gpedit.msc is that if you click on any configuration entry in the left side of its display you will see on the right-side an “edit” box which:

  • explains in plain language what the edit options are for that feature
  • provides a simple check-box type configuration entry to change that feature’s options
  • provides simple Previous/Next buttons to move either to the next feature to edit or the previous.

For this Guide the following is only to demonstrate HOW TO change Windows 7 feature settings using the Windows Group Policy Editor tool called “gpedit.msc”.

Just to illustrate how the gpedit.msc tool works I’ll show you how to find/change one Feature option related to RDP User Sessions.

Note:   this is not necessarily something you have to do but is a useful demo so you can see the displays that gpedit.msc will present to you.   

gpedit.msc is so useful to customize Windows that there are probably other Features/settings you will want to modify to customize your Windows application and RDP session use.

Let’s demonstrate how to find/modify the feature options for the number of RDP connections a user can have.   This feature setting is called:   Limit number of connections

To find this feature in order to change its settings:

1) In the left-side panel click on “Computer Configuration”
2) Click on its sub-option “Administrative Templates”
3) Click on the Administrative Templates sub-option “All Settings”
4) In the right-side panel you can scroll or page up/down until you see “Limit number of connections”, then click on that entry to edit its feature settings.

When you click on “Limit number of connections” you will be presented with the feature option editor that, as we’ve mentioned earlier, will explain in its “Help” box what each setting does so you understand what your changes will do.

In the case of  “Limit the number of connections” we want to change this to “unlimited” so enter 999999 into the RD Maximum Connections allowed field and then Click the Apply button.

Note:  After Clicking on Apply, the change becomes immediately active in Windows.

Click OK to return to the Group Policy Editor (gpedit.msc) so we can make more changes.

Note:  without buying Terminal Server licenses for multiple users (CALs) from Microsoft, by default Windows will only allow 2 desktop connections!

Important Note:  When using FreeRDP the way we have set it up, the only RemoteApp is the Windows Explorer program.   This only requires one RDP connection but enables you to start many Windows Applications up concurrently that will appear on your Linux Desktop and look/act like any other Linux application.

But it will only use a single RDP connection for all applications launched from that Windows Explorer !!

There are so many Windows system configuration options available (literally hundreds) to edit in gpedit.msc that it’s almost impossible to go through all of the useful ones for your particular situation.

The Group Policy Editor (gpedit.msc) is so user-friendly, with its “help” display on each feature entry, that it is hard to mess up your Windows installation… but never say never, and you could do something that might cause you a problem.   Again, that’s why we created our snapshot KVM clone of our Windows VM.

Search through all of the editable Features presented in gpedit.msc and tune your Windows system however you like.   Some often configured items revolve around settings for:

Enabling/disabling non-admin users to Shutdown the Windows system

Setting a time limit for disconnected RDP sessions before terminating the User’s Session.

Specifying a program to be run each time an RDP connection session is created.

And many more….

So browse the Features settings and change what makes sense to you and your use of Windows .

Have fun….

You gotta love Open Source Software and Solutions it makes possible !

Brian Mullan

Raleigh, NC

August 21, 2009

A bit off topic but worth mentioning…

There are so many great things happening today with how we all use the Internet, computing devices, and software that it’s just hard to keep up.

Lately I’ve had to look more into several things, some known for a while, some just stumbled upon.

The group producing TurnKey Linux applications has always been interesting in their approach to application deployment.    Their collection of Open-Source applications has been growing steadily.   Take a look and try one out.   In just a few minutes you can have a top server/service application up and running.

An interesting product I also bought to play with was the Marvell SheevaPlug development kit… about $120… I still wonder about all the things you could do with this technology.   Marvell is a leading custom silicon manufacturer and their Plug Computer technology is way cool to work with.    I got several, and one was configured in 20 minutes as a Samba server for my house.  I’d connected a 250GB mini USB hard drive to it with all my music and let the really tiny Plug Computer serve music to all the devices in my house.

So my future plan involves a 6-port self-powered USB 2.0 hub connecting three 500 GB mini-USB drives to three Plug Computers.

The 1 Gigabit Ethernet port on each Plug Computer connects to the hub on the back of my Wireless N router.

For my fooling around, this basically becomes 1.5 TB of storage with three 1.2 GHz ARM processors, each with its own 512 MB of FLASH and 512 MB of DDR memory.    Adding a 16 GB or 32 GB USB flash drive to each Plug Computer gives them plenty of intermediate-speed storage, and since Ubuntu Linux is preinstalled it can take advantage of it all.

Sort of like a poor man’s mini data center: $100 each for the Plug Computers, $100 each for the USB mini-drives, $40 each for the 16 GB flash drives, and $35 for the 6-port USB hub.

TOTAL: around $750

OH… and for the GREEN of you…

Each SheevaPlug computer uses less than 5 Watts, and USB 2.0 supplies up to 2.5 W (500 mA @ 5 V) to each connected device (in our case, 3 USB mini hard drives)…

Our $700 Mini-Data Center runs on less power than a 30 Watt Light Bulb !!
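As a quick sanity check of the parts list and power budget above (using the rough per-item prices and the 5 W / 2.5 W figures quoted):

```python
# Back-of-the-envelope check of the mini data center's cost and power draw.
# Prices and wattages are the rough figures quoted in the text above.
plug_computers = 3 * 100   # 3 Plug Computers @ ~$100 each
usb_drives     = 3 * 100   # 3 mini-USB hard drives @ ~$100 each
flash_sticks   = 3 * 40    # 3 x 16 GB USB flash drives @ ~$40 each
usb_hub        = 35        # one 6-port self-powered USB hub

total_cost = plug_computers + usb_drives + flash_sticks + usb_hub

# Power: each Plug Computer draws < 5 W, and each USB-powered drive
# can draw at most 2.5 W (500 mA @ 5 V) from a USB 2.0 port.
total_watts = 3 * 5 + 3 * 2.5

print(f"parts total: ${total_cost}")   # about $755 all-in
print(f"peak power:  {total_watts} W") # 22.5 W, well under a 30 W bulb
```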

Imagine the kinds of things you could do with that kind of setup running this shoe-box-sized mini server farm: say an Apache web server, an email server, a low-end SQL database.    You might run for a brewski when the idea hits you, but all that technology really did fit in a shoe box… I only had to cut a hole in the side for the power and Ethernet cables.    Think about that for a second.     I forgot to mention that it also included my Open-Mesh wireless G connectivity (described in a bit).

I’ll try to post a picture later.

Yes, and for the naysayers…

  • Yeah, the mini-USB 2.0 drives don’t provide the throughput of SCSI or eSATA,
  • and the ARM processors don’t have a floating-point coprocessor,
  • the DRAM is just 512 MB, and additional memory currently has to come via slower FLASH

But less than 5 years ago you’d have paid thousands for these same capabilities, and it would have taken a rack to hold it all.

Again, I can’t begin to think of all the things you could do with this kind of stuff.

Check out what GlobalScale’s GuruPlug Server provides for $130.    GlobalScale uses the Marvell chipsets.

What’s in the GuruPlug Server package: eSATA, USB, and Gigabit Ethernet. Marvell has released new chipsets/boards that even add Wireless LAN and Bluetooth.

GuruPlug Server System

Next on my piqued-interest list was when I found Open-Mesh’s “mesh” wireless technology products.

Open Mesh wireless router

Almost incredibly easy to set up (5 minutes, tops) and based on Open Source and standards.     These devices are really inexpensive ($29-$59), but the Open-Mesh wireless networking technology is another really cool idea that I think will have people inventing ways to use it beyond coffee shops, where you drink your hot Java and work or check Facebook and LinkedIn.

What blew me away was how great their free wireless management system is: Open-Mesh’s Wireless Dashboard… make sure you take a look and think about what you are seeing for a bit.

Open Mesh Wireless - Dashboard

It was great to set it all up then see a Google Map on my PC showing all the Open-Mesh nodes overlaid on the map as icons.

Open-Mesh provides a free administration, alerting, and mapping system called the “dashboard”.  It allows you to configure the SSIDs, splash page, passwords, and user bandwidth for your network.

Click on a red/yellow/green icon and get details of the node’s status, traffic, etc.   These Open-Mesh devices only support Wireless G right now, but I understand N is in the works.   Heck, hard to pass up at $29-$59 each considering the wireless capabilities and, more importantly, the free wireless management you get with the mesh wireless routing.

But let’s get back to my overall theme…

Maybe the mini data center will get wedded to the Open-Mesh wireless boxes and produce smarter kids by adding computing capabilities to schools more inexpensively!!!

Maybe… get it??    With this economy, help your community and help your schools… it helps kids, and as the old saying goes:

Kids may be only 30% of the population, BUT GUARANTEED they are 100% of tomorrow!



August 3, 2009

Part 2 – Using Cloud & Virtualization Technologies for Education -or- how Education and the Cloud met, married and had smarter kids!

Here I continue my last discussion about K-20 education and how cloud technology might be put to use there.

Lately, I’ve been following this thread… and would like to share some ideas and thoughts with you all…


Message: 1
Date: Thu, 30 Jul 2009 15:32:18 -0600
From: xxxxxxxxxxxxxx
Subject: Re: [Ltsp-discuss] Recommend Server for 25 clients
Content-Type: text/plain; charset=UTF-8

On Thu, Jul 30, 2009 at 1:40 PM, xxxxx xxxxxxxxx<xxxxxxxxxx> wrote:
> xxxxxx xxxxxxxxxx :
>> How powerful server would you recommend for 25 users ?
> “Server sizing in an LTSP network is more art than science. Ask any LTSP
> administrator how big a server you need to use, and you’ll likely be
> told “It depends”.”


So I replied to that thread with the following response, which I’ll share here on my blog…

I’ve been using Amazon Web Services (AWS) ie Amazon’s cloud for K-20 proof-of-concept work. So bear with me while I describe some things…

  1. Amazon’s Elastic Compute Cloud (EC2) service is very inexpensive and easy to use, and provides 5-6 different choices of “compute resources” (ie servers).
  2. Amazon uses a “Utility” based pricing model: you pay only for how much of something you use, like water or electricity, and only when you are using it.

ie.  need a bigger server… just pick one, start it up (“Launch” it, in AWS terminology), and migrate your apps (won’t go into that here)

Need 10 or 100 servers… easy… pick the server model (Linux/Windows, 32/64-bit, etc.) — this is called an AMI, an Amazon Machine Image — and when you LAUNCH the AMI just put the number of servers you need into the “Number of Instances” box that pops up when you select LAUNCH for the AMI you picked.

5 minutes later… they will all be running.

You manage all the startup/shutdown, IP addresses, security firewall/access lists, etc. using Amazon’s web-based AWS Management Console.

Now I’ve always wanted to say this… But WAIT, there’s MORE… it gets better yet <g>!!

You can take ADVANTAGE of Amazon’s Auto-Scaling and Auto-Load-Balancing features.

Since AWS costs are billed like a utility… you can start off with just 1 server at 5am and, if you set it up for auto-scaling…

As student/teacher load starts to build, say around 9am, the server “can” Auto-Scale UP by cloning itself, and at the end of the day the servers will Auto-Scale DOWN by terminating themselves when no longer needed (ie you don’t pay for them when they aren’t running).   You are the one who configures the parameters for the UP/DOWN auto-scaling.

Try doing that in your school or data center, where first you have to buy the servers, rack/stack/cable them, pay for HVAC, maintenance contracts, and insurance, replace parts, etc.

I like letting Amazon worry about that stuff!

I will copy some information from the AWS web site.

You can sign up for an AWS account free (again you only get billed if you start using something).

As you can see below a “small” server costs just 10 cents/hr while the largest (8 or 20 core) just 80 cents/hr.

I learned about AWS by starting a “small” Ubuntu server, installing my applications, testing, etc., then blowing it away when I was done.   I spent 4-5 hours a day (about $0.50/day) doing this.   It was very easy to learn!


Instance Types

Standard Instances

Instances of this family are well suited for most applications.

  • Small Instance (Default) (ie virtual server)
    • 1.7 GB of memory
    • 1 virtual core
    • 160 GB of instance storage
    • 32-bit platform
  • Large Instance (ie virtual server)
    • 7.5 GB of memory
    • 4 core
    • 850 GB of instance storage
    • 64-bit platform
  • Extra Large Instance (ie virtual server)
    • 15 GB of memory
    • 8 core
    • 1.7 TB of instance storage
    • 64-bit platform

High-CPU Instances

Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

  • High-CPU Medium Instance
    • 1.7 GB of memory
    • 5 core
    • 350 GB of instance storage
    • 32-bit platform
  • High-CPU Extra Large Instance
    • 7 GB of memory,
    • 20 core
    • 1.7 TB of instance storage,
    • 64-bit platform



NOTE:   as of 9/2010 AWS has introduced an approximately 18% price decrease for most of the AWS EC2 compute instance sizes.    The pricing below does NOT reflect this change.

AWS has also introduced a new “micro” instance which provides 640 MB of RAM and roughly half a CPU for only $0.02 per hour… about 48 cents per day!?

Pay only for what you use. There is no minimum fee. Estimate your monthly bill using AWS Simple Monthly Calculator.
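To make the utility pricing concrete, here is a tiny back-of-the-envelope estimator using the rates quoted in this post ($0.10/hr for a small Linux instance, $0.02/hr for the micro instance); remember that each partial instance-hour is billed as a full hour:

```python
import math

def cost(hourly_rate, hours):
    """EC2-style billing: each partial instance-hour counts as a full hour."""
    return math.ceil(hours) * hourly_rate

# A small Linux instance used ~5 hours a day while learning:
print(round(cost(0.10, 5), 2))        # about $0.50/day

# A micro instance left running around the clock:
print(round(cost(0.02, 24), 2))       # about 48 cents/day

# A small instance running 24/7 for a 30-day month:
print(round(cost(0.10, 24 * 30), 2))  # about $72/month
```

Note that 4.2 hours of use bills the same as 5 full hours, which is why `math.ceil` appears in the sketch.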

On-Demand Instances

On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments.

This frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.

The pricing below includes the cost to run private and public AMIs on the specified operating system.

Amazon also provides additional instance options, for Amazon EC2 running Microsoft Windows and Amazon EC2 running IBM software, that are priced differently.

United States

Standard On-Demand Instances Linux/UNIX Usage Windows Usage
Small (Default) $0.10 per hour $0.125 per hour
Large $0.40 per hour $0.50 per hour
Extra Large $0.80 per hour $1.00 per hour
High CPU On-Demand Instances Linux/UNIX Usage Windows Usage
Medium $0.20 per hour $0.30 per hour
Extra Large $0.80 per hour $1.20 per hour

United States
Standard On-Demand Instances Linux/UNIX Usage Windows Usage
Small (Default) $0.11 per hour $0.135 per hour
Large $0.44 per hour $0.54 per hour
Extra Large $0.88 per hour $1.08 per hour
High CPU On-Demand Instances Linux/UNIX Usage Windows Usage
Medium $0.22 per hour $0.32 per hour
Extra Large $0.88 per hour $1.28 per hour

Pricing is per instance-hour consumed for each instance type, from the time an instance is launched until it is terminated. Each partial instance-hour consumed will be billed as a full hour.

Reserved Instances

Reserved Instances give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly usage charge for that instance.

After the one-time payment for an instance, that instance is reserved for you, and you have no further obligation.

You may choose to run that instance at the discounted usage rate for the duration of your term; when you do not use the instance, you will not pay usage charges on it.

United States

Linux/UNIX One-time Fee
Standard Reserved Instances 1 yr Term 3 yr Term Usage
Small (Default) $325 $500 $0.03 per hour
Large $1300 $2000 $0.12 per hour
Extra Large $2600 $4000 $0.24 per hour
High CPU Reserved Instances 1 yr Term 3 yr Term Usage
Medium $650 $1000 $0.06 per hour
Extra Large $2600 $4000 $0.24 per hour

United States
Linux/UNIX One-time Fee
Standard Reserved Instances 1 yr Term 3 yr Term Usage
Small (Default) $325 $500 $0.04 per hour
Large $1300 $2000 $0.16 per hour
Extra Large $2600 $4000 $0.32 per hour
High CPU Reserved Instances 1 yr Term 3 yr Term Usage
Medium $650 $1000 $0.08 per hour
Extra Large $2600 $4000 $0.32 per hour

Reserved Instances can be purchased for 1 or 3 year terms, and the one-time fee per instance is non-refundable.

Usage pricing is per instance-hour consumed.

Instance-hours are billed for the time that instances are in a running state; if you do not run the instance in an hour, there is zero usage charge. Partial instance-hours consumed are billed as full hours.
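Given those numbers, it is easy to work out when a Reserved Instance beats On-Demand. Using the 1-year small-instance figures above ($325 one-time fee, $0.03/hr reserved vs $0.10/hr on-demand), a rough break-even calculation:

```python
import math

one_time  = 325.00  # 1-yr Reserved one-time fee, small instance
reserved  = 0.03    # reserved hourly usage rate
on_demand = 0.10    # on-demand hourly rate

# Hours of runtime at which the reserved fee has paid for itself:
break_even_hours = one_time / (on_demand - reserved)
print(math.ceil(break_even_hours))  # 4643 hours, about 6.4 months of 24/7 use

# Cost comparison for an instance running 24/7 for a year (8760 hours):
yearly_on_demand = 8760 * on_demand
yearly_reserved  = one_time + 8760 * reserved
print(round(yearly_on_demand, 2))   # 876.0
print(round(yearly_reserved, 2))    # 587.8
```

So a reservation only pays off if the instance runs more than about half the year; for the 5-hours-a-day experimenting described earlier, On-Demand stays cheaper.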

Here’s how I make use of this.

On AWS you can pick from hundreds of pre-built “public” server types (different flavors of Linux: Fedora, Ubuntu, CentOS, etc.), 32-bit or 64-bit.

Some are “server” Linux, some are desktop Linux.

Some have been built with apps already installed (Apache, MySQL, etc.).

You get the idea.

So what have I been doing for kids/education… ?

Server Side:

I’m using AWS Desktop images on which I’ve installed the x2go server.

x2go utilizes the NoMachine NX transport protocol libraries, which are Open Source, but x2go implements its own server-side and client modules.   The server side comes in a single-user home version, and there is also an x2go server implementation that is clustered and load balanced.

Unlike NoMachine’s current NX server/client, where audio is a big problem, x2go supports audio extremely well from server to client.    Local printing and sharing of folders between server and client are also supported.

Client Side:

The client side boots off of an Ubuntu USB thumb drive preloaded with the x2go Open Source Windows, Mac, or Linux clients.

x2go has also introduced a Web Portal capability for accessing the remote desktop.    Any user with a browser that supports Java can now access the Remote Desktop without installing any other client software on their local PC.

Each kid can have one and that way they can use it at school or — at home (same desktop, same cloud servers as at school).

Since the “real work” in terms of CPU and storage is out on the AWS “cloud”, it does NOT even matter what type of PC they use… the local machine is basically just for booting off the USB drive and providing the local keyboard, mouse, screen, and network connection (everything becomes a thin client):

  • old pc, new pc
  • old laptop, new laptop
  • netbook
  • thin client

The “Desktop” that the students see is exported over NX from the AWS Desktop server, where I can have from 1 to 20 cores and as many servers as I want… or can pay for <g>.

And because storage using AWS’s S3 (Simple Storage Service) and EBS (Elastic Block Store) is more or less infinite (at least as far as I’m concerned), capacity isn’t a worry either.

Now, how’s performance?

Well, you have to have a working and stable local network first of all, but that’s true even if you’re using a client/server model or a thin-client model like LTSP or Citrix.

The NX protocol is terrific and you can read about just how good it is here.

Here’s my basic process to create a server IF I start from one of AWS’s public Amazon Machine Images (AMIs):

  1. Launch the AMI instance I want.
  2. Modify it by adding all the applications I need and configuring everything.
  3. Save the running “instance” to an S3 storage “bucket” using the free AWS EC2 AMI tools.
  4. Re-register my saved image as a NEW Amazon AMI (once registered with AWS, I’ll be able to LAUNCH it from the AWS Management Console like any other AWS AMI).
  5. I then LAUNCH my new image like any other AWS AMI:
    1. tell AWS how many “instances” (ie # of virtual machines)
    2. tell AWS what size server (32/64-bit, small… up to Extra Large)
    3. assign my firewall/access lists to the new instance
    4. create and assign an AWS Elastic IP address to MY “instance” (simple, takes 2 seconds)
  6. Once it’s in a “running” state, just use the AWS cloud-based server.
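For the command-line inclined, the save/register/launch steps above map roughly onto Amazon’s EC2 AMI/API command-line tools. This sketch only assembles and prints the commands for review; every bucket name, key file, account number, AMI ID, security group, instance ID, and IP address in it is a made-up placeholder:

```python
# Rough sketch of the AMI save/register/launch workflow (steps 3-5 above),
# expressed as classic EC2 AMI/API tool command lines.
# ALL names and IDs below are hypothetical placeholders.
steps = [
    # 3. bundle the running instance's volume and upload it to an S3 bucket
    "ec2-bundle-vol -d /mnt -k pk.pem -c cert.pem -u 111122223333",
    "ec2-upload-bundle -b my-ami-bucket -m /mnt/image.manifest.xml",
    # 4. register the saved bundle as a new AMI
    "ec2-register my-ami-bucket/image.manifest.xml",
    # 5. launch 2 small instances in a security group, then attach an Elastic IP
    "ec2-run-instances ami-12345678 -n 2 -t m1.small -g my-school-group",
    "ec2-allocate-address",
    "ec2-associate-address -i i-87654321 192.0.2.10",
]

for cmd in steps:
    print(cmd)
```

In practice you would substitute your own credentials and IDs (and `ec2-upload-bundle` also wants your AWS access keys) before running anything for real.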

Elastic IP Addresses – Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your account, not a particular instance, and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to any instance in your account. Rather than waiting on a data technician to reconfigure or replace your host, or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by quickly remapping your Elastic IP address to a replacement instance.

By the way, in case this isn’t obvious… got a new school that needs to be setup?

Other than the USB drives for the kids and some kind of computer for them to use… the server can take only minutes to set up, and there’s no physical installation involved!!!

Finally, I use my local machine with NX Client software to log in and I get a Desktop… and it’s all PFM …  magic !

Today (right now) I’m writing this while I have 4 AWS servers running that I am testing.

On my desk is a Lenovo T61p laptop

  • Dual Core
  • 4 Gig RAM

next to it I have an ASUS 1000HE Netbook

  • Atom processor
  • 1 G RAM

Both machines booted off of a USB.

I next used the  NX Client software to log into one of my AWS Desktop servers on each one and started working.

Performance is exactly the same on both clients (well, the ASUS display can only go 1024×600).

I wrote this in my AWS desktop server session using the ASUS while several of the sessions on the Lenovo were doing some other things for me.

I’d really like to get more of the Linux K-12 and K-20 community trying this so we can all share more of what we are doing for the education of our kids.

Let me know if any of you would like some more pointers or information; as I said, I’d like some folks to work with on all of this.

I’ve also got some pretty cool AWS based solutions for the “Windows” in your life…

Hope you found this interesting!

Brian Mullan

June 18, 2009

Part 1 – Using Cloud & Virtualization Technologies for Education -or- how Education and the Cloud met, married and had smarter kids!

U.S. Education Secretary Arne Duncan wants to use some seed money in a Race to the Top to see what innovative States can come up with in regards to best ideas, concepts, implementations and results.   Good idea… kind of like prototyping and trialing then picking the best.

From my view there are many things that can be addressed in education.   Technology being just one of several approaches to the overall issues related to improving K-12 education.

I recently heard a short comment that made an impression.

In 1909 if you had gone into a classroom in a large city school you would have seen kids seated at desks with pencils and paper.

At the front of the classroom would be a teacher sitting facing the children with the teacher’s pencil and paper on her desk.

Of course books would be on the desks and a blackboard with chalk on the front wall.

Fast forward 100 years to 2009.

How much has that picture really changed ?

Ok… there may be some classrooms at some schools that have some “newer” technologies

  1. a projector ? some
  2. <let’s skip a few eras of technology here>
  3. a computer on every desk ? more rare than common
  4. networked servers/computers ? rarer than #3
  5. maintained networked computers ? rarer than #4
  6. #4 & #5 maintained by someone other than the Librarian and the Librarian’s assistant ???

Well you get the idea and if you work at or for a school you know the picture.

Click here to see some “Race to the Top” Slides

Geez where to start?

I am fairly certain that Cloud and Virtualization technologies are going to play major roles in some of the successes.

But what kind of Cloud ?   Private, Public… hybrid?  And what’s the Total Cost of Ownership (TCO) for each of those paths?


  • Private: the State or the LEA owns/manages/pays for a Data Center and support staff, electricity, equipment, heat/air, safety, insurance


  • Public: Amazon Web Services (AWS), Rackspace, or Google owns the infrastructure, etc., but you may still be the “operator”


  • Hybrid: a Private Data Center augmented by compute or storage resources provided by a Public cloud provider

Well, let’s make it more muddled.

Should you go with an Infrastructure-as-a-Service (IaaS) cloud provider like Amazon?

Or Amazon as Software-as-a-Service (SaaS)? Yes… it does exist, via 3rd-party developers that are offering many services ranging from DB2, Oracle, mail, web servers, video servers, etc.

What about using Google as a Platform-as-a-Service (PaaS), where you write or rewrite your own applications using Python/Java and then host them on Google?

Or possibly Google as a Software-as-a-Service (SaaS) cloud provider (think Gmail, Google Docs)?

I don’t think there necessarily has to be one choice… or one Cloud Service…  after all it is the Internet.

To get started, I think one of the first things that should be done is getting all the schools in all the LEAs onto a level starting platform.   Why?

Some schools have

  • old Desktops
  • new Desktops
  • old laptops
  • new laptops
  • thin clients (ie using something like Citrix)
  • maybe netbooks

The above computers may vary:

  • CPUs ranging from Pentium to Dual-Core Intel to AMD to Atom processors
  • Memory ranging from 512 MB to 4 GB
  • Hard disks (if they have them) of 40 GB – 100 GB

Network connectivity ranges from

  • 10Mbps to 100Mbps ethernet
  • Wireless B, G or maybe N

For the most part those computers run Windows, but that can mean anything from Windows 95 to Windows 98, Windows 2000, XP, or Vista.

Sorry, Mac and Linux users… gotta focus here to make a point.  We’ll get to you later.

To level the starting platform you can’t just tell people to junk everything… and for the most part there isn’t a reason to, if you think of clever solutions.

That’s enough to start the conversation… I’ll add more later but wanted to get my ramblings on this topic started.

Brian Mullan
