Education and the Cloud

May 12, 2015

Proof-of-Concept – Using Mesh VPN to interconnect LXC containers on Multiple Hosts on Multiple Clouds

Filed under: LXC, mesh VPN, ubuntu, VPN — bmullan @ 4:34 pm

Proof-of-Concept

Secure Mesh VPN Network Interconnect for

LXC containers in Multiple IaaS Clouds

by Brian Mullan (bmullan.mail@gmail.com)

April 2015

Preface

I’ll start off this blog post by saying LXC containers are great (see www.linuxcontainers.org) !

LXC provides pre-built OS containers for CentOS 6, Debian (Jessie, Sid, Squeeze & Wheezy), Oracle Linux 6.5, Plamo Linux, and multiple releases of Ubuntu from Precise up to and including Wily.

It’s important to understand that the host of LXC containers can be one distro while the containers can be any of the other supported distros. The only requirement for a container OS is that it run on the same Linux kernel as the host OS.

This post, though, is about LXC, so let’s begin.

The rest of this post covers my proof-of-concept testing of using a full mesh VPN to provide LXC container connectivity between remote hosts, whether on an IaaS cloud like AWS, Digital Ocean, or Rackspace, or on your own servers.

This document should be considered a work-in-progress draft, as I have been receiving a lot of good input from others and will continue to edit it with additional information, improvements, and corrections.

Problem Statement

On any of the existing IaaS Cloud providers like AWS, Digital Ocean etc you can easily create virtual machine “instances” of running Linux servers.

Note that some IaaS clouds (Azure and AWS, for example) also let you create Windows virtual machines, but this post is about Linux & LXC, not Windows.

Although you can create and run a Linux server in those clouds, you cannot “nest” other Linux servers inside those cloud server “instances”. By “nesting” I mean using KVM, VirtualBox, etc. inside of, say, an AWS Ubuntu server instance to create other virtual machines (a VM inside a VM). There may be some IaaS providers that permit nested VMs, but I am not aware of any; AWS, for instance, does not allow it.

The reason is that those clouds do not permit nested hardware virtualization.

NOTE:  On your home Linux PC/server you can nest KVM hardware-virtualized instances.

LXC containers are:

  • much more lightweight than full HW virtualization like KVM, VMware, VirtualBox, etc. This means LXC is faster and uses fewer “host” server resources (memory, CPU, etc.). Canonical (Ubuntu, LXC, Juju, etc.) just published performance test results of LXD (LXD utilizes LXC!) versus KVM instances; LXD/LXC far surpasses KVM in both scalability and performance. On a server where you may be limited to running ~20 HW-virtualized VMs, you may be able to run 80-100+ LXC containers.
  • able to be “nested” within a HW-virtualized Linux instance on AWS, Digital Ocean, etc.
  • sharing the same kernel as the host machine, so they can take advantage of the “host” security, networking, file system management, etc.
  • extremely fast to start up & shut down… almost instantaneous.
  • flexible: you can use, say, an Ubuntu host and run LXC containers of other Linux distros such as Debian or CentOS.

A benefit of LXC is that you can use it to create full container-based servers in IaaS clouds like AWS, Digital Ocean, etc., and you can also “nest” LXC containers (containers inside a container) on those cloud “instances”.

LXC has some “default” characteristics. These can be modified/changed, but I will not cover that in this document.

LXC containers are by default created/started/running behind a NAT’d bridge interface called “lxcbr0” which is created when you install LXC on a server.

lxcbr0 is by default given a 10.0.3.x network/subnet

NOTE: you can change this if you want/need to 

Each LXC container you create on the “host” will be assigned an IP address in that 10.0.3.x subnet (examples:  10.0.3.123, 10.0.3.18 etc).

NOTE: Your cloud “instance” (i.e. VM) will be assigned an IP address by the cloud IaaS provider at the time you create it. Actually, there are usually two IP addresses assigned: one private to the cloud and one “public” so the instance can be reached from the Internet.

The LXC containers you create & run on any Cloud instance (the “instance” will from now on be referred to as the LXC “host”) can by default reach anything on the Internet which the “host” can reach.   Again, that is configurable.

By default, all LXC containers running on any one “host” can also reach each other.

But what if you wanted LXC containers running on a host on, say, AWS to interact with LXC containers running on a host in Digital Ocean’s cloud? Without some network configuration magic they can’t: containers on one host cannot talk to containers on another host, because each set runs behind its own host’s NAT’d lxcbr0 interface.

Likewise, LXC containers running on one AWS host cannot reach LXC containers running on another AWS host (ditto for other clouds).

So the problem becomes: what if you wanted to do this?

What if you wanted your LXC containers on a host somewhere (cloud or elsewhere) to be able to reach and interact with LXC containers running on any other host anywhere (assuming firewalls etc. don’t prevent it)?

Also, how could you make this secure so not just anyone could do this?

A Solution Approach I Utilized

Virtual Private Networks (VPNs) are commonly used in the normal networking world to securely interconnect remote sites & servers.  Think of a VPN as a “tunnel”.

VPNs encrypt the data links utilized for this interconnect to keep the VPN and any data traversing it  “private”.   So a VPN is an encrypted “tunnel”.

Most common VPNs are peer-to-peer (P2P). A P2P VPN usually requires configuring each pair of servers you want to connect. If you have 100 servers or sites, that means configuring each individual site with 99 different connections (one for each “peer” site/server).

That approach, used beyond a few servers, becomes both complicated and messy to maintain.
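To put numbers on that maintenance burden, here is the arithmetic (pure shell, nothing VPN-specific):

```shell
# Point-to-point config burden grows quadratically: with n sites, each
# site carries n-1 peer configs, and the full mesh has n(n-1)/2 links.
n=100
echo "per-site configs: $((n - 1))"
echo "total links:      $((n * (n - 1) / 2))"
```

With 100 sites that is 99 configs per site and 4950 distinct links to keep consistent, which is exactly why a mesh VPN that discovers peers automatically is attractive.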

The solution is to use what is called a mesh VPN. A “mesh” VPN means that every host configured as part of the VPN can connect to every other host on that VPN without necessarily being specifically configured to do so.

 

[Diagram: mesh VPN with LXC]

In open source there are quite a few mesh VPN choices; they vary in features and in how complicated they are to set up.

Some Mesh VPN utilize a concept of a “super-node” which is used to keep a stateful database of all “member” servers/hosts that are part of the VPN.

Other Mesh VPN have been designed so as to not require a “super-node” at all!    This reduces overhead traffic to/from the “super-node” and any subsequent delays that traffic can cause.

In this blog post I am describing how to utilize one such Open Source Mesh VPN named PeerVPN (http://www.peervpn.net/) which is the work of Tobias Volk.

Key PeerVPN Features include:

  • Ethernet tunneling support using TAP devices.
  • IPv6 support.
  • Full mesh network topology.
  • Automatically builds tunnels through firewalls and NATs without any further setup (for example, port forwarding).
  • Shared key encryption and authentication support.
  • Open Source (GPLv3)

PeerVPN uses UDP exclusively and may send UDP packets that are larger than the MTU.
Tobias Volk, the author of PeerVPN, has indicated that PeerVPN fragments/reassembles packets itself to enable this MTU capability.

PeerVPN is simple to set up, creates a full mesh VPN, and does not require a “super-node”.

You can define multiple separate VPNs on each host! To define additional VPN networks, just create additional copies of peervpn.conf, using a new unique name for each.

  • edit each new configuration file (call the new config file anything you want)
  • change the networkname variable to a unique name for the additional VPN
  • change the port variable to be unique for each new VPN
  • generate a different PSK encryption/authentication key for each additional VPN and add that key after the psk variable in the appropriate VPN’s .conf file

NOTE:   All servers that you want to be part of the same VPN must use the same config file values (exceptions:  “interface” & “ifconfig4/ifconfig6” values)

The minimum PeerVPN configuration file requires only 9-11 items to be configured, depending on whether or not you are using both IPv4 and IPv6:

port 7000                    # port used by this VPN (each additional VPN requires a different port)
networkname VPNnet1          # your name for this VPN network (each additional VPN requires a different networkname)
psk MyCryptoSeedPassword     # encryption/authentication “password”, up to 512 characters (other VPNs’ PSKs should be unique). For PeerVPN, the PSK you enter in the config is just a seed password: the “real” crypto keys are 256-bit AES keys, generated individually for each VPN link.
enabletunneling <yes|no>     # default is YES; enables the tunneling interface (refer to the config documentation link below)
enableipv4 <yes|no>          # default is YES
enableipv6 <yes|no>          # default is YES
interface peervpn0           # name you want to give this host’s local VPN Tunnel End Point (TEP); name it whatever you like
ifconfig4 10.8.0.1/24        # IPv4 address of this host’s TEP (the next host’s TEP might be 10.8.0.2/24, etc.)
ifconfig6 <configure>        # this node’s IPv6 address to assign to the tunnel interface (i.e. the encrypted tunnel)
initpeers 10.8.0.2 7000 10.8.0.3 7000   # for HA, list several peer-node public IPv4 addresses this node should try to connect (or reconnect) to
initpeers 2001:DB8:1337::1 7000         # for HA, list several peer-node public IPv6 addresses this node should try to connect (or reconnect) to
enablendpcache <yes|no>      # default is NO; set to YES if using IPv6. Caches tunneled IPv6 NDP messages to improve performance by reducing NDP multicast traffic between peers

For a basic PeerVPN configuration file, that’s it! Pretty simple to implement, I think, compared to other mesh VPN solutions I have seen!

To start peervpn use the following command:

usage:  ./peervpn <path to peervpn config file>

IMPORTANT NOTE:   For complete PeerVPN configuration options and descriptions see:   https://github.com/peervpn/peervpn/blob/master/peervpn.conf

The 10,000 ft view of the overall process to set up and use PeerVPN:

  • Create an Ubuntu server instance (i.e. host) on each cloud.
  • On each cloud instance/host, open UDP port 7000, which is used by PeerVPN.
  • Install PeerVPN on each cloud instance: copy the .zip file and unzip it in a subdirectory of your choosing.
  • Create a peervpn.conf configuration file. Refer to: http://www.peervpn.net/tutorial/
  • Generate a PSK encryption password “seed” (I used “psktool”) and set the “psk” variable in your peervpn.conf file to that key.
    • note: use the same PSK on all VPN “member” hosts within the same VPN
  • Follow the instructions at the above PeerVPN link for adding more servers/hosts to the VPN. You can add as many as you can support from a traffic perspective.
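The steps above might look like this on an Ubuntu host. This is only a sketch: the firewall command, directory, and peer address below are example assumptions, and the real values come from the PeerVPN tutorial linked above.

```shell
# 1) Open UDP 7000 in the host firewall (also open it in the cloud
#    provider's security group / firewall), for example:
#      sudo ufw allow 7000/udp
# 2) Unzip the PeerVPN release into a directory of your choosing.
# 3) Write a minimal peervpn.conf (example values only):
cat > peervpn.conf <<'EOF'
port 7000
networkname VPNnet1
psk MyCryptoSeedPassword
interface peervpn0
ifconfig4 10.8.0.1/24
initpeers 203.0.113.10 7000
EOF
# 4) Start it:
#      sudo ./peervpn ./peervpn.conf
grep -c '^' peervpn.conf   # sanity check: 6 config lines written
```

On each additional host, reuse the same port, networkname, and psk, changing only ifconfig4 and the initpeers list.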

Start a new instance of peervpn and point it at the additional .conf configuration file.

examples:

  • ./peervpn ./vpn-network-A.conf
  • ./peervpn ./vpn-network-B.conf
  • etc

If you do this, each VPN will be separate & isolated from every other VPN not of the same “networkname”.

How to Install & Use PSKTOOL to generate your PSK encryption password

An important part of any VPN is the encryption of the data traversing the tunnel. This is especially true for data crossing the Internet. To ensure the security of the data you send through your VPN tunnel, PeerVPN’s configuration file (peervpn.conf) lets you specify a PSK encryption password. The PSK you enter into peervpn.conf is used as a “seed” to generate the actual 256-bit AES keys used to encrypt the VPN link.

Pre-shared keys (PSKs) can provide both authentication & encryption, and are the most common authentication method used today.

I used psktool for my experiment and it is included in the gnutls package(s).

On Ubuntu the following will install what is required for you to use psktool:

$ sudo apt-fast install gnutls-bin gnutls26-doc guile-gnutls -y

Usage: psktool [options]
 -u, --username USERNAME   specify username (the username is not important for our PeerVPN use case, but the tool requires one)
 -p, --passwd FILE         specify a password file
 -s, --keysize SIZE        specify the key size in bytes (NOTE: the max keysize is 64 bytes, i.e. 512 bits)
 -v, --version             print the program’s version number
 -h, --help                show this help text

Then, to generate a 512-bit PSK for any username and save it to a file (example: ./mypsk):

example: $ psktool -u bmullan -p ./mypsk -s 64

Edit the mypsk file, copy everything after the username you used (the username will be the only readable text in that file), and add that copied PSK key to your peervpn.conf file after the “psk” variable.
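That copy-and-paste step can also be scripted. In the sketch below the key is a shortened dummy standing in for real psktool output, and the file names are just examples:

```shell
# Stand-in for real psktool output; normally you would run:
#   psktool -u bmullan -p ./mypsk -s 64
# which writes a line of the form "username:hexkey" to ./mypsk.
printf 'bmullan:0123456789abcdef\n' > ./mypsk   # shortened dummy key

# Minimal demo peervpn.conf with a placeholder psk line
printf 'port 7000\nnetworkname VPNnet1\npsk placeholder\n' > peervpn.conf

# Take everything after the username and splice it into peervpn.conf
key=$(cut -d: -f2 ./mypsk)
sed -i "s|^psk .*|psk ${key}|" peervpn.conf
grep '^psk ' peervpn.conf     # confirm the new key is in place
```

Keep the mypsk file out of any shared storage; anyone holding the seed can join (and decrypt) the VPN.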

How to use the PeerVPN “mesh” VPN with LXC

The big question is… how does this help interconnect LXC containers running on possibly many remote and independent servers/hosts?

All it takes is a small networking change…

If on each host you configured & started PeerVPN properly, then executing “ifconfig” on each host will show one or more VPN Tunnel End Point (TEP) interfaces created by PeerVPN.

NOTE:  The TEP will be named the same “name” as you entered in the PeerVPN configuration for the variable called “interface” … refer to the above PeerVPN tutorial.

To connect LXC containers running on any PeerVPN configured host you attach the “peervpn0” interface to the lxcbr0 bridge that lxc uses on that host.

NOTE:   Depending on your peervpn.conf file configuration you are the one that defines the PeerVPN TEP interface IP address.   In the PeerVPN Tutorial example the peervpn0 interface is given a 10.8.x.x address

When you installed LXC on a host (sudo apt-get install lxc), a default LXC bridge was created and given a 10.0.3.x IP address. Also, any LXC containers created with the lxc-create command on that host will by default get a 10.0.3.x IP address.

While logged into each of your servers you should now be able to ping the 10.8.0.x IP address of the other PeerVPN member servers.

Our next step is to connect our TEP to the lxcbr0 bridge to enable containers attached to that bridge to pass data over the VPN tunnel.

Since the PeerVPN TEP interface (“peervpn0” in the Tutorial example) is just like any other Linux ethernet interface we can use the “ip link” command to connect the peervpn0 interface to the LXC lxcbr0 bridge.

$ sudo ip link set dev peervpn0 master lxcbr0

NOTE:   After executing this command on EACH Host… you will find that you can no longer PING the 10.8.0.x IP addresses of the other PeerVPN member servers!

This is expected and is OK: if you still have the terminal open where you started PeerVPN (i.e. sudo peervpn …), you should still see your “peers connected”!

Next create an LXC container on each “host”

example:

$ sudo lxc-create -t download -n my-container -- -d ubuntu -r trusty -a amd64

Note:  this will create a new LXC container named “my-container” running 64-bit Ubuntu Trusty (i.e. v14.04).

Next… start the container you created on each host, then get a shell inside the LXC container “my-container”:

$ sudo lxc-start -n my-container -d

$ sudo lxc-attach -n my-container

If you look closely at the Terminal window you are using you will see that the “prompt” has now changed to show that you are logged into the container “my-container” and that you are logged in as root.

Note:   root in a container is NOT the same as root in the “host”

On each host get the IP address of each host’s container that you created and write it down.

You can get those IP addresses using the following LXC command on both Host A and Host B

$ sudo lxc-ls -f

Or if you are logged into the Container on each host just do:

$ ifconfig

NOTE:   your container IP addresses will be different, but for our example here let’s say:

  • eth0 of Host A’s container has IP address 10.0.3.136
  • eth0 of Host B’s container has IP address 10.0.3.15

 

[Diagram: peervpn lxc diagram]

While logged into the Container on Host A, try to ping the Container IP address on Host B

Using our example IP addresses from above (again, your own container IP addresses will be different):

$ ping 10.0.3.15

This should now work and Containers on Host A can reach Containers on Host B via the PeerVPN Tunnel you created.
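Checking every peer container by hand gets tedious once you have more than two hosts. A small helper like this can do the round of pings; the function name and the IP list are hypothetical, so edit them for your own deployment:

```shell
# Ping each container IP once the mesh is up; unreachable peers are
# reported rather than aborting the loop.
check_peers() {
    for ip in "$@"; do
        if ping -c 2 -W 2 "$ip" >/dev/null 2>&1; then
            echo "$ip reachable"
        else
            echo "$ip UNREACHABLE"
        fi
    done
}
# e.g. from inside Host A's container:
#   check_peers 10.0.3.15 10.0.3.136
```

Running it from inside one container against the others exercises the whole path: container, lxcbr0, PeerVPN tunnel, remote lxcbr0, remote container.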

Important Note

For our proof-of-concept trial you need to understand that we have left LXC on each host node with the default LXC configuration. So each host will have its own lxcbr0 bridge, and the lxcbr0 bridge on each host will have the same 10.0.3.x subnet and an IP address from that subnet.

Furthermore, the LXC containers created and running on the individual “hosts” will all also have been assigned a 10.0.3.x ip address by the local lxcbr0 dnsmasq.

Even though LXC “by default” creates & assigns “unique” IP addresses to each LXC container created inside a particular “host”…    LXC running on separate “hosts” is NOT by default aware of IP addresses used by LXC on any other host.

For our “proof-of-concept” here, that means there is the potential for a “duplicate” 10.0.3.x IP address to be assigned to a container on one or more “hosts”.

For a small proof-of-concept this is probably unlikely to occur and so for this blog write-up we will ignore that fact.   But for a production environment you will want to look into using a centralized IPAM (ip address management) solution which will probably involve other linux tools such as DNSMASQ, DHCP, DNS.    However, that is beyond the purpose of this proof-of-concept article/blog post.

Final step — repeat this process for each cloud instance/host if you’d like to test beyond just a couple of servers. Remember, though, that there is a “remote” possibility of some LXC container getting a duplicate IP address in your own proof-of-concept trial. It is remote, but it is possible.

NOTE:   you can configure LXC on each host to use a different bridge you create (say br0) and then on 1 host create & add a DNSMASQ and attach it to the br0 bridge.   After doing so, all LXC containers on any host anywhere that is part of the same PeerVPN tunnel will get their IP assigned by a single dnsmasq and you will not have to worry about IP duplication.
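A sketch of that single-dnsmasq setup, assuming Ubuntu’s LXC packaging of this era; the file names and the DHCP range are assumptions, so check your release’s documentation before applying any of it:

```shell
# On EVERY host: stop LXC from running its own bridge + dnsmasq, and
# point new containers at a shared bridge br0 instead of lxcbr0.
#
#   /etc/default/lxc-net:
#       USE_LXC_BRIDGE="false"
#
#   /etc/lxc/default.conf:
#       lxc.network.link = br0
#
# On ONE host only: run a single dnsmasq bound to br0, so every
# container anywhere in the mesh leases from the same pool:
#
#   sudo dnsmasq --interface=br0 --bind-interfaces \
#        --dhcp-range=10.0.3.50,10.0.3.250,12h
```

Because br0 on every host is stitched together by the PeerVPN tunnel, DHCP broadcasts from remote containers reach the one dnsmasq, and the duplicate-address problem disappears.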

Now each LXC container on each cloud instance should be able to ping the 10.0.3.x address of any other LXC container on any other PeerVPN host you have set up anywhere.

Also, for any production use it might be advantageous to utilize unprivileged LXC containers; this blog post has only covered “privileged” LXC containers.

Use & Implementation of IPv6 as a Production Solution

The introduction & increasing use of IPv6 instead of IPv4 will greatly simplify this overall PeerVPN solution in regards to IPAM, because IPv6 was designed to allow local address assignments that are unique with very high probability even across separate, remote hosts and containers. Google “IPv6” and read up to become more familiar with it: the “Internet of Things” (IoT), as it’s popularly called, will require IPv6’s vast address space in order to connect the future world’s billions of inter-connected Internet devices (phones, TVs, cars, tablets, laptops, etc.).

ARIN announced in 2015 that it has essentially exhausted its free pool of IPv4 addresses!

So little new IPv4 space remains available. For this reason, it’s important to start learning, testing, and deploying IPv6 where you can. In the U.S. almost all ISPs (cable, AT&T, mobile, etc.) now support IPv6!

NOTE:  The main advantage of IPv6 over IPv4 is its larger address space. An IPv6 address is 128 bits long, compared with 32 bits for IPv4. The address space therefore has 2^128, or approximately 3.4×10^38, addresses.
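You can sanity-check that figure from the command line:

```shell
# 128 address bits -> 2^128 possible addresses. 2^128 is a power of two,
# so the double-precision value awk computes is exact; %.6g rounds the
# display to six significant digits.
awk 'BEGIN { printf "%.6g\n", 2^128 }'    # prints 3.40282e+38
```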

General IPv6 Configuration for LXC

Searching the web I found a good write-up describing the configuration of IPv6 for LXC container use.

Although this article does not address anything about VPNs I think it provides a great background to understand the critical steps & considerations to configure IPv6 for LXC and the LXC Host machine.

Refer to:     LXC Host featuring IPv6 connectivity

Unique Local IPv6 Generator

There is a great online tool to help you generate a unique “local” IPv6 address to utilize with your mesh network or simply to use IPv6 with LXC or Linux configurations.   See:     http://unique-local-ipv6.com/
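If you’d rather generate such a prefix yourself, here is a rough shell approximation of what that site does. Per RFC 4193, fd00::/8 plus 40 random bits yields a /48 prefix that is unique with very high probability (not guaranteed); the function name is my own and the block assumes bash and /dev/urandom:

```shell
# Generate a random IPv6 Unique Local Address /48 prefix (RFC 4193 sketch)
gen_ula() {
    # 5 random bytes -> 10 hex chars for the 40-bit global ID
    hex=$(head -c 5 /dev/urandom | od -An -tx1 | tr -d ' \n')
    echo "fd${hex:0:2}:${hex:2:4}:${hex:6:4}::/48"
}
gen_ula    # e.g. fd3a:91c2:7b04::/48
```

Every host and container in the mesh can then take addresses from that one /48 without any risk of colliding with another site’s ULA space.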

Suggested Readings

To really start understanding LXC, be sure to read through the terrific 10-part series on LXC by one of the principal LXC developers, Stéphane Graber. Refer to:  https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-series/

To gain a good understanding of IPv6 configuration in Linux one web site that is fairly comprehensive in its description of the terms, configuration options, and usage refer to:     IPv6 – Set UP An IPv6 LAN with Linux

Also just for a good reference I have found the iproute2 cheat sheet web page extremely valuable.

Last words…   As I am not any kind of expert in IPv6, LXC, or Linux, feel free to suggest improvements, changes, and/or configuration examples to this approach in any of the related areas!

Have fun…!


August 3, 2009

Part 2 – Using Cloud & Virtualization Technologies for Education -or- how Education and the Cloud met, married and had smarter kids!

Here I continue my last discussion about K-20 education and how cloud technology might be put to use.

Lately, I’ve been following this thread… and would like to share some ideas and thoughts with you all…

===============================================================================================================================================


Message: 1
Date: Thu, 30 Jul 2009 15:32:18 -0600
From: xxxxxxxxxxxxxx
Subject: Re: [Ltsp-discuss] Recommend Server for 25 clients
To: ltsp-discuss@lists.sourceforge.net
Message-ID:
Content-Type: text/plain; charset=UTF-8

On Thu, Jul 30, 2009 at 1:40 PM, xxxxx xxxxxxxxx<xxxxxxxxxx> wrote:
> xxxxxx xxxxxxxxxx :
>
>> How powerful server would you recommend for 25 users ?
>
> “Server sizing in an LTSP network is more art than science. Ask any LTSP
> administrator how big a server you need to use, and you’ll likely be
> told “It depends”.”
>
> http://www.ltsp.org/~sbalneav/LTSPManual.html#id2697011

===============================================================================================================================================

So I replied to that thread with the following response, which I’ll share here on my blog…

I’ve been using Amazon Web Services (AWS) ie Amazon’s cloud for K-20 proof-of-concept work. So bear with me while I describe some things…

  1. Amazon’s Elastic Compute Cloud (EC2) service is very inexpensive and easy to use and provides 5-6 different choices for “compute resources” (ie servers).
  2. Amazon uses a “Utility” based pricing model (you pay only for how much of something you use like water or electricity) and only when you are using it.

i.e. need a bigger server… just pick one and start it up (i.e. “Launch” it, in AWS terminology) and migrate your apps (won’t go into that here).

Need 10 or 100 servers… easy: pick the server model (Linux/Windows, 32/64-bit, etc.) — this is called an AMI, an Amazon Machine Image — and when you LAUNCH the AMI, just put the number of servers you need into the “Number of Instances” box that pops up when you select LAUNCH.

5 minutes later… they will all be running.

You manage all the startup/shutdown, IP address’s, Security Firewall/Access lists etc using Amazon’s web based AWS Management Console.

Now I’ve always wanted say this … But WAIT there’s MORE… it gets better yet <g> !!

You can take ADVANTAGE of Amazon’s Auto-Scaling and Auto-Load-Balancing features.

Since AWS costs are based like a Utility …  you can start off with just 1 server at 5am and if you set it up for auto-scaling …

As students/teachers (i.e. load) start to arrive, say around 9am, the server “can” auto-scale UP by cloning itself, and at the end of the day the servers will auto-scale DOWN by terminating themselves when no longer needed (i.e. you don’t pay for them when they aren’t running).   You are the one who configures the parameters for the UP/DOWN auto-scaling.

Try doing that in your school or data center, where first you have to buy the servers, rack/stack/cable them, pay for HVAC, maintenance contracts, insurance, replacement parts, etc.

I like letting Amazon worry about that stuff!

I will copy some information from the AWS web site.

You can sign up for an AWS account free (again you only get billed if you start using something).

As you can see below, a “small” server costs just 10 cents/hr while the largest (8 or 20 cores) costs just 80 cents/hr.

I learned about AWS by starting a “small” Ubuntu server, installing my applications, testing etc. then blowing it away when I was done.   I spent 4-5 hours a day ($0.50/day) to do this.
It was very easy to learn !

===============================================================================================================================================

Instance Types

Standard Instances

Instances of this family are well suited for most applications.

  • Small Instance (Default) (ie virtual server)
    • 1.7 GB of memory
    • 1 virtual core
    • 160 GB of instance storage
    • 32-bit platform
  • Large Instance (ie virtual server)
    • 7.5 GB of memory
    • 4 cores
    • 850 GB of instance storage
    • 64-bit platform
  • Extra Large Instance (ie virtual server)
    • 15 GB of memory
    • 8 cores
    • 1.7 TB of instance storage
    • 64-bit platform

High-CPU Instances

Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

  • High-CPU Medium Instance
    • 1.7 GB of memory
    • 5 cores
    • 350 GB of instance storage
    • 32-bit platform
  • High-CPU Extra Large Instance
    • 7 GB of memory
    • 20 cores
    • 1.7 TB of instance storage
    • 64-bit platform


===============================================================================================================================================

Pricing

NOTE:   as of 9/2010 AWS has introduced an approximately 18% price decrease for most of the AWS EC2 compute instance sizes.    The pricing below does NOT reflect this change.

AWS has also introduced a new “micro” instance which provides 640 MB of RAM and roughly half a CPU for only $0.02 per hour — about 48 cents per day!

Pay only for what you use. There is no minimum fee. Estimate your monthly bill using AWS Simple Monthly Calculator.

On-Demand Instances

On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments.

This frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.

The pricing below includes the cost to run private and public AMIs on the specified operating system.

Amazon also offers additional options for Amazon EC2 running Microsoft software and Amazon EC2 running IBM software, which are priced differently.

United States

Standard On-Demand Instances    Linux/UNIX Usage    Windows Usage
Small (Default)                 $0.10 per hour      $0.125 per hour
Large                           $0.40 per hour      $0.50 per hour
Extra Large                     $0.80 per hour      $1.00 per hour

High-CPU On-Demand Instances    Linux/UNIX Usage    Windows Usage
Medium                          $0.20 per hour      $0.30 per hour
Extra Large                     $0.80 per hour      $1.20 per hour

Europe

Standard On-Demand Instances    Linux/UNIX Usage    Windows Usage
Small (Default)                 $0.11 per hour      $0.135 per hour
Large                           $0.44 per hour      $0.54 per hour
Extra Large                     $0.88 per hour      $1.08 per hour

High-CPU On-Demand Instances    Linux/UNIX Usage    Windows Usage
Medium                          $0.22 per hour      $0.32 per hour
Extra Large                     $0.88 per hour      $1.28 per hour

Pricing is per instance-hour consumed for each instance type, from the time an instance is launched until it is terminated. Each partial instance-hour consumed will be billed as a full hour.
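As a worked example of the on-demand billing above (the rate is the Small Linux/UNIX price from the table; the usage pattern is an assumption for illustration):

```shell
# A Small Linux instance at $0.10/hr, used 6 hours a school day for
# 20 days a month. Partial hours bill as full hours, so count whole
# hours; integer math in cents avoids floating point.
rate_cents=10
hours=$((6 * 20))                       # 120 billable hours
total_cents=$((hours * rate_cents))     # 1200 cents
printf 'monthly bill: $%d.%02d\n' $((total_cents / 100)) $((total_cents % 100))
# prints: monthly bill: $12.00
```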

Reserved Instances

Reserved Instances give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly usage charge for that instance.

After the one-time payment for an instance, that instance is reserved for you, and you have no further obligation.

You may choose to run that instance for the discounted usage rate for the duration of your term, or when you do not use the instance, you will not pay usage charges on it.

United States

Standard Reserved Instances    1 yr Term    3 yr Term    Linux/UNIX Usage
Small (Default)                $325         $500         $0.03 per hour
Large                          $1300        $2000        $0.12 per hour
Extra Large                    $2600        $4000        $0.24 per hour

High-CPU Reserved Instances    1 yr Term    3 yr Term    Linux/UNIX Usage
Medium                         $650         $1000        $0.06 per hour
Extra Large                    $2600        $4000        $0.24 per hour

Europe

Standard Reserved Instances    1 yr Term    3 yr Term    Linux/UNIX Usage
Small (Default)                $325         $500         $0.04 per hour
Large                          $1300        $2000        $0.16 per hour
Extra Large                    $2600        $4000        $0.32 per hour

High-CPU Reserved Instances    1 yr Term    3 yr Term    Linux/UNIX Usage
Medium                         $650         $1000        $0.08 per hour
Extra Large                    $2600        $4000        $0.32 per hour

Reserved Instances can be purchased for 1 or 3 year terms, and the one-time fee per instance is non-refundable.

Usage pricing is per instance-hour consumed.

Instance-hours are billed for the time that instances are in a running state; if you do not run the instance in an hour, there is zero usage charge. Partial instance-hours consumed are billed as full hours.
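A quick break-even check for the US Small reserved pricing above ($325 one-time for a 1-year term at $0.03/hr, versus $0.10/hr on-demand):

```shell
# Reserved saves 10 - 3 = 7 cents per hour; the $325 fee pays for
# itself once those savings cover it. Integer math in cents, rounding
# the break-even point up to a whole hour.
fee_cents=32500
saving_cents_per_hr=$((10 - 3))
breakeven_hours=$(( (fee_cents + saving_cents_per_hr - 1) / saving_cents_per_hr ))
echo "break-even after ${breakeven_hours} hours"   # 4643 of the 8760 hours in a year
```

So an instance running more than roughly half the year is cheaper reserved; an occasional-use instance is cheaper on-demand.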
===============================================================================================================================================

Here’s how I make use of this.

On AWS you can pick from hundreds of pre-built “public” server types (different flavors of Linux – Fedora, Ubuntu, CentOS, etc.), 32-bit or 64-bit.

Some are “server” linux some are desktop linux.

Some have been built with apps already installed (Apache, MySQL, etc etc)

You get the idea.

So what have I been doing for kids/education… ?

Server Side:

I’m using AWS desktop images on which I’ve installed the single-node x2go server.

x2go utilizes the NoMachine NX transport protocol libraries, which are open source, but implements its own server-side and client modules.   The server side comes in a single-user home version and also a clustered, load-balanced x2go server implementation.

Unlike NoMachine’s current NX server/client, where audio is a big problem, x2go supports audio extremely well from server to client.    Local printing and sharing of folders between server and client are also supported.

Client Side:

The client side boots off an Ubuntu USB thumb drive preloaded with the open-source x2go client for Windows, Mac or Linux.

x2go has also introduced a Web Portal capability for accessing the remote desktop.    Any user with a browser that supports Java can now access the remote desktop without installing any other client software on their local PC.

Each kid can have one, and that way they can use it at school or at home (same desktop, same cloud servers as at school).

Since the “real work” in terms of CPU and storage happens out on the AWS “cloud”, it does not even matter what type of PC they use. The local machine is basically used only to boot off the USB drive and to provide the keyboard, mouse, screen and network connection (everything becomes a thin client):

  • old pc, new pc
  • old laptop, new laptop
  • netbook
  • thin client

The “Desktop” the students see is exported over NX from an AWS Desktop server, which can have anywhere from 1 to 20 CPUs, and I can have as many servers as I want… or can pay for <g>.

And because storage using AWS’s S3 (Simple Storage Service) and EBS (Elastic Block Storage) is more or less infinite (at least as far as I’m concerned), capacity isn’t a worry either.

Now, how’s performance?

Well, you first have to have a working, stable local network, but that’s true of any client/server or thin-client model (LTSP, Citrix, etc.).

The NX protocol is terrific and you can read about just how good it is here.

Here’s my basic process to create a server, starting from one of AWS’s public Amazon Machine Images (AMIs):

  1. Launch the AMI instance I want.
  2. Modify it by adding all the applications I need and configuring everything.
  3. Save the running “instance”, using the free AWS EC2 AMI tools, to what is called an S3 storage “bucket”.
  4. Re-register my saved image as a new Amazon AMI (once registered with AWS, I’ll be able to launch it from the AWS Management Console like any other AWS AMI).
  5. Launch my new image like any other AWS AMI:
    1. tell AWS how many “instances” (i.e., the number of virtual machines)
    2. tell AWS what size server (32/64-bit, Small up to Extra Large)
    3. assign my firewall/access lists to the new instance
    4. create and assign an AWS Elastic IP address to my “instance” (simple; takes 2 seconds)
  6. Once it’s in a “running” state, just use the AWS cloud-based server.
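The launch choices in step 5 can be sketched in Python with the boto library (the usual way to script EC2 from Python at the time). The AMI id and security-group name below are hypothetical placeholders, and the actual API calls are shown commented out since they need AWS credentials:

```python
# Sketch of step 5: collect the launch choices into keyword arguments
# matching boto's EC2 run_instances() call. The AMI id and group name
# are hypothetical placeholders, not real identifiers.

def launch_params(ami_id, count, size, security_group):
    """Steps 5.1-5.3: instance count, server size and firewall group."""
    return {
        "image_id": ami_id,
        "min_count": count,
        "max_count": count,
        "instance_type": size,                # e.g. "m1.small" ... "m1.xlarge"
        "security_groups": [security_group],  # step 5.3: firewall/access list
    }

params = launch_params("ami-12345678", 1, "m1.small", "edu-desktops")

# With credentials configured, the actual launch and Elastic IP steps
# (5 and 5.4) would look roughly like this:
#   import boto
#   conn = boto.connect_ec2()
#   reservation = conn.run_instances(**params)
#   addr = conn.allocate_address()
#   conn.associate_address(reservation.instances[0].id, addr.public_ip)
```

This is only a sketch of how the manual console steps map onto the API; the console route described above works just as well.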

Elastic IP Addresses – Elastic IP addresses are static IP addresses designed for dynamic cloud computing.
An Elastic IP address is associated with your account, not a particular instance, and you control that address until you choose to explicitly release it.
Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or Availability Zone failures by programmatically
remapping your public IP addresses to any instance in your account. Rather than waiting on a data technician to reconfigure or replace your host,
or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by
quickly remapping your Elastic IP address to a replacement instance.

By the way, in case this isn’t obvious… got a new school that needs to be set up?

Other than the USB drives for the kids and some kind of computer for them to use, the server can take only minutes to set up, and there’s no physical installation involved!

Finally, I use my local machine with the NX client software to log in and I get a desktop… and it’s all PFM… magic!

Today (right now) I’m writing this while I have 4 AWS servers running that I am testing.

On my desk is a Lenovo T61p laptop

  • Dual Core
  • 4 Gig RAM

next to it I have an ASUS 1000HE Netbook

  • Atom processor
  • 1 G RAM

Both machines booted off of a USB.

I next used the NX client software on each machine to log into one of my AWS Desktop servers and started working.

Performance is exactly the same on both clients (well, the ASUS display can only go to 1400×600).

I wrote this in my AWS Desktop server session on the ASUS while several sessions on the Lenovo were doing some other things for me.

I’d really like to get more of the Linux K-12 and K-20 community trying this so we can all share more of what we’re doing for the education of our kids.

Let me know if any of you would like more pointers or information; as I said, I’d like some folks to work with on all of this.

I’ve also got some pretty cool AWS based solutions for the “Windows” in your life…

Hope you found this interesting!

Brian Mullan

June 18, 2009

Part 1 – Using Cloud & Virtualization Technologies for Education -or- how Education and the Cloud met, married and had smarter kids!

U.S. Education Secretary Arne Duncan wants to use some seed money in a Race to the Top to see what innovative ideas, concepts, implementations and results the States can come up with.   Good idea… kind of like prototyping and trialing, then picking the best.

From my view there are many things that can be addressed in education, technology being just one of several approaches to the overall issues related to improving K-12 education.

I recently heard a short comment that made an impression.

In 1909 if you had gone into a classroom in a large city school you would have seen kids seated at desks with pencils and paper.

At the front of the classroom would be a teacher sitting facing the children with the teacher’s pencil and paper on her desk.

Of course books would be on the desks and a blackboard with chalk on the front wall.

Fast forward 100 years to 2009.

How much has that picture really changed?

OK… there may be some classrooms at some schools that have some “newer” technologies:

  1. a projector? some
  2. <let’s skip a few eras of technology here>
  3. a computer on every desk? more rare than common
  4. networked servers/computers? rarer than #3
  5. maintained networked computers? rarer than #4
  6. #4 & #5 maintained by someone other than the Librarian and the Librarian’s assistant???

Well you get the idea and if you work at or for a school you know the picture.

Click here to see some “Race to the Top” Slides

Geez where to start?

I am fairly certain that Cloud and Virtualization technologies are going to play major roles in some of the successes.

But what kind of cloud?   Private, public, or hybrid… and what’s the Total Cost of Ownership (TCO) for each of those paths?

Private

  • the State or the LEA owns/manages/pays for a data center and support staff, electricity, equipment, heating/air conditioning, safety, insurance

Public

  • Amazon Web Services (AWS), Rackspace, Google, etc. owns the infrastructure, but you may still be the “operator”

Hybrid

  • a private data center augmented by compute or storage resources provided by a public cloud provider

Well, let’s make it more muddled.

Should you go with an Infrastructure-as-a-Service (IaaS) cloud provider like Amazon?

Or Amazon as Software-as-a-Service (SaaS)?   Yes, it does exist, via third-party developers offering many services ranging from DB2 and Oracle to mail, web and video servers.

What about using Google as a Platform-as-a-Service (PaaS), where you write or rewrite your own applications using Java/PHP and then host them on Google?

Or possibly Google as a Software-as-a-Service (SaaS) cloud provider (think Gmail, Google Docs)?

I don’t think there necessarily has to be one choice… or one Cloud Service…  after all it is the Internet.

To get started I think one of the first things that should be done is getting all the schools in all the LEAs on a level starting platform.   Why?

Some schools have

  • old Desktops
  • new Desktops
  • old laptops
  • new laptops
  • thin clients (e.g., using something like Citrix)
  • maybe netbooks

The above computers may vary in:

  • CPUs ranging from Pentium to dual-core Intel, AMD or Atom processors
  • memory ranging from 512 MB to 4 GB
  • hard disks (if they have them) of 40 GB to 100 GB

Network connectivity ranges from

  • 10 Mbps to 100 Mbps Ethernet
  • Wireless 802.11b, g or maybe n

For the most part those computers run Windows, but that can mean anything from Windows 95, 98, 2000 or XP up to Vista.

Sorry Mac and Linux users … gotta focus here to make a point.  We’ll get to you later.

To level the starting platform you can’t just tell people to junk everything… and for the most part there isn’t a reason to if you think of clever solutions.

That’s enough to start the conversation… I’ll add more later but wanted to get my ramblings on this topic started.

Brian Mullan
