Education and the Cloud

August 3, 2009

Part 2 – Using Cloud & Virtualization Technologies for Education -or- how Education and the Cloud met, married and had smarter kids!

Here I continue my earlier discussion about K-20 education and how cloud technology might be put to work for it.

Lately, I’ve been following this thread… and would like to share some ideas and thoughts with you all…


Message: 1
Date: Thu, 30 Jul 2009 15:32:18 -0600
From: xxxxxxxxxxxxxx
Subject: Re: [Ltsp-discuss] Recommend Server for 25 clients
Content-Type: text/plain; charset=UTF-8

On Thu, Jul 30, 2009 at 1:40 PM, xxxxx xxxxxxxxx<xxxxxxxxxx> wrote:
> xxxxxx xxxxxxxxxx :
>> How powerful server would you recommend for 25 users ?
> “Server sizing in an LTSP network is more art than science. Ask any LTSP
> administrator how big a server you need to use, and you’ll likely be
> told “It depends”.”


So I replied to that thread with the following response, which I’ll share here on my blog…

I’ve been using Amazon Web Services (AWS), i.e. Amazon’s cloud, for K-20 proof-of-concept work. So bear with me while I describe some things…

  1. Amazon’s Elastic Compute Cloud (EC2) service is inexpensive, easy to use, and provides five or six different choices of “compute resources” (i.e. servers).
  2. Amazon uses a “utility” pricing model: you pay only for how much of something you use (like water or electricity), and only while you are using it.

e.g. need a bigger server? Just pick one and start it up (“Launch” it, in AWS terminology), then migrate your apps (I won’t go into that here).

Need 10 or 100 servers… easy… pick the server model (Linux/Windows, 32/64-bit, etc.), which is called an AMI (Amazon Machine Image), and when you LAUNCH the AMI just put the number of servers you need into the “Number of Instances” box that pops up when you select to LAUNCH the AMI you picked.

5 minutes later… they will all be running.

You manage all the startup/shutdown, IP addresses, security firewall/access lists, etc. using Amazon’s web-based AWS Management Console.
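To make the mechanics concrete, here’s a rough sketch of the same launch workflow using Amazon’s EC2 API command-line tools of that era (the AMI, instance, keypair, and group names below are placeholders, not real IDs):

```shell
# Launch 10 small Linux instances from a chosen AMI:
ec2-run-instances ami-xxxxxxxx -n 10 -t m1.small -k my-keypair -g my-security-group

# See what's running:
ec2-describe-instances

# Shut an instance down when you're done paying for it:
ec2-terminate-instances i-xxxxxxxx
```

The Management Console does the same things with a few clicks; the command line just makes the steps explicit.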

Now I’ve always wanted to say this… But WAIT, there’s MORE… it gets better yet <g> !!

You can take ADVANTAGE of Amazon’s Auto Scaling and Elastic Load Balancing features.

Since AWS costs work like a utility, you can start off with just one server at 5 am, and if you set it up for auto-scaling…

As student/teacher load starts to build, say around 9 am, the server “can” auto-scale UP by cloning itself, and at the end of the day the servers will auto-scale DOWN by terminating themselves when no longer needed (i.e. you don’t pay for them when they aren’t running). You are the one who configures the parameters for the UP/DOWN auto-scaling.

Try doing that in your school or data center, where first you have to buy the servers, then rack, stack, and cable them, pay for HVAC, maintenance contracts, insurance, replacement parts, etc.

I like letting Amazon worry about that stuff!

I will copy some information from the AWS web site.

You can sign up for an AWS account free (again you only get billed if you start using something).

As you can see below, a “small” server costs just 10 cents/hr while the largest costs just 80 cents/hr.

I learned about AWS by starting a “small” Ubuntu server, installing my applications, testing, etc., then blowing it away when I was done. I spent 4-5 hours a day (about $0.50/day) doing this.
It was very easy to learn!


Instance Types

Standard Instances

Instances of this family are well suited for most applications.

  • Small Instance (Default) (i.e. a virtual server)
    • 1.7 GB of memory
    • 1 EC2 Compute Unit (1 virtual core)
    • 160 GB of instance storage
    • 32-bit platform
  • Large Instance
    • 7.5 GB of memory
    • 4 EC2 Compute Units (2 virtual cores)
    • 850 GB of instance storage
    • 64-bit platform
  • Extra Large Instance
    • 15 GB of memory
    • 8 EC2 Compute Units (4 virtual cores)
    • 1.7 TB of instance storage
    • 64-bit platform

High-CPU Instances

Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

  • High-CPU Medium Instance
    • 1.7 GB of memory
    • 5 EC2 Compute Units (2 virtual cores)
    • 350 GB of instance storage
    • 32-bit platform
  • High-CPU Extra Large Instance
    • 7 GB of memory
    • 20 EC2 Compute Units (8 virtual cores)
    • 1.7 TB of instance storage
    • 64-bit platform



NOTE: As of 9/2010, AWS has introduced an approximately 18% price decrease for most of the AWS EC2 compute instance sizes. The pricing below does NOT reflect this change.

AWS has also introduced a new “micro” instance which provides 613 MB of RAM and burstable CPU for only $0.02 per hour, i.e. 48 cents per day!

Pay only for what you use. There is no minimum fee. Estimate your monthly bill using the AWS Simple Monthly Calculator.

On-Demand Instances

On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments.

This frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.

The pricing below includes the cost to run private and public AMIs on the specified operating system.

Amazon also provides additional options, Amazon EC2 running Microsoft Windows and Amazon EC2 running IBM software, that are priced differently.

United States

Standard On-Demand Instances    Linux/UNIX Usage   Windows Usage
Small (Default)                 $0.10 per hour     $0.125 per hour
Large                           $0.40 per hour     $0.50 per hour
Extra Large                     $0.80 per hour     $1.00 per hour

High-CPU On-Demand Instances    Linux/UNIX Usage   Windows Usage
Medium                          $0.20 per hour     $0.30 per hour
Extra Large                     $0.80 per hour     $1.20 per hour

Europe

Standard On-Demand Instances    Linux/UNIX Usage   Windows Usage
Small (Default)                 $0.11 per hour     $0.135 per hour
Large                           $0.44 per hour     $0.54 per hour
Extra Large                     $0.88 per hour     $1.08 per hour

High-CPU On-Demand Instances    Linux/UNIX Usage   Windows Usage
Medium                          $0.22 per hour     $0.32 per hour
Extra Large                     $0.88 per hour     $1.28 per hour

Pricing is per instance-hour consumed for each instance type, from the time an instance is launched until it is terminated. Each partial instance-hour consumed will be billed as a full hour.
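Because each partial instance-hour is billed as a full hour, it’s worth rounding up when you estimate. A quick back-of-the-envelope sketch using the small-instance Linux rate above:

```shell
# An instance that ran 3 hours 10 minutes is billed for 4 full hours.
billed_hours=$(awk 'BEGIN { u = 3 + 10/60; h = int(u); if (u > h) h++; print h }')
cost=$(awk -v h="$billed_hours" 'BEGIN { printf "%.2f", h * 0.10 }')
echo "billed hours: $billed_hours, cost: \$$cost"   # billed hours: 4, cost: $0.40
```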

Reserved Instances

Reserved Instances give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly usage charge for that instance.

After the one-time payment for an instance, that instance is reserved for you, and you have no further obligation.

You can run that instance at the discounted usage rate for the duration of your term; when you are not using the instance, you pay no usage charges on it.

United States

Standard Reserved Instances   1 yr One-time Fee   3 yr One-time Fee   Linux/UNIX Usage
Small (Default)               $325                $500                $0.03 per hour
Large                         $1300               $2000               $0.12 per hour
Extra Large                   $2600               $4000               $0.24 per hour

High-CPU Reserved Instances   1 yr One-time Fee   3 yr One-time Fee   Linux/UNIX Usage
Medium                        $650                $1000               $0.06 per hour
Extra Large                   $2600               $4000               $0.24 per hour

Europe

Standard Reserved Instances   1 yr One-time Fee   3 yr One-time Fee   Linux/UNIX Usage
Small (Default)               $325                $500                $0.04 per hour
Large                         $1300               $2000               $0.16 per hour
Extra Large                   $2600               $4000               $0.32 per hour

High-CPU Reserved Instances   1 yr One-time Fee   3 yr One-time Fee   Linux/UNIX Usage
Medium                        $650                $1000               $0.08 per hour
Extra Large                   $2600               $4000               $0.32 per hour

Reserved Instances can be purchased for 1 or 3 year terms, and the one-time fee per instance is non-refundable.

Usage pricing is per instance-hour consumed.

Instance-hours are billed for the time that instances are in a running state; if you do not run the instance in an hour, there is zero usage charge. Partial instance-hours consumed are billed as full hours.
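A natural question is when a Reserved Instance pays for itself. A quick sketch using the US Linux Small prices above ($0.10/hr on-demand vs. a $325 one-time fee plus $0.03/hr reserved):

```shell
# Break-even: one-time fee divided by the per-hour savings.
breakeven=$(awk 'BEGIN { printf "%.0f", 325 / (0.10 - 0.03) }')
pct=$(awk -v h="$breakeven" 'BEGIN { printf "%.0f", 100 * h / 8760 }')
echo "break-even at about $breakeven hours (~$pct% of a year)"
```

So if you expect to keep a small server running more than roughly half the year, the 1-year reservation comes out ahead.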

Here’s how I make use of this.

On AWS you can pick from hundreds of pre-built “public” server types (different flavors of Linux: Fedora, Ubuntu, CentOS, etc.), 32-bit or 64-bit.

Some are “server” Linux, some are desktop Linux.

Some have been built with apps already installed (Apache, MySQL, etc.).

You get the idea.

So what have I been doing for kids/education… ?

Server Side:

I’m using AWS desktop images on which I’ve installed the single-node x2go server.

x2go utilizes the NoMachine NX transport protocol libraries, which are Open Source, but x2go implements its own server-side and client modules. The server side comes in a single-user home version and also a clustered, load-balanced implementation.

Unlike NoMachine’s current NX server/client, where audio is a big problem, x2go supports audio extremely well from server to client. Local printing and sharing of folders between server and client are also supported.

Client Side:

The client side boots off an Ubuntu USB thumb drive preloaded with the open-source x2go client (available for Windows, Mac, or Linux).

x2go has also introduced a Web Portal capability for accessing the remote desktop. Any user with a browser that supports Java can now access the remote desktop without installing any other client software on their local PC.

Each kid can have one and that way they can use it at school or — at home (same desktop, same cloud servers as at school).

Since the “real work” in terms of CPU and storage is out on the AWS “cloud”, it does NOT even matter what type of PC they use… all you use the local machine for is basically to boot off the USB and to provide the keyboard, mouse, screen, and network connection (everything becomes a thin client):

  • old pc, new pc
  • old laptop, new laptop
  • netbook
  • thin client

The “Desktop” that the students see is exported over NX from the AWS desktop server, where I can have from 1 to 20 compute units per server, and as many servers as I want… or can pay for <g>. And storage, using AWS’s S3 (Simple Storage Service) and EBS (Elastic Block Store), is more or less infinite (at least as far as I’m concerned).

Now, how’s performance?

Well, first of all you have to have a working and stable local network, but that’s true even if you’re using a client/server model or a thin-client model such as LTSP or Citrix.

The NX protocol is terrific and you can read about just how good it is here.

Here’s my basic process to create a server if I start from one of AWS’s public Amazon Machine Images (AMIs):

  1. Launch the AMI instance I want.
  2. Modify it by adding all the applications I need and configuring everything.
  3. Save the running “instance” to what is called an S3 storage “bucket” using the free AWS EC2 AMI tools.
  4. Re-register my saved “image” as a NEW Amazon AMI (once registered with AWS, I’ll be able to LAUNCH it from the AWS Management Console like any other AWS AMI).
  5. LAUNCH my new image like any other AWS AMI:
    1. tell AWS how many “instances” (i.e. # of virtual machines)
    2. tell AWS what size server (32/64-bit, small up to Extra Large)
    3. assign my firewall/access lists to the new instance
    4. create and assign an AWS Elastic IP address to MY “instance” (simple; takes 2 seconds)
  6. Once it’s in a “running” state, just use the AWS cloud-based server.
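For the curious, steps 3-5 above roughly correspond to the following EC2 AMI/API tool commands. This is a hedged sketch: every ID, bucket name, and credential path here is a placeholder, not a real value.

```shell
# 3. Bundle the running instance's root volume and upload it to an S3 bucket
ec2-bundle-vol -d /mnt -k pk-XXXX.pem -c cert-XXXX.pem -u 111122223333
ec2-upload-bundle -b my-ami-bucket -m /mnt/image.manifest.xml \
    -a "$AWS_ACCESS_KEY" -s "$AWS_SECRET_KEY"

# 4. Register the saved bundle as a new AMI (returns an ami-... id)
ec2-register my-ami-bucket/image.manifest.xml

# 5. Launch the new AMI with a security group, then attach an Elastic IP
ec2-run-instances ami-xxxxxxxx -n 1 -t m1.small -g my-security-group
ec2-allocate-address
ec2-associate-address -i i-xxxxxxxx 203.0.113.10
```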

Elastic IP Addresses – Elastic IP addresses are static IP addresses designed for dynamic cloud computing.
An Elastic IP address is associated with your account not a particular instance, and you control that address until you choose to explicitly release it.
Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or Availability Zone failures by programmatically
remapping your public IP addresses to any instance in your account. Rather than waiting on a data center technician to reconfigure or replace your host,
or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by
quickly remapping your Elastic IP address to a replacement instance.

By the way, in case this isn’t obvious: got a new school that needs to be set up?

Other than the USB drives for the kids and some kind of computer for them to use, the server can take only minutes to set up, and there’s no physical installation involved!!!

Finally, I use my local machine with the NX client software to log in and I get a desktop… and it’s all PFM… magic!

Today (right now) I’m writing this while I have 4 AWS servers running that I am testing.

On my desk is a Lenovo T61p laptop

  • Dual Core
  • 4 Gig RAM

next to it I have an ASUS 1000HE Netbook

  • Atom processor
  • 1 G RAM

Both machines booted off of a USB.

I next used the NX client software to log into one of my AWS desktop servers from each machine and started working.

Performance is exactly the same on both clients (well, the ASUS display can only do 1024×600).

I wrote this in my AWS desktop server session on the ASUS while several sessions on the Lenovo were doing some other things for me.

I’d really like to get more of the Linux K-12 and K-20 community trying this so we can all share more of what we are doing for the education of our kids.

Let me know if any of you would like more pointers or information; as I said, I’d like some folks to work with on all of this.

I’ve also got some pretty cool AWS based solutions for the “Windows” in your life…

Hope you found this interesting!

Brian Mullan



  1. Interesting layout. I am working on a project that could benefit from parts of your layout. How many clients can login to one instance at a time before you need to auto-scale up and create clones?

    Secondly, from the first time I read this article you changed from FreeNX to X2go. Is X2go really better?

    Great blog. Thanks.


    Comment by Rodman Henley — November 6, 2010 @ 9:54 pm

    • Rod

      2nd question first.

      FreeNX works but Client audio was a constant problem to get working. I had the same problems using NoMachine’s NX client & server.

      FreeNX is also relatively easy to set up for most folks familiar with Linux, but if not, editing the /etc/nxserver/node.conf file can be a bit overwhelming.

      I happened on x2go and found that installing it on server and client was almost too simple… and audio was solid and worked all the time.

      Their new release of x2go is nearing completion and will include a Java plugin so you can just use, say, Firefox as your desktop “client”.
      Printing, file shares, audio: it all worked, so I was satisfied, and until I see something better come along x2go works for me.

      First question, about scaling: that’s going to be hard to answer because it depends on several variables.

      1) In the cloud, which “size” of virtual machine “instance” are you using for the desktop “server”?
      On Amazon those machine sizes range from:
      Note: one EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.

      Micro Instance: 613 MB of memory, up to 2 ECUs (for short periodic bursts), EBS storage only, 32-bit or 64-bit platform

      up to

      High-CPU Extra Large Instance: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform


      Then there is always the consideration of “what” the users are doing. Does it require more CPU, more memory, or lots of disk I/O?

      You would have to test it in your environment to find out. Luckily, to get your feet wet with AWS you can start with just the “micro” instances, as they are only $0.02 per hour,
      so you can make lots of mistakes and it doesn’t cost you much. I’ve tried x2go on a “micro” instance and it worked OK. I wouldn’t expect one micro instance to support more than a couple of users, though.

      Also, for AWS when to scale up or down is configurable by you, see –

      Using Auto Scaling

      Getting started with using Auto Scaling is easy. If you are signed up for the Amazon EC2 service, you are automatically registered for Auto Scaling. You simply:

      * Download the Auto Scaling Command Line Tools from Amazon EC2 API tools.
      * Use the as-create-launch-config command to create a Launch Configuration for your Auto Scaling Group. A Launch Configuration captures the parameters necessary to launch new Amazon EC2 instances.
      * Use the as-create-auto-scaling-group command to create an Auto Scaling Group. An Auto Scaling Group is a collection of Amazon EC2 instances to which you want to apply certain scaling conditions.
      * Use the as-create-or-update-trigger command to define the conditions under which you want to add or remove Amazon EC2 instances within the Auto Scaling Group. You can define conditions based on any metric that Amazon
      CloudWatch collects. Examples of metrics on which you can set conditions include average CPU utilization, network activity or disk utilization.
      * Auto Scaling tracks when your conditions have been met and automatically takes the corresponding scaling action on your behalf.

      All the commands mentioned above are also available as Auto Scaling APIs.
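      As a concrete (hypothetical) example, the steps above might look like this for a school desktop server. The group and trigger names, AMI id, sizes, and CPU thresholds are all examples you would tune yourself:

      ```shell
      # Launch configuration: what each new instance looks like
      as-create-launch-config school-lc --image-id ami-xxxxxxxx --instance-type m1.small

      # Auto Scaling group: 1 server overnight, up to 10 during the school day
      as-create-auto-scaling-group school-asg --launch-configuration school-lc \
          --availability-zones us-east-1a --min-size 1 --max-size 10

      # Trigger: add a server when average CPU tops 70%, remove one below 30%
      as-create-or-update-trigger school-trigger --auto-scaling-group school-asg \
          --namespace "AWS/EC2" --measure CPUUtilization --statistic Average \
          --dimensions "AutoScalingGroupName=school-asg" --period 60 \
          --breach-duration 300 --lower-threshold 30 --lower-breach-increment=-1 \
          --upper-threshold 70 --upper-breach-increment 1
      ```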

      I should mention that you can use Eucalyptus or Canonical’s Ubuntu Enterprise Cloud (UEC) and do everything I’ve mentioned in your own “private” cloud to test things out.
      UEC utilizes Eucalyptus, and both are very compatible with AWS’s public cloud.

      Comment by bmullan — November 7, 2010 @ 4:47 pm

      • Brian,

        Thanks for the further explanation. I look forward to testing the AWS servers with several different scenarios.

        I am stuck on one issue though. I created 2 versions of a USB thumb drive. With the UNetbootin program, the USB boot is quick but there is no persistence. As a result, as you would expect, I can’t save any changes or keep any downloads. I then created a bootable USB using the Universal USB Installer. That program enabled a persistent USB of up to 8 GB; however, the boot is much slower and brings me to an additional screen asking if I want to “try Ubuntu” or “install Ubuntu on HD”. When I choose “Try Ubuntu”, it takes up to 10 minutes to get to the Ubuntu desktop.

        Do you have any suggestions as to how I can create a persistent USB? And I must mention that I am a novice with the command line.

        There seem to be ample opportunities for employing this type of architecture using x2go and AWS if the response time is minimal over an average Internet connection (2-5 Mbps down and 500 kbps up).

        My email is

        Rod Henley

        Comment by Rodman Henley — November 14, 2010 @ 2:51 pm

  2. Rod

    Just so you can read about different USB options: are you familiar with PenDrive Linux and their website?

    Lots of good information and tools to help build USB boot systems as well as install systems.

    They also have info on how to optimize performance.

    You also need to make sure you are using at least a usb 2.0 port and device. Some ports on some machines are still the earlier 1.0 USB.

    As to USB I assume it is only a matter of time till we also see the new USB 3.0 interfaces and USB 3.0 memory sticks.

    That ought to be interesting with the 10x speed improvement.

    Last thoughts on the USB pen drive boot time. You can make a huge difference if all you really want to do is create the smallest OS you can that will just launch an x2go client (i.e. if all the work gets done on the remote server).

    If you don’t need lots of apps installed on the USB, then just do the smallest install you can, with a minimal GNOME or KDE desktop, and then install the x2go client.

    Since you wouldn’t have tons of other software and drivers installed, that would boot much faster. But as I said, it depends on what you are trying to do.
    The PenDrive Linux site can give you ideas.
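    On the persistence question specifically: one common approach with Ubuntu live USBs at the time was a casper-rw loopback file in the root of the stick. A rough sketch (the mount point and size below are examples, and the USB’s kernel boot line also needs the “persistent” option):

    ```shell
    # Create a 1 GB persistence file named casper-rw on the USB stick,
    # then format it ext3; Ubuntu's casper live-boot uses it automatically
    # when the kernel command line includes "persistent".
    dd if=/dev/zero of=/media/usbdisk/casper-rw bs=1M count=1024
    mkfs.ext3 -F /media/usbdisk/casper-rw
    ```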

    x2go to another system which is not on the same LAN (i.e. over the Internet) works very well, with the only exception being streaming video, but that is a problem with everything I’ve tried so far. Given a beefy enough server machine and a fast Internet connection, though, even the video can be made acceptable. The recent release of the libjpeg-turbo code also helped make a big difference.

    2-5 Mbps down and 500 kbps up should work OK.

    In the x2go client’s setup there is a setting that I initially didn’t understand on NoMachine’s NX but learned the secret of.

    This setting works the same way with x2go.

    In the x2go client, click on Session Preferences, then look for the connection speed slider, which ranges from MODEM to LAN.

    The farther the slider is to the left (toward MODEM), the greater the compression of the desktop stream will be, which means greater CPU use at both server and client.

    Sliding it all the way to LAN means minimal or no compression is done, so it requires less CPU at the server and the client.

    You have to try different settings to see what works best for you.

    At my home I have 15 Mbps down and 2 Mbps up, so I’ve kept my setting at ‘LAN’. I should probably try WAN and do a couple of tests to see if it’s better or worse.

    Comment by bmullan — November 14, 2010 @ 4:57 pm
