Nowadays, while everyone is familiar with the term “cloud computing”, very few people truly understand what the term means or how the technology works. Although we continuously hear about cloud computing from advertisers, managers and salespeople, how many of us really know what it means when something is “in the cloud”? Or where the term came from? This series of articles will explore the history of cloud computing, explain some of the technologies behind the concept and look at some of the latest developments.

What is a “cloud”?

The use of “cloud” to describe a nebulous cluster of resources is not new. In the current sense, it first came to prominence when Amazon launched their Elastic Compute Cloud (EC2) service in 2006. Companies could sign up for EC2 and dynamically create virtual computer servers and storage to help run their backend systems. This time-sharing of computer resources allowed small companies to access server power that would otherwise be beyond their reach. In turn, this led to the enormous growth in smartphone applications, most of which rely on some form of cloud services.

As the concept has evolved, “cloud” has come to refer to any computing service that is hosted remotely and globally accessible. Many of these services simply provide remote storage (e.g. iCloud, Dropbox), while others provide complete virtual servers (e.g. Microsoft Azure or Google Cloud Platform) or scalable SaaS (Software as a Service) solutions (e.g. BambooHR or Slack). None of these could have existed without the invention of virtualisation.

Virtualisation

The concept of virtualisation has been around for almost as long as modern computing. Virtualisation allows you to take one host computer or server and share its physical resources between several virtual guest machines. This is achieved using a specialised piece of software called a “hypervisor”, which presents the physical hardware as virtual devices to each guest operating system (OS). In a pure virtualised environment, each guest OS believes it is running in isolation on its own dedicated hardware. When the hypervisor sits directly between the guest OS and the physical hardware, with no host OS underneath, it is referred to as a Type 1 (or “bare-metal”) hypervisor.
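This separation is not always invisible to software. On Linux guests, for example, the CPU advertises a “hypervisor” flag that the kernel exposes through /proc/cpuinfo, so a program can check whether it is running on virtual hardware. As a minimal sketch (Linux-only, and assuming /proc is mounted; not part of any hypervisor itself):

```python
def running_under_hypervisor(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU reports the 'hypervisor' flag.

    On x86, a hypervisor sets a CPUID bit that the Linux kernel
    surfaces as a 'hypervisor' entry in each core's flags list.
    """
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags") and "hypervisor" in line.split():
                    return True
    except OSError:
        pass  # not Linux, or /proc is unavailable
    return False


if __name__ == "__main__":
    if running_under_hypervisor():
        print("This program appears to be running inside a virtual machine.")
    else:
        print("No hypervisor flag found (bare metal, or a non-Linux host).")
```

Note that a Type 1 hypervisor can choose to hide this flag from its guests, so the check is indicative rather than definitive.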

One of the first commercially successful hypervisors was VMware Workstation, released in 1999 and followed by the server-focused GSX in 2001. Both are so-called Type 2 hypervisors: rather than sitting directly on the hardware, they are hosted by a native (host) operating system. However, this can cause problems when there is resource contention on the host (either between the guest OSes or with the host OS itself). Paravirtualisation addresses this issue by letting the guest OS know that it is in a shared environment and providing it with drivers that allow it to communicate with the underlying host OS or hypervisor.

The first commercially successful paravirtualised hypervisor was Xen, which came out of the University of Cambridge Computer Laboratory. Xen (whose commercial arm, XenSource, was acquired by Citrix) rose to prominence when its software was used by Tesco to significantly cut the costs of its data centre operations.

Nowadays, there are a number of different flavours of hypervisor: some pure Type 1, such as VMware ESX or Microsoft Hyper-V; some Type 2, such as VirtualBox or Parallels; and some hybrid, such as Xen or Linux KVM. The choice of hypervisor will usually depend on the nature of the virtual servers you want to run, and most modern hypervisors are agnostic to the guest operating system.

The modern cloud

Amazon’s introduction of EC2 opened the floodgates. Major competitors soon launched their own cloud computing services, with Microsoft launching Azure in 2008 and Google launching its Cloud Platform in 2011. Meanwhile, many smaller companies began offering specialised cloud services, driving prices further and further down; you can now rent a virtual server for just a few pounds a month. These services are now known generically as Infrastructure as a Service (IaaS).
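Under the hood, IaaS providers expose their provisioning operations through web APIs. As an illustrative sketch of the general shape (the endpoint, field names and identifiers below are hypothetical, not any real provider’s API), a “launch a server” operation typically boils down to an authenticated HTTP POST with a small JSON body describing the machine you want:

```python
import json


def build_launch_request(name, image, size, region, api_token):
    """Assemble the pieces of a typical IaaS 'create server' call.

    Real providers differ in field names and authentication schemes,
    but the overall shape is broadly the same: an authenticated POST
    with a JSON body describing the desired virtual machine.
    """
    return {
        "method": "POST",
        # Hypothetical endpoint, for illustration only.
        "url": f"https://api.example-cloud.test/v1/regions/{region}/servers",
        "headers": {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "name": name,    # a label for the new server
            "image": image,  # an OS image identifier
            "size": size,    # a machine-type/flavour name
        }),
    }


request = build_launch_request(
    name="web-1",
    image="ubuntu-22.04",
    size="small-1cpu-1gb",
    region="eu-west",
    api_token="example-token",
)
print(request["method"], request["url"])
```

In practice you would rarely build such requests by hand; each provider ships an SDK (for example, boto3 for AWS) that wraps calls like this, but the request–response pattern underneath is essentially what is shown here.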

Modern cloud computing companies offer a range of different services. These include virtual storage, virtual networks and specialised database services. To get an idea of the huge range on offer, visit the Amazon Web Services page. These services can be used to construct extremely complex products and are the driving force behind the phenomenal growth of Software as a Service.

In the next instalment of our series on Cloud Computing, we will be taking a look at the physical infrastructure needed to run cloud services.

Anything to add? Questions or suggestions?

Get in touch with us at or via Twitter at @Kapitalise!
