What Everyone Knows About Cloud Computing

Cloud computing is a buzzword that has erupted over the past few years, but there is a wide gap between the features dreamed up by marketing teams and the actual advantages and disadvantages of the technology.

Due to the nature of the beast, the details of cloud computing are often completely invisible to the end user; instead, the developer and the server administrator enter into a symbiotic relationship in which both parties take on some extra work in exchange for gains in efficiency.

Cloud computing works by distributing computation and storage across clusters of servers. This differs from grid computing, which is usually concerned with tasks that demand enormous processing power. Cloud computing, by contrast, is about maximizing hardware utilization: in order to have enough computational power and bandwidth to survive huge spikes in traffic, a traditional dedicated server often runs at around 10% of its capacity.

Under this new paradigm a server can run with a much smaller safety margin. When traffic climbs, the services running on these servers typically allocate another spot or two in the cloud, often by adding new “instances” (virtual machines), or they scale up the resources allotted to the instances already running.
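As a rough illustration of that scaling decision, here is a minimal Python sketch; the FakeCloudClient class, its methods, and the utilization thresholds are hypothetical placeholders rather than any real provider's API.

class FakeCloudClient:
    """Hypothetical stand-in for a provider SDK; real cloud APIs differ."""
    def __init__(self, instances=1):
        self.instances = instances

    def add_instance(self):
        self.instances += 1          # spin up another virtual machine

    def remove_instance(self):
        self.instances -= 1          # shut an instance down after a spike

    def resize_instances(self, cpu_factor):
        pass                         # would request larger VM sizes for real

def autoscale(client, cpu_utilization, max_instances=10):
    """Scale out on high load, scale back in when the spike passes."""
    if cpu_utilization > 0.80:
        if client.instances < max_instances:
            client.add_instance()            # horizontal: add an instance
        else:
            client.resize_instances(1.5)     # vertical: grow existing ones
    elif cpu_utilization < 0.20 and client.instances > 1:
        client.remove_instance()

client = FakeCloudClient()
autoscale(client, cpu_utilization=0.92)      # a traffic spike arrives
print(client.instances)                      # -> 2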

The main advantage here for the developer is agility: a cheap instance can be purchased for developing the web application, then scaled up almost instantly to serve an audience of any size. Costs are also cut across the board, because more paying developers share each set of hardware.

For the party administering the servers, the primary advantage is lower expenses: by packing many virtual machines onto a smaller number of more powerful (and more expensive) machines, the host can increase profits while reducing the energy and hardware cost per application hosted.
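To put rough numbers on that claim, the short sketch below compares the cost per application of one-app-per-server hosting against consolidation onto a shared machine; the prices and consolidation ratio are made-up figures chosen only to show the shape of the math.

# Every number below is an illustrative assumption, not real provider pricing.
dedicated_server_cost = 100.0    # monthly cost of one modest dedicated box
big_server_cost = 400.0          # monthly cost of one larger, pricier machine
vms_per_big_server = 20          # virtual machines packed onto it

cost_per_app_dedicated = dedicated_server_cost             # one app per box
cost_per_app_cloud = big_server_cost / vms_per_big_server  # shared hardware

print(f"dedicated hosting: ${cost_per_app_dedicated:.2f} per application")
print(f"consolidated VMs:  ${cost_per_app_cloud:.2f} per application")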

This was a guest article from John Klein of Rapid Application Development.