Cloud computing is the latest industry attempt to merge computing with networking. While previous efforts have all failed, the gathering evidence suggests that cloud computing may have got things right this time. Indeed it is set to have a marked effect on how enterprises do business, while driving the growth of network traffic and new switch architectures for the data centre.

In the mid-1990s, Oracle proposed putting computing within a network, and coined the term "network computer". The idea centred on a diskless desktop for businesses on which applications were served. The concept failed in its bid to dislodge Intel and Microsoft, but was resurrected during the dot-com boom with the advent of application service providers (ASPs).

ASPs delivered computer-based services to enterprises over the network. The ASPs faltered partly because applications were adapted from existing ones rather than being developed with the web in mind. Equally, the ASPs' business models were immature, and broadband access was in short supply. But the idea has since taken hold in the shape of software-as-a-service (SaaS). SaaS provides enterprises with business software on demand over the web, so a firm does not need to buy and maintain that software on its own platforms.

SaaS can be viewed as part of a bigger trend: cloud computing. Examples of cloud services include Google applications such as e-mail and online storage, and Amazon's Elastic Compute Cloud, where application developers configure the computing resources they need.

Cloudy thinking

The impact of cloud starts, but does not finish, in the IT sector. "Cloud computing is not just [for] Web 2.0 companies, it is a game-changer for the IT industry," said Dennis Quan, director of IBM's software group. "In general it's about massively scalable IT services delivered over the network."

An ecosystem of other players is required to make cloud happen. The early movers in this respect are data-centre owners and IT services companies like Amazon and IBM, and the suppliers of data-centre hardware, which include router vendors Cisco Systems and Juniper Networks, and Ethernet switch makers such as Extreme Networks and Force10 Networks.

Telecommunications carriers too are jumping on the bandwagon, which is not surprising given their experience as providers of hosting and managed services coupled with the networking expertise needed for cloud computing. International carrier AT&T, for instance, launched its Synaptic Hosting service in August 2008, a cloud-based, on-demand managed service where enterprises define their networking, computing and storage requirements, and pay for what they use. "There is a base-level platform for the [enterprise's] steady-state need, but users can tune up and tune down [resources] as required," explained Steve Caniano, vice-president, hosting and application services at AT&T.

"The top 10 operators in Europe are all adding utility-based offerings [such as storage and computing], and are moving to cloud computing by adding management and provisioning on top," said Alfredo Nulli, solutions manager for service provision at Cisco. However, it is the second- and third-tier operators in Europe that are "really going for cloud", he says, as they strive to compete with the likes of Amazon and steal a march on the big carriers.

The idea of using IT resources on a pay-as-you-go basis rather than buying platforms for in-house use is appealing to companies, especially in the current economic climate. "Enterprises are tired of over-provisioning by 150% only for equipment to sit idle and burn power," said Steve Garrison, vice-president of marketing at Force10 Networks.

Danny Dicks, an independent consultant and author of a recent Light Reading Insider report on cloud computing, agrees. But he stresses it is a huge jump from using cloud computing for application development to an enterprise moving its entire operations into the cloud. For a start-up developing and testing an application, the cost and scalability benefits of cloud are so great that it makes a huge amount of sense, he says. Once an application is running and has users, however, an enterprise is then dependent on the reliability of the connection. "No-one would worry if a Facebook application went down for an hour but it would make a big difference to an enterprise offering financial services," he commented.

The network perspective

As more and more applications and IT resources sit somewhere remote from the user, the network grows in importance and the demand for bandwidth rises. That is good news for operators and equipment makers.

Done right, cloud computing gives telecoms operators a tremendous opportunity to increase the value of their networks and create new revenue streams. At a minimum, it is expected to increase the amount of traffic on their networks.

Service providers stress the need for high-bandwidth, low-latency links to support cloud-based services. AT&T has 38 data centres worldwide, which are connected via its 40 Gbit/s MPLS global backbone network, says Gregg Sexton, AT&T's director of product development. The carrier is concentrating Synaptic Hosting applications in five "super data centres" located across three continents, linked using its OPT-E-WAN virtual private LAN service (VPLS). Using the VPLS, enterprise customers can easily change bandwidth assigned between sites and to particular virtual LANs.

BT, which describes its data centres and network as a "global cloud", also highlights the potential need for higher capacity links. "The big question we are asking ourselves is whether to go to 40 Gbit/s or wait for 100 Gbit/s," said Tim Hubbard, head of technology futures, BT Design.

Likewise, systems vendors are seeing the impact of cloud computing. Ciena first noted interest from large data-centre players seeking high-capacity links some 12 to 24 months ago. "It wasn't a step jump, more an incremental change in the way networks were being built and who was building them," said John-Paul Hemingway, chief technologist, EMEA for Ciena.

Cloud is also having an impact on access network requirements, he says. Operators need to be able to change dynamically the bandwidth allocated to each application over an enterprise's connection. Services such as LAN, video conferencing and data back-up need different priorities at different times of day, which requires technologies such as virtual LANs with quality-of-service and class-of-service settings.
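At the packet level, those class-of-service settings ride in the three-bit priority field of the 802.1Q VLAN tag. The following minimal sketch, written with the scapy packet library, shows how two services might be tagged with different VLANs and priorities; the VLAN IDs, priority values and addresses are illustrative assumptions, not any operator's actual configuration.

```python
# A minimal sketch of 802.1Q class-of-service tagging using scapy.
# VLAN IDs, priority code points (PCP) and addresses are illustrative only.
from scapy.all import Ether, Dot1Q, IP

# Video conferencing: latency-sensitive, so it gets a high priority (PCP 5).
video_frame = Ether() / Dot1Q(vlan=200, prio=5) / IP(dst="192.0.2.10")

# Overnight data back-up: bulk traffic, tagged with a low priority (PCP 1).
backup_frame = Ether() / Dot1Q(vlan=300, prio=1) / IP(dst="192.0.2.20")

# Switches along the path can queue and schedule the two flows differently
# based purely on the VLAN tag, without inspecting the payload.
print(video_frame.summary())
print(backup_frame.summary())
```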

German vendor ADVA Optical Networking has also noticed rising demand for enterprise connectivity through sales of its FSP-150 Ethernet access product, which may be driven in part by cloud-based services. Computing over long distances is also driving the need to carry Infiniband natively over a DWDM lightpath. "Infiniband is used for computing nodes due to its highest connectivity and lowest latency," explained Christian Illmer, ADVA's director of business development.

Virtualization virtues

Cloud computing is also starting to influence the evolution of the data centre. One critical enabling technology for cloud computing in the data centre is virtualization: the separation of a software function from the underlying hardware, so that the hardware can be shared among different workloads without the user being aware. Networks, storage systems and server applications can all be "virtualized", giving end-users a personal view of their applications and resources, regardless of the network, storage or computing device they are physically using.
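The essence of that separation fits in a few lines of code. The toy Python sketch below is purely illustrative, with class and method names invented for this example rather than taken from any hypervisor's API: the tenant holds only a virtual handle, while a management layer decides, and can silently change, the physical placement.

```python
# A toy illustration of virtualization: the tenant sees a stable virtual
# handle, while the pool controls (and may change) physical placement.
# All names here are invented for illustration.

class VirtualResourcePool:
    def __init__(self, physical_hosts):
        self.hosts = list(physical_hosts)
        self.placement = {}  # virtual machine name -> physical host

    def provision(self, vm_name):
        # Place the new VM on the least-loaded host.
        loads = {h: 0 for h in self.hosts}
        for host in self.placement.values():
            loads[host] += 1
        self.placement[vm_name] = min(loads, key=loads.get)
        return vm_name  # the tenant only ever sees this virtual handle

    def migrate(self, vm_name, new_host):
        # Placement can change for load balancing or maintenance;
        # the tenant's view of the resource is unaffected.
        self.placement[vm_name] = new_host

pool = VirtualResourcePool(["host-a", "host-b"])
vm = pool.provision("tenant1-web")  # tenant neither knows nor cares
pool.migrate(vm, "host-b")          # which physical machine serves it
```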

Virtualization enables multiple firms to share the same SaaS application while maintaining their own unique data, compute and storage resources. It has also led to a significant improvement in the utilization of servers and storage, which has traditionally languished at a paltry 10 to 15%.

However, virtualization remains just one of several components needed for cloud computing. A separate management-tools layer is also needed to ensure that IT resources are efficiently provisioned, used and charged for. "This reflects the main finding of our report, that the cloud-computing world is starting to stratify into clearly defined layers," said Dicks.

Such management software can also shift applications between platforms to balance loads. An example is moving what is called a virtual machine image between servers. A virtual machine image may comprise 100 GB of storage, middleware and application software. "If [the image] takes up 5% of a server's workload, you may consolidate 10 or 20 such images onto a single machine and save power," said IBM's Quan.
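Quan's arithmetic is easy to reproduce. The back-of-envelope Python below uses the figures from his example plus an assumed utilization ceiling; the numbers are illustrative, not IBM's sizing rules.

```python
# Back-of-envelope consolidation arithmetic, following Quan's example.
# The 80% utilization ceiling is an assumption added for illustration.
import math

images = 20            # virtual machine images, one per lightly-used server
load_per_image = 0.05  # each image occupies ~5% of a server's workload
ceiling = 0.80         # don't pack hosts beyond 80%, leaving headroom

servers_before = images
servers_after = math.ceil(images * load_per_image / ceiling)  # = 2

print(f"{servers_before} servers consolidate onto {servers_after}, "
      f"idling {servers_before - servers_after} machines and their power draw")
```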

Force10's Garrison notes that firms issuing requests for proposals for new data centres typically don't mention cloud directly. Instead they ask questions like "Help me see how you can move an application from one rack to another, or between adjacent rows, or even between adjacent data centres 50 miles apart", he said.

Clearly, shuffling applications between servers and between data centres will drive bandwidth requirements. It also helps to explain why vendors are exploring how to consolidate and simplify the switching architecture within the data centre.

"Everything is growing exponentially, whether it is the number of servers and storage installed each year or the amount of traffic," said Andy Ingram, vice-president of product marketing and business development, data-centre business group at Juniper Networks. "The data centre is becoming a wonderful, dynamic and scary place."

This explains why vendors such as Juniper are investigating how current tiered Ethernet switching within the data centre — passing traffic between the platforms and users — can be adapted to handle the expected growth in data-centre traffic. Such growth will also strain connections between equipment: between servers, and between the servers and storage.

According to Ingram the first approach is to simplify the existing architecture. With this in mind, Juniper is looking to collapse the tiered switching from three layers to two by linking its top-of-rack switches in a loop. Longer term, vendors are investigating the development of a single-tier switch in a project code-named Stratus. "We are looking to develop a scalable, flat, non-blocking, lossless data-centre fabric," said Ingram.

A flat fabric means processing a packet only once, while a non-blocking architecture removes the possibility of congestion. Such a switch fabric will scale to hundreds or even thousands of 10 Gigabit Ethernet access ports, says Ingram, who stresses that Juniper is in the first year of what will be a multi-year project to develop such an architecture.
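The scale Ingram describes falls out of simple port arithmetic. The sketch below works through a generic two-stage folded-Clos (leaf-and-spine) design built from identical switch elements; the 64-port radix is an assumption for illustration, and the topology is a textbook construction, not Juniper's undisclosed Stratus design.

```python
# Port arithmetic for a generic non-blocking two-stage folded-Clos fabric.
# The 64-port switch radix is assumed for illustration.
radix = 64                    # 10GbE ports per switch element

# Non-blocking operation needs as much uplink as access capacity,
# so each leaf splits its ports evenly between hosts and spines.
hosts_per_leaf = radix // 2   # 32 access ports per leaf
spines = radix // 2           # one uplink from each leaf to each of 32 spines
leaves = radix                # each spine port terminates one leaf

total_ports = leaves * hosts_per_leaf
print(f"{leaves} leaves x {hosts_per_leaf} ports = {total_ports} "
      f"non-blocking 10GbE access ports")   # 64 x 32 = 2048
```

Even with modest 64-port elements, a two-stage fabric reaches a couple of thousand non-blocking access ports, consistent with the "hundreds or even thousands" Ingram cites.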

Data centre convergence

Another development is Fibre Channel over Ethernet (FCoE), which promises to consolidate the various networks that run within a data centre. At present, servers connect to the LAN using Ethernet and to storage via Fibre Channel. This requires separate cards within the server: a LAN network interface card and a host-bus adapter for storage. FCoE promises to enable Ethernet, and one common converged network adapter card, to be used for both purposes. But this requires a new variant of Ethernet to be adopted within the data centre. Such an Ethernet development is already being referred to by a variety of names in the industry: Data Centre Ethernet, Converged Enhanced Ethernet, lossless Ethernet, and Data Centre Bridging.
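At the wire level the convergence is simple: the Fibre Channel frame travels as the payload of an ordinary Ethernet frame, distinguished only by the FCoE Ethertype (0x8906). The scapy sketch below illustrates the idea; the addresses are illustrative and the payload is a placeholder rather than a real encapsulated FC frame.

```python
# A minimal sketch of FCoE framing with scapy: storage traffic becomes just
# another Ethertype on the converged adapter. Addresses are illustrative and
# the payload is a placeholder, not a real Fibre Channel frame.
from scapy.all import Ether, IP, Raw

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned Ethertype for FCoE

nic = dict(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")

# LAN traffic: a normal IP-over-Ethernet frame.
lan_frame = Ether(type=0x0800, **nic) / IP(dst="192.0.2.30")

# Storage traffic: the (placeholder) Fibre Channel frame rides as payload.
san_frame = Ether(type=FCOE_ETHERTYPE, **nic) / Raw(load=b"\x00" * 36)

for frame in (lan_frame, san_frame):
    print(frame.summary())
```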

Lossless Ethernet could then carry Fibre Channel traffic: the storage protocol's key merit is that it never drops frames, a guarantee that standard Ethernet, which discards packets under congestion, cannot make. Such a development would remove one of the three main protocols in the data centre, leaving Ethernet to challenge Infiniband. But even though FCoE has heavyweight backers in the shape of Cisco and Brocade, it will probably be some years before a sole switching protocol rules the data centre.

Equipment makers believe they can benefit from the widespread adoption of cloud computing, at least in the short term. Although virtualization and ever more enterprises sharing hardware will bring efficiencies, these will be eclipsed by the boost that cloud services give to IT in general, meaning more datacoms equipment will be sold, not less. Longer term, however, cloud will probably dent hardware sales as fewer firms choose to invest in their own IT.

IBM's Quan notes that enterprises themselves are considering the adoption of cloud capabilities within their private data centres due to the efficiencies it delivers. The company thus expects to see growth of such "private" as well as "public" cloud-enabled data centres.

Dicks believes that cloud computing has a long road map. There will be plentiful opportunities for companies to deliver innovative products for cloud, from software and service support to underlying platforms, he says.

Further information

Cloud Computing: A Definition

Cloud computing is a term that hints at its meaning. And like a cloud, it is hard to pick out the detail. The term implies a user's applications and IT resources reside somewhere in the network "cloud". But the "computing" moniker is misleading, implying servers and compute nodes only, when in fact cloud computing encompasses storage and networking too.

Steve Garrison, vice-president of marketing at Force10 Networks, offers a straightforward description of cloud: accessing applications and resources and not caring where they reside.

Danny Dicks, an independent consultant, has come up with a more rigorous definition. He classifies cloud computing as "the provision and management of rapidly scalable, remote, computing resources, charged according to usage, and of additional application development and management tools, generally using the internet to connect the resources to the user."