ADVA's 100 Terabit data centre interconnect platform  
Tuesday, June 9, 2015 at 10:25AM
Roy Rubenstein in 16-QAM, 8-QAM, ADVA Optical Networking, CloudConnect, EDFA, Jim Theodoras, OFC 2015, QuadFlex, backplane, cluster, pod, recipe

ADVA Optical Networking has unveiled its FSP 3000 CloudConnect, a data centre interconnect product designed to cater for the needs of different data centre players. The company has developed several platform sizes to address the workload and bandwidth needs of data centre operators such as internet content providers, communications service providers, enterprises, cloud and colocation players.

Certain internet content providers want to scale the performance of their computing clusters across their data centres. A cluster is a distributed computing grouping comprising a defined number of virtual machines and processor cores (see Clusters, pods and recipes explained, below). Yet there are also data centre operators that only need to share limited data between their sites.

ADVA Optical Networking highlights two internet content providers - Google and Microsoft with its Azure cloud computing and services platform - that want their distributed clusters to act as one giant global cluster.

“The performance of the combined clusters is proportional to the bandwidth of the interconnect,” says Jim Theodoras, senior director, technical marketing at ADVA Optical Networking. “No matter how many CPU cores or servers, you are now limited by the interconnect bandwidth.”

ADVA Optical Networking cites a Google study that involved running an application on different cluster configurations, starting with a single cluster; then two, side-by-side; two clusters in separate buildings through to clusters across continents. Google claimed the distributed clusters only performed at 20 percent capacity due to the limited interconnect bandwidth. “The reason you are hearing these ridiculous amounts of connectivity, in the hundreds of terabits, is only for those customers that want their clusters to behave as a global cluster,” says Theodoras.

Yet other internet content providers have far more modest interconnect demands. ADVA cites one, as large as the two global cluster players, that wants only 1.2 terabits per second (Tbps) between its sites. “It is normal duplication/replication between sites,” says Theodoras. “They want each campus to run as a cluster but they don’t want their networks to behave as a global cluster.”

 

FSP 3000 CloudConnect

The FSP 3000 CloudConnect has several configurations. The company stresses that it designed CloudConnect as a high-density, self-contained platform that is power-efficient and that comes with advanced data security features. 

All the CloudConnect configurations use the QuadFlex card, which has an 800 Gigabit throughput: up to 400 Gigabit of client-side interfaces and a 400 Gigabit line rate.

The QuadFlex card is thin, measuring only half a rack unit (RU). Up to seven can be fitted in ADVA’s four-rack-unit (4 RU) platform, dubbed the SH4R, for a line-side transport capacity of 2.8 Tbps. The SH4R’s remaining eighth slot hosts one or two management controllers.

The QuadFlex line-side interface supports various rates and reaches, from 100 Gigabit ultra long-haul to 400 Gigabit metro/regional, in increments of 100 Gigabit. Two carriers, each using polarisation-multiplexed, 16-state quadrature amplitude modulation (PM-16QAM), are used to achieve the 400 Gbps line rate, whereas for 300 Gbps, 8-QAM is used on each of the two carriers.
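The article does not give the underlying symbol rate, but the quoted line rates are consistent with coherent designs of that era running at roughly 32 Gbaud per carrier with around 28 percent FEC and framing overhead. The short Python sketch below simply re-derives the 400 Gbps and 300 Gbps figures under those assumed numbers; it is a back-of-envelope cross-check, not anything from ADVA.

# Back-of-envelope line-rate check for the QuadFlex dual-carrier formats.
# Assumptions (not stated in the article): ~32 Gbaud per carrier and roughly
# 28% combined FEC/framing overhead, typical of coherent designs of that era.

SYMBOL_RATE_GBAUD = 32        # assumed symbol rate per carrier
OVERHEAD = 1.28               # assumed FEC + framing expansion factor
POLARISATIONS = 2             # polarisation multiplexing doubles bits per symbol
CARRIERS = 2                  # QuadFlex uses two carriers per line interface

def net_line_rate_gbps(bits_per_symbol):
    """Net payload rate for two polarisation-multiplexed carriers."""
    raw = bits_per_symbol * POLARISATIONS * SYMBOL_RATE_GBAUD * CARRIERS
    return raw / OVERHEAD

print(f"PM-16QAM (4 bits/symbol): ~{net_line_rate_gbps(4):.0f} Gbps")  # ~400 Gbps
print(f"PM-8QAM  (3 bits/symbol): ~{net_line_rate_gbps(3):.0f} Gbps")  # ~300 Gbps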

 

“The reason you are hearing these ridiculous amounts of connectivity, in the hundreds of terabits, is only for those customers that want their clusters to behave as a global cluster” 

 

The advantage of 8-QAM, says Theodoras, is that it offers “almost 400 Gigabit of capacity” yet it can span continents. ADVA is sourcing the line-side optics but uses its own code for the coherent DSP-ASIC and module firmware. The company has not confirmed the supplier, but the design matches Acacia's 400 Gigabit coherent module announced at OFC 2015.

ADVA says the CloudConnect 4 RU chassis is designed for customers that want a terabit-capacity box. To achieve a terabit link, three QuadFlex cards and an Erbium-doped fibre amplifier (EDFA) can be used. The EDFA is a bidirectional amplifier design that includes an integrated communications channel and enables the 4 RU platform to achieve ultra long-haul reaches. “There is no need to fit into a [separate] big chassis with optical line equipment,” says Theodoras. Equally, data centre operators don’t want to be bothered with mid-stage amplifier sites.         

Some data centre operators have already installed 40 dense WDM channels at 100 GHz spacing across the C-band, which they want to keep. ADVA Optical Networking offers a 14 RU configuration that uses three SH4R units, an EDFA and a DWDM multiplexer to enable a capacity upgrade. The three SH4R units house a total of 20 QuadFlex cards, fitting 200 Gigabit into each of the 40 channels for an overall transport capacity of 8 terabits.

ADVA CloudConnect configuration supporting 25.6 Tbps line side capacity. Source: ADVA Optical Networking

The last CloudConnect chassis configuration is for customers designing a global cluster. Here the chassis has 10 SH4R units housing 64 QuadFlex cards to achieve a total transport capacity of 25.6 Tbps and a throughput of 51.2 Tbps.   

Also included are two EDFAs and a 128-channel multiplexer. Two EDFAs are needed because of the optical loss associated with the high channel count, with each EDFA handling 64 of the channels. “For the [14 RU] 40 channels [configuration], you need only one EDFA,” says Theodoras.

The vendor has also produced a similar-sized configuration for the L-band. Combining the two 40 RU chassis delivers 51.2 Tbps of transport and 102.4 Tbps of throughput. “This configuration was built specifically for a customer that needed that kind of throughput,” says Theodoras.
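The capacity figures quoted for the various chassis options follow directly from the card counts and the per-card figures given earlier (400 Gbps line side and 800 Gbps total throughput per QuadFlex card). The Python sketch below simply re-derives them as an arithmetic cross-check.

# Cross-check of the quoted CloudConnect capacities from the per-card figures:
# each QuadFlex card provides 400 Gbps line side and 800 Gbps total throughput.

LINE_GBPS_PER_CARD = 400
THROUGHPUT_GBPS_PER_CARD = 800

configs = {
    "4 RU (single SH4R, 7 cards)": 7,
    "14 RU (three SH4Rs, 20 cards)": 20,
    "40 RU (ten SH4Rs, 64 cards)": 64,
    "2 x 40 RU, C-band plus L-band (128 cards)": 128,
}

for name, cards in configs.items():
    line_tbps = cards * LINE_GBPS_PER_CARD / 1000
    thru_tbps = cards * THROUGHPUT_GBPS_PER_CARD / 1000
    print(f"{name}: {line_tbps:.1f} Tbps line side, {thru_tbps:.1f} Tbps throughput")

# Expected output: 2.8/5.6, 8.0/16.0, 25.6/51.2 and 51.2/102.4 Tbps respectively.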

Other platform features include bulk encryption. ADVA says the encryption does not impact the overall data throughput while adding only a very slight latency hit. “We encrypt the entire payload; just a few framing bytes are hidden in the existing overhead,” says Theodoras.   
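The article does not name the cipher used for the bulk encryption. The sketch below assumes AES-GCM, a common choice for wire-speed payload encryption, purely to illustrate why encrypting the whole payload need not cost throughput: the ciphertext body is exactly the same length as the plaintext, and only a small authentication tag needs a home in the existing framing overhead.

# Illustrative only: the article does not name ADVA's cipher. AES-GCM is assumed
# here to show that bulk payload encryption adds no per-bit expansion -- the
# ciphertext body matches the plaintext length, and the 16-byte tag is small
# enough to be tucked into existing frame overhead.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)

payload = os.urandom(1500)                     # e.g. one Ethernet-frame payload
sealed = aesgcm.encrypt(nonce, payload, None)  # ciphertext || 16-byte tag

print(len(payload))                 # 1500
print(len(sealed) - len(payload))   # 16 -> only the tag must fit in the overhead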

The security management is separate from the network management. “The security guys have complete control of the security of the data being managed; only they can encrypt and decrypt content,” says Theodoras.

CloudConnect consumes only 0.5 W per Gigabit. The platform does not use electrical multiplexing of data streams over the backplane. The issue with such a switched backplane is that power is consumed independent of traffic, an approach the CloudConnect designers have avoided. “The reason we save power is that we don’t have all that switching going on over the backplane.” Instead, all the connectivity comes from the front panel of the cards.

The downside of this approach is that the platform does not support any-port to any-port connectivity. “But for this customer set, it turns out that they don’t need or care about that.”     
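Taken at face value, the quoted 0.5 W per Gigabit figure extrapolates as follows; this is a simple calculation against the line-side capacities above, not a vendor power specification.

# Simple extrapolation of the quoted 0.5 W per Gigabit efficiency figure; real
# system-level power (amplifiers, controllers, cooling) may differ.
WATTS_PER_GBPS = 0.5

for line_tbps in (2.8, 8.0, 25.6, 51.2):
    kilowatts = line_tbps * 1000 * WATTS_PER_GBPS / 1000
    print(f"{line_tbps} Tbps line side -> ~{kilowatts:.1f} kW")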

 

Open hardware and software  

ADVA Optical Networking claims its 4 RU basic unit addresses a sweet spot in the marketplace. The CloudConnect also has fewer inventory items for data centre operators to manage compared with competing designs based on 1 RU or 2 RU pizza boxes, it says.

Theodoras also highlights the system’s open hardware and software design.

“We will let anybody’s hardware or software control our network,” says Theodoras. “You don’t have to talk to our software-defined networking (SDN) controller to control our network.” ADVA was part of a demonstration last year whereby an NEC and a Fujitsu controller oversaw ADVA’s networking elements.

 

“Every vendor is always under pressure to have the best thing because you are only designed in for 18 months”

 

By open hardware, ADVA means that programmers can control the optical line system used to interconnect the data centres. “We have found a way of simplifying it so it can be programmed,” says Theodoras. “We have made it more digital so that they don’t have to do dispersion maps, polarisation mode dispersion maps or worry about [optical] link budgets.” The result is that data centre operators can now access all the line elements.
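What that programmability looks like in practice is not detailed in the article; the following Python snippet is a purely hypothetical illustration of the idea, namely per-channel line-system settings exposed as plain data that an operator's own software, or any third-party SDN controller, can push to the node. None of the field names correspond to an actual ADVA interface or to the OpenConfig models.

# Hypothetical illustration only: a line-system channel exposed as plain,
# programmable data. Field names are invented for this sketch and do not
# correspond to ADVA's actual interface or to the OpenConfig models.
channel_config = {
    "channel-id": 17,
    "centre-frequency-ghz": 193_700,   # ITU C-band grid position
    "modulation": "pm-16qam",          # or "pm-8qam" for longer reach
    "target-line-rate-gbps": 400,
    "launch-power-dbm": 0.0,
    "admin-state": "enabled",
}

# An orchestrator (the operator's own software, or any third-party SDN
# controller) would push this document to the node over its open interface.
print(channel_config)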

“At OFC 2015, Microsoft publicly said they will only buy an open optical line system,” says Theodoras. Meanwhile, Google is writing a specification for open optical line systems dubbed OpenConfig. “We will be compliant with Microsoft and Google in making every node completely open.”

General availability of the CloudConnect platforms is expected at the year-end. “The data centre interconnect platforms are now with key partners, companies that we have designed this with,” says Theodoras. 

 

Clusters, pods and recipes explained

A cluster is made up of a number of virtual machines and CPU cores and is defined in software. A cluster is a virtual entity, says Theodoras, unrelated to the way data centre managers define their hardware architectures. 

“Clusters vary a lot [between players],” says Theodoras. “That is why we have had to make scalability such a big part of CloudConnect.” 

The hardware definition is known as a pod or recipe. “How these guys build the network is that they create recipes,” says Theodoras. “A pod with this number of servers, this number of top-of-rack switches, this amount of end-of-row router-switches and this transport node; that will be one recipe.”    
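As a rough illustration of the idea, a recipe can be thought of as a repeatable bill of materials for one pod. The sketch below is hypothetical and simply mirrors the elements Theodoras lists; the numbers are invented.

# Hypothetical sketch of a "recipe": a repeatable bill of materials for one pod.
# The fields mirror the elements Theodoras lists; real operators' recipes are
# internal documents and will look different.
from dataclasses import dataclass

@dataclass
class PodRecipe:
    servers: int
    top_of_rack_switches: int
    end_of_row_switches: int
    transport_node: str          # e.g. the data centre interconnect platform

recipe_2015 = PodRecipe(
    servers=960,                 # invented numbers, for illustration only
    top_of_rack_switches=48,
    end_of_row_switches=4,
    transport_node="FSP 3000 CloudConnect",
)

# A data centre is then built by stamping out pods from the current recipe.
print(recipe_2015)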

Data centre players update their recipes every 18 months. “Every vendor is always under pressure to have the best thing because you are only designed in for 18 months,” says Theodoras.   

Vendors are informed well in advance what the next hardware requirements will be, and by when their products must be ready to meet the new recipe.

In summary, pods and recipes refer to how the data centre architecture is built, whereas a cluster is defined at a higher, more abstract layer.   
