Next-gen 100 Gigabit optics
Briefing: 100 Gigabit
Part 2: Interview
Gazettabyte spoke to John D'Ambrosia about 100 Gigabit technology
John D'Ambrosia laughs when he says he is the 'father of 100 Gig'.
He spent five years as chair of the IEEE 802.3ba group that created the 40 and 100 Gigabit Ethernet (GbE) standards. Now he is the chair of the IEEE task force looking at 100 Gig backplane and copper cabling. D'Ambrosia is also chair of the Ethernet Alliance and chief Ethernet evangelist in the CTO office of Dell's Force10 Networks.
“People are also starting to talk about moving data operations around the network based on where electricity is cheapest”
"Part of the reason why 100 Gig backplane technology is important is that I don't know anybody that wants a single 100 Gig port off whatever their card is," says D'Ambrosia. "Whether it is a router, line card, whatever you want to call it, they want multiple 100 Gig [interfaces]: 2, 4, 8 - as many as they can."
Earlier this year, there was a call for interest for next-generation 100 Gig optical interfaces, with the goal of reducing their cost and power consumption while increasing port density. "This [next-generation 100 Gig optical interfaces] is going to become very interesting in relation to what is going on in the industry," he says.
Next-gen 100 Gig
The 10x10 MSA is an industry initiative that is an alternative 100 Gig interface to the IEEE 100 Gigabit Ethernet standards. Members of the 10x10 MSA include Google, Brocade, JDSU, NeoPhotonics (Santur), Enablence, CyOptics, AFOP, MRV, Oplink and Hitachi Cable America.
"Unfortunately, that [10x10 MSA] looks like it could cause potential interop issues,” says D'Ambrosia. That is because the 10x10 MSA has a 10-channel 10 Gigabit-per-second (Gbps) optical interface while the IEEE 100GbE use a 4x25Gbps optical interface.
The original 10x10 interface has a 2km reach; the MSA has since added a 10km variant, as well as 4x10x10Gbps and 8x10x10Gbps versions with a 40km reach.
The advent of the 10x10 MSA has led to an industry discussion about shorter-reach IEEE interfaces. "Do we need something below 10km?” says D’Ambrosia.
Reach is always a contentious issue, he says. When the IEEE 802.3ba was choosing the 10km 100GBASE-LR4, there was much debate as to whether it should be 3km or 4km. "I won’t be surprised if you have people looking to see what they can do with the current 100GBASE-LR4 spec; there are things you can do to reduce the power and the cost," he says.
One obvious development to reduce size, cost and power is to remove the gearbox chip. The gearbox IC translates between the 10x10Gbps electrical and the 4x25Gbps optical channels, and consumes several watts in each direction (transmit and receive). By adopting a 4x25Gbps electrical input interface, the gearbox chip is no longer needed: the electrical and optical channels are then matched in speed and channel count. The result is that 100GbE designs can fit into the upcoming, smaller CFP2 and even smaller CFP4 form factors.
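The lane-matching logic behind removing the gearbox can be sketched as follows; treating "gearbox needed" as a simple mismatch between the electrical and optical lane plans is a simplification for illustration:

```python
# Sketch of the gearbox logic described above. Lane counts are from
# the article; modelling "gearbox needed" as a lane-plan mismatch
# is a simplification.
def needs_gearbox(elec_lanes, elec_rate, opt_lanes, opt_rate):
    assert elec_lanes * elec_rate == opt_lanes * opt_rate == 100
    return (elec_lanes, elec_rate) != (opt_lanes, opt_rate)

# First-generation CFP: 10x10Gbps electrical in, 4x25Gbps optics out.
print(needs_gearbox(10, 10, 4, 25))  # True  -> gearbox, several watts each way
# CFP2/CFP4 generation: 4x25Gbps electrical in, 4x25Gbps optics out.
print(needs_gearbox(4, 25, 4, 25))   # False -> no gearbox, saving power and space
```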
As for other next-gen 100Gbps developments, these will likely include a 4x25Gbps multi-mode fibre specification and a 100 Gig, 2km serial interface, similar to the 40GBASE-FR.
The industry focus, he says, is to reduce the cost, power and size of 100Gbps interfaces rather than develop multiple 100 Gig link interfaces or extend the reach beyond 40km. "We are going to see new systems introduced over the next few years not based on 10 Gig but designed for 25 Gig," says D’Ambrosia. ASIC and chip designers are also keen to adopt 25Gbps signalling because they need to increase input-output (I/O) bandwidth yet have only so many pins on a chip, he says.
D’Ambrosia is also part of an Ethernet bandwidth assessment ad-hoc committee that is part of the IEEE 802.3 work. The group is working with the industry to quantify bandwidth demand. “What you see is a lot of end users talking about needing terabit and a lot of suppliers talking about 400 Gig,” he says. Ultimately, what will determine the next step is what technologies are going to be available and at what cost.
Backplane I/O and switching
Many of the systems D'Ambrosia is seeing use a single 100Gbps port per card. "A single port is a cool thing but is not that useful,” he says. “Frankly, four ports is where things start to become interesting.”
This is where 25Gbps electrical interfaces come into play. "It is not just 25 Gig for chip-to-chip, it is 25 Gig chip-to-module and 25 Gig to the backplane."
Moreover, modules, backplane speeds and switching capacity are all interrelated when designing systems. When designing a 10 Terabit switch, for example, the goal is to reduce the number of traces on the board that go through the backplane to the switch fabric and other line cards.
Using 10Gbps electrical signals, between 1,200 and 2,000 signals are needed, depending on the architecture, says D'Ambrosia. With 25Gbps signalling, the count falls to 500-750. “The electrical signal has an impact on the switch capacity,” he says.
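Those figures can be sanity-checked with back-of-the-envelope arithmetic. In the sketch below, the 1.2x to 2.0x overhead range is an assumption chosen to roughly bracket the quoted trace counts; the real overhead depends on the encoding and the fabric architecture:

```python
# Back-of-the-envelope trace counts for a 10 Terabit switch fabric.
# The overhead factors (1.2x-2.0x) are assumptions; real overhead
# depends on encoding and fabric architecture.
CAPACITY_GBPS = 10_000  # 10 Terabit/s

def traces(lane_rate_gbps, overhead):
    return round(CAPACITY_GBPS / lane_rate_gbps * overhead)

for rate in (10, 25):
    low, high = traces(rate, 1.2), traces(rate, 2.0)
    print(f"{rate}Gbps signalling: {CAPACITY_GBPS // rate} traces minimum, "
          f"roughly {low}-{high} in practice")
# 10Gbps: 1,000 minimum -> ~1,200-2,000 (matches the quoted range)
# 25Gbps: 400 minimum   -> ~480-800    (brackets the quoted 500-750)
```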
100 Gig in the data centre
D’Ambrosia stresses that care is needed when discussing data centres as the internet data centres (IDC) of a Google or a Facebook differ greatly from those of enterprises. “In the case of IDCs, those people were saying they needed 100 Gig back in 2006,” he says.
Such mega data centres use tens of thousands of servers connected across a flat switching architecture, unlike traditional data centres that use three layers of aggregated switching. According to D'Ambrosia, such flat architectures can justify 100Gbps interfaces even when each server has only a 1 Gigabit Ethernet interface. And servers are now transitioning to 10GbE interfaces.
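The arithmetic behind that claim can be sketched as follows; the 40,000-server count and the 4:1 oversubscription ratio are illustrative assumptions, not figures from the interview:

```python
# Illustrative arithmetic: why a flat fabric of 1GbE servers can
# justify 100Gbps links. Server count and oversubscription ratio
# are assumed for illustration only.
SERVERS = 40_000        # "tens of thousands of servers" (assumed 40,000)
SERVER_GBPS = 1         # each server on 1 Gigabit Ethernet
OVERSUBSCRIPTION = 4    # assumed fabric oversubscription ratio

fabric_gbps = SERVERS * SERVER_GBPS / OVERSUBSCRIPTION
print(f"{fabric_gbps:.0f} Gbps of fabric capacity -> "
      f"{fabric_gbps / 100:.0f} links of 100Gbps")
# 10000 Gbps -> 100 links; moving servers to 10GbE multiplies demand tenfold.
```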
“You are going to have to worry about the architecture, you are going to have to worry about the style of data centre and also what the server applications are,” says D'Ambrosia. “People are also starting to talk about moving data operations around the network based on where electricity is cheapest.” Such an approach will require a truly wide, flat architecture, he says.
D'Ambrosia cites the Amsterdam Internet Exchange, which announced in May its first customer using a 100 Gig service. "We are starting to see this happen,” he says.
One lesson D'Ambrosia has learnt is that there is no clear relationship between what comes in and out of the cloud and what happens within the cloud. Data centres themselves are one such example.
100 Gig direct detection
In recent months, ADVA Optical Networking and MultiPhy have announced lower-power, 100Gbps direct-detection interfaces, with reaches of 200km to 800km, that are cheaper than coherent transmission. Such interfaces have a role in the network and are of varying interest to telco operators. But these are vendor-specific solutions.
D’Ambrosia stresses the importance of standards such as the IEEE's, and the work of the Optical Internetworking Forum (OIF), which has adopted coherent transmission. “I still see customers that want a standards-based solution,” says D'Ambrosia, who adds that while the OIF work is not a standard, it is an interoperability agreement. “It allows everyone to develop the same thing," he says.
There are also other considerations regarding 100 Gig direct detection besides cost, power and a pluggable form factor: vendors and operators want to know how many suppliers they will be able to source it from, he says.
D'Ambrosia says that new systems being developed now will likely be deployed in 2013. Vendors must weigh the attractiveness of any alternative technology against where industry-backed technologies, such as coherent and the IEEE standards, will be by then.
The industry will adopt a variety of 100Gbps solutions, he says, with particular decisions based on a customer’s cost model, its long-term strategy and its network.