Tuesday, 1 April 2014

OFC 2014 industry reflections - Part 2

Ciena and Ovum comment on the noteworthy developments and trends following the recent OFC 2014 exhibition and conference. 


The high cost of 100 Gigabit Ethernet client modules has been a major disappointment to me as it has slowed adoption

Joe Berthold, Ciena

 

Joe Berthold, vice president of network architecture at Ciena.

OFC 2014 was another great event, with interesting programmes, demonstrations and papers presented. A few topics that really grabbed my interest were discussions around silicon photonics, software-defined networking (SDN) and 400 Gigabit Ethernet (GbE).

The intense interest we saw at last year’s OFC around silicon photonics grew this year, with lots of good papers and standing-room-only sessions. I look forward to future product announcements that deliver on the potential of this technology to significantly reduce the cost of interconnecting systems over modest distances. The high cost of 100GbE client modules has been a major disappointment to me as it has slowed adoption.

Another area of interest at this year’s show was the great deal of experimental work around SDN, some of it more practical than the rest.

I particularly liked the reviews of the latest work under the DARPA-sponsored CORONET programme, whose Phase 3 focused on SDN control of multi-layer, multi-vendor, multi-data centre cloud networking across wide area networks.

In particular, there were talks from three companies I noted: Anne Von Lehman of Applied Communication Sciences, the prime contractor, provided a good programme overview; Bob Doverspike of AT&T described a very extensive testbed using equipment of the type currently deployed in AT&T’s network, as well as two different processing and storage virtualisation platforms; and Doug Freimuth of IBM described its contributions to CORONET, including an OpenStack virtualisation environment, as well as other IBM distributed cloud networking research.

 

All the action on rates above 100 Gig lies with the selection of client signals. 400 Gig seems to have the major mindshare but there are still calls for flexible rate clients and Terabit clients.



One thing I enjoyed about these talks was that they described an approach to SDN for distributed data centre networking that is pragmatic and could be realised soon.

I also really liked a workshop held on the Sunday on the question of whether SDN will kill GMPLS. While there was broad consensus that GMPLS has failed to deliver on its original turn-of-the-century vision of IP router control of multi-layer, multi-domain networks, most speakers recognised the value distributed control planes have in simplifying and speeding up the control of single-layer, single-domain networks.

What I took away was that single-layer distributed control planes are here to stay as important network control functions, but that they will increasingly work under the direction of an SDN network controller.

As we all know, 400 Gigabit dense wavelength division multiplexing (DWDM) is here from the technology perspective, but it awaits standardisation of the 400 Gigabit Ethernet signal by the IEEE, and follow-on work by the ITU-T on mapping the signal to OTN. In fact, from the perspective of DWDM transmission systems, 1 Terabit-per-second systems can be had for the asking.

All the action on rates above 100 Gig lies with the selection of client signals. 400 Gig seems to have the major mindshare but there are still calls for flexible rate clients and Terabit clients.

One area that received a lot of attention, with many differing points of view, was the question of the 400GbE client. With the 400GbE project set to begin soon in the IEEE, it is time to take a lesson from the history of the 100 Gig client modules and do better.

 

Let us all agree that we don’t need 400 Gig clients until they can do better in cost, face plate density, and power dissipation than the best 100 Gig modules that will exist then.

 

The first 100 Gig DWDM transceivers were introduced in 2009. It is now 2014 and 100 Gig is the transmission rate of choice for virtually all high-capacity DWDM network applications, with a strong economic value proposition versus 10 Gig. Yet the industry has not yet managed to achieve cost/bit parity between 100 Gig and 10 Gig clients - far from it!
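
As a minimal back-of-the-envelope sketch of that parity test, the short Python snippet below compares cost per bit for the two module types; the prices are hypothetical placeholders chosen purely for illustration, not market figures.

    # Hedged sketch: cost/bit parity means a 100 Gig client should cost no more
    # than ten times a 10 Gig client, since it carries ten times the traffic.
    # The prices below are illustrative placeholders, not market data.

    def cost_per_gbps(module_price_usd: float, rate_gbps: float) -> float:
        """Normalise a module price to dollars per Gbit/s of client capacity."""
        return module_price_usd / rate_gbps

    price_10g = 300.0     # hypothetical 10 Gig client module price
    price_100g = 4500.0   # hypothetical 100 Gig client module price

    ratio = cost_per_gbps(price_100g, 100) / cost_per_gbps(price_10g, 10)
    status = "parity reached" if ratio <= 1.0 else "parity not reached"
    print(f"100 Gig is {ratio:.1f}x the cost per bit of 10 Gig ({status})")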

At last year's OFC, we saw many show-floor demonstrations of CFP2 modules. They promise lower costs, but evidence of their presence in shipping products is still lacking. At the exhibition this year we saw 100 Gig QSFP28 modules. While progress is slow, the cost of the 100 Gig client module continues to result in many operators favouring 10 Gig handoffs to their 100 Gig optical networking systems.

Let us all agree that we don’t need 400 Gig clients until they can do better in cost, face plate density, and power dissipation than the best 100 Gig modules that will exist then. At this juncture the 100 Gig benchmark we should be comparing 400 Gig to is a QSFP28 package.
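
A hedged sketch of that benchmark follows; it scores a hypothetical 400 Gig client against a QSFP28 baseline on cost per bit, power per bit and face-plate capacity. Every number in it is an illustrative assumption, not a vendor specification.

    # Hedged sketch of the "do better than 100 Gig" test. All figures are
    # hypothetical placeholders, not vendor specifications.

    from dataclasses import dataclass

    @dataclass
    class ClientModule:
        name: str
        rate_gbps: float
        price_usd: float    # hypothetical price
        power_w: float      # hypothetical power dissipation
        ports_per_1u: int   # hypothetical face-plate density

        def metrics(self) -> dict:
            return {
                "$ per Gbit/s": self.price_usd / self.rate_gbps,
                "W per Gbit/s": self.power_w / self.rate_gbps,
                "Gbit/s per 1U": self.rate_gbps * self.ports_per_1u,
            }

    qsfp28 = ClientModule("QSFP28 100G", 100, 800, 3.5, 32)
    candidate = ClientModule("400G client", 400, 4000, 15.0, 16)

    # The 400 Gig client earns its place only if it wins on all three metrics
    # (lower $/G and W/G, higher Gbit/s per rack unit of face plate).
    for key in qsfp28.metrics():
        print(f"{key:14s} QSFP28: {qsfp28.metrics()[key]:8.2f}   "
              f"400G candidate: {candidate.metrics()[key]:8.2f}")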

Lastly, last year we heard about the launch of an OIF project to create a pluggable analogue coherent optical module. Several talks referenced this project and discussed its implications for shrinking module size and supporting higher transceiver card density.

Broad adoption of this component will help drive down the costs of coherent transceivers, so I look forward to hearing about its progress at OFC 2015.

 

 

Daryl Inniss, vice president and practice leader, Ovum.

There was no shortage of client-side announcements at OFC and I’ve spent time since the conference trying to organise them and understand what it all means.

I’m tempted to say that the market is once again developing too many options and not quickly agreeing on a common solution. But I’m reminded that this market works collaboratively and the client-side uncertainty we’re seeing today is a reflection of a lack of market clarity.

Let me describe three forces affecting suppliers:

The IEEE 100GBASE-xxx standards represent the best collective information that suppliers have. Not surprisingly, most vendors brought solutions to OFC supporting these standards. Vendors sharpened their products and focused on delivering solutions with smaller form factors and lower power consumption. Advances in optical components (lasers, TOSAs and ROSAs), integrated circuits (CDRs, TIAs, drivers), transceivers, active optical cables, and optical engines were all presented. A promising and robust supply base is emerging that should serve the market well.

A second driver is that hyperscale service providers want a cost-effective solution today that supports 500m to 2km. This is non-standard and suppliers have not agreed on the best approach. This is where the market becomes fragmented. The same vendors supporting the IEEE standard are also pushing non-standard solutions. There are at least four different approaches to support the hyperscale request, compared roughly in the sketch that follows the list:

 

  • Parallel single mode (PSM4) where an MSA was established in January 2014
  • Coarse wavelength division multiplexing—using uncooled directly modulated lasers and single mode fibre
  • Dense wavelength division multiplexing—this one just emerged on the scene at OFC with Ranovus and Mellanox introducing the OpenOptics MSA
  • Complex modulation—for example, PAM-8 and carrier multi-tone.
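
To put rough numbers on these options, here is a hedged comparison sketch in Python. Only the PSM4 row reflects the MSA outline (four lanes of roughly 25 Gig over parallel single-mode fibre); the lane counts, symbol rates and fibre counts in the other rows are assumptions added purely for illustration.

    # Rough comparison of the four 500m-to-2km approaches listed above.
    # Only the PSM4 row follows the MSA outline (4 x ~25 Gig lanes over
    # parallel SMF); the other rows are assumptions for illustration only.

    def aggregate_gbps(lanes: int, gbaud: float, bits_per_symbol: int) -> float:
        """Aggregate client rate from lane count, symbol rate and modulation order."""
        return lanes * gbaud * bits_per_symbol

    approaches = [
        # name,                           lanes, Gbaud, bits/symbol, fibres
        ("PSM4, parallel SMF",                4,  25.0, 1, 8),  # 4 Tx + 4 Rx fibres
        ("CWDM, uncooled DMLs",               4,  25.0, 1, 2),  # duplex SMF (assumed)
        ("DWDM, OpenOptics MSA",              4,  25.0, 1, 2),  # duplex SMF (assumed)
        ("Complex modulation, e.g. PAM-8",    1,  33.3, 3, 2),  # illustrative only
    ]

    for name, lanes, gbaud, bps, fibres in approaches:
        rate = aggregate_gbps(lanes, gbaud, bps)
        print(f"{name:32s} ~{rate:4.0f} Gbit/s over {fibres} fibre(s)")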

 

Admittedly, the presence of this demand disrupts the traditional process. But I believe the suppliers’ behaviour reflects their unhappiness with the standards-based solution.

The good news is these approaches are using established form factors like the QSFP. And silicon photonic products are starting to emerge. Suppliers will continue to innovate.

 

Ambiguity will persist but we believe that clarity will ultimately prevail.


The third issue lurking in the background is the knowledge that 400 Gig and one Terabit will soon be needed. The best-case scenario is to use 100 Gig as a platform to support the next generation. Some argue for complex modulation, as it reduces the number of optical components and thereby lowers cost. That’s good, but part of the price is higher power consumption, an issue that remains to be resolved.

Part of today’s uncertainty is whether the standard solution is suitable to support the market to the next generation. Sixteen channels at 25 Gig is doable but feels more like a stopgap measure than a long-term solution.
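
A small lane-count sketch makes the point. Only the 16 x 25 Gig row comes from the discussion above; the 50 Gig and roughly 100 Gig per-lane rows are assumptions included to show how higher-order modulation trades lane count for faster, hotter electronics.

    # Hedged lane-count arithmetic for a 400 Gig client. Only 16 x 25 Gig is
    # mentioned in the text; the other rows are illustrative assumptions.

    import math

    TARGET_GBPS = 400

    options = [
        # label,                            Gbaud per lane, bits per symbol
        ("16 x 25 Gig NRZ (stopgap)",        25.0, 1),
        ("50 Gig lanes, PAM-4 (assumed)",    25.0, 2),
        ("~100 Gig lanes, PAM-8 (assumed)",  34.0, 3),
    ]

    for label, gbaud, bits_per_symbol in options:
        lane_rate = gbaud * bits_per_symbol
        lanes = math.ceil(TARGET_GBPS / lane_rate)
        print(f"{label:34s} {lanes:2d} lanes of ~{lane_rate:.0f} Gbit/s each")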

These forces leave suppliers innovating in search of the best path forward. The approaches and solutions differ for each vendor. Timing is an issue too, with hyperscale players looking for solutions today while the mass market may be years away.

We believe that servers with 25 Gig and/or 40 Gig ports will be one of the catalysts to drive the mass market, and that this will not start until about 2016. Meanwhile, each vendor and the market will battle for the apparent best solution to meet the varying demands. Ambiguity will persist but we believe that clarity will ultimately prevail.
