Friday, September 23, 2011

Intelligent networking: Q&A with Alcatel-Lucent's CTO

Alcatel-Lucent's corporate CTO, Marcus Weldon, talks to Gazettabyte in a two-part Q&A. Here, in Part 1, he discusses the future of the network, why developing in-house ASICs is important, and why Bell Labs is researching quantum computing.


Marcus Weldon (left) with Jonathan Segel, executive director in the corporate CTO Group, holding the lightRadio cube. Photo: Denise Panyik-Dale

Q: The last decade has seen the emergence of Asia-Pacific players. In Asia, engineers' wages are lower while the scale of R&D there is hugely impressive. How is Alcatel-Lucent, active across a broad range of telecom segments, ensuring it remains competitive?

A: Obviously we have a presence in China ourselves, and also in India. It varies by division, but probably half of our R&D workforce is in what you would consider a low-cost country. We are already heavily present in those areas, and that speaks to the wage issue.

But we have decided to use the best global talent. This has been a trait of Bell Labs in particular but also of the company. We believe one of our strengths is the global nature of our R&D. We have educational disciplines from different countries, and different expertise and engineering foci etc. Some of the Eastern European nations are very strong in maths, engineering and device design. So if you combine the best of those with the entrepreneurship of the US, you end up with a very strong mix of an R&D population that allows for the greatest degree of innovation.

We have no intention to go further towards a low-cost country model. There was a tendency for that a couple of years ago but we have pulled back as we found that we were losing our innovation potential.

We are happy with the mix we have even though the average salary is higher as a result. And if you take government subsidies into account in European nations, you can get almost the same rate for a European engineer as for a Chinese engineer, as far as Alcatel-Lucent is concerned.

One more thing: Chinese university students, interestingly, work so hard to get into university that university itself becomes a period where they slack off. There have been several articles in the media about this. During the four years students spend at university, away from home for the first time, they tend to relax.

Chinese companies were complaining that the quality of engineers coming out of university was steadily decreasing because of what was, they argued, essentially a slacker generation of overworked high-school students who relaxed at college. These companies found they had to retrain graduates once employed to bring them up to the level needed.

So that is another small effect which you could argue is a benefit of not being in China for some of our R&D.

 

Alcatel-Lucent's Bell Labs: Can you spotlight noteworthy examples of research work being done?

Certainly the lightRadio cube work is pure Bell Labs. The adaptive antenna array design, to give you an example, was done between the US - Bell Labs' Murray Hill - and Stuttgart, so two non-Asian Bell Labs sites were involved in the innovations. These are wideband designs that can operate at any frequency and are technology-agnostic, so they can serve GSM, 3G and LTE (Long Term Evolution).

 

"We believe that next-generation network intelligence, 10-15 years from now, might rely on quantum computing"

 

The designs can also form beams, so you can be very power-efficient. Power efficiency in the antenna matters because you want to put the power where it is needed rather than defaulting to an omnidirectional power distribution. You want to form beams where capacity is needed.
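As a rough illustration of the beam-forming idea - a textbook uniform-linear-array sketch, not Alcatel-Lucent's actual antenna design - the snippet below steers an eight-element array towards 30 degrees by applying per-element phase weights and confirms the array response peaks there:

```python
import numpy as np

# Illustrative only: normalised array factor of an N-element uniform linear
# array with half-wavelength spacing. The steering weights apply a progressive
# phase shift so the beam points towards steer_deg - i.e. power goes where
# the capacity is needed rather than being radiated omnidirectionally.
def array_factor(n_elements, steer_deg, scan_deg):
    d = 0.5                                    # element spacing in wavelengths
    k = 2 * np.pi                              # wavenumber times wavelength
    n = np.arange(n_elements)
    weights = np.exp(-1j * k * d * n * np.sin(np.radians(steer_deg)))
    response = np.exp(1j * k * d * n[:, None] * np.sin(np.radians(scan_deg)))
    return np.abs(weights @ response) / n_elements

angles = np.linspace(-90, 90, 361)
af = array_factor(8, steer_deg=30, scan_deg=angles)
print("Beam peaks at %.1f degrees" % angles[np.argmax(af)])   # ~30 degrees
```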

That is clearly a big part of what Bell Labs has been focusing on in the wireless domain, along with all the overlaying technologies that allow you to do beam-forming. Power amplifier efficiency is another area: it is another place where you lose power and end up with higher operational expense. The magic inside that is another Bell Labs focus in wireless.

In optics, it is moving from 100 Gig to 400 Gig coherent. We are one of the early innovators in 100 Gig coherent and we are now moving forward to higher-order modulation and 400 Gig. 
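As a back-of-the-envelope sketch of what higher-order modulation buys - the formats and symbol rates below are typical industry choices, not figures from the interview (roughly DP-QPSK for 100 Gig, DP-16QAM at a higher symbol rate for 400 Gig):

```python
# Rough line-rate arithmetic for coherent formats (illustrative values only).
# Raw rate = symbol rate x bits per symbol x polarisations; the headroom
# above the net client rate carries FEC and framing overhead.
def raw_rate_gbps(baud_g, bits_per_symbol, polarisations=2):
    return baud_g * bits_per_symbol * polarisations

print(raw_rate_gbps(32, 2))   # DP-QPSK:  ~128 Gb/s raw -> 100 Gig net
print(raw_rate_gbps(64, 4))   # DP-16QAM: ~512 Gb/s raw -> 400 Gig net
```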

On the DSL side, it is the vectoring/crosstalk-cancellation work, where we have developed our own ASIC because the market could not meet the need we had. The algorithms ended up in a component that will be in the first release of our products, to maintain a market advantage.
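For readers unfamiliar with vectoring, the sketch below shows the textbook idea behind downstream crosstalk cancellation - a zero-forcing precoder built from the measured channel matrix. It is a generic formulation for illustration, not Alcatel-Lucent's actual algorithm:

```python
import numpy as np

# Minimal sketch of downstream vectoring: with the full channel matrix H
# known (diagonal = direct lines, off-diagonal = far-end crosstalk), a
# zero-forcing precoder pre-distorts the transmitted signals so each user
# sees only its own direct channel.
rng = np.random.default_rng(0)
n = 4                                               # lines in the vectored group
H = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # direct paths + weak FEXT
P = np.linalg.inv(H) @ np.diag(np.diag(H))          # zero-forcing precoder

x = rng.standard_normal(n)                          # per-line symbols
y = H @ (P @ x)                                     # received after crosstalk
print(np.allclose(y, np.diag(H) * x))               # True: crosstalk cancelled
```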

We do see a need for some specialised devices: the FlexPath FP3 network processor, the IPTV product, the OTN (Optical Transport Network) switch at the heart of our optical products - our own ASIC - and the vectoring/crosstalk-cancellation engine in our DSL products. Those are the innovations Bell Labs comes up with, and very often they lead to our portfolio innovations.

There is also a lot of novel stuff like quantum computing that is on the fringes of what people think telecoms is going to leverage but we are still active in some of those forward-looking disciplines.  

We have quite a few researchers working on quantum computing, leveraging some of the material expertise that we have to fabricate novel designs in our lab and then create little quantum computing structures.

 

Why would quantum computing be useful in telecom? 

It is very good for parsing and pattern matching. So when you are doing complex searches or analyses, then quantum computing comes to the fore.
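One standard result that illustrates why search is a natural fit - not one cited in the interview - is Grover's algorithm, which finds an item in an unstructured set of N entries with quadratically fewer queries than any classical search:

```latex
% Query complexity of unstructured search over N items
\text{classical: } O(N) \qquad \text{quantum (Grover): } O\!\left(\sqrt{N}\right)
```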

We do believe there will be processing that will benefit from quantum computing constructs to make decisions in ever-increasingly intelligent networks. Quantum computing has certain advantages in terms of its ability to recognise complex states and do complex calculations. We believe that next-generation network intelligence, 10-15 years from now, might rely on quantum computing.

We don't have a clear application in mind other than we believe it is a very important space that we need to be pioneering.

 

"Operators realise that their real-estate resource - including down to the central office - is not the burden that it appeared to be a couple of years ago but a tremendous asset

 

You recently wrote a blog post on the future of the network. You mentioned the idea of the emergence of one network, with the melding of wireless and wireline, and said this will halve the total cost of ownership. That is impressive, but is it enough?

The halving figure relates to the lightRadio architecture. There are many ingredients in it. The most notable is that traffic growth is accounted for in that halving of the total cost of ownership. We calculated what the likely traffic demand would be going forward: a 30-fold increase in five years.

Based on that growth, we computed what the lightRadio architecture - the adaptive antenna arrays, small cells and the move to LTE - delivers. If you combine these things and map them onto that traffic demand, the result is that you can build the network for that demand, with those new technologies, and still halve the total cost of ownership.

It really is quite a bit more aggressive than it appears because it is taking account of a very significant growth in traffic.
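A back-of-the-envelope calculation using only the figures quoted above shows why: if total cost of ownership halves while traffic grows 30-fold, the cost per bit carried falls by a factor of roughly 60.

```python
# Cost per bit if TCO halves while traffic grows 30-fold (figures quoted above).
tco_ratio = 0.5          # new TCO relative to today's
traffic_ratio = 30       # traffic growth over five years
cost_per_bit = tco_ratio / traffic_ratio
print(f"Cost per bit falls to {cost_per_bit:.3f} of today's (~{1 / cost_per_bit:.0f}x lower)")
```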

Can we build that network and still lower the cost? The answer is yes.

 

You also say that intelligence will be increasingly distributed in the network, taking advantage of Moore's Law.  This raises two questions. First, when does it make sense to make your own ASICs?

When I say ASICs, I include FPGAs. FPGAs are your own design, just on programmable silicon, and normally you evolve that into an ASIC design once you reach the right volumes.

There is a thing called an NRE (non-recurring engineering) cost - a one-off engineering charge to produce an ASIC at a fab. So you need a certain volume to make it worthwhile to produce an ASIC, rather than keeping the design in an FPGA, which is a more expensive component because it is programmable and has excess logic. The economics says an FPGA is the right way to go for sub-10,000 volumes per annum, whereas for millions of parts you would do an ASIC.
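A minimal sketch of that break-even logic, with purely illustrative cost figures (not Alcatel-Lucent numbers): the ASIC carries a large one-off NRE charge but a low unit cost, so it pays off only above a certain volume.

```python
# Illustrative FPGA-vs-ASIC break-even. All figures are assumptions for
# the sketch: the ASIC has a big one-off NRE charge and a low unit cost,
# the FPGA has no NRE but a high unit cost.
def total_cost(volume, nre, unit_cost):
    return nre + volume * unit_cost

NRE_ASIC = 5_000_000              # assumed one-off mask/engineering cost ($)
UNIT_ASIC, UNIT_FPGA = 50, 800    # assumed per-part costs ($)

breakeven = NRE_ASIC / (UNIT_FPGA - UNIT_ASIC)
print(f"ASIC pays off above ~{breakeven:,.0f} units")
for volume in (1_000, 10_000, 100_000):
    print(volume,
          total_cost(volume, NRE_ASIC, UNIT_ASIC),   # ASIC route
          total_cost(volume, 0, UNIT_FPGA))          # FPGA route
```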

We work on both those types of designs. And generally, and I think even Huawei would agree with us, a lot of the early innovation is done in FPGAs because you are still playing with the feature set.

 

Photo: Denise Panyik-Dale

Often there is no standard at that point - there may be preliminary work ongoing - so you do the initial, pre-standard innovation using FPGAs. You use a DSP or FPGA to implement a brand new function that no one has thought of, and that is what Bell Labs will do. Then, as it starts becoming of interest to the standards bodies, you implement it in a way that tries to follow what the standard will be, and you stay in an FPGA for that process. At some point later, you take a bet that the functionality is fixed and the volume will be high enough, and you move to an ASIC.

So it is fairly commonplace for novel technology to be implemented by the [system] vendors, and only in the end stage, when it has become commoditised, does it move to commercial silicon, meaning a Broadcom or a Marvell.

Also, around the novel components we produce, there is a whole host of commercial silicon from Texas Instruments, Broadcom, Marvell, Vitesse and others. So we focus on the components where the magic is, where innovation is still high and where you can't get the same performance from a commercial part. That is where we produce our own FPGAs and ASICs.

 

Is this trend becoming more prevalent? And if so, is it because of the increasing distribution of intelligence in the network?

I think it is, but only partly because of intelligence. The other part is speed. We are reaching the real edges of processing speed, and generally the commercial parts are not at the [CMOS process] node needed to keep up.

To give an example, our FlexPath processor for our router product is on 40nm technology. Generally, ASICs are a technology generation behind FPGAs. You can't get the power footprint and the packet-processing performance we need with commercial components. You can do it in a very high-end FPGA, but those devices are generally very expensive because they have extremely low yields; they can cost hundreds or even thousands of dollars.

The tendency is to use FPGAs for the initial design but very quickly move to an ASIC, because those [FPGA] parts are so rare and expensive, and they don't have the power footprint that you want. So if you are running at very high speeds - 100Gbps, 400Gbps - you run very hot, it is a very costly part, and you quickly move to an ASIC.

Because of intelligence [in the network] we need to be making our own parts but again you can implement intelligence in FPGAs. The drive to ASICs is due to power footprint, performance at very high speeds and to some extent protection of intellectual property.

FPGAs can be reverse-engineered so there is some trend to use ASICs to protect against loss of intellectual property to less salubrious members of the industry.

 

Second, how will intelligence impact the photonic layer in particular?

You have all these dimensions you can trade off against each other. There are things like flexible bit-rate optics and flexible modulation schemes to accommodate that. There is also the intelligence of soft-decision FEC (forward error correction), where you squeeze more out of a channel by not just making a hard decision - is it a '0' or a '1'? - but giving the decoder a hint as to whether it is likely to be a '0' or a '1'. That improves your signal-to-noise ratio, which allows you to go further with a given set of optics.
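A minimal sketch of the hard-versus-soft distinction for the simplest case, BPSK over an additive-white-Gaussian-noise channel (illustrative only, not the coded-modulation scheme used in the products): a hard decision keeps just the sign of each received sample, while a soft decision passes the decoder a log-likelihood ratio - the "hint" described above.

```python
import numpy as np

# Hard vs soft decisions for BPSK (+1 for bit 0, -1 for bit 1) over AWGN.
# A hard decision keeps only the sign; a soft decision hands the FEC
# decoder an LLR whose magnitude says how confident the '0'-or-'1' call is.
def llr_bpsk(received, noise_var):
    # LLR = log P(bit=0 | y) / P(bit=1 | y) = 2y / sigma^2 for this mapping
    return 2.0 * received / noise_var

y = np.array([+0.9, -0.1, +0.05, -1.2])    # noisy received samples
print(np.where(y >= 0, 0, 1))              # hard decisions: [0 1 0 1]
print(llr_bpsk(y, noise_var=0.5))          # soft info: large |LLR| = confident
```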

So you have several intelligent elements that you are going to co-ordinate to have an adaptive optical layer.

I do think that is the largest area.

Another area is smart or next-generation ROADMs - what we call colourless, contentionless and directionless.

There is a sense that as you start distributing resources in the network - caching resources and computing resources - there will be far more meshing in the metro network. There will be a need to route traffic optically to locally positioned resources - highly distributed data centre resources - and so there will be more photonic switching of traffic. Think of it as photonic offload to a local resource.

We are increasingly seeing operators realise that their real-estate resource - including down to the central office - is not the burden that it appeared to be a couple of years ago but a tremendous asset if you want to operate a private cloud infrastructure and offer it as a service, as you are closer to the user with lower latency and more guaranteed performance.

So if you think about that infrastructure, with highly distributed processing resources, and about offloading at the photonic layer, you can easily recognise that certain traffic needs to go to that location. You can argue that there will be more photonic switching at the edge because you don't need to route that traffic; it is going to one destination only.

This is an extension of the whole idea of the converged backbone architecture we have, with interworking between the IP and optical domains: you don't route traffic that you don't need to route. If you know it is going to a peering point, you can keep that traffic in the optical domain and not send it up through the routing core to be constantly routed when you know from the start where it is going.

So as you distribute computing and caching resources, you would offload in the optical layer rather than attempt to packet-process everything.

There are smarts at that level too - photonic switching - as well as the intelligent photonic layer. 
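Purely as a conceptual sketch of the offload decision described above - the destination names and function here are invented for illustration, not from any Alcatel-Lucent product - traffic bound for a known local resource stays on a photonic bypass, while everything else goes up to the packet layer:

```python
# Hypothetical illustration of photonic offload: flows whose destination is
# a known local resource (cache, metro data centre, peering point) bypass
# the routing core on a photonic switch; only the rest is packet-processed.
OPTICAL_BYPASS_DESTINATIONS = {"local-cache", "metro-dc", "peering-point"}

def forward(flow_destination: str) -> str:
    if flow_destination in OPTICAL_BYPASS_DESTINATIONS:
        return "photonic switch (optical bypass)"   # no per-packet processing
    return "packet layer (IP routing)"

print(forward("metro-dc"))       # photonic switch (optical bypass)
print(forward("unknown-host"))   # packet layer (IP routing)
```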

 

For the second part of the Q&A, click here
