ClariPhy targets 400Gbps with new 16nm DSP silicon (lightwaveonline.com)
36 points by bifrost on March 26, 2016 | 8 comments


Nifty... 70 Tbps per fiber is about 2-2.5x the current capacity.
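
Back-of-the-envelope on where ~70 Tbps could come from (all numbers below are my assumptions, not from the article): dual-polarization 64-QAM at 32 GBaud gives ~384 Gb/s raw per carrier, and a C+L system can fit on the order of 180 channels:

    # Rough per-fiber capacity, assuming DP-64QAM at 32 GBaud over C+L.
    baud = 32e9            # symbols/s per channel
    bits_per_symbol = 6    # 64-QAM
    polarizations = 2
    per_channel = baud * bits_per_symbol * polarizations   # 384 Gb/s raw
    channels = 180         # assumed C+L channel count on a 50 GHz grid
    print(f"per channel: {per_channel / 1e9:.0f} Gb/s")
    print(f"per fiber:   {per_channel * channels / 1e12:.1f} Tb/s raw, before FEC overhead")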


A better metric is bits/Hz/km.
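
To make that concrete, here's a toy comparison using spectral efficiency times reach as the figure of merit (both systems and their numbers are hypothetical):

    # Compare hypothetical line systems by (bits/s/Hz) * km.
    systems = {
        "DP-QPSK,  low SE, long reach  ": (4.0, 4000),   # b/s/Hz, km (illustrative)
        "DP-64QAM, high SE, short reach": (12.0, 300),
    }
    for name, (se, reach_km) in systems.items():
        print(f"{name}: {se * reach_km:>6,.0f} (b/s/Hz)*km")

By this metric the dense format loses badly even though its headline per-channel rate is higher.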

64-QAM does not go very far and requires very low-linewidth lasers. The OSNR requirements for 64-QAM at 32 GBaud mean that you basically have no reach (distance). So it makes a nice announcement, but it's not a very practical modulation format. Any nonlinearity and 64-QAM starts cycle-slipping like crazy.
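
To put a number on the OSNR problem, here's a rough required-SNR calculation for square QAM using the standard Gray-coded BER approximation (the pre-FEC target is my assumption):

    # Required SNR for square M-QAM at a given pre-FEC BER, using the
    # standard Gray-coding approximation
    #   BER ~ (4/log2 M) * (1 - 1/sqrt(M)) * Q(sqrt(3*SNR/(M-1)))
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import erfc

    def ber_mqam(snr_lin, M):
        Q = lambda x: 0.5 * erfc(x / np.sqrt(2))
        return 4 / np.log2(M) * (1 - 1 / np.sqrt(M)) * Q(np.sqrt(3 * snr_lin / (M - 1)))

    target = 1e-3  # assumed pre-FEC threshold
    for M in (4, 16, 64):
        snr_db = brentq(lambda s: ber_mqam(10**(s / 10), M) - target, 0, 40)
        print(f"{M:>2}-QAM: ~{snr_db:.1f} dB SNR for BER {target:.0e}")

That's roughly a 13 dB gap between QPSK and 64-QAM, and over fiber that extra required SNR translates almost directly into lost reach.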


At long distances, inter-symbol interference caused by dispersion is one of the main causes of bit errors. I saw a ClariPhy whitepaper about a very good digital dispersion compensation engine, based on a custom implementation of the Viterbi algorithm.
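
For reference, this is the textbook frequency-domain equalizer (not ClariPhy's Viterbi-based engine): chromatic dispersion is an all-pass filter, so you can undo it exactly by applying the conjugate phase in the frequency domain. Parameters below are assumed typical values:

    # Textbook frequency-domain CD compensation: dispersion multiplies the
    # spectrum by exp(j*beta2/2 * w^2 * L); the equalizer applies the
    # conjugate phase. The round trip should recover the signal exactly.
    import numpy as np

    def cd_response(n, fs, beta2, length_m, sign=+1):
        w = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)     # rad/s
        return np.exp(sign * 0.5j * beta2 * w**2 * length_m)

    fs = 64e9                  # 2 samples/symbol at 32 GBaud (assumed)
    beta2 = -21.7e-27          # s^2/m, typical SMF at 1550 nm
    L = 1000e3                 # 1000 km
    rng = np.random.default_rng(0)
    tx = rng.choice([-1, 1], 4096) + 1j * rng.choice([-1, 1], 4096)
    rx = np.fft.ifft(np.fft.fft(tx) * cd_response(tx.size, fs, beta2, L, +1))
    eq = np.fft.ifft(np.fft.fft(rx) * cd_response(rx.size, fs, beta2, L, -1))
    print("residual error:", float(np.max(np.abs(eq - tx))))   # ~machine precision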

Not a lot of experience in the subject, but I believe that, unlike RF, if you need more OSNR you can always inject a lot of power into the fiber (I've heard about 5-10 W lasers for long-haul data links).

Also, if you need to go farther, you can always increase the FEC overhead.
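
Roughly how that trade works; the rates and correctable-BER thresholds below are illustrative, not from any specific standard:

    # FEC overhead trade-off: more overhead raises the correctable pre-FEC
    # BER (more reach) but cuts the net payload rate. Illustrative numbers.
    line_rate = 128e9   # b/s per carrier, assumed fixed by the DSP/ADC
    codes = [
        ("~7% OH hard-decision",  0.07, 3.8e-3),
        ("~15% OH soft-decision", 0.15, 1.5e-2),
        ("~25% OH soft-decision", 0.25, 4.0e-2),
    ]
    for name, oh, max_ber in codes:
        net = line_rate / (1 + oh)
        print(f"{name}: net {net / 1e9:5.1f} Gb/s, corrects pre-FEC BER up to ~{max_ber}")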


Unfortunately you can't just increase the power into the fiber. It's complicated, but I'll try to clarify (pun not intended) why you are OSNR-challenged here.

First, fiber, like just about any optical material, has a nonlinear index. So as you increase the power into the fiber, the light modifies the local index. This creates a kind of noise. A wavelength channel can even do this to itself. When you have lots of channels on the fiber, there is a lot of modulation of the refractive index by all those channels. This is called cross-phase modulation (XPM). You can think of XPM as a phase noise.

As you increase the power in the fiber, there's a point where the XPM "noise" increases faster than the improvement in OSNR. So you have to keep the launch power low to keep the nonlinear penalty low. This limits our launch power and ultimately the OSNR we can achieve. Generally we try to operate near the peak of this curve.
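
A toy version of that curve, in the Gaussian-noise-model form SNR(P) = P / (P_ase + eta*P^3); both constants are made up:

    # Launch-power trade-off: linear gain vs cubic nonlinear interference.
    import numpy as np
    p_ase = 1e-5          # W of ASE noise in the signal bandwidth (assumed)
    eta = 1e3             # 1/W^2, nonlinear interference coefficient (assumed)
    p = np.logspace(-4, -1, 400)                   # 0.1 mW .. 100 mW launch
    snr = p / (p_ase + eta * p**3)
    i = np.argmax(snr)
    print(f"optimum launch ~{p[i] * 1e3:.2f} mW, peak SNR {10 * np.log10(snr[i]):.1f} dB")
    # Analytic optimum: d(SNR)/dP = 0  ->  P_opt = (p_ase / (2*eta))**(1/3)
    print(f"analytic P_opt ~{(p_ase / (2 * eta))**(1/3) * 1e3:.2f} mW")

At the analytic optimum the nonlinear "noise" is exactly half the ASE, which is why real systems back the launch power off rather than cranking it up.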


Close, but dispersion compensation is now very easy to do digitally and has very little impact on system performance. As for power, it's actually the opposite: optical systems are ultimately limited in reach by nonlinearity, so you cannot just increase the transmit power the way you might in a wireless system, even if your amplifiers were very linear.


I think the nonlinear Shannon limit currently caps installed LONG-RANGE fiber at around 12 Tbps. It's possible to install wide-core fibers to improve on that, but that's extremely expensive, nobody wants to do it, and it's doubtful it would decrease data costs over fully depreciated fibers in the short term.
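
For the arithmetic behind numbers like that (all values below are assumptions): the ceiling is roughly 2*B*log2(1 + SNR_nl) for two polarizations, where SNR_nl is the best SNR nonlinearity lets you reach at the target distance. ~12 Tbps over a ~4.4 THz C-band works out to only ~1.3 b/s/Hz per polarization, i.e. a quite pessimistic SNR:

    # Nonlinear-limited capacity ceiling, 2 polarizations over the C band.
    # The SNR values are assumptions for different route lengths.
    import numpy as np
    bandwidth = 4.4e12    # Hz, roughly the C band
    for snr_db in (2, 5, 8):
        snr = 10**(snr_db / 10)
        cap = 2 * bandwidth * np.log2(1 + snr)
        print(f"SNR {snr_db:>2} dB -> ~{cap / 1e12:4.1f} Tb/s per fiber")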

Also, we're not there yet, but we're pretty close, and it might not make sense to upgrade further, so some say that in the next few years we'll stop seeing data prices go down.


Could be quite good for campus/metro datacenter interconnect, but that is a very good point.


I'd like to see more details about that claim, just to know how they were able to reach it ("via a combination of C-Band and L-Band transmission" is a bit generic) using more than 170 channels on the same fiber. Btw, cool stuff.
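
FWIW, 170+ channels is at least plausible from band math alone (band edges and grid here are my assumptions, not from the announcement):

    # Rough channel count for C+L transmission on a fixed grid.
    c_band = 4.4e12   # Hz, ~1530-1565 nm
    l_band = 4.8e12   # Hz, ~1570-1610 nm slice that L-band EDFAs typically cover
    grid = 50e9       # Hz channel spacing, assumed
    print(f"C alone: {int(c_band // grid)} channels")
    print(f"C + L:   {int((c_band + l_band) // grid)} channels")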



