As others have mentioned, this is mostly a proof of concept for a high-core-count weakly-coupled fibre from Sumitomo. I also want to highlight the use of a 19-channel MIMO receiver structure, which is completely impractical. The linked article also fails to mention a figure for MIMO gain.
I would guesstimate that if you try to run it live, the receiver [or rather its DSPs] would consume >100W of power, maybe even >1000W. (These things evolve & improve though.)
(Also, a kilowatt for the receiver is entirely acceptable for a submarine cable.)
To get a ballpark power usage, we can look at comparable (for some definition thereof) commercial offerings. Take a public datasheet from Arista[1]: they quote 16W typical for a 400Gbps module with 120km of reach. You would need 2500 modems at 16W (38kW) jointly decoding (i.e. very close together) to process this data rate. GPU compute has really pushed the boundaries on thermal management, but this would be far more thermally dense.
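Spelled out with round numbers (assuming ~1 Pb/s aggregate; the exact net rate you assume moves the total a couple of kW either way, which is why this lands a shade above the figure quoted):

    # rough back-of-envelope, not from the paper: ~1 Pb/s aggregate,
    # 400 Gb/s and 16 W per module per the Arista datasheet figure
    aggregate_gbps = 1.0e6
    per_module_gbps = 400
    per_module_watts = 16

    modules = int(aggregate_gbps // per_module_gbps)   # 2500 modules
    total_kw = modules * per_module_watts / 1e3        # ~40 kW
    print(modules, total_kw)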
It's important to note that wavelength channels are not coupled, so modems for different wavelengths don't need to be terribly close together (in fact one could theoretically do wavelength switching, so they could be 100s of km apart). So the scaling we need to consider is the scaling of the MIMO, which in current modems is 2x2. The difficulty is not necessarily just power consumption (also, the power envelope of long-haul modems is higher than the DCI modem you link, up to 70W IIRC), but also resourcing on the ASIC: the MIMO part (which needs to be highly parallel) will take up significant floorspace, and you need to balance the delays.
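To put a number on the floorspace point: an N×N MIMO equalizer has N² FIR branches, so the multiplier count per sample grows roughly with N² times the tap count. A toy comparison (the 16-tap figure is just an assumption for illustration):

    def mimo_real_mults_per_sample(n_streams, taps_per_branch=16):
        # N*N FIR branches, each with `taps_per_branch` complex taps;
        # one complex multiply ~= 4 real multiplies
        return n_streams**2 * taps_per_branch * 4

    base = mimo_real_mults_per_sample(2)    # today's 2x2 polarization MIMO
    sdm  = mimo_real_mults_per_sample(19)   # 19-channel SDM MIMO
    print(sdm / base)                       # (19/2)^2 ~= 90x the multipliers

So roughly 90x the multiplier hardware per sample, before you even get to the delay balancing.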
The 38kW is not a very high number btw; the switches at the endpoints of submarine links are quite a bit more power-hungry already.
Depending on the phase-matching criteria of lambdas on a given core, I would mostly agree that the various wavelengths are not significantly coupled. I also agree there is a different power budget for LH modems vs. DCI, but power on LH modems is not something that often gets publicly disclosed. I am not too concerned with the overall power, more the power density (and component density) that 19-channel MIMO would require.
The main point I was trying to make is the impracticality of MIMO SDM. The topic has been discussed to death (see the endless papers from Nokia) and has yet to be deployed because the spatial gain is never worth the real world implementation issues.
I think the scaling parameters are a bit different here, since the primary concern is the DSP power for processing and correlating 19 MIMO signals simultaneously. But the 16W figure for a 120km 400Gbps module includes a high-powered¹ transmitter amplifier & laser, as well as receive amplifiers, on top of the DSP. My estimate is based on O(n²) scaling for 19×19 MIMO (=361) and then assuming 2-3W of DSP power per unit factor (sketched below).
[but now that I think about it… I think my estimate is indeed too low; I was assuming commonplace transceivers for the unit factor, i.e. ≤1Tb/s; but a petabit on 19 cores is still ~53Tb/s per core…]
¹ note the setup in this paper has separate amplifiers every 86.1km, so the transmitter doesn't need to be particularly high-powered.
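Spelling out the arithmetic behind that estimate (with the caveat above that the per-unit-factor number is probably too low for 53Tb/s-class DSPs):

    # O(n^2) scaling assumption: 19x19 MIMO -> 361 "unit factors",
    # with 2-3 W of DSP power attributed to each
    n = 19
    unit_factors = n ** 2                      # 361
    low, high = 2 * unit_factors, 3 * unit_factors
    print(low, high)                           # ~720 W to ~1080 W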
38kW ~= 50 HP ~= 45A at 480V three-phase, which is a relatively light load handled by 3#6 AWG conductors and a #10 equipment ground.
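The conversion, if anyone wants to fiddle with the voltage or power factor (unity PF assumed here, which is why it comes out a hair above the figures quoted):

    import math

    kw = 38.0
    hp = kw * 1000 / 745.7                        # ~51 HP
    volts = 480
    amps = kw * 1000 / (math.sqrt(3) * volts)     # ~45.7 A, three-phase, PF = 1
    print(f"{hp:.0f} HP, {amps:.1f} A at {volts} V three-phase")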
I mean, it’s a shitload more power than a simple media converter that takes in fiber and outputs to an RJ-45, but not all that much compared to other commercial electrical loads. This Eaton/Tripplite unit draws ~40W at 120V - https://tripplite.eaton.com/gigabit-multimode-fiber-to-ether...
A smallish commercial heat pump/CRAC unit (~12kW of electrical input) can handle the cooling requirements (assuming a COP of 3).
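i.e., with all 38kW of electrical load ending up as heat:

    heat_load_kw = 38
    cop = 3                                # assumed coefficient of performance
    compressor_kw = heat_load_kw / cop     # ~12.7 kW of input to move the heat back out
    print(compressor_kw)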
I can think of a number of cases where using a hard-decision decoder can be a better choice. Power would be one factor, and I would strongly disagree that the power delta between soft and hard codes at these data rates is small. Unfortunately, I cannot find any public data to back up that claim for what appear (based on coding gain and overhead) to be an HD-FEC using RS and an SD-FEC using braided BCH or LDPC.
Other factors can include the reduced routing complexity and area requirements of HD, since with SD you have to shuffle soft information around the die. Extra die space is expensive, so you want to avoid it if possible.
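A crude way to see the routing cost: every code bit in an SD decoder is carried as a multi-bit LLR rather than a single hard bit, so the datapath wiring scales with the LLR width (the 5-bit figure below is just a common assumption, not something from the paper):

    code_bits_per_block = 30_000        # arbitrary example block length
    llr_bits = 5                        # assumed soft-decision quantization
    hard_wires = code_bits_per_block * 1
    soft_wires = code_bits_per_block * llr_bits
    print(soft_wires / hard_wires)      # ~5x the interconnect for soft decisions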
However, I think the most likely reason is the latency reduction you get when using HD-FEC. I know that some applications of microwave links are extremely latency sensitive; it could be that this research is targeting one of those applications.
Something not totally clear from the title, but it seems the claimed rate was actually achieved with a transmitter structure similar to what you would find in coherent optics (see figure 4). Instead of coupling to fiber, they couple to a high-speed photodiode that radiates at ~140GHz.
EDIT: After a closer reading of the paper, I noticed the real goal was to assess the LO phase-noise improvement when moving from an RF synthesizer to an SBS laser and PD-based LO.
What is "reliable" though? All digital systems will undergo some rate of correctable/uncorrectable errors; it just depends on what you consider an acceptable rate of failure for your device.
That may hold for a trivial device or a perfectly spec-compliant device. However, the former is not interesting and the latter does not exist. I agree that more test coverage would be beneficial, but I think you're heavily downplaying the difficulty of writing realistic mock hardware.
I fail to see how you could "disrupt" something like cooling solutions in the way I believe your post implies.
First, the ability to remove large volumes of heat from a DC (or analogous plant) is largely dependent on the local geography/environment. In modest cases, that may come down to average ambient temperatures and the cost of electricity. In more extreme cases, access to a massive heat sink such as a body of water may be necessary.
Second, the technology used to evacuate heat is very mature. The modern world depends on HVAC and has for a very long time. While there are incremental advances such as new refrigerants and compressor technology, there is always a cost-to-performance tradeoff.
I would argue the FFT algorithm is much more important in communication systems than the Viterbi algorithm. There are plenty of powerful Forward Error Correction schemes that do not use convolutional codes (and thus cannot use the Viterbi algorithm). However, every single system with a reasonably long channel equalizer makes use of a real-time FFT.
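To illustrate: long equalizers are run in the frequency domain (overlap-save style), where the per-sample cost is on the order of log N instead of L for an L-tap time-domain FIR. A minimal numpy sketch of the idea (toy sizes, nothing like a real modem's fixed-point pipeline):

    import numpy as np

    def overlap_save_fir(x, h, fft_len=1024):
        # apply the FIR equalizer h to signal x via overlap-save FFT convolution
        L = len(h)
        step = fft_len - (L - 1)                   # new samples consumed per block
        H = np.fft.fft(h, fft_len)
        x = np.concatenate([np.zeros(L - 1, dtype=complex), x])  # prime the history
        out = []
        for start in range(0, len(x) - fft_len + 1, step):
            block = np.fft.fft(x[start:start + fft_len])
            y = np.fft.ifft(block * H)
            out.append(y[L - 1:])                  # drop the circularly-aliased samples
        return np.concatenate(out)

    # a 255-tap equalizer costs ~255 multiplies per output sample in the time
    # domain, vs on the order of 2*log2(1024) ~= 20 with the FFT approach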