I wonder how much better hollow core fiber would be. My guess is faster than copper, even given the conversion and retimer latencies.
MadVikingGod 9 hours ago [-]
So the findings here do make sense. For sub-5m cables, directly connecting two machines is going to be faster than having some PHY in between that has to resignal. I'm surprised that fiber is only 0.4ns/m worse than their direct copper cables; that is pretty incredible.
What I would actually like to see is how this performs in a more real-world situation. Does this increase line error rates, causing the transport or application to resend at a higher rate, which would erase all the savings from lower latency? Also, if they are really signaling these in the multi-GHz range, are these passive cables acting like antennas, and is a cabinet full of them killing itself on crosstalk?
kazinator 1 hour ago [-]
They looked at the medium itself, not the attached data link hardware.
Look at the graphs. The fiber has a higher slope; each meter adds more latency than a meter of copper.
This is simply due to the speed of electromagnetic wave propagation in the different media.
https://networkengineering.stackexchange.com/questions/16438...
Both the propagation of light in fiber and signal propagation in copper are much slower than the speed of light in vacuum, but they are not equal.
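A back-of-the-envelope check of that slope difference, assuming typical values (glass refractive index ~1.468, twinax velocity factor ~0.75; neither figure is from the article):

```python
# Rough per-metre propagation delay for different media.
# The velocity factor and refractive index are typical assumed values,
# not measurements from the article.
C_M_PER_NS = 0.299792458  # speed of light in vacuum, metres per nanosecond

def delay_ns_per_m(velocity_factor: float) -> float:
    """Propagation delay in ns per metre for a given velocity factor."""
    return 1.0 / (C_M_PER_NS * velocity_factor)

fibre = delay_ns_per_m(1 / 1.468)   # glass fibre: v = c/n, n ~ 1.468
twinax = delay_ns_per_m(0.75)       # assumed twinax DAC velocity factor

print(f"fibre : {fibre:.2f} ns/m")
print(f"twinax: {twinax:.2f} ns/m")
print(f"difference: {fibre - twinax:.2f} ns/m")
```

With these assumptions the difference comes out around 0.45 ns/m, in the same ballpark as the ~0.4 ns/m the thread discusses.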
Palomides 9 hours ago [-]
high speed links all have forward error correction now (even PCIe); nothing in my small rack full of 40Gbe devices connected with DACs has any link level errors reported
p_l 9 hours ago [-]
DACs don't cause problems, but twisted pair at 10Gig is a PITA due to power and thermals
somanyphotons 7 hours ago [-]
What allows DACs to avoid the power/thermal issues that twisted pair has?
(My naive view is that they're both 'just copper'?)
kijiki 7 hours ago [-]
DACs are usually twin-ax, which is just 2 coax cables bundled. The shielding matters a lot, compared to unshielded twisted pairs.
Faster parallel DACs require more pairs of coax, and thus are thicker and more expensive.
Hilift 8 hours ago [-]
Storage over copper used to be suboptimal, but not necessarily due to the cable. QUIC over UDP is much closer to wire speed, so 10 Gb copper and 10 Gb fiber are probably the same, but 40+ Gb fiber is quite common now.
laurencerowe 9 hours ago [-]
> So the findings here do make sense. For sub-5m cables, directly connecting two machines is going to be faster than having some PHY in between that has to resignal. I'm surprised that fiber is only 0.4ns/m worse than their direct copper cables; that is pretty incredible.
Surely resignaling should be the fixed cost they calculate at about 1ns? Why does it also incur a 0.4ns/m cost?
cenamus 8 hours ago [-]
Light takes ~3.3ns per metre in vacuum, so maybe the lowered speed through the fibre?
Speed of electricity in wire should be pretty close to c (at least the front)
myself248 8 hours ago [-]
Velocity factor in most cables is between 0.6 and 0.8 of what it is in a vacuum. Depends on the dielectric material and cable construction.
This is why point-to-point microwave links took over the HFT market -- they're covering miles with free space, not fiber.
jcims 7 hours ago [-]
I always thought it was about reduced path length. Interesting.
cycomanic 6 hours ago [-]
It's both. Those links try to minimise deviation from a straight line (and invest significant money in antenna locations to do that), but they also use copper/coax cables for connecting radios, as well as hollow-core fibre for other connections to the modems.
laurencerowe 8 hours ago [-]
I misremembered the speed of electrical signal propagation from high school physics. It's around 2/3rds the speed of light in a vacuum, not 1/3rd. The speed of light in an optical fibre is also around 2/3rds the speed in a vacuum.
It's still "the speed of light", but the speed of light in the medium rather than in air or vacuum. The same applies in optic fibres: both are around two thirds of the speed of light in vacuum.
GuB-42 8 hours ago [-]
c is constant, the speed of light is not.
c is the speed of light in a vacuum, but it is not really about light, it is a property of spacetime itself, and light just happens to be carried by a massless particle, which, according to Einstein's equations, make it go at c (when undisturbed by the medium). Gravity also goes at c.
bigfishrunning 7 hours ago [-]
I'd always considered c "the speed of light", with gravity travelling at the speed of light, rather than light and gravity both travelling at c, a property of spacetime. The latter is a much simpler mental model; thanks for the simple explanation!
Sniffnoy 1 hour ago [-]
You can think of c as the conversion rate between space and time; then, light (and anything else without mass, such as gravity or gluons) travels at a speed of 1. Everything else travels at a speed of less than 1.
(Physicists will in fact use the c=1 convention when keeping track of the distinction between distance units and time units is not important. A related convention is hbar=1.)
You can tell that c is fundamental, rather than just a property of light, from how it appears in the equations for Lorentz boosts (length contraction and time dilation).
Eldt 7 hours ago [-]
I've always thought of c as the speed limit of causality
Sesse__ 8 hours ago [-]
c is the speed of light in vacuum.
EM signals move at about 0.66c in fiber, and about 0.98c in copper.
BenjiWiebe 3 hours ago [-]
More like 0.6c to 0.75c in Cat6 Ethernet cable.
The insulation slows it down.
bhaney 7 hours ago [-]
> I'm surprised that fiber is only 0.4ns/m worse then their direct copper cables
Especially since physics imposes a ~1.67ns/m penalty on fiber. The best-case inverse speed of light in copper is ~3.3ns/m, while it's ~5ns/m in fiber optics.
tcdent 8 hours ago [-]
PHYs are going away and fiber is going straight to the chip now, so while the article is correct, in the near future this will not be the case.
sophacles 6 hours ago [-]
The chip has a PHY built into it on-die, you mean. That affects the timing of getting the signal from memory to the PHY, but not necessarily the switching times of transistors in the PHY, nor the timing of turning the light on and off.
jerf 9 hours ago [-]
"Has lower latency than" fiber. Which is not so shocking. And, yes, technically a valid use of the word "faster" but I think I'm far from the only one who assumed they were going to make a bandwidth claim rather than a latency claim.
jcelerier 7 hours ago [-]
I wonder where the idea of "fast" being about throughput comes from. For me it has always, always only ever meant latency.
nine_k 7 hours ago [-]
Latency to the first byte is one thing, latency to the last byte, quite another. A slow-starting high-throughput connection will bring you the entire payload faster than an instantaneously starting but low-throughput connection. The larger the payload, the more pronounced is the difference.
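A toy model makes this concrete (illustrative numbers only): time to the last byte is startup latency plus serialization time.

```python
def transfer_time_s(latency_s: float, bandwidth_bps: float, payload_bits: float) -> float:
    """Time to the last byte: startup latency plus payload / bandwidth."""
    return latency_s + payload_bits / bandwidth_bps

# An instantly-starting slow link vs a slow-starting fast link.
slow_link = lambda bits: transfer_time_s(0.0, 10e6, bits)  # no latency, 10 Mbps
fast_link = lambda bits: transfer_time_s(0.1, 1e9, bits)   # 100 ms start, 1 Gbps

small = 8 * 1024      # 1 KiB payload, in bits
large = 8 * 1024**3   # 1 GiB payload, in bits

print(slow_link(small) < fast_link(small))  # small payload: low latency wins
print(fast_link(large) < slow_link(large))  # large payload: bandwidth wins
```

The crossover point depends on payload size, which is exactly the "larger the payload, the more pronounced the difference" effect.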
mouse_ 7 hours ago [-]
ehh... latency is an objective term that, for me at least, has always meant something like "how quickly can you turn on a light bulb at the other end of this system"
rusk 6 hours ago [-]
The term under discussion is "speed", which goes beyond latency. A link with higher bandwidth is "faster" in the sense of "time to last byte", even at the same latency.
Latency is well defined and nobody is quibbling over that.
aidenn0 2 hours ago [-]
An SR-71 Blackbird flies faster than a 747. Nevertheless, a 747 can get 350 people from LA to New York faster than the SR-71.
TZubiri 2 hours ago [-]
If I have to download a 4 GB movie the round-trip latency is not so important. At 4 MB/s I can get the file in 1000 s; at 40 MB/s, in 100 s.
switchbak 7 hours ago [-]
A 9600 baud serial connection between two machines in the 90's would have low latency, but few would have called it fast.
Maybe it's all about sufficient bandwidth - now that it's ubiquitous, latency tends to be the dominant concern?
TacticalCoder 3 hours ago [-]
[dead]
throw0101b 1 hour ago [-]
> I wonder where the idea of "fast" being about throughput comes from.
Dictionary definitions of "fast" include:
> taking a comparatively short time
* https://www.merriam-webster.com/dictionary/fast § 3(a)(2)
> done in comparatively little time; taking a comparatively short time: fast work.
* https://www.dictionary.com/browse/fast § 2
A cat video will start displaying much sooner with 1 Mbps of bandwidth than with 100 Kbps. So an online experience happens sooner (= faster in time) with more bandwidth.
p_j_w 7 hours ago [-]
Presumably from end users who care about how much time it takes to receive or send some amount of data.
wat10000 7 hours ago [-]
Until pretty recently, throughput dominated the actual human-relevant latency of time-until-action-completes on most connections for most tasks. "Fast" means that your downloads complete quickly, or web pages load quickly, or your e-mail client gets all of your new mail quickly. In the dialup age, just about everything took multiple seconds if not minutes, so the ~200ish ms of latency imposed by the modem didn't really matter. Broadband brought both much greater throughput and much lower latency, and then web pages bloated and you were still waiting for data to finish downloading.
kragen 8 hours ago [-]
I assumed they were going to make a bandwidth claim and was prepared to reject it as nonsense.
TZubiri 2 hours ago [-]
Instantly assumed that it was clickbait.
So basically: Lower latency, lower bandwidth?
throw0101b 1 hour ago [-]
> So basically: Lower latency, lower bandwidth?
No: DAC and (MMF/SMF) fibre will (in this example) both give you 10Gbps.
throw0101d 9 hours ago [-]
This coming from Arista is unsurprising, because their original niche was low latency, and the first industry where they made inroads against the 'incumbents' was finance:
> The low-latency of Arista switches has made them prevalent in high-frequency trading environments, such as the Chicago Board Options Exchange[50] (largest U.S. options exchange) and RBC Capital Markets.[51] As of October 2009, one third of its customers were big Wall Street firms.[52]
* https://en.wikipedia.org/wiki/Arista_Networks
They've since expanded into more areas, and are said to be fairly popular with hyper-scalers. Often recommended in forums like /r/networking (support is well-regarded).
One of the co-founders is Andy Bechtolsheim, also a co-founder of Sun, who wrote Brin and Page one of the earliest cheques to fund Google:
* https://en.wikipedia.org/wiki/Andy_Bechtolsheim
That and the physical decoupling of information into another medium other than EM
Try running Cat cables on powerlines like Aerial Fibre
tejtm 14 minutes ago [-]
Pedantically, light is still EM.
But I think I understand what you mean.
The shape of individual EM waveforms is no longer relevant; instead there are just buckets of "got some or not".
feitingen 2 hours ago [-]
Signals also propagate ever so slightly faster in twinax than in (glass) fiber.
Not enough to matter in this comparison, but I thought I should mention it.
deepnotderp 2 hours ago [-]
FEC latency is >> propagation delays at these distances, so that's probably the dominant factor in most cases
citizenpaul 7 hours ago [-]
It's been long known that Direct Attach Copper cables (DACs) are faster for short runs. It makes sense, since there doesn't need to be an electrical-to-optical conversion.
ezekiel68 3 hours ago [-]
I suppose you are right, though we may not say "it has been widely known". Lots of us who read HN come from the software side, and we coders often hand-wave on these topics when shooting the breeze; much like how a casual car enthusiast might not imagine it was possible for a 6-cylinder engine to have more horsepower than a V8.
exabrial 8 hours ago [-]
IIRC, the passive copper SFP Direct Attach cables are basically just a fancy "crossover cable" (for those old enough to remember those days). Essentially there is no medium conversion.
zokier 9 hours ago [-]
What are applications where 5ns latency improvement is significant?
ethan_smith 36 minutes ago [-]
High-frequency trading is the primary application, where 5ns can represent millions in profit as firms compete to execute trades first, but you'll also see benefits in distributed database synchronization, real-time financial risk calculations, and some specialized scientific computing workloads.
thanhhaimai 9 hours ago [-]
High Frequency Trading is one.
Loughla 8 hours ago [-]
Anything else? Because that's the only one I can think of.
smj-edison 7 hours ago [-]
I'd expect HPC would be another, since a lot of algorithms that run on those clusters are bottlenecked by latency or throughput in communication.
ezekiel68 3 hours ago [-]
For the parent: and not only bottlenecked at single hops but also hampered by the propagation of latency as the hops increase, depending on the complexity of the distributed system design.
throw0101b 2 hours ago [-]
> […] by the propagation of latency as the hops increase […]
Which is why you get network topologies other than 'just' fat tree in HPC networks:
* https://www.hpcwire.com/2019/07/15/super-connecting-the-supe...
* https://en.wikipedia.org/wiki/Torus_interconnect
Any high-utilization workload with a chatty protocol dominated by small IOs, such as:
* distributed filesystems such as MooseFS, Ceph, Gluster used for hyperconverged infrastructure.
* SANs hosting VMs with busy OLTP databases
* OLTP replication
* CXL memory expansion where remote memory needs to be as close to inter-NUMA node latency as possible
vlovich123 9 hours ago [-]
Faster only because the distances involved are short enough that the PHY layer adds significant overhead. But if you somehow could wave a magic wand and make optical computing work, then fiber would be faster (& generate less heat).
throw0101d 9 hours ago [-]
> Faster only because the distances involved are short enough that the PHY layer adds significant overhead.
This specifically mentions the 7130 model, which is a specialized bit of kit, and which Arista advertises for (amongst other things):
> Arista's 7130 applications simplify and transform network infrastructure, and are targeted for use cases including ultra-low latency exchange trading, accurate and lossless network visibility, and providing vendor or broker based shared services. They enable a complete lifecycle of packet replication, multiplexing, filtering, timestamping, aggregation and capture.
* https://www.arista.com/en/products/7130-applications
It is advertised as a "Layer 1" device and has a user-programmable FPGA. Some pre-built applications are: "MetaWatch: Market data & packet capture, Regulatory compliance (MiFID II - RTS 25)", "MetaMux: Market data fan-out and data aggregation for order entry at nanosecond levels", "MultiAccess: Supporting Colo deployments with multiple concurrent exchange connection", "ExchangeApp: Increase exchange fairness, Maintain trade order based on edge timestamps".
Latency matters (and may even be regulated) in some of these use cases.
zokier 9 hours ago [-]
The PHY contributes only a 1ns difference, but the results also show a 400ps/m advantage for copper, which I can only assume comes from the difference in EM propagation speed in the media.
myself248 8 hours ago [-]
No. Look at the graph -- the offset when extrapolated back to zero length is the PHY's contribution.
The differing slope of the lines is due to velocity factor in the cable. The speed of light in vacuum is much faster than in other media. And the lines diverge the longer you make them.
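That decomposition is just a straight-line fit over (length, latency) pairs: the intercept approximates the fixed PHY cost, the slope the per-metre propagation delay. A sketch with made-up data points (roughly mimicking a ~1ns fixed cost plus ~4.95ns/m of fibre-like delay; these are not Arista's measurements):

```python
# Fit latency = offset + slope * length with an ordinary least-squares line.
lengths_m = [1.0, 2.0, 3.0, 5.0]
latency_ns = [6.0, 10.9, 15.9, 25.8]  # synthetic: ~1 ns offset + ~4.95 ns/m

n = len(lengths_m)
mean_x = sum(lengths_m) / n
mean_y = sum(latency_ns) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(lengths_m, latency_ns)) \
        / sum((x - mean_x) ** 2 for x in lengths_m)
offset = mean_y - slope * mean_x

print(f"fixed offset ~ {offset:.2f} ns  (PHY contribution)")
print(f"slope ~ {slope:.2f} ns/m  (inverse propagation speed)")
```

Extrapolating the fitted line back to zero length recovers the fixed cost, exactly as described above.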
MadVikingGod 4 hours ago [-]
It's true, but if you go look at their product catalog you will see that none of their direct-attach cables are longer than 5m, and the high-bandwidth ones are 2m. So, again, it's true, but also limiting in other ways.
It seems there is quite a wide range for different types of cables so some will be faster and others slower than optical fibre. https://en.wikipedia.org/wiki/Velocity_factor
But the resignalling must surely be unrelated?
Obligatory Adm. Grace Hopper nanosecond reference:
* https://www.youtube.com/watch?v=si9iqF5uTFk&t=40m10s