Comment by throw0101c

10 hours ago

> The mainframe itself (or any other platform for that matter) is not magical with regards to latency.

At c, a signal covers about 300 mm (30 cm; 12") per nanosecond, and signals in fibre or copper travel slower than that, roughly two-thirds of c. Add the processing latency of every network device along the path, then double all of it to get the response back to you.

When everything is within the distance of one rack, you save a whole lot of nanoseconds simply by not having to go as far.
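
To put rough numbers on that, here's a back-of-envelope sketch in Python (the ~2/3-of-c velocity factor is an assumption for typical fibre, not a measurement):

    # Round-trip propagation delay only; ignores serialization and device latency.
    C = 299_792_458            # speed of light in vacuum, m/s
    VELOCITY_FACTOR = 0.67     # assumed signal speed in fibre/copper relative to c

    def propagation_rtt_ns(distance_m):
        """Round-trip propagation delay (ns) for a given one-way distance."""
        one_way_s = distance_m / (C * VELOCITY_FACTOR)
        return 2 * one_way_s * 1e9

    for metres in (2, 30, 500, 100_000):   # rack, row, campus, metro-ish
        print(f"{metres:>7} m  ->  {propagation_rtt_ns(metres):12.1f} ns round trip")

A couple of metres inside a rack is ~20 ns round trip; 100 km between sites is already ~1 ms from propagation alone.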

More to the point, transmitting a 1,500-byte packet at any given link rate takes time: 12,000 bits at 10 Gbps is 1.2 microseconds each way, so roughly 2.5 microseconds for the round trip even over a hypothetical "zero-length" cable.
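
That serialization number falls straight out of bits divided by link rate (again just a sketch; the link rates are illustrative):

    # Time to clock a 1500-byte payload onto the wire, ignoring propagation
    # and Ethernet framing overhead (preamble, interframe gap, etc.).
    def serialization_us(payload_bytes, link_bps):
        return payload_bytes * 8 / link_bps * 1e6

    for gbps in (1, 10, 25, 100):
        one_way = serialization_us(1500, gbps * 1e9)
        print(f"{gbps:>3} Gbps: {one_way:5.2f} us one way, {2 * one_way:5.2f} us round trip")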

Then add in the switching, routing, firewall, and load balancer overheads. Don't forget the buffering, kernel-to-user-mode transitions, "work" such as packet inspection, etc...

The net result is at least 50 microseconds in the best networks I've ever seen, such as what AWS has between modern VM SKUs in the same VPC in the same zone. Typical numbers are more like 150-300 microseconds within a data centre.[1]

If anything ping-pongs between data centres, add another +1 millisecond per hop.

Don't forget the occasional 3-way TCP handshake plus the TLS handshake plus the HTTP overheads!

I've seen PaaS services talking to each other with ~15 millisecond (not micro!) latencies.
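
Those handshakes multiply the round trips before the first byte of useful data even arrives. A rough count, assuming a fresh connection with no session resumption or 0-RTT:

    # Round trips before the first HTTP response byte on a brand-new connection.
    def first_byte_ms(rtt_ms, tls="1.3"):
        rtts = 1                             # TCP 3-way handshake
        rtts += 2 if tls == "1.2" else 1     # full TLS handshake
        rtts += 1                            # HTTP request -> response
        return rtts * rtt_ms

    for rtt in (0.15, 1.0, 15.0):            # intra-DC, cross-DC hop, the PaaS case above
        print(f"RTT {rtt:6.2f} ms -> TLS 1.2: {first_byte_ms(rtt, '1.2'):6.2f} ms, "
              f"TLS 1.3: {first_byte_ms(rtt):6.2f} ms")

At a 15 ms RTT that's 45-60 ms before your application code has even seen the request.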

[1] It's possible to get down to single-digit microseconds with InfiniBand, but only with software written specifically for it using a specialised SDK.