Comment by neverartful

17 hours ago

The mainframe itself (or any other platform for that matter) is not magical with regards to latency. It's all about proper architecture for the workload. Mainframes do provide a nice environment for pushing huge volumes of IO, though.

> The mainframe itself (or any other platform for that matter) is not magical with regards to latency.

At c, a signal covers about 300 mm (30 cm; 12") in one nanosecond. Data signals do not travel over fibre or copper at c, but slower (roughly two-thirds of c), and on top of that you add network device processing latency. Now double all of that to get the response back to you.

When everything is within the distance of one rack, you save a whole lot of nanoseconds just by not having to go as far.
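A quick back-of-envelope sketch of just the propagation term (the ~0.66 velocity factor for fibre/copper and the distances are assumptions; device processing latency is ignored):

```python
# Round-trip propagation delay only; device latency is ignored.
C = 299_792_458          # speed of light in a vacuum, m/s
VELOCITY_FACTOR = 0.66   # assumed fraction of c over fibre/copper

def round_trip_ns(distance_m: float) -> float:
    """Round-trip signal propagation delay in nanoseconds."""
    one_way_s = distance_m / (C * VELOCITY_FACTOR)
    return 2 * one_way_s * 1e9

for label, metres in [("one rack", 2), ("same hall", 50), ("same campus", 500)]:
    print(f"{label:>11} ({metres:>3} m): {round_trip_ns(metres):7.1f} ns round trip")
```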

  • More to the point, transmitting a 1500-byte packet at any given network data rate takes time. At 10 Gbps that is roughly 3 microseconds for the round trip (about 1.2 microseconds of serialization each way, plus framing), even over a hypothetical "zero length" cable. (A rough budget is sketched at the end of this comment.)

    Then add in the switching, routing, firewall, and load balancer overheads. Don't forget the buffering, kernel-to-user-mode transitions, "work" such as packet inspection, etc...

    The net result is at least 50 microseconds in the best networks I've ever seen, such as what AWS has between modern VM SKUs in the same VPC in the same zone. Typical numbers are more like 150-300 microseconds within a data centre.[1]

    If anything ping-pongs between data centres, then add roughly a millisecond per hop.

    Don't forget the occasional 3-way TCP handshake plus the TLS handshake plus the HTTP overheads!

    I've seen PaaS services talking to each other with ~15 millisecond (not micro!) latencies.

    [1] It's possible to get down to single digit microseconds with Infiniband, but only with software written specifically for this using a specialised SDK.
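    A rough sketch of that budget under the assumptions above (the serialization maths is exact; every middlebox/software figure is an illustrative assumption, not a measurement):

    ```python
    def serialization_us(frame_bytes: int, link_gbps: float) -> float:
        """Time to clock one frame onto the wire, in microseconds."""
        return frame_bytes * 8 / (link_gbps * 1000)

    frame = 1500
    for gbps in (1, 10, 100):
        t = serialization_us(frame, gbps)
        print(f"{frame} B at {gbps:>3} Gbps: {t:5.2f} us each way, {2 * t:5.2f} us round trip")

    # Add the middleboxes and software overheads (all assumed values):
    budget_us = {
        "serialization (both ways)": 2 * serialization_us(frame, 10),
        "switching + routing":       5,
        "firewall + load balancer":  20,
        "kernel/user transitions":   10,
        "packet inspection etc.":    15,
    }
    print(f"total ~{sum(budget_us.values()):.0f} us, i.e. around the ~50 us best case above")
    ```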

Again, missing the point. Just look at the numbers.

Mainframe manufacturers talk about "huge IO throughput", but a rack of x86 kit with ordinary SSD SAN storage will have extra zeroes on the aggregate throughput. Similarly, on a bandwidth-per-dollar basis, Intel-compatible generic server boxes are vastly cheaper than any mainframe. Unless you're buying the very largest mainframes ($billions!), a single Intel box will practically always win for the same budget. E.g.: just pack it full of NVMe SSDs and enjoy ~100 GB/s cached read throughput on top of ~20 GB/s of writes to remote "persistent" storage.
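As a rough sanity check on that NVMe figure, a sketch (the drive count and per-drive numbers are assumptions for current PCIe 4.0-class parts, not vendor specs):

```python
# Aggregate throughput of a box packed with NVMe SSDs (assumed figures).
drives = 16                  # assumed number of U.2/E1.S bays in one box
read_gb_s_per_drive = 7.0    # assumed sequential read per drive, GB/s
write_gb_s_per_drive = 4.0   # assumed sequential write per drive, GB/s

print(f"raw aggregate read:  ~{drives * read_gb_s_per_drive:.0f} GB/s")
print(f"raw aggregate write: ~{drives * write_gb_s_per_drive:.0f} GB/s")
# PCIe lanes, memory bandwidth, and the filesystem/RAID layer cap this
# below the raw sum, but RAM-cached reads still land in the ~100 GB/s range.
```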

The "architecture" here is all about the latency. Sure, you can "scale" a data centre full of thousands of boxes far past the maximums of any single mainframe, but then the latency necessarily goes up because of physics, not to mention the practicalities of large-scale Ethernet networking.

The closest you can get to the properties of a mainframe is to put everything into one rack and use RDMA with Infiniband.

  • You have to think of the mainframe as a platform, like AWS or Kubernetes or VMware. Saying “AWS has huge throughput” is meaningless.

    The features of the platform are the real technical edge. You need to use those features to get the benefits.

    I’ve moved big mainframe apps to Unix or Windows systems. There’s no magic… you just need to refactor around the constraints of the target system, which are different from those of the mainframe.

    • What you hint at is that most workloads today don't need most of the mainframe's features any more, and you can move them to commodity hardware.

      There is much less need for most business functions to sit on a mainframe.

      However, the mainframe offers some availability features in hardware and z/VM which you have to compensate for in software and system architecture if failure is not an option, business-wise.

      And if your organisation can build such a fail-operational system and software solution, then there is no reason today to stay on the mainframe. It's indeed more a convenience these days than anything else.

  • > The closest you can get to the properties of a mainframe is to put everything into one rack and use RDMA with Infiniband.

    Or PCIe... I really would like to try building that.