Comment by jiggawatts

14 hours ago

Again, missing the point. Just look at the numbers.

Mainframe manufacturers talk about "huge IO throughputs", but a rack of x86 kit with ordinary SSD SAN storage will have extra zeroes on the aggregate throughput. Similarly, on a bandwidth-per-dollar basis, Intel-compatible generic server boxes are vastly cheaper than any mainframe. Unless you're buying the very largest mainframes ($billions!), a single Intel box will practically always win on the same budget. E.g.: just pack it full of NVMe SSDs and enjoy ~100GB/s cached read throughput on top of ~20GB/s writes to remote "persistent" storage.
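The back-of-the-envelope maths behind that claim can be sketched like this; the drive count and per-drive rates are illustrative assumptions, not vendor figures:

```python
# Rough aggregate throughput for a single all-NVMe x86 server.
# Drive count and per-drive speeds are assumptions for illustration.

DRIVES = 24                 # a typical 2U all-NVMe chassis
READ_GBPS_PER_DRIVE = 7.0   # approx. PCIe 4.0 x4 NVMe sequential read
WRITE_GBPS_PER_DRIVE = 4.0  # approx. sustained sequential write

raw_read = DRIVES * READ_GBPS_PER_DRIVE    # raw sum before any bottleneck
raw_write = DRIVES * WRITE_GBPS_PER_DRIVE

# In practice PCIe lane counts, memory bandwidth, and filesystem overhead
# cap this well below the raw sum -- hence the ~100 GB/s figure above.
print(f"raw aggregate read:  {raw_read:.0f} GB/s")
print(f"raw aggregate write: {raw_write:.0f} GB/s")
```

Even after generous real-world derating, the raw sum leaves plenty of headroom over the quoted ~100 GB/s.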

The "architecture" here is all about the latency. Sure, you can "scale" a data centre full of thousands of boxes far past the maximums of any single mainframe, but then the latency necessarily goes up because of physics, not to mention the practicalities of large-scale Ethernet networking.
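The physics part is easy to put numbers on. A minimal sketch of propagation delay alone, ignoring switching, serialization, and software overhead (the distances are illustrative assumptions):

```python
# Round-trip propagation delay at different scales.
# Signals in fibre/copper travel at roughly 2/3 the speed of light.
C_SIGNAL = 2.0e8  # m/s, assumed propagation speed

def rtt_us(distance_m: float) -> float:
    """Round-trip propagation time in microseconds."""
    return 2 * distance_m / C_SIGNAL * 1e6

print(f"within a rack (2 m):          {rtt_us(2):.3f} us")
print(f"across a data centre (500 m): {rtt_us(500):.1f} us")
```

Real-world latency is dominated by switch hops and software, which only widens the gap: every extra tier of Ethernet switching between thousands of boxes adds far more than the wire itself.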

The closest you can get to the properties of a mainframe is to put everything into one rack and use RDMA with Infiniband.

You have to think of the mainframe as a platform like AWS or Kubernetes or VMware. Saying “AWS has huge throughput” is meaningless.

The features of the platform are the real technical edge. You need to use those features to get the benefits.

I’ve moved big mainframe apps to Unix or Windows systems. There’s no magic… you just need to refactor around the constraints of the target system, which are different from those of the mainframe.

  • what you hint at is that most workloads today don't need most of the mainframe features any more, and you can move them to commodity hardware.

    There is much less need for most business functions to sit on a mainframe.

    However, the mainframe offers some availability features in hardware and z/VM which, off the mainframe, you have to compensate for in software and system architecture if failure is not an option, business-wise.

    and if your organisation can build such a fail-operational system and software solution, then there is no reason today to stay on the mainframe. it's indeed more a convenience these days than anything else.

    • I agree with most of this. I believe that mainframes have an advantage when you look at environmental factors (power consumption and cooling).

> The closest you can get to the properties of a mainframe is to put everything into one rack and use RDMA with Infiniband.

Or PCIe... I really would like to try building that.

  • I'm fairly certain you can't create a "mesh" with PCIe between multiple hosts. It's more like USB than Ethernet.

    • Don't treat the CPU board as the host but as a peripheral. ;-)

      But I'd say it's much closer to Ethernet than USB. You have controllers (routers), switches and nodes... USB doesn't, not like this.