
Comment by cgh

15 hours ago

Interesting, it's been a while since I looked at this stuff, so I did a little searching and found this: https://www.diva-portal.org/smash/get/diva2:1789103/FULLTEXT...

Their conclusion is that io_uring is still slower, but not by much, and that future improvements may make the difference negligible. So you're right, at least in part. Given the tradeoffs, DPDK may not be worth it anymore.

There are also just a bunch of operational hassles with using DPDK or SPDK. Your usual administrative commands don't work. Operations aren't mediated by the kernel; instead you need devices 100% dedicated to the application. Device counters that the kernel usually tracks aren't tracked anymore. Etc. It can be fine, but if io_uring doesn't add too much overhead, it's a lot more convenient.
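To make the contrast concrete, here's a rough sketch (mine, not from the paper or this thread) of receiving UDP datagrams through io_uring with liburing. The point is that the socket stays an ordinary kernel socket, so ip/ethtool, interface counters, and the usual admin tooling keep working, which is exactly what you give up with DPDK. Port 9000, the queue depth, and the loop count are arbitrary, and error handling is omitted:

    /* udp_uring.c -- minimal io_uring UDP receive loop (build with -luring) */
    #include <liburing.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);         /* ordinary kernel socket */
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(9000),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        struct io_uring ring;
        io_uring_queue_init(256, &ring, 0);              /* 256-entry submission queue */

        char buf[2048];
        for (int i = 0; i < 10; i++) {                   /* grab a handful of datagrams */
            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
            io_uring_prep_recv(sqe, fd, buf, sizeof(buf), 0);
            io_uring_submit(&ring);                      /* one syscall per submission batch */

            struct io_uring_cqe *cqe;
            io_uring_wait_cqe(&ring, &cqe);
            if (cqe->res > 0)
                printf("got %d bytes\n", cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }

        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }

In a real benchmark you'd keep many recv SQEs in flight (or use multishot receive and buffer rings) rather than one at a time, but even this shape shows why the operational story is simpler: it's all regular sockets under the kernel's control.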

"io_uring had a maximum throughput of 5.0 Gbit/s "

Wut? More than 10 years ago, a cheap beige box could saturate a 1 Gbps link with a kernel as it came from e.g. Debian, without special tuning. A somewhat more expensive box could get a good share of a 10 Gbps link (using jumbo frames), so these new results are, er, somewhat underwhelming.

  • Packet size does affect the throughput quite a lot, though. The tests in the paper are without jumbo frames; some back-of-envelope packet-rate numbers below. (I do agree, though, that the results in that paper can't really be described as 'close' when io_uring is 1/5 the speed, only achieves that at the largest packet size, and has much more packet loss under those conditions.)
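For scale (my own back-of-envelope, not from the paper): at a 1500-byte MTU, 5 Gbit/s works out to roughly 5e9 / (1500 * 8) ≈ 417k packets per second, while the same bit rate with 9000-byte jumbo frames is only about 69k packets per second. Since the per-packet cost (syscalls/completions, copies, interrupts) is roughly fixed, small-packet tests are bounded by packet rate rather than link speed, which is why frame size moves these numbers so much.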

That's an interesting and valuable study. I was slightly disappointed though that only a single host was used in the 'network' performance tests:

"SR-IOV was used on the NIC to enable the use of virtual functions, as it was the only NIC that was available during the study for testing and therefore the use of virtual functions was a necessity for conducting the experiments."