Comment by m_ke

10 hours ago

https://developers.facebook.com/blog/post/2021/09/07/eli5-op...

That's exactly what they did with their server design.

I'm saying we should come up with an open standard for tensor processing chips, with open drivers and core compute libraries, and then let hardware vendors innovate and compete to drive down the price.

Meta spent something like 10% of its revenue on ML hardware; that's not a drop in the bucket, and with model scaling and large-scale deployment those costs aren't going down. https://www.datacenterdynamics.com/en/news/meta-to-operate-6...