Comment by sangnoir

16 hours ago

> Tough to tell, given nobody is turning a net profit on LLMs yet

Perhaps, but Meta is definitely getting some money back from ad impressions supported by AI-generated content.

Correct. We know these models are producing fucktonnes of revenue. At least some of them can be run at a gross profit, i.e. where marginal power costs plus amortized capital costs are less than marginal revenue. (Put another way: if OpenAI were an absolute monopoly and stopped training new models, could it turn a profit?)
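To make that condition concrete, here is a back-of-the-envelope sketch. Every number in it is a made-up placeholder, not a real figure for OpenAI or anyone else; it only illustrates the inequality (power + amortized capital per query < revenue per query).

```python
# Hypothetical break-even sketch -- all numbers are made-up placeholders,
# NOT real figures for any provider. It only illustrates:
# gross profit per query = revenue per query - (power + amortized capital).

revenue_per_1k_queries = 5.00      # billing per 1k queries, placeholder
power_cost_per_1k_queries = 1.20   # electricity for inference, placeholder
gpu_capex = 30_000.00              # cost of one accelerator, placeholder
gpu_lifetime_queries = 50_000_000  # queries served over its useful life, placeholder

capital_per_1k_queries = gpu_capex / gpu_lifetime_queries * 1_000
marginal_cost = power_cost_per_1k_queries + capital_per_1k_queries
gross_profit = revenue_per_1k_queries - marginal_cost

print(f"marginal cost per 1k queries: ${marginal_cost:.2f}")
print(f"gross profit per 1k queries:  ${gross_profit:.2f}")
print("runs at a gross profit" if gross_profit > 0 else "underwater on every query")
```

Whether the real numbers land on the profitable side of that inequality is exactly the open question in this thread.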

What’s unclear is whether this is a short-term revenue bump from people fucking around with the newest, shiniest model, or recurring revenue from a business that only looks unprofitable because the frontier is advancing so quickly.

  • From the little we know about OpenAI's inference infra, I feel like I can confidently say that if training stopped today and they were cut off from Azure subsidies, their $20 subscription tier would probably not cover the cost of inference.

    I know nothing about the enterprise side of OpenAI, but I'm sure they're profitable there. I doubt the subscription fee of a single ChatGPT Plus power user covers the water that user's queries consume (this is probably an exaggeration, but I think I'm in the ballpark).

    • It may be that extra-large LLMs don’t make sense for ChatGPT; they’re for enterprise use, like supercomputers. The reason I say “at least some” is that I’ve found real use in running a local instance of Llama (roughly the sketch below), which suggests there is some niche of (legal) activities AI can support sustainably. (Versus, e.g., crypto.)
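For what it's worth, “running a local instance of Llama” can be as small as the following. This is a minimal sketch assuming the llama-cpp-python bindings and an already-downloaded GGUF weights file; the model path and name are placeholders, not a recommendation of a specific checkpoint.

```python
# Minimal local-Llama sketch using the llama-cpp-python bindings.
# Assumes a GGUF quantized model is already on disk; the path below is a
# placeholder -- substitute whatever checkpoint you actually have.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,   # context window
    n_threads=8,  # CPU threads; tune for your machine
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize why local inference can be cheap."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

The point isn't this particular library; it's that the marginal cost of this kind of usage is basically local electricity, which is why it can pencil out even where a heavily subsidized hosted frontier model might not.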