Comment by ryandrake

18 hours ago

> What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"?

Even the biggest tech companies have limited engineering bandwidth to allocate to projects. What's wrong with those 100 experiments is the opportunity cost: they suck all the oxygen out of the room, shifting the company's focus away from fixing real user problems. Many other problems don't require AI to solve, and companies are starving those problems of resources in favor of AI experiments.

It would be better to rank each potential project by ROI, customer need, profit, or some other meaningful metric, and do the highest-ranked ones. Instead, we're sorting first by "does it use AI" and prioritizing those.

I don't see what you describe happening.

If you look at all the recent Google Docs features rolled out, only a small minority are AI-related:

https://workspaceupdates.googleblog.com/search/label/Google%...

There are a few relating to Gemini in additional languages and support for additional document types, but the vast majority are non-AI.

It seems the companies are sorting on ROI just fine. But of course AI is expected to have a large return, so it's in the mix too.