
Comment by ryandrake

15 hours ago

I'm making some big assumptions about Adobe's product ideation process, but: This seems like the "right" way to approach developing AI products: Find a user need that can't easily be solved with traditional methods and algorithms, decide that AI is appropriate for that thing, and then build an AI system to solve it.

Rather than what many BigTech companies are currently doing: "Wall Street says we need to 'Use AI Somehow'. Let's invest in AI and Find Things To Do with AI. Later, we'll worry about somehow matching these things with user needs."

My interpretation is that they're getting the same push from Wall Street and the same investor-hype-driven product leadership as every other tech firm, but this time they have the good fortune to specialize in one of the few verticals (image editing) where generative AI currently has superhuman performance.

This is a testable claim: where were Adobe in previous hype cycles? Googling "Adobe Blockchain"... it looks like they were all about blockchains in 2018 [0], then NFTs and "more sustainable blockchains" in 2022 [1].

[0] https://blog.adobe.com/en/publish/2018/09/27/blockchain-and-...

[1] https://www.ledgerinsights.com/adobe-moves-to-sustainable-bl...

  • The article clearly says there's no guarantee this feature will be released.

    Which I'm reading as "Demo-ready, but far from production-ready."

    Somewhat relevant: my experience with Photoshop's Generative Fill has been underwhelming. Sometimes it's wrong, often it's comically wrong. I haven't had many easy wins with it.

    IMO this is a company that doodles with code for its own entertainment, not a company that innovates robust and highly useful production-ready features for the benefit of users.

    So we'll see if Mr Spinny Dragon makes it to production, and is as useful as billed in the demo.

    • You don't need to release to production for real value. I'm under intense pressure to scope out frothy AI features because just discussing them with prospects has a material impact on the costs of the sales funnel.


  • I disagree with your analysis. I think this is a novel use of AI in a commercial art product. Is there any AI feature that Adobe could release that you would not view as "pushed from Wall Street"?

I think you're being a bit too generous with Adobe here :-). I shared this before, but it's worth resharing [1]. It covers the experience of a professional artist using Adobe tools.

The gist is that once a company has a captive audience with no alternatives, investors come first. Flashy (no pun intended :-p), cool features to impress investors become more important than the everyday user experience—and this feature does look super cool!

--

1: https://www.youtube.com/watch?v=lthVYUB8JLs

  • I don’t think those ideas are mutually exclusive. I heavily dislike Adobe and think they’re a rotten company with predatory practices. I also think “AI art” can be harmful to artists and more often than not produces uninteresting flawed garbage at an unacceptable energy cost.

    Still, when I first heard of Adobe Firefly, my initial reaction was “smart business move, by exclusively using images they have the rights to”. Now seeing Turntable my reaction is “interesting tool which could be truly useful to many illustrators”.

    Adobe can be a bad and opportunistic company in general but still do genuinely interesting things. As much as they deserve the criticism, the way in which they’re using AI does seem to be thought out and meant to address real user needs while minimising harm to artists.¹ I see Apple’s approach with Apple Intelligence a bit in the same vein, starting with the user experience and working backwards to the technology, as it should be.²

    Worth noting that I fortunately have distanced myself from Adobe for many years now, so my view may be outdated.

    ¹ Which I don’t believe for a second is out of the goodness of their hearts, it just makes business sense.

    ² However, in that case the results seem to be subpar and I don’t think I’d use it even if I could.

    •     > I also think “AI art” can be harmful to artists and more often than not produces uninteresting flawed garbage at an unacceptable energy cost.

      What do you think about Midjourney? The (2D) results are pretty incredible.


    • Whether or not they avail of it, Adobe has the possibility of accessing feedback and iterating on it for a lot of core design markets. I have a similar view to yours, but there is a segment of the AI community who feel that they are disrupting Adobe as much as other companies. In most cases, these incumbents have access to the domain experience that will enable AI, and it won't work the other way around.

      All of this is orthogonal to Adobe's business practices. You should expect them to operate the way they do given their market share and the limited number of alternatives. I personally have almost moved completely to Affinity products, but I expect that Adobe should be better placed to execute products and for Affinity to be playing catchup to some extent.

  • You can have both!

    Cool features that excite users (and that they ultimately end up using), and that get investors excited.

    (e.g. Adobe mentioned in the day 1 keynote that Generative Fill, released last year and powered by Adobe Firefly, is not one of the top 5 used features in Photoshop).

    The features we make, and how we use gen AI, are based on a lot of discussion and back and forth with the community (both public and private).

    I guess Adobe could make features that look cool, but no one wants to use, but that doesn't seem to really make any sense.

    (I work for Adobe)

    • > is not one of the top 5 used features in Photoshop

      I mean, is there any Photoshop feature that’s come to dominate people’s workflows so quickly?

      People (e.g. photographers) who use Photoshop “in anger” for professional use-cases, and who already know how to fix a flaw in an image region without generative fill, aren’t necessarily going to adopt it right out of the gate. They’re going to tinker with it a bit, but time-box that tinkering, otherwise sticking with what they can guarantee from experience will get a “satisfactory” result, even if it takes longer and might not have as high a ceiling for how perfectly the image is altered.

      And that's just people who repair flaws in images. Which I'm guessing aren't even the majority of Photoshop users. Is the clone brush even in the top 5 Photoshop features by usage?


    • That should read "is NOW one of the top 5 used features in Photoshop".

  • Moreover, when one looks at the chronology with which features were rolled out, all the computationally hard things which would save sufficient time/effort that folks would be willing to pay for them (and which competitors were unlikely to be able to implement) were held back until Adobe rolled out its subscription pricing model --- then and only then did the _really_ good stuff start trickling out, at a pace to ensure that companies kept up their monthly payments.

My company has decided to update its HR page to use AI, for reasons unknown.

So instead of the old workflow:

"visit HR page" → "click link that for whatever reason doesn't give you a permanent link you can bookmark for later"

it's now:

"visit HR page" → "do AI search for the same link which is suggested as the first option" → "wait 10-60 seconds for it to finally return something" → "click link that for whatever reason doesn't give you a permanent link you can bookmark for later"

This is certainly a great immediately useful tool but also a relatively small ROI, both the return and the investment. Big tech is aiming for a much bigger return on a clearly bigger investment. That’s going to potentially look like a lot of useless stuff in the meantime. Also, if it wasn’t for big tech and big investments, there wouldn’t even be these tools / models at this level of sophistication for others to be using for applications like this one.

  • While the press lumps it all together as "AI", you have to differentiate LLMs (driven by big tech and big money) from unrelated image/video types of generative models and approaches like diffusion, NeRF, Gaussian splatting, etc, which have their roots in academia.

  • On the plus side for Adobe, they have a fairly stable and predictable SaaS revenue stream, so as long as their R&D and product hosting costs don't exceed their subscription base, they're OK. This is wildly different from, for example, the hyperscalers, who have to build and invest far in advance of a market (for new services especially).

This feels extremely ungenerous to the Big Tech companies.

What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"? You figure out the 10 that users find really valuable, another 10 that will be super-valuable with improvement, and eventually drop the other 80.

Especially when, if Microsoft tries something and Google doesn't, that suddenly gives Microsoft a huge lead in a particular product, and Google is left behind because they didn't experiment enough. Because you're right -- Google investors wouldn't like that, and would be totally justified.

The fact is, it's often hard to tell which features users will find valuable in advance. And when being 6 or 12 months late to the party can be the difference between your product maintaining its competitive lead vs. going the way of WordPerfect or Lotus 123 -- then the smart, rational, strategic thing to do is to build as many features as possible around the technology, and then see what works.

I would suggest that if Adobe is being slower with rolling out AI features, it might be more because of their extreme monopoly position in a lot of their products, thanks to the stickiness of their file formats. That they simply don't need to compete as much, which is bad.

  • > What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"?

    For users? Almost everything is wrong with that.

    There are no users looking for wild churn in their user interface, no users crossing their fingers that the feature that stuck for them gets pruned because it didn't hit adoption targets overall, no users hoping for popups and nags interrupting their workflow to promote some new garbage that was rushed out and barely considered.

    Users want to know what their tool does, learn how to use it, and get back to their own business. They can welcome compelling new features, of course, but they generally want them to be introduced in a coherent way, they want to be able to rely on the feature being there for as long as their own use of those features persists, and they want to be able to step into and explore these new features on their own pace and without disturbance to their practiced workflow.

    • Think about the other side though -- if the tool you've learned and rely on goes out of business because they didn't innovate fast enough, it's a whole lot worse for you now that you have to learn an entirely new tool.

      And I haven't seen any "wild churn" at all -- like I said in another comment, a few informative popups and a magic wand icon in a toolbar? It's not exactly high on the list of disruptions. I can still continue to use my software the exact same way I have been -- it's not replacing workflows.

      But it's way worse if the product you rely on gets discontinued.


  • > What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"?

    Even the biggest tech companies have limited engineering bandwidth to allocate to projects. What's wrong with those 100 experiments is the opportunity cost: they suck all the oxygen out of the room and could be shifting the company's focus away from fixing real user problems. There are many other problems that don't require AI to solve, and companies are starving these problems in favor of AI experiments.

    It would be better to sort each potential project by ROI, or customer need, or profit, or some other meaningful metric, and do the highest ranked ones. Instead, we're sorting first by "does it use AI" and focusing on those.

    • What you describe, I don't see happening.

      If you look at all the recent Google Docs features rolled out, only a small minority are AI-related:

      https://workspaceupdates.googleblog.com/search/label/Google%...

      There are a few relating to Gemini in additional languages and supporting additional document types, but the vast majority is non-AI.

      Seems like the companies are presumably sorting on ROI just fine. But, of course, AI is expected to have a large return, so it's in there too.

  • So it's ok for all of us to become lab rats for these companies?

    • Every consumer is a "lab rat" for every company at all times, if that's how you want to think about it.

      Each of our decisions to buy or not buy a product, to use or not use a feature, influences the future design of our products.

      And thank goodness, because that's the process by which products improve. It's capitalism at work.

      Mature technologies don't need as much experimentation because they're mature. But whenever you get new technologies, yes all these new applications battle each other out in the market in a kind of survival-of-the-fittest. If you want to call consumers "lab rats", I guess that's your choice.

      But the point is -- yes, it's not only OK -- it's something to be celebrated!


  • Force-feeding 100s of different AI features (90% of which are useless at best) to users is what's wrong with the approach.

    • Why?

      It's not "force-feeding". You usually get a little popup highlighting the new feature that you close and never see again.

      It's not that hard to ignore a new "magic wand" button in the toolbar or something.

      I personally hardly use any of the features, but neither do I feel "force-fed" in the slightest. Aside from the introductory popups (which are interesting), they don't get in my way at all.


I don't think it's a Big Tech problem. Big Tech can come up with moronic ideas and be fine because they have unlimited cash. It's the smaller companies that need to count pennies who decide to flush the money down the AI Boondoggle Toilet.

"But Google does it. If we do it, we will be like Google".

  • "But Google does it. If we do it, we will be like Google".

    Were you in my meeting about 40 minutes ago? Because that's almost exactly what was said.

    If the big tech companies wanted to be really evil, they could invent a nonsense tech that doesn't work, then watch as all the small upstart competitors bankrupt themselves to replicate it.

That approach makes sense for very specific domain-tethered technologies. But for AI I think letting it loose and allowing people to find their own use cases is an appropriate way to go. I've found valuable use cases with ChatGPT in the first months of its public release that I honestly think we still wouldn't have if it went through a traditional product cycle.

It is the 'make something for the user/client' vs. 'make something to sell' mindset.

The latter one is what overwhelmingly more companies (not only BigTech, not at all!) adopted nowadays.

And Boeing. ;)

  • If the lore is to be believed, Southwest (an airline that has built its business solely around the 737) saw the A320neo and basically told Boeing "give us a new 737 or we go to Airbus." They did what the client wanted, to their detriment.

    "If I asked people what they wanted they would've said faster horses," or whatever Henry Ford is falsely accused of saying.

Counterpoint, the pandering to the market has better stock price appreciation :)

Also I am sure Adobe is doing both. They released an OpenAI competitor recently

  • Been doing both. Just look at their asset store as of late. Complete mess if you work professionally.

    At the same time, apparently their Generative Fill is top notch. It's just a shame the industry decided to mix ML tools together with generative art, so that it's hard to tell which is which at a casual glance.

Yeah I much prefer this approach to the current standard of just putting a chat bot somewhere on the page and calling it a day.

Yeah, but sometimes they just f it up. Like the PS crop tool was a-ok, then they introduced the move-the-background-instead-of-the-crop-rectangle way of cropping, which is still to this day a terrible experience.

Also, Lightroom is one of the worst camera tools out there. It's only known because Adobe...

Precisely. There are many such use cases too! It's disappointing to see the industry go all in on chatbot wrappers.