
Comment by throwgfgfd25

11 days ago

I wonder: do all the HNers who are excited about their GenAI product or wrapper or startup understand, at a fundamental level, that they are an intrinsic part of this deterioration?

Or is this one of those fundamental attribution error things:

- MY product is a powerful tool for creators who wish to save time

- THEIR product is just a poorly-thought-out slop generator

Does it occur to people to instead be part of something real and visceral, and not just blame social media's ad-driven impression model, not pretend they are only part of a trend for which they can't be totally blamed?

You have only had google image search for what, 20 years? Why do you think it is a fundamental part of humanity's growth story?

You talk about being a part of something "real and visceral" but you're complaining about the demise of being able to sit at your desk and see pictures of wildlife. Maybe it's okay that google image search dies and makes people go out and find the wildlife they want to see.

The internet, even in its best format (e.g. ad-free, free access information for all; and communication with all of humanity) has a ton of real downsides. It's not clear to me that AI should be strangled in its infancy to save the internet (which does _not_ exist in that "best" format).

  • >Maybe it's okay that google image search dies and makes people go out and find the wildlife they want to see.

    I don't think that is what will happen if google images dies.

    • haha, no definitely not! The internet is mostly not "real and visceral" so losing parts of it to AI-generated nonsense IS a loss, just not a loss of the actual underlying thing (in this case: baby peacocks).

Unfortunately your comment is doing the same thing, just at a different level—something like this:

- I am a thoughtful technologist, building real things for real people, concerned about others and the social impact of my work;

- they are greedy and ignorant, destroying society for short-term personal gain, no matter what the consequences.

It's human nature to put badness on an abstract them, but we don't get anywhere that way. It's good for getting agreement (e.g. upvotes), because we all put ourselves in that sweet I bucket and participate in the down-with-them feeling. But it only leads to more of what everyone decries.

  • First off, no, it did absolutely not do the same thing. It was a polemic question, sure, but it was a specific criticism of a technology and its proponents.

    I did not make any claims about myself at all, until I was separately accused of being something or other by someone projecting onto me whatever it was they needed to feel better about themselves.

Second, you have rate-limited me with the "posting too fast" thing so I couldn't reply to your comment or the other ad hominem, even though I was posting at a rate no faster than in the discussions about OpenSCAD and FreeCAD I had been involved with earlier (considerably slower, I would say).

    It's IMO really classless to use your administrative privileges to silence people after you accuse them of something but before they can respond, but I am not surprised to see that.

I will repeat: I think it is really clear to me, and to everyone I have met outside this bubble, that there is no fine distinction to be drawn between content-generating AI projects that are "good" and those that are contributing to "slop". It's all slop-generation; e.g. NotebookLM is no better or cleverer than Midjourney.

    Every tool HNers are excited about is going to be used to make the world's culture, and the web, worse.

    I'd encourage you and those reading to consider this.

    Sure, you can't make much of a change by yourself. But you don't have to be part of what amounts to inflicting automated cultural vandalism on an unprecedented scale.

    Goodbye.

Sure but doesn't every technological development have these tradeoffs?

You could say what you say about anyone at any time. Where do you draw the line? I guarantee you'll be guilty of the exact same thing. I don't want to generalize, but IMO this sentiment of yours, I hear most loudly from software engineers far removed from ordinary non-technical end users: is making beautiful new LISPs and CNIs and Python package auditing tools the only valid work with seemingly no tradeoffs?

  • > I hear most loudly from software engineers far removed from ordinary non-technical end users

    I am absolutely not far removed from non-technical end users. They are my client base, ultimately. As a freelancer I focus on building real things that make things better for people whose faces and voices I get to know. GenAI will be useless to them, because it is antithetical to what they do.

    And that focus is only getting keener; I want nothing to do with the AI-generated web.

    • > They are my client base, ultimately... I focus on building real things that make things better for people... faces and voices I get to know.

      So what I'm hearing is, "I agree very strongly with the people who pay me." Or to put it in your words:

      "MY product is a powerful tool for creators who wish to save time."

      "THEIR product is just a poorly-thought-out slop generator"

  • The problem with this line of reasoning is that things can get steadily worse and you'll never be allowed to say or do anything about it.

    No, everything is not the same as everything else.

  • Every technical advancement has tradeoffs. Not every technical advancement has billions of dollars sloshing around doing absolutely nothing except making the web worse and further ruining the environment. What a shockingly bad-faith way to interpret GP's argument, wow.

    • The comment is an interesting but very cookie cutter sort of vamp and drama. The comment trades in a bunch of generalization, much like yours, and you know, generalization doesn't feel good when it directly attacks you.

      I don't sincerely believe that people who are working on Kubernetes features or observability tools are bad people. Do high drama personalities who engage in a mode of discourse of "wow" and "shockingly" say valid things too? Yeah. But it's as simple as, log in your own eye before you worry about the thorns in others. Exceptionally ironic because the poster is vamping about "Attribution errors." Another POV is, shysters project.

There's a sort of "technological fundamental attribution error" that comes into play a lot with new technologies. Every past technology has, whatever its benefits to humanity, become substantially tarnished by abuse and malicious use. But this one won't be! Promise!

That said, I don't really think this is a tide any individual market actor can reasonably stem. It's going to require some pretty fundamental changes in the way we use the internet.

I propose a new rule. "Please respond to the actual actions and consequences of said actions, not to what is said in a statement to generate positive PR. Assume that putting one's money where one's mouth is, is harder to do than simply blowing hot air about creating a private, ethical platform."

Sick and tired of giving parasites benefit of the doubt they've long sucked dry.

Are you saying AI isn’t useful? My product is painstakingly crafted and uses AI but in my opinion it uses it tastefully and with great utility. Also 95%+ of my development efforts are not on improving the AI even though I use a .ai TLD. I think it’s crazy for a modern company/product _not_ to use AI, and the grifters building clear wrappers for GPT and other insanely low-quality efforts are already pretty much dead.

  • > Are you saying AI isn’t useful? My product is painstakingly crafted and uses AI but in my opinion it uses it tastefully and with great utility.

    Sure. And THEIR products are just thoughtless slop generators.