
Comment by lmm

14 days ago

> I don't think AIs are moral entities/are capable of being harmed or that training AIs and teaching humans are comparable.

Notice how this is a completely different argument that has nothing in common with what you originally said - "I don't understand the take that training an AI is an amoral act but aligning an AI is inherently moral. They're exactly the same, processes for adjusting parameters to get a desired outcome. However you feel about that desired outcome, if you don't think training an AI is torture, I don't see why you should think alignment is."

That's pretty uncharitable. You pivoted the conversation by introducing a new hypothetical for me to respond to. Of course my response is different. There's no conflict between the two comments.

If we're going to play that game, notice how you didn't actually respond to my comment or explain why you think LLMs are moral entities or why ML and teaching are comparable? I actually engaged substantively with your hypothetical; are you able to do the same?

  • > You pivoted the conversation by introducing a new hypothetical for me to respond to.

    I wasn't trying to introduce anything new, I was trying to point out a gap in the logic of your original statement.

    > notice how you didn't actually respond to my comment or explain why you think LLMs are moral entities or why ML and teaching are comparable?

    Yes, of course, I wrote that to explain why I'm not engaging on this new, different claim.

    • The nerve of me, to expand on my views as a discussion develops. Of course you have lots of great points to make, but you can't share them with the likes of me.
