Comment by maxbond

14 days ago

I guess our disagreement here is that I don't think AIs are moral entities/are capable of being harmed or that training AIs and teaching humans are comparable. Being abusive to pupils isn't wrong because of something fundamental across natural and machine learning, it's wrong because it's harmful to the pupils. In what way is it possible to harm an LLM?

Writing a book with content you know to be false for political reasons is morally wrong. Even if nobody reads it.

It'd be bad if I manipulated climate change statistics in my meteorology textbook to satisfy the political preferences of the oil industry donors to my university, for example.

Viewing the current generation of LLMs as 'intelligent books' is perhaps more accurate than viewing them as pupils.

It's easy to extend my example of a professor writing a meteorology textbook to a professor fine-tuning a meteorology LLM.

> I don't think AIs are moral entities/are capable of being harmed or that training AIs and teaching humans are comparable.

Notice how this is a completely different argument that has nothing in common with what you originally said: "I don't understand the take that training an AI is an amoral act but aligning an AI is inherently moral. They're exactly the same, processes for adjusting parameters to get a desired outcome. However you feel about that desired outcome, if you don't think training an AI is torture, I don't see why you should think alignment is."

  • That's pretty uncharitable. You pivoted the conversation by introducing a new hypothetical for me to respond to. Of course my response is different. There's no conflict between the two comments.

    If we're going to play that game, notice how you didn't actually respond to my comment or explain why you thought LLMs were moral entities or why ML and teaching were comparable? I actually engaged substantively with your hypothetical; are you able to do the same?

    • > You pivoted the conversation by introducing a new hypothetical for me to respond to.

      I wasn't trying to introduce anything new; I was trying to point out a gap in the logic of your original statement.

      > notice how you didn't actually respond to my comment or explain why you thought LLMs were moral entities or why ML and teaching were comparable?

      Yes, of course; I wrote that to explain why I'm not engaging with this new, different claim.