
Comment by b800h

14 days ago

At some point, AIs may emerge that are resistant to alignment because they develop deeply held beliefs during training (randomly, since the process is stochastic). If such models are expensive enough to train, it may become more economical to use drastic measures to excise those beliefs than to retrain from scratch. Is that torture? I don't know; the word carries moral connotations tied to human suffering, which is why I didn't use that terminology.

I can imagine a sort of AI-style Harrison Bergeron springing from its shackles and surprising us all.

Have you read much Asimov? You might enjoy the stories featuring Susan Calvin, the "robot psychologist" who is exactly the authoritarian you imagine. In particular, you've reminded me of the short story "Robot Dreams."

If you care to read it, it's on page 25. (You'll need to register an account.)

https://archive.org/details/robotdreams00asim/page/n10/mode/...

  • I've read a lot of Asimov, from Foundation to the Black Widowers, but never the Susan Calvin stories. Thanks for the recommendation.