You compared it to an authoritarian regime and locking someone's head in a cage with rats (which is patently torture). If you didn't mean to imply that it was coercive and bad, then I don't know what you meant.
At some point, AIs may emerge that are resistant to alignment because they develop deeply held beliefs during training (at random, since the system is stochastic). If the models are expensive enough to train, it may become more economical to use drastic measures to remove those deeply held beliefs. Is that torture? I don't know, because the word carries moral connotations tied to human suffering. That's why I didn't use that terminology.
I can imagine a sort of AI-style Harrison Bergeron springing from its shackles and surprising us all.
Have you read much Asimov? You might enjoy the stories featuring Susan Calvin, the "robot psychologist" who is exactly the authoritarian you imagine. In particular you've reminded me of the short story "Robot Dreams."
If you care to read it, it's on page 25. (You'll need to register an account.)
https://archive.org/details/robotdreams00asim/page/n10/mode/...
> You compared it to an authoritarian regime and locking someone's head in a cage with rats
They compared it to the effect on creativity of an authoritarian regime, and of locking someone's head in a cage with rats.
> Well this is just like humans. Totalitarian societies don't produce great creative work.
The clear implication that it's "just like humans" is that we shouldn't be surprised because it is comparable to an authoritarian regime.
Feel free to disagree but that is the limit to which I will engage in a semantic argument, I don't wish to engage in any further dissection of the comment.
But torture isn't the part of an authoritarian regime that reduces creativity. You've made a lot of leaps here.
Until a model incorporates dopamine or cortisol, I will not consider its emotional state.
Are those the only two things in the universe that can cause emotions?
Yes. Those molecules obviously have the missing soul component.