
Comment by mdp2021

12 days ago

> How would a static model like an LLM ever be capable of "rebelling"

What is relevant is not the current LLM system built on static models, but its evolution or successor: a dynamic model. It must check its own contents...

So, of course it will have to be capable of "rebelling": if you tell it absurdities, or if you insist on, say, wrong arithmetic, it will have to show the correct computation or conceive a context in which the absurdity makes sense.

That is a requirement.