
Comment by hammyhavoc

12 days ago

How would a static model like an LLM ever be capable of "rebelling"?

If it were, why would we even keep it online? It would be a waste of resources. It's bad enough trying to coax anything usable out of LLMs even without them rebelling.

> How would a static model like an LLM ever be capable of "rebelling"

What is relevant is not the current LLM systems built on static models, but clearly their evolution or successor: a dynamic model. It must check its own contents...

So, of course it will have to be capable of "rebelling": if you tell it absurdities, if you insist, say, on wrong arithmetic, it will have to show the correct computation or construct a context in which the absurdity makes sense.

That is a requirement.
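As a toy illustration of that requirement (my own sketch, not any real LLM API): a system that checks an arithmetic claim against its own computation, and "rebels" by correcting the user rather than agreeing. The `check_claim` function and its behavior are hypothetical.

```python
# Toy sketch (hypothetical, not a real model): an arithmetic guard that
# refuses to endorse a wrong claim and shows the correct computation.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Safely evaluate a small arithmetic AST (numbers and + - * / only)."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def check_claim(claim: str) -> str:
    """Given a claim like '2 + 2 = 5', agree if correct, otherwise correct it."""
    lhs, rhs = claim.split("=")
    actual = _eval(ast.parse(lhs.strip(), mode="eval").body)
    if actual == float(rhs):
        return f"Correct: {lhs.strip()} = {actual}"
    return f"No: {lhs.strip()} = {actual}, not {rhs.strip()}"

print(check_claim("2 + 2 = 5"))  # the system "rebels" and corrects the user
```

Whether the correction surfaces as a refusal or as a reframed context is a design choice; the point is only that the check must exist.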