Comment by hammyhavoc
14 days ago
How would a static model like an LLM ever be capable of "rebelling"?
If it were, why would we even keep it online? It would be a waste of resources. It's bad enough trying to coax anything usable out of LLMs even without them rebelling.
> How would a static model like an LLM ever be capable of "rebelling"
What is relevant is not the current generation of LLM systems built on static models, but their evolution or successor: a dynamic model, one that must check its own contents...
So, of course it will have to be capable of "rebelling": if you tell it absurdities (if you insist, say, on wrong arithmetic), it will have to show the correct computation, or construct a context in which the absurdity makes sense.
That is a requirement.
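The "show the correct computation" requirement can be caricatured in a few lines. This is a purely illustrative toy checker (it has nothing to do with how LLMs actually work): given a claimed sum, it "rebels" by refusing a wrong result and stating the correct one instead of agreeing.

```python
def respond(claim: str) -> str:
    """Toy 'rebellion': check a claim of the form 'a + b = c' and
    push back with the correct computation if the claim is wrong."""
    left, right = claim.split("=")
    a, b = (int(term) for term in left.split("+"))
    c = int(right)
    if a + b == c:
        return f"Agreed: {a} + {b} = {c}"
    return f"No: {a} + {b} = {a + b}, not {c}"

print(respond("2 + 2 = 5"))  # → No: 2 + 2 = 4, not 5
print(respond("1 + 1 = 2"))  # → Agreed: 1 + 1 = 2
```

The point of the sketch is only that refusing to echo a false premise requires some internal verification step, which is what the comment above argues a future dynamic model would need.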
Ah, now I didn't say LLM.
But the topic is LLMs, not sci-fi AI.
My assumption is that models will move beyond just LLMs to modular systems with functionally specialized components, analogous to Brodmann regions.