Comment by VladimirGolovin

4 hours ago

I wonder: is an electronic system capable of doing anti-entropy work on itself (the way life does) necessarily AGI-complete? After all, many complex behaviors (like drawing or generating sensible text) have turned out not to require AGI-completeness.

(I stumbled upon the answer while formulating the question – no, being capable of anti-entropy self-maintenance work isn't AGI-complete, because plenty of life is perfectly capable of it without being generally intelligent.)