
Comment by specialist

4 years ago

I really like your observation about memory.

Because you seem open-minded to wild-ass guesses and going meta:

I have a hunch that general intelligence will be the ability to learn from mistakes. Not just optimization. I mean applying the scientific method.

Hypothesis, prediction, run experiment, compare expected vs actual. And having a notion, any notion, to explain the delta between expected and actual.

I'm a total noob about AI, philosophy, and cognition. I don't know if anyone else is framing AGI this way. I could just be repeating something I heard.

It's deeper than that.

Currently, there's no research into torturing AI. Why not?

A pain response is universal across most life forms with a nervous system. We seek to replicate a nervous system. Pain would seem to be far easier to replicate than the scientific method.

My wife sat me down and told me a story that horrified me. She had to get it off her chest, and I was sadder that it happened to her than to me. She was sitting around on the porch, felt something on her leg, and brushed it off. When she got up and looked down, she saw she had stepped on a poor snail. His shell was... And he was...

He wasn't dead. So she frantically looked up what to do. But there was nothing to do. Snails in that situation can't be helped, and the most humane thing is to put it out of its writhing anguish, its full-body torture.

She put on some boots, took it out to the sidewalk, and stomped it as hard as she could. And that was the story of that snail.

You probably felt more for that snail than you've ever felt for any AI bot. Why?

It's worth considering.