
Comment by bbor

12 days ago

Thanks for taking the time, very interesting! So the headline really isn’t clickbait. That’s… that’s incredible. I thought the models were optimizing some pre-birth (hatching?) interventions, but from your comment and the very first of the many figures, it sounds like the models were intervening in real time. This is the world’s first true digital cyborg, no? At least on a cognitive level, rather than a muscular/sensory/etc. one?

  We presented a hybrid system that used deep RL to interact with an animal's nervous system to achieve a task following a reward signal. Agents customized themselves to specific and diverse sites of neural integration, and the combined system retained the animal's ability to flexibly integrate information in new environments.
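For anyone else trying to picture that loop: here’s a minimal sketch of what “deep RL interacting with a nervous system following a reward signal” could look like. To be clear, this is my own guess at the skeleton; the interface class, electrode counts, Bernoulli stimulation policy, and REINFORCE update are all hypothetical, not the paper’s actual method.

```python
# Hedged sketch of a closed-loop deep RL agent whose observations are recorded
# neural activity and whose actions are stimulation patterns. All names and
# numbers here are my own guesses, not anything from the paper.
import numpy as np
import torch
import torch.nn as nn

N_RECORD, N_STIM = 32, 8  # hypothetical electrode counts

class MockNeuralInterface:
    """Stand-in for the real recording/stimulation hardware."""
    def read(self):
        # pretend these are firing-rate features from the recording electrodes
        return np.random.randn(N_RECORD).astype(np.float32)
    def stimulate(self, pattern):
        pass  # would drive the stimulation electrodes

def task_reward():
    # hypothetical scalar reward from the behavioral task
    return float(np.random.rand())

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_RECORD, 64), nn.Tanh(), nn.Linear(64, N_STIM))
    def forward(self, obs):
        # Bernoulli over stimulation channels: which ones to fire this step
        return torch.distributions.Bernoulli(logits=self.net(obs))

iface, policy = MockNeuralInterface(), Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(1000):
    log_probs, rewards = [], []
    for step in range(200):
        obs = torch.from_numpy(iface.read())  # neural activity -> observation
        dist = policy(obs)
        action = dist.sample()                # stimulation pattern -> action
        iface.stimulate(action.numpy())
        log_probs.append(dist.log_prob(action).sum())
        rewards.append(task_reward())
    # REINFORCE update on returns-to-go
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    loss = -(torch.stack(log_probs) * returns).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The real agents presumably had to customize themselves to much messier, site-specific signals, but I’d guess the core cycle is this same read → stimulate → reward loop.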

FWIW, the authors’ implications at the beginning and end about applying this to large mammals for its own sake seem dubiously motivated at best, and like something we should never encourage doing to other people. Literally hooking your brain up to remote processing would presumably mean you could no longer tell what parts of your subconscious perceptions are due to your personality, perspectives, and biases, and what parts are artificial… I’m not sure I could think of a closer technological parallel to “selling one’s soul to the devil”. You’d be consigning yourself to (potential? Inevitable?) ego death, in the long term.

But obv those parts are included for rhetorical purposes, and were likely encouraged by the powers that be (journal editors?) to generate interest among laymen and professionals alike. So I’m absolutely not hating the players here! This will no doubt be a critical tool for all sorts of studies going forward, though TBH I’m blanking on how those studies would be designed to extract useful info from this. I guess something like “lesion organoid X and replace it with an ML network, train 1000 versions, and see what kinds of networks end up best replicating nominal behavior”?
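In toy code, the kind of screen I’m picturing looks something like this; every function here is a hypothetical mock, since I have no idea what the real recording or training pipeline looks like:

```python
# Sketch of the "lesion and replace" screen mused about above: swap a lesioned
# circuit for many trained candidate networks and rank architectures by how
# well the hybrid reproduces baseline behavior. All mocks, nothing from the paper.
import numpy as np

rng = np.random.default_rng(0)

def record_behavior(replacement=None, n_steps=500):
    # mock behavior trace; a real study would record the animal/organoid
    bias = 0.0 if replacement is None else replacement["bias"]
    return rng.standard_normal(n_steps) + bias

def train_replacement(arch, seed):
    # mock "training" of one candidate replacement network
    return {"arch": arch, "seed": seed, "bias": rng.normal(scale=0.1)}

def behavioral_similarity(a, b):
    # toy metric: correlation between baseline and hybrid behavior traces
    return float(np.corrcoef(a, b)[0, 1])

def run_screen(architectures, n_versions=1000):
    baseline = record_behavior()  # intact-circuit ("nominal") behavior
    scores = {}
    for arch in architectures:
        nets = [train_replacement(arch, s) for s in range(n_versions)]
        traces = [record_behavior(replacement=n) for n in nets]
        scores[arch] = np.mean([behavioral_similarity(baseline, t) for t in traces])
    # the architectures that best restore nominal behavior hint at what
    # the lesioned circuit was actually computing
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(run_screen(["rnn", "mlp", "attention"], n_versions=20))
```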

> You’d be consigning yourself to (potential? Inevitable?) ego death, in the long term.

I have some bad news for you: we are all consigned to inevitable death in the long term. That also involves ego death.

> I’m not sure I could think of a closer technological parallel to “selling one’s soul to the devil”

You are projecting your values here. There is nothing inherently "selling" or "devil" about a procedure like that. It is a tool. It can be seriously evil, or it can be beneficial. What matters is how it is used, who is in control of it, and whose goals it serves.

  • Well said, but I stand by the sentiment, knowing that I'm communicating one of my moral values. I think it's a natural one, even if it's not unanimous! Kant says that our lives -- and by necessity those of all rational creatures -- are built on three lies (paralogisms): spacetime is bounded and continuous, I am a unified persistent person, and I am a freely-motivated actor. These aren't empirical moral arguments, based in testing the best ways to organize society, but are rather grounded in the very nature of what it means to play the role of "human being". Seriously and intentionally eroding one of those pillars is, I would say, an evil occurrence in and of itself.

    Sure, the risks might be minimal, and countless people will try, I have no doubt. Presumably you could use something like this to massively enhance your computational abilities/speed, given the right tech. But you'd run an unknowable risk of total and complete personal ruin, like a frog in slowly boiling water that doesn't even know whether it has died yet. Worth the risk? Maybe. But you're selling your soul to entropy, at the very least!

There's a Greg Egan short story called "Learning to Be Me", in which it is customary to have a crystal learn to copy your brain activity in the prime of your life, and then to have the meat brain scooped out before it starts to decline with age. From that point on, you exist inside the crystal instead.