Augmenting biological intelligence with RL in C.elegans using optogenetics [pdf]

10 days ago (klab.tch.harvard.edu)

Okay here are some background facts:

1. The worm is 1 mm long and a few hundred microns across.

2. It has a fully characterized nervous system: 302 well-defined neurons that have been mapped and named in exquisite detail. We even know which neurons connect to which, and have for decades. We still can't tell how they function, though.

3. The authors express optogenetic proteins in various sets of neurons. Line 1, for example, expresses them in 8 neurons known to play a role in chemotaxis (movement toward food).

4. They just shine a flash of light on the worm, which activates all of these neurons together. This is probably still about as blunt as putting an electrode into some rough area of a mouse or monkey brain and zapping it. I was hoping for slightly more precise control.

I’ve been waiting for optogenetics to come to worms, mainly because they’re about the cheapest and easiest creatures to work with, and I’ve always believed that understanding the simplest organism first is the right way to understand any complex system.

  • Thanks for taking the time, very interesting! So the headline really isn’t clickbait. That’s… that’s incredible. I thought the models were optimizing some pre-birth (hatching?) interventions, but from your comment and the very first of the many figures, it sounds like the models were intervening in real time. This is the world’s first true digital cyborg, no? At least on a cognitive level rather than muscular/sensory/etc.?

      We presented a hybrid system that used deep RL to interact with an animal's nervous system to achieve a task following a reward signal. Agents customized themselves to specific and diverse sites of neural integration, and the combined system retained the animal's ability to flexibly integrate information in new environments.
    

    FWIW the authors’ implications at the beginning and end about applying this to large mammals for its own sake seem dubiously motivated at best, and like something we should never encourage doing to other people. Literally hooking your brain up to remote processing would presumably mean you could no longer tell which parts of your subconscious perceptions are due to your personality, perspectives, and biases, and which parts are artificial… I’m not sure I could think of a closer technological parallel to “selling one’s soul to the devil”. You’d be consigning yourself to (potential? inevitable?) ego death in the long term.

    But obv those parts are included for rhetorical purposes, and likely encouraged by the powers that be (journal editors?) to generate interest among laymen and professionals both. So I’m absolutely not hating the players, here! This will no doubt be a critical tool for all sorts of studies going forward, though TBH I’m blanking on how such studies would be designed to get useful info from this. I guess “lesion organoid X, replace it with an ML network, train 1,000 versions, and see what kinds of networks end up best replicating nominal behavior”?

    • > You’d be consigning yourself to (potential? Inevitable?) ego death, in the long term.

      I have some bad news for you. We are all consigned to inevitable death in the long term. That involves ego death too.

      > I’m not sure I could think of a closer technological parallel to “selling one’s soul to the devil”

      You are projecting your values here. There is nothing inherently "selling" or "devilish" about a procedure like this. It is a tool: it can be seriously evil, or it can be beneficial. What matters is how it is used, who is in control of it, and whose goals it serves.


    • There’s a Greg Egan short story called “Learning to Be Me”, in which it is customary to have a crystal that learns to copy your brain activity in the prime of your life; the meat brain is then scooped out before it starts to decline with age. From that point on, you exist inside the crystal instead.

  • Could you add more scientific information about optogenetics and/or LLLT (low-level laser therapy)? Skimming over [0] now.

    I have some LLLT devices [1] and am interested in the topic. Regarding LLLT, I saw good temporary changes in people with dementia (sample size = 2). I am not a doctor, so I can’t offer anything more precise than close observation. Novak Djokovic uses LLLT [2], though.

    [0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9063588/

    [1] https://weberlasersystems.com/

    [2] https://doctorjkrausend.com/ep-423-the-taopatch-tennis-star-...

  • I seriously wish I could find who works on this kind of biological reverse engineering.

  • With 302 neurons, it almost feels like you should be able to express it in each neuron individually, with a simple encoding to fire specific ones.

    • The problem is that even one individual neuron is really complicated. The model we imagine of how a neuron works is very simplified. Useful for some things, but not for simulating a worm's brain.

    • We know the connections and the neurons, but we don’t know the connection strengths or the exact neurotransmitter mixes that act on them. That is just the bare minimum information we would need to even try to simulate the system, and we don’t have it.
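A toy sketch of that point (all numbers invented; this is neither the real connectome nor anyone's published model): even holding the wiring diagram fixed, two different guesses for the unknown synaptic weights and signs drive a simple rate model to qualitatively different states.

```python
# Toy illustration: a connectome tells us WHICH neurons connect (adjacency),
# but not the weight or sign (excitatory/inhibitory) of each synapse.
# Any simulation must guess those, and different guesses diverge.
# All numbers here are invented for illustration.
import math
import random

N = 302  # C. elegans neuron count; the wiring below is random, NOT the real map
rng = random.Random(0)
adjacency = [[rng.random() < 0.05 for _ in range(N)] for _ in range(N)]

def simulate(weight_guess, steps=20):
    """Run a simple rate model; weight_guess(i, j) fills in what we don't know."""
    init = random.Random(1)                       # same starting rates each run
    rate = [init.random() for _ in range(N)]
    for _ in range(steps):
        rate = [
            math.tanh(sum(weight_guess(i, j) * rate[j]
                          for j in range(N) if adjacency[i][j]))
            for i in range(N)
        ]
    return rate

# Two weight guesses equally consistent with the same connectome:
all_excitatory = simulate(lambda i, j: 0.5)                        # every synapse +
mixed_signs = simulate(lambda i, j: 0.5 if (i + j) % 2 else -0.5)  # half inhibitory
```

Identical wiring and identical starting rates, yet the two runs settle into different activity patterns — which is why the connectome alone doesn't determine the dynamics.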

  • They have already done optogenetics on non-human primates.

    https://www.frontiersin.org/articles/10.3389/fnana.2023.1193...

    However, I think those methods of triggering opsins are rather invasive (implants). I don't think it would be hard to design light-activated opsins that receive frequencies in the optical window of skin/bone. If I remember correctly, the human body is somewhat resistant to genetic transfection, so we have that going for us, which is great.

    There is an optical window for skin in the near infrared band.

    https://en.wikipedia.org/wiki/Near-infrared_window_in_biolog...

    Bone optical windows are being explored, though there is some interference from water.

    https://ui.adsabs.harvard.edu/abs/2016SPIE.9689E..4JS/abstra...

    Better watch what you put in your body /s

There are some neat videos and images in the repo: https://github.com/ccli3896/RLWorms

Basically they took a worm and hijacked its genetic machinery so that a subset of neurons could be activated or inhibited via a light-sensitive protein.

Then they hooked a deep RL algorithm up to a camera, and it learned how to control the lights to direct the worm toward a goal like food.

The learned agents were able to navigate obstacles and generally enhanced the worm's capabilities while coexisting with the remaining uncontrolled neurons.
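As a rough sketch of that closed loop (a toy model with invented dynamics and parameters, not the authors' setup — the paper uses deep RL on a live animal): a tabular Q-learning agent reads a simulated "camera" measurement of the worm's distance to a target and learns when to switch the light on, where "light on" biases the toy worm toward the target, standing in for activating chemotaxis neurons.

```python
# Toy closed-loop sketch: camera -> RL agent -> light -> worm.
# Everything below is an invented stand-in to show the control-loop shape.
import random

class ToyWorm:
    """1-D worm that wanders; light biases it toward the target (food)."""
    def __init__(self, pos=10.0, target=0.0, seed=0):
        self.pos, self.target = pos, target
        self.rng = random.Random(seed)

    def step(self, light_on):
        drift = self.rng.uniform(-1.0, 1.0)           # undirected wandering
        if light_on:                                  # "chemotaxis neurons" active
            drift += -0.8 if self.pos > self.target else 0.8
        self.pos += drift
        return abs(self.pos - self.target)            # what the "camera" reports

def train(episodes=300, steps=50, eps=0.1, alpha=0.5, gamma=0.9):
    q = {}                                # (distance bucket, light action) -> value
    rng = random.Random(1)
    for ep in range(episodes):
        worm = ToyWorm(seed=ep)
        dist = abs(worm.pos - worm.target)
        for _ in range(steps):
            s = min(int(dist), 15)                    # discretized camera reading
            if rng.random() < eps:
                a = rng.randint(0, 1)                 # explore
            else:
                a = max((0, 1), key=lambda x: q.get((s, x), 0.0))
            new_dist = worm.step(bool(a))
            r = dist - new_dist                       # reward: progress toward food
            s2 = min(int(new_dist), 15)
            best_next = max(q.get((s2, 0), 0.0), q.get((s2, 1), 0.0))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            dist = new_dist
    return q

def evaluate(q, policy, trials=20, steps=50):
    """Average final distance to target under a given light policy."""
    total = 0.0
    for t in range(trials):
        worm = ToyWorm(seed=1000 + t)
        dist = abs(worm.pos - worm.target)
        for _ in range(steps):
            dist = worm.step(bool(policy(q, min(int(dist), 15))))
        total += dist
    return total / trials
```

With the trained table, a greedy policy (pick the higher-valued action for each distance bucket) steers the toy worm far closer to the target than leaving the light off — the paper's basic claim in miniature, minus the biology.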