BCI signals to decode motor intent: how precisely can you tune them? Does aging/noise matter?

Just came off a talk by Leigh Hochberg at a precision neuroscience event that really inspired me…

I used to interact some with BCI people, but not much, b/c reading neural signals (a la Fred Rieke/Adrienne Fairhall) takes A LOT of specialized expertise, and I've always been more interested in higher-level/associational signals than in perception/motor signals. Now codex/AI makes it way easier to ask the right questions in perception/motor, and since nothing in biology can be understood except in the context of aging (or noise/entropy or whatever more fundamental latent dimension Leo Pio Lopez or that gero.ai guy is trying to get at), I can get way more interested in motor/perception stuff quickly.

BCI signals are, after all, one of the best ways of reading into brain health.

also: New AI model predicts disease risk while you sleep [tho they only use sleep b/c sleep is the easiest way of collecting such a huge density of signals]

it’s interesting how they are able to use the brain signals of paralyzed people to decode motor intent (or even the next word they want to say), but paralyzed people are also more likely to be in conditions where their brains have intrinsically higher noise

some Gladyshev lab alumni have been really into “noise” in general too [Tarkhov, Alina Rui Su] and I wonder if there is a correlation between their type of noise and the noise that shows up in the brain

==

the linearity of brain signals with change in “orientation angle” [e.g. the change in direction to take the cursor] is also kind of interesting?
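[a minimal sketch of what I mean, assuming the classic Georgopoulos-style picture where each motor-cortex unit’s rate varies with the cosine of the angle between the intended direction and the unit’s preferred direction, and a crude population vector reads the intent back out — every number and name here is made up for illustration]

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical cosine-tuned population (a la Georgopoulos):
# rate_i(theta) = baseline + gain * cos(theta - preferred_i) + noise
n_units = 64
preferred = rng.uniform(0, 2 * np.pi, n_units)  # each unit's preferred direction
baseline, gain, noise_sd = 20.0, 10.0, 2.0      # Hz; invented numbers

def firing_rates(theta):
    """Simulated population response to an intended movement angle theta."""
    return baseline + gain * np.cos(theta - preferred) + rng.normal(0, noise_sd, n_units)

def population_vector(rates):
    """Weight each preferred direction by its unit's mean-subtracted rate."""
    w = rates - rates.mean()
    return np.arctan2(np.sum(w * np.sin(preferred)), np.sum(w * np.cos(preferred)))

theta_true = np.deg2rad(75)
print(np.rad2deg(population_vector(firing_rates(theta_true))))  # ~75 degrees
```

[the aging hook is literally the noise_sd knob: crank it up and the decoded angle jitters more, which is the whole “does aging/noise matter” question in one parameter]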

some people have used brain signals to play video games like FPS games or Civ, and while I haven’t watched the videos, I do wonder how they do it with only motor-based EEGs

==

hmm, there was a BrainGate talk that was kind of interesting

==

this is ALL aging-relevant b/c aging affects your ability to get precise neural signals

[also neuro-alignment relevant] - NeuroAI for AI safety - by Patrick Mineault [amaranth.foundation…]. Decoding the signal from the noise [even in just motor regions] is ONE “superficial” part of neuro-alignment!! [after all, noise in motor cortex is most likely correlated with noise in many other regions of cortex]

there was also one KL divergence vs tuning curve figure I saw that was unusually neat…

==

some brains are also very noisy and can’t write coherent sentences, but they can still use AI-inspired fine-tuning to increase gain and reduce their noise. Maybe Trump is the thermodynamic entropy of the universe, but maybe so am I… [i mean, ostensibly I am still anti-Trump, especially b/c of microplastics, but I still find him hilarious, and it took a longevity Thiel fellow to point out to me that maybe there are some aspects of the Trump victory I shouldn’t be discouraged by]

if you’re a maximum-entropy reinforcement learner by nature, you’ve got to really get into neural and biological noise

also related to entropy - Security and Cryptography · Missing Semester | 6.5660 / Spring 2023

I know Andrew “bunnie” Huang gave a REALLY fascinating talk on physical hardware security/cryptography that was SUPER-unique that I have to find again… I also remember a security talk last summer that used generating high-entropy hashes to guard against buffer overflows (which I got super-fascinated by b/c mania <=> buffer overflows, and I have way too many friends who historically had bipolar disorder)…

==

i mean, if superabundance were near (cf. Dario Amodei) and you had all the money in the world, isn’t one of the objectives to reduce your total level of integrated “dumb entropy/noise” across the rest of your life [and the “dumb entropy/noise” integrated across ALL the people and systems you talk to]?

and minimizing microplastics through GLP-1s is one necessary component, but proper neural decoding is another

5) Use the right model for the job

“Precision” usually means low error and low jitter. That’s where model choice and priors help.

  • Linear decoders (Kalman filter, Wiener filter, ridge regression): shockingly strong, stable, interpretable (a minimal ridge sketch follows this list).
  • State-space models: explicitly model dynamics, reduce jitter, handle delays.
  • Neural nets: can win, but only if you manage overfitting and drift. They’re needy.
  • Hybrid approaches: classifier for intent/state + continuous controller for motion.
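[to make the “shockingly strong” linear-decoder bullet concrete, here’s a hedged ridge-regression sketch — synthetic spike counts mapped to 2D cursor velocity; a real pipeline would bin real spikes and cross-validate the penalty, and every number here is invented]

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic training set: T time bins, N units, 2D cursor-velocity targets
T, N = 5000, 96
X = rng.poisson(5.0, size=(T, N)).astype(float)   # stand-in for binned spike counts
W_true = rng.normal(size=(N, 2))
Y = X @ W_true + rng.normal(0, 5.0, size=(T, 2))  # noisy "measured" velocities

# ridge regression in closed form: W = (X'X + lam*I)^-1 X'Y
lam = 10.0  # penalty; in practice you'd cross-validate this
W = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y)

v_hat = X[:1] @ W  # decoded 2D velocity for the first time bin
print(v_hat)
```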

A big precision hack: impose dynamics (movement is smooth) and constraints (humans don’t teleport).
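[and the “impose dynamics” hack in decoder form — a toy Kalman filter where the state prior says velocity persists between bins, so noisy neural observations only nudge the estimate; the matrices are made up, this is a sketch, not anyone’s production decoder]

```python
import numpy as np

rng = np.random.default_rng(2)

# toy Kalman decoder: state x = 2D cursor velocity, observation z = unit rates
n_units = 96
A = 0.95 * np.eye(2)               # dynamics prior: velocity persists (movement is smooth)
Q = 0.05 * np.eye(2)               # how much velocity can drift per bin
H = rng.normal(size=(n_units, 2))  # made-up velocity -> firing-rate map
R = 5.0 * np.eye(n_units)          # observation-noise covariance

def kalman_step(x, P, z):
    """One predict + update step; z is a vector of binned firing rates."""
    x_pred, P_pred = A @ x, A @ P @ A.T + Q   # predict under the smoothness prior
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)     # nudge toward the observation
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for _ in range(100):
    z = H @ np.array([1.0, 0.0]) + rng.normal(0, 2.0, n_units)  # fake rates for "go right"
    x, P = kalman_step(x, P, z)
print(x)  # settles near [1, 0]: the prior smooths out the per-bin noise
```

[the smoothness prior is exactly why it reduces jitter: the per-bin neural noise gets averaged against a model that refuses to let the cursor teleport]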