The brain-machine interface (BMI) is an emerging technology that directly translates neural activity into control signals for effectors such as computers, prosthetics, or even muscles. Work over the last decade has shown that high-performance BMIs depend not only on machine learning to adapt the parameters used to decode neural activity, but also on the brain learning to reliably produce the desired neural activity patterns. How the brain learns neuroprosthetic skill de novo is not well understood, and answering this question could inform the design of next-generation BMIs in which brain and machine adapt synergistically.

During both neuroprosthetic and natural motor skill learning, movements and the underlying neural activity initially exhibit large trial-to-trial variability, which decreases over training, resulting in consolidated movement and neural patterns. However, it is unclear how task-relevant neural populations coordinate to explore and consolidate the activity patterns underlying behavioral improvement. Exploration and consolidation could happen for each neuron independently, across the population jointly, or both. We disambiguated among these possibilities by investigating how subjects learned de novo to control a brain-machine interface using a fixed motor cortex population. We decomposed population activity into the sum of private and shared signals, which produce uncorrelated and correlated neural variance respectively, and examined how the evolution of these signals causally shapes behavior. We found that initially large trial-to-trial movement variability and private neural variability decrease over learning. Concomitantly, task-relevant shared variance increases, consolidating a manifold containing consistent neural trajectories that generate refined control. These results suggest that motor cortex acquires skillful control by leveraging both independent and coordinated variance to explore and consolidate neural patterns.
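The private/shared decomposition described above is commonly implemented with factor analysis, which models the population covariance as a low-rank shared component plus a diagonal private component. The following is a minimal illustrative sketch on simulated data, assuming a factor-analysis-style decomposition; the variable names, dimensions, and simulation are hypothetical and not taken from the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_trials, n_neurons, n_latents = 500, 12, 2  # hypothetical sizes

# Simulate population activity as shared (low-dimensional) + private signals.
loading = rng.normal(size=(n_neurons, n_latents))            # shared loading matrix
latents = rng.normal(size=(n_trials, n_latents))             # shared latent state
private = rng.normal(scale=0.5, size=(n_trials, n_neurons))  # independent noise
activity = latents @ loading.T + private

# Factor analysis models the covariance as L L^T + Psi, where the low-rank
# term L L^T captures shared (correlated) variance and the diagonal Psi
# captures private (uncorrelated) variance.
fa = FactorAnalysis(n_components=n_latents).fit(activity)
shared_var = np.sum(fa.components_ ** 2, axis=0)  # per-neuron shared variance
private_var = fa.noise_variance_                  # per-neuron private variance

# Fraction of each neuron's variance that is shared across the population;
# tracking this fraction over training is one way to quantify consolidation.
shared_fraction = shared_var / (shared_var + private_var)
```

Under this decomposition, an increase in the shared fraction over training (with total variance held roughly constant) corresponds to the consolidation of a low-dimensional manifold of coordinated activity described in the abstract.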



