Inferring neural activity before plasticity as a foundation for learning beyond backpropagation.

Song Y
Millidge B
Salvatori T
Lukasiewicz T
Xu Z
Bogacz R

This paper proposes that the brain learns in a fundamentally different way from current artificial intelligence systems, and demonstrates that this biological learning mechanism enables faster and more effective learning in many of the tasks faced by animals and humans in nature.

Scientific Abstract

For both humans and machines, the essence of learning is to pinpoint which components in their information-processing pipeline are responsible for an error in the output, a challenge known as 'credit assignment'. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning. Here, we set out a fundamentally different principle of credit assignment called 'prospective configuration'. In prospective configuration, the network first infers the pattern of neural activity that should result from learning, and then the synaptic weights are modified to consolidate the change in neural activity. We demonstrate that this distinct mechanism, in contrast to backpropagation, (1) underlies learning in a well-established family of models of cortical circuits, (2) enables learning that is more efficient and effective in many contexts faced by biological organisms, and (3) reproduces surprising patterns of neural activity and behavior observed in diverse human and rat learning experiments.
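The two-phase mechanism described above, first inferring the activity pattern the network should settle into and only then consolidating it into the synaptic weights, can be sketched with a small energy-based network in the style of the predictive-coding model family the abstract refers to. This is an illustrative sketch only: the layer sizes, learning rates, linear (identity) activations, and the function name `prospective_configuration_step` are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def prospective_configuration_step(x_in, target, W1, W2,
                                   lr_x=0.1, lr_w=0.01, n_relax=100):
    """One learning step in a toy 3-layer energy-based network.

    Phase 1 (inference): with input and target both clamped, relax the
    hidden activity h by gradient descent on the energy
        E = 0.5*||h - W1 @ x_in||^2 + 0.5*||target - W2 @ h||^2
    so that h settles to its 'prospective' value, i.e. the activity the
    network should exhibit after learning.

    Phase 2 (consolidation): update the weights with local, Hebbian-like
    rules (prediction error times presynaptic activity) so the settled
    activities become self-consistent.
    """
    h = W1 @ x_in                      # start from the feedforward activity
    for _ in range(n_relax):           # phase 1: infer prospective activity
        e1 = h - W1 @ x_in             # prediction error at the hidden layer
        e2 = target - W2 @ h           # prediction error at the output layer
        h -= lr_x * (e1 - W2.T @ e2)   # gradient descent on E with respect to h
    # phase 2: consolidate the inferred activity into the weights
    e1 = h - W1 @ x_in
    e2 = target - W2 @ h
    W1 += lr_w * np.outer(e1, x_in)    # local update: error x presynaptic input
    W2 += lr_w * np.outer(e2, h)
    return W1, W2, h
```

Note the contrast with backpropagation: here the weight changes are computed from the settled activities after inference, so each update is local to a connection, whereas backpropagation computes weight changes from the activities produced before any learning has taken place.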

Figure: four panels, each showing neurons linked together, with connections being strengthened or weakened.
Difference in learning between artificial neural networks and the brain, illustrated by a bear fishing for salmon. The bear can see the river, and it has learnt that when it can also hear the river and smell the salmon, it is likely to catch one. But one day the bear arrives at the river with a damaged ear, so it cannot hear the river. The bear therefore needs to weaken the connections between the neurons encoding the sight of the river and those encoding its sound (top left). In an artificial neural network, this lack of hearing would also abolish the expectation of smell: because there is no sound during learning, backpropagation would change multiple connections, including those between the neurons encoding the river and those encoding the smell of salmon (top right), so the bear would conclude that there is no salmon and go hungry. In the animal brain, by contrast, the neurons first settle into a prospective configuration (bottom left), so the lack of sound does not interfere with the knowledge that the smell of salmon is still present (bottom right), and the salmon is still likely to be there for catching.
Citation

2024. Nat Neurosci, 27(2):348-358.

DOI
10.1038/s41593-023-01514-1
Related Content
Publication
Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, Clopath C, Costa RP, de Berker A, Ganguli S, Gillon CJ, Hafner D, Kepecs A, Kriegeskorte N, Latham P, Lindsay GW, Miller KD, Naud R, Pack CC, Poirazi P, Roelfsema P, Sacramento J, Saxe A, Scellier B, Schapiro AC, Senn W, Wayne G, Yamins D, Zenke F, Zylberberg J, Therien D, Kording KP
2019. Nat Neurosci, 22:1761-1770.
Publication
Salvatori T, Song Y, Xu Z, Lukasiewicz T, Bogacz R
In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, Vancouver, BC, Canada, February 22 - March 1, 2022 (Vol. 10177, pp. 507-524). AAAI Press.
Publication
Song Y, Lukasiewicz T, Xu Z, Bogacz R
2020. Adv Neural Inf Process Syst, 33:22566-22579.
Publication
Salvatori T, Pinchetti L, Millidge B, Song Y, Bao T, Bogacz R, Lukasiewicz T
2022. Adv Neural Inf Process Syst, 35:38232-38244.
Publication
Whittington JCR, Bogacz R
2019. Trends Cogn Sci, 23:235-250.