2021-01-28 19:02:21
The Core Computational Principles of a Neuron – the most interesting part.
To spike or not to spike

Convergence towards principles always leads to simplification and generalization, but it can be dangerous. The brain is too diverse. You can simplify too much and lose something important.
Let’s dare to try.

Learning is essentially a search for the right function.

Some neurons transmit information by the frequency of their spikes. For example, the more often a motor neuron in the spinal cord fires, the more strongly the muscle contracts. For a long time, people thought that other neurons in the cerebral cortex send information in a similar way.
For many decades, the idea of frequency coding was dominant, and even now artificial neural networks use the same kind of coding (real-valued activations). But a nasty problem ruins the frequency view: to “measure” the firing rate you need to wait a long time (hundreds of milliseconds) to average over the spikes. (The uncertainty relation for waves is strongly felt here; pleasing.)
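To see the problem concretely, here is a minimal sketch (hypothetical numbers; spike trains approximated as Bernoulli processes) of estimating a 40 Hz firing rate from windows of different lengths. Short windows give wildly noisy estimates; only a long window averages them out:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spikes(rate_hz, duration_s, dt=0.001):
    # Bernoulli approximation of a Poisson spike train, one bin per ms.
    n_bins = int(duration_s / dt)
    return rng.random(n_bins) < rate_hz * dt

def estimate_rate(spikes, duration_s):
    # Rate estimate: spike count divided by window length.
    return spikes.sum() / duration_s

true_rate = 40.0  # Hz
for window in (0.01, 0.05, 0.5):  # 10 ms, 50 ms, 500 ms windows
    estimates = [estimate_rate(poisson_spikes(true_rate, window), window)
                 for _ in range(1000)]
    print(f"window {window * 1000:5.0f} ms: "
          f"mean {np.mean(estimates):5.1f} Hz, std {np.std(estimates):5.1f} Hz")
```

With a 10 ms window the estimate can only be a multiple of 100 Hz; only the 500 ms window gets close to 40 Hz, and that is precisely the waiting time the brain cannot afford.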
However, taking into account the time needed for spike generation, spike propagation, and the total path length, there is just enough time to send 1–2 pulses… and no way to wait long enough to “send” a firing rate.
Therefore, individual spikes (0 or 1) should somehow be enough to transmit information. The currently prevailing idea is that a single neuron does not convey much, but many neurons together can encode anything (population coding).
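A toy sketch of population coding (the Gaussian tuning curves and the population-vector readout here are illustrative assumptions, not taken from the post): each neuron contributes a single 0-or-1 spike, yet the population recovers a continuous value:

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 binary neurons, each most likely to fire near its "preferred" stimulus.
preferred = np.linspace(0.0, 1.0, 50)

def encode(stimulus, width=0.1):
    # Gaussian tuning curve gives each neuron its firing probability;
    # every neuron then emits a single 0-or-1 spike.
    p_fire = np.exp(-((stimulus - preferred) ** 2) / (2 * width ** 2))
    return (rng.random(preferred.size) < p_fire).astype(float)

def decode(spikes):
    # Population-vector readout: mean preferred value of the active neurons.
    # The max() guards against an empty population response.
    return (spikes @ preferred) / max(spikes.sum(), 1.0)

stimulus = 0.63
spikes = encode(stimulus)
print(f"stimulus = {stimulus}, active neurons = {int(spikes.sum())}, "
      f"decoded = {decode(spikes):.3f}")
```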
Over the last 20 years, it has been shown that the repertoire of possible behaviors of a biological neuron is much larger than previously thought. It turns out that dendrites do not simply transmit the signal but also process it along the way: if a dendritic branch is activated strongly enough, it can amplify the signal (an event called a dendritic spike). Thus, synapses that are active close together in space and time excite the cell much more strongly than simultaneous but non-local inputs. Hence a new learning paradigm: not only synaptic weights but also synapse locations store information.
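A common way to formalize this is a two-layer model of the neuron: each dendritic branch sums its local synapses and applies a nonlinearity before the soma sums the branches. A minimal sketch, with the threshold and boost values made up for illustration:

```python
import numpy as np

def branch_output(synaptic_input, threshold=3.0, boost=2.0):
    # One dendritic branch: if the local sum crosses a threshold, a
    # "dendritic spike" amplifies the branch's contribution to the soma.
    s = synaptic_input.sum()
    return boost * s if s > threshold else s

def soma_drive(branches):
    # Two-layer view of a neuron: nonlinear branches, linear soma sum.
    return sum(branch_output(b) for b in branches)

# Eight equally weighted active synapses in both cases; only placement differs.
clustered = [np.ones(4), np.ones(4), np.zeros(4), np.zeros(4)]
scattered = [np.array([1.0, 1.0, 0.0, 0.0])] * 4

print("clustered drive:", soma_drive(clustered))  # branches spike -> 16.0
print("scattered drive:", soma_drive(scattered))  # no branch spikes -> 8.0
```

The same eight active synapses drive the cell twice as hard when clustered on two branches as when scattered across four, so where a synapse sits carries information.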
In reality, synapses are very unreliable and stochastic: neurons often fail to transmit a signal. Interestingly, this may not be a weakness of an evolution that could not invent reliable wires, but rather a discovery of how to make learning more efficient.
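One way to read this in machine-learning terms (an analogy I am adding, not a claim from the post) is dropout-style regularization: random synaptic failures leave the expected input unchanged but inject noise that can discourage co-adaptation. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def unreliable_synapses(weights, x, p_release=0.5):
    # Each synapse transmits only with probability p_release, mimicking
    # stochastic vesicle release. Dividing by p_release keeps the expected
    # drive unchanged, the same inverted-dropout trick used in ANNs.
    mask = rng.random(weights.shape) < p_release
    return (weights * mask) @ x / p_release

w = rng.normal(size=100)
x = rng.normal(size=100)
samples = [unreliable_synapses(w, x) for _ in range(10_000)]
print(f"reliable drive: {w @ x:.3f}")
print(f"stochastic drive: mean {np.mean(samples):.3f}, std {np.std(samples):.3f}")
```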
In contrast, the loss of a few transistors can damage a whole chip and halt the computer. Brain architecture has evolved to be robust to errors, to the failure of individual neurons and synapses.
“Will my algorithm still work if I remove 10% of the neurons? Or 20%?”
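That question maps directly onto an ablation test. A minimal sketch with a hypothetical toy network (random weights stand in for a trained model; a real test would measure a task metric such as accuracy):

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(x, w1, w2, keep_mask):
    # Tiny two-layer ReLU network; keep_mask zeroes out "removed" neurons.
    h = np.maximum(0.0, x @ w1) * keep_mask
    return h @ w2

n_in, n_hidden = 20, 200
w1 = rng.normal(scale=0.3, size=(n_in, n_hidden))  # stand-in for trained weights
w2 = rng.normal(scale=0.3, size=n_hidden)
x = rng.normal(size=(1000, n_in))
reference = forward(x, w1, w2, np.ones(n_hidden))

for drop in (0.1, 0.2, 0.5):
    mask = (rng.random(n_hidden) >= drop).astype(float)
    damaged = forward(x, w1, w2, mask)
    err = np.mean((damaged - reference) ** 2) / np.mean(reference ** 2)
    print(f"removed {drop:.0%} of neurons: relative output error {err:.3f}")
```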
This is the first part. The second will follow as a repost of the post from which I learned about the article.