07-06-2025, 08:50 AM
Hi all! 
I hope someone will be interested in discussing an idea that came to my mind one day.
I have well-developed imaginative thinking. One day, while thinking about how to eliminate some of the shortcomings of classical neural networks, I "saw" in my mind's eye that human neurons are connected to the outputs of the optic nerve differently than the inputs of the first layer of a classical neural network are connected to the image's pixel matrix.
In a classical (fully connected) neural network, each neuron of the first layer is connected to EVERY pixel of the matrix.
But in my mental image I suddenly saw living neurons of the human brain chaotically connected to a random number of optic-nerve outputs.
When I started experimenting with the same kind of random connections for the SB neurons in my SB neural network, I saw HOW and WHY such a network needs only one pass to recognize a character on a pixel matrix. (As you remember, in a classical digital neural network one pass over the entire matrix is required FOR EACH neuron of the first layer.)
The unique feature of the SB neuron is that training adjusts not its input weights but its activation function, which makes it possible to determine the symbol on the matrix in just one pass over the matrix for all neurons at once.
That's the interesting story I have.
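Since the post does not show the SB network itself, here is one possible reading of the idea as a minimal sketch: each neuron is wired to a random subset of pixels, and "training the activation function" is modeled as a lookup from a neuron's input sum to the labels it should fire for. All names (`neurons`, `activation`, `recognize`), the neuron count, the matrix size, and the toy bitmaps are my assumptions, not the author's actual design.

```python
import random
from collections import Counter

random.seed(0)

SIZE = 9        # 3x3 pixel matrix (toy size, assumed)
N_NEURONS = 40  # assumed neuron count

# Each hypothetical "SB neuron" is wired to a random subset of pixels,
# mimicking the chaotic optic-nerve connections described in the post.
neurons = [random.sample(range(SIZE), random.randint(2, 5))
           for _ in range(N_NEURONS)]

# Training adjusts each neuron's "activation function", modeled here as
# a lookup table: input sum -> set of labels the neuron fires for.
activation = [{} for _ in range(N_NEURONS)]

def neuron_sums(pixels):
    # ONE pass over the matrix: every neuron just sums its own pixels.
    return [sum(pixels[i] for i in conn) for conn in neurons]

def train(pixels, label):
    for n, s in enumerate(neuron_sums(pixels)):
        activation[n].setdefault(s, set()).add(label)

def recognize(pixels):
    votes = Counter()
    for n, s in enumerate(neuron_sums(pixels)):
        for label in activation[n].get(s, ()):
            votes[label] += 1
    return votes.most_common(1)[0][0] if votes else None

# Toy 3x3 bitmaps for "X" and "O" (made-up examples)
X = [1, 0, 1,
     0, 1, 0,
     1, 0, 1]
O = [1, 1, 1,
     1, 0, 1,
     1, 1, 1]

train(X, "X")
train(O, "O")
```

Under this reading, `recognize` needs only a single pass: every neuron's sum is computed once, and the trained activation tables vote on the label, which is how I would interpret the "one pass for all neurons at once" claim.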
