05-15-2025, 07:46 AM
It's almost a good thing that something, somewhere, doesn't work perfectly. What would I be doing for fun right now if the neural networks we know had no shortcomings?
But now I clearly see that each hidden layer of a neural network "lives" in its own closed world. A layer "sees" only what arrives at the inputs of its neurons, and it doesn't care at all what the world around it will do with the values it places on its neurons' outputs.
In an entire huge network that recognizes an image on a matrix, ONLY THE FIRST LAYER ever sees that image. The first layer reacts to the image in some way, and its reaction is the creation of a NEW "picture" on the matrix of its neurons' outputs. It is this "picture" that the second layer of neurons sees. The second layer knows nothing about the real original image.
So if a developer wants to design his own neural network consciously, he must decide what exactly the first layer should extract from the original image. If I want the network to recognize a figure from a set of features, then perhaps the first layer should detect which features are present in the original image and where they are located, and prepare that data for the second layer.
And logically, it makes sense to train the second layer to work with the first layer's data ONLY WHEN that data is correct, that is, when the first layer has already been trained.
In my opinion, everything is logical.
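This "train the first layer first, then freeze it and train the second" idea is essentially greedy layer-wise training. A minimal sketch of it, assuming NumPy and a toy XOR task: the first layer here is a hand-crafted feature extractor (a stand-in for an "already trained" layer), and only the second layer is trained, seeing nothing but the first layer's outputs, exactly as described above.

```python
import numpy as np

def step(z):
    """Hard threshold activation: 1 if z > 0, else 0."""
    return (z > 0).astype(float)

# Layer 1: a fixed feature extractor standing in for an already-trained layer.
# It turns (x1, x2) into two features: "at least one input is on" and
# "both inputs are on". Layer 2 will never see the raw inputs.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

def layer1(x):
    return step(x @ W1 + b1)

# The XOR dataset: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# The "new picture" that layer 2 actually sees.
H = layer1(X)

# Layer 2: a single perceptron trained only on layer 1's outputs.
# Since XOR is linearly separable in this feature space, the
# perceptron rule converges.
w2 = np.zeros(2)
b2 = 0.0
for _ in range(20):
    for h, t in zip(H, y):
        pred = float(step(np.array([h @ w2 + b2]))[0])
        w2 += (t - pred) * h      # perceptron update, learning rate 1
        b2 += (t - pred)

pred = step(H @ w2 + b2)
print(pred.tolist())  # matches the XOR targets [0, 1, 1, 0]
```

The point of the sketch is the division of labor: layer 1 was "trained" (here, hand-built) separately, and layer 2 learns from its frozen outputs alone. With the first layer untrained or random, nothing guarantees the second layer gets separable features to work with.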
