05-13-2025, 08:58 PM
Thank you for your patience.
Most likely we understand the terms the same way.
I think we have different ideas about the situation the neurons of the first and second hidden layers are in.
During the first training passes, for example, the weights of the first-layer neurons are adjusted based on the reliable information at their inputs.
But since those neurons have not been trained yet, their outputs carry INCORRECT information.
So the very same backpropagation cycle that trains the first-layer neurons on reliable inputs forces the second-layer neurons to learn from the first layer's false signals.
Only after the first layer has learned well enough that its outputs form sufficiently reliable information does the second layer begin to retrain on it.
Until that point, training the second layer makes no sense.
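To make concrete what I mean, here is a minimal sketch of one backpropagation step through two hidden layers (NumPy, sigmoid activations, squared-error loss; the dimensions and names are just illustrative assumptions, not anyone's actual code).
Both layers' deltas come out of the same cycle, and the second hidden layer's weight update depends directly on h1, the output of the still-untrained first layer:

```python
# Minimal sketch of one backprop step through two hidden layers.
# All dimensions and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dimensions: 4 inputs -> 5 (hidden 1) -> 3 (hidden 2) -> 1 output
W1 = rng.normal(0, 0.5, (5, 4))   # first hidden layer
W2 = rng.normal(0, 0.5, (3, 5))   # second hidden layer
W3 = rng.normal(0, 0.5, (1, 3))   # output layer

x = rng.normal(size=(4, 1))   # one training example
y = np.array([[1.0]])         # its target
lr = 0.1

# Forward pass: h1 is the (initially unreliable) output of layer 1,
# and it is exactly what layer 2 sees in this same step.
h1 = sigmoid(W1 @ x)
h2 = sigmoid(W2 @ h1)
out = sigmoid(W3 @ h2)

# Backward pass: the deltas for ALL layers come from the same cycle.
d3 = (out - y) * out * (1 - out)
d2 = (W3.T @ d3) * h2 * (1 - h2)
d1 = (W2.T @ d2) * h1 * (1 - h1)

# Layer 2's gradient is d2 @ h1.T -- it depends directly on h1,
# the untrained first layer's output, which is the point at issue.
W3 -= lr * (d3 @ h2.T)
W2 -= lr * (d2 @ h1.T)
W1 -= lr * (d1 @ x.T)
```

So W2 is updated against whatever h1 happens to be at that moment, reliable or not.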
Am I wrong?
