05-13-2025, 09:33 AM
(05-13-2025, 08:35 AM)litdev Wrote: Back propagation optimises all layer weights to better match the test value...
I don't argue with this statement.
What I don't like is that while the neurons in the first hidden layer are being trained on TRUE data, the neurons in the second layer are being trained on UNTRUE data coming from first-layer neurons that have not been trained yet.
Why do I need such training?
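To make the point concrete, here is a minimal sketch (in PyTorch, which is my choice for illustration, not anything from the original posts) of ordinary back propagation: a single backward pass computes gradients for both hidden layers at once, so the second layer's weights are updated while the first layer is still nowhere near trained.

```python
# Minimal sketch: one backward pass updates ALL layers simultaneously.
# The sizes and data here are placeholders, just to show the mechanics.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(4, 8), nn.Sigmoid(),   # hidden layer 1
    nn.Linear(8, 8), nn.Sigmoid(),   # hidden layer 2
    nn.Linear(8, 1),                 # output layer
)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(16, 4), torch.randn(16, 1)   # dummy training data
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()   # gradients flow to every layer at once
    opt.step()        # all weights move together in the same step
```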

I would feel more comfortable starting to train the second layer only after the fully trained neurons of the first layer are supplying it with correct data for its training.
Is that logical?
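For comparison, here is a hedged sketch of the layer-by-layer idea as I understand it: stage 1 trains hidden layer 1 through a temporary output head, stage 2 freezes layer 1 and trains hidden layer 2 on its now-settled outputs. The temporary head and all names here are my own assumptions for illustration, not an established recipe from this thread.

```python
# Hedged sketch of "train layer 1 first, then layer 2" (assumed setup).
import torch
import torch.nn as nn

h1 = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid())   # hidden layer 1
h2 = nn.Sequential(nn.Linear(8, 8), nn.Sigmoid())   # hidden layer 2
head1 = nn.Linear(8, 1)   # temporary head, only used to train h1
head2 = nn.Linear(8, 1)   # final output layer
loss_fn = nn.MSELoss()
x, y = torch.randn(16, 4), torch.randn(16, 1)       # dummy training data

# Stage 1: train hidden layer 1 (plus its temporary head) first.
opt1 = torch.optim.SGD(list(h1.parameters()) + list(head1.parameters()), lr=0.1)
for _ in range(200):
    opt1.zero_grad()
    loss_fn(head1(h1(x)), y).backward()
    opt1.step()

# Stage 2: freeze layer 1, then train layer 2 on its fixed outputs.
for p in h1.parameters():
    p.requires_grad = False
opt2 = torch.optim.SGD(list(h2.parameters()) + list(head2.parameters()), lr=0.1)
for _ in range(200):
    opt2.zero_grad()
    loss_fn(head2(h2(h1(x))), y).backward()
    opt2.step()
```

Whether this converges as well as joint back propagation is exactly the open question here; the sketch only shows the mechanics of the staged training I have in mind.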
