05-13-2025, 07:01 AM
(05-12-2025, 06:16 PM)litdev Wrote: Classically, the training is performed by 'back propagation', tuning weights to minimise the difference between observed results and training data. This is basically an optimisation algorithm that tunes weights to minimise the residual variance between the training-data outcome and the ANN result. So normally all layers are tuned for each piece of training data at each step of the training.
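For reference, the classic procedure described in the quote can be sketched in a few lines. This is a toy example of my own (a 2-layer perceptron with sigmoid activations learning XOR), not code from this thread; the point it illustrates is that on every training step the error signal is propagated backwards and BOTH weight matrices are tuned, not only the first hidden layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # training inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1 = rng.normal(size=(2, 4))           # input -> hidden weights
W2 = rng.normal(size=(4, 1))           # hidden -> output weights
W1_init, W2_init = W1.copy(), W2.copy()
lr = 0.5

for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1)                # hidden-layer activations
    out = sigmoid(h @ W2)              # network prediction

    # backward pass: the output error is pushed back through W2 to W1
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # both layers are updated from the same piece of training data
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h
```

After training, comparing `W1` and `W2` to their initial copies shows that gradient descent has moved the weights in every layer, which is what "all layers are tuned at each step" means in practice.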
I have thought about your words.
But still, in fact, only the first hidden layer of neurons is trained ON RELIABLE data: every deeper layer receives as its input only the outputs of earlier layers, which are themselves not yet trained.
This fact is not mentioned at all in your mathematical description of the neural-network training process.

But this fact is of great importance for anyone who wants to design a neural network CONSCIOUSLY, rather than just waiting until some randomly chosen configuration happens to learn to solve the problem.

If you do not see an OBVIOUS error in my reasoning, then I will continue my fascinating amateur research in this direction.