Classically, training is performed by "backpropagation": tuning the weights to minimise the difference between the network's output and the training data. It is essentially an optimisation algorithm that adjusts the weights to minimise the residual error between the training-data outcomes and the ANN's results. So normally all layers are tuned for each piece of training data at each step of the training. A rough illustrative sketch follows below.
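As a rough illustration (my own sketch, not code from this thread): a tiny two-layer network trained by backpropagation in Python/NumPy, where every layer's weights are updated at each step to reduce the squared error between the network's output and the training targets. The XOR data, layer sizes, and learning rate are arbitrary choices for the example.

```python
# Minimal backpropagation sketch (illustrative only, not the thread author's code).
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: learn XOR (inputs -> target outputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights for the hidden and output layers.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass through all layers.
    h = sigmoid(X @ W1)      # hidden-layer activations
    out = sigmoid(h @ W2)    # network output

    # Residual between network output and training targets.
    err = out - y

    # Backward pass: propagate the error through every layer,
    # then update *all* weights by gradient descent.
    d_out = err * out * (1 - out)          # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden layer
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)

print("final outputs:", out.ravel().round(3))
```

The point of the sketch is only the training loop: at every step the error is propagated backwards and the weights of every layer are nudged to shrink the residual, which is the optimisation described above.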