

Creation of SB-Neuron. Ours. Branded.(v2)
And, according to logic, it makes sense to train the second layer to work with the data of the first layer ONLY WHEN the data of the first layer is correct.

I guess it is this assumption that is not right for normal ANN training. Training is a holistic process in which all nodes in all layers are treated equally, and all are updated at each step by backpropagation.

You can't update the first layer without taking account of what the next layer will do. Otherwise, why have more than one layer?

I suggest you continue with your strategy, but realise it is conceptually different to classical ANN training.
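The joint-update idea above can be made concrete with a tiny backpropagation sketch. This is an illustrative Python example (all weights, inputs, and sizes are made up, not taken from the thread): the hidden-layer gradient `dh` is computed *through* the output weights `w2`, which is exactly why the first layer cannot be trained in isolation.

```python
import math

# Minimal 2-layer network: 2 inputs -> 2 sigmoid hidden units -> 1 sigmoid output.
# All numbers here are arbitrary demo values.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x = [1.0, 0.5]                    # input
t = 1.0                           # target output
w1 = [[0.1, -0.2], [0.3, 0.4]]    # hidden-layer weights
w2 = [0.5, -0.6]                  # output-layer weights
lr = 0.1                          # learning rate

# Forward pass through both layers.
h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1]) for j in range(2)]
y = sigmoid(w2[0] * h[0] + w2[1] * h[1])

# Backward pass: the output error dy flows back through w2 to give
# the hidden errors dh -- the chain rule litdev's comment refers to.
dy = (y - t) * y * (1 - y)
dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]

# One simultaneous update of BOTH layers.
for j in range(2):
    w2[j] -= lr * dy * h[j]
    for i in range(2):
        w1[j][i] -= lr * dh[j] * x[i]
```

After this single step the output moves slightly toward the target, and notice that the first-layer update depended on `w2` through `dh`.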
Reply
(05-15-2025, 08:22 AM)litdev Wrote: ...

I suggest you continue with your strategy, but realise it is conceptually different to classical ANN training.

Yes, of course, I understand that.
The whole beauty of my project is that I am exploring an area of knowledge unknown to me.  Rolleyes

And this opportunity was given to me by the wonderful Code Editor "SB-Prime", the magnificent LitDev extension, and the wise simple programming language Small Basic.
It is this set of tools that makes my research easy and fun.
Thanks to everyone who created this set.  Smile
[-] The following 1 user Likes AbsoluteBeginner's post:
  • litdev
Reply
Since the second layer of neurons does not see the original image on the pixel matrix, but only the state of the first layer's "output matrix", I will try to build the first layer from neurons that have only 9 inputs.
I think a 3-by-3 patch of the pixel matrix will be enough for one first-layer neuron to confidently recognize a single feature of a figure.
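To make the 9-input idea concrete, here is a small Python sketch (the image, the feature, and the function names are my own illustrations, not the author's code): a first-layer unit reads only a 3x3 patch of the matrix instead of every pixel.

```python
# Hypothetical sketch of a 9-input first-layer neuron.
# 7 rows x 5 columns, like the 5x7 pixel matrix mentioned in the thread;
# the pattern below is a rough digit "7".
image = [
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],
]

def patch3x3(img, row, col):
    """The 9 inputs one first-layer neuron receives: a 3x3 patch."""
    return [img[row + r][col + c] for r in range(3) for c in range(3)]

def detects_top_bar(patch):
    """Example feature: the top row of the patch is fully lit."""
    return patch[0] == 1 and patch[1] == 1 and patch[2] == 1

p = patch3x3(image, 0, 0)   # patch at the top-left corner
print(detects_top_bar(p))   # prints True: the "7" has a top bar there
```

Each such neuron reports one local feature, and the collection of reports forms the "output matrix" that the second layer sees.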

Thus, I have a new task: to create a program whose window contains two SB displays.
The first SB display will show the figure that the first layer of neurons sees.
The second SB display will show the state of the first layer's "output matrix".

I guess it will be a very interesting "picture".  Rolleyes
Reply
Hi all!  Shy

All these days I have been constantly thinking about how to eliminate the shortcomings that classical neural networks have.
A very serious drawback is that every input value of the network has to be multiplied by a weight of EVERY neuron in the first hidden layer.
That is, we are forced to make as many "passes" over the same input data as there are neurons in that layer.

It seems to me that SB neurons allow us to get rid of this disadvantage.
If nothing stops me, I will soon be able to publish an example of an SB neural network that requires ONLY ONE "PASS" to determine a symbol on a 5x7 pixel matrix.
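The difference between "one pass per neuron" and "one pass total" can be sketched in Python (the weights and sizes below are arbitrary demo values, not the author's network). Both loops do the same arithmetic; what changes is how many times the pixel matrix itself is traversed.

```python
# Counting traversals of the pixel matrix. All values are illustrative.
pixels = [1, 0, 1, 1, 0] * 7          # flattened 5x7 matrix, 35 values
n_neurons = 4
weights = [[(i + j) % 3 - 1 for j in range(len(pixels))]
           for i in range(n_neurons)]  # arbitrary demo weights

# Classical layout: every neuron scans every pixel -> n_neurons passes.
sums_classical = []
reads_classical = 0
for w in weights:
    s = 0
    for j, p in enumerate(pixels):
        s += w[j] * p
        reads_classical += 1
    sums_classical.append(s)

# One-pass layout: visit each pixel once and push its value
# to every neuron listening to it.
sums_one_pass = [0] * n_neurons
reads_one_pass = 0
for j, p in enumerate(pixels):
    reads_one_pass += 1
    for i in range(n_neurons):
        sums_one_pass[i] += weights[i][j] * p

print(sums_classical == sums_one_pass)  # prints True: same result
print(reads_classical, reads_one_pass)  # prints 140 35
```

The multiply count is identical; the one-pass version simply reads each pixel once and fans its value out, which is the traversal saving the post describes.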

Cool
[-] The following 1 user Likes AbsoluteBeginner's post:
  • litdev
Reply
Hi all!  Shy

I hope someone will be interested in discussing an idea that came to my mind one day.
I have well-developed imaginative thinking. One day, while thinking about how to eliminate some of the shortcomings of classical neural networks, I "saw" in my mind's eye that human neurons are connected to the outputs of the optic nerve differently from how the inputs of the first layer of a classical neural network are connected to the matrix of image pixels.

In a classical neural network, each neuron of the first layer is connected to EVERY pixel of the matrix.
But in my mental image I suddenly saw living neurons of the human brain chaotically connected to a random number of optic-nerve outputs.

When I started experimenting with the same kind of random connections between my SB neurons in my SB neural network, I saw HOW and WHY such a network needs only one pass to recognize a character on a matrix of pixels. (As you remember, a classic digital neural network requires one pass over the entire matrix FOR EACH neuron of the first layer.)

The unique feature of the SB neuron is that during training it adjusts not its input weights but its activation function, which makes it possible to determine the symbol on the matrix in just one pass over the matrix for all neurons at once.
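Here is one possible reading of that idea, sketched in Python. Everything in this block is my own guess at the mechanism, not the author's code: a neuron is wired once to a fixed random-looking subset of pixels, its "response" is a single-pass sum over those pixels, and "training the activation function" means recording which response value each known symbol produces in a lookup table.

```python
# Speculative sketch of an "adjust the activation function" neuron.
# Symbol bitmaps are 3x5 (15 pixels); patterns are illustrative.
SYMBOLS = {
    "1": [0, 1, 0,  1, 1, 0,  0, 1, 0,  0, 1, 0,  1, 1, 1],
    "7": [1, 1, 1,  0, 0, 1,  0, 1, 0,  0, 1, 0,  0, 1, 0],
}

# Fixed wiring: this neuron taps 6 of the 15 pixels (chosen once,
# hard-coded here so the example is reproducible).
taps = [0, 2, 4, 7, 10, 13]

def response(image):
    """One pass: sum only the pixels this neuron is wired to."""
    return sum(image[i] for i in taps)

# "Training" adjusts the activation function rather than the weights:
# record which response each known symbol produces.
activation = {response(img): name for name, img in SYMBOLS.items()}

def recognize(image):
    return activation.get(response(image), "?")

print(recognize(SYMBOLS["7"]))  # prints 7
```

With these two bitmaps the responses happen to differ (4 vs 5), so the table resolves them; a real network would need many such neurons so that collisions on any single neuron's response do not matter.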

This is such an interesting story I have.  Wink
Reply
Hi all!  Shy

Now that SB neurons give us the ability to recognize a symbol on a matrix in just one "pass" of the program over the entire array of pixels, I can try to create an SB neural network that will quickly read a number from an image on the monitor.

Of course, I'm talking about creating this neural network using Small Basic.
In other words, the SB neural network will let fans of programming in Small Basic create fairly fast neural networks without resorting to C#.

If I'm right, I'll be very happy that Small Basic fans have the opportunity to have fun exploring another interesting area of knowledge.  Rolleyes
[-] The following 1 user Likes AbsoluteBeginner's post:
  • litdev
Reply

