Small Basic Forum
Creation of SB-Neuron. Ours. Branded.(v2) - Printable Version



Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-23-2024

(translated by Google Translate)

Hello everyone.
I accidentally realized that there is a lot of room for fun and creativity in the topic of computer neurons and neural networks.
I propose to make this thread a place where we can have fun learning old things and creating new ones.

Good luck to everyone.


I don't know what projects you are working on in Small Basic right now, but it's my turn to get back into neural networks.
However, the ANN extension will not be used here.

In order to use the ANN extension easily and correctly, I will first create a demo program that will let us visually observe how a neural network is trained.
I hope that when we see all this, we will be able to understand it well.
And when we understand it well, we will be able to use ANN neural networks in our SB projects with ease.

(Because if this keeps up, my e-sports players in the games "AI Snake" and "Retro Football" will become champions of this year's SB championship without ever meeting a single opponent.)


RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-23-2024

(translated by Google Translate)

Hi guys.
My study of virtual neurons has led me to a completely unexpected result.
My goal was not to repeat the results of other people's research; I wanted to enjoy doing my own.
So my account of my results is not a scientific report. It is a story for entertainment and conversation.

My presentation is not ready yet.
But one piece of news was so unexpected that I have to share it with you right now.

As you know, when training a neuron, the program adjusts the weights of the inputs so that the activation function produces the desired output.
The whole time I was studying this, I felt a sense of discomfort.
Intuitively, it seemed more correct to adjust the shape of the activation function rather than the data entering it.
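
For contrast, here is a minimal Small Basic sketch of that classical scheme (my own illustration, not code from this thread; all names and constants are mine). The step activation stays fixed and only the weights move, while a single neuron learns logical AND:

w1 = 0      ' weight of input 1
w2 = 0      ' weight of input 2
b = 0       ' bias weight
rate = 0.1  ' learning rate

For epoch = 1 To 20
  For x1 = 0 To 1
    For x2 = 0 To 1
      target = x1 * x2              ' AND truth table
      sum = w1 * x1 + w2 * x2 + b   ' weighted sum of the inputs
      If sum > 0 Then               ' fixed step activation
        out = 1
      Else
        out = 0
      EndIf
      err = target - out
      w1 = w1 + rate * err * x1     ' adjust the weights, not the function
      w2 = w2 + rate * err * x2
      b = b + rate * err
    EndFor
  EndFor
EndFor
TextWindow.WriteLine("w1=" + w1 + " w2=" + w2 + " b=" + b)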

Once I have everything ready, I will upload the files to my OneDrive.
But even now everything looks very interesting.


RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-23-2024

(translated by Google Translate)

Hi all.
I uploaded the SB-Neuron training demo file to my OneDrive.

(link in the post below)

During training, our SB-Neuron does not adjust the weights of the input data. The SB-Neuron adjusts the shape of the activation function.
It seems to me that this greatly increases the capabilities of a single neuron.
For example, one such SB-Neuron can easily perform Boolean operations on its own. In my demo file, a single SB-Neuron learns to multiply one number by another by changing the shape of its activation function graph.
You can watch this process for yourself.
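
For readers who want the idea without downloading the file, here is a minimal Small Basic sketch of the scheme as described in this thread (not the actual "Neuron training v2.sb" code; the exact index formula and learning rate are illustrative assumptions). Fixed input weights turn each input pair into a unique table index, and training nudges the stored activation value at that index:

For i = 1 To 100                      ' key points of the activation function
  F[i] = 0
EndFor

For run = 1 To 5000
  x = Math.GetRandomNumber(10) - 1    ' random input in 0..9
  y = Math.GetRandomNumber(10) - 1
  index = 1 * x + 10 * y + 1          ' fixed weights 1 and 10 (plus 1 to start at cell 1)
  target = x * y / 10                 ' value the neuron should learn to output
  F[index] = F[index] + 0.1 * (target - F[index])   ' nudge only the table cell
EndFor

x = 7                                 ' after training, reading the neuron is a lookup
y = 6
TextWindow.WriteLine("7*6/10 is about " + F[1 * x + 10 * y + 1])

Because each cell always sees the same target, the repeated nudging simply converges that cell to its target value.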

Of course, I am not an expert. But I have the feeling that our SB-Neuron is a very interesting thing.


RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-23-2024

(translated by Google Translate)

Hi all.
I have uploaded to my OneDrive the second version of the demo file "Neuron training v2.sb", which corrects the mathematics of the program.
Here is a link to the program folder:

https://1drv.ms/f/s!AnoSlTzMqlL6jNx6H4Qi...w?e=8D73FE


RE: Creation of SB-Neuron. Ours. Branded.(v2) - litdev - 09-23-2024

Hi,

Your ideas are interesting and you are doing great work on this.

Mathematically you are iterating (training) to obtain the solution function F(G(X,Y)) = X*Y/10, where G(X,Y) = X + 10*Y + 1 and X,Y are integers in the range [0,9]. For example, X=3, Y=4 gives G = 3 + 40 + 1 = 44, and training drives F(44) towards 3*4/10 = 1.2. If you do enough training runs you will cover the space completely and fully describe F(G) (this is what you plot: F versus G).

In a simple case you can do enough training to cover every case and therefore get exactly the right answer (fully fit F(G)). This is also why I once described neural nets as fancy curve fitting, and why the training may interpolate well (test data within the training range) but extrapolate poorly (test data outside the training range). Train on the [0,9] range but test on the range [50,60], for example.

The power of neural nets shows when it is impossible to train against all possible inputs, but the interpolation still works well.

Imagine using a neural net for face recognition: it is impossible to train against every possible face image, although they do use an enormous number. They also play carefully with the construction of the neural net, the choice of training set, and the preprocessing of input data, as well as the activation functions, but they usually keep the activation functions constant and fairly simple (cutoff, sigmoid, tanh). This is analogous to training F(G) without using every possible test point: train with, say, 100 random inputs (points_X_max = 100) to see the effects, then consider selecting the 100 points randomly or systematically to maximise coverage and smoothness of the input X,Y space (indeed, since there are exactly 10 x 10 input pairs, with 100 points you could train with every possible combination).

https://machinelearningmastery.com/choose-an-activation-function-for-deep-learning/
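
For reference, the standard fixed activation functions mentioned above could be written in Small Basic roughly like this (a sketch; Small Basic has no built-in exponential, so e^x is approximated with Math.Power and a hard-coded constant):

E = 2.718281828                 ' approximation of e; Small Basic has no Math.E

x = 0.5
Cutoff()
TextWindow.WriteLine("cutoff(0.5)  = " + y)
Sigmoid()
TextWindow.WriteLine("sigmoid(0.5) = " + y)
HyperTan()
TextWindow.WriteLine("tanh(0.5)    = " + y)

Sub Cutoff      ' step function: output is 0 or 1
  If x > 0 Then
    y = 1
  Else
    y = 0
  EndIf
EndSub

Sub Sigmoid     ' squashes any sum into (0,1)
  y = 1 / (1 + Math.Power(E, -x))
EndSub

Sub HyperTan    ' squashes any sum into (-1,1)
  y = (Math.Power(E, x) - Math.Power(E, -x)) / (Math.Power(E, x) + Math.Power(E, -x))
EndSub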

Also, is there a reason you do not use publish/import for your programs? It's much easier at this end, and casual users would probably be more likely to look at it (1 click, rather than several to download the program, cut and paste it somewhere, then open it).


RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-23-2024

(translated by Google Translate)

Here's some unscientific theory from a true amateur and newbie.

Essentially, the job of a neuron is to set its output to the Y value of the activation function that corresponds to a given X coordinate.
The value of this X coordinate equals the "Sum" obtained after adding up all the weighted inputs of the neuron.
As far as I know, a classical neuron tries to find a set of input weights that, after the weighting operation, sets the "Sum" to the desired value.
That is, the "Sum" is a digital code (an address on the X-axis) of the desired Y-point on the activation function graph.

Our branded SB-Neuron will use a different technology.

Depending on the nature of the input data, the weights on the neuron's inputs are set from the start and never change.
In my example, I used 100 array cells to store the key points of the future activation function.
To ensure that every combination of values on the two inputs of my neuron corresponds to exactly one point on the X-axis, without repetition, the weights of the neuron's inputs were set to "1" and "10".
Thus, during training, any combination of input data in the range 0 <= X < 10 points exactly to its own array cell and to no other.
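
A quick way to convince yourself of that uniqueness claim is to count how often each sum occurs (an illustrative check, not part of the demo): with weights 1 and 10, the 100 sums X + 10*Y land on the cells 0..99 exactly once each.

For y = 0 To 9
  For x = 0 To 9
    sum = 1 * x + 10 * y          ' runs over 0..99
    count[sum] = count[sum] + 1
  EndFor
EndFor
collisions = 0
For s = 0 To 99
  If count[s] <> 1 Then           ' every cell should be hit exactly once
    collisions = collisions + 1
  EndIf
EndFor
TextWindow.WriteLine("collisions: " + collisions)   ' prints 0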

If the inputs have logical values (0 or 1), then 128 cells of the activation function array will be enough for a neuron with 7 inputs, since 2^7 = 128; for example, input weights of 1, 2, 4, ..., 64 would give each input combination its own cell.

(I'm sorry, life forces me to interrupt my story. See you later)


RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-23-2024

(09-23-2024, 09:36 AM)litdev Wrote: Hi,
...

Also, is there a reason you do not use publish/import for your programs? It's much easier at this end, and casual users would probably be more likely to look at it (1 click, rather than several to download the program, cut and paste it somewhere, then open it).

I thought using the cloud was more convenient for users.

So, from now on I will use both methods of file distribution.

RCDS306.000


RE: Creation of SB-Neuron. Ours. Branded.(v2) - litdev - 09-23-2024

Worth pursuing, but ultimately I suspect you will conclude that the 'standard' way is best - but it's great to test ideas and draw your own conclusions. Do have a look at the link I posted previously; while it is great to pursue your own research, it is also good to see how others think about it.

One issue I see: imagine we have a larger, realistic case with, say, 100 neurons. Storing an activation function for each one (potentially thousands of values each) is going to need lots of memory and be slow to run - and for non-integer sums it will need to be interpolated. My understanding is that the main purpose of the activation function is to convert the summed weights from a previous layer into a neuron value between 0 and 1 (it is often called a compression function).
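
For non-integer sums, the interpolation mentioned above could look like this in Small Basic (a sketch with made-up key points; linear interpolation between the two nearest stored values):

F[4] = 0.8                         ' two made-up neighbouring key points
F[5] = 1.2

sum = 4.3                          ' a non-integer weighted sum
lo = Math.Floor(sum)               ' nearest stored point below the sum
frac = sum - lo                    ' position between the two key points
out = F[lo] + frac * (F[lo + 1] - F[lo])
TextWindow.WriteLine("F(4.3) is about " + out)   ' 0.8 + 0.3 * 0.4 = 0.92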


RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-23-2024

(09-23-2024, 11:15 AM)litdev Wrote: Worth pursuing, but ultimately I suspect you will conclude that the 'standard' way is best - but it's great to test ideas and draw your own conclusions. Do have a look at the link I posted previously; while it is great to pursue your own research, it is also good to see how others think about it.

One issue I see: imagine we have a larger, realistic case with, say, 100 neurons. Storing an activation function for each one (potentially thousands of values each) is going to need lots of memory and be slow to run - and for non-integer sums it will need to be interpolated. My understanding is that the main purpose of the activation function is to convert the summed weights from a previous layer into a neuron value between 0 and 1 (it is often called a compression function).

I am also very interested to find out WHAT will come of this, and WHY.

This is the great value of the SB-Prime Editor and the Extension you created. Thanks to you, many people can very easily use programming for benefit and pleasure.

Thank you on behalf of all these people!


RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-23-2024

I listened to my feelings and decided to see how SB-Neurons would learn to recognize a digit on a 5-by-7 dot matrix.

Anyone who is interested can write a program (without the ANN extension) that does the same thing but uses classical neurons.
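
As a starting point, a 5-by-7 matrix can be flattened into 35 numbered inputs feeding the usual weighted sum of a classical neuron. A rough Small Basic sketch with placeholder values (the random pattern and weights stand in for a real digit and real training):

cols = 5
rows = 7
For r = 1 To rows
  For c = 1 To cols
    i = (r - 1) * cols + c                    ' input index 1..35
    pixel[i] = Math.GetRandomNumber(2) - 1    ' placeholder dot: 0 or 1
    weight[i] = (Math.GetRandomNumber(21) - 11) / 10  ' placeholder weight
  EndFor
EndFor

sum = 0
For i = 1 To rows * cols
  sum = sum + weight[i] * pixel[i]            ' classical weighted sum
EndFor
TextWindow.WriteLine("weighted sum = " + sum)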