Creation of SB-Neuron. Ours. Branded. (v2)
#21
LitDev,
Thank you very much for discussing this topic with me.
The good thing about our public discussion is that it offers different ideas for people to think about for themselves.

I understand what is happening in my SB-Neurons now. But I do my work this way because it is interesting, and it makes the LOGICAL CHAIN of reasoning visible to other people:
  1. collect training data into a rigid array;
  2. replace the array data with mathematical equations that describe these arrays with sufficient accuracy;
  3. obtain a trained neuron whose ACTIVATION FUNCTION best matches the training data (a toy sketch follows after this list).
I really dislike the decision to distort the input data using weights to fit it into the fixed shape of a constant activation function.  Huh
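For example, here is a toy sketch of steps 1-3 in Small Basic (my own made-up data, fitting a simple straight line y = a*x + b by least squares; a real SB-Neuron could use more complex equations):

' Step 1: collect training data into rigid arrays.
n = 5
For i = 1 To n
  x[i] = i
  y[i] = 0.2 * i + 0.1   ' stand-in for measured training data
EndFor

' Step 2: replace the array data with an equation (least-squares line).
sx = 0
sy = 0
sxy = 0
sxx = 0
For i = 1 To n
  sx = sx + x[i]
  sy = sy + y[i]
  sxy = sxy + x[i] * y[i]
  sxx = sxx + x[i] * x[i]
EndFor
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

' Step 3: the trained neuron now answers from the equation alone.
input = 3.5
output = a * input + b
TextWindow.WriteLine("SB-Neuron(" + input + ") = " + output)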
We'll see what we can achieve.
But I'm sure it will be interesting.  Smile
#22
I think there are around 100 variables and arrays here - won't they reduce the speed of the ANN?
ZS
#23
The way I see it is that we want to find a correlation between input and output.  We think of the ANN as an analogue of a brain, where neurons are connected and the strength of a connection increases as it is used more often.  The strength of a connection is its weight in our model.

If we have only one set of connections between input and output, then the output is a linear function of the input.  If we have multiple hidden layers, then the ANN is capable of a non-linear response.

We want each successive layer to add increasing non-linearity (this is what enables an ANN to 'see' complex patterns, just like our brains do).  However, we would like each layer not to be over-dominated by the previous layers.  For example, a very large weight shouldn't over-saturate a node so that all we see from then on is the one dominant weighted node - this is analogous to one node out-shining all those around it, so we lose the detail coming from the other nodes.  The activation functions have the role of 'normalising' all node values to between 0 and 1 so that each layer of the network can add more subtlety to the system.

In my understanding, the activation function is analogous to controlling the size of the signal reaching a neuron, which determines whether it fires itself.  We don't want to swamp it if we want to capture complexity.  The activation function does not itself carry or hold information; rather, it acts to keep the contrast nice in each layer.
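As a toy illustration (made-up weights and inputs), here is one node in Small Basic - the sigmoid squashes any weighted sum into the range 0 to 1, so no single node's output can dwarf its neighbours:

' One node: weighted sum of inputs, then a sigmoid activation.
w[1] = 0.4
w[2] = -1.2
w[3] = 2.0
inp[1] = 0.5
inp[2] = 0.9
inp[3] = 0.1
sum = 0
For i = 1 To 3
  sum = sum + w[i] * inp[i]
EndFor
' Sigmoid: 1 / (1 + e^(-sum)) always lands strictly between 0 and 1.
out = 1 / (1 + Math.Power(2.718281828, -sum))
TextWindow.WriteLine("node output = " + out)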

Again, I am probably not clear in my explanation, especially across a change in language, but at least it is an interesting discussion.
The following 1 user Likes litdev's post:
  • z-s
#24
To LitDev,
It seems to me that I understand you correctly.
But I see one specific place in the chain of patterns during the operation of the neuron.
This place can be clearly seen in this image:
[image: diagram of a classic neuron, with the weighted sum "Sum" (e) on the x-axis of the activation function graph]
In this image, you can see that "Sum" (e) is a simple coordinate on the x-axis of the activation function graph.
You can call this "Sum" by any name.
You can give "Sum" any meaning and perform any operations on it. But you always want that, whenever a combination of values such as "ABCDEF" appears at the inputs of a neuron, the neuron EVERY TIME sets its output to, for example, 0.8.

You don't care how this is achieved. You need this: if input = "ABCDEF", then output = 0.8.
So WHY should I use weights to change the value "ABCDEF" into "OPQRST" just to get 0.8 out of the sigmoid, if I can immediately write into my neuron that "ABCDEF" = 0.8?
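In Small Basic terms, the direct mapping I am talking about is just an associative array (the values here are invented):

' Store the desired answer directly against the input pattern.
answer["ABCDEF"] = 0.8
answer["UVWXYZ"] = 0.3
input = "ABCDEF"
TextWindow.WriteLine("output = " + answer[input])   ' prints 0.8, no weights involved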

If I need a continuous activation function, then I must create it in exactly the form that corresponds to the true data.

This is my goal right now.  Shy
#25
(translated by Google translator)

Hi all.
Today I woke up feeling completely confident that we should not try to create a neuron that can do ANY job well.
What facts do we have?

At the moment, creating and training a neural network is one separate operation. Using the neural network is another separate operation.
Each operation solves its own tasks.
But the classical neuron is designed in such a way that it itself has to both learn and work.
( as a result, it does both the first and the second poorly )

Now we are creating a neural network that should not self-learn.
Therefore, we have the opportunity to create an SB-Neuron that will work very well, but will not learn at all.
The design of the SB-Neurons in our SB-Neural Network will be determined by the TRAINING PROGRAM.

Our training program will:
  1. collect training information;
  2. analyze and process this information;
  3. create an activation function for the SB-Neurons that best matches the received training information.
Then the created activation functions will be placed into the SB-Neurons, and the network will be ready for use.  Cool
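As a toy sketch of this division of labour in Small Basic (the target curve y = x * x and the table size are my own invented stand-ins for real training information):

' Training program: sample the target response into an activation table.
points = 10
For k = 0 To points
  ax[k] = k / points
  ay[k] = ax[k] * ax[k]   ' stand-in for processed training information
EndFor

' Working SB-Neuron: no learning, just linear interpolation in the table.
input = 0.37
k = Math.Floor(input * points)
If k = points Then
  k = points - 1   ' keep the top edge inside the table
EndIf
t = input * points - k
output = ay[k] + t * (ay[k + 1] - ay[k])
TextWindow.WriteLine("SB-Neuron(" + input + ") = " + output)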

( Cool. Right? )  Big Grin
The following 1 user Likes AbsoluteBeginner's post:
  • litdev
#26
(translated by Google translator)

Hi all.  Shy

I wanted to try to create an SB-Neural Network for the task of converting pixel color from RGB to HSL.
To begin with, I selected only the "H" parameter for the neural network output.
I plotted the required activation function for the case when no preliminary preparation of the input data is carried out.
My mouth dropped open in surprise when I saw this graph.
I posted two screenshots on my OneDrive so that people who want to can take a look.

https://1drv.ms/f/s!AnoSlTzMqlL6jNx6H4Qi...w?e=8D73FE

The screenshots show the same graph, but with different scales on the X-axis.
The first screenshot shows one of the thirty-two sections of the chart.
The second screenshot shows the first four such sections.
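For context on why the graph is so jagged: the "H" channel is a branching, piecewise formula, so the activation function a single neuron would need inherits all those corners. A standard textbook RGB-to-H conversion (with r, g, b scaled to 0..1; the sample values are mine) looks roughly like this in Small Basic:

' Hue (H) from RGB - note how many separate branches the formula has.
r = 0.8
g = 0.2
b = 0.4
max = Math.Max(r, Math.Max(g, b))
min = Math.Min(r, Math.Min(g, b))
delta = max - min
If delta = 0 Then
  h = 0   ' grey: hue is undefined, so use 0
ElseIf max = r Then
  h = 60 * ((g - b) / delta)
ElseIf max = g Then
  h = 60 * ((b - r) / delta + 2)
Else
  h = 60 * ((r - g) / delta + 4)
EndIf
If h < 0 Then
  h = h + 360   ' wrap negative angles into 0..360 degrees
EndIf
TextWindow.WriteLine("H = " + h)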

I wonder how much memory you wouldn't mind allocating to store the activation function of a neural network that can perform such a conversion with sufficient quality?  Wink
#27
I added 2 more screenshots to OneDrive.

I have a feeling that this task can only be solved by breaking it down into smaller tasks.  Huh
#28
(translated by Google translator)

Hello.  Shy
My thoughts about the structure of our SB-Neuron led me to splines.

As a starting point, the question in my head right now is: "What happens if we insert a custom Bezier curve into a classic neuron as the activation function, instead of a sigmoid?" And, during training, we would change not only the input data weights, but also the shape of the Bezier curve.
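To make the idea concrete, here is a rough sketch in Small Basic of a cubic Bezier used as an activation function (the control values cy[0]..cy[3] are arbitrary placeholders I invented; training would adjust them along with the weights):

' Cubic Bezier as an activation function: out = B(t) for t in 0..1.
cy[0] = 0
cy[1] = 0.1
cy[2] = 0.9
cy[3] = 1
t = 0.6   ' the neuron's summed input, scaled into 0..1
u = 1 - t
out = u*u*u * cy[0] + 3*u*u*t * cy[1] + 3*u*t*t * cy[2] + t*t*t * cy[3]
TextWindow.WriteLine("activation(" + t + ") = " + out)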

What do you think will come of this?  Blush
#29
Hi,

Some fun Smile, but unfortunately not a useful ANN Sad
#30
(10-01-2024, 05:26 PM)litdev Wrote: Hi,

Some fun, but unfortunately not a useful ANN

But in each case, it depends on the user.  Cool
History knows many examples where human ingenuity turned a useless thing into a valuable tool.
Remember, at least, the stone.
With a little ingenuity we got axes, knives and spearheads.  Big Grin
The following 1 user Likes AbsoluteBeginner's post:
  • litdev

