Creation of SB-Neuron. Ours. Branded.(v2) - Printable Version

+- Small Basic Forum (https://litdev.uk/mybb)
+-- Forum: Small Basic (https://litdev.uk/mybb/forumdisplay.php?fid=1)
+--- Forum: Discussion (https://litdev.uk/mybb/forumdisplay.php?fid=4)
+--- Thread: Creation of SB-Neuron. Ours. Branded.(v2) (/showthread.php?tid=133)
RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-27-2024

LitDev, thank you very much for discussing this topic with me. The good thing about our public discussion is that it offers different ideas for people to think about for themselves. I understand what is happening in my SB-Neurons now, but I do this work this way because it is interesting and because it visualizes for other people the LOGICAL CHAIN of reasoning. But I'm sure it will be interesting.
RE: Creation of SB-Neuron. Ours. Branded.(v2) - z-s - 09-27-2024

I think there are about 100 variables and arrays here. Won't they reduce the speed of the ANN?

RE: Creation of SB-Neuron. Ours. Branded.(v2) - litdev - 09-27-2024

The way I see it is that we want to find a correlation between input and output. We think of the ANN as an analogue of a brain, where neurons are connected and the strength of a connection increases as it is used more often. The strength of a connection is its weight in our model.

If we have only one set of connections between input and output, then the output is a linear function of the input. If we have multiple hidden layers, then the ANN is capable of a non-linear response. We want each successive layer to add increasing non-linearity (this is what enables an ANN to 'see' complex patterns, just like our brains do). However, we would like each layer not to be over-dominated by the previous layers. For example, a very large weight shouldn't over-saturate a node so that all we see from then on is the one dominant weighted node; that is analogous to one node out-shining all those around it, so that we lose the detail coming from the other nodes.

The activation functions have the role of 'normalising' all node values to between 0 and 1 so that each layer of the network can add more subtlety to the system. In my understanding, the activation function is analogous to controlling the size of the signal reaching a neuron, which determines whether it fires itself. We don't want to swamp it if we want to capture complexity. The activation function does not itself carry or hold information; rather, it acts to keep the contrast nice in each layer.

Again, I am probably not clear in my explanation, especially across a change in language, but at least it is an interesting discussion.
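To make litdev's weighted-sum-plus-activation picture concrete, here is a minimal Small Basic sketch of a single classic neuron; the inputs, weights and bias are invented values for illustration, not anything trained:

' A minimal sketch of a classic neuron: weighted sum plus sigmoid.
' All values here are made-up illustrations, not trained weights.
input[1] = 0.5
input[2] = 0.8
input[3] = 0.1
weight[1] = 0.4
weight[2] = -0.6
weight[3] = 0.9
bias = 0.2

sum = bias
For i = 1 To 3
  sum = sum + input[i] * weight[i]
EndFor

' The sigmoid squashes any sum into the range (0, 1),
' which is the 'normalising' role described above.
e = 2.718281828
out = 1 / (1 + Math.Power(e, -sum))
TextWindow.WriteLine("Neuron output: " + out)

Whatever the weighted sum comes out as, the sigmoid maps it into the range 0 to 1; the weights only decide where on the sigmoid's x-axis the sum lands.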
RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-27-2024

To LitDev, it seems to me that I understand you correctly. But I see one specific place in the chain of patterns during the operation of the neuron. This place can be clearly seen in this image:

In this image, you can see that "Sum" (e) is simply a coordinate on the x-axis of the activation function graph. You can call this "Sum" by any name. You can give "Sum" any meaning and perform any operations on it. But you always want that every time a combination of values, for example "ABCDEF", appears at the inputs of a neuron, the neuron sets the output to, for example, 0.8. You don't care how this is achieved. You need: if input = "ABCDEF", then output = 0.8.

So WHY should I, using weights, change the value of "ABCDEF" to "OPQRST" to get 0.8 from the sigmoid, if I can immediately write into my neuron that "ABCDEF" = 0.8? If I need a continuous activation function, then I must create it in exactly the form that corresponds to the truth. This is my goal right now.

RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-28-2024

(translated by Google Translator)

Hi all. Today I woke up feeling completely confident that we should not try to create a neuron that can do ANY job well. What facts do we have? At the moment, the process of creating and training a neural network is one separate operation, and the process of using a neural network is another separate operation. Each operation solves its own tasks. But the classical neuron is designed in such a way that it has to both learn and work (and as a result, it does both poorly).

Now we are creating a neural network that should not self-learn. Therefore, we have the opportunity to create an SB-Neuron that will work very well, but will not learn at all. The design of the SB-Neurons in our SB-Neural Network will be determined by the TRAINING PROGRAM. Our training program will:

( Cool. Right? )
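A hypothetical sketch of the idea above, assuming the table would be filled in beforehand by the training program: the working SB-Neuron does no arithmetic at all, it only looks the answer up. The keys and values here are invented:

' Hypothetical sketch: the training program fills the table in advance,
' so the working SB-Neuron only has to look answers up.
response["ABCDEF"] = 0.8
response["GHIJKL"] = 0.3

input = "ABCDEF"
output = response[input]
TextWindow.WriteLine("SB-Neuron output: " + output)

The trade-off is memory: a table can only hold the input combinations it was trained on, and approximating a continuous function this way needs many stored points.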
RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-30-2024

(translated by Google Translator)

Hi all. I wanted to try to create an SB-Neural Network for the task of converting pixel color from RGB to HSL. To begin with, I selected only the "H" parameter for the neural network output. I plotted the required activation function for the case where no preliminary preparation of the input data is carried out. My mouth dropped open in surprise when I saw this graph. I posted two screenshots on my OneDrive so that people who want to can take a look:

https://1drv.ms/f/s!AnoSlTzMqlL6jNx6H4QimMtZKF62Sw?e=8D73FE

The screenshots show the same graph, but with different scales on the X-axis. The first screenshot shows one of the thirty-two sections of the chart. The second screenshot shows the first 4 such sections. I wonder how much memory you wouldn't mind allocating to store the activation function of a neural network that can perform such a conversion with sufficient quality?

RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 09-30-2024

I added 2 more screenshots to OneDrive. I have a feeling that this task can only be solved by breaking it down into smaller tasks.

RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 10-01-2024

(translated by Google Translator)

Hello. My thoughts about the structure of our SB-Neuron led me to splines. As a starting point, the question in my head right now is: "What happens if we insert a custom Bezier curve into a classic neuron instead of a sigmoid as the activation function?" During training, we would change not only the input weights but also the shape of the Bezier curve. What do you think will come of this?

RE: Creation of SB-Neuron. Ours. Branded.(v2) - litdev - 10-01-2024

Hi, some fun, but unfortunately not a useful ANN.

RE: Creation of SB-Neuron. Ours. Branded.(v2) - AbsoluteBeginner - 10-01-2024

(10-01-2024, 05:26 PM)litdev Wrote: Hi,

But in each case it depends on the user. History knows many examples when human ingenuity turned a useless thing into a valuable tool. Remember, at least, the stone: with a little ingenuity we got axes, knives and spearheads.
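For anyone who wants to experiment with the Bezier idea from this thread, here is a hypothetical Small Basic sketch of a cubic Bezier used as a tunable activation shape; the four control points are invented values, and in the proposal above the training program would be the thing that moves them:

' Hypothetical sketch: a cubic Bezier as an adjustable activation curve.
' P0 and P3 pin the curve at (0,0) and (1,1); training would move P1 and P2.
px[0] = 0
py[0] = 0
px[1] = 0.4
py[1] = 0
px[2] = 0.6
py[2] = 1
px[3] = 1
py[3] = 1

For n = 0 To 10
  t = n / 10
  u = 1 - t
  ' Standard cubic Bezier evaluation at parameter t
  bx = u*u*u * px[0] + 3*u*u*t * px[1] + 3*u*t*t * px[2] + t*t*t * px[3]
  by = u*u*u * py[0] + 3*u*u*t * py[1] + 3*u*t*t * py[2] + t*t*t * py[3]
  TextWindow.WriteLine(bx + " -> " + by)
EndFor

Note that a Bezier curve is parametric: the loop produces points (x, y) along the curve rather than y as a direct function of x, so a working SB-Neuron would still need to find the y belonging to a given input x, for example by sampling t finely and interpolating.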