This version (XXMZ135.000-0) uses LDFastArray, which is still not an optimal container and carries some overhead.
On my PC, using a 50x50 grid with a 10% initial fill, your original model takes about 106 s, the modification above about 55 s, and if I comment out AffichageTableau it takes about 12 s. That is still very slow compared to a native program without SB's inherent overheads, but we start to learn where the performance can be gained. Would we have learned this if we had just accepted the run time as it was? Maybe I could write some carefully optimised GPU code and do it in 1 ms, or with a fancy GPU in 1 µs.
PS - My basic C++ code runs in about 12 ms for the same 50x50, 10% case using your algorithm, with no parallelisation or special vectorisation (and no visualisation). That is maybe comparable to QB64, which I believe translates the source to C++ and compiles that.
In the C++ version, parallelisation overheads mean it doesn't help for such a small problem; a 1000x1000, 10% grid for 500 steps runs in 0.5 s with parallelisation and 4.2 s without on my PC.