This page is meant to help you "play" with various algorithms, random data sets, etc., and benchmark their performance on your computer (CPU vs GPU) relative to the data size.

[SECURITY WARNING] : This playground does an "evil" eval on the input functions, so obviously do not carelessly copy and paste things here.

Our main site is found at https://gpu.rocks

Enjoy =)

~ GPU.JS team

Step 1) Set up your input parameter generators

Set up your various argument generator settings here.

- Number of parameters : The number of arguments to pass to the final GPU.js kernel function
- Sample size : The size used for the sample display
- Random seed : The seed value used by the random function
(A pseudo-random generator is used to make testing and debugging the parameter generator functions easier and consistent.)

Each parameter function is passed 2 arguments (see the sketch after this list):

- size : The sample size used for this iteration
- rand : A seeded pseudo-random generator producing floating-point values between 0 and 1, specific to this sample size and parameter count
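
For illustration, here is a minimal sketch of a parameter generator, assuming rand is callable and that a flat array of size values is an acceptable return value (both are assumptions, not documented playground requirements):

    // Hypothetical parameter generator: returns a flat array of `size`
    // pseudo-random values, using the seeded `rand` generator so that
    // repeated runs with the same seed produce the same data.
    const parameterGenerator = function (size, rand) {
        const values = [];
        for (let i = 0; i < size; i++) {
            values.push(rand());
        }
        return values;
    };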

Additionally, a dimension generator function is required to declare the dimensions used; it receives 1 argument (a sketch follows this list):

- size : The sample size used for this iteration
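
A dimension generator could then look like the sketch below, assuming a one-dimensional output (the exact return format expected by the playground may differ):

    // Hypothetical dimension generator: declares a 1D output whose length
    // matches the sample size for this iteration.
    const dimensionGenerator = function (size) {
        return [size];
    };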


Step 2) Program your kernel function

Code out the kernel, and do note the following tips (a reference sketch follows the list).

- Parameter functions are automatically called to provide the arguments
- if/else is expensive on the GPU, and if/else inside loops is even more expensive
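
As a reference point, here is a minimal sketch of a GPU.js kernel in the style of the library's matrix-multiply example; the matrix size (512), the constant test data, and the variable names are placeholders rather than anything generated by this playground:

    // Build two 512x512 test matrices filled with placeholder data.
    const a = [];
    const b = [];
    for (let y = 0; y < 512; y++) {
        a.push(new Array(512).fill(1));
        b.push(new Array(512).fill(2));
    }

    // Kernel: one thread per output cell, no if/else branches.
    const gpu = new GPU();
    const multiplyMatrices = gpu.createKernel(function (a, b) {
        let sum = 0;
        for (let i = 0; i < 512; i++) {
            sum += a[this.thread.y][i] * b[i][this.thread.x];
        }
        return sum;
    }).setOutput([512, 512]);

    const result = multiplyMatrices(a, b); // 512x512 array of dot products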



Step 3) BENCH! CPU vs GPU

Set up your sample size lower and upper bounds, the increment size, and the number of benchmark iterations. Then bench it!

Generally speaking, however, the following are common learning notes (a timing sketch follows the list).

- Due to the non-negligible overhead of running the WebGL engine, small data sample sizes (such as <= 250) tend to be slower on the GPU. The cut-off point varies between kernels and machines.

- There is a small data transfer cost, paid by the CPU, to move data from JS to the GPU, and it is proportional to the data size. As such, extremely simple kernels (such as A + B) will always be slower on the GPU.
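
To make these notes concrete, here is a rough sketch (not this playground's actual harness) that times the same simple GPU.js kernel in forced CPU mode versus GPU mode while sweeping the sample size; the kernel, sizes, and iteration counts are all illustrative:

    const { GPU } = require('gpu.js'); // in the browser build, GPU is available as a global instead

    // Hypothetical helper: builds an element-wise multiply kernel of the given size.
    function makeKernel(gpu, size) {
        return gpu.createKernel(function (a, b) {
            return a[this.thread.x] * b[this.thread.x];
        }).setOutput([size]);
    }

    // Hypothetical helper: average run time in milliseconds over `iterations` calls.
    function timeKernel(kernel, a, b, iterations) {
        kernel(a, b); // warm-up call: the first run includes kernel compilation
        const start = Date.now();
        for (let i = 0; i < iterations; i++) {
            kernel(a, b);
        }
        return (Date.now() - start) / iterations;
    }

    // Sweep sample sizes to look for the CPU vs GPU crossover point.
    for (let size = 64; size <= 65536; size *= 4) {
        const a = new Float32Array(size).fill(1.5);
        const b = new Float32Array(size).fill(2.5);
        const cpuTime = timeKernel(makeKernel(new GPU({ mode: 'cpu' }), size), a, b, 100);
        const gpuTime = timeKernel(makeKernel(new GPU({ mode: 'gpu' }), size), a, b, 100);
        console.log(size, 'CPU:', cpuTime.toFixed(3), 'ms', 'GPU:', gpuTime.toFixed(3), 'ms');
    }

For a kernel this simple, the GPU column will typically stay slower across the whole sweep, in line with the data transfer note above; a heavier kernel is needed to see the crossover.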

The results are charted as "Average time taken" and "GPU performance improvement".