
Archive for March, 2014

24 Mar

No more CUDA in the tag line…

Yes, I removed CUDA from the tag line, and there are reasons behind this choice.

No, CUDA is not an obsolete or bad technology. In fact CUDA is cutting-edge for GPGPU computing and, from my point of view, the best choice for High-Performance Computing, clusters, research, etc. There's no doubt in my mind that CUDA is, and will remain, the platform for HPC for the decade to come!

As a Mac user, all my Macs now have an AMD GPU and/or an Intel iGPU. My PC development box has an AMD GPU installed, and no nVidia GPU. That was my choice for this one, because I am switching to OpenCL.

I am not targeting HPC, I am targeting everyone's computer, and in that sense CUDA is not appropriate anymore. OpenCL runs very well on Intel iGPUs, AMD GPUs and also nVidia GPUs. It also runs perfectly on AMD CPUs and Intel CPUs (using AVX2 on Haswell with OS X!).
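To give an idea of how transparent this is in practice, here is a minimal host-side sketch, using only the plain OpenCL C API and nothing specific to my own code, that just enumerates every platform and device present on the machine. The same source lists an Intel iGPU, an AMD GPU, an nVidia GPU or a CPU device, depending on what is installed.

/* Minimal OpenCL device enumeration sketch (error checking omitted for brevity).
 * Build with `cc list_devices.c -framework OpenCL` on OS X,
 * or `cc list_devices.c -lOpenCL` elsewhere. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);
        printf("Platform: %s\n", pname);

        cl_device_id devices[16];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("  Device: %s\n", dname);
        }
    }
    return 0;
}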

Portability is a concern, and an open platform is also a plus (including for personal choices), so I am using OpenCL, and no more CUDA. Don't think CUDA is a bad technology: it's the best GPGPU solution for HPC, and the best proprietary GPGPU solution (but limited to nVidia GPUs, and that's the point!).

20 Mar

Fractal Forward

Fractal Forward is the name of my current Chess Engine. It's a strange beast: it doesn't do things the way they are usually done, and that's interesting in many ways.

Forward, like the preceding Chess Engine "Fast Forward": going deep into the tree. That's totally classical, you will find it in any major Chess Engine today, nothing to worry about, but at current GPU speed it goes deeper and deeper. Yet Fast Forward is dumber than the other engines: it sees more, but doesn't understand. "The sage points to the moon and the idiot looks at the finger"?

Fractal because it considers the tree as a dynamic tree, and moreover its understanding of the tree is itself dynamic, changing over time, over iterations, with each in-depth iteration being identical to its tree iteration. What does that mean in practice?

Any major Chess Engine today evaluates positions and intermediate nodes with algorithms different from the one used for the tree itself: position evaluation, quiescence, quick exchange evaluation, extending the search on some nodes, each of these is a different algorithm, while all clearly trying to do the same thing: see deeper into the tree without parsing it. If quick exchange evaluation, or the others, worked so well, why not use them at the root of the tree? Because they don't really work at all: they just hide the fact that we don't have the processing resources to parse the tree and get a correct view of what's happening, and they do marvels at this, at the cost of algorithmic and implementation complexity. Something that translates badly to GPU!
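To make the contrast concrete, here is a toy sketch of that classical, heterogeneous structure. It is only an illustration, not code from any real engine: the "game" is a fake tree where a position is just an unsigned integer, so the example stays self-contained. The point is the shape: an alpha-beta search for the inner nodes that hands the frontier over to a different routine, a quiescence-like search, which itself falls back to a static evaluation.

/* Toy heterogeneous search: one algorithm for inner nodes, another at the frontier. */
#include <stdio.h>

#define BRANCHING 4

static int toy_eval(unsigned pos)              /* stand-in for a static evaluation */
{
    pos ^= pos >> 13; pos *= 0x9E3779B1u; pos ^= pos >> 16;
    return (int)(pos % 201) - 100;             /* pseudo-random score in [-100, 100] */
}

static unsigned toy_move(unsigned pos, int i)  /* stand-in for make_move()          */
{
    return pos * BRANCHING + 1 + (unsigned)i;
}

/* A different algorithm for the leaves: only "noisy" children are searched. */
static int quiescence(unsigned pos, int alpha, int beta, int qdepth)
{
    int stand_pat = toy_eval(pos);
    if (stand_pat >= beta) return beta;
    if (stand_pat > alpha) alpha = stand_pat;
    if (qdepth == 0) return alpha;              /* safety cap on the toy tree */
    for (int i = 0; i < BRANCHING; ++i) {
        unsigned child = toy_move(pos, i);
        if (toy_eval(child) <= stand_pat) continue;   /* "quiet" move: not searched */
        int score = -quiescence(child, -beta, -alpha, qdepth - 1);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}

/* The main search switches algorithm once the nominal depth runs out. */
static int alpha_beta(unsigned pos, int depth, int alpha, int beta)
{
    if (depth == 0)
        return quiescence(pos, alpha, beta, 4);       /* <-- the algorithm switch */
    for (int i = 0; i < BRANCHING; ++i) {
        int score = -alpha_beta(toy_move(pos, i), depth - 1, -beta, -alpha);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}

int main(void)
{
    printf("root score: %d\n", alpha_beta(0u, 4, -1000, 1000));
    return 0;
}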

On the other hand, if we have a good algorithm to traverse the tree, why not apply it recursively at each node? Recursion, with a deeper and deeper view? Exactly as we could look at a sponge closer and closer, it's an endless task, but what's interesting is that if we have MORE processing power thanks to the GPU, and a simple effective algorithm, then instead of implementing specific algorithms for differently characterized nodes of the tree, we can just throw the tree at it, and it will grow naturally with a simple and unique view, one that is homogeneous whether you are at the root or considering an 18-ply-deep move.
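And here is the same toy game searched the homogeneous way: a single routine, applied recursively at every node, where "seeing deeper" is nothing more than calling it again with a bigger budget. Again, this is only a sketch of the idea, not code from Fractal Forward.

/* Toy homogeneous search: one and only one rule, at the root or 18 plies deep. */
#include <stdio.h>

#define BRANCHING 4

static int toy_eval(unsigned pos)              /* same fake evaluation as above */
{
    pos ^= pos >> 13; pos *= 0x9E3779B1u; pos ^= pos >> 16;
    return (int)(pos % 201) - 100;
}

static unsigned toy_move(unsigned pos, int i)  /* same fake move generator as above */
{
    return pos * BRANCHING + 1 + (unsigned)i;
}

/* The single, uniform rule: a node's value is either its own evaluation
 * (no budget left) or the negamax of the same rule applied to its children. */
static int uniform_search(unsigned pos, int budget)
{
    if (budget == 0)
        return toy_eval(pos);
    int best = -1000;
    for (int i = 0; i < BRANCHING; ++i) {
        int score = -uniform_search(toy_move(pos, i), budget - 1);
        if (score > best) best = score;
    }
    return best;
}

int main(void)
{
    /* Growing the view "fractally": the same call, just with a larger budget. */
    for (int budget = 1; budget <= 6; ++budget)
        printf("budget %d -> root score %d\n", budget, uniform_search(0u, budget));
    return 0;
}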

Interesting idea?