Fractal Forward is the name of my current chess engine. It's a strange beast: it doesn't do things the way they used to be done, and that's interesting in many ways.
Forward, like the preceding chess engine "Fast Forward", because it goes deep into the tree. That part is totally classical, you will find it in any current major chess engine, nothing to worry about; but at current GPU speeds, it goes deeper and deeper. Fast Forward is dumber than the other engines, though: it sees more, but doesn't understand. "When the sage points at the moon, the idiot looks at the finger"?
Fractal because it considers the tree as a dynamic tree; moreover, its understanding of the tree is itself dynamic, changing over time, over iterations, with each in-depth iteration being identical to its tree iteration. What does that mean in practice?
Every current major chess engine evaluates positions and intermediate nodes with algorithms that differ from the tree search itself: static position evaluation, quiescence search, static exchange evaluation, extensions for nodes that need a deeper look. Each of these is a different algorithm, while all of them are clearly trying to do the same thing: see deeper into the tree without parsing it. If static exchange evaluation (or any of the others) works so well, why not use it at the root of the tree? Because they don't really work at all; they just hide the fact that we don't have the processing resources to parse the tree and get a correct view of what's happening. They work marvels at that, but at the cost of algorithmic and implementation complexity. Something that translates badly to GPU!
On the other side, if we have one good algorithm to traverse the tree, why not apply it recursively at each node? Recursion, with a deeper and deeper view? Exactly as if we looked at a sponge closer and closer: it's an endless task. What's interesting is that if we have MORE processing power thanks to the GPU, and one simple, effective algorithm, then instead of implementing specific algorithms for differently characterized nodes of the tree, we can just throw the tree at it and let it grow naturally, with a simple, unique view that is homogeneous whether you are at the root or considering a move 18 plies deep.
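The idea above can be sketched as a plain recursive search that is literally the same function at every node, re-run with a deeper and deeper view. This is only an illustrative toy of the principle (the tree, scores, and function names are mine, not Fractal Forward's actual code):

```python
# Toy game tree: node -> list of children; leaves carry a static score.
# All names here are illustrative, not from the real engine.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
SCORES = {"a1": 3, "a2": -1, "b1": 0, "b2": 5}

def static_eval(node):
    return SCORES.get(node, 0)

def negamax(tree, node, depth):
    """Identical logic at the root and at any inner node: no quiescence,
    no exchange evaluator, no special case."""
    children = tree.get(node)
    if depth == 0 or not children:
        return static_eval(node)
    return max(-negamax(tree, child, depth - 1) for child in children)

def search(tree, root, max_depth):
    """Re-run the same uniform search with a deeper and deeper view."""
    score = None
    for depth in range(1, max_depth + 1):
        score = negamax(tree, root, depth)
    return score
```

The point is that no node gets a special-case evaluator: the root and an 18-ply-deep node are handled by the exact same code, and "seeing deeper" just means running one more iteration.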
I am back to OpenCL development. Having worked for a big Canadian media company in 2013, I will have time in 2014 to work on OpenCL, and I think it's time for OpenCL to go mainstream!
The signal is Apple's commitment to OpenCL technology with the new Mac Pro and its dual GPUs. Maybe these two GPUs are overstated or overrated on many websites, with performance levels ranging from the current Radeon R9 280X ($300 street price) to the Radeon HD 7990 on a customized Mac Pro. This is not the level of performance expected from a $4000+ computer, especially with non-pro hardware (no ECC memory, for example).
The main point is software. Apple's new Final Cut knows how to use OpenCL across at least two GPUs, and that is big news: it is much more complex to handle multiple OpenCL devices than just one, to synchronize them and use them at their best. Apple is making a strong point with Final Cut, showing how OpenCL and multiple GPUs can unload the CPU and offer unprecedented performance levels to software that makes good use of OpenCL.
At the same time, Intel's offering for their Haswell integrated GPUs is mature, with impressive hardware (the Iris Pro 5200 is an incredible iGPU for small datasets) and solid OpenCL drivers. Yes, AMD offers good iGPUs too, but we are all waiting for them to be built on GCN 1.1, with the same incredible memory/cache bandwidth (sorry AMD, you seem to lag behind on iGPUs).
nVidia is still playing its own game with Kepler, which is anything but impressive for real-world GENERAL PURPOSE GPU usage, but it may unleash a new architecture in 2014 that could put it back into the game. CUDA is dead outside the HPC world; OpenCL is leading the GPGPU world, which is what I was expecting. nVidia must come back with strong OpenCL development tools (based on its current impressive CUDA tools) to re-establish itself as a leader in GPGPU for all. I remember my GeForce 8800 GTS 320MB, the first generation of GPGPU and an impressive performer for its time: a game changer.
I wish you all an awesome 2014. I know that for me and other OpenCL developers, there will be incredible opportunities.
This is good news: Apple is communicating about the future Mac Pro's raw processing power using OpenCL, a GPGPU technology that Apple created and then gave to the Khronos Group. It's a good step forward.
The step backward is that the 2008 Mac Pro supports two AMD Radeon HD 7990s, offering 16 Tflops of raw power, more than 2X the power of the future Mac Pro. And this is disappointing, as usual with Apple announcements…
Adapteva presented a Kickstarter project, Parallela, a $99 board that promises supercomputing for anyone, with a highly parallel RISC engine. Designed to be programmed using OpenCL drivers and compilers, it's an alternative to other OpenCL devices, including GPUs, the IBM Cell, etc.
The project is interesting at first sight, but I wonder who could really be interested in it, even at a $99 budget.
The Parallela board promises 32 Gflops peak for approximately 5W (maybe more in some cases), while a quad-core PC with an OpenCL GPU will offer you 4000 Gflops (125X) for 500W (100X). The PC is easily 25% more power-efficient with a single GPGPU, and could offer as much as 2X more power efficiency (Gflops/Watt) with 3 or 4 GPUs!
A $1000 PC configuration with a high-end GPU costs 10X the price of the Parallela board while delivering up to 125X the performance level: it's over 12X more effective in Gflops/$!
And it gets even worse for Parallela if you consider a high-end PC with 3 or 4 GPUs!
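A quick back-of-the-envelope check of the figures above (these are the rough 2013 estimates quoted in the text, not measured benchmarks):

```python
# Quoted estimates: Parallela 32 Gflops / 5W / $99;
# quad-core PC with one OpenCL GPU: 4000 Gflops / 500W / $1000.
parallela = {"gflops": 32.0, "watts": 5.0, "price": 99.0}
pc = {"gflops": 4000.0, "watts": 500.0, "price": 1000.0}

def per_watt(d):
    return d["gflops"] / d["watts"]

def per_dollar(d):
    return d["gflops"] / d["price"]

# 8.0 / 6.4 = 1.25 -> the PC is ~25% more power-efficient
print(per_watt(pc) / per_watt(parallela))

# 4.0 / 0.323 ~= 12.4 -> ~12X more Gflops per dollar
print(per_dollar(pc) / per_dollar(parallela))
```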
An alternative at the $99 price point
You might instead invest $99 in an AMD Radeon HD 7750, which supports OpenCL (and some other tools) and offers at least 820 Gflops SP (25X faster), while adding at most 70W of power consumption.
That's 25X more Gflops per dollar invested, and almost 2X more power-efficient, if you already own a PC with an available slot! Ouch!
Development tools are available for Windows and Linux too, including open-source dev tools. Moreover, your development will run as-is on any PC or Mac with an OpenCL-enabled graphics card or CPU driver!
Parallela: supercomputing for whom?!?
The Adapteva chips are interesting if you plan to create embedded high-performance devices, but in no case could Parallela be considered a supercomputer, neither in performance level, nor in Gflops/Watt, nor in Gflops/$ invested.
Still, it's an interesting project, because new players in the parallel-computing field may be game changers in the long run. The Epiphany-IV processor, for example, is far more interesting than the Epiphany-III proposed for Parallela, with 3X more performance in the same power envelope, and thus more power-efficient than a GPGPU solution.
So why not launch Parallela with the Epiphany-IV directly?!?
nVidia was the first major GPU designer to jump onto the OpenCL wagon, a project initiated by Apple to enable cross-platform GPGPU development that is OS- and vendor-agnostic, then handed over to the Khronos Group for maintenance.
Today, AMD and Intel are big players in OpenCL support, for both their CPUs and GPUs. The new AMD Radeon GCN architecture is clearly the performance leader on OpenCL when you use complex algorithms, while the new nVidia Kepler architecture lags far behind the old Fermi architecture! Intel's 2013 CPU+GPU architecture, Haswell, is expected to beat the entry-level Kepler GT 640 in any usage (Intel's GT3 will beat it, trust me!).
nVidia is having a hard time, with an ineffective and disappointing Kepler architecture that is slower than AMD's new architecture for both 3D and GPGPU, and cannot even compete with nVidia's own 2010 architecture for GPGPU on the open playing field that is OpenCL.
nVidia, which was the OpenCL leader, is now trying everything it can to stop supporting it: removing comments and documentation from EXISTING OpenCL examples, not updating them, removing them from the SDK, etc.