I have been spied on by local and foreign agencies since the mid-2000s; I had to strengthen security on my network and isolate some of my development computers. Given the recent information Edward Snowden has given us all, I might just have lost my time, since the NSA and other agencies have incredibly efficient spying tools.
I am going back to OpenCL development, and I don't want my work handed to foreign companies, or even examined by foreign agencies…
Around the nVidia K20x (and now the K40), I have received proposals to run my OpenCL developments on supercomputers based in the USA and paid for by US agencies or the US Army. They never offered me computers running in my own country (Canada). Had I accepted, in order to put my (virtual) hands on a K20x or K40, I would have offered my code to any US agency, with a possible "leak" to US companies that might use it, or even patent it as their own.
They are working hard to gain insight into my personal work. They might already be able to spy on my main computers. I am seriously thinking about setting up a strongly protected computer for my personal projects in 2014: non-networked, bought used, without sound I/O. My code is MY code, not theirs. Happy 2014, NSA!
I am back on OpenCL development. Having worked for a big Canadian media company in 2013, I will have time in 2014 to work on OpenCL, and I think it's time for OpenCL to go mainstream!
The signal is Apple's commitment to OpenCL technology with the new Mac Pro and its dual GPUs. Maybe these two GPUs are overrated on many websites, with performance levels ranging from the (current) Radeon R9 280X ($300 street price) to the Radeon HD 7990 on a customized Mac Pro. This is not the level of performance expected from a $4000+ computer, especially with non-pro hardware (no ECC, for example).
The main point is software. Apple's new Final Cut knows how to use OpenCL across at least two GPUs, and this is big news: it is much more complex to handle multiple OpenCL devices than just one, to synchronize them and use them at their best. Apple is making a strong point with Final Cut, showing how OpenCL and multiple GPUs can offload the CPU and may offer an unprecedented performance level for software that makes good use of OpenCL.
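To make the multi-device difficulty concrete, here is a minimal sketch of the split/synchronize pattern that multi-GPU OpenCL code must follow: partition the data, launch one queue per device, wait for all of them, then merge the partial results in order. This is not real OpenCL code; the `run_kernel` function and Python threads stand in for per-device command queues, purely for illustration.

```python
import threading

def run_kernel(device_id, chunk):
    # Stand-in for enqueueing an OpenCL kernel on one device;
    # here we just square each element on a worker thread.
    return [x * x for x in chunk]

def split_across_devices(data, num_devices):
    """Partition `data`, run each chunk on its own 'device', then join."""
    chunk_size = (len(data) + num_devices - 1) // num_devices
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    results = [None] * len(chunks)

    def worker(i):
        results[i] = run_kernel(i, chunks[i])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(len(chunks))]
    for t in threads:
        t.start()
    for t in threads:   # the synchronization point: wait for every device
        t.join()
    # merge partial results back in submission order
    return [y for chunk in results for y in chunk]

print(split_across_devices(list(range(8)), 2))
# -> [0, 1, 4, 9, 16, 25, 36, 49]
```

With a single device none of this machinery exists: you enqueue, you wait, you read back. The partitioning, load balancing, and final merge are exactly what multi-GPU software like Final Cut has to get right.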
At the same time, Intel's offering for its Haswell integrated GPU is mature, with impressive hardware (the Iris Pro 5200 is an incredible iGPU for small datasets) and solid OpenCL drivers. Yes, AMD is offering good iGPUs, but we are all waiting for them to be built on GCN 1.1, with the same incredible memory/cache bandwidth (sorry AMD, you seem to lag behind on iGPUs).
nVidia is still playing its game with Kepler, which is anything but impressive in real-world GENERAL PURPOSE GPU usage, but it may unleash a new architecture in 2014 that could put it back in the game. CUDA is dead outside the HPC world; OpenCL is leading the GPGPU world, which is what I was expecting. nVidia must come back with strong OpenCL development tools (based on its current impressive CUDA tools) to re-establish itself as a leader in GPGPU for all. I remember my GeForce 8800 GTS 320MB, the first generation of GPGPU and an impressive performer for its time: a game changer.
I wish you all an awesome 2014. I know that for me and other OpenCL developers, there will be incredible opportunities!
I was on vacation in New Orleans two weeks ago, and I had the chance to visit Saint Louis Cemetery #1, where you can find the Voodoo priestess Marie Laveau, but also the chess prodigy and unofficial world chess champion Paul Morphy.
I appreciated the chess pieces that were left in his memory. This cemetery is part of the history of New Orleans; you absolutely must visit it, during the day, with a guide!
Intel stated that its Xeon Phi would be more efficient than GPGPU for HPC: the Xeon Phi would have a better Linpack-per-watt ratio and also a better Linpack-to-raw-TFlops ratio.
The first two supercomputers on the June 2013 Top500 list use the Intel Xeon Phi and the nVidia K20x respectively, so we can compare their metrics, especially their efficiency.
The first point is Linpack TFlops per watt. The Xeon Phi system delivers 33,862 Linpack TFlops with 17,808 kW (1.90 TFlops/kW), while the K20x system delivers 17,590 Linpack TFlops with 8,209 kW (2.14 TFlops/kW): the K20x-based supercomputer is 12.7% more power efficient!
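The power-efficiency figures above can be checked directly from the published June 2013 Top500 numbers (the #1 system being Tianhe-2 with Xeon Phi, the #2 being Titan with K20x):

```python
# June 2013 Top500 figures quoted above
phi_tflops, phi_kw = 33862.7, 17808.0    # Tianhe-2 (Xeon Phi)
k20x_tflops, k20x_kw = 17590.0, 8209.0   # Titan (K20x)

phi_eff = phi_tflops / phi_kw       # ~1.90 TFlops/kW
k20x_eff = k20x_tflops / k20x_kw    # ~2.14 TFlops/kW
advantage = (k20x_eff / phi_eff - 1) * 100  # K20x power-efficiency advantage

print(round(phi_eff, 2), round(k20x_eff, 2), round(advantage, 1))
# -> 1.9 2.14 12.7
```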
The second point is the ratio of raw processing power to Linpack performance, to evaluate the effectiveness of an architecture, knowing that GPUs aren't as efficient as CPUs, and that Intel claimed that with its x86 cores the Xeon Phi would be much more efficient than the nVidia K20x, delivering real-world performance closer to its theoretical peak.
Alas, on this second point, as I expected, the Xeon Phi lags behind the K20x once again, with 61.7% efficiency where the nVidia GPU obtains 64.9%, suggesting the Intel architecture is not yet mature and still needs iterations.
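These efficiency percentages are just Rmax (measured Linpack) divided by Rpeak (theoretical peak) for each system, again using the June 2013 Top500 figures:

```python
# Rmax (Linpack) vs Rpeak (theoretical peak), June 2013 Top500 list
phi_rmax, phi_rpeak = 33862.7, 54902.4    # Tianhe-2 (Xeon Phi)
k20x_rmax, k20x_rpeak = 17590.0, 27112.5  # Titan (K20x)

phi_yield = 100 * phi_rmax / phi_rpeak    # share of peak actually delivered
k20x_yield = 100 * k20x_rmax / k20x_rpeak

print(round(phi_yield, 1), round(k20x_yield, 1))
# -> 61.7 64.9
```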
Clearly, the K20x consumes less power, and its architecture (hardware and software) is more mature and efficient than Intel's Xeon Phi!
I am working on a Chess Engine (in fact, Chess Engines), and I chose to use a high-level, object-oriented scripting language!
It is totally inefficient to write a Chess Engine in a high-level language, especially an object-oriented one (and yes, I do use classes and objects, and it's fun!): they are far slower than any low-level language, and can't compare in any way to well-optimized assembly code using AVX or AVX2!
But for development, it's much more fun and convenient to use a high-level language, its tools, its ability to detect errors (including array index errors). In fact, the most important thing is not how you optimize your code but the quality of the algorithms, and for Chess that's all that matters.
Naturally, for a product, you will recode it in a low-level language (such as ANSI C, or hand-written assembly for some functions), but that is the last part of the development, not its core.
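As a toy illustration of why a high-level language is pleasant for this kind of prototyping (this is not code from my engines, just a hypothetical sketch in Python), a board class can reject an off-board square the instant you touch it, instead of silently corrupting memory the way an unchecked C array would:

```python
class Board:
    """Toy 8x8 board: high-level code catches bad indices for free."""

    def __init__(self):
        # 8 ranks x 8 files, "." marks an empty square
        self.squares = [["."] * 8 for _ in range(8)]

    def place(self, piece, rank, file):
        if not (0 <= rank < 8 and 0 <= file < 8):
            raise IndexError(f"off-board square ({rank}, {file})")
        self.squares[rank][file] = piece

board = Board()
board.place("N", 0, 1)       # knight on b1
try:
    board.place("N", 0, 8)   # off the board: caught immediately
except IndexError as e:
    print("bug caught:", e)
```

During algorithm work, an immediate, readable exception like this is worth far more than the speed you give up; the AVX-level optimization only matters once the search and evaluation are right.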
So, I am writing Chess Engines. I have chosen names that are no longer in use but will speak to people in their late forties (like myself) or older, because they are history: people dreamed about them, and they probably want to play with them again. A sense of history…
It's also a tribute to Alan Turing (war hero, marathon runner, computer scientist, AI pioneer, bad chess player, gay, genius: choose your flavor!), and also to Dan & Kathe Spracklen, who wrote the first microcomputer chess program to win a tournament and beat million-dollar minicomputers. Between them, they made chess-computing history through TuroChamp and Sargon Chess.