I was reading some interesting articles about mainframes (aka “Big Iron”), arguing that they are still relevant in our era of cloud computing, clusters of inexpensive computers, and high-performance computing available on workstations.
What makes the Mainframe special
Mainframes are specially designed and engineered to provide incredible physical uptime, with IBM claiming one downtime every 50 years of usage on some series. They are delivered as complete solutions, with a high price tag and usage licences for almost every software and hardware part, including the CPU.
The main selling point is the claim of incredibly high CPU utilization in real-world usage, which IBM and others translate into “massive throughput”. In fact, IBM refuses to compare its mainframes to other solutions, explaining that “real-world” performance is far better than benchmark results, and rarely submits its mainframes to industry-standard tests!
How to increase your CPU utilization on a cluster?
Effectively, when you build a cluster of servers to handle a task, even with large RAID bays and even SSDs, you might end up with your CPU waiting on IO, even if you plan to aggregate groups of computing/database servers with dedicated storage bays (a good starting point).
It is *NOT* because you are a dummy, nor because your storage is too slow: your computing power / storage IOPS ratio is just badly balanced, and that is the point where mainframes are largely superior!
To better balance computing power against storage IOPS, mainframes usually ship with incredibly low processing power for their cost, thus keeping their expensive CPUs at incredibly high utilization rates!
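A toy model makes this balance effect concrete (this is my own illustration with made-up numbers, not measurements from any real mainframe or cluster): for a synchronous worker that alternates between CPU work and IO waits, utilization is just busy time over total time, so a slower CPU on the same storage automatically reports a “better” utilization figure.

```python
# Hypothetical sketch: CPU utilization for a worker that alternates
# between CPU work and IO waits. All numbers are illustrative.

def cpu_utilization(cpu_ms_per_request: float, io_ms_per_request: float) -> float:
    """Fraction of wall-clock time the CPU is busy for a single
    synchronous worker: busy time / (busy time + IO wait time)."""
    return cpu_ms_per_request / (cpu_ms_per_request + io_ms_per_request)

# A fast CPU paired with slow storage sits mostly idle...
fast_cpu = cpu_utilization(cpu_ms_per_request=2, io_ms_per_request=18)   # 0.10
# ...while a slower (cheaper) CPU on the same storage reports a much
# higher utilization figure for the exact same workload.
slow_cpu = cpu_utilization(cpu_ms_per_request=12, io_ms_per_request=18)  # 0.40

print(f"fast CPU busy {fast_cpu:.0%}, slow CPU busy {slow_cpu:.0%}")
```

Same work done, same storage bottleneck: only the utilization metric changed, which is exactly the trick described above.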
Want to show you could do the same with your actual workload? Just downclock your CPU, remove some CPUs from multi-CPU boards, or deactivate some cores at the BIOS or kernel level: you will end up with a perfectly “balanced” solution that can be compared to a much more expensive mainframe.
Hardware vs. Service
We are switching from hardware and software to services. Whether they are local, in remote facilities, or in the cloud (Amazon, etc.), we no longer care about the servers; we don’t want to have to. Instead we expect services, from low-level (e.g. Amazon block-based storage) to high-level (e.g. web-based enterprise CRM). And we just expect them to work.
How we now handle reliability
As Google, Amazon and others have demonstrated, reliability is no longer a matter of hardware, but a matter of a networked service running on many servers. And instead of expecting the hardware to run incredibly well, it is expected to fail regularly.
Reliability is handled at the service level (using frameworks such as MapReduce) to launch tasks on groups of servers, relaunching the tasks whose server fails, thus enabling the use of far less expensive servers, built with quality parts but not relying on hardware redundancy. A side-effect is power efficiency that is better than any redundant hardware architecture (thus reducing expenses!).
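The core idea fits in a few lines. This is a minimal sketch of my own, not any real framework’s API: workers are simulated as unreliable, and a task is simply relaunched on another worker until it succeeds.

```python
# Minimal sketch of service-level fault tolerance (my illustration, not
# a real framework's API): run a task, and if the worker "fails",
# relaunch the task on another worker until it succeeds.

import random

random.seed(42)  # deterministic failures, just for the example

def run_on_worker(task, worker_id):
    """Simulated worker: cheap hardware is *expected* to fail sometimes."""
    if random.random() < 0.3:           # 30% chance this worker dies
        raise ConnectionError(f"worker {worker_id} failed")
    return task()

def run_reliably(task, workers, max_attempts=10):
    """Reliability lives here, in software, not in redundant hardware."""
    for attempt in range(max_attempts):
        worker = workers[attempt % len(workers)]
        try:
            return run_on_worker(task, worker)
        except ConnectionError:
            continue                     # relaunch on the next worker
    raise RuntimeError("all attempts failed")

result = run_reliably(lambda: sum(range(1000)), workers=["a", "b", "c"])
print(result)  # → 499500
```

Each individual server can fail at any moment; the *service* still answers, which is the whole point of the Google/Amazon model.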
Dinosaurs for legacy software
These mainframe dinosaurs are too expensive compared to any other solution; they are no more reliable at the service level (the one that really matters today!); they are proprietary in every way possible; and they are obsolete in the same way big fast storage has been replaced by RAID arrays.
The only reason they are still there is to ensure compatibility with legacy software, and since the IBM System/360 that has been the real selling point behind the official marketing line. Proprietary software vendors are locking companies into the dinosaur era of computing, and those companies are paying the dime to mainframe manufacturers every year (or even every month!)…
Proprietary closed-source software has proven to be much more expensive than anyone might have thought when big companies bought (or leased) their first mainframes…
Adapteva presented a Kickstarter project, Parallella, a $99 board that promises supercomputing for anyone, with a highly parallel RISC engine. Designed to be programmed using OpenCL drivers and compilers, it is an alternative to other OpenCL devices, including GPUs, the IBM Cell, etc.
This project is interesting at first, but I wonder who could actually be interested in it, even with the budget limited to $99.
The Parallella board promises 32 Gflops peak for approximately 5W (maybe more in some cases), while a quad-core PC with an OpenCL GPU will offer you 4000 Gflops (125X) for 500W (100X). A PC is easily 25% more power-efficient with a single GPGPU, and could offer as much as 2X better power efficiency (Gflops/Watt) with 3 or 4 GPUs!
A $1000 PC configuration with a high-end GPU will cost you 10X the price of the Parallella board while delivering up to 125X the performance: that’s 12X more effective in Gflops/$!
And it gets worse if you consider a high-end PC with 3 or 4 GPUs!
An alternative at the $99 price point
You might consider investing $99 in an AMD Radeon HD 7750, which supports OpenCL (and some other tools) and offers at least 820 Gflops SP (25X faster), while adding up to 70W of power consumption.
That’s 25X more Gflops per $ invested, and it’s nearly 2X more power-efficient (820 Gflops / 70W ≈ 11.7 Gflops/W vs 6.4) if you already own a PC with an available slot! Ouch!
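Out of curiosity, the ratios quoted above can be recomputed from the article’s own figures (these are the post’s estimates, not benchmarks I ran):

```python
# Recomputing the comparison from the post's own numbers
# (peak Gflops, watts, dollars -- estimates, not measurements).

options = {
    "Parallella":          (32,   5,   99),    # whole board
    "PC + high-end GPU":   (4000, 500, 1000),  # whole machine
    "Radeon HD 7750 card": (820,  70,  99),    # added to an existing PC
}

for name, (gflops, watts, dollars) in options.items():
    print(f"{name:22s} {gflops / watts:5.1f} Gflops/W"
          f"  {gflops / dollars:5.2f} Gflops/$")

# Parallella              6.4 Gflops/W   0.32 Gflops/$
# PC + high-end GPU       8.0 Gflops/W   4.00 Gflops/$
# Radeon HD 7750 card    11.7 Gflops/W   8.28 Gflops/$
```

With these figures the 7750 card wins by about 25X in Gflops/$ and a bit under 2X in Gflops/W compared to the Parallella board.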
Development tools are available for Windows and Linux too, including open-source dev tools. Moreover, your development will run as-is on any PC or Mac with an OpenCL-enabled graphics card or CPU driver!
Parallella: supercomputing for whom?!?
The Adapteva chips are interesting if you plan to create embedded high-performance devices, but in no way can Parallella be considered a supercomputer, neither in performance level, nor in Gflops/Watt, nor in Gflops/$ invested.
Still, it’s an interesting project, because new players in the parallel-computing field may be game changers in the long run. The Epiphany-IV processor, for example, is far more interesting than the Epiphany-III proposed for Parallella, with 3X more performance in the same power envelope, and thus more power-efficient than a GPGPU solution.
Why not launch Parallella with the Epiphany-IV directly?!?
nVidia was the first major GPU designer to jump on the OpenCL wagon, a project initiated by Apple to enable cross-platform GPGPU development that is OS- and vendor-agnostic, now maintained by the Khronos Group.
Today AMD and Intel are big players in OpenCL support, for both their CPUs and GPUs. The new AMD Radeon GCN architecture is clearly the performance leader on OpenCL with complex algorithms, while the new nVidia Kepler architecture lags far behind the old Fermi architecture! Intel’s 2013 CPU+GPU architecture, Haswell, is expected to beat the entry-level Kepler GT640 in any usage (the Intel GT3 will beat it, trust me!).
nVidia is having a hard time, with an ineffective and disappointing Kepler architecture that is slower than AMD’s new architecture for both 3D and GPGPU, and cannot even compete with nVidia’s own 2010 architecture for GPGPU on the open playing field that is OpenCL.
nVidia, which was the OpenCL leader, is now trying everything it can to stop supporting it: removing comments or documentation from EXISTING OpenCL examples, not updating them, removing them from the SDK, etc.
One month ago, the nVidia forums were hacked and all the credentials were stolen, including salted password hashes (but no confirmation at this time that the salt is unique to each account!).
One month later, the forums are still down; it has been 3 weeks since nVidia promised to send new credentials to reset our accounts and passwords. I wonder what’s happening: do they have no web developers, or did they use a third-party closed-source software that cannot be fixed by nVidia or consultants?!?
The nVidia forum was a main hub for CUDA and OpenCL developers to exchange information and ideas, and it’s really sad to see it down for so long…
This is a website I discovered, with the help of Charle, an incredible guy
Bryan Whitby created his own computer chess machines, using micro-controllers, existing computer chess games, and any piece that fits in between! It gave me so many ideas!
I have a side-side-project to connect my Novag Citrine to my Macs, but now I want to do it and go further: giving it autonomy (batteries) and eventually replacing the electronics with a quad-core ARM micro-controller, to be able to run the latest chess software on it, with a probable Elo of 2600-2700 while running on a battery pack! Sexxxxyyyyy!