
October 27, 2012

The Mainframe Dinosaur

I was reading some interesting articles about Mainframes (aka “Big Iron”), stating that they are still relevant in our era of cloud computing, clusters of inexpensive computers, and high-performance computing available on ordinary workstations.

What makes the Mainframe special

Mainframes are specially designed and engineered to provide incredible physical uptime, with IBM claiming one downtime every 50 years of usage on some series. They are delivered as a complete solution, with a high price tag and a usage licence for almost every software and hardware part, including the CPU.

The main selling point is the claim of incredibly high CPU utilization under real-world usage, which IBM and others translate into “having massive throughput”. In fact IBM refuses to compare its mainframes to other solutions, explaining that “real-world” performance is far better than benchmark results, and rarely submits its mainframes to industry-standard tests!

How to raise your CPU utilization on a cluster?

In practice, when you build a cluster of servers to handle a task, even with large RAID bays and even SSDs, you might end up with your CPUs waiting on I/O, even if you plan to aggregate groups of computing/database servers with dedicated storage bays (a good starting point).
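A quick way to see this on a Linux box is to watch how much CPU time goes to iowait. Here is a minimal Python sketch, assuming the standard Linux /proc/stat layout (where the iowait counter is the fifth value on the aggregate “cpu” line):

```python
# Minimal sketch: estimate the share of CPU time spent waiting on I/O
# by sampling the aggregate "cpu" line of /proc/stat (Linux only).
# Field layout after the "cpu" label: user nice system idle iowait ...
import time

def read_cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]  # drop the "cpu" label
    return [int(x) for x in fields]

def iowait_share(interval=1.0):
    before = read_cpu_times()
    time.sleep(interval)
    after = read_cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    return deltas[4] / total if total else 0.0  # index 4 = iowait

if __name__ == "__main__":
    print("CPU time spent in iowait: {:.1%}".format(iowait_share()))
```

If that percentage is high while your run queue is short, your cluster is storage-bound, not compute-bound.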

It is *NOT* because you are a dummy, nor because your storage is too slow: your ratio of computing power to storage IOPS is just badly balanced, and this is the point where Mainframes are largely superior!

To keep the computing power/storage IOPS ratio better balanced, Mainframes usually have incredibly low processing power compared to their cost, thus keeping their expensive CPUs at an incredibly high usage rate!

Want to show you could do the same with your actual workload? Just downclock your CPUs, remove some CPUs from multi-CPU boards, or deactivate some cores at the BIOS or kernel level: you will end up with a perfectly balanced solution that can be compared to a much more expensive Mainframe ;)
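For illustration, here is a minimal sketch of the kernel-level variant, assuming a Linux host with CPU hotplug support, run as root (cpu0 normally cannot be taken offline, so it is left alone):

```python
# Minimal sketch (assumes Linux with CPU hotplug, run as root):
# take secondary cores offline through sysfs, mimicking a deliberately
# "under-powered but balanced" machine.
import glob
import re

def set_online(cpu_dir, online):
    # Each hot-pluggable core exposes an "online" flag (0/1) in sysfs.
    with open(cpu_dir + "/online", "w") as f:
        f.write("1" if online else "0")

def keep_cores(n):
    # Enumerate cpu1, cpu2, ... in numeric order (cpu0 is excluded).
    dirs = sorted(glob.glob("/sys/devices/system/cpu/cpu[1-9]*"),
                  key=lambda d: int(re.search(r"(\d+)$", d).group(1)))
    for idx, d in enumerate(dirs, start=1):
        set_online(d, idx < n)  # keep cpu0 plus (n - 1) extra cores

if __name__ == "__main__":
    keep_cores(2)  # leave only cpu0 and one secondary core running
```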

Hardware vs. Service

We are switching from hardware and software to services. Whether they are local, in remote facilities, or in the cloud (Amazon, etc.), we no longer care about the servers; we don’t want to have to. Instead we expect services, from low-level (e.g. Amazon block-based storage) to high-level (e.g. a web-based enterprise CRM). And we just expect them to work.

How we now handle reliability

As Google, Amazon and others have demonstrated, reliability is no longer a matter of hardware, but a matter of networked services running on many servers. And instead of expecting the hardware to run incredibly well, it is expected to fail regularly.

Reliability is handled at the service level (using frameworks such as MapReduce), launching tasks on groups of servers and relaunching them if a server fails, thus enabling the use of far less expensive servers, built with quality parts but without relying on hardware redundancy. A side effect is electric power efficiency that is better than any redundant architecture (thus reducing expenses!).
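As a toy illustration of the idea (the worker names and the failure rate below are made up, and the RPC is faked), here is a sketch where a task is simply relaunched on another machine whenever a server fails:

```python
# Minimal sketch of service-level reliability: a task is retried on
# another machine whenever the current one fails, so no single server
# has to be reliable. Worker names and failure rate are hypothetical.
import random

WORKERS = ["node-01", "node-02", "node-03"]  # hypothetical hostnames

class WorkerFailure(Exception):
    pass

def run_on(worker, task):
    # Stand-in for a real RPC: each call has a 30% chance of hitting
    # a dead server. Failure is treated as normal, not exceptional.
    if random.random() < 0.3:
        raise WorkerFailure(worker)
    return f"{task} done on {worker}"

def run_reliably(task, max_attempts=10):
    for _ in range(max_attempts):
        worker = random.choice(WORKERS)
        try:
            return run_on(worker, task)
        except WorkerFailure as failed:
            print(f"{failed} failed, relaunching {task} elsewhere")
    raise RuntimeError(f"{task} failed on every attempt")

if __name__ == "__main__":
    print(run_reliably("index-shard-42"))
```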

Dinosaurs for legacy software

These Mainframe dinosaurs are too expensive compared to any other solution, they are no more reliable at the service level (the one that really matters today!), they are proprietary in every way possible, and they are obsolete in the same way that big, fast monolithic storage was made obsolete by RAID arrays of cheap disks.

The only reason they are still around is to ensure compatibility with legacy software, and since the IBM System/360 this has been the real selling point behind the official marketing line. Proprietary software vendors are locking companies into the dinosaur era of computing, and those companies are paying a tithe to Mainframe manufacturers each year (or even each month!)…

Proprietary closed-source software has proven to be much more expensive than anyone might have thought when big companies bought (or leased) their first Mainframes…

