Is modular computing the new mainframe?

22 Aug 2011

Over the last few months, I have been talking with a range of hardware vendors, from the large, full-stack vendors to the not-so-large, best of breed component vendors. I have also been in discussions with quite a few datacentre managers, IT directors and business people about where they see the hardware market going. What is becoming increasingly apparent is that there may well be blood on the carpet before too long.

With IBM, HP, Dell and Oracle, there is a strong move towards a complete vendor stack – all the hardware coming from the one vendor, with as much of the software stack as possible being under their control as well. For Oracle, this can mean that everything right up to the application itself could be from a single vendor; with IBM, everything right up to the application server and the functions around it, such as business intelligence and reporting. From HP, it could be a full hardware stack with increasing amounts of software built on it; with Dell, a hardware stack built to support a software stack supplied by a highly trusted partner (in this case, Microsoft). Even Cisco, with its unified computing system (UCS), is in on the game, with an equivalent approach to Dell.

These big guys have been massively acquisitive, plugging holes in their portfolios so that the stack becomes more and more complete.

So where does this leave the best of breed vendors? Is there a place for the likes of Juniper and Brocade in the network space? How about NetApp and Coraid – and even EMC – in storage?

What we are seeing is the emergence of the new mainframe – an architected piece of technology that is tuned to do the total job required of it. By taking control over as much of the stack as possible, the big vendors can gain overall benefits by using proprietary internal systems while ensuring that anything exposed to the outside world adheres to de facto standards. As a by-product, it also tends to make customers more “sticky”, as moving away from such investments will not be technically or financially easy.

"What is becoming increasingly apparent is that there may well be blood on the carpet before too long"

The user gets systems that can be implemented more rapidly, with less set-up and integration required.  As the user runs out of resources, additional systems from the same vendor can be brought in, plugged in alongside existing units and be absorbed into the overall virtual environment in a (relatively) seamless manner. 

But there are two issues that mean the best of breed vendors still have a good run ahead of them. First, buyers do not like lock-in, and single-vendor stacks worry them both technically and financially. Technically, because as the landscape changes, if the vendor you have chosen does not follow that route, you cannot gain the advantages of a new technology. Financially, because once you are tied in to a specific vendor, you are at their mercy when it comes to spares, updates, maintenance and so on. Although high levels of heterogeneity are anathema, complete homogeneity is seen by many as being just as bad.

Next is the need to run different workloads in different environments. Oracle is unlikely to be a big player in the Windows application space. Dell will be Windows with a splash of Linux. Cisco is focusing on Windows and Linux. HP is in a bit of a mess at the moment, but is likely to be Windows plus Linux plus HP-UX. IBM has AIX, Linux, Windows, IBM i and MVS – and is looking at bringing all of these together on its hybrid platform, the zEnterprise.

This is where the best of breed vendors stand their chance. With different workloads needing different compute capabilities; with different storage needs across, for example, server-based computing, big data, email, and file and print; and with networks needing to be far more intelligent in how they deal with the workloads thrown at them, the best of breeds have to become “better than the incumbent” – not as replacements, but as layered-on capabilities that extend what the underlying compute blocks can do.

This is what happened in the mainframe world – companies grew and prospered by offering things that the mainframe manufacturers themselves either could not or did not do, because they failed to see the need, or saw the need but felt the financial returns would be inadequate.

And anyway, existing investments in standard data centre racks will not go away overnight.

The best of breeds can still look to low-hanging fruit for day-to-day revenues, focus on morphing into vendors of value-add systems, and keep an eye out for where the big players still have holes in their portfolios – in many cases aiming to be acquired as that piece of the jigsaw.

Clive Longbottom, Service Director, Business Process Analysis, Quocirca
