Analysis: Hyperconverged infrastructure appliances (HCIAs) are ready to take on the bulk of data centre x86 workloads but won’t necessarily kill off the SAN.
That’s the conclusion drawn from a conversation with Chad Sakac, president of Dell EMC’s converged platform division, at Dell EMC World in Las Vegas, where hotels hyperconverge conferencing, gambling, dining, drinking and shopping in one air-conditioned, casino-based appliance hell.
El Reg started out with three observations we wanted to test.
HCIAs are minicomputer clusters reinvented
In the 1980s a VAXCluster was a group of up to 16 loosely coupled VAX minicomputers running as a single system with shared storage, with a proprietary networking scheme linking the nodes.
Isn’t that a hyperconverged infrastructure appliance?
Yes and no, Sakac said. Inside an HCIA the network is within the system's scope, whereas in the old minicomputer clusters it was not.
Also HCIAs scale out far, far more than an old VAXCluster and its 16-way clustering. VxRail can scale to hundreds of nodes and there is a path to VxRack SDDC and thousands of nodes. Dell EMC wants to make it easier to scale from VxRail to VxRack SDDC.
HCIAs are inherently on-premises systems
El Reg asserted that HCIAs are intrinsically on-premises systems and not for managed server providers. Was this the case?
Largely yes, but not always. Sakac said Dell EMC has customers who deploy HCIA and CI in managed co-location facilities, which we reckon are virtually on-premises rather than public cloud-like facilities.
He said there has been little success in supplying converged and hyperconverged infrastructure to managed service providers (MSPs), except when they provide single-tenant services.
Vodafone, for example, uses Vblocks for its single-tenant customers, and Atos uses VxRack, VxRack SDDC and Azure Stack in a single-tenant fashion.
The big cloud suppliers build their own systems from commodity components for multi-tenant customers.
Will HCIAs ever use shared external arrays?
HCIAs build their own virtual SAN from the direct-attached storage on each node, agglomerating it and turning it into a shared resource. It would therefore seem this replaces a shared-access, external SAN such as a VMAX or Unity storage array. Isn't this the case?
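The aggregation described above can be pictured as pooling each node's local disks into one logical datastore that grows as nodes are added. The sketch below is purely illustrative; the class names, capacities and methods are hypothetical and are not any vendor's actual API.

```python
# Illustrative sketch only: an HCIA pools the direct-attached storage
# (DAS) of every node into one shared virtual SAN. All names and
# capacity figures here are hypothetical.

class HCIANode:
    def __init__(self, name, das_capacity_tb):
        self.name = name
        self.das_capacity_tb = das_capacity_tb  # this node's local disks

class VirtualSAN:
    """Aggregates every node's local disks into one shared pool."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def pooled_capacity_tb(self):
        return sum(n.das_capacity_tb for n in self.nodes)

    def scale_out(self, node):
        # Adding a node grows compute and the shared pool together --
        # the property that distinguishes an HCIA from an external array.
        self.nodes.append(node)

cluster = VirtualSAN(HCIANode(f"node{i}", 10) for i in range(4))
print(cluster.pooled_capacity_tb())   # 40 -- four 10 TB nodes
cluster.scale_out(HCIANode("node4", 10))
print(cluster.pooled_capacity_tb())   # 50 -- the pool grew with the node
```

The point of the sketch is the last two lines: capacity scales out with compute, rather than sitting in a separately managed array.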
Yes, largely, and no, again. As a statement of direction, HCI and software-defined storage (SDS) are ready for the majority of x86 workloads by count, not by value.
This is not the current situation. Some workloads must have shared arrays because they need:
- Symmetrix Remote Data Facility (SRDF) and/or other specific services
- Extreme capacity density (bit buckets) for which standard servers are not suited – too few PB/floor tile
- Consistent response times, being very sensitive to latency jitter
These workloads need an external array and are high in value but low in number.
[Conceptual scheme showing an HCIA rack accessing a physical SAN]
Such applications could run inside a VxRail or XC HCIA system but access a physical SAN accessed through a top-of-rack switch when their workloads demand it.
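One way to picture that placement decision is a simple policy: most workloads stay on the HCIA's virtual SAN, while the few needing array-only services or very tight latency go out through the top-of-rack switch. This is a hypothetical sketch using assumed requirement flags, not Dell EMC's actual logic.

```python
# Hypothetical placement policy, for illustration only. The flag names
# are invented; they mirror the three array-only requirements above.

def place_storage(workload):
    needs_array = (
        workload.get("needs_srdf", False)              # SRDF-class replication
        or workload.get("petabyte_bit_bucket", False)  # extreme capacity density
        or workload.get("jitter_sensitive", False)     # consistent response times
    )
    return "external_array_via_tor_switch" if needs_array else "hcia_virtual_san"

print(place_storage({"name": "web tier"}))                          # hcia_virtual_san
print(place_storage({"name": "DR volume", "needs_srdf": True}))     # external_array_via_tor_switch
```

High-value, low-count workloads take the external path; everything else never leaves the rack's own pool.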
Sakac made a number of other points.
The HCI market has a 100 per cent compound annual growth rate (CAGR) while Dell EMC HCI has a 208 per cent CAGR. And, as of now, HCIAs outsell CI by an order of magnitude.
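Those growth rates compound quickly, as a back-of-the-envelope check shows (the base figure of 100 is arbitrary, purely for illustration):

```python
# Illustrative CAGR arithmetic only: future = base * (1 + rate) ** years.

def grow(base, cagr, years):
    return base * (1 + cagr) ** years

print(grow(100, 1.00, 1))         # 200.0 -- a 100% CAGR doubles in a year
print(round(grow(100, 2.08, 1)))  # 308   -- a 208% CAGR roughly triples
print(grow(100, 1.00, 3))         # 800.0 -- doubling compounds to 8x in three years
```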
It’s too late, he implied, for any other major vendor to enter the HCIA market, unless, perhaps, they arrive with a turnkey cloud stack such as Azure Stack.
VxRack meets the requirements of composable systems, and is constructed from commodity components instead of speciality hardware. Systems like HPE’s Synergy are proprietary, with a separate API and a separate management box.
So, he implies, why not run that management box code as an app in the HCI system itself, like VxRack? Then the whole thing runs on commodity components and is software defined.
Physical SAN endgame
El Reg thinks Dell EMC is telling its customers that HCIA and CI are the future, that it has a path for them to get there, and that there is no need whatsoever to forklift-upgrade their physical SANs for the logical SANs inside converged infrastructures.
The industry-wide physical to logical SAN transition will take many years and each customer can do it at their own pace and in their own time. The endgame for physical SANs for most x86 applications is ten, fifteen or more years out. Don’t book the forklift truck just yet. ®