Hyperconvergence is a lazy ideal
[NOTE: Here’s a quick comment about this on April 14th, 2014…]
This blog post has generated a bunch of attention over the past few days. Much of this is because Duncan Epping felt that I was trying to FUD VMware’s VSAN. In particular, Duncan really seems to dislike my use of the term “lazy” here.
I’m a bit surprised at the reaction, so let me quickly clarify a couple of things and then leave my original post to speak for itself:
First, Coho started its life as a converged compute/storage company: We originally wanted to build a VM appliance-based scale out storage system. This is hardly a secret, in that I’ve frequently talked about it in presentations about our background and architecture. One of the big lessons that we took from our early work was that it is hard to drive utilization from flash (and so it’s hard to get good value out of flash) when you co-locate it on your servers. I discuss this in a bit more detail in the article below.
Second, I get asked all the time by customers to explain the differences between our architecture and hyperconverged ones. In a lot of the conversations that I’ve had with customers lately, there seems to be a strong attraction to hyperconverged as the only disruptive alternative to existing enterprise storage systems. This article is a response to that: The people selling hyperconverged offerings seem to be taking the position that storage is naturally better if it runs on the same servers as your VMs — that basically there are only two ways to build enterprise storage.
This is the position that I think is lazy.
It really stings to be accused of producing FUD. Data center storage is complex. The idea that there is only one “new” way to do it is not giving the design space the full and careful consideration that it deserves. I’m sorry if pointing this out is ruffling feathers with the hyperconverged advocates, but as a conversation about system design, it’s absolutely not meant to be FUD. My blog post below talks about our architecture, and experiences that came from starting there, and how they have led to where we are now. It was intended to provoke discussion, and so in that regard it seems to have done its job!
[Original post follows:]
Let’s say you really like tea.
In the United Kingdom, there are a lot of people who really like tea. They drink a lot of the stuff every day. They own kettles with heating elements that are caked with calcium, and own cups whose insides are stained with earthy-toned residue — the wear on their crockery is a badge for their commitment and devotion. They drink tea in the afternoon. They drink it mid-morning, and sometimes even after dinner.
But the most rewarding cup of tea in the day — and as a coffee drinker, I think I can still relate to this — is that first cup of tea in the morning. The head-clearing, eye-opening mug that welcomes another new day.
And here lies the seed of innovation: You like tea in the morning. You need to wake up in the morning. What if the same machine that abruptly woke you from your cozy and restful night’s sleep could also welcome you to the day with open arms, by providing a steaming hot cup of English Breakfast?
This device actually exists: Invented in the 1930s, a Teasmade is a combination alarm clock and single-serving tea steeper. Teasmades were incredibly popular in the 1960s and ’70s, and share membership with other luxuries of the time (like electric blankets and smoking in bed) that were pleasurable, but had a discernible chance of either maiming you or burning down your house.
Teasmades are a “converged” appliance.
What is hyperconvergence anyway?
There has been a lot of talk in enterprise computing lately about “convergence,” “hyperconvergence,” and “converged infrastructure”. In pretty much all of the applications where I’ve seen the term invoked recently, convergence means: “Let’s build a single hardware appliance that unifies enterprise VM hosting and enterprise storage into a scalable brick.” As such, converged IT carries the ideal that it is preferable not to split out compute and storage into separate aspects of the datacenter, as has been done in the past.
Why is convergence attractive?
Convergence seems to be attractive for two main reasons: a (woefully ill-informed) perception of efficiency, and corporate politics.
The efficiency argument with convergence goes something along the lines of not needing to spend on two entirely independent IT cost centers. It argues that there is a natural efficiency in moving data closer to the applications that consume it. This argument is often punctuated with the observation that very large-scale datacenter companies, like Google, use converged architectures internally, rather than buying monolithic enterprise storage. Your workloads are a lot like Google’s, right? If so, it follows that your datacenter should look like theirs.
The politics argument seems to apply more in large organizations, where there is a feudal relationship between two clans of IT: clan virtualization and clan storage. A short caricature of this is that the storage guys are overworked because they have to maintain a lot of old and crappy storage gear, and the virtualization guys wish their provisioning requests would be answered faster. In this tussle, a “converged architecture” is really just a sneaky way for the virtualization guy to take ownership of his own storage.
Regardless, the promise of convergence is that you are going to be able to buy a small number of server-class machines, plug them into a switch, and add new ones as you need to scale out, all effortlessly and with perfectly balanced resource consumption. Sounds pretty darned good, right?
Hyperconvergence is lazy thinking.
Just like the Teasmade, converged architectures solve a very real but decidedly niche problem: at small scales, with fairly narrow use cases, converged architectures afford a degree of simplicity that makes a lot of sense. For example, if you have a branch office that needs to run 10-20 VMs and that has little or no local IT support, it seems like a good idea to keep that hardware install as simple as possible. If you can do everything in a single server appliance, go for it!
However, as soon as you move beyond this very small scale of deployment, you enter a situation where rigid convergence makes little or no sense at all. Just as you wouldn’t offer to serve tea to twelve dinner guests by brewing it on your alarm clock, the idea of scaling cookie-cutter converged appliances calls for a bit of careful reflection.
Your workloads aren’t Google’s workloads.
Let’s start with the claim that converged architectures let you build a datacenter that’s just like Google’s. There are a lot of things wrong with this argument, but the most notable one has got to be the implicit assumption about workloads. Environments like Google’s, which run at the scale of hundreds of thousands of servers and schedule work across all of them, have the luxury of statistics. They have the ability to balance work evenly across all of their resources. In fact, Google’s early work on MapReduce placed disks inside individual servers and maintained three replicas of each piece of data not for redundancy, but specifically so that they would have flexibility in placing and balancing compute within the cluster.
This is not your environment. If your environment is like many enterprises that I’ve worked with in the past, it has a big mix of server VMs. Some of them are incredibly demanding. Many of them are often idle. All of them consume RAM. The idea that, as you scale up these VMs on a single server, you will simultaneously exhaust memory, CPU, network, and storage capabilities at the exact same time is wishful thinking to the point of clinical delusion. But this is exactly what is required for the efficiency argument above to stand up, especially in the case where converged architectures take advantage of expensive SSD-based storage, where the total cost of drives exceeds the cost of the rest of the server that houses them.
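To make that concrete, here’s a back-of-envelope sketch in Python. All of the figures are hypothetical illustrations, not measurements of any real deployment: the point is only that with a realistic mix of heavy and idle VMs, one resource on a converged node runs out long before the others do.

```python
# Hypothetical per-VM average demand for three illustrative workload profiles.
VM_PROFILES = {
    "db":   {"cpu_cores": 4,   "ram_gb": 32, "iops": 8000},
    "web":  {"cpu_cores": 2,   "ram_gb": 4,  "iops": 200},
    "idle": {"cpu_cores": 0.5, "ram_gb": 8,  "iops": 50},
}

# Capacity of one hypothetical converged node (flash provides the IOPS).
NODE = {"cpu_cores": 32, "ram_gb": 256, "iops": 100000}

def utilization(vm_counts):
    """Fraction of each node resource consumed by a given mix of VMs."""
    used = {r: 0.0 for r in NODE}
    for profile, count in vm_counts.items():
        for resource, demand in VM_PROFILES[profile].items():
            used[resource] += demand * count
    return {resource: used[resource] / NODE[resource] for resource in NODE}

# A plausible enterprise mix: a couple of heavy VMs, many light or idle ones.
mix = {"db": 2, "web": 10, "idle": 20}
for resource, frac in sorted(utilization(mix).items(), key=lambda kv: -kv[1]):
    print(f"{resource:>9}: {frac:6.1%} used")
```

With this particular mix, CPU is oversubscribed while the flash sits below a fifth of its IOPS capability: exactly the stranded capacity that the efficiency argument wishes away.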
Efficient and high-performance storage implementations aren’t free.
Enterprise storage systems do an awful lot of stuff, and on high-performance flash they have to do it very efficiently. As I’ve talked about in the past, an early experience that we had at Coho was the fact that a single PCIe flash device was capable of saturating an entire 10Gb network connection, and used a significant amount of CPU in doing it.
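The arithmetic behind that observation is straightforward. The flash bandwidth figure below is an assumed round number rather than a measurement of any particular card, but the shape of the comparison holds: a 10Gb link carries at most about 1.25 GB/s of payload, which a single PCIe flash device can exceed.

```python
# Rough arithmetic (assumed figures): a single PCIe flash card can outrun
# a 10Gb NIC, so the network, not the flash, becomes the bottleneck.

nic_gbps = 10                 # 10Gb Ethernet link
nic_gb_per_s = nic_gbps / 8   # ~1.25 GB/s of payload, ignoring protocol overhead

flash_gb_per_s = 2.0          # plausible read bandwidth for a PCIe flash card

print(f"NIC:   {nic_gb_per_s:.2f} GB/s")
print(f"Flash: {flash_gb_per_s:.2f} GB/s")
print("network-bound" if flash_gb_per_s > nic_gb_per_s else "flash-bound")
```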
Implementing a storage system that gets efficient utilization out of storage hardware — which translates to actually getting good value on IT spend — takes more than just disks and SSDs. It requires in-memory data structures for things like address space management and storage-related metadata. It requires processor and network resources to manage replication and recovery. It needs a bunch of resources that, when co-located with application workloads on compute-oriented servers, are suddenly in contention with each other. In other words, combining compute and storage in this way means that you’re likely to only get a fraction of your available flash performance.
If you’re planning to populate your servers with flash devices that double or triple the Bill of Materials, then you have to question the economics of only providing the storage sub-system a tenth of the CPU and memory resources. Without significant resources allocated to the storage side of your hyperconverged build, you’re probably short-changing yourself. You’re gaining marginal (and questionable at that) manageability benefits for a significantly wasteful hardware design.
What happens as this scales? With more than a handful of boxes, there is a complexity inversion: even at moderate scale, it’s actually easier to manage discrete pools of resources that are tuned for their purpose. It’s time to appreciate why most British homes contain teapots, not Teasmades.
Compute/storage convergence doesn’t make the complexity of storage system implementation go away, and it doesn’t mean that storage systems will perform better: you still need to replicate data over the network if you want to survive host failure, for example. It just means that you have to replicate storage functionality and compete for resources with applications on servers that you are paying virtualization licences for.
In building highly-available systems, the notion of “fate sharing” is the idea that two components will live or die together because they share exposure to a single (fallible) component. Two VMs running on a single server are an example: if the motherboard fails, the VMs will fail together. Fate sharing is a good property to understand in system design because it allows you to safely optimize a system: A web frontend and an app logic VM that depend on one another for a broader service to work can be co-located on a physical server, because if either of them fails the whole service is shot anyway.
An understanding of fate sharing also allows us to avoid foolish design decisions: Running a zookeeper cluster of three VMs all on the same physical host probably isn’t helping you survive the types of physical failures that zookeeper was designed to survive.
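The same fate-sharing reasoning can be expressed as a simple placement check. The sketch below uses hypothetical object and host names; it flags any replica set whose copies land on the same physical host, which is precisely the ZooKeeper-on-one-box mistake.

```python
# Hedged sketch: a fate-sharing check for replica placement. Given a map
# from each data object to the hosts holding its replicas, flag any object
# whose replicas share a physical host -- those replicas live or die together.

from collections import Counter

def fate_shared(placement):
    """Return objects whose replicas are co-located on a single host."""
    violations = {}
    for obj, hosts in placement.items():
        duplicated = [h for h, n in Counter(hosts).items() if n > 1]
        if duplicated:
            violations[obj] = duplicated
    return violations

# Three-way replication across four hosts (hypothetical layout).
placement = {
    "chunk-a": ["host1", "host2", "host3"],  # safe: spread across hosts
    "chunk-b": ["host1", "host1", "host4"],  # two replicas share host1
}

print(fate_shared(placement))  # → {'chunk-b': ['host1']}
```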
From an economic perspective, this exact line of thinking applies to getting independent value out of scaling compute and storage on demand: In many datacenter environments, these two services have very different hardware refresh times. What value is there in an architecture that forces you to scale out, and to replace at end of life, all of your resources in equal proportion?
What problems are we really trying to solve?
As with other recent jargon-based trends (ahem, “software defined”) in enterprise computing, storage/VM convergence fixates on a single property of a possible solution, rather than spending time focussed on the actual problem that needs to be addressed.
So let’s be clear about the problem: there is currently a spectacular contrast between enterprise servers and enterprise storage: With virtualization, servers have moved beyond being a physical commodity (which they already were) to being a completely demand-driven resource. Your organization buys the servers it needs, as it needs them. I’ve talked to IT shops that are purchasing and installing new servers on a monthly basis, and in a few cases at even finer granularity than that.
Enterprise storage, on the other hand, is still delivered around a 5-year planning cycle. You buy more than you need up front, and hope to just barely saturate it about 5 years from now. Storage is expensive, economically wasteful, and exhibits none of the agility that has been realized through the virtualization of compute infrastructure.
Most importantly, virtualization of servers has let virtualization admins spend a lot less time worrying about hardware and device driver pain, and a lot more time worrying about workloads. This is less true of storage admins, who would love to be thinking about applications and performance, but who instead are stuck planning for the next round of forklift upgrades and data migrations.
Rather than jumping on this fantasy aesthetic of a one-size-fits-all converged appliance, let’s spend some time thinking about whether it’s solving the right problem. At Coho, we believe that enterprise storage has a lot to learn from the way that enterprise computing has evolved. Making storage efficient doesn’t mean that it has to be embedded within your VM infrastructure, and there are a lot of great reasons to keep it independent.
In fact, we think there’s a strong argument for convergence — and let’s be clear: by convergence I mean tight integration — between your storage and your network as an approach to making your data available to your scalable compute infrastructure as effectively as possible.
Interested in learning more about Coho and our products? Check out ESG’s report on our initial product offering, or our slightly gorier technical white paper that describes the system in a bit more detail.