Why are you buying enterprise Tupperware?

Storage is a place in the back corner of an underground parking lot, where the tenants of your building stick old furniture, broken microwaves, and tires behind chain-link dividers.

Storage is a volumetric thing.  It’s a closet where you put shoes and coats.

Storage is a container.

Storage companies sell containers.  This has been the way things have worked in the datacenter for two decades: you buy a big box full of disks to put your data in.  You leave your data in it for five years or so, and then you buy a new box.  Ka-ching.  This is the storage business, and storage is basically Tupperware.

From a technology perspective, storage products have clamored to position themselves around the technology they use.  Whether it’s a Fibre Channel SAN or an All-Flash Array, storage products have been willed into existence not because of the problems they solve, but because of the technologies they are based on.  The weird result of this approach is that enterprise storage environments are becoming more diverse, more siloed, and more complex.  To meet the needs of all their workloads, the administrator is turning into a fleet manager: flash arrays for performance, virtualization-specific arrays for VMware, and so on…  And every week a new vendor is calling about IOPS.  Lots of IOPS.  Storage vendors may not be making life easier, but they’ve got fantastic Tupperware.  Gold-plated, battery-powered, deduplicating, one-gazillion-IOPS Tupperware.

Technologies are becoming denser and faster, but Tupperware has been Tupperware for a long time: it’s expensive, short-term, and a pain to manage.  What problems is it really solving for you?

A few big companies — ones with the kinds of datacenters that you ride bikes around because it’s more practical than walking — realized this fact over a decade ago.  Google and Amazon aren’t splashing out on mountains of enterprise storage.  They are building their own systems: they’ve turned data storage into an operational problem and they’ve hired piles of PhDs to build and run their cloud storage offerings.  These systems are cash efficient, they are scalable, and they let customers think about the data they are storing instead of the hardware it’s stored on.

You buy what you need when you need it, and you don’t worry about refreshes.

These techniques — enabling the move toward scalable, on-demand storage services — reflect some spectacular innovation.  But they have happened completely outside enterprise datacenters.  This is especially strange given that datacenter environments everywhere are benefiting from the huge changes that have taken place in server and network virtualization over the past few years: server and network hardware is more densely used and easier to manage, yet storage remains a monolith.

This is why we started Coho Data.  Storage is the elephant in your datacenter: it’s the biggest source of cost, administrative headache, and complexity.  As the team that wrote the Xen hypervisor (which powers Amazon’s EC2), we have a deep familiarity with computing efficiently at scale, and getting great value out of commodity hardware.  For the last two years, we have been working to decouple the value and functionality of enterprise storage from the physical hardware that is used to build it.  We have designed a storage system around two central ideas:

1. Storage isn’t defined by a physical device.  Hardware gets old, it breaks, and it gets replaced.  Flash and networks are improving year-on-year at a rate that spinning disks never, ever have.  We have taken ideas from hardware virtualization to completely decouple data and the software stack that lets you access it from the hardware that it runs on.  Coho’s products are delivered on the best price/performance commodity hardware that is available at the time, allowing you to buy what you need now, to extend both performance and capacity on the best available hardware as you need to, and to replace components without planned outages or disruption.

2. Presentation is power.  Storage systems are often hamstrung by their interfaces.  They use old protocols, have trouble adopting new ones, and the controllers that implement these protocols end up being scalability bottlenecks.  We’ve built a hosting environment for deploying scalable protocol implementations across the entire width of the system.  With Coho’s products, you get linearly scalable protocol implementations built on top of a high-performance, bare-metal object store.  Coho wants to make enterprise storage relevant again by growing to support all the interfaces and interactions with data that your business needs.

Coho Data is a young, ambitious, and energetic storage company.  We’ve built this system from the ground up, rethinking how data should be stored and accessed in your environment.  Coho’s DataStream architecture makes aggressive use of some really exciting technologies, like PCIe flash and software-defined network (SDN) switching — but we don’t exist because of these technologies.  We’re a group of people who want to think creatively and work hard to find clever ways to store and access your data.  Over the coming months, I’m looking forward to telling you more about our products and some of the interesting challenges we’ve faced over the past couple years of development.  Thanks very much for your interest, and please let us know if you’re getting tired of managing all that Tupperware.
