Why I Joined a Scale Out Storage Company

Changing jobs and taking risks

There comes a point in one’s career when the desire to do more takes hold. I don’t necessarily mean doing something different (although that plays into it). I know this sounds somewhat simple or cliché, so of course it warrants further elaboration.

Background

First of all, a little bit of backstory… a certain All Flash Array vendor reached out to me about an open Technical Marketing Engineer (TME) position almost a year ago. They were starting to ramp up their TME organization and were looking to add someone to work on solutions in the VMware and virtualization space. These types of openings seem to come out of the woodwork in the months and weeks leading up to VMworld. I did a few preliminary interviews with them, but at the time I didn’t have any desire to move to California, so I continued on with the stipulation that I didn’t want to relocate. It turns out they decided not to hire outside of California for this role, so I was instantly out of the running.

This ignited in me the idea that, “Hey, you’re in IT, you have been for 15+ years now, maybe it’s time to step up to the big leagues and relocate to Silicon Valley.” I had lived in the Southwest (in Phoenix) for about four years, from 2000 to 2004, and worked at an enterprise software company. Those were some of the best growth years of my IT career. So I pitched to my wife (who is from Japan) the idea of potentially moving to Cali. At the very least she’d be closer to her family and surrounded by an even richer Asian community in “The Valley”. At worst it would be a new adventure for both of us and our kids. A change of pace, if you will.

Renewed Opportunity

Fast forward to a few months ago, just after the beginning of 2014, when I got a message on LinkedIn from a VP at a stealth start-up in the Valley looking for TME talent. It’s not that often that a company at this stage of the game reaches out to you to help as they build their business. With all the buzz around flash and software-defined everything, and with the multitude of colleagues leaving NetApp for new opportunities, I thought this would be the ideal time to give it another go.

I ended up interviewing for various TME roles: one at a storage company designed specifically for virtualization, one at a player in the “hyper-converged” space, and one at the big ‘V’ itself. These companies varied in size from 30 employees, to 250, to over 10,000. I really didn’t want to work at another company similar in size to NetApp, but I thought there might be the possibility to focus on a particular area in that role, which sounded interesting. At the medium-sized company I’d be one of a handful of TMEs who had joined recently and would have some say in what I worked on, but I wouldn’t have quite as much control of my own destiny as I would have liked. Finally, the smallest company (the stealth startup) was still in alpha, and I felt I would have more confidence joining a company that was already selling or had recently GA’d their product.

So at this point, from what I had seen, I was definitely leaning toward the smaller companies, i.e. 250 or fewer employees. Some would consider both of these to be startups, as neither of them is a public company. One hasn’t even launched yet, hence I won’t call out the name here.

Time to Reflect


One of the potential issues I could see with one of the vendors I interviewed with was their plan for scalability. They had a very robust box for virtualized workloads and a very complete file system that allows for granular virtual machine management as well as replication. These are all good things and are helping them to further differentiate from the likes of NetApp and EMC. However, once you hit the capacity and performance limits of a single box, then what do you do? Their approach is to replicate the initial set-up with a rack of new equipment, and only recently can they manage additional appliances from a single interface. To me this sounds like more of the same. It definitely doesn’t scale out. At the level of dozens of boxes, I’d say it doesn’t scale, period. Who wants to manage all those islands of storage, even from a single management interface? NetApp has solutions that look better than that with Clustered Data ONTAP.

The second company is in an area gathering extreme interest of late: “hyper-converged” infrastructure, sometimes termed “server SAN”. Much like Nutanix, they are trying to build a converged software/hardware appliance where the compute, network and storage all live in a single box, built out using a building-block approach. The problem with this design, of course, is that with all of these components in a single package, how do you ensure that all of the resources run out at the same time? Sure, you can add disks if you have available bays, but what about the other components? If you can’t, you’ll be leaving some portion of the performance you paid for on the table. In all likelihood, storage performance is what you’ll run out of first, not disk capacity. This solution may fit the definition of scale-out, per se, but when you invest in a new hardware platform it would be nice to know that you won’t have to opt for a complete replacement of the hardware when the next upgrade cycle comes along, not to mention the high cost of hypervisor licensing every time you add a new host, regardless of whether or not you require the additional compute capacity.
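To make that bottleneck argument concrete, here’s a minimal back-of-the-envelope sketch in Python. The node and per-VM numbers are entirely made up for illustration (they are not any vendor’s specs); the point is simply that whichever resource divides out smallest dictates when you buy the next appliance, and everything else in the box ends up stranded.

```python
# Hypothetical hyper-converged node and per-VM demands (illustrative only)
node = {"cpu_cores": 24, "ram_gb": 256, "capacity_tb": 20, "iops": 50_000}
per_vm = {"cpu_cores": 2, "ram_gb": 16, "capacity_tb": 0.5, "iops": 1_500}

# How many VMs each resource can support before it runs out
limits = {res: node[res] / per_vm[res] for res in node}
bottleneck = min(limits, key=limits.get)

for res, vms in limits.items():
    print(f"{res}: supports about {vms:.0f} VMs")
print(f"First resource exhausted: {bottleneck}")
# Whatever headroom remains in the other resources is paid for but unusable
# until you add another full node.
```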

The third company I interviewed with has their hands in every area of the Software-Defined Data Center (SDDC). In fact, it could be argued that they coined the term. It should come as no surprise, then, that they are adding to their software-defined storage (SDS) story. The advantage they have is that they are integrating all of their components together into a cohesive, in-kernel offering. The problem here is that for virtualized workloads, which have a lot of variation in requirements, it’s very difficult to predict which resource the server will run out of first. You can add various components over time (e.g. PCIe flash cards, SSDs, hard disks) as pointed out here; however, this can box you into a corner of constantly micro-managing the resources in your servers. Isn’t this exactly the type of thing that virtualization and converged infrastructure are designed to eliminate? They do have a compelling story to tell, to be sure, but there is more to the storage world than purely virtualized workloads. I would also agree that their concept of objects and components is a step in the right direction. Object storage has legs.

Completing the Puzzle

As I tend to do when I have these sorts of philosophical questions, especially when deciding on a new company to work at (which really is what we do; it’s us evaluating the company as much as it is them evaluating us), I took my search for answers to the web. I started to check around to see what some competitors of the aforementioned companies were doing and, equally important, whether any of them were hiring.

I visited all of the usual haunts when searching for this sort of thing. In this case, I reviewed some of the recent Tech Field Day events to see which companies were participating, and what was new, different and innovative about what they were selling. One vendor stood out from this list: Coho Data.

Coho Data was founded by the XenSource team and leverages their experience supporting web-scale virtualized compute and storage for Amazon to create a new model for scale-out storage, one that brings web-scale operations and economics to any enterprise datacenter. The company was launched publicly last October, with several hosting providers participating in last year’s POCs and now running the v1 GA code. They are building a VMware storage building block geared toward private cloud deployments.

The Coho Data base offering delivers 180K IOPS in 2U, with linear performance scaling using patented OpenFlow SDN technology and the ability to mix and match heterogeneous hardware in your cluster to match your specific application performance needs dynamically.

To me this type of storage makes sense for the specialized usage requirements of today’s virtualization and cloud deployments, and for the changing storage needs of tomorrow. Add to that the support for bare-metal object storage and it sounds like a well-thought-out storage portfolio to me. This is the mission of Coho Data.

This sounded like a unique combination of SDS and SDN, so I had to learn more. I looked at all of their Tech Field Day videos and whitepapers, as well as their website and, lo and behold, they were looking for a TME. Nice!

Coming Full Circle

By the time I headed out to The Valley (for the third time in about a month) I had offers from the other two start-ups I had spoken with, so I wasn’t expecting a third offer, to be honest. I figured: let’s finish the face-to-face interviews, meet the team and see how it goes. The three or four people I met with were very personable, and I felt the best culture fit of all the companies I had talked with. Combine that with the fact that they had just released a GA product and had a plan going forward, and I felt a bit more at ease. Finally, the pedigree of the management team and their vision for future paths of revenue left me blown away!

At the end of the day, I had an offer letter on-site and accepted at 9:00 AM the very next morning. I hadn’t even slept after my red-eye the night before. It was one of the easiest decisions I have ever made, and I very much look forward to what’s in store for the future. From my perspective, there looks to be unlimited potential!

Interested in Coho Data? You should download our datasheet or get the independent analyst review by ESG here.
