Journey of a Storage Admin Part 2: Scaling from TB to PB and the advent of Virtualization.
As I mentioned in my previous post, I was looking for a scale-out storage solution to solve some of the studio’s pain points. But before I could find one, ESC closed its doors.
When one door closes, another opens
As wayward storage administrators do, I moved on to a new opportunity at ImageMovers Digital (IMD), a subsidiary of Disney. At IMD, I was lucky enough not to inherit an existing storage infrastructure that I would have to build upon. It was a great position to be in: I could architect the storage landscape as I saw fit!
The environment started small, at roughly 50 employees, but within two years it grew to 500 people across four new sites. That growth required converting two dilapidated aircraft hangars into state-of-the-art offices with an accompanying data center, out of which the studio produced and released the movie “A Christmas Carol”. The ability to scale while delivering a movie on time was an extraordinary feat by any studio’s standards, and I was proud to be part of it!
As you can imagine, scaling to meet headcount and delivering a movie on time meant that IT was constantly under the gun, especially with storage. Storage was a key component at IMD, and I was given the task of addressing the storage demands that came with delivering a movie on time while scaling the environment. I didn’t want a repeat of my experience at ESC: continuously applying band-aid solutions to scale the environment. I decided up front to consider a few of the scale-out solutions I mentioned earlier.
I did my due diligence and reviewed what EMC, IBRIX, HP, DDN and BlueArc offered, but those architectures only scaled up and didn’t fit our performance and, especially, data volume requirements. In the end, it came down to two contenders for their ability to scale out: Isilon and NetApp C-mode (with Spinnaker integrated).
Without going into all the gory details: based on performance benchmarks, compatibility testing, and the simplicity of setup, configuration and management, we quickly determined that NetApp C-mode would be harder to implement. It didn’t meet our performance and usability requirements, and it would have required more storage administrators and contract services to fine-tune the cluster. We wanted to maintain a small and agile systems team, and this didn’t seem to be the solution for that.
With all this consideration behind me, it made sense to go with Isilon. However, we all know that cost and the vendor’s financial viability are always factors in the decision making. Since Isilon was (at the time) a relatively new company with no presence at Disney prior to IMD, it was a bit of a hard sell. Thankfully, technology won over politics that day.
The scale-out architecture of Isilon was designed to be flexible (much like the Coho Datastream) and delivered well in IMD’s environment, where demands for capacity arrived dynamically. We were able to deploy a small cluster of 16 nodes at first and kept scaling the cluster based on the studio’s needs. The core storage infrastructure grew to seven clusters serving different workloads, nearline storage, backups and replication across four sites, totaling 2PB, managed primarily by one person.
I truly believe the early adopters that embrace new innovation come out ahead in the long run, as they’re the ones pushing the boundaries in technology, which drives down costs and delivers results.
Over the course of my tenure at IMD, I really started to believe in the scale out storage solution and what it could solve for other customers. I made the move to the vendor side of the table with Isilon as a Systems Engineer for several big accounts: a large chip manufacturer and many large animation studios (I can’t mention which ones…).
Now, a quick note about my characterization of these vendors’ products. This is a snapshot of where storage was as I worked with it over the past ten years or so. I’m not promoting Isilon, and I’m not trying to beat anyone up either. NetApp C-mode and the other storage vendors have come a long way since I evaluated them seven years ago.
Today, the architectures of all of these systems are fundamentally the same as the day they were first designed, and in some senses the same can be said of many of the newer storage architectures I looked at before joining Coho Data. They are ‘old wine in a new bottle’: the same storage stack, the same thinking around monolithic controllers, and design assumptions based largely on spinning drives. They are trying desperately to leverage flash, but only by duct-taping it onto the tops of their existing systems. They still have intrinsic scaling performance problems and the underlying issue of fixed resources on one or a few controllers.
I believe storage trends in enterprise datacenters are leaning towards the lower cost of the scale-out approach over traditional scale-up, with its complexity of managing multiple silos. This was absolutely evident in my interactions with customers during my time at Isilon. Especially with the rise of virtualization in the datacenter, I noticed customers wanted to combine scale-out with virtualization for simplicity of management and scale. However, Isilon’s architecture wasn’t able to deliver:
- fast, small random IO
- high performance with very low latency
- a single virtual name space (critical for VM storage management)
This was because Isilon’s OneFS was not inherently designed for virtualized workloads. It was apparent that customers were looking for an alternative solution that could scale storage for virtualized environments without compromising performance and simplicity.
By nature, technologists always gravitate towards innovation, and I am no exception. When my path crossed with Coho Data, my jaw dropped. I’ve used a lot of enterprise storage throughout my career, and I believe that Coho’s architecture is the kind of thing customers have been waiting for. It really is a completely different approach to storage system design: one engineered from the ground up around a bare-metal object store optimized for flash, without the baggage of a traditional storage stack. By leveraging SDN and OpenFlow to scale out, it eliminates bottlenecks in the storage network as systems scale from small to large. Currently, Coho Data provides a single presentation layer (an NFS IP) to VMware, delivering high performance with low latency, simplified scalability and ease of management in a storage utility model.
There is a lot of innovation going on in storage right now — much more than I saw through administering a few generations of enterprise arrays. Innovation is about switching your thinking about how storage is consumed, managed and extended to the network. I joined Coho because I was excited at how broadly the team thinks about storage system design, and how ambitious a system they (now we) are working to deliver to customers.
Thanks for joining me on this journey down memory lane to present-day where I’m now helping customers just like myself benefit from the value of the next generation of storage innovation. Want to learn more about Coho? Don’t hesitate to drop me a line: firstname.lastname@example.org.