Managing Storage or a 4-Hour Root Canal Every Day?

Managing storage in an IT environment has always been a source of constant pain. It can be equated with getting a root canal without the Novocaine, done over and over with no end in sight. These problems still haven’t been solved entirely, but I believe a few companies like Coho Data have tackled them head-on. Let’s take a step back to understand why we have this storage pain and how I’ve dealt with it over the years.

The path to the Dentist’s Office

I started my IT career, much like a lot of my colleagues, as a Support Analyst on an IT Help Desk. As I moved up through the ranks, I eventually became a system administrator and was tasked with choosing both the hardware and the software we used to solve business problems… and, of course, managing it all on a daily basis (the pain starts).

My background (prior to virtualization) involved a lot of Linux and open source, so I started out managing direct-attached storage shared via an iSCSI target or NFS, and sometimes SMB. The company I was at was very spend-conscious, especially with software, so we eschewed Microsoft products wherever possible, and instead of buying a specialized appliance for each job, we tried to host many, many application services on a single box. Don’t forget, these were the days before the luxuries of virtualization, so application and storage management was a nightmare!
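For flavor, here’s roughly what sharing that storage looked like. This is a minimal sketch rather than our actual config: the path and client hostname are made up, and options varied by distro, but an NFS export of that era boiled down to a line in /etc/exports:

    # /etc/exports -- path and client hostname are illustrative
    /srv/appdata   appserver01(rw,sync,no_subtree_check)

    # Re-read the exports table after editing, then sanity-check it
    exportfs -ra
    showmount -e localhost

Simple enough for one box; the trouble was repeating and babysitting this on every server that had its own disks.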

Keeping all of those services running all of the time was a major drain on my time. Not only that, but the storage hosting it all was unreliable and very hard to expand. I remember laboriously rebuilding several failed RAID volumes, as well as dealing with backups on the only server on the network with a tape drive attached. What a pain in the butt.
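For those who never had the pleasure: with Linux software RAID, a rebuild went something like the sketch below. The mdadm device names are hypothetical, and hardware RAID controllers each had their own (usually clunkier) tooling:

    # Check array health -- a failed member shows up flagged (F) here
    cat /proc/mdstat

    # Pull the failed disk out of the array and add its replacement
    # (md0, sdb1, and sdc1 are example device names)
    mdadm --manage /dev/md0 --remove /dev/sdb1
    mdadm --manage /dev/md0 --add /dev/sdc1

    # Then wait, sometimes for hours, watching the rebuild crawl along
    watch -n 5 cat /proc/mdstat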

The Dentist is here

Eventually, the company got tired of the downtime and decided to purchase an enterprise network storage solution. This, in and of itself, had its own set of challenges. Our evaluation of the competing storage solutions boiled down to a few key factors, which still apply today: we had to balance performance, price, and manageability to find the right one.

The use case for us wasn’t performance-intensive, so a solution that was easy to manage and reliable was the focus. Of course, price was also a concern, as we didn’t have an unlimited budget (who does?!).

At around the same time, I was introducing VMware to the organization. At first we used VMware Workstation for the developers, but later I found VMTN, and for a small cost ($300) I purchased access to the entire catalog of VMware products for evaluation purposes. This drove the adoption of VMware in our datacenter. It started out the same as our old environment, with direct-attached storage, and of course the same painful management issues that go along with that.

This led us down the path of purchasing shared storage (in this case, NetApp) in the hope of alleviating some of these pain points. The NetApp served as the storage backing our VMFS datastores. It wasn’t easy to set up, especially for a NAS/SAN newbie, but it was orders of magnitude easier than the islands of storage we’d had before, when each server’s disks had to be managed one by one. Not fun!

Doc, why do I still feel pain?!

I remember learning the ONTAP commands to create the aggregates, then the volumes, then the LUNs… both via FilerView and through the CLI. As straightforward as those steps were for our use case, it would have been great if the system had taken care of more of them on its own. As we added more virtual workloads, it became a challenge to load-balance and separate them without running into performance issues. I had to start dedicating certain disk groups to certain workloads to manage the contention inherent in everything co-existing on the same box. Since I didn’t know what to look for in terms of bottlenecks, troubleshooting was hard when problems occurred, and I spent hours trying to diagnose issues that my storage should have been managing for me.
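From memory, that provisioning workflow looked roughly like the following 7-mode-style CLI session. The names, sizes, and disk counts here are illustrative, and exact syntax varied by ONTAP release, so treat this as a sketch of the ritual rather than a runbook:

    # Build an aggregate from 16 spare disks, then carve a volume from it
    aggr create aggr1 16
    vol create vmware_vol aggr1 500g

    # Create a LUN for VMFS and map it to the ESX hosts' initiator group
    lun create -s 400g -t vmware /vol/vmware_vol/vmfs_lun0
    igroup create -i -t vmware esx_hosts iqn.1998-01.com.vmware:esx01
    lun map /vol/vmware_vol/vmfs_lun0 esx_hosts 0

Every one of those steps was a chance to get a size, an ostype, or a mapping wrong, and none of it knew anything about the workloads that would eventually land on it.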

Here is the Novocaine

Fast-forward about eight years. There is now so much intelligence built into the next generation of storage (Coho) that tasks like provisioning datastores and tiering across different types of storage media are done automagically, along with the ability to do big data analytics… the list goes on and on, and we’re only just beginning! It’s amazing that technologies that were bleeding-edge back then are now being applied to storage in the mainstream.

It’s a welcome improvement over the manual tasks storage admins faced in the past. With many of today’s network storage arrays, the built-in intelligence takes out a lot of the guesswork and can make adjustments before problems occur, all the while giving you more visibility into what’s happening; manual performance tuning is now largely a thing of the past. Storage admins today definitely have it easier, and they can concentrate on what’s important for the business.

It has helped me as well, since I can concentrate on deploying “solutions” instead of “products”… which, in the end, is what businesses are looking for. I’m sure the experiences of my colleagues are quite similar. We’ve come a really long way in a short period of time, and for that I am truly thankful!

Interested in Coho Data? You should download our datasheet or get the independent analyst review by ESG here.
