Is Your Storage System Ready for Change?

The first two articles in this series focused on embracing change and on how to measure its impact. Being willing to change and knowing how to assess the impact of a change are critical skills for any IT professional. This article focuses on what might be the most difficult aspect of change for an IT organization:

How do we build productive infrastructures that will be ready to change as needed?

In the datacenter, the infrastructures we are deploying today are nothing like what we deployed in the past. Mainframes used to be the stars of the datacenter, and the term "forklift upgrade" came from the need to literally use a forklift to replace an old mainframe with a newer model. When I learned this as a newcomer to the IT profession, my response to my mentor was "That is insane!"

But it wasn’t insane; at the time, it was necessary. Soon, however, mainframes gave way to standalone compute nodes, which cost less while meeting or even surpassing most organizations’ IT needs. The datacenter was soon populated with standalone servers, each with its own locally installed storage and each underutilizing the physical resources allocated to it.

When virtualization for x86 infrastructures was just emerging, I was lucky enough to work for someone who immediately saw the business value of virtualizing our environment. Our team was charged with maintaining a large software testing lab. When we realized that virtualizing those test systems allowed us to deploy up to twelve times more systems per physical host, we shared the results with other business units. We were early advocates for virtualizing not just the test labs, but the production infrastructures as well. But being early means facing the disbelief of your colleagues, who aren’t ready to change. While I may have thought the status quo was insane, others thought the same of me.

But now virtualization is the new normal. It seems that the only insane thing we can do in IT is to not change!

The Virtualization Butterfly Effect

With virtualization taking root in our datacenters, centralized storage became a critical component of infrastructure design. Just as the proverbial butterfly flapping its wings in Tokyo results in a storm in New York, virtualization started as a ripple and became a wave of change for IT infrastructures, reaching well beyond the elimination of traditional physical servers.

The first array I ever worked on, before virtualization was mainstream, supported up to two terabytes of data, accessible only through Fibre Channel switches and host bus adapters. Another limitation of that unit was that disk RAID groups had to be created and deleted in order, so if the physical storage had to be reallocated, a data migration project was almost inevitable. Once virtualization emerged, it was obvious that an array with such limitations would not suffice, and we had to start shopping for the next one.

As storage evolved, fueled by the spread of virtualization, I started working on arrays that could be expanded with disk shelf units and that let you reallocate physical storage without worrying about where the disks resided within the system. We still had to connect via Fibre Channel, though, despite all the Ethernet equipment we were purchasing for other projects.

This finally led to arrays capable of supporting multiple protocols! NFS was suddenly an option we could implement, and there were all sorts of new features I could deploy to take greater advantage of my virtual infrastructure. With every new generation of storage I deployed, you would think I was ecstatic about acquiring a new array, but I wasn’t, because…

“Storage Array Upgrade Project” is Code for “Drive People Insane”

If you have ever had to replace a production storage array, you know that everyone is under a lot of pressure to ensure that nothing goes wrong. You have to coordinate every step of the process with multiple teams so that no system suffers a storage outage. Even in a completely virtualized environment you cannot risk migrating VMs with tools like Storage vMotion from a high-performance tier to a lower-performance tier, so a lot of planning happens behind the scenes before, during, and after a new storage purchase. Storage still seems to be a static, isolated beast within the datacenter that just will not be taught new tricks like intelligently tiering your workloads.
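To give a sense of how much ceremony even a single move involves, here is a minimal sketch of one Storage vMotion performed programmatically with pyVmomi, the open-source Python SDK for the vSphere API. The vCenter address, credentials, VM name, and target datastore name are placeholders I have made up for illustration; in a real upgrade project this step would be scheduled, verified, and repeated for every VM on the old array.

```python
# Hypothetical example: one Storage vMotion via pyVmomi. Hostnames, credentials,
# and object names are placeholders, not values from any specific environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim


def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type whose name matches."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()


si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-server-01")
    target_ds = find_by_name(content, vim.Datastore, "new-array-tier1")

    # Storage vMotion: relocate only the VM's disks; the host it runs on is unchanged.
    relocate_spec = vim.vm.RelocateSpec(datastore=target_ds)
    WaitForTask(vm.RelocateVM_Task(spec=relocate_spec))
finally:
    Disconnect(si)
```

Multiply that by hundreds of VMs, add change windows and rollback plans, and it is easy to see why these projects consume whole teams.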

This is why I was never happy about a storage array upgrade project. They are difficult, time-consuming, and, even when everything goes right, not particularly rewarding.

Sure, I would be grateful for the new features and improvements that a new storage array would bring, but the work required to reap those benefits demanded a complete commitment from me and my team. Instead of innovating and improving the way our company did business, we were spending all of our time merely keeping the lights on. Storage upgrades were necessary evils to be avoided for as long as possible.

Which means changing the storage was always something we would try to avoid, and that (are you sensing a theme here?) is just insane.

Insanity – Doing the Same Thing Over and Over but Expecting Different Results

This was the key problem that other IT professionals and I kept running into with our storage environments. We repeated the process of buying whole new storage arrays instead of fixing the underlying problem: the need for flexible storage. That is probably why every storage upgrade project ended with someone saying, “Hopefully this one lasts longer than the last one.”

We need storage systems that can be acquired in pre-determined units of capacity and performance that will grow to include new technologies as they emerge instead of being replaced by them. We need storage that will automatically migrate data and re-balance workloads without human intervention. We need storage that we can purchase today that uses the best technology available and that can be expanded upon in the future with the best technology available.

We need Coho Data’s storage solution!

Coho Data’s storage solution takes a unique approach: it treats your data the way you treat your virtual machines, abstracting the workload from the underlying physical infrastructure. Our object-based datastore is distributed across every node in a Coho Data cluster. Our advanced analytics match workload requirements with the nodes best able to meet the performance and capacity needs of the infrastructure. This combination allows your infrastructure’s storage to adapt to the current demands of your organization instead of forcing you to adapt the organization’s expectations to the storage system’s limitations.

With Coho Data’s commitment to using the best commodity hardware available for building cost-effective storage nodes, you know that your storage will keep up with the latest advancements in storage hardware. You’ll never have to do a forklift upgrade again, because Coho Data uses a modular storage node architecture built on Software Defined Networking (SDN), which allows each node to operate independently within the cluster. As new generations of hardware are released, simply connect them to our SDN switches hosting our OpenFlow controller. No more hardware upgrade projects, just increases in storage capacity and performance!

The best part is that you only configure a Coho Data solution once. As you add new nodes and features, the system configures them for you. You simply point your hypervisors at the Coho Data NFS datastore and Storage vMotion your VMs for the last time; the Coho Data solution makes sure that each VM gets the capacity and performance it needs. There are no LUNs, volumes, RAID groups, or aggregates to design! Just the storage performance you need at the capacity you need today, and an architecture ready to accommodate what you need in the future.
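To illustrate how small that “last” step can be, here is a minimal sketch, again using pyVmomi, that mounts a single NFS export on every host managed by vCenter. The NFS server address, export path, and datastore name are placeholder values standing in for whatever your storage system actually presents. Once the datastore is mounted everywhere, the same Storage vMotion call shown earlier moves each VM onto it.

```python
# Hypothetical example: mount one NFS export on every host so that VMs can be
# Storage vMotioned onto it. The server address, export path, and datastore
# name are placeholders, not values from any specific product.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                    [vim.HostSystem], True)
    for host in hosts.view:
        # Every host mounts the same export, so any VM can run on any host.
        nas_spec = vim.host.NasVolume.Specification(
            remoteHost="203.0.113.10",          # placeholder NFS server address
            remotePath="/exports/datastore01",  # placeholder export path
            localPath="shared-nfs-datastore",   # datastore name shown in vSphere
            accessMode="readWrite")
        host.configManager.datastoreSystem.CreateNasDatastore(nas_spec)
    hosts.DestroyView()
finally:
    Disconnect(si)
```

That mount-once, migrate-once pattern is the point: after it, capacity and performance changes happen behind the datastore rather than in front of your hypervisors.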

Combine this with our forthcoming API, which will allow you to add new functionality of your own design, and you have a system capable of changing to meet all of your storage needs. We have already seen how working with customers to enhance the Coho Data solution resulted in our new SnapAttach feature, and we are eager to see what kinds of innovations customers will design on their own when we put them in control.

Our goal at Coho Data is to provide a storage solution that is built for change: one that is flexible, fast, and future-proof, and that you can manage through a simple and intuitive GUI. Change over to Coho Data for your storage needs, so that you can focus on innovating your business and not just your storage!

Interested in Coho Data? You should download our datasheet or get the independent analyst review by ESG here.
