Starting the fabric journey – part two –

If you read my usual social output you’ll know that Data Fabric is right at the top of my favourite topics. It’s a key discussion with IT decision makers in every business I speak with: “How do I build IT that allows us to take advantage of current technology shifts?” is normally the starting point, and the discussion quickly turns to “How do I build an appropriate data fabric?”. A few weeks back I posted the first in a series of practical posts on why and how to get started (you can find that post here). Now you’ve caught up on that, welcome to part two, where we look at an alternative entry point into the data fabric.

Part one, which looked at NetApp AltaVault, was very much about starting at the edge of your infrastructure. In this post, we look at bringing the data fabric right to the core of your storage.

I have ended many of these data fabric posts by mentioning that, although much of my data fabric conversation is focused on NetApp, you don’t have to be a NetApp user to design a data fabric. In my experience, however, none of the other major storage names are delivering anything like as complete a data fabric portfolio as NetApp. So in this post, I’d like to challenge you to look at what NetApp can deliver for you.

There’s the challenge for me – let’s take it on!

I’m assuming that those reading this post are already aware of NetApp as a brand. Very briefly: formed in 1992, they are now the world’s largest independent storage company (following the pending purchase of EMC by Dell), turning over in excess of $6bn and employing around 12,000 people across the globe. Always a hugely innovative company, they are responsible for many of the things you expect as the norm in enterprise-class storage today – NAS, multiprotocol connections, efficient zero-overhead snapshots and space-efficient cloning, to name just a few – and today that innovation continues with the development of the data fabric strategy.

The core technology that drives much of this innovation is the Data ONTAP operating system, currently at version 8.3.2. Contrary to popular myth, NetApp are not just “the ONTAP company” – there is a portfolio of non-ONTAP platforms such as E-Series, SolidFire, StorageGRID and AltaVault. However, ONTAP is a key building block of the NetApp data fabric strategy, and it is the glue that brings together the entire portfolio, as well as the physical, virtual and cloud worlds.

In the next few paragraphs we are going to look at what ONTAP is and why you should consider it as a core part of your fabric.

What is ONTAP?

ONTAP is NetApp’s storage operating system and is the most widely deployed storage OS in the world.

It is hugely flexible in deployment, supporting installation on physical, virtual and cloud infrastructures. It is also massively scalable, supporting scale-up for capacity (add additional disk shelves to your deployment) and scale-out for performance (add additional compute capability). You can mix any type of storage media into your setup, from super-fast flash, to big-capacity SATA, to limitless cloud – ONTAP just handles it.

It not only delivers flexibility, it does so while maintaining all of the enterprise-level features we demand, from snapshot backups to space efficiency, running across all of your media types and deployment types without compromise. No other storage operating system delivers this level of flexibility.

Why is ONTAP different?

That all sounds great, but don’t others claim to do much of that? Well, yes – in some cases they can deliver scalability (upwards and outwards, although not usually both), they do offer data protection, and they are highly available. So what is different about NetApp and ONTAP?

Data ONTAP is designed to deliver four key goals as part of a robust fabric:

  • Nondisruptive operations
  • Seamless scalability
  • Proven efficiency
  • Ability to embrace new technology – for example flash, cloud and object storage – with seamless movement between them

These goals are achieved via two key technologies:

  • Clustering
  • Data mobility

Let’s look at how those two key technologies deliver what we need.

Why is an ONTAP cluster useful?

When we are trying to deliver truly non-disruptive storage (and let’s face it, who can afford downtime in their infrastructure for maintenance and upgrades these days?) and seamless scale, many of the traditional storage deployments give us a problem. Let me explain.

With many traditional storage arrays, deployment is akin to that of a traditional Windows cluster: multiple nodes (active/active or active/passive), each delivering resources and services in isolation. In the event of a node failure, those services fail over to a partner to maintain availability, which is great – exactly what you want from a highly available platform. However, this model lacks flexibility: services only move between controllers in the event of a failover, meaning resources can’t be easily shared, and moving resources normally means disruption to normal service. In the modern data centre, that kind of disruption and lack of seamless movement is no longer acceptable.

That’s what an ONTAP cluster overcomes. An ONTAP cluster operates much more like a virtual server environment. Think about how our favourite hypervisor cluster works for a moment: the hypervisor is deployed on multiple nodes, abstracts the underlying hardware and storage, and presents a pool of resources. We segment this pool with virtual machines, and those machines are the resources we connect to. We are never really interested in the underlying resources, just the virtual machine endpoint.

What benefits do our favourite hypervisors bring to our IT infrastructure? Complete flexibility and scalability: the ability to move our workload to any part of our cluster without service interruption, with our workload delivered from the most appropriate place at any given time.

Our virtual infrastructure is completely and seamlessly scalable. If we want more processors, more memory or more storage, we drop it in, it is automatically available to us, and we can move our workloads around non-disruptively to take advantage of it.

Wouldn’t it be great if our storage did that? If we think about data fabric, it’s exactly what we need: the ability to move our data anywhere we like so we can take advantage of new capabilities, be that extra compute, higher-performance disk, or maybe a virtual or cloud storage controller. That would be great, wouldn’t it? Well, that is exactly what ONTAP delivers.

ONTAP is deployed in exactly the same way as our favourite hypervisors: it abstracts the physical layer of our hardware controllers and presents a pool of shared resources, into which we place storage virtual machines which, like a virtual machine on a hypervisor, have complete flexibility of movement and scale. Need more compute resource? Add a controller and bring that power to the cluster. Need better performance from your disk? Drop flash drives in and seamlessly move your storage workloads across to them, all completely without disruption to daily operations.
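To make that a little more concrete, moving a workload between media tiers in a cluster is a single operation from ONTAP’s command line. A minimal sketch (the cluster, SVM, volume and aggregate names here are hypothetical, and exact syntax can vary by ONTAP release, so treat this as illustrative rather than definitive):

```
# Move a volume from its current aggregate to a new flash-based
# aggregate, while clients stay connected throughout
cluster1::> volume move start -vserver svm1 -volume db_vol -destination-aggregate aggr_flash1

# Monitor the move and the final cutover
cluster1::> volume move show -vserver svm1 -volume db_vol
```

The point of the sketch is that the movement happens underneath the storage virtual machine: the clients keep talking to the same SVM endpoint while the data relocates, which is exactly the hypervisor-style behaviour described above.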

[Diagram: a clustered Data ONTAP (cDOT) cluster]

How does ONTAP make my data fabric work?

Nothing destroys the ability to build a flexible data fabric more than solutions that create data silos – data that can’t easily be moved from one place to another, making it really difficult to have the agility and flexibility you need. Breaking those silos is key to how ONTAP delivers our fabric.

If we look at our data fabric diagram, we can see ONTAP at the very heart of it. And what makes it that beating heart? The power of NetApp’s SnapMirror capability.

[Diagram: the NetApp data fabric, with SnapMirror at its heart]

This technology has been opened up to ensure that we can move data between all elements of the NetApp portfolio, and that’s not a distant dream: everything in the portfolio either does this now or will in the not-too-distant future. This isn’t slideware, but a technology reality right now.

It is this ability to mirror anywhere that makes ONTAP such a critical part of the data fabric. Host data in your on-premises storage infrastructure and want to send it to the cloud? Great, just mirror it across. Want to send it to an E-Series? Just mirror it across. Want to send it to a virtual or cloud ONTAP instance? Great, set up a mirror and away you go. Want to mirror between public cloud providers? Yep, you guessed it: mirror and go.
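For a flavour of what “just mirror it across” looks like in practice, here is a minimal SnapMirror sketch from the destination cluster’s command line. The cluster, SVM and volume names are hypothetical, a peer relationship between the two clusters is assumed to already exist, and syntax details vary between ONTAP releases, so check the documentation for your version:

```
# On the destination, create a mirror relationship from the source
# volume to a pre-created destination volume
dest::> snapmirror create -source-path svm1:vol1 -destination-path svm_dr:vol1_dr -type DP

# Seed the relationship with a baseline transfer; subsequent updates
# are incremental
dest::> snapmirror initialize -destination-path svm_dr:vol1_dr
```

Because the destination can be another FAS array, a virtual or cloud ONTAP instance, or another portfolio target, the same two-step pattern covers every “mirror and go” scenario described above.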

As we explored in part one, we can start that process at the edge of our data infrastructure via AltaVault, but deploying ONTAP as part of our core infrastructure opens up a wide range of data fabric options and allows us to move our core data to wherever we need it, whenever we need it there. This ensures that, regardless of our solutions, we are not creating any data silos in our network, completely opening up the capability to move our data where we need it. Ask yourself: is that something you can do right now?

Why should I go ONTAP?

Well, I wouldn’t presume to tell you that you should, so first ask yourself a few questions:

  1. Am I convinced that I need to start building a flexible data fabric?
  2. Do I think that the following capabilities are important to my future data infrastructure?
    1. Nondisruptive operations
    2. Seamless scalability
    3. Proven efficiency
    4. Ability to seamlessly embrace new technology – for example – flash, cloud, object storage

If you answer yes to these questions, then there is only one other question to consider:

Does my current primary storage infrastructure deliver all the capabilities of ONTAP while allowing me to begin to build a robust data fabric?

If you answer no, then maybe it’s time to look at NetApp and Data ONTAP to see how they can help you get the kind of infrastructure you need and start you on your data fabric journey.

Want to give ONTAP a look?

Well, now is a great time: I’m running a couple of data fabric seminars in our Liverpool office in early April, so if you are in the area, why not look us up?

There are two events – one for existing NetApp users and one for those who aren’t – check out the details below; you’re welcome to join us.

Data Fabric an Introduction

Data Fabric for NetApp Users

If you can’t make it, don’t worry – contact me in all the normal ways: via the comments here, on LinkedIn, or on Twitter @techstringy. You should get me on one of them.

Happy fabric building.

