Starting the fabric journey – part two

If you read my usual social output you’ll see that Data Fabric is right at the top of my favourite topics. It’s a key discussion with IT decision makers in all the businesses I speak with: “How do I build IT that allows us to take advantage of current technology shifts?” is normally the starting point, and the discussion then quickly turns to “How do I build an appropriate data fabric?” A few weeks back I posted the first in a series of practical posts on why and how to get started (you can find that post here). Now you’ve caught up on that, welcome to part two, as we look at an alternative entry point into the data fabric.

Part one, looking at NetApp AltaVault, was very much about how you start at the edge of your infrastructure; in this post we are looking at bringing the data fabric right to the core of your storage.

I have ended many of these data fabric posts by mentioning that, although much of my data fabric conversation is focused on NetApp, it doesn’t mean you have to be a NetApp user to design a data fabric. However, in my experience none of the other major storage names are delivering anything like as complete a data fabric portfolio as NetApp. So in this post, I’d like to challenge you to look at what NetApp can deliver for you.

There’s the challenge for me – let’s take it on!

I’m taking it that those reading this post are already aware of NetApp as a brand. Very briefly: formed in 1992, they are now the world’s largest independent storage company (with the pending purchase of EMC by Dell), turning over in excess of $6bn and employing around 12,000 people across the globe. Always a hugely innovative company, they are responsible for many of the things you expect as the norm in enterprise-class storage today – NAS, multiprotocol connections, efficient zero-overhead snapshots and space-efficient cloning, to name just a few – and today that innovation continues with the development of the data fabric strategy.

The core technology that drives much of this innovation is the Data ONTAP operating system, currently at version 8.3.2. Contrary to popular myth, NetApp are not just the ONTAP company; there is a portfolio of non-ONTAP platforms such as E-Series, SolidFire, StorageGRID and AltaVault. However, ONTAP is a key building block of the NetApp data fabric strategy and is the glue that brings together the entire portfolio, as well as the physical, virtual and cloud worlds.

In the next few paragraphs we are going to look at what ONTAP is and why you should consider it as a core part of your fabric.

What is ONTAP?

ONTAP is NetApp’s storage operating system and is the most widely deployed storage OS in the world.

It is hugely flexible in deployment, supporting installation on physical, virtual and cloud infrastructures. It is also massively scalable, supporting scale-up for capacity (add additional disk shelves to your deployment) and scale-out for performance (add additional compute capability). You can mix any type of storage media into your setup, from mega-fast flash, to big-capacity SATA, to limitless cloud; ONTAP just handles it.

It not only delivers flexibility, it does so while maintaining all of the enterprise-level features that we demand, from snapshot backups to space efficiency, running across all of your media and deployment types without compromise. No other storage operating system delivers this level of flexibility.

Why is ONTAP different?

That all sounds great, but don’t others claim to do much of that? Well, yes: in some cases they can deliver scalability (upwards or outwards, although not usually both), they do offer data protection and they are highly available. So what is different about NetApp and ONTAP?

Data ONTAP is designed to deliver four key goals as part of a robust fabric:

  • Nondisruptive operations
  • Seamless scalability
  • Proven efficiency
  • The ability to embrace new technology – for example flash, cloud and object storage – with seamless movement between them

These goals are achieved via two key technologies:

  • Clustering
  • Data mobility

Let’s look at how those two key technologies deliver what we need.

Why is an ONTAP cluster useful?

When we are trying to deliver truly non-disruptive storage (and let’s face it, who can afford downtime in their infrastructure for maintenance and upgrades these days?) and seamless scale, many traditional storage deployments give us a problem. Let me explain.

With many traditional storage arrays, deployment is akin to that of a traditional Windows cluster: multiple nodes (active/active or active/passive), each delivering resources and services in isolation. In the event of a node failure those services fail over to a partner to maintain availability, which is great – exactly what you want from a highly available platform. However, the issue with this model is that it lacks flexibility. Services only move between controllers in the event of a failover, meaning resources can’t be easily shared, and if we need to move resources it normally means disruption to normal service. In the modern data centre that kind of disruption and lack of seamless movement is no longer acceptable.

That’s what an ONTAP cluster overcomes. An ONTAP cluster operates much more like a virtual server environment. If we think about how our favourite hypervisor cluster works for a moment: our hypervisor is deployed on multiple nodes, abstracts the underlying compute and storage hardware, and presents a pool of resources. We segment this pool with virtual machines, and these machines are the resources we connect to. We are never really interested in the underlying resources, just the virtual machine endpoint.

What benefits do our favourite hypervisors bring to our IT infrastructure? Complete flexibility and scalability: the ability to move our workload to any part of our cluster without service interruption, having our workload delivered from the most appropriate place at any given time.

Our virtual infrastructure is completely and seamlessly scalable. If we want more processors, more memory or more storage, we drop it in, it is automatically available to us, and we can move our workloads around non-disruptively to take advantage of it.

Wouldn’t it be great if our storage did that? If we think about the data fabric, it’s exactly what we need: the ability to move our data anywhere we like so we can take advantage of new capabilities, be that extra compute, higher-performance disk, or maybe a virtual or cloud storage controller. That would be great, wouldn’t it? Well, that is exactly what ONTAP delivers.

ONTAP is deployed in exactly the same way as our favourite hypervisors. It abstracts the physical layer of our hardware controllers and presents a pool of shared resources, into which we place storage virtual machines which, like a virtual machine on a hypervisor, have complete flexibility of movement and scale. We need more compute resource? Let’s add a controller and bring that power to the cluster. We need better performance from our disk? Let’s drop flash drives in and seamlessly move our storage workloads across to them, all of it completely without disruption to our daily operations.

[Diagram: a clustered Data ONTAP (cDOT) cluster]
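To make that pool-of-resources idea concrete, here is a minimal, purely illustrative Python sketch of the model described above: a cluster that grows by adding aggregates (pools of disks) and moves a volume between them without changing the client-facing endpoint. All class and method names are my own invention for this post; this is not NetApp code or any NetApp API.

```python
# Toy model of the "storage as a resource pool" idea described above.
# Purely illustrative: names are invented for this sketch and bear no
# relation to NetApp's actual implementation or APIs.

class Aggregate:
    """A pool of physical disks (e.g. flash or SATA) in the cluster."""
    def __init__(self, name, media):
        self.name = name
        self.media = media
        self.volumes = []

class Cluster:
    def __init__(self):
        self.aggregates = {}

    def add_aggregate(self, name, media):
        # Scale-up and scale-out both look the same from here: new shelves
        # or new nodes simply add aggregates to the shared pool.
        self.aggregates[name] = Aggregate(name, media)

    def create_volume(self, vol, aggr):
        self.aggregates[aggr].volumes.append(vol)

    def move_volume(self, vol, src, dst):
        # Clients keep addressing the same logical volume; only its
        # physical home changes, which is the essence of a non-disruptive move.
        self.aggregates[src].volumes.remove(vol)
        self.aggregates[dst].volumes.append(vol)
        print(f"'{vol}' now served from {dst} "
              f"({self.aggregates[dst].media}); mount points unchanged")

cluster = Cluster()
cluster.add_aggregate("aggr_sata", media="SATA")
cluster.create_volume("projects", "aggr_sata")

# Need better performance? Add flash and move the workload, no downtime.
cluster.add_aggregate("aggr_flash", media="SSD")
cluster.move_volume("projects", "aggr_sata", "aggr_flash")
```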

How does ONTAP make my data fabric work?

Nothing destroys the ability to build a flexible data fabric more than solutions that create data silos – data that can’t be easily moved from one place to another – making it really difficult for you to have the agility and flexibility that you need. Breaking those silos is key to how ONTAP delivers our fabric.

If we look at our data fabric diagram we can see ONTAP at the very heart of it. And what is the thing that makes it that beating heart? The power of NetApp’s SnapMirror capability.

[Data fabric diagram: ONTAP at the heart of the NetApp portfolio]
This technology has been opened up to ensure that we can move data between all elements of the NetApp portfolio, and that’s not a distant dream: everything in the portfolio either does this now or will in the not-too-distant future. This is not a bit of slideware, but a technology reality right now.

It is this ability to mirror anywhere that makes ONTAP such a critical part of the data fabric. Host data in your on-prem storage infrastructure and want to send it to the cloud? Great, just mirror it across. Want to send it to an E-Series array? Just mirror it across. Want to send it to a virtual or cloud ONTAP instance? Set up a mirror and away you go. Want to mirror between public cloud providers? Yep, you guessed it: mirror and go.
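If it helps to picture what “mirror and go” means, here is a small, purely conceptual Python sketch of a uniform replication primitive between heterogeneous endpoints. The names and structure are invented for illustration and are not the SnapMirror API; the one real detail reflected is that SnapMirror performs a full baseline transfer first and then ships incremental updates.

```python
# Conceptual sketch of a "mirror anywhere" primitive: the same relationship
# works whatever the endpoint type. Names are invented for illustration;
# this is not the SnapMirror API.

from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    kind: str  # e.g. "on-prem FAS", "Cloud ONTAP", "E-Series"

@dataclass
class MirrorRelationship:
    source: Endpoint
    destination: Endpoint
    baseline_done: bool = False

    def initialize(self) -> None:
        # The first transfer copies a full baseline of the data.
        self.baseline_done = True
        print(f"baseline: {self.source.name} -> {self.destination.name}")

    def update(self) -> None:
        # Later transfers ship only the changes since the last transfer,
        # which is what makes regular updates cheap enough to run often.
        assert self.baseline_done, "initialize the relationship first"
        print(f"incremental update: {self.source.name} ({self.source.kind}) "
              f"-> {self.destination.name} ({self.destination.kind})")

core = Endpoint("ontap-core", "on-prem FAS")
cloud = Endpoint("ontap-aws", "Cloud ONTAP in AWS")

# Same verbs whatever the destination: the fabric-enabling property.
rel = MirrorRelationship(core, cloud)
rel.initialize()
rel.update()
```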

As we explored in part one, we can start that process at the edge of our data infrastructure via AltaVault, but the deployment of ONTAP as part of our core infrastructure opens up a wide range of data fabric options and allows us to move our core data to wherever we need it, whenever we need it there. This ensures that, regardless of our solutions, we are not creating any data silos in our network, completely opening up the capability to move our data where we need it. Ask yourself: is that something you can do right now?

Why should I go ONTAP?

Well, I wouldn’t presume to tell you that you should, so first ask yourself a few questions:

  1. Am I convinced that I need to start building a flexible data fabric?
  2. Do I think that the following capabilities are important to my future data infrastructure?
    1. Nondisruptive operations
    2. Seamless scalability
    3. Proven efficiency
    4. The ability to seamlessly embrace new technology – for example flash, cloud and object storage

If you answer yes to these questions, then there is only one other question to consider:

Does my current primary storage infrastructure deliver all the capabilities of ONTAP while allowing me to begin to build a robust data fabric?

If you answer no, then maybe it’s time to look at NetApp and Data ONTAP to see how they can help you to get the kind of infrastructure you need and start you on your data fabric journey.

Want to give ONTAP a look?

Well now is a great time, I’m running a couple of data fabric seminars in our Liverpool office in early April, so if you are in the area, why not look us up.

There are two events – one for existing NetApp users and one for those who aren’t. Check out the details below; you’re welcome to join us.

Data Fabric an Introduction

Data Fabric for NetApp Users

If you can’t make it, don’t worry: contact me in all the normal ways, via the comments here, on LinkedIn, or on Twitter @techstringy – you should get me on one of them.

Happy fabric building.



We built this security policy on rock’n’roll

Well, I’m pretty sure that was the song title – I’ve not done a post for a little while with a tenuous song-lyric link in the title, but the Starship 80s classic seems about right for this one… So what was behind this bit of song-title chicanery?

The surprising answer is a newsletter! This one came from one of our innovative security partners, Varonis, and created an interesting conversation between my company’s technical and sales folk.

What did this newsletter say that was such a great source of debate?

“How to Detect and Clean Cryptolocker Infections”

“Varonis customers have had success detecting and reacting to Cryptolocker infections, including the recent attacks, using DatAdvantage and DatAlert.”

That’s great news, isn’t it? We all know CryptoLocker and ransomware attacks of its ilk are potentially devastating to a business; at the least they are hugely inconvenient, and at worst they can cause critical data loss and all that that entails.

On reading this, one of my sales colleagues asked a great question: “does this mean this is a ‘cure’ for CryptoLocker?” This had special resonance for him, as one of his customers had a particular worry around this type of attack. You can see why he asked that question, but of course that’s not what the guys at Varonis meant (in fact, have a read of their excellent blog post on the subject), and, much to my sales colleague’s disappointment, we had to inform him that no, Varonis don’t have a “cure” and that’s not what they meant.

Well, what did they mean?

If you have a read of their post, you’ll see that what Varonis actually talk about is something I speak about often with clients: data protection is not just one thing, and there isn’t one magic bullet. In reality, data protection is like a great big onion (plug for another old blog post here – Data Security Is A Great Big Onion): it is multi-layered, from core data to people, and the complexity of the problem means that every threat needs multiple layers of protection to ensure you are not easily exposed.

Let’s take this example specifically. The Varonis toolset in question does not pretend to stop CryptoLocker; in fact, not only does it not stop it, in reality it doesn’t even know what it is. What Varonis actually do is embrace another of my favourite key tenets of security policy development: “assume you are already compromised”. If you’ve never had that discussion in your business, you really should. If you’re assuming that firewalls and anti-malware tools are all you need to protect yourself from the most devastating of malware attacks, you are probably setting yourself up for an unpleasant surprise.

If we then assume our systems are compromised, what on earth do we do about that to protect ourselves?

What do we do? In my experience, two things:

  1. Make sure we can spot the signs of compromise
  2. Make sure we have the ability to recover from any damage caused by the compromise

Because Varonis assume you are already compromised, their tools deliver step number one: they look for the signs of compromise, for behaviour outside the norm of your policies and the normal behaviour of your users. If we see deviation from those, we can act; in the case of a ransomware attack we can spot the unusual file access behaviour and apply rules that can stop it. And we do better than just stop it: the analytics engine also allows us to track the files affected by this behaviour.
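Varonis don’t publish their detection logic, so as a purely illustrative sketch of the general principle – baseline normal behaviour, flag deviations, and record what was touched – here is some Python I’ve invented for this post. The threshold, event format and function names are all assumptions, not anything from DatAdvantage or DatAlert:

```python
# Illustrative behaviour-based detection: flag a user whose rate of file
# writes suddenly exceeds a sane human baseline, and record every file
# touched so the scope of the damage is known for recovery.
# Thresholds, event format and names are invented for this sketch.

from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_WRITES_PER_WINDOW = 100   # far above a normal user's editing rate

recent_writes = defaultdict(deque)   # user -> timestamps of recent writes
touched_files = defaultdict(set)     # user -> files modified recently

def record_write(user, path, now):
    """Return True if this write pushes the user over the anomaly threshold."""
    window = recent_writes[user]
    window.append(now)
    touched_files[user].add(path)
    # Drop events that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) > MAX_WRITES_PER_WINDOW:
        # Step 1: spot the compromise. A real product would raise an alert
        # here and could block the offending session automatically.
        print(f"ALERT: {user} wrote {len(window)} files in {WINDOW_SECONDS}s")
        return True
    return False

# Simulate a ransomware process encrypting files at machine speed.
for i in range(150):
    if record_write("jbloggs", f"/shares/finance/doc{i}.xlsx", now=i * 0.1):
        break

# touched_files["jbloggs"] now lists exactly what recovery must restore.
print(f"{len(touched_files['jbloggs'])} files flagged for recovery")
```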

That’s a great starting point: it allows us to limit the damage and, importantly, greatly reduce both the time and cost of recovery. One of the issues with this kind of attack is that we don’t know the extent of the damage, and we can be finding encrypted files for months; what this tracking allows us to do is know exactly the extent of the attack, and exactly the files and users affected.

Once we have that information, step two kicks in: we identify the damaged data and move to our data recovery solution to recover an unaffected copy. The quicker we are in doing this, the less the damage and data loss, because if you’ve spent eight hours working on a major proposal that then gets ransomware’d and you only have nightly backups, you are going to lose all that work – but that is a whole different conversation around recovery point objectives.
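The recovery point arithmetic behind that example is worth making explicit: the worst-case work lost is bounded by the gap between recovery points. A quick back-of-the-envelope sketch, with illustrative figures only:

```python
# Worst-case data loss is bounded by the interval between recovery points
# (the recovery point objective). Figures are illustrative only.

schedules_hours = {
    "nightly backup": 24.0,
    "hourly snapshot": 1.0,
    "15-minute snapshot": 0.25,
}

for name, interval in schedules_hours.items():
    print(f"{name}: up to {interval:g} hours of work lost in the worst case")

# The eight-hour proposal above sits entirely inside the gap left by a
# nightly backup, but at most one hour of it is lost with hourly snapshots.
```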

Is this all pie in the sky? Does this stuff really happen? It certainly does. We had two instances last year where customers were victims of these very kinds of attack, and in both cases they had tools in place that massively reduced the impact. They had Varonis tools installed that identified the attack and quickly limited its spread, and they also had data protection capabilities that meant they carried out multiple snapshot backups during the day (NetApp based in these cases), so they could quickly recover the affected files from a backup no more than a couple of hours old, greatly reducing the impact, inconvenience and cost of these attacks.

Is the point of this post to say you have to run out and buy Varonis and NetApp solutions to protect you from ransomware attacks? No, of course not; if you regularly read my posts you know I try to avoid the blatant advert. All I’m saying is: understand a couple of things:

  1. Data security is multi-layered, there is no magic bullet.
  2. Assume you are compromised and think about how you mitigate the impact of compromise.

Understanding just those couple of things can have a big impact on the security of your data and can greatly reduce the damage caused by any kind of compromise. Do those things in themselves do the whole job? No, of course they don’t, but if you take those two things into your data security policy planning they will help. Look for tools that help you to meet those goals; I’ve mentioned a couple here in this post, but there are others, and some may be more suitable than others in your circumstances.

Hopefully the points here will be of use and will help you in building your data security policies with rock’n’roll – or at least with good strong security tools!

I’ve included a couple of links below for Varonis and NetApp and of course if you have any questions, contact me in the usual ways on Twitter or LinkedIn or give Gardner Systems a call on 0151 220 5552 and speak to one of the team.

For more information on Varonis click here

For more information on NetApp data protection solutions click here