Casting our eye over HCI – Ruairi McBride – Ep65

I’ve spoken a bit recently about the world of Hyper-Converged Infrastructure (HCI). As the technology continues to mature, with improved hardware stacks and software looking to take advantage of that hardware, it is becoming an ever more compelling prospect.

How do these developments, an HCI version 2.0 if you like, manifest themselves? Recently I saw a good example in a series of blog posts and videos from friend of the show Ruairi McBride, which demonstrated really well both the practical deployment and the look and feel of a modern HCI platform.

The videos focussed on NetApp’s new offering and covered the out-of-the-box experience, how to physically cable together your HCI building blocks and how to take your build from delivery to deployment in really easy steps. This demonstration of exactly how you build an HCI platform was interesting not just on a practical level; it also gave me some thoughts on why and how you might want to use HCI platforms in a business context.

With that in mind, I thought a chat with Ruairi about his experience with this particular HCI platform, how it goes together, how it is practically deployed and how it meets some of the demands of modern business would make an interesting podcast.

So here it is, Ruairi joins me as we cast our eye over HCI (I stole the title from Ruairi’s blog post!).

We start by discussing what HCI is and why its simplicity of deployment is useful; we also look at the pros and cons of the HCI approach. Ruairi shares some thoughts on HCI’s growing popularity and why the world of smartphones may be to blame!

We look at the benefit of a single vendor approach within our infrastructure, but also discuss that although the hardware elements of compute and storage are important, the true value of HCI lies in the software.

We discuss the modern business technology landscape and how a desire for a more “cloud like” experience within our on-premises datacentres has demanded a different approach to how we deploy our technology infrastructure.

We wrap up by looking at why, as a business, you’d consider HCI, what problems it will solve for you and which use cases are a strong HCI fit. Of course, it’s important to remember that HCI isn’t the answer to every question!

To find out more about NetApp HCI visit here.

Ruairi’s initial “Casting Our Eye Over HCI” blog and video series is here.

If you have further questions for Ruairi, you can find him on twitter @mcbride_ruairi.

Until next time.

Thanks for listening.

NetApp, The Cloud Company?

Last week I was fortunate enough to be invited to NetApp’s HQ in Sunnyvale to spend 2 days with their leadership hearing about strategy, product updates and futures (under very strict NDA, so don’t ask!) as part of the annual NetApp A-Team briefing session. This happened in a week where NetApp revealed their spring product updates which, alongside a raft of added capabilities to existing products, also included a new relationship with Google Cloud Platform (GCP).

The GCP announcement now means NetApp offer services on the three largest hyperscale platforms. Yes, that’s right: NetApp, the “traditional” on-prem storage vendor, are offering an increasing number of cloud services, and what struck me while listening to their senior executives and technologists was that this is not just a faint nod to cloud but central to NetApp’s evolving strategy.

But why would a storage vendor have public cloud so central to their thinking? It’s a good question, and I think the answer lies in the technology landscape many of us operate in. The use of cloud is commonplace; its flexibility and scale are driving new technology into businesses more quickly and easily than ever before.

However, this comes with its own challenges. While quick and easy is fine for deploying services and compute, the same cannot be said of our data and storage repositories; not only does data continue to have significant “weight”, it also comes with additional challenges, especially when we consider compliance and security. It’s critical in a modern data platform that our data has as much flexibility as the services and compute that need to access it, while at the same time allowing us to maintain full control and stringent security.

NetApp has identified this challenge as something upon which they can build their business strategy, and you can see evidence of this within their spring technology announcements, not only in the tight integration of cloud into their “traditional” platforms but also in the continued development of cloud-native services such as those in the GCP announcement, the additional capabilities in AWS and Azure, and services such as Cloud Volumes, SaaS backup and Cloud Sync. It is further reflected in an intelligent acquisition and partnering strategy with a focus on those who bring automation, orchestration and management to hybrid environments.

Is NetApp the on-prem traditional storage vendor no more?

In my opinion this is an emphatic no. During our visit we heard from NetApp founder Dave Hitz, who talked about NetApp’s view of cloud and how they initially realised it was something they needed to understand, and so decided to take a gamble on it and its potential. What was refreshing was that they did this without any guarantees they could make money from cloud, but simply because they understood how important it could potentially be.

Over the last 4 years NetApp has been reinvigorated with a solid strategy built around their data fabric and this strong cloud-centric vision, which has not only seen share prices rocket but has also seen market share and revenue grow. That growth has not come from cloud services alone; in fact the majority is from strong sales of their “traditional” on-prem platforms, and they are convinced this growth has been driven by their embracing of cloud: a coherent strategy that looks to ensure your data is where you need it, when you need it, while maintaining all of the enterprise-class qualities you’d expect on-prem, whether the data is in your datacentre, near the cloud or in it.

Are NetApp a cloud company?

No. Are they changing? Most certainly.

Their data fabric message, honed over the last 4 years, is now mature not only in strategy but in execution. NetApp platforms, driven by ONTAP as a common transport engine, provide the capability to move data between platforms, be they on-prem, near the cloud or straight into the public hyperscalers, while crucially maintaining the high quality of data services and management we are used to within the enterprise across all of those repositories.

This strategy is core to NetApp and their success, and it certainly resonates with the businesses I speak with as they become more data focussed than ever, driven by compliance, cost or the need to garner greater value from their data. Businesses do not want their data locked away in silos, nor do they want it at risk when they move it to new platforms to take advantage of new tools and services.

While NetApp are not a cloud company, during the two days it seemed clear to me that their embracing of cloud puts them in a unique position when it comes to providing data services. As businesses look to develop their modern data strategy they would be, in my opinion, remiss not to at least understand NetApp’s strategy and data fabric and the value that approach can bring, regardless of whether they ultimately use NetApp technology or not.

NetApp’s changes over the last few years have been significant, their future vision is fascinating, and I for one look forward to seeing their continued development and success.

For more information on the recent spring announcements, you can review the following;

The NetApp official Press Release

Blog post by Chris Maki summarising the new features in ONTAP 9.4

The following NetApp blogs provide more detail on a number of individual announcements;

New Fabric Pool Capabilities

The new AFF A800 Platform

Google Cloud Platform Announcement

Latest NVMe announcements

Tech ONTAP Podcast – ONTAP 9.4 Overview

IoT more than a sensor – Mark Carlton – Ep64

Buzzwords are a constant in IT. Cloud, HCI, analytics and GDPR are all in common parlance in technology discussions across businesses of all types. However, although these words are bandied about and serious discussions are had, not everyone is sure what some of these buzzwords mean, what the technology consists of and, importantly, what positive impact it has on an organisation, if any positive impact at all!

Let me present another contender in the buzzword Olympics: IoT, or “The Internet of Things”. What does that mean? What is a thing? And do I want things, let alone an Internet of them? The only thing I knew about IoT was that I didn’t really know much about it!

When I heard that a friend of the podcast had taken on a new role as an IoT Solutions Architect and had written a great introductory IoT blog post (Demystifying IoT), it seemed like a great opportunity to get some IoT education, not only for me but also for the Tech Interviews audience.

So, on this week’s show I’m joined by Mark Carlton, now an IoT Solutions Architect at Arrow ECS, and I ask him to share what he’s discovered in his time in the role, how he sees IoT as a technology and how implementing it can deliver value to a business.

We start off by trying to define what we mean by IoT and Mark shares how, like many a new IT trend, it isn’t really that new.

We also explore why IoT is more than just sensors and how in reality IoT is a platform architected from sensors, gateways and importantly analytics tools that can help us to make sense of the data we collect and turn it into something valuable.

We discuss how often IoT projects are too quick to focus on putting sensors in lots of places rather than starting with a focus on business outcomes and asking the question “What do I want to achieve with this sensor?”.

Mark shares the importance of looking at IoT projects like any IT project, with a focus on business outcomes, the why, how and what of a project, and not the technology.

We then explore use cases, how are people using sensor data to discover new things about their business. Mark also explains how it’s not only this additional data from increasing amounts of IoT that is useful but how access to large amounts of historic data is allowing us to find new trends and information which is creating brand new opportunities and ways of working.

We finish up by looking at security and compliance, both crucial elements of an IoT platform design and how it’s critical they are included right at the outset because adding security retrospectively to these platforms could be almost impossible.

Finally, Mark shares some advice on where to start and some sources of information to consider.

I hope this episode has helped you better understand this emerging technology platform and how it could serve your business, I know it certainly has helped me.

For more information you can follow Mark on Twitter @mcarlton1983

His blog at justswitchitonandoff.com

Until next time thanks for listening.

Taking a GDPR Journey – Mike Resseler – Ep63

GDPR has been a constant business conversation over the last 18 months or so; it’s discussed in the press, on the news and on social media, as well as in a handful of episodes of this podcast. However, much of the conversation has focussed on what you should be considering and doing to take on the GDPR challenge, while very little has come from those who have already made great strides on their compliance journey.

With that in mind, a few weeks ago I read a fascinating series of blogs from software company Veeam, which discussed the 5 principles they followed to build their compliance programme. What was interesting was that the posts talked about the practical steps they took, not about the technology they deployed or how their technology could help you; instead they shared the experiences and challenges Veeam faced building their business compliance programme.

As many of us are currently on our own compliance journey, I thought the opportunity to chat with someone who is already well down this path would be of real interest. So in this week’s podcast I’m joined by Mike Resseler. Mike is a Director of Product Management but is also a key member of Veeam’s global compliance team and has played a significant part in the way they have dealt with the challenges posed by GDPR.

In this week’s show Mike shares with us Veeam’s experience. We start at the beginning with the initial advice they took and research they did into what GDPR meant to them. We discuss the importance of putting together the right team to deal with business compliance and why it was important to realise the scope of the work they were about to undertake.

Mike also explains how it was important that Veeam saw GDPR as something that would have a positive impact on the business and how, although technology would play a part, this was something that would need a focus on people, workflow and procedures.

We also discuss how not everyone was enthused by the idea of business compliance, with some seeing GDPR as just a European problem, and how important it was that the compliance team educated the whole business on the importance of compliance.

We also look at the practicalities of building a compliance program as Mike shares the 5 principles Veeam developed to help them: knowing your data, managing your data, protecting the data, documentation and continual improvement. We discuss the importance of each step and the part they have played in building a global compliance program.

We wrap up looking at the future, discussing continual improvement, training and the way that Veeam are making compliance integral to everything they do across their business.

I hope you enjoy the fantastic insight that Mike provides into the way a company builds a compliance programme and tackles regulation such as GDPR.

To find out more from Mike you can find him on twitter @MikeResseler.

The original blog posts that inspired this episode can be found here https://www.veeam.com/executive-blog/our-journey-to-be-gdpr-compliant.html

Mike and his team have also produced this video in which they discuss how to accelerate your GDPR efforts https://www.veeam.com/veeamlive/accelerate-your-gdpr-efforts.html

Hope you enjoy the show and until next time, thanks for listening.

Building a modern data platform – Prevention (Office365)

In this series so far we have looked at getting our initial foundations right, at ensuring we have insight and control of our data, and at the components I use to help achieve this. This time, however, we are looking at something many organisations already use, which has a wide range of capabilities that can help manage and control data but which are often underutilised.

For an ever-increasing number of us, Office365 has become the primary data and communications repository. However, I often find organisations are unaware of many powerful capabilities within their subscription which can greatly reduce the risk of a data breach.

Tucked away within Office365 is the Security and Compliance Centre (protection.office.com), which is the gateway to several powerful features that should be part of your modern data strategy.

In this article we are going to focus on two such features, “Data Loss Prevention” and “Data Governance”; both offer powerful capabilities that can be deployed quickly across your organisation and can help to significantly mitigate the risk of a data breach.

Data Loss Prevention (DLP)

DLP is an important weapon in our data management arsenal. DLP policies are designed to ensure sensitive information does not leave our organisation in ways it shouldn’t, and Office365 makes it straightforward to get started.

We can quickly create policies to apply across our organisation that help identify the types of data we hold. Several predefined options already exist, including ones that identify financial data, personally identifiable information (PII), social security numbers, health records, passport numbers and so on, with templates for a number of countries and regions across the world.

Once the policies that identify our data types are created, we can apply rules to that data governing how it can be used; we can apply several rules and, depending on requirements, make them increasingly stringent.
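To make that idea a little more concrete, here is a minimal, purely illustrative Python sketch of what a DLP rule does conceptually: spot a sensitive data type (credit card numbers, validated with a Luhn check) and apply increasingly stringent actions depending on what is found and where it is going. This is not how Office365 implements its policies; the thresholds and actions are assumptions for illustration only.

```python
import re

# Candidate 13-16 digit sequences, allowing spaces or dashes between digits.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to filter out random digit strings."""
    digits = [int(d) for d in reversed(number)]
    total = 0
    for i, digit in enumerate(digits):
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return candidate card numbers that pass the Luhn check."""
    candidates = (re.sub(r"[ -]", "", match) for match in CARD_PATTERN.findall(text))
    return [c for c in candidates if luhn_valid(c)]

def evaluate_dlp_policy(text: str, shared_externally: bool) -> str:
    """Apply increasingly stringent rules depending on what is found."""
    matches = find_card_numbers(text)
    if not matches:
        return "allow"
    if shared_externally:
        return "block"               # sensitive data leaving the organisation
    if len(matches) > 5:
        return "notify-compliance"   # high volume, escalate internally
    return "warn-user"               # low volume, educate the user

print(evaluate_dlp_policy("Card: 4111 1111 1111 1111", shared_externally=True))  # block
```

Of course, the point of Office365 is that these detectors and tiered actions come predefined and are enforced across Exchange, SharePoint and OneDrive without us having to write anything like this ourselves.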

The importance of DLP rules should not be underestimated. While it’s important we understand who has access to and uses our data, too many times we feel this is enough and don’t take the next crucial step of controlling the use and movement of that data.

We shouldn’t forget that those with the right access to the right data, may accidentally or maliciously do the wrong thing with it!

Data Governance

Governance should be a cornerstone of a modern data platform. It defines the way we use, manage, secure, classify and retain our data, and it can impact the cost of our data storage, its security and our ability to deliver compliance to our organisations.

Office365 provides two key governance capabilities.

Labels

Labels allow us to apply classifications to our data so we can start to understand what is important and what isn’t. We can highlight what is for public consumption, what is private, sensitive, commercial in confidence or any other range of potential classifications that you have within your organisation.

Classification is a crucial part of delivering a successful data compliance capability, giving us granular control over exactly how we handle data of all types.

Labels can be applied automatically based on the contents of the data we have stored, they can be applied by users as they create content or in conjunction with the DLP rules we discussed earlier.

For example, a DLP policy can identify a document containing credit card details and then automatically apply a rule that labels it as sensitive information.
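As a rough sketch of that flow, and building on the credit card detector in the DLP example above, automatic labelling is essentially a function from content to classification. The label names here are hypothetical; yours will come from your own classification scheme, and this is in no way Office365’s actual labelling engine.

```python
# Hypothetical label names for illustration only. find_card_numbers() is the
# detector from the DLP sketch earlier in this post.
def auto_label(document_text: str) -> str:
    """Assign a classification label based on what the content contains."""
    if find_card_numbers(document_text):
        return "Sensitive"
    if "commercial in confidence" in document_text.lower():
        return "Confidential"
    return "Public"

print(auto_label("Please treat this as commercial in confidence."))  # Confidential
```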

Retention

Once we have classified our data into what is important and what isn’t, we can then, with retention policies, define what we keep and for how long.

These policies allow us to effectively manage and govern our information and subsequently allow us to reduce the risk of litigation or security breach, by either retaining data for a period defined by a regulatory requirement or, importantly, permanently deleting old content that we’re no longer required to keep.

The policies can be assigned automatically based on classifications or can be applied manually by a user as they generate new data.

For example, a user creates a new document containing financial data which must be retained for 7 years; that user can classify the data accordingly, ensuring that both our DLP and retention rules are applied as needed.
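To illustrate how classification then drives retention, here is a small sketch that maps labels to retention periods and decides whether a document can be disposed of. The periods are example values only; in reality they come from your regulatory and business requirements, not from anything built into Office365.

```python
from datetime import date, timedelta

# Example retention periods per label -- illustrative assumptions only; your
# regulatory and business requirements define the real values.
RETENTION_PERIODS = {
    "Financial": timedelta(days=7 * 365),   # e.g. retain financial records for 7 years
    "Sensitive": timedelta(days=3 * 365),
    "Public": timedelta(days=365),
}

def retention_decision(label: str, created: date, today: date) -> str:
    """Return 'retain' while inside the retention period, otherwise 'dispose'."""
    period = RETENTION_PERIODS.get(label, timedelta(days=365))
    return "retain" if today - created < period else "dispose"

# A document labelled Financial and created in 2018 must still be retained in 2024.
print(retention_decision("Financial", date(2018, 5, 1), date(2024, 5, 1)))  # retain
```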

Management

Alongside these capabilities Office365 provides us with two management tools, disposition and supervision.

Disposition is our holding pen for data to be deleted, so we can review any deletions before actioning them.

Supervision is a powerful capability allowing us to capture employee communications for examination by internal or external reviewers.

These tools are important in allowing us to show we have auditable processes and control within our platform and are taking the steps necessary to protect our data assets as we should.

Summary

The ability to govern and control our data wherever we hold it is a critical part of a modern data platform. If you use Office365 and are not using these capabilities then you are missing out.

The importance of governance is only going to grow as ever more stringent data privacy and security regulations develop. Governance can allow us to greatly reduce many of the risks associated with a data breach, and services such as Office365 have taken things that have traditionally been difficult to achieve and made them a whole lot easier.

If you are building a modern data platform then compliance and governance should be at the heart of your strategy.

This is part 4 in a series of posts on building a modern data platform, the previous parts of the series can be found below.

Introduction

The Storage

Availability

Control

Getting your cyber essentials – Jason Fitzgerald – Ep62

Cyber Security, be it how we secure our perimeter, infrastructure, mobile devices or data, is a complex and ever-changing challenge. In the face of this complexity, where do we start when it comes to building our organisation’s cyber security standards?

Well, perhaps the answer lies in standardised frameworks and accreditations. If you think about it, one of the biggest challenges we have when it comes to security is knowing where to start, so having a standard to work towards makes perfect sense.

That is the subject of this week’s show with my guest and colleague Jason Fitzgerald, as we discuss the value of a UK-based accreditation, Cyber Essentials.

Jason is a very experienced technical engineer and consultant and today spends much of his time working with organisations to help them address their IT security concerns and develop policies, procedures, strategies and technologies to help them to improve their security baselines.

One of the tools that Jason uses extensively is a framework and accreditation produced by the National Cyber Security Centre here in the UK, Cyber Essentials. During this episode we discuss why such a framework is valuable and can help a business improve its security posture.

But first we start by discussing the kind of security landscape that Jason sees when he talks with businesses of all types, some of the confusion that they have and the often-misplaced confidence that comes with the “latest and greatest” security technology solution purchase.

We explore the importance of organisational “buy in” when it comes to security, why it can’t be just seen as an IT problem and how without senior sponsorship your security efforts may well be doomed to failure.

Jason shares with us the 5 key areas that Cyber Essentials covers, from perimeter to patching. He also provides some insight into the process that an organisation will head down when building their own security framework.

We also look at the value of getting your security foundation correct, how it can greatly reduce your exposure to many of the common cyber security risks, but also how without it, your attempts to build more robust security and compliance procedures may well fail.

We finish up with Jason sharing some of his top tips for starting your security journey and how, although Cyber Essentials is a UK based accreditation, the principles of it will be valuable to your organisation wherever in the world you may be based.

You can follow Jason on twitter @jay_fitzgerald and read more from him at his blog Bits with the Fitz

If you want to learn more about Cyber Essentials, then visit the UK’s National Cyber Security Centre website www.cyberessentials.ncsc.gov.uk

Next week, we are looking at GDPR as I’m joined by a special guest Mike Resseler from Veeam as he takes us through the business compliance process they have carried out across their global organisation.

Thanks for listening.

Thanks for the memory – Alex McDonald – Ep61

At the start of 2018 the technology industry was hit by two new threats unlike anything it had seen before. Spectre and Meltdown used vulnerabilities not in operating system code or poorly written applications, but ones at a much lower level than that.

This vulnerability was not only something of concern to today’s technology providers, but also to those looking at architecting the way technology will work in the future.

As we try to push technology further and have it deal with more data, more quickly than ever before, the industry is having to look at ways of keeping up and to make our tech work in different ways, beyond the limits of our current approaches. One of these developments is storage class memory, or persistent memory, where our data can be housed and accessed at speeds many times greater than today.

However, this move brings new vulnerabilities in the way we operate, vulnerabilities like those exposed by Spectre and Meltdown. But how did Spectre and Meltdown look to exploit these low-level vulnerabilities? And what does that mean for our desire to constantly push technology to use data in ever more creative and powerful ways?

That’s the topic of this week’s Tech Interviews podcast, as I’m joined by the always fascinating Alex McDonald to discuss exactly what Spectre and Meltdown are, how they impact what we do today and how they may change the way we are developing our future technology.

Alex is part of the industry standards group at NetApp and represents them on bodies such as SNIA (the Storage Networking Industry Association).

In this episode, he brings his wide industry experience to the show to share some detail on exactly what Spectre and Meltdown are, how they operate, what vulnerabilities they exploit, as well as what exactly these vulnerabilities put at risk in our organisations.

We take a look at how these exploits take advantage of side channels and speculative execution to allow an attacker to access data you would never imagine to be at risk, and how our eagerness to push technology to its limits created those vulnerabilities.

We discuss how this has changed the way the technology industry is now looking at the future developments of memory, as our demands to develop ever larger and faster data repositories show no sign of slowing down.

Alex shares some insights into the future, as we look at the development of persistent memory, what is driving demand and how the need for this kind of technology means the industry has no option but to get it right.

To ease our fears Alex also outlines how the technology industry is dealing with new threats to ensure that development of larger and faster technologies can continue, while ensuring the security and privacy of our critical data.

We wrap up discussing risk mitigation: what systems are at risk of attack from exploits like Spectre and Meltdown, what systems are not, and how we ensure we protect them in the long term.

We finish on the positive message that the technology industry is indeed smart enough to solve these challenges and how it is working hard to ensure that it can deliver technology to the demands we have for our data to help solve big problems.

You can find more on Wikipedia about Spectre and Meltdown.

You can learn more about the work of SNIA on their website.

And if you’d like to stalk Alex on line you can find him on twitter talking about technology and Scottish Politics! @alextangent

Hope you enjoyed the show. With the Easter holidays here in the UK we’re taking a little break, but we’ll be back with new episodes in a few weeks’ time. For now, thanks for listening.

Availability of all of the things – Michael Cade – Ep 60

Recently I wrote a blog post as part of a series that explored the importance of availability to a modern data platform, especially in a world where our reliance on technology is ever increasing, from the way we operate our businesses to the way we live our lives, and where digitally focussed businesses can no longer tolerate downtime, planned or unplanned, in the way they could even 5 years ago (you can read that post here).

So how do we mitigate against the evils of downtime? That’s simple: we build recovery and continuity plans to ensure our systems remain on regardless of the events going on around them, from planned maintenance to the very much unplanned disaster. But there’s the problem, these things aren’t simple, are they?

I’ve recently worked on a project where we’ve been doing exactly this, building DR and continuity plans in the more “traditional” way, writing scripts, policies and procedures to ensure that in the event of some kind of disaster the systems could be recovered quickly and meet stringent recovery time and point objectives. What this project reminded me of is how difficult these things are: keeping your documentation up to date, making sure your scripts are followed and ensuring you can fully test your plans is tricky.

With that in mind, Veeam’s recent announcement of their new Availability Orchestrator solution caught my attention: a solution that promises to orchestrate not only the delivery of a DR solution but also to automate its documentation and testing. This was something I needed to understand more, and I thought I wouldn’t be the only one.

So that is the topic of this week’s podcast, as serial guest Michael Cade, Global Technologist at Veeam, joins me to provide an insight into Availability Orchestrator, what challenges it addresses, why Veeam thought it was important to develop and how it can help you deliver better availability to your critical systems.

During the show Michael shares some insight into understanding your availability gap and why today’s businesses cannot tolerate downtime of key systems, as well as the difficulties that come with maintaining a robust and appropriate strategy.

We explore the challenges of testing when the business doesn’t want downtime, and how to capture all of the little tricks our tech teams keep in their heads and get them into a continuity plan.

We finish up looking at how Availability Orchestrator can help, by providing an automation and orchestration solution to automate the testing, documentation and execution of our continuity plans, and how it can also be a tool to help us build test and dev environments, as well as helping us migrate to cloud platforms like VMware on AWS.

Availability Orchestrator, in my opinion, is a very powerful tool. Having just worked on a continuity and DR project, the challenges that come with manually maintaining these plans are still very fresh in my mind, and had this tool been available when I started that project it would certainly have been worthy of investigation into how it could help.

If you want to find out more about Veeam Availability Orchestrator, check out the Veeam website.

You can follow Michael on twitter @MichaelCade1

And if you’d like to read his blog series on Veeam replication you’ll find that on his blog site starting here.

Hope you’ve found the show useful.

Thanks for listening.

Managing the future – Dave Sobel – Ep59

As our IT systems become ever more complex, with more data, devices and ways of working, the demands on our systems and ensuring they are always operating efficiently grow. This in turn presents us and our IT teams with a whole new range of management challenges.

Systems management has always been a challenge for organisations. How do we keep on top of an ever-increasing number of systems? How do we ensure they remain secure and patched? And how do we cope with our users and their multitude of devices and ensure we can effectively look after them?

Like most of our technology, systems management is changing, but how? And what should we expect from future management solutions?

That’s the subject of this week’s podcast, as I’m joined by returning guest Dave Sobel. Dave is Senior Director of Community at SolarWinds MSP, working with SolarWinds partners and customers to ensure they deliver a great service.

As part of this role, Dave is also charged with looking at the future (not the distant future, but the near future of the next 2 years) of systems management and what these platforms need to include in them to continue to be relevant and useful.

Dave provides some excellent insight into the way the management market is shifting and some of the technology trends that will change and improve the way we control our ever more complex yet crucial IT systems.

We start by asking why looking at the future is such an important part of the IT strategist’s role. Whether you are a CIO, an IT Director or anyone who makes technology strategy decisions, not taking a look at future trends will seriously limit your ability to make good technology decisions.

We see why we need to rethink how we see a “computer” and how this is leading to a proliferation of different devices with the emergence of the Internet of Things (IoT), as well as looking at why that is such a horrible phrase and how all of this is affecting our ability to manage.

We discuss the part Artificial Intelligence is going to play in future systems management as we try to supplement our overstretched IT staff and provide them with ways of analysing ever more data and turning it into something useful.

We also investigate increased automation, looking at how our management systems can be more flexible in supporting new devices as they are added to our systems, as well as being smarter in the way we can apply management to all of our devices.

Finally, we look at the move to human-centric management: instead of our systems being built to support devices, we need to be able to understand the person who uses the technology and build our management and controls around them, allowing us to provide them with better management and, importantly, a better technology experience.

We wrap up looking at how smarter systems management is going to allow us to free our IT teams to provide increased value to the business, as well as looking at a couple of areas you can focus on today, to start to look at the way you manage your systems.

To find more from Dave you can follow him on twitter @djdaveet

You will find Dave’s blog here

I hope you found the chat as interesting as I did.

Until next time, thanks for listening.

Building a modern data platform – Control

In the first parts of this series we have looked at ensuring the building blocks of our platform are right so that our data is sitting on strong foundations.

In this part we look at bringing management, security and compliance to our data platform.

As our data, the demands we place on it and the amount of regulation controlling it continue to grow, gaining deep insight into how it is used can no longer be a “nice to have”; it has to be an integral part of our strategy.

If you look at the way we have traditionally managed data growth you can see the basics of the problem: we have added file servers, storage arrays and cloud repositories on demand, because adding more has been easier than managing the problem.

However, this is no longer the case. As we see our data as more of an asset we need to make sure it is in good shape: holding poor-quality data is not in our interest, the cost of storing it no longer goes unnoticed, and we can no longer go to the business every 12 months needing more. While I have no intention of making this a piece about the EU General Data Protection Regulation (GDPR), it, and regulation like it, is forcing us to rethink how we view the management of our data.

So what do I use in my data platforms to manage and control data better?

Varonis

I came across Varonis and their data management suite about 4 years ago, and this was the catalyst for a fundamental shift in the way I have thought about and talked about data, as it opened up brand new insights into how unstructured data in a business was being used and highlighted the flaws in the way people were traditionally managing it.

With that in mind, how do I start to build management into my data platform?

It starts by finding answers to two questions:

Who, Where and When?

Without understanding this point it will be impossible to properly build management into our platform.

If we don’t know who is accessing data how can we be sure only the right people have access to our assets?

If we don’t know where the data is, how are we supposed to control its growth, secure it and govern access?

And of course, when is the data accessed, or even, is it accessed at all? Let’s face it, if no one is accessing our data then why are we holding it at all? A simple illustration of the raw metadata behind these questions is sketched below.
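The sketch walks a directory tree and reports the owner, path and last access time of each file. It is nothing like a full auditing product, which watches actual access events rather than filesystem metadata; the owner lookup only works on Unix-like systems and the share path is made up.

```python
from datetime import datetime
from pathlib import Path

def who_where_when(root: str):
    """Yield (owner, path, last-access time) for every file under root.

    Purely illustrative: real auditing tools watch access events rather than
    filesystem metadata, and Path.owner() only works on Unix-like systems.
    """
    for path in Path(root).rglob("*"):
        if path.is_file():
            info = path.stat()
            yield (
                path.owner(),                           # who
                str(path),                              # where
                datetime.fromtimestamp(info.st_atime),  # when (last accessed)
            )

# "/data/shares" is a made-up example path.
for owner, location, accessed in who_where_when("/data/shares"):
    print(f"{owner:<15} {accessed:%Y-%m-%d %H:%M} {location}")
```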

What’s in it?

However, there are lots of tools that can tell me the who, where and when of data access; that’s not really the reason I include Varonis in my platform designs.

While who, where and when is important, it misses a crucial component: the what. What type of information is stored in my data?

If I’m building management policies and procedures I can’t do that without knowing what is contained in my data. Is it sensitive information like finances, intellectual property or customer details? And, as we look at regulation such as GDPR, knowing where we hold private and sensitive data about individuals is increasingly crucial.

Without this knowledge we cannot ensure our data and business compliance strategies are fit for purpose.

Building Intelligence into our system

In my opinion one of the most crucial parts of a modern data platform is the inclusion of behavioural analytics. As our platforms grow ever more diverse, complex and large, one of the common refrains I hear is “this information is great, but who is going to look at it, let alone action it?” This is a very fair point and a real problem.

Behavioural analytics tools can help address this and supplement our IT teams. These technologies are capable of understanding and learning the normal behaviour of our data platform and, when those norms are deviated from, can warn us quickly and allow us to address the issue.

This kind of behavioural understanding offers significant benefits from knowing who the owners of a data set are to helping us spot malicious activity, from ransomware to data theft.
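To give a feel for what “learning normal behaviour” means, here is a deliberately naive sketch that baselines a user’s daily file-access count and flags days that deviate sharply from their own history. Real behavioural analytics engines use far richer models; the three-standard-deviation threshold is purely an assumption for illustration.

```python
from statistics import mean, stdev

def is_anomalous(history: list, todays_count: int, threshold: float = 3.0) -> bool:
    """Flag today's access count if it deviates sharply from the user's baseline.

    history: daily file-access counts for this user over previous days.
    A naive z-score check -- purely illustrative, not how any vendor's
    behavioural analytics engine actually works.
    """
    if len(history) < 2:
        return False                  # not enough data to build a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return todays_count != baseline
    z_score = (todays_count - baseline) / spread
    return z_score > threshold

# A user who normally touches ~40 files a day suddenly reads 5,000:
# classic ransomware or data-theft behaviour worth alerting on.
normal_days = [38, 42, 35, 40, 44, 39, 41]
print(is_anomalous(normal_days, 5000))    # True
```

Even something this crude shows why the approach scales: nobody has to read the raw audit trail, only the exceptions it surfaces.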

In my opinion this kind of technology is the only realistic way of maintaining security, control and compliance in a modern data platform.

Strategy

As discussed in parts one and two, it is crucial the vendors who make up a data platform have a vision that addresses the challenges businesses see when it comes to data.

It should be no surprise, then, that Varonis’ strategy aligns very well with those challenges; they were one of the first companies I came across that brought real forethought to the management, control and governance of our data assets.

That vision continues, with new tools and capabilities continually delivered, such as Varonis Edge and the recent addition of a new automation engine, which provides a significant enhancement to the Varonis portfolio; the tools now not only warn of deviations from the norm but can also act upon them to remediate the threat.

All of this, tied in with Varonis’ continued extension of its integration with on-prem and cloud storage and service providers, ensures they will continue to play a significant role in bringing management to a modern data platform.

Regardless of whether you choose Varonis or not, it is crucial you have intelligent management and analytics built into your environment, because without them it will be almost impossible to deliver the kind of data platform fit for a modern, data-driven business.

You can find the other posts from this series below;

Introduction
Part One – The Storage
Part Two – Availability