Don’t be scared – GDPR is a good thing, embrace it!

I can’t open my inbox these days without someone telling me about the European Union’s General Data Protection Regulation (GDPR), with the content of these emails ranging from the complex to the scaremongering.

However, what I don’t see are the ones extolling the positives of the regulation.

In my humble opinion, GDPR is a driver for some very positive change in the way that we, as businesses, use the data that we have and will continue to collect in ever-growing amounts.

I’m sure we’ve all heard how data is the new gold, oil, etc., and for many of us our data is among the most valuable assets we hold. As I heard recently, “the ability to gain actionable insights from data is what will separate us from our competition.” I personally believe this to be true: the businesses that know how to manage and gain value from their data will be the success stories of the future.

If data is such an asset, then…

Why do we keep hearing stories about high profile data breaches, such as Equifax and Deloitte, where sensitive information has found itself in the public domain? If data is an asset, then why are we so lax with its security? Are we that lax with other assets?

Data is hard

The problem is that managing data is hard. We don’t know what we have, where it is, who has access, and when, or even if, they access it. This lack of insight makes securing and managing data a huge challenge — and why the idea of more stringent regulation is a frightening prospect for many.

Why is GDPR a good thing?

The GDPR is going to force organizations to address these problems head-on, something that, for many of us, is long overdue. Although the regulation focuses on the privacy of “data subjects,” the principles can and should be applied to all of our data.

To be clear, GDPR is not a data management framework. Its scope is much wider than that. It is a legal and compliance framework and should be treated as such. But, while GDPR is “not an IT problem,” it’s certainly a technology challenge, and technology will be crucial in our ability to be compliant.

Why GDPR and technology is helpful

Even if GDPR did not demand our compliance, I would still thoroughly recommend it as a set of good practices that, if you’re serious about the value of your data, you should be following.

I believe the principles of the GDPR, along with smart technology choices, can positively revolutionize how we look after and get the very best from our data.

In the last 12 months or so, I’ve done a lot of work in this area and have found four key areas where the GDPR, alongside some appropriate technology choices, has made a real difference.

1. Assessment


As with any project, we start by fully understanding our current environment. How else are you going to manage, secure and control something if you don’t know what it looks like to begin with?

Your first step should be to carry out a thorough data assessment: understand what you have, where it is, how much there is, whether it’s looked at, what’s contained within it and, of course, who accesses it, when, where and why.

This is crucial in allowing us to decide what data is important, what we need to keep and what we can dispose of. This is not only valuable for compliance but has commercial implications as well: why take on the costs of storing, protecting and securing stuff that nobody even looks at?
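To make that concrete, here’s a minimal sketch (in Python, with function names of my own choosing) of what a first pass at such an assessment might look like: walk a file share, record the size and last-access age of every file, then summarise how much of it nobody is actually using. A real assessment tool does far more, and last-access times aren’t always reliable (many filesystems are mounted with noatime), but the principle is the same:

```python
import os
import time

def assess(root):
    """Walk a directory tree and build a simple data inventory:
    path, size, and days since last access for every file."""
    inventory = []
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unreadable file: a real audit would log this
            inventory.append({
                "path": path,
                "size_bytes": st.st_size,
                "days_since_access": (now - st.st_atime) / 86400,
            })
    return inventory

def summarise(inventory, stale_after_days=365):
    """Split the inventory into live and stale data by last access."""
    stale = [f for f in inventory if f["days_since_access"] > stale_after_days]
    return {
        "files": len(inventory),
        "total_bytes": sum(f["size_bytes"] for f in inventory),
        "stale_files": len(stale),
        "stale_bytes": sum(f["size_bytes"] for f in stale),
    }
```

Even this crude view starts to answer the first questions: what have we got, how much of it is there, and is anybody actually looking at it?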

2. Education

It’s too easy to look at our users as the weakness in our security strategy when they should be our strength. They won’t ever be, however, if we don’t encourage, educate and train them.

Technology can help provide training, deliver simple-to-use document repositories or keep users on their toes with regular orchestrated phishing tests. This helps users develop skills, keeps them aware and allows us to develop metrics against which we can measure our success.

We must move away from the annual “lunch and learn” briefing and realize we need tools that allow us to continually educate.

3. Breaches

The GDPR places a major focus on our ability to identify breaches quickly and accurately and to report on exactly what data we have lost. Traditionally this is an area in which businesses have been lacking, taking weeks, months or maybe even years to become aware of a breach. In a world where we are ever more data-reliant, this cannot be acceptable.

Technology is the only way to meet these stringent reporting requirements. How else will you know the when, where and how of a breach?

But technology isn’t only about reporting. The ability to have such visibility of data usage — the who, where and when of access — will allow us to quickly detect and stop a breach, or at least reduce its impact.

4. Data protection by design

This is perhaps the most positive part of GDPR, as it will encourage us to build data protection into the very core of our infrastructure, systems and data repositories. Whether on-prem or in the cloud, under our control or a service provider’s, security has to be at the heart of our design — not an afterthought.

We need to use this as an opportunity to encourage cultural change, one where the importance of our data is not underestimated, where maintaining its integrity, security and privacy is a priority for everyone, not just IT.

Is the GDPR a lot of work? Yes.

Is it worth it? In my opinion, 100%, yes — GDPR is a real positive driver for a long overdue and crucial change and should be embraced.



Keeping on top of ONTAP

The last year has been a big one for NetApp: the turnaround in the company’s fortunes continues, with fantastic growth in the all-flash array market, the introduction of cloud-native solutions and tools and, of course, not forgetting SolidFire and the newly announced HCI platform. All have created lots of interest in this “new” NetApp.

If you have read any of my content previously, you’ll know I’m a fan of how NetApp operate, and their data fabric strategy continues to make them the very best strategic data partner for many of the people I work with day-to-day.

Why am I telling you all of this? Well, as with all technology companies, it’s easy to get wrapped up in exciting new tech and sometimes forget the basics of why you work with them and what their core solutions still deliver.

For all the NetApp innovations of the last couple of years, one part of their business continues to be strong: the ONTAP operating system, which even at 25 years old remains as relevant to customer needs as ever.

ONTAP, in its latest incarnation, version 9 (9.2 to be exact), perhaps more than anything shows how NetApp continue to meet the ever-changing needs of the modern data market. It would be easy, regardless of its strength, to write off an operating system that is 25 years old, but NetApp have not; they have developed it into something markedly different from the versions I first worked with 10 years ago.

These changes reflect what we, as users in more data-focussed businesses, demand from our storage. It’s not even really storage we demand; it’s the ability to make our data a core part of our activities. To quote a friend, “storing is boring”, and although storing is crucial, if all we are doing is worrying about storing data then we are missing the point. If that were ONTAP’s only focus, it would very quickly become irrelevant to a modern business.

How are NetApp ensuring that ONTAP 9 remains relevant and continues to be at the heart of data strategies big and small?

Staying efficient

Although storing may be boring, in a world where IT budgets continue to be squeezed and datacentre power and space are at a costly premium, squeezing more and more into less and less continues to be a core requirement.

Data compaction, inline deduplication and the newly introduced aggregate-wide deduplication all provide fantastic efficiency gains. Align this with support for ever-larger media (10TB SATA, 15TB flash, something not always easy for NetApp’s competition) and you can see how ONTAP continues to let you squeeze more and more of your data into smaller footprints (60TB on one SSD drive, anyone?), something that remains critical in any data strategy.
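The mechanics behind block-level deduplication can be illustrated with a toy content-hash store (to be clear, this is an illustration of the general technique, not NetApp’s implementation): split the data into fixed-size blocks, hash each one, and store each unique block only once, keeping an ordered list of references that can rebuild the original.

```python
import hashlib

BLOCK_SIZE = 4096  # ONTAP works on 4KB blocks; we borrow that granularity

def dedupe(data: bytes):
    """Store each unique block once; return the block store plus an
    ordered list of hash references describing the original stream."""
    store = {}  # block hash -> block contents (stored once)
    refs = []   # ordered hashes that reconstruct the original
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)
        refs.append(h)
    return store, refs

def rebuild(store, refs):
    """Reassemble the original data from references."""
    return b"".join(store[h] for h in refs)

def savings(data):
    """Return (logical bytes, physically stored bytes) after dedup."""
    store, _ = dedupe(data)
    return len(data), sum(len(b) for b in store.values())
```

Feed it data with repeated blocks and the physical footprint shrinks while the logical view stays intact, which is exactly the efficiency trade the real feature makes inline, at scale, in the storage OS.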

Let it grow

As efficient as ONTAP can be, nothing is efficient enough to keep up with our desire to store more data, and different types of data. However, ONTAP is doing a pretty good job of keeping up. Not only have NetApp added additional scalability to ONTAP clusters (supporting up to 24 nodes), they have also taken on a different scaling challenge with the addition of FlexGroups.

FlexGroups allow you to aggregate up to 200 volumes into one large, high-performance storage container, perfect for those who need a single point of storage for very large datasets. This is something I’ve already seen embraced in areas like analytics, where high-performance access to potentially billions of files is a must.

Keep it simple

A goal for any IT team should be the simplification of its environment.

NetApp have continued developing ONTAP’s ability to automate more tasks, and by using intelligent analysis of system data they are helping to take the guesswork out of workload placement and its impacts, allowing you to get it right first time, every time.

The continued development of quick deployment templates has also greatly simplified the provisioning of application storage environments, taking them from out of the box to serving data in minutes, not days.

In a world where the ability to respond quickly to business needs is crucial, the value of developments like this should not be underestimated.

Keep it secure

Maybe the most crucial part of our data strategy is security, and in the last 12 months NetApp have greatly enhanced ONTAP’s capability and flexibility in this area.

SnapLock functionality was added 12 months ago, allowing you to lock your data into data archives that can meet the most stringent regulatory and compliance needs.

However, the biggest bonus is the implementation of onboard, volume-level encryption. Prior to ONTAP 9, the only way to encrypt data on a NetApp array was, as with most storage vendors, through the use of self-encrypting drives.

This was a bit of an all-or-nothing approach: it meant buying different, and normally more expensive, drives and encrypting all data regardless of its sensitivity.

9.1 introduced the ability to deliver encryption at a more granular level, allowing you to encrypt single volumes without the need for self-encrypting drives. That means no additional hardware and, importantly, the ability to encrypt only what is necessary.

In modern IT, this kind of capability is critical both in terms of data security and compliance.

Integrate the future!

I started this piece by asking how you keep a 25-year-old operating system relevant. In my opinion, the only way to do that is to ensure it integrates seamlessly with modern technologies.

ONTAP has a pretty good record of that. Be it by luck or design, its move into the world of all-flash was smooth, with no need for major rewrites; the ONTAP way of working was geared for flash before anyone had thought of flash!

ONTAP’s ability to see media as just another layer of storage, regardless of type, was key in supporting 15TB SSDs before any other major storage vendor, and it is this flexibility to integrate new storage media that has led to one of my favourite features of the last 12 months: FabricPools.

This technology allows you to seamlessly integrate S3 storage directly into your production data, be that an on-prem object store, or a public cloud S3 bucket from a provider like AWS.

In its v1.0 release in ONTAP 9.2, FabricPools tiers cold blocks from flash to your S3-compliant storage, wherever that is, lowering your total cost of ownership by moving data not actively in use and freeing up space for other workloads. All of this is done automatically via policy, seamlessly extending your production storage capacity by integrating modern storage technology.
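The policy logic behind that kind of tiering can be sketched in a few lines. The function below is purely illustrative: it works on whole files where FabricPools works on blocks inside ONTAP, and the 31-day cooling period is just an example threshold, not a product default I’m vouching for.

```python
import time

def tier(objects, cooling_days=31, now=None):
    """Toy tiering policy in the spirit of FabricPools: anything not
    touched within `cooling_days` moves from the performance tier to
    the cold (object) tier.

    objects: list of dicts with 'name' and 'last_access' (epoch seconds).
    Returns (hot, cold) lists according to the cooling policy."""
    now = time.time() if now is None else now
    threshold = cooling_days * 86400  # cooling period in seconds
    hot, cold = [], []
    for obj in objects:
        if now - obj["last_access"] > threshold:
            cold.append(obj)  # cold data: tier out to cheap S3 capacity
        else:
            hot.append(obj)   # hot data: keep on flash
    return hot, cold
```

The point of the real feature is that this decision happens continuously and transparently, so the expensive flash tier only holds what is actually being used.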

ONTAP everywhere

As ONTAP continues to develop, the ways you can consume it also continue to develop to meet our changing strategic needs.

Fundamentally, ONTAP is a piece of software, and like any piece of software it can run anywhere that meets its requirements. The ONTAP variants Select and Cloud provide software-defined versions that can run on white-box hardware or be delivered straight from the cloud marketplaces of AWS and Azure.

The benefit of this stretches far beyond just being able to run ONTAP in more places: it means that management, security policies and data efficiencies are all equally transferable. It’s one way to manage and one set of policies to implement, meaning that where your data resides at a given moment becomes less important, as long as it is in the right place at the right time for the right people.

In my opinion, this flexibility is critical for a modern data strategy.

Keep it coming

Maybe what really keeps ONTAP relevant is that these new capabilities are all delivered in software. None of the features has required new hardware or a purchased add-on; they are all delivered as part of the ONTAP development cycle.

And the modern NetApp has fully embraced a more agile way of delivering ONTAP, with a six-month release cadence, meaning they can quickly absorb feature requests and deliver them to the platforms that need them, allowing them, and us, to respond to changing business needs.

So, while NetApp have had a fascinating year delivering great enhancements to their portfolio, ONTAP still retains a very strong place at the heart of their data fabric strategy and, in my opinion, is still the most complete data management platform, continuing to meet the needs presented by modern data challenges.

Find out more

If you want to know more about ONTAP and its development then try these resources.

NetApp’s Website

Justin Parisi’s BLOG – providing links to more detailed information on all of the technologies discussed and much more!

TechONTAP Podcast – NetApp’s excellent TechONTAP podcast covers everything shared here in detail; it’s all in their back catalogue.

And of course you can leave a comment here or contact me on twitter @techstringy

What is a next generation data centre? – Martin Cooper – Ep35

There is no doubt that our organisations are becoming ever more data-centric, wanting to know how we can gain insight into our day-to-day operations and continue to be competitive and relevant to our customers, while delivering a wide range of new experiences for them.

This move to a more data driven environment is also altering the way we engage and even purchase technology in our businesses, with technology decisions now no longer the preserve of IT people.

These changes do mean we need to reconsider how we design and deliver technology, which has led to the idea of the “Next Generation Datacentre”. But what does that mean? What is a Next Generation Datacentre?

That is the subject of this week’s podcast, as I’m joined by Martin Cooper, Senior Director of the Next Generation Datacentre Group (NGDC) at NetApp.

With over 25 years in the technology industry, Martin is well placed to understand the changes that are needed to meet our increasingly digitally driven technology requirements.

In this episode, we look at a wide range of topics, starting with trying to define what we mean by Next Generation Datacentre. The good news is that NGDC is not necessarily about buying a range of new technologies, but about optimising the processes and technology that we already have.

We touch on how a modern business needs flexibility in its operations, and how decisions made in different parts of the business, by people who focus on applications and data rather than infrastructure, require IT teams to respond in an application- and data-focused way.

Martin also discusses the types of organisations that can benefit from this NGDC way of thinking, and how, in fact, it’s not about entire organisations but about understanding where the opportunities for transformation exist and delivering change there, be that an entire business, a single department or even a single application.

We also provide a word of caution: it’s important to understand that not all of our current applications and infrastructure are going to migrate to this brave new world of Next Generation Datacentres.

The Next Generation Datacentre is not about a technology purchase; it’s about understanding how to optimise the things we do to meet our changing business needs. Martin provides some excellent insight into how we do that and the kinds of areas we need to consider.

To find out more from Martin and from NetApp you can follow them in all the usual ways.

Their website Netapp.com

On twitter @NetApp @NetAppEMEA

You can also follow Martin @mr_coops

Martin also mentioned a selection of podcasts that often discuss the next generation datacentre; you can find more details on those shows by clicking the links below.

SpeakingINTech

The Cloudcast

NetApp’s own TechONTAP podcast.

I hope you enjoyed the show. If you did and want to catch all future Tech Interviews episodes, please subscribe and leave us a review in all of the normal places.

Subscribe on Android

SoundCloud

Listen to Stitcher

All Aboard the Data Train

The other night, Mrs Techstringy and I were discussing a work challenge. She works for a well-known charity, and one of her roles is to book locations for fundraising activities. On this occasion the team were looking at booking places at railway stations and considering a number of locations; however, all they really had to go on was a “gut feeling”.

As we discussed it, we did a bit of searching and came across this website http://www.orr.gov.uk/statistics/published-stats/station-usage-estimates, which contains footfall figures for every UK railway station over the last 20 years. This information was not only train-geek heaven, it also allowed us to use the available data to make a more informed choice and to introduce possibilities that otherwise would not have been considered.

This little family exercise was an interesting reminder of the power of data and how with the right analysis we can make better decisions.

Using data to make better decisions is hardly news. With the ever-increasing amounts of data we are collecting and greater access to powerful analytics, machine learning and AI engines, all of us are already riding the data train to a world of revolutionary ideas, aren’t we?

The reality is that most of us are not. But why?

For many, especially those with data sets gathered over many years, it’s hard: hard to package our data in such a way that we can easily present it to analytics engines and get something useful from it.

But don’t let that stop you. There is potentially huge advantage to be had from using our data effectively; all we need is a little help to get there.

So what kind of steps can we take so we too can grab our ticket and board the data train?

Understand our data

The first thing may seem obvious: understand our data. We need to know where it is, what it is, and whether it’s still relevant.

Without knowing these basics, it is going to be almost impossible to identify and package up the “useful” data.

The reality of data analytics is that we can’t just throw everything at it. Remember the old adage: garbage in, garbage out. It hasn’t changed; if we feed our data analytics elephant a lot of rubbish, we aren’t going to like what comes out the other end!

Triage that data

Once we’ve identified our data, we need to make sure we don’t feed our analytics engine a load of nonsense. It’s important to triage: throw out the stuff that no one ever looks at, the endless replication, the stuff of no business value. We all store rubbish in our data sets, things that shouldn’t be there in the first place, so weed it out. Otherwise, at best we are going to process irrelevant information; at worst we are going to skew the answers and make them worthless.

Make it usable

This is perhaps the biggest challenge of all: how do we make our massive on-site datasets useful to an analytics engine?

Well, we could deploy an on-prem analytics suite, but for most of us this is unfeasible, and the reality is, why bother? Amazon, Microsoft, Google and IBM, to name but a few, have fantastic analytics services ready and waiting for your data; the trick is how to get it there.

The problem with data is that it has weight, gravity; it’s the thing in a cloud-led world that is still difficult to move around. It’s not only its size that makes it tricky; there is also our need to maintain control, meet security requirements and maintain compliance, and these things can make moving our data into cloud analytics engines difficult.

This is where building an appropriate data strategy is important. We need a way to ensure our data is in the right place at the right time, while maintaining control, security and compliance.

When looking to build a strategy that allows us to take advantage of cloud analytics tools, we have two basic options:

Take our data to the cloud

Taking our data to the cloud is more than just moving it there; it can’t just be a one-off copy. Ideally, in this kind of setup, we need to move our data in, keep it synchronised with changing on-prem data stores and then move the analysed data back when we are finished, all with the minimum of intervention.
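As a rough illustration of that “move the delta, not the whole dataset” idea, here’s a one-way sync sketch with local directories standing in for the on-prem store and the cloud bucket (a real pipeline would use the cloud provider’s transfer tooling): copy a file only when it is new or its contents have changed, so repeated runs keep the two sides synchronised without re-shipping everything.

```python
import hashlib
import os
import shutil

def _digest(path):
    """Hash a file's contents in chunks, so large files don't need
    to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sync(source, target):
    """One-way sync: copy files from source to target only when they
    are new or changed (compared by content hash). Returns the list of
    relative paths copied on this run."""
    copied = []
    for dirpath, _, filenames in os.walk(source):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source)
            dst = os.path.join(target, rel)
            if not os.path.exists(dst) or _digest(src) != _digest(dst):
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy data and metadata
                copied.append(rel)
    return copied
```

The first run moves everything; every run after that moves only what changed, which is the property any cloud-sync arrangement needs if it’s going to run continuously with minimal intervention.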

Bring the cloud to our data

Using cloud data services doesn’t have to mean moving our data to the cloud; we can bring the cloud to our data. Services like ExpressRoute into Azure or Direct Connect into AWS mean we can get all the bandwidth we need between our data and cloud analytics services, while our data stays exactly where we want it: in our datacentre, under our control, and without the heavy lifting required to move it into a public cloud data store.

Maybe it’s even a mix of the two, depending on the requirement, size and type of dataset. What’s important is that we have a strategy, one that gives us the flexibility to do either.

All aboard

Once we have our strategy in place and the technology to enable it, we are good to go. Well, almost: finding the right analytics tools, and of course deciding what to do with the results when we have them, are all part of the solution, but having our data ready is a good start.

That journey has to start somewhere, so first get to know your data, understand what’s important and find a way to ensure you can present it to the right tools for the job.

Once you have that, step aboard and take your journey on the data train.

If you want to know more on this subject and are in or around Liverpool on July 5th, why not join me and a team of industry experts as we discuss getting the very best from your data assets at our North West Data Forum.

And for more information on getting your data ready to move to the cloud, check out a recent podcast episode I did with cloud architect Kirk Ryan of NetApp as we discuss the whys and hows of ensuring our data is cloud-ready.

New fangled magic cloud buckets – Kirk Ryan – Ep32

Analysing the availability market – part two – Dave Stevens, Mike Beevor, Andrew Smith – Ep30

Last week I spoke with Justin Warren and Jeff Leeds at the recent VeeamON event about the wider data availability market. We discussed how system availability is more critical than ever and how, or maybe even whether, our approaches were changing to reflect that. You can find that episode here: Analysing the data availability market – part one – Justin Warren & Jeff Leeds – Ep29.

In part two I’m joined by three more guests from the event as we continue our discussion. This week we look at how our data availability strategy is not, and cannot be, just a discussion for the technical department, and must be elevated into our overall business strategy.

We also look at how technology trends are affecting our views of backup, recovery and availability.

First I’m joined by Dave Stevens of Data Gravity, as we look at how our backup data can be a source of valuable information, as well as a crucial part of helping us be more secure and compliant with ever more stringent data governance rules.

We also look at how Data Gravity, in partnership with Veeam, have developed the ability to trigger smart backup and recovery. Dave gives a great example of how a smart restore can be used to quickly recover from a ransomware attack.

You can find Dave on Twitter @psustevens and find out more about Data Gravity on their website www.datagravity.com

Next I chat with Mike Beevor of HCI vendor Pivot3 about how simplifying our approach to system availability can be a huge benefit. Mike also makes a great point about how, although focussing on application and data availability is right, we must consider the impact on our wider infrastructure, because if we don’t we run the risk of doing more “harm than good”.

You can find Mike on twitter @MikeBeevor and more about Pivot 3 over at www.pivot3.com

Last but by no means least, I speak with Andrew Smith, Senior Research Analyst at IDC. We chat about availability as part of the wider storage market and how, over time, as vendors gain feature parity, their goal has to become adding additional value, particularly in areas such as security and analytics.

We also discuss how availability has to move beyond the job of the storage admin and become associated with business outcomes. Finally, we look a little into the future, at how a “multi-cloud” approach is a key focus for business and how enabling it will become a major topic in our technology strategy conversations.

You can find Andrew’s details over on IDC’s website.

Over these two shows it has become clear, to me, that our views on backup and recovery are changing. The shift toward application and data availability is an important one, and as businesses we have to ensure that we elevate the value of backup, recovery and availability in our companies, making it an important part of our wider business conversations.

I hope you enjoyed this review. Next week is the last interview from VeeamON, and we go all VMware as I catch up with the hosts of VMware’s excellent Virtually Speaking Podcast, Pete Flecha and John Nicholson.

As always, if you want to make sure you catch our VMware bonanza, then subscribe to the show in the usual ways.


Analysing the data availability market – part one – Justin Warren & Jeff Leeds – Ep29

Now honestly, this episode is not going out today sponsored by British Airways, or in any way taking advantage of the situation that affected thousands of BA customers over the weekend; the timing is purely coincidental.

However, those incidents have made this episode quite timely, as they again highlight just how crucial technology is to our day-to-day activities as individuals and businesses.

As technology continues to be integral to pretty much everything we do, the recent events at BA and the disruption caused by WannaCrypt are examples of what happens when our technology is unavailable: huge disruption, reputational damage and financial impacts, as well as the stress it brings to the lives of both those trying to deal with the outage and those on the receiving end of it.

Last week I spoke with Veeam’s Rick Vanover (Remaining relevant in a changing world – Rick Vanover – Ep28) about how they were working to change their customers’ focus from backup and recovery to availability, ensuring that systems and applications were protected and available, not just the data they contained.

As part of my time at the recent VeeamON event, I also took the opportunity to chat with the wider IT community who attended: not just those charged with delivering availability and data protection, but also those who look at the industry through a broader lens, trying to understand not just how vendors view availability, but also the general data market trends and whether businesses and end users were shifting their attitudes in reaction to those trends.

So over the next couple of weeks, I’ve put together a collection of those chats to give you a wider view of the availability market: how analysts see it, and how building a stack of technologies can play a big part in ensuring that your data is available, secure and compliant.

First up, I speak with Justin Warren and Jeff Leeds.

Justin is a well-known industry analyst and consultant, as well as the host of the fantastic Eigencast podcast (if you don’t already listen, you should try it out). Justin is often outspoken but always provides fascinating insight into the wider industry, and here he shares some thoughts on how the industry is maturing, how vendors and technology are changing, and how organisations are changing, or perhaps not changing, to meet new availability needs.

You can follow Justin on twitter @jpwarren and do check out the fantastic Eigencast podcast.

Jeff Leeds was part of a big NetApp presence at the event, and I was intrigued as to why a storage vendor famed for its own robust suite of data protection and availability technologies should be such a supporter of a potential “competitor”.

However, Jeff shared how partnerships and complementary technologies are critical in building an appropriate data strategy, helping us all ensure our businesses remain on.

You can follow Jeff on twitter at @HMBcentral and find out more about NetApp’s own solutions over at www.netapp.com

I hope you enjoyed the slightly different format. Next week we’ll dig deeper into this subject as I speak with Andrew Smith from IDC and technology vendors Pivot3 and Data Gravity.

To catch it, please subscribe in all the normal homes of podcasts. Thanks for listening.


What I’ve Learned About GDPR

The EU’s General Data Protection Regulation (GDPR), which comes into effect in May 2018, is a subject that has moved to the top of many a priority list and is going to have a major effect on how we handle personal data.

Over the last year, I’ve spoken with businesses about their data security: how to avoid data loss, leaks and insider threats. However, over the first three months of this year (2017), the conversation, driven by GDPR, has shifted to compliance and privacy.

However, it’s evident that not everyone is aware of the forthcoming changes, or of how to build privacy and security policies to deal with the complex problems they present.

Over the last few months I’ve been pretty absorbed in the world of GDPR and thought it’d be useful to share a few of the things I’ve learned that may help you with your own privacy and security strategy.

It’s complicated

GDPR is a complicated bit of legislation. Its scope is vast, and to some degree we will all be affected: whether as organisations having to sort out our compliance or as individuals whose data falls under the scope of the regulation, we will see lots of changes.

Remember it is a complex bit of legislation, which leads to…

Good news, GDPR is not an IT problem

It’s true: it’s a legal and compliance issue, not an IT one. Just because we are talking about data, an organisation cannot say, “it’s data, so can’t IT just sort it out?”

Absolutely not. IT will be a critical partner in helping to deliver compliance, but only in the same way that the board, HR, finance or anyone else who touches data is going to be a key partner in maintaining compliance.

Is your organisation’s view of GDPR that it is only an IT problem? If so, you need to look at how you educate them, quickly, that it isn’t!

Roughly what is it?

We’ve heard what it isn’t, so what is it?

In its simplest form, it is updated legislation, replacing the EU’s Data Protection Directive. But it goes beyond a simple update, growing in both scope and in the potential penalties for non-compliance.

To quote the EU:

The EU General Data Protection Regulation (GDPR) replaces the Data Protection Directive 95/46/EC and was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens data privacy and to reshape the way organizations across the region approach data privacy.

The goal of GDPR is to ensure that the personal data held about us can only be used for the purposes it was gathered for, and that it is treated with due care to ensure it is not abused by those who would wish to exploit it.

It’s privacy, not security

One of my go-to people when it comes to data privacy is NetApp’s Sheila Fitzpatrick. Sheila is a data privacy attorney with nearly 35 years’ privacy experience and is NetApp’s data privacy officer and global privacy counsel.

Sheila makes the point that data security IS NOT data privacy.

Data privacy is much wider in scope than security alone. Sheila likes to use the example of a data privacy wheel, where security is just one spoke on that wheel.

When designing data privacy solutions, we should understand the full lifecycle of the personal data we collect, assess, process and use, from the minute we collect it until we finally destroy it.

If your organisation is looking at GDPR and saying, “isn’t that just more of that IT security stuff?”, then it’s time to educate again: it’s so much more than just security.

Will it affect me? Does it matter if I’m not in the EU?

Both are valid and common questions, and the answer, pretty much every time, is a resounding yes. It doesn’t matter if you are inside or outside of the EU.

Location is irrelevant: if you hold data on EU citizens, regardless of where you are based, then you will fall under the scope of GDPR.

What about putting data in the cloud?

Cloud presents an interesting issue, as does placing data with any third party: as the data controller, you are ultimately responsible for what happens to it. The general advice is to ensure two things. First, if you are passing your data to someone to process, ensure that you have a clear contract in place with them.

Second, if you are looking at a cloud provider, ensure they have appropriate data privacy policies and safeguards in place so that you are not exposed to risk.

What should I do?

What are some steps you should be taking?

Dealing with GDPR is going to be a constant challenge, so it’s important to get started. Here’s where I’d begin:

  • Review your current policies — are they appropriate?
  • Understand your current data: where is it, how much do you have, who has access, and what does it contain?
  • Ask why you have that data and why you collect it.
  • Educate your business, so that from top to bottom people understand the importance of data privacy and the impact that this new regulation will have.
  • Deliver your GDPR compliance plan.

You’ll notice there is very little technology highlighted in those initial steps: perhaps something to help you understand your current data sets, but apart from that, it’s policies, procedures and education.

Technology will have its place; in reality, you are going to find it hard to remain compliant without some technical tools and resources to help you.

What have I learned?

There is lots to learn!

It’s complex, and it’s not a technical problem with a “silver bullet” to fix it. It is a business, legal and compliance issue.

The most interesting thing I’ve discovered, though, is that even if GDPR wasn’t something we had to comply with, it contains so much good and sensible practice that we would want to adopt it anyway.

Because in the end, it’s all about our data. Let’s keep it secure and private.

For more GDPR resources, try some of the following:

EU GDPR Site

UK Information Commissioners Office

You can also check out an excellent GDPR post recently published by a friend of mine, Mark Carlton.

How GDPR could affect your business

I also did a series of podcasts to support a recent event that we ran. They cover GDPR in broad terms as well as looking at some specifics on data management and how to work with your people. Feel free to check them out:

Best Take Care Of Those Crown Jewels – Sheila Fitzpatrick – Ep 17

Don’t Build Your Data Privacy House Upside Down – Sheila Fitzpatrick – Ep 18

What you don’t know, may hurt you – John Hughes – Ep 20

Make People Our Best Data Security Asset – Dom Saunders – Ep 19


Weaving a data fabric – Mark Carlton – Ep 23

Regular readers of my blog are probably familiar with the idea of the NetApp data fabric.

This fabric defines NetApp’s strategic direction for data management: how to plan, develop and deploy a solution suitable for a modern organisation’s data needs, not only the needs of today but also those of the mid and long term.

What I like about this data fabric approach is that it allows us to move away from thinking about the “traditional” storage deployments that you may associate with a vendor like NetApp, or its well-known competitors like Dell EMC, HPE, IBM, and even new kids like Pure, and to have a much broader data management conversation that encompasses cloud, analytics, software-defined, security and privacy.

By shifting this focus, NetApp have been smart; more importantly for us as consumers of storage, they have allowed us to be smart as well, by focussing on the data and not on where it’s housed or the technology it lives on.

Recently, a friend of mine from the NetApp A-Team, Mark Carlton, wrote an excellent blog post, “Top 4 questions about the value of the NetApp data fabric”, in which he discussed the practicalities of this strategy, looking at its component parts as well as some great examples of customer deployments.

It was such a good article that I thought I’d ask him onto this week’s Tech Interviews, so we could discuss his take on, and experience of, this data fabric strategy in more detail.

We not only discuss NetApp’s implementation, but also, and perhaps more importantly, how the fabric has grown beyond a NetApp-centric view, and how third-party tools from the likes of Varonis, Veritas and Veeam are integrated into the fabric to enhance it further, making your data management solution more insightful and more complete.

Enjoy the conversation with Mark, and then ask yourself: are you planning a data fabric strategy that allows you to meet your business’s ever-changing needs? Because in the end, it’s all about the data!

If you want to follow up with either me or Mark on this episode, you can find Mark on Twitter @mcarlton1983, or of course me @techstringy.

Don’t forget, you can read Mark’s excellent blog post here: “Top 4 questions about the value of the NetApp data fabric”

If you want more data fabric musings, I wrote this piece a little while ago: Data Fabric – What is it good for?

To make sure you catch the next Tech Interview, you can subscribe to the show wherever you get your podcasts. 


The Future is Bright, The Future is Data – Matt Watts – Ep 21

The idea that our data is critical to the future of our organisation isn’t a new one; the focus on managing, protecting and securing it underlines its importance to any modern organisation.

But protecting our data and ensuring we maintain its privacy and security is not the only important focus we should have.

You don’t need to look around the technology industry for long to hear phrases such as “data is the new gold” or “data is the new oil”, but like any good marketing phrase, they are based on a degree of fact.

As marketing-y as those phrases are, it would be wrong to dismiss them. The image I chose for this blog post suggests that “if the future is digital, the guy with the most data wins”. However, I think that phrase is only partly correct.

It is certain that the modern organisation is becoming increasingly digital, transforming into one that relies on data and digital workflows for its success. However, when it comes to data, it’s not how much you have, it’s what you do with it and learn from it that will determine who really wins.

That’s the focus of this week’s podcast as I’m joined by NetApp’s Director, Technology and Strategy, Matt Watts.

Matt is in an interesting position, working for one of the world’s largest “traditional” storage vendors and charged with helping them develop a strategy for the challenges faced by organisations in a world where “traditional” storage is seen as less valuable.

Maybe to the surprise of many, Matt agrees: while NetApp have great products, they fully accept that the future isn’t about IOPS, capacities and flashing lights. All that really matters is the data.

In this episode, Matt provides fascinating insights into the modern data world: how extracting valuable information from data gives an organisation a significant advantage; how third-party companies working with storage vendors are critical to the future of data management; and how companies like Microsoft, Amazon, and IBM with Watson are commoditising machine learning and artificial intelligence to a point where organisations of all sizes can take advantage of these very smart tools, gaining insights and understanding that just a few years ago were out of reach for all but the very wealthiest of companies.

We also look at how building an appropriate data management strategy is crucial in enabling organisations to access tools that can allow them to take full advantage of their data asset.

Have a listen. Matt provides some great information to help you get the maximum from your data, so you can be not the person with “the most data”, but the one with “the most information from their data”, who wins.

Enjoy the show.

To find out more from Matt, you can find him on Twitter @mtjwatts or follow his blog at watts-innovating.com (check out the article “Your Supermarket knows more about you than your Doctor”). To find out more about NetApp’s own data management strategies, check out the “Data Fabric” section of their website.

If you enjoyed the show, why not subscribe to the Tech Interviews podcast wherever you get your podcasts.


What you don’t know, may hurt you – John Hughes – Ep 20

We are all familiar with the saying “what you don’t know won’t hurt you”. Well, in the world of data management, security and privacy, the opposite is most definitely true.

For most of us, as our organisations become more digital, we are increasingly realising the value of our data: how big an asset it is, and how important maintaining it is.

However, although we understand how valuable our data is, we actually have very little insight into what is happening to it on a day to day basis.

Ask yourself: do you know exactly what data you have across your business? Do you know who has access to it, where it is stored, when it gets accessed, if it even gets accessed, and, when it is accessed, what gets done with it?

In my time administering IT systems, or working with those who do, I’ve lost count of the number of times I’ve been asked “who changed that file?”, “who deleted that file?”, or “can you tell me which files a user has accessed and copied to a USB stick?” The answer is normally no, and it’s normally no because our standard storage solutions can’t tell us.

Imagine a logistics company asking questions like “who’s driving that lorry?”, “who was the last person to drive it?”, “where is Fred taking that lorry?”, or “can you tell me what types of lorries we have?” and being told, “no, we don’t know any of that information”. Ridiculous, right? Yet we do exactly that with our data asset.

We have talked in recent episodes about the threats to our data security and privacy, be they policies, procedures or our people. Just as significant a threat is the inability to fully understand what is going on with our data sets; a lack of insight and analysis means it’s very easy for our data to be abused, lost or stolen without us having the slightest knowledge of it happening.

That’s our focus this week. In the last of our data security and privacy episodes, I chat with John Hughes of Varonis. Varonis provide data analytics and insights into how we use our data: what our data is, who is using it, what it’s used for, and whether it’s even used at all.

We discuss a little of the history of Varonis, why data insight is so critical, why it’s a cornerstone of our ability to meet compliance requirements, and how it’s a crucial part of our defence against data security attacks.

Enjoy the show and thanks for listening.

To find out more about Varonis:

Check out varonis.com

Have a look at their excellent range of blogs at blog.varonis.com, and of course follow them on Twitter @varonis.

You can also request a free GDPR data assessment via their website.

If you want to learn more about any of the topics in this series, and you are in North West England on April 5th, you can join me and a range of speakers at www.northwestdataforum.co.uk

You can find the previous three episodes in this series here:

Best Take Care Of Those Crown Jewels – Sheila Fitzpatrick – Ep 17

Don’t Build Your Data Privacy House Upside Down – Sheila Fitzpatrick – Ep 18

Make People Our Best Data Security Asset – Dom Saunders – Ep 19

If you’ve enjoyed this episode, then why not subscribe wherever you get your podcasts.