Getting your cyber essentials – Jason Fitzgerald – Ep62

Cyber security, be it how we secure our perimeter, infrastructure, mobile devices or data, is a complex and ever-changing challenge. In the face of this complexity, where do we start when it comes to building our organisation's cyber security standards?

Well, perhaps the answer lies in standardised frameworks and accreditations. If you think about it, one of the biggest challenges we have when it comes to security is knowing where to start, so having a standard to work towards makes perfect sense.

That is the subject of this week's show with my guest and colleague Jason Fitzgerald, as we discuss the value of a UK-based accreditation, Cyber Essentials.

Jason is a very experienced technical engineer and consultant and today spends much of his time working with organisations to help them address their IT security concerns and develop policies, procedures, strategies and technologies to help them to improve their security baselines.

One of the tools that Jason uses extensively is a framework and accreditation produced by the National Cyber Security Centre here in the UK, Cyber Essentials. During this episode we discuss why such a framework is valuable and can help a business improve its security posture.

But first we start by discussing the kind of security landscape that Jason sees when he talks with businesses of all types, some of the confusion that they have and the often-misplaced confidence that comes with the "latest and greatest" security technology purchase.

We explore the importance of organisational "buy-in" when it comes to security, why it can't just be seen as an IT problem and how, without senior sponsorship, your security efforts may well be doomed to failure.

Jason shares with us the five key areas that Cyber Essentials covers, from perimeter to patching. He also provides some insight into the process an organisation will go through when building its own security framework.

We also look at the value of getting your security foundation correct, how it can greatly reduce your exposure to many of the common cyber security risks, but also how without it, your attempts to build more robust security and compliance procedures may well fail.

We finish up with Jason sharing some of his top tips for starting your security journey and how, although Cyber Essentials is a UK-based accreditation, its principles will be valuable to your organisation wherever in the world you may be based.

You can follow Jason on Twitter @jay_fitzgerald and read more from him at his blog, Bits with the Fitz.

If you want to learn more about Cyber Essentials, then visit the UK’s National Cyber Security Centre website www.cyberessentials.ncsc.gov.uk

Next week, we are looking at GDPR as I'm joined by a special guest, Mike Resseler from Veeam, who takes us through the business compliance process they have carried out across their global organisation.

Thanks for listening.


Thanks for memory – Alex McDonald – Ep61

At the start of 2018 the technology industry was hit by two new threats unlike anything it had seen before. Spectre and Meltdown used vulnerabilities not in operating system code or poorly written applications, but ones at a much lower level than that.

This vulnerability was not only something of concern to today’s technology providers, but also to those looking at architecting the way technology will work in the future.

As we try to push technology further and have it deal with more data, more quickly than ever before, the technology industry is having to look at ways of keeping up and have our tech work in ways beyond the limits of our current approaches. One of these developments is storage class memory, or persistent memory, where our data can be housed and accessed at speeds many times greater than today.

However, this move brings new vulnerabilities in the way we operate, vulnerabilities like those exposed by Spectre and Meltdown. But how did Spectre and Meltdown look to exploit these operational-level vulnerabilities? And what does that mean for our desire to constantly push technology to use data in ever more creative and powerful ways?

That's the topic of this week's Tech Interviews podcast, as I'm joined by the always fascinating Alex McDonald to discuss exactly what Spectre and Meltdown are, how they impact what we do today and how they may change the way we are developing our future technology.

Alex is part of the Standards Industry Association group at NetApp and represents them on boards such as SNIA (Storage Networking Industry Association).

In this episode, he brings his wide industry experience to the show to share some detail on exactly what Spectre and Meltdown are, how they operate, what vulnerabilities they exploit, as well as what exactly these vulnerabilities put at risk in our organisations.

We take a look at how these exploits take advantage of side channels and speculative execution to allow an attacker to access data you would never imagine to be at risk, and how our eagerness to push technology to its limits created those vulnerabilities.

We discuss how this has changed the way the technology industry is now looking at the future developments of memory, as our demands to develop ever larger and faster data repositories show no sign of slowing down.

Alex shares some insights into the future, as we look at the development of persistent memory, what is driving demand and how the need for this kind of technology means the industry has no option but to get it right.

To ease our fears Alex also outlines how the technology industry is dealing with new threats to ensure that development of larger and faster technologies can continue, while ensuring the security and privacy of our critical data.

We wrap up discussing risk mitigation: which systems are at risk of attack from exploits like Spectre and Meltdown, which are not, and how we ensure we protect them long term.

We finish on the positive message that the technology industry is indeed smart enough to solve these challenges and is working hard to ensure it can deliver technology that meets the demands we have for our data and helps solve big problems.

You can find more on Wikipedia about Spectre and Meltdown.

You can learn more about the work of SNIA on their website.

And if you'd like to stalk Alex online you can find him on Twitter talking about technology and Scottish politics! @alextangent

Hope you enjoyed the show. With the Easter holidays here in the UK we're taking a little break, but we'll be back with new episodes in a few weeks' time. For now, thanks for listening.

Availability of all of the things – Michael Cade – Ep 60

Recently I wrote a blog post as part of a series exploring the importance of availability to a modern data platform, especially in a world where our reliance on technology is ever increasing, from the way we operate our businesses to the way we live our lives, and where digitally focussed businesses can no longer tolerate downtime, planned or unplanned, in the way they could even five years ago (you can read that post here).

So how do we mitigate against the evils of downtime? That's simple: we build recovery and continuity plans to ensure that our systems remain on regardless of the events that go on around them, from planned maintenance to the very much unplanned disaster. But there's the problem, these things aren't simple, are they?

I've recently worked on a project where we've been doing exactly this, building DR and continuity plans in the more "traditional" way, writing scripts, policies and procedures to ensure that in the event of some kind of disaster the systems could be recovered quickly and meet stringent recovery time and point objectives. What this project reminded me of is how difficult these things are: keeping your documentation up to date, making sure your scripts are followed and ensuring you can fully test these plans is tricky.

With that in mind, the recent announcement from Veeam of their new Availability Orchestrator solution caught my attention. A solution that promises to automate and orchestrate not only the delivery of a DR solution, but also its documentation and testing, was something I needed to understand more, and I thought I wouldn't be the only one.

So that is the topic of this week's podcast, as serial guest Michael Cade, Global Technologist at Veeam, joins me to provide an insight into Availability Orchestrator, what challenges it addresses, why Veeam thought it was important to develop and how it can help you deliver better availability to your critical systems.

During the show Michael shares some insight into understanding your availability gap and why today's businesses cannot tolerate downtime of key systems, as well as the difficulties that come with maintaining a robust and appropriate strategy.

We explore the challenges of testing when the business doesn't want downtime, how to keep track of all of the little tricks that our tech teams keep in their heads, and how to get that into a continuity plan.

We finish up looking at how Availability Orchestrator can help, by providing an automation and orchestration solution to automate the testing, documentation and execution of our continuity plans, and how it can also be a tool to help us build test and dev environments, as well as migrate to cloud platforms like VMware on AWS.

Availability Orchestrator, in my opinion, is a very powerful tool. Having just worked on a continuity and DR project, the challenges that come with manually maintaining these plans are still very fresh in my mind, and had this tool been available when I started that project it would certainly have been worthy of investigation into how it could help.

If you want to find out more about Veeam Availability Orchestrator, check out the Veeam website.

You can follow Michael on Twitter @MichaelCade1

And if you’d like to read his blog series on Veeam replication you’ll find that on his blog site starting here.

Hope you’ve found the show useful.

Thanks for listening.

Managing the future – Dave Sobel – Ep59

As our IT systems become ever more complex, with more data, devices and ways of working, the demands on our systems and ensuring they are always operating efficiently grow. This in turn presents us and our IT teams with a whole new range of management challenges.

Systems management has always been a challenge for organisations. How do we keep on top of an ever-increasing number of systems? How do we ensure they remain secure and patched? And how do we cope with our users and their multitude of devices and ensure we can effectively look after them?

Like most of our technology, systems management is changing, but how? And what should we expect from future management solutions?

That's the subject of this week's podcast, as I'm joined by returning guest Dave Sobel. Dave is Senior Director of Community at SolarWinds MSP, working with SolarWinds partners and customers to ensure they deliver a great service.

As part of this role, Dave is also charged with looking at the future (not the distant future, but the near future of the next 2 years) of systems management and what these platforms need to include in them to continue to be relevant and useful.

Dave provides some excellent insight into the way the management market is shifting and some of the technology trends that will change and improve the way we control our ever more complex yet crucial IT systems.

We start by asking why looking at the future is such an important part of the IT strategist's role. Whether you are a CIO, an IT director or anyone who makes technology strategy decisions, if you are not taking a look at future trends it will seriously limit your ability to make good technology decisions.

We see why we need to rethink how we define a "computer", how the emergence of the Internet of Things (IoT) is leading to a proliferation of different devices (and why IoT is such a horrible phrase), and how this is affecting our ability to manage.

We discuss the part artificial intelligence is going to play in future systems management as we try to supplement our overstretched IT staff and provide them with ways of analysing ever more data and turning it into something useful.

We also investigate increased automation, looking at how our management systems can be more flexible in supporting new devices as they are added, as well as being smarter in the way we apply management to all of our devices.

Finally, we look at the move to human-centric management: instead of our systems being built to support devices, we need to understand the person who uses the technology and build our management and controls around them, allowing us to provide better management and, importantly, a better technology experience.

We wrap up looking at how smarter systems management is going to allow us to free our IT teams to provide increased value to the business, as well as looking at a couple of areas you can focus on today, to start to look at the way you manage your systems.

To find more from Dave you can follow him on Twitter @djdaveet

You will find Dave's blog here

I hope you found the chat as interesting as I did.

Until next time, thanks for listening.

Building a modern data platform – Control

In the first parts of this series we have looked at ensuring the building blocks of our platform are right so that our data is sitting on strong foundations.

In this part we look at bringing management, security and compliance to our data platform.

As our data, the demands we place on it and the amount of regulation controlling it continue to grow, gaining deep insight into how it is used can no longer be a "nice to have"; it has to be an integral part of our strategy.

If you look at the way we have traditionally managed data growth you can see the basics of the problem: we have added file servers, storage arrays and cloud repositories as demanded, because adding more has been easier than managing the problem.

However, this is no longer the case. As we see our data as more of an asset we need to make sure it is in good shape; holding poor-quality data is not in our interest, the cost of storing it is no longer going unnoticed, and we can no longer go to the business every 12 months needing more. And while I have no intention of making this a piece about the EU General Data Protection Regulation (GDPR), it, and regulation like it, is forcing us to rethink how we view the management of our data.

So what do I use in my data platforms to manage and control data better?

Varonis


I came across Varonis and their data management suite about four years ago, and it was the catalyst for a fundamental shift in the way I have thought and talked about data, as it opened up brand new insights into how unstructured data in a business was being used and highlighted the flaws in the way people were traditionally managing it.

With that in mind, how do I start to build management into my data platform?

It starts by finding answers to two questions:

Who, Where and When?

Without understanding this point it will be impossible to properly build management into our platform.

If we don’t know who is accessing data how can we be sure only the right people have access to our assets?

If we don’t know where the data is, how are we supposed to control its growth, secure it and govern access?

And of course, when is the data accessed, or even, is it accessed at all? Let's face it, if no one is accessing our data then why are we holding it?

What’s in it?

However, there are lots of tools that tell me the who, where and when of data access; that's not really the reason I include Varonis in my platform designs.

While who, where and when are important, they do not include a crucial component, the what: what type of information is stored in my data?

If I'm building management policies and procedures I can't do that without knowing what is contained in my data: is it sensitive information like finances, intellectual property or customer details? And, as we look at regulation such as GDPR, knowing where we hold private and sensitive data about individuals is increasingly crucial.

Without this knowledge we cannot ensure our data and business compliance strategies are fit for purpose.
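To make the "what" concrete, here is a tiny illustrative sketch of content classification, scanning a folder of text files for patterns that suggest personal or sensitive data. To be clear, this is not how Varonis works internally; the patterns, file types and share path are assumptions purely for illustration.

```python
import re
from pathlib import Path

# Illustrative patterns only - real classification engines use far richer rule sets.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_file(path: Path) -> dict:
    """Count how many times each sensitive-looking pattern appears in one file."""
    text = path.read_text(errors="ignore")
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

def scan_share(root: str) -> None:
    """Walk a (hypothetical) file share and report files that appear sensitive."""
    for path in Path(root).rglob("*.txt"):
        hits = {k: v for k, v in classify_file(path).items() if v}
        if hits:
            print(f"{path}: {hits}")

if __name__ == "__main__":
    scan_share("/mnt/fileshare")  # hypothetical share path
```

Even a crude scan like this makes the point: once you know which files hold sensitive content, you can prioritise where tighter permissions, retention and GDPR attention are needed first.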

Building Intelligence into our system

In my opinion one of the most crucial parts of a modern data platform is the inclusion of behavioural analytics. As our platforms grow ever more diverse, complex and large, one of the common refrains I hear is "this information is great, but who is going to look at it, let alone action it?" This is a very fair point and a real problem.

Behavioural analytics tools can help address this and supplement our IT teams. These technologies are capable of understanding and learning the normal behaviour of our data platform and, when those norms are deviated from, can warn us quickly and allow us to address the issue.

This kind of behavioural understanding offers significant benefits from knowing who the owners of a data set are to helping us spot malicious activity, from ransomware to data theft.

In my opinion this kind of technology is the only realistic way of maintaining security, control and compliance in a modern data platform.
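To illustrate the principle (a toy sketch, not any vendor's engine), the snippet below learns a per-user baseline of daily file-access counts and flags a day that deviates sharply from the norm; the sample data and three-sigma threshold are assumptions for the example.

```python
from statistics import mean, stdev

# Hypothetical history: files accessed per day by one user over two weeks.
history = [102, 98, 110, 95, 105, 99, 101, 97, 104, 100]

def is_anomalous(today: int, baseline: list, sigmas: float = 3.0) -> bool:
    """Flag today's count if it sits more than `sigmas` standard deviations
    from the learned baseline - a crude stand-in for behavioural analytics."""
    mu, sd = mean(baseline), stdev(baseline)
    return abs(today - mu) > sigmas * max(sd, 1.0)

# A sudden spike (for example ransomware touching thousands of files) stands out.
print(is_anomalous(4500, history))  # True
print(is_anomalous(103, history))   # False
```

Real products build far richer baselines (time of day, device, data sensitivity and so on), but the underlying idea of "learn normal, alert on deviation" is the same.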

Strategy

As discussed in parts one and two, it is crucial the vendors who make up a data platform have a vision that addresses the challenges businesses see when it comes to data.

There should be no surprise then that Varonis's strategy aligns very well with those challenges; they were one of the first companies I came across that delivered real forethought to the management, control and governance of our data assets.

That vision continues, with new tools and capabilities continually delivered, such as Varonis Edge and the recent addition of a new automation engine, a significant enhancement to the Varonis portfolio: the tools now don't only warn of deviations from the norm, but can also act upon them to remediate the threat.

All of this, tied in with Varonis's continued extension of its integration with on-premises and cloud storage and service providers, ensures they will continue to play a significant role in bringing management to a modern data platform.

Regardless of whether you choose Varonis or not, it is crucial you have intelligent management and analytics built into your environment, because without them it will be almost impossible to deliver the kind of data platform fit for a modern data-driven business.

You can find the other posts from this series below;

Introduction
Part One – The Storage
Part Two – Availability

Straight as an Arrow – David Fearne & Richard Holmes – Ep58

If there is one thing we can say is a certainty in the technology industry, it is the constant state of change. How technology works, how we want to use it, where we want to use it and what we expect from it is constantly changing and, in reality, ever more demanding.

For those of us who work in technology, either as IT pros or IT decision makers, this presents its own challenges. When we are planning our IT strategy, how do we know where to focus, what technology bets should we be taking, and what trends are others taking advantage of that we could bring into our organisation to help us improve our services?

One of the things I like to do in my role is spend time looking at technology predictions and listening to ideas from those in the industry tasked with defining the strategic direction of their businesses, not to judge whether they are right or wrong (predicting things in this industry is so very difficult) but to pick out trends and areas that are of interest to the work I do, and then at least be aware of them and keep a watching brief on how they develop.

Keeping a watching brief gave me the idea for this week’s podcast as I catch up with two guests who produce an annual technology predictions blog and back that up with episodes on their own successful podcast where they look in more detail at those predictions.

David Fearne and Richard Holmes work for Arrow ECS, a global technology supplier and one of the world's largest companies. David is Technical Director, charged with looking after the relationships with, and developing strategy for, over 100 different technology partners and suppliers. Richard is Business Development Director for Arrow's Internet of Things (IoT) business. The gents also present the excellent Arrow Bandwidth podcast.

This week we look at their predictions from 2017, not to review whether they have been successful, but rather to focus on just a few areas of particular interest and look at how those areas have developed over the last 12 months and how we expect they will continue to shift.

We start by discussing data management and the concept of "data divorce", and why, in a rapidly changing landscape, how we look after our data will become increasingly important. We also look at how, in a world that is removing barriers to our ability to collect more and more data, we manage that data and, importantly, only collect things that are relevant and of use to us and our organisations.

The second area we explore is data analytics and how we build into our businesses the ability to make data-driven decisions. We discuss the fact that all businesses make decisions based on data; however, how do we remove our human inefficiencies and, more importantly, bias when we look at data? How many of us make decisions based on someone's "version of the truth"?

We also investigate the inhibitors to more of us embracing data analytics capabilities, capabilities that are increasingly available to us, particularly via providers like Microsoft, AWS and Google. The challenge isn't a technology one, but more about how we get those tools into the hands of the right people and empower them.

Next we look at security and David's assertion of a change in "security posture", and why it's crucial that we rethink the way we look at the security of our systems. We discuss why "assuming breach" is an important part of that change and, as the security problem becomes ever more complex, how we continue to address it. Is the answer to employ ever more security specialists?

We wrap up by discussing how each of these areas has a common thread running through it and how, as technology strategists, it is important that when making technology decisions we don't focus on the technology but fully understand the business outcomes we are trying to achieve.

It's a great chat with David and Richard and we could have discussed these trends for hours; luckily for you, it's only 40 minutes!

Enjoy the Show.

You'll find David and Richard's full list of predictions from 2017 here – https://www.arrowthehub.co.uk/blog/posts/2017/february/what-are-the-hottest-technology-trends-of-2017-part-1/

You’ll also find the 2018 predictions here https://www.arrowthehub.co.uk/blog/posts/2018/january/what-are-the-hottest-technology-trends-for-2018-part-1/

If you'd rather listen, then check out the excellent Arrow Bandwidth podcast; you can find the episodes discussing all of last year's predictions, as well as this year's, in the following places: Tech Trends 2017 Part One, Tech Trends 2017 Part Two, Tech Trends 2018 Part One, Tech Trends 2018 Part Two.

If you'd like to keep up with David and Richard, you can find them both on Twitter @davidfearne and @_Rich_Holmes.

Thanks for listening.

Building a modern data platform – Availability

In part one we discussed the importance of getting our storage platform right, in part two we look at availability.

The idea that availability is a crucial part of a modern platform was something I first heard from a friend of mine, Michael Cade from Veeam, who introduced me to “availability as part of digital transformation” and how this was changing Veeam’s focus.

This shift is absolutely right. Today, as we build our modern platforms, backup and recovery is still a crucial requirement; however, a focus on availability is at least as, if not more, crucial. Nobody in your business really cares how quickly you can recover a system; what our digitally driven businesses demand is that our systems are always there, and downtime in ever more competitive environments is not tolerated.

With that in mind why do I choose Veeam to deliver availability to my modern data platform?

Keep it simple

Whenever I meet a Veeam customer their first comment on Veeam is "it just works", and the power of this rather simple statement should not be underestimated when you are protecting key assets. Too often data protection solutions have been overly complex, inefficient and unreliable, something I have always found unacceptable; for businesses big or small you need a data protection solution you can deploy, then forget, and trust that it just does what you ask. This is perhaps Veeam's greatest strength, a crucial driver behind its popularity and what makes it such a good component of a data platform.

I would actually say Veeam are a bit like the Apple of availability: much of what they do has been done by others (Veeam didn't invent data protection, in the same way Apple didn't invent the smartphone), but what they have done is make it simple, usable and something that just works and can be trusted. Don't underestimate the importance of this.

Flexibility

If ever there was a byword for modern IT, flexibility could well be it. It's crucial that any solution and platform we build has the flexibility to react to ever-changing business and technological demands. Look at how business needs for technology, and the technology itself, have changed in the last 10 years and how much our platforms have needed to change to keep up: flash storage, web-scale applications, mobility, cloud, the list goes on.

The following statement sums up Veeam's view on flexibility perfectly:

“Veeam Availability Platform provides businesses and enterprises of all sizes with the means to ensure availability for any application and any data, across any cloud infrastructure”

It is this focus on flexibility that makes Veeam such an attractive proposition in the modern data platform, allowing me to design a solution that is flexible enough to meet my different needs, providing availability across my data platform, all with the same familiar toolset regardless of location, workload type or recovery needs.

Integration

As mentioned in part one, no modern data platform will be built with just one vendor's tools, not if you want to deliver the control and insight into your data that we demand as a modern business. Veeam, like NetApp, have built a very strong partner ecosystem allowing them to integrate tightly with many vendors; but more than just integrate, Veeam deliver additional value, allowing me to simplify and do more with my platform (take a look at this blog about how Veeam allows you to get more from NetApp snapshots). Veeam are continuously delivering new integrations, and not only with on-prem vendors but also, as mentioned earlier, with a vast range of cloud providers.

This ability to extend the capabilities and simplify the integration of multiple components in a multi-platform, multi-cloud world is very powerful and a crucial part of my data platform architecture.

Strategy

As with NetApp, over the last 18 months it has been the shift in Veeam's overall strategy that has impressed me more than anything else; although seemingly a simple change, the shift from talking about backup and recovery to talking about availability is significant.

As I said at the opening of this article, in our modern IT platforms nobody is interested in how quickly you can recover something, it’s about availability of crucial systems. A key part of Veeam’s strategy is to “deliver the next generation of availability for the Always-On Enterprise” and you can see this in everything Veeam are doing, focussing on simplicity, ensuring that you can have your workload where you need it when you need it and move those workloads seamlessly between on-prem, cloud and back again.

They have also been very smart, employing a strong leadership team and, as with NetApp, investing in ensuring that cloud services don’t leave a traditionally on-premises focussed technology provider adrift.

The Veeam and NetApp strategies are very similar, and it is this similarity that makes them attractive components in my data platform. I need my component providers to understand technology trends and changes so they, as well as our data platforms, can move and change with them.

Does it have to be Veeam?

In the same way it doesn't have to be NetApp, of course it doesn't have to be Veeam. But in exactly the same way, if you are building a platform for your data, make sure your platform components deliver the kinds of things we have discussed in the first two parts of this series: the flexibility we need, integration with components across your platform and a strategic vision you are comfortable with. As long as you have that, you will have rock-solid foundations to build on.

In Part Three of this series we will look at building insight, compliance and governance into our data platform.

You can find the Introduction and Part One – “The Storage” below.

The Introduction
Part One – The Storage

 

 

IT Pros and the Tech Community – Yadin Porter de Leon – Ep 57

One of the favourite parts of my role over the last few years has been my involvement in the tech community, whether that's been working with advocacy groups like the NetApp A-Team, with local user groups like TechUG, presenting at a range of different community events or just answering questions in technical communities. All of these investments (and they are investments) have paid back: they've introduced me to great people, given me access to resources and expertise I would never have found normally, opened up great opportunities for travel and helped me develop some great friendships.

We are fortunate to be part of an industry that does have a strong sense of community, full of people with shared interests and a passion for their subject, a passion they are often happy to share with anyone who’s interested.

One of the challenges with the tech community, however, is its size; if you are new to it, or even a part of it, it can be overwhelming and hard to know where to start. How do you find the resources you need, find out which events you can attend or find out who the leaders are that you can engage with?

Last year I was invited to get involved in a project called "Level Up", started by this week's guest on the podcast, Yadin Porter de Leon. Yadin has been on the show before in his capacity at data protection company Druva; however, that's not what we discuss this week, as we chat about the Level Up project, why he started it, the project's aims and how it can help you in your career.

In this week's episode we discuss why you may want to get involved in community, what benefits it can bring and how involvement in the wider community can benefit both you and your business, providing you with opportunities to develop your skills.

Yadin shares how one of the focuses of the project is to engage those who are not already involved in community and provide them with a way to get started.

We look at Level Up's first project, the vTrail Map, a fantastic guide to the world of VMware and the virtualisation community, and we also look ahead to what's next for the project and its longer-term aims.

We wrap up by asking Yadin about another project he is involved in, the excellent Tech Village Podcast, again focussed on career development and the technology business. It's a great show which I'd recommend anyone gets on their regular podcast list; you can find it on Soundcloud and follow the show on Twitter @TechVillagePod

For more information on Level Up, you can find them on Twitter @Tech_LevelUp

You can also contact Yadin on Twitter @porterdeleon

Hope you find the show interesting, and if you're not already involved in the tech community maybe this will give you a bit of inspiration to involve yourself more; it's most definitely worth it.

Thanks for listening.

Building a modern data platform – The Series – Introduction

For many of you who read my blog posts (thank you) or listen to the Tech Interviews Podcast (thanks again!), you'll know talking about data is something I enjoy. It has played a significant part in my career over the last 20 years, but today data is more central than ever to what so many of us are trying to achieve.

In today's modern world, however, storing our data is no longer enough; we need to consider much more. Yes, storing it effectively and efficiently is important, but so is its availability, security and privacy, and of course finding ways to extract value from it. Whether that's production data, archive or backup, we are looking at how we can make it do more (for examples of what I mean, read this article from my friend Matt Watts introducing the concept of Data Amplification Ratio) and deliver a competitive edge to our organisations.

To do this effectively means developing an appropriate data strategy and building a data platform that is fit for today's business needs. This is something I've written and spoken about on many occasions; however, one question I get asked regularly is "we understand the theory, but how do we build this in practice, what technology do you use to build a modern data platform?"

That's a good question; the theory is all great and important, however seeing practical examples of how you deliver these strategies can be very useful. With that in mind I've put together this series of blogs to go through the elements of a data strategy and share some of the practical technology components I use to help organisations build a platform that will allow them to get the best from their data assets.

Over this series we’ll discuss how these components deliver flexibility, maintain security and privacy, provide governance control and insights, as well as interaction with hyperscale cloud providers to ensure you can exploit analytics, AI and Machine Learning.

So, settle back and over the next few weeks I hope to provide some practical examples of the technology you can use to deliver a modern data strategy. Parts one and two are live now and can be accessed in the links below; the other links will become live as I post them, so do keep an eye out for them.

Part One – The Storage
Part Two – Availability
Part Three – Control

I hope you enjoy the series and that you find these practical examples useful. But remember, these are just some of the technologies I've used and are not the only technologies available, and you certainly don't have to use any of these to meet your data strategy goals. The aim of this series is to help you understand the art of the possible; if these exact solutions aren't for you, don't worry, go and find technology partners and solutions that are and use them to help you meet your goals.

Good Luck and happy building!

Coming Soon:

Part Four – What the cloud can bring

Part Five – out on the edges

Part Six – Exploiting the Cloud

Part Seven – A strategic approach

Building a modern data platform – The Storage

It probably isn't a surprise to anyone who has read my blogs previously to find out that when it comes to the storage part of our platform, NetApp are still first choice, but why?

While it is important to get the storage right, getting it right is about much more than just having somewhere to store data; it's important, even at the base level, that you can do more with it. As we move through the different elements of our platform we will look at other areas where we can apply insight and analytics; however, it should not be forgotten that there is significant value in having data services available at all levels of a data platform.

What are data services?

These services provide added capabilities beyond just a storage repository; they may provide security, storage efficiency, data protection or the ability to extract value from data. NetApp provide these services as standard with their ONTAP operating system, bringing considerable value regardless of whether data capacity needs are large or small. The ability to provide extra capabilities beyond just storing data is crucial to our modern data platform.

However, many storage providers offer data services on their platforms, not often as comprehensive as those provided in ONTAP, but they are there. So if that is the case, why else do I choose to use NetApp as the foundation of a data platform?

Data Fabric

"Data Fabric" is the simple answer (I won't go into detail here; I've written about the fabric before, for example in Data Fabric – What is it good for?). When we think about data platforms we cannot just think about them in isolation; we need considerably more flexibility than that. We may have data in our data centre on primary storage, but we may also want that data in another location, maybe with a public cloud provider, stored on a different platform, or in a different format altogether, object storage for example. However, to manage our data effectively and securely, we can't afford for it to be stored in different locations that need a plethora of separate management tools, policies and procedures to keep control.

The "Data Fabric" is why NetApp continue to be the base storage element of my data platform designs. The key to the fabric is the ONTAP operating system and its flexibility, which goes beyond an OS installed on a traditional controller. ONTAP can be consumed as a software service within a virtual machine or from AWS or Azure, providing the same data services, managed by the same tools, deployed in all kinds of different ways, allowing me to move my data between these repositories while maintaining all of the same management and controls.

Beyond that, the ability to move data between NetApp's other portfolio platforms, such as SolidFire and StorageGRID (their object storage solution), as well as to third-party storage such as Amazon S3 and Azure Blob, ensures I can build a complex fabric that allows me to place data where I need it, when I need it. The ability to do this while maintaining security, control and management with the same tools regardless of location is hugely powerful and beneficial.


APIs and Integration

When we look to build a data platform it would be ridiculous to assume it will only ever contain the components of a single provider, and as we build through the layers of our platform, integration between those layers is crucial and plays a part in the selection of the components I use.

APIs are increasingly important in the modern datacentre as we look for different ways to automate and integrate our components. Again this is an area where NetApp are strong, providing great third-party integrations with partners such as Microsoft, Veeam, VMware and Varonis (some of which we'll explore in other parts of the series), as well as options to drive many elements of their different storage platforms via APIs so we can automate the delivery of our infrastructure.
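As a hedged illustration of what that automation can look like, the sketch below creates a volume over a REST interface in the style of ONTAP's /api/storage/volumes endpoint. The hostname, credentials and payload fields are assumptions for the example, so treat it as a sketch to check against your platform's API reference rather than a definitive implementation.

```python
import requests

ONTAP_HOST = "cluster.example.local"   # hypothetical management address
AUTH = ("admin", "password")           # use a proper credential store in practice

def create_volume(name: str, svm: str, aggregate: str, size_bytes: int) -> dict:
    """Provision a volume via an ONTAP-style REST call (illustrative payload)."""
    payload = {
        "name": name,
        "svm": {"name": svm},
        "aggregates": [{"name": aggregate}],
        "size": size_bytes,
    }
    resp = requests.post(
        f"https://{ONTAP_HOST}/api/storage/volumes",
        json=payload,
        auth=AUTH,
        verify=False,   # lab convenience only - validate certificates in production
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_volume("data_vol_01", "svm_data", "aggr1", 100 * 1024**3))
```

Wrap a handful of calls like this into your provisioning pipeline and the storage layer stops being a manual step and becomes just another piece of automated infrastructure.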

Can it grow with me?

One of the key reasons we need a more strategic view of data platforms is the continued growth of our data and the demands we put on it; therefore scalability and performance are hugely important when we choose the storage components of our platform.

NetApp deliver this across all of their portfolio. ONTAP allows me to scale a storage cluster up to 24 nodes, delivering huge capacity, performance and compute capability. The SolidFire platform, inspired by the needs of service providers, allows simple and quick scaling and a quality-of-service engine that lets me guarantee performance levels for applications and data, and that is before we talk about the huge scale of the StorageGRID object platform or the fast and cheap capabilities of E-Series.

Crucially NetApp’s Data Fabric strategy means I can scale across these platforms providing the ability to grow my data platform as I need and not be restricted by a single technology.

Does it have to be NetApp?

Do you have to use NetApp to build a data platform? Of course not, but do make sure that whatever you choose as the storage element of your platform can tick the majority of the boxes we've discussed: data services, a strategic vision, the ability to move data between repositories and locations, and great integration, while ensuring your platform can meet the performance and scale demands you place on it.

If you can do that, then you’ll have a great start for your modern data platform.

In the next post in this series we'll look at the importance of availability – that post is coming soon.

Click below to return to “The Intro”

 

Building a modern data platform – The Series – Introduction