Building a modern data platform – Prevention (Office365)

In this series so far, we have looked at getting our initial foundations right, ensuring we have insight into and control of our data, and at the components I use to help achieve this. This time we are looking at something many organisations already use: a service with a wide range of capabilities that can help manage and control data, but which are often underutilised.

For ever-increasing numbers of us, Office365 has become the primary data and communications repository. However, I often find organisations are unaware of many powerful capabilities within their subscription that can greatly reduce the risk of a data breach.

Tucked away within Office365 is the Security and Compliance section (protection.office.com), the gateway to several powerful features that should be part of your modern data strategy.

In this article we are going to focus on two such features: “Data Loss Prevention” and “Data Governance”. Both offer powerful capabilities that can be deployed quickly across your organisation and can significantly mitigate the risks of a data breach.

Data Loss Prevention (DLP)

DLP is an important weapon in our data management arsenal. DLP policies are designed to ensure sensitive information does not leave our organisation in ways that it shouldn’t, and Office365 makes it straightforward to get started.

We can quickly create policies to apply across our organisation that help identify the types of data we hold. Several predefined templates already exist, including ones that identify financial data, personally identifiable information (PII), social security numbers, health records and passport numbers, with templates for a number of countries and regions across the world.

Once the policies that identify our data types are created, we can apply rules to that data governing how it can be used. We can apply several rules and, depending on requirements, make them increasingly stringent.
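To make that concrete, here is a minimal sketch of creating a DLP policy and rule using the Security and Compliance Centre PowerShell. The policy name, rule name and settings are purely illustrative assumptions, and the cmdlets and parameters should be verified against your own tenant before use:

```powershell
# Connect to the Security & Compliance Centre PowerShell first
# (for example Connect-IPPSSession from the Exchange Online Management module)

# Create a DLP policy covering Exchange, SharePoint and OneDrive,
# starting in test mode so users are notified but nothing is blocked yet
New-DlpCompliancePolicy -Name "Financial Data Protection" `
    -ExchangeLocation All `
    -SharePointLocation All `
    -OneDriveLocation All `
    -Mode TestWithNotifications

# Add a rule that looks for a built-in sensitive information type
# and will block external access once the policy is switched to Enable
New-DlpComplianceRule -Name "Credit card rule" `
    -Policy "Financial Data Protection" `
    -ContentContainsSensitiveInformation @{Name="Credit Card Number"} `
    -BlockAccess $true `
    -NotifyUser Owner
```

Starting in test mode lets you see what the policy would catch before you make the rules increasingly stringent.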

The importance of DLP rules should not be underestimated. While it’s important to understand who has access to and uses our data, too often we feel this is enough and don’t take the next crucial step of controlling the use and movement of that data.

We shouldn’t forget that those with the right access to the right data may accidentally or maliciously do the wrong thing with it!

Data Governance

Governance should be a cornerstone of a modern data platform. It defines the way we use, manage, secure, classify and retain our data, and it impacts the cost of our data storage, its security and our ability to deliver compliance for our organisations.

Office365 provides two key governance capabilities.

Labels

Labels allow us to apply classifications to our data so we can start to understand what is important and what isn’t. We can highlight what is for public consumption, what is private, sensitive, commercial in confidence or any other range of potential classifications that you have within your organisation.

Classification is a crucial part of delivering a successful data compliance capability, giving us granular control over exactly how we handle data of all types.

Labels can be applied automatically based on the contents of the data we have stored, applied by users as they create content, or applied in conjunction with the DLP rules we discussed earlier.

For example, a DLP policy can identify a document containing credit card details and then automatically apply a rule that labels it as sensitive information.
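A hedged sketch of that scenario in the same compliance PowerShell follows; the label name, retention settings and policy names are all made up for illustration, and the exact parameters may differ in your tenant:

```powershell
# Create a label called "Sensitive"
# (the 5-year retention period here is purely illustrative)
New-ComplianceTag -Name "Sensitive" `
    -Comment "Auto-applied to content containing payment card data" `
    -RetentionAction Keep `
    -RetentionDuration 1825 `
    -RetentionType ModificationAgeInDays

# Create an auto-apply policy and a rule that applies the label
# whenever the built-in credit card sensitive information type is found
New-RetentionCompliancePolicy -Name "Auto-label card data" `
    -ExchangeLocation All `
    -SharePointLocation All `
    -OneDriveLocation All

New-RetentionComplianceRule -Name "Auto-label card data rule" `
    -Policy "Auto-label card data" `
    -ApplyComplianceTag "Sensitive" `
    -ContentContainsSensitiveInformation @{Name="Credit Card Number"}
```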

Retention

Once we have classified our data into what is important and what isn’t, we can then, with retention policies, define what we keep and for how long.

These policies allow us to effectively manage and govern our information and, in turn, reduce the risk of litigation or security breach, either by retaining data for a period defined by a regulatory requirement or, importantly, by permanently deleting old content that we are no longer required to keep.

The policies can be assigned automatically based on classifications or can be applied manually by a user as they generate new data.

For example, if a user creates a new document containing financial data which must be retained for 7 years, that user can classify the data accordingly, ensuring that both our DLP and retention rules are applied as needed.
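A minimal sketch of that 7-year example, again with illustrative names and subject to checking the cmdlet parameters in your own tenant, creates the label and publishes it so users can apply it themselves as they create content:

```powershell
# A label that keeps content for roughly 7 years (2,555 days) from creation,
# then deletes it once the retention period expires
New-ComplianceTag -Name "Financial Records - 7 Years" `
    -RetentionAction KeepAndDelete `
    -RetentionDuration 2555 `
    -RetentionType CreationAgeInDays

# Publish the label so users can apply it manually to new documents
New-RetentionCompliancePolicy -Name "Publish financial labels" `
    -ExchangeLocation All `
    -SharePointLocation All

New-RetentionComplianceRule -Name "Publish financial labels rule" `
    -Policy "Publish financial labels" `
    -PublishComplianceTag "Financial Records - 7 Years"
```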

Management

Alongside these capabilities, Office365 provides us with two management tools: disposition and supervision.

Disposition is our holding pen for data to be deleted so we can review any deletions before actioning.

Supervision is a powerful capability allowing us to capture employee communications for examination by internal or external reviewers.

These tools are important in allowing us to show we have auditable processes and control within our platform and are taking the steps necessary to protect our data assets as we should.

Summary

The ability to govern and control our data wherever we hold it is a critical part of a modern data platform. If you use Office365 and are not using these capabilities then you are missing out.

The importance of governance is only going to grow as ever more stringent data privacy and security regulations develop. Governance can greatly reduce many of the risks associated with a data breach, and services such as Office365 have taken things that have traditionally been difficult to achieve and made them a whole lot easier.

If you are building a modern data platform then compliance and governance should be at the heart of your strategy.

This is part 4 in a series of posts on building a modern data platform; the previous parts of the series can be found below.

Introduction
The Storage
Availability
Control

Getting your cyber essentials – Jason Fitzgerald – Ep62

Cyber Security, be it how we secure our perimeter, infrastructure, mobile devices or data, is a complex and ever-changing challenge. In the face of this complexity, where do we start when it comes to building our organisation's cyber security standards?

Well, perhaps the answer lies in standardised frameworks and accreditations. If you think about it, one of the biggest challenges we have when it comes to security is knowing where to start, so having a standard to work towards makes perfect sense.

That is the subject of this week’s show with my guest and colleague Jason Fitzgerald, as we discuss the value of a UK-based accreditation, Cyber Essentials.

Jason is a very experienced technical engineer and consultant and today spends much of his time working with organisations to help them address their IT security concerns and develop policies, procedures, strategies and technologies to help them to improve their security baselines.

One of the tools that Jason uses extensively is a framework and accreditation produced by the National Cyber Security Centre here in the UK, Cyber Essentials. During this episode we discuss why such a framework is valuable and can help a business improve its security posture.

But first we start with discussing the kind of security landscape that Jason sees when he talks with businesses of all types, some of the confusion that they have and the often-misplaced confidence that comes with the “latest and greatest” security technology solution purchase.

We explore the importance of organisational “buy in” when it comes to security, why it can’t be just seen as an IT problem and how without senior sponsorship your security efforts may well be doomed to failure.

Jason shares with us the 5 key areas that Cyber Essentials covers, from perimeter to patching. He also provides some insight into the process that an organisation will head down when building their own security framework.

We also look at the value of getting your security foundation correct, how it can greatly reduce your exposure to many of the common cyber security risks, but also how without it, your attempts to build more robust security and compliance procedures may well fail.

We finish up with Jason sharing some of his top tips for starting your security journey and how, although Cyber Essentials is a UK based accreditation, the principles of it will be valuable to your organisation wherever in the world you may be based.

You can follow Jason on twitter @jay_fitzgerald and read more from him at his blog Bits with the Fitz

If you want to learn more about Cyber Essentials, then visit the UK’s National Cyber Security Centre website www.cyberessentials.ncsc.gov.uk

Next week, we are looking at GDPR as I’m joined by a special guest Mike Resseler from Veeam as he takes us through the business compliance process they have carried out across their global organisation.

Thanks for listening.

Thanks for memory – Alex McDonald – Ep61

At the start of 2018 the technology industry was hit by two new threats unlike anything it had seen before. Spectre and Meltdown used vulnerabilities not in operating system code or poorly written applications, but ones at a much lower level than that.

This vulnerability was not only something of concern to today’s technology providers, but also to those looking at architecting the way technology will work in the future.

As we try to push technology further and have it deal with more data, more quickly than ever before, the technology industry is having to look at ways of keeping up and having our tech work in ways beyond the limits of our current approaches. One of these developments is storage class memory, or persistent memory, where our data can be housed and accessed at speeds many times greater than they are today.

However, this move brings new vulnerabilities in the way we operate, vulnerabilities like those exposed by Spectre and Meltdown. But how did Spectre and Meltdown look to exploit these operational-level vulnerabilities? And what does that mean for our desire to constantly push technology to use data in ever more creative and powerful ways?

That’s the topic of this week’s Tech Interviews podcast, as I’m joined by the always fascinating Alex McDonald to discuss exactly what Spectre and Meltdown are, how they impact what we do today and how they may change the way we are developing our future technology.

Alex is part of the Standards Industry Association group at NetApp and represents them on boards such as SNIA (Storage Networking Industry Association).

In this episode, he brings his wide industry experience to the show to share some detail on exactly what Spectre and Meltdown are, how they operate, what vulnerabilities they exploit, as well as what exactly these vulnerabilities put at risk in our organisations.

We take a look at how these exploits take advantage of side channels and speculative execution to allow an attacker to access data that you never would imagine to be at risk, and how our eagerness to push technology to its limits created those vulnerabilities.

We discuss how this has changed the way the technology industry is now looking at the future developments of memory, as our demands to develop ever larger and faster data repositories show no sign of slowing down.

Alex shares some insights into the future, as we look at the development of persistent memory, what is driving demand and how the need for this kind of technology means the industry has no option but to get it right.

To ease our fears Alex also outlines how the technology industry is dealing with new threats to ensure that development of larger and faster technologies can continue, while ensuring the security and privacy of our critical data.

We wrap up discussing risk mitigation: what systems are at risk of attack from exploits like Spectre and Meltdown, which are not, and how we ensure we protect them long term.

We finish on the positive message that the technology industry is indeed smart enough to solve these challenges and how it is working hard to ensure that it can deliver technology to the demands we have for our data to help solve big problems.

You can find more on Wikipedia about Spectre and Meltdown.

You can learn more about the work of SNIA on their website.

And if you’d like to stalk Alex online, you can find him on twitter talking about technology and Scottish politics! @alextangent

Hope you enjoyed the show. With the Easter holidays here in the UK we’re taking a little break, but we’ll be back with new episodes in a few weeks’ time. For now, thanks for listening.

Availability of all of the things – Michael Cade – Ep 60

Recently I wrote a blog post as part of a series exploring the importance of availability to a modern data platform, especially in a world where our reliance on technology is ever increasing, from the way we operate our businesses to the way we live our lives, and where digitally focused businesses can no longer tolerate downtime, planned or unplanned, in the way they could even 5 years ago (you can read that post here).

So how do we mitigate against the evils of downtime? That’s simple: we build recovery and continuity plans to ensure that our systems remain on regardless of the events that go on around them, from planned maintenance to the very much unplanned disaster. But there’s the problem, these things aren’t simple, are they?

I’ve recently worked on a project where we’ve been doing exactly this, building DR and continuity plans in the more “traditional” way, writing scripts, policies and procedures to ensure that in the event of some kind of disaster the systems could be recovered quickly and meet stringent recovery time and point objectives. What this project reminded me of is how difficult these things are: keeping your documentation up to date, making sure your scripts are followed and ensuring you can fully test these plans is tricky.

With that in mind, the recent announcement from Veeam of their new Availability Orchestrator solution caught my attention: a solution that promises to orchestrate not only the delivery of a DR solution but also to automate its documentation and testing. This was something I needed to understand more, and I thought I wouldn’t be the only one.

So that is the topic of this week’s podcast, as serial guest Michael Cade, Global Technologist at Veeam, joins me to provide an insight into Availability Orchestrator: what challenges it addresses, why Veeam thought it was important to develop and how it can help you deliver better availability to your critical systems.

During the show Michael shares some insight into understanding your availability gap and why today's businesses cannot tolerate downtime of key systems, as well as the difficulties that come with maintaining a robust and appropriate strategy.

We explore the challenges of testing when the business doesn’t want downtime, how to keep track of all the little tricks our tech teams keep in their heads, and how to get that knowledge into a continuity plan.

We finish up looking at how Availability Orchestrator can help by providing an automation and orchestration solution to automate the testing, documentation and execution of our continuity plans, and how it can also be a tool to help us build test and dev environments, as well as help us migrate to cloud platforms like VMware on AWS.

Availability Orchestrator, in my opinion, is a very powerful tool. Having just worked on a continuity and DR project, the challenges that come with manually maintaining these plans are still very fresh in my mind, and had this tool been available when I started that project it would certainly have been worthy of investigation into how it could help.

If you want to find out more about Veeam availability orchestrator, check out the Veeam Website.

You can follow Michael on twitter @MichaelCade1

And if you’d like to read his blog series on Veeam replication you’ll find that on his blog site starting here.

Hope you’ve found the show useful.

Thanks for listening.

Managing the future – Dave Sobel – Ep59

As our IT systems become ever more complex, with more data, devices and ways of working, the demands on our systems and ensuring they are always operating efficiently grow. This in turn presents us and our IT teams with a whole new range of management challenges.

Systems management has always been a challenge for organisations: how do we keep on top of an ever-increasing number of systems? How do we ensure they remain secure and patched? And how do we cope with our users and their multitude of devices and ensure we can effectively look after them?

Like most of our technology, systems management is changing, but how? And what should we expect from future management solutions?

That’s the subject of this week’s podcast, as I’m joined by returning guest Dave Sobel. Dave is Senior Director of Community at SolarWinds MSP, working with SolarWinds partners and customers to ensure they deliver a great service.

As part of this role, Dave is also charged with looking at the future (not the distant future, but the near future of the next 2 years) of systems management and what these platforms need to include in them to continue to be relevant and useful.

Dave provides some excellent insight into the way the management market is shifting and some of the technology trends that will change and improve the way we control our ever more complex yet crucial IT systems.

We start by asking why looking at the future is such an important part of the IT strategist's role. Whether you are a CIO, an IT Director, or anyone who makes technology strategy decisions, not taking a look at future trends will seriously limit your ability to make good technology decisions.

We see why we need to rethink how we see a “computer” and how this is leading to a proliferation of different devices with the emergence of Internet of Things (IoT) as well as looking at why that is such a horrible phrase and how this is affecting our ability to manage.

We discuss the part Artificial Intelligence is going to play in future systems management as we try to supplement our overstretched IT staff and provide them with ways of analysing ever more data and turning it into something useful.

We also investigate increased automation, looking at how our management systems can be more flexible in supporting new devices as they are added to our systems, as well as being smarter in the way we can apply management to all of our devices.

Finally, we look at the move to human-centric management; instead of our systems being built to support devices, we need to be able to understand the person who uses the technology and build our management and controls around them, allowing us to provide better management and, importantly, a better technology experience.

We wrap up looking at how smarter systems management is going to allow us to free our IT teams to provide increased value to the business, as well as looking at a couple of areas you can focus on today, to start to look at the way you manage your systems.

To find more from Dave you can follow him on twitter @djdaveet

You will find Dave’s Blog is here

I hope you found the chat as interesting as I did.

Until next time, thanks for listening.

Building a modern data platform – Control

In the first parts of this series we have looked at ensuring the building blocks of our platform are right so that our data is sitting on strong foundations.

In this part we look at bringing management, security and compliance to our data platform.

As our data, the demands we place on it and the amount of regulation controlling it continue to grow, gaining deep insight into how it is used can no longer be a “nice to have”; it has to be an integral part of our strategy.

If you look at the way we have traditionally managed data growth you can see the basics of the problem: we have added file servers, storage arrays and cloud repositories as demanded, because adding more has been easier than managing the problem.

However, this is no longer the case. As we see our data as more of an asset we need to make sure it is in good shape; holding poor quality data is not in our interest, the cost of storing it no longer goes unnoticed, and we can no longer go to the business every 12 months needing more. While I have no intention of making this a piece about the EU General Data Protection Regulation (GDPR), it and regulation like it is forcing us to rethink how we view the management of our data.

So what do I use in my data platforms to manage and control data better?

Varonis


I came across Varonis and their data management suite about 4 years ago, and it was the catalyst for a fundamental shift in the way I have thought about and talked about data, as it opened up brand new insights into how unstructured data in a business was being used and highlighted the flaws in the way people were traditionally managing it.

With that in mind, how do I start to build management into my data platform?

It starts by finding answers to two questions:

Who, Where and When?

Without understanding this point it will be impossible to properly build management into our platform.

If we don’t know who is accessing data how can we be sure only the right people have access to our assets?

If we don’t know where the data is, how are we supposed to control its growth, secure it and govern access?

And of course, when is the data accessed, or even, is it accessed at all? Let’s face it, if no one is accessing our data then why are we holding it at all?

What’s in it?

However, there are lots of tools that tell me the who, where and when of data access; that alone is not really the reason I include Varonis in my platform designs.

While who, where and when are important, they do not include a crucial component: the what. What type of information is stored in my data?

If I’m building management policies and procedures I can’t do that without knowing what is contained in my data. Is it sensitive information like finances, intellectual property or customer details? And, as we look at regulation such as GDPR, knowing where we hold private and sensitive data about individuals is increasingly crucial.

Without this knowledge we cannot ensure our data and business compliance strategies are fit for purpose.

Building Intelligence into our system

In my opinion one of the most crucial parts of a modern data platform is the inclusion of behavioural analytics. As our platforms grow ever more diverse, complex and large, one of the common refrains I hear is “this information is great, but who is going to look at it, let alone action it?”. This is a very fair point and a real problem.

Behavioural analytics tools can help address this and supplement our IT teams. These technologies are capable of understanding and learning the normal behaviour of our data platform and, when those norms are deviated from, can warn us quickly and allow us to address the issue.

This kind of behavioural understanding offers significant benefits from knowing who the owners of a data set are to helping us spot malicious activity, from ransomware to data theft.

In my opinion this kind of technology is the only realistic way of maintaining security, control and compliance in a modern data platform.

Strategy

As discussed in parts one and two, it is crucial the vendors who make up a data platform have a vision that addresses the challenges businesses see when it comes to data.

It should be no surprise then that Varonis’s strategy aligns very well with those challenges; they were one of the first companies I came across that brought real forethought to the management, control and governance of our data assets.

That vision continues, with new tools and capabilities continually delivered, such as Varonis Edge and the recent addition of a new automation engine, which provides a significant enhancement to the Varonis portfolio: the tools now not only warn of deviations from the norm, but can also act upon them to remediate the threat.

All of this, tied in with Varonis’ continued extension of its integration with on-premises and cloud storage and service providers, ensures they will continue to play a significant role in bringing management to a modern data platform.

Regardless of whether you choose Varonis or not it is crucial you have intelligent management and analytics built into your environment, because without it, it will be almost impossible to deliver the kind of data platform fit for a modern data driven business.

You can find the other posts from this series below;

Introduction
Part One – The Storage
Part Two – Availability

Straight as an Arrow – David Fearne & Richard Holmes – Ep58

If there is one thing we can say is a certainty in the technology industry, it is the constant state of change. How technology works, how we want to use it, where we want to use it and what we expect from it is constantly changing and, in reality, ever more demanding.

For those of us who work in technology, either as IT pros or IT decision makers, this presents its own challenges: when we are planning our IT strategy how do we know where to focus, what technology bets should we be taking, and what trends are others taking advantage of that we could bring into our organisation to help improve our services?

One of the things I like to do in my role is spend time looking at technology predictions and listening to ideas from those in the industry tasked with defining the strategic direction of their businesses, not to judge whether they are right or wrong (predicting things in this industry is so very difficult) but to pick out trends and areas that are of interest to the work I do, and then at least be aware of them and keep a watching brief on how they develop.

Keeping a watching brief gave me the idea for this week’s podcast as I catch up with two guests who produce an annual technology predictions blog and back that up with episodes on their own successful podcast where they look in more detail at those predictions.

David Fearne and Richard Holmes work for Arrow ECS, a global technology supplier and one of the world's largest companies. David is Technical Director, charged with looking after the relationship with, and developing strategy for, over 100 different technology partners and suppliers. Richard is Business Development Director for Arrow’s Internet of Things (IoT) business. The gents also present the excellent Arrow Bandwidth podcast.

This week we look at their predictions from 2017, not to review whether they have been successful, but rather to focus on just a few areas of particular interest and look at how those areas have developed over the last 12 months and how we expect they will continue to shift.

We start by discussing data management and the concept of “data divorce”, and why, in a rapidly changing landscape, how we look after our data will become increasingly important. We also look at how, in a world that is removing barriers to our ability to collect more and more data, we manage that growth and, importantly, how we collect only things that are relevant and of use to us and our organisations.

The second area we explore is data analytics and how we build into our businesses the ability to make data-driven decisions. We discuss the fact that all businesses make decisions based on data; however, how do we remove our human inefficiencies and, more importantly, bias when we look at data? How many of us make decisions based on someone’s “version of the truth”?

We also investigate the inhibitors to more of us embracing data analytics capabilities, capabilities that are increasingly available to us particularly via providers like Microsoft, AWS and Google, the challenge isn’t a technology one, but more about how we get those tools into the hands of the right people and empower them.

We then look at security and David’s assertion of a change in “security posture”, and why it’s crucial that we rethink the way we look at the security of our systems. We discuss why “assuming breach” is an important part of that change. We also ask, as the security problem becomes ever more complex, how do we continue to address it? Is the answer to employ ever more security specialists?

We wrap up by discussing how each of these areas have a common thread running through them and how as technology strategists it is important that, when making technology decisions, we don’t focus on technology but fully understand the business outcomes we are trying to achieve.

It’s a great chat with David and Richard and we could have discussed these trends for hours, luckily for you, it’s only 40 minutes!

Enjoy the Show.

You’ll find David and Richards full list of prediction from 2017 here – https://www.arrowthehub.co.uk/blog/posts/2017/february/what-are-the-hottest-technology-trends-of-2017-part-1/

You’ll also find the 2018 predictions here https://www.arrowthehub.co.uk/blog/posts/2018/january/what-are-the-hottest-technology-trends-for-2018-part-1/

If you’d rather listen, then check out the excellent Arrow Bandwidth podcast you can find the episodes discussing all of last years predictions as well as this years in the following places Tech Trends 2017 Part One, Tech Trends 2017 Part Two, Tech Trends 2018 Part One, Tech Trends 2018 Part Two.

If you’d like to keep up with David and Richard, you can find them both on twitter @davidfearne and @_Rich_Holmes.

Thanks for listening.

Building a modern data platform – Availability

In part one we discussed the importance of getting our storage platform right, in part two we look at availability.

The idea that availability is a crucial part of a modern platform was something I first heard from a friend of mine, Michael Cade from Veeam, who introduced me to “availability as part of digital transformation” and how this was changing Veeam’s focus.

This shift is absolutely right. Today, as we build our modern platforms, backup and recovery is still a crucial requirement; however, a focus on availability is at least as, if not more, crucial. Nobody in your business really cares how quickly you can recover a system; what our digitally driven businesses demand is that our systems are always there, and downtime in ever more competitive environments is not tolerated.

With that in mind why do I choose Veeam to deliver availability to my modern data platform?

Keep it simple

Whenever I meet a Veeam customer their first comment on Veeam is “it just works”. The power of this rather simple statement should not be underestimated when you are protecting key assets. Too often data protection solutions have been overly complex, inefficient and unreliable, something I have always found unacceptable. For businesses big or small you need a data protection solution you can deploy and then forget, trusting it just does what you ask. This is perhaps Veeam’s greatest strength, a crucial driver behind its popularity and what makes it such a good component of a data platform.

I would actually say Veeam are a bit like the Apple of availability: much of what they do has been done by others (Veeam didn’t invent data protection, in the same way Apple didn’t invent the smartphone), but what they have done is make it simple, usable and something that just works and can be trusted. Don’t underestimate the importance of this.

Flexibility

If ever there was a byword for modern IT, flexibility could well be it; it’s crucial that any solution and platform we build has the flexibility to react to ever-changing business and technological demands. Look at how business needs for technology, and the technology itself, have changed in the last 10 years and how much our platforms have needed to change to keep up: flash storage, web-scale applications, mobility, cloud, the list goes on.

The following statement sums up Veeam’s view on flexibility perfectly:

“Veeam Availability Platform provides businesses and enterprises of all sizes with the means to ensure availability for any application and any data, across any cloud infrastructure”

It is this focus on flexibility that makes Veeam such an attractive proposition in the modern data platform, allowing me to design a solution that is flexible enough to meet my different needs, providing availability across my data platform, all with the same familiar toolset regardless of location, workload type or recovery needs.

Integration

As mentioned in part one, no modern data platform will be built with just one vendor's tools, not if you want to deliver the control and insight into your data that we demand as a modern business. Veeam, like NetApp, have built a very strong partner ecosystem allowing them to integrate tightly with many vendors; but, more than just integrating, Veeam deliver additional value, allowing me to simplify and do more with my platform (take a look at this blog about how Veeam allows you to get more from NetApp snapshots). Veeam are continuously delivering new integrations, not only with on-prem vendors but also, as mentioned earlier, with a vast range of cloud providers.

This ability to extend the capabilities and simplify the integration of multiple components in a multi-platform, multi-cloud world is very powerful and a crucial part of my data platform architecture.

Strategy

As with NetApp, over the last 18 months it has been the shift in Veeam’s overall strategy that has impressed me more than anything else. Although seemingly a simple change, the shift from talking about backup and recovery to availability is significant.

As I said at the opening of this article, in our modern IT platforms nobody is interested in how quickly you can recover something; it’s about availability of crucial systems. A key part of Veeam’s strategy is to “deliver the next generation of availability for the Always-On Enterprise” and you can see this in everything Veeam are doing: focussing on simplicity, ensuring that you can have your workload where you need it when you need it, and moving those workloads seamlessly between on-prem, cloud and back again.

They have also been very smart, employing a strong leadership team and, as with NetApp, investing in ensuring that cloud services don’t leave a traditionally on-premises focussed technology provider adrift.

The Veeam and NetApp strategies are very similar, and it is this similarity that makes them attractive components in my data platform. I need my component providers to understand technology trends and changes so they, as well as our data platforms, can move and change with them.

Does it have to be Veeam?

In the same way it doesn’t have to be NetApp, of course it doesn’t have to be Veeam. But if you are building a platform for your data, make sure your platform components deliver the kinds of things we have discussed in the first two parts of this series: ensure they provide the flexibility we need, integration with components across your platform and a strategic vision you are comfortable with. As long as you have that, you will have rock-solid foundations to build on.

In Part Three of this series we will look at building insight, compliance and governance into our data platform.

You can find the Introduction and Part One – “The Storage” below.

The Introduction
Part One – The Storage

 

 

IT Pros and the Tech Community – Yadin Porter de Leon – Ep 57

One of the favourite parts of my role over the last few years has been my involvement in the tech community, whether that’s been working with advocacy groups like the NetApp A-Team, with local user groups like TechUG, presenting at a range of different community events or just answering questions in technical communities. All of these investments (and they are investments) have paid back: they’ve introduced me to great people, given me access to resources and expertise I would never have found otherwise and opened up great opportunities for travel and to develop some great friendships.

We are fortunate to be part of an industry that does have a strong sense of community, full of people with shared interests and a passion for their subject, a passion they are often happy to share with anyone who’s interested.

One of the challenges with the tech community, however, is its size; if you are new to it, or even a part of it, it can be overwhelming and hard to know where to start. How do you find the resources you need, find out which events you can attend, or find out who the leaders are that you can engage with?

Last year I was invited to get involved in a project called “Level Up”, a project started by this week’s guest on the podcast, Yadin Porter de Leon. Yadin has been on the show before in his capacity at data protection company Druva; however, that’s not what we discuss this week, as we chat about the Level Up project, why he started it, the project's aims and how it can help you in your career.

In this week’s episode we discuss why you may want to get involved in the community, what benefits it can bring, and how involvement in the wider community can benefit both you and your business, providing you with opportunities to develop your skills.

Yadin shares how one of the focuses of the project is to engage those who are not already involved in community and provide them a way to get started.

We look at Level Up’s first project, the vTrail Map, a fantastic guide to the world of VMware and the virtualisation community, and we also look ahead to what’s next for the project and its longer-term aims.

We wrap up by asking Yadin about another project he is involved in, the excellent Tech Village Podcast, again focussed on career development and the technology business. It's a great show which I’d recommend anyone adds to their regular podcast list; you can find it on Soundcloud and follow the show on twitter @TechVillagePod

For more information on Level Up, you can find them on twitter @Tech_LevelUp

You can also contact Yadin on twitter @porterdeleon

Hope you find the show interesting, and if you’re not already involved in the tech community maybe this will give you a bit of inspiration to get involved; it’s most definitely worth it.

Thanks for listening.

Building a modern data platform – The Series – Introduction

For many of you who read my blog posts (thank you) or listen to the Tech Interviews Podcast (thanks again!) you’ll know talking about data is something I enjoy. It has played a significant part in my career over the last 20 years, but today data is more central than ever to what so many of us are trying to achieve.

In today’s modern world, however, storing our data is no longer enough; we need to consider much more. Yes, storing it effectively and efficiently is important, but so is its availability, security and privacy, and of course finding ways to extract value from it. Whether that’s production data, archive or backup, we are looking at how we can make it do more (for examples of what I mean, read this article from my friend Matt Watts introducing the concept of Data Amplification Ratio) and deliver a competitive edge to our organisations.

To do this effectively means developing an appropriate data strategy and building a data platform that is fit for today’s business needs. This is something I’ve written and spoken about on many occasions; however, one question I get asked regularly is “we understand the theory, but how do we build this in practice, what technology do you use to build a modern data platform?”.

That’s a good question; the theory is all great and important, however seeing practical examples of how you deliver these strategies can be very useful. With that in mind I’ve put together this series of blogs to go through the elements of a data strategy and share some of the practical technology components I use to help organisations build a platform that will allow them to get the best from their data assets.

Over this series we’ll discuss how these components deliver flexibility, maintain security and privacy, provide governance control and insights, as well as interaction with hyperscale cloud providers to ensure you can exploit analytics, AI and Machine Learning.

So, settle back; over the next few weeks I hope to provide some practical examples of the technology you can use to deliver a modern data strategy. Parts one and two are live now and can be accessed in the links below. The other links will become live as I post them, so do keep an eye out for them.

Part One – The Storage
Part Two – Availability
Part Three – Control

I hope you enjoy the series and find these practical examples useful. Remember, these are just some of the technologies I’ve used; they are not the only technologies available and you certainly don’t have to use any of them to meet your data strategy goals. The aim of this series is to help you understand the art of the possible; if these exact solutions aren’t for you, don’t worry, go and find technology partners and solutions that are and use them to help you meet your goals.

Good Luck and happy building!

Coming Soon;

Part Four – What the cloud can bring

Part Five – out on the edges

Part Six – Exploiting the Cloud

Part Seven – A strategic approach