Thanks for memory – Alex McDonald – Ep61

At the start of 2018 the technology industry was hit by two new threats unlike anything it had seen before. Spectre and Meltdown exploited vulnerabilities not in operating system code or poorly written applications, but flaws at a much lower level: in the processors themselves.

These vulnerabilities were not only a concern for today’s technology providers, but also for those architecting the way technology will work in the future.

As we push technology further, asking it to deal with more data, more quickly than ever before, the industry has to find ways of keeping up and to make our tech work in ways that go beyond the limits of our current approaches. One of these developments is storage class memory, or persistent memory, where our data can be housed and accessed at speeds many times greater than today.

However, this move brings new vulnerabilities in the way we operate, vulnerabilities like those exposed by Spectre and Meltdown. So how did Spectre and Meltdown exploit vulnerabilities at this level? And what does that mean for our desire to constantly push technology to use data in ever more creative and powerful ways?

That’s the topic of this week’s Tech Interviews podcast, as I’m joined by the always fascinating Alex McDonald to discuss exactly what Spectre and Meltdown are, how they impact what we do today and how they may change the way we develop our future technology.

Alex is part of the Standards Industry Association group at NetApp and represents them on boards such as SNIA (Storage Networking Industry Association).

In this episode, he brings his wide industry experience to the show to share some detail on exactly what Spectre and Meltdown are, how they operate, what vulnerabilities they exploit, as well as what exactly these vulnerabilities put at risk in our organisations.

We take a look at how these exploits take advantage of side channels and speculative execution to give an attacker access to data you would never imagine to be at risk, and how our eagerness to push technology to its limits created those vulnerabilities.

We discuss how this has changed the way the technology industry is now looking at the future developments of memory, as our demands to develop ever larger and faster data repositories show no sign of slowing down.

Alex shares some insights into the future, as we look at the development of persistent memory, what is driving demand and how the need for this kind of technology means the industry has no option but to get it right.

To ease our fears Alex also outlines how the technology industry is dealing with new threats to ensure that development of larger and faster technologies can continue, while ensuring the security and privacy of our critical data.

We wrap up discussing risk mitigation: which systems are at risk of attack from exploits like Spectre and Meltdown, which are not, and how we ensure we protect them long term.

We finish on the positive message that the technology industry is indeed smart enough to solve these challenges and is working hard to ensure it can keep delivering technology that meets our demands on data and helps solve big problems.

You can find more on Wikipedia about Spectre and Meltdown.

You can learn more about the work of SNIA on their website.

And if you’d like to stalk Alex online you can find him on twitter talking about technology and Scottish politics! @alextangent

Hope you enjoyed the show. With the Easter holidays here in the UK we’re taking a little break, but we’ll be back with new episodes in a few weeks’ time. For now, thanks for listening.


Availability of all of the things – Michael Cade – Ep 60

Recently I wrote a blog post as part of a series exploring the importance of availability to a modern data platform, especially in a world where our reliance on technology is ever increasing, from the way we operate our businesses to the way we live our lives, and where digitally focussed businesses can no longer tolerate downtime, planned or unplanned, in the way they could even five years ago (you can read that post here).

So how do we mitigate the evils of downtime? That’s simple: we build recovery and continuity plans to ensure that our systems remain on regardless of the events going on around them, from planned maintenance to the very much unplanned disaster. But there’s the problem, these things aren’t simple, are they?

I’ve recently worked on a project doing exactly this, building DR and continuity plans in the more “traditional” way: writing scripts, policies and procedures to ensure that in the event of some kind of disaster the systems could be recovered quickly and meet stringent recovery time and point objectives. What this project reminded me of is how difficult these things are; keeping your documentation up to date, making sure your scripts are followed and ensuring you can fully test your plans is tricky.

With that in mind, the recent product announcement from Veeam of their new Availability Orchestrator solution caught my attention. It promises to automate and orchestrate not only the delivery of a DR solution but also its documentation and testing. This was something I needed to understand more, and I thought I wouldn’t be the only one.

So that is the topic of this week’s podcast, as serial guest Michael Cade, Global Technologist at Veeam, joins me to provide an insight into Availability Orchestrator, what challenges it addresses, why Veeam thought it was important to develop and how it can help you deliver better availability to your critical systems.

During the show Michael shares some insight into understanding your availability gap and why today’s businesses cannot tolerate downtime of key systems, as well as the difficulties that come with maintaining a robust and appropriate strategy.

We explore the challenges of testing when the business doesn’t want downtime, and how to capture all of the little tricks our tech teams keep in their heads and get them into a continuity plan.

We finish up looking at how Availability Orchestrator can help, by providing an orchestration solution that automates the testing, documentation and execution of our continuity plans, and how it can also be a tool to help us build test and dev environments, as well as migrate to cloud platforms like VMware on AWS.

Availability Orchestrator, in my opinion, is a very powerful tool. Having just worked on a continuity and DR project, the challenges that come with manually maintaining these plans are still very fresh in my mind, and had this tool been available when I started that project it would certainly have been worth investigating how it could help.

If you want to find out more about Veeam Availability Orchestrator, check out the Veeam website.

You can follow Michael on twitter @MichaelCade1

And if you’d like to read his blog series on Veeam replication you’ll find that on his blog site starting here.

Hope you’ve found the show useful.

Thanks for listening.

Managing the future – Dave Sobel – Ep59

As our IT systems become ever more complex, with more data, devices and ways of working, the demands on our systems, and on keeping them operating efficiently, continue to grow. This in turn presents us and our IT teams with a whole new range of management challenges.

Systems management has always been a challenge for organisations. How do we keep on top of an ever-increasing number of systems? How do we ensure they remain secure and patched? And how do we cope with our users and their multitude of devices and ensure we can effectively look after them?

Like most of our technology, systems management is changing, but how? And what should we expect from future management solutions?

That’s the subject of this week’s podcast, as I’m joined by returning guest Dave Sobel. Dave is Senior Director of Community at SolarWinds MSP, working with SolarWinds partners and customers to ensure they deliver a great service.

As part of this role, Dave is also charged with looking at the future (not the distant future, but the near future of the next two years) of systems management and what these platforms need to include to remain relevant and useful.

Dave provides some excellent insight into the way the management market is shifting and some of the technology trends that will change and improve the way we control our ever more complex yet crucial IT systems.

We start by asking why looking at the future is such an important part of the IT strategist’s role. Whether you are a CIO, IT Director, or anyone who makes technology direction decisions, not taking a look at future trends will seriously limit your ability to make good technology decisions.

We look at why we need to rethink what we mean by a “computer”, how the emergence of the Internet of Things (IoT) is leading to a proliferation of different devices, why that is such a horrible phrase, and how all of this is affecting our ability to manage.

We discuss the part Artificial Intelligence is going to play in future systems management as we try to supplement our overstretched IT staff and provide them with ways of analysing ever more data and turning it into something useful.

We also investigate increased automation, looking at how our management systems can be more flexible in supporting new devices as they are added, as well as smarter in the way we apply management to all of our devices.

Finally, we look at the move to human-centric management: instead of building our systems to support devices, we need to understand the person who uses the technology and build our management and controls around them, allowing us to provide better management and, importantly, a better technology experience.

We wrap up looking at how smarter systems management is going to free our IT teams to provide increased value to the business, as well as a couple of areas you can focus on today to start rethinking the way you manage your systems.

To find more from Dave you can follow him on twitter @djdaveet

You will find Dave’s blog here

I hope you found the chat as interesting as I did.

Until next time, thanks for listening.

Building a modern data platform – Control

In the first parts of this series we have looked at ensuring the building blocks of our platform are right so that our data is sitting on strong foundations.

In this part we look at bringing management, security and compliance to our data platform.

As our data, the demands we place on it and the amount of regulation controlling it continue to grow, gaining deep insight into how our data is used can no longer be a “nice to have”; it has to be an integral part of our strategy.

If you look at the way we have traditionally managed data growth you can see the basics of the problem: we have added file servers, storage arrays and cloud repositories on demand, because adding more has been easier than managing the problem.

However, this is no longer the case. As we come to see our data as an asset we need to make sure it is in good shape: holding poor quality data is not in our interest, the cost of storing it no longer goes unnoticed, and we can no longer go to the business every 12 months needing more. And while I have no intention of making this a piece about the EU General Data Protection Regulation (GDPR), it, and regulation like it, is forcing us to rethink how we view the management of our data.

So what do I use in my data platforms to manage and control data better?

Varonis


I came across Varonis and their data management suite about 4 years ago, and it was the catalyst for a fundamental shift in the way I have thought and talked about data, as it opened up brand new insights into how unstructured data in a business was being used and highlighted the flaws in the way people were traditionally managing it.

With that in mind, how do I start to build management into my data platform?

It starts by finding answers to two questions:

Who, Where and When?

Without understanding this point it will be impossible to properly build management into our platform.

If we don’t know who is accessing data how can we be sure only the right people have access to our assets?

If we don’t know where the data is, how are we supposed to control its growth, secure it and govern access?

And of course, when is the data accessed, or is it even accessed at all? Let’s face it, if no one is accessing our data then why are we holding it at all?
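To make the who, where and when concrete, below is a minimal sketch of the kind of naive inventory you could build yourself from file metadata alone. The share path is hypothetical, the owner lookup is Unix-centric, and it only sees filesystem attributes rather than actual access events, which is precisely why dedicated tooling goes much further.

```python
# A naive "who, where and when" inventory: walk a file share and record the
# owner, location and last-access time of every file we can see.
# SHARE_ROOT is a hypothetical path; point it at a share you actually have.
import csv
import os
from datetime import datetime, timezone

SHARE_ROOT = "/mnt/fileshare"    # hypothetical unstructured data share
REPORT = "data_inventory.csv"

def file_owner(st: os.stat_result) -> str:
    """Best-effort owner lookup; falls back to the numeric UID off Unix."""
    try:
        import pwd
        return pwd.getpwuid(st.st_uid).pw_name
    except (ImportError, KeyError):
        return str(st.st_uid)

with open(REPORT, "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["owner", "path", "size_bytes", "last_accessed"])
    for dirpath, _dirnames, filenames in os.walk(SHARE_ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip anything we cannot read
            accessed = datetime.fromtimestamp(st.st_atime, tz=timezone.utc)
            writer.writerow([file_owner(st), path, st.st_size, accessed.isoformat()])
```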

What’s in it?

However, there are lots of tools that can tell me the who, where and when of data access, and that’s not really the reason I include Varonis in my platform designs.

While who, where and when are important, they do not include a crucial component: the what. What type of information is stored in my data?

If I’m building management policies and procedures I can’t do that without knowing what my data contains. Is it sensitive information like finances, intellectual property or customer details? And as we look at regulation such as GDPR, knowing where we hold private and sensitive data about individuals is increasingly crucial.

Without this knowledge we cannot ensure our data and business compliance strategies are fit for purpose.
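As a rough illustration of the principle, and no more than that (this is not how Varonis classifies content), the sketch below scans a hypothetical file share for a handful of illustrative patterns that suggest personal or sensitive data. Production classification needs far richer rules, validation and context, but the idea of answering “what’s in it?” by inspecting content rather than metadata is the same.

```python
# A minimal sketch of content classification: scan text files for patterns
# that suggest personal or sensitive data. The patterns are illustrative only;
# real classification engines go far beyond simple regular expressions.
import os
import re

SHARE_ROOT = "/mnt/fileshare"    # hypothetical unstructured data share

PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "possible_card_number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def classify(path):
    """Return a count of matches per category for a single readable text file."""
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read()
    except OSError:
        return {}
    return {label: len(rx.findall(text)) for label, rx in PATTERNS.items()}

for dirpath, _dirnames, filenames in os.walk(SHARE_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        hits = {label: count for label, count in classify(path).items() if count}
        if hits:
            print(f"{path}: {hits}")
```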

Building Intelligence into our system

In my opinion one of the most crucial parts of a modern data platform is the inclusion of behavioural analytics. As our platforms grow ever more diverse, complex and large, one of the common refrains I hear is “this information is great, but who is going to look at it, let alone action it?”, and that is a very fair point and a real problem.

Behavioural analytics tools can help address this and supplement our IT teams. These technologies are capable of understanding and learning the normal behaviour of our data platform and, when those norms are deviated from, warning us quickly so we can address the issue.

This kind of behavioural understanding offers significant benefits, from knowing who the owners of a data set are to helping us spot malicious activity such as ransomware or data theft.
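As a toy example of that “learn the norm, alert on the deviation” idea, and only an example (it is not how Varonis or any other product actually implements its analytics), the sketch below builds a per-user baseline of daily file accesses from a made-up audit log and flags a day that deviates sharply from it:

```python
# A toy version of "learn the norm, alert on the deviation": build each user's
# baseline daily file-access count and flag the latest day if it deviates
# sharply. The audit log below is made up purely for illustration.
from collections import defaultdict
from statistics import mean, stdev

access_log = [                     # (user, day, files_accessed)
    ("alice", "2018-03-01", 40), ("alice", "2018-03-02", 35),
    ("alice", "2018-03-03", 42), ("alice", "2018-03-04", 38),
    ("alice", "2018-03-05", 900),  # sudden spike: possible ransomware or theft
    ("bob", "2018-03-01", 12), ("bob", "2018-03-02", 15),
    ("bob", "2018-03-03", 11), ("bob", "2018-03-04", 14),
]

history = defaultdict(list)
for user, day, count in access_log:
    history[user].append((day, count))

for user, days in history.items():
    counts = [count for _, count in days]
    if len(counts) < 4:
        continue  # not enough history to establish a baseline
    baseline, spread = mean(counts[:-1]), stdev(counts[:-1])
    day, latest = days[-1]
    if spread and (latest - baseline) / spread > 3:
        print(f"ALERT: {user} accessed {latest} files on {day}, "
              f"baseline is roughly {baseline:.0f} per day")
```

Even this crude threshold catches the spike; the value of real behavioural analytics is doing the same across thousands of users and many more signals, without hand-tuned rules.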

In my opinion this kind of technology is the only realistic way of maintaining security, control and compliance in a modern data platform.

Strategy

As discussed in parts one and two, it is crucial that the vendors who make up a data platform have a vision that addresses the challenges businesses face when it comes to data.

It should be no surprise, then, that Varonis’s strategy aligns very well with those challenges; they were one of the first companies I came across that brought real forethought to the management, control and governance of our data assets.

That vision continues, with new tools and capabilities continually delivered, such as Varonis Edge and the recent addition of a new automation engine, a significant enhancement to the Varonis portfolio: the tools now not only warn of deviations from the norm, but can also act upon them to remediate the threat.

All of this, tied in with Varonis’ continued extension of its integration with on-premises and cloud storage and service providers, ensures they will continue to play a significant role in bringing management to a modern data platform.

Regardless of whether you choose Varonis or not, it is crucial you have intelligent management and analytics built into your environment, because without them it will be almost impossible to deliver the kind of data platform a modern data-driven business needs.

You can find the other posts from this series below:

Introduction
Part One – The Storage
Part Two – Availability