Fear of the delete button – Microsoft and compliance – Stefanie Jacobs – Ep69

Data compliance continues to trouble many business execs, IT focused or not; it is high on the agenda for most organisations. Anyone who has listened to this show in the past will know that, while technology plays only a small part in building an organisation's compliance programme, it can play a significant part in the ability to execute it.

A few weeks ago I wrote an article as part of the "Building a modern data platform" series. That article, Building a modern data platform "prevention", focussed on how Microsoft Office365 can aid an organisation in preventing the loss of data, whether accidental or malicious. It explains how Microsoft have some excellent, if not well-known, tools inside Office365, including a number of predefined templates which, when enabled, allow us to deploy a range of governance and control capabilities quickly and easily, immediately improving an organisation's ability to execute its compliance plans and reduce the risk of data leaks.

This got me thinking: what else do Microsoft have in their portfolio that people don't know about? What is their approach to business compliance, and can it help organisations to more effectively deliver their compliance plans?

This episode of the podcast explores exactly that topic. It's a show I've wanted to do for a while, and I've finally found the right person to help explore Microsoft's approach and the tools that are quickly and easily available to help us deliver robust compliance.

This week's guest is Stefanie Jacobs, a Technology Solutions Professional at Microsoft with 18 years' experience in compliance. Stefanie, who has the fantastic twitter handle of @GDPRQueen, shares with real enthusiasm the importance of compliance, Microsoft's approach and how their technology is enabling organisations to make compliance a key part of their business strategy.

In this episode we explore all the compliance areas you’d ever want, including the dreaded “fear of the delete button”. Stefanie shares Microsoft’s view of compliance and how it took them a while to realise that security and compliance are different things.

We talk about people, the importance of education and shared responsibility. We also look at the triangle of compliance: people, process and technology. Stefanie explains the importance of terminology and understanding exactly what we mean when we discuss compliance.

We also discuss Microsoft’s 4 steps to developing a compliance strategy, before we delve into some of the technology they have available to help underpin your compliance strategy, especially the security and compliance section of Office365.

We wrap up with a chat on what a regulator looks for when you have had a data breach and also what Joan Collins has to do with compliance!

Finally, Stefanie provides some guidance on the first steps you can take as you develop your compliance strategy.

Stefanie is a great guest, with a real enthusiasm for compliance and how Microsoft can help you deliver your strategy.

To find out more about how Microsoft can help with compliance you can visit both their Service Trust and GDPR Assessment portals.

You can contact Stefanie via email Stefanie.jacobs@microsoft.com as well as follow her on twitter @GDPRQueen.

Thanks for listening

If you enjoyed the show, why not subscribe, you’ll find Techstringy Tech Interviews in all good homes of podcasts.

While you are here, why not check out a challenge I’m undertaking with Mrs Techstringy to raise money for the Marie Curie charity here in the UK, you can find the details here.


Microsoft the digital transformation cool kids? – Andy Kent – Ep68

I recently attended a fascinating event with the British Interactive Media Association (BIMA) and Microsoft. BIMA exist to drive innovation and excellence across the digital industry. One of the ways they do this is via community events and forums that give industry leaders an opportunity to share ideas with their peers.

I personally believe forums like these should play a central part in an IT strategist's role; the opportunity to share ideas with and learn from peers in your industry is essential in helping to gain the understanding needed to develop modern business strategies. The benefit of involving yourself in communities, regardless of your sector, should not be underestimated; the ability to build relationships with others who face the same challenges and opportunities is incredibly valuable.

One of the other things communities like BIMA do well is engage with influential organisations who are shaping the way industries develop, and BIMA have done that recently for their members with a series of roadshows with Microsoft. So, when I was invited to the recent event in Liverpool, I was fascinated to understand what Microsoft were doing in the digital agency space and how their technology was shaping the way digitally focussed organisations are innovating and bringing new solutions to market.

What I found, however, was that the technology Microsoft discussed was the same as that they share with all other types of business: their cloud vision of how Azure and Office 365 allow quick and easy deployment of technology and services that only a few years ago were out of the reach of most organisations.

Why was this message the same? While we may assume "digital" companies are ones focussed on marketing and media creation, the reality is that most organisations are rapidly becoming digital businesses, starting to transform themselves with technologies that only a few years ago were unavailable to most: cognitive services, AI, machine learning, deep analytics capabilities, bots and business intelligence are just a few examples of the kind of technology that can transform the way we operate.

The event covered some fascinating topics, and on this week's podcast I share some of them with you, with my guest and one of the organisers of the BIMA event, Andy Kent, CEO of Angel Solutions and chair of BIMA Liverpool. Andy joins me to discuss the event, the technology Microsoft shared and how the new Microsoft is helping companies transform the way they do business.

We begin with a look at BIMA and the part they play and value they bring to the digital community.

We discuss the new Microsoft and ask if they are now the technology “cool kids” and how their change in attitude is encouraging people to engage with them and explore how technology can help them and their organisation.

We chat about some of the technology highlights Andy had from the show, how he sees AI as having a huge transformational effect and how Cloud is commoditising access to this kind of technology so that all organisations can benefit, big or small.

We look at some of the smart technologies Microsoft are now embedding into their more familiar tools, allowing that intelligence to do much of the "heavy lifting" for the end user, and how that does not replace a skilled and experienced professional but helps keep them focussed on the high-value work they do.

Finally, we look at how advances in technology have also advanced customer expectations, and how these advances are allowing organisations to do new things with, and ask new questions of, their data.

Andy shares with enthusiasm both the technological direction of this “new” Microsoft as well as the value that community has played in his business life.

If you want to find out more about Andy and the work that Angel Solutions do you can find them on twitter @angel_solutions.

You can also find Andy on twitter @AndyCKent.

If you want to know more about BIMA check out their website www.bima.co.uk.

Thanks for listening.

Managing all of the clouds – Lauren Malhoit – Ep67

As the move to cloud continues we are starting to see a new development: organisations are no longer relying on a single cloud provider to deliver their key services, with many now opting for multiple providers. From their own data centre to the hyperscale big boys, multi-cloud environments are becoming the norm.

This multi-cloud approach makes perfect sense; the whole point of adopting cloud is to provide you with the flexibility to consume your data, infrastructure, applications and services from the best provider at any given time, which would be very difficult to do if we only had a single provider.

However, multi-cloud comes with a challenge, one rather well summed up at a recent event by the phrase "clouds are the new silos". Our cloud providers are all very different in the way they build and operate their infrastructure, and although when we take services from one provider we may well not notice or care, when we start to employ multiple vendors it can quickly become a problem.

How to avoid cloud silos is seemingly becoming a technology "holy grail", engaging many of the world's biggest tech vendors. This is only good news: as we move into a world where we want the freedom and flexibility to choose whichever "cloud" is the best fit for us at any given time, we will only be able to do so if we overcome the challenge of managing and operating across these multiple environments.

Taking on this challenge is the subject of this week’s podcast with my guest Lauren Malhoit of Juniper Networks and co-host of the excellent Tech Village Podcast.

Lauren recently sent me a document entitled "The Five Step Multi Cloud Migration Framework". It caught my attention as it discusses the multi-cloud challenge and provides some thoughts on how to address it, and it is those ideas that form the basis for this week's show.

We open the discussion by trying to define what multi-cloud is and why it’s important that we don’t assume that all businesses are already rushing headlong into self-driving, self-healing, multi-cloud worlds. We chat about how a strategy is more likely to be for helping a business start along this road, rather than managing something they already have.

We explore how multi-cloud doesn’t just mean Azure and AWS, but can equally apply to multiples of your own datacenters and infrastructure.

Lauren shares her view on the importance of automation, especially when we look at the need for consistency and how this is not just about consistent infrastructure, but also compliance, security and manageability.

We also ask the question, why bother? Do we really need a multi-cloud infrastructure? Does it really open up new ways for our organisation to operate?

We wrap up looking at the importance of being multi-vendor, multi-platform and open, and how that openness cannot come at the cost of complexity.

Finally, we discuss some use cases for multi-cloud as well as taking on the challenge of people in our business and the importance of how a multi-cloud world shouldn’t be seen as a threat, but as an opportunity for career growth and development.

I hope you enjoy what I thought was a fascinating conversation about an increasingly pressing challenge.

To find out more about the work Juniper are doing in this space you can look out for forthcoming announcements at Juniper.net, as well as check out some of the information published on their GitHub repos.

To find out more about the work Lauren is doing you can follow her on twitter @malhoit or read her blog over at adaptingit.com.

Also check out the fantastic Techvillage Podcast if you are interested in career development and finding out about the tech world of others in the IT community.

Juniper also have some great resources for learning about designing a multi-cloud environment: check out the original white paper that inspired this podcast, The Five Step Multi Cloud Migration Framework, and you'll also find some great info in the post Get Your Data Center Ready for Multicloud.

Until next time – thanks for listening

Wrapping up VeeamON – Michael Cade – Ep 66

A couple of weeks ago in Chicago Veeam held their annual tech conference, VeeamON. It was one of my favourite shows from last year; unfortunately I couldn't make it out this time, but I did catch up remotely and shared my thoughts on some of the strategic messages that were covered in a recent blog post looking at Veeam's evolving data management strategy (Getting your VeeamON!).

That strategic Veeam message is an interesting one, and their shift from "backup" company to one focused on intelligent data management across multiple repositories is, in my opinion, exactly the right move to be making. With that in mind, I wanted to take a final look at some of those messages as well as some of the other interesting announcements from the show, and that is exactly what we do on this week's podcast, as I'm joined by recurring Tech Interviews guest Michael Cade, Global Technologist at Veeam.

Michael, who not only attended the show but also delivered some great sessions, joins me to discuss a range of topics. We start by taking a look at Veeam’s last 12 months and how they’ve started to deliver a wider range of capabilities which builds on their virtual platform heritage with support for more traditional enterprise platforms.

Michael shares some of the thinking behind Veeam's goal to deliver an availability platform that meets the demands of modern business data infrastructures, be they on-prem, in the cloud, SaaS or service provider based. We also look at how this platform needs to offer more than just the ability to "back stuff up".

We discuss the development of Veeam’s 5 pillars of intelligent data management, a key strategic announcement from the show and how this can be used as a maturity model against which you can compare your own progress to a more intelligent way of managing your data.

We look at the importance of automation in our future data strategies and how this is not only important technically, but also commercially as businesses need to deploy and deliver much more quickly than before.

We finish up by investigating the value of data labs and how crucial the ability to get more value from your backup data is becoming, be it to carry out test, dev, data analytics or a whole range of other tasks without impacting your production platforms or wasting the valuable resource in your backup data sets.

Finally, we take a look at some of the things we can expect from Veeam in the upcoming months.

You can catch up on the event keynote on Veeam’s YouTube channel https://youtu.be/ozNndY1v-8g

You can also find more information on the announcements on Veeam’s website here www.veeam.com/veeamon/announcements

If you’d like to catch up with thoughts from the Veeam Vanguard team, you can find a list of them on twitter – https://twitter.com/k00laidIT/lists/veeam-vanguards-2018

You can follow Michael on twitter @MichaelCade1 and on his excellent blog https://vzilla.co.uk/

Thanks for listening.

Casting our eye over HCI – Ruairi McBride – Ep65

I've spoken a bit recently about the world of Hyper Converged Infrastructure (HCI), especially as the technology continues to mature; with both improved hardware stacks and software looking to take advantage of that hardware, it is becoming an ever more compelling prospect.

How do these developments, an HCI version 2.0 if you like, manifest themselves? Recently I saw a good example in a series of blog posts and videos from friend of the show Ruairi McBride, which demonstrated really well both the practical deployment and the look and feel of a modern HCI platform.

The videos focussed on NetApp’s new offering and covered the out of the box experience, how to physically cable together your HCI building blocks and how to take your build from delivery to deployment in really easy steps. This demonstration of exactly how you build a HCI platform was interesting, not just on a practical level, but also gave me some thoughts around why and how you may want to use HCI platforms in a business context.

With that in mind, I thought a chat with Ruairi about his experience with this particular HCI platform, how it goes together, how it is practically deployed and how it meets some of the demands of modern business would make an interesting podcast.

So here it is: Ruairi joins me as we cast our eye over HCI (I stole the title from Ruairi's blog post!).

We start by discussing what HCI is and why its simplicity of deployment is useful, and we also look at the pros and cons of the HCI approach. Ruairi shares some thoughts on HCI's growing popularity and why the world of smartphones may be to blame!

We look at the benefit of a single vendor approach within our infrastructure, but also discuss that although the hardware elements of compute and storage are important, the true value of HCI lies in the software.

We discuss the modern business technology landscape and how a desire for a more “cloud like” experience within our on-premises datacentres has demanded a different approach to how we deploy our technology infrastructure.

We wrap up by looking at why, as a business, you'd consider HCI, what problems it will solve for you and which use cases are a strong HCI fit. And of course, it's important to remember that HCI isn't the answer to every question!

To find out more about NetApp HCI visit here.

Ruairi’s initial “Casting Our Eye Over HCI” blog and video series is here.

If you have further questions for Ruairi, you can find him on twitter @mcbride_ruairi.

Until next time.

Thanks for listening.

IoT more than a sensor – Mark Carlton -Ep64

Buzzwords are a constant in IT; Cloud, HCI, Analytics and GDPR are all common parlance in technology discussions across businesses of all types. However, while these words are often bandied about and serious discussions are had, not everyone is sure what some of these buzzwords mean, what the technology consists of and, importantly, what positive impact it has on an organisation, if any positive impact at all!

Let me present another contender for the buzzword Olympics: IoT, or "The Internet of Things". What does that mean? What is a thing? And do I want things, let alone an Internet of them? The only thing I knew about IoT was that I didn't really know much about it!

When I heard that a friend of the podcast had taken on a new role as an IoT Solutions Architect and had written a great introduction to IoT blog post (Demystifying IoT), it seemed like a great opportunity to get some IoT education, not only for me but also for the Tech Interviews audience.

So, on this week's show I'm joined by Mark Carlton, now an IoT Solutions Architect at Arrow ECS, and I asked him to share what he's discovered in his time in the role, how he sees IoT as a technology and how implementing it can deliver value to a business.

We start off by trying to define what we mean by IoT and Mark shares how, like many a new IT trend, it isn’t really that new.

We also explore why IoT is more than just sensors and how in reality IoT is a platform architected from sensors, gateways and importantly analytics tools that can help us to make sense of the data we collect and turn it into something valuable.

We discuss how often IoT projects are too quick to focus on putting sensors in lots of places rather than starting with a focus on business outcomes and asking the question “What do I want to achieve with this sensor?”.

Mark shares the importance of looking at IoT projects like any IT project, with a focus on business outcomes, the why, how and what of a project, and not the technology.

We then explore use cases and how people are using sensor data to discover new things about their business. Mark also explains how it's not only this additional data from increasing amounts of IoT that is useful, but how access to large amounts of historic data is allowing us to find new trends and information, creating brand new opportunities and ways of working.

We finish up by looking at security and compliance, both crucial elements of an IoT platform design and how it’s critical they are included right at the outset because adding security retrospectively to these platforms could be almost impossible.

Finally, Mark shares some advice on where to start and some sources of information to consider.

I hope this episode has helped you better understand this emerging technology platform and how it could serve your business, I know it certainly has helped me.

For more information you can follow Mark on Twitter @mcarlton1983

You can also read his blog at justswitchitonandoff.com.

Until next time thanks for listening.

Taking a GDPR Journey – Mike Resseler – Ep63

GDPR has been a constant business conversation over the last 18 months or so; it's discussed in the press, on the news and on social media, as well as in a handful of episodes of this podcast. However, much of the conversation has focussed on what you should be considering and doing to take on the GDPR challenge, while very little has come from those who have already made great strides on their compliance journey.

With that in mind, a few weeks ago I read a fascinating series of blogs from software company Veeam, discussing the 5 principles they followed to build their compliance program. What was interesting was that these posts talked about the practical steps they took, not the technology they deployed or how their technology could help you, but shared the experiences and challenges they faced building their business compliance program.

As many of us are currently on our own compliance journey, I thought the opportunity to chat with someone who is already well down this path would be of real interest. So in this week's podcast I'm joined by Mike Resseler, a Director of Product Management at Veeam and a key member of their global compliance team, who has played a significant part in the way they have dealt with the challenges posed by GDPR.

In this week’s show Mike shares with us Veeam’s experience. We start at the beginning with the initial advice they took and research they did into what GDPR meant to them. We discuss the importance of putting together the right team to deal with business compliance and why it was important to realise the scope of the work they were about to undertake.

Mike also explains how it was important that Veeam saw GDPR as something that would have a positive impact on the business and how, although technology would play a part, this was something that would need a focus on people, workflow and procedures.

We also discussed how not everyone was enthused by the idea of business compliance, how some saw GDPR as just a European problem, and how important it was that the compliance team educated the whole business on the importance of compliance.

We also look at the practicalities of building a compliance program as Mike shares the 5 principles Veeam developed to help them: knowing your data, managing your data, protecting the data, documentation and continual improvement. We discuss the importance of each step and the part they have played in building a global compliance program.

We wrap up looking at the future, discussing continual improvement, training and the way that Veeam are making compliance integral to everything they do across their business.

I hope you enjoy the fantastic insight that Mike provides into the way a company builds a compliance programme and tackles regulation such as GDPR.

To find out more from Mike you can find him on twitter @MikeResseler.

The original blog posts that inspired this episode can be found here https://www.veeam.com/executive-blog/our-journey-to-be-gdpr-compliant.html

Mike and his team have also produced this video in which they discuss how to accelerate your GDPR efforts https://www.veeam.com/veeamlive/accelerate-your-gdpr-efforts.html

Hope you enjoy the show and until next time, thanks for listening.

Getting your cyber essentials – Jason Fitzgerald – Ep62

Cyber security, be it how we secure our perimeter, infrastructure, mobile devices or data, is a complex and ever-changing challenge. In the face of this complexity, where do we start when it comes to building our organisation's cyber security standards?

Well, perhaps the answer lies in standardised frameworks and accreditations. If you think about it, one of the biggest challenges we have when it comes to security is knowing where to start, so having a standard to work towards makes perfect sense.

That is the subject of this week's show with my guest and colleague Jason Fitzgerald, as we discuss the value of a UK-based accreditation, Cyber Essentials.

Jason is a very experienced technical engineer and consultant and today spends much of his time working with organisations to help them address their IT security concerns and develop policies, procedures, strategies and technologies to help them to improve their security baselines.

One of the tools that Jason uses extensively is a framework and accreditation produced by the National Cyber Security Centre here in the UK, Cyber Essentials. During this episode we discuss why such a framework is valuable and can help a business improve its security posture.

But first we start by discussing the kind of security landscape that Jason sees when he talks with businesses of all types, some of the confusion that they have and the often-misplaced confidence that comes with the "latest and greatest" security technology purchase.

We explore the importance of organisational “buy in” when it comes to security, why it can’t be just seen as an IT problem and how without senior sponsorship your security efforts may well be doomed to failure.

Jason shares with us the 5 key areas that Cyber Essentials covers, from perimeter to patching. He also provides some insight into the process that an organisation will head down when building their own security framework.

We also look at the value of getting your security foundation correct, how it can greatly reduce your exposure to many of the common cyber security risks, but also how without it, your attempts to build more robust security and compliance procedures may well fail.

We finish up with Jason sharing some of his top tips for starting your security journey and how, although Cyber Essentials is a UK based accreditation, the principles of it will be valuable to your organisation wherever in the world you may be based.

You can follow Jason on twitter @jay_fitzgerald and read more from him at his blog Bits with the Fitz

If you want to learn more about Cyber Essentials, then visit the UK’s National Cyber Security Centre website www.cyberessentials.ncsc.gov.uk

Next week, we are looking at GDPR as I’m joined by a special guest Mike Resseler from Veeam as he takes us through the business compliance process they have carried out across their global organisation.

Thanks for listening.

Thanks for memory – Alex McDonald – Ep61

At the start of 2018 the technology industry was hit by two new threats unlike anything it had seen before. Spectre and Meltdown used vulnerabilities not in operating system code or poorly written applications, but ones at a much lower level than that.

These vulnerabilities were not only a concern to today's technology providers, but also to those architecting the way technology will work in the future.

As we try to push technology further and have it deal with more data, more quickly, than ever before, the technology industry is having to look at ways of keeping up and of having our tech work in different ways, beyond the limits of our current approaches. One of these developments is storage class memory, or persistent memory, where our data can be housed and accessed at speeds many times greater than today.

However, this move brings new vulnerabilities in the way we operate, vulnerabilities like those exposed by Spectre and Meltdown. But how did Spectre and Meltdown exploit these operational-level vulnerabilities? And what does that mean for our desire to constantly push technology to use data in ever more creative and powerful ways?

That's the topic of this week's Tech Interviews podcast, as I'm joined by the always fascinating Alex McDonald to discuss exactly what Spectre and Meltdown are, how they impact what we do today and how they may change the way we are developing our future technology.

Alex is part of the Standards Industry Association group at NetApp and represents them on boards such as SNIA (Storage Networking Industry Association).

In this episode, he brings his wide industry experience to the show to share some detail on exactly what Spectre and Meltdown are, how they operate, what vulnerabilities they exploit, as well as what exactly these vulnerabilities put at risk in our organisations.

We take a look at how these exploits take advantage of side channels and speculative execution to allow an attacker to access data that you would never imagine to be at risk, and how our eagerness to push technology to its limits created those vulnerabilities.
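To make the side-channel idea a little more concrete, here is a purely illustrative Python sketch, not real exploit code and not how the episode describes it in detail. It simulates the cache as a simple set of touched line indices; in a real Spectre attack the trace is left in the CPU cache by speculatively executed loads and recovered by timing memory accesses.

```python
# Conceptual sketch (not a working exploit): a secret-dependent memory
# access leaves a footprint that an attacker can recover via a side
# channel, even though the architectural result is never visible.

CACHE_LINE = 64  # bytes per cache line (typical on x86)

def speculative_victim(secret_byte, probe_array, cache):
    # The victim speculatively reads probe_array at an offset derived
    # from the secret; the result is discarded, but the access pulls
    # one cache line into the (here simulated) cache.
    index = secret_byte * CACHE_LINE
    _ = probe_array[index]
    cache.add(index // CACHE_LINE)

def attacker_recover(cache):
    # The attacker probes every candidate line; the one that is "fast"
    # (already cached) reveals the secret byte without ever reading it.
    return next(line for line in range(256) if line in cache)

probe_array = bytearray(256 * CACHE_LINE)
cache = set()
speculative_victim(ord('K'), probe_array, cache)
recovered = attacker_recover(cache)
print(chr(recovered))  # prints "K", the secret recovered indirectly
```

The point of the sketch is that the leak happens entirely through a shared resource (the cache footprint), which is why fixes have had to reach below the operating system and into CPU microarchitecture.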

We discuss how this has changed the way the technology industry is now looking at the future developments of memory, as our demands to develop ever larger and faster data repositories show no sign of slowing down.

Alex shares some insights into the future, as we look at the development of persistent memory, what is driving demand and how the need for this kind of technology means the industry has no option but to get it right.

To ease our fears Alex also outlines how the technology industry is dealing with new threats to ensure that development of larger and faster technologies can continue, while ensuring the security and privacy of our critical data.

We wrap up discussing risk mitigation, what systems are at risk of attack from exploits like Spectre and Meltdown, what systems are not and how we ensure we protect them long term.

We finish on the positive message that the technology industry is indeed smart enough to solve these challenges and how it is working hard to ensure that it can deliver technology to the demands we have for our data to help solve big problems.

You can find more on Wikipedia about Spectre and Meltdown.

You can learn more about the work of SNIA on their website.

And if you'd like to stalk Alex online, you can find him on twitter talking about technology and Scottish politics! @alextangent

Hope you enjoyed the show. With the Easter holidays here in the UK we're taking a little break, but we'll be back with new episodes in a few weeks' time. For now, thanks for listening.

Availability of all of the things – Michael Cade – Ep 60

Recently I wrote a blog post as part of a series that explored the importance of availability to a modern data platform, especially in a world where our reliance on technology is ever increasing, from the way we operate our businesses to the way we live our lives, and where digitally focussed businesses can no longer tolerate downtime, planned or unplanned, in the way they could even 5 years ago (you can read that post here).

So how do we mitigate against the evils of downtime? That's simple: we build recovery and continuity plans to ensure that our systems remain on regardless of the events going on around them, from planned maintenance to the very much unplanned disaster. But there's the problem, these things aren't simple, are they?

I've recently worked on a project where we've been doing exactly this, building DR and continuity plans in the more "traditional" way, writing scripts, policies and procedures to ensure that in the event of some kind of disaster the systems could be recovered quickly and meet stringent recovery time and point objectives. What this project reminded me of is how difficult these things are: keeping your documentation up to date, making sure your scripts are followed and ensuring you can fully test these plans is tricky.
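Those recovery time and point objectives boil down to two simple measurements, and a minimal sketch can show what a DR plan is actually tested against. The figures below are illustrative assumptions, not from the project or the episode.

```python
# RTO (recovery time objective): the maximum tolerable downtime.
# RPO (recovery point objective): the maximum tolerable data-loss window,
# i.e. how far back the last good copy of the data may be.
from datetime import datetime, timedelta

def meets_objectives(incident, last_backup, service_restored, rto, rpo):
    downtime = service_restored - incident
    data_loss_window = incident - last_backup
    return downtime <= rto and data_loss_window <= rpo

incident         = datetime(2018, 6, 1, 9, 0)
last_backup      = datetime(2018, 6, 1, 8, 30)   # 30 minutes of data at risk
service_restored = datetime(2018, 6, 1, 12, 0)   # 3 hours of downtime

ok = meets_objectives(incident, last_backup, service_restored,
                      rto=timedelta(hours=4), rpo=timedelta(hours=1))
print(ok)  # True: restored within the 4h RTO, lost at most 1h of data
```

A DR test is essentially running this check against a real (or simulated) outage; the hard part the project highlighted is keeping the scripts and documentation that get you to `service_restored` accurate over time.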

With that in mind, the recent announcement from Veeam of their new Availability Orchestrator solution caught my attention: a solution that promises to automate and orchestrate not only the delivery of a DR solution, but also its documentation and testing. This was something I needed to understand more, and I thought I wouldn't be the only one.

So that is the topic of this week's podcast, as serial guest Michael Cade, Global Technologist at Veeam, joins me to provide an insight into Availability Orchestrator, what challenges it addresses, why Veeam thought it was important to develop and how it can help you deliver better availability to your critical systems.

During the show Michael shares some insight into understanding your availability gap and why today's businesses cannot tolerate downtime of key systems, as well as the difficulties that come with maintaining a robust and appropriate strategy.

We explore the challenges of testing when the business doesn't want downtime, and how to capture all of the little tricks that our tech team keep in their heads and get them into a continuity plan.

We finish up looking at how Availability Orchestrator can help, by providing an automation and orchestration solution that automates the testing, documentation and execution of our continuity plans, and how it can also be a tool to help us build test and dev environments, as well as migrate to cloud platforms like VMware on AWS.

Availability Orchestrator is, in my opinion, a very powerful tool. Having just worked on a continuity and DR project, the challenges that come with manually maintaining these plans are still very fresh in my mind, and had this tool been available when I started that project it would certainly have been worthy of investigation into how it could help.

If you want to find out more about Veeam availability orchestrator, check out the Veeam Website.

You can follow Michael on twitter @MichaelCade1

And if you’d like to read his blog series on Veeam replication you’ll find that on his blog site starting here.

Hope you’ve found the show useful.

Thanks for listening.