Protecting 365 – a look at Veeam Backup for Office 365

Recently Veeam announced version 2.0 of their Backup for Office 365 product, which extends the functionality of its predecessor with much-needed support for SharePoint and OneDrive for Business. Looking into the release and what’s new prompted me to revisit the topic of protecting Office 365 data, especially the approach of building your own solution to do so.

Back in April I wrote a post for Gestalt IT (“How to protect Office 365 data”), which considered the broadly held misconception that Microsoft are taking care of your data on their SaaS platform. While Microsoft provide some protection via retention and compliance rules and a 30-day rolling backup of OneDrive, this is not a replacement for a robust, enterprise-level data protection solution.

The article examined this issue and compared two approaches to the challenge: either via SaaS (NetApp’s SaaS backup platform was used as an example) or doing it yourself with Veeam. It wasn’t intended to cover either approach in detail, but to discuss the premise of Office 365 data protection.

This Veeam release seemed like a good opportunity to look in more detail at the DIY approach to protecting our Office 365 data.

Why flexibility is worth the work

One of the drivers for many in the shift to Office 365 is simplification, removing the complexity that can come with SharePoint and Exchange deployments. It surely follows that if I want simplicity, I’d want the same from my data protection platform. Why would I want to worry about backup repositories, proxy and backup servers, or any other element of infrastructure?

The reality, however, is that when it comes to data protection, simplification and limiting complexity may not be the answer. The simplicity of SaaS can come at the price of reducing our ability to be flexible enough to meet our requirements, for example limiting our options to:

  • Have data backed up where we want it.
  • Deal with hybrid infrastructure and protect on-prem services.
  • Have full flexibility with restore options.

These limitations can be a problem for some organisations, and when we consider mitigating provider “lock-in” and the pressures of more stringent compliance, you can see how, for some, flexibility quickly overrides the desire for simplicity.

It is this desire for flexibility that makes building your own platform an attractive proposition. We can see with Veeam’s model the broad flexibility this approach can provide:

Backup Repository

Data location is possibly the key factor when deciding to build your own platform. Veeam provide the flexibility to store our data in our own datacentre, a co-lo facility, or even a public cloud repository, giving us the means to meet the most stringent data protection needs.

Hybrid Support

The next most important driver for choosing to build your own solution is protecting hybrid workloads. While many have embraced Office 365 in its entirety, there are still organisations who, for numerous reasons, have maintained an on-premises element to their infrastructure. This hybrid deployment can be a stumbling block for SaaS providers that focus solely on Office 365.

Veeam Backup for Office 365 fully supports the protection of data both on-prem and in the cloud, all through one console and one infrastructure, under a single licence. This capability is hugely valuable, simplifying the data protection process for hybrid environments and removing any need to have multiple tools protecting the separate elements.

Recovery

It’s not just backup flexibility that gives building your own platform its value; it is also the range of options this brings to recovery. The ability to take data backed up in any location and restore it to multiple different locations is highly valuable and sometimes an absolute necessity, for reasons ranging from practicality to regulation.

What’s the flexibility cost?

Installation

Does this extra flexibility come with a heavy price of complexity and cost? In Veeam’s case, no. They are renowned for simplicity of deployment and Backup for Office 365 is no different. It requires just the usual components of backup server, proxy, backup repository and product explorers, with the size of the protected infrastructure dictating the scale of the protection platform.

There are of course limitations (see the Backup for Office 365 system requirements). One major consideration is bandwidth: it’s important to consider how much data you’ll be bringing into your backup repository, both initially and for subsequent incremental updates. While most SaaS providers will have substantial connectivity into Microsoft’s platform for these operations, you may not.
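As a rough illustration of that bandwidth point, the back-of-an-envelope sums below estimate how long an initial seed and a daily incremental might take over a given connection. This is just a sketch; the data volume, change rate and link efficiency figures are all assumptions you would replace with your own, and it is not based on any Veeam sizing guidance.

    # Rough backup-window estimate for an Office 365 seed and daily incrementals.
    # All figures are illustrative assumptions, not Veeam sizing guidance.

    def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
        """Hours needed to move data_gb over a link_mbps connection at a given efficiency."""
        usable_mbps = link_mbps * efficiency          # allow for protocol overhead and contention
        seconds = (data_gb * 8 * 1000) / usable_mbps  # GB -> megabits, then divide by throughput
        return seconds / 3600

    total_data_gb = 2000       # assumed mailbox + OneDrive + SharePoint data to protect
    daily_change_rate = 0.02   # assumed 2% of the data changes per day
    link_mbps = 100            # assumed bandwidth available for backup traffic

    print(f"Initial seed:      {transfer_hours(total_data_gb, link_mbps):.1f} hours")
    print(f"Daily incremental: {transfer_hours(total_data_gb * daily_change_rate, link_mbps):.1f} hours")

On those assumed numbers the first backup takes a couple of days, while the daily incrementals fit comfortably inside an hour or two; exactly the kind of calculation worth doing before committing to pulling that data into your own repository.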

Licensing

A major benefit of software as a service is the commercial model; paying by subscription can be very attractive, and that benefit can be lost when deploying our own solution. This is not the case with Backup for Office 365, which is licensed on a subscription basis.

Do It Yourself vs As A Service

The Gestalt IT article ended with a comparison of the pros and cons of the two approaches.

Do It Yourself

  • Pros: Flexibility, Control, Customisation, Protect Hybrid Deployments, Data Sovereignty.
  • Cons: Planning, Management Overhead, Responsibility.

As A Service

  • Pros: Simplicity, Lower Management Overhead, Ease of Deployment.
  • Cons: Lack of control, Inability to customise, Cloud only workloads.

I think these points remain relevant when deciding which approach is right for you, regardless of what we’ve discussed here about Veeam’s offering. If SaaS is the right approach, it remains so; but if you do take the DIY approach, then I hope this post gives you an indication of the flexibility and customisation that is possible and why this can be crucial as part of your data protection strategy.

If building your own platform is your chosen route, then Veeam Backup for Office 365 v2 is certainly worthy of your consideration. But regardless of approach, remember that the data sitting in Office 365 is your responsibility, so make sure it’s protected.

If you want to know more, you can contact me on twitter @techstringy or check out Veeam’s website.


Wrapping up VeeamON – Michael Cade – Ep 66

A couple of weeks ago in Chicago, Veeam held their annual tech conference, VeeamON. It was one of my favourite shows from last year; unfortunately I couldn’t make it out this time, but I did catch up remotely and shared my thoughts on some of the strategic messages covered in a recent blog post looking at Veeam’s evolving data management strategy (Getting your VeeamON!).

That strategic Veeam message is an interesting one, and their shift from “backup” company to one focused on intelligent data management across multiple repositories is, in my opinion, exactly the right move to be making. With that in mind, I wanted to take a final look at some of those messages, as well as some of the other interesting announcements from the show, and that is exactly what we do on this week’s podcast, as I’m joined by recurring Tech Interviews guest Michael Cade, Global Technologist at Veeam.

Michael, who not only attended the show but also delivered some great sessions, joins me to discuss a range of topics. We start by taking a look at Veeam’s last 12 months and how they’ve started to deliver a wider range of capabilities that build on their virtual platform heritage, with support for more traditional enterprise platforms.

Michael shares some of the thinking behind Veeam’s goal to deliver an availability platform that meets the demands of modern business data infrastructures, be they on-prem, in the cloud, SaaS or service provider based. We also look at how this platform needs to offer more than just the ability to “back stuff up”.

We discuss the development of Veeam’s 5 stages of intelligent data management, a key strategic announcement from the show, and how this can be used as a maturity model against which you can compare your own progress toward a more intelligent way of managing your data.

We look at the importance of automation in our future data strategies and how this is not only important technically, but also commercially as businesses need to deploy and deliver much more quickly than before.

We finish up by investigating the value of data labs and how crucial the ability to get more value from your backup data is becoming, be it for test, dev, data analytics or a whole range of other tasks, without impacting your production platforms or wasting the valuable resource that is your backup data sets.

Finally, we take a look at some of the things we can expect from Veeam in the upcoming months.

You can catch up on the event keynote on Veeam’s YouTube channel https://youtu.be/ozNndY1v-8g

You can also find more information on the announcements on Veeam’s website here www.veeam.com/veeamon/announcements

If you’d like to catch up with thoughts from the Veeam Vanguard team, you can find a list of them on twitter – https://twitter.com/k00laidIT/lists/veeam-vanguards-2018

You can follow Michael on twitter @MichaelCade1 and on his excellent blog https://vzilla.co.uk/

Thanks for listening.

Getting your VeeamON!

Recently, software vendor Veeam held its 2018 VeeamON conference in Chicago. VeeamON was one of my favourite conferences of last year; unfortunately I couldn’t make it out this time, but I did tune in for the keynote to listen to the new strategy messages that were shared.

The availability market is an interesting space at the minute, highlighted by the technical innovation and talent recruitment you can see at companies like Veeam, Rubrik and others. Similar to the storage industry of five years ago, the data protection industry is being forced to change its thinking, with backup, replication and recovery no longer enough to meet modern demands. Availability is now the primary challenge, and not just for the data in our datacentre but also for that sat with service providers, on SaaS platforms or with the big public hyperscalers; we need our availability strategy to cover all of these locations.

As with the storage industry when it was challenged by performance and the emergence of flash, two things are happening. New technology companies are emerging, offering different approaches and thinking to take on modern challenges that traditional vendors are not addressing. But that challenge also inspires the established vendors, with their experience, proven technologies, teams and budgets, to react and find answers to these new challenges; well, at least it encourages the smart ones.

This is where the availability industry currently sits and why the recent VeeamON conference was of interest. Veeam’s position is interesting: a few years ago they were the upstart with a new way of taking on the challenge presented by virtualisation. However, as our world continues to evolve, so do the challenges; cloud, automation, security, governance and compliance are just a few of the availability headaches many of us face and Veeam must react to.

One of the things I like about Veeam (and one of the reasons I was pleased to be asked to be a part of their Vanguard program this year) is that they are a very smart company; some of their talent acquisition is very impressive, and the shift in how they see themselves and the problem they are trying to solve is intriguing.

VeeamON 2018 saw a further development of this message as Veeam introduced their 5 stages of intelligent data management which sees them continue to expand their focus beyond Veeam “The backup company”. The 5 stages provide the outline of a maturity model, something that can be used to measure progress towards a modern way of managing data.

Of these 5 stages, many of us sit on the left-hand side of the model, with a robust policy-based backup approach as the extent of our data management. However, for many this is no longer appropriate, as our infrastructures become more complex and change more rapidly, with data stored in a range of repositories and locations.

This is coupled with a need to better understand our data for management, privacy and compliance reasons; we can no longer operate an IT infrastructure without understanding, at the very least, where our data is located and what that means for its availability.

In my opinion, modern solutions must provide us with a level of intelligence and the ability to understand the behaviour of our systems and act accordingly. This is reflected on the right-hand side of Veeam’s strategy: meeting this modern challenge will demand increasingly intelligent systems that can understand the criticality of a workload, or what is being done to a dataset, and act to protect it accordingly.

Although Veeam aren’t quite doing all of that yet, you can see steps moving them along the way; solutions such as Availability Orchestrator, which takes the complexities of continuity and automates its execution, documentation and ongoing maintenance, are good examples.

It’s also important to note that Veeam understand they are not the answer to all of an organisation’s data management needs; they are ultimately a company focussed on availability. What they do realise is that availability is crucial and goes far beyond just recovering lost data: this is about making sure data is available, “any data, any app, across any cloud”, and they see the opportunity in becoming the integration engine in the data management stack.

Is all this relevant? Certainly. A major challenge for most businesses I chat with is how to build an appropriate data strategy: one that usually includes holding only the data they need, knowing how it has been used and by whom, knowing where it is at any given time, and having it in the right place when needed so they can extract “value” and make data-driven decisions. This can only be achieved with a coherent strategy that ties together multiple repositories and systems, ensures that data is where it should be, and maintains the management and control of that data across any platform that is required.

With that in mind, Veeam’s direction makes perfect sense, with the 5 stages of intelligent data management model providing a framework upon which to build a data management strategy, something that is hugely beneficial to anyone tasked with developing their organisation’s data management platform.

In my opinion, Veeam’s direction is well thought out, and I’ll be watching with interest not only how it continues to develop, but importantly how they deliver the tools and partnerships that allow those invested in their strategy to execute it successfully.

You can find more information on the announcements from VeeamON on Veeam’s website here www.veeam.com/veeamon/announcements

IT Avengers Assemble – Part One – Ep38

This week’s Tech Interviews is the first in a short series where I bring together a selection of people from the IT community to try to gauge the current state of business IT and to gain some insight into the key day-to-day issues affecting those delivering technology to their organisations.

For this first episode I’m joined by three returning guests to the show.

Michael Cade is a Technical Evangelist at Veeam. Michael spends his time working closely with both the IT community and Veeam’s business customers to understand the day-to-day challenges that they face, from availability to cloud migration.

You can find Michael on twitter @MichaelCade1 and his blog at vzilla.co.uk 


Mike Andrews is a Technical Solutions Architect at storage vendor NetApp, specialising in NetApp’s cloud portfolio. Today Mike works closely with NetApp’s wide range of customers to explore how to solve the most challenging of business issues.

You can find Mike on social media on twitter @TrekinTech and on his blog site trekintech.com

Mark Carlton is Group Technical Manager at Concorde IT Group. He has extensive experience in the industry, having worked in a number of different types of technology businesses; today Mark works closely with a range of customers, helping them to use technology to solve business challenges.

Mark is on twitter @mcarlton1983 and at his fantastically titled justswitchitonandoff.com blog.

The panel discuss a range of issues, from availability to cloud migration, the importance of the basics, and how understanding the why, rather than the how, is a crucial part of getting your technology strategy right.

The team provide some excellent insights into a whole range of business IT challenges and I’m sure there’s some useful advice for everyone.

Next time I’m joined by four more IT avengers, as we look at some of the other key challenges facing business IT.

If you enjoyed the show and want to catch the next one, then please subscribe, links are below.

Thanks for listening.

Subscribe on Android

SoundCloud

Listen to Stitcher


Analysing the availability market – part two – Dave Stevens, Mike Beevor, Andrew Smith – Ep30

Last week I spoke with Justin Warren and Jeff Leeds at the recent VeeamON event about the wider data availability market. We discussed how system availability is more critical than ever and how, or maybe even whether, our approaches are changing to reflect that; you can find that episode here: Analysing the data availability market – part one – Justin Warren & Jeff Leeds – Ep29.

In part two I’m joined by three more guests from the event as we continue our discussion. This week we look at how our data availability strategy is not, and cannot be, just a discussion for the technical department, and must be elevated into our overall business strategy.

We also look at how technology trends are affecting our views of backup, recovery and availability.

First I’m joined by Dave Stevens of Data Gravity, as we look at how our backup data can be a source of valuable information, as well as a crucial part of helping us be more secure and compliant with ever more stringent data governance rules.

We also look at how Data Gravity, in partnership with Veeam, have developed the ability to trigger smart backup and recovery; Dave gives a great example of how a smart restore can be used to quickly recover from a ransomware attack.

You can find Dave on Twitter @psustevens and find out more about Data Gravity on their website www.datagravity.com

Next I chat with Mike Beevor of HCI vendor Pivot3 about how simplifying our approach to system availability can be a huge benefit. Mike also makes a great point about how, although focussing on application and data availability is right, we must consider the impact on our wider infrastructure, because if we don’t we run the risk of doing more “harm than good”.

You can find Mike on twitter @MikeBeevor and more about Pivot 3 over at www.pivot3.com

Last but by no means least, I speak with Andrew Smith, Senior Research Analyst at IDC. We chat about availability as part of the wider storage market and how, over time, as vendors gain feature parity, their goal has to become adding additional value, particularly in areas such as security and analytics.

We also discuss how availability has to move beyond the job of the storage admin and become associated with business outcomes. Finally, we look a little into the future, at how a “multi-cloud” approach is a key focus for business and how enabling this will become a major topic in our technology strategy conversations.

You can find Andrew’s details over on IDC’s website.

Over these two shows it has become clear to me that our views on backup and recovery are changing. The shift toward application and data availability is an important one, and as businesses we have to ensure that we elevate the value of backup, recovery and availability in our companies, making it an important part of our wider business conversations.

I hope you enjoyed this review. Next week is the last interview from VeeamON, as we go all VMware and I catch up with the hosts of VMware’s excellent Virtually Speaking podcast, Pete Flecha and John Nicholson.

As always, if you want to make sure you catch our VMware bonanza, then subscribe to the show in the usual ways.

Subscribe on Android

http://feeds.soundcloud.com/users/soundcloud:users:176077351/sounds.rss