Intelligent, secure, automated: your data platform future

I was recently part of a workshop event where we discussed building “your future data platform”. During the session I presented a roadmap of how a future data platform could look. The basis of the presentation, which looked at the relatively near future, was how developments from “data” vendors are allowing us to rethink the way we manage the data we hold in our organisations.

What’s driving the need for change?

Why do we need to change the way we manage data? The reality is that the world of technology is changing extremely quickly, and at the heart of that change is our appetite for data: creating it, storing it, analysing it and learning from it, while increasingly demanding that it drives business outcomes, shapes strategy and improves customer experience.

Alongside this need to use our data more are other challenges, from increasing regulation to ever more complex security risks (see the recent Marriott Hotels breach of 500 million customer records), which place further, unprecedented demands on our technology platforms.

Why aren’t current approaches meeting the demand?

What’s wrong with what we are currently doing? Why aren’t current approaches helping us to meet the demands and challenges of modern data usage?

As the demands on our data grow, the reality for many is we have architected platforms that have never considered many of these issues.

Let’s consider what happens when we place data onto our current platform.

We take our data, which may or may not be confidential (often we don’t know), and place it into our data repository. Once it’s there, how many of us know:

  • Where it is?
  • Who owns it?
  • What does it contain?
  • Who is accessing it?
  • What’s happening to it?

In most cases, we don’t, and this presents a range of challenges, from management and security risks to a reduced ability to compete with those who are using their data effectively to innovate and gain an advantage.

What if that platform could instead recognise the data as it was deposited? It would make sure the data landed in the right secure area, with individual file permissions that kept it secure regardless of location; it would, if necessary, protect the file immediately (not when the next protection job ran); and it would track the use of that data from creation to deletion.

That would be useful, wouldn’t it?

What do we need our modern platform to be?

As the amount of data and the ways we want to use it continue to evolve, our traditional approaches will not be able to meet the demands placed upon them, and we certainly cannot expect human intervention to cope: the data sets are too big, the security threats too wide-reaching and the compliance requirements ever more stringent.

However, that’s the challenge we need our future data platforms to meet: they need to be, by design, secure, intelligent and automated. The only way we are going to deliver this is with technology augmenting our efforts in education, process and policy, so that we use our data well and get the very best from it.

That technology needs to deliver this secure, intelligent and automated environment from the second it starts to ingest data. It needs to understand what we have and how it should be used, and importantly it shouldn’t just be reactive: the minute new data is written it should apply intelligence, immediately securing the data, storing and protecting it accordingly, and giving us a full understanding of its use throughout its lifecycle.
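To make that idea more concrete, here is a minimal, purely illustrative sketch of what an ingest-time pipeline could look like. The classification patterns, tier names and functions are my own assumptions for the sake of the example, not any particular vendor's product or API.

```python
# Illustrative sketch only: classify, place, protect and audit data at the
# moment it is written, rather than waiting for a scheduled scan or backup job.
import re
from datetime import datetime, timezone

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def classify(content: str) -> set[str]:
    """Tag the data as it arrives, not after the fact."""
    return {name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(content)}

def ingest(path: str, content: str, owner: str, audit_log: list) -> dict:
    tags = classify(content)
    record = {
        "path": path,
        "owner": owner,                                           # who owns it
        "tags": sorted(tags),                                     # what it contains
        "tier": "restricted" if tags else "general",              # the right secure area
        "protected_at": datetime.now(timezone.utc).isoformat(),   # protected immediately
    }
    audit_log.append(("CREATE", path, owner, record["protected_at"]))  # lifecycle tracking
    return record

# Usage
audit: list = []
item = ingest("/finance/q3.csv", "card 4111 1111 1111 1111", "j.smith", audit)
print(item["tier"], item["tags"])   # restricted ['credit_card']
```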

Beyond this, we also need to make sure that what we architect is truly a platform, something that acts as a solid foundation for how we want to use our data. Once we have our data organised, secure and protected, the platform must let us move it to the places we need it, allowing us to take advantage of new cloud services, data analytics tools, machine learning engines or whatever may be around the corner, while ensuring we continue to maintain control and retain insight into its use regardless of where it resides.

These are key elements of our future data platform and ones we are going to need to consider to ensure that our data can meet the demands of our organisations to make better decisions and provide better services, driven by better use of data.

How do we do it?

Of course, the question is: can this be done today, and if so, how?

The good news is that much of what we need is already available or coming very soon, which means that, realistically, within the next 6-18 months you can develop a strategy and build a more secure, intelligent and automated way of managing your data, if you have the desire to do so.

I’ve shared some thoughts here on why we need to modernise our platforms and what we need from them. In the next post I’ll share a practical example of how you can build this kind of platform using tools that are available today or coming very shortly, to show that a future data platform is closer than you may think.


Veeam, heading in the right direction?

As the way we use data in our ever more fragmented, multi-cloud world continues to change, the way we manage, protect and secure our data is having to change with it. This need to change is mirrored by the leading data protection vendors who are starting to take new approaches to the challenge.

Around 18 months ago Veeam started shifting their own and their customers’ focus by introducing their “Intelligent Data Management” methodology, highlighting the importance of visibility, orchestration and automation in meeting the modern demands of data protection.

Recently I was invited to the Veeam Vanguard summit in Prague to learn about the latest updates to their platform. I was very interested to see how these updates would build upon this methodology and keep Veeam well placed to tackle these new problems.

There was a huge amount covered but I just wanted to highlight a couple of key strategic areas that caught my attention.

The initial challenge facing Veeam as they evolve is their “traditional” background: the innovative approach to protecting virtual workloads upon which they built their success has to change, because protecting modern workloads is a very different challenge. We have seen Veeam, via a mix of innovation and acquisition, starting to redesign and fill gaps in their platform to tackle these new challenges.

However, this has introduced a new problem, one of integrating these new developments into a cohesive platform.

Tying it together

Looking across many of the updates it is clear Veeam also recognise the importance integration plays in delivering a platform that can protect and manage the lifecycle of data in a hybrid, multi-cloud environment.

A couple of technologies really highlighted moves in this direction. The addition of an external repository to the Availability for AWS component allows backups of native EC2 instances to be housed in an object store outside AWS, rather than relying solely on native EC2 snapshots. On its own this is useful; however, when we add the upcoming Update 4 for Veeam Backup & Replication (B&R), we can see a smart strategic move.

Update 4 brings the ability for B&R to read and use the information held inside this object store, allowing an on-prem B&R administrator to browse the repository and recover data from it to any location.

Update 4 also includes a “cloud tier” extension to the backup repository: a remote S3/Azure Blob tier into which aged backup data can be moved, effectively giving an unlimited backup repository. With this, an organisation can take advantage of “cheap and deep” storage to retain data for the very long term without continually growing more expensive primary backup tiers. The integration is seamless and brings cloud storage, where appropriate, into the data protection strategy.
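To illustrate the tiering idea (and this is only an illustration of the logic, not how Veeam implements it), such a policy essentially boils down to moving anything older than an operational window out to cheaper object storage:

```python
# Sketch of age-based tiering: backups older than an operational window move
# from the local (expensive) tier to a capacity (object storage) tier.
# The window and class names are assumptions for the example.
from dataclasses import dataclass
from datetime import datetime, timedelta

OPERATIONAL_WINDOW = timedelta(days=30)   # assumed policy; tune to your own needs

@dataclass
class BackupFile:
    name: str
    created: datetime
    tier: str = "performance"   # starts on the local repository

def tier_out(backups: list[BackupFile], now: datetime) -> list[BackupFile]:
    """Return the backups that should be offloaded to the capacity (object) tier."""
    aged = [b for b in backups
            if b.tier == "performance" and now - b.created > OPERATIONAL_WINDOW]
    for b in aged:
        # In a real platform this step would be an upload to S3 / Azure Blob,
        # with the local copy replaced by a lightweight stub.
        b.tier = "capacity"
    return aged
```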

This is only the start; the potential to provide similar capabilities and integration with other public clouds and storage types is clearly there, and it would seem only a matter of time before the flexibility of the platform expands further.

Smart Protection thinking

While integration is crucial to Veeam’s strategy, more intelligence about how we can use our protected data is equally so, particularly as the demands of ensuring system availability continue to grow and put pressure on our already strained IT resources.

Secure and staged restore both add intelligence to the data recovery process, allowing modifications to be made to a workload before it is placed back into production.

Secure Restore

Secure restore allows a data set to be pre-scanned before being returned to production; think of it as part of an “anti-virus” strategy. Imagine, as you recover a set of data after a virus infection, being able to scan the data and address any issues before you place it back into production. That is secure restore: a powerful, time-saving and risk-reducing step.
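As a rough sketch of that workflow (the scanner command and function names here are placeholders of my own, not Veeam's API), the logic is simply "mount, scan, and only then restore":

```python
# Minimal secure-restore sketch: scan the mounted backup with an existing AV
# engine and block the restore if anything is found.
import subprocess

def secure_restore(backup_mount: str, restore_target: str) -> bool:
    # clamscan exits 0 when the scanned tree is clean, non-zero otherwise.
    scan = subprocess.run(["clamscan", "-r", backup_mount], capture_output=True)
    if scan.returncode != 0:
        print(f"Scan found issues in {backup_mount}; restore to {restore_target} blocked.")
        return False
    print(f"{backup_mount} is clean; restoring to {restore_target}.")
    # restore_to(restore_target)  # hand off to the normal recovery job
    return True
```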

Staged Restore

An equally powerful capability, staged restore allows alterations to be made to a system before it is restored into production. The example given during the session was compliance-based: checking data ahead of recovery to make sure anything non-compliant is removed before the restore completes. However, use cases such as patching would be equally useful, with staged restore allowing a VM to be mounted and system updates applied before it is placed back in production. Simple, but very useful.
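A similarly simplified sketch of the staged restore pattern, again using hypothetical names rather than Veeam's own tooling, runs a remediation step against the mounted copy before anything goes back into production:

```python
# Staged-restore sketch: apply a remediation step (patching, removing
# non-compliant data) to the mounted copy, never to production directly.
import subprocess

def staged_restore(mounted_vm_root: str, remediation_cmd: list[str]) -> None:
    # Run the remediation against the mounted backup copy; check=True aborts
    # the restore if the remediation step fails.
    subprocess.run(remediation_cmd + [mounted_vm_root], check=True)
    print(f"Remediation complete; {mounted_vm_root} can now be restored to production.")

# Example (hypothetical script): strip flagged data before the VM goes back
# staged_restore("/mnt/restore/vm01", ["python", "remove_noncompliant.py"])
```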

Both additions are excellent examples of smart strategic thinking on Veeam’s part, reducing the risks of recovering data and systems into a production environment.

How are they doing?

I went to Prague wanting to see how Veeam’s latest updates would help them and their customers meet the changing needs of data management, and the signs are positive. The increased integration between the on-prem platform and the capabilities of the public cloud is starting to make a reality of the “Intelligent Data Management” strategy, and with Update 4 of Backup & Replication, Veeam can protect a VM on-prem or in the cloud and restore it to any location, giving you true workload portability.

Veeam’s Intelligent Data Management platform is by no means all in place; however, the direction of travel is certainly clear, and even now you can see how elements of that strategy are deliverable today.

There was lots covered at the summit that built on the intelligence and automation discussed here. Veeam, in my opinion, remain a very smart player in the data protection space and, alongside some of the new and innovative entrants, continue to make data protection a fascinating and fast-moving part of the data market. That is no bad thing, as availability and data protection are central to pretty much all of our long-term data strategies.

Want to know more?

Interested in finding out more about Veeam? Then there’s a great opportunity coming up on December 5th with the VeeamON Virtual event, where you can hear the very latest from Veeam, with both strategic and technical tracks to log in and watch. The event will give you a lot more detail on everything covered in this blog, and a whole lot more.

You can find out more about the event and register here https://go.veeam.com/veeamon-virtual

If you want to find out for yourself if Veeam is on track, this is a great way to do it.

Protecting 365 – a look at Veeam Backup for Office 365

Recently Veeam announced version 2.0 of their Backup for Office 365 product, which extends the functionality of its predecessor with much-needed support for SharePoint and OneDrive for Business. Looking into the release and what’s new prompted me to revisit the topic of protecting Office 365 data, especially the approach of building your own solution to do so.

Back in April I wrote a post for Gestalt IT (“How to protect Office 365 data”), which considered the broadly held misconception that Microsoft are taking care of your data on their SaaS platform. While Microsoft provide some protection via retention and compliance rules and a 30-day rolling backup of OneDrive, this is not a replacement for a robust, enterprise-level data protection solution.

The article examined this issue and compared two approaches for dealing with the challenge, either via SaaS (NetApp’s SaaS backup platform was used as an example) or doing it yourself with Veeam. The article wasn’t intended to cover either approach in detail but to discuss the premise of Office 365 data protection.

This Veeam release, though, seemed like a good opportunity to look in more detail at the DIY approach to protecting our Office 365 data.

Why flexibility is worth the work

One of the drivers for many in the shift to 365 is simplification, removing the complexity that can come with SharePoint and Exchange deployments. It then surely follows that if I wanted simplicity, I’d want the same with my data protection platform. Why would I want to worry about backup repositories, proxy and backup servers or any other element of infrastructure?

The reality, however, is that when it comes to data protection, simplification and limiting complexity may not be the answer. The simplicity of SaaS can come at the price of reducing our flexibility to meet our requirements, for example limiting our options to:

  • Have data backed up where we want it.
  • Deal with hybrid infrastructure and protect on-prem services.
  • Have full flexibility with restore options.

These limitations can be a problem for some organisations, and when we consider mitigating against provider “lock-in” and the pressures of more stringent compliance, you can see how, for some, flexibility quickly overrides the desire for simplicity.

It is this desire for flexibility that makes building your own platform an attractive proposition. We can see with Veeam’s model the broad flexibility this approach can provide:

Backup Repository

Data location is possibly the key factor when deciding to build your own platform. Veeam provide the flexibility to store our data in our own datacentre, a co-lo facility, or even a public cloud repository, giving us the flexibility to meet the most stringent data protection needs.

Hybrid Support

The next most important driver for choosing to build your own solution is protecting hybrid workloads. While many have embraced Office 365 in its entirety, there are still organisations who, for numerous reasons, have maintained an on-premises element to their infrastructure. This hybrid deployment can be a stumbling block for SaaS providers with an Office 365-only focus.

Veeam Backup for Office 365 fully supports the protection of data both on-prem and in the cloud, all through one console and one infrastructure, under a single licence. This capability is hugely valuable, simplifying the data protection process for hybrid environments and removing any need to have multiple tools protecting the separate elements.

Recovery

It’s not just backup flexibility that gives building your own platform its value; it’s also the range of options this brings to recovery. The ability to take data backed up in any location and restore it to multiple different locations is highly valuable, and sometimes an absolute necessity, for reasons ranging from practicality to regulation.

What’s the flexibility cost?

Installation

Does this extra flexibility come at a heavy price in complexity and cost? In Veeam’s case, no: they are renowned for simplicity of deployment, and Backup for Office 365 is no different. It requires just the usual components of backup server, proxy, backup repository and product explorers, with the size of the protected infrastructure dictating the scale of the protection platform.

There are of course limitations (see Backup for Office 365 System Requirements). One major consideration is bandwidth: it’s important to think about how much data you’ll be bringing into your backup repository, both initially and for subsequent incremental updates. While most SaaS providers will have substantial connectivity into Microsoft’s platform for these operations, you may not.
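As a rough, back-of-envelope illustration (the figures below are assumptions, so substitute your own tenant size and link speed), the initial seed time can be estimated as follows:

```python
# Back-of-envelope estimate of the initial seed into the backup repository.
def seed_time_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Estimate transfer time for the first full copy, allowing for protocol overhead."""
    usable_mbps = link_mbps * efficiency          # assume ~70% of the link is usable
    seconds = (data_gb * 8 * 1000) / usable_mbps  # GB -> megabits -> seconds at Mbps
    return seconds / 3600

# Example: ~2 TB of mailbox/OneDrive data over a 500 Mbps link ≈ 12.7 hours
print(f"{seed_time_hours(2000, 500):.1f} hours")
```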

Licencing

A major benefit of software as a service is the commercial model: paying by subscription can be very attractive, and that benefit can be lost when deploying our own solution. This is not the case with Backup for Office 365, which is licenced on a subscription basis.

Do It Yourself vs As a Service

The Gestalt IT article ended with a comparison of the pros and cons of the two approaches:

Do It Yourself

  • Pros: Flexibility, Control, Customisation, Protect Hybrid Deployments
  • Cons: Planning, Management Overhead, Responsibility

As A Service

  • Pros: Simplicity, Lower Management Overhead, Ease of Deployment
  • Cons: Lack of control, Inability to customise, Cloud-only workloads, Data Sovereignty

I think these points remain equally relevant when deciding which approach is right for you, regardless of what we’ve discussed here about Veeam’s offering. If SaaS is the right approach, it remains so; but if you do take the DIY route, then I hope this post gives you an indication of the flexibility and customisation that is possible, and why this can be crucial to your data protection strategy.

If building your own platform is your chosen route, then Veeam Backup for Office 365 v2 is certainly worthy of your consideration. But regardless of approach, remember the data sitting in Office 365 is your responsibility: make sure it’s protected.

If you want to know more, you can contact me on twitter @techstringy or check out Veeam’s website.

IT Avengers Assemble – Part One – Ep38

This week's Tech Interviews is the first in a short series where I bring together a selection of people from the IT community to gauge the current state of business IT and gain some insight into the key day-to-day issues affecting those delivering technology to their organisations.

For this first episode I’m joined by three returning guests to the show.

Michael Cade is a Technical Evangelist at Veeam. Michael spends his time working closely with both the IT community and Veeam’s business customers to understand the day-to-day challenges they face, from availability to cloud migration.

You can find Michael on twitter @MichaelCade1 and his blog at vzilla.co.uk 


Mike Andrews is a Technical Solutions Architect at storage vendor NetApp, specialising in NetApp’s cloud portfolio. Today Mike works closely with NetApp’s wide range of customers to explore how to solve the most challenging of business issues.

You can find Mike on social media on twitter @TrekinTech and on his blog site trekintech.com

Mark Carlton is Group Technical Manager at Concorde IT Group. He has extensive experience in the industry, having worked in a number of different types of technology businesses. Today Mark works closely with a range of customers, helping them to use technology to solve business challenges.

Mark is on twitter @mcarlton1983 and at his fantastically titled justswitchitonandoff.com blog.

The panel discuss a range of issues, from availability and cloud migration to the importance of the basics, and how understanding the why, rather than the how, is a crucial part of getting your technology strategy right.

The team provide some excellent insights into a whole range of business IT challenges and I’m sure there’s some useful advice for everyone.

Next time I’m joined by four more IT avengers, as we look at some of the other key challenges facing business IT.

If you enjoyed the show and want to catch the next one, then please subscribe, links are below.

Thanks for listening.

Subscribe on Android

SoundCloud

Listen to Stitcher

Analysing the availability market – part two – Dave Stevens, Mike Beevor, Andrew Smith – Ep30

Last week I spoke with Justin Warren and Jeff Leeds at the recent VeeamON event about the wider data availability market. We discussed how system availability is more critical than ever and how, or maybe even whether, our approaches are changing to reflect that. You can find that episode here: Analysing the data availability market – part one – Justin Warren & Jeff Leeds – Ep29.

In part two I’m joined by three more guests from the event as we continue our discussion. This week we look at how our data availability strategy cannot just be a discussion for the technical department and must be elevated into our overall business strategy.

We also look at how technology trends are affecting our views of backup, recovery and availability.

First I’m joined by Dave Stevens of Data Gravity, as we look at how our backup data can be a source of valuable information, as well as a crucial part of helping us to be more secure and compliant with ever more stringent data governance rules.

We also look at how Data Gravity, in partnership with Veeam, have developed the ability to trigger smart backup and recovery. Dave gives a great example of how a smart restore can be used to quickly recover from a ransomware attack.

You can find Dave on Twitter @psustevens and find out more about Data Gravity on their website www.datagravity.com

Next I chat with Mike Beevor of HCI vendor Pivot3 about how simplifying our approach to system availability can be a huge benefit. Mike also makes a great point about how, although focussing on application and data availability is right, we must consider the impact on our wider infrastructure, because if we don’t we run the risk of doing more “harm than good”.

You can find Mike on twitter @MikeBeevor and more about Pivot 3 over at www.pivot3.com

Last but by no means least, I speak with Andrew Smith, Senior Research Analyst at IDC. We chat about availability as part of the wider storage market and how, over time, as vendors gain feature parity, their goal has to become adding additional value, particularly in areas such as security and analytics.

We also discuss how availability has to move beyond the job of the storage admin and become associated with business outcomes. Finally, we look a little into the future at how a “multi-cloud” approach is a key focus for business and how enabling it will become a major topic in our technology strategy conversations.

You can find Andrew’s details over on IDC’s website.

Over these two shows it has become clear to me that our views on backup and recovery are changing. The shift toward application and data availability is an important one, and as businesses we have to ensure that we elevate the value of backup, recovery and availability in our companies, making it an important part of our wider business conversations.

I hope you enjoyed this review. Next week is the last interview from VeeamON, as we go all VMware and I catch up with the hosts of VMware’s excellent Virtually Speaking podcast, Pete Flecha and John Nicholson.

As always, if you want to make sure you catch our VMware bonanza, then subscribe to the show in the usual ways.

Subscribe on Android

http://feeds.soundcloud.com/users/soundcloud:users:176077351/sounds.rss