Storage Ferraris in the cloud for $20 an hour – Lee Jiles – Ep80

 

A couple of months ago I wrote an article about the importance of enterprise data services inside the public cloud (Building a modern data platform – exploiting the cloud) and why they are crucial to the IT strategies of organisations as they look to transition to the public cloud.

The idea of natively being able to access data services that are commonplace in our datacentres, such as the ability to apply performance service levels, storage efficiencies and other enterprise-level capabilities to our cloud apps, is very attractive.

In this week’s episode, the first in a series of shows recorded at some of the recent tech conferences I’ve visited, we take a look at one such solution. I’m joined by Lee Jiles, a Senior Manager from NetApp’s Cloud Business Division, at their Insight conference to discuss Azure NetApp Files, an enterprise data services solution available natively inside Microsoft’s Azure datacentres.

Azure NetApp Files is a very interesting technology and another example of the fascinating work NetApp’s cloud business unit is doing in extending enterprise data services to the locations we need them: on-prem, near to, and inside the public cloud.

I discuss with Lee what Azure NetApp Files is and why it was developed. We explore some of the challenges of public cloud storage and how it often forces us to abandon the good storage management practices we are used to on-prem as we move into the cloud.

We look at why the ability to deliver a “familiar” experience has great advantages when it comes to speed and agility, and Lee explains why stripping away the complexity of cloud storage is like getting yourself a Ferrari for $20 an hour!

I ask Lee about the technical deployment of Azure NetApp Files and why it is different to solutions that are “near the cloud”. We also look at Microsoft’s view of the technology and the benefits they see in working with NetApp to deliver this service.

Lee also shares some of the planned developments as well as some of the initial use cases for the service. Finally, he explains how you can get access to the preview service and test out Azure NetApp Files for yourself to see if it can help meet some of your public cloud storage challenges.

For more details on the service, as well as where to sign up for access to the preview, you can visit the Azure Storage site here: https://azure.microsoft.com/en-gb/services/storage/netapp/

If you have other questions, you can contact Lee via email at lee.jiles@netapp.com.

Azure NetApp Files is a really interesting option for public cloud storage and well worth investigating.

I hope you enjoyed the show and as always, thanks for listening.


Veeam, heading in the right direction?

As the way we use data in our ever more fragmented, multi-cloud world continues to change, the way we manage, protect and secure our data is having to change with it. This need to change is mirrored by the leading data protection vendors who are starting to take new approaches to the challenge.

Around 18 months ago Veeam started shifting their own and their customers’ focus by introducing their “Intelligent Data Management” methodology, highlighting the importance of visibility, orchestration and automation in meeting the modern demands of data protection.

Recently I was invited to the Veeam Vanguard Summit in Prague to learn about the latest updates to their platforms. I was very interested to see how these updates would build upon this methodology and ensure Veeam remained well placed to tackle these new problems.

There was a huge amount covered, but I just wanted to highlight a couple of key strategic areas that caught my attention.

The initial challenge facing Veeam as they evolve is their “traditional” background. The innovative approach to protecting virtual workloads, upon which they have built their success, has to change, as protecting modern workloads is a very different challenge. We have seen Veeam, via a mix of innovation and acquisition, start to redesign and fill gaps in their platform to tackle these new challenges.

However, this has introduced a new problem, one of integrating these new developments into a cohesive platform.

Tying it together

Looking across many of the updates, it is clear Veeam also recognise the role integration plays in delivering a platform that can protect and manage the lifecycle of data in a hybrid, multi-cloud environment.

A couple of technologies really highlighted moves in this direction. The addition of an external repository to their Availability for AWS component allows backups of native EC2 instances to be housed in an object store, rather than only as native EC2 snapshots. On its own this is useful; however, when we add the upcoming Update 4 for Veeam Backup & Replication (B&R), we can see a smart strategic move.

Update 4 brings the ability for B&R to read and use the information held inside this object store, allowing an on-prem B&R administrator to browse the repository and recover data from it to any location.
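To make the pattern concrete, here is a minimal sketch of the general idea of browsing an object-store repository and pulling a selected backup down to wherever recovery is needed. It is illustrative only, not Veeam’s actual mechanism; the bucket, key layout and helper names are assumptions.

```python
import boto3

# Illustrative only: a generic "browse an object-store repository and recover
# from it" pattern, not Veeam's B&R implementation or its key layout.
s3 = boto3.client("s3")

def list_backups(bucket: str, prefix: str = "backups/") -> list[str]:
    """Return the object keys of backups held in the external repository."""
    response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in response.get("Contents", [])]

def recover_backup(bucket: str, key: str, target_path: str) -> None:
    """Pull a selected backup down to any chosen location."""
    s3.download_file(bucket, key, target_path)
```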

Update 4 also includes a “cloud tier” extension to the backup repository: a remote S3 or Azure Blob tier into which aged backup data can be moved, enabling an effectively unlimited backup repository. With this, an organisation can take advantage of “cheap and deep” storage to retain data for the very long term without needing to continually grow more expensive primary backup tiers. The integration is seamless and brings cloud storage, where appropriate, into a data protection strategy.
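The underlying idea is simple age-based tiering. Below is a minimal sketch of that concept, assuming a 30-day operational window and a caller-supplied move_to_object_store helper; both are hypothetical and not Veeam settings.

```python
from datetime import datetime, timedelta
from pathlib import Path
from typing import Callable

# Hypothetical age-based tiering: backups older than the operational restore
# window move from the local performance tier to "cheap and deep" object storage.
OPERATIONAL_WINDOW = timedelta(days=30)

def tier_out_aged_backups(local_repo: Path,
                          move_to_object_store: Callable[[Path], None]) -> None:
    """Move backup files older than the operational window to object storage."""
    cutoff = datetime.now() - OPERATIONAL_WINDOW
    for backup_file in local_repo.glob("*.vbk"):
        modified = datetime.fromtimestamp(backup_file.stat().st_mtime)
        if modified < cutoff:
            # The object-store client (S3, Azure Blob, etc.) is supplied by the
            # caller; only recent, frequently restored data stays on the local tier.
            move_to_object_store(backup_file)
```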

This is only the start. The potential to provide similar capabilities and integration with other public clouds and storage types is clearly there, and it would seem only a matter of time before the flexibility of the platform expands further.

Smart Protection thinking

While integration is crucial to Veeam’s strategy, more intelligence about how we can use our protected data is equally crucial, particularly as the demands of ensuring system availability continue to grow and put pressure on our already strained IT resources.

Secure and Staged Restore both add intelligence to the data recovery process, allowing modifications to be made to a workload before placing it back into production.

Secure Restore

Secure Restore allows a data set to be pre-scanned before being returned to production; think of this as part of an “anti-virus” strategy. Imagine, as you recover a set of data after a virus infection, being able to pre-scan the data and address any issues before you place it back into production. That is Secure Restore: a powerful, time-saving and risk-reducing step.
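As a rough sketch of that scan-then-restore pattern, with the scanner and the restore/quarantine actions passed in as stand-ins for a real AV engine and the backup platform’s own tooling (all hypothetical names, not Veeam’s API):

```python
from pathlib import Path
from typing import Callable, Iterable

# Hypothetical scan-before-restore workflow: each recovered file is checked
# before it is allowed back into production.
def secure_restore(files: Iterable[Path],
                   is_infected: Callable[[Path], bool],
                   restore_to_production: Callable[[Path], None],
                   quarantine: Callable[[Path], None]) -> None:
    for f in files:
        if is_infected(f):
            quarantine(f)             # deal with the issue outside production
        else:
            restore_to_production(f)  # clean data goes straight back
```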

Staged Restore

Staged Restore is an equally powerful capability, allowing alterations to be made to a system before it is restored into production. The example given during the session was based on compliance: carrying out a check on the data ahead of recovery to make sure that non-compliant data is removed before the system is restored. However, use cases such as patching would be equally useful, with Staged Restore allowing a VM to be mounted and system updates applied ahead of it being placed back in production. Again, simple, but very useful.
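The shape of the workflow is essentially “mount, modify, promote”. A minimal sketch follows, assuming hypothetical mount and promotion helpers and caller-supplied staging actions such as a compliance scrub or a patch run; none of these are Veeam’s actual interfaces.

```python
from typing import Callable, Sequence

# Hypothetical staged-restore pipeline: mount the backup in an isolated
# sandbox, run each staging action against it, then promote it to production.
def staged_restore(backup_id: str,
                   mount_sandbox: Callable[[str], str],
                   staging_actions: Sequence[Callable[[str], None]],
                   promote_to_production: Callable[[str], None]) -> None:
    sandbox_vm = mount_sandbox(backup_id)   # isolated copy, not yet in production
    for action in staging_actions:
        action(sandbox_vm)                  # e.g. remove non-compliant data, apply patches
    promote_to_production(sandbox_vm)
```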

Both additions are excellent examples of smart strategic thinking on Veeam’s part, reducing the risks of recovering data and systems into a production environment.

How are they doing?

I went to Prague wanting to see how Veeam’s latest updates would help them and their customers meet the changing needs of data management, and the signs are positive. The increased integration between the on-prem platforms and the capabilities of the public cloud is starting to make a reality of the “Intelligent Data Management” strategy, and with Update 4 of Backup & Replication, Veeam can protect a VM on-prem or in the cloud and restore it to any location, giving you true workload portability.

Veeam’s Intelligent Data Management platform is by no means all in place; however, the direction of travel is certainly clear and, even now, you can see how elements of that strategy are deliverable today.

There was lots covered at the summit that built on much of the intelligence and automation discussed here. In my opinion, Veeam remain a very smart player in the data protection space and, alongside some of the new and innovative entrants, continue to make data protection a fascinating and fast-moving part of the data market. That is useful, as availability and data protection are central to pretty much all of our long-term data strategies.

Cloud evolution or revolution? – W. Curtis Preston – Ep79

As we adopt an ever-increasing amount of cloud services into our businesses, are we part of a technology revolution or is it just the next evolutionary step in the way we do things? There is no doubt that cloud has revolutionised some businesses and that some would not exist without the incredible range of services and innovation that the public cloud, in particular, can offer us. However, that’s not the case for everyone: for those whose businesses pre-date “The Cloud”, we have legacy systems, “traditional” approaches to doing things and systems that are not architected like cloud applications.

So, what does that mean to us as we adopt cloud services? Especially when it comes to those “boring” topics such as data protection of our cloud-integrated systems?

That’s the topic I explore with this week’s guest, “Mr Backup”, also known as W. Curtis Preston. Curtis is Chief Technical Architect at Druva and has worked in the data protection space for 25 years.

We start out by discussing this evolution, from terminals in datacentres to running our sensitive data “on someone else’s computer”. We look at what this means for data protection and clarify the position most cloud providers take when it comes to responsibility.

Curtis then shares some experience of what cloud data protection means and how we need to rethink our approach, as our on-prem methods do not necessarily translate to the cloud; in fact, if we are protecting “cloud native” workloads, then we need to think about “cloud native” protection approaches.

We look at Druva’s approach to the problem, the power that comes with getting all of our data, regardless of location, into a single repository, and how that opens up options for gaining insight and intelligence about the data we hold.

We also share some thoughts on the future and how the continued move to the cloud is going to break our on-prem data protection approaches if we don’t properly consider the way we protect our cloud-hosted information.

Finally, we dip into a topic we covered in the last episode as we look at VMware Cloud on AWS, what that means for VMware customers and their transition to the cloud and, of course, the importance of protecting that data. If you are heading out to VMworld, you will find Curtis in Barcelona discussing “The New Era of Cloud Data Management”; why not look up his session?

If you want more information on what Druva are doing in this space, visit Druva.com. You can also follow them on Twitter @druvainc and follow Curtis @wcpreston.

Great information from Curtis, hope you enjoy the show.

Next week we start a series of shows from my recent conference travels, covering a wide range of topics, from data protection at scale to automation, and from ultra-fast performance to AI. If you want to ensure you don’t miss those shows, you can subscribe, and leave a review to help others find them.

Thanks for listening.

NetApp’s Future, do they matter?

A couple of weeks ago I was at a technology event speaking with some of the attendees when the subject of NetApp was raised, accompanied by the question “Are NetApp still relevant?”. I was taken aback by this, particularly as over the last few years I felt NetApp had done a great job of repositioning themselves and changing the view of them as a “traditional” storage company.

However, this message had clearly not reached everyone, and it made me consider: “Does NetApp’s vision really deal with challenges that are relevant to the modern enterprise?” and “Have they done enough to shake the traditional storage vendor label?”

I’m writing this blog 33,000 ft above the United States, heading home from NetApp’s Insight conference. Reflecting on the three days in Las Vegas, I wondered: did what I heard answer those questions, and would it keep NetApp relevant for a long time to come?

#DataDriven

The modern tech conference loves a hashtag, one that attempts to capture the theme of the event, and #DataDriven was Insight 2018’s entry in the conference hashtag dictionary.

But what does Data Driven actually mean?

Data plays a significant role in driving modern business outcomes, and the way we handle, store and extract information from it is a key focus for many of us. This is clearly the same for NetApp.

Throughout Insight, NetApp stated clearly that their vision for the future is to be a data company, not a storage one, a subtle but crucial difference. No longer are speeds and feeds (while still important) the thing that drives their decision making; it is data that is at the heart of NetApp’s strategy, a crucial shift that matches how the majority of NetApp’s customers think.

Data Fabric 2.0

NetApp’s data fabric has been at the centre of their thinking for the last four years. Insight, however, presented a fundamental shift in how they see its future, starting with making it clear that it is not “NetApp’s data fabric” but “your data fabric”.

A fabric shouldn’t be “owned” by a storage vendor; it is ours to build to meet our own needs. This shift is also driving how NetApp see the future delivery of a data fabric: no longer something that needs building, but “Data Fabric as a Service”, a cloud-powered set of tools and services that enable your strategy. This is a 180° turn, making it no longer an on-prem infrastructure that integrates cloud services, but a cloud service that integrates and orchestrates all of your data endpoints, regardless of location.

The demonstration of this vision was extremely impressive, and the future data fabric was clear in its direction: the fabric is yours, to be consumed as you need it, helping us to deliver services and data as and when we need to, quickly, efficiently and at scale.

The awkward HCI Conversation

Perhaps the most immediate beneficiary of this shift is NetApp’s hyper-converged infrastructure (HCI) platform. NetApp are by no means early to this market, and in some quarters there is debate as to whether NetApp HCI is a hyper-converged platform at all. I’ll admit that, while the industry definition of HCI doesn’t really bother me, as technology decisions should be about outcomes, not arbitrary definitions, I do have reservations about the long-term future of NetApp’s HCI platform.

However, what NetApp showed as part of their future data fabric vision was a redefinition of how they see HCI, redefined to the extent that, in NetApp’s view, HCI no longer stands for hyper-converged infrastructure but for Hybrid Cloud Infrastructure.

What does this mean?

It’s about bringing the cloud “experience” into your datacentre, but this is much more than building a “private cloud”; it is about HCI becoming a fully integrated part of a cloud-enabled data strategy, allowing organisations to deploy services and move them simply from public cloud to on-prem and back again. HCI becomes just an endpoint, a location from which your cloud services can be delivered.

Ultimately, HCI shouldn’t be about hardware or software but about outcomes, and NetApp’s aim is for this technology to speed up your ability to drive those outcomes, regardless of location.

In my mind, this transformed the platform from one whose long-term value I struggled to see into something with the potential to become a critical component in delivering modern services to organisations of all types.

Summary

Did what I heard address the questions raised to me? Would it convince a wider audience that NetApp remain relevant? For that, we will have to wait and see.

However, in my opinion NetApp presented a forward-thinking, relevant strategy that, if executed properly, will be a fundamental shift in the way they are developing as a company and will ensure they remain relevant to organisations by solving real and complex business challenges.

I’m very interested to see how this new vision for the data fabric evolves; if they can execute the vision presented so impressively at Insight, they may finally shed that “traditional” NetApp label and become the data authority company they are aiming to be.

You can get further details on the announcements from Insight by visiting the NetApp Insight site, where you will find a wide range of videos, including the two general session keynotes.

If you want to find out more about NetApp’s vision for yourself, it’s not too late to register to attend NetApp’s Insight EMEA conference in Barcelona; details are here.

Stay Cloudy VMware – Glenn Sizemore – Ep78

As I’ve discussed many times in both blogs and podcasts, the move to public cloud comes with its challenges; sometimes it’s poor decision making or poor design, or it’s just too complicated to integrate the flexibility and power of the public cloud with your on-prem environment. However, you are probably also aware that this is beginning to change, as more tech vendors look at ways of simplifying things by offering consistency and tooling to both ease the move to public cloud and simplify the integration with on-prem tech.

One such solution was announced by VMware at 2017’s VMworld conference: their partnership with Amazon Web Services (AWS), which allows you to run a VMware stack inside AWS that is completely yours, not shared, running your own environment on top of Amazon hardware and managed completely by VMware. It delivers consistency of endpoint by providing, and integrating with, the VMware environment in your datacentre to give a seamless hybrid experience.

As with all things in the cloud, these services continue to evolve and develop, so 12 months in I wanted to follow up on VMware Cloud on AWS (VMC) to see how it has changed, the lessons VMware have learned and what is coming in the near future to allow even more flexibility and tighter integration with your own on-prem enterprise technology.

Joining me to discuss this is Glenn Sizemore, a Senior Technical Marketing Architect at VMware with long and varied experience in the IT industry.

On this episode, Glenn shares a range of updates on what VMC is and where it’s heading. We talk about the importance of its hybrid design, allowing customers to focus on workloads rather than complex infrastructure, simplifying cloud adoption for a range of enterprises.

We also look at how it goes beyond just simplifying the move to the cloud, as the two-way relationship with AWS starts to offer the ability to move native Amazon services into your datacentre, and we discuss how this is driving a different cloud strategy conversation.

Glenn also shares some plans for what we can expect to see in VMC, especially when it comes to storage, as VMware look to tackle both the needs of capacity-intensive workloads and the need to offer integrations with third-party storage platforms, which will be crucial in ensuring VMC is a flexible enterprise platform and not one seen as a tool just to sell VMware technologies.

We finish up by discussing how you can build both proofs of concept and proofs of value with VMC before you make a commitment, because it’s crucial to define outcomes with this platform and understand that the platform is right before asking whether “you can afford it”.

To find out more, you can check out the Virtual Blocks blog site as well as follow Glenn on Twitter @glnsize.

Also, do check out these fascinating Tech Field Day presentations that Glenn did alongside NetApp.

If you want to pop back in time to hear our intro show on VMware Cloud on AWS from last year, you can find that here.

Glenn provides some great insight into this interesting platform, enjoy the show.

Thanks for listening.