NetApp Winning Awards, Whatever Next?

In the last couple of weeks I’ve seen NetApp pick up a couple of industry awards, with the all-flash A200 earning the prestigious Storage Review Editor’s Choice as well as CRN UK’s Storage Vendor of the Year 2017. This, alongside commercial successes (How NetApp continue to defy the performance of the storage market), is part of a big turnaround in their fortunes over the last three years or so. But why? What is NetApp doing to garner such praise?

A bit of disclosure: as a Director at a long-term NetApp partner, Gardner Systems, and a member of the NetApp A-Team advocacy programme, I could be biased. But having worked with NetApp for over 10 years, I still see them meeting our customers’ needs better than any other vendor, which in itself suggests NetApp are doing something right.

What is it they’re doing? In this post, I share some thoughts on what I believe are key parts of this recent success.

Clear Strategy

If we wind the clock back four years, NetApp’s reputation was not at its best. Tech industry analysts painted a bleak picture: the storage industry was changing, with public cloud storage and innovative start-ups offering to do more than the “legacy” platforms, and in many cases they could. To many, NetApp were a dinosaur on the verge of extinction.

Enter the Data Fabric, first announced at NetApp’s technical conference, Insight, in 2014. Data Fabric was the beginning of NetApp’s move from a company focussed on storing data to a company focussed on the data itself. This was significant, as it coincided with a shift in how organisations viewed data, moving away from just thinking about storing data to managing, securing, analysing and gaining value from it.

NetApp’s vision for the Data Fabric closely aligned with the aims of these more data-focussed organisations. It also changed the way NetApp thought about their own portfolio: less worried about speeds, feeds and flashing lights, and more about building a strategy that was focussed on data in the way their customers were.

It is this data-driven approach that, in my opinion, has been fundamental in this change in NetApp’s fortunes.

Embrace the Cloud

A huge shift, and something that has taken both customers and industry analysts by surprise, is the way NetApp have embraced the cloud: not a cursory nod, but cloud as a fundamental part of the Data Fabric strategy, going way beyond “cloudifying” existing technology.

ONTAP Cloud seamlessly delivers the same data services and storage efficiencies in the public cloud as you get with its on-prem cousin, providing a unique ability to maintain data policies and procedures across your on-prem and cloud estates.

But NetApp has gone beyond this, delivering native cloud services that don’t require any traditional NetApp technology. Cloud Sync allows the easy movement of data from on-prem NFS datastores into the AWS cloud, while Cloud Control provides a backup service for Office 365 (and now Salesforce), bringing crucial data protection functionality that many SaaS vendors do not provide.

If that wasn’t enough, there is the recently announced relationship with Microsoft, with NetApp now powering the Azure NFS service. Yep, that’s right: if you take the NFS service from the Azure marketplace, it is delivered fully in the background by NetApp.

For a storage vendor, this cloud investment is unexpected, but a clear cloud strategy is also appealing to those making business technology decisions.

Getting the basics right

With these developments, it’s clear NetApp have a strategy and are expanding their portfolio into areas other storage vendors do not consider. But there is also no escaping that their main revenue continues to come from ONTAP and FAS (NetApp’s hardware platform).

If I’m buying a hardware platform, what do I want from it? It should be robust, with strong performance, and a good investment that evolves with my business. If NetApp’s commercial success is anything to go by, they are delivering this.

The all-flash NetApp platforms (such as the award-winning A200 mentioned earlier) are meeting this need: a robust, enterprise-level platform that allows organisations to build an always-on storage infrastructure that scales seamlessly with new business demands. Six-year flash drive warranties and the ability to refresh your controllers after three years also give excellent investment protection.

It is not just the hardware, however; these platforms are driven by software. NetApp’s ONTAP operating system is like any other modern software platform, with regular code drops (every six months) delivering new features and improved performance to existing hardware via a non-disruptive software upgrade. This gives businesses the ability to “sweat” their hardware investment over an extended period, which in today’s investment-sensitive market is hugely appealing.

Have an interesting portfolio

NetApp was for a long time the FAS and ONTAP company, and while those things are still central to their plans, their portfolio is expanding quickly. We’ve discussed the cloud-focussed services; there’s also SolidFire, with its unique scale and QoS capabilities, StorageGRID, a compelling object storage platform, and AltaVault, which provides a gateway to move backup and archive data into object storage on-prem or in the cloud.

Add to this the newly announced HCI platform and you can see how NetApp can play a significant part in your next-generation datacentre plans.

For me, the awards I mentioned at the beginning of this article are not down to one particular solution or innovation; it’s the Data Fabric. That strategy is allowing NetApp, its partners and customers to have a conversation that is data-focussed rather than technology-focussed, and having a vendor who understands that is clearly resonating with customers, analysts and industry influencers alike.

NetApp’s continued evolution is fascinating to watch, and they have more to come, with no doubt more awards to follow. Whatever next!


Architecting the Future – Ruairi McBride and Jason Benedicic – Ep 51

As we become more data-driven in our organisations, and ever more used to the way the big public cloud providers deliver our services, more pressure is put on internal IT to deliver infrastructure that provides this data-focussed, cloud-like experience. But where do you start in designing this next generation of datacentre?

That’s the subject of this week’s podcast, the last of the shows recorded at NetApp Insight in Berlin, where I catch up with two members of a fascinating panel discussion I attended at the event: Ruairi McBride and Jason Benedicic.

Ruairi is focussed on partner education for global technology distribution company Arrow ECS and has spent the last nine months working with partners to help them understand next-generation datacentres.

You can find Ruairi on Twitter @mcbride_ruairi and at his blog, ruairimcbride.wordpress.com.

Jason is a principal consultant at ANS Group in the UK with a focus on next-generation datacentres. Jason spends his time designing and implementing next-gen technology for a wide range of customers and, with nearly 20 years of industry experience, offers great insight.

Catch up with Jason on Twitter @jabenedicic and look out for his forthcoming blog site, www.thedatacentrebrit.co.uk.

Ruairi and Jason were part of a panel hosted by the NetApp A-Team, made up of people who were not theorists but had practical experience of deploying next-generation technologies and working practices. As I know many listeners to this show are involved in developing their own next-generation strategy, I thought it would make an interesting episode.

We cover a range of topics, beginning by looking to define what we mean by next-gen and the types of technology and methodologies involved.

We discuss what is driving the move to next-generation datacentres: how public cloud and the move to automated, self-healing, self-service, software-defined infrastructure are major influences, and how businesses that wish to maintain a competitive edge and improve the service to their customers and users need to look at this next-generation approach.

We wrap up by looking at how next-gen datacentres are not about technology alone; they are as much about philosophy and working practice. Jason and Ruairi share ideas about the type of building blocks you need and the help and support the technology community can bring as you look to deliver a next-generation strategy to your organisation.

Jason and Ruairi provide some excellent insights and tips on developing a next-generation datacentre approach. If you have questions, then please feel free to contact any of us on Twitter or via the comments section on the site.

This is the last show of 2017. For all who have listened this year, thanks for your support. Tech Interviews will be back in the new year with a whole host of new interviews exploring a range of technology topics; if there’s anything you’d like the show to explore in 2018, then why not drop me a note @techstringy.

To make sure you catch next year’s shows, why not subscribe in all of the usual places.

That just leaves me to say: enjoy the Christmas holiday season. I’d like to wish you the very best for 2018, and I hope you’ll spend some of it listening to Tech Interviews.

For all of you who have enjoyed the show in 2017 – thanks for listening


Scale that NAS – Justin Parisi – Ep 50

There is no doubt that the amount of data we have, manage, control and analyse continues to grow ever more rapidly, and much of this is unstructured data: documents, images, engineering drawings, data that often needs to be stored in one place and be easily accessible.

However, this presents problems. How do you get all of this data in one place when it’s not just terabytes but hundreds of terabytes, made up of billions of files that need to be accessed quickly? How on earth do you build that kind of capacity and deliver the performance you need?

Like any compute problem, there are two ways to scale: up, adding more capacity to your existing infrastructure, or out, adding not only more capacity but also more compute.

The other week I heard an excellent episode of the Gestalt IT On-Premise podcast where they posed the question “should all storage be scale-out?” (find the episode here), and the answer was basically yes. In a world where we have these ever-growing unstructured data repositories, scaling out our NAS services makes perfect sense, delivering not only massive capacity in a single repository but also taking advantage of scaled-out compute to give us the ability to process the billions of transactions that come with a huge repository.

So for episode 50 of the Tech Interviews podcast, it seemed apt to celebrate the big five-oh by talking about big data storage.

To discuss this evolution of NAS storage I’m joined by a returning guest, fresh from episode 48 (The heart of the data fabric), Justin Parisi, to discuss NetApp’s approach to this challenge: FlexGroups.

We start the episode by discussing what a FlexGroup is and, importantly, why you may want to use one, and why it’s about more than just capacity, as we discuss the performance benefits of spreading a single storage volume across multiple controllers and look at those all-important use cases, from archives to design and automation.
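To give a feel for why spreading one volume across multiple controllers helps, here’s a minimal Python sketch of hash-based file placement across member volumes. To be clear, this is purely illustrative and not NetApp’s actual algorithm: ONTAP’s real FlexGroup placement also weighs free capacity and load on each constituent, whereas this toy version just shows how a stable hash can spread billions of files evenly so no single controller becomes the bottleneck.

```python
import hashlib

def place_file(path: str, member_volumes: int) -> int:
    """Map a file path to one of N member volumes via a stable hash.

    Illustrative only: real FlexGroup placement is smarter than a
    plain hash, but the spreading principle is the same.
    """
    digest = hashlib.sha256(path.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % member_volumes

# Spread 10,000 hypothetical files across 8 member volumes and count
# how many land on each one.
files = [f"/archive/design/part-{i}.dwg" for i in range(10_000)]
counts = [0] * 8
for f in files:
    counts[place_file(f, 8)] += 1

print(counts)  # a roughly even spread of ~1,250 files per member
```

Because the hash is deterministic, any node can work out where a given file lives without consulting a central index, which is what lets capacity and compute scale together as you add members.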

We explore the importance of simplification. While our need to manage ever-increasing amounts of data continues to grow, the resources available to do it are ever more stretched, so we look at how NetApp has made sure the complexity of scale-out NAS is hidden away from the user, presenting a simple, intuitive and quick-to-deploy technology that allows users to have the capacity without the need to rearchitect or relearn their existing solutions.

We wrap up by looking at some of NetApp’s future plans for this technology, including how it may become the standard deployment volume, simplification of migration and other uses such as VMware datastores.

FlexGroups is a really smart technology designed to simply address the ever-growing capacity and performance problem encountered by our traditional approach to file services, and if you are looking at scale-out NAS for your file services, it’s a technology well worth reviewing.

For some very informative FlexGroup blogs, visit the NetApp Newsroom.

There is also a selection of NetApp technical reports on the subject; check out TRs 4557, 4571 and 4616.

You can also hear more from Justin and the Tech ONTAP podcast team discussing FlexGroups here in episode 46.

And finally, you can contact the FlexGroup team via email at flexgroups-info@netapp.com.

If you want to find out more about Justin and the work he does in this area, check out his excellent website https://whyistheinternetbroken.wordpress.com/ and follow him on Twitter @nfsdudeabides.

Next week it’s the last show of the year, as I’m joined by Jason Benedicic and Ruairi McBride to discuss the future of datacentre architecture as we talk next-gen technologies.

To catch that show, why not subscribe in any of the usual podcast places.

Hope you enjoyed Episode 50 – here’s to the next 50!

Thanks for listening.