NetApp Winning Awards, Whatever Next?

In the last couple of weeks I've seen NetApp pick up a couple of industry awards, with the all-flash A200 earning the prestigious Storage Review Editor's Choice as well as CRN UK's Storage Vendor of the Year 2017. This, alongside commercial successes (How NetApp continue to defy the performance of the storage market), is part of a big turnaround in their fortunes over the last three years or so. But why? What is NetApp doing to garner such praise?

A bit of disclosure: as a Director at a long-term NetApp partner, Gardner Systems, and a member of the NetApp A-Team advocacy programme, I could be biased, but having worked with NetApp for over 10 years, I still see them meeting our customers' needs better than any other vendor, which in itself suggests NetApp are doing something right.

What is it they're doing? In this post, I share some thoughts on what I believe are key parts of this recent success.

Clear Strategy

If we wind the clock back four years, NetApp's reputation was not at its best. Tech industry analysts painted a bleak picture: the storage industry was changing, with public cloud storage and innovative start-ups offering to do more than those "legacy" platforms, and in many cases they could. NetApp, it seemed, were a dinosaur on the verge of extinction.

Enter the Data Fabric, first announced at NetApp's technical conference, Insight, in 2014. Data Fabric was the beginning of NetApp's move from a company focussed on storing data to one focussed on the data itself. This was significant, as it coincided with a shift in how organisations viewed data: moving away from just thinking about storing data to managing, securing, analysing and gaining value from it.

NetApp's vision for data fabric closely aligned with the aims of more data-focussed organisations and also changed the way they thought about their portfolio: less worried about speeds and feeds and flashing lights, and more about how to build a strategy focussed on data in the way their customers were.

It is this data-driven approach that, in my opinion, has been fundamental in this change in NetApp’s fortunes.

Embrace the Cloud

A huge shift, and something that has taken both customers and industry analysts by surprise, is the way NetApp have embraced the cloud. Not a cursory nod, but cloud as a fundamental part of the data fabric strategy, and this goes way beyond "cloudifying" existing technology.

ONTAP Cloud seamlessly delivers the same data services and storage efficiencies into the public cloud as you get with its on-prem cousin, providing a unique ability to maintain data policies and procedures across your on-prem and cloud estates.

But NetApp have gone beyond this, delivering native cloud services that don't require any traditional NetApp technologies. Cloud Sync allows the easy movement of data from on-prem NFS datastores into the AWS cloud, while Cloud Control provides a backup service for Office 365 (and now Salesforce), bringing crucial data protection functionality that many SaaS vendors do not provide.

If that wasn't enough, there is the recently announced relationship with Microsoft, with NetApp now powering the Azure NFS service. Yep, that's right: if you take the NFS service from the Azure marketplace, it is delivered fully in the background by NetApp.

For a storage vendor, this cloud investment is unexpected, but a clear cloud strategy is also appealing to those making business technology decisions.

Getting the basics right

With these developments, it’s clear NetApp have a strategy and are expanding their portfolio into areas other storage vendors do not consider, but there is also no escaping that their main revenue generation continues to come from ONTAP and FAS (NetApp’s hardware platform).

If I'm buying a hardware platform, what do I want from it? It should be robust, with strong performance, and a good investment that evolves with my business. If NetApp's commercial success is anything to go by, they are delivering this.

The all-flash NetApp platforms (such as the award-winning A200 mentioned earlier) are meeting this need: a robust enterprise-level platform allowing organisations to build an always-on storage infrastructure that scales seamlessly with new business demands. Six-year flash drive warranties and the ability to refresh your controllers after three years also give excellent investment protection.

It is not just the hardware, however; these platforms are driven by software. NetApp's ONTAP operating system is like any other modern software platform, with regular code drops (every six months) delivering new features and improved performance to existing hardware via a non-disruptive software upgrade. This gives businesses the ability to "sweat" their hardware investment over an extended period, which in today's investment-sensitive market is hugely appealing.

Have an interesting portfolio

NetApp for a long time was the FAS and ONTAP company, and while those things are still central to their plans, their portfolio is expanding quickly. We've discussed the cloud-focussed services; there's also SolidFire with its unique scale and QoS capabilities, StorageGRID, a compelling object storage platform, and AltaVault, which provides a gateway to move backup and archive data into object storage on-prem or in the cloud.

Add to this the newly announced HCI platform and you can see how NetApp can play a significant part in your next-generation datacentre plans.

For me, the awards I mentioned at the beginning of this article are not down to one particular solution or innovation; it's the data fabric. That strategy is allowing NetApp, its partners and customers to have a conversation that is data- and not technology-focussed, and having a vendor who understands that is clearly resonating with customers, analysts and industry influencers alike.

NetApp’s continued evolution is fascinating to watch, and they have more to come, with no doubt more awards to follow, whatever next!


Tech Trends – Object Storage – Robert Cox – Ep13

Over the last couple of weeks I’ve chatted about some of the emerging tech trends that I expect to see continue to develop during 2017 (Have a read of my look ahead blog post for some examples). To continue that theme this episode of Tech Interviews is the first of three looking in a little more detail at some of those trends.

First up, we look at a storage technology that is growing rapidly, if not necessarily obviously: object storage.

As the amount of data the world creates continues to grow exponentially, it is becoming clear that some traditional storage methods are no longer effective. When we are talking billions of files, spread across multiple data centres in multiple geographies, traditional file storage models are no longer as effective (regardless of what a vendor may say!). That's not to say our more traditional methods are finished, far from it, but there are increasingly use cases where that traditional model doesn't scale or perform well enough.

For many of us, we've probably never seen an object store, or at least we think we haven't, but if you're using storage from the likes of AWS or Azure then you're probably using object storage, even if you don't realise it.
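
In fact, if you've ever scripted against AWS, the odds are you've already spoken to an object store without noticing. As a purely illustrative sketch (the bucket name, keys and endpoint below are made up; the same S3-style API is also what NetApp's StorageGRID Webscale exposes), here's what writing and reading objects looks like in Python with boto3:

```python
# A minimal sketch of talking to an S3-compatible object store with boto3.
# The endpoint, credentials, bucket and keys are hypothetical examples.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # hypothetical endpoint
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

# Objects are written and read whole, by key, with metadata attached:
# no directories, no file locking, no POSIX semantics.
s3.put_object(
    Bucket="sensor-archive",
    Key="2017/01/device-42/readings.json",
    Body=b'{"temp": 21.5}',
    Metadata={"device": "42", "site": "liverpool"},
)

obj = s3.get_object(Bucket="sensor-archive", Key="2017/01/device-42/readings.json")
print(obj["Body"].read())
```

That flat, key-addressed model is exactly why object stores scale to billions of items across geographies where hierarchical file systems struggle.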

With all that said, what actually is object storage? Why do we need it? How does it address the challenges of more traditional storage? And what are the use cases?

It's those questions that we attempt to answer in this episode of Tech Interviews with my guest Robert Cox. Robert is part of the storage team at NetApp, working with their StorageGRID Webscale object storage solution.

During our chat we focus on giving an introduction to object storage: why it's relevant, the issues with more traditional storage and how object overcomes them, as well as Robert sharing some great use cases.

So, if you are wondering what object is all about and where it may be relevant in your business, then hopefully this is the episode for you.


If you’d like to follow up with Robert with questions around NetApp’s object storage solutions you can email him at

You can find information on NetApp StorageGrid Webscale here 

And if you’d like a demo of StorageGrid then request one here

Next week we take a look at one of the most high-profile of tech trends, the emergence of DevOps. To make sure you don't miss out, you can subscribe to Tech Interviews below.

Hope you can join us next week, thanks for listening…

Subscribe on Android

Viva Las VVOL’s

In this episode I'm joined by Pete Flecha, Senior Technical Marketing Architect at VMware, as we discuss VVOLs, VMware's new approach to delivering storage to virtual infrastructures. VVOLs look to address many of the problems traditional SAN-based storage presents to virtual infrastructures. Pete provides an intro to the problems VVOLs look to address, how they go about it, and what we can expect from the recent vSphere 6.5 release that brings us VVOLs 2.0.

Although I'm not a VVOL expert, I find what VMware are looking to do here really interesting, as they look to tackle one of the key issues IT leaders constantly look to address: how to reduce the complexity of their environments so they can react quicker to new demands from their business.

VVOLs allow the complexity of any underlying storage infrastructure to be hidden from the virtualisation administrators, giving those managing and deploying applications, servers and services a uniform experience, so they can focus on quickly deploying their infrastructure resources.

As we all strive to ensure our IT infrastructures meet the ever-changing needs and demands of our organisations, anything that simplifies, automates and ensures consistency across our environments is, in my opinion, a good thing.

It certainly seems that VVOLs are a strong step in that direction.

In this episode Pete provides a brilliant taster of what VVOLs are designed to do and the challenges they meet. I hope you enjoy it.

If you want more VVOL details, Pete is the host of VMware's fantastic vSpeaking podcast, and last week they had an episode dedicated to VVOLs; you can pick that up here.

vSpeaking Podcast ep:32 VVOLs 2.0

You can find all the other episodes of the vSpeaking podcast here

You can keep up with Pete and the excellent work he's doing at VMware by following him on Twitter @vpedroarrow

And of course, if you have enjoyed this episode of the podcast please subscribe for more episodes wherever you get your podcasts. You won’t want to miss next week, as I discuss data privacy with global privacy expert Sheila Fitzpatrick.

Subscribe on Android

Gold medals for data

Last week marked the end of a wonderful summer of sport from Rio, where the Olympics and Paralympics gave us sport at its best: people achieving lifetime goals, setting new records and inspiring a new generation of athletes.

I'm sure many of you enjoyed the games as much as I did, but why bring it up here? Well, for someone who writes a blog it's almost a contractual obligation in an Olympic year to write something with a tenuous Olympic link. So here's my entry!

One part of the Team GB squad that really stood out in Rio were the Olympic cyclists, winning more gold medals than all of the other countries combined (6 of the 10 available), a phenomenal achievement.

This led to one question being asked continually: "What's the secret?". In one BBC interview Sir Chris Hoy was asked that question, and his answer fascinated me. During his career the biggest impact on British cycling was not equipment, facilities, training or superhuman cyclists. It was data. Yes, data: not just collecting data, but more importantly the ability to extract valuable insight from it.

We hear it all the time

“those who will be the biggest successes in the future are those that get the most value from their data”

and what a brilliant example the cyclists were. We see this constantly in sport, where the smallest advantage matters, but it's not just sport; increasingly this is the case in business, as organisations see data as the key to giving them a competitive edge.

We all love these kinds of stories about how technology can provide true advantage, but it's always great to see it in action.

A couple of weeks ago I was on a call with the technical lead of one of our customers. He and his company see the benefit of technology investment and how it delivers business advantage. I've been lucky enough to work with them over the last four years or so and have watched the company grow around 300% in that time. We were talking with one of their key technology vendors, explaining how that technology had been an instrumental part of the company's success.

During the call I realised this was my opportunity for a tenuous-Olympic-link blog post: how, as with the cyclists, getting the best from data was delivering real bottom-line success to the business.

The business is a smart energy company, doing very innovative stuff in the commercial and private energy sectors. They're in a very competitive industry, dominated by some big companies, but these guys are bucking that trend: a great example of how a company that is agile and knows how to exploit its advantage can succeed.

In their industry data is king. They pick up tonnes of data every day, from customers, from devices, from sensors, and manipulating this data and extracting valuable information from it is key to their success.

Until about a year ago they were running their database and reporting engines (SQL-based) on a NetApp storage array running 7-Mode. That had worked, but a year ago we migrated the infrastructure to clustered Data ONTAP to provide increased flexibility, mobility of data and more granular separation of workloads.

However, the smartest thing they did as part of the migration was to deploy Flash Pools into their environment. Why was this so earth-shattering?

A big part of the value of their SQL infrastructure is reporting. This allows them to provide better services to their customers and suppliers, giving them an advantage over their competitors.

However, many of those reports took hours to run; in fact the process was to request the report, and it would be ready the next day.

The introduction of Flash Pools into the environment (Flash Pool is a flash-based acceleration technology available in NetApp ONTAP arrays, using SSDs to cache hot data in front of traditional disk) had a dramatic effect, taking these overnight reports and delivering them in 30–60 minutes.
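
To picture what a Flash Pool is doing, think of a small, fast SSD tier automatically keeping the hottest blocks in front of a big, slow disk tier. The toy sketch below is my own illustration of that caching idea, not how ONTAP's WAFL actually implements it:

```python
# A toy illustration of the hybrid-caching idea behind Flash Pool: hot blocks
# are served from a small, fast SSD tier in front of a large, slow HDD tier.
# Conceptual sketch only; ONTAP's real implementation is very different.
from collections import OrderedDict

class HybridTier:
    def __init__(self, hdd_blocks, ssd_capacity):
        self.hdd = hdd_blocks              # big, slow backing store (dict)
        self.ssd = OrderedDict()           # small, fast cache (LRU order)
        self.ssd_capacity = ssd_capacity

    def read(self, block_id):
        if block_id in self.ssd:           # cache hit: fast path
            self.ssd.move_to_end(block_id)
            return self.ssd[block_id]
        data = self.hdd[block_id]          # cache miss: slow path
        self._promote(block_id, data)      # repeated reads become fast
        return data

    def _promote(self, block_id, data):
        self.ssd[block_id] = data
        if len(self.ssd) > self.ssd_capacity:
            self.ssd.popitem(last=False)   # evict least recently used block
```

For a reporting workload like the one above, the same handful of database blocks get read over and over, which is exactly the access pattern a cache like this rewards.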

This significant reduction in report run times meant more reports could be run, producing different data that could be used to present new and improved services to customers.

Last year the technical lead attended NetApp Insight in Berlin. One of the big areas of discussion that caught his interest was the development of all-flash FAS (AFF), NetApp's all-flash variant of their ONTAP-driven FAS controllers.

They immediately saw the value in this high-performance, low-latency technology, so earlier this year we arranged for an AFF proof of concept to be integrated into the environment. During the POC the team moved a number of SQL workloads to the flash-based storage, and it's no understatement to say this transformed their data analysis capabilities: those 30–60 minute reports were now running in 2–3 minutes.

An example of the kind of performance you can get from AFF (this is an AFF8080 cluster running ONTAP 8.3.1 – new platforms and ONTAP 9 have increased this performance further)

But this was not just about speed; this truly opened up brand-new capabilities and business opportunities. Now the organisation could provide their customers and suppliers with information that was previously impossible, and quick access to data was allowing decisions on energy usage that gave true value.

They knew the proof of concept had gone well when, on taking it out, the business began asking questions: why is everything so slow? Why can't we run those reports anymore? And that was the business case. The deployment of NetApp flash was not about doing stuff quickly for its own sake, or using flash because that's what everyone says you should; it was because flash was delivering results, real business advantage.

As Chris Hoy discussed, at the Olympics it was not just about collecting the data because they could; it was about getting the most out of it, and in a sport where often tenths of a second stand between you and a gold medal, any advantage is critical.

A competitive business environment is no different, so an investment in technology that gives you the slightest edge makes perfect sense.

Today, all-flash FAS is integrated into their new datacentre running the latest iterations of ONTAP, delivering a low-latency, high-performance infrastructure and ensuring that they can continue to drive value from their most critical business asset: their data.

A great use of technology to drive advantage, in fact Gold medals for data usage all round.


Hope that wasn't too tenuous an Olympic link, and if you have any questions then of course @techstringy or LinkedIn are great ways to get me.

If you’re interested in Flash you may also find this useful “Is Flash For Me?” from my company website.


Data Fabric – What is it good for?


Anyone who's seen my social content recently will know I'm a big fan of the concept of a data fabric. Now, the thing with IT is that we love to get excited about phrases like this and assume everyone else will "get" what we're talking about... well, imagine my surprise the other day when I was talking to a business colleague and he asked me...

Data Fabric, what is it good for?

That's a great question, isn't it? (Even if the immediate answer is to start channelling a bit of Edwin Starr.) What on earth is it? And I guess he probably isn't the only person asking.

I started by making it clear that when I've been talking about data fabric, my discussions have been around the strategic conversations I've been having with our customers and storage vendor NetApp. That's probably not a surprise to those who know me; I've had a long association with them, so no shock there. Maybe more surprising is that even if I did want to look elsewhere, no one else is really having this discussion, and really, they should be...

So when my colleague went on to ask two other questions, it got me thinking that the answers would maybe make a good blog post.

What were these two questions? Well, first up:

What problem does it solve?

To be clear, data fabric is not a product or a bit of technology; it's a strategy. I read a great article recently from the founder of Coho Data, another storage vendor, who talked about how often the data storage debate gets lost in technology and completely loses sight of the primary point for any business looking at data storage: they have data storage challenges and they want someone to solve a problem for them, not to go on about feeds and speeds and flashing lights...

So let's see if we can answer that with a focus on business problems and not get lost in technology (and there is plenty of innovative tech behind the NetApp data fabric story). Hopefully you'll find it interesting and see why it's maybe more important than ever that those making storage decisions think way beyond the silo of specific technologies and look much wider.

Today most of the customer conversations I have include two key topics: using the cloud and managing data, be that its security, availability or flexibility of access. The problem with those two things is that they don't necessarily complement each other very well.

Many of the data strategies I see implemented include data silos: flash over here for one project; archive storage elsewhere, because I don't want it on flash; some stuff over here in the cloud, because I want access to it all over the place, or need it as part of my DR. But now I have a whole host of tools managing these things, and the data in one silo can't move to another. The problem is that as this becomes more complex, it becomes more difficult to manage and more difficult to control; mistakes happen, exposing our data and our businesses to risk.

Even if that doesn't happen, you end up backed into a corner, with all your data sat separately and no ability to easily move between your silos.

That's what a data fabric strategy fixes. It addresses all of these data silos by allowing you to put your storage where you want it, while having it managed by a single toolset and allowing seamless movement between your storage types.

So this leads to the second question –

How does NetApp help me solve that problem then?

This is, for me, where NetApp have been smart. How so?

First, let's look at how other smart technology ecosystems deliver their data. Let's take Apple, for instance:

If you buy into the Apple ecosystem, with your iTunes account, your Mac, your iPad, your phone etc., as a user you don't even think about how you get access to your content from one device to another; it's just there. Apple have created a data fabric.

But if we look at our enterprise IT, are we doing the same? In many cases, no, we are not...

NetApp have been smart: they looked at this model and asked, can we do something similar?

Many of you know NetApp as a storage vendor, supplying physical storage arrays. However, what NetApp actually do is write software, and their biggest software solution is their ONTAP operating system. It is this operating system, which many people don't necessarily see, that is the core of the data fabric.

How? In the end, NetApp's storage capability is delivered completely via the ONTAP operating system, and because ONTAP is fundamentally a piece of software, like any piece of software it can be installed on any capable platform.

So what? Well, just imagine if the storage operating system sat on your storage array could be moved around: maybe dropped into a virtual machine, or sat in front of a big lump of public cloud storage.

Once you've done that, you really have an opportunity to break down storage silos and provide real flexibility in where you put your data.


If you look at the image above, you can see Data ONTAP right at the core of what the data fabric looks like. The OS can then be installed onto, or over the top of, any of those multiple platforms; once it's there, you have all of the same features and functionality regardless of what sits behind it.

We can sit our ONTAP OS on top of a disk array full of SAS or SATA, or maybe on some all-flash infrastructure. But maybe we don't have NetApp arrays? No problem, let's sit it in front of a third-party disk array, or run it in our branch office as a VM, or put it in the cloud in front of some AWS storage.

That gives us one operating system on a range of devices, one set of tools to manage it, and the same capability across each of those platforms, which ultimately gives us the ability to easily move data around our fabric, across different storage types. So we want to move our SAS data onto all-flash? No problem, drop flash into our fabric and over it goes. Want to move data into the cloud? No problem, let's mirror it across. And what about when we want it back? No problem, we mirror it back.
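
To make that "mirror it across, mirror it back" idea concrete, here's a rough sketch of what such a workflow could look like if driven programmatically. The endpoint path and payload fields below are assumptions invented purely for illustration, not a documented NetApp API; the underlying ONTAP feature doing the work being described is SnapMirror replication:

```python
# Purely illustrative: what moving data around the fabric might look like as
# an automated workflow. The URL, endpoint path and payload shape below are
# hypothetical, invented for this sketch; the real ONTAP mechanism being
# described in the post is SnapMirror.
import requests

FABRIC_API = "https://ontap.example.com/api"  # hypothetical management endpoint

def mirror_volume(source, destination):
    """Ask the fabric to create and start a mirror between two endpoints."""
    resp = requests.post(
        f"{FABRIC_API}/snapmirror/relationships",   # assumed path
        json={"source": {"path": source}, "destination": {"path": destination}},
        auth=("admin", "secret"),                   # placeholder credentials
    )
    resp.raise_for_status()

# On-prem volume out to a cloud instance of ONTAP, and reversed later:
mirror_volume("onprem-svm:sql_data", "cloud-svm:sql_data_mirror")
mirror_volume("cloud-svm:sql_data_mirror", "onprem-svm:sql_data")  # bring it back
```

The point of the sketch is the symmetry: because the same OS sits at both ends, moving data out and moving it back are the same operation, just reversed.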

It's that operational flexibility that addresses the issue we discussed in the answer to the first question: failing to look at the big picture potentially puts our data into silos that cannot be moved to other platforms. Does that matter? In some cases maybe not, but in many it does. If you are thinking strategically about your business technology, then you need to consider whether the decisions you are making will give you the flexibility to respond to changing business needs and take advantage of future technology changes.

I appreciate that we have talked a lot about NetApp here, but at the minute I've not really seen this joined-up thinking at this scale elsewhere. However, if your technology partners are offering this kind of fabric, that's great, explore it. All I ever aim for with posts like this is to get those reading them thinking about strategic considerations they may not have before; hopefully this post has done that.

Hopefully some food for thought…

Flashy NetApp

You may be aware NetApp have announced the latest update to their Data ONTAP operating system, ONTAP 8.3.1 (if you're not, you may want to read my Jumping NetApp Flash post to explain why!). This post provides a touch more detail on what 8.3.1 brings to your storage party, especially for those looking to deliver all-flash storage into their datacentre.

NetApp are not bringing out new controllers or some brand-new platform; ONTAP 8.3.1 is an update to the current version of the storage OS already in the market.

The main thrust of this update is what 8.3.1 means for NetApp all-flash arrays (specific implementations of NetApp controllers, in case you are not sure). 8.3.1 will also deliver benefits for users of hybrid controllers utilising traditional disk tiers, but for this post we are focussing on All Flash FAS (AFF).

Ok, so what is this release delivering in terms of AFF?

What is AFF?

Firstly, it's probably worth making clear what AFF means. The most fundamental thing to bear in mind, although maybe not the most surprising, is that AFF means exactly that: these are the usual NetApp controllers (8000 range only), but they will only operate SSD drives; they will not work with standard disk tiers. There are some specific bits of code optimisation in ONTAP on these controllers to take into account the use of flash drives only.

I'm not going to look at the hardware specs here. As you may know, there is a range of controllers from the 8020 upwards offering differing amounts of processing capability, connectivity options etc., but all deliver the same Data ONTAP capability, and that's the focus here.

Let’s get into a bit of detail then;

Enterprise level storage

The first thing to note about NetApp's view of all-flash is that flash-based storage should be delivered without compromising any of the enterprise-level functionality you should expect.

AFF does all of the things you expect any NetApp controller to do:

  • Scale-out and non-disruptive operations
  • Data Mobility within a cluster
  • Integrated data protection (Snapshots, SnapMirror, SnapVault)
  • Storage efficiencies (RAID-DP, Thin Provisioning, FlexClone, Dedupe, Inline Compression)
  • Advanced application integration
  • Secure multi-tenancy, QoS, add without re-architecting
  • Full protocol flexibility – FC, FCoE, iSCSI, NFS/pNFS, CIFS/SMB

And of course it operates as part of any type of cluster, be that all-flash or hybrid, with all-flash controllers sitting alongside controllers operating mixed disk tiers, but all delivered by one OS, managed by one platform and supporting all the same application integrations you expect.


So that's the stuff you'd expect NetApp to do; what about some of the things specific to AFF?

Getting the most out of your flashy controllers

NetApp have introduced a number of 8.3.1 features that change the way the controllers work, both to optimise performance and to reduce wear on the SSD drives, significantly reducing the potential for failure of a flash drive.

Write Optimisations

Using a mixture of the way the WAFL file system operates and a number of flash-specific tweaks, NetApp are increasing the consistency of performance while lowering unnecessary workloads on the flash drives, for example reducing garbage collection and write amplification, which in turn extends the lifetime of the drives.
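
For anyone unfamiliar with the term, write amplification is simply the ratio of data physically written to the flash versus data the host actually asked to write; the figures below are invented purely for illustration:

```python
# Write amplification in one line: physical flash writes divided by host
# writes. Numbers here are made up for illustration; lowering this ratio is
# what extends an SSD's usable lifetime.
host_writes_gb = 100          # what the application wrote
physical_writes_gb = 250      # what the SSD actually wrote after garbage collection

write_amplification = physical_writes_gb / host_writes_gb
print(write_amplification)    # 2.5: every host GB cost 2.5 GB of flash wear
```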

Read Optimisations

8.3.1 improves on some work NetApp had already started to reduce the number of steps that data has to pass through before being presented back to the requesting application.

A storage request for data traditionally moves through the storage system stack, so in NetApp's case something like:

  • network layer
  • file system (WAFL)
  • RAID
  • the disk or SSD itself


However, in 8.3.1 (assuming no requirement for error recovery) the data bypasses both the file system and RAID, taking data straight from SSD and presenting it out on the network layer, making huge leaps in read performance. And remember, if you are an existing NetApp user running AFF, you'll benefit from this via an ONTAP upgrade; no new kit needed.
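
As a toy model of that fast path (my own illustration, not NetApp's code), the logic is simply: if the block read from SSD verifies cleanly, hand it straight to the network layer; only fall back through the full stack when error recovery is actually needed:

```python
# Toy model of a read fast path: skip the heavyweight layers when the block's
# checksum verifies, fall back to RAID reconstruction only on error.
# Conceptual sketch only; raid_layer is a stand-in object for the RAID layer.
import zlib

def read_block(block_id, ssd, raid_layer):
    data, stored_checksum = ssd[block_id]         # (bytes, crc32) per block
    if zlib.crc32(data) == stored_checksum:
        return data                               # fast path: SSD straight out
    return raid_layer.reconstruct(block_id)       # slow path: recovery needed
```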


Storage Efficiency

One of NetApp's key industry differentiators has always been storage efficiency, and to see this delivered, and actually enhanced, on the flash platforms is, in my opinion, a fantastic step forward for enterprise flash usage, with many of the newer vendors not ticking all of the efficiency boxes all of the time.

We know NetApp do all the lovely stuff around thin provisioning, snapshots, clones, deduplication and compression; however, the flash platform offers a couple of new and additional efficiency features:

  • Inline compression – on by default on the AFF platforms; data is compressed as it's written, laying down less to disk in the first place.
  • Inline zero deduplication – the controller inspects data as it arrives, identifying and removing zero blocks before writing the rest to disk... as we all know, we write a lot of zeros to disk that we don't really need! (There's a small sketch of this idea after the list.)
  • Always-on deduplication – AFF can also enable always-on dedupe, so every minute the system carries out a dedupe of the housed data; this is great for VDI environments, giving excellent space efficiency with no effect on performance.
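
To illustrate the zero-detection idea (a conceptual sketch only, using WAFL-style 4 KB blocks; ONTAP's real implementation will differ):

```python
# Sketch of inline zero-block detection: inspect each incoming block before
# it hits disk, and record all-zero blocks as metadata instead of writing
# them. Conceptual illustration only, not ONTAP code.
ZERO_BLOCK = bytes(4096)  # a 4 KB block of zeros

def write_stream(blocks, disk, block_map):
    """disk is a list of written blocks; block_map records where each logical
    block lives (None means 'all zeros, nothing was written')."""
    for i, block in enumerate(blocks):
        if block == ZERO_BLOCK:
            block_map[i] = None        # metadata only, no flash write, no wear
        else:
            block_map[i] = len(disk)   # physical location of the real data
            disk.append(block)
```

Every zero block caught this way is a write the flash never has to absorb, which feeds directly back into the drive-lifetime point made earlier.
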
Enterprise Capabilities

In my opinion this is where this release plays very strongly. If you are an enterprise IT decision-maker looking to deploy flash into your environment, then one area of concern is the lack of enterprise functionality: not the "nice to have" features, but those absolutely essential to your organisation.

NetApp, as an enterprise player, have of course always understood that, and have made sure that with the AFF range none of that enterprise feature set is compromised.

Our AFF boxes fully exploit all the things you'd expect NetApp to bring:

  • High performance at ultra-low latency – a minimum for a flash solution, of course
  • Non-disruptive operations – a brilliant part of a NetApp cluster: upgrade, replace and update completely non-disruptively
  • Scale-out – want more compute power? Just slot it in!
  • Multi-protocol support (NFS, iSCSI, FC/FCoE, CIFS) – a key NetApp benefit; a lot of the new kids on the flash block are limited to, well, block protocols, with no support for file protocols, so no support for VMware using NFS or Microsoft using SMB3 for both Hyper-V and SQL, both key directions for those technologies
  • Deduplication / inline compression
  • Synchronous / asynchronous / semi-sync replication – and of course we need to replicate this stuff for backup, DR and continuity
  • DR to cheaper SAS/SATA-based systems – a key benefit over the all-flash-only companies out there; NetApp have the ability to run all-flash in production but replicate to much cheaper DR storage tiers, including, via both AltaVault and Cloud ONTAP, replication into public cloud storage
  • Quality of Service – true QoS to allow you to manage your storage performance requirements, providing prioritisation of data if needed
  • Secure multi-tenant capable – and if you are building your own "cloud" infrastructure, fully accredited secure multi-tenancy is critical if you are delivering a true shared platform

It's not quite all, folks!

All the techie stuff is great and of course it’s important, but it’s not the biggest hurdle to delivering flash.

We have two choices right now if we want flash: compromise some of the key enterprise features we have come to rely on by using the less mature stacks of the newer flash players, or pay a premium for the enterprise-quality stack.

A significant part of this NetApp announcement is a clear realisation that this is not the way for enterprise storage providers to play. The modern datacentre does gain advantage from tiers of flash in the infrastructure, but customers should not be penalised because they want enterprise capabilities.

NetApp have reduced the cost of their all-flash controller platforms quite significantly, bringing their prices right in line with some of those "startup" all-flash players, but in no way compromising NetApp's enterprise capability.

It is this last part that makes this such a complete package. The technology is great; delivering 350,000 IOPS from a unified storage controller is fantastic performance, but doing it at a price that makes enterprise-quality flash a reality for many customers is seriously impressive.

Flashy you may say!

Jumping NetApp Flash


This post is definitely going to talk about the range of NetApp announcements that you may have heard today, but before I do that I want to focus on the next sentence.

Announcements you may have heard today

Now, the reason I wanted to focus on that for a second is that, as I've mentioned previously, the IT industry can be quite odd, and if you take time to listen to the opinion pieces out there, the coverage that gets the most noise often goes to the vendors who are either really cool (Apple, for instance) or the ones who market themselves really well; you can fill in your own blanks there.

Now in many of those cases, when they market themselves well, any kind of new software release or a bit of a hardware refresh becomes major news, regardless of whether the thing they are doing is ground-breaking or even new to the industry at all (even if it's new to their platform). My son has a great example of this at the minute, in the new Samsung smartphone ad, where Rita Ora (yes, I'm down with the kids) advertises how her new phone charges wirelessly on a charging plate. Brilliant innovation! Well, not really; my son uses a Lumia and has done for a good three years, and through that entire time his Lumia has wirelessly charged on a plate... but no fanfare or Rita Ora endorsements there.

However, on the flip side, some companies, especially those big industry behemoths, often deliver this kind of innovation with no more than a "meh, it's just something we do". Now if you're a user of that technology, brilliant: new features, new capabilities, all for free, just delivered. But if you're not, you never know about it and only hear the noisy marketeers!

So what's all this got to do with NetApp? Well, full disclosure here: I've worked with NetApp for around 10 years and have always been a fan of their technology. Even if as a company they sometimes get things wrong, the tech is always rock solid, smart and innovative. However, NetApp are really, really bad at telling people, hence that second sentence, and that's a real pity. NetApp have a great set of solutions, some excellent tech and a really solid vision around how a data fabric from datacentre to public cloud can be delivered... yet nobody ever knows. And so the tech press talk of a company without innovation, becoming less relevant in the industry. Those comments and views are fine, based on what they know, but it's a pity when marketing gets in the way of good technology.

However, this week NetApp have made some really impressive updates to the latest version of their industry-leading storage OS, Data ONTAP. But for NetApp it is kind of one of those "meh" moments; incremental updates to their OS are just something they do, and providing enterprises with a new set of possibilities doesn't warrant a big NetApp press conference or new product launch fanfare. It's almost as if they release news with their fingers to their lips, suggesting we all do a lot of shhhhh.

Well, just for once I'm going to break the well-held NetApp stance and wave some flags on their behalf, because this new set of announcements is, I believe, pretty important for those out there delivering enterprise IT.

What’s it all about then?

IT is full of trends, and one of the key ones for the storage industry is flash. Flash is a really interesting proposition in the datacentre: it's fast, very fast, and for some that's important, and it can be relatively cheap, with lots of new startup storage companies popping up and offering lightning-quick flash at lowish prices.

But then there is the dilemma: what many of these new flash and hybrid vendors are not delivering is all of the enterprise storage facilities we need to make sure that not only is my data fast, but it's protected, my critical applications are fully integrated with the fast disks, and I have the full set of efficiency features I expect: compression, deduplication, snapshots, thin provisioning etc.

But there's the problem: those enterprise-class vendors are offering that, but not at the right cost, and those offering the price point we want are not providing the functionality we need. Dilemma indeed!

NetApp drum roll

Well, that was until today. Take a bow, NetApp: an enterprise vendor that has recognised that customer requirement, offering the enterprise features you expect but at a cost point challenging the most aggressively priced of the startup all-flash vendors.

What have NetApp done?

The release of ONTAP 8.3.1 to power their existing all-flash FAS controllers (yep, these are already in the market running previous versions of ONTAP, so if you have one you are getting some new goodness) provides a host of new flash-based benefits, adding increased performance to some already impressive independent benchmarks (top 10 in the SPC-1 performance benchmark). But critically, and the reason I wanted to call NetApp out here, it does that without compromising any of the enterprise-class functionality.

without compromising any of the enterprise class functionality

Oh yes, and I nearly forgot: all this while dealing with the biggest of flash adoption hurdles, the price, taking this high-performance all-flash solution and reducing the cost by 40%, bringing an entry-level solution to below £30K. Pretty major stuff.

This isn't some new standalone tech either; it can integrate seamlessly into an existing NetApp storage cluster. Or you can build a new cluster using not just all-flash but also, if needed, controllers using more traditional storage tiers, all managed as a single cluster by a single OS and set of management tools, integrating seamlessly with your NetApp data protection suites, and with the ability to mirror and vault the data off to alternate locations. Oh, and those don't need to be all-flash locations, just somewhere running Data ONTAP (other controllers, software-only ONTAP VMs, or even straight into a hyperscale cloud).

It's this focus on flash performance without compromising enterprise quality that I think is most interesting to those of us looking to deliver enterprise-class IT solutions.

If you want some more detail about what NetApp are bringing to the all-flash party with these releases, I've written a separate post here providing more technical depth. To give you a summary of some of the things you can expect from NetApp with this release:

  • High Performance
  • Consistent low latency
  • Full enterprise storage efficiency capabilities – including inline compression and deduplication
  • Full enterprise software integration delivering rich data management
  • Scalability both up and out
  • Multi-protocol support – connect in the way that's best for you (FC, iSCSI, CIFS, NFS)
  • Low cost – for example VDI from ~£30 per desktop

As you can see, it's a pretty impressive list of capabilities from a company that has already shipped 4,000+ all-flash controllers, so they already know what they are doing, and all this at some really competitive price points.

I've no intention of turning this blog into a marketing site, so hopefully you don't think this too salesy. I wanted to make the point that some companies are really great at delivering and really rubbish at telling people, so, as flash is such an important technology for many, I thought I'd do a bit of marketing on their behalf. Now I just need to send NetApp my PR invoice and we'll be good!

If you want a bit more tech detail on what this new NetApp announcement is about, please feel free to have a read here.