Keeping on top of ONTAP

The last year has been a big one for NetApp: the turnaround in the company’s fortunes continues, with fantastic growth in the all-flash array market, the introduction of cloud-native solutions and tools, and of course SolidFire and the newly announced HCI platform. All have created lots of interest in this “new” NetApp.

If you have read any of my content previously, you’ll know I’m a fan of how NetApp operate, and their data fabric strategy continues to make them the very best strategic data partner for many of the people I work with day to day.

Why am I telling you all of this? Well, as with all technology companies, it’s easy to get wrapped up in exciting new tech and sometimes forget the basics of why you work with them and what their core solutions still deliver.

For all the NetApp innovations of the last couple of years, one part of their business continues to be strong, and even at 25 years old it remains as relevant to customer needs as ever: the ONTAP operating system.

ONTAP, in its latest incarnation, version 9 (9.2 to be exact), perhaps more than anything shows how NetApp continue to meet the ever-changing needs of the modern data market. It would be easy, regardless of its strength, to write off an operating system that is 25 years old, but NetApp have not; they have developed it into something markedly different from the versions I first worked with 10 years ago.

These changes reflect what we, as users in more data-focussed businesses, demand from our storage. It’s not even really storage we demand, it’s the ability to make our data a core part of our activities. To quote a friend, “storing is boring”, and although storing is crucial, if all we are doing is worrying about storing data then we are missing the point. If that were ONTAP’s only focus, it would very quickly become irrelevant to a modern business.

How are NetApp ensuring that ONTAP 9 remains relevant and continues to be at the heart of data strategies big and small?

Staying efficient

Although storing may be boring, in a world where IT budgets continue to be squeezed and datacentre power and space are at a costly premium, squeezing more and more into less and less continues to be a core requirement.

Data compaction, inline deduplication, and the newly introduced aggregate-wide deduplication all provide fantastic efficiency gains. Align this with support for ever-increasing media sizes (10TB SATA, 15TB flash, something not always easy for NetApp’s competition) and you can see how ONTAP continues to let you squeeze more and more of your data into smaller footprints (60TB on one SSD drive, anyone?), something that remains critical in any data strategy.
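To make the idea concrete, here is a toy Python sketch of how deduplication and compaction combine to shrink a physical footprint. The block contents, the 2:1 compaction factor and the sample workload are all invented for illustration; real ONTAP efficiency depends entirely on the workload and is implemented nothing like this.

```python
# Toy model only: inline deduplication stores each unique block once,
# then data compaction packs multiple small logical blocks into one
# physical block. Figures here are illustrative, not ONTAP internals.

def physical_blocks(blocks, compaction_factor=2):
    """Deduplicate identical blocks, then pack them together."""
    unique = set(blocks)                          # dedup: one copy per unique block
    return -(-len(unique) // compaction_factor)   # compaction, as ceiling division

def efficiency_ratio(blocks, compaction_factor=2):
    """Logical-to-physical ratio, e.g. 4.0 means 4:1 savings."""
    return len(blocks) / physical_blocks(blocks, compaction_factor)

# A workload with heavy duplication (think cloned VDI images)
workload = ["osblock"] * 6 + ["data1", "data2"]   # 8 logical blocks
```

Running this gives a 4:1 ratio for the sample workload: eight logical blocks reduce to three unique ones, which compact into two physical blocks.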

Let it grow

As efficient as ONTAP can be, nothing is efficient enough to keep up with our desire to store more data, and more types of data. However, ONTAP is doing a pretty good job of keeping up. Not only have NetApp added further scalability to ONTAP clusters (supporting up to 24 nodes), they have also taken on a different scaling challenge with the addition of FlexGroups.

FlexGroups allow you to aggregate up to 200 volumes into one large, high-performance storage container, perfect for those who need a single point of storage for very large datasets. This is something I’ve already seen embraced in areas like analytics, where high-performance access to potentially billions of files is a must.
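As a purely illustrative sketch of the FlexGroup concept, the toy function below spreads new files across member volumes presented as one namespace. Real FlexGroup placement uses ingest heuristics based on free space and load, not a bare hash, so treat the member names and hashing scheme here as assumptions.

```python
# Simplified illustration: many member volumes behind one namespace,
# with new files distributed across them. Not NetApp's algorithm.
import hashlib

MEMBER_VOLUMES = [f"member{i:02d}" for i in range(8)]  # ONTAP allows up to 200

def place_file(path, members=MEMBER_VOLUMES):
    """Pick a member volume for a new file, deterministically."""
    digest = hashlib.sha256(path.encode()).digest()
    return members[digest[0] % len(members)]
```

The point of the design is that a single huge dataset stops being bottlenecked on one volume: billions of files end up spread over many members while clients still see one container.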

Keep it simple

A goal for any IT team should be the simplification of its environment.

NetApp have continued developing ONTAP’s ability to automate more tasks, and by using intelligent analysis of system data they are helping you take the guesswork out of workload placement and its impact, allowing you to get it right first time, every time.

The continued development of quick deployment templates has also greatly simplified the provisioning of application storage environments, taking them from out of the box to serving data in minutes, not days.

In a world where the ability to respond quickly to business needs is crucial, the value of developments like this cannot be overstated.

Keep it secure

Maybe the most crucial part of any data strategy is security, and in the last 12 months NetApp have greatly enhanced ONTAP’s capability and flexibility in this area.

SnapLock functionality was added 12 months ago, allowing you to lock your data into data archives that can meet the most stringent regulatory and compliance needs.

However, the biggest bonus is the implementation of onboard, volume-level encryption. Prior to ONTAP 9, the only way to encrypt data on a NetApp array was, as with most storage vendors, through the use of self-encrypting drives.

This was a bit of an all-or-nothing approach: it meant buying different, and normally more expensive, drives and encrypting all data regardless of its sensitivity.

9.1 introduced the ability to deliver encryption at a more granular level, allowing you to encrypt individual volumes without self-encrypting drives. That means no additional hardware and, importantly, the ability to encrypt only what is necessary.

In modern IT, this kind of capability is critical both in terms of data security and compliance.

Integrate the future!

I started this piece by asking how you keep a 25-year-old operating system relevant. In my opinion, the only way to do that is to ensure it seamlessly integrates with modern technologies.

ONTAP has a pretty good record of that. Be it by luck or design, its move into the world of all-flash was smooth, with no need for major rewrites; the ONTAP way of working was geared for flash before anyone had even thought of flash!

ONTAP’s ability to treat media as just another layer of storage, regardless of type, was key to supporting 15TB SSDs before any other major storage vendor, and it is this flexibility in integrating new storage media that has led to one of my favourite features of the last 12 months: FabricPools.

This technology allows you to seamlessly integrate S3 storage directly into your production data, be that an on-prem object store, or a public cloud S3 bucket from a provider like AWS.

In the V1.0 release in ONTAP 9.2, FabricPools tier cold blocks from flash to your S3-compliant storage, wherever that is, lowering your total cost of ownership by moving data not actively in use and freeing up space for other workloads. It is all done automatically via policy, seamlessly extending your production storage capacity by integrating modern storage technology.
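A rough sketch of that policy idea follows, with an assumed cooling threshold; the data structures and the 31-day figure are illustrative assumptions, not ONTAP’s actual implementation.

```python
# Illustrative sketch of policy-driven tiering: blocks not read for a
# cooling period move from the performance (flash) tier to an S3
# capacity tier. The threshold and structures are assumptions.
from dataclasses import dataclass

@dataclass
class Block:
    id: int
    days_since_access: int
    tier: str = "performance"   # or "cloud"

def apply_tiering_policy(blocks, cooling_days=31):
    """Move cold blocks to the capacity (S3) tier; return how many moved."""
    moved = 0
    for b in blocks:
        if b.tier == "performance" and b.days_since_access >= cooling_days:
            b.tier = "cloud"    # in reality, an object PUT to the S3 bucket
            moved += 1
    return moved
```

The appeal is that nothing about this is manual: the policy runs continuously, and hot data stays on flash while cold data quietly drains to cheaper capacity.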

ONTAP everywhere

As ONTAP continues to develop, the ways you can consume it also continue to develop to meet our changing strategic needs.

Fundamentally, ONTAP is a piece of software, and like any piece of software it can run anywhere that meets its requirements. The ONTAP variants Select and Cloud provide software-defined versions that can run on white-box hardware or be delivered straight from the cloud marketplaces of AWS and Azure.

The benefit of this stretches far beyond just being able to run ONTAP in more places; it means that management, security policies and data efficiencies are all equally transferable. It’s one way to manage and one set of policies to implement, meaning that where your data resides at a given moment becomes less important, as long as it is in the right place at the right time for the right people.

In my opinion, this flexibility is critical for a modern data strategy.

Keep it coming

Maybe what really keeps ONTAP relevant is that these new capabilities are all delivered in software. None of these features have required new hardware or an add-on purchase; they are all delivered as part of the ONTAP development cycle.

And the modern NetApp have fully embraced a more agile way of delivering ONTAP, with a six-month release cadence, meaning they can quickly absorb feature requests and deliver them to the platforms that want them, allowing them, and us, to respond to changing business needs.

So, while NetApp have had a fascinating year, delivering great enhancements across their portfolio, ONTAP retains a very strong place at the heart of their data fabric strategy and, in my opinion, is still the most complete data management platform, continuing to meet the needs presented by modern data challenges.

Find out more

If you want to know more about ONTAP and its development then try these resources.

NetApp’s Website

Justin Parisi’s BLOG – providing links to more detailed information on all of the technologies discussed and much more!

TechONTAP Podcast – NetApp’s excellent TechONTAP podcast covers everything discussed here in detail; it’s all in their back catalogue.

And of course you can leave a comment here or contact me on twitter @techstringy

What is a next generation data centre? – Martin Cooper – Ep35

There is no doubt that our organisations are becoming ever more data centric, wanting to know how we can gain insight into our day to day operations and continue to be competitive and relevant to our customers, while delivering a wide range of new experiences for them.

This move to a more data driven environment is also altering the way we engage and even purchase technology in our businesses, with technology decisions now no longer the preserve of IT people.

These changes do mean we need to reconsider how we design and deliver technology, which has led to the idea of the “Next Generation Datacentre”. But what does that mean? What is a Next Generation Datacentre?

That is the subject of this week’s podcast, as I’m joined by Martin Cooper, Senior Director of the Next Generation Datacentre Group (NGDC), at NetApp.

With over 25 years in the technology industry, Martin is well placed to understand the changes that are needed to meet our increasingly digitally driven technology requirements.

In this episode, we look at a wide range of topics, starting with trying to define what we mean by Next Generation Datacentre. The good news is that NGDC is not necessarily about buying a range of new technologies, but about optimising the processes and technology that we already have.

We touch on how a modern business needs flexibility in its operations, and how decisions made in different parts of the business, by people who focus on applications and data rather than infrastructure, require IT teams to respond in an application- and data-focused way.

Martin also discusses the types of organisations that can benefit from this NGDC way of thinking, and how in fact, it’s not about entire organisations, but about understanding where the opportunities for transformation exist, and delivering change there, be that an entire business, a single department or even a single application.

We also provide a word of caution and how it’s important to understand that not all our current applications and infrastructure are going to migrate to this brave new world of Next Generation Datacentres.

Next Generation Datacentre is not about a technology purchase, but is about understanding how to optimise the things we do, to meet our changing business needs and Martin provides some excellent insight into how we do that and the kind of areas we need to consider.

To find out more from Martin and from NetApp you can follow them in all the usual ways.

Their website Netapp.com

On twitter @NetApp @NetAppEMEA

You can also follow Martin @mr_coops

Martin also mentioned a selection of podcasts that often discuss next generation datacentre, you can find more details on those shows by clicking the links below.

SpeakingINTech

The Cloudcast

NetApp’s own TechONTAP podcast.

I hope you enjoyed the show, if you did and want to catch all future Tech Interviews episodes, then please subscribe and leave us a review in all of the normal places.

Subscribe on Android

SoundCloud

Listen to Stitcher

All Aboard the Data Train

The other night, Mrs Techstringy and I were discussing a work challenge. She works for a well-known charity, and one of her roles is to book locations for fundraising activities. On this occasion the team were looking at booking places at railway stations and considering a number of locations; however, all they really had to go on was a “gut feeling”.

As we discussed it, we did a bit of searching and came across this website http://www.orr.gov.uk/statistics/published-stats/station-usage-estimates, which contains footfall information for every UK railway station over the last 20 years. This was not only train-geek heaven, it also allowed us to use the available data to make a more informed choice and to introduce possibilities that otherwise would not have been considered.

This little family exercise was an interesting reminder of the power of data and how with the right analysis we can make better decisions.

Using data to make better decisions is hardly news, with the ever-increasing amounts of data we are collecting and the greater access to powerful analytics, machine learning and AI engines, all of us are already riding the data train taking us to a world of revolutionary ideas, aren’t we?

The reality is, that most of us are not, but why?

For many, especially with datasets gathered over many years, it’s hard: hard to package our data in such a way that we can easily present it to analytics engines and get something useful back.

But don’t let it stop you, there is potentially huge advantage to be had from using our data effectively, all we need is a little help to get there.

So what kind of steps can we take so we too can grab our ticket and board the data train?

Understand our data

The first step may seem obvious: understand our data. We need to know, where is it? What is it? Is it still relevant?

Without knowing these basics, it is going to be almost impossible to identify and package up the “useful” data.

The reality of data analytics is that we can’t just throw everything at it. Remember the old adage, garbage in, garbage out; it hasn’t changed. If we feed our data analytics elephant a lot of rubbish, we aren’t going to like what comes out the other end!

Triage that data

Once we’ve identified our data, we need to make sure we don’t feed our analytics engine a load of nonsense. It’s important to triage: throw out the stuff that no one ever looks at, the endless replication, the material of no business value. We all store rubbish in our datasets, things that shouldn’t be there in the first place, so weed it out. Otherwise, at best we are going to process irrelevant information; at worst we are going to skew the answers and make them worthless.
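A minimal sketch of that triage step in Python; the record fields ("id", "value") and the validity rules are invented for the example, but the pattern, drop incomplete records and duplicates before they reach the analytics engine, is the general one.

```python
# Sketch of pre-analytics triage: de-duplicate and drop records that
# fail basic sanity checks. Field names are illustrative only.

def triage(records):
    """Return de-duplicated records that pass basic validity checks."""
    seen = set()
    clean = []
    for rec in records:
        key = (rec.get("id"), rec.get("value"))
        if None in key:          # incomplete record: garbage in, garbage out
            continue
        if key in seen:          # endless replication: keep one copy
            continue
        seen.add(key)
        clean.append(rec)
    return clean
```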

Make it usable

This is perhaps the biggest challenge of all: how do we make our massive onsite datasets useful to an analytics engine?

Well, we could deploy an on-prem analytics suite, but for most of us this is unfeasible, and the reality is, why bother? Amazon, Microsoft, Google and IBM, to name but a few, have fantastic analytics services ready and waiting for your data; the trick is how to get it there.

The problem with data is that it has weight, gravity. It’s the thing in a cloud-led world that is still difficult to move around, and it’s not only its size that makes it tricky: our need to maintain control, meet security requirements and maintain compliance can all make moving our data into cloud analytics engines difficult.

This is where building an appropriate data strategy is important, we need to have a way to ensure our data is in the right place, at the right time, while maintaining control, security and compliance.

When looking to build a strategy that allows us to take advantage of cloud analytics tools, we have two basic options:

Take our data to the cloud

Taking our data to the cloud is more than just moving it there; it can’t just be a one-off copy. Ideally, in this kind of setup, we need to move our data in, keep it synchronised with changing on-prem data stores, and then move our analysed data back when we are finished, all with the minimum of intervention.
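One way to picture that synchronisation step is a content-hash comparison: upload only what is new or has changed since the cloud copy last saw it. A real pipeline would use a replication tool or cloud sync client; this sketch just shows the idea, and the data structures are assumptions for illustration.

```python
# Sketch of incremental sync: fingerprint on-prem files and compare
# against the hashes the cloud copy last recorded. Only new or
# changed files need uploading.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def files_to_sync(local: dict, cloud_index: dict) -> list:
    """local maps path -> bytes; cloud_index maps path -> last-seen hash.
    Return paths that are new or changed and need uploading."""
    return [path for path, data in local.items()
            if cloud_index.get(path) != fingerprint(data)]
```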

Bring the cloud to our data

Using cloud data services doesn’t have to mean moving our data to the cloud; we can bring the cloud to our data. Services like ExpressRoute into Azure or Direct Connect into AWS mean that we can get all the bandwidth we need between our data and cloud analytics services, while our data stays exactly where we want it: in our datacentre, under our control, and without the heavy lifting required to move it into a public cloud data store.

Maybe it’s even a mix of the two, depending on the requirement, size and type of dataset. What’s important is that we have a strategy, a strategy that gives us the flexibility to do either.

All aboard

Once we have our strategy in place and the technology to enable it, we are good to go. Well, almost: finding the right analytics tools, and deciding what to do with the results when we have them, are all part of the solution, but having our data ready is a good start.

That journey does have to start somewhere, so first get to know your data, understand what’s important and get a way to ensure you can present it to the right tools for the job.

Once you have that, step aboard and take your journey on the data train.

If you want to know more on this subject and are in or around Liverpool on July 5th, why not join me and a team of industry experts as we discuss getting the very best from your data assets at our North West Data Forum.

And for more information on getting your data ready to move to the cloud, check out a recent podcast episode I did with Cloud Architect Kirk Ryan of NetApp as we discuss the why’s and how’s of ensuring our data is cloud ready.

New fangled magic cloud buckets – Kirk Ryan – Ep32

Analysing the availability market – part two – Dave Stevens, Mike Beevor, Andrew Smith – Ep30

Last week I spoke with Justin Warren and Jeff Leeds at the recent VeeamON event about the wider data availability market. We discussed how system availability is more critical than ever and how, or maybe even if, our approaches were changing to reflect that. You can find that episode here: Analysing the data availability market – part one – Justin Warren & Jeff Leeds – Ep29.

In part two I’m joined by three more guests from the event as we continue our discussion. This week we look at how our data availability strategy cannot just be a discussion for the technical department and must be elevated into our overall business strategy.

We also look how technology trends are affecting our views of backup, recovery and availability.

First I’m joined by Dave Stevens of Data Gravity, as we look at how our backup data can be a source of valuable information, as well as a crucial part of helping us to be more secure and compliant with ever more stringent data governance rules.

We also look at how Data Gravity, in partnership with Veeam, have developed the ability to trigger smart backup and recovery; Dave gives a great example of how a smart restore can be used to quickly recover from a ransomware attack.

You can find Dave on Twitter @psustevens and find out more about Data Gravity on their website www.datagravity.com

Next I chat with Mike Beevor of HCI vendor Pivot3 about how simplifying our approach to system availability can be a huge benefit. Mike also makes a great point about how, although focussing on application and data availability is right, we must consider the impact on our wider infrastructure, because if we don’t we run the risk of doing more “harm than good”.

You can find Mike on twitter @MikeBeevor and more about Pivot 3 over at www.pivot3.com

Last but by no means least, I speak with Andrew Smith, Senior Research Analyst at IDC. We chat about availability as part of the wider storage market and how, over time, as vendors gain feature parity, their goal has to become adding extra value, particularly in areas such as security and analytics.

We also discuss how availability has to move beyond the job of the storage admin and become associated with business outcomes. Finally we look a little into the future and how a “multi cloud” approach is a key focus for business and how enabling this will become a major topic in our technology strategy conversations.

You can find Andrew’s details over on IDC’s website.

Over these two shows it has become clear, to me, that our views on backup and recovery are changing. The shift toward application and data availability is an important one, and as businesses we have to ensure that we elevate the value of backup, recovery and availability in our companies, making it an important part of our wider business conversations.

I hope you enjoyed this review. Next week is the last interview from VeeamON, and we go all VMware as I catch up with the hosts of VMware’s excellent Virtually Speaking Podcast, Pete Flecha and John Nicholson.

As always, if you want to make sure you catch our VMware bonanza, then subscribe to the show in the usual ways.


Analysing the data availability market – part one – Justin Warren & Jeff Leeds – Ep29

Now, honestly, this episode has not gone out today sponsored by British Airways, or in any way to take advantage of the situation that affected thousands of BA customers over the weekend; the timing is purely coincidental.

However, those incidents have made this episode quite timely, as they again highlight just how crucial technology is to our day-to-day activities as individuals and businesses.

As technology continues to be integral to pretty much everything we do, the recent events at BA and the disruption caused by WannaCrypt are examples of what happens when our technology is unavailable: huge disruption, reputational damage and financial impacts, as well as the stress it brings to the lives of both those trying to deal with the outage and those on the receiving end of it.

Last week I spoke with Veeam’s Rick Vanover (Remaining relevant in a changing world – Rick Vanover – Ep28) about how they were working to change their customers’ focus from backup and recovery to availability, ensuring that systems and applications were protected and available, not just the data they contained.

As part of my time at the recent VeeamON event, I also took the opportunity to chat with the wider IT community who attended, not just those charged with delivering availability and data protection, but also those who looked at the industry through a broader lens, trying to understand not just how vendors viewed availability, but also the general data market trends and whether businesses and end users were shifting their attitudes in reaction to those trends.

So over the next couple of weeks, I’ve put together a collection of those chats to give you a wider view of the availability market, how analysts see it and how building a stack of technologies can play a big part in ensuring that your data is available, secure and compliant.

First up, I speak with Justin Warren and Jeff Leeds.

Justin is a well-known industry analyst and consultant, as well as the host of the fantastic Eigencast podcast (if you don’t already listen, you should try it out). Justin is often outspoken, but always provides fascinating insight into the wider industry, and here he shares some thoughts on how the industry is maturing, how vendors and technologies are changing, and how organisations are changing, or perhaps not changing, to meet new availability needs.

You can follow Justin on twitter @jpwarren and do check out the fantastic Eigencast podcast.

Jeff Leeds was part of a big NetApp presence at the event, and I was intrigued as to why a storage vendor famed for their own robust suite of data protection and availability technologies should be such a supporter of a potential “competitor”.

However, Jeff shared how partnerships and complementary technologies are critical in building an appropriate data strategy, helping us all ensure our businesses remain on.

You can follow Jeff on twitter at @HMBcentral and find out more about NetApp’s own solutions over at www.netapp.com

I hope you enjoyed the slightly different format and next week we’ll dig more into this subject as I speak with Andrew Smith from IDC and technology vendors Pivot3 and Data Gravity.

To catch it, please subscribe in all the normal homes of Podcasts, thanks for listening.


Remaining relevant in a changing world – Rick Vanover – Ep28

One of the biggest challenges we face in technology is constant change. Change is not bad of course, but it presents challenges, from upgrading operating systems and applications, to integrating the latest technology advancements, to responding to new business problems and opportunities.

But it is not only those implementing and managing technology who are affected.

Technology vendors are equally affected; the IT industry is full of stories of companies who had great technologies but were then blindsided by a shift in the needs of their customer base, or a technology trend that they failed to anticipate.

It was with this in mind that I visited Veeam’s VeeamON conference.

Veeam are a technology success story: a vendor who arrived in the already established data protection market and shifted how people looked at it. They recognised the impact virtualisation was having on how organisations of all types were deploying their infrastructures, and how traditional protection technologies were failing to evolve to meet these new needs.

Veeam changed this, and that is reflected in their tremendous success over the last nine years; today they are a $600M+ company with hundreds of thousands of customers. But the world is now changing for them too: as we move more workloads to the cloud, as we want more value from our data, as security starts to impact every technology design decision, and of course as we all live ever more digitally focussed lives, our needs from our systems are changing hugely.

How are Veeam going to react to that? What are they going to do to continue the success they’ve had and to remain relevant in the new world into which much of their market is shifting?

For this week’s podcast, I look at that very question and discuss Veeam’s future with Rick Vanover, Director of Technical Product Marketing & Evangelism. Rick is a well-known industry face and voice, and we had an excellent conversation looking at Veeam’s future aims.

We discuss their repositioning as an availability company, look at how Veeam are developing a range of solutions to give them an availability platform and how this platform will allow their customers to build a strategy, to not only protect their critical data assets, in a range of different data repositories, but will also allow them to move their data seamlessly between them.

We also take a look at some of the big announcements from the show and pick out our top new features.

In my opinion, Veeam’s strategic vision is a good one, the ability to provide organisations with the data protection they need regardless of data location and the ability to move data between those locations is important, but, as ever, remaining relevant will be dictated by their ability to execute that vision.

Hope you enjoy the show.

To find more about Veeam you can of course check out their website www.veeam.com and engage with them on twitter @veeam and if you want to catch up with Rick, he can also be found on twitter @RickVanover.

Over the next couple of weeks we will be looking more at availability and protection, as we talk with the wider technology community as well as industry analysts on how they see the evolving data market.

To catch those shows then subscribe in all the normal ways.

Oh and I hope you like the new theme tune!

Thanks for listening.

Veeam On It – Day Two at Veeam ON

Day two of VeeamON is in the can, and it was a big day for their core product, the Veeam Availability Suite, with the announcement of version 10, delivering some key new functionality. There were also some smart additions to the wider Veeam platform family, but more on those at a later date.

Let’s start with Availability Suite V10, still very much at the core of what Veeam are delivering;

Physical Servers and NAS

While Veeam introduced the ability to back up physical servers with their free endpoint protection tool, V10 sees that capability more tightly integrated into the suite. This, along with the addition of agents for both Windows and Linux, strengthens their capabilities in the wider enterprise, allowing Veeam to move truly beyond virtual machine workloads.

NAS support is also a very welcome addition, allowing direct interaction with data housed on those enterprise storage repositories holding terabytes of unstructured data. Previously, in a Veeam world, the only way to protect that data was if it resided on a Windows file server, and for many of us that’s just not the case.

Although these are great additions, I don’t think I’m being overly harsh in suggesting they are “table stakes”: fleshing out the suite to capture as many potential data sources as possible and really bringing it in line with most of the enterprise data protection market.

But the announcements did more than just fill gaps, recognising critical business challenges and embracing key technology developments in how we store our data.

Continuous Data Protection

Some workloads in a business are a real challenge to protect, their availability is so critical to our business that they have the most stringent recovery point and time objectives, tolerating close to zero outages and data loss.

Often this is dealt with by the application design itself, taking advantage of clustering and multiple copies of data across the business (think SQL Server Always On and Exchange DAGs, for example). But what if your application doesn’t allow that? How do you protect that equally critical asset?

CDP is the answer. Currently limited to virtual machines hosted in a VMware environment (because it exploits specific VMware technologies), CDP provides a continuous backup of that key workload. In the event of a critical failure, not only can Veeam now make that workload quickly available again, but data loss will be only a matter of seconds, allowing us to meet the most stringent service levels for those critical applications.

Object and Archives

My personal favourite announcement is the addition of native object storage support in V10. Object storage is becoming the de-facto standard for storing very large datasets needing long term retention, it is the basis of storage for the public hyperscale providers such as Microsoft and Amazon.

The addition of native support, alongside new backup archiving capabilities, really starts to introduce the possibility of a backup fabric: on-prem production, to backup repository, off to cloud for cheap-and-deep long-term retention.

Delivering that without the need for large and expensive third-party cloud gateway appliances is a real plus.

The critical inclusion of S3 support also means that if you are already deploying any of the leading object storage platforms in your infrastructure, as long as they support S3 (and the leaders do), you can hook your Veeam data protection strategy straight in.

Veeam have certainly fleshed out version 10 nicely, adding some missing functionality, but also dealing with some tricky availability challenges, while embracing some of those emerging storage technologies.

And that’s just the Availability Suite, more to come on some of the wider announcements – but now, time for day 3…