Getting on my bike for Marie Curie

This isn’t something I normally use this site for but I hope you won’t mind me making an exception on this occasion to share a challenge that Mrs Techstringy has “encouraged” me to join her on this year!

My wife works for the Marie Curie charity here in the UK. They do incredible work caring for people with terminal illness who require end of life care. As you can imagine, the work can be very challenging; helping people, and those close to them, deal with terminal illness is perhaps one of the most difficult circumstances anyone could face.

Through a range of services, from nursing support to hospice care, this incredible charity takes on that challenge daily, providing crucial services and support for those who need it. Every day of someone's life matters – from the first to the last – and the charity's role is to ensure that is the case.

All these services are provided free of charge, but of course they aren't free to provide. As Mrs Techstringy says, "Marie Curie is an amazing charity and working for them has given me an appreciation of just how much money needs to be raised for us to be able to continue to support as many people as possible".

Over the last five years my wife has supported several charities, primarily through cycling events – local, national, long and even longer rides, including one through Vietnam – raising thousands of pounds and acting as a constant source of inspiration to me as I've watched her take on these epic challenges.

After a year working with Marie Curie she knew her next challenge would be something to help support the work they do, so she has decided to take on the Prudential Ride London event, a 100 mile ride around the UK's capital city. Her inspirational example of taking on long cycling challenges to raise money for great causes has rubbed off, as she has "inspired" me to join her, and having never really done any long distance cycling, 100 miles seemed a sensible place to start!!

We are a couple of months away from the event and training is well underway. I rode my first 100km event a couple of weeks ago (well, thanks to some suspect measuring, 106km), we are taking on hilly midweek rides with a long ride at weekends and spending a bit of time down the gym on the cycling machines, and my bottom has developed the relevant resistance to time in the saddle, so with a few more weeks of training to go, we should be ready to take it on.

Why am I sharing this? Well, of course, not only has my wife's willingness to spend many hours in a bicycle seat inspired me to want to have a go, but the incredible work this fantastic charity does in providing end of life care has also inspired me to see if I can do a bit to financially support this great work.

How can techstringy.com readers and listeners to the Tech Interviews podcast help? Some heartfelt good lucks on this page or on the twitters would be wonderful, but of course what would really help Marie Curie is if you were able to contribute on our Just Giving Page to help us towards our target of £1100.

Every penny you donate will make a difference, so if you can help we would both really appreciate it, and if you can't, that's no problem – a good luck message will help with those hours in the saddle.

Thanks for letting me steal some space on my blog site to share this personal adventure and if you can help, that would be marvellous.

Right, where’s my bike!?

For more on Marie Curie and the amazing work they do visit mariecurie.org.uk

To find out more about the challenge visit the Prudential Ride London Event page.

If you can help us to support the charity financially then please visit our Just Giving Page.


Wrapping up VeeamON – Michael Cade – Ep 66

A couple of weeks ago in Chicago Veeam held their annual tech conference, VeeamON. It was one of my favourite shows from last year and unfortunately I couldn't make it out this time, but I did catch up remotely and shared my thoughts on some of the strategic messages that were covered in a recent blog post looking at Veeam's evolving data management strategy (Getting your VeeamON!).

That strategic Veeam message is an interesting one, and their shift from "backup" company to one focused on intelligent data management across multiple repositories is, in my opinion, exactly the right move to be making. With that in mind, I wanted to take a final look at some of those messages, as well as some of the other interesting announcements from the show, and that is exactly what we do on this week's podcast, as I'm joined by recurring Tech Interviews guest Michael Cade, Global Technologist at Veeam.

Michael, who not only attended the show but also delivered some great sessions, joins me to discuss a range of topics. We start by taking a look at Veeam's last 12 months and how they've started to deliver a wider range of capabilities that build on their virtual platform heritage with support for more traditional enterprise platforms.

Michael shares some of the thinking behind Veeam's goal to deliver an availability platform to meet the demands of modern business data infrastructures, be they on-prem, in the cloud, SaaS or service provider based. We also look at how this platform needs to offer more than just the ability to "back stuff up".

We discuss the development of Veeam's 5 pillars of intelligent data management, a key strategic announcement from the show, and how this can be used as a maturity model against which you can compare your own progress towards a more intelligent way of managing your data.

We look at the importance of automation in our future data strategies and how this is not only important technically, but also commercially as businesses need to deploy and deliver much more quickly than before.

We finish up by investigating the value of data labs and how crucial the ability to get more value from your backup data is becoming, be it for test, dev, data analytics or a whole range of other tasks, all without impacting your production platforms or letting the valuable resource that is your backup data go to waste.

Finally, we take a look at some of the things we can expect from Veeam in the upcoming months.

You can catch up on the event keynote on Veeam’s YouTube channel https://youtu.be/ozNndY1v-8g

You can also find more information on the announcements on Veeam’s website here www.veeam.com/veeamon/announcements

If you’d like to catch up with thoughts from the Veeam Vanguard team, you can find a list of them on twitter – https://twitter.com/k00laidIT/lists/veeam-vanguards-2018

You can follow Michael on twitter @MichaelCade1 and on his excellent blog https://vzilla.co.uk/

Thanks for listening.

Getting your VeeamON!

Recently software vendor Veeam held its 2018 VeeamON conference in Chicago. VeeamON was one of my favourite conferences of last year and unfortunately I couldn't make it out this time, but I did tune in for the keynote to listen to the new strategy messages that were shared.

The availability market is an interesting space at the minute, highlighted by the technical innovation and talent recruitment you can see from companies like Veeam, Rubrik and others. Similar to the storage industry of five years ago, the data protection industry is being forced to change its thinking, with backup, replication and recovery no longer enough to meet modern demands. Availability is now the primary challenge, and not just for the data in our datacentre but also for that sat with service providers, on SaaS platforms or with the big public hyperscalers – we need our availability strategy to cover all of these locations.

As with the storage industry when it was challenged by performance and the emergence of flash, two things are happening: new technology companies are emerging, offering different approaches and thinking to take on modern challenges that traditional vendors are not addressing, and that challenge also inspires the established vendors, with their experience, proven technologies, teams and budgets, to react and find answers to these new challenges – well, at least it encourages the smart ones.

This is where the availability industry currently sits and why the recent VeeamON conference was of interest. Veeam's position is interesting: a few years ago they were the upstart with a new way of taking on the challenge presented by virtualisation. However, as our world continues to evolve, so do the challenges – cloud, automation, security, governance and compliance are just a few of the availability headaches many of us face and Veeam must react to.

One of the things I like about Veeam (and one of the reasons I was pleased to be asked to be a part of their Vanguard program this year) is that they are a very smart company, some of the talent acquisition is very impressive and the shift in how they see themselves and the problem they are trying to solve is intriguing.

VeeamON 2018 saw a further development of this message as Veeam introduced their 5 stages of intelligent data management which sees them continue to expand their focus beyond Veeam “The backup company”. The 5 stages provide the outline of a maturity model, something that can be used to measure progress towards a modern way of managing data.

Of these 5 stages, many of us are on the left-hand side of the graph with a robust policy-based backup approach as the extent of our data management. However, for many this is no longer appropriate as our infrastructures become more complex, changing more rapidly with data stored in a range of repositories and locations.

This is coupled with a need to better understand our data for management, privacy and compliance reasons; we can no longer operate an IT infrastructure without understanding, at the very least, where our data is located and what that means for its availability.

In my opinion, modern solutions must provide us with a level of intelligence and the ability to understand the behaviour of our systems and act accordingly. This is reflected on the right-hand side of Veeam’s strategy, that to meet this modern challenge will demand increasingly intelligent systems that can understand the criticality of a workload or what is being done to a dataset and act to protect it accordingly.

Although Veeam aren't quite doing all of that yet, you can see the steps moving them along the way; solutions such as Availability Orchestrator, which takes the complexities of continuity and brings automation to its execution, documentation and ongoing maintenance, are good examples.

It's also important to note that Veeam understand they are not the answer to all of an organisation's data management needs; they are ultimately a company focussed on availability. What they do realise is that availability is crucial and goes far beyond just recovering lost data – this is about making sure data is available, "any data, any app, across any cloud" – and they see the opportunity in becoming the integration engine in the data management stack.

Is all this relevant? Certainly. A major challenge for most businesses I chat with is how to build an appropriate data strategy, one that usually includes only holding the data they need, knowing how it's been used and by whom, where it is at any given time and having it in the right place when needed, so they can extract "value" and make data driven decisions. This can only be achieved with a coherent strategy that ties together multiple repositories and systems, ensures that data is where it should be and maintains the management and control of that data across any platform that is required.

With that in mind Veeam's direction makes perfect sense, with the 5 stages of intelligent data management model providing a framework upon which you can build a data management strategy – hugely beneficial to anyone tasked with developing their organisation's data management platform.

In my opinion, Veeam's direction is well thought out and I'll be watching with interest not only how it continues to develop, but importantly how they deliver the tools and partnerships that allow those invested in their strategy to successfully execute it.

You can find more information on the announcements from VeeamON on Veeam’s website here www.veeam.com/veeamon/announcements

NetApp, The Cloud Company?

Last week I was fortunate enough to be invited to NetApp's HQ in Sunnyvale to spend two days with their leadership hearing about strategy, product updates and futures (under very strict NDA, so don't ask!) as part of the annual NetApp A-Team briefing session. This happened in a week where NetApp revealed their spring product updates which, alongside a raft of added capabilities for existing products, also included a new relationship with Google Compute Platform (GCP).

The GCP announcement means NetApp now offer services across the three largest hyperscale platform providers. Yes, that's right, NetApp, the "traditional" on-prem storage vendor, are offering an increasing number of cloud services, and what struck me while listening to their senior executives and technologists was that this is not just a faint nod to cloud, it is central to NetApp's evolving strategy.

But why would a storage vendor have public cloud so central to their thinking? It’s a good question and I think the answer lies in the technology landscape many of us operate in. The use of cloud is commonplace, its flexibility and scale are driving new technology into businesses more quickly and easily than ever before.

However, this comes with its own challenges. While quick and easy is fine for deploying services and compute, the same cannot be said of our data and storage repositories; not only does data continue to have significant "weight" but it also comes with additional challenges, especially when we consider compliance and security. It's critical in a modern data platform that our data has as much flexibility as the services and compute that need to access it, while at the same time allowing us to maintain full control and stringent security.

NetApp have identified this challenge as something upon which they can build their business strategy, and you can see evidence of this within their spring technology announcements, not only as they tightly integrate cloud into their "traditional" platforms, but also in the continued development of cloud native services such as those in the GCP announcement, the additional capabilities in AWS and Azure, as well as Cloud Volumes and services such as SaaS backup and Cloud Sync. It is further reflected in an intelligent acquisition and partnering strategy with a focus on those who bring automation, orchestration and management to hybrid environments.

Is NetApp the on-prem traditional storage vendor no more?

In my opinion this is an emphatic no. During our visit we heard from NetApp founder Dave Hitz, who talked about NetApp's view of cloud and how they initially realised it was something they needed to understand and decided to take a gamble on it and its potential. What was refreshing was that they did this without any guarantee they could make money from cloud, simply because they understood how potentially important it would be.

Over the last four years NetApp has been reinvigorated, with a solid strategy built around their data fabric and this strong cloud centric vision, which has not only seen share prices rocket but has also seen market share and revenue grow. That growth has not come from cloud services alone – in fact the majority is from strong sales of their "traditional" on-prem platforms – and they are convinced this growth has been driven by their embracing of cloud: a coherent strategy that looks to ensure your data is where you need it, when you need it, while maintaining all of the enterprise class qualities you'd expect on-prem, whether the data is in your datacentre, near the cloud or in it.

Are NetApp a cloud company?

No. Are they changing? Most certainly.

Their data fabric message, honed over the last four years, is now mature not only in strategy but in execution, with NetApp platforms, driven by ONTAP as a common transport engine, providing the capability to move data between platforms, be they on-prem, near the cloud or straight into the public hyperscalers, while crucially maintaining the high quality of data services and management we are used to within our enterprise across all of those repositories.

This strategy is core to NetApp and their success, and it certainly resonates with the businesses I speak with as they become more data focussed than ever, driven by compliance, cost or the need to garner greater value from their data. Businesses do not want their data locked away in silos, nor do they want it at risk when they move it to new platforms to take advantage of new tools and services.

While NetApp are not a cloud company, during the two days it seemed clear to me that their embracing of cloud puts them in a unique position when it comes to providing data services. As businesses look to develop their modern data strategy they would be, in my opinion, remiss not to at least understand NetApp's strategy and data fabric and the value that approach can bring, regardless of whether they ultimately use NetApp technology or not.

NetApp's changes over the last few years have been significant, their future vision is fascinating, and I for one look forward to seeing their continued development and success.

For more information on the recent spring announcements, you can review the following;

The NetApp official Press Release

Blog post by Chris Maki summarising the new features in ONTAP 9.4

The following NetApp blogs provide more detail on a number of individual announcements;

New Fabric Pool Capabilities

The new AFF A800 Platform

Google Compute Platform Announcement

Latest NVMe announcements

Tech ONTAP Podcast – ONTAP 9.4 Overview

 

 

Building a modern data platform – Control

In the first parts of this series we have looked at ensuring the building blocks of our platform are right so that our data is sitting on strong foundations.

In this part we look at bringing management, security and compliance to our data platform.

As our data, the demands we place on it and the amount of regulation controlling it continue to grow, gaining deep insight into how it is used can no longer be a "nice to have"; it has to be an integral part of our strategy.

If you look at the way we have traditionally managed data growth you can see the basics of the problem: we have added file servers, storage arrays and cloud repositories as demanded, because adding more has been easier than managing the problem.

However, this is no longer the case. As we see our data as more of an asset we need to make sure it is in good shape; holding poor quality data is not in our interest, the cost of storing it is no longer going unnoticed, and we can no longer go to the business every 12 months needing more. And while I have no intention of making this a piece about the EU General Data Protection Regulation (GDPR), it, and regulation like it, is forcing us to rethink how we view the management of our data.

So what do I use in my data platforms to manage and control data better?

Varonis


I came across Varonis and their data management suite about four years ago and it was the catalyst for a fundamental shift in the way I have thought and talked about data, as it opened up brand new insights into how unstructured data in a business was being used and highlighted the flaws in the way people were traditionally managing it.

With that in mind, how do I start to build management into my data platform?

It starts by finding answers to two questions;

Who, Where and When?

Without understanding this point it will be impossible to properly build management into our platform.

If we don’t know who is accessing data how can we be sure only the right people have access to our assets?

If we don’t know where the data is, how are we supposed to control its growth, secure it and govern access?

And of course, when is the data accessed, or even, is it accessed at all? Let's face it, if no one is accessing our data then why are we holding it at all?
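To make the who, where and when a little more concrete, here is a minimal sketch in plain Python (nothing to do with any particular product) that aggregates a handful of invented file access audit events to answer exactly those questions.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative only: a handful of audit events of the kind a file auditing
# tool might collect (user, path, action, timestamp). All values are invented.
events = [
    {"user": "alice", "path": r"\\fs01\finance\budget.xlsx", "action": "read",  "when": "2018-05-01T09:15"},
    {"user": "bob",   "path": r"\\fs01\finance\budget.xlsx", "action": "write", "when": "2018-05-02T14:02"},
    {"user": "alice", "path": r"\\fs01\hr\salaries.xlsx",    "action": "read",  "when": "2018-05-03T11:47"},
]

# Build a simple who/where/when view: which users touch each path and when it was last used.
summary = defaultdict(lambda: {"users": set(), "last_access": None})
for event in events:
    record = summary[event["path"]]
    record["users"].add(event["user"])
    accessed = datetime.fromisoformat(event["when"])
    if record["last_access"] is None or accessed > record["last_access"]:
        record["last_access"] = accessed

for path, record in summary.items():
    users = ", ".join(sorted(record["users"]))
    print(f"{path}: accessed by {users}, last used {record['last_access']:%Y-%m-%d}")
```

Data that shows no access at all in a view like this is exactly the data we should be questioning why we hold.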

What’s in it?

However, there are lots of tools that can tell me the who, where and when of data access; that's not really the reason I include Varonis in my platform designs.

While who, where and when is important, it misses a crucial component: the what. What type of information is stored in my data?

If I'm building management policies and procedures I can't do that without knowing what is contained in my data. Is it sensitive information like finances, intellectual property or customer details? And, as we look at regulation such as GDPR, knowing where we hold private and sensitive data about individuals is increasingly crucial.

Without this knowledge we cannot ensure our data and business compliance strategies are fit for purpose.
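As a rough illustration of what "knowing the what" involves, here is a minimal content classification sketch using a few simple regular expressions; the patterns are deliberately crude and purely illustrative, real classification engines such as Varonis use far richer rules, dictionaries and validation.

```python
import re

# Deliberately crude, illustrative patterns only.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk national insurance number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "possible payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text):
    """Return the labels of any sensitive data types spotted in the text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

sample = "Contact jane.doe@example.com, card 4111 1111 1111 1111"
print(classify(sample))  # e.g. {'email address', 'possible payment card number'}
```

Run something like this across every file share and you quickly see why automated classification, rather than manual review, is the only realistic option at scale.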

Building Intelligence into our system

In my opinion one of the most crucial parts of a modern data platform is the inclusion of behavioural analytics. As our platforms grow ever more diverse, complex and large, one of the common refrains I hear is "this information is great, but who is going to look at it, let alone action it?" – a very fair point and a real problem.

Behavioural analytics tools can help address this and supplement our IT teams. These technologies are capable of understanding and learning the normal behaviour of our data platform and, when those norms are deviated from, can warn us quickly and allow us to address the issue.

This kind of behavioural understanding offers significant benefits, from knowing who the owners of a data set are to helping us spot malicious activity, from ransomware to data theft.
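To illustrate the underlying idea (not how any particular product implements it), here is a minimal sketch that learns a per-user baseline of daily file modifications and flags a day that deviates sharply from it, one possible ransomware signal; the numbers and threshold are invented for the example.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's activity if it sits more than `threshold` standard
    deviations above this user's historical average."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return today > baseline
    return (today - baseline) / spread > threshold

# Daily counts of files modified by one user over recent days (invented data).
history = [42, 38, 51, 45, 40, 47, 44]
print(is_anomalous(history, today=46))    # False - within normal behaviour
print(is_anomalous(history, today=4800))  # True - a spike worth investigating, e.g. ransomware encrypting files
```

Real products learn far richer baselines (time of day, data touched, peer groups and so on), but the principle of comparing activity against a learned norm is the same.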

In my opinion this kind of technology is the only realistic way of maintaining security, control and compliance in a modern data platform.

Strategy

As discussed in parts one and two, it is crucial the vendors who make up a data platform have a vision that addresses the challenges businesses see when it comes to data.

There should be no surprise then that Varonis's strategy aligns very well with those challenges; they were one of the first companies I came across that brought real forethought to the management, control and governance of our data assets.

That vision continues, with new tools and capabilities continually delivered, such as Varonis Edge and the recent addition of a new automation engine, a significant enhancement to the Varonis portfolio; the tools now don't only warn of deviations from the norm but can also act upon them to remediate the threat.

All of this, tied in with Varonis' continued extension of its integration with on-prem and cloud storage and service providers, ensures they will continue to play a significant role in bringing management to a modern data platform.

Regardless of whether you choose Varonis or not, it is crucial you have intelligent management and analytics built into your environment; without it, it will be almost impossible to deliver a data platform fit for a modern, data driven business.

You can find the other posts from this series below;

Introduction
Part One – The Storage
Part Two – Availability

Building a modern data platform – The Series – Introduction

For many of you who read my blog posts (thank you) or listen to the Tech Interviews Podcast (thanks again!), you'll know talking about data is something I enjoy. It has played a significant part in my career over the last 20 years, but today data is more central than ever to what so many of us are trying to achieve.

In today's modern world, however, storing our data is no longer enough; we need to consider much more. Yes, storing it effectively and efficiently is important, but so is its availability, security and privacy, and of course finding ways to extract value from it. Whether that's production data, archive or backup, we are looking at how we can make it do more (for examples of what I mean, read this article from my friend Matt Watts introducing the concept of Data Amplification Ratio) and deliver a competitive edge to our organisations.

To do this effectively means developing an appropriate data strategy and building a data platform that is fit for today's business needs. This is something I've written and spoken about on many occasions; however, one question I get asked regularly is "we understand the theory, but how do we build this in practice, what technology do you use to build a modern data platform?".

That's a good question; the theory is all great and important, however seeing practical examples of how you deliver these strategies can be very useful. With that in mind I've put together this series of blogs to go through the elements of a data strategy and share some of the practical technology components I use to help organisations build a platform that will allow them to get the best from their data assets.

Over this series we’ll discuss how these components deliver flexibility, maintain security and privacy, provide governance control and insights, as well as interaction with hyperscale cloud providers to ensure you can exploit analytics, AI and Machine Learning.

So, settle back, and over the next few weeks I hope to provide some practical examples of the technology you can use to deliver a modern data strategy. Parts one and two are live now and can be accessed in the links below. The other links will become live as I post them, so do keep an eye out for them.

Part One – The Storage
Part Two – Availability
Part Three – Control
Part Four – Prevention (Office365)

I hope you enjoy the series and that you find these practical examples useful. Remember, these are just some of the technologies I've used; they are not the only technologies available and you certainly don't have to use any of them to meet your data strategy goals. The aim of this series is to help you understand the art of the possible – if these exact solutions aren't for you, don't worry, go and find technology partners and solutions that are and use them to help you meet your goals.

Good Luck and happy building!

Coming Soon;

Part Five – out on the edges

Part Six – Exploiting the Cloud

Part Seven – A strategic approach

Building a modern data platform – The Storage

It probably isn't a surprise to anyone who has read my blogs previously to find out that when it comes to the storage part of our platform, NetApp are still first choice, but why?

While it is important to get the storage right, getting it right is about much more than just having somewhere to store data; it's important, even at the base level, that you can do more with it. As we move through the different elements of our platform we will look at other areas where we can apply insight and analytics, however it should not be forgotten that there is significant value in having data services available at all levels of a data platform.

What are data services?

These services provide added capabilities beyond just a storage repository; they may provide security, storage efficiency, data protection or the ability to extract value from data. NetApp provide these services as standard with their ONTAP operating system, bringing considerable value regardless of whether data capacity needs are large or small, and the ability to provide extra capabilities beyond just storing data is crucial to our modern data platform.

However, many storage providers offer data services on their platforms – not often as comprehensive as those provided in ONTAP, but they are there. So if that is the case, why else do I choose to use NetApp as the foundation of a data platform?

Data Fabric

"Data Fabric" is the simple answer (I won't go into detail here, I've written about the fabric before, for example Data Fabric – What is it good for?). When we think about data platforms we cannot just think about them in isolation; we need considerably more flexibility than that. We may have data in our data centre on primary storage, but we may also want that data in another location, maybe with a public cloud provider, or stored on a different platform or in a different format altogether, object storage for example. However, to manage our data effectively and securely, we can't afford for it to be stored in different locations that need a plethora of separate management tools, policies and procedures to ensure we keep control.

The "Data Fabric" is why NetApp continue to be the base storage element of my data platform designs. The key to the fabric is the ONTAP operating system and its flexibility, which goes beyond an OS installed on a traditional controller. ONTAP can be consumed as a software service within a virtual machine or from AWS or Azure, providing the same data services, managed by the same tools, deployed in all kinds of different ways, allowing me to move my data between these repositories while maintaining all of the same management and controls.

Beyond that, the ability to move data between NetApp's other portfolio platforms, such as Solidfire and StorageGrid (their object storage solution), as well as to third party storage such as Amazon S3 and Azure Blob, ensures I can build a fabric that allows me to place data where I need it, when I need it. The ability to do this while maintaining security, control and management with the same tools, regardless of location, is hugely powerful and beneficial.


APIs and Integration

When we look to build a data platform it would be ridiculous to assume it will only ever contain the components of a single provider, and as we build through the layers of our platform, integration between those layers is crucial and does play a part in the selection of the components I use.

APIs are increasingly important in the modern datacentre as we look for different ways to automate and integrate our components. Again, this is an area where NetApp are strong, providing great third party integrations with partners such as Microsoft, Veeam, VMware and Varonis (some of which we'll explore in other parts of the series), as well as options to drive many of the elements of their different storage platforms via APIs so we can automate the delivery of our infrastructure.
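As a flavour of what API-driven infrastructure automation can look like, here is a minimal sketch that requests a new volume over REST; the endpoint, credentials and payload are entirely hypothetical rather than any specific vendor's API, as each platform documents its own.

```python
import requests  # third-party HTTP client: pip install requests

# Entirely hypothetical endpoint and credentials - this only illustrates the automation pattern.
API_BASE = "https://storage.example.local/api"
AUTH = ("admin", "secret")  # in practice, pull credentials from a secrets store

def create_volume(name, size_gb, tier="standard"):
    """Ask a (hypothetical) storage management API to provision a volume."""
    payload = {"name": name, "size_gb": size_gb, "tier": tier}
    response = requests.post(f"{API_BASE}/volumes", json=payload, auth=AUTH, timeout=30)
    response.raise_for_status()  # fail loudly if the platform rejects the request
    return response.json()

if __name__ == "__main__":
    volume = create_volume("project_data", size_gb=500)
    print("Provisioned:", volume)
```

Wrap calls like this into your orchestration or CI tooling and storage stops being a ticket-driven bottleneck and becomes just another part of the automated infrastructure.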

Can it grow with me?

One of the key reasons we need a more strategic view of data platforms is the continued growth of our data and the demands we put on it; therefore scalability and performance are hugely important when we choose the storage components of our platform.

NetApp deliver this across their portfolio. ONTAP allows me to scale a storage cluster up to 24 nodes, delivering huge capacity, performance and compute capability. The Solidfire platform, inspired by the needs of service providers, allows simple and quick scaling, with a quality of service engine that lets me guarantee performance levels for applications and data. And that is before we talk about the huge scale of the StorageGrid object platform or the fast and cheap capabilities of E-Series.

Crucially NetApp’s Data Fabric strategy means I can scale across these platforms providing the ability to grow my data platform as I need and not be restricted by a single technology.

Does it have to be NetApp?

Do you have to use NetApp to build a data platform? Of course not, but do check that whatever you choose as the storage element of your platform can tick the majority of the boxes we've discussed – data services, a strategic vision, the ability to move data between repositories and locations, and great integration – while ensuring your platform can meet the performance and scale demands you have of it.

If you can do that, then you’ll have a great start for your modern data platform.

In the next post in this series we'll look at the importance of availability – that post is coming soon.

Click below to return to “The Intro”

 

Building a modern data platform – The Series – Introduction

 

 

NetApp Winning Awards, Whatever Next?

In the last couple of weeks I've seen NetApp pick up a couple of industry awards, with the all flash A200 earning the prestigious Storage Review Editor's Choice as well as CRN UK's Storage Vendor of the Year 2017. This, alongside commercial successes (How NetApp continue to defy the performance of the storage market), is part of a big turnaround in their fortunes over the last three years or so, but why? What is NetApp doing to garner such praise?

A bit of disclosure, as a Director at a long-term NetApp Partner, Gardner Systems, and a member of the NetApp A-Team advocacy programme, I could be biased, but having worked with NetApp for over 10 years, I still see them meeting our customers’ needs better than any other vendor, which in itself, also suggests NetApp are doing something right.

What is it they're doing? In this post, I share some thoughts on what I believe are key parts of this recent success.

Clear Strategy

If we wind the clock back four years, NetApp's reputation was not at its best. Tech industry analysts presented a bleak picture: the storage industry was changing, with public cloud storage and innovative start-ups offering to do more than those "legacy" platforms, and in many cases they could. NetApp were a dinosaur on the verge of extinction.

Enter the Data Fabric, first announced at NetApp’s technical conference, Insight, in 2014. Data Fabric was the beginning of NetApp’s move from a company focussed on storing data to a company focused on the data itself. This was significant as it coincided with a shift in how organisations viewed data, moving away from just thinking about storing data to managing, securing, analysing and gaining value from it.

NetApp's vision for data fabric closely aligned with the aims of more data focussed organisations and also changed the way they thought about their portfolio – less worried about speeds and feeds and flashing lights, and more about how to build a strategy that was focussed on data in the way their customers were.

It is this data-driven approach that, in my opinion, has been fundamental in this change in NetApp’s fortunes.

Embrace the Cloud

A huge shift, and something that is taking both customers and industry analysts by surprise, is the way NetApp have embraced the cloud – not a cursory nod, but cloud as a fundamental part of the data fabric strategy, and this goes way beyond "cloudifying" existing technology.

ONTAP Cloud seamlessly delivers the same data services and storage efficiencies into the public cloud as you get with its on-prem cousin, providing a unique ability to maintain data policies and procedures across your on-prem and cloud estates.

But NetApp has gone beyond this, delivering native cloud services that don't require any traditional NetApp technologies. Cloud Sync allows the easy movement of data from on-prem NFS datastores into the AWS cloud, while Cloud Control provides a backup service for Office365 (and now Salesforce), bringing crucial data protection functionality that many SaaS vendors do not provide.

If that wasn't enough, there is the recently announced relationship with Microsoft, with NetApp now powering the Azure NFS service – yep, that's right, if you take the NFS service from the Azure marketplace it is delivered fully in the background by NetApp.

For a storage vendor, this cloud investment is unexpected, but a clear cloud strategy is also appealing to those making business technology decisions.

Getting the basics right

With these developments, it’s clear NetApp have a strategy and are expanding their portfolio into areas other storage vendors do not consider, but there is also no escaping that their main revenue generation continues to come from ONTAP and FAS (NetApp’s hardware platform).

If I’m buying a hardware platform, what do I want from it? It should be robust with strong performance and a good investment that evolves with my business and if NetApp’s commercial success is anything to go by, they are delivering this.

The all-flash NetApp platforms (such as the award winning A200 mentioned earlier) are meeting this need: robust, enterprise-level platforms that allow organisations to build an always-on storage infrastructure that scales seamlessly with new business demands. Six-year flash drive warranties and the ability to refresh your controllers after three years also give excellent investment protection.

It is not just the hardware, however; these platforms are driven by software. NetApp's ONTAP operating system is like any other modern software platform, with regular code drops (every six months) delivering new features and improved performance to existing hardware via a non-disruptive software upgrade, providing businesses with the ability to "sweat" their hardware investment over an extended period, which in today's investment sensitive market is hugely appealing.

Have an interesting portfolio

NetApp for a long time was the FAS and ONTAP company, and while those things are still central to their plans, their portfolio is expanding quickly. We've discussed the cloud focussed services; there's also Solidfire with its unique scale and QoS capabilities, Storage Grid, a compelling object storage platform, and Alta Vault, a gateway to move backup and archive data into object storage on-prem or in the cloud.

Add to this the newly announced HCI platform and you can see how NetApp can play a significant part in your next-generation datacentre plans.

For me the awards I mentioned at the beginning of this article are not because of one particular solution or innovation, it's the data fabric; that strategy is allowing NetApp, its partners and customers to have a conversation that is data and not technology focussed, and having a vendor who understands that is clearly resonating with customers, analysts and industry influencers alike.

NetApp’s continued evolution is fascinating to watch, and they have more to come, with no doubt more awards to follow, whatever next!

Going to gain some Insight – What I’m looking forward to from NetApp Insight 2017

This week I’m in Berlin at NetApp’s data management conference Insight.

It is always a great chance to catch up with industry friends and to hear from leaders in the data industry and a range of technology companies about the strategic direction that NetApp and the data management industry are taking.

With 4 days ahead in Berlin, what are the things I’m hoping to hear about at Insight 2017?

Extending the fabric

If you've read any of my blogs on data strategy in the past you'll be familiar with NetApp's data fabric concept. The fabric was developed to break down the data silos we have become used to and enable us to build a strategy that lets us simply and freely move data between any repository, be that on-prem, software-defined, in the cloud or near the cloud, while maintaining all of the security, management and control of our data that we have grown used to on-prem.

Today the data fabric is much more than a strategic aim as it is now practically delivered across much of the NetApp portfolio and I’ll be paying attention to how this continues to evolve.

Gaining understanding of our data

This is the next step for “storage” companies, especially those, like NetApp, who are repositioning themselves as data management companies.

Long gone are the days when we just want somewhere to store our data. You have to remember that not only is "storing boring", it also does not serve us well. Whether you are concerned about governance and security, or how to extract value from your data, these can only come with a full understanding of where your data is, what it contains, and who accesses it and when. All are critical in a modern data strategy and I'll be interested in how NetApp is allowing us to gain more understanding.

Securing all of the data things

Nothing is higher on the priority list for CIOs and those making business decisions than the security of our business data (well, it should be high on the priority list). I'm keen to see how NetApp build on what they currently have (encryption, data security policies, APIs for 3rd party security vendors) to fully secure and understand the data within an environment.

I'll also be interested to hear more about the changes the data industry continues to make, not only to enable us to secure our data against the ever-evolving security challenge but also to help us meet increasing compliance and privacy demands.

Analysing the stuff

I fully expect to hear more about how data continues to be the new oil, gold etc. As marketing based as this messaging is, it is not without validity; I constantly speak with business decision makers who are eager to understand how they can use the data they own and collect to gain a business advantage.

NetApp has made some interesting moves in this space, with integrated solutions with Splunk and the Cloud Connect service allowing easy movement of data into AWS analytics tools.

It will be interesting to see how this evolves and how NetApp can ensure the data fabric continues to extend so we can take advantage of the ever-growing range of analytics tools that allow us to gain value from our data sets.

Integrating all of the things

NetApp has long innovated in the converged infrastructure market, with their joint Cisco solution Flexpod.

However, this market continues to evolve with the emergence of hyper-converged infrastructure (HCI), where companies like Nutanix and Simplivity (now owned by HPE) have led the way. Up to now, though, I have the feeling HCI is only scratching the surface by taking infrastructure – servers, storage and networking – and squeezing it into a smaller box. In my opinion what's missing is the software and automation to allow us to use HCI to deliver the software-defined architectures many are striving for.

It is this that is beginning to change. VMware and Microsoft, amongst others, are bringing us more tightly integrated software stacks, abstracting hardware complexity and letting us drive infrastructure fully in software, bringing that cloud like experience into the on-prem datacentre.

It is these software stacks that really start to make HCI an interesting platform; marrying this simplified hardware deployment method with automated, software driven infrastructure has the potential to be the future of on-prem datacentres.

I’ll certainly be keeping an eye on NetApp’s new HCI platform and how that will allow us to continue to simplify and automate infrastructure so we can deliver a flexible, scalable, agile IT into our businesses.

What else will I be up to?

Many of you know I’m proud to be a part of the NetApp A-Team, and this association has also made Insight a very different proposition from a couple of years ago.

For the first time I’ll be part of a couple of sessions at the event, feel free to come and check them out and say hello;

You’ll find me doing session 18345-1-TT – Ask the A-Team – Cloud and Possibilities with NetApp Data Fabric and 18348-2 – From the Beginning – Becoming a service provider.

I'll also be hosting the pop-up tech talks sessions – if you want to come and meet up and chat (on camera) about your views of NetApp or the data market in general, why not come find me.

And lastly, I’ll be appearing on The Cube as they broadcast live from Berlin giving in-depth coverage of Insight.

I’ll be discussing HCI platforms on Tuesday 14th at 2.30, you’ll find that broadcast on thecube.net

If you’re at Insight, do come say hello or hook up on the twitters @techstringy

And let’s discuss if you too have gained any insights.

Look out for more blogs and podcasts from Insight 2017 over the coming weeks.

Chaining the blocks, a Blockchain 101 – Ian Moore – Ep46

As the world continues to "transform" and become more digitally driven, the inevitable also has to happen: systems that support our day to day processes start to become outdated, inefficient and ineffective for a world that needs to move more quickly and in different ways.

One such innovation that is gathering momentum is the use of blockchain, and it is starting to have a major disruptive impact on the way many traditional transactions are done, with current mechanisms often being slow, inefficient and vulnerable to compromise, as well as, in many cases, especially with financial transactions, suffering from a lack of trust in the existing methods.

But what exactly is blockchain? Like many people, it's a technical term I'm familiar with, but I don't fully understand how it works, why it's relevant, how it is impacting business right now, or its potential future applications.

If you are like me, interested in the technology and wanting to know more, then maybe this week's podcast episode is for you, as I'm joined by Ian Moore to provide a beginner's guide to blockchain, a blockchain 101 no less.

Ian is not a blockchain expert, but certainly is an enthusiast and the perfect person to introduce the concept and provide a good overview of the technology. In his day job, Ian works for IBM in their data division.

During our conversation, he introduces us to the importance of ledgers and how the four key blockchain tenets of consensus, provenance, immutability and finality allow blockchain transactions to be quick, secure and trusted.
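For a feel of how the immutability tenet works in practice, here is a toy sketch of a hash-chained ledger: each block records the hash of the block before it, so tampering with any historic transaction breaks the chain. Real blockchains add distributed consensus and much more; this is illustration only.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's full contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transaction):
    """Append a block whose identity depends on everything recorded before it."""
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"transaction": transaction, "previous_hash": previous})

def is_valid(chain):
    """The ledger only verifies if every stored link matches a recomputed hash."""
    return all(chain[i]["previous_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger = []
add_block(ledger, "Alice pays Bob 10")
add_block(ledger, "Bob pays Carol 4")
print(is_valid(ledger))   # True

ledger[0]["transaction"] = "Alice pays Bob 1000"   # tamper with history
print(is_valid(ledger))   # False - the chain no longer verifies
```

Because every participant can recompute those hashes for themselves, the provenance and finality of each transaction can be trusted without relying on a single central authority.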

We also discuss how the pace of digital transformation is demanding improvements in speed and efficiency, how transactions that used to take weeks are no longer acceptable, and how blockchain takes those long, slow processes and does them almost instantly.

Ian also shares some great use cases, as well as outlining the basic requirements needed for a blockchain. We wrap up by discussing possible future uses for this technology approach and how blockchain will do for transactions what the Internet has done for communications.

Ian provides us with an excellent introduction to blockchain. To find out more on this topic and how it may impact your business, IBM has some great resources on their blockchain page here https://www.ibm.com/blockchain/what-is-blockchain.html

You can find out more from Ian on twitter @Ian_DMoore

I also mentioned during the show another fascinating blockchain introduction podcast, where Stephen Foskett joins Yadin Porter De Leon on the Intech We Trust podcast, you can find that show here https://intechwetrustpodcast.com/e/130-on-the-blockchain/

I hope you enjoyed the show. To catch future episodes you can subscribe on iTunes, Soundcloud and Stitcher, as well as other good homes of podcasts.