Availability of all of the things – Michael Cade – Ep 60

Recently I wrote a blog post as part of a series exploring the importance of availability to a modern data platform, especially in a world where our reliance on technology is ever increasing, from the way we operate our businesses to the way we live our lives, and where digitally focussed businesses can no longer tolerate downtime, planned or unplanned, in the way they could even 5 years ago (you can read that post here).

So how do we mitigate against the evils of downtime? That’s simple: we build recovery and continuity plans to ensure that our systems remain on regardless of the events that go on around them, from planned maintenance to the very much unplanned disaster. But there’s the problem, these things aren’t simple, are they?

I’ve recently worked on a project where we’ve been doing exactly this, building DR and continuity plans in the more “traditional” way, writing scripts, policies and procedures to ensure that in the event of some kind of disaster the systems could be recovered quickly and meet stringent recovery time and point objectives. What this project reminded me of is how difficult these things are: keeping your documentation up to date, making sure your scripts are followed and ensuring you can fully test these plans is tricky.

With that in mind, the recent announcement from Veeam of their new Availability Orchestrator solution caught my attention. It is a solution that promises to automate and orchestrate not only the delivery of a DR solution, but also its documentation and testing; this was something I needed to understand more, and I thought I wouldn’t be the only one.

So that is the topic of this week’s podcast, as serial guest Michael Cade, Global Technologist at Veeam, joins me to provide an insight into Availability Orchestrator, what challenges it addresses, why Veeam thought it was important to develop and how it can help you deliver better availability to your critical systems.

During the show Michael shares some insight into understanding your availability gap and why today’s businesses cannot tolerate downtime of key systems, as well as the difficulties that come with maintaining a robust and appropriate strategy.

We explore the challenges of testing when the business doesn’t want downtime, how to keep track of all of the little tricks that our tech teams keep in their heads, and how to get that knowledge into a continuity plan.

We finish up looking at how Availability Orchestrator can help, by providing an automation and orchestration solution to automate the testing, documentation and execution of our continuity plans, and how it can also be a tool to help us build test and dev environments, as well as help us migrate to cloud platforms like VMware on AWS.

Availability Orchestrator, in my opinion, is a very powerful tool. Having just worked on a continuity and DR project, the challenges that come with manually maintaining these plans are still very fresh in my mind, and had this tool been available when I started that project it would certainly have been worthy of investigation into how it could help.

If you want to find out more about Veeam Availability Orchestrator, check out the Veeam website.

You can follow Michael on twitter @MichaelCade1

And if you’d like to read his blog series on Veeam replication you’ll find that on his blog site starting here.

Hope you’ve found the show useful.

Thanks for listening.


Managing the future – Dave Sobel – Ep59

As our IT systems become ever more complex, with more data, devices and ways of working, the demands on our systems, and on keeping them operating efficiently, grow. This in turn presents us and our IT teams with a whole new range of management challenges.

Systems management has always been a challenge for organisations. How do we keep on top of an ever-increasing number of systems? How do we ensure they remain secure and patched? And how do we cope with our users and their multitude of devices and ensure we can effectively look after them?

Like most of our technology, systems management is changing, but how? And what should we expect from future management solutions?

That’s the subject of this week’s podcast, as I’m joined by returning guest Dave Sobel. Dave is Senior Director of Community at SolarWinds MSP, working with SolarWinds partners and customers to ensure they deliver a great service.

As part of this role, Dave is also charged with looking at the future (not the distant future, but the near future of the next 2 years) of systems management and what these platforms need to include to continue to be relevant and useful.

Dave provides some excellent insight into the way the management market is shifting and some of the technology trends that will change and improve the way we control our ever more complex yet crucial IT systems.

We start by asking why looking at the future is such an important part of the IT strategist’s role; whether you are a CIO, an IT Director, or anyone who makes technology direction and strategy decisions, if you are not taking a look at future trends, it will seriously limit your ability to make good technology decisions.

We see why we need to rethink what we consider a “computer” and how this is leading to a proliferation of different devices with the emergence of the Internet of Things (IoT), as well as looking at why that is such a horrible phrase and how this is affecting our ability to manage.

We discuss the part Artificial Intelligence is going to play in future systems management as we try to supplement our overstretched IT staff and provide them with ways of analysing ever more data and turning it into something useful.

We also investigate increased automation, looking at how our management systems can be more flexible in supporting new devices as they are added to our systems, as well as being smarter in the way we can apply management to all of our devices.

Finally, we look at the move to human-centric management; instead of our systems being built to support devices, we need to be able to understand the person who uses the technology, and build our management and controls around them, allowing us to provide them with better management and, importantly, a better technology experience.

We wrap up looking at how smarter systems management is going to allow us to free our IT teams to provide increased value to the business, as well as looking at a couple of areas you can focus on today to start improving the way you manage your systems.

To find more from Dave you can follow him on twitter @djdaveet

You will find Dave’s blog here

I hope you found the chat as interesting as I did.

Until next time, thanks for listening.

Building a modern data platform – Control

In the first parts of this series we have looked at ensuring the building blocks of our platform are right so that our data is sitting on strong foundations.

In this part we look at bringing management, security and compliance to our data platform.

As our data, the demands we place on it and the amount of regulation controlling it continue to grow, gaining deep insight into how our data is used can no longer be a “nice to have”; it has to be an integral part of our strategy.

If you look at the way we have traditionally managed data growth you can see the basics of the problem: we have added file servers, storage arrays and cloud repositories as demanded, because adding more has been easier than managing the problem.

However, this is no longer the case. As we see our data as more of an asset we need to make sure it is in good shape; holding poor-quality data is not in our interest, the cost of storing it is no longer going unnoticed, and we can no longer go to the business every 12 months needing more. While I have no intention of making this a piece about the EU General Data Protection Regulation (GDPR), it, and regulation like it, is forcing us to rethink how we view the management of our data.

So what do I use in my data platforms to manage and control data better?



I came across Varonis and their data management suite about 4 years ago and this was the catalyst for a fundamental shift in the way I have thought about and talked about data, as it opened up brand new insights into how unstructured data in a business was being used and highlighted the flaws in the way people were traditionally managing it.

With that in mind, how do I start to build management into my data platform?

It starts by finding answers to two questions:

Who, Where and When?

Without understanding these points it will be impossible to properly build management into our platform.

If we don’t know who is accessing data how can we be sure only the right people have access to our assets?

If we don’t know where the data is, how are we supposed to control its growth, secure it and govern access?

And of course, when is the data accessed, or even, is it accessed at all? Let’s face it, if no one is accessing our data then why are we holding it at all?

What’s in it?

However, there are lots of tools that tell me the who, where and when of data access; that’s not really the reason I include Varonis in my platform designs.

While who, where and when are important, they do not include a crucial component: the what. What type of information is stored in my data?

If I’m building management policies and procedures I can’t do that without knowing what is contained in my data. Is it sensitive information like finances, intellectual property or customer details? And, as we look at regulation such as GDPR, knowing where we hold private and sensitive data about individuals is increasingly crucial.

Without this knowledge we cannot ensure our data and business compliance strategies are fit for purpose.
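
To make the “what” a little more concrete, below is a minimal Python sketch of the kind of content scanning a classification tool performs: walk a file share and flag files that appear to contain personal or financial information. The patterns, file types and share path are illustrative assumptions only; commercial classification engines such as Varonis use far richer rule sets and scan far more formats.

import re
from pathlib import Path

# Illustrative patterns only; real classification engines use far richer rule sets
PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_file(path):
    """Return the set of sensitive-data categories found in a text file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return set()
    return {name for name, pattern in PATTERNS.items() if pattern.search(text)}

def scan_share(root):
    """Walk a file share and report files that look like they hold sensitive data."""
    for path in Path(root).rglob("*.txt"):
        hits = classify_file(path)
        if hits:
            print("{}: {}".format(path, ", ".join(sorted(hits))))

if __name__ == "__main__":
    scan_share("/mnt/finance-share")  # hypothetical share path

Even a crude scan like this makes the point: until you know which files hold regulated or sensitive content, you can’t write meaningful policies around them.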

Building Intelligence into our system

In my opinion one of the most crucial parts of a modern data platform is the inclusion of behavioural analytics. As our platforms grow ever more diverse, complex and large, one of the common refrains I hear is “this information is great, but who is going to look at it, let alone action it?”; this is a very fair point and a real problem.

Behavioural analytics tools can help address this and supplement our IT teams. These technologies are capable of understanding and learning the normal behaviour of our data platform and, when those norms are deviated from, can warn us quickly and allow us to address the issue.

This kind of behavioural understanding offers significant benefits from knowing who the owners of a data set are to helping us spot malicious activity, from ransomware to data theft.

In my opinion this kind of technology is the only realistic way of maintaining security, control and compliance in a modern data platform.
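
As a simple illustration of the principle (and not a description of how any particular product implements it), the Python sketch below builds a per-user baseline of daily file-access volume from pre-aggregated audit data and flags any day that sits far outside that user’s own norm; a sudden spike in files touched is one of the classic signals of ransomware or bulk data theft. The sample data, threshold and statistics used are assumptions for the sake of the example.

from collections import defaultdict
from statistics import mean, stdev

# Assumed pre-aggregated audit data: (user, date, files accessed that day)
events = [
    ("alice", "2018-03-01", 42), ("alice", "2018-03-02", 38), ("alice", "2018-03-03", 45),
    ("alice", "2018-03-04", 40), ("alice", "2018-03-05", 900),  # suspicious spike
    ("bob", "2018-03-01", 120), ("bob", "2018-03-02", 110), ("bob", "2018-03-03", 130),
    ("bob", "2018-03-04", 125), ("bob", "2018-03-05", 118),
]

history = defaultdict(list)
for user, day, count in events:
    history[user].append((day, count))

def flag_anomalies(threshold=3.0):
    """Flag any day where a user's activity sits more than `threshold` standard
    deviations above a baseline built from that user's other days."""
    for user, days in history.items():
        for i, (day, count) in enumerate(days):
            baseline = [c for j, (_, c) in enumerate(days) if j != i]
            if len(baseline) < 3:
                continue  # not enough history to establish a norm
            avg, sd = mean(baseline), stdev(baseline)
            if sd > 0 and (count - avg) / sd > threshold:
                print("ALERT: {} accessed {} files on {} (baseline ~{:.0f})".format(
                    user, count, day, avg))

flag_anomalies()

The real value, of course, is that this kind of analysis runs continuously across millions of events and many behavioural signals, something no IT team could do by eye.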


As discussed in parts one and two, it is crucial the vendors who make up a data platform have a vision that addresses the challenges businesses see when it comes to data.

There should be no surprise, then, that Varonis’s strategy aligns very well with those challenges; they were one of the first companies I came across that delivered real forethought to the management, control and governance of our data assets.

That vision continues, with new tools and capabilities continually delivered, such as Varonis Edge and the recent addition of a new automation engine, which provides a significant enhancement to the Varonis portfolio: the tools now don’t only warn of deviations from the norm, but can also act upon them to remediate the threat.

All of this, tied in with Varonis’s continued extension of its integration with on-prem and cloud storage and service providers, ensures they will continue to play a significant role in bringing management to a modern data platform.

Regardless of whether you choose Varonis or not, it is crucial you have intelligent management and analytics built into your environment, because without it, it will be almost impossible to deliver the kind of data platform fit for a modern data-driven business.

You can find the other posts from this series below;

Part One – The Storage
Part Two – Availability

Straight as an Arrow – David Fearne & Richard Holmes – Ep58

If there is one thing we can say is a certainty in the technology industry, it is the constant state of change: how technology works, how we want to use it, where we want to use it and what we expect from it are constantly changing, and in reality ever more demanding.

For those of us who work in technology, either as IT pros or IT decision makers, this presents its own challenges: when we are planning our IT strategy, how do we know where to focus, what technology bets should we be taking, and what trends are others taking advantage of that we could bring into our organisation to help us improve our services?

One of the things I like to do in my role is spend time looking at technology predictions and listen to ideas from those in the industry tasked with defining the strategic direction of their businesses, not to judge whether they are right or wrong (predicting things in this industry is so very difficult) but to pick out trends and areas that are of interest to the work I do and then at least be aware of it and keep a watching brief on how it develops.

Keeping a watching brief gave me the idea for this week’s podcast as I catch up with two guests who produce an annual technology predictions blog and back that up with episodes on their own successful podcast where they look in more detail at those predictions.

David Fearne and Richard Holmes work for Arrow ECS, a global technology supplier and one of the world’s largest companies. David is Technical Director, charged with looking after the relationship with, and developing strategy for, over 100 different technology partners and suppliers. Richard is Business Development Director for Arrow’s Internet of Things (IoT) business. The gents also present the excellent Arrow Bandwidth podcast.

This week we look at their predictions from 2017, not to review whether they have been successful, but rather to focus on just a few areas of particular interest and look at how those areas have developed over the last 12 months and how we expect they will continue to shift.

We start by discussing data management and the concept of “data divorce”, and why, in a rapidly changing landscape, how we look after our data will become increasingly important. We also look at how, in a world that is removing barriers to our ability to collect more and more data, we manage that data, and importantly how we collect only the things that are relevant and of use to us and our organisations.

The second area we explore is data analytics and how we build into our businesses the ability to make data-driven decisions. We discuss the fact that all businesses make decisions based on data; however, how do we remove our human inefficiencies and, more importantly, bias when we look at data? How many of us make decisions based on someone’s “version of the truth”?

We also investigate the inhibitors to more of us embracing data analytics capabilities, capabilities that are increasingly available to us, particularly via providers like Microsoft, AWS and Google; the challenge isn’t a technology one, but more about how we get those tools into the hands of the right people and empower them.

We then turn to security and David’s assertion of a change in “security posture” and how it’s crucial that we rethink the way we look at the security of our systems. We discuss why “assuming breach” is an important part of that change. We also look at how, as the security problem becomes ever more complex, we continue to address it; is the answer to employ ever more security specialists?

We wrap up by discussing how each of these areas has a common thread running through it and how, as technology strategists, it is important that when making technology decisions we don’t focus on the technology but fully understand the business outcomes we are trying to achieve.

It’s a great chat with David and Richard and we could have discussed these trends for hours, luckily for you, it’s only 40 minutes!

Enjoy the Show.

You’ll find David and Richard’s full list of predictions from 2017 here – https://www.arrowthehub.co.uk/blog/posts/2017/february/what-are-the-hottest-technology-trends-of-2017-part-1/

You’ll also find the 2018 predictions here https://www.arrowthehub.co.uk/blog/posts/2018/january/what-are-the-hottest-technology-trends-for-2018-part-1/

If you’d rather listen, then check out the excellent Arrow Bandwidth podcast; you can find the episodes discussing all of last year’s predictions as well as this year’s in the following places: Tech Trends 2017 Part One, Tech Trends 2017 Part Two, Tech Trends 2018 Part One, Tech Trends 2018 Part Two.

If you’d like to keep up with David and Richard, you can find them both on twitter @davidfearne and @_Rich_Holmes.

Thanks for listening.

Building a modern data platform – Availability

In part one we discussed the importance of getting our storage platform right, in part two we look at availability.

The idea that availability is a crucial part of a modern platform was something I first heard from a friend of mine, Michael Cade from Veeam, who introduced me to “availability as part of digital transformation” and how this was changing Veeam’s focus.

This shift is absolutely right. Today, as we build our modern platforms, backup and recovery is still a crucial requirement; however, a focus on availability is at least as, if not more, crucial. Today nobody in your business really cares how quickly you can recover a system; what our digitally driven businesses demand is that our systems are always there, and downtime in ever more competitive environments is not tolerated.

With that in mind why do I choose Veeam to deliver availability to my modern data platform?

Keep it simple

Whenever I meet a Veeam customer their first comment on Veeam is “it just works”, and the power of this rather simple statement should not be underestimated when you are protecting key assets. Too often data protection solutions have been overly complex, inefficient and unreliable, and that is something I have always found unacceptable; for businesses big or small you need a data protection solution you can deploy and then forget, trusting it to just do what you ask. This is perhaps Veeam’s greatest strength, a crucial driver behind its popularity and what makes it such a good component part of a data platform.

I would actually say Veeam are a bit like the Apple of availability: much of what they do has been done by others (Veeam didn’t invent data protection, in the same way Apple didn’t invent the smartphone), but what they have done is make it simple and usable, something that just works and can be trusted. Don’t underestimate the importance of this.


If ever there was a byword for modern IT, flexibility could well be it; it’s crucial that any solution and platform we build has the flexibility to react to ever-changing business and technological demands. Look at how business needs for technology, and the technology itself, have changed in the last 10 years and how much our platforms have needed to change to keep up: flash storage, web-scale applications, mobility, cloud, the list goes on.

The following statement sums up Veeam’s view on flexibility perfectly

“Veeam Availability Platform provides businesses and enterprises of all sizes with the means to ensure availability for any application and any data, across any cloud infrastructure”

It is this focus on flexibility that makes Veeam such an attractive proposition in the modern data platform, allowing me to design a solution that is flexible enough to meet my different needs, providing availability across my data platform, all with the same familiar toolset regardless of location, workload type or recovery needs.


As mentioned in part one, no modern data platform will be built with just one vendor’s tools, not if you want to deliver the control and insight into your data that we demand as a modern business. Veeam, like NetApp, have built a very strong partner ecosystem allowing them to integrate tightly with many vendors, but more than just integrate, Veeam deliver additional value, allowing me to simplify and do more with my platform (take a look at this blog about how Veeam allows you to get more from NetApp snapshots). Veeam are continuously delivering new integrations, not only with on-prem vendors but also, as mentioned earlier, with a vast range of cloud providers.

This ability to extend the capabilities and simplify the integration of multiple components in a multi-platform, multi-cloud world is very powerful and a crucial part of my data platform architecture.


As with NetApp, over the last 18 months it has been the shift in Veeam’s overall strategy that has impressed me more than anything else; although seemingly a simple change, the shift from talking about backup and recovery to availability is significant.

As I said at the opening of this article, in our modern IT platforms nobody is interested in how quickly you can recover something, it’s about the availability of crucial systems. A key part of Veeam’s strategy is to “deliver the next generation of availability for the Always-On Enterprise” and you can see this in everything Veeam are doing: focussing on simplicity, ensuring that you can have your workload where you need it, when you need it, and move those workloads seamlessly between on-prem and cloud and back again.

They have also been very smart, employing a strong leadership team and, as with NetApp, investing in ensuring that cloud services don’t leave a traditionally on-premises focussed technology provider adrift.

The Veeam and NetApp strategies are very similar, and it is this similarity that makes them attractive components in my data platform. I need my component providers to understand technology trends and changes so they, as well as our data platforms, can move and change with them.

Does it have to be Veeam?

In the same way it doesn’t have to be NetApp, of course it doesn’t have to be Veeam. But in exactly the same way, if you are building a platform for your data, then make sure your platform components deliver the kinds of things we have discussed in the first two parts of this series: ensure that they provide the flexibility we need, integration with components across your platform and a strategic vision that you are comfortable with. As long as you have that, you will have rock-solid foundations to build on.

In Part Three of this series we will look at building insight, compliance and governance into our data platform.

You can find the Introduction and Part One – “The Storage” below.

The Introduction
Part One – The Storage



IT Pro’s and the Tech Community – Yadin Porter de Leon – Ep 57

One of my favourite parts of my role over the last few years has been my involvement in the tech community, whether that’s been working with advocacy groups like the NetApp A-Team, with local user groups like TechUG, presenting at a range of different community events or just answering questions in technical communities. All of these investments (and they are investments) have paid back: they’ve introduced me to great people, given me access to resources and expertise I would never have found normally, and opened up great opportunities for travel and to develop some great friendships.

We are fortunate to be part of an industry that does have a strong sense of community, full of people with shared interests and a passion for their subject, a passion they are often happy to share with anyone who’s interested.

One of the challenges with the tech community, however, is its size; whether you are new to it or already a part of it, it can be overwhelming and hard to know where to start. How do you find the resources you need, find out which events you can attend, or find out who the leaders are that you can engage with?

Last year I was invited to get involved in a project called “Level Up”, a project started by this week’s guest on the podcast, Yadin Porter de Leon. Yadin has been on the show before in his capacity at data protection company Druva; however, that’s not what we discuss this week, as we chat about the Level Up project, why he started it, the project’s aims and how it can help you in your career.

In this week’s episode we discuss why you may want to get involved in community, what benefits it can bring and how involvement in the wider community can benefit both you and your business, providing you with opportunities to develop your skills.

Yadin shares how one of the focuses of the project is to engage those who are not already involved in community and provide them with a way to get started.

We look at Level Up’s first project, the vTrail Map, a fantastic guide to the world of VMware and the virtualisation community, and we also look ahead to what’s next for the project and its longer-term aims.

We wrap up by asking Yadin about another project he is involved in, the excellent Tech Village Podcast, again focussed on career development and the technology business. It’s a great show which I’d recommend anyone gets on their regular podcast list; you can find the show on Soundcloud and follow it on twitter @TechVillagePod

For more information on Level Up, you can find them on twitter @Tech_LevelUp

You can also contact Yadin on twitter @porterdeleon

Hope you find the show interesting, and if you’re not already involved in the tech community maybe this will give you a bit of inspiration to get involved more; it’s most definitely worth it.

Thanks for listening.

Building a modern data platform – The Series – Introduction

For many of you who read my blog posts (thank you) or listen to the Tech Interviews Podcast (thanks again!) you’ll know talking about data is something I enjoy. It has played a significant part in my career over the last 20 years, but today data is more central than ever to what so many of us are trying to achieve.

In today’s modern world, however, storing our data is no longer enough; we need to consider much more. Yes, storing it effectively and efficiently is important, but so is its availability, security, privacy and, of course, finding ways to extract value from it. Whether that’s production data, archive or backup, we are looking at how we can make it do more (for examples of what I mean, read this article from my friend Matt Watts introducing the concept of Data Amplification Ratio) and deliver a competitive edge to our organisations.

To do this effectively means developing an appropriate data strategy and building a data platform that is fit for today’s business needs. This is something I’ve written and spoken about on many occasions, however, one question I get asked regularly is “we understand the theory, but how do we build this in practice, what technology do you use to build a modern data platform?”.

That’s a good question; the theory is all great and important, however seeing practical examples of how you deliver these strategies can be very useful. With that in mind I’ve put together this series of blogs to go through the elements of a data strategy and share some of the practical technology components I use to help organisations build a platform that will allow them to get the best from their data assets.

Over this series we’ll discuss how these components deliver flexibility, maintain security and privacy, and provide governance, control and insight, as well as interaction with hyperscale cloud providers to ensure you can exploit analytics, AI and Machine Learning.

So, settle back, and over the next few weeks I hope to provide some practical examples of the technology you can use to deliver a modern data strategy. Parts one and two are live now and can be accessed in the links below. The other links will become live as I post them, so do keep an eye out for them.

Part One – The Storage
Part Two – Availability
Part Three – Control

I hope you enjoy the series and that you find these practical examples useful. But remember, these are just some of the technologies I’ve used; they are not the only technologies available, and you certainly don’t have to use any of them to meet your data strategy goals. The aim of this series is to help you understand the art of the possible; if these exact solutions aren’t for you, don’t worry, go and find technology partners and solutions that are and use them to help you meet your goals.

Good Luck and happy building!

Coming Soon;

Part Four – What the cloud can bring

Part Five – out on the edges

Part Six – Exploiting the Cloud

Part Seven – A strategic approach

Building a modern data platform – The Storage

It probably isn’t a surprise to anyone who has read my blogs previously to find out that when it comes to the storage part of our platform, NetApp are still first choice, but why?

While it is important to get the storage right, getting it right is about much more than just having somewhere to store data; it’s important, even at the base level, that you can do more with it. As we move through the different elements of our platform we will look at other areas where we can apply insight and analytics; however, it should not be forgotten that there is significant value in having data services available at all levels of a data platform.

What are data services?

These services provide added capabilities beyond just a storage repository; they may provide security, storage efficiency, data protection or the ability to extract value from data. NetApp provide these services as standard with their ONTAP operating system, bringing considerable value regardless of whether data capacity needs are large or small. The ability to provide extra capabilities beyond just storing data is crucial to our modern data platform.

However, many storage providers offer data services on their platforms, often not as comprehensive as those provided in ONTAP, but they are there. So if that is the case, why else do I choose to use NetApp as the foundation of a data platform?

Data Fabric

“Data Fabric” is the simple answer (I won’t go into detail here, I’ve written about the fabric before, for example Data Fabric – What is it good for?). When we think about data platforms we cannot just think about them in isolation; we need considerably more flexibility than that. We may have data in our data centre on primary storage, but we may also want that data in another location, maybe with a public cloud provider, or we may want that data stored on a different platform, or in a different format altogether, object storage for example. However, to manage our data effectively and securely, we can’t afford for it to be stored in different locations that need a plethora of separate management tools, policies and procedures to ensure we keep control.

The “Data Fabric” is why NetApp continue to be the base storage element of my data platform designs. The key to the fabric is the ONTAP operating system and its flexibility, which goes beyond an OS installed on a traditional controller: ONTAP can be consumed as a software service within a virtual machine or from AWS or Azure, providing the same data services, managed by the same tools, deployed in all kinds of different ways, allowing me to move my data between these repositories while maintaining all of the same management and controls.

Beyond that, the ability to move data between NetApp’s other portfolio platforms, such as SolidFire and StorageGRID (their object storage solution), as well as to third-party storage such as Amazon S3 and Azure Blob, ensures I can build a complex fabric that allows me to place data where I need it, when I need it. The ability to do this while maintaining security, control and management with the same tools regardless of location is hugely powerful and beneficial.

APIs and Integration

When we look to build a data platform it would be ridiculous to assume it will only ever contain the components of a single provider; as we build through the layers of our platform, integration between those layers is crucial and plays a part in the selection of the components I use.

APIs are increasingly important in the modern datacentre as we look for different ways to automate and integrate our components. Again, this is an area where NetApp are strong, providing great third-party integrations with partners such as Microsoft, Veeam, VMware and Varonis (some of which we’ll explore in other parts of the series), as well as options to drive many of the elements of their different storage platforms via APIs so we can automate the delivery of our infrastructure.
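
To give a feel for what driving infrastructure through an API looks like in practice, here is a minimal Python sketch that provisions a volume by POSTing a description of it to a storage platform’s REST endpoint. The endpoint URL, payload fields and token shown are hypothetical placeholders rather than any specific vendor’s API, but the pattern of authenticate, describe the desired resource, submit and check the response is common to most modern storage platforms.

import json
import urllib.request

# Hypothetical endpoint and credentials; substitute your storage platform's real API
STORAGE_API = "https://storage.example.local/api/v1/volumes"
API_TOKEN = "replace-with-a-real-token"

def create_volume(name, size_gb, tenant):
    """Request a new volume from the storage platform's REST API."""
    payload = {
        "name": name,
        "size_gb": size_gb,
        "tenant": tenant,
        "snapshot_policy": "default",  # hypothetical field names
    }
    request = urllib.request.Request(
        STORAGE_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + API_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    result = create_volume("project_data", size_gb=500, tenant="projects")
    print("Provisioned volume: {}".format(result.get("name", "unknown")))

Wrap a handful of calls like this in your configuration management or pipeline tooling and the storage layer becomes just another piece of infrastructure you can deliver on demand.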

Can it grow with me?

One of the key reasons we need a more strategic view of data platforms is the continued growth of our data and the demands we put on it; therefore scalability and performance are hugely important when we choose the storage components of our platform.

NetApp deliver this across their portfolio. ONTAP allows me to scale a storage cluster up to 24 nodes, delivering huge capacity, performance and compute capability. The SolidFire platform, inspired by the needs of service providers, allows simple and quick scaling, with a quality of service engine that lets me guarantee performance levels for applications and data. And this is before we talk about the huge scale of the StorageGRID object platform or the fast and cheap capabilities of E-Series.

Crucially NetApp’s Data Fabric strategy means I can scale across these platforms providing the ability to grow my data platform as I need and not be restricted by a single technology.

Does it have to be NetApp?

Do you have to use NetApp to build a data platform? Of course not, but do make sure that whatever you choose as the storage element of your platform ticks the majority of the boxes we’ve discussed: data services, a strategic vision, the ability to move data between repositories and locations, and great integration, while ensuring your platform can meet the performance and scale demands you place on it.

If you can do that, then you’ll have a great start for your modern data platform.

In the next post in this series we’ll look at the importance of availability – that post is coming soon.

Click below to return to “The Intro”


Building a modern data platform – The Series – Introduction



Turning Up The Amp On Your Data – Matt Watts – Ep56

Wanting to get the very best from your data and “extracting value” from it seems to be a constant conversation with technology and business leaders in pretty much any organisation, but what does getting value from it mean and how do we go about it?

A couple of weeks ago I read a very interesting article from this week’s guest in which he introduced the concept of Data Amplification Ratio. The basic premise of the article was that one of the key ways to get more from your data is to ensure that the datasets you have can be presented to multiple different systems and services, all of which can add their own particular value and extract their own unique information from the data presented to them (you can read the whole article here: What is your data amplification ratio?). I thought the article presented a really good insight into the practicalities of getting the most from our data and wanted to get the author to share that insight with the Tech Interviews listeners.

That’s exactly what we do this week, as I’m joined by Matt Watts, Director Technology and Strategy for data management company NetApp, to explore this idea of Data Amplification further, what it means and what it could mean for those who can take advantage of their data to deliver new services, opportunities and value.

We explore the wide range of ideas that Matt covered in his article in a little more depth. We start by exploring what a “Data Amplification Ratio” is and why it’s important to focus on the right things if we want to make the most from our data.

We discuss the line between underlying storage “table stakes” and the things that allow us to do more with our data assets. We look at how the secret to unlocking your data is having the flexibility to present it to numerous different systems, services or people who can gain insight and information from it.

We also examine how a technical chat still has a place in a world where our technology investments are increasingly about delivering business outcomes and not about the technology itself. Matt also discusses the concept of a “data fabric” and how data mobility is going to be crucial for getting the very best from the data you have.

There is also a bit of Tech Interviews controversy as Matt shares his view on why one of the tech industry’s favourite phrases, “data is the new oil”, may not actually be true!

We wrap up by looking at what’s next for data amplification and how frequency and speed are the next challenges to overcome.

Matt as always shares some fantastic insights on the data industry and its direction.

To find out more from Matt you can read his latest article at watts-innovating.com and find him on twitter @mtjwatts

If you enjoyed the show, then why not subscribe, you’ll find Tech Interviews in all of the usual places.

Until next time, thanks for listening.

Hybrid Cloud It’s Just Like Lego – John Woodall – Ep55

As organisations we can see the benefit of cloud, and almost all organisations are taking advantage of it, be it a software service such as Office 365 or backup and DR; the efficiency and simplicity of deployment of these kinds of services make them an attractive option.

However, what we are not seeing, in general, is organisations moving 100% to a cloud-based IT infrastructure, this can be for many reasons, for example, suitability, complexity or cost.

But our desire to take advantage of cloud services is not going to diminish, especially as we look at how we start to get the best from our data with analytics, business intelligence and machine learning; often the only practical way to access these services is via the cloud.

With all this in mind we will see increasing numbers of organisations wanting to deploy a hybrid IT model, using cloud where appropriate and on-premises infrastructure as needed. If we are deploying a hybrid infrastructure, what does that mean to us, and what do we need to consider when we are designing our IT services and, importantly, preparing our data assets to operate in this hybrid world?

Recently I watched a video prepared by storage vendor NetApp called “What it takes to get your data hybrid cloud ready” in which John Woodall, VP Engineering at Integrated Archive Systems, provided insights on how to prepare your data and infrastructure to operate effectively in a hybrid world.

John covered some great information and I thought exploring in more detail the points he raises in the short video would make an interesting Tech Interviews episode, so that’s exactly what I’ve done.

This week John joins the show to discuss and share his experience on how to prepare your IT infrastructure to operate in a hybrid environment.

We start our chat by looking at what we mean by hybrid cloud and the importance it plays in a modern business’s IT strategy.

We also explore the challenges that come with building a hybrid model and why it’s crucial we don’t lose sight of who is responsible for the data that we share with any cloud service provider; remember, IT’S YOUR DATA.

We talk about the joys of finances and why, if you are thinking cloud is going to be a money saver, you may be looking at this in the wrong way. We also discuss the importance, before we head off on our cloud adventure, of fully appreciating exactly where we are right now.

John also shares some of the basics you need to get in place to not only ensure you get your strategy right, but that you can deploy something that is consistent, secure and extensible. We also get to talk about Lego and Minecraft!

We wrap up by looking at why you may want to consider hybrid, whether you are a traditional business with lots of on-prem IT, or even a “born in the cloud” company that until now has delivered everything via the cloud and may now want to bring some of that on-premises.

John has a real enthusiasm for deploying technology and helping customers get the best from their investment, and he shares that in this episode. If you want to find out more from John or about Integrated Archive Systems then you can in the following ways:

Follow John on Twitter @John_Woodall

Check out his company website at www.iarchive.com

Hope you enjoyed the show, until next time, thanks for listening.