The fast and the intelligent – Glenn Dekhayser & John Woodall – Ep83

It’s that most wonderful time of the year: yes, the last Tech Interviews of 2018. For this show, we focus on two of technology’s biggest trends, ultra-high performance and the world of artificial intelligence and machine learning.

At the core of these technology trends is data. Businesses of all types are now realising that there is value to be gained from using their data to drive change, be that delivering new services, increasing operational efficiency, or finding new ways to make money from the things that they do.

As organisations realise the value of their data, it inevitably leads to the question, “how can we use technology to help extract the value and information we need?”

That’s the focus of this episode. It features the last two chats from my recent tech conference tour, both of which look at technology that can help organisations of all types use their data more effectively to drive business change.

First, we look at ultra-high performance as I chat with Glenn Dekhayser, Field CTO at Red8, about NetApp’s new technology, MAX Data, and how it is changing the very nature of delivering extreme performance.

Glenn provides some insight into what MAX Data is and how it brings the storage industry close to its ultimate goal: ensuring that storage performance is never the bottleneck of any solution. We also discuss how this solution delivers extreme performance while greatly simplifying your infrastructure.

Importantly, Glenn also shares how the solution offers unparalleled performance at a very low cost, not only commercially but also by removing the need to refactor or modernise existing applications.

We wrap up by taking a look at some of the use cases for such a high-performance solution.

In the second part of the show, we turn to the technology industry’s favourite topics: artificial intelligence (AI) and machine learning (ML). John Woodall, VP of engineering at Integrated Archive Systems (IAS), joins me to talk about the industry in general and how NetApp, in association with NVIDIA, is the latest tech provider to offer businesses the chance to buy an “off the shelf” AI solution.

We talk about how the technology industry is finally starting to deliver something that technologists and scientists have spoken about for decades: the ability to build true AI and make it available in our daily lives. Beyond this NetApp solution (known as ONTAP AI), we look at the wider use of AI and ML. John shares some thoughts on why businesses are looking at these technologies and what they can deliver, but he also cautions on the importance of understanding the questions you want this kind of technology to answer before you get started.

We discuss the importance of not only knowing those questions but also ensuring we have the skills to know how to ask them. We also discuss why you may want to build an AI solution yourself as opposed to using the plethora of cloud services available to you.

We wrap up by looking at why it’s important to have AI platforms that allow you to start small. We also explore some of the use cases, from operational efficiency and increasing margins to finding new ways to monetise your business expertise, and, most importantly, why you should focus on business outcomes and not the technology.

There is no doubt that AI, ML and the desire for extreme performance are key parts of many technology strategies. It’s clear, from both these developments from NetApp and trends in the wider technology industry, that ways of meeting these business demands are becoming more widely available and, importantly, affordable for an increasingly wide range of businesses.

To find out more about these technologies you can visit NetApp’s website for MAX Data and ONTAP AI.

If you want to get in touch with or find out more about Glenn and John, you can find them both on Twitter: @gdekhayser and @John_Woodall.

This is the last show of 2018, so I would just like to thank you all for listening to Tech Interviews throughout the year and I hope you’ll be able to join me for more shows in 2019.

That just leaves me to wish you all a great Christmas holiday and, until next year, thanks for listening.

Intelligent, secure, automated, your data platform future

I was recently part of a workshop event where we discussed building “your future data platform”. During the session, I presented a roadmap of how a future data platform could look. The basis of the presentation, which focused on the relatively near future, was how developments from “data” vendors are allowing us to rethink the way we manage the data in our organisations.

What’s driving the need for change?

Why do we need to change the way we manage data? The reality is that the world of technology is changing extremely quickly, and at the heart of that change is our desire for data: creating it, storing it, analysing it and learning from it, and increasingly using it to drive business outcomes, shape strategies and improve customer experience.

Alongside this need to use our data more are other challenges, from increasing regulation to ever more complex security risks (see the recent Marriott Hotels breach of 500 million customer records), which are making further, unprecedented demands on our technology platforms.

Why aren’t current approaches meeting the demand?

What’s wrong with what we are currently doing? Why aren’t current approaches helping us to meet the demands and challenges of modern data usage?

As the demands on our data grow, the reality for many of us is that we have architected platforms that never considered these issues.

Let’s consider what happens when we place data onto our current platform.

We take our data, which may or may not be confidential (often we don’t know), and place it into our data repository. Once it’s there, how many of us know (see the sketch after this list):

  • Where it is?
  • Who owns it?
  • What does it contain?
  • Who is accessing it?
  • What’s happening to it?
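
To make that concrete, here is a minimal sketch, in Python, of the kind of per-file record a platform would need in order to answer those questions; the field names are hypothetical, chosen only to illustrate the idea:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class DataRecord:
        """Hypothetical per-file metadata answering the five questions above."""
        path: str                                       # where is it?
        owner: str                                      # who owns it?
        classification: str                             # what does it contain?
        access_log: list = field(default_factory=list)  # who is accessing it?
        events: list = field(default_factory=list)      # what's happening to it?

        def record_access(self, user: str) -> None:
            self.access_log.append((user, datetime.utcnow()))

        def record_event(self, action: str) -> None:
            self.events.append((action, datetime.utcnow()))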

In most cases, we don’t, and this presents a range of challenges, from management and security through to a reduced ability to compete with those who are effectively using their data to innovate and gain an advantage.

What if that platform could instead recognise the data as it was deposited? It would make sure the data landed in the right secure area, with individual file-level security that ensured it remained secure regardless of its location; the system would, if necessary, protect the file immediately (not when the next protection job ran); and it would track the use of that data from its creation to its deletion.

That would be useful, wouldn’t it?
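
As a rough illustration of that ingest flow, here is a hedged sketch in Python; the keyword-based classifier and the protect() and audit() helpers are hypothetical stand-ins for the far richer classification, snapshotting and audit services a real platform would provide:

    import shutil
    from pathlib import Path

    CONFIDENTIAL_MARKERS = ("ssn", "account", "passport")  # toy classification rules

    def classify(path: Path) -> str:
        """Toy content inspection; a real platform would use far richer analysis."""
        text = path.read_text(errors="ignore").lower()
        return "confidential" if any(m in text for m in CONFIDENTIAL_MARKERS) else "general"

    def protect(path: Path) -> None:
        """Stand-in for immediate protection (e.g. a snapshot or replica)."""
        shutil.copy2(str(path), str(path) + ".bak")

    def audit(action: str, path: Path, label: str) -> None:
        """Stand-in for lifecycle tracking from creation onwards."""
        print(f"{action}: {path} [{label}]")

    def ingest(path: Path, repo: Path) -> Path:
        label = classify(path)                 # recognise the data as it is deposited
        target = repo / label / path.name      # place it in the right area for its class
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), str(target))
        if label == "confidential":
            target.chmod(0o600)                # individual, owner-only file security
        protect(target)                        # protect immediately, not on a schedule
        audit("ingested", target, label)       # tracking starts at creation
        return target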

What do we need our modern platform to be?

As the amount of data, and the ways we want to use it, continue to evolve, our traditional approaches will not be able to meet the demands placed upon them, and we certainly cannot expect human intervention to cope: the data sets are too big, the security threats too wide-reaching and the compliance requirements ever more stringent.

However, that’s the challenge we need our future data platforms to meet: they need to be, by design, secure, intelligent and automated. The only way we are going to deliver this is with technology augmenting our efforts in education, process and policy, ensuring we use our data well and get the very best from it.

That technology needs to deliver this secure, intelligent and automated environment from the second it starts to ingest data; it needs to understand what we have and how it should be used. Importantly, it shouldn’t just be reactive, it has to be proactive: the minute new data is written, it applies intelligence, immediately securing, storing and protecting the data accordingly, and letting us fully understand its use throughout its lifecycle.
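
To show what “proactive” could look like in practice, here is a minimal sketch using Python’s watchdog library to act the moment a file is written; apply_policy() is a hypothetical placeholder for the classification, securing and protection steps described above, and the landing path is an assumption:

    import time

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    def apply_policy(path: str) -> None:
        # Hypothetical hook: classify, secure, protect and log the new file here.
        print(f"policy applied to {path}")

    class OnWrite(FileSystemEventHandler):
        def on_created(self, event):
            if not event.is_directory:
                apply_policy(event.src_path)  # intelligence applied the minute data lands

    observer = Observer()
    observer.schedule(OnWrite(), "/data/landing", recursive=True)  # assumed landing path
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()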

Beyond this, we also need to make sure that what we architect is truly a platform, something that acts as a solid foundation for how we want to use our data. Once we have our data organised, secure and protected, the platform must let us move it to the places we need it, allowing us to take advantage of new cloud services, data analytics tools, machine learning engines or whatever may be around the corner, while ensuring we continue to maintain control and retain insight into its use regardless of where it resides.
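
As a sketch of how such placement control might be expressed, here is a hypothetical mobility policy written as plain Python data, stating where each class of data may move; the class names and target services are invented for illustration:

    # Hypothetical mobility policy: control travels with the data.
    PLACEMENT_POLICY = {
        "confidential": {"allowed_targets": ["on_prem"]},
        "general": {"allowed_targets": ["on_prem", "cloud_analytics", "ml_engine"]},
    }

    def may_move(classification: str, target: str) -> bool:
        """Check whether data of a given class may move to a target service."""
        policy = PLACEMENT_POLICY.get(classification, {})
        return target in policy.get("allowed_targets", [])

    # Example: confidential data stays on premises, general data may go elsewhere.
    assert may_move("general", "cloud_analytics")
    assert not may_move("confidential", "cloud_analytics")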

These are the key elements of our future data platform, and ones we are going to need to consider if our data is to help our organisations make better decisions and provide better services, driven by better use of data.

How do we do it?

Of course, the question is: can this be done today, and if it can, how?

The good news is that much of what we need to do this is already available or coming very soon, which means that, realistically, within the next 6–18 months, if you have the desire, you can develop a strategy and build a more secure, intelligent and automated way of managing your data.

I’ve shared some thoughts here on why we need to modernise our platforms and what we need from them. In the next post, I’ll share a practical example of how you can build this kind of platform using tools that are available to you today or coming very shortly, to show that a future data platform is closer than you may think.