Featured

Getting on my bike for Marie Curie

This isn’t something I normally use this site for, but I hope you won’t mind me making an exception on this occasion to share a challenge that Mrs Techstringy has “encouraged” me to join her on this year!

My wife works for the Marie Curie charity here in the UK; they do incredible work caring for those with terminal illness who require end of life care. As you can imagine, the work can be very challenging; helping people, and those close to them, deal with terminal illness is perhaps one of the most challenging circumstances anyone could face.

Through a range of services, from nursing support to hospice care, this incredible charity takes on that challenge daily, providing crucial services and support for those who need them. Every day of someone’s life matters – from the first to the last – and the charity’s role is to ensure that is the case.

All these services are provided for free but, of course, they aren’t free to provide. As Mrs Techstringy says, “Marie Curie is an amazing charity and working for them has given me an appreciation of just how much money needs to be raised for us to be able to continue to support as many people as possible”.

Over the last 5 years my wife has supported several charities, primarily through cycling events – local, national, long and even longer rides, including riding through Vietnam – raising thousands of pounds and acting as a constant source of inspiration to me as I’ve watched her take on these epic challenges.

After a year working with Marie Curie she knew her next challenge would be something to help support the work they do, so she has decided to take on the Prudential Ride London event, a 100 mile ride around the UK’s capital city. Her inspirational example of taking on long cycling challenges to raise money for great causes has rubbed off, as she has “inspired” me to join her and, having never really done any long distance cycling, 100 miles seemed a sensible place to start!!

We are a couple of months away from the event and training is well underway. I rode my first 100km event a couple of weeks ago (well, thanks to some suspect measuring, 106km), we are taking on hilly midweek rides with a long ride at weekends and spending a bit of time down the gym on the cycling machines, and my bottom has developed the relevant resistance to time in the saddle. With a few more weeks of training to go, we should be ready to take it on.

Why am I sharing this? Well, of course, not only has my wife’s willingness to spend many hours in a bicycle seat inspired me to want to have a go, but the incredible work this fantastic charity does in providing end of life care has also inspired me to see if I can do a bit to financially support this great work.

How can techstringy.com readers and listeners to the Tech Interviews podcast help? Some heartfelt good lucks on this page or on the twitters would be wonderful, but of course what would really help Marie Curie is if you were able to contribute on our Just Giving Page to help us towards our target of £1100.

Every penny you donate will make a difference, so if you can help, we would both really appreciate it; and if you can’t, that’s no problem, a good luck message will help with those hours in the saddle.

Thanks for letting me steal some space on my blog site to share this personal adventure and if you can help, that would be marvellous.

Right, where’s my bike!?

For more on Marie Curie and the amazing work they do visit mariecurie.org.uk

To find out more about the challenge visit the Prudential Ride London Event page.

If you can help us to support the charity financially then please visit our Just Giving Page.


The State of the data nation – Howard Marks – Ep70

A couple of times a year I like to do a show that reviews the current state of the data market: a chance to check on the challenges facing both the makers and consumers of data technology, how they are being addressed, the technology changes and trends that decision makers should consider, and what the future holds for the industry.

I always think these shows are useful for those tasked with making strategic decisions and designing data platforms for their businesses; when I speak with people on these topics, it’s always useful to understand current market thinking and the general direction the technology vendors are taking.

Earlier this year I spoke with industry analyst Chris Evans as we looked ahead at what 2018 had in store (you can find that episode here). For this half year review, I was very fortunate to get some time with renowned industry analyst Howard Marks.

Howard is founder and Chief Scientist at DeepStorage.net as well as co-host of the excellent Greybeards on Storage podcast. With over 30 years’ experience as a consultant and writer on the storage industry, he is very well placed to comment on the current state of the market and its direction.

In episode 70 of Tech Interviews, I chat with Howard about a range of topics. We discuss the current rate of change in the industry and ask whether it is the rate or the amount of change that’s concerning us.

We look at the impact cloud is having and how much of a driver for change it is, and Howard shares some thoughts on Software as a Service (SaaS) and its impact on traditional roles.

We examine changing roles in more detail, including how storage admins need to be in charge of “data paranoia”, and we ask whether simplification is a good thing and why cloud simplicity doesn’t sit well with organisational complexity.

We end our show looking ahead: what Howard would like to see the storage industry tackle, why a focus on data management will be key, and the impact that storage class memory is going to have on both producers and consumers of technology.

Howard shares some fantastic insights and left me with a lot of food for thought. I am sure he will leave you with plenty too.

To find out more about what Howard does, you can visit DeepStorage.net and follow Howard on twitter @DeepStorageNet. If you deal with data and want to understand the data technology market, then get the Greybeards podcast on your listening playlist; you’ll find it here.

Thanks for listening.

Don’t forget, Mrs Techstringy and I are taking on the Prudential Ride London event for the Marie Curie charity in the UK to help support their work in delivering end of life care. If you can help and support us, it would be much appreciated; you can find our story here.

Fear of the delete button – Microsoft and compliance – Stefanie Jacobs – Ep69

Data compliance continues to trouble many business execs; whether IT focused or not, it is high on the agenda for most organisations. Anyone who has listened to this show in the past will know that while technology only plays a small part in building an organisation’s compliance programme, it can play a significant part in their ability to execute it.

A few weeks ago I wrote an article as part of the “Building a modern data platform” series. That article, Building a modern data platform “prevention”, focussed on how Microsoft Office 365 can aid an organisation in preventing the loss of data, either accidental or malicious. It explains how Microsoft have some excellent, if not well known, tools inside Office 365, including a number of predefined templates which, when enabled, allow us to deploy a range of governance and control capabilities quickly and easily, immediately improving an organisation’s ability to execute its compliance plans and reduce the risk of data leaks.

This got me thinking: what else do Microsoft have in their portfolio that people don’t know about? What is their approach to business compliance, and can it help organisations deliver their compliance plans more effectively?

This episode of the podcast explores exactly that topic. It’s a show I’ve wanted to do for a while, and I finally found the right person to help explore Microsoft’s approach and the tools that are quickly and easily available to help us deliver robust compliance.

This week’s guest is Stefanie Jacobs, a Technology Solutions Professional at Microsoft, with 18 years’ experience in compliance. Stefanie, who has the fantastic twitter handle of @GDPRQueen, shares with fantastic enthusiasm the importance of compliance, Microsoft’s approach and how their technology is enabling organisations to make compliance a key part of their business strategy.

In this episode we explore all the compliance areas you’d ever want, including the dreaded “fear of the delete button”. Stefanie shares Microsoft’s view of compliance and how it took them a while to realise that security and compliance are different things.

We talk about people, the importance of education and shared responsibility. We also look at the compliance triangle: people, process and technology. Stefanie explains the importance of terminology and understanding exactly what we mean when we discuss compliance.

We also discuss Microsoft’s 4 steps to developing a compliance strategy, before we delve into some of the technology they have available to help underpin it, especially the security and compliance section of Office 365.

We wrap up with a chat on what a regulator looks for when you have had a data breach and also what Joan Collins has to do with compliance!

Finally, Stefanie provides some guidance on the first steps you can take as you develop your compliance strategy.

Stefanie is a great guest, with a real enthusiasm for compliance and how Microsoft can help you deliver your strategy.

To find out more about how Microsoft can help with compliance you can visit both their Service Trust and GDPR Assessment portals.

You can contact Stefanie via email Stefanie.jacobs@microsoft.com as well as follow her on twitter @GDPRQueen.

Thanks for listening.

If you enjoyed the show, why not subscribe? You’ll find Techstringy Tech Interviews in all good homes of podcasts.

While you are here, why not check out a challenge I’m undertaking with Mrs Techstringy to raise money for the Marie Curie charity here in the UK; you can find the details here.

Microsoft, the digital transformation cool kids? – Andy Kent – Ep68

I recently attended a fascinating event with the British Interactive Media Association (BIMA) and Microsoft. BIMA exist to drive innovation and excellence across the digital industry, and one of the ways they do this is via community events and forums that give industry leaders an opportunity to share ideas with their peers.

I personally believe forums like these should play a central part in an IT strategist’s role; the opportunity to share ideas with, and learn from, peers in your industry is essential in gaining the understanding needed to develop modern business strategies. The benefit of involving yourself in communities, regardless of your sector, should not be underestimated; the ability to build relationships with others who face the same challenges and opportunities is incredibly valuable.

One of the other things communities like BIMA do well is engage with influential organisations who are shaping the way industries develop, and BIMA have done that recently for their members with a series of roadshows with Microsoft. So, when I was invited to the recent event in Liverpool, I was fascinated to understand what Microsoft were doing in the digital agency space and how their technology was shaping the way digitally focussed organisations innovate and bring new solutions to market.

What I found, however, was that the technology Microsoft discussed was the same as that they share with all other types of business: their cloud vision of how Azure and Office 365 allow quick and easy deployment of technology and services that only a few years ago were out of the reach of most organisations.

Why was this message the same? While we may assume “digital” companies are ones focussed on marketing and media creation, the reality is that most organisations are rapidly becoming digital businesses, starting to transform with technologies that until recently were unavailable to most; cognitive services, AI, machine learning, deep analytics capabilities, bots and business intelligence are just a few examples of the kind of technology that can transform the way we operate.

The event covered some fascinating topics and on this week’s podcast I share some of that with you, with my guest and one of the organisers of the BIMA event, Andy Kent, CEO of Angel Solutions and chair of BIMA Liverpool. Andy joins me to discuss the event, the technology Microsoft shared and how the new Microsoft is helping companies transform the way they do business.

We begin with a look at BIMA, the part they play and the value they bring to the digital community.

We discuss the new Microsoft, ask whether they are now the technology “cool kids”, and look at how their change in attitude is encouraging people to engage with them and explore how technology can help them and their organisation.

We chat about some of the technology highlights Andy took from the show, how he sees AI as having a huge transformational effect, and how cloud is commoditising access to this kind of technology so that all organisations, big or small, can benefit.

We look at some of the smart technologies Microsoft are now embedding into their more familiar tools, allowing that intelligence to do much of the “heavy lifting” for the end user. This doesn’t replace a skilled and experienced professional, but it does help keep them focussed on the high value work they do.

Finally, we look at how advances in technology have also advanced customer expectations, and how these advances are allowing organisations to do new things with, and ask new questions of, their data.

Andy shares with enthusiasm both the technological direction of this “new” Microsoft and the value that community has played in his business life.

If you want to find out more about Andy and the work that Angel Solutions do, you can find them on twitter @angel_solutions.

You can also find Andy on twitter @AndyCKent

If you want to know more about BIMA check out their website www.bima.co.uk

Thanks for listening.

Building a modern data platform – Out on the edge

In this series so far we have concentrated on the data under our control, in our datacentres and managed clouds, protected by enterprise data protection tools.

However, the reality of a modern data platform is that not all of our data lives in those safe and secure locations. Today most organisations expect mobility; we want access to our key applications and data on any device and from any location.

This “edge data” presents a substantial challenge when building a modern data platform, not only is the mobility of data a security problem, it’s a significant management and compliance headache.

How do we go about managing this problem?

The aim of this series is to give examples of tools that I’ve used to solve modern data platform challenges; however, with edge data it’s not that simple. It’s not only the type and location of the data, but also the almost infinite range of devices that hold it.

Therefore, rather than present a single solution, we are going to look at some of the basics of edge data management and some tools you may wish to consider.

Manage the device

The fundamental building block of edge data protection is maintaining control of our mobile devices; they are repositories for our data assets and should be treated like any other in our organisation.

When we say control, what do we mean? In this case control comes from strong endpoint security.

Strong security is essential for our mobile devices; their very nature means they carry a significant risk of loss, and therefore data breach, so it’s critical we get the security baseline right.

To do this, mobile device management tools like Microsoft Intune can help us build secure baseline policies which may, for example, demand secure logon, provide application isolation and, in the event of device loss, ensure we can secure the data on that device to help minimise the threat of data leak and compliance breach.
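
To make that a little more concrete, here’s a minimal sketch of what defining such a baseline could look like through the Microsoft Graph API, which Intune policies can be managed with. The policy values, display name and token handling are my own illustrative assumptions rather than a recommended configuration.

```python
# Minimal sketch: create an Intune device compliance policy via Microsoft
# Graph. Settings shown are illustrative assumptions, not a recommended
# baseline; token acquisition (e.g. with MSAL) is omitted for brevity.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token from an Azure AD app registration>"

policy = {
    "@odata.type": "#microsoft.graph.windows10CompliancePolicy",
    "displayName": "Edge device baseline (example)",
    "passwordRequired": True,          # demand secure logon
    "storageRequireEncryption": True,  # protect data if a device is lost
    # Graph requires at least one scheduled action when creating a policy
    "scheduledActionsForRule": [{
        "ruleName": "PasswordRequired",
        "scheduledActionConfigurations": [
            {"actionType": "block", "gracePeriodHours": 0}
        ],
    }],
}

resp = requests.post(
    f"{GRAPH}/deviceManagement/deviceCompliancePolicies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy", resp.json().get("id"))
```

In practice you’d assign the policy to device groups and layer configuration profiles on top, but even a sketch like this shows how a security baseline becomes something you can version, review and repeat.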

Protecting the data

Ensuring our mobile data repository is managed and secure is critical, but protecting the data on it is just as crucial. We can take three general approaches to controlling our edge data:

  • No data on the device
  • All data synchronised to a secure location
  • Enforce edge data protection

Which approach you use depends on both the type of data and the working practices of your organisation.

For example, if your mobile users only access data over good remote links, from a home office for example, then keeping data only within our controlled central repositories, and never on the device, is fine.

That, however, is not always practical, so a hybrid approach that allows us to cache local copies of that data on our devices may be more appropriate; think OneDrive for Business, Dropbox, or build-your-own sync tools such as Centrestack.

These tools allow users access to a cached local copy of the data housed in our central data stores regardless of connectivity, with managed synchronisation back to these stores when possible.

This conveniently provides users with up to date copies of the data, while we maintain a central data repository, ensuring the authoritative copy resides under our control.
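
As an illustration of that “synchronise back when possible” idea, here’s a simplified sketch that pushes a locally cached folder up to a central OneDrive for Business store using Microsoft Graph’s simple upload endpoint. Real sync clients handle deltas, conflicts and large files; the folder name and token handling are assumptions for the example.

```python
# Simplified sketch of "synchronise back when possible": push a locally
# cached folder to a central OneDrive store via Microsoft Graph's simple
# upload endpoint (suited to files under ~4 MB). Real sync clients handle
# deltas, conflicts and large files; the folder and token are assumptions.
from pathlib import Path
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token>"
LOCAL_CACHE = Path.home() / "WorkDocs"  # hypothetical cached folder

for f in LOCAL_CACHE.rglob("*"):
    if not f.is_file():
        continue
    remote_path = f.relative_to(LOCAL_CACHE).as_posix()
    resp = requests.put(
        f"{GRAPH}/me/drive/root:/WorkDocs/{remote_path}:/content",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data=f.read_bytes(),
    )
    resp.raise_for_status()
    print("Synchronised", remote_path)
```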

Enforce Data Protection

However, this hybrid approach relies upon users placing data in the correct folder locations, and if they don’t, this presents a data security and compliance risk.

To overcome this, we can protect all of the data on these devices by extending our enterprise data protection solution; for example, we can use Veeam Agents to protect our Windows workloads, or a specialised edge data tool such as Druva inSync, which can help us protect edge data on a range of devices and operating systems.

This goes beyond synchronisation of a set of predefined folders and allows us to protect as much of the data and configuration of our mobile devices as we need to.
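
The difference is easiest to see in a sketch. The example below captures whole directory trees, configuration included, into a timestamped archive; it’s purely a concept illustration of protecting more than the sync folder, not how Veeam Agents or Druva inSync work internally, and the protected paths are assumptions.

```python
# Concept sketch of "protect more than the sync folder": capture whole
# directory trees, configuration included, into a timestamped archive.
# This is NOT how Veeam Agents or Druva inSync work internally; the
# protected paths are assumptions for illustration.
import tarfile
import time
from pathlib import Path

PROTECT = [
    Path.home() / "Documents",
    Path.home() / "Desktop",
    Path.home() / ".config",  # device configuration and app state too
]
DEST = Path.home() / "backups"
DEST.mkdir(exist_ok=True)

archive = DEST / f"edge-backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    for root in PROTECT:
        if root.exists():
            tar.add(root, arcname=root.name)
print("Wrote", archive)
```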

Understand the edge

While ensuring the device and data are robustly protected, our modern platform also demands insight into our data: where it is, how it is used and, importantly, how to find it when needed.

This is a real challenge with edge data; how do we know whose mobile device has certain data types on it? If we lose a device, can we identify what was on it? The ability to find and identify data across our organisation, including at the edge, is essential to the requirements of our modern data platform.

Ensuring we have a copy of that data, held securely, indexed and searchable, should be a priority.

Druva inSync, for example, allows you to run compliance searches across all of the protected mobile devices, so you can find the content on a device, even if that device is lost.

Centralising content via enterprise backup or synchronisation tools also provides us with this capability. How you do it will depend on your own platform and working practices; doing it, however, should be seen as a crucial element of your modern data platform.
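
To show the idea at its simplest, here’s a toy sketch that indexes the centralised copy of each device’s data into SQLite, so that “what was on the lost laptop?” becomes a query. The schema and paths are hypothetical, and commercial tools index file content as well as metadata.

```python
# Toy sketch of making centralised device copies searchable: walk each
# device's copy and record file metadata in SQLite, so "what was on the
# lost laptop?" becomes a query. Schema and paths are hypothetical;
# commercial tools index file content as well as metadata.
import sqlite3
from pathlib import Path

db = sqlite3.connect("edge_index.db")
db.execute("""CREATE TABLE IF NOT EXISTS files (
    device TEXT, path TEXT, size INTEGER, modified REAL)""")

def index_device_copy(device: str, copy_root: Path) -> None:
    """Record metadata for every file in one device's centralised copy."""
    rows = [
        (device, str(p.relative_to(copy_root)), p.stat().st_size,
         p.stat().st_mtime)
        for p in copy_root.rglob("*") if p.is_file()
    ]
    db.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", rows)
    db.commit()

# e.g. index_device_copy("laptop-042", Path("/backups/laptop-042"))
# then: SELECT path FROM files
#       WHERE device = 'laptop-042' AND path LIKE '%payroll%';
```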

In Summary

Keeping our data controlled, even when it spends much of its time at the very edges of our networks, is crucial to our modern data strategy. When it is, we can be sure all of our business security and compliance rules are applied to it, and we can ensure it’s protected, recoverable and always available.

Managing the data on the edges of our network is a difficult challenge, but by ensuring we have strong management of devices, robust data protection and insight into that data, we can ensure edge data is as core a part of our data platform as that in our datacentre.

This is part 5 in a series of posts on building a modern data platform; the previous parts of the series can be found below.

  • Introduction
  • The Storage
  • Availability
  • Control
  • Prevention (Office365)

Managing all of the clouds – Lauren Malhoit – Ep67

As the move to cloud continues, we are starting to see a new development: organisations are no longer relying on a single cloud provider to deliver their key services, with many now opting for multiple providers. From their own datacentre to the hyperscale big boys, multi-cloud environments are becoming the norm.

This multi-cloud approach makes perfect sense; the whole point of adopting cloud is to provide the flexibility to consume your data, infrastructure, applications and services from the best provider at any given time, which would be very difficult to do if we only had a single provider.

However, multi-cloud comes with a challenge, one rather well summed up at a recent event by the phrase “clouds are the new silos”. Our cloud providers are all very different in the way they build and operate their infrastructure, and although we may not notice or care when we take services from one provider, it can quickly become a problem when we start to employ multiple vendors.

How to avoid cloud silos is seemingly becoming a technology “holy grail”, engaging many of the world’s biggest tech vendors. This is only good news; as we move into a world where we want the freedom and flexibility to choose whichever “cloud” is the best fit for us at any given time, we will only be able to do this if we overcome the challenge of managing and operating across these multiple environments.

Taking on this challenge is the subject of this week’s podcast with my guest Lauren Malhoit of Juniper Networks and co-host of the excellent Tech Village Podcast.

Lauren recently sent me a document entitled “The Five Step Multi Cloud Migration Framework”. It caught my attention as it discusses the multi-cloud challenge and provides some thoughts on how to address it, and it is those ideas that form the basis for this week’s show.

We open the discussion by trying to define what multi-cloud is and why it’s important that we don’t assume all businesses are already rushing headlong into self-driving, self-healing, multi-cloud worlds. We chat about how a strategy is more likely to help a business start along this road, rather than manage something it already has.

We explore how multi-cloud doesn’t just mean Azure and AWS, but can equally apply to multiples of your own datacenters and infrastructure.

Lauren shares her view on the importance of automation, especially when we look at the need for consistency, and how this is not just about consistent infrastructure but also about compliance, security and manageability.
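
That need for consistency is worth a small illustration. The sketch below assumes nothing about any particular toolchain: declare the desired state once, then let thin per-cloud adapters enforce it, with the adapters here as stubs standing in for real provider SDK calls.

```python
# Hedged sketch of the consistency idea behind multi-cloud automation:
# declare the desired state once, then let thin per-cloud adapters apply
# it. The adapters are stubs standing in for real provider SDK calls.
from typing import Callable

desired_state = {
    "encryption_at_rest": True,  # compliance
    "public_access": False,      # security
    "tags": {"owner": "data-platform", "env": "prod"},  # manageability
}

def apply_azure(state: dict) -> None:
    print("azure: enforcing", state)    # stub for Azure SDK calls

def apply_aws(state: dict) -> None:
    print("aws: enforcing", state)      # stub for boto3 calls

def apply_on_prem(state: dict) -> None:
    print("on-prem: enforcing", state)  # stub for datacentre tooling

adapters: list[Callable[[dict], None]] = [apply_azure, apply_aws, apply_on_prem]
for apply in adapters:
    apply(desired_state)  # one definition, enforced consistently everywhere
```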

We also ask the question, why bother? Do we really need a multi-cloud infrastructure? Does it really open up new ways for our organisation to operate?

We wrap up looking at the importance of being multi-vendor, multi-platform and open, and how that openness cannot come at the cost of complexity.

Finally, we discuss some use cases for multi-cloud, as well as the people challenge in our businesses and why a multi-cloud world shouldn’t be seen as a threat, but as an opportunity for career growth and development.

I hope you enjoy what I thought was a fascinating conversation about an increasingly pressing challenge.

To find out more about the work Juniper are doing in this space, look out for forthcoming announcements at Juniper.net and check out some of the information published on their GitHub repos.

To find out more about the work Lauren is doing, you can follow her on twitter @malhoit or read her blog over at adaptingit.com

Also check out the fantastic Techvillage Podcast if you are interested in career development and finding out about the tech world of others in the IT community.

Juniper also have some great resources for learning about designing a multi-cloud environment; check out the original white paper that inspired this podcast, The Five Step Multi Cloud Migration Framework, and you’ll also find some great info in this post: Get Your Data Center Ready for Multicloud

Until next time – thanks for listening

Wrapping up VeeamON – Michael Cade – Ep66

A couple of weeks ago in Chicago, Veeam held their annual tech conference, VeeamON. It was one of my favourite shows from last year; unfortunately I couldn’t make it out this time, but I did catch up remotely and shared my thoughts on some of the strategic messages that were covered in a recent blog post looking at Veeam’s evolving data management strategy (Getting your VeeamON!).

That strategic Veeam message is an interesting one, and their shift from “backup” company to one focused on intelligent data management across multiple repositories is, in my opinion, exactly the right move to be making. With that in mind, I wanted to take a final look at some of those messages as well as some of the other interesting announcements from the show, and that is exactly what we do on this week’s podcast, as I’m joined by recurring Tech Interviews guest Michael Cade, Global Technologist at Veeam.

Michael, who not only attended the show but also delivered some great sessions, joins me to discuss a range of topics. We start by taking a look at Veeam’s last 12 months and how they’ve started to deliver a wider range of capabilities, building on their virtual platform heritage with support for more traditional enterprise platforms.

Michael shares some of the thinking behind Veeam’s goal to deliver an availability platform that meets the demands of modern business data infrastructures, be they on-prem, in the cloud, SaaS or service provider based. We also look at how this platform needs to offer more than just the ability to “back stuff up”.

We discuss the development of Veeam’s 5 stages of intelligent data management, a key strategic announcement from the show, and how this can be used as a maturity model against which you can compare your own progress towards a more intelligent way of managing your data.

We look at the importance of automation in our future data strategies and how this is not only important technically, but also commercially, as businesses need to deploy and deliver much more quickly than before.

We finish up by investigating the value of data labs and how crucial the ability to get more value from your backup data is becoming, be it to carry out test, dev, data analytics or a whole range of other tasks, all without impacting your production platforms or wasting the valuable resource in your backup data sets.

Finally, we take a look at some of the things we can expect from Veeam in the upcoming months.

You can catch up on the event keynote on Veeam’s YouTube channel https://youtu.be/ozNndY1v-8g

You can also find more information on the announcements on Veeam’s website here www.veeam.com/veeamon/announcements

If you’d like to catch up with thoughts from the Veeam Vanguard team, you can find a list of them on twitter – https://twitter.com/k00laidIT/lists/veeam-vanguards-2018

You can follow Michael on twitter @MichaelCade1 and on his excellent blog https://vzilla.co.uk/

Thanks for listening.

Getting your VeeamON!

Recently software vendor Veeam held its 2018 VeeamON conference in Chicago. VeeamON was one of my favourite conferences of last year; unfortunately I couldn’t make it out this time, but I did tune in for the keynote to listen to the new strategy messages that were shared.

The availability market is an interesting space at the minute, highlighted by the technical innovation and talent recruitment you can see at companies like Veeam, Rubrik and others. Similar to the storage industry of 5 years ago, the data protection industry is being forced to change its thinking, with backup, replication and recovery no longer enough to meet modern demands. Availability is now the primary challenge, and not just for the data in our datacentre but also for that sat with service providers, on SaaS platforms or with the big public hyperscalers; we need our availability strategy to cover all of these locations.

As with the storage industry when it was challenged by performance and the emergence of flash, two things are happening. New technology companies are emerging, offering different approaches and thinking to take on modern challenges that traditional vendors are not addressing. But that challenge also inspires the established vendors, with their experience, proven technologies, teams and budgets, to react and find answers to these new challenges – well, at least it encourages the smart ones.

This is where the availability industry currently sits and why the recent VeeamON conference was of interest. Veeam’s position is interesting; a few years ago they were the upstart with a new way of taking on the challenge presented by virtualisation. However, as our world continues to evolve, so do the challenges: cloud, automation, security, governance and compliance are just a few of the availability headaches many of us face and Veeam must react to.

One of the things I like about Veeam (and one of the reasons I was pleased to be asked to be a part of their Vanguard program this year) is that they are a very smart company; some of their talent acquisition is very impressive, and the shift in how they see themselves and the problem they are trying to solve is intriguing.

VeeamON 2018 saw a further development of this message as Veeam introduced their 5 stages of intelligent data management, which sees them continue to expand their focus beyond Veeam “the backup company”. The 5 stages provide the outline of a maturity model, something that can be used to measure progress towards a modern way of managing data.

Of these 5 stages, many of us sit on the left-hand side of the model, with a robust policy-based backup approach as the extent of our data management. However, for many this is no longer appropriate, as our infrastructures become more complex and change more rapidly, with data stored in a range of repositories and locations.

This is coupled with a need to better understand our data for management, privacy and compliance reasons; we can no longer operate an IT infrastructure without understanding, at the very least, where our data is located and what that means for its availability.

In my opinion, modern solutions must provide us with a level of intelligence: the ability to understand the behaviour of our systems and act accordingly. This is reflected on the right-hand side of Veeam’s strategy, which recognises that meeting this modern challenge will demand increasingly intelligent systems that can understand the criticality of a workload, or what is being done to a dataset, and act to protect it accordingly.

Although Veeam aren’t quite doing all of that yet, you can see steps moving them along the way; solutions such as Availability Orchestrator, which takes the complexities of continuity and brings automation to its execution, documentation and ongoing maintenance, are good examples.

It’s also important to note that Veeam understand they are not the answer to all of an organisation’s data management needs; they are ultimately a company focussed on availability. But what they do realise is that availability is crucial and goes far beyond just recovering lost data; this is about making sure data is available, “any data, any app, across any cloud”, and they see the opportunity in becoming the integration engine in the data management stack.

Is all this relevant? Certainly. A major challenge for most businesses I chat with is how to build an appropriate data strategy, one that usually includes only holding the data they need, knowing how it’s been used and by whom, where it is at any given time, and having it in the right place when needed so they can extract “value” and make data driven decisions. This can only be achieved with a coherent strategy that ties together multiple repositories and systems, ensures that data is where it should be, and maintains the management and control of that data across any platform required.

With that in mind, Veeam’s direction makes perfect sense, with the 5 stages of intelligent data management model providing a framework upon which you can build a data management strategy, something hugely beneficial to anyone tasked with developing their organisation’s data management platform.

In my opinion, Veeam’s direction is well thought out, and I’ll be watching with interest not only how it continues to develop but, importantly, how they deliver the tools and partnerships that allow those invested in their strategy to execute it successfully.

You can find more information on the announcements from VeeamON on Veeam’s website here www.veeam.com/veeamon/announcements

Casting our eye over HCI – Ruairi McBride – Ep65

I’ve spoken a bit recently about the world of Hyper Converged Infrastructure (HCI), especially as the technology continues to mature; with both improved hardware stacks and software looking to take advantage of this hardware, it is becoming an ever more compelling prospect.

How do these developments, an HCI version 2.0 if you like, manifest themselves? Recently I saw a good example in a series of blog posts and videos from friend of the show Ruairi McBride, which demonstrated really well both the practical deployment and the look and feel of a modern HCI platform.

The videos focussed on NetApp’s new offering and covered the out of the box experience, how to physically cable together your HCI building blocks, and how to take your build from delivery to deployment in really easy steps. This demonstration of exactly how you build an HCI platform was interesting not just on a practical level; it also gave me some thoughts around why and how you might want to use HCI platforms in a business context.

With that in mind, I thought a chat with Ruairi about his experience with this particular HCI platform, how it goes together, how it is practically deployed and how it meets some of the demands of modern business would make an interesting podcast.

So here it is: Ruairi joins me as we cast our eye over HCI (I stole the title from Ruairi’s blog post!).

We start by discussing what HCI is and why its simplicity of deployment is useful, and we also look at the pros and cons of the HCI approach. Ruairi shares some thoughts on HCI’s growing popularity and why the world of smartphones may be to blame!

We look at the benefit of a single vendor approach within our infrastructure, but also discuss how, although the hardware elements of compute and storage are important, the true value of HCI lies in the software.

We discuss the modern business technology landscape and how a desire for a more “cloud like” experience within our on-premises datacentres has demanded a different approach to how we deploy our technology infrastructure.

We wrap up by looking at why, as a business, you’d consider HCI, what problems it will solve for you and which use cases are a strong HCI fit; and of course, it’s important to remember that HCI isn’t the answer to every question!

To find out more about NetApp HCI visit here.

Ruairi’s initial “Casting Our Eye Over HCI” blog and video series is here.

If you have further questions for Ruairi, you can find him on twitter @mcbride_ruairi.

Until next time.

Thanks for listening.

NetApp, The Cloud Company?

Last week I was fortunate enough to be invited to NetApp’s HQ in Sunnyvale to spend 2 days with their leadership hearing about strategy, product updates and futures (under very strict NDA, so don’t ask!) as part of the annual NetApp A-Team briefing session. This happened in a week where NetApp revealed their spring product updates which, alongside a raft of added capabilities to existing products, also included a new relationship with Google Cloud Platform (GCP).

The GCP announcement means NetApp now offer services across the 3 largest hyperscale platforms. Yes, that’s right: NetApp, the “traditional” on-prem storage vendor, are offering an increasing number of cloud services, and what struck me while listening to their senior executives and technologists was that this is not just a faint nod to cloud but is central to NetApp’s evolving strategy.

But why would a storage vendor have public cloud so central to their thinking? It’s a good question, and I think the answer lies in the technology landscape many of us operate in. The use of cloud is commonplace; its flexibility and scale are driving new technology into businesses more quickly and easily than ever before.

However, this comes with its own challenges. While quick and easy is fine for deploying services and compute, the same cannot be said of our data and storage repositories; not only does data continue to have significant “weight”, it also comes with additional challenges, especially when we consider compliance and security. It’s critical in a modern data platform that our data has as much flexibility as the services and compute that need to access it, while at the same time allowing us to maintain full control and stringent security.

NetApp has identified this challenge as something upon which they can build their business strategy, and you can see evidence of this within their spring technology announcements: not only do they tightly integrate cloud into their “traditional” platforms, they also continue to develop cloud native services such as those in the GCP announcement, the additional capabilities in AWS and Azure, as well as Cloud Volumes and services such as SaaS backup and Cloud Sync. It is further reflected in an intelligent acquisition and partnering strategy with a focus on those who bring automation, orchestration and management to hybrid environments.

Is NetApp the on-prem traditional storage vendor no more?

In my opinion this is an emphatic no. During our visit we heard from NetApp founder Dave Hitz, who talked about NetApp’s view of cloud and how they initially realised it was something they needed to understand, deciding to take a gamble on it and its potential. What was refreshing was that they did this without any guarantee they could make money from cloud, simply because they understood how important it could potentially be.

Over the last 4 years NetApp has been reinvigorated by a solid strategy built around their data fabric and this strong cloud centric vision, which has not only seen share prices rocket but has also seen market share and revenue grow. That growth has not come from cloud services alone; in fact, the majority is from strong sales of their “traditional” on-prem platforms, and they are convinced this growth has been driven by their embrace of cloud: a coherent strategy that ensures your data is where you need it, when you need it, while maintaining all of the enterprise class qualities you’d expect on-prem, whether the data is in your datacentre, near the cloud or in it.

Are NetApp a cloud company?

No. Are they changing? Most certainly.

Their data fabric message, honed over the last 4 years, is now mature in not only strategy but execution, with NetApp platforms, driven by ONTAP as a common transport engine, providing the capability to move data between platforms, be they on-prem, near the cloud or straight into the public hyperscalers, while crucially maintaining across all of those repositories the high quality of data services and management we are used to within our enterprise.

This strategy is core to NetApp and their success, and it certainly resonates with the businesses I speak with as they become more data focussed than ever, driven by compliance, cost or the need to garner greater value from their data. Businesses do not want their data locked away in silos, nor do they want it at risk when they move it to new platforms to take advantage of new tools and services.

While NetApp are not a cloud company, during the two days it seemed clear to me that their embracing of cloud puts them in a unique position when it comes to providing data services. As businesses look to develop their modern data strategy they would, in my opinion, be remiss not to at least understand NetApp’s strategy and data fabric and the value that approach can bring, regardless of whether they ultimately use NetApp technology or not.

NetApp’s changes over the last few years have been significant, their future vision is fascinating, and I for one look forward to seeing their continued development and success.

For more information on the recent spring announcements, you can review the following;

The NetApp official Press Release

Blog post by Chris Maki summarising the new features in ONTAP 9.4

The following NetApp blogs provide more detail on a number of individual announcements;

New Fabric Pool Capabilities

The new AFF A800 Platform

Google Cloud Platform Announcement

Latest NVMe announcements

Tech ONTAP Podcast – ONTAP 9.4 Overview